Locks are an important tool for resolving concurrency conflicts. Many kinds of locks are used in development, each with its own characteristics and scope of application.
A solid understanding of what each lock is and how the locks differ is necessary to use them correctly and appropriately.
Common lock types
Optimistic lock and pessimistic lock
A pessimistic lock assumes that concurrent conflicts are likely, so it acquires the lock before touching the data, which guarantees data safety to a large extent.
An optimistic lock assumes that conflicts are rare: it works on the data without locking and only checks for conflicts (for example, by acquiring a lock) when the final result is committed and persisted.
Because a pessimistic lock always acquires the lock first, it adds considerable overhead and increases the chance of deadlock. This is especially costly for read operations, which do not modify the data; pessimistic locking can greatly increase the system's response time.
Optimistic locking defers conflict detection to the final commit, so the chance of deadlock is low. However, if multiple transactions process the same data at the same time, the commits may conflict or even fail with an exception.
Traditional relational databases often rely on pessimistic locking to protect data. Where optimistic locking is used instead, a version number is usually stored with the data to detect conflicting updates.
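The version-number idea can be sketched in Java with AtomicStampedReference, where the stamp plays the role of the version column; the OptimisticUpdate class and its balance field are illustrative, not from the original text:

```java
import java.util.concurrent.atomic.AtomicStampedReference;

// Illustrative sketch: the "stamp" acts as the version number; a commit
// succeeds only if the version observed at read time is still current.
public class OptimisticUpdate {
    // hypothetical account balance, starting at 100 with version 0
    private final AtomicStampedReference<Integer> balance =
            new AtomicStampedReference<>(100, 0);

    public boolean deposit(int amount) {
        int[] version = new int[1];
        Integer current = balance.get(version);          // read value and version together
        // ... arbitrary work happens here without holding any lock ...
        return balance.compareAndSet(current, current + amount,
                version[0], version[0] + 1);             // commit only if version is unchanged
    }

    public int get() {
        return balance.getReference();
    }
}
```

If two threads race on deposit, the loser's compareAndSet returns false because the version has moved on, mirroring a failed versioned UPDATE in a database.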
Spin lock

A spin lock keeps a waiting thread running an empty loop for a short period. If the lock becomes available while the thread is spinning, it is acquired immediately; otherwise the thread is suspended.
Spinning reduces the probability that a waiting thread is suspended. Blocking a thread and waking it up again requires switching between user mode and kernel mode; a spin lock avoids entering kernel mode, so it often performs better.
Spin locks suit scenarios where contention is light and each thread holds the lock only briefly. Under heavy contention, or when tasks run for a long time, spinning wastes CPU time slices and a spin lock should not be used.
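A minimal spin lock can be built on a compare-and-set loop; the SpinLock class below is an illustrative sketch (unlike the description above, it spins indefinitely rather than falling back to suspending the thread):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Illustrative spin lock: waiting threads busy-loop on a CAS instead of
// blocking, so it is only safe for very short critical sections.
public class SpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    public void lock() {
        // empty loop: keep retrying the CAS from false to true
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait(); // CPU hint (Java 9+); the loop body is otherwise empty
        }
    }

    public void unlock() {
        locked.set(false);
    }
}
```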
Reentrant lock

The reentrant lock provided in Java is a recursive, non-blocking synchronization mechanism: if an outer method already holds the lock, an inner method called by the same thread can acquire it again.
ReentrantLock maintains a hold counter that is incremented by one on each lock() and decremented by one on each unlock(); the lock is released when the counter drops to zero. synchronized in Java is also reentrant.
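The hold counter can be observed directly through ReentrantLock.getHoldCount(); the ReentrantDemo class below is an illustrative sketch:

```java
import java.util.concurrent.locks.ReentrantLock;

// Illustrative sketch: the same thread re-acquires the lock in a nested
// call without deadlocking, and getHoldCount() exposes the counter.
public class ReentrantDemo {
    private final ReentrantLock lock = new ReentrantLock();

    public int outer() {
        lock.lock();                 // hold count: 0 -> 1
        try {
            return inner();
        } finally {
            lock.unlock();           // 1 -> 0, lock actually released here
        }
    }

    private int inner() {
        lock.lock();                 // same thread: 1 -> 2, no deadlock
        try {
            return lock.getHoldCount();
        } finally {
            lock.unlock();           // 2 -> 1
        }
    }
}
```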
Polling lock and timed lock
A polling lock is implemented by repeatedly attempting to acquire the lock, which avoids deadlock and allows errors to be handled more gracefully. In Java, polling is done with the lock's tryLock method. tryLock also has a timed overload whose parameter specifies how long to wait for the lock: if the lock is available it returns immediately; otherwise it waits up to the given time and then returns.
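Both styles can be sketched with java.util.concurrent.locks.ReentrantLock; the TryLockDemo class and its method names are illustrative:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

// Illustrative sketch: tryLock() attempts the lock without blocking, and the
// timed overload waits a bounded time instead of blocking indefinitely.
public class TryLockDemo {
    private final ReentrantLock lock = new ReentrantLock();

    // Polling: returns immediately whether or not the lock was free.
    public boolean pollOnce() {
        if (lock.tryLock()) {
            try {
                return true;  // got the lock; do the work here
            } finally {
                lock.unlock();
            }
        }
        return false;         // lock busy: back off, retry, or report an error
    }

    // Timed: wait up to the given number of milliseconds, then give up.
    public boolean withTimeout(long millis) throws InterruptedException {
        if (lock.tryLock(millis, TimeUnit.MILLISECONDS)) {
            try {
                return true;
            } finally {
                lock.unlock();
            }
        }
        return false;
    }
}
```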
Read-write lock

The read-write lock ReadWriteLock, implemented by ReentrantReadWriteLock, provides elegant access control over a resource. It exposes two locks: a read lock, used when reading the data, and a write lock, used when writing it.
A read-write lock allows multiple read operations to proceed simultaneously but at most one write operation. As long as the write lock is not held, read locks do not block; otherwise readers wait for the write to complete.
ReadWriteLock lock = new ReentrantReadWriteLock();
Lock readLock = lock.readLock();
Lock writeLock = lock.writeLock();
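Building on that snippet, one common usage pattern is a map guarded by the pair of locks; this RwCache class is an illustrative sketch, not part of the original text:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustrative cache: many concurrent readers, exclusive writers.
public class RwCache {
    private final Map<String, String> map = new HashMap<>();
    private final ReadWriteLock lock = new ReentrantReadWriteLock();
    private final Lock readLock = lock.readLock();
    private final Lock writeLock = lock.writeLock();

    public String get(String key) {
        readLock.lock();           // shared: other readers may enter too
        try {
            return map.get(key);
        } finally {
            readLock.unlock();
        }
    }

    public void put(String key, String value) {
        writeLock.lock();          // exclusive: blocks readers and writers
        try {
            map.put(key, value);
        } finally {
            writeLock.unlock();
        }
    }
}
```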
Use of locks
Reduce the scope of the lock
A lock guarantees that only one thread at a time executes a method or block of code, so the locked region should be as small as possible. For example, with synchronized, prefer locking a code block and avoid locking the whole method where you can.
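For instance (the Counter class and its helper method are hypothetical), expensive work that touches no shared state can stay outside the synchronized block:

```java
// Illustrative example: only the shared-state update is locked; the
// expensive per-thread preparation runs outside the synchronized block.
public class Counter {
    private final Object mutex = new Object();
    private long total = 0;

    public void add(long value) {
        long prepared = expensivePreparation(value); // no shared state: no lock needed
        synchronized (mutex) {                       // lock only the shared update
            total += prepared;
        }
    }

    public long total() {
        synchronized (mutex) {
            return total;
        }
    }

    // stand-in for slow work that does not need the lock
    private long expensivePreparation(long v) {
        return v * 2;
    }
}
```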
Object lock and class lock
If locking an individual object is sufficient, do not lock the class; keep the lock's scope under control. When the class is locked, all threads contend for the same lock and only one can proceed at a time. Locking objects instead increases the number of independent locks and improves concurrency.
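The difference shows up in which monitor synchronized binds to: an instance method locks the object, a static method locks the Class. The Worker class below is an illustrative sketch:

```java
// Illustrative sketch: a synchronized instance method locks this particular
// object, so different Worker instances can run it in parallel; a static
// synchronized method locks Worker.class, which every thread shares.
public class Worker {
    private static int classCalls = 0;
    private int instanceCalls = 0;

    public synchronized void perInstance() {      // monitor: this Worker object
        instanceCalls++;
    }

    public static synchronized void perClass() {  // monitor: Worker.class, JVM-wide
        classCalls++;
    }

    public int instanceCalls() {
        return instanceCalls;
    }

    public static int classCalls() {
        return classCalls;
    }
}
```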
Fair lock and unfair lock

Most locks support configuring fairness. A fair lock grants the lock to threads in the order in which they started waiting. An unfair lock makes no such guarantee: a newly arriving thread may acquire the lock ahead of threads that have waited longer. ReentrantLock and ReentrantReadWriteLock are unfair by default, and fairness can be set through a constructor parameter. Whether a fair or unfair lock is appropriate depends on the scenario.
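Fairness is fixed at construction time, and isFair() reports the setting; the FairnessDemo class name is illustrative:

```java
import java.util.concurrent.locks.ReentrantLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustrative sketch: fairness is chosen via the boolean constructor
// parameter and cannot be changed afterwards.
public class FairnessDemo {
    public static ReentrantLock defaultLock() {
        return new ReentrantLock();               // unfair by default
    }

    public static ReentrantLock fairLock() {
        return new ReentrantLock(true);           // grant in waiting order
    }

    public static ReentrantReadWriteLock fairReadWriteLock() {
        return new ReentrantReadWriteLock(true);  // read-write locks take the same flag
    }
}
```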
Lock elimination

Do not use locks unless they are necessary. The Java virtual machine can also use escape analysis to determine whether locked code is actually reachable by other threads; if it proves the code is thread-safe without the lock, it performs lock elimination, which improves efficiency.
Lock coarsening

If a piece of code locks and unlocks repeatedly, it can be more efficient to use one lock with a larger scope. The Java virtual machine applies the same optimization: when it detects a series of consecutive lock and unlock operations on the same object, it performs lock coarsening, merging them into a single wider lock to reduce locking overhead.
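A classic coarsening candidate uses the JDK's StringBuffer, whose append method is synchronized. Whether the JIT actually coarsens is a runtime decision, so the code below only illustrates the pattern the optimizer looks for (the CoarseningDemo name is illustrative):

```java
// Each append() acquires the same StringBuffer's monitor; the JIT may merge
// the four back-to-back lock/unlock pairs into one lock held across all
// four calls. Nothing in the source changes; the optimization is internal.
public class CoarseningDemo {
    public static String build() {
        StringBuffer sb = new StringBuffer(); // every append() is synchronized
        sb.append("a");
        sb.append("b");
        sb.append("c");
        sb.append("d");
        return sb.toString();
    }
}
```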