Daniel Mitterdorfer

Cheap Read-Write Lock Explained

I recently stumbled across a nice idiom called the cheap read-write lock. It is intended for very frequent concurrent reads of a field in situations where synchronization would lead to too much contention. The idiom looks a bit odd at first sight and begs for an explanation.

Usage Scenario

For the sake of demonstration consider the following situation: Our system has a globally available calendar that holds the current day within the year. Every part of the system may read the current day. Once a day, a background thread will update it. Note that this scenario might not justify using the idiom in practice, but the example is sufficient for illustrating it.

@NotThreadSafe
public final class SystemCalendar {
  // may be a value between 0 and 364
  private int currentDayOfYear;
  
  public int getCurrentDayOfYear() {
    return currentDayOfYear;
  }
  
  public void advanceToNextDay() {
    // let's ignore leap years for this example...
    this.currentDayOfYear = (this.currentDayOfYear + 1) % 365;
  }
}
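Single-threaded, the class behaves exactly as intended. The following sketch repeats the class (minus the annotation) and adds a hypothetical main method, which is not part of the original example, to show the wrap-around arithmetic:

```java
public final class SystemCalendar {
  // may be a value between 0 and 364
  private int currentDayOfYear;

  public int getCurrentDayOfYear() {
    return currentDayOfYear;
  }

  public void advanceToNextDay() {
    // let's ignore leap years for this example...
    this.currentDayOfYear = (this.currentDayOfYear + 1) % 365;
  }

  public static void main(String[] args) {
    SystemCalendar calendar = new SystemCalendar();
    calendar.advanceToNextDay();
    System.out.println(calendar.getCurrentDayOfYear()); // 1

    // advancing 364 more days wraps around to day 0
    for (int i = 0; i < 364; i++) {
      calendar.advanceToNextDay();
    }
    System.out.println(calendar.getCurrentDayOfYear()); // 0
  }
}
```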

That's all well and good, but this code is not thread safe. To demonstrate why, follow me on a short detour. The picture below illustrates what could happen to currentDayOfYear on a hypothetical and very simple multicore system. We assume that the updating thread always runs on CPU1 and reader threads run on CPU2. On first access, the value of currentDayOfYear is read from main memory and cached on each CPU separately. The yellow circle indicates the value of currentDayOfYear that is currently known to each part of the system.

Memory Hierarchy with missing happens-before order

As you can see, the value has been updated on CPU1, but it has never been reconciled with main memory and thus has never reached CPU2. How can this happen? Java 5 brought us JSR 133, better known as the Java Memory Model, which defines a happens‑before order between memory accesses. I won't go into the details here, but we failed to establish a happens‑before order between writes and reads of currentDayOfYear, which allowed the runtime to cache the value on each processor separately. Let's fix it:

@ThreadSafe
public final class SystemCalendar {
  private int currentDayOfYear;
  
  public synchronized int getCurrentDayOfYear() {
    return currentDayOfYear;
  }
  
  public synchronized void advanceToNextDay() {
    this.currentDayOfYear = (this.currentDayOfYear + 1) % 365;
  }
}

By marking both methods as synchronized, we have established a happens‑before order. Writes will now be flushed to main memory, and the value will be fetched from main memory before reads. (This is not entirely accurate and reality is more complicated, but this mental model is sufficient for understanding the idiom. If you are interested in the details, have a look at the MESI protocol.)

Memory Hierarchy with established happens-before order

Problem solved? You bet! But let's suppose that there are thousands of concurrent readers of currentDayOfYear. As a result, #getCurrentDayOfYear() will suffer from heavy contention. Each reader has to obtain the monitor on SystemCalendar.this just to ensure it gets the most recent value of currentDayOfYear.

Removing the synchronization on #getCurrentDayOfYear() in this situation is fatal, as the class would no longer be thread safe. JLS §17.4.4 states: "An unlock action on monitor m synchronizes‑with all subsequent lock actions on m [...]" (you can substitute synchronizes‑with with happens‑before for our purposes). The "subsequent lock action" we want is precisely the acquisition of the monitor by #getCurrentDayOfYear() after #advanceToNextDay() has been called. Only then do we establish a happens‑before order and are thus able to see the new value. Too bad: it seems we are stuck with the lock.

It turns out we can relax the constraint by declaring currentDayOfYear as volatile instead of marking #getCurrentDayOfYear() as synchronized. That's essentially the cheap read-write lock:

@ThreadSafe
public final class SystemCalendar {
  private volatile int currentDayOfYear;
  
  public int getCurrentDayOfYear() {
    return currentDayOfYear;
  }
  
  public synchronized void advanceToNextDay() {
    this.currentDayOfYear = (this.currentDayOfYear + 1) % 365;
  }
}

Why does this work? Both synchronized and volatile establish a happens‑before order, but only synchronized guarantees that the guarded block executes atomically. The setter performs a compound read-modify-write that must be atomic, so synchronizing the method is the correct choice. The getter, however, performs only one action, a read of the field, which is atomic (as guaranteed by the JLS). This allows us to use volatile instead of synchronized to establish the happens‑before order.
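To see the idiom in action, here is a small hypothetical harness (my addition, not part of the original example): one writer advances the day 400 times while several reader threads poll the unsynchronized getter. Every value a reader observes must be a valid day, and the final value is deterministic:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public final class SystemCalendar {
  private volatile int currentDayOfYear;

  public int getCurrentDayOfYear() {
    return currentDayOfYear;
  }

  public synchronized void advanceToNextDay() {
    this.currentDayOfYear = (this.currentDayOfYear + 1) % 365;
  }

  public static void main(String[] args) throws InterruptedException {
    SystemCalendar calendar = new SystemCalendar();
    List<Integer> observed = new CopyOnWriteArrayList<>();

    // several readers poll the current day without any locking
    Runnable reader = () -> {
      for (int i = 0; i < 1_000; i++) {
        observed.add(calendar.getCurrentDayOfYear());
      }
    };
    Thread[] readers = new Thread[4];
    for (int i = 0; i < readers.length; i++) {
      readers[i] = new Thread(reader);
      readers[i].start();
    }

    // a single writer advances the day 400 times
    for (int i = 0; i < 400; i++) {
      calendar.advanceToNextDay();
    }
    for (Thread t : readers) {
      t.join();
    }

    // every observed value is a valid day; the final value wrapped around
    boolean allValid = observed.stream().allMatch(d -> d >= 0 && d < 365);
    System.out.println(allValid);                       // true
    System.out.println(calendar.getCurrentDayOfYear()); // 35 (= 400 % 365)
  }
}
```

Because an int is read and written atomically, no reader can ever observe a torn value; volatile merely guarantees that it observes a recent one.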

When to use the cheap read-write lock?

This idiom may be applied when reads far outweigh writes and synchronization would lead to too much contention. You should not just assume that there is a bottleneck but actually measure it before and after applying this idiom.

When to avoid the cheap read-write lock?

In short: almost always. In the original article, Brian Goetz warns emphatically against using this idiom lightly. First, you should empirically demonstrate that there is a performance problem and that the cheap read-write lock eliminates it. The code should be properly encapsulated and very well documented. You must not use the cheap read-write lock if you deviate from the standard use case above. For example, the idiom is obviously insufficient if your getter consists of multiple statements.
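To make that concrete, consider a hypothetical variant (the class name and the year field are my additions) that also tracks the year. Even with both fields volatile, the getter performs two separate reads, so a reader can observe a torn pair, such as the old year combined with the already wrapped-around day:

```java
// BROKEN: do not use. Each field is individually visible,
// but the getter reads two fields, so the pair it returns
// is not an atomic snapshot of the calendar's state.
public final class SystemCalendarWithYear {
  private volatile int currentDayOfYear;
  private volatile int currentYear = 2024;

  public String today() {
    // Two separate volatile reads: the writer may run between them,
    // so a reader can see the old year paired with the reset day.
    return currentYear + "-" + currentDayOfYear;
  }

  public synchronized void advanceToNextDay() {
    currentDayOfYear = (currentDayOfYear + 1) % 365;
    if (currentDayOfYear == 0) {
      currentYear++;
    }
  }
}
```

Here the getter itself needs to be synchronized (or both fields combined into a single immutable object); the cheap read-write lock no longer applies.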

Alternatives

In most similar situations that occur in practice, the safer alternative is ReadWriteLock, which the JDK has provided since Java 5:

import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

@ThreadSafe
public final class SystemCalendar {
  private final ReadWriteLock lock = new ReentrantReadWriteLock();
  private final Lock readLock = lock.readLock();
  private final Lock writeLock = lock.writeLock();
  
  private int currentDayOfYear;
  
  public int getCurrentDayOfYear() {
    readLock.lock();
    try {
      return currentDayOfYear;
    } finally {
      readLock.unlock();
    }
  }
  
  public void advanceToNextDay() {
    writeLock.lock();
    // The try-finally block is not strictly necessary in this case,
    // but it is the common idiom when using locks. You almost always want to use it.
    try {
      this.currentDayOfYear = (this.currentDayOfYear + 1) % 365;
    } finally {
      writeLock.unlock();
    }
  }
}

Since Java 8, you can also use StampedLock.
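As a sketch of what that could look like, here is the calendar rewritten with StampedLock's optimistic read mode (this variant is my own illustration, not part of the original article):

```java
import java.util.concurrent.locks.StampedLock;

public final class SystemCalendar {
  private final StampedLock lock = new StampedLock();

  private int currentDayOfYear;

  public int getCurrentDayOfYear() {
    // optimistic read: no locking at all on the fast path
    long stamp = lock.tryOptimisticRead();
    int day = currentDayOfYear;
    if (!lock.validate(stamp)) {
      // a write happened in between; fall back to a real read lock
      stamp = lock.readLock();
      try {
        day = currentDayOfYear;
      } finally {
        lock.unlockRead(stamp);
      }
    }
    return day;
  }

  public void advanceToNextDay() {
    long stamp = lock.writeLock();
    try {
      currentDayOfYear = (currentDayOfYear + 1) % 365;
    } finally {
      lock.unlockWrite(stamp);
    }
  }
}
```

tryOptimisticRead() returns a stamp without blocking; if validate() later fails, a write occurred in the meantime and the getter retries under a proper read lock.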

Conclusion

With careful thought and a good understanding of our system's runtime behavior, we can reduce the cost that synchronized access introduces on highly contended, read-heavy code paths. Which technique is sufficient to solve a performance problem is best determined by profiling your system intensively under different usage scenarios. While the cheap read-write lock is certainly not something you'll pull out every day, it is a nice technique for your concurrency toolbox.

If you have any questions or comments just ping me on Twitter.