This is a recap of my talk on the same subject at EventMachine RubyConf in Baltimore on the final day of RailsConf 2011.
Concurrency is a hotly debated subject in the Ruby community. Shared state or shared nothing? Threads or Events? Sync or Async? The fact that the standard Ruby interpreter does not provide multiple-core saturation without resorting to process management clouds the issue, causing developers to constantly evaluate new approaches for using all available CPUs.
JRuby enters the discussion, sporting its use of native (kernel) threads, allowing single-process access to all of your cores. Is true concurrently-executing Ruby code obtained simply by switching to JRuby? Before you think that JRuby will make your threaded code run faster, we need to take a step back and explain.
First, a new mental model is needed. Although JRuby is just another Ruby implementation, it's also a new tool running on a completely different VM, the Java Virtual Machine, whose performance characteristics are much different from those of Ruby's VM. These characteristics vary due to the use of native threads rather than green threads, the JVM's sophisticated garbage collection facilities, and most importantly JRuby's own codebase. So your assumptions about how code works do not carry across Ruby implementations: code that previously ran slowly may now be fast, and vice versa.
Adding to the uncertainty of the situation is the unpredictability of native threads. Have you ever seen "should never happen" comments in code, where some programmer was convinced that a branch of code was completely unreachable? If the code branches based on a piece of shared state corrupted by multiple threads scheduled across multiple cores, the impossible code just might end up executing.
Here's a hypothetical example running on some fictitious native-threaded, optimizing Ruby VM. Say we have this singleton object that's expensive to create, so the programmer wrote it to be constructed lazily.
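A minimal sketch of such a lazily-constructed singleton (the Configuration name and its settings hash are illustrative):

```ruby
class Configuration
  attr_reader :settings

  # Imagine this setup is expensive: parsing files, opening connections...
  def initialize
    @settings = { :log_level => :info }
  end

  # Construct the singleton lazily, on first access
  def self.instance
    @instance ||= new
  end
end
```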
As we all know, the "or-equals" operator is really just sugar for the following code:
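For a lazily-initialized singleton like the one described above, the desugared form looks like this (Configuration is an illustrative name):

```ruby
class Configuration
  def initialize
    @settings = {}   # stand-in for expensive setup
  end

  # The expanded form of `@instance ||= new`
  def self.instance
    @instance || (@instance = new)
  end
end
```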
Now let's play the role of the optimizing VM. Let's say that this VM decides to inline the new method like so:
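A sketch of what that inlined version might look like (illustrative; a real VM would perform this kind of reordering at the bytecode level, not in your source):

```ruby
class Configuration
  attr_reader :settings

  def initialize
    # Stand-in for expensive setup work
    @settings = { :log_level => :info }
  end

  # Roughly what an inlining VM could produce: @instance becomes
  # visible to other threads *before* initialize has completed
  def self.instance
    unless @instance
      @instance = allocate
      @instance.send(:initialize)
    end
    @instance
  end
end
```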
What if two threads try to initialize the instance at the same time? Trouble! We have the potential for a race: the first thread has assigned @instance but has not yet finished the expensive initialization, so the second thread sees a non-nil @instance, happily returns the uninitialized instance, and tries to use it. (Some of you will recognize this as a variation on the double-checked locking problem.)
So does this mean that we need to be extra vigilant with our code, sprinkling it with mutex blocks everywhere? Will it become an unreadable, unmaintainable mess? Certainly not, as long as we follow a simple rule:
Avoid shared, mutable state.
This includes lazy initialization, which is mutating shared state at the time it is first accessed.
The consequences of programming with real threads are difficult to conceptualize at first, especially if you're used to Ruby's green threads or Ruby 1.9's global interpreter lock (GIL). Consider this code:
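A sketch consistent with the description that follows: M threads each pushing N integers onto a shared array.

```ruby
M, N = 10, 1_000

data = []
threads = M.times.map do |i|
  Thread.new do
    # Each thread appends N integers to the shared array
    N.times { |j| data << (i * N + j) }
  end
end
threads.each(&:join)

puts data.size   # M * N (10000) under MRI; frequently less under JRuby
```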
What happens to the data array after all threads have finished? Under Ruby 1.8 and Ruby 1.9, we always get an array of integers of size M * N. There may be a little randomness in the ordering of the entries, but otherwise the array is intact and well-behaved.
Under JRuby, arrays (as well as strings, hashes and other core library data structures) are not safe for mutation by multiple threads. So when we run the code above with JRuby, the array and its internals become corrupted. The array's size is frequently less than M * N, and what's more, we often observe some of the entries are nil rather than the integers we expect. Sometimes we'll encounter a ConcurrencyError raised as well.
This uncertainty can be the cause of some nasty, hard-to-pinpoint bugs in your code. So if your code works well with Ruby but blows up with unexpected nils or otherwise unexplained behavior, you can at least start to point the blame at threaded code that mutates state.
What about metaprogramming in the presence of threads? Can we corrupt the interpreter by defining classes and/or methods from many threads at once? Fortunately the answer here is no. JRuby explicitly takes steps to ensure that class and method definition are properly synchronized internally. Also, since class variables are frequently used for sharing state between objects, they are synchronized as well.
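A small illustration, runnable on any Ruby (Target is an illustrative class): ten threads define methods on the same class concurrently, and the class comes out intact.

```ruby
class Target; end

# Ten threads defining methods on the same class at once
threads = 10.times.map do |i|
  Thread.new do
    Target.class_eval do
      define_method("method_#{i}") { i }
    end
  end
end
threads.each(&:join)

puts Target.new.method_7   # => 7; the method table is not corrupted
```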
Using Native Threads
As you might expect, using native threads in JRuby is as simple as working with the regular Ruby Thread class. (Note that there are some caveats). For example, you can easily offload some computation to the background:
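For instance, delivering a batch of email in the background with one thread per message (deliver here is a hypothetical stand-in for your mailer):

```ruby
# Hypothetical stand-in for something like Mailer.deliver(message)
def deliver(message)
  message.upcase   # pretend this talks to an SMTP server
end

messages = %w[welcome receipt reminder]

# Naive approach: spawn one native thread per message
threads = messages.map do |message|
  Thread.new { deliver(message) }
end
results = threads.map(&:value)
```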
(For systems with large volumes of email, this naive approach may not work well. Native threads carry a bigger initialization cost and memory overhead than green threads, so JRuby normally cannot support more than about 10,000 threads.)
To work around this, we can use a thread pool. Using JRuby's Java integration, we can easily access the built-in Executor classes:
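A sketch of what that can look like under JRuby (messages and Mailer are hypothetical stand-ins; the Executors and AtomicInteger classes are real java.util.concurrent API):

```ruby
require 'java'

java_import java.util.concurrent.Executors
java_import java.util.concurrent.TimeUnit
java_import java.util.concurrent.atomic.AtomicInteger

# A general-purpose pool that grows on demand and reaps idle threads
pool = Executors.new_cached_thread_pool

delivered = AtomicInteger.new
messages.each do |message|              # hypothetical work items
  pool.execute do
    Mailer.deliver(message)             # hypothetical mailer
    delivered.increment_and_get
  end
end

pool.shutdown
pool.await_termination(60, TimeUnit::SECONDS)

# A fixed pool caps background work at two concurrent threads
fixed_pool = Executors.new_fixed_thread_pool(2)
messages.each { |message| fixed_pool.execute { Mailer.deliver(message) } }
fixed_pool.shutdown
```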
Here we're using two thread pools. The first, the "cached" thread pool, is a general-purpose pool that grows on demand and frees up system resources by reaping threads after they have been idle. The second example uses a fixed pool of two threads, for when you want to place a hard limit on the amount of background processing.
Java's java.util.concurrent package has a number of useful utilities like these for concurrent programming including locks, semaphores, latches, queues, concurrent lists and maps, and atomic objects such as the AtomicInteger used above. And they're all trivially available to you via JRuby.
Concurrency with Actors
The shift in thinking around concurrent programming in recent years has been toward higher-level abstractions. This arose out of the realization that low-level coding with fine-grained locks is hard: it's error-prone, makes code less readable and maintainable, and is difficult to troubleshoot. The upside is that we get to leave the hard stuff to the library programmers who implement these abstractions.
Of all the higher-level approaches to concurrent programming, the actor model has become the favorite in recent years, coincident with the rise in popularity of Erlang, where the actor model has been proven to work well.
Ruby has had a number of Actor frameworks for some time, including a recent entry, Celluloid. (Be sure to watch Celluloid's creator Tony Arcieri in a screencast for EMRubyConf.) While these all work great on JRuby, again I'd like to focus on two Java libraries that are just as accessible from JRuby but go above and beyond what is currently possible with the pure Ruby libraries.
Jetlang isn't quite a full actor library, but instead claims to be "designed specifically for high performance in-memory messaging". (Jretlang is the JRuby wrapper around Jetlang). The main primitives in Jetlang are Fibers and Channels. Here's a "Hello World" example:
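A sketch along those lines, driving Jetlang's Fiber and Channel primitives directly through JRuby's Java integration (ThreadFiber and MemoryChannel are real Jetlang classes; this assumes the jetlang jar is on the load path, and Jretlang's own DSL wraps the same machinery):

```ruby
require 'java'

java_import 'org.jetlang.fibers.ThreadFiber'
java_import 'org.jetlang.channels.MemoryChannel'

fiber = ThreadFiber.new
fiber.start

channel = MemoryChannel.new
# The block becomes a Jetlang Callback; it runs on the fiber's thread
channel.subscribe(fiber) { |message| puts message }

channel.publish("Hello World")

sleep 0.1        # give the fiber a moment to drain the channel
fiber.dispose
```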
If you want a toolkit for building a message-passing framework in your application, give Jretlang a look.
Akka is a platform and toolkit for concurrent, scalable, and fault-tolerant systems. It has many features inspired by Erlang, including many flavors of actors and fault-tolerant supervisor hierarchies. Here's the simplest-possible Akka-in-Ruby example:
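A rough sketch in that spirit, written against Akka's 1.x Java API from JRuby (Actors.actorOf and UntypedActor are Akka 1.x names; the exact interop details here are an assumption, and the original example's DSL may differ):

```ruby
require 'java'

# Assumes the Akka 1.x jars are on the classpath
java_import 'akka.actor.Actors'
java_import 'akka.actor.UntypedActor'

class HelloWorld < UntypedActor
  def onReceive(message)
    puts "Hello from the actor: #{message}"
  end
end

# actorOf hands back an ActorRef, not the object itself; messages sent
# through the ref are routed by Akka and delivered asynchronously
actor = Actors.actor_of { HelloWorld.new }
actor.start
actor.tell(:hi)
```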
When run, this example prints its greeting asynchronously.
Of note here is that we're creating an actor reference to the HelloWorld object and sending it the #hi message, but that does not call the method immediately. Instead, the message is routed through Akka and delivered back to the object later.
JRuby Makes Concurrency Easy
Once again, the long and short of the concurrency story on JRuby is the ease with which you can access the best of both the Ruby and Java libraries. Go forth and glue together concurrency-heavy applications with JRuby, and please share them with us!