One of the more frequent questions that new JRubyists ask is "Which server should I use to build and deploy my application?" I'm glad you asked! But first, I need to know a little more about you, your environment, and your deployment needs. Do you have an existing Java environment? Do you prefer to make sure you're using the same components for both development and production? Do you like rolling your own servers? Do you like experimentation? Do you like running background services in the same process? How about zero-downtime redeployments? Read on, we have something for everybody.
You have an existing Java web server
Maybe your organization is already Java-heavy, or you have an operations team that's familiar with operating an existing Java server. That's fine! In that case, JRuby-Rack is what you need.
JRuby-Rack is the foundation that enables Ruby applications to run on virtually any existing Java web server. Put simply, JRuby-Rack is a bridge from the Java Servlet API to Ruby's Rack API. So if you have a Servlet container (and pretty much every widely-used Java server in existence has one), you can run any Rack application with JRuby-Rack. We regularly test JRuby-Rack with Tomcat 6/7, Jetty 6/7, JBoss 5/6, Resin 4, and GlassFish 3, so chances are good that your application will run fine on your server.
JRuby-Rack is also the hidden engine behind Warbler, the tool that quickly assembles a full .war file from your Ruby application. When you run warble, Warbler includes a copy of JRuby-Rack in the final assembled archive and wires up the RackFilter Servlet filter that delegates requests to your Ruby application.
In addition to managing the lifecycle of your Ruby application inside the Java web server, JRuby-Rack has accumulated a number of useful extensions over its three-year life. Here are three.
Early flushing of the response

Say you'd like to flush your response early. With JRuby-Rack, simply set the Transfer-Encoding: chunked header on the response, and JRuby-Rack will flush each element of the response body individually. (A word of caution: Aaron Patterson has written about how poorly the Rack API supports this, so make sure you know what you're doing.) For example, you could build an endpoint that perpetually reports the server time as a series of JSON chunks.
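Here's a minimal sketch of such an endpoint as a bare Rack app (the original used Sinatra; the payload and pacing are illustrative, and the loop is capped at three ticks so the example runs to completion, where a real endpoint might loop forever):

```ruby
require 'json'

# With Transfer-Encoding set to "chunked", JRuby-Rack flushes each
# element the body yields as its own chunk.
time_ticker = lambda do |env|
  body = Enumerator.new do |yielder|
    3.times do
      yielder << JSON.generate(:time => Time.now.to_s)
      sleep 0.1 # illustrative pacing between chunks
    end
  end
  [200,
   { 'Content-Type'      => 'application/json',
     'Transfer-Encoding' => 'chunked' },
   body]
end
```

Mount it from a config.ru with run time_ticker, and each JSON payload arrives as its own chunk.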
A more efficient X-Sendfile
You can also use a Ruby File or Tempfile object as the response body, and JRuby-Rack will write it to the response using the Java NIO FileChannel#transferTo method, which, depending on the OS, can transfer bytes directly from the filesystem cache to the target channel without actually copying them. JRuby-Rack will close the file after the request is finished so you don't have to worry about leaking the resource.
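Here's a sketch of what that looks like from the Rack side; the route and file contents are hypothetical, and a Tempfile stands in for a real file on disk:

```ruby
require 'tempfile'

# Returning a File or Tempfile as the Rack body lets JRuby-Rack hand
# the bytes to FileChannel#transferTo rather than copying them through
# Ruby; it also closes the file once the response has been written.
send_report = lambda do |env|
  file = Tempfile.new('report')   # stand-in for a file on disk
  file.write('report contents')
  file.rewind
  [200, { 'Content-Type' => 'text/plain' }, file]
end
```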
Interaction with Servlets
You can use the JRuby-Rack RackFilter alongside other servlets or JSPs in a Java web application. For example, you can internally redirect from Ruby to another servlet. JRuby-Rack adds a custom #forward_to method to Rails' ActionController:
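A sketch of how a controller might use it; the controller, attribute name, and JSP path are illustrative, and a minimal stand-in for ActionController::Base (with a stub forward_to) is included so the snippet runs outside a real Rails and Servlet stack:

```ruby
# Stand-in scaffolding so this sketch runs outside Rails; in a real
# app the base class comes from Rails and #forward_to from JRuby-Rack.
module ActionController
  class Base
    Request = Struct.new(:env)
    attr_reader :request, :forwarded_path

    def initialize
      @request = Request.new({})
    end

    # Stub of JRuby-Rack's #forward_to: records the servlet path it
    # would hand the request to via the container's dispatcher.
    def forward_to(path)
      @forwarded_path = path
    end
  end
end

class WelcomeController < ActionController::Base
  def index
    # Make data available to the JSP as a request attribute...
    request.env['message'] = 'Hello from Rails'
    # ...then hand the request off to the servlet container.
    forward_to '/jsps/index.jsp'
  end
end
```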
Inside public/jsps/index.jsp, the JSP can then read the message request attribute, for example with the EL expression ${message} (or request.getAttribute("message") in a scriptlet).
With this technique it's easy to introduce Rails as the routing and controller layer on top of an existing Java web application.
We'll be writing much more about integrating Ruby alongside Java web applications in the near future, so watch this space.
You want to distribute your application
Warbler comes with a feature dubbed executable war, which builds a .war file that can either be deployed to a container or run as a self-contained executable. Warbler embeds Winstone, a small servlet container, in the archive; build one with warble executable war, and when you launch the result with java -jar, it self-extracts and listens at localhost:8080.
The executable archive is completely self-contained, so you can distribute it and run it anywhere a Java Virtual Machine is installed.
Up to this point, we've learned that Warbler and JRuby-Rack are great for deploying across a wide range of environments, but what about development? When you're coding a new application, it would be mighty painful to generate a .war file and deploy it after every change. Rubyists are used to firing up a web server, writing code, and hitting the refresh button. JRuby certainly supports this mode, but with a different cast of characters than you know from the C Ruby world. Let's take a look at five JRuby servers that fill different needs.
You want uniformity across environments
Unlike WEBrick, the server included in Ruby's standard library, Trinidad by David Calavera is equally suited to development and production use. Trinidad itself is written in Ruby, and embeds JRuby-Rack and Apache Tomcat as Java libraries. To try it out, simply gem install trinidad and start the server with the trinidad command.
Trinidad is the most mature of the JRuby web servers, and most familiar in feel to other Ruby servers. trinidad -e production -p 3001 does what you expect. In addition, JRuby-specific options like --threadsafe, --classes, and --jars are easily accessible.
You can also customize Trinidad with extensions. Currently there are lifecycle, daemon, hotdeploy, and sandbox extensions available. Also notable is the brand-new scheduler extension, developed by Brandon DeWitt; it's the first extension developed by someone other than David, a sign that the Trinidad community is growing beyond its primary maintainer.
You want a light, customizable server
Mizuno, by Don Werve, is a great example of how easy it is to leverage an existing Java library in a very small amount of code. Mizuno is advertised as "a set of Jetty-powered running shoes for JRuby and Rack", and clocks in at a svelte 335 lines of pure Ruby code. Don't let the size fool you, though: Mizuno turns in good performance numbers as well. Try out the Mizuno gem, and if you're thinking of writing a custom server, take a look at the Mizuno codebase as a starting point.
You like playing with asynchronous APIs
Kevin Williams has been hacking on his server, Aspen, for a while now, and has recently kicked it up a notch. Aspen's approach is to mimic the design of the Ruby web server Thin using the JBoss Netty asynchronous event-driven network application framework. If a JVM-based, Ruby-flavored Node.js is up your alley, get in touch with Kevin and take a look at Aspen.
You want a full-stack solution
TorqueBox attempts to go beyond web-centric services (supporting Rails, Rack, Sinatra, and so on) and also exposes other enterprise-grade services to Ruby applications. One of the niceties of TorqueBox's full-stack approach is integrated messaging and asynchronous tasks: backgrounding is as simple as creating a task class in app/tasks/email_task.rb and invoking it asynchronously.
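Here's a sketch of both halves; the class and method names are illustrative, and a minimal stand-in for the TorqueBox base class is included so the snippet runs outside the server, where async would really enqueue the call on HornetQ:

```ruby
# Stand-in for TorqueBox's task base class so this sketch runs outside
# the server; in a real app it comes from the torquebox gem, and
# .async enqueues the call on a queue instead of running it inline.
module TorqueBox
  module Messaging
    class Task
      def self.async(method, payload = {})
        new.send(method, payload)
      end
    end
  end
end

# app/tasks/email_task.rb
class EmailTask < TorqueBox::Messaging::Task
  def welcome(payload)
    # A real task would render and deliver the mail here.
    "queued welcome mail for user #{payload[:user_id]}"
  end
end

# Invoked from a controller: in a real deployment this returns
# immediately while the task runs on the other end of the queue.
EmailTask.async(:welcome, :user_id => 42)
```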
This is much better than simply launching a background thread or even a subprocess, because behind the scenes TorqueBox creates a queue for your task on the internal HornetQ message bus and invokes an instance of the task class on the receiving end of the queue. If the server happens to crash, you're much less likely to lose the task.
So if you're familiar with JBoss or like the idea of having a wider platform to build messaging and other background infrastructure all in Ruby, TorqueBox is definitely worth a look.
You want zero-downtime deploys
Kirk, at two months old the youngest entry in the JRuby server category, is brought to us by Carl Lerche of Rails core and Bundler fame.
The key use case for Kirk is zero-downtime deploys in the same fashion as Passenger or Unicorn. Applications deployed in Kirk can be redeployed either from the command line or by touching files that Kirk is configured to watch.
Kirk deployments are configured with a Kirkfile (facepalm!). Here's a simple example:
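Here's a hypothetical Kirkfile along those lines; the rack, listen, and watch directives and the paths are illustrative of the DSL rather than copied from Kirk's documentation:

```ruby
# Three Rack apps, one JVM: each listens on its own port and
# redeploys when its watch file is touched.
rack "app1/config.ru" do
  listen "localhost:9090"
  watch  "app1/REVISION"
end

rack "app2/config.ru" do
  listen "localhost:9091"
  watch  "app2/REVISION"
end

rack "app3/config.ru" do
  listen "localhost:9092"
  watch  "app3/REVISION"
end
```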
As you surely guessed, this starts up three socket listeners on ports 9090-9092, all inside a single JVM. Each application is redeployable on its own, or all at once if you configure them to watch the same file.
If you redeploy while the application is servicing requests, Kirk warms up the new version of the application and swaps it in atomically. In a stress test, I compared response rates between a stable application and one that continuously redeploys every two seconds. The stable server performed at a rate of 285 replies per second, while the continuous-redeploy version responded to every request but dropped to a rate of 90 replies per second. Obviously redeploys aren't free, but as long as you don't deploy new code every two seconds (!), you won't see a noticeable slowdown.
There's certainly a swarm of activity in JRuby web server land, and you can count on seeing a few of these options appear in future JRuby offerings on the Engine Yard AppCloud.
If you haven't yet tried running your application with JRuby, stay tuned. My session at RailsConf 2011 covers porting to JRuby and you can rest assured that we'll cover the details in a future article as well.
Let us know in the comments about your preferred way of deploying JRuby applications, or tell us frankly what's holding you back from deploying with JRuby so we can correct those issues!