The business of executing Ruby code is booming; with so many Ruby environments in development, there are just as many different ways of actually running code. We've been hard at work in the world of Rubinius, and over the last few months we've been focused on a new way of executing Ruby code by converting it to machine code at run-time—a Just-In-Time compiler.
In this post, I'll walk you through how Ruby code begins as text and is converted into bytecode. In a follow-up post, I'll go from bytecode to the machine code itself.
So let's get started: in the beginning, there was Ruby code (and the compiler saw it was good)!
def foo(a,b) a + b end
This is a simple .rb file written to disk, the currency of Ruby developers. So first things first: to get this code into the environment, it must be parsed into a tree of Node objects. In Rubinius, this is typically done using the parser we've imported from MRI 1.8.6. This tree of Node structures is a C data structure, and it's what 1.8 uses for direct execution: it simply reads the tree and runs code based on the tree's structure.
The issue with directly executing a tree structure is that it's fairly inefficient. There's no simple way to move around the tree, and you get a lot of redundant operations performed at the CPU level to execute even the most trivial of Ruby code.
So what we want to do instead is lower this tree to bytecode, which is an easier representation to execute efficiently. We start by translating the tree of Nodes in C into S-Expressions (sexps) using a variant of Ryan Davis's ParseTree. I'd be remiss if I didn't mention that sexps can also be produced by using Ryan's ruby_parser project, which is what Rubinius uses for bootstrapping itself within MRI.
The neat part about this transform is that we've now got a representation of the program as normal Ruby objects. This allows us to write our bytecode compiler in Ruby itself, which is the phase that takes the sexp and transforms it into an internal tree of AST instances.
An astute reader will notice that not two paragraphs ago, we already had a tree-of-Nodes representation. This is an inefficiency in the current compiler, and one that Brian Ford is actively working to eliminate. The upside of the transform is that while it was a tree of Node structures in C before, we've now got a tree of normal Ruby objects, which are much easier to operate on.
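If you'd like a hands-on feel for sexp-style trees without a Rubinius build on hand, MRI's stdlib Ripper gives the same flavor of plain-Ruby-objects representation. (Ripper is a different parser than ParseTree/ruby_parser, so its node names differ; this is just an illustration.)

```ruby
require "ripper" # MRI stdlib; its output format differs from ParseTree's sexps

sexp = Ripper.sexp("def foo(a, b) a + b end")

# The whole program is just nested arrays of symbols and strings,
# ordinary Ruby objects you can walk with ordinary Ruby code.
p sexp.first        # => :program
p sexp.dig(1, 0, 0) # => :def
```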
The Ruby AST is decorated with a lot of information concerning node position, effects on other nodes, etc. This information is used in the next phase, which uses the visitor pattern to flow downward from the root of the AST into its leaves. This is the bytecode generation phase.
Each node is responsible for generating bytecode for its representation, which it does by calling methods on a generator object. This has many advantages. It's trivial to pass down a special test generator, which keeps track of all of the calls it receives for later comparison with the results of another test generator. This is how we test the compiler's bytecode generation.
At this stage, all of these methods have been called on a bytecode generation object, which retains the information in a simple list. The list contains the name of each bytecode to emit and its operands (the arguments to a bytecode). This representation keeps an additional level of indirection to make generation easier, such as representing goto targets as normal Label objects.
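Here's a minimal sketch of that arrangement. The class and node names are invented for this post (they are not Rubinius' real classes); the shape is the point: AST nodes call emit-style methods on whatever generator they are handed, and a recording generator simply remembers the calls.

```ruby
# Invented stand-in for a bytecode generator: it records each
# instruction it's asked to emit, rather than encoding it.
class RecordingGenerator
  attr_reader :stream

  def initialize
    @stream = []
  end

  def push_local(index)
    @stream << [:push_local, index]
  end

  def meta_send_op_plus(literal_index)
    @stream << [:meta_send_op_plus, literal_index]
  end

  def ret
    @stream << [:ret]
  end
end

# Minimal invented AST node types, each responsible for its own bytecode.
LocalVariableNode = Struct.new(:index) do
  def bytecode(g)
    g.push_local(index)
  end
end

PlusNode = Struct.new(:left, :right) do
  def bytecode(g)
    left.bytecode(g)  # flow downward into the children first
    right.bytecode(g)
    g.meta_send_op_plus(0)
  end
end

# The body of `def foo(a, b) a + b end`: a + b, then return.
ast = PlusNode.new(LocalVariableNode.new(0), LocalVariableNode.new(1))
g = RecordingGenerator.new
ast.bytecode(g)
g.ret

p g.stream
# => [[:push_local, 0], [:push_local, 1], [:meta_send_op_plus, 0], [:ret]]
```

Swapping in a second recording generator and comparing streams is exactly the kind of test the compiler uses.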
Because an InstructionSequence must be a list of Fixnums, these Label objects are resolved into bytecode locations: simple offsets from the beginning of the method. An InstructionSequence represents the full bytecode for a method as a Tuple of Fixnums.
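That resolution step can be sketched like this. The push_local (19) and ret (12) opcode numbers match the Tuple shown later in this post; the other opcode numbers, the instruction names, and the two-pass structure are invented here for illustration.

```ruby
# Invented stand-in for a jump-target label: it learns its
# bytecode offset during a resolution pass.
class Label
  attr_accessor :position
end

done        = Label.new
else_branch = Label.new

# A symbolic stream for something like `a ? 1 : 2` (invented opcodes).
stream = [
  [:push_local, 0],
  [:goto_if_false, else_branch],
  [:push_int, 1],
  [:goto, done],
  [:set_label, else_branch],
  [:push_int, 2],
  [:set_label, done],
  [:ret],
]

# First pass: compute each instruction's offset and pin labels to it.
# Each opcode takes one word, plus one word per operand.
offset = 0
stream.each do |name, operand|
  if name == :set_label
    operand.position = offset
  else
    offset += operand.nil? ? 1 : 2
  end
end

# Second pass: emit flat integers, substituting resolved label positions.
OPCODES = { push_local: 19, goto_if_false: 30, push_int: 40, goto: 31, ret: 12 }
flat = []
stream.each do |name, operand|
  next if name == :set_label
  flat << OPCODES[name]
  unless operand.nil?
    flat << (operand.is_a?(Label) ? operand.position : operand)
  end
end

p flat # => [19, 0, 30, 8, 40, 1, 31, 10, 40, 2, 12]
```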
And so after all those transforms and operations, we've turned

def foo(a,b) a + b end

into

#<Rubinius::Tuple: 19, 0, 19, 1, 77, 0, 12>
or, rendered symbolically:
[[:push_local, 0], [:push_local, 1], [:meta_send_op_plus, 0], [:ret]]
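To make the mapping between the two forms concrete, here's a toy decoder. The opcode numbers come straight from the Tuple above; the table layout and operand counts are a simplification invented for this post, not Rubinius' actual decoding machinery.

```ruby
# Invented opcode table: number => [name, operand count].
OPCODE_TABLE = {
  19 => [:push_local, 1],
  77 => [:meta_send_op_plus, 1],
  12 => [:ret, 0],
}

# Walk the flat word stream, consuming one opcode word plus
# its operand words at each step.
def decode(words)
  out = []
  i = 0
  while i < words.length
    name, argc = OPCODE_TABLE.fetch(words[i])
    out << [name, *words[i + 1, argc]]
    i += 1 + argc
  end
  out
end

p decode([19, 0, 19, 1, 77, 0, 12])
# => [[:push_local, 0], [:push_local, 1], [:meta_send_op_plus, 0], [:ret]]
```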
It should be noted that because Rubinius uses normal Ruby objects for as much as possible, it's trivial to see this information for yourself. Here's my irb session to extract the previous values:
>> cm = def foo(a,b); a + b; end
=> #<Rubinius::CompiledMethod foo file=(irb)>
>> i = cm.iseq
=> #<InstructionSequence:0x142>
>> i.decode
=> [[:push_local, 0], [:push_local, 1], [:meta_send_op_plus, 0], [:ret]]
>> i.opcodes
=> #<Rubinius::Tuple: 19, 0, 19, 1, 77, 0, 12>
You can also get a more assembly-oriented view:
>> cm = def foo(a,b); a + b; end
=> #<Rubinius::CompiledMethod foo file=(irb)>
>> puts cm.decode
0000:  push_local 0    # a
0002:  push_local 1    # b
0004:  meta_send_op_plus :+
0006:  ret
=> nil
So we've now got this list of Fixnums, but we're missing one part of the equation. A list of Fixnums is data poor—you can't represent much using that. So associated with each InstructionSequence within the CompiledMethod is a Tuple called the literals Tuple. It holds objects that can be directly referenced by the bytecode.
The simplest example of this is the :meta_send_op_plus opcode we see above. That 0 on the end says "the name of the method to send is located at position 0 in the literals Tuple." Let's take a look at the literals Tuple for this method:
>> cm.literals => #<Rubinius::Tuple: :+>
There we go. One entry for the symbol :+. Let's look at one more example:
>> cm = def foo(a); p a, "hello evan"; end
=> #<Rubinius::CompiledMethod foo file=(irb)>
>> cm.iseq.decode
=> [[:push_self], [:push_local, 0], [:push_literal, 0], [:string_dup], [:allow_private], [:send_stack, 1, 2], [:ret]]
>> cm.literals
=> #<Rubinius::Tuple: "hello evan", :p>
>> puts cm.decode
0000:  push_self
0001:  push_local 0    # a
0003:  push_literal "hello evan"
0005:  string_dup
0006:  allow_private
0007:  send_stack :p, 2
0010:  ret
=> nil
In this method, we see two literals being referenced. The string "hello evan" is pushed directly onto the stack by :push_literal, and the :send_stack instruction indicates that position 1 in the literals Tuple contains the method to send. In this case, it's :p.
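The mechanics of the literals pool can be sketched in a few lines of plain Ruby. CompiledMethodSketch is an invented stand-in, not Rubinius' real CompiledMethod; the point is that instructions carry only small integers, and anything richer lives in the pool.

```ruby
# Invented pairing of an instruction list with its literals pool.
CompiledMethodSketch = Struct.new(:iseq, :literals)

cm = CompiledMethodSketch.new(
  [[:push_self], [:push_literal, 0], [:send_stack, 1, 1], [:ret]],
  ["hello evan", :p]
)

# The send instruction's first operand is an index into the pool,
# which is where the method name actually lives.
send_op     = cm.iseq[2]              # [:send_stack, 1, 1]
method_name = cm.literals[send_op[1]]

p cm.literals[0] # => "hello evan"
p method_name    # => :p
```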
So there you have it: we've taken the text of a program and turned it into a set of Ruby objects that the Rubinius VM can execute. In a later post, I'll cover how the VM interprets the bytecode to run the method, and we'll cover compiling this bytecode down to machine code that can be directly executed by the CPU.
We're hard at work, but always looking for new contributors and community members. If you like what you see, find us on IRC or leave a comment!