Amazon recently announced micro instances for Elastic Compute Cloud, or EC2. Historically, new instance types have been larger than existing ones, but this time we got a smaller, more affordable option. It was an exciting announcement for me, as a hobbyist on the lookout for servers on the cheap. At 613 MB of RAM and two cents per hour, micro instances have roughly a third of the RAM of small instances at a quarter of the price. Small instances are 32-bit, while micro instances (unlike any other type) match the 32- or 64-bit architecture of their image. Beyond this simple comparison, however, things get a bit pear-shaped.
Micro instances have no ephemeral storage, which also means they do not support instance-store images. Instance-store images are stored in S3 and copied to the ephemeral root device when instances boot. The alternative is to use volume-store images, which create an Elastic Block Store (EBS) volume from the image snapshot and mount it as the root device. Choosing between instance and volume store involves some trade-offs: instance-store boots slower but costs less; volume-store boots faster, runs slower at first while data finishes copying, and costs more (the price difference comes from EBS transfer and storage costs). Publicly available, supported AMIs do not necessarily come in volume-store versions. A conversion process will turn an instance-store image into a volume-store one, but doing so effectively forks away from the supported version, and it is not clear that it's worth the trouble if your only goal is running micros.
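For the curious, the conversion goes roughly like this (a hypothetical sketch using the EC2 API tools; all IDs, sizes, and zones below are placeholders you would substitute with your own):

```shell
# Create a fresh EBS volume and attach it to a running instance-store
# instance of the image you want to convert (placeholder IDs throughout).
ec2-create-volume --size 10 -z us-east-1a
ec2-attach-volume vol-xxxxxxxx -i i-xxxxxxxx -d /dev/sdf

# On the instance itself: copy the root filesystem onto the new volume.
dd if=/dev/sda1 of=/dev/sdf bs=1M

# Detach, snapshot the volume, and register the snapshot as a
# volume-store (EBS-backed) AMI.
ec2-detach-volume vol-xxxxxxxx
ec2-create-snapshot vol-xxxxxxxx
ec2-register --snapshot snap-xxxxxxxx --root-device-name /dev/sda1 \
  --name "my-volume-store-image" --architecture i386
```

Keep in mind this snapshot will not pick up future updates to the original supported AMI, which is exactly the fork mentioned above.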
The characteristic of micro instances that probably sticks out the most is the allotted CPU. EC2 describes its machines in terms of Elastic Compute Units, or ECUs, each equivalent to a 1.0-1.2 GHz 2007 Opteron or Xeon processor. Instance types prior to micro were listed with a fixed number of ECUs, with small providing the reference point at 1 ECU. Micro instances break the trend, and are listed as providing "Up to 2 EC2 Compute Units (for short periodic bursts)". The fine print provides some guidelines but no hard numbers: bursting is everything above a background level much lower than 1 ECU, and is allowed for some percentage of each time period. Unfortunately the specifics are not revealed, and bursting out of turn both gets throttled and lengthens the wait before the next burst. For context, micro instances are described as serving tens of requests per minute, with the fine print containing a number of graphs showing recommended CPU usage patterns.
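To get a feel for the bursting behavior yourself, here is a rough probe (my own sketch, not anything Amazon publishes): count how much work a busy loop gets done in each fixed window. On a throttled micro, the per-window count should drop sharply once the burst allowance runs out.

```shell
# Crude CPU probe: count shell loop iterations completed per window.
# WINDOW is short here for illustration; lengthen it (and add more
# windows) on a real micro instance to see throttling kick in.
WINDOW=2
for w in 1 2 3; do
  end=$(( $(date +%s) + WINDOW ))
  count=0
  while [ "$(date +%s)" -lt "$end" ]; do
    count=$((count + 1))
  done
  echo "window $w: $count iterations"
done
```

Comparing the first window against the later ones gives a ballpark for how far the background level sits below the burst ceiling.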
So, at the end of the day, it turns out that micro really is not the new small. Micros are smaller and have fewer, weirder resources, allotted in a non-specific way that seems to run counter to how many people might use them. Load balancers, proxies, cron jobs, or monitoring systems might be workable, but running web or database servers would be pushing your luck. If you need something cheap but consistent, your best bet probably remains small instances (if not a VPS or a different cloud). If you need affordable, bursty processing, you're probably better served by non-micro spot instances. Based on these considerations, we are not currently rushing to add micro support to Engine Yard AppCloud, where it could easily provide a poor user experience for many customers. However, we will keep these options in mind as they become more proven and as we develop new features.
Anecdotal evidence aside, I welcome you to try them out for yourself:
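A minimal way to kick the tires, assuming you have the EC2 API tools configured (the AMI ID, key name, and instance ID are placeholders):

```shell
# Launch a single micro instance from an EBS-backed AMI (placeholder IDs).
ec2-run-instances ami-xxxxxxxx --instance-type t1.micro --key my-key

# Find its public DNS name, then log in (the user depends on the AMI).
ec2-describe-instances
ssh -i my-key.pem root@<public-dns-name>

# Two cents an hour still adds up; terminate it when you're done.
ec2-terminate-instances i-xxxxxxxx
```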
Please let us know what you find out!