I've just completed generalizing the backup procedures I've created.
They now expect all backup files to be stored in a single directory, with filenames that sort into date order, oldest first.
A single script handles synchronizing this directory with Amazon S3 and maintaining a set number of copies both locally and on S3. For instance, we currently keep one week's worth of backups taken every 4 hours, or 42 files in total.
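Because the filenames sort into date order, the local pruning step can be as simple as deleting everything beyond the newest N files. Here's a minimal sketch of that idea; the function name, directory, and retention count are illustrative, not the actual script:

```shell
#!/bin/sh
# prune_old_backups DIR KEEP: delete all but the newest KEEP files in DIR.
# Relies on filenames sorting lexically into date order, oldest first.
# Uses GNU head's negative line count (head -n -N).
prune_old_backups() {
    dir=$1
    keep=$2
    ls -1 "$dir" | sort | head -n -"$keep" | while read -r f; do
        rm -f -- "$dir/$f"
    done
}

# Example: keep 42 files (1 week at one backup every 4 hours).
# prune_old_backups /var/backups/app 42
```

The same count-based logic applies on the S3 side, just listing and deleting remote keys instead of local files.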
Two additional scripts handle dumping MySQL databases and dumping Subversion repositories. Another script will be added to back up user-generated data (per customer), and the S3 script will back that data up to each customer's own S3 account.
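The key trick for the dump scripts is writing files whose names sort into date order, which a UTC timestamp in the filename gives you for free. A sketch, with the dump commands shown as comments since database names and paths here are only assumptions:

```shell
#!/bin/sh
# backup_name LABEL EXT: build a UTC-timestamped filename so a plain
# lexical sort puts the files in date order, oldest first.
backup_name() {
    printf '%s-%s.%s\n' "$1" "$(date -u +%Y%m%d%H%M%S)" "$2"
}

# MySQL: dump all databases, compressed (paths illustrative):
# mysqldump --all-databases | gzip > "/var/backups/app/$(backup_name mysql sql.gz)"

# Subversion: full repository dump, compressed (repo path illustrative):
# svnadmin dump /var/svn/myrepo | gzip > "/var/backups/app/$(backup_name svn-myrepo dump.gz)"
```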
I'll polish the scripts up a bit next week and release them to the public.
Along the way, I've added 2048-bit public-key encryption to the files before storing them on S3, so there is now even less risk to customer data if Amazon S3 is ever compromised.
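Since RSA can only encrypt a few hundred bytes directly, encrypting large backup files with a 2048-bit public key is normally done as a hybrid scheme: a random symmetric key encrypts the file, and the RSA public key encrypts that symmetric key. A hedged sketch of that pattern using OpenSSL (this is not necessarily the exact tooling my scripts use, and all paths are illustrative):

```shell
#!/bin/sh
# encrypt_backup FILE PUBKEY: hybrid encryption of FILE.
# Produces FILE.enc (AES-256 ciphertext) and FILE.key.enc
# (the symmetric key, RSA-encrypted with the 2048-bit public key).
encrypt_backup() {
    file=$1
    pubkey=$2
    # One-off symmetric key for this file.
    symkey=$(openssl rand -hex 32)
    # Protect the symmetric key with the RSA public key.
    printf '%s' "$symkey" |
        openssl pkeyutl -encrypt -pubin -inkey "$pubkey" > "$file.key.enc"
    # Encrypt the (arbitrarily large) backup file with the symmetric key.
    openssl enc -aes-256-cbc -pbkdf2 -pass "pass:$symkey" \
        -in "$file" -out "$file.enc"
}
```

Only the holder of the private key can recover the symmetric key, so the `.enc` files sitting on S3 are useless to anyone who compromises the bucket alone.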