Managing Automated Image Resizing and Hosting from Within Azure

I’m going to explain a proof of concept I’ve recently completed, intended to automate the process of resizing, hosting, and serving images, taking that responsibility away from your own systems and data centre/web host.

We’ll be using Windows Azure for all of the things:

  1. A Web API Azure web role to proxy the incoming requests, checking blob storage and passing the request on to the resizer if the image isn’t there
  2. A Web API Azure web role to resize and return the image, and to add it to an Azure Service Bus
  3. An Azure Service Bus to hold the event and data for the resized image
  4. A worker role to subscribe to the service bus and upload any images on there into blob storage, ready for the next request

Overall Process

The overall process looks a bit like this:

[Diagram: Azure image resizing conceptual architecture]

  1. The user makes a request for an image including the required dimensions, e.g. images.my-site.blah/400/200/image26.jpg, to the image proxy web role
  2. The image proxy web role tries to retrieve the image at the requested size from blob storage; if it finds it, it returns it; if not, it makes a request through to the image resizing web role (a rough sketch of this logic follows the list)
  3. The image resizing web role retrieves the original size image from blob storage (e.g. image26.jpg), and resizes it to the requested dimensions (e.g., 400×200)
  4. The resized image is returned to the user via the proxy and also added to an Azure Service Bus
  5. The image processing worker role subscribes to the Azure Service Bus, picks up the new image and uploads it to blob storage
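
The proxy role itself isn’t built in this post, but to make the flow concrete, here’s a minimal sketch of what its lookup-then-forward logic might look like. The route, the GetResizedFromBlobStorage helper, and the resizer URL are all placeholder assumptions of mine, not code from this PoC:

// hypothetical proxy action: try blob storage first, else forward to the resizer role
public HttpResponseMessage Get(int width, int height, string filename)
{
    var cached = GetResizedFromBlobStorage(width, height, filename); // assumed helper; null on a miss
    if (cached != null)
        return BuildImageResponse(cached);

    // not resized yet; proxy the request through to the image resizing web role
    using (var client = new HttpClient())
    {
        var url = string.Format(
            "http://<resizer role>/api/Image/Resize?height={0}&width={1}&source={2}",
            height, width, filename);
        var bytes = client.GetByteArrayAsync(url).Result;
        return BuildImageResponse(new MemoryStream(bytes));
    }
}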

First Things First: Prerequisites

To start with you’ll need to:

  1. Set up a Windows Azure account
  2. Have VS2010 or VS2012 and the appropriate Azure Cloud SDK

Fun Things Second: Image Resizing Azure Web Role

We’re going to focus on the resizing web role next.

[Diagram: resizer architecture]

For this you need to set up blob storage by logging into your Azure portal and following these simple steps:

  1. Click “Storage”
  2. Click “New”
  3. Fill in the form
  4. Click “Create”

Note your keys

Click “Manage Access Keys” and note your storage account name and one of the key values in the popup.

It doesn’t matter which key you use; having two means you can regenerate one without killing a process that uses the other. And yes, I’ve regenerated mine since I took the screenshot.

Upload some initial images

We could automate this using the worker role we’re going to write and the service bus it subscribes to, but instead I’m going to show off the nice little Azure Storage Explorer available over on CodePlex.

Go download and install it, then add a new account using the details you’ve just retrieved from your storage account.

And initialise it with a couple of directories: origin and resized.

Then upload one or two base images of a reasonable size (dimension-wise).

CODE TIME

Setting up

Bust open VS and start a new Cloud Services project


Select an MVC4 role


And Web API as the project type


At this point you could carry on, but I prefer to take out the files and directories I don’t think I need (I may be wrong, but this is mild OCD talking…); I delete everything highlighted below:

[Screenshot: the files and folders highlighted for deletion]

If you do this too, don’t forget to remove the references to the configs from global.asax.cs.

Building the Resizer

Create a new file called ImageController in your Controllers directory, using the Empty API controller template.

This is the main action that we’re going to be building up:

[HttpGet]
public HttpResponseMessage Resize(int width, int height, string source)
{
    var imageBytes = GetFromCdn("origin", source);
    var newImageStream = ResizeImage(imageBytes, width, height);
    QueueNewImage(newImageStream, height, width, source);
    return BuildImageResponse(newImageStream);
}

Paste that in and then we’ll crack on with the methods it calls in the order they appear. To get rid of the red highlighting you can let VS create some stubs.

First up:

GetFromCdn

This method makes a connection to your blob storage, connects to a container (in this case “origin”), and pulls down the referenced blob (image), before returning it as a byte array. There is no error handling here, as it’s just a proof of concept!

Feel free to grab the code from GitHub and add in these ever so slightly important bits!

private static byte[] GetFromCdn(string path, string filename)
{
    var connectionString = CloudConfigurationManager.GetSetting("Microsoft.Storage.ConnectionString");
    var account = CloudStorageAccount.Parse(connectionString);
    var cloudBlobClient = account.CreateCloudBlobClient();
    var cloudBlobContainer = cloudBlobClient.GetContainerReference(path);
    var blob = cloudBlobContainer.GetBlockBlobReference(filename);

    var m = new MemoryStream();
    blob.DownloadToStream(m);

    return m.ToArray();
}

This will give you even more red highlighting, and you’ll need to bring in a few new references:

  • System.IO.MemoryStream
  • Microsoft.WindowsAzure.Storage.CloudStorageAccount
  • Microsoft.WindowsAzure.CloudConfigurationManager
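
For reference, the corresponding using directives look something like this (assuming the Azure Storage SDK namespaces listed above; adjust for your SDK version):

using System.IO;                            // MemoryStream
using Microsoft.WindowsAzure;               // CloudConfigurationManager
using Microsoft.WindowsAzure.Storage;       // CloudStorageAccount
using Microsoft.WindowsAzure.Storage.Blob;  // blob client types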

You’ll need to add in the setting value mentioned:

CloudConfigurationManager.GetSetting("Microsoft.Storage.ConnectionString")

These config settings aren’t held in a web.config or app.config; you need to right-click the web role within your cloud project, click Properties, then Settings, and Add Setting. Enter the values below, referring back to the details you previously noted for your storage account:

  • Name: Microsoft.Storage.ConnectionString
  • Value: DefaultEndpointsProtocol=http;AccountName=<your account name here>;AccountKey=<account key here>

ResizeImage

This method does the hard work: it takes a byte array and image dimensions, does the resizing using the awesome ImageResizer, and spews the result back as a MemoryStream:

private static MemoryStream ResizeImage(byte[] downloaded, int width, int height)
{
    var inputStream = new MemoryStream(downloaded);
    var memoryStream = new MemoryStream();

    var settings = string.Format("width={0}&height={1}", width, height);
    var i = new ImageJob(inputStream, memoryStream, new ResizeSettings(settings));
    i.Build();

    return memoryStream;
}

In order to get rid of the red highlighting here you’ll need to nuget ImageResizer; open the VS Package Manager Console and whack in:

Install-Package ImageResizer

And then bring in the missing reference:

  • ImageResizer.ImageJob
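
As an aside, that settings string understands more than just width and height; ImageResizer supports a raft of querystring options (see its docs for the full set). For example, to crop to the exact dimensions instead of scaling, you could tweak the line in ResizeImage to something like:

var settings = string.Format("width={0}&height={1}&mode=crop", width, height);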

QueueNewImage

This method takes the generated image byte array and puts it on an Azure Service Bus instance. As such, we need to go and create a new Azure Service Bus within the Azure portal.

  1. Click “Service Bus”
  2. Click “New”
  3. Fill in the form
  4. Click “Create”

What you’ve done here is set up a namespace to which you’ve assigned a new queue. When using this service bus you’ll connect to the namespace via the connection string, and then, in code, create a queue client connecting to a named queue.

Note your connection details

At the service bus namespaces page, click “Connection Info” and note your ACS connection string in the popup.

Set it up

You’ll need to nuget the Azure Service Bus package, so in your VS Package Manager Console run:

Install-Package WindowsAzure.ServiceBus

And bring in the missing reference:

  • Microsoft.ServiceBus.Messaging

Paste in the following methods:

private static void QueueNewImage(MemoryStream memoryStream, int height, int width, string source)
{
    var img = new ImageData
    {
        Name = source,
        Data = memoryStream.ToArray(),
        Height = height,
        Width = width,
        Timestamp = DateTime.UtcNow
    };
    var message = new BrokeredMessage(img);
    QueueConnector.ImageQueueClient.BeginSend(message, SendComplete, img.Name);
}

private static void SendComplete(IAsyncResult ar)
{
    // complete the async send; this is also the place to log the outcome
    QueueConnector.ImageQueueClient.EndSend(ar);
}

Now we need to define the ImageData and QueueConnector classes. Create these as new class files:

ImageData.cs

public class ImageData
{
    public string Name;
    public byte[] Data;
    public int Height;
    public int Width;
    public DateTime Timestamp;

    public string FormattedName
    {
        get { return string.Format("{0}_{1}-{2}", Height, Width, Name.Replace("/", string.Empty)); }
    }
}
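
For what it’s worth, here’s a hypothetical sketch of how the worker role (coming in the next post) might pull one of these messages back off the queue, assuming a queueClient connected to the same “azureimages” queue:

// hypothetical receive-side sketch; queueClient is a QueueClient for the same queue
var message = queueClient.Receive();
var image = message.GetBody<ImageData>();
// image.FormattedName gives e.g. "200_400-image26.jpg" for a 400x200 request
message.Complete();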

QueueConnector.cs

This class creates a connection to your service bus namespace using a connection string, creates a messaging client for the specified queue, and creates the queue itself if it doesn’t already exist. Setting ConnectivityMode.Http keeps the service bus traffic on standard HTTP ports, which is handy if you’re behind a firewall or proxy.

public static class QueueConnector
{
    public static QueueClient ImageQueueClient;
    public const string QueueName = "azureimages";

    public static void Initialize()
    {
        ServiceBusEnvironment.SystemConnectivity.Mode = ConnectivityMode.Http;
        var connectionString = CloudConfigurationManager.GetSetting("Microsoft.ServiceBus.ConnectionString");
        var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);

        if (!namespaceManager.QueueExists(QueueName))
        {
            namespaceManager.CreateQueue(QueueName);
        }

        var messagingFactory = MessagingFactory.Create(namespaceManager.Address, namespaceManager.Settings.TokenProvider);
        ImageQueueClient = messagingFactory.CreateQueueClient(QueueName);
    }
}

To get rid of the red you’ll need to reference:

  • Microsoft.ServiceBus.Messaging.MessagingFactory
  • Microsoft.ServiceBus.NamespaceManager
  • Microsoft.WindowsAzure.CloudConfigurationManager

As before, there now needs to be a cloud project setting for the following:

CloudConfigurationManager.GetSetting("Microsoft.ServiceBus.ConnectionString");

Right-click your web role within the cloud project, click Properties, then Settings, and Add Setting. Enter the values below, referring back to the details you previously noted for your Service Bus:

  • Name: Microsoft.ServiceBus.ConnectionString
  • Value: Endpoint=sb://<your namespace>.servicebus.windows.net/;SharedSecretIssuer=owner;SharedSecretValue=<your default key>

In order for this initialisation to occur, you need to add a call to it in the global.asax.cs Application_Start method. Add the following line after the various route and filter registrations:

QueueConnector.Initialize();
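
For clarity, Application_Start might end up looking something like this (assuming the stock MVC4 Web API template; yours will differ slightly if you deleted extra config files earlier):

protected void Application_Start()
{
    AreaRegistration.RegisterAllAreas();

    WebApiConfig.Register(GlobalConfiguration.Configuration);
    FilterConfig.RegisterGlobalFilters(GlobalFilters.Filters);
    RouteConfig.RegisterRoutes(RouteTable.Routes);

    // set up the service bus queue client once, at startup
    QueueConnector.Initialize();
}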

Lastly: BuildImageResponse

This method takes the image stream result, creates an HTTP response containing the data and the basic headers, and returns it:

private static HttpResponseMessage BuildImageResponse(MemoryStream memoryStream)
{
    var httpResponseMessage = new HttpResponseMessage { Content = new ByteArrayContent(memoryStream.ToArray()) };
    httpResponseMessage.Content.Headers.ContentType = new MediaTypeHeaderValue("image/jpeg");
    httpResponseMessage.StatusCode = HttpStatusCode.OK;

    return httpResponseMessage;
}

This one requires a reference to

  • System.Net.Http.Headers.MediaTypeHeaderValue
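
Note that the content type is hard-coded to image/jpeg. If you wanted to serve other formats, a quick helper (hypothetical, not part of this PoC) could pick the MIME type from the file extension instead:

private static string GetContentType(string filename)
{
    // hypothetical helper: map file extension to MIME type
    switch (Path.GetExtension(filename).ToLowerInvariant())
    {
        case ".png": return "image/png";
        case ".gif": return "image/gif";
        default: return "image/jpeg";
    }
}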

Running it all

All being well, you can now hit F5 and spin up a locally hosted web role which accesses remotely hosted (Azure) storage and an Azure Service Bus.

To get the action to fire, send off a request to, for example:
http://127.0.0.1/api/Image/Resize?height=200&width=200&source=image1.jpg

You should see something like:

[Screenshot: resized image response]

Change those height and width parameters and you’ll get a shiny new image back each time:

[Screenshot: further resized image responses]

Passing 0 for either dimension parameter means that dimension is ignored; the aspect ratio is preserved and no padding is added.

You’ll also notice that your queue is building up with messages for these lovely new images:

[Screenshot: the queue length increasing in the portal]

Next Up

In the next post on this theme we’ll create a worker role to subscribe to the queue and upload the new images into blob storage.

I hope you’ve enjoyed this post; I certainly am loving working with Azure at the moment. It’s matured so much since I first tackled it several years ago.

The code for this post is available on GitHub; you’ll need to add in your own cloud settings though!

Troubleshooting

If you get the following error when hitting F5:

Not running in a hosted service or the Development Fabric

then be sure to set the cloud project as the startup project in VS.

Chef for Developers: part 4 – WordPress, Backups, & Restoring

I’m continuing with my plan to create a series of articles for learning Chef from a developer perspective.

Part #1 gave an intro to Chef, Chef Solo, Vagrant, and Virtualbox. I also created my first Ubuntu VM running Apache and serving up the default website.

Part #2 got into creating a cookbook of my own, and evolved it whilst introducing PHP into the mix.

Part #3 wired in MySql and refactored things a bit.

WordPress Restore – Attempt #1: Hack It Together

Now that we’ve got a generic LAMP VM, it’s time to evolve it a bit. In this post I’ll cover adding WordPress to your VM via Chef, scripting a backup of your current WordPress site, and finally creating a carbon copy of that backup on your new WordPress VM.

I’m still focussing on using Chef Solo with Vagrant and VirtualBox for the time being; I’m learning to walk before running!

Kicking off

Create a new directory for working in and create a cookbooks subdirectory; you don’t need to prep the directory with a vagrant init, as I’ll add a couple of clever lines at the top of my new Vagrantfile to initialise it straight from a vagrant up.
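
Something like this does the trick (the project directory name is just my example):

[bash]mkdir -p wordpress-restore/cookbooks
cd wordpress-restore
[/bash]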

Installing WordPress

As in the previous articles, just pull down the wordpress recipe from the opscode repo into your cookbooks directory:

[bash]cd cookbooks
git clone https://github.com/opscode-cookbooks/wordpress.git
[/bash]

Looking at the top of the WordPress default.rb file you can see which other cookbooks it depends on:

[bash]include_recipe "apache2"
include_recipe "mysql::server"
include_recipe "mysql::ruby"
include_recipe "php"
include_recipe "php::module_mysql"
include_recipe "apache2::mod_php5"
[/bash]

From the last post we know that MySql also depends on OpenSSL, and MySql::Ruby depends on build-essential. Get both of those into your cookbooks directory, as well as the others mentioned above:

[bash]git clone https://github.com/opscode-cookbooks/apache2.git
git clone https://github.com/opscode-cookbooks/mysql.git
git clone https://github.com/opscode-cookbooks/openssl.git
git clone https://github.com/opscode-cookbooks/build-essential.git
git clone https://github.com/opscode-cookbooks/php.git
[/bash]

Replace the default Vagrantfile with the one below to reference the wordpress cookbook and to configure the database, username, and password for wordpress to use; I’m basing this one on the Vagrantfile from my last post, but I’ve removed everything to do with the “mysite” cookbook:

Vagrantfile

[ruby]Vagrant.configure("2") do |config|
  config.vm.box = "precise32"
  config.vm.box_url = "http://files.vagrantup.com/precise32.box"
  config.vm.network :forwarded_port, guest: 80, host: 8080

  config.vm.provision :shell, :inline => "apt-get clean; apt-get update"

  config.vm.provision :chef_solo do |chef|
    chef.json = {
      "mysql" => {
        "server_root_password" => "myrootpwd",
        "server_repl_password" => "myrootpwd",
        "server_debian_password" => "myrootpwd"
      },
      "wordpress" => {
        "db" => {
          "database" => "wordpress",
          "user" => "wordpress",
          "password" => "mywppassword"
        }
      }
    }

    chef.cookbooks_path = ["cookbooks"]
    chef.add_recipe "wordpress"
  end
end
[/ruby]

The lines

[ruby] config.vm.box = "precise32"
config.vm.box_url = "http://files.vagrantup.com/precise32.box"
[/ruby]

mean you can skip the vagrant init stage as we’re defining the same information here instead.

You don’t need to reference the dependent recipes directly, since the WordPress one already references them.

You also don’t need to disable the default site since the wordpress recipe does this anyway. As such, remove this from the json area:

[ruby] "apache" => {
"default_site_enabled" => false
},
[/ruby]

Note: An issue I’ve found with the current release of the WordPress cookbook

I had to comment out the last line of execution which just displays a message to you saying

[ruby]Navigate to 'http://#{server_fqdn}/wp-admin/install.php' to complete wordpress installation.
[/ruby]

For some reason the method “message” on “log” appears to be invalid. You don’t need it though, so if you get the same problem you can just comment it out yourself for now.

To do this, head to line 116 in cookbooks/wordpress/recipes/default.rb and add a # at the start, e.g.:

[ruby]log "wordpress_install_message" do
  action :nothing
  # message "Navigate to 'http://#{server_fqdn}/wp-admin/install.php' to complete wordpress installation"
end
[/ruby]

Give that a

[bash]vagrant up
[/bash]

Then browse to localhost:8080/wp-admin/install.php and you should see:

[Screenshot: the WordPress install screen on localhost:8080]

From here you could quite happily set up your wordpress site on a local VM, but I’m going to move on to the next phase in my cunning plan.

Restore a WordPress Backup

I’ve previously blogged about backing up a wordpress blog, the output of which was a gzipped tar of the entire wordpress directory and the wordpress database tables. I’m now going to restore it to this VM so that I have a functioning copy of my backed-up blog.

I’d suggest you head over and read the backup post I link to above, or you can just use the resulting script:

backup_blog.sh

[bash]#!/bin/bash

# Set the date format, filename and the directories where your backup files will be placed and which directory will be archived.
NOW=$(date +"%Y-%m-%d-%H%M")
FILE="rposbowordpressrestoredemo.$NOW.tar"
BACKUP_DIR="/home/<user>/_backup"
WWW_DIR="/var/www"

# MySQL database credentials
DB_USER="root"
DB_PASS="myrootpwd"
DB_NAME="wordpress"
DB_FILE="rposbowordpressrestoredemo.$NOW.sql"

# dump the wordpress dbs
mysql -u$DB_USER -p$DB_PASS --skip-column-names -e "select table_name from information_schema.TABLES where TABLE_NAME like 'wp_%';" | xargs mysqldump --add-drop-table -u$DB_USER -p$DB_PASS $DB_NAME > $BACKUP_DIR/$DB_FILE

# archive the website files
tar -cvf $BACKUP_DIR/$FILE $WWW_DIR

# append the db backup to the archive
tar --append --file=$BACKUP_DIR/$FILE $BACKUP_DIR/$DB_FILE

# remove the db backup
rm $BACKUP_DIR/$DB_FILE

# compress the archive
gzip -9 $BACKUP_DIR/$FILE
[/bash]

That results in a gzipped tarball of the entire wordpress directory and the wordpress database dumped to an SQL file, all saved in the directory specified at the top: BACKUP_DIR="/home/<user>/_backup"

First Restore Attempt – HACK-O-RAMA!

For the initial attempt I’m just going to brute-force it, to validate the actual importing and restoring of the backup. The steps are:

  1. copy an archive of the backup over to the VM (or in my case I’ll just set up a shared directory)
  2. uncompress the archive into a temp dir
  3. copy the wordpress files into a website directory
  4. import the mysql dump
  5. update some site-specific items in mysql to enable local browsing

You can skip that last one if you want to just add some HOSTS entries to direct calls to the actual wordpress backed up site over to your VM.
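
For example, an entry like this in /etc/hosts (or C:\Windows\System32\drivers\etc\hosts on Windows) would do it, using a made-up domain here; note you’d still need to include the :8080 port in the browser, given the port forwarding:

[bash]# point the live blog's hostname at the local VM
127.0.0.1    www.myblog.example
[/bash]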

Prerequisite

Create a backup of a wordpress site using the script above (or similar) and download the archive to your host machine.

I’ve actually done this using another little vagrant box with a base wordpress install, so you can create a quick blog to play around with backing up and restoring; the repo is over on github.

For restoring

Since this is the HACK-O-RAMA version, just create a bash script in that same directory called restore_backup.sh into which you’ll be pasting the chunks of code from below to execute the restore.

We can then call this script from the Vagrantfile directly. Haaacckkyyyy…

Exposing the archive to the VM

I’m saving the wordpress archive in a directory called “blog_backup” which is a subdirectory of the project dir on the host machine; I’ll share that directory with the VM using this line somewhere in the Vagrantfile:

[ruby]config.vm.synced_folder "blog_backup/", "/var/blog_backup/"
[/ruby]

If you’re using Vagrant v1 then the syntax would be:

[ruby]config.vm.share_folder "blog", "/var/blog_backup/", "blog_backup/"
[/ruby]

Uncompress the archive into the VM

This can be done using the commands below, pasted into that restore_backup.sh:

[bash]# pull in the backup to a temp dir
mkdir /tmp/restore

# untar and expand it
cd /tmp/restore
tar -zxvf /var/blog_backup/<yoursite>.*.tar.gz
[/bash]

Copy the wordpress files over

[bash]# copy the website files to the wordpress site root
sudo cp -Rf /tmp/restore/var/www/wordpress/* /var/www/wordpress/
[/bash]

Import the MySQL dump

[bash]# import the db
mysql -uroot -p<dbpassword> wordpress < /tmp/restore/home/<user>/_backup/<yoursite>.*.sql
[/bash]

Update some site-specific settings to enable browsing

Running these db updates will allow you to browse both the wordpress blog locally and also the admin pages:

[bash]# set the default site to localhost for testage
mysql -uroot -p<dbpassword> wordpress -e "UPDATE wp_options SET option_value='http://localhost:8080' WHERE wp_options.option_name='siteurl'"
mysql -uroot -p<dbpassword> wordpress -e "UPDATE wp_options SET option_value='http://localhost:8080' WHERE wp_options.option_name='home'"
[/bash]

Note: Pretty Permalinks

If you’re using pretty permalinks (i.e., robinosborne.co.uk/2013/07/02/chef-for-developers/ instead of http://robinosborne.co.uk/?p=1418) then you’ll need to both install the apache2::mod_rewrite recipe and configure your .htaccess to allow mod_rewrite to do its thing. Create the .htaccess below to enable rewrites and save it in the same dir as your restore script.

.htaccess

[bash]<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
[/bash]

restore_backup.sh

[bash]# copy over the .htaccess to support mod_rewrite for pretty permalinks
sudo cp /var/blog_backup/.htaccess /var/www/wordpress/
sudo chmod 644 /var/www/wordpress/.htaccess
[/bash]

Also add this to your Vagrantfile:

[ruby]chef.add_recipe "apache2::mod_rewrite"
[/ruby]

The final set up and scripts

Bringing this all together we now have a backed up wordpress blog, restored and running as a local VM:

[Screenshot: the restored WordPress blog running on the local VM]

The files needed to achieve this feat are:

Backup script

To be saved on your blog host, executed on demand, and the resulting archive file manually downloaded (probably SCPed). I have mine saved in a shared directory – /var/vagrant/blog_backup.sh:

blog_backup.sh

[bash]#!/bin/bash

# Set the date format, filename and the directories where your backup files will be placed and which directory will be archived.
NOW=$(date +"%Y-%m-%d-%H%M")
FILE="rposbowordpressrestoredemo.$NOW.tar"
BACKUP_DIR="/home/vagrant"
WWW_DIR="/var/www"

# MySQL database credentials
DB_USER="root"
DB_PASS="myrootpwd"
DB_NAME="wordpress"
DB_FILE="rposbowordpressrestoredemo.$NOW.sql"

# dump the wordpress dbs
mysql -u$DB_USER -p$DB_PASS --skip-column-names -e "select table_name from information_schema.TABLES where TABLE_NAME like 'wp_%';" | xargs mysqldump --add-drop-table -u$DB_USER -p$DB_PASS $DB_NAME > $BACKUP_DIR/$DB_FILE

# archive the website files
tar -cvf $BACKUP_DIR/$FILE $WWW_DIR

# append the db backup to the archive
tar --append --file=$BACKUP_DIR/$FILE $BACKUP_DIR/$DB_FILE

# remove the db backup
rm $BACKUP_DIR/$DB_FILE

# compress the archive
gzip -9 $BACKUP_DIR/$FILE
[/bash]
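
Once that’s been run on the blog host, pulling the resulting archive down to your host machine might look like this (hypothetical host and timestamp):

[bash]scp user@yourblog.example:/home/vagrant/rposbowordpressrestoredemo.2013-07-02-1200.tar.gz blog_backup/
[/bash]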

Restore script

To be saved in a directory on the host to be shared with the VM, along with your blog archive.

restore_backup.sh

[bash]# pull in the backup, untar and expand it, copy the website files, import the db
mkdir /tmp/restore
cd /tmp/restore
tar -zxvf /var/blog_backup/rposbowordpressrestoredemo.*.tar.gz
sudo cp -Rf /tmp/restore/var/www/wordpress/* /var/www/wordpress/
mysql -uroot -pmyrootpwd wordpress < /tmp/restore/home/vagrant/_backup/rposbowordpressrestoredemo.*.sql

# create the .htaccess to support mod_rewrite for pretty permalinks
sudo cp /var/blog_backup/.htaccess /var/www/wordpress/
sudo chmod 644 /var/www/wordpress/.htaccess

# set the default site to localhost for testage
mysql -uroot -pmyrootpwd wordpress -e "UPDATE wp_options SET option_value='http://localhost:8080' WHERE wp_options.option_name='siteurl'"
mysql -uroot -pmyrootpwd wordpress -e "UPDATE wp_options SET option_value='http://localhost:8080' WHERE wp_options.option_name='home'"
[/bash]

.htaccess

[bash]<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
[/bash]

Vagrantfile

[ruby]Vagrant.configure("2") do |config|
  config.vm.box = "precise32"
  config.vm.box_url = "http://files.vagrantup.com/precise32.box"
  config.vm.network :forwarded_port, guest: 80, host: 8080
  config.vm.synced_folder "blog_backup/", "/var/blog_backup/"

  config.vm.provision :shell, :inline => "apt-get clean; apt-get update"

  config.vm.provision :chef_solo do |chef|
    chef.json = {
      "mysql" => {
        "server_root_password" => "myrootpwd",
        "server_repl_password" => "myrootpwd",
        "server_debian_password" => "myrootpwd"
      },
      "wordpress" => {
        "db" => {
          "database" => "wordpress",
          "user" => "wordpress",
          "password" => "mywppassword"
        }
      }
    }

    chef.cookbooks_path = ["cookbooks"]
    chef.add_recipe "wordpress"
    chef.add_recipe "apache2::mod_rewrite"
  end

  # hacky first attempt at restoring the blog from a script on a share
  config.vm.provision :shell, :path => "blog_backup/restore_backup.sh"
end
[/ruby]

myrootpwd

The password used to set up the MySQL instance; it needs to be consistent across your Vagrantfile and your restore_backup.sh script.

mywppassword

If you can’t remember your current wordpress user’s password, look in the /wp-config.php file in the backed-up archive.

Go get it

I’ve created a fully working setup for your perusal over on github. This repo, combined with the base wordpress install one, will give you a couple of fully functional VMs to play with.

If you pull down the restore repo you’ll just need to run setup_cookbooks.sh to pull down the prerequisite cookbooks, then edit the wordpress default recipe to comment out that damned message line.

Once that’s all done, just run

[bash]vagrant up[/bash]

and watch everything tick over until you get your prompt back. At this point you can open a browser and hit http://localhost:8080/ to see:

[Screenshot: the restored blog from the github repo]

Next up

I’ll be trying to move all of this hacky cleverness into a Chef recipe or two. Stay tuned.