WebPageTest Private Instance: 2021 Edition

Catchpoint's WebPageTest

The fantastic WebPageTest, free to use and public, has supported private instances for many years; I wrote this up a while back, and scripted a Terraform version to make the setup as easy and automated as possible.

For AWS it was just a case of creating an EC2 instance (other installation options are available) from a predefined WPT server AMI (Amazon Machine Image), adding a few configuration options and boom – your very own, autoscaling, globally distributed, website performance testing solution! New test agents would spin up automatically in other AWS regions, all based on WebPageTest Agent AMIs.

In 2020 WebPageTest was bought by Catchpoint and we finally saw improvements being made, pull requests being closed, and the WebPageTest UI getting a huge update; things were looking great for WebPageTest enthusiasts! If you haven't heard of Catchpoint before, they're a company all about global network and web application monitoring, so a good match for WebPageTest.

Unfortunately, however, this resulted in the handy WebPageTest server EC2 AMIs no longer existing. If you want your own private installation you now need to build your own WebPageTest server from a base OS. It can be a bit tricky, though it gives you greater understanding of how it works under the hood, so hopefully you’ll feel more confident extending your installation in future.

In this article, I’ll show you how to create a WebPageTest private instance on AWS from scratch (no AMI), create your own private WebPageTest agents using the latest and greatest version of WebPageTest, and wire it all up.


Aside #4 – Accessing EC2 from my phone via SSH

Whilst I’ve been attempting to learn a new thing each month this year, I’ve been finding it really tricky to keep to the pretty loose schedule. As such, I thought I’d try and note down every time a shiny new thing takes my interest, so that I have some idea why I’m incapable of completing a series of blog posts.

Accessing EC2 from a phone running ConnectBot

The .ppk you use for PuTTY/KiTTY/SSH doesn’t work with ConnectBot, so you need to generate a new key from within the phone (the ConnectBot app) and copy the public half over to your EC2 instance, appending it to your authorized keys file.

Generating the key

Go and install the app from the Google Play store.

Find the app on your phone:
[image: connectbot icon]

Generate a new key from the Manage Keys page:
[image: connectbot generate public key]

Getting your file to your EC2 instance

Firstly, copy the public key to clipboard:
[image: connectbot copy public key]

Then get it off your phone and onto your instance. There are a few ways of doing this; I pasted it into a new text file via my phone’s Dropbox app and then curl-ed/wget-ed that file on to my EC2 instance (obviously logging in from a PC).
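For example, something like this from the EC2 instance (a sketch; the share URL is hypothetical, and the output filename matches the one appended below):

wget -O s3_id_dsa.pub "https://example.com/share/connectbot.pub"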

Adding it to your authorised keys

First I backed the existing authorized_keys file up:

cp ~/.ssh/authorized_keys backup_auth

Then I appended the one generated from my phone:

cat s3_id_dsa.pub >> ~/.ssh/authorized_keys

You can now log in directly from your phone without needing a password:
[image: connectbot logged in to EC2]

Instigator
— needing to restart Apache whilst AFK

Aside #1 – EC2 WordPress Issues

Whilst I’ve been attempting to learn a new thing each month this year, I’ve been finding it really tricky to keep to the pretty loose schedule. As such, I thought I’d try and note down every time a shiny new thing takes my interest, so that I have some idea why I’m incapable of completing a series of blog posts.

EC2 Issues

I’m really having problems with EC2 these days, and I’m constantly being dragged back into finding out what the problem(s) is/are..

Restarts don’t restart

Had to add entries into rc.local to restart httpd and mysql, and also had to change the permissions on the file so that it’s allowed to execute on restart (sudo chmod 6755 /etc/rc.d/rc.local).
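The entries themselves were along these lines (a sketch; the exact service script names depend on your distro):

/etc/init.d/httpd start
/etc/init.d/mysqld start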

Instigator
— because the blog went down a couple of times and I didn’t notice!

Backups kill the site!

WordPress 2 DropBox – killed me. As does mysqldump in general.

DB backup

mysqldump --add-drop-table -u <username> -p <database> | bzip2 -czs > <db backup filename>.bz2
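To restore that dump, something like the reverse works (a sketch):

bunzip2 -c <db backup filename>.bz2 | mysql -u <username> -p <database>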

website backup

tar -cjf <site backup filename>.bz2 /var/www

Instigator
— wordpress kept telling me to upgrade to the latest version, but also had a big MAKE SURE YOU BACKUP FIRST warning; and the WordPress 2 DropBox plugin killed my EC2 microinstance.

WordPress: Alerting on high CPU usage

#!/bin/sh
#
# Script to check CPU usage and tweet me if it's over a threshold
#
# the load-average field position in `uptime` output shifts as uptime grows,
# so split on the "load average:" label instead of counting fields
if [ `uptime | awk -F'load average: ' '{ print $2 }' | cut -d. -f1` -gt 50 ];
then
    # capture the current load for the tweet body before sending it
    uptime > /tmp/load
    sudo python /home/ec2-user/tweepy/ec2Event.py '@rposbo LOAD: ' < /tmp/load
else
    echo "load ok"
fi

This totally doesn’t work! uptime seems to happily report a low load average even when the site is dying. Maybe CPU is low and memory usage is high.. hmm.. may have to change that check.
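Something like this might make for a better check (a sketch using free; it reports used memory as a percentage, and any threshold would be arbitrary):

free | awk '/Mem:/ { printf "%d\n", $3/$2*100 }'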

Instigator
— attempting to backup wordpress keeps breaking my site, dammit!

logstash, graphite, statsd

Currently trying to get those three to work together and help me find out why the EC2 microinstance is so.. uh.. micro..

Instigator
— EC2 just keeps on dying and I need to find out why

Node.js 101: Wrap up

Year of 101s, Part 1 – Node January

Summary – What was it all about?

I set out to spend January learning some node development fundamentals.

Part #1 – Intro

I started with a basic intro to using node – a Hello World – which covered what node.js is, how to create the most basic of all programs, and mentioned some of the development environments.

Part #2 – Serving web content

Second was creating a very simple node web server, which covered using nodemon to develop your node app, the concept of exports, basic request routing, and serving various content types.

Part #3 – A basic API

Next was a simple API implementation, where I proxied calls to the Asos API, returned a remapped subset of the data, reworked the routing to create basic search functionality and a detail page, and touched on passing in command line arguments.

Part #4 – Basic deployment and hosting with Appharbor, Azure, and Heroku

Possibly the most interesting and fun post for me to work on involved deploying the node code on to three cloud hosting solutions where I discovered the oddities each provider has, various solutions to the problems this raises, as well as some debugging cleverness (nice work, Heroku!). The simplicity of a git-remote-push-deploy process is incredible, and really makes quick application development and hosting even more enjoyable!

Part #5 – Packages

Another interesting one was getting to play with node packages, the node package manager (npm), the express web framework, jade templating engine, and stylus css pre-processor, and deploying node apps with packages to cloud hosting.

Part #6 – Web-based development

The final part covered the fantastic Cloud9IDE, including a (very) basic intro to github, and how Cloud9 can still be used in developing and deploying directly to Azure, Appharbor, or Heroku.

What did I get out of it?

I really got into githubbing and OSSing, and really had to try hard not to overstretch myself, as I had started forking repos to try and make a few tweaks to things whilst working on the node month.

It has been extremely inspiring and has opened up so many other random tangents for me to explore in other projects at some other time. Very motivating stuff.

I’ve now got a month of half-decent blog posts – I had only intended to do a total of 4 posts, but including this one I’ve done 7, since I kept adding more information as it turned up and needed to split a few posts into two.

Also, I’ve learned a bit about blogging; drafting posts well in advance allowed me to build up the details as I discovered more whilst working on subsequent posts. For example, how AppHarbor and Azure initially track master but can be configured to track different branches; likewise, debugging with Heroku only came up whilst working with packages.

Link list

Node tutorials and references

Setting up a node development environment on Windows
Node Beginner – a great article, and I’ve also bought the associated eBooks.
nodejs.org – the official node site, the only place to go for reference

Understanding Javascript better

Execution in The Kingdom of Nouns
Object Orientation and Inheritance in Javascript

Appharbor

Appharbor and git

Heroku

Heroku toolbelt download and reference
node on Heroku

Azure

Check out what Azure can do!

February – coming up, Samsung Smart TV App Development!

Yeah, seriously. How random is that?.. 🙂

Node.js 101 : Part #4 – Basic Deployment and Hosting with Azure, Heroku, and AppHarbor

Following on from my recent post about doing something this year, I’m committing to doing 12 months of “101”s; posts and projects themed at beginning something new (or reasonably new) to me.

January is all about node: I started with a basic intro, then cracked open a basic web server with content-type manipulation and basic routing, and the last one was a basic API implementation.

Appharbor, Azure, and Heroku

Being a bit of a cocky git I said on twitter at the weekend:

It’s not quite that easy, but it’s actually not far off!

Deployment & Hosting Options

These are not the only options, just three that I’m aware of and have previously had a play with. A prerequisite for each of these – for the purposes of this post – is using git for version control, since AppHarbor, Azure, and Heroku support git hooks and remotes; this essentially means you can push your changes directly to your host, which will automatically deploy them (if pre-checks pass).

I’ll be using the set of files from my previous API post for this one, except that I need to change the facility for passing the API key in as a command line arg to instead take it from a querystring parameter.

The initial files are the same as in the last post and can be grabbed from github.

Those changes are:

app.js (removed lines about getting value from command line):

[js]var server = require("./server"),
router = require("./router"),
requestHandlers = require("./requestHandlers");

// only handling GETs at the moment
var handle = {}
handle["favicon.ico"] = requestHandlers.favicon;
handle["product"] = requestHandlers.product;
handle["products"] = requestHandlers.products;

var port = process.env.PORT || 3000;
server.start(router.route, handle, port);[/js]

server.js (added in querystring param usage):

[js highlight="7"]var http = require("http"),
url = require("url");

function start(route, handle, port) {
function onRequest(request, response) {
var pathname = url.parse(request.url).pathname;
var apiKey = url.parse(request.url, true).query.key;
route(handle, pathname, response, apiKey);
}

http.createServer(onRequest).listen(port);
console.log("Server has started listening on port " + port);
}

exports.start = start;[/js]

The “.query” property returns the querystring as an object (when you pass “true” as the second argument to url.parse), which means I can get the parameter “key” by using “.key” instead of something like [“key”].
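A quick illustration (the URL here is hypothetical):

[js]var url = require("url");
var query = url.parse("/products?key=abc123", true).query;
console.log(query.key); // "abc123"[/js]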

Ideal scenario

In the perfect world all I’d need to do is something like:
[code]git add .
git commit -m "initial node stuff"
git push {azure/appharbor/heroku/whatever} master
…..
done
…..
new site deployed to blahblah.websitey.net
…..
have a lovely day
[/code]
and I could pop off for a cup of earl grey.

In order to get to that point there were a few steps I needed to take for each of the three hosts.

Appharbor

[image: appharbor home]

Getting started

First things first; go and sign up for a free account with AppHarbor.

Then set up a new application in order to be given your git remote endpoint to push to.

I’ve previously had a play with Appharbor, but this is the first time I’m using it for more than just a freebie host.

Configuring

It’s not quite as simple as I would have liked; there are a couple of things to bear in mind. Although AppHarbor supports node deployments, they are primarily a .Net hosting service and use Windows hosting environments (even though they’re on EC2 as opposed to Azure). Running node within IIS means you need to supply a web.config file and give it some IIS-specific info.

The config file I had to use is:
[xml highlight="3,9"]<configuration>
<system.web>
<compilation batch="false" />
</system.web>
<system.webServer>
<handlers>
<add name="iisnode" path="app.js" verb="*" modules="iisnode" />
</handlers>
<iisnode loggingEnabled="false" />

<rewrite>
<rules>
<rule name="myapp">
<match url="/*" />
<action type="Rewrite" url="app.js" />
</rule>
</rules>
</rewrite>
</system.webServer>
</configuration>[/xml]

Most of that should be pretty straightforward (redirect all calls to app.js), but notice the lines about compilation and logging; the account under which the AppHarbor deployment process runs for node projects doesn’t have access to the filesystem, so it can’t create anything in a “temp” dir (precompilation) nor write any log files upon errors. As such, you need to disable both.

You could also enable file system access and disable precompilation within your application’s settings – as far as I can tell, it does the same thing.

[image: appharbor settings]

Deploying

Commit that web.config to your repo, add a remote for AppHarbor, then push to it. Any branch other than master, default, or trunk needs a manual deploy instead of it happening automatically, but you can specify the branch name to track within your AppHarbor application settings; I put in the branch name “appharbor” that I’ve been developing against, and it automatically deploys when I push that branch or master, but not any others.
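That sequence is just (a sketch; the remote URL comes from your AppHarbor application page):

[code]git add web.config
git commit -m "IIS config for node"
git remote add appharbor <your appharbor repo url>
git push appharbor master[/code]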

You’ll see your dashboard updates and deploys (automatic deployment if it’s a tracked branch):

[image: appharbor deploy dashboard]

And then you can browse to your app:

[image: appharbor deploy result]

Azure

[image: azure home]

Getting started

Again, first step is to go and sign up for Azure – you can get a free trial, and if you only want to host up to 10 small websites then it’s completely free.

You’ll need to set up a new Azure website in order to be given your git remote endpoint to push to.

Configuring

This is pretty similar to the AppHarbor process in that Azure Websites sit on Windows and IIS, so you need to define a web.config to set up IIS for node. The same web.config works as for AppHarbor.

Deploying

Although you can push to AppHarbor from any branch and it will only deploy automatically from the specific tracked branch, you can’t choose to manually deploy from within Azure, so you either need to use [code]git push azure {branch}:master[/code] (assuming your remote is called “azure”) or you can define your tracked branch in the configuration section:

[image: azure settings]

Following a successful push your dashboard updates and deploys:

[image: azure deploy dashboard]

And then your app is browsable:

[image: azure deploy result]

Heroku

[image: heroku home]

Getting started

Sign up for a free account.

Configuring

Heroku isn’t Windows-based, as it’s aimed at hosting Ruby, Node.js, Clojure, Java, Python, and Scala. What this means for our node deployment is that we don’t need a web.config to get the application running on Heroku. It’s still running on Amazon’s EC2 as far as I can tell, though.

However, we do need to jump through several other strange hoops:

Procfile

The Procfile is a list of the “process types in an application. Each process type is a declaration of a command that is executed when a process of that process type is executed.” These can be arbitrarily named, except for the “web” one, which handles HTTP traffic.

For node, this Procfile needs to be:

Procfile:
[code]web: node app.js[/code]

Should I want to pass in command line arguments, as in the previous version of my basic node API code, I could do it in this file i.e. [code]web: node app.js mYAp1K3Y[/code]

Deploying

Heroku Toolbelt

There’s a command line tool which you need to install in order to use Heroku, called the Toolbelt; this is the Heroku client, which allows you to do a lot of powerful things from the command line including scaling up and down, and starting and stopping your application.

Instead of adding heroku as a git remote yourself, you need to open a command line in your project’s directory and run [code]heroku login[/code] and then [code]heroku create[/code]
Your application space will now have been created within Heroku automatically (no need to log in and create one first), as well as your git remote; this will have the default name of “heroku”.

Deploying code is still the same as before [code]git push heroku master[/code]

In Heroku you do need to commit to master to have your code built and deployed, and I couldn’t find anywhere to specify a different tracking branch.

Before that we need to create the last required file:
package.json:
[js]{
"name": "rposbo-basic-node-hosting-options",
"author": "Robin Osborne",
"description": "the node.js files used in my blog post about a basic node api being hosted in various places (github, azure, heroku)",
"version": "0.0.1",
"engines": {
"node": "0.8.x",
"npm": "1.1.x"
}
}[/js]

This file is used by npm (node package manager) to install the module dependencies for your application, e.g. express, jade, stylus. Even though our basic API project has no specific dependencies, the file is still required by Heroku in order to define the version of node and npm to use (otherwise your application simply isn’t recognised as a node.js app).

Something to consider is that Heroku doesn’t necessarily have the same version of node installed as you might; I defined 0.8.16 and received an error upon deployment which listed the available versions (the highest at the time of writing being 0.8.14). I decided to define my required version as “0.8.x” (any version with major 0, minor 8).

However, if you define a version of node in the 0.8.x series you must also define the version of npm. A known issue, apparently. Not only that, it needs to be specifically “1.1.x”.

Add these settings into the “engines” section of the package.json file, git add, git commit, and git push to see your dashboard updated:

[image: heroku deploy dashboard]

And then your app – with a quite random URL! – is available:

[image: heroku deploy result]

If you have problems pushing due to your existing public keys not existing within heroku, run the following to import them [code]heroku keys:add[/code]

You can also scale up and down your number of instances using the Heroku client: [code]heroku ps:scale web=1[/code]

Debugging

The Heroku Toolbelt is a really useful client to have; you can check your logs with [code]heroku logs[/code] and you can even leave a trace session open using [code]heroku logs --tail[/code], which is amazing for debugging problems.

The error codes you encounter are all listed on the heroku site as is all of the information on using the Heroku Toolbelt logging facility.

A quick one: if you see the error “H14”, then although your deployment may have worked it hasn’t automatically kicked off a web role – you can see this where it says “dyno=” instead of “dyno=web.1”; you just need to run the following command to start one up: [code]heroku ps:scale web=1[/code]

Also – make sure you’ve created a Procfile (with a capitalised “P”) and that it contains [code]web: node app.js[/code]

Summary

Ok, so we can now easily deploy and host our API. The files that I’ve been working with throughout this post are on github; everything has been merged into master (both heroku files and web.config) so it can be deployed to any of these hosts.

There are also separate branches for Azure/Appharbor and Heroku should you want to check the different files in isolation.

Next Up

Node packages!

Creating a SublimeText plugin to publish markdown to WordPress

I’ve been using SublimeText for a while now as a no-frills text editor; it has a nice Zen mode which hides all of the tabs and menus and pushes it to full screen which I’ve found perfect for taking notes during seminars:

[image: SublimeText2 Zen mode]

I have been trying to take notes using the markdown syntax, but have found the process of taking the notes in markdown, converting them to HTML, and publishing the HTML to a blog a bit of a pain.

Given that SublimeText supports extension via plugin scripts written in Python, I’ve knocked together an extension which will:

  1. convert the markdown to html
  2. push the html directly to wordpress

Before I get into the details of the script, maybe a little background would be useful:

What’s Markdown?

A lightweight markup language for writing human-readable content which can be compiled into HTML. Being able to write a lovely HTML blog post in a very basic text editor instead of the bloated Windows Live Writer is extremely refreshing.

For example, to make a line a <h1> header, prefix it with a #. For <h2>, use ##. For <h3> ###. Easy.

Want a link? Try surrounding the link text with square brackets and putting the url just after it in round brackets: [Something like this](which links here). It’s a really easy “language” and actually looks decent enough to just read on its own.

To include an image you just need to write ![alt text](image url)
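Putting those together, a complete snippet might look like this (the URLs are hypothetical):

# My post title
Some text with [a link](https://example.com) and an image:
![screenshot](https://example.com/screenshot.png)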

A syntax reference page can be found here.

One problem I do have is that the basic support for code highlighting within markdown is poor – obviously, since HTML only has a single <code> element. The Markdown-generated HTML conflicts with my wordpress Syntaxhighlighter plugin, hence the ugly code blocks in this post.

There is support for “fenced code blocks” within many implementations of markdown, including the library that I’m using, but this is still not playing nicely with Syntaxhighlighter. I’m sure I’ll figure something out eventually – please bear with me until then.

I use the fantastic python-markdown2 library to convert markdown to HTML on the fly; I just had to copy the lib/markdown2.py file from the github repo into the sublimetext “plugins” directory – I created a Markdown subdirectory for it – and reference it from my script, i.e.:

post_content = str(markdown2.markdown(post_content,extras=["code-friendly"]))

What’s Python?


A scripting language with strong functional leanings. I found it really quite tricky to work with and had to install PythonShell as a powershell-ish window for testing out python commands before putting them to work in the plugin script. What’s not great fun is that SublimeText2 bundles a different version of Python from PythonShell, from version 1 of SublimeText, and from any version you explicitly install yourself.

Some commands and syntax will work in one version of python and not in others.

One great example is setting up a proxy; the syntax for this is completely different between versions of python, and whatever example code you’re reading might be using syntax that your version doesn’t support.
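For instance, the Python 2 flavour (which SublimeText 2 bundles) looks something like this – a sketch, with a hypothetical proxy address:

import urllib2
# route http requests through the given proxy for all subsequent urlopen calls
proxy = urllib2.ProxyHandler({"http": "http://proxy.example.com:8080"})
opener = urllib2.build_opener(proxy)
urllib2.install_opener(opener)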

Working in that more functional style took a lot of getting used to; I’d want to return an object from a method but instead ended up returning a comma-delimited tuple of base types. It’s all a little odd to me, and my messy code shows that!

What’s WordPress?


A popular free blog engine that’s also used as a CMS by some small businesses. It has an XML-RPC API which conforms to the MetaWeblog spec, and you can really dig into this by looking through the php files that make up a WordPress installation.

Since I’m using Amazon EC2 as my wordpress host, I can use KiTTY to SSH in and browse to /var/www/html/xmlrpc.php to see the metaweblog service api itself. That file references wp-includes/class-wp-xmlrpc-server.php, which contains all of the underlying functionality for the api implementation. This is great reference material for making sure you’re passing the correct values of the correct types to the correct endpoints.

Details

The sublimemarkpress plugin is intended to:

  1. check the contents of a config file for your blog access details
  2. scan the contents of the active window to get the post’s details, such as post ID (to make it an Update instead of a Create), tags, and status
  3. convert the text from Markdown to HTML
  4. push the HTML to the metaweblog endpoint with the correct blog/post details

1. Getting the Blog setup details from a config file

The plugin relies on a settings file called “sublimemarkpress.sublime-settings” with the structure:

{
    "xmlrpcurl": <URL to xml rpc endpoint>,
    "username": <username>,
    "password": <password>
}

to read it:

s = sublime.load_settings("sublimemarkpress.sublime-settings")
mbURL = s.get("xmlrpcurl")
mbUsername = s.get("username")
mbPassword = s.get("password")

2. Get the text and strip out the blog post data

The plugin expects the top of your text file/active window to have optional tags to define blog post details:

<!-- 
#post_id:<id of existing post - optional>
#tags:<comma delimited list of post tags - optional>
#status:<draft or publish - optional>
-->

To get the entire contents of the active window:

all_lines_in_page = self.view.lines(sublime.Region(0, self.view.size()))

Then to extract the header details:

post_id, tags, status, has_header_content = self.GetHeaderContent(all_lines_in_page, header_lines)

where GetHeaderContent is the hack-y:

def GetHeaderContent(self, all_lines_in_page, header_lines):
    page_info = {"has_header_content":False,"post_id":None, "tags":"", "status":""}

    if self.view.substr(all_lines_in_page[0]).startswith("<!--"):
        page_info["has_header_content"] = True
        self.MoveCurrentLineToHeader(header_lines, all_lines_in_page)

        # post_id
        if self.view.substr(all_lines_in_page[0]).startswith("#post_id"):
            page_info["post_id"] = self.view.substr(all_lines_in_page[0]).split(":")[1]
            self.MoveCurrentLineToHeader(header_lines, all_lines_in_page)

        #post tags
        if self.view.substr(all_lines_in_page[0]).startswith("#tags"):
            page_info["tags"] = self.view.substr(all_lines_in_page[0]).split(":")[1]
            self.MoveCurrentLineToHeader(header_lines, all_lines_in_page)

        #post status
        if self.view.substr(all_lines_in_page[0]).startswith("#status"):
            page_info["status"] = self.view.substr(all_lines_in_page[0]).split(":")[1]
            self.MoveCurrentLineToHeader(header_lines, all_lines_in_page)

        self.MoveCurrentLineToHeader(header_lines, all_lines_in_page) # removes the closing comment tag
    return page_info["post_id"],page_info["tags"],page_info["status"],page_info["has_header_content"]

def MoveCurrentLineToHeader(self, header_lines, all_lines_in_page):
        header_lines.insert(len(header_lines),all_lines_in_page[0])
        all_lines_in_page.remove(all_lines_in_page[0])

3. Convert the rest from Markdown to HTML

As mentioned earlier, thanks to the great python-markdown2 library, this is simply a case of calling the “markdown” method, passing in the content to process:

post_content = str(markdown2.markdown(post_content,extras=["code-friendly"]))

I actually do a test to see if the markdown library can be imported first, then if it fails I don’t even try to convert from markdown to HTML:

can_markdown = False
try: 
    import markdown2 # markdown
    can_markdown = True
except ImportError:
    can_markdown = False

4. Post to metaweblog api

Build up the request using the blog details, post details, and HTML content:

content = self.BuildPostContent(self.view, {"content": post_content, "title": title, "tags": tags, "status": status})

def BuildPostContent(self, view, page_data):        
    return {"description": page_data["content"], "post_content": page_data["content"], "title": page_data["title"], "mt_keywords": page_data["tags"], "post_status": page_data["status"]}

Then submit it to the api:

proxy = xmlrpclib.ServerProxy(mbURL)

if post_id is None:
    post_id = proxy.metaWeblog.newPost(blog_id, mbUsername, mbPassword, content)
else:
    proxy.metaWeblog.editPost(post_id, mbUsername, mbPassword, content)

Extra bits and making it all work

To execute a command from a plugin within SublimeText, firstly you need to import the sublimetext library at the top:

import sublime, sublime_plugin # sublime

then name your class with a “Command” suffix and have it inherit from “sublime_plugin.TextCommand”:

class PublishCommand(sublime_plugin.TextCommand):

Then to run it you need to hit ctrl+` to bring up the console and type:

view.run_command("<name of the class minus the 'Command' suffix>")

e.g. for my plugin:

view.run_command("publish")

You can try this out with loads of the plugins included in a SublimeText install.

That’s about it

All of this is in a small github repo. I’d appreciate it if you want to fork the repo and help show me how python should actually be structured. And written.

Known Issues

Categories

Categories need to be requested from the blog, looped through and matched against those specified on the post, created if they don’t already exist, and then their IDs associated with the post. Couldn’t be arsed.

As such, when you post to your blog you’ll see the entry is “uncategorised” and you’ll have to manually edit this.
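If I ever do get around to it, the flow would look something like this (a sketch: metaWeblog.getCategories and wp.newCategory are part of the WordPress XML-RPC API, but the matching logic and the post_categories list are hypothetical):

existing = proxy.metaWeblog.getCategories(blog_id, mbUsername, mbPassword)
names = [c["categoryName"] for c in existing]
for wanted in post_categories: # hypothetical list parsed from the header tags
    if wanted not in names:
        proxy.wp.newCategory(blog_id, mbUsername, mbPassword, {"name": wanted})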

Images

Since I’m just fooling around with a text editor, uploading images is still a little out of scope.

Random reference material

  • pyblog – for helping to work out the guts of how python interacts with the metaweblog api
  • I used Fiddler & Windows Live Writer for investigating traffic and finding out what the actual parameter being used to specify “tags” is within the metaweblog api: mt_keywords

EC2 WinSCP /var/www file upload

Since I use an Amazon EC2 microinstance to host this blog, and I noticed I had no favicon appearing (which is a bad thing), I thought I might as well make one and pop it up.

I took my usual avatar (Jiraiya, the “Pervy Sage” from Naruto) and just let someone else do the hard work for me, uploading it to FavICO.com instead of even bothering to download an app to do it.

Then I opened up WinSCP, used my EC2 .ppk to authenticate with my EC2 instance, took that favicon.ico file and tried to upload it to /var/www/html (the default website root if you install wordpress), only to receive the error:

[image: ec2 /var/www upload permission error]

Hmmm.

If you check the permissions on that folder (“stat /var/www” via SSH) you’ll see that it’s owned by the “root” group; since you’re logging on as ec2-user, you’re not a member of that group.
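Typical (trimmed) output looks something like this sketch:

$ stat /var/www
  File: '/var/www'
Access: (0755/drwxr-xr-x)  Uid: (0/root)  Gid: (0/root)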

Your options, according to the internet, appear to involve changing the ownership or permissions of /var/www itself. The solution I found was much easier:

  • Upload the favicon to the ec2-user home directory via WinSCP
  • Move (mv) the favicon from the ec2-user directory to /var/www via SSH using “sudo” to get the necessary permissions
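In shell terms, that second step is just (paths assume the default WordPress root):

sudo mv ~/favicon.ico /var/www/html/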

Easy.

AppHarbor, Heroku, Git, and the Sweet, Sweet CI Process

The background: I thought that my Mobile TFL Bus Countdown site might be suddenly very popular for a very short time (about a weekend, perhaps) and didn’t want to pay for the potential sudden jolt in hosting costs on my own servers. As such, I developed it locally using git as VCS, pushed it to my newly acquired AppHarbor account, and just saw it suddenly available to browse at rposbo.apphb.com

The pitch: for your own small website/app you probably edit it locally on your PC; maybe you even have source control, like a good dev; you’ll compile the code and then copy it to your hosting provider, probably using FTP, a web interface, SCP, or SSH.

Then at work you’re probably shouting about how awesome CI builds are and how to introduce continuous deployment as part of a branching and build strategy.

You might even use Azure or EC2 at work, maybe for your own little home projects too. Maybe you’ve learned a bit of git but your office uses TFS (ugh) or SVN (meh).

So why not do this for your own stuff? For free? In the cloud?

Imagine the ideal workflow: make some code changes –> commit them to (D)VCS –> push them to a (remote) repo –> the push kicks off a build of the committed project (git hook) –> run any associated tests, then if they pass –> deploy the app to the cloud.

That’s exactly what Appharbor and Heroku do! Let’s start with the pretty one:

Heroku

Heroku says it’s a “cloud application platform” for running scalable Ruby, Node.js, Clojure, and Java sites/apps. To create and deploy a new site is, apparently, as easy as:

[code gutter="off"]$ heroku create
Created sushi.herokuapp.com | git@heroku.com:sushi.git

$ git push heroku master
-----> Heroku receiving push
-----> Rails app detected
-----> Compiled slug size is 8.0MB
-----> Launching... done, v1
http://sushi.herokuapp.com deployed to Heroku[/code]

So here the flow is: write some code –> commit to git –> push to Heroku –> code is built –> code is deployed. Done.

[image: heroku homepage]

The Heroku website is fantastically full of all the information you’d want to get started, and their pictorial representation of how their solution works and the various levels of databases you can buy are geek-awesome:

[image: heroku databases]

“This app needs a BAKU DATABASE!! GRRAARRRR!!” Go and have a look and bask in the beautiful piccies and animations. No wonder this is (apparently) the place to go to write and deploy cloud hosted Facebook apps.

Thanks to Heroku I’m finally being pushed to learn Ruby, but I haven’t managed anything quite yet, hence no demo of the Heroku flow – wait a few more posts and I’ll have something Ruby-fied, and certainly some Node.js as I’ve been meaning to get into that for a while; possibly even Clojure (sounds fun) and Java (old school!).

Next up is one for the .net crowd:

AppHarbor

AppHarbor sells itself as “Azure done right”, which confused me. The website itself is verrrry low on information, so I just assumed it would deploy my app to Azure. Turns out I was wrong:

[image: appharbor chat on twitter]

Despite my being pedantic over their homepage tagline, I took the dive and signed up. Only once you’ve done this do you get to see the money shot – the intro video: a new MVC app going from Visual Studio to the EC2 cloud via git + AppHarbor in a matter of minutes.

Now that I have my account and have watched the great intro vid, I just hop into my code directory:

[code gutter="off"]git init
git add .
git commit -m "init"
git remote add appharbor <git repo url appharbor gave me>
git push appharbor master[/code]

And that’s it. Committed code is checked out on their servers and built, any associated tests are executed, and if everything passes then it gets deployed – and you can see all this from your AppHarbor account:

[image: appharbor deployment]

(mine didn’t actually have anything to build, as it was a single html page and that really basic asmx web proxy I wrote).

In conclusion: you now have absolutely no excuse not to write and deploy whatever applications you feel like writing. There is no hosting to worry about, no build server – it just works. Use AppHarbor for your .Net and use Heroku as an excuse to look at their pretty pictures and learn something that’s not .Net.

I know I will.

Comments appreciated.