Push your code onto your EC2 instance with Git

Here is a little thing that has saved me a lot of work, and which I find quite neat.

I use EC2 as my cloud computer. This page (lovholm.net) is hosted on a shared host, but sometimes it’s nice to have the flexibility to run things that shared hosts don’t support, such as custom software and all that jazz. When I try out different web-things that are not only front-end or PHP, I usually create a subdomain and place the code on an EC2 instance after initial development on localhost.

Since I now use different environments for development, I have found this neat way of pushing code from my local computer to the server. It is a lot easier than using FTP, and more flexible than deploying to Heroku (which is – by the way – a brilliant, super-easy way of hosting and deploying).

The idea is very similar to the one Heroku uses, except that the setup is more complex and Heroku’s polished terminal tool is lacking. On the other hand, it gives you greater flexibility to run your own instance, and it may also be more affordable if you plan to turn your code into a business with a lot of traffic.

Don’t know what Git is, or don’t have Git installed? GitHub has some great documentation on installation, and the Git website explains what Git is.

Setting up the Git-repositories

Locally, I initialise a Git repository in the folder I am working in.

local $ cd the-amazing-test-project
local $ git init

Then I use SSH to connect to my cloud computer, where I have a folder for remote Git repositories. (I store these in a sibling folder to the root folders of my projects.)

remote $ cd git-repos
remote $ mkdir testrepo.git
remote $ cd testrepo.git
remote $ git init --bare

When this is done, we need to add a remote pointing from the local repository to the server, and create a post-update hook in the remote repository. This will let you push the code from the local Git repo onto the server, and from there unpack the code and do other neat things like logging and restarting servers (if necessary).

The cloud instance as remote repository

So now you have a local repository on which you work, commit and track your local files, and you have a bare repository in the cloud. Let’s make the bridge.

local $ git remote add web ssh://ec2-user@yourwebserver.cloud.org/home/www/git-repos/testrepo.git

Create an empty file in the local directory, use

git add .

to add this file to the repository, then commit the changes:

git commit -am "Initial test"

Now you can push the repository to the cloud using:

git push web master

If you get an error message, make sure that you have added the key used to connect to the remote host to your SSH agent. If not, you can use ssh-add to do this: call ssh-add with the key as the only argument.
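For example, assuming the private key for your instance is stored at $HOME/.ssh/aws-key (the path here is just an assumption; use wherever you keep your key):

local $ ssh-add $HOME/.ssh/aws-key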

If you still experience problems, enter the local repository with your terminal and change into the .git directory, where you will find a file named config. The content of the file should be something similar to this:

[core]
	repositoryformatversion = 0
	filemode = true
	bare = false
	logallrefupdates = true
	ignorecase = true
	precomposeunicode = false
[remote "web"]
	url = ssh://ec2-user@yourwebserver.cloud.org/home/www/git-repos/testrepo.git
	fetch = +refs/heads/*:refs/remotes/web/*
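You can also verify that the remote is registered correctly from the command line:

local $ git remote -v

This should list the web remote with the SSH URL above, once for fetch and once for push.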

Add a hook for deployment

Once you have a connection between the local and remote repositories, we need a way for the remote repository to deploy our code, not just keep it safe and sound. Connect through SSH to the terminal of your remote computer and change into the git-repos directory and your testrepo.git folder.

Within this folder you should have a folder named hooks, and it is here we are going to add the last pieces of code to get our system up and running. We will utilise the server-side hook post-update to deploy our code and do other tasks as needed. The post-update hook is a shell script which Git executes after receiving a push. Add the following code to it:

#!/bin/sh
GIT_WORK_TREE=/home/www/amazing_web git checkout -f

echo "the_amazing_web_project checkin at: <$(date)> " >> /home/www/logs/updates_log

The second line in this code excerpt will check out the latest code into the work tree (when adapting this snippet, make sure the path after the equals sign exists), and the fourth line will append the name of the project and the date to a logfile.
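One detail that is easy to miss: Git only runs hooks that are executable, so remember to set the executable bit on the script after saving it.

remote $ chmod +x hooks/post-update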

You can now push code to your remote server easily, and if you need to make changes like restarting the web server, you can add the commands to the post-update script.
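If you want the hook to restart the web server as well, here is a minimal sketch assuming the standard Amazon Linux AMI running Apache (httpd); swap in the restart command for whatever server you actually run:

#!/bin/sh
GIT_WORK_TREE=/home/www/amazing_web git checkout -f

echo "the_amazing_web_project checkin at: <$(date)> " >> /home/www/logs/updates_log

# Assumes Apache on Amazon Linux; the ec2-user there has passwordless sudo by default.
sudo service httpd restart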

How to set up an AWS EC2 instance

Amazon Web Services’ EC2 is the workhorse and general-purpose computer in the AWS ecosystem. It is affordable (free or very cheap if you keep it scaled to the minimum option), it is easy to set up, and it provides you with a computer located in the cloud. This is a good option if you have content you would like to be accessible everywhere, and especially if you develop solutions you want to be accessible through the Internet.

Here are two examples where a cloud computer may come in handy:

1) If you have ever tried to host services on a local machine through a bread-and-butter Internet Service Provider, you will be delighted to be allocated a persistent IP address. You can also decide which ports you want open or shut, bypassing ISP policies such as blocking inbound port 80 (the default HTTP port).

2) If you have ever used a shared host for hosting websites, you may have been met with software that doesn’t work. This can be because your scripts, plug-ins or other software use languages not supported by the host; for example, many general-purpose hosts do not provide support for Python and Ruby. Another problem is libraries and other dependencies. If you want access to more advanced photo editing through e.g. ImageMagick, then your fate is in the hands of the provider and whatever stack/software/libraries/dependencies they wish to provide. You are also stuck with using an FTP client, as many shared hosting providers do not grant you SSH access to their servers.

Signing up for AWS access

Setup is fairly easy. First you need to go to the AWS website and sign in or sign up with an Amazon account. As part of the registration Amazon will validate your phone number by calling you with an automated voice service to give you a verification code, so keep your phone with you as you start the registration process. Follow through the web form to register an Amazon account, then an AWS account.

Creating the EC2 Instance

[Image: AWS services overview. AWS has many services, but for this blog post we stick to EC2.]

Once registered and validated, you should have access to the Management Console. If not, you find it by first entering the AWS front page, then at the top of the window selecting “My account/Console”, then “AWS Management Console”. This should bring up a new view with all the services that AWS provides (don’t be dazzled; there is a lot of information, but luckily one can survive using just a couple of them). Select the EC2 option in the orange section named “Compute & Networking”. This should bring up another view. Before continuing, go to the top right corner of the screen (to the right of your name) and select the region that is closest to you. (More on regions)

[Image: The EC2 view. From here you can create and administer your EC2 instances, and services related to them.]

In the EC2 view you can easily set up and administer your instances. One thing to be aware of is that once the instance is created, some settings persist with it, and these can only be changed by creating a new instance. This can cost you many hours of extra work if you make many changes to your instance, and it could also make it impossible to access the machine (if you lose your encryption keys and passwords).

There are three things we need to pay attention to at this point: what kind of instance we want, how we should handle authentication, and how we deal with persistent storage.

Choosing an AMI

AMI is the abbreviation for Amazon Machine Image. These are the images with the virtual appliances used to instantiate the EC2 instance (which is basically a virtual machine). You can choose between free public AMIs and commercial images you have to pay for, or you can even create an image yourself (the last is out of scope for this post). For this tutorial we are going to use a commonly used image: the Amazon Linux AMI.

From the EC2 view, click the new instance button, then choose the classic view for selecting an AMI. As you can see, there are several options within Linux and Windows. The ones marked with a star have a free tier if used with the micro instance, so no need to worry about the costs yet.

Creating the keys

To connect to the EC2 instance you will need SSH keys which can identify you. There are two ways of creating these: either create the key pair locally and share the public key with the EC2 instance, or let Amazon create the key pair and download the private key for the instance. I recommend creating the keys locally, as you then have both the private and the public key. Amazon only lets you download the private key once, which may cause problems if you lose it. It is also an advantage to have both keys in case you use two or more computers to communicate with your EC2 instance.

To create an SSH key, open your terminal and enter:

ssh-keygen -t rsa -C "your@email.example"

A tutorial is also provided by GitHub.

Upload the public key when you assign keys during the EC2 setup.
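If you prefer the command line and have the AWS CLI installed, you can also import the public key directly (the key name and the key path here are assumptions):

local $ aws ec2 import-key-pair --key-name my-ec2-key --public-key-material fileb://$HOME/.ssh/id_rsa.pub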

Getting an IP-address

When your instance is up and running, you get an access point address where you can find and connect to your instance. I would however recommend that you get an IP address for this point. Not only is it easier and shorter, it also gives you a way of abstracting your virtual machine from the address. Amazon provides you with an Elastic IP, and with this you can change which EC2 instance the IP address points to. This makes it easy to, for example, start up a new and more powerful instance and then quickly point the IP address at the new instance, instead of letting the user endure the longer time a virtual reboot would take. At some point you will want to map a DNS address to the Elastic IP, so that e.g. sub.domain.com or domain.com points to your instance; then it is nice to just have to reconfigure the Elastic IP, and not have to change the DNS record when you want to change instances.

Test your instance

The instance should now be up and running. Try to connect to it using SSH. The login username may change from AMI to AMI: if your AMI is Ubuntu, the username is usually ubuntu; if you use the standard Amazon Linux AMI, the username is ec2-user.

ssh -v ec2-user@123.123.123.12

Changing the security settings

[Image: The security group settings for the EC2 instance.]

If you are not able to log in, it may be that you need to open the firewall to your instance. When you created your instance, you assigned it a security group.

If you have any problems connecting to a service using a specific port, you should check here to ensure that the traffic is not being blocked at the firewall. For SSH you will need to open port 22.
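If you prefer doing this from the command line and have the AWS CLI installed, a rule opening port 22 for SSH can be added like this (the group name is an assumption; use the one assigned to your instance):

local $ aws ec2 authorize-security-group-ingress --group-name my-security-group --protocol tcp --port 22 --cidr 0.0.0.0/0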

Creating an alias for easier access

This is not a part of the setup process, but it’s a neat little trick for making access easier. Instead of remembering the whole address when you use SSH to connect to your instance, make an alias in your .bashrc or .bash_profile.

If you connect by writing ssh -i $HOME/keys/aws ec2-user@123.123.123.12 (note that ssh -i expects the private key, not the .pub file), put the string below into your .bash_profile and source it.

alias my_ec2_connect='ssh -v -i $HOME/keys/aws ec2-user@123.123.123.12'
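After sourcing the file, the alias is available in your shell:

local $ source ~/.bash_profile
local $ my_ec2_connect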

(Disclaimer to point two: lovholm.net is in fact hosted on a shared host by one.com. WordPress is very convenient to host at providers such as one.com, since it is built on the ubiquitous PHP+MySQL web stack, and shared hosts are generally affordable and even easier to set up than dedicated servers. one.com also has a good admin dashboard from which you can do some simple rerouting – more on this in a later post.)

The image is from Flickr and is provided under a CC licence. The image is associated with cloud computing and IBM, not Amazon.

Enter the Cloud – Amazon Web Services

AWS – Amazon Web Services – is a great cloud infrastructure service for all IT businesses, from start-ups to full-blown corporations. It provides both Infrastructure-as-a-Service (IaaS) through the general-purpose EC2 and Platform-as-a-Service (PaaS) through Beanstalk, as well as hybrid solutions such as the storage service S3 and the database service RDS.

I have been using these services for about half a year now, and I am convinced that these services, as well as competing equivalents (e.g. Microsoft Azure), are good tools for everyone who wants to experiment with web technology or create web businesses. This is not a new thing: cloud computing has been around for years, and many web services run on Amazon services today. But by looking into them myself, I came to understand how I can benefit from them in my work with the web and in new experiments.

S3

Amazon Simple Storage Service provides affordable web storage. In this specific market, similar and more easily accessible services have emerged, e.g. Dropbox – one of the best things since sliced bread. The good thing about S3 is the tight integration with the other Amazon services; it is also good for hosting static web pages (pages that only need to be served, without initial computation). Some FTP clients also include S3 support, so you can use S3 as you would regularly use FTP for file storage. Another advantage is that you can choose to have an S3 bucket (the entity in which you place your files on S3) distributed to all of Amazon’s data centres (for an additional cost). This is good if you need to transfer files to end users fast, as there is less potential for bottlenecks when using a local data centre (by local I here mean the same continent).
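As a small sketch of the workflow, assuming you have the AWS CLI installed (the bucket name my-bucket and the local folder name are placeholders):

local $ aws s3 mb s3://my-bucket
local $ aws s3 sync ./site s3://my-bucket --acl public-read

The first command creates a bucket, and the second uploads the contents of a local folder and makes the files publicly readable.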

EC2

Amazon Elastic Compute Cloud is the real game changer. While most of the other services are specialised, EC2 is the general-purpose machine. An EC2 instance is a virtual computer located in the cloud where you pay for the computing hours you use, a standardised unit being approximately: “CPU capacity of a 1.0-1.2 GHz 2007 Opteron or 2007 Xeon processor.” Since the payment is by the hour and the units are scalable, it can be very flexible to your needs. The service enables you to make your hosting scalable both by scaling up and by scaling out.

Scaling up: The instances come in several packages, from the modest and almost free micro instance to extra-large distributed platforms. You can change the processor performance and dedicated RAM of your instance; changing the instance type just requires a reboot, and you can transfer your persistent storage onto the new virtual machine.

Scaling out: If you create an image with all your server variables, binaries, libraries and the other stuff you need, or a method for loading this from a remote computer, you can initialise new instances with this data in just a couple of minutes. And since you pay by the hour, you can e.g. have four computers running at times with high load and one during times with low load. The coolest feature is that you can measure traffic and launch new servers when the service is running at lower performance than wanted.

Elastic IP

The URLs of the instances are normally quite arcane and difficult to remember. That is why this service is quite good. Elastic IP is simply a bridge between an IPv4 address that you get allocated and the instance. Above I described how you could reboot an instance, but the downside is that while the instance is restarting, your server would be down. To avoid this, you could create a clone of your instance #1 with the higher capacity settings, and when this is loaded, point the IP at the new instance #2 and then turn off instance #1. (Well, I guess you would lose the users’ sessions.)
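With the AWS CLI installed, the switch itself is a single command. The instance ID and address below are placeholders, and depending on whether the address lives in EC2-Classic or a VPC you may need --allocation-id instead of --public-ip:

local $ aws ec2 associate-address --instance-id i-0123456789abcdef0 --public-ip 123.123.123.12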

Do you want to set up an EC2 instance? (More about how)

RDS

RDS is the Relational Database Service. You need this if you want to handle persistent data in a relational manner (and you’re not hacking user data into self-composed formatted files, or using a NoSQL solution) and think life is too short to set up and configure an SQL database on the general-purpose EC2 service.

[Image: When you create a new DB instance you can easily choose the type. If you use Oracle, you can bring your own licence or rent one by the hour.]

For RDS you can choose between (ex-Swedish, now Oracle-owned) MySQL, Microsoft SQL Server or Oracle (also Oracle, *d’oh*). The setup is very easy: choose the size of the database, the backup retention period and other variables, then start the instance. Check in the security settings that the ports are open and that the DB accepts traffic from your IP address. Download MySQL Workbench (MySQL) or SQL Developer (Oracle) (sorry, not sure about a good tool for MS SQL Server), insert the endpoint URL, username and password, and get started.
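You can of course also connect from the terminal. A minimal sketch, assuming a MySQL instance and the mysql client installed locally (the endpoint is a placeholder; copy your own from the RDS console):

local $ mysql -h mydb.abcdefghijkl.eu-west-1.rds.amazonaws.com -P 3306 -u myuser -p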

Other services

[Image: The AWS services panel. The AWS selection is huge, but luckily you won’t need to know every service available to get started.]



This picture shows the AWS panel from which you administer your services, and as you can see there are several more services to explore. Many of these are specialised, e.g. to handle MapReduce or to deploy specific web stacks easily, but with S3, RDS and EC2 it should be easy to get started.