Web development blog by Philipp Rieber
The existing application setup is a common one:
In the past, every programmer at our office was more or less a one-man show, mastering all of the processes mentioned above. But our applications have grown up:
So over time we grew concerned about our single-server setup: a single point of failure is not reliable, and we could not scale in either direction. Furthermore, programmers need to get back to their real work instead of struggling with backup scripts, updating server software or monitoring the monitoring process.
As we had been using Amazon's S3 storage since 2007, and some of us had already explored the Amazon cloud services added in recent years out of curiosity, there was little debate about what could improve our setup:
Route 53 will save us from tedious email communication and idle time whenever DNS is involved. Switching an A record to another IP or setting up a subdomain? A matter of minutes. We gain full control and flexibility without the risk of being offline because our DNS contact at the other end of the world is still asleep.
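Such a record switch is a single API call. A minimal sketch with the AWS CLI, assuming a hypothetical hosted zone ID, domain and IP:

```shell
# Point www.example.com at a new IP (zone ID, domain and address are placeholders).
# UPSERT creates the record if it is missing, or updates it in place.
aws route53 change-resource-record-sets \
  --hosted-zone-id Z1EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "www.example.com.",
        "Type": "A",
        "TTL": 300,
        "ResourceRecords": [{"Value": "203.0.113.10"}]
      }
    }]
  }'
```

The low TTL keeps the switch-over window short; Route 53 typically propagates such a change within a minute.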
S3 will act as a kind of shared file server and solve the handling of user uploads across the members of the server farm. All files are stored independently of the load-balanced, auto-scaled web servers and are accessible through a well-designed API.
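Pushing an upload to the shared store is then a one-liner from any web server. A sketch with the AWS CLI, assuming a hypothetical bucket and file name:

```shell
# Copy a user upload from the local web server into the shared bucket
# (bucket and paths are placeholders)
aws s3 cp web/uploads/avatar-42.png s3://example-app-uploads/avatars/avatar-42.png

# Or mirror a whole upload directory into the bucket
aws s3 sync web/uploads/ s3://example-app-uploads/
```

In the application itself the same operations go through the S3 API rather than the CLI, but the model is identical: every instance reads and writes the same bucket instead of its local disk.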
SES will replace the need for a mail server. As our application only sends out emails, we will just have to configure the provided SMTP endpoint.
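Before SES accepts outgoing mail, the sender address (or domain) has to be verified. A sketch, assuming a hypothetical sender address:

```shell
# SES requires verifying the "From" address before it will relay mail for it
# (address is a placeholder); AWS then sends a confirmation link to that mailbox
aws ses verify-email-identity --email-address noreply@example.com
```

After verification, the application simply speaks plain SMTP (with TLS) to the regional endpoint, e.g. email-smtp.us-east-1.amazonaws.com on port 587, using SMTP credentials generated in the AWS console.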
RDS will replace the need for a MySQL server. Security patches are installed automatically, point-in-time recovery is available for the last few weeks, and database replication across availability zones comes built in. We will just have to configure new connection settings and move the existing database to the cloud. Beyond that we do not lift a finger, apart from archiving our obsolete backup script.
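Moving the existing database can be as simple as piping a dump into the RDS endpoint. A sketch, assuming hypothetical hostnames, credentials and database name:

```shell
# Dump the current database and load it straight into the RDS instance
# (endpoint, user and database name are placeholders)
mysqldump --single-transaction -h localhost -u app -p"$OLD_PW" appdb \
  | mysql -h appdb.abc123xyz.eu-west-1.rds.amazonaws.com -u app -p"$NEW_PW" appdb
```

The --single-transaction flag lets InnoDB tables be dumped consistently without locking the live application; afterwards only the connection settings in the application configuration need to change.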
EC2 is the heart of the new setup, since this is where our application code runs. A template server instance with Ubuntu, Apache and PHP will be used to set up a load-balanced server cluster. By configuring traffic and workload triggers, we will be able to scale the number of server instances up and down automatically. As they are all based on the same template instance, they all share the same configuration. Application code will have to be installed at boot time. Deploying new application code will be the trickiest part.
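The boot-time installation is typically done with an EC2 user-data script that every freshly launched instance runs once. A minimal sketch, assuming a hypothetical Git URL for the application code and Ubuntu-era php5 packages:

```shell
#!/bin/bash
# Hypothetical user-data script: each auto-scaled instance configures itself at boot.
set -e

apt-get update
apt-get install -y apache2 php5 libapache2-mod-php5 php5-mysql git

# Fetch the current application release (repository URL is a placeholder)
git clone https://git.example.com/our-app.git /var/www/app

service apache2 restart
```

Because every instance bootstraps from the same template and the same script, the cluster stays homogeneous; the open question this leaves, as noted above, is how to roll out new code to instances that are already running.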
ElastiCache, a memcached-compatible key-value store, may be used later to hold the sessions. For now we will move the sessions to the database.
On top of everything, we can take advantage of a lot of security features. For instance, RDS and the EC2 instances can easily be restricted to be accessible only from our own network or from the application server cluster, respectively.
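These restrictions are expressed as security-group rules. A sketch of locking the database down to the web tier, assuming hypothetical group IDs for a VPC-based setup:

```shell
# Allow MySQL (port 3306) on the database security group only from instances
# that belong to the web servers' security group (group IDs are placeholders)
aws ec2 authorize-security-group-ingress \
  --group-id sg-0db000000example \
  --protocol tcp --port 3306 \
  --source-group sg-0web00000example
```

Referencing a source group instead of fixed IPs is what makes this work with auto scaling: instances come and go, but membership in the web servers' group is enough to reach the database.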
While thinking about how to move our prototype application to the cloud, it turned out that we do not need to move everything at once. It should be possible to hook the services into the production system step by step.
In the next parts of this series I will describe step by step how we modified and moved our first Symfony2 application to the Amazon cloud:
Although we had gained a lot of AWS knowledge by reading the documentation and implementing several proofs of concept, we still wanted to be on the safe side before switching to production. That is why we decided to benefit from the AWS experience of tecRacer - our cloud system administrators.
Philipp Rieber is a passionate and certified PHP and Symfony developer working at Paymill in Munich, Germany. He also works on the frontend, in the cloud, on mobile, in DevOps, and as a technical writer. He never misses the PHP User Group Munich and co-organizes the Symfony User Group Munich.
Liked this post? Follow me on Twitter for updates.