Web development blog by Philipp Rieber
Moving to the cloud usually means moving to a scalable multi-server environment with a load balancer in front. The load balancer directs a user to an available web server instance of the cluster. Imagine a form with an image file upload somewhere in your application, allowing a user to publish an avatar on their profile page. Handling the uploaded file the old way would mean storing it on the current web server’s file system. But how could this file be accessed by the other web servers of the cluster, e.g. to display the avatar in the user’s public profile to visitors who have been directed to another instance? Moreover, what happens if we want to scale down our multi-server environment – meaning that we may need to shut down a web server that stores uploaded images?

One possible solution would be to set up an additional file server for this purpose that is not part of the scaling cluster. All web servers could then access uploaded files at the same central location. But there are several drawbacks to this setup: First, it means setting up and maintaining another server with a different configuration. Second, it creates a single point of failure: if our single file server fails, the whole application is affected – and for the sake of simplicity, mirroring the file server is not an option… S3 to the rescue!
Amazon S3 (Simple Storage Service) is a cloud storage service: it allows you to store files on Amazon’s servers. Built on one of the most reliable infrastructures on the internet, and with features like permissions and REST/SOAP APIs on top, it should perfectly suit our needs for centralized file storage. There are some nice S3 management tools which allow file handling similar to all the good old FTP clients, e.g. S3 Browser or CloudBerry. But remember the file upload in our application: we will need to handle files programmatically. The S3 documentation describes all the low-level commands for file operations: how to upload or download files, how to set up permissions or how to list your existing files. If you do not want to reinvent the wheel, you should use the appropriate software development kit (SDK), which abstracts away all the tedious work of crafting RESTful URLs and looking up parameters. Amazon provides SDKs for all major languages, e.g. the AWS SDK for PHP.
But we didn’t want to tie our application completely to S3 by spreading SDK code everywhere, so we went one step further and used the fabulous Gaufrette library and its Symfony2 bundle. Gaufrette provides a filesystem abstraction layer that allows us to handle files independently of the underlying storage system. In other words: our application will use Gaufrette commands to save and retrieve files without knowing the underlying filesystem: local, FTP, S3, database, Dropbox, APC …
Every installation of our application can be configured separately as to which filesystem to use. This allows us to keep working with the local filesystem in our development, staging or test environments without being charged by Amazon – and we can still develop offline.
Installing the AWS SDK for PHP is as simple as adding a new line to your dependency configuration:
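Assuming a Composer-based setup, the entry in composer.json might look like this – the version constraint is illustrative:

```json
{
    "require": {
        "amazonwebservices/aws-sdk-for-php": "1.5.*"
    }
}
```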
Then update your vendors:
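With Composer (assuming a local composer.phar), that would be:

```shell
php composer.phar update
```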
Next, register the AmazonS3 class from the AWS SDK as a service. Usually you’ll have to pass an options array with the S3 credentials as constructor arguments:
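A minimal service definition could look like this – the service id acme.amazon_s3 and the parameter names are assumptions:

```yaml
# app/config/config.yml
services:
    acme.amazon_s3:
        class: AmazonS3
        arguments:
            - { key: "%aws_key%", secret: "%aws_secret_key%" }
```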
Add the credentials to your parameters file:
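Assuming the default app/config/parameters.yml, the new entries could be:

```yaml
parameters:
    aws_key:        YOUR_AWS_ACCESS_KEY
    aws_secret_key: YOUR_AWS_SECRET_KEY
```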
Now you could grab the service from the container and just use it:
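A sketch using the v1 SDK’s create_object() method – the service id, bucket name and file paths are illustrative:

```php
// Fetch the service registered above from the container.
$s3 = $container->get('acme.amazon_s3');

// Upload a file to a bucket and make it publicly readable.
$s3->create_object('my-bucket', 'avatars/user-1.png', array(
    'body' => file_get_contents('/tmp/avatar.png'),
    'acl'  => AmazonS3::ACL_PUBLIC,
));
```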
As mentioned above, we can get rid of this dependency on the S3 service by using Gaufrette. Install the Gaufrette Symfony2 bundle by adding another line to your dependency configuration:
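Again assuming Composer, with the bundle’s package as published by KnpLabs (the version constraint is illustrative):

```json
{
    "require": {
        "knplabs/knp-gaufrette-bundle": "*"
    }
}
```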
Update your vendors again:
Register the bundle in the kernel:
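In a standard Symfony2 project this happens in app/AppKernel.php:

```php
// app/AppKernel.php
public function registerBundles()
{
    $bundles = array(
        // ... your other bundles ...
        new Knp\Bundle\GaufretteBundle\KnpGaufretteBundle(),
    );

    return $bundles;
}
```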
The container configuration of the Gaufrette bundle allows you to register different virtual filesystems in a so-called filesystem map. Every filesystem requires an adapter that defines and configures the underlying real storage. In our example we register a filesystem named uploads and make its adapter configurable:
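The filesystem map could look like this – the adapter is read from a container parameter so it can be switched per installation:

```yaml
# app/config/config.yml
knp_gaufrette:
    filesystems:
        uploads:
            adapter: "%uploads_adapter%"
```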
Using a filesystem is easy. Just retrieve the desired filesystem from the knp_gaufrette.filesystem_map service and use its self-explanatory methods to handle files:
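A minimal usage sketch – the filesystem name uploads matches the configuration above, the file names are illustrative:

```php
// Get the 'uploads' filesystem from the filesystem map.
$filesystem = $container->get('knp_gaufrette.filesystem_map')->get('uploads');

// Write, check, read and delete a file - regardless of the storage backend.
$filesystem->write('avatars/user-1.png', $content, true); // true = overwrite
if ($filesystem->has('avatars/user-1.png')) {
    $content = $filesystem->read('avatars/user-1.png');
}
$filesystem->delete('avatars/user-1.png');
```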
But how to define and switch the adapters? This also happens in the container configuration:
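A possible adapter configuration – the directory, service id, bucket name and region are assumptions to adapt to your setup:

```yaml
# app/config/config.yml
knp_gaufrette:
    adapters:
        uploads_local:
            local:
                directory: "%kernel.root_dir%/../web/uploads"
        uploads_s3:
            amazon_s3:
                amazon_s3_id: acme.amazon_s3
                bucket_name:  my-bucket
                options:
                    region: eu-west-1
```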
The first adapter, uploads_local, defines a simple local filesystem with only one parameter: the directory where the files should be stored. The second adapter, uploads_s3, defines what we were waiting for: S3 storage. We have to specify the service key of the AmazonS3 service configured above, and additionally pass a bucket name and the appropriate AWS region.
Now we can complete our configuration:
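The switch itself can live in app/config/parameters.yml, using the parameter name assumed above:

```yaml
parameters:
    uploads_adapter: uploads_s3
```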
By switching the uploads_adapter value between uploads_local and uploads_s3 we can now toggle between local file storage and S3 storage.
Philipp Rieber is a passionate and certified PHP and Symfony developer working at Paymill in Munich, Germany. He is also engaged in frontend development, the Cloud, mobile, DevOps and technical writing. He never misses the PHP User Group Munich and co-organizes the Symfony User Group Munich.
Liked this post? Follow me on Twitter for updates.