#49 - Static Sites using AWS S3, CloudFront, and Route 53 (2/5)

Links, Code, and Transcript


In this episode, we will create an S3 bucket to store static website content. We will review the AWS management console at a high level, talk about S3 bucket permissions, and walk through manual S3 file uploads.

Review Architecture Diagram

Before we dive in though, it probably makes sense to quickly review the architecture diagram, just so that we are both on the same page in terms of what we are trying to do in this episode. Last episode, we looked at two example static hosting architectures. First, a full CloudFront solution, with CloudFront sitting in front of our S3 bucket, serving all user requests. Then, there was a bit of a hybrid solution, using both S3 and CloudFront at the same time, where the HTML pages come off S3, and bulky assets, things like stylesheets, JavaScript, and images, are served from CloudFront.

In both cases, these S3 buckets act as storage areas for our website content, and from there it is sent out to the end user, either via CloudFront, or served directly from the S3 bucket. The S3 bucket is basically the foundation for the entire static hosting environment, and this is what we are going to create today. So, with that in mind, let's jump over to the Amazon Web Services management console.

AWS Management Console

Once you have signed up and logged into AWS, you will see an administration console like this. As you can see, there are lots of options for compute, where you can easily start virtual machines of all shapes and sizes. Storage solutions, things like S3 and CloudFront. All types of database and caching options. Administration and monitoring, deployment and workflow tools, along with many different application type services, all on pay as you go pricing.

It can actually be a little overwhelming, in that you have just so many options, and the learning curve is pretty steep. Each one of these links will bring you to a dedicated management console for that particular service, so there is just so much to learn and explore in here. As an icebreaker, I wanted to pick a simple problem, like hosting a static website. So, to build out our static hosting environment, we are going to use a mix of S3 for storage, CloudFront for speedy content delivery, and Route 53 for DNS resolution.

S3 Management Console

Okay, so let's click into S3, and pull up the management console. The interface is actually really clean and simple to use. On the left hand panel here, there will be a list of S3 buckets assigned to your account, and the right hand side will show details about the selected bucket, things like when the bucket was created, and various settings which can be changed. This will all become obvious in a minute when we create our first bucket.

Creating an S3 Bucket

Since we want to host a website called websiteinthecloud.com, and we are going to store the contents in S3, let's go and create an S3 bucket for it. A wizard pops up, and we are asked to fill out two fields: what we want to call the bucket, and in what geographic region we want this bucket to be located.

I think of these S3 buckets as directories or folders; you just create them, put files in there, then you can assign all types of settings and permissions. Actually, they have a pretty good description of what a bucket is here: a bucket is a container for objects stored in S3. They also mention that you can choose the region where your bucket is stored. What does this mean? Well, AWS has data centres all over the world, and you can choose where the bucket is physically stored. Say, for example, that the majority of your customers are in Tokyo; it might make sense to store the data close to them, in the Tokyo region, as access times will likely be much quicker, and data transfer fees lower. Or, say that you are dealing with data that is required by law to stay within your home country; you can comply with these laws by picking the correct region. But, I am just going to choose US Standard.

For the bucket name, let's call it websiteinthecloud.com. The name really is not all that important, as in most cases no one will ever see it, but you should call it something you can easily recognize. The reason you can give this bucket an arbitrary name is that we are going to use Route 53 in a later episode to map our websiteinthecloud.com domain name to this S3 bucket, so the name is hidden away behind the scenes. If you wanted to log access requests for this S3 bucket, you could follow the wizard, but since I do not really care about that for this example, we can just create the bucket.
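
By the way, if you would rather script this step than click through the console, here is a minimal sketch using the AWS SDK for Python (boto3); the bucket name is the one from this episode, and the commented-out Tokyo block just illustrates the region option we talked about.

import boto3

s3 = boto3.client("s3")

# US Standard (us-east-1) is the default region, so no location constraint is needed
s3.create_bucket(Bucket="websiteinthecloud.com")

# If most of your users were in Tokyo, you could pin the bucket there instead:
# s3.create_bucket(
#     Bucket="websiteinthecloud.com",
#     CreateBucketConfiguration={"LocationConstraint": "ap-northeast-1"},
# )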

Bucket Navigation

Okay, so over on the left hand side here, we have our newly created websiteinthecloud.com bucket. Then, on the right hand side here, we have metadata about this bucket, and a whole bunch of expandable settings. You can click into this bucket, just like you would a folder on your desktop, or something similar. You can see here that the bucket websiteinthecloud.com is empty, obviously because we just created it. To go back, you can just click this little breadcrumb here, and it takes us back to the S3 console. Sorry if this is super basic stuff; I just wanted to start from the ground up, trying not to assume too much AWS knowledge.

We need to change a couple of settings associated with this bucket, mainly because we want to serve static content out of it, so let's do that now. You can get to the bucket properties menu by either clicking this icon, or right clicking the bucket name and selecting properties.

Over here, we see the bucket name, various metadata associated with the bucket, things like region, created date, and who owns it. Then down here, there is a bunch of expandable options, for things like bucket permissions, static hosting, logging, versioning, lifecycle, etc. The AWS documentation is fantastic, so if you have any questions about what these features do, just expand the option, and click the documentation links.

Granting Anonymous S3 Access

By default, the bucket and anything in it is only accessible by your account. But, since we are going to serve a public website from it, we need to enable public access. Let's expand this permissions drop down here. Now, you could enable public access to each file in the bucket, on a file by file basis, but that can be extremely cumbersome and time consuming. An easier method is just to add a blanket policy, which grants anonymous read access to all objects in the bucket. We can do that by clicking add bucket policy, then adding the policy into the text box here.

To see what some example policies look like, you can click this helpful sample bucket policies documentation link, down here. There are many great examples in here, and it really helps to show you what is possible, things like granting anonymous access, restricting access to specific IP addresses, access based on an HTTP header, only allowing access from CloudFront, etc. There is also a great overview of how to grant anonymous read access in the static hosting guide, along with some additional links if needed. All of these links are in the episode notes below. But, we are just going to copy this block, and then paste it over on the S3 management console tab.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::example-bucket/*"]
    }
  ]
}

This policy grants anonymous read access to our bucket. This Version date relates to the version of the policy language we want to use. You cannot enter an arbitrary date in here; it is kind of like saying that we want to use a particular version of the policy language, so that AWS knows what these attributes down here mean, as they are pinned to that policy version.

We grant public read access, allowing all GetObject requests, then we specify the bucket down here. You can see the example bucket name here, then the star indicates all files in that bucket. So, we just need to change this example bucket name to our websiteinthecloud.com bucket, the one we created earlier over here. Then let's save it. We can verify that it actually worked by clicking edit. Great, so that is how you grant anonymous access. This might be a little long winded, but I did not want there to be any voodoo happening behind the scenes, and you might come up with other policy use cases too, so this is useful knowledge to have.
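
Purely for reference again, the same policy can be applied outside the console too; a rough boto3 sketch might look like this, with the example bucket name swapped over to ours, just like we did in the text box.

import json

import boto3

bucket = "websiteinthecloud.com"

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "PublicReadGetObject",
        "Effect": "Allow",
        "Principal": "*",
        "Action": ["s3:GetObject"],
        "Resource": ["arn:aws:s3:::" + bucket + "/*"],
    }],
}

s3 = boto3.client("s3")
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))

# Read the policy back to verify, much like clicking edit in the console
print(s3.get_bucket_policy(Bucket=bucket)["Policy"])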

Configure Static Website Hosting

Okay, so now that we have the bucket created, and anonymous access configured, let's enable static website hosting. It says here that once you turn on static hosting, this endpoint link will take you to your bucket's website. Right now, it is serving a 404 error message, saying that no such website has been configured. What is interesting about all this is that once we enable static hosting, you can actually give out this link to people; say you just wanted to share some large files in the bucket, you do not even need to map a custom domain to it.

Okay, so let's turn on static hosting. Down here, we have a couple of radio buttons; currently static hosting is disabled, so let's enable it. In here, you are asked to provide the index document, basically the default file to show someone when they visit this bucket. This is typically an index.html file. Then we need an error page to show people; let's opt for a 404.html page. Finally, let's save the changes.
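
For completeness, the same index and error document settings can be applied with boto3 as well; this sketch assumes the same index.html and 404.html names we just typed into the console.

import boto3

s3 = boto3.client("s3")
s3.put_bucket_website(
    Bucket="websiteinthecloud.com",
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},  # default page served at the bucket root
        "ErrorDocument": {"Key": "404.html"},       # custom error page for missing keys
    },
)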

Let's scroll up here, so we can view that bucket endpoint link again. Let's open this up in a new tab. We are still seeing an error, but it is different, in that instead of a no such website has been configured error, we see this no such key error. You can see that it says no such key exists, then the key name is index.html, so it is basically complaining that we do not have an index.html file in our bucket to serve as the default page. Down here, comically, S3 says there was an error while attempting to retrieve our custom 404.html error page; again, this is because it does not exist in the bucket.

I thought it made sense to work through these errors, so that if you miss something while getting this deployed, you have a pretty good idea of where to look. So, let's go and upload our index.html and 404.html files, which will fix these issues.

Uploading an Example Website

Back in the S3 management console, if we click on the bucket name, then we can upload our example website files. You have a couple of options for uploading files through the GUI here: you can right click and select upload, or you can use the Upload button here. A dialog box opens up, where you can select files to upload. You can drag and drop files onto the dialog, use this add files button, or use command line tools. I will actually show you a command line tool, called s3_website, in the next episode, as it really streamlines the process if you have hundreds, or thousands, of files you want to upload. But, for now, let's click this add files button, so that you can see how this works the manual way.

I have created an example index.html file, along with a 404.html file. They are super simple for now, but this should give you an idea of how it works. Just to show you what this upload wizard looks like, we can click through to the set details page. This is where you can set the redundancy and encryption requirements for files, basically how many times they are replicated, or if they should be encrypted server side. This is useful in tweaking your billing profile, or satisfying business requirements.

Next, you can set the permissions associated with these files, but because we have a blanket policy to allow anonymous access on this bucket, we do not need to worry about this.

Finally, you can set custom HTTP headers. For example, maybe you wanted to set cache control, or a specific content type; you could define that here. We will also look at how this can be set via the command line, using s3_website, in the next episode.
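
If you wanted to script these manual upload steps, boto3 can set the same details and headers in one call; a rough sketch, assuming the two example files sit in the current directory, and the cache control value is just an illustration:

import boto3

s3 = boto3.client("s3")
bucket = "websiteinthecloud.com"

for filename in ["index.html", "404.html"]:
    s3.upload_file(
        filename, bucket, filename,
        ExtraArgs={
            "ContentType": "text/html",            # custom content type header
            "CacheControl": "max-age=300",         # example cache control header
            "StorageClass": "REDUCED_REDUNDANCY",  # the redundancy setting from the details page
        },
    )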

Okay, so that is it; let's click start upload. There will be a progress bar as your files are uploaded, then you can see your completed transfers. Let's slide this over, so that we can see the metadata associated with these files. Actually, let's just close this so we get a better look.

Great, so our two files are uploaded. You can see their storage class, reduced redundancy or standard, their size, and timestamp information too. Pretty standard stuff. You can imagine how labour intensive this would be if you wanted to upload lots of files, or create many directories. So, you will definitely want to use some type of upload tool if you are deploying content all the time, as it will ease the workload.

Testing the Static Website

Okay, so let's head back to the bucket properties, and check out that static hosting endpoint link again. Let's just open this link in a new tab here, and you can see my simple index.html page. So, maybe you have a prototype you want to share with someone; you could just give them this link. Or maybe you have large files you want to transfer around; there are all sorts of use cases where you might not want to, or have the need to, map a real domain to this S3 bucket. We can also test the error page, by going to something that does not exist, and watching for our custom error page. Easy enough, right?
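
You could also check both pages from a script rather than the browser; here is a small sketch using only the Python standard library. It assumes the usual website endpoint format for a US Standard bucket, but the exact endpoint is the one shown in the console.

import urllib.error
import urllib.request

# The exact endpoint is listed under static website hosting in the console;
# this is the typical format for a bucket in US Standard (us-east-1).
endpoint = "http://websiteinthecloud.com.s3-website-us-east-1.amazonaws.com"

def fetch(path):
    try:
        with urllib.request.urlopen(endpoint + path) as resp:
            return resp.status, resp.read()[:80]
    except urllib.error.HTTPError as err:
        return err.code, err.read()[:80]

print(fetch("/"))              # expect 200 and the index.html content
print(fetch("/missing.html"))  # expect 404 and the custom 404.html content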

Wrapping Up

So, that is static hosting with S3 in a nutshell. Nothing too crazy, or intimidating, in here. The one really cool thing about AWS is that there is almost no limit to what you can do in terms of scale. If you wanted to upload 3 TB of data today, there is nothing stopping you, except the bill. This is especially cool if you are a small company, in that you can start really small, and scale as you grow, all without swapping providers. I know this looks extremely simple, but as you will see throughout this series, we can build blazing fast, and highly scalable, static websites using exactly this method.

If we head back to our architecture diagram for a minute, we pretty much have the foundation built out, in that we know what the AWS management console looks like, we have an S3 bucket created, and it is configured for static website hosting. The next episode will look at how we can streamline uploads into a nice automated workflow, and also look at some AWS best practices.

Metadata
  • Published
    2015-04-17
  • Duration
    14 minutes