
Setting up a Static Blog using Hugo and AWS

And a bit of git too!


Choosing your Technology Stack

So you'd like to set up your own blog? Thankfully, there are quite a few options available to you. While this blog post will guide you through setting up a static blog using Hugo and AWS, it's not the only way. Here are three major decisions that you should consider carefully.

Decision 1: Custom Domain?

Do you want your website to have a custom name? If not, you can probably set up something super quick using Github pages (for a website name that's relatively readable). If you do want a custom domain, you'll have to buy it from a domain registrar.

Decision 2: Static vs Dynamic Hosting

Another key decision you will have to make is whether you want to host a static or dynamic website.

Static sites follow a very simple model. There's a bunch of html files and a server that displays them. That's it. You change the html files (or add more), point the server to these changes and they're live. It's an easy model, and something that's perfect for a blog.
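To make the model concrete, here's a tiny sketch you can run locally (the file and directory names are just illustrative):

```shell
# A static site really is just files plus a server that hands them out.
# Create a one-page site:
mkdir -p demo-site
echo '<h1>Hello, static world</h1>' > demo-site/index.html
cat demo-site/index.html
# To serve it locally, any static file server will do - for example,
# the one bundled with Python 3:
#   python3 -m http.server --directory demo-site 8000
```

Editing or adding files in the directory and refreshing the browser is the whole publishing model - which is exactly what we'll replicate with Hugo and S3.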

Dynamic sites are a bit more complex. If you've ever come across a WordPress or Ghost website - these are dynamic. There's usually a more granular configuration mechanism, and correspondingly more resources are required to host and serve the website. If you don't need advanced CMS management or lots of server-side event processing, a dynamic site is overkill for a blog.

I tried both, and chose the static flow in the end for a more automated workflow (though dynamic hosting was easier to set up). If you'd like to go down the dynamic hosting route via AWS, here's a very brief guide:

Dynamic Hosting Simple Guide
Buy a custom domain
Set up a Ghost EC2 instance
Attach an ElasticIP to the EC2 instance
Create a DNS "A" record in your domain's DNS management portal pointing to your Elastic IP

Decision 3: Where to store your website files

You can use any of the major cloud providers - they're all pretty good. Each has its pros and cons that I won't cover in this guide. I went with AWS because of their free SSL certificates and the Route53/Cloudfront stack. Also, quite importantly, their documentation is top notch (ever the unsung hero).

Yosh! Let's set up our website

This guide will show you how to set up a static website using

  • A custom domain name

  • Static Website Framework - Hugo

  • AWS as the place to store your website files

Here are the steps we'll follow in order to set up the site. If you've already completed any of the steps, feel free to jump to the section that's relevant to you:

Overview of Guide
1. Buy a custom domain
2. Set up an S3 bucket, configured for Public Access
3. Create a simple Hugo website
4. Install AWS CLI
5. Configure your Hugo deployment to upload to S3
6. Set up Route53
7. Set up SSL (Optional)

Note: I've tried my best to keep this guide as simple as possible, but you will need to be comfortable with running commands on the terminal. Knowledge of tinkering with config files also comes in handy. With that out of the way - let's go!

Step 1: Buy a custom domain

First things first - choose a domain name and a registrar. I chose GoDaddy as my domain provider because the domain I wanted was unfortunately unavailable for purchase in AWS Route53. I would recommend checking in Route53 before you go with an external domain registrar, as it will make things simpler.

For the rest of this guide, we'll be using the domain example.com1 to illustrate. Don't forget to replace this with your own custom domain!

Step 2: Set up an S3 bucket, configured for Public Access

If you don't already have one, set up an AWS Account (tutorial here). The rest of this guide assumes you have successfully done this. If you've already got an AWS account - log in! Nip back once you're ready to go on - we'll wait.

Back? Ok, let's move on.

Bucket 1: example.com → Create S3 Bucket named example.com → Set up as Static Website → Access level Public Access + Custom Bucket Policy → Files stored here
Bucket 2: www.example.com → Create S3 Bucket named www.example.com → Set up Static Website REDIRECT → The End
Bucket 1: Domain

Head on over to AWS Console S3 and create your S3 bucket where you will store your website files. The process is pretty straightforward. Important things to keep in mind:

  • The name of the bucket must be identical to your purchased domain (so example.com in this example)

  • Ensure that the bucket allows public access by unchecking the Block all public access box during creation

    • After the bucket is successfully created

      • Go to Properties and set it to Static Web Hosting. Type in the defaults for index and error pages

      • Go to Permissions and ensure that Block Public Access is Off. Then type in the following bucket policy (substituting your own bucket name in the Resource ARN)

          {
              "Version": "2012-10-17",
              "Statement": [
                  {
                      "Sid": "PublicReadGetObject",
                      "Effect": "Allow",
                      "Principal": "*",
                      "Action": "s3:GetObject",
                      "Resource": "arn:aws:s3:::example.com/*"
                  }
              ]
          }
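If you prefer the terminal to the console, the same policy can be applied with the AWS CLI (installed in Step 4). A sketch, assuming your bucket is named example.com - swap in your own domain:

```shell
# Write the public-read policy to a file (the bucket name is a placeholder):
cat > policy.json <<'EOF'
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::example.com/*"
        }
    ]
}
EOF
# Sanity-check that the file is valid JSON before uploading:
python3 -m json.tool policy.json > /dev/null && echo "policy.json OK"
# Then attach it to the bucket (requires configured AWS credentials):
#   aws s3api put-bucket-policy --bucket example.com --policy file://policy.json
```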
Bucket 2: SubDomain

Create another bucket and name it www.<apexdomain> - www.example.com for this guide.

  • Go to Properties and set it to Static Web Hosting, but choose the redirect to another website option.

    • Fill in your domain bucket name (example.com) as the redirect target

This step ensures that both example.com and www.example.com show your website contents (without storing two parallel copies in both buckets. That would be silly)
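The redirect bucket can also be configured from the terminal. A sketch, again using example.com / www.example.com as placeholders:

```shell
# Website configuration that redirects every request to the apex domain:
cat > website.json <<'EOF'
{
    "RedirectAllRequestsTo": {
        "HostName": "example.com",
        "Protocol": "http"
    }
}
EOF
python3 -m json.tool website.json > /dev/null && echo "website.json OK"
# Apply it to the www bucket (requires configured AWS credentials):
#   aws s3api put-bucket-website --bucket www.example.com \
#       --website-configuration file://website.json
```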

Step 3: Create a simple Hugo Website

The Hugo website has excellent documentation, so for any issues in this section - just refer to the official docs at Hugo Quick Start Guide

Install Hugo → Use Hugo to create a new website directory → Build the site with Hugo

Pop the following commands into your terminal window:

brew install hugo #This is for MacOS. Linux/Windows have similar package managers
cd ~/ #Or wherever you'd like on your local machine
hugo new site example.com #Replace with your own website name as appropriate
cd example.com
git init
git submodule add https://github.com/theNewDynamic/gohugo-theme-ananke.git themes/ananke #The ananke starter theme
echo 'theme = "ananke"' >> config.toml #This sets your default theme to ananke
hugo -D #Build the static site (including draft posts)

That's it - your new website has been generated! Now we need to copy these files to our S3 bucket created earlier

Step 4: Install AWS CLI

Install AWS CLI → Set up IAM Roles → Run aws configure

You could upload your website to the S3 bucket via the web interface if you wished. But you will likely be spending a LOT of time tweaking and tuning your website, and uploading manually just becomes cumbersome. Hugo manages all the heavy lifting of doing this via a single command - so why wouldn't we set it up?

But first, we need to install the AWS CLI. Navigate to → Installing AWS CLI v2, select your OS, and install the tool.

In order to complete the aws configure step, you'll need an IAM user with S3 bucket access set up. Follow this guide → AWS CLI Configuration Guide and note down the Access Key ID and Secret Access Key

Once you have these, run aws configure on your local machine.
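Under the hood, aws configure just writes two small INI files under ~/.aws. Here's a sketch of the credentials file it produces - the key values below are AWS's documentation placeholders, not real credentials, and we point AWS_SHARED_CREDENTIALS_FILE at a scratch directory so nothing real gets clobbered:

```shell
# Use a scratch location instead of the real ~/.aws for this demo:
mkdir -p scratch-aws
export AWS_SHARED_CREDENTIALS_FILE="$PWD/scratch-aws/credentials"
cat > "$AWS_SHARED_CREDENTIALS_FILE" <<'EOF'
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
EOF
cat "$AWS_SHARED_CREDENTIALS_FILE"
```

The default region and output format go into a sibling config file; running aws configure interactively fills in all four values for you.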

Step 5: Configure your Hugo deployment to upload to S3

Time to set up your Hugo deployment. Open up your config.toml file (it should be in your website's root directory, e.g. ~/example.com) and add the following section to the bottom:


[deployment]
[[deployment.targets]]
    name = "hugoexampleS3" #This can be whatever you'd like, just choose a sensible name
    #Now set the URL parameter to point to your S3 bucket in the following format:
    # URL = "s3://<Bucket Name>?region=<AWS region>"
    URL = "s3://example.com?region=us-east-1"

Once this is done, running hugo deploy from the command line will upload your public files to the S3 bucket. (Note: hugo deploy chooses the first of the deployment targets without any arguments. Use hugo deploy --target=<target name> if you'd like to be specific, or have multiple deployment targets)

Visit your S3 bucket's endpoint (which should look something like http://example.com.s3-website-us-east-1.amazonaws.com) to validate that your site has been uploaded and all's working well.

Step 6: Set up Route53

Now we get to the tricksy bits. This step has quite a few moving parts, so do try to check your configurations before proceeding to the next stage.

Stage 1: Set up Route53
Go to AWS Route53 → Set up a hosted zone → Point domain to S3 Endpoint
Stage 2: Update Godaddy NameServers
Update Godaddy Nameservers with Route53 Nameservers → Wait for DNS updates to propagate
Stage 1: Set up Route53

Navigate to the Route53 service from your AWS Console: Route53 AWS Console

  • Create a hosted zone on Route53 and enter the domain name (without the www) that you purchased earlier. In our case, we'll go with example.com. Leave it as a Public Hosted Zone and create it.

  • Navigate to the zone from the dashboard and you should see a few entries. Note their types: there should be one entry of type NS and one of type SOA.

Now it's time to create the record that will point your domain to the S3 bucket.

  • Click on Create Record Set → leave the Name: field blank and ensure the type is A - IPv4 address.

  • Then check the yes button for Alias. This will let you choose the Alias target from a dropdown menu.

    • If you have set up your S3 bucket naming properly, you will be able to select it from the menu.

  • Click Save Record

Stage 2: Update Godaddy Nameservers

If you bought your domain via Route53, skip this step. Otherwise:

  • Log in to godaddy and click on your purchased domain

  • Click on manage DNS and change nameservers

  • Select use your own and enter all the Route53 nameservers as individual entries.

    • Note that you will have to remove the trailing . in your Route53 entries (DNS uses it to denote a fully-qualified domain name)

  • Save your changes.

    Brew a coffee, clear your browser cache and flush your DNS cache. These updates will likely take up to a few hours (depending on various factors) to propagate.
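One small sketch for the trailing-dot gotcha above - the nameserver values below are made up, but the sed call shows the idea:

```shell
# Route53 lists nameservers with a trailing root dot; GoDaddy wants them
# without it. Strip the dot from each entry (sample values are illustrative):
printf '%s\n' \
    ns-123.awsdns-15.com. \
    ns-456.awsdns-07.net. \
    ns-789.awsdns-33.org. \
    ns-012.awsdns-51.co.uk. \
| sed 's/\.$//'
```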

Note: If you don't wish to set up https access, you can stop here. Wait for a few hours for the DNS changes to propagate and your site will be accessible.

Step 7: Set up HTTPS connections (Optional)

Get a free SSL Certificate from AWS Certificate Manager → Set up a Cloudfront distribution → await activation → Redirect Route53 domain entry

SSL certificates can be quite expensive. The good news is that AWS provides a free, auto-renewing SSL certificate for your Cloudfront/Route53 services. So let's go ahead and grab one.

Important: Make sure your region is set to US East (N. Virginia) for the certificate request - certificates used with Cloudfront must live in this region

  • Log in to your AWS Console and navigate to AWS Certificate Manager (link)

  • Request a Certificate → Request a public certificate

  • Add both example.com and www.example.com to the domain names

  • Choose DNS Validation (it's much easier if you've completed route53 setup) and proceed to confirm and request

  • Then click the Create Record in Route53 button (it should be available if you've set everything correctly so far)

  • Done.

With that out of the way, it's time to create a Cloudfront distribution. Route53 and S3 don't enable direct HTTPS access to your buckets, so a Cloudfront distribution will handle that for us. The good news: Cloudfront is AWS' global Content Delivery Network, which means it will cache your website at various locations around the world, leading to speedy loading times for your avid, impatient, global userbase. But we're getting ahead of ourselves.


  • Navigate to the AWS Cloudfront resource on your AWS Console (link)

  • Click on Create Distribution → Web → Get Started

  • You're on a page with many, many fields, and you have immediately, but naturally, panicked. It's ok - we'll get through this together. Only a few fields need modification.

  • Origin Domain Name - Don't choose the S3 bucket from the dropdown. Instead, manually copy the static website endpoint from your S3 bucket's properties, omitting the initial http://. It should look something like example.com.s3-website-us-east-1.amazonaws.com

  • Origin Path - Set to /

  • Origin ID - will probably autofill. If not, use a name like S3-Website-example.com

  • Viewer Protocol Policy - Redirect HTTP to HTTPS

  • Compress Objects Automatically - Yes

  • Alternate Domain Names - Enter both example.com and www.example.com on separate lines inside the textbox

  • SSL Certificate → Custom SSL Certificate → Request or Import a Certificate with ACM → select the SSL cert you created earlier from the dropdown menu

  • Create Distribution

We're nearly there! Wait for a short while until the distribution status is Enabled.

Important: Access the Cloudfront endpoint directly (this will look something like dxxxxxxxxxxxx.cloudfront.net). If your site loads up - it's been configured properly. Do NOT proceed until your Cloudfront URL loads up your website

Now, update your Route53 DNS settings so that the whole world can access your website securely.

  • Go to your Route53 hosted zone for example.com and edit the domain A entry that you pointed to your S3 bucket.

  • Change it to your Cloudfront domain name, which should be something like dxxxxxxxxxxxx.cloudfront.net. You should be able to choose this from the dropdown menu.

  • Also add a new A entry for www.example.com and do the same

Note: Once your DNS settings are properly set up, you can go ahead and delete the www S3 bucket (Bucket 2). We no longer need it for the aliasing of the www subdomain - Cloudfront handles that now.

Huzzah - You're done!


That's it! Now wait a few hours for the DNS entries to propagate. Pat yourself on the back for a job well done. This is a good time to start customising your hugo blog (on your local machine).

Do leave a comment if you were able to set up your site using this guide (or if you found any errors)

Useful Tips

  • Use whatsmydns.net to check how your DNS is propagating (if you are impatient like me!)

  • Flush your DNS on MacOS (Catalina and later) using sudo killall -HUP mDNSResponder

  • After you update your GoDaddy nameservers, sometimes the Manage DNS page doesn't load. If this happens, open a new private-mode window and log in there. Clearing your history can also do the trick.

  • After setting up or changing some configuration settings, click the raw S3 and Cloudfront endpoints to check that your website is loading up correctly.

  • While configuring your website, you may want to set your Nameserver TTL to something short (~5 minutes). Once you're certain that your DNS settings are properly configured, feel free to increase the TTL back up to a day or so.

  • After setting up cloudfront, try to resist uploading any customisations to your S3 buckets while the DNS entries are stabilising. Cloudfront caches your website assets by default for around 24 hours. You can manually invalidate for free only a limited number of times.
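On that last point: if you set up the Hugo deployment in Step 5, Hugo can run the Cloudfront invalidation for you on each deploy. A config sketch - the distribution ID below is a placeholder you'd copy from your Cloudfront console:

```toml
[deployment]
[[deployment.targets]]
    name = "hugoexampleS3"
    URL = "s3://example.com?region=us-east-1"
    # Invalidate the Cloudfront cache after each deploy (placeholder ID):
    cloudFrontDistributionID = "E1234EXAMPLE"
```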

Other Resources


Also a special shout out to David Baumgold's excellent and comprehensive tutorial that I used when I set my blog up → Link to David Baumgold - Host a Static Site on AWS, using S3 and CloudFront

1The website name without any identifiers is usually called the apex domain. The website name with www (or another modifier) appended is usually referred to as a subdomain. Bear this in mind when you're reading other guides/documentation


Deb Goswami
Data Scientist