How To Migrate A Static Website To Amazon S3

Posted on 28 November 2017 in Blog • Tagged with dev, howto

So you decided that your managed hosting service does not suit your needs (anymore) and would like to move to a cloud service like Amazon Web Services (AWS)? Then you’ve come to the right place! In this article, I try to provide a short reference of the most important steps and considerations during the migration.
Note that this process requires a certain level of technical expertise, therefore some background knowledge on the topics covered is expected.

Also note that this is a writeup of my own experience of moving this very (static) website from managed hosting at 1&1 to S3: My website is fairly small in size (currently less than 1 MB) and does not include any (server-side) dynamic features or any form of authentication. It does, however, make use of SSL certificates and of redirects, both from the bare domain to the www subdomain and from HTTP to HTTPS. This guide will be specific to this setup; your scenario may vary. Especially if your website relies on dynamic features or authentication, S3 is not the right choice and you won’t find this guide to be very helpful.
For the ones who are undecided, I will first discuss the general pros and cons of the cloud vs. managed hosting, so you can decide for yourself. If you are impatient or have already made your decision, you may jump directly to the migration recipe.


For the discussion contained in the following chapters, it is helpful to first consult the definitions of the terms managed hosting and cloud computing:

Definition of managed hosting from the Wikipedia article on web hosting (emphasis mine):

The user gets his or her own Web server but is not allowed full control over it (user is denied root access for Linux/administrator access for Windows); however, they are allowed to manage their data via FTP or other remote management tools. The user is disallowed full control so that the provider can guarantee quality of service by not allowing the user to modify the server or potentially create configuration problems. The user typically does not own the server. The server is leased to the client.

Definition of cloud computing, according to Wikipedia (again, emphasis mine):

Cloud computing is an information technology (IT) paradigm, a model for enabling ubiquitous access to shared pools of configurable resources (such as computer networks, servers, storage, applications and services), which can be rapidly provisioned with minimal management effort, often over the Internet.

As you can see, both options are simple to manage (compared to operating server hardware yourself). However, there are huge differences between both approaches: While managed hosting provides few, tightly controlled software options, the cloud provides a multitude of services which have to be configured individually.

Considerations before moving

After this introduction, here are a couple of reasons why one might want to move from a managed host to a cloud service, and a few considerations to make before switching.


Pros:
  • Different software needs: Managed hosting contracts commonly offer domains, a web server with PHP, a database server (e.g. MySQL), email addresses and SSL certificates. You might notice that you do not use all of these or worse: require different software which is not available within the given infrastructure. A cloud provider lets you choose from a lot more options on a per-project basis.
  • Scalability: Managed hosting contracts usually include a fixed amount of hard drive space that you may occupy, as well as a maximum amount of data transfer per month. If you have fewer visitors than the contract anticipates, you might end up paying much more than actually required. If you have more, your visitors might find the website unusable once the transfer limit has been reached. Cloud providers usually enable you to scale the infrastructure to your demands.
  • Pricing: This goes together with the last point: you might not need the entire bundle.
  • Increased international availability: Some hosting contracts keep your data in a single country. This can mean a sluggish experience for visitors from the other side of the planet. Using a content delivery network, or deploying your website in a cloud spanning multiple continents, mitigates this problem.
  • Education: Moving to AWS has taught me a lot about the backbones of my website, therefore I can also recommend this if you just want to experiment! But be prepared to fail, so avoid experiments with the production website.


Cons:
  • Greater technical expertise required: Despite Amazon’s great documentation, a lot more fine tuning and manual editing of configurations is required. You should have at least a basic understanding of HTTP, DNS and SSL to undergo this adventure.
  • Less customer support: Most hosting contracts include a support plan with a human contact. The Basic AWS Support only includes static documentation and account support. The most basic support plan including a (human) technical support contact currently starts at $29.
  • Privacy Concerns: As the cloud consists of a multitude of services running on servers all around the world, data in the cloud is not restricted to one jurisdiction. If your application processes your or other users’ private data, this might be of concern.
  • Security Concerns: This does not affect S3 as much as dedicated servers such as EC2 instances. Note however that for some services, the user is responsible for securely configuring their server and keeping software up-to-date.
  • Varying costs: Due to the nature of on-demand pricing, your bills may vary greatly from month to month. This might be disadvantageous for predicting costs.


Moving to a cloud provider is not an all-or-nothing decision. You can always start by moving a couple of services to a cloud provider and leave e.g. the domain registration and DNS at your previous provider. Also note that there are more companies offering services similar to AWS. Two notable alternatives are the Google Cloud Platform and DigitalOcean, neither of which I have tried yet.


The Migration Recipe
Here is a quick overview of the required migration steps:

  1. Create an S3 bucket and copy static content to it
  2. Setup DNS at Route 53
  3. Create a CloudFront distribution for the bucket
  4. Point DNS records at CloudFront
  5. Request and install an SSL certificate
  6. Setup Redirections using a second S3 bucket and CloudFront distribution

All steps are also documented within the Amazon documentation: Setting up a Static Website Using a Custom Domain, Migrating DNS. This blog post tries to keep it short, less technical, and tailored to our scenario.

I assume that you have successfully created an AWS account and can log into your AWS Console.

Moving Static Content to S3

The website is going to be stored and served by Amazon S3 (Simple Storage Service), so start by creating a new S3 bucket: Log into the AWS console, and choose the “S3” service through the navigation bar up top.

Start the creation wizard by clicking on “Create Bucket”, choose a name and your preferred region. I strongly recommend naming your S3 bucket the same as your domain (e.g. www.example.com), such that you don’t rely on CloudFront for DNS aliasing (see below). Don’t change anything on the remaining pages of the wizard.
Next up, choose your newly created bucket from your list of buckets and navigate to the “Properties” tab. Enable “Static Website Hosting” and choose an index document, most likely index.html. Optionally, you can specify an error page which will be shown to visitors on error. Make a note of the “endpoint” URL (something like http://<BUCKETNAME>.s3-website.<REGION>.amazonaws.com), you will need it later.

After enabling “Static Website Hosting”, navigate to the “Permissions” tab and click the “Bucket Policy” button. Paste the following snippet, insert your bucket name for BUCKETNAME and don’t forget to save.

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "PublicReadGetObject",
                "Effect": "Allow",
                "Principal": "*",
                "Action": "s3:GetObject",
                "Resource": "arn:aws:s3:::BUCKETNAME/*"
            }
        ]
    }
Note that this snippet makes your entire bucket content world-readable! (Also note that you cannot choose the “Version” value; it is a fixed identifier of the policy language and has nothing to do with today’s date.)

The next step is to copy the contents from your old provider to S3. First, obtain a copy of all your static files from your old provider, for example via FTP. Now is also the time to remove any files that are no longer needed. Especially remove any .htaccess and .htpasswd files, as they may contain confidential information and are not understood by the S3 host. All files uploaded to your bucket will be publicly accessible and you will not be able to leverage any kind of server-side authentication (e.g. HTTP Basic) on S3.

For administering the content of your S3 buckets, you can either use the AWS console or the AWS Command Line Interface. I am going to describe usage of the latter. First, you have to download and install the package corresponding to your operating system. After installation, use aws configure to set up your credentials, as described in the official documentation.

You can then navigate into your content folder (which you just downloaded and cleaned) and use aws s3 sync ./ s3://<BUCKETNAME> --delete to sync the S3 bucket with your local file system. Note that the --delete option instructs the CLI to delete files from the bucket that are not present in your local file system.
Amazon will charge you depending on the storage used, so try to keep the storage footprint low.
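To avoid surprises, you can preview a sync before running it. A quick sketch of a typical session (the bucket name is a placeholder):

```shell
# Preview which files would be uploaded or deleted -- no changes are made
aws s3 sync ./ s3://BUCKETNAME --delete --dryrun

# Perform the actual sync; --delete removes remote files absent locally
aws s3 sync ./ s3://BUCKETNAME --delete
```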

At this time, you should be able to successfully navigate to the URL that you noted earlier (e.g. http://<BUCKETNAME>.s3-website.<REGION>.amazonaws.com) in your web browser. Note that some assets might not load properly, as their URLs might be specified absolutely, not pointing to the S3 bucket.

Setup Route 53 as DNS Provider

As a second, independent step, let’s move your DNS setup to Amazon Route 53. For this, we first replicate your current DNS setup at Route 53. Navigate to the Route 53 service through the AWS console and create a new hosted zone for your domain (charges apply!). As the hosted zone type, choose “Public Hosted Zone”.

Select your newly created zone from your list of hosted zones. By default, two record sets are created: The NS entry and the SOA entry (Do not delete them!). We will now add additional record sets from your old provider. You can use the dig command to look up existing record sets. Call dig +nocmd <DOMAIN> any +noall +answer. The output will look similar to the following:

$ dig +nocmd example.com any +noall +answer
example.com.            21599   IN      A       93.184.216.34
example.com.            21599   IN      AAAA    2606:2800:220:1:248:1893:25c8:1946
example.com.            21599   IN      NS      a.iana-servers.net.
example.com.            21599   IN      NS      b.iana-servers.net.
example.com.            3599    IN      SOA     sns.dns.icann.org. noc.dns.icann.org. 2017102410 7200 3600 1209600 3600
example.com.            3599    IN      MX      10 mail.example.com.
example.com.            59      IN      TXT     "Hello World."

For each record (except SOA and NS records), click on “Create Record Set” in your hosted zone view, enter the name (or leave it blank), choose the correct record type (4th column of the dig output), TTL (time-to-live; 2nd column) and value (last column), then save. Ignore the NS and SOA records in the dig response.
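If you have many record sets, clicking through the console gets tedious; the AWS CLI can apply them in bulk. A hedged sketch (the zone ID, record name and value are placeholders; the change-batch JSON is Route 53’s documented format):

```shell
# change-batch.json -- record name, type, TTL and value taken from the dig output
cat > change-batch.json <<'EOF'
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "example.com.",
        "Type": "MX",
        "TTL": 3600,
        "ResourceRecords": [{ "Value": "10 mail.example.com." }]
      }
    }
  ]
}
EOF

# Apply the change batch to the hosted zone (ID from the Route 53 console)
aws route53 change-resource-record-sets \
    --hosted-zone-id Z1EXAMPLE --change-batch file://change-batch.json
```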

Next, point the DNS settings of your old hosting provider to the Amazon name servers: For 1&1, log into your control center, and navigate to the DNS settings for the domain. In the section “Name Server Settings”, choose “Other name servers” and enter the four name servers provided by Amazon. Their names should look like ns-123.awsdns-45.com (with different numbers).

If everything has been configured correctly, nothing should have changed about your website. You can use the dig command from earlier to verify that your record sets have correctly been transferred. Verify that the Amazon name servers show up for the NS records. It may take up to 48h for the changes to take effect.

In the next step, we will setup the domain to point at your S3 bucket directly. This step is optional and is only possible if you do not require SSL and used your domain name as bucket name earlier.
Navigate to your hosted zone within the Route 53 dashboard. Create a new A record set and set its name to the subdomain hosting your website (e.g. ‘www’). If you already have an A record set for that subdomain, you have to modify it. Choose “Alias” and use your website bucket URL without the bucket name (e.g. s3-website.<REGION>.amazonaws.com) as “Alias Target”.

Here’s where Route 53 trickery comes into play: If you named your bucket www.example.com, you can use it with the record set named www.example.com, which will automatically point to www.example.com.s3-website.<REGION>.amazonaws.com. Similarly, if you want to serve your website from the bare domain name, call the bucket example.com; the website will then be served from example.com.s3-website.<REGION>.amazonaws.com. You can verify that everything works correctly by navigating to your website with your web browser. Make sure to hit the http:// version though, as SSL has not yet been set up. If your browser automatically redirects you to the https:// version, this might be due to HSTS. In that case you might have to clear your browser’s HSTS settings for your domain.

Setup CloudFront as Content Delivery Network

Using CloudFront in front of an S3 website has several advantages: It enables you to use SSL on your own domain (S3 itself only supports SSL on its amazonaws.com endpoints), and it gives you presence in a worldwide CDN. The disadvantages are cost and the fact that, due to CloudFront’s caching, changes usually take around 24 hours to come into effect.

You can manage CloudFront from your AWS console: Choose “CloudFront” as a service in the navigation bar. Choose “Create Distribution” and then, in the “Web” section, “Get Started”. As “Origin Domain Name”, enter your S3 website URL: <BUCKETNAME>.s3-website.<REGION>.amazonaws.com. In the “Distribution Settings” section, enter your domain name in the “Alternate Domain Names” field. Leave everything else at the default settings (e.g. don’t specify a default root object).

After creation, the CloudFront distribution is assigned a domain name which you can see on the list of distributions. Note down the domain.
Navigate back into your hosted zone at Route 53 and create a new A record set (or modify the existing one). Choose “Alias” and enter the CloudFront domain as “Alias Target”.

Request and Install SSL Certificates

There are several providers which offer free SSL certificates nowadays, with variations in the domain verification procedure and the offered features:

  • Let’s Encrypt: free, (currently) the most popular service, verification via a special file on the web server, automated requests through ACME clients, renewal via cron jobs, no wildcard certificates (at the time of writing), private key in your hands
  • AWS Certificate Manager (ACM): free for AWS users, verification via email (addresses from WHOIS or DNS MX record), automatic renewal, offers wildcard certificates, private key not accessible

I decided to stick with Amazon and use ACM.

Navigate to the “Certificate Manager” on your AWS console. On the top right, choose “US East (N. Virginia)” as region, as CloudFront can only access certificates from that specific ACM region. Click “Request a Certificate” and enter your domain name. This is also a good moment to consider other (sub-)domains that you would like to use with your website. You can even set up a wildcard certificate by entering *.example.com in the “Domain name” field. Note that the wildcard certificate does not protect the bare domain (example.com), so go ahead and add both. After submission, you will have to verify ownership of the domain. For this purpose, emails will be sent to the administrative contact in the WHOIS record of your domain, as well as to several administrative addresses guessed via your DNS MX record. Receive the email and follow its instructions to complete the verification.

Then edit your CloudFront distribution, choose “Custom SSL Certificate” and select your newly created certificate. Again, changes to CloudFront distributions may take a while to come into effect, so you might have to wait for one day. You should then be able to navigate to the https:// version of your domain in your web browser.
Also try navigating to the http:// version to check whether the redirection to HTTPS works.

Alternate Domains

So far, this guide has only aimed to make the website accessible via one domain. However, for convention and usability, you might want to make the website accessible via multiple (sub-)domains, such as example.com and www.example.com. As search engines do not like to see the same content available via two domains, one usually sets up a redirection. To set up a redirection that works in combination with HTTPS, create a second S3 bucket for the domain that you want to redirect from. In the “Properties” tab, again choose “Static Website Hosting”, but this time select the redirection option. Enter the entire target domain (the domain that you set up and tested in the last step) and ‘https’ as protocol.
Create a CloudFront distribution analogously to the previous section, enter the domain under “Alternate Domain Names” and choose a corresponding HTTPS certificate (e.g. the wildcard certificate that you created earlier). Again, note down the CloudFront domain name.
Lastly, create a DNS A record set for the new subdomain within your existing hosted zone and choose the CloudFront distribution as alias target.
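The redirect bucket can also be configured from the command line. A sketch, assuming example.com redirects to www.example.com (adjust both names to your setup):

```shell
# website.json -- redirect every request to the primary domain over HTTPS
cat > website.json <<'EOF'
{
  "RedirectAllRequestsTo": {
    "HostName": "www.example.com",
    "Protocol": "https"
  }
}
EOF

aws s3api put-bucket-website --bucket example.com \
    --website-configuration file://website.json
```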

Unfortunately, creating a second CloudFront distribution seems to be the only way to setup the redirection. For more information, also see this blog post by Simone Carletti on this topic.

Setup Email

There are several options for setting up email with your domain at Route 53. You can setup the MX record to point to any mail provider of your choice. Amazon also offers their own mail service (WorkMail), which I have not yet tried. Another cheap option is to setup a redirection using Amazon SES and Amazon Lambda. There is a great writeup by bravokeyl on this topic.


In this somewhat lengthy post, I have first discussed the pros and cons of moving from a managed hosting service to a cloud service like AWS. In the form of a guide, I have subsequently written down my experience of moving from 1&1 to AWS and the minimal set of steps necessary to do so.

Concerning pricing and long term experience, I will update this article in a couple of months.

Two Factor Authentication from Dumped Sqlite Databases

Posted on 20 February 2015 in Blog • Tagged with security

Some of you may be using Google’s “Authenticator” app for Android in order to achieve a higher security level for your Google account.

(If you are not yet using it, I recommend setting it up, you can find the details here).

I used to have my web browser set up to always launch in incognito mode (not saving cookies, etc.), and as such, Google regularly prompted me for the two-factor authentication token. I did not want to switch to my phone every time, so I decided to reimplement the Google Authenticator as a Python script on Windows.

The general algorithm is well known and documented in RFC 6238. There is even a pseudocode implementation available on Wikipedia, so my contribution is the Python implementation, which simplifies usage by reading the secrets from an sqlite database.
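The core of the algorithm is small enough to sketch here. This is a generic RFC 6238 implementation in Python (not necessarily identical to my script), including the base32 secret decoding that Google Authenticator uses:

```python
import base64
import hashlib
import hmac
import struct
import time

def hotp(key, counter, digits=6):
    # RFC 4226: HMAC-SHA1 over the big-endian counter, then dynamic truncation
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret_b32, interval=30, digits=6):
    # RFC 6238: HOTP with the counter derived from the current Unix time
    padded = secret_b32.upper() + "=" * (-len(secret_b32) % 8)
    return hotp(base64.b32decode(padded), int(time.time()) // interval, digits)
```

For example, hotp(b"12345678901234567890", 0) yields "755224", the first test vector from RFC 4226.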

In order to use the script, you will have to get the “secret” keys from your Android phone to your computer. This is easily possible if you already have root access to your Android phone. In this case, you can use the Android Debugging Bridge (adb) to pull the database from /data/data/ An excellent tutorial on how to acquire the database can be found here.

In the end, for other reasons, I decided to switch out of Chrome’s incognito mode, so I don’t use the authentication script as often as I used to.
But I decided to share it anyways:

WinSCP Session Password Decryption - Part 2

Posted on 20 February 2015 in Blog • Tagged with security, reverse-engineering, cryptography

After this old article got some more attention recently, I decided to give this subject another shot.

The old C++ code was messy and the result of copying together the relevant pieces of the WinSCP source code to create a deobfuscator.
This time I decided to implement the same script in Python, adding the feature to read the newest values from this machine’s registry, because I am pretty sure that this is the most common use case.

Here it goes:
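The core deobfuscation routine is compact enough to sketch inline. This version is based on publicly documented reimplementations of WinSCP’s password scrambling and omits the registry-reading part, so treat it as an illustration rather than the exact script:

```python
PW_MAGIC = 0xA3
PW_FLAG = 0xFF

def _dec_next(s):
    # Decode one obfuscated byte (stored as two hex digits) from the string
    val = int(s[:2], 16)
    return ~(val ^ PW_MAGIC) & 0xFF, s[2:]

def deobfuscate(hostname, username, blob):
    key = username + hostname
    flag, blob = _dec_next(blob)
    if flag == PW_FLAG:
        _, blob = _dec_next(blob)   # dummy byte
        length, blob = _dec_next(blob)
    else:
        length = flag
    skip, blob = _dec_next(blob)
    blob = blob[skip * 2:]          # each skipped byte occupies two hex digits
    chars = []
    for _ in range(length):
        c, blob = _dec_next(blob)
        chars.append(chr(c))
    result = "".join(chars)
    if flag == PW_FLAG:             # stored value is username + hostname + password
        result = result[len(key):]
    return result
```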

The usage is much easier this time:

usage: [-h] [--hostname HOSTNAME] [--username USERNAME]
                          [--hash HASH]

Deobfuscate WinSCP password, using info either from registry (if no arguments
are given) or from the command line.

optional arguments:
  -h, --help           show this help message and exit
  --hostname HOSTNAME  HostName
  --username USERNAME  UserName
  --hash HASH          Password

Again: I hope that this will help someone, have fun!

(Fun fact for those of you who are using FileZilla: FileZilla stores the plain-text password in %APPDATA%/FileZilla/sitemanager.xml)

Fixing a Chrome SSL Connection Error caused by COMODO Products

Posted on 13 December 2013 in Blog • Tagged with dev, howto

A couple of months ago, I lost my precious Google Chrome browser (running on Windows 7, 64-bit): All of a sudden (or not so sudden, as I would find out later), Chrome was not able to establish secure connections anymore.

Problem description

The symptoms were a little unclear: When I started the browser, I was able to reach secure sites (with the https:// prefix, port 443) for about 30 seconds. Afterwards, it would always get stuck in the “Establishing secure connection” stage for new connections, until ERR_CONNECTION_TIMED_OUT. The connections that were established in those first moments seemed to work for the rest of the session, and usual HTTP traffic (TCP port 80) was not affected.

Researching solutions on the internet yielded a lot of open questions and problems, including obvious solutions such as checking firewall settings, reinstalling Chrome, clearing the SSL cache, or disabling TLS 1.2 (which is no longer possible in current Chrome versions). None of these solutions did anything for me.

Finding and Resolving the Issue

So, I monitored my network connections with packet capture software (Wireshark). First everything went as expected. But even when the secure connections started to fail, I could clearly see that the handshake completed (SSL Client Hello, Server Hello, Certificate, Client Key Exchange, New Session Ticket), but there was silence after the handshake until Chrome finished and reset the connection 20 seconds later (TCP FIN followed by RST flag). Comparing that behavior to Firefox (which still worked; for comparison, I also enabled TLS 1.2), it was clear that it was Chrome’s turn to send data.

The fact that the connections which were established in the first seconds worked for an entire session indicated that some kind of data would be cached, probably certificates. Additionally, because the amount of websites that I could load was not limited by a certain number, but a certain time period, it was quite obvious that my connections were victims of a race condition.
So I fired up Sysinternals Process Monitor, filtering only events from the chrome.exe application.
After a few tries, the experiment was set up and ready to capture Chrome’s faulty behavior. I started Chrome and navigated to a lot of different web pages, pinpointing the exact time of failure. Using Wireshark and Process Monitor, I found out that at the time of the failure, Chrome had just finished reading a lot of certificate files (having to do with different Certificate Authorities and revocation lists). In particular, there were a lot of accesses to the certificate stores issuers.sst and subjects.sst (located at ... \AppData\LocalLow\COMODO\CertSentry).

Research on the internet showed that these files were remnants of an old COMODO Dragon installation that I got tricked into earlier this year and that had not been removed correctly. If one tries to delete them, they are recreated moments later by the DLL certsentry.dll (located at C:\Windows\system32 and C:\Windows\SysWOW64). Using regsvr32 /u certsentry.dll and Unlocker, I was able to unregister and delete the DLLs, making the deletion of the certificate stores permanent. After a couple of reboots, Chrome worked again as intended.

But I did not stop here. To further investigate the issue, I kept copies of the DLL files and the certificate lists. Using Dependency Walker, I was able to confirm that there were dependencies on crypt32.dll, cryptnet.dll and cryptui.dll. The certificate stores could be opened with certmgr and included certificates which I had used during my tests (e.g. Google, LastPass, AdBlockPlus, akamai [facebook]). They were still valid though.

Explanation and Conclusion

Earlier this year, I had to manually cancel a COMODO setup procedure and remove all of its remnants. During that procedure, I missed the DLL files certsentry.dll in my system folder which were still being hooked by Microsoft Windows cryptography modules, creating the files issuers.sst and subjects.sst in my user data folder.
When Google Chrome is started, the user can immediately start browsing while the certificate stores are still being loaded; this leads to a race condition. Once all files have finished loading, new SSL connections fail after completing the handshake.
There are a couple of things that Chrome and COMODO could learn from this situation: First of all, the asynchronous loading of security files could be dangerous, since the user does not seem to be protected by the COMODO mechanism during the first seconds. Secondly, there should be a reasonable error message informing the user of SSL errors instead of a timeout.

For me, I am very satisfied with the result, especially since I have a game jam coming up where I wanted to make use of HTML5 technology, which is faster and more stable in Chrome than it is in Firefox.

Deploying Octopress via SFTP

Posted on 29 July 2013 in Blog • Tagged with dev

There were some great tutorials online concerning octopress deployment via rsync or ftp. Unfortunately, my web hosting service only supports access via sftp.

So here it goes:

I wanted to upload my generated HTML into the directory /octopress of my web host. So I modified _config.yml like this:

destination: octopress

and added to Rakefile:

public_dir = "octopress"

## ...

sftp_user = "<user name>"
sftp_target = "<hostname>"
deploy_default = "sftp"

desc "Deploy website via SFTP"
task :sftp do
  puts "## Deploying website via SFTP"
  ok_failed system("echo 'put -r #{public_dir}\n bye' | sftp #{sftp_user}@#{sftp_target}")
end

Now I can just run

rake deploy

and enter my password to upload everything.

ToDo: optimize the mirroring routine so it does not reupload everything every time
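One possible answer to this ToDo: if your system has lftp, its mirror command can upload incrementally over sftp, transferring only new or changed files. A hedged one-liner (host and user are placeholders):

```shell
# Reverse-mirror (upload) the local octopress folder, skipping unchanged files
lftp -u "$SFTP_USER" "sftp://$SFTP_HOST" \
     -e "mirror -R --only-newer octopress octopress; bye"
```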

Reversing the WinSCP session password encryption

Posted on 23 December 2012 in Blog • Tagged with security, reverse-engineering, cryptography

Edit: this article has been superseded by a newer version, implemented in Python: WinSCP session password decryption - Part 2

So today I decided to access my web hosting account via scp from my Linux partition. But of course, I had forgotten my password! So I used the “Offline NT Password & Registry Editor” to extract the necessary settings (from Windows 7 partition):

Open the file


and inside regedit navigate (via “cd”) to

\Software\Martin Prikryl\WinSCP 2\Sessions\<SessionName>

From this key, you need the values “Password” (only possible if saved, very long string), “Host” and “UserName”.

Finally, I reverse engineered the WinSCP source code, which was especially hard because it originates in Delphi, where all strings and arrays are 1-based. My final decrypter code:

Usage (using the values from the registry key):

./decrypter HostName UserName Password

I hope that this will save someone else’s time, too!