# Michael On Everything Else

I’ve had a number of largely sleepless nights recently* and the outcome of one was a substantial update to the underpinnings of this blog.

Previously it was generated using Jekyll v1.4.3 and hosted on GitHub. The first change was to update Jekyll to the latest version, 3.0.1. This brought in a number of improvements; the one I’m happiest with is the regeneration time, which for me was cut nearly in half, from 20–30 seconds to around 10 seconds. The number of “fixes” I had to apply after leap-frogging so many versions was minimal, and if memory serves (it rarely does) it mostly involved config changes, such as adding `gems: [jekyll-paginate]` to `_config.yml`.

For Jekyll 3, include the jekyll-paginate plugin in your Gemfile and list it in `_config.yml` under `gems`.
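For illustration, the relevant `_config.yml` lines look something like this (the pagination values are just examples, not what I necessarily use):

```yaml
# _config.yml — enable pagination under Jekyll 3
gems: [jekyll-paginate]
paginate: 5                  # posts per index page (example value)
paginate_path: "/page:num/"  # URL pattern for paginated pages
```

The Gemfile side is a single line, `gem "jekyll-paginate"`, after which `bundle install` pulls in the gem.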

There were additional Ruby gems to be installed, all of which were handled automatically and very cleanly.

Once I had Jekyll running smoothly and generating the site as expected (which took less than 30 minutes from start to finish), I started the migration from GitHub to Amazon S3. In a nutshell, S3 is cloud storage (the S3 stands for Simple Storage Service) that also provides a minimal web server to host static web pages. With S3 and Jekyll, I’m basically administering a web page very similarly to the old-school method of generating content locally, then using FTP to upload new content to the server.

The data migration from GitHub to S3 was just as seamless and glitch-free as the Jekyll update. I used the s3_website plugin for Jekyll. The basic steps for the data migration were as follows:

1. Register an AWS (Amazon Web Services) account
2. Install s3_website using `gem install s3_website` per the README file
3. Configure `.gitignore` to ignore `s3_website.yml` (see caveat below)
4. Create a dedicated user in AWS for the s3_website plugin, per the s3_website doc “Setting up AWS credentials”
5. Configure `s3_website.yml` per the README file
6. Perform all steps under the Usage section of the s3_website README file
7. Verify data migration and website functionality
8. Point DNS to the new location

Some caveats for this migration:

This one is critical: in your `.gitignore` file, make sure Git is ignoring `s3_website.yml`, because it contains the ID and secret key that have full control of the entire bucket at S3. If Git uploads `s3_website.yml` to GitHub, you will have exposed admin credentials to the world.
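The `.gitignore` entry itself is a single line:

```
# keep AWS credentials out of the repo
s3_website.yml
```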

Follow the directions for setting up AWS credentials so that you limit the account used by s3_website to only the bucket containing this website. Least privilege is a good thing.
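The s3_website docs walk through this, but for reference, the resulting IAM policy looks roughly like the following. The bucket name is a placeholder, and the exact action list in the docs may be narrower than the blanket `s3:*` shown here:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:*"],
      "Resource": [
        "arn:aws:s3:::my-blog-bucket",
        "arn:aws:s3:::my-blog-bucket/*"
      ]
    }
  ]
}
```

The point is that the two ARNs scope the user to one bucket (and its contents), so a leaked key can’t touch anything else in the account.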

I use Keybase, and it requires a text file in the base directory of my website. The s3_website plugin didn’t migrate that file, so I had to do so manually, and I also added a line to `s3_website.yml` to ignore that file on the server so that the plugin doesn’t delete it.

In S3 I enabled logging, which creates a “logs” directory in the root of your bucket. s3_website will delete that directory on each push unless you tell it to ignore it:

```yaml
ignore_on_server:
  - logs
  - keybase.txt
```

The other change I made to `s3_website.yml` was to enable gzip compression before uploading, using the following line: `gzip: true`
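Putting those pieces together, my `s3_website.yml` ends up looking roughly like this (keys and bucket name are placeholders, of course):

```yaml
s3_id: <AWS access key ID>
s3_secret: <AWS secret key>
s3_bucket: my-blog-bucket

gzip: true          # compress files before uploading

ignore_on_server:   # never delete these from the bucket
  - logs
  - keybase.txt
```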

With the move from GitHub to S3, my routine for post-generation changes only in how I commit or push new content to the server (step #4 below):

1. Use rake to create the markdown file, as usual
2. Create the post in that *.md file
3. Use Jekyll to regenerate the site *
4. Once I'm satisfied with the post, I use the command `s3_website push` to upload all changes to S3, whereas before I would use the GitHub Desktop app to commit changes to GitHub
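As a command-line sketch, the routine above looks something like this (the exact rake task name depends on your Rakefile, so treat that line as a stand-in):

```shell
rake post title="My New Post"   # step 1: create the markdown file
# step 2: write the post in the generated *.md file
jekyll build                    # step 3: regenerate the site
s3_website push                 # step 4: upload changes to S3
```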

And that is pretty much the extent of the work. All in all, I’d say it took about an hour and was super-simple, even for a coffee guy. =) I’m relatively pleased with the performance of S3, though I have noticed it’s slower to load images, at least for me here in Singapore. Once I have some time and the notion, I may dig a bit deeper and see why that is and whether it’s something I want to tackle.