I’ve had a number of largely sleepless nights recently* and the outcome of one was a substantial update to the underpinnings of this blog.
Previously it was generated using Jekyll v1.4.3 and hosted on Github. The first change was to update Jekyll to the latest version, 3.0.1. This brought a number of improvements; the one I'm happiest with is the regeneration time, which for me was cut nearly in half, from 20 to 30 seconds down to around 10 seconds. The number of "fixes" I had to apply after leap-frogging so many versions was minimal and, if memory serves (it rarely does), mostly involved config changes, such as the addition of `gems: [jekyll-paginate]` to `_config.yml`. For Jekyll 3, include the jekyll-paginate plugin in your Gemfile and in your `_config.yml`.
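For reference, the `_config.yml` addition looks roughly like this (a sketch; the pagination size is just an example value):

```yaml
# _config.yml: Jekyll 3 no longer bundles pagination,
# so the plugin must be listed explicitly
gems: [jekyll-paginate]
paginate: 5
```

The Gemfile gets a matching `gem 'jekyll-paginate'` line so Bundler installs the plugin.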
There were additional Ruby gems to be installed, all of which were handled automatically and very cleanly.
Once I had Jekyll running smoothly and generating the site as expected (which took less than 30 minutes from start to finish), I started the migration from Github to Amazon S3. In a nutshell, S3 is cloud storage (the S3 stands for Simple Storage Service) that also provides a minimal web server for hosting static web pages. With S3 and Jekyll, I'm basically administering the site much like the old-school method of generating content locally and then using FTP to upload it to the server.
The data migration from Github to S3 was just as seamless and glitch-free as the Jekyll update. I used the s3_website plugin for Jekyll. The basic steps for the data migration were as follows:

1. `gem install s3_website`, per the README file
2. Create `s3_website.yml` (see caveat below)
3. Configure `s3_website.yml` per the README file
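For anyone following along, a minimal `s3_website.yml` looks something like this (a sketch; the bucket name and key placeholders are mine, not my real values):

```yaml
# s3_website.yml: these credentials have full control of the
# bucket, hence the .gitignore caveat below
s3_id: YOUR_AWS_ACCESS_KEY_ID
s3_secret: YOUR_AWS_SECRET_ACCESS_KEY
s3_bucket: example.com
```

The gem also evaluates this file as ERB, so the keys can be pulled from environment variables instead of being stored in the file.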
Some caveats for this migration:
- This one is critical: in your `.gitignore` file, make sure Git is ignoring `s3_website.yml`, because it contains the ID and secret key that have full control of the entire bucket at S3. If Git uploads `s3_website.yml` to Github, you will have exposed admin credentials to the world.
- Follow the directions for setting up AWS credentials so that you limit the account used by s3_website to only the bucket containing this website. Least privilege is a good thing.
- I use Keybase, which requires a text file in the base directory of my website. The s3_website plugin didn't migrate that file, so I had to do so manually; I also added a line to `s3_website.yml` to ignore that file on the server so that it doesn't get deleted.
- In S3 I enabled logging, which creates a "logs" directory in the root of your bucket. s3_website will delete that directory on push unless you tell it to ignore that directory.
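A sketch of that exclusion, assuming the gem's `ignore_on_server` option (entries are patterns matched against server-side paths; `keybase.txt` is a hypothetical name for the Keybase proof file mentioned above):

```yaml
# s3_website.yml: never delete these server-side paths, even
# though they aren't part of the locally generated site
ignore_on_server:
  - logs/
  - keybase.txt   # hypothetical name for the Keybase proof file
```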
The other change I made to `s3_website.yml` was to enable compression of the site's files before they're uploaded.
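That option, as I understand the gem's config (it gzips eligible text assets before upload and sets the matching `Content-Encoding` header on them):

```yaml
# s3_website.yml: gzip text assets before uploading
gzip: true
```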
With the move from Github to S3, my routine for generating posts changes only in how I get new content to the server (the final step below):

- `rake` to create the markdown file, as usual
- `s3_website push` to upload all changes to S3, whereas before I would use the Github Desktop app to commit changes to Github
And that is pretty much the extent of the work. All in all, I'd say it took about an hour and was super simple, even for a coffee guy. =) I'm relatively pleased with the performance of S3, though I have noticed it's slower to load images, at least for me here in Singapore. Once I have some time and the notion, I may dig a bit deeper to see why that is and whether it's something I want to tackle.
(For local previews, I run Jekyll with a development config layered over the main one: `jekyll serve --config _config.yml,_config_dev.yml --watch`. With multiple `--config` files, settings in later files override earlier ones.)