My blog used to be hosted over at DigitalOcean, on one of the smallest droplets available. But as times changed I wanted to move it in-house - like, actually into my house.

I have an UnRaid server serving my Plex library and handling my device backups via Docker, so I figured why not bring Ghost into the mix as well.  

'Installing' Ghost was straightforward, and it was pretty fast locally, but externally the site took over 5 seconds to fully load, sometimes even taking in excess of 15 seconds.

Google has said a few things about page load times in the past; here are two of the most popular quotes:

"The average time it takes to fully load the average mobile landing page is 22 seconds. However, research also indicates 53% of people will leave a mobile page if it takes longer than 3 seconds to load."
"2 seconds is the threshold for ecommerce website acceptability. At Google, we aim for under a half second."

2-3 seconds sounds reasonable to me.

While this blog is not a commercial website, it seems I have a goal.


Phase 0; Initial load times

What would a little optimisation be without a baseline?

I'm going to be using GTmetrix throughout this experiment. There are many sites out there that test a site's speed. Using one or another doesn't make too much difference, but using one and only one will produce consistent and meaningful results.

As you can see above, the main page takes 10+ seconds to load 4MB over 32 requests. Not going to win any records for speed, that's for sure.

Phase 1; Cloudflare and GZIP

I've had a Cloudflare account for a while now; it's my go-to DNS manager, even on the free tier.

My preference for configuring rules in Cloudflare is to use what they call "Page Rules" which gives access to all the relevant cache settings on one page.

I set up a rule for all pages (*) using the following settings. Not much to it, just bumped up the maximum cache/TTL settings and set a medium security level.
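For reference, the same rule can also be expressed through Cloudflare's v4 API as a Page Rule object. This is a rough sketch only - the URL pattern and exact TTL value here are illustrative placeholders, not my actual settings:

```json
{
  "targets": [
    {
      "target": "url",
      "constraint": { "operator": "matches", "value": "*example.com/*" }
    }
  ],
  "actions": [
    { "id": "cache_level", "value": "cache_everything" },
    { "id": "edge_cache_ttl", "value": 2678400 },
    { "id": "security_level", "value": "medium" }
  ]
}
```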

GZip is a must, but in my haste to set up the new environment I didn't enable it.

After enabling GZip and setting Cloudflare to minify what it could, the results were looking a little better. But there is still room for improvement.
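If, like me, your Ghost container sits behind a reverse proxy, gzip is usually enabled there. A minimal sketch, assuming an Nginx proxy - the directive values below are illustrative, not my exact config:

```nginx
# Compress text-based responses; images are already compressed formats
gzip on;
gzip_types text/css application/javascript application/json image/svg+xml;
gzip_min_length 1024;   # skip tiny responses where gzip overhead isn't worth it
gzip_comp_level 5;      # balance CPU cost against compression ratio
```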

Phase 2; CDN? Cloudinary

This one is obvious when looking at the total page size of the last result. And when minifying HTML only saved 0.02MB, it's time to take a look at the images.

Now, I've covered content delivery networks before. But I'm cheap and lazy, and this blog doesn't get the traffic to warrant paying a large monthly fee, so my hunt for a cheap or even free CDN was on.

One of the Ghost integrations is Cloudinary.

Utilising Cloudinary's API/fetch URLs means I can upload images in any format/size and have them resized and optimised without having to run anything extra. As a bonus, their free plan provides more than enough storage/bandwidth for what I need from it.

Because I'm running Ghost in a Docker container, only files in the content folder use persistent storage; everything else gets wiped when the container is updated or restarted.

Themes are persistent, which is why I chose to make use of Cloudinary's fetch and transformation URLs by editing a few key areas of my theme, such as post-card.hbs, post.hbs and index.hbs. This doesn't cover every image I upload, but it does cover the index page, where a lot of images are loaded, and the headers of individual posts.
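The edit itself is just a matter of wrapping the theme's image URL in the Cloudinary fetch prefix. A rough sketch of what that might look like in post-card.hbs - the markup is hypothetical and varies by theme, though {{img_url}} is Ghost's standard helper, and <USER_NAME>/<FLAGS> are placeholders as before:

```handlebars
<img class="post-card-image"
     src="https://res.cloudinary.com/<USER_NAME>/image/fetch/<FLAGS>/{{img_url feature_image absolute="true"}}"
     alt="{{title}}" />
```

The absolute="true" parameter matters here: Cloudinary's fetch endpoint needs a full URL, not a relative path.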

The fetch URL allows for flags to be set:

https://res.cloudinary.com/<USER_NAME>/image/fetch/<FLAGS>/<IMAGE URL>

After some trial and error, the flags I ended up with were:

w_600                => width 600px
h_400                => height 400px
c_fit                => fit the image into these bounds
q_auto               => set quality level to auto
f_auto               => convert to the most optimal image type available
dpr_auto             => automatically scale based on the device's pixel ratio
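Putting the pattern and the flags together: the flags are joined with commas and sit between the fetch prefix and the original image URL. A small sketch - the "demo" cloud name and the example image URL are placeholders, not my real account:

```javascript
// Build a Cloudinary fetch URL from a cloud name, a list of
// transformation flags, and the original image URL.
function cloudinaryFetchUrl(cloudName, flags, imageUrl) {
    return 'https://res.cloudinary.com/' + cloudName +
           '/image/fetch/' + flags.join(',') + '/' + imageUrl;
}

var url = cloudinaryFetchUrl(
    'demo',
    ['w_600', 'h_400', 'c_fit', 'q_auto', 'f_auto', 'dpr_auto'],
    'https://example.com/photo.jpg'
);
console.log(url);
// https://res.cloudinary.com/demo/image/fetch/w_600,h_400,c_fit,q_auto,f_auto,dpr_auto/https://example.com/photo.jpg
```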

Now, I was a little worried about f_auto: would Cloudinary serve the most optimal format even if I include an extension? The answer is yes.

Even with a jpg extension, the image will still be encoded using WebP, so long as you include the f_auto parameter.

To handle the images inside posts I threw in a little JavaScript:

<script>
    // Wait for the DOM so every image tag exists before we rewrite it
    $(function() {

        // For every image tag
        $('img').each(function(key, obj) {

            // Get the src of the image
            var src = $(obj).attr('src');

            // If the image src doesn't start with http...
            //  we can assume it's a relative path
            if (src.indexOf('http') !== 0) {
                // Strip any leading slash so we don't
                //  end up with a double slash
                src = 'https://jcode.me/' + src.replace(/^\//, '');
            }

            // If the image src doesn't have
            //  cloudinary in the string
            // Make it use cloudinary
            if (src.indexOf('res.cloudinary') === -1) {
                $(obj).attr('src',
                    'https://res.cloudinary.com/<USER_NAME>/image/fetch/<FLAGS>/'
                    +
                    src
                );
            }
        });
    });
</script>

I was prepared to see some minor savings at best, but the results actually surprised me.

With the aid of Cloudinary I'm able to upload any image in any format, have it transformed into an optimised copy, and let people load my blog in just under 500ms.

Afterthought;

Companies come and go, CDNs are no different. Some even disappear overnight.

Putting all my faith into an external entity is not something I do lightly, which is why I opted to use the URL method instead of using the storage adapter where "images are uploaded directly to Cloudinary and integrated into its media library".

In the event that Cloudinary does disappear or remove their free plan, all I have to do is remove their URL prefix from my theme and I still have a working, albeit slower, website.