<![CDATA[Adventures of a Technomancer]]>https://jcode.me/https://jcode.me/favicon.pngAdventures of a Technomancerhttps://jcode.me/Ghost 2.9Thu, 27 Apr 2023 10:41:16 GMT60<![CDATA[Laravel, Dropzone.js & S3 the right way]]>https://jcode.me/laravel-dropzone-the-right-way/Ghost__Post__59e18ee53c9e4f0deb8984f5Sun, 17 Nov 2019 00:14:10 GMTProblemLaravel, Dropzone.js & S3 the right way

All right, here's your motivation: Your name is Lucas, you're an average developer who wants to create beautiful things you can be proud of. One day, you'll think about hitting your clients with a "computers for dummies" book.

No, forget that part. We'll improvise... just keep it kind of loosey-goosey. You want to create a form to allow clients to upload files, and you need it done yesterday! ACTION.

Most developers start out by doing something like this;


Uploads hit the server, probably landing in some web-accessible directory like /uploads/, and everything seems great.

Some might even learn about attack vectors and move the uploads directory to a non-web-accessible directory.

Then comes the day when you learn about S3. Storing files on someone else's computer? Sign me up.

So the easiest solution is often to copy files from your server to S3 after an upload.


But this is the wrong thing to do: now you're touching the file twice, paying for double the bandwidth, and, on large files, adding enough latency to make any user sick of waiting.

So, how do we fix it?

Assuming you have already set up an S3 filesystem, the first step is to create an endpoint that generates a signed URL for the client to upload files to.

In this example I'm calling it `s3-url`.

Route::get('/s3-url', 'SignedS3UrlCreatorController@index');

And the relevant controller logic.

use Illuminate\Support\Facades\Storage;
// use Ramsey\Uuid\Uuid; // only needed if you uncomment the UUID example below

class SignedS3UrlCreatorController extends Controller
{
    public function index()
    {
        return response()->json([
            'error'          => false,
            'url'            => $this->get_amazon_url(request('name')),
            'additionalData' => [
                // Uploading many files and need a unique name? UUID it!
                //'fileName' => Uuid::uuid4()->toString()
            ],
            'code'           => 200,
        ], 200);
    }

    private function get_amazon_url($name)
    {
        // Grab the underlying AWS SDK S3Client from Laravel's filesystem layer
        $s3     = Storage::disk('s3');
        $client = $s3->getDriver()->getAdapter()->getClient();
        $expiry = "+90 minutes";

        // Build a PutObject command for this bucket/key and presign it
        $command = $client->getCommand('PutObject', [
            'Bucket' => config('filesystems.disks.s3.bucket'),
            'Key'    => $name,
        ]);

        return (string) $client->createPresignedRequest($command, $expiry)->getUri();
    }
}
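
For reference, a successful response from this endpoint looks roughly like the following. The presigned URL is only illustrative; the exact host and query string depend on your bucket, region and SDK version.

{
    "error": false,
    "url": "https://your-bucket.s3.amazonaws.com/report.pdf?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=...&X-Amz-Expires=5400&X-Amz-Signature=...",
    "additionalData": [],
    "code": 200
}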

This means that any GET request that passes a name parameter gets back a signed URL that anyone can use.

It's probably a good idea to keep track of these URLs and/or file names in a database just in case you want to query or remove them programmatically later on.

The second step is to use something like DropzoneJS.

DropzoneJS is an open source library that provides drag’n’drop file uploads with image previews.

Dropzone will find all form elements with the class dropzone, automatically attach itself, and upload files dropped into it to the specified action attribute.

The uploaded files can then be handled just as if they had been submitted through a regular HTML form.

<form action="/file-upload" class="dropzone">
  <div class="fallback">
    <input name="file" type="file" multiple />
  </div>
</form>

But we don't want to just handle it like a regular form, so we need to disable the auto discover function.

Dropzone.autoDiscover = false;

Then create a custom configuration that watches for files added to the queue, like so;

var dropzone = new Dropzone('#dropzone',{
    url: '#',
    method: 'put',
    autoQueue: false,
    autoProcessQueue: false,
    init: function() {
        /*
            When a file is added to the queue
                - pass it along to the signed url controller
                - get the response json
                - set the upload url based on the response
                - add additional data (such as the uuid filename) 
                    to a temporary parameter
                - start the upload
        */
        this.on('addedfile', function(file) {
            fetch('/s3-url?name=' + encodeURIComponent(file.name), {
                method: 'get'
            }).then(function (response) {
                return response.json();
            }).then(function (json) {
                dropzone.options.url = json.url;
                file.additionalData = json.additionalData;
                dropzone.processFile(file);
            });
        });

        /*
            When uploading the file
                - make sure to set the upload timeout to near unlimited
                - add all the additional data to the request
        */
        this.on('sending', function(file, xhr, formData) {
            xhr.timeout = 99999999;
            for (var field in file.additionalData) {
                formData.append(field, file.additionalData[field]);
            }
        });

        /*
            Handle the success of an upload 
        */
        this.on('success', function(file) {
            // Let the Laravel application know the file was uploaded successfully 
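            //  e.g. POST the file name (or the UUID from additionalData) to an
            //  endpoint in your own app here so a database record can be updated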
        });
    },
    sending: function(file, xhr) {
        var _send = xhr.send;
        xhr.send = function() {
            _send.call(xhr, file);
        };
    },
});

This provides the following upload flow: the browser asks the Laravel app for a signed URL, then uploads the file directly to S3.


No double handling and secure S3 uploads, nice.

From here it's possible to expand the uploading and handling logic to update database records. But I'll leave that to you.


All source is available on GitHub for your viewing pleasure: JasonMillward/laravel-dropzone.

NB:

Don't forget the CORS config for your S3 bucket

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <AllowedMethod>PUT</AllowedMethod>
        <AllowedMethod>POST</AllowedMethod>
        <AllowedMethod>DELETE</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <ExposeHeader>Access-Control-Allow-Origin</ExposeHeader>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
</CORSConfiguration>
]]>
<![CDATA[My wedding party knives]]>https://jcode.me/wedding-knives/Ghost__Post__5c36b78fc08e2a00017ae095Fri, 29 Mar 2019 00:06:00 GMT

Although we had a little help with things that required specialty tools, such as the water jet cutting and the laser engraving, these knives are handmade from start to finish by myself and my bride-to-be.

This is the process we undertook in order to create them.

Knives are brought into this world as steel, rolled out into flat bars which are then cut into the rough shape of the knife using a high-pressure water jet cutter - imagine a pressure washer on steroids.

The edges on these knives are ground in the style of a full flat grind, which is thickest at the spine for strength, but tapers down into a relatively thin edge for excellent slicing.


A full flat grind is often stronger than a hollow grind, and will cut better than a sabre grind.

Once the grinding is completed, sanding starts at 80 grit and progresses to 180 grit in preparation for heat treating. Doing a little work now saves about two hours per knife later on.


From here each blade is heated up to non-magnetic temperatures and allowed to cool down naturally.

This is performed twice before having clay slathered all over the blade to allow for differential hardening and to produce a rather fetching hamon - the difference between hard and soft steel.

Once the clay has dried the blades are once again heated up to a non-magnetic temperature - about 900°C - and quenched in warm oil.

The blades in this state are incredibly fragile and if they were dropped they would shatter.

To counter this the blades are thrown into an oven and tempered at 200°C for 2 hours.

When everything has cooled down the knives are ground again, this time to a closer finish and hand sanded, starting from 80 grit sandpaper and progressing through grits 120, 180, 220, 400 to finally end up at 600 grit.

The last step is finishing them with varying grades of a Scotch-Brite-like material.


The handles are made out of wood and made in pairs;

  • Two handles made from red river gum that was slightly eaten by termites with gaps filled with resin and black dye, bolsters made from jarrah.
  • Two are made from African mahogany with gidgee bolsters.
  • The last two handles are ancient bog oak, donated by one of the groomsmen, the gaps filled with resin and a purple pearlescent dye and bolsters made from jarrah.

Holes are drilled down the middle of the handle to a length of 120mm, whereas the bolster has the shape carved out by hand using two of my grandfather's chisels.

To test the fit of everything the knife is inserted into the bolster and handle. If it looks good to go the wood is glued together temporarily using CA glue.


When the CA glue has cured, the handle is sanded down into a hexagonal shape with the butt and bolster getting slightly rounded on the tips.

Sanding is done progressively from 80 grit to 600 grit; the handle is then given a sizable dosage of food grade mineral oil and beeswax to seal the wood.

Given a few days for the oil to seep into the wood, it gets a quick polish using a sisal buffing wheel.


Each knife is now glued into the handle securely using a two part epoxy which is left to cure in an upright position for 3 days to achieve maximum bond strength.

Using varying grades of diamond hones and a leather strop, the blades eventually come to a razor-sharp edge.

Finally everything is given a once over, any blemishes polished out using a microfibre cloth and cleaned up with a thin layer of food grade mineral oil and packaged, ready for the big day.

]]>
<![CDATA[Octoprint: Push notifications]]>https://jcode.me/octoprint-push-notifications/Ghost__Post__5c863c1eeaeffb0001272e14Tue, 12 Mar 2019 20:00:00 GMT

Since I'm touching on a few things Octoprint this week I thought I'd also post my method of getting regular updates from my printer when it's printing.

At the time of posting I'm currently writing an update to the Octoprint plugin to do this all internally, but in the meantime this will have to do.

EDIT:

Pull request merged! You should use the plugin instead of the script below.


Cron this up as often as you'd like to get notifications; I personally chose once an hour.
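
For example, assuming the script is saved as /home/pi/printNotify.py (the path and file name here are just placeholders), the hourly crontab entry looks like this;

0	*	*	*	*	python /home/pi/printNotify.py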

It doesn't send notifications unless the printer is printing.

# Python 2 script; Client comes from a Pushover client library (e.g. python-pushover)
import json
import urllib

from pushover import Client
from urllib2 import Request, urlopen


def octoprint(path):
    q = Request("http://10.0.0.60/api/{}".format(path))
    q.add_header("X-Api-Key", "[OCTOPRINT API KEY]")
    a = urlopen(q).read()

    return json.loads(a)


def sendSnapshot():
    client = Client("[PUSHOVER CLIENT KEY]", api_token="[PUSHOVER API TOKEN]")
    urllib.urlretrieve("http://10.0.0.60/webcam/?action=snapshot", "/tmp/snapshop.jpg")

    job = octoprint("job")

    message = "{}% Complete".format(round(job['progress']['completion'], 2))

    with open("/tmp/snapshop.jpg", "rb") as image:
        client.send_message(message, attachment=image)


if __name__ == "__main__":

    printer = octoprint("printer?history=false&limit=0")

    if printer["state"]["text"] == "Printing":
        sendSnapshot()

]]><![CDATA[Octoprint: turn off your webcam automatically]]>https://jcode.me/octoprint-turning-off-webcam-automatically/Ghost__Post__5c845393f22ccd0001fe0dc2Mon, 11 Mar 2019 10:40:00 GMT

I'm not a fan of leaving a webcam running unattended in my home, even if it's pointed at my printer and a wall.

So I made a little self-contained Python script that reads the printer's status from Octoprint. When the printer is printing, the webcam is on, allowing for timelapses and snapshots to be sent. If the printer is off or has just completed printing, the webcam is also off.

Running every minute via Cron is a timely way to trigger the script and poll the printer's status.

*	*	*	*	*	python /home/pi/autoWebcam.py

import json
from urllib2 import Request, urlopen
import psutil
import os


def octoprint(path):
    q = Request("http://10.0.0.60/api/{}".format(path))
    q.add_header("X-Api-Key", "[YOUR API KEY GOES HERE]")
    a = urlopen(q).read()

    return json.loads(a)


def checkWebcamD():
    for proc in psutil.process_iter():
        if proc.name() == "mjpg_streamer":
            return True

    return False


if __name__ == "__main__":

    printer = octoprint("job")

    if "Offline" in printer['state'] or "Operational" in printer['state']:
        if checkWebcamD():
            os.system("/etc/init.d/webcamd stop")

    if "Printing" in printer['state']:
        # If we are printing, check the webcam

        # if the webcam service is not running, turn it on
        if not checkWebcamD():
            os.system("/etc/init.d/webcamd start")
]]>
<![CDATA[Find missing content with wget spider]]>https://jcode.me/find-missing-content-with-wget-spider/Ghost__Post__5c2d54d507503f00017d6b22Thu, 03 Jan 2019 23:00:00 GMT

After moving my blog from DigitalOcean a month ago I've had Google Search Console send me a few emails about broken links and missing content. And while fixing those was easy enough once pointed out to me, I wanted to know if there was any missing content that GSC had not found yet.

I've used wget before to create an offline archive (mirror) of websites and even experimented with the spider flag but never put it to any real use.

For anyone not aware, the spider flag allows wget to function as an extremely basic web crawler, similar to Google's search/indexing technology. It can be used to follow every link it finds (including links to assets such as stylesheets) and log the results.

Turns out, it’s a pretty effective broken link finder.

Installing wget with debug mode

Debug mode is required for the command I'm going to run.

On OSX, using a package manager like Homebrew allows for the --with-debug option, but it doesn't appear to be working for me at the moment; luckily, installing it from source is still an option.

Thankfully cURL is installed by default on OSX, so it's possible to use that to download and install wget.

Linux users should be able to use wget with debug mode without any additional work, so feel free to skip this part.

Download the source

cd /tmp
curl -O https://ftp.gnu.org/gnu/wget/wget-1.19.5.tar.gz
tar -zxvf wget-1.19.5.tar.gz
cd wget-1.19.5/

Configure with openSSL

./configure --with-ssl=openssl --with-libssl-prefix=/usr/local/ssl

Make and install

make
sudo make install

With the installation complete, now it's time to find all the broken things.

Checking your site

The command to give wget is as follows. It writes its log to your home directory ~/ and may take a little while to run, depending on the size of your website.

wget --spider --debug -e robots=off -r -p http://jcode.me 2>&1 \
        | egrep -A 1 '(^HEAD|^Referer:|^Remote file does not)' > ~/wget.log

Let’s break this command down so you can see what wget is being told to do:

  • --spider, this tells wget not to download anything.
  • --debug, gives extra information that we need.
  • -e robots=off, this one tells wget to ignore the robots.txt file.
  • -r, this means recursive so wget will keep trying to follow links deeper into your sites until it can find no more.
  • -p, get all page requisites such as images, styles, etc.
  • https://jcode.me, the website url. Replace this with your own.
  • 2>&1, take stderr and merge it with stdout.
  • |, this is a pipe, it sends the output of one program to another program for further processing.
  • egrep -A 1 '(^HEAD|^Referer:|^Remote file does not)', find instances of the strings "HEAD", "Referer" and "Remote file does not". Print out these lines and the line directly after each.
  • > ~/wget.log, output everything to a file in your home directory.

Reading the log

Using grep we can take a look inside the log file, filtering out all the successful links and resources, and only find references to the lines which contain the phrase broken link.

grep -B 5 'broken' ~/wget.log

It will also return the 5 lines above that line so that you can see the resource concerned (HEAD) and the page where the resource was referenced (Referer).

An example of the output;

--
HEAD /autorippr-update/ HTTP/1.1
Referer: https://jcode.me/makemkv-auto-ripper/
User-Agent: Wget/1.16.3 (darwin18.2.0)
--
Remote file does not exist -- broken link!!!


--
HEAD /content/images/2019/01/tpvd27rgco7ssa21.jpg HTTP/1.1
Referer: https://jcode.me/makemkv-auto-ripper/
User-Agent: Wget/1.16.3 (darwin18.2.0)
--
Remote file does not exist -- broken link!!!
]]>
<![CDATA[How much power does a 3D printer use?]]>https://jcode.me/how-much-power-does-a-3d-printer-use/Ghost__Post__5c2350817476b30001331c5bThu, 27 Dec 2018 06:30:00 GMT

The three greatest questions a man can ask; What is OK short for? Why do I have nipples? and How does my mum know I'm lying?  

These questions have no known answer and have confused geologists the world over.

Today, I add another great question to the mix; How much power does my 3D printer use?


In this post I'll be measuring arbitrary things that get hot or move on my Prusa i3 Mk3. The printer is running firmware version 3.5.1 - not that it will matter much, but it's better to have too many points of data than none.

The current outside temperature is 39°C, inside it's only cooler in a few rooms, but not in the room dedicated to noise and electronics.

The printer tells me that its bed and hot end temperatures are sitting at 35°C.

Conditions in this room are pretty stable today, no fans or AC so there should be no external influences interfering with the science.

I'm using a "Smart Wi-Fi Plug with Energy Monitoring" the HS110 from TP-Link.

Let's measure some watts!


Off

First up is the printer off, only smart-switch using power.


Idle

Then we have the printer on, finished its boot sequence and sitting pretty, no fans spinning, no heaters engaged.


Stepper motors

Turning on the X, Y, or Z steppers adds an additional 2w each; whether they are moving or not doesn't matter, due to the way stepper motors work.


Printer bed

Cranking the bed temperature up to regular PLA settings (60°C) shows the printer drawing 240w at peak, slowly fading off until, after 3 minutes, the desired temperature has been reached.

As the bed cools down the printer draws between 19w and 100w as the heater kicks in.


Hot end

The hot end takes 2 minutes to heat up to a toasty 195 degrees, drawing a maximum of 60w which tapers off quickly, then uses between 19w and 40w in similar fashion to the heated bed.

Fan

Honestly the fan barely made a bump in the overall watts being drawn; at 100% speed it added between 1 and 2 watts.


Starting a print

With both the bed and the hot end heating up, the printer draws 286 watts at the start. Exactly as before, it quickly fades as they reach the desired temperatures.


After the printer has concluded its pre-heating phase it begins mesh bed leveling. The moving, probing and calculating of meshes doesn't even register on the watt meter as the bed needs to be continuously warmed during the process.


Printer in action

Now that the printer has been warmed up for a few minutes it's time to start an actual print and produce the results that are useful.

Using #3D Benchy as a test print for this part of the experiment gives a total run time of 1.5 hours.

The smart switch I'm using can export average data for every minute.

Removing idle and null power usage gives an average of 86.07 watts; removing the warm-up power usage reduces the average to 84.13 watts.

Since power is often paid for in kilowatts we need to break out the calculator.

Divide the number of watts by 1,000.

Remember to show your working. The result is 0.08413kW

Now, to turn that number into kWh, multiply by hours.


The following might not help out so much, since it's only for South Australia's power prices. But if you happen to live in the area...

SA Power Networks thinks power grows on trees like oranges, and requires extra work to juice out the sweet sweet fuel it uses to run the generators. Current prices are 43.67 cents per kWh.


That's a total of 5.5 cents of power for each benchy. Or 3.6 cents per hour of run time.
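
For anyone who wants to plug their own numbers in, here's the whole calculation in a few lines of Python, using the figures from this post:

average_watts = 84.13    # average draw while printing, warm-up removed
print_hours   = 1.5      # 3D Benchy run time
rate_per_kwh  = 0.4367   # SA Power Networks, dollars per kWh

kwh_used = average_watts / 1000.0 * print_hours   # ~0.126 kWh
cost     = kwh_used * rate_per_kwh

print("Cost per print: {:.1f} cents".format(cost * 100))               # ~5.5 cents
print("Cost per hour:  {:.1f} cents".format(cost / print_hours * 100))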

Handy when calculating how much to charge for parts.

]]>
<![CDATA[Laravel + Android push notifications]]>https://jcode.me/laravel-android-push-notifications/Ghost__Post__5c1b2988f7c6990001490793Mon, 24 Dec 2018 04:52:29 GMT

Webviews. Android developers dislike using them as an entire app replacement and I'm no different. But when your client has no money left after spending it all on iOS development, do you tell them no, or do you offer an alternative solution?

I would hazard a guess, and say this is where a decent chunk of webview apps come from.

There's a project in the pipeline at my work where this has happened and I can foresee it happening again as Android users are still seen as filthy peasants who don't deserve a native application.


While webview apps are easy enough to build, I'm looking at making the experience a little nicer with some push notifications, on-device caching by default, and a splash screen while loading.

I'm going to update the template I'm building over on GitHub, and I'll post a few more updates as I add features.

JasonMillward/android-laravel-webview-push-notifications
]]>
<![CDATA[Auto-purging Cloudflare's cache with Zapier]]>https://jcode.me/auto-purging-cloudflare-cache/Ghost__Post__5c16e5299de85300014875f7Mon, 17 Dec 2018 21:30:00 GMTAutopurgeAuto-purging Cloudflare's cache with Zapier

It sounds nasty but with my aggressive caching I need a way to purge the cache when I publish a new blog post. An automated way would be best. Thus; auto-purge.

Enter Zapier. The IFTTT for webhooks.


The process works like this:

  1. I publish a new post
  2. Ghost fires off a webhook to Zapier
  3. Zapier receives this webhook, and sends a request to Cloudflare via API
  4. Cloudflare purges

In order to set this up like I have, you'll need a few things;

  1. A Zapier account
  2. A Cloudflare account
  3. The email address for your Cloudflare account
  4. Your site’s Zone ID
  5. Your Cloudflare API key

Once you've got everything prepared, head on over to Zapier and create a new Zap.

Trigger

You'll want to find Ghost as the trigger app.


And set the trigger to a new story. Defining the published status comes a bit later in the setup.


Connect to your Ghost instance. You'll need to define a full URL, https:// and all.


And here is where we set the trigger status to be published. There are others available but published is the status we care about for this Zap.


Action

Cloudflare doesn't have an app in Zapier, so we'll have to make do with a webhook...


... a custom webhook!


When setting up the webhook it needs to be a POST request to the URL: https://api.cloudflare.com/client/v4/zones/<Zone ID>/purge_cache

Ignore data pass-through.

Set the data to the following JSON;

{
    "purge_everything":true
}

And add in two headers: their keys are X-Auth-Email and X-Auth-Key, and their values are your Cloudflare account email and API key.


Once that's done hit continue and test the webhook.
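
If you'd like to sanity-check the Zone ID and API key outside of Zapier first, the same request can be fired from a few lines of Python - a sketch only, fill in your own values:

import requests

ZONE_ID    = "<Zone ID>"
AUTH_EMAIL = "you@example.com"      # your Cloudflare account email
AUTH_KEY   = "<Cloudflare API key>"

response = requests.post(
    "https://api.cloudflare.com/client/v4/zones/{}/purge_cache".format(ZONE_ID),
    headers={"X-Auth-Email": AUTH_EMAIL, "X-Auth-Key": AUTH_KEY},
    json={"purge_everything": True},
)

print(response.json())   # look for "success": true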


Done!

With all of that hard work out of the way your super cache purging adventure can begin.

Happy blogging!

]]>
<![CDATA[Speeding up Ghost or; how I moved everything to the cloud]]>https://jcode.me/speeding-up-ghost/Ghost__Post__5c122b8ad012490001e4c1feFri, 14 Dec 2018 11:49:46 GMT

My Blog used to be hosted over at DigitalOcean, on one of the smallest droplets available. But as times changed I wanted to move it in house - like actually into my house.

I have an UnRaid server serving my Plex library and handling my device backups via Docker, so I figured why not bring Ghost into the mix as well.  

'Installing' Ghost was straightforward and it was pretty fast locally, but externally the site took over 5 seconds to fully load, sometimes even taking in excess of 15 seconds.

Google has said a few things on page load times in the past, here are two of the most popular quotes;

"The average time it takes to fully load the average mobile landing page is 22 seconds. However, research also indicates 53% of people will leave a mobile page if it takes longer than 3 seconds to load."
"2 seconds is the threshold for ecommerce website acceptability. At Google, we aim for under a half second."

2 - 3 seconds, this sounds reasonable to me.

While this blog is not a commercial website it seems like I have a goal.


Phase 0; Initial load times

What would a little optimisation be without a baseline?

I'm going to be using GTmetrix throughout this experiment. There are many sites out there that test a site's speed. Using one or the other doesn't make too much difference, but using one and only one will produce consistent and meaningful results.


As you can see above, the main page takes 10+ seconds to load 4MB over 32 requests. Not going to win any records for speed, that's for sure.

Phase 1; Cloudflare and GZIP

I've had a Cloudflare account for a while now; it's my go-to DNS manager, even on the free tier.

My preference for configuring rules in Cloudflare is to use what they call "Page Rules" which gives access to all the relevant cache settings on one page.

I set up a rule for all pages * using the following rules. Not much to it, just bumped up the maximum cache/TTL settings and a medium security level.


GZip is a must, but in my haste to set up the new environment I didn't enable it.

After enabling both GZip and setting Cloudflare to minify what it could the results were looking a little better. But there is still room for improvement.


Phase 2; CDN? Cloudinary

This one is obvious when looking at the total page size of the last result. And when minifying HTML only saved 0.02MB, it's time to take a look at the images.

Now I've covered content delivery networks before. But I'm cheap and lazy; this blog doesn't get the traffic to warrant paying a large monthly fee, so my hunt for cheap or even free CDNs was on.

One of the Ghost integrations is Cloudinary.

Utilising Cloudinary's API/Fetch URLs means I can upload images in any format/size and have them resized and optimised without having to run anything extra. As a bonus, their free plan provides more than enough storage/bandwidth for what I need from it.

Because I'm running Ghost in a Docker container only files in the content folder are using persistent storage, everything else gets wiped when the docker container is updated or restarted.

Themes are persistent, which is why I chose to make use of their fetch and transformation URLs by editing a few key areas such as post-card.hbs, post.hbs and index.hbs. This doesn't cover every image I upload but it does cover the index page where a lot of images are loaded and the headers of individual posts.

The fetch URL allows for flags to be set;

https://res.cloudinary.com/<USER_NAME>/image/fetch/<FLAGS>/<IMAGE URL>

After some trial and error the flags I ended up with were;

w_600                => width 600px
h_400                => height 400px
c_fit                => fit the image into these bounds
q_auto               => set quality level to auto
f_auto               => convert to the most optimal image type available
dpr_auto             => automatically scale based on the devices pixel ratio
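
Chained together (Cloudinary transformations are comma-separated within the URL), a fetch request ends up looking something like this - the account name and image path are placeholders:

https://res.cloudinary.com/<USER_NAME>/image/fetch/w_600,h_400,c_fit,q_auto,f_auto,dpr_auto/https://jcode.me/content/images/example.jpg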

Now, I was a little worried about f_auto: would Cloudinary serve the most optimal format even if I include an extension? The answer is yes.

Even with the jpg extension the image will still be encoded using WebP so long as you include the f_auto parameter.

To handle the image inside posts I threw in a little javascript;

<script>
    // For every image tag
    $('img').each(function(key, obj) {
    
        // Get the src of the image
        var src = $(obj).attr('src');

        // If the image src doesn't have http...
        //  we can assume it's a relative path
        if (src.indexOf('http') === -1) {
            src = 'https://jcode.me/' + src; 
        }

        // If the image src doesn't have 
        //  cloudinary in the string
        // Make it use cloudinary
        if (src.indexOf('res.cloudinary') === -1) {
            $(obj).attr('src',
                'https://res.cloudinary.com/<USER_NAME>/image/fetch/<FLAGS>/' 
                + 
                src
            ); 
        }
    });
</script>

I was prepared to see some minor savings at best, but the results actually surprised me.

With the aid of Cloudinary I'm able to upload any image in any format, have it transformed into an optimised copy, and let people load my blog in just under 500ms.


Afterthought;

Companies come and go, CDNs are no different. Some even disappear overnight.

Putting all my faith into an external entity is not something I do lightly, which is why I opted to use the URL method instead of using the storage adapter where "images are uploaded directly to Cloudinary and integrated into its media library".

In the event that Cloudinary does disappear or remove their free plan, all I have to do is remove their URL prefix from my theme and I still have a working albeit slower website.

]]><![CDATA[Git hooks and Freshdesk]]>https://jcode.me/git-hooks-and-freshdesk/Ghost__Post__5c108461a27a880001ad6d85Wed, 12 Dec 2018 09:29:01 GMT

When your team is using Freshdesk to manage support requests that require changes in a Git repository, tracking changes manually can be annoying and time consuming.

With Git 2.9+, global hooks can take care of the trouble: add a folder of hooks and simply let Git know about it.

At my company, all Freshdesk users install the hooks with 3 lines from the repository's readme.

cd ~/repos
git clone git@gitlab.com:jcode/freshdesk-commit-hook.git
git config --global core.hooksPath /Users/$USER/repos/freshdesk-commit-hook

The hook itself is a post-commit hook that reads the most recent commit and, as long as the developer has followed the outlined commit message structure (FD#[0-9]+), picks up the ticket number and leaves a private note on that ticket with a link to the repo's commit diff and the output of git log -1 --format=medium.
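
In spirit, the hook boils down to something like this sketch - not the actual code from the repo, and the Freshdesk domain, API key and commit URL format are placeholders you'd swap for your own:

#!/usr/bin/env python
# post-commit (sketch): leave a private Freshdesk note when the commit message
# contains an FD#<ticket> reference.
import re
import subprocess
import requests

FRESHDESK_DOMAIN  = "yourcompany.freshdesk.com"                      # placeholder
FRESHDESK_API_KEY = "<your Freshdesk API key>"                       # placeholder
COMMIT_URL        = "https://gitlab.com/jcode/your-repo/commit/{}"   # placeholder

log = subprocess.check_output(["git", "log", "-1", "--format=medium"]).decode()
sha = subprocess.check_output(["git", "rev-parse", "HEAD"]).decode().strip()

match = re.search(r"FD#(\d+)", log)
if match:
    note = "Commit: {}\n\n{}".format(COMMIT_URL.format(sha), log)
    requests.post(
        "https://{}/api/v2/tickets/{}/notes".format(FRESHDESK_DOMAIN, match.group(1)),
        auth=(FRESHDESK_API_KEY, "X"),
        json={"body": note, "private": True},
    )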

An example commit;

git commit -m "FD#0001 - Addressing clients request and making a change" 

And of course, the result.


<![CDATA[A rant on the naming of projects]]>https://jcode.me/a-rant-on-the-naming-of-projects/Ghost__Post__598301a21bef1564f80a83c1Sun, 06 Aug 2017 01:30:00 GMTA rant on the naming of projects

Developers are an interesting group when it comes to naming creations.

In my opinion there are a few distinct groups: developers who name the application after something it does (Transcoder, Mediator, Console, make); those who name it after something it does and affix a verb (Autorippr); those who call it something seemingly nonsensical (Yarn, Laravel, Composer); and those who abbreviate a short description of what it does (NPM, RVM).

NB: Dropping vowels is optional

Most of the above are descriptive enough in their own name. Just by looking at them you could guess at what most of them do. Transcoder transcodes, Autorippr rips automatically, Node Package Manager manages node packages. You don't need a short paragraph to explain what they do.

Recently I've been seeing a trend in the tech world, take something boring but simple and make it sound hip and cool. Appeal to the developers to show how laid back the business is. This ain't your daddy's corporate environment any more.

There are some things that these names work well for, sprints for example.

Sprints come and go, most lasting 2 weeks or less and get an incrementing number.

  • Sprint 1
  • Sprint 2
  • Sprint 3

Now you can make things fun and name them after Pokemon, the doctors of Dr. Who or elements from the periodic table...

  • Koffing
  • Cadmium
  • Christopher Eccleston

These work because you almost never refer to the sprint number as part of your development cycle. Tickets yes, sprint numbers, not really.

This is the perfect spot to inject some fun.

Streams or projects on the other hand can't handle that sort of renaming.

Take these 100% original, boring names

  • Mobile stream
  • Web stream
  • Ops stream
  • QA stream

These are boring, but you know exactly what they contain.

You're not writing a fantasy novel, or creating a hip new start up, you need something people understand today, tomorrow, and whenever people talk about it to others out of the loop.

If you were to read these, could you tell me what kind of work went on in the streams?

  • Baratheon Stream
  • Lannister Stream
  • Targaryen Stream
  • Tully Stream
  • Stark Stream

They're not fun, they're not hip. They. are. confusing.

If you have to describe the stream as "Baratheon (Ops)" then why waste time? Just say "Ops".

Keep It Simple, Stupid

I'm not against fun.


Fun is good, it keeps the team together. Just have fun in the right places.

]]>
<![CDATA[Bulk remove time from JIRA]]>https://jcode.me/bulk-remove-time-from-jira/Ghost__Post__597a84334c50b41f8629ff74Sat, 29 Jul 2017 10:32:15 GMTBulk remove time from JIRA


If you delete an entry and copy the request as cURL, then repeat it changing only the time entry ID, you can remove your own entries in bulk.

Here's a simple bash script to do exactly that.

#!/bin/bash

# First Tempo worklog ID to delete, and how many IDs to step through
START=31052
END=$(($START + 356))

# Headers copied from the browser's "copy as cURL" (truncated here)
ARGS="-H 'Pragma: no-cache' -H 'Origin: https://app.tempo.io' ..."

for i in `seq $START $END`;
do
    curl "https://app.tempo.io/rest/tempo-timesheets/4/worklogs/$i" -X DELETE $ARGS &
done
]]>
<![CDATA[Overcoming 5MB of localStorage with LZW compression]]>https://jcode.me/overcoming-5mb-of-localstorage-with-lzw-compression/Ghost__Post__597a84334c50b41f8629ff73Fri, 16 Jun 2017 01:40:00 GMTOvercoming 5MB of localStorage with LZW compression

In one of my recent projects I was asked to build a web-based interface for a student learning platform.

The client had specified that the app was intended for use in regional Australia, where the internet is mostly slow or non-existent and frequent API calls were not something to rely upon.

So I started work on the app, having it preload the required content and store it in localStorage. Part of the preloaded content was a dictionary of translations which ended up being very large.

When development was nearing completion, we started cross-browser testing. Chrome and Firefox both performed perfectly, but Safari couldn't get past the preloading stage.

As it turns out, Safari has a limit of 5MB on localStorage and the data being pulled down was easily going over that.

classrooms  = 1577.72 KB
dictionary  = 4946.14 KB
story       = 59.66 KB
Total       = 6583.65 KB

Since I was storing the data as JSON I thought that using some compression would be the way to go since there's a lot of duplicate strings and plain text.

As it turns out, compressing the data was easy enough and reduced the data being stored by a massive 85%.

classrooms  = 617.89 KB
dictionary  = 373.52 KB
story       = 14.10 KB
Total       = 1005.64 KB

This also has an added bonus of allowing more customer created data to come through the API without hitting that 5MB limit right away.

Here's a quick rundown of what I did.


Add dependency

yarn add lz-string

Import

window.LZString = require('lz-string');

Setter

window.setStorage = function(key, value) {
    localStorage.setItem(
        key,
        LZString.compress(
            JSON.stringify( value )
        )
    );
};

Getter

window.getStorage = function(key) {
    return JSON.parse(
        LZString.decompress(
            localStorage.getItem(key)
        )
    );
};

And you're done. Just reference getStorage/setStorage instead of localStorage.getItem/localStorage.setItem and it's compressed data all day, every day.




WebSQL was looked at in conjunction with using localStorage but W3C ceased working on the specification in November 2010.

]]>
<![CDATA[Extracting .png files from a .bin sprite sheet]]>https://jcode.me/extracting-png-files-from-a-bin-sprite-sheet/Ghost__Post__597a84334c50b41f8629ff72Fri, 12 May 2017 03:27:37 GMTExtracting .png files from a .bin sprite sheet

Here's a little script I wrote to extract a whole lot of .png files from a compiled sprite sheet.

The python script reads the .bin file as binary, finds the starting header of a .png file (89504E47) and the footer (49454E44AE426082) and separates it into individual images.

This may not be the world's best or most useful script, but it saved me several hours of copy and paste.

import binascii
import re
import os

for directory, subdirectories, files in os.walk('.'):
    for file in files:

        if not file.endswith('.bin'):
            continue

        filenumber = 0

        # Read the sprite sheet as raw bytes
        with open(os.path.join(directory, file), 'rb') as f:

            # Everything between a PNG header (89504E47) and
            # footer (49454E44AE426082), non-greedy
            hexaPattern = re.compile(
                r'(89504E47.*?49454E44AE426082)',
                re.IGNORECASE
            )

            # Hexlify the file so the regex can run over a plain hex string,
            # then turn each match back into bytes and write it out
            for match in hexaPattern.findall(binascii.hexlify(f.read())):

                with open('{}-{}.png'.format(file, filenumber), 'wb+') as out:
                    out.write(binascii.unhexlify(match))

                filenumber += 1
]]>
<![CDATA[Printing @ 0.2mm & 100 microns]]>https://jcode.me/printing-at-02mm-and-100-microns/Ghost__Post__597a84334c50b41f8629ff71Tue, 29 Nov 2016 11:31:00 GMTPrinting @ 0.2mm & 100 microns

I bought a 0.2mm Micro Swiss nozzle for my 3D printer after the provided 0.4mm wasn't achieving the resolution I required for my upcoming keycap printing job, and I was having a painful time getting prints to come out cleanly or stick to the bed for the first few prints.

After playing around with the settings and generally getting frustrated, I found that raising the printing temperature by 10-20°C and tweaking some of my slicer settings gave me something that worked for my Flash Forge Pro.

Nozzle Diameter:        0.2mm
Extrusion Multiplier:   0.9
Extrusion Width:        0.2mm

Primary Layer Height:   0.1mm
Perimeter Shells:       4  

First Layer Height:     110%
First Layer Width:      130%
First Layer Speed:      40%

These settings on top of a freshly leveled printing bed gave the print a rather strong grip that ensured no curling would happen.

]]>
<![CDATA[The Begärlig terrarium]]>https://jcode.me/begarlig/Ghost__Post__597a84334c50b41f8629ff6eSun, 06 Nov 2016 20:00:00 GMTThe Begärlig terrarium

In early October I was approached by a teacher and friend to help create a simple low-cost terrarium for high-school students taking biology 101 or what might eventually become biotech 101.


The idea behind the request was to teach students about photosynthesis and to give a little more hands-on experience using a terrarium and a couple of sensors enabling it to save all captured data in a format that could be easily graphed.

Luckily I have some experience with microcontrollers and other sensors;


Sensors

Now this got me thinking, what kind of sensors would be:

a) useful in this situation; and
b) simple enough to include in this project?


The first one on my list would have to be some sort of light sensor, because photosynthesis only happens during the day. But just what is the minimum amount of light required to kick off the process, and does brighter or more light increase the reaction?

Temperature and/or humidity monitoring would probably be good to watch as well from more of a botanical perspective. If the terrarium is too hot or too cold it puts the plant life at risk and potentially ruins the experiment. Another thing that may be monitored are environmental conditions to determine if they affect the rate of photosynthesis.

To prevent students from drowning their plants a soil hygrometer (moisture sensor) might be useful.

And the most important of all sensors for photosynthesis would have to be either a CO2 or an O2 sensor. Without one of these the experiment wouldn't have any measurable results.

Overall, materials that are required include the sensors, the glass container for the terrarium, soil, activated carbon or charcoal and some plants.

DHT22


The DHT22 is a low-cost digital temperature and humidity sensor. It uses a capacitive humidity sensor and a thermistor to measure the surrounding air.

Both DHT11 and DHT22 are very common sensors which have a lot of documentation and presence on the web.
These sensors are more or less identical, though the DHT22 measures more precisely; in this experiment, however, precision is not essential.

LDR


A photoresistor (or light-dependent resistor, LDR, or photocell) is a light-controlled variable resistor.

Light-dependent resistors lower their resistance as the amount of light hitting them increases, making them incredibly cheap and easy to use, but while they're not inaccurate they don't provide any quantifiable output like lux.

MQ135


The MQ series of gas sensors use a small heater inside with an electro-chemical sensor. They are sensitive for a range of gasses and are used indoors at room temperature.

The MQ135 is mainly sensitive for benzene, alcohol and smoke, but with the right calibration it can also detect CO2, making it probably the cheapest CO2 sensor available.
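
As a rough idea of what that calibration involves: MQ-series readings are usually converted with a power-law curve fit of the form ppm = a * (Rs / R0)^b, where Rs is the measured sensor resistance and R0 the resistance in clean air. Here's a small sketch with purely illustrative constants and resistances - every sensor needs its own R0 measured in fresh air:

def mq135_ppm(rs, r0, a=116.6, b=-2.77):
    # a and b are curve-fit constants; these values are only a starting point
    return a * (rs / r0) ** b

# Example: a sensor reading 48k ohms against a 75k ohm clean-air baseline
print(mq135_ppm(rs=48000.0, r0=75000.0))   # roughly 400 ppm with these made-up values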

Plants

At the time of writing moss is available in large quantities thanks to a very wet Australian winter, and to obtain some you require only a spade.

However, local Garden Centres have a variety of other suitable plants such as:

  • Hen and chick
  • Golden Clubmoss
  • Asplenium Bulbiferum
  • Small succulents
  • Cacti
  • Moss
  • Philodendron
  • Peperomia
  • Pilea
  • Bromeliads
  • Small orchids and Ferns
  • Rex Begonia
  • Aluminum Plant
  • Pothos
  • Baby Tears
  • Mini English Ivy

The terrarium

After searching for a suitable container that would meet the required height and width, I settled on the Begärlig vase from Ikea, giving this terrarium its official name.

Carbon

Pet stores have activated carbon readily available by the kg at a low cost.

Cost breakdown

This is what I paid when I purchased almost everything from eBay. Admittedly the Arduino is a clone and the parts were from China so I had to wait a few weeks for shipping, but it kept the cost down.

  • $09.99 - Begärlig vase
  • $01.09 - Activated carbon (3 KG @ $32.90 / 30 terrariums @ 100g ea)
  • $02.88 - DHT22
  • $04.15 - Arduino Uno R3
  • $02.68 - Data logger shield, SD and RTC
  • $00.99 - Light sensor
  • $01.75 - MQ135

Total cost = $23.53 AUD

3D Design

The philosophy behind the design was more or less, keep it simple. The plan was to create a lid type structure that would sit securely on the rim of the vase and hold the Arduino, shield and the 3 sensors but not necessarily keep it air tight.

After measuring the inner and outer diameter of the vase and the Arduino mounting holes I ended up with this;


The bottom structure holds onto the rim of the vase with the sensors sitting at the bottom and Arduino secured in by screws in the middle. An opening allows for a USB cable to be inserted and provide power.

The top is merely a dust cover with some ventilation holes. Reducing the risk of fire hazards and all that.

Software and 3D models

Built on the foundation of sharing knowledge, all Arduino microcontroller code is available for free. The project is open source, 100% free and everything is available on GitHub.

Everything that is part of the Begärlig terrarium is licenced under the MIT licence, which means you're free to use, modify and share it.

In keeping with the educational motif of this project I have kept track of all of the code and 3D design changes from beginning to end using git, a version control system.

Assembly

Terrariums are built on layers; each layer adds something of value to the terrarium. The 3 base layers are;

  • Gravel
  • Activated charcoal/carbon
  • Soil of some kind

With that in mind, the uppermost layers are an extension of one's creative side and there are no wrong ways to design them.

My initial creation was a simple design with an ornament placed roughly in the middle. Over time I may add or remove ornaments, change plant life or create changes in the terrain, but for now my terrarium looks a little like this:

Watch this space - detailed wiring diagram coming soon.

Data

As I've written it, the Arduino outputs all information in CSV format which means it can be opened in Excel, Calc by OpenOffice, R or almost any other application out there.

Below are some examples of the data and how it can be used.


Datetime,                Temp, Humidity,    Light,  CO2 PPM
2016/11/02 08:08:55,       26,       83,      639,      250
2016/11/02 08:08:57,       26,       83,      637,      250
2016/11/02 08:08:59,       26,       83,      635,      250
2016/11/02 08:09:01,       26,       83,      638,      232
2016/11/02 08:09:03,       26,       83,      640,      250
2016/11/02 08:09:05,       26,       83,      643,      232
2016/11/02 08:09:07,       26,       83,      636,      250
2016/11/02 08:09:09,       26,       83,      639,      250
2016/11/02 08:09:11,       26,       83,      644,      250
2016/11/02 08:09:13,       26,       83,      644,      250
2016/11/02 08:09:15,       26,       83,      645,      250
2016/11/02 08:09:17,       26,       83,      639,      250
2016/11/02 08:09:19,       26,       83,      638,      250
2016/11/02 08:09:21,       26,       83,      641,      250
2016/11/02 08:09:23,       26,       83,      650,      250
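
And as one example of how it can be used, here's a small sketch that reads a saved log (assuming it has been copied off the SD card as log.csv - the name is just a placeholder) and prints the average of each sensor column:

import csv

with open("log.csv") as f:
    rows = list(csv.DictReader(f, skipinitialspace=True))

for column in ["Temp", "Humidity", "Light", "CO2 PPM"]:
    values = [float(row[column]) for row in rows]
    print("{}: average {:.1f} over {} samples".format(
        column, sum(values) / len(values), len(values)))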

]]>
<![CDATA[The peculiar case of the malfunctioning keyboard]]>https://jcode.me/the-peculiar-case-of-the-malfunctioning-keyboard/Ghost__Post__597a84334c50b41f8629ff70Tue, 01 Nov 2016 00:00:00 GMTThe peculiar case of the malfunctioning keyboard

Have you ever thought to yourself ...

Gee, I wish I could focus more on my work and not get distracted by all these Slack and e-mail notifications

and done something about it? By any chance did you install heyfocus (https://heyfocus.com/)?

Because if you did, there's a good chance that just like Brad, you might have experienced some peculiar happenings to do with your keyboard.


Gather round and I'll tell you a story, a story of keyboards and gremlins, a story that starts many moons ago when young Brad bought himself a brand new CODE keyboard, elevating his status from lowly database guy to peerless Database Engineer - among other things - in one simple purchase.

But the rise to power was not an easy one, you see, the keyboard was damaged in transit from over the ocean! It required repair. However Brad lacked the tools and ability to fix his keyboard, so he sought the aid of his newly acquired and handsome young friend, Jason, who fixed the problem without having to search Google for the answer.