Cache invalidation with memcache

“There are only two hard things in Computer Science: cache invalidation and naming things.” — Phil Karlton

Mr. Karlton was not wrong. In my day-to-day job, cache invalidation is something that can easily disrupt releases – it is easy to forget, for example, that the result of an API call is cached. It has often caused us to pause and re-evaluate exactly what our applications are doing. Some of our APIs are quite static in nature and are cached accordingly (the general rule of thumb being that the more static the data, the longer it is cached). When it comes to updating these APIs, there are often many layers of cache to bust through in order to prune stale data. If we forget to clear just one cache, the stale data can propagate back out to all the other caches in the stack. Not cool.

Not unlike an onion, this can definitely cause tears.

Memcache & Redis

Invalidating keys in Redis is relatively simple via redis-cli:

redis-cli KEYS "session:*" | xargs redis-cli DEL
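One caveat: KEYS blocks the server while it walks the entire keyspace, so on a busy instance the SCAN-based form (redis-cli grew a --scan option in 2.8) is kinder. A sketch, guarded so it is a no-op on machines without redis-cli:

```shell
# --scan iterates the keyspace with SCAN instead of the blocking KEYS
# -r tells xargs to skip running DEL entirely when nothing matches
if command -v redis-cli >/dev/null 2>&1; then
    redis-cli --scan --pattern "session:*" | xargs -r redis-cli DEL
fi
```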

memcached, on the other hand, does not support namespaced deletes, nor does it ship with a client tool for interacting with the server. The only real way to talk to it is over TCP/IP via telnet or a similar tool (such as nc). This prompted me to write a tool to invalidate cache entries quickly so that I could test these problem APIs more effectively. Below is the source code (a bash script) – it requires netcat to be installed and within your path.


#!/bin/bash

# Memcached servers to sweep, in SERVER:PORT format
SERVERS=("localhost:11211")

# Seconds netcat waits for the server before giving up
TIMEOUT=1

# Maximum number of keys to dump per slab
KEYLIMIT=10000

function usage {
    echo ""
    echo "$0 [regex]"
    echo "Used to invalidate keys on memcached servers"
    echo ""
}

function memcache_netcat {
    netcat -q $TIMEOUT $SERVER $PORT
}

function memcache_delete {
    echo "DELETING: $1"
    RESULT=$(echo "delete $1" | memcache_netcat)
}

# Parameter is required
if [ -z "$1" ]; then
    usage
    exit 1
fi

# For each server... (in SERVER:PORT format)
for definition in "${SERVERS[@]}"; do
    IFS=":"
    read -ra server <<< "$definition"
    SERVER=${server[0]}
    PORT=${server[1]}
    IFS=" "

    echo ""
    echo "Invalidating keys on: $SERVER:$PORT"
    echo "Searching for       : $1"
    echo ""

    LOOPS=0
    echo "stats items" | memcache_netcat | while read line; do
        let "LOOPS++"
        let "ITERS=$LOOPS % 10"

        # Each slab brings back around 10 statistics. Skip all but the first row.
        if [ $ITERS -eq 1 ]; then
            IFS=":"
            read -ra chunks <<< "$line"

            # If this is not a STAT items:<slab>:... row, skip it
            if [ ${#chunks[@]} -lt 3 ]; then
                continue
            fi

            SLAB=${chunks[1]}
            IFS=" "

            # Search this slab for the keys it contains
            echo "stats cachedump $SLAB $KEYLIMIT" | memcache_netcat | while read row; do
                # Strip the leading "ITEM " and everything after the key name
                KEY=`echo "$row" | tr -d '\b\r' | sed 's/^.\{4\} \([^ ]*\).*$/\1/'`
                if [[ $KEY = "END" ]]; then
                    break
                fi

                # If the key matches the search, delete it
                if [[ $KEY =~ $1 ]]; then
                    memcache_delete $KEY
                fi
            done
        fi
    done
done

echo ""
echo "DONE."
echo ""
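The key-extraction at the heart of the script can be exercised offline. The cachedump lines below are made-up samples (not from a real server), but they have the shape memcached returns: ITEM <key> [<size> b; <expiry> s].

```shell
# Feed sample "stats cachedump" output through the same tr/sed pipeline
sample='ITEM session_9fc9 [112 b; 1712000000 s]
ITEM homepage_cache [74 b; 1712000000 s]
END'

echo "$sample" | tr -d '\b\r' | while read -r row; do
    # Strip the leading "ITEM " and everything after the key name
    KEY=$(echo "$row" | sed 's/^.\{4\} \([^ ]*\).*$/\1/')
    if [ "$KEY" = "END" ]; then
        break
    fi
    if [[ $KEY =~ ^session ]]; then
        echo "would delete: $KEY"
    fi
done
```

Swap the hard-coded ^session for $1 and the here-string for live netcat output, and you are back at the real script.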
Usage example

To use this script, make sure it is executable and pass a regular expression in as the first argument. As an example, let's invalidate all keys that start with session:

chmod a+x memcache_invalidator
./memcache_invalidator ^session

Here is some sample output, showing that we have deleted two keys (that I added for testing purposes) from local memcache:

jonnu@onion:$ ./memcache_invalidator ^session

Invalidating keys on:
Searching for : ^session

DELETING: session_9fc9575c7eb47fbcdb39c2a872ea74d8
DELETING: session_2bdace452a1904970c457f7ddfd6a132


Suggestions on how to improve this tool are welcome – either comment here or just fork the project and send me a pull request on GitHub.


PNGs & browser colour management

Subtle colour differences in hex #3FA868 between browsers

Ahh, the joys of colour management. Within the realms of web development, managing colour can be a real pain. It is a well-known fact that browsers are guilty of subtle variations in how they render web pages, but the same is true of how they render colour.

You might have noticed when saving PNGs that the colour varies ever so slightly between different browsers (Firefox is usually the odd one out). The image above shows what should be #3FA868 in Chrome and Firefox, both running on Mac OS X. For designers who like their websites to look the same in all browsers this is evidently a problem, more so when trying to blend an image into a background colour.

The problem stems from how each browser handles colour. Images often come with something called a ‘colour profile’ embedded within them which allows displays to be calibrated in order to give the best colour. In the above example, the difference is caused by Firefox rendering the image with the colour profile, whilst Chrome opts to ignore it. You can, if required, turn this on in Chrome.

There are two solutions. If you are using Adobe Photoshop, ensure that PNGs are saved using the sRGB colour profile (under ‘Save For Web’). The second, and my preferred method, is to strip out the colour profile from the PNG file. This has the added bonus of shrinking the file size (sometimes as much as 25%).

This is done with pngcrush, a command-line tool. If you are not confident with the command line, there is a GUI alternative that embeds pngcrush's functionality, called trimmage (also available for Windows).

Stripping out an image's colour profile can be done with the following command:

pngcrush -q -rem gAMA -rem cHRM -rem iCCP -rem sRGB oldfile.png newfile.png
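To check that the strip worked, you can lean on the fact that PNG chunk names (gAMA, cHRM, iCCP, sRGB) are stored as literal ASCII inside the binary, so grep can spot a leftover profile. A crude sketch – photo.png here is a hypothetical file name, substitute your own:

```shell
# PNG chunk names are plain ASCII, so grep can spot a leftover profile.
# photo.png is a stand-in; point this at your own file.
if [ -f photo.png ] && grep -q iCCP photo.png; then
    echo "photo.png still carries an embedded iCCP colour profile"
fi
```

This is only a rough check (the four bytes could in theory appear inside compressed image data), but it is handy for eyeballing a batch of files.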

After performing this many, many times, I found it quite labour intensive. This is mainly because pngcrush will not let you overwrite the old file in place – it refuses to run when the source and destination are the same file. In order to bypass the monotony, I wrote a quick shell script that converts all PNGs in the current folder:


#!/bin/bash

shopt -s nullglob

echo " "
for file in ./*.png; do
	echo "Working on $file"
	pngcrush -q -rem gAMA -rem cHRM -rem iCCP -rem sRGB "$file" "$file.tmp"
	mv "$file" "$file.old"
	mv "$file.tmp" "$file"
done

echo " "
echo "Complete."
echo " "

# Remove old files
rm -f ./*.png.old

I tend to save this in a folder listed in $PATH (such as /usr/local/bin) for easy access – et voila!  PNG-based headaches are now a thing of the past.  Unless of course, you want to start a discussion on transparent PNGs and IE6…


OSX, dot underscore and .DS_Store

Like most developers these days I consider myself platform agnostic.  This has led me into several jobs where I develop exclusively on OS X, but store and stage work on non-AFP server volumes.  Of course this is no problem but it does come with its own set of idiosyncrasies.

One such annoyance is the automatic creation of ‘dot underscore’ files.  These metafiles quickly litter any non-HFS+ formatted drive (a common issue in a mixed-platform environment), and are irritating when it comes to tasks such as version control (although you can have them ignored) and archiving directories with tar. By far the easiest method for disposing of these files in OS X is via a tool called BlueHarvest.  The only downside is it’ll cost you – $14.95 USD at the time of writing.

So, is there a free alternative?  Well, yes – it does however involve a little bit of work.  You can recursively remove the offending dot underscore files with the following one-liner:

find . -name '._*' -print | xargs rm

If you find that you are having issues due to files containing spaces, you can get around this by using NUL-delimited output instead (note that -printf is a GNU extension and not available in the BSD find that ships with OS X):

find . -name '._*' -print0 | xargs -0 rm

This finds every file matching the glob pattern '._*' recursively and prints each path, and each result is then piped through 'rm'. The other common sight is the .DS_Store file, which Finder uses to store folder view metadata. These can be removed using the same snippet as above (switching '._*' for '.DS_Store'), or you can suppress their automatic creation on network volumes with this snippet:

defaults write com.apple.desktopservices DSDontWriteNetworkStores true
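The find-based cleanups above can be sanity-checked in a throwaway directory before being pointed at real work. A sketch using the NUL-delimited form (-print0/xargs -0), which survives spaces in file names:

```shell
# Build a sandbox containing awkward metafile names plus one real file
tmp=$(mktemp -d)
touch "$tmp/._thumbnail data" "$tmp/.DS_Store" "$tmp/keep.txt"

# NUL-delimited names pass through the pipe intact, spaces and all
find "$tmp" -name '._*' -print0 | xargs -0 rm
find "$tmp" -name '.DS_Store' -print0 | xargs -0 rm

ls -A "$tmp"   # only keep.txt should remain
rm -rf "$tmp"
```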