Optimizing Salsa20 in BouncyCastle

A while ago I started dabbling in cryptography a bit, and inevitably I ended up toying with the performance of the related algorithms and code. In this post I’d like to share my approach to optimizing BouncyCastle’s implementation of Salsa20.

A few observations regarding Salsa20 (and presumably other modern ciphers) in the context of performance:

  • Salsa20 uses 32-bit integers; 32-bit CPUs are dirt cheap nowadays, so there is no reason to design an algorithm around smaller words.
  • It produces a pseudo-random stream in a block-based fashion, 16 ints per block. This is a good opportunity to leverage multiple cores and the multiple execution units within a core.
  • It only uses basic, fast operations on ints: add, xor and shift.

As far as I could measure, BouncyCastle’s Salsa20 implementation works out at around 20-22 CPU cycles per byte on the latest JVMs. The fastest C implementations, according to DJB’s website, manage around 4-5 CPU cycles per byte. That is a noticeable gap; let’s see why, and what can be done about it.

Salsa20Engine.java from the BouncyCastle sources can be found here. There are two hotspots in it.

First, the salsaCore method, the actual implementation of the Salsa20 cipher. It produces a chunk of pseudo-random ints and stores it in an internal buffer of bytes. One of the recent commits is an optimisation of this part of the code. Buffering ints in local variables, as seen in that commit, can potentially reduce the number of memory lookups compared to manipulating array elements, as well as the number of boundary checks the JVM has to perform.
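
To illustrate what that buffering looks like, here is a minimal sketch of a single Salsa20 quarter-round working on local int variables (my own illustration, not the actual BouncyCastle code):

// Illustration only: one Salsa20 quarter-round with its four words held in
// local variables. Integer.rotateLeft compiles down to a single rotate
// instruction, and locals avoid the array loads, stores and bounds checks.
static int[] quarterRound(int a, int b, int c, int d) {
    b ^= Integer.rotateLeft(a + d, 7);
    c ^= Integer.rotateLeft(b + a, 9);
    d ^= Integer.rotateLeft(c + b, 13);
    a ^= Integer.rotateLeft(d + c, 18);
    return new int[] { a, b, c, d };
}
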
Unfortunately, because the algorithm requires 16 such variables, the JIT is likely to produce extra stores and loads to handle them all. Furthermore, underneath this is still good old serial code with no utilisation of SIMD execution units. The JITs in Java 7 and Java 8 can use SSE/AVX but are quite finicky about when and how they do it, and the code in Salsa20Engine.java doesn’t compel the JIT to make use of SIMD instructions. As the comment in the commit says, this optimisation yields about 15%, and that is about as far as we can go this way.

This part of the code nonetheless has the potential to yield more with SIMD, but it has to be approached from a different angle. The topic of SIMD use in the JVM doesn’t seem to be well covered, so I had to resort to experimentation and analysis of the JIT’s disassembly. Explaining it all in proper detail would take far too much space for a single post, so instead I share the full optimized source code and hope it speaks for itself. One last, somewhat generic, note: restructuring the execution flow to use SIMD entails extra data rearrangement, which in turn takes up extra CPU time and reduces the gains. This is often the case when we try to optimise one part of the picture without changing too much, or when we simply cannot carry out deeper structural changes.

The second hotspot is in the processBytes method, which implements an API entry point and xors the input stream of bytes with the sequence of bytes produced by salsaCore. The problem is that the Salsa20 algorithm operates on 32-bit ints whereas the API takes and returns streams of bytes. As I mentioned before, Salsa20Engine.java converts the ints produced by the algorithm into a buffer of bytes, which in turn is used to xor the input buffer of bytes. The xoring itself is done byte by byte, and the JIT does indeed produce code that processes it all in 8-bit chunks (including the costly loads and stores). A better approach is to keep the ints produced by the algorithm as ints and consume them as the input bytes arrive, xor’ing each input byte with the appropriate quarter of the relevant precalculated int.
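
A minimal sketch of that idea (my own illustration rather than the code in FasterSalsa20Engine.java; it assumes processing starts on a block boundary and keeps Salsa20’s little-endian byte order):

// keyStream holds the 16 ints produced by salsaCore for the current block.
static void xorWithKeyStream(int[] keyStream, byte[] in, int inOff, byte[] out, int outOff, int len) {
    for (int i = 0; i < len; i++) {
        int word = keyStream[i >>> 2];      // the precalculated int this byte falls into
        int shift = (i & 3) << 3;           // which quarter of it: bits 0-7, 8-15, 16-23 or 24-31
        out[outOff + i] = (byte) (in[inOff + i] ^ (word >>> shift));
    }
}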

To test it all, FasterSalsa20Engine.java needs to be dropped alongside Salsa20Engine.java (into org.bouncycastle.crypto.engines package path), and SalsaTest.java needs to be compiled and run against this modified BouncyCastle.

On my laptop with a Sandy Bridge CPU and Java 1.8.0_11, example output for larger blocks shows the optimised version running at roughly 2-2.5 times the throughput of the original:

        Salsa20   30.6kB :: min= 104mb/s  avg= 109mb/s  max= 111mb/s
  FasterSalsa20   30.6kB :: min= 221mb/s  avg= 227mb/s  max= 241mb/s
        Salsa20   86.5kB :: min=  99mb/s  avg=  99mb/s  max=  99mb/s
  FasterSalsa20   86.5kB :: min= 239mb/s  avg= 239mb/s  max= 239mb/s
        Salsa20   15.6kB :: min=  92mb/s  avg=  92mb/s  max=  92mb/s
  FasterSalsa20   15.6kB :: min= 231mb/s  avg= 231mb/s  max= 231mb/s
        Salsa20   72.4kB :: min=  93mb/s  avg= 100mb/s  max= 111mb/s
  FasterSalsa20   72.4kB :: min= 200mb/s  avg= 207mb/s  max= 221mb/s
        Salsa20    3.8kB :: min=  96mb/s  avg=  97mb/s  max=  98mb/s
  FasterSalsa20    3.8kB :: min= 140mb/s  avg= 193mb/s  max= 207mb/s
by jarek on 22/08/14

Automating pre-deployment sanity checks with Grunt

Grunt is a great tool for building, running and deploying ‘Single Page Apps’. I have a single grunt command to build and deploy to S3 for production, but recently I added some extra functionality to make deployment safer and even easier:

  • Abort if you are not on master branch
  • Abort if there are any uncommitted local changes
  • Abort if not up to date with the origin repo
  • Create a file revision.txt containing the deployed git revision hash, so we can GET it from the server and be sure of which revision is live
  • Automatically create a tag with the date and time.

I found a few existing pieces to implement some of these, but not all of them, and I ended up with a set of custom Grunt tasks, which I present here in the hope that they are useful to others. They could perhaps be packaged up into a Grunt plugin.

With no further ado, here is the stripped-down Gruntfile, showing just the parts relevant to this post, though the deploy-prod task definition leaves in the other task names for context in the overall flow.

module.exports = function(grunt) {

  // Load all grunt tasks matching the `grunt-*` pattern
  require('load-grunt-tasks')(grunt);

  grunt.initConfig({
    // Lots of other Grunty things
    // ...

    // Executing the 'gitinfo' command populates grunt.config.gitinfo with useful git information
    // (see https://github.com/damkraw/grunt-gitinfo for details) plus results of our custom git commands.
    gitinfo: {
      commands: {
        'status': ['status', '--porcelain'],
        'origin-SHA': ['rev-parse', '--verify', 'origin']
      }
    },

    gittag: {
      prod: {
        options: {
          tag: 'prod-<%= grunt.template.today("ddmmyy-HHMM") %>'
        }
      }
    },

    shell: {
      gitfetch: {
        command: 'git fetch'
      },
      saverevision: {
        // Save the current git revision to a file that we can GET from the server, so we can
        // be sure exactly which version is live.
        command: 'echo <%= gitinfo.local.branch.current.SHA %> > revision.txt',
        options: {
          execOptions: {
            cwd: 'dist'
          }
        }
      }
    },
  });

  grunt.registerTask('check-branch', 'Check we are on required git branch', function(requiredBranch) {
    grunt.task.requires('gitinfo');

    if (arguments.length === 0) {
      requiredBranch = 'master';
    }

    var currentBranch = grunt.config('gitinfo.local.branch.current.name');

    if (currentBranch !== requiredBranch) {
      grunt.log.error('Current branch is ' + currentBranch + ' - need to be on ' + requiredBranch);
      return false;
    }
  });

  grunt.registerTask('check-no-local-changes', 'Check there are no uncommitted changes', function() {
    grunt.task.requires('gitinfo');

    var status = grunt.config('gitinfo.status');

    if (status != '') {
      grunt.log.error('There are uncommitted local modifications.');
      return false;
    }
  });

  grunt.registerTask('check-up-to-date', 'Check code is up to date with remote repo', function() {
    grunt.task.requires('gitinfo');
    grunt.task.requires('shell:gitfetch');

    var localSha = grunt.config('gitinfo.local.branch.current.SHA');
    var originSha = grunt.config('gitinfo.origin-SHA');

    if (localSha != originSha) {
      grunt.log.error('There are changes in the origin repo that you don\'t have.');
      return false;
    }
  });

  // Some of these tasks are of course omitted above, to keep the code sample focussed.
  grunt.registerTask('deploy-prod', ['build','prod-deploy-checks','gittag:prod','aws_s3:prod']);

  grunt.registerTask('prod-deploy-checks', ['gitinfo','check-branch:master','check-no-local-changes','shell:gitfetch','check-up-to-date']);
};

We rely on a few node modules:

  • grunt-git which provides canned tasks for performing a few common git activities. We use it for tagging here.
  • grunt-gitinfo which sets up a config hash with handy data from git, and allows adding custom items easily. This helps us to query the current state of things.
  • grunt-shell which lets us run arbitrary command-line tasks. We use it to run git fetch (not supported by grunt-git, though we could probably have abused gitinfo to do it) and to save the current revision to a file. I hope that the command I use for that is cross-platform, even to Windows, but it’s only tested on Mac so far.

Hence I ended up with the following added to package.json:

    "grunt-git": "~0.2.14",
    "grunt-gitinfo": "~0.1.6",
    "grunt-shell": "~0.7.0"
by Sam Carr on 15/08/14

Things I wish I’d known about Google Docs

I have had cause to write a lot of Google Docs recently, which leaves me furnished with a stock of interesting observations that others might find helpful. With no further ado…

It doesn’t auto-number headers

I typically want my business-like docs to have numbered headings, so an H3 might be “2.4.1. Architecture considerations”. Word can just do this automatically and keep them up to date with the changing structure of your doc. Google Docs can’t, though there is a free add-on called “Table of contents” which performs a dual duty here:

  • It shows the structure of your document’s headers in a sidebar, which is incredibly handy for reviewing that structure and for navigating the doc (click to jump).
  • It can optionally renumber the headers, though it only does this when explicitly invoked via a button, which you have to remember to do after inserting new headings or restructuring. The numbering is just inserted as ordinary text in the doc as part of each header so it’s crude and non-semantic.

Rather surprisingly, the add-on can be very slow indeed to do its thing – even clicking on a link often took 3+ seconds to actually jump to the location in a 27-page doc. This is hard to fathom, but most docs are fairly short and it behaves acceptably well. Add-ons are trivially easy to install – just go to the Add-ons menu in your doc – so I would recommend everyone to dive in. Having used this particular add-on once, it’s two clicks to turn it on for any doc from the menu.

Printing is a lame experience

In Safari when you hit cmd-P to print, nothing happens. This leaves you a little bewildered, so you try again, and then you try invoking the menu item with the mouse rather than using the keyboard shortcut. A few seconds after the initial attempt, you might notice a little icon swoop up to the downloads button in the Safari toolbar – and when you click up there to check, you’ll find each of your print attempts has caused it to download a PDF of the doc, after a multi-second wait in each case, naturally. Then you curse, open the PDF in Preview and print it from there.

I suspect it’s a lot better in Chrome, but for my money there’s no excusing such a poor experience in Safari. At the very least it should give feedback to show that it’s received your request to print and is working on it, and then make it clear what it’s actually done.

You can’t have mixed orientation pages

I wanted to include a landscape format diagram on its own page. Tough – all pages in the doc must be the same orientation.

Pasting from a Google Spreadsheet doesn’t maintain formatting

This is a trivial little thing, but annoying: if I paste a table (of estimates breakdowns, say) from a Google Spreadsheet into a Google Doc, it drops some of the text alignment formatting – so cells that were left-aligned become right-aligned.

Really it’s a shame I can’t embed a Spreadsheet directly in the doc, especially where I just want to get totals added up for me.

It doesn’t have a concept of appendices

Then again, I always found Word rather wanting in handling appendices nicely.

Drawings don’t support gradients

I was shocked and dismayed (again) to see no gradients in Google Drawings. The whole story of these apps seems to be excruciating simplicity, which is great in a way, but the reluctance to gradually increase the feature set puzzles me when they’re genuinely trying to compete with Word.

In one case I resorted to rolling my own gradients by duplicating and offsetting a shape repeatedly with very low opacity (so the opacities gradually stack up), then grouping the results. You only want to try this in extreme circumstances where it’s really important to you.

Basically, it’s pretty awesome

All of those irritations aside, it’s still my go-to tool for bashing out docs, partly because I don’t have Word and am not in a hurry to acquire it. Learn the keyboard shortcuts, use the Table of contents add-on, and you can be quite effective. I suppose the simplicity may even help to concentrate on the content and structure.

That said, an online editor that had the same cloud storage and collaboration but a much improved feature set would be a big draw. Frankly it’s probably out there if only I look, but Google have done just enough to grab and retain the market.

by Sam Carr on 23/07/14

Optimising compilers as adversaries

Suppose that you want to handle some secret data in C and, in the wake of some high-profile vulnerability or other, want to take precautions against your secret being leaked. Perhaps you’d write something along these lines:

#include <string.h>

typedef struct {
  char password[16];
} secret_t;

void get_secret(secret_t* secret);
void use_secret(secret_t* secret);

void wipe_secret(secret_t* secret) {
  memset(secret, 0, sizeof(secret_t));
}

int main() {
  secret_t secret;
  get_secret(&secret);
  use_secret(&secret);
  wipe_secret(&secret);
  return 0;
}

I think you could be forgiven for assuming that this does what it says. However, if you have what John Regehr calls ‘a proper sense of paranoia’, you might actually check. Here’s an excerpt of what I got when I ran clang -S -O2 -emit-llvm on this example:

define i32 @main() #0 {
  %secret = alloca %struct.secret_t, align 1
  call void @get_secret(%struct.secret_t* %secret) #4
  call void @use_secret(%struct.secret_t* %secret) #4
  ret i32 0
}

As if by magic, wipe_secret has completely disappeared.

Read more…

by ash on 30/06/14

Dockerising an XMPP Server

As part of an internal migration of our XMPP server, we thought this would also present a good opportunity to test drive Docker to see if it would be useful for other infrastructure projects in the future. Docker is fast becoming the industry standard for deployment on Linux platforms, and for a number of good reasons:

  • Very lightweight, unlike conventional virtual machines
  • Good isolation between multiple containers running on the same host machine
  • Allows for multiple applications that rely on different versions of the same package to run on the same box
  • Provides repeatability in deployments

For this example, we’ll be looking to Dockerise the Prosody XMPP server, with a PostgreSQL backend. If you are completely new to Docker, it would be useful to read the official documentation first to familiarise yourself with the basic concepts.

To start with, we’ll consider the PostgreSQL side, which will be split across two containers. One will contain the application software (version 9.3 in this case), while the second will simply provide a container for persisting data. This means the first container can be swapped out at a later time (to upgrade to a later Postgres version, for example), while retaining the database data in the second container (which is quite desirable).

For the data container, the Dockerfile is specified as follows:

FROM busybox

# build data image:
#   docker build -t data_postgres .
# create data container:
#   docker run --name data_postgres data_postgres true
# data container directory listing:
#   docker run --volumes-from data_postgres busybox ls -al /data

RUN mkdir /data
ADD postgresql.conf /data/
ADD pg_hba.conf /data/

RUN adduser -u 5432 -D postgres
RUN chown -R postgres:postgres /data

VOLUME /data

This uses the very lightweight busybox base image, which provides a minimal set of userland software, and exposes a volume for writing to at /data. Two files with the Postgres configuration settings are also added to this directory, which can be picked up by the application container later, allowing the application container to be replaced without losing config information. A postgres user is also created with a specific UID of 5432 with ownership of this directory, meaning another container can create a postgres user with the same UID and have the correct read permissions on the directory.

As outlined in the comments at the top of the Dockerfile, we can build the image and create the container by running the /bin/true command, which exits immediately, leaving behind a container named “data_postgres” with no running processes.

For the application container, the Dockerfile is as follows:

# run postgres:
#   docker run --volumes-from data_postgres -d --name postgres postgres93

FROM phusion/baseimage:0.9.11

# disable sshd and set up baseimage
RUN rm -rf /etc/service/sshd /etc/my_init.d/00_regen_ssh_host_keys.sh
ENV HOME /root
CMD ["/sbin/my_init"]

# install postgres 9.3
RUN useradd --uid 5432 postgres
RUN apt-get update && apt-get install -y \
    postgresql-9.3 \
    postgresql-client-9.3 \
    postgresql-contrib-9.3 \
    language-pack-en

# configure postgres
RUN locale-gen en_GB
RUN mkdir /etc/service/postgres
ADD run_postgres.sh /etc/service/postgres/run

EXPOSE 5432

# Clean up APT when done.
RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

This uses the phusion/baseimage container, which is essentially an Ubuntu 14.04 image with some tweaks to the init process that can monitor and restart processes if they crash. One such service is added for running Postgres, which is defined in an executable bash script as follows:

#!/bin/bash

DATADIR=${DATADIR:-"/data/main"}
CONF=${CONF:-"/data/postgresql.conf"}
POSTGRES=${POSTGRES:-"/usr/lib/postgresql/9.3/bin/postgres"}
INITDB=${INITDB:-"/usr/lib/postgresql/9.3/bin/initdb"}
DB_USER=${DB_USER:-"db_user"}
DB_PASS=${DB_PASS:-"db_pass"}
DATABASE=${DATABASE:-"prosody"}

# test if DATADIR exists
if [ ! -d $DATADIR ]; then
  mkdir -p $DATADIR
fi

# test if DATADIR has content
if [ ! "$(ls -A $DATADIR)" ]; then
  chown -R postgres:postgres $DATADIR
  sudo -u postgres $INITDB -D $DATADIR
  sudo -u postgres $POSTGRES --single -D $DATADIR -c config_file=$CONF \
    <<< "CREATE USER $DB_USER WITH SUPERUSER PASSWORD '$DB_PASS';"
  sudo -u postgres $POSTGRES --single -D $DATADIR -c config_file=$CONF \
    <<< "CREATE DATABASE $DATABASE OWNER $DB_USER;"
fi

exec /sbin/setuser postgres $POSTGRES -D $DATADIR -c config_file=$CONF

After building the image and running the container (using the command outlined in the comment at the top of the Dockerfile), we’ll have a container with Postgres running, linked with the data volume created earlier for persisting database data separately, and exposing the Postgres port at 5432 for other containers to access.

The Prosody container is created with the following Dockerfile:

# run prosody:
#   docker run -t -i -d -p 5222:5222 -p 5269:5269 -p 5280:5280 -p 5347:5347 --link postgres:postgres --name prosody prosody

FROM phusion/baseimage:0.9.11

# disable sshd and set up baseimage
RUN rm -rf /etc/service/sshd /etc/my_init.d/00_regen_ssh_host_keys.sh
ENV HOME /root
CMD ["/sbin/my_init"]

# prosody installation
RUN curl https://prosody.im/files/prosody-debian-packages.key \
    | apt-key add -
RUN echo "deb http://packages.prosody.im/debian trusty main" \
    >> /etc/apt/sources.list
RUN apt-get update && apt-get install -y \
    prosody \
    lua-dbi-postgresql

# prosody config
ADD prosody.cfg.lua /etc/prosody/prosody.cfg.lua
ADD certs /etc/prosody/certs
RUN mkdir /etc/service/prosody
ADD run_prosody.sh /etc/service/prosody/run

EXPOSE 5222 5269 5280 5347

# Clean up APT when done.
RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

This uses a simple bash script for running the Prosody service:

#!/bin/bash
exec /etc/init.d/prosody restart

When the image is built, SSL certificates are picked up from the certs directory relative to the build path and embedded in the container, along with the prosody.cfg.lua file, which contains the XMPP server settings.

When running the container, a link is made between this container and the Postgres application one, which will set up an entry in this container’s /etc/hosts file that points to the correct IP for the Postgres container. For example, the Postgres settings for Prosody are set up as follows:

sql = {
  driver = "PostgreSQL",
  database = "prosody",
  username = "db_user",
  password = "db_pass",
  host = "postgres"
}

This means the XMPP server can point to a database at host “postgres”, which is the name given to the link, and the correct container IP will be used for writing to the database.

One final note concerns creating new XMPP server users with the prosodyctl command. This means running a command inside the Prosody container, which doesn’t have SSHD running on it; that can be achieved with nsenter. The easiest way to do this is to run the docker-enter bash script it provides, which inspects a running container by name to retrieve its process ID and then enters that container’s namespace:

docker-enter prosody

This provides a bash terminal inside the Prosody container, from which the prosodyctl command can be run to set up new users on the XMPP server. This data is persisted in the data volume created at the start, meaning new Prosody containers can be created at a later time without needing to repeat these steps for the same users.

by Shaun Taheri

Super-simple JavaScript inheritance

JavaScript uses prototype-based inheritance, which can prove a bit of a puzzler for those of us used to class-based object orientation. At first glance it seems like it’s basically the same and as if it can be used in very nearly the same way. If you pretend that those prototype objects are in fact classes and ignore the nuances you can get surprisingly far and code up a decent-sized heap of working code. However you will eventually be bitten and have to read up on what’s really happening at the nuts and bolts level. This is guaranteed to happen just when you’re under pressure, trying to get a critical feature working. It’s a horrible feeling, the dawning realisation that the subtle bug you can’t grok is because things don’t work the way you thought at a very basic level and that your heap of code is founded on quicksand. This happened to … a friend of mine.

This post isn’t going to attempt to explain the depths and subtleties of JavaScript’s prototype model. Plenty of others have been there before. In fact we will embrace our class-based stubbornness and attempt to get it working the way we really wanted. Plenty of others have done this too, but there are a few issues with most of their solutions:

  • They are too simplistic and don’t cover all the cases required, like having an arbitrarily deep hierarchy that can call up the constructor chain neatly
  • They are too complicated, having developed into powerful libraries with many features
  • The perennial problem: I didn’t write them, so am not in control and able to understand exactly what’s going on and adapt to exactly my needs – no more, no less.*

I present the result below, wrapped up for require.js. There is really very little code indeed – just two functions: inherit and superConstructor.

// Because class is a keyword in JS, consumers should inject this as clazz.
define(function() {

  return {
    // In the unlikely event that you need to explicitly call a superclass implementation
    // of a method, because a method with the same name exists in the current class:
    //  foo.parent.bar.call(this, x, y);
    inherit: function(child, parent) {
      child.prototype = Object.create(parent.prototype);
      child.prototype.constructor = child;
      child.prototype.parent = parent.prototype;
    },

    // The superclass constructor should generally be called from child's constructor
    // otherwise it won't run and fields defined there will be missing:
    //   superConstructor(this);
    superConstructor: function(self) {
      // The constructor that we call here may in turn wish to call superConstructor() to
      // call its own parent's constructor (but with the same 'self') so we must take
      // special measures to allow this, as self will be the same object with each recursion.
      var parentProto = self.nextParent || self.parent;
      var constructor = parentProto.constructor;
      // Step one level further up the chain for any nested superConstructor() call.
      self.nextParent = parentProto.parent;
      constructor.call(self);
      self.nextParent = undefined;
    }
  }

});

The contents of inherit are much as you’ll find in many a blog post, though there’s a surprising amount of subtle variation out there!

More interesting is superConstructor, which employs a somewhat offensive tactic to allow calls all the way up the constructor chain. What makes this difficult is that ‘this’ must remain the actual object being constructed throughout those nested calls, so we need to manually provide the context to know what the next constructor up the chain is.

Having done this and saved the code above into clazz.js, we can write code with inheritance as follows (working example as a jsfiddle).

// A Dog can bark.
function Dog() {
    console.log('Constructed a dog');
}
Dog.prototype.bark = function() { return 'Woof' };

// A Yorkie is a Dog that barks a lot!
clazz.inherit(Yorkie, Dog);
function Yorkie() {
    var self = this;
    clazz.superConstructor(this);
}
Yorkie.prototype.bark = function() {
    var noise = this.parent.bark.call(this);
    return noise + noise + noise;
};

// Create dogs and see what noises they make.
console.log(new Dog().bark());
console.log(new Yorkie().bark());

To be fair, my super-simple inheritance library is extremely restricted in its abilities, for instance not handling constructor parameters. But that’s because I didn’t need them, and any extra features should be easy to add. Most of all it was a valuable learning experience.

* Actually I love an off-the-shelf library as much as the next chap (or chappess) – but if you don’t feel comfortable with the libraries on offer and the problem seems nicely tractable and a worthwhile learning experience then why not go for it. You can always change your mind.

by Sam Carr on 16/06/14

CSS Transitions can’t animate display change

I’d like to demonstrate a fairly simple CSS issue that caught me out, and the straightforward solution. Put simply, CSS Transitions do not work if there is a change in the display property as part of the same change that fires the transition, but you can work around this by separating out the display change.

If you’re not already aware, CSS Transitions are a cute way of animating transitions on your web page. Simply add a transition property in your CSS stating which property of the element should be animated when it changes, and over what period of time.

.animatedWidth {
    transition: width 2s;
}

In the example above, whenever the width of the element is changed (e.g. programmatically from JavaScript) it will animate that change over 2 seconds, complete with ease-in and ease-out by default.

I’ve created a jsfiddle with a more convoluted example that demonstrates the display problem, so you can inspect the HTML, CSS and JS, and run it in the browser. The example has three coloured bars (though the second two start off invisible) and an Animate button. Click the button and you’ll see that the ordinary transition animates the width of the bar as expected, but where the coloured bar is being made visible at the same time it just winks into existence in its end state with no animation. The third bar appears and then animates correctly, because our JS separately shows it then triggers the animation. It uses a timeout with zero delay to achieve this, effectively giving the rendering engine its chance to handle the display change before then triggering the animation.

button.on('click', function() {
    // To get the animation working we need to change the
    // display property first (via jQuery toggle()) and then
    // trigger the CSS transition with a zero-delay timeout.
    bar3.toggle();
    window.setTimeout(function() {
        bar3.toggleClass('animate');
    }, 0);
});

In my real world situation where I first stumbled across this effect, the item being animated started offscreen (and invisible) and slid into place, with the problem only evident on Chrome for some still unknown reason. The change of display property was but one of many things going on via incidental CSS so it took some sleuthing to figure out that it was responsible for the problem. Coming at it from that baffling angle for the first time, the problem and its solution were not nearly so obvious as presented above!

by Sam Carr on 27/05/14

Requiem for the Command pattern

Is there anything sadder than the Command pattern? The exemplar of the once-proud Patterns movement, the one that everyone understands and can see the power of, the one that has an instant applicability to many applications: the undo system. I remember a time when undo seemed a luxury to be implemented only by the most hardened of programmers; then the command pattern made it achievable by any decent coder. Now, the Command pattern is just that extra cruft you have to write when your language doesn’t have good support for closures.

But what of undo? Doesn’t Command still encapsulate something worth having in this situation, beyond what a closure gives you for free? Especially when, for whatever reason, you are using a language without decent support for closures.

I found myself in this situation recently when re-writing the undo system for the Linux Stopmotion application. This application is written in C++, and there are many bugs in it. Fixing the undo system seemed necessary for sorting the worst of them out.

If you search the internet for “undo.cpp”, you can find three different undo system implementations that people have used in C++. One is the classic described in Gamma et al’s Design Patterns, where Command objects have an undo() and a redo() method. This was the original Stopmotion implementation, and I also found it in Inkscape, a Battle for Wesnoth tool, Torque3D and example code from the blogs of RandomMonekyWorks and Yingle Jia. It is unfortunate that this version is so popular because, unless you do some cleverness I have yet to see attempted, you need to implement each operation twice: once as the Undo of Delete (say), and again as the Redo of Insert. You also need (again, barring as-yet-unseen cleverness) to copy any data that will be added or removed into your command object.

A better approach (the one I took with my re-write) can be seen in Yzis, KAlarm and Doom 3’s Radiant tool (although the code in these three is not for the faint-hearted and doesn’t quite conform to the platonic ideal I’m about to express). Here your Command object has just an undo() and an invert() method – indeed these can (and should) be combined: undo() should perform the operation, delete itself and return an inverse of itself, ensuring that a command, once undone, cannot be undone again without being redone first. This also means that a Command object does not need to copy any data: a Delete object removes the thing deleted from the model, attaches it to the inverse Insert object, deletes itself and returns the Insert object. The Insert object, if executed, returns the same object back to the model, creates the Delete object, deletes itself (now that it is in an empty state) and everything is fine.
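
A minimal sketch of that shape (in Java rather than C++, with a toy List<String> standing in for the model; the “delete itself” step is moot in a garbage-collected language):

import java.util.List;

// Illustration only: a command performs its operation and returns its own inverse,
// so an undo stack simply holds whatever the last execution returned.
interface Command {
    Command execute(List<String> model);
}

class Delete implements Command {
    private final int index;
    Delete(int index) { this.index = index; }
    public Command execute(List<String> model) {
        // The removed item is handed straight to the inverse Insert; nothing is copied up front.
        return new Insert(index, model.remove(index));
    }
}

class Insert implements Command {
    private final int index;
    private final String item;
    Insert(int index, String item) { this.index = index; this.item = item; }
    public Command execute(List<String> model) {
        model.add(index, item);
        return new Delete(index);
    }
}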

A third approach I saw just once in my quick search: an application called Satan Paint, which stores the entire model state as the undo element, not using the Command pattern at all. However, storing the entire state is madness, right? All that memory storing all that data you’ll probably never use…

But now that I’ve done my re-write and it seems to be working well, there’s a nagging thought. Can and should we retire the Command pattern, even in C++, even for undo? My motto in these cases is always “think how you’d do it in Haskell, then see if it’s applicable in the other language”. So how would one approach undo in Haskell?

Well Haskell, having no mutable state, would require the use of a purely-functional data structure. This is a data structure that has operations that return mutated versions of the operated-on structure, but the original is still present. To avoid creating a whole new copy, parts of the old structure are re-used in the new wherever possible. And the art in designing purely-functional data structures is enabling as much re-use as possible. Once you have a purely-functional data structure, a Command object is redundant; you simply remember previous states. So, kudos to Satan Paint!
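
As a rough illustration of that last point (again in Java rather than Haskell, with a hypothetical immutable Model type), the undo machinery collapses to remembering previous values:

import java.util.ArrayDeque;
import java.util.Deque;

// Illustration only: with an immutable (ideally persistent) Model, undo is just
// a stack of earlier values; no Command objects are needed.
class History<Model> {
    private final Deque<Model> past = new ArrayDeque<>();
    private Model current;

    History(Model initial) { this.current = initial; }

    Model current() { return current; }

    void apply(Model next) {
        past.push(current);   // 'next' shares structure with 'current' if Model is persistent
        current = next;
    }

    boolean undo() {
        if (past.isEmpty()) return false;
        current = past.pop();
        return true;
    }
}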

Now all we need is a decent library of purely-functional data structures in C++, together with a decent garbage collector to stop no-longer-used sub-parts leaking…

by Tim Band on 17/05/14

A simple Knockout page router

Knockout.js is a pleasantly simple approach to data-binding ViewModels into your HTML. Like many JavaScript libraries it sticks to a core mission with a few simple concepts, which makes it quite approachable. Its simple template support means that you don’t need to write much code to get a top-level page router going in your single page app (SPA) and that’s exactly what I have done.

Knockout-routing

It uses hash-based routing, so URLs must be of the form http://foo.com/index.html#myPage. This approach means that even a statically hosted site with just one real URL (index.html in this example) and zero server-side dynamism can be a SPA with multiple virtual pages. All requests ultimately come to index.html and then the router takes over and shows the right actual page based on the hash in the URL. Back and forward buttons work, as do page refresh, bookmarking, emailing links etc.

The code is on GitHub, with a decent README explaining the features and the key files to look at, so I won’t repeat that here. The code is also well-commented, with the intention that you can (and should) read it to see how it works. You can clone it, then simply double click src/index.html to open it in your browser and see its capabilities demonstrated. Nice and easy.

The router itself is just a 61-line JavaScript file, which would be very easy to extend with further features that you might need. The rest of the code on GitHub shows how to use it by example, and demonstrates all of its features.

Any feedback is very much appreciated. I imagine there are other similar routers out there, but this one is mine and making it (and using it in anger) taught me a lot and provided a nice, tight result which I can easily add to as required.

by Sam Carr on 30/04/14

Two weeks at LShift

On a welcome break from studying for my GCSEs at school I spent two weeks doing ‘work experience’ at LShift. At the end of the two week placement I was interrogated by Keith Fisher. Here’s a transcript:

1. Did you have a choice in where to do your work experience placement?

Yes I had complete control over what I wanted to do for my work experience. If I didn’t, I would be working in an old people’s home or a school. Also, I think if you don’t find a work placement before the deadline, you have to work for a teacher.

2. Why did you pick LShift?

I picked LShift because it seemed like a fun place to work, LShift also works with software development which is something that I am doing at school and it’s something I enjoy. A relative suggested LShift as a place to do my work experience so I did some research. The website really said it all, software design, computers and free cola. That was all I needed.

3. What was it like?

When I arrived, I was actually surprised. I was imagining the place to be a huge company with hundreds of employees. Instead it was quite a small sized company with friendly people!  LShift must be an anomaly in the industry. It’s a place where stress isn’t even a problem, everyone is relaxed and calm. It was actually relaxing to be there.

4. What did you do while you were there?

While I worked at LShift I had the chance to sit in on a training course for a project management method called “DSDM” which will help me in later life. I learnt how to make programs using .net. I learnt how software development companies run and work. Lastly, I learnt what it’s like to work for a fantastic company.

5. Is two weeks enough time to get a sense of what work is like?

No, I could learn so much more from LShift so leaving so quickly is a shame.

6. Would you do it again?

No I would never come here again… just kidding. I would definitely consider working at LShift again. In the future I would hope I could work somewhere like this.

by Lewis on 08/04/14
