technology from back to front

Archive for February, 2007

Improved unobtrusive linked select boxes

Here’s a problem that crops up regularly in Web interfaces: having a dropdown whose available options depend on another dropdown. The canonical example, if you like, is the date selector:

[Interactive demo: a “Select date” form with month and day dropdowns]

This has the obvious (though not serious) problem that one can select, say, February 31st. We’d like to catch those errors by having the values in the “day” select depend on what is chosen in “month”. It’s easy to do this on the server with a round-trip, of course, but then the user interface can be in an inconsistent state before the form is submitted; it would be nice to finesse the interface, using JavaScript, to remove that possibility.

We can do this ad-hoc, of course, but that needs specialised code for each instance (well, for each type of data and dependency); a general solution would be better. We’d also like it to be unobtrusive, by which I mean two closely related things:

  1. The markup is meaningful
  2. It works without JavaScript

Bobby van der Sluis gives a pretty good general solution. Briefly, the technique involves keeping a full set of the dependent options and picking the appropriate options from it when necessary.

It relies on encoding the dependency as an HTML class: to me it’s a muddy use of class, but the real problem is that it’s not a very obvious way of making the dependency explicit in the markup. I think we can improve on it.

Using OPTGROUP we can produce straightforward markup that makes the dependency obvious:

      <fieldset>
        <legend>Select date</legend>
        <label for="month">Month</label>
        <select id="month">
          <option value="jan">January</option>
          <option value="feb">February</option>
          <option value="mar">March</option>
        </select>
        <label for="day">Day</label>
        <select id="day">
          <optgroup label="January">
            <option value="1">01</option>
            <option value="2">02</option>
            <option value="3">03</option>
            ...
          </optgroup>
          <optgroup label="February">
            ...
          </optgroup>
        </select>
      </fieldset>

Without JavaScript it looks like this:

[Demo: without JavaScript, the day select simply shows every option, grouped by month via OPTGROUP]

Here’s my (allegedly) improved unobtrusive linked select box code (it uses MochiKit to avoid some verbose DOM manipulation — just read $(x) as “select the element with ID ‘x’”, and the rest does what it says):

function linkSelects(parent, child) {
  var parent = $(parent);
  var child  = $(child);
  // Keep a clone of the child select, holding the full set of options.
  var cloned = child.cloneNode(true);
  refreshDynamicSelectOptions(parent, child, cloned);
  connect(parent, 'onchange', function(event) {
    refreshDynamicSelectOptions(parent, child, cloned);
  });
}

function refreshDynamicSelectOptions(parent, child, optionholder) {
  var alreadySelectedValue = (child.selectedIndex >= 0) && child.options[child.selectedIndex].value;
  var selectedLabel = strip(scrapeText(parent.options[parent.selectedIndex]));
  replaceChildNodes(child);  // empty the child select before refilling it
  for (var i = 0; i < optionholder.childNodes.length; i++) {
    var opt = optionholder.childNodes[i];
    if (opt.tagName && opt.tagName.toLowerCase() == "option") {
      // Ungrouped options always apply.
      var newopt = opt.cloneNode(true);
      if (newopt.value == alreadySelectedValue) newopt.selected = true;
      appendChildNodes(child, newopt);
    } else if (opt.tagName && opt.tagName.toLowerCase() == "optgroup" && opt.label == selectedLabel) {
      // Only the group whose label matches the parent's selected text applies.
      for (var j = 0; j < opt.childNodes.length; j++) {
        var newopt = opt.childNodes[j].cloneNode(true);
        if (newopt.value == alreadySelectedValue) newopt.selected = true;
        appendChildNodes(child, newopt);
      }
    }
  }
}

// Usage, given the markup above: linkSelects('month', 'day');

and lastly, here's the working version:

[Interactive demo: the linked month/day selects in action]

There remain a couple of weaknesses. It relies on the convention of OPTGROUP labels being the same as OPTION text; it's reasonable, since there is a semantic link between those two things. Also, it doesn't always get selection right -- easily seen in this example, where you'd expect a choice of '01' to persist when changing months. I think those are tweaks away.


A crypto standards manifesto

Hopefully this will be crossposted in several places.

My current plan to change the world involves writing a manifesto for a proposed mailing list to work out crypto standards that actually work and stand a chance of getting widely adopted in the open source world. This is essentially version 0.1.5 of that rant, and may contain some inaccuracies or overstatements; I look forward to your comments and corrections.

Currently there are four crypto standards that see any real use in open source land; in order of deployment, they are:

  • SSL/TLS
  • SSH
  • OpenPGP/GnuPG
  • IPSec

These are the best examples of good practice that we cite when we’re trying to encourage people to use standards rather than rolling their own, and all of them fail to be any good as a practical, convenient basis by which people writing open source software can make their software more secure through cryptography. All of them suffer from three problems; in order of increasing severity:

  • They were all designed long ago, in three cases initially by people who were not cryptographers, and are difficult to adapt to new knowledge in the crypto world about how to build good secure software. As a result, deprecated constructions for which there are no good security reductions are common. They are also generally far less efficient than they need to be, which would be a very minor problem if it didn’t put people off using them.
  • In every case protocols and file formats introduce far more complexity than is needed to get the job done, and often this shows up as complexity for the users and administrators trying to make them work, as well as unnecessary opportunities to make them insecure through misconfiguration.
  • But by far the worst of all is the parlous state of PKI. This of course is something I’ve ranted about before:
    • SSL’s dependence on the disaster that is X.509 makes it insecure, painful for clients, and imposes the ridiculous Verisign Tax on servers, as well as making it very unattractive as a platform for new software development.
    • SSH occasionally shows you a dialog saying “you haven’t connected to this server before, are you sure?” I’m sure someone’s going to tell me they actually check the fingerprints before connecting, but let me assure you, you are practically alone in this. I can’t even share this information across all the machines I log in from, even if I use ssh-agent. The situation for authenticating clients to servers is slightly better, but still involves copying private keys about by hand if you want the most convenience out of it. It makes you copy whole public keys rather than something shorter and more convenient like OpenPGP fingerprints. It certainly doesn’t make use of the basic fact that keys can sign assertions about other keys to make life more convenient.
    • OpenPGP’s authentication is based on the PGP Web of Trust, which is all about binding keys to real names using things like passports. As I’ve argued before, this is a poor match for what people actually want keys to do; it’s a very poor match for authenticating anything other than a person.
    • IPSec is also tied to the X.509 disaster. It is also so complex and hard to set up that AFAICT most IPSec installations don’t use public key cryptography at all.

Perhaps the key management problems in all these applications can be pinned down to one thing: they were all designed and deployed before Zooko’s triangle was articulated with sufficient clarity to understand the options available.

It’s worth noting one other infuriating consequence of the PKI problems these applications display: none of them really talk to each other. You can buy an X.509 certificate that will do for both your SSL and IPSec applications, if you’re really rich; these certificates will cost you far more than a normal SSL certificate, and for no better reason than that they are more useful and so Verisign and their friends are going to ream you harder for them. Apart from that, each application is an island that will not help you get the others set up at all.

I’ve left out WEP/WPA basically because it’s barely even trying. It should never have existed, and wouldn’t have if IPSec had been any good.

I’m now in the position of wanting to make crypto recommendations for the next generation of the Monotone revision control system. I wish I had a better idea what to tell them. They need transport-level crypto for server-to-server connections, but I hesitate to recommend SSL because the poison that is X.509 is hard to remove and it makes all the libraries for using SSL ugly and hard to use. They need to sign things, but I don’t want to recommend OpenPGP: it’s hard to talk to and the Web of Trust is a truly terrible fit for their problem; on top of which, OpenPGP has no systematic way to assert the type of what you’re signing. They need a way for one key to make assertions about another, and we’re going to invent all that from scratch because nothing out there is even remotely suitable.

Monotone has re-invented all the crypto for everything it does, and may be about to again. And in doing so, it’s repeating what many, many open source applications have done before, in incompatible and (always) broken ways, because the existing standards demand too much of them and give back too little in return. As a result, crypto goes unused in practically all the circumstances where it would be useful, and in the rare case that it is used it is at best inconvenient and unnecessarily insecure.

I don’t believe that things are better in the closed source world either; in fact they are probably worse. I just care more about what happens in the open source world.

We can do better than this. Let’s use what we’ve learned in the thirty-odd years there’s been a public crypto community to do something better. Let’s leave the vendors out, with their gratuitous complexity and incompatibility as commercial interests screw up the standards process, and write our own standards that we’ll actually feel like working with. We can make useful software without their support, and it seems in this instance that their support is worse than useless.

A good starting point is SPKI. SPKI has a very nice, clean syntax that’s easy to work with in any programming language, very straightforward semantics, and supports constructions that anticipate the central ideas behind petnames and Zooko’s Triangle. Unfortunately SPKI seems to be abandoned today; the feeling when I last looked at it was that despite their inadequacies, the victory of PKIX and X.509 was now inevitable and resistance was futile.

Well, it turns out that X.509 was so bad that no amount of industry support could turn it into the universal standard for key management applications. There are places that it will simply never be able to go, and in fact these are the vast majority of real crypto applications. On top of which, there is a limit to how far a standard that hardly anyone will ever understand the application of can go.

It’s time we brought back SPKI. But more than that, it’s time we adapted it for the times it finds itself in; take out the parts that complicate it unnecessarily or slow its adoption, extend it to do more than just PKI, and specify how it can talk to the existing broken cryptographic applications in as useful a way as possible. Once we’ve built a working petnames system to serve as a workable PKI, my current feeling is that we should start with no lesser a goal than replacing all of the standards listed above.

Does anyone else think this sounds like a good idea? What other way forward is there?

Paul Crowley

JSON and JSON-RPC for Erlang

About a month ago, I wrote an implementation of RFC 4627, the JSON RFC, for Erlang. I also implemented JSON-RPC over HTTP, in the form of mod_jsonrpc, a plugin for Erlang’s built-in inets httpd. This makes accessing Erlang services from in-browser Javascript very comfortable and easy indeed.

Downloading the code:

  • you can browse the code here on github
  • a tarball is available here (note: this is dynamically generated from the HEAD revision in the git repository)
  • the git repository holding the code can be retrieved with the command git clone git://

Documentation is available, including notes on how to write a service and how to access it from javascript, and the curious may wish to see the code for an example Erlang JSON-RPC service and its corresponding javascript client.
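What goes over the wire is pleasantly small: a JSON-RPC call over HTTP is just a POST whose body names the procedure and its parameters, and the reply carries either a result or an error. As a sketch (the procedure name and error details below are illustrative, following the JSON-RPC 1.1 draft rather than mod_jsonrpc’s documentation):

```javascript
// An illustrative JSON-RPC 1.1-style request: procedure name plus
// positional parameters. The "echo" procedure is a made-up example.
var request = { version: "1.1", method: "echo", params: ["hello, world"] };

// A successful reply carries a "result" member...
var replyOk = { version: "1.1", result: "hello, world" };

// ...while a failure carries an "error" object instead.
var replyErr = {
  version: "1.1",
  error: { name: "JSONRPCError", code: 404, message: "Procedure not found" }
};
```

Since both sides are plain JSON, the browser end is just `JSON.stringify` on the way out and `JSON.parse` on the way back, which is what makes talking to an Erlang service from in-page JavaScript so comfortable.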

The JSON codec uses a data type mapping suggested by Joe Armstrong, where strings map to binaries and arrays map to lists.

Coincidentally, on the very same day I started writing my JSON codec, Eric Merritt released his new JSON codec, Ktuo. If I’d seen that, I probably wouldn’t have started writing my own. At the time, the only other implementation I knew of was the json.erl included with yaws, which uses an awkward (to me) encoding and was, at the time I was using it, a bit buggy (decoding “[]” returned an incorrect value – it seems to have been fixed somewhere between yaws 1.64 and 1.68). To an extent, Eric’s rationale for a new JSON codec applies to mine, too, and my other excuse is that the data type mapping where strings become Erlang binaries is much more useful to my application. Your mileage may vary!


An Alphabetical Google Zeitgeist

I’ve installed Firefox 2.0 on most of the machines I work on daily now. Its use of google suggestions surprised me when I first saw it, but I’ve grown to find it somewhat useful on occasion now. It suggested the following experiment (not in so many words, of course): assuming that the suggestions it supplies are based on popularity of search terms (presumably filtered by google’s safe-search feature!), then the suggestions ought to reflect the zeitgeist to a certain extent – what does it suggest for each letter of the alphabet? The results weren’t exactly surprising. No philosophy, very little science, technology, literature or art; nothing but wall-to-wall Britney, Ebay and “U tube” (!). The A-to-Z follows below.



RFC 1982 limits itself to powers of two unnecessarily

RFC 1982 defines a “Serial Number Arithmetic”, for use when you have a fixed number of bits available for some monotonically increasing sequence identifier, such as the DNS SOA record serial number, or message IDs in some messaging protocol. It defines all its operations with respect to some power of two, (2^SERIAL_BITS). It struck me just now that there’s no reason why you couldn’t generalise to any number that simply has two as a factor. You’d simply replace any mention of (2^SERIAL_BITS) by, say, N, and any mention of (2^(SERIAL_BITS-1)) by (N/2). The definitions for addition and comparison still seem to hold just as well.
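To make the generalised arithmetic concrete, here’s a sketch in JavaScript (the function names are mine, not from the RFC; N is any even modulus, and you recover RFC 1982 exactly when N = 2^SERIAL_BITS):

```javascript
// RFC 1982-style serial number arithmetic, generalised from
// N = 2^SERIAL_BITS to an arbitrary even modulus N.

// Addition: (s + n) mod N, defined only for 0 <= n < N/2,
// mirroring the RFC's restriction on the addend.
function serialAdd(s, n, N) {
  if (n < 0 || n >= N / 2) throw new Error("addend out of range");
  return (s + n) % N;
}

// Comparison: s1 < s2 iff the forward distance from s1 to s2 is
// less than N/2. As in the RFC, two serials exactly N/2 apart are
// incomparable (neither serialLt(a, b) nor serialLt(b, a) holds).
function serialLt(s1, s2, N) {
  return (s1 < s2 && s2 - s1 < N / 2) ||
         (s1 > s2 && s1 - s2 > N / 2);
}
```

With N = 100, for instance, serialAdd(98, 5, 100) wraps around to 3, and serialLt(98, 2, 100) holds because 2 is only four steps “ahead” of 98; no power of two is involved anywhere.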

One of the reasons I was thinking along these lines is that in Erlang, it’s occasionally useful to model a queue in an ETS table or in a process dictionary. If one didn’t mind setting an upper bound on the length of one’s modelled queue, then by judicious use of RFC 1982-style sequence number wrapping, one might ensure that the space devoted to the sequence numbering required of the model remained bounded. Using a generalised variant of RFC 1982 arithmetic, one becomes free to choose any number as the queue length bound, rather than any power of two.


LShift and CohesiveFT launch RabbitMQ Open Source Enterprise Messaging

LShift have developed RabbitMQ, a complete open source implementation of Advanced Message Queuing Protocol (AMQP), with the support of the pioneering software appliance company CohesiveFT.

AMQP is the emerging standard for high performance enterprise messaging; reducing change and maintenance costs through the separation of integration concerns, removal of silo dependency, and freedom from language and platform lock-in. This has resulted in consistently excellent performance, without compromising user experience, security and scalability.

RabbitMQ enables developers of messaging solutions to benefit not only from AMQP, but also from one of the most proven systems in use, the Open Telecom Platform (OTP). OTP is used extensively by telecommunications companies to manage switching exchanges for voice calls, VoIP, and now video. These systems are designed never to go down even when handling vast user loads. As such systems cannot be taken offline, they have to be extremely flexible; for instance, it must be possible to ‘hot deploy’ features and fixes whilst managing consistent user service level agreements.

Rather than creating a new messaging infrastructure, the RabbitMQ team built an AMQP layer on top of OTP using Erlang. Java tooling and clients are provided for developers and administrators to run RabbitMQ and connect to it over the AMQP wire protocol, with other language adaptors to come. This combines the robustness and scalability of a proven platform with the flexibility of AMQP’s messaging model.

John O’Hara, Executive Director at JPMorgan and Chair of the AMQP Working Group said “A strong standard needs a variety of interoperating implementations and I am pleased to welcome RabbitMQ to the family. The vision of the AMQP Working Group is that through standardisation AMQP enables businesses to reduce their integration costs and paves the way to simple, robust transaction processing between firms globally. RabbitMQ, implemented in technologies pioneered in the demanding telecommunications industry, demonstrates the innovation which can occur on the back of an open standard like AMQP.”

Version 1.0.0 Alpha binary and source distributions (along with documentation) are available for download for Generic Unix, Windows, and Debian GNU/Linux platforms. The download includes the RabbitMQ server and Java client, providing an API to AMQP. RabbitMQ is licensed under the open source Mozilla Public License.

The next phase of the project will address improved support for hot failover, and AMQP clients will be extended beyond Java to other programming languages and environments. RabbitMQ will be integrated with other networks via Enterprise Service Buses such as Mule, interfaced with existing management and monitoring tools such as HermesJMS, and packaged as a Software Appliance for drop-in deployment.


Rabbits, rabbits, rabbits

We’re proud to announce that the project we’ve been working on for the past few months, RabbitMQ, has been released. RabbitMQ is an AMQP server written using Erlang/OTP. Check it out – or you can go straight to the downloads page for sources and binaries.




You are currently browsing the LShift Ltd. blog archives for February, 2007.



© 2000–14 LShift Ltd, 1st Floor, Hoxton Point, 6 Rufus Street, London, N1 6PE, UK. +44 (0)20 7729 7060