technology from back to front

Archive for November, 2006

GF(2³²−5)

GF(2³²−5) is a finite field.

Many of the most useful bits of coding theory, and network codes in particular, work by treating the data you want to transmit as a vector over a finite field. The link will tell you the full story, but in summary a finite field is a set of finitely many elements, with addition and multiplication operations defined on them such that they are very well behaved: a + b = b + a, (a + b)c = ac + bc, and so on.

For any prime p and any positive integer n, there’s exactly one finite field (up to isomorphism) with pⁿ elements; this field is called GF(pⁿ). In particular, since 2 is prime, there’s conveniently a finite field with 2ⁿ elements. And there are straightforward ways to treat, for example, an array of bytes as an array of elements of GF(2⁸); or, if it’s more convenient, break up your data into larger chunks and work in, say, GF(2¹⁶) or GF(2³²).

Addition in GF(2ⁿ) is simply XOR, which is as fast as you could want in both hardware and software. Multiplication hardware for GF(2ⁿ) is similarly much simpler than multiplication hardware for ordinary arithmetic, which has made these fields very attractive for a variety of coding and cryptographic applications in hardware.

Unfortunately, doing these multiplies in software is much less efficient, because processors don’t have suitable instructions and hardware dedicated to these jobs. If you’re using coding theory to transmit large amounts of data, this could give rise to quite an inconvenient CPU load.

My colleague Sebastian Egner and I have been investigating an alternative that could prove considerably more efficient: instead of working in GF(2ⁿ), work in GF(p) for some prime p. This is probably the most familiar finite field: you simply do addition and multiplication “mod p”. So you can use the addition and multiplication units of a normal processor to do most of the work. Two problems arise:

* Multiply may be fast, but division isn’t. How will we prevent all those “mod p” operations slowing us to a crawl?

* We have a vector of bits to encode, and we have to encode it as another vector of bits. Won’t an intermediate representation “mod p”, with its awkward prime order, introduce unacceptable inefficiencies in coding?

Our proposal addresses both these problems. One application in particular motivated our thinking about all this. Dot products over finite fields are at the heart of both the distribution and the authentication in Microsoft’s BitTorrent-like research project, Avalanche. Our approach has the potential to make the Avalanche software several times more efficient.

We propose to work in the finite field GF(2³²−5) – here p = 2³²−5 = 4294967291 is the largest prime below 2³². So representing a vector in this field for transmission is easy: just transmit it as a stream of 32-bit words. Converting from a vector of bits to a vector in this field is more interesting, but we’ll come to that later – first, we’ll consider how to do arithmetic efficiently in this field.

The basic idea is to put off that expensive “mod p” operation for as long as possible, and then do it as cheaply as we can. The operation we’re most concerned about is the inner product of two vectors – multiply each component of one vector with the corresponding component in the other, and sum all the products. We do this as follows:

* Multiply together the 32-bit vector components to get a vector of 64-bit products
* Treat each product as consisting of a 32-bit "high half" and a 32-bit "low half"
* Sum the "high halves" and "low halves" separately to get two 64-bit sums
* Multiply the "high half" sum by five and add it to the "low half" sum, giving a single 64-bit sum
* Multiply the high 32 bits of this sum by five and add them to its low 32 bits, giving a new, smaller 64-bit sum
* Finally, while the sum is p or greater, subtract p.

The “multiply by five” trick works because 2³² ≡ 5 (mod p). This assumes that vectors have 2³² or fewer components, though if they have more than 2³²/25 components the final summing-up steps have to be modified slightly.
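Here’s a minimal Python sketch of that loop (the function name is ours; Python’s integers are unbounded, so the masks merely make the 32- and 64-bit word boundaries explicit, where a C or SIMD implementation would keep the two running sums in 64-bit registers):

P = 2**32 - 5   # the largest prime below 2^32

def inner_product_mod_p(xs, ys):
    # xs, ys: equal-length lists of 32-bit words, each below p
    hi_sum = 0   # running sum of the high 32 bits of each product
    lo_sum = 0   # running sum of the low 32 bits of each product
    for x, y in zip(xs, ys):
        prod = x * y                    # 64-bit product of 32-bit words
        hi_sum += prod >> 32
        lo_sum += prod & 0xFFFFFFFF
    # since 2^32 = 5 (mod p), a "high half" is worth five "low halves"
    s = 5 * hi_sum + lo_sum
    s = 5 * (s >> 32) + (s & 0xFFFFFFFF)
    while s >= P:                       # at most a few subtractions for
        s -= P                          # the vector lengths discussed here
    return s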

Sebastian has implemented this approach and compared it to his own highly optimized GF(2¹⁶) implementation – a matter in which he has no mean expertise – and found it to be over twice as fast. We believe that considerably greater speedups are possible, since use of this field makes the work considerably more amenable to implementation using SIMD instruction sets such as SSE2.

So that leaves us with one last problem – how do we convert the bits to be encoded into a vector in this field ready for encoding?

The most efficient encoding possible reads the input as one huge base-2 integer and converts it to a base-p integer. This encoder has an asymptotic coding rate of about 1 − 2⁻³⁴, but encoding and decoding are far, far too slow – the time grows quadratically in the length of the input. A more time-efficient encoding is needed, even at some sacrifice of coding rate.

Our first ideas to this end were based on escaping. If the data is compressed, then words of value p or above will be rare. We could just escape them in some way – for example, represent every word x where x ≥ p − 1 as the pair (p − 1, x − p + 1). However, this means that the length of the encoded vector is unpredictable, which could be a pain. Worse, there will be pathological files whose encoded size is twice that of other files of the same length.

We played with a variety of solutions to this problem, some quite efficient, but have since found a much more efficient encoding based on a somewhat different principle: break the data into blocks of a certain maximum size, and XOR each block with a constant.

Decoding in this encoding is lightning fast. Given an encoded vector

(y₀, y₁, …, yₖ)

the decoded vector is simply (2y₀ ⊕ y₁, …, 2y₀ ⊕ yₖ). In other words, multiply the first word by two (as a 32-bit integer, not as a member of the finite field) and then XOR it with each of the other words.

Note that for many applications there are many more recipients than senders, so while encoding need only be acceptably efficient, we want decoding to be very efficient. It’s hard to imagine that a more efficient decode is possible here.
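As a minimal sketch (the function name is ours; the encoded vector is assumed to arrive as a Python list of 32-bit integers):

def decode_block(encoded):
    # encoded[0] is y0; the remaining words are the data XORed with 2*y0
    key = (2 * encoded[0]) & 0xFFFFFFFF   # doubled as a 32-bit integer
    return [y ^ key for y in encoded[1:]]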

Why does this work? Does every possible block have a valid encoding, and how do we find it? Suppose first that we set the maximum block size to, say, 2¹⁹−1 words. By the pigeonhole principle, there must be at least one 19-bit prefix that does not appear in the block. Let m be this prefix – then we choose y₀ = 2¹²(m ⊕ 0x7FFFF). Now XOR every word in the block with 2y₀ – the result is guaranteed to have at least one of its first 19 bits clear, and is therefore guaranteed to be below p.

We find m by constructing a 64 kbyte bitmap, combing through the data marking every prefix that does appear, and then searching the bitmap for the first hole; as we noted above, there is guaranteed to be at least one. This bitmap will just fit into L1 cache on many processors, so the search can be quite efficient.
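A sketch of the whole encoder at this block size (again with our names; a real implementation would use a genuine bitmap rather than a Python list of booleans):

def encode_block(block):
    # block: at most 2**19 - 1 words, each below 2**32
    seen = [False] * (1 << 19)   # stands in for the 64 kbyte bitmap
    for w in block:
        seen[w >> 13] = True     # mark each 19-bit prefix that occurs
    m = seen.index(False)        # first hole: the pigeonhole argument
                                 # guarantees there is one
    y0 = (m ^ 0x7FFFF) << 12     # y0 = 2^12 (m XOR 0x7FFFF)
    key = 2 * y0                 # every word gets XORed with this
    return [y0] + [w ^ key for w in block]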

2¹⁹−1 is an awkward kind of block size. We can extend it to 2¹⁹ as follows: if we find that every 19-bit prefix is represented, each of them must appear exactly once. If d₀ is the first word of the data to be encoded, then we choose y₀ = (d₀ ⊕ 0xFFFFFFF8) >> 1. Now the first word will encode as 2³²−8 or 2³²−7, both of which fit below p, and the subsequent words will have at least one zero in the first nineteen bits of their encoding and therefore also fit below p.

This approach adds one word to a block of 2¹⁹ words, so the rate of the code is 1 − 2⁻¹⁹, which is efficient enough for nearly any purpose. However, we can push the rate further if desired. This approach extends straightforwardly to a block size of 2²⁹, but a 64 Mbyte bitmap is impractical – the table is randomly accessed, so once it extends beyond the size of the L2 cache the software will crawl due to cache misses. Instead we use a recursive approach (sketched in code below):

* Pass over the dataset once, counting the frequencies of the 10-bit prefixes; this requires only a 4 kbyte table
* Find the least frequent 10-bit prefix
* Pass over the dataset a second time copying the words with this prefix to an auxiliary table
* Recurse over this auxiliary table
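In code, the recursion might look like this (illustrative names; the auxiliary table holds fewer than 2¹⁹ words, so the final step is the same 64 kbyte bitmap search as before):

def find_missing_29bit_prefix(block):
    # block: fewer than 2**29 words; returns a 29-bit value that does
    # not occur as the top 29 bits of any word in the block
    counts = [0] * (1 << 10)        # the 4 kbyte table of counts
    for w in block:
        counts[w >> 22] += 1        # count each 10-bit prefix
    q = counts.index(min(counts))   # least frequent 10-bit prefix:
                                    # it occurs fewer than 2**19 times
    aux = [w for w in block if (w >> 22) == q]   # second pass
    seen = [False] * (1 << 19)      # find a hole among the next 19 bits,
    for w in aux:                   # exactly as in the smaller encoder
        seen[(w >> 3) & 0x7FFFF] = True
    return (q << 19) | seen.index(False)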

This gives us a rate of 1 − 2⁻²⁹.

We can squeeze one more bit out. In a block of 2³⁰−1 words, at least one 29-bit prefix must appear at most once. In other words, there must exist a value m such that either

* m is the value of a word whose 29-bit prefix does not appear in the data, or
* m is the value of a word in the data whose 29-bit prefix does not appear anywhere else in the data

In either case, we choose y₀ = (m ⊕ 0xFFFFFFF8) >> 1. The words not equal to m have a different 29-bit prefix, so they encode to values below 2³²−8 and thus below p; m itself encodes to either 2³²−8 or 2³²−7, which are also both below p.

In summary: if you need to treat data as vectors over some field on a conventional computer architecture, it can be significantly faster to compute modulo the prime number p = 2³²−5 than using a field with size a power of two. You will have to encode arbitrary 32-bit words into values modulo p, but the encoding and decoding algorithms can be extremely fast, and at negligible redundancy: the rate of the code can be as high as 1 − 2⁻³⁰. This is particularly useful for network coding applications.

by Paul Crowley on 29/11/06

Keeping Trac of Bugzilla

How I learned to stop worrying and love Trac

It’s hard to believe only a week has passed since we integrated the project blog into Trac – with the Timeline view displaying both source code changes and blog entries, it has become really convenient to just keep Trac open and refresh it whenever we want an update on the state of the project. The first thing that bothered me was the looks – Trac comes with a lovely warm-coloured skin, and the Trac team has done an excellent job designing the interface, but next to the rest of our web-based tools (which all conform to the same colour scheme as this very blog) Trac just looked a bit out of place. Also, with so many Trac installations on the web, it started to get a bit confusing. I thought I’d have a stab at customizing Trac’s look.

It ain’t no sin, to take off your skin…

First, I grabbed the logo and favourites icon from our website, placed them in the htdocs/ directory of our Trac installation, and changed the entries pointing to them in the Trac configuration file.

In order to change the colour scheme I changed the template site_css.cs in the project’s templates directory, adding a directive to pull in a CSS file I had prepared based on our standard CSS (the template is pulled into each and every page that Trac renders, so it’s better not to put the CSS directly in it – a separate CSS file allows browsers to cache it). Getting the CSS selectors right was mostly straightforward, though I do think the HTML templates could have been better designed for customization.

Getting rid of the ticketing system

While we already make use of Trac’s source browser, timeline and wiki, we don’t need the ticketing system (we use Bugzilla for bug-tracking). I wondered whether I could make Trac hide the navigation and search items for tickets. It turns out to be easier than I hoped – since all Trac components inherit from the Component class, it’s easy to tell Trac which components to load based on the package they’re in. All I had to do was add the following to trac.ini:

[components]
trac.ticket.* = disabled

With View Tickets, New Ticket and Roadmap gone from the Trac menu the system looks much less cluttered, and there’s no risk that anyone will try to use these features, which are incompatible with our existing bug-tracking system.

Bugs, not Tickets

I still found myself switching between Trac and Bugzilla all the time, and that can be very frustrating – so frustrating it can ruin even days like this, when the weather is so fine. Something had to be done – it was time to integrate our project’s bug-list from Bugzilla into Trac.

Bugzilla is notorious for having very sparse documentation, but the clues are there if you want to read them. For every bug-list you get from Bugzilla you can pull an RSS feed, and for the detailed bug views you can request the data in Bugzilla’s proprietary XML format. Retrieving information from Bugzilla is possible using a combination of the two.

First you must authenticate – Bugzilla uses cookies to keep track of your login, so it’s easy to fool it by posting the username and password to the login form, capturing the cookie and sending it back to Bugzilla in all future requests.

To search for bugs I build a query string (based on what I learned from playing with the standard web interface) and add the parameter ctype=rss. I parse the result with feedparser and extract the id of each bug in the result by parsing its URL.

With the list of bug ids I can initiate a call to show_bug.cgi, passing the ids and the parameter ctype=xml. I parse the result using xmltramp and iterate over it, building a neat little list of dictionaries which can be easily used from Python code and from templates.
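Roughly, the fetch looks like this (a sketch: the server URL, credentials and product name are placeholders, error handling is omitted, and the login endpoint and field names are the ones Bugzilla used around version 2.x – adjust for your installation):

import urllib, urllib2, cookielib
import feedparser, xmltramp

BUGZILLA = 'http://bugzilla.example.com'   # placeholder

# log in by posting to the login form; the cookie jar captures
# Bugzilla's login cookies and replays them on every later request
jar = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(jar))
opener.open(BUGZILLA + '/query.cgi',
            urllib.urlencode({'Bugzilla_login': 'me@example.com',
                              'Bugzilla_password': 'secret',
                              'GoAheadAndLogIn': '1'}))

# a bug-list query with ctype=rss comes back as a feed of matching bugs
query = urllib.urlencode({'product': 'MyProject', 'ctype': 'rss'})
feed = feedparser.parse(opener.open(BUGZILLA + '/buglist.cgi?' + query).read())
ids = [e.link.split('=')[-1] for e in feed.entries]   # id from each bug URL

# show_bug.cgi with ctype=xml returns the details of every listed bug
xml = opener.open(BUGZILLA + '/show_bug.cgi?ctype=xml&' +
                  '&'.join('id=%s' % i for i in ids)).read()
doc = xmltramp.parse(xml)
bugs = [{'id': str(b.bug_id), 'summary': str(b.short_desc)}
        for b in doc['bug':]]             # xmltramp: all <bug> children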

Integration with Trac

As we saw last week, integrating new functionality into Trac is easy peasy – all you have to do is write classes that inherit from Component and implement interfaces. The documentation is not great, but the Trac source code is very easy to read and pretty much self-documenting.

First, I added a new navigation item (implementing INavigationContributor) which reads Bugs and points to /bugs. I added a new template (which I placed inside my Python package) and implemented IRequestHandler so that it reads all of the project’s bugs out of Bugzilla and uses the template to render them, linking each bug to its corresponding page on Bugzilla.
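In outline (a sketch against Trac 0.10-era import paths and APIs; fetch_bugs stands for the Bugzilla-scraping code above, and bugs.cs is the ClearSilver template):

from trac.core import Component, implements
from trac.util import Markup
from trac.web.main import IRequestHandler
from trac.web.chrome import INavigationContributor

class BugzillaModule(Component):
    implements(INavigationContributor, IRequestHandler)

    # INavigationContributor: a "Bugs" item in the main navigation bar
    def get_active_navigation_item(self, req):
        return 'bugs'

    def get_navigation_items(self, req):
        yield ('mainnav', 'bugs',
               Markup('<a href="%s">Bugs</a>' % self.env.href.bugs()))

    # IRequestHandler: render the bug list at /bugs
    def match_request(self, req):
        return req.path_info == '/bugs'

    def process_request(self, req):
        req.hdf['bugs'] = fetch_bugs()   # the list of dictionaries above
        return 'bugs.cs', None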

One of the advantages of having a centralized system for project management is that you can search over all of its components. By implementing ISearchSource I could provide a hook into Bugzilla’s search – simply forwarding the search terms to the short_desc and long_desc parameters in Bugzilla gives pretty good coverage.

Finally, bugs are a moving target – they keep changing as participants in the project add comments and change the bug’s status – so it would be useful to see these changes interleaved with the rest of the chronological list in the timeline view. After last week’s little adventure with the blog I am already an ITimelineEventProvider pro: it only took a couple of minutes before I had bug changes in the list.

The code

You can get the plugin from here. It’s buggy, and extremely inefficient: it authenticates before each and every request, making the interface to Bugzilla even slower than it needs to be. The entries in the timeline indicate that a bug changed, but not why it changed – you’ll have to follow the link to Bugzilla if you’re curious. The bug list doesn’t sort the bugs by anything, doesn’t allow any report except the list of all bugs, and it doesn’t look very nice either. If you do decide to give it a try, don’t forget to send us some patches :)

Happy Trac(k)ing!

by Tom Berger on 27/11/06

Keeping Trac of the Project Blog

Software Project Management

In order to manage projects efficiently and control the quality of our software, we use a combination of free software tools and ad-hoc bespoke solutions we developed for gluing them together. We use Bugzilla to track bugs and tasks, CVS, Subversion or Darcs for source revision control, Pyle (TonyG’s WikiWiki) for knowledge retention and collaboration, CVSZilla for matching source checkins with bugs, our TimeTracker and a collection of scripts that make it all work together.

When we started setting this system up, this type of integration was not common in software development. We realized that this must be a requirement shared by many software developers, but a centralized tool had failed to emerge.

Enter Trac

A few months ago we started hearing about Trac, a free software management hub which does most of what we’d like to be able to do. Trac combines a simple issue-tracking system, a wiki and a source browser which integrates perfectly with Subversion. Unfortunately, the more we looked at Trac the more we realized that while the potential of such a tool is great, Trac itself is way too simplistic for us to be able to abandon our carefully fine-tuned system for it.

- Source Control: Trac uses Subversion, but we’re still using CVS on most projects, mostly because we are carefully watching the recent developments in the free source control arena. Subversion is becoming the default tool for CVS survivors, but there are now many tools on offer.

- Wiki: Trac’s wiki is cool, but Pyle (and especially the stack of extensions we developed for it on a by-need basis) still does lots of things we can’t get done with a generic wiki.

- Bug / Issue Tracking: This is probably where Trac falls shortest. Bugzilla is a very powerful tool. Sometimes it gets in your way with cumbersome management, but we already rely on so many of its features to facilitate our work-flow that it would be impossible for us to switch to anything less powerful.

- Permissions: Trac’s permissions system is very simple, and for a commercial company like us that often works on projects with many partners it is just not enough.

Experimenting with Trac

For a recent project we decided to use Subversion as our source repository. Subversion was easy to start working with, and it allowed our partners, who are used to working with Subversion, to work on the project effectively. One thing we found missing was a good repository browser. Sure, Subversion works over WebDAV, so you can always point your browser at the repository, but we had got used to a fancy repository browser with syntax highlighting, diffing and a search interface. That’s when we suddenly remembered Trac – maybe we couldn’t use it for the entire tool-chain, but its source browser is certainly superior to anything else we’ve seen. Apart from the source browser itself, Trac provides a timeline view, which aggregates all the activity on the project into a chronological list: changes to the source code and to wiki pages, ticket activity, all in one long list. I welcomed this feature – previously I’d been using a script that emails me about every new checkin, but this is just so much more convenient.

Apart from source control and bug-tracking, we’ve started using blogging software for the management of the project. This was a lucky gamble by MikeB, who identified a hole in our communication / knowledge-retention scheme. Wikis are great, but they fail to represent activity in a chronological manner, and so much in project management is, in fact, a serial process. Blogs also seem to reduce the barriers to participation – for some reason we find it easier to post a new entry to the blog than to start a new wiki page.

Aggregating the Blog with Trac

Soon after I started using Trac’s timeline I realized that I was now following two lists: one page displays the repository checkins, another displays the free-text entries people on the project posted to the blog. The two lists often correlate (for example, someone will make a change to the way some module works, commit the work, then post an explanation to the blog). Clearly, it would be useful to combine the two lists into one.

With free software there’s only one way to find out – I downloaded the source code for Trac and decided to see how difficult it would be to extend it to grab feeds from our blogs. If Trac is easy to tweak, I thought, then maybe we do have a chance of integrating it into our own system. It turns out to be really easy – the Trac developers anticipated this need, and as of version 0.9 Trac is built on a components architecture, with a standard format for writing plugins and distributing them.

I decided to use the excellent Universal Feed Parser library for reading the feeds, and to implement the ITimelineEventProvider interface[1], wrapping entries from the feed as Trac timeline entries.
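The core of it looks something like this (a sketch against the Trac 0.10-era interface, where get_timeline_events receives a time window and yields one tuple per event; the feed URL is a placeholder):

import time
import feedparser

from trac.core import Component, implements
from trac.Timeline import ITimelineEventProvider

FEED_URLS = ['http://blogs.example.com/project/feed/']   # placeholder

class FeedTimelineModule(Component):
    implements(ITimelineEventProvider)

    def get_timeline_filters(self, req):
        # adds a "Blog posts" checkbox to the timeline's filter list
        yield ('blog', 'Blog posts')

    def get_timeline_events(self, req, start, stop, filters):
        if 'blog' not in filters:
            return
        for url in FEED_URLS:
            feed = feedparser.parse(url)   # no caching -- see the caveat below
            for entry in feed.entries:
                date = time.mktime(entry.updated_parsed)
                if start <= date <= stop:
                    yield ('blog', entry.link, entry.title, date,
                           entry.get('author', ''), entry.get('summary', ''))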

The rest was entirely unexciting: after an hour I had the plugin written and tested (~50 lines of code, that’s all it takes). After another four hours I managed to install it on our project machine, get disappointed (because it didn’t work), find out that the version of Trac we were using was ancient[2], from before Trac even had plugins, install from source, make everything work again and upgrade the database.

You can get the plugin here. Beware – the plugin doesn’t do any caching at all, and will request the feeds again and again every time you refresh the Trac timeline. I hope to fix this soon, but until then you should avoid using it to import public feeds from high-traffic sites – if you don’t get banned you’ll at least earn yourself a few enemies.

Conclusions

- Trac is the best thing that has happened to humanity since the cultivation of chickpeas. Finally software project management has a centralized hub almost anyone can use to make the process more effective.

- Trac is beautifully designed and implemented. Extending it is easy and there’s a strong community of developers and users. I will continue to push for faster adoption of Trac, now that I know that it can be made to do whatever we want it to. It’s so much better to invest in extending a well supported product than writing yet another glue script.


[1] Interfaces in Python?, I hear you say… Python doesn’t have interfaces as part of the language (in the way that Java or C# does), but more and more Python projects include some kind of interfaces, leveraging the language’s reflective facilities to support contracts. Will the Python community ever standardize on the way it does interfaces, either by including them in the language or by using a common library for implementing them?


[2] If you are the nostalgic type, try running Debian Stable on your computers – you’ll get a nice reminder of what the free software world looked like a few years ago. It can be frustrating, and it can make you appreciate more up-to-date operating systems even more.

by Tom Berger on 20/11/06

LShift and Habitat win Retail Systems Award



On Tuesday evening, Stuart Mottram and Michael Bridgen from LShift attended the Retail Systems Awards ceremony at the Grosvenor Hotel, London, along with the Gift Service project team from Habitat. It was a tense evening, but they all managed to resist drumming their hands on the table as the Habitat Gift Service was announced as the winner of the giftcard / loyalty programme of the year category.

The project was up against innovative projects from Sainsbury’s and The Co-Op and so LShift and Habitat were pleased by the extent to which the Gift Service impressed the judges. The award recognises the retailer and agency who can demonstrate use of technology to drive improvements in customer relationship management. The primary criteria for judging are the delivery of definable and significant business benefits, innovation and originality of application, ROI and project management issues such as delivery on time and within budget.

by Administrator on 16/11/06

Web Development with Python

For a new web project we’re working on, we wanted to use a dynamic environment. Initially we considered Ruby on Rails, but after using it for some time decided that the shortcomings of the framework and the language outweigh the benefits. We resolved to use Python, a language we feel very comfortable with, and I went on to test several Pythonic web components, in particular the stuff that gets bundled with TurboGears, and web.py (we briefly considered Django, the web framework adopted by Python’s creator Guido van Rossum, but identified all of the same problems we had with Rails, so we didn’t spend too much time with it).

TurboGears

TurboGears is a collection of Python web components, rather than a top-down complete framework like RoR. It uses CherryPy for the controller (with WSGI for the low-level infrastructure), SQLObject for database access and manipulation, Kid for templating and MochiKit as its main Javascript library. On top of that, TurboGears provides several integration tools which are supposed to turn this collection into a ‘framework’ (a skeleton generator, some simple web-based administration, a shell preloaded with the project modules, etc).

My experience working with TurboGears has been much better than working with RoR. Partly, that’s because I know and like Python better than Ruby, but mostly the approach is saner and much less demanding – there’s very little magic, and it’s quite easy to understand how to go about implementing your ideas. With TurboGears I was able to complete the same amount of work I did in my RoR test in ~20% of the time!

I wasn’t very impressed with the framework / integration parts of TurboGears. The quickstart tool (for building the skeleton) did some stuff that was quite obvious and that I could have completed myself with little effort, and some stuff that I didn’t expect or want it to do (just like RoR). The shell is convenient, but again, I felt that I paid a price in flexibility without gaining much – after all, it’s just the Python REPL with some modules imported. The build and deployment tools are a great idea, but they don’t seem to be powerful enough, and extending them may or may not be easier than just doing it ourselves with a build tool like SCons or Make, or simply writing straight Python scripts.

Components

Data Access: SQLObject – When we first tried out SQLObject we found it enticing – the interface is simple and easy to understand, and the features it provides looked like a perfect match for our needs. After working with SQLObject for a while, however, we were shocked to discover that it’s hardly adequate. SQLObject’s model of working with databases is naive, to say the least – for every attribute you set (on a data-bound object) the mapping issues a separate SQL UPDATE statement, and what’s worse, if you plan to use SQLObject’s cache (which is the only way to use it efficiently to query the database) you can simply forget about transactions. I find it hard to understand how TurboGears can get away with that – after all, with optimization and ACIDity thrown out of the window, it’s hard to see in what way saving objects to a relational database is any better than simply pickling them.

Templating: Kid – Kid is an attribute language (like ZPT) – it allows you to generate XML by inserting special attributes instructing the processor how to do dynamic things (like looping, inserting values and so on). Unlike ZPT, Kid is much more pythonic – almost all constructs map directly to control flow structures known from Python. I was able to start working with Kid without having to read any of the documentation and got good results immediately. (Kid is now slowly being replaced by Genshi as the state-of-the-art in Python XML templating, but for our needs we felt that Kid is just good enough).

Javascript / AJAX: MochiKit – I had quite a lot of previous experience with MochiKit, so I didn’t bother testing it in this context. MochiKit shines in comparison to other JS libraries (Prototype, Dojo, YUI) – it is very high quality, has excellent documentation, and anyone with experience of Python or any functional programming language will feel very comfortable using it. I prefer it to any other JS library.

Controller: CherryPy – I didn’t like CherryPy, although it did look adequate. In particular, CherryPy relies on mapping Python objects and attributes to HTTP access more than I would like it to. My experience working with web.py was better (the library is richer and provides some neat facilities for rapid development, and the mapping of requests to handlers is saner).

We ended up deciding to mix and match: we use web.py as the controller; instead of an ORM we design our own model objects and connect them to the database using web.py’s data access facilities; and we use Kid for templating, plus some other Python libraries to support more specific features of the website.
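For flavour, here is roughly what a minimal web.py application looked like at the time (handler names are illustrative; in the web.py of that era, URL patterns map straight onto handler classes, and regex capture groups arrive as method arguments):

import web

urls = (
    '/',            'index',
    '/item/(\d+)',  'item',
)

class index:
    def GET(self):
        print "Hello, world!"

class item:
    def GET(self, item_id):
        # the (\d+) captured from the URL is passed in as item_id
        print "item number", item_id

if __name__ == '__main__':
    web.run(urls, globals())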

Conclusions

  • Dynamic languages and frameworks are the hot thing in web development for a good reason, but RoR’s lead is not wholly justified.[1]
  • Python has a huge collection of frameworks and components for web development, but the quality varies quite a lot. Choosing the right mix of tools and libraries can be difficult, but being able to mix and match gives you great power and flexibility.
  • TurboGears is a nice take on this idea, but some of the choices it makes are unsatisfactory.
  • Web.py is an excellent start, but to do useful things you’ll have to augment it with bits from other sources.

[1] I still owe you a detailed critique of RoR – possibly in a future post. In the meantime, don’t take my word for it – lots of people swear by it – you too may find it to be the right tool for your project.

by Tom Berger on 15/11/06

We Want to Recruit You

LShift need to recruit a number of senior developers. If you want to join one of the most skilled and interesting technical teams around, how about submitting a CV and some code samples?

As a company, our outlook is moulded entirely around the principle of recruiting only candidates who combine breadth and depth of outlook and experience. We do not recruit people to slot into a particular technical role, but only all-rounders, familiar with the broadest range of operating environments, programming languages, etc., combining this with a high level of software engineering expertise. Our technical staff are all required to act as client-facing Lead Developers, managing all technical aspects of a project from its inception through to delivery.

If you think that you meet the high standards we expect from a potential employee, read how to apply and complete our online application form.

by Administrator on 09/11/06

An AJAX Erlang Jukebox

Erlang Jukebox Screenshot

Sometime around the beginning of July I rewrote our internal jukebox in Erlang. It’s taken me four months to get a round tuit, but new stock has just arrived: here’s the code for our AJAX jukebox web-application, as a tarball. (There’s also a mercurial repository: hg clone http://hg.opensource.lshift.net/erlang-jukebox/.) Click on the image for a screenshot.

To run it, you will need Erlang, Yaws (the Erlang webserver), a modern browser, mpg123, ogg123 (from vorbis-tools), and some MP3 or OGG files to listen to.

I’ve made a start on a bit of documentation and design rationale (http://hg.opensource.lshift.net/erlang-jukebox/raw-file/default/doc/index.html). Here are a few highlights for the curious:

* You point the jukebox at one or more root URLs, which it then spiders, collecting URLs for MP3 and OGG files, which it puts into a simple flat-file database. Just expose, say, your iTunes folder via Apache, point the Jukebox at it, and you’re away.

* It relies on mpg123’s and ogg123’s support for playing HTTP-streamed MP3 and OGG files, respectively, rather than retrieving or playing the media itself.

* The user interface is completely written in HTML+Javascript, using prototype for its event binding and XMLHttpRequest support.

* The server side of the application communicates with the user interface solely via JSON-RPC.

* Erlang made a great platform for the server side of the application. Its support for clean, simple concurrency let me design the program in a very natural way.

As part of the development of the program, I built a few stand-alone modules that others might be interested in reusing:

* the execdaemon (http://hg.opensource.lshift.net/erlang-jukebox/file/default/priv/execdaemon/) and its associated Erlang controller module (http://hg.opensource.lshift.net/erlang-jukebox/file/default/src/execdaemon.erl) are a filthy hack I threw together to get better than the built-in support for POSIX process control and signalling from Erlang.

* jsonrpc.js (http://hg.opensource.lshift.net/erlang-jukebox/file/default/priv/server_root/htdocs/jsonrpc.js) is a tiny, simple layer atop json.js and prototype that supports basic parsing of Simple Method Description (SMD) files, generating a Javascript client proxy object for each JSON-RPC service described by an SMD.

[Update: fixed an issue with json.js, tweaked the use of screen real-estate, and now seems to work with Safari, IE6, and Opera. I've changed the tarball link above to point to the new version.]

[Update: fixed a couple of links that had broken over time as the darcs repository evolved.]

[Update: moved from darcs to mercurial, and altered the links to reflect the change.]

by tonyg on 06/11/06
