technology from back to front

Archive for the ‘Howto’ Category

Things I wish I’d known about Google Docs

I have had cause to write a lot of Google Docs recently, which has furnished me with a stock of interesting observations that others might find helpful. Without further ado…

It doesn’t auto-number headers

I typically want my business-like docs to have numbered headings, so an H3 might be “2.4.1. Architecture considerations”. Word can just do this automatically and keep them up to date with the changing structure of your doc. Google Docs can’t, though there is a free add-on called “Table of contents” which performs a dual duty here:

  • It shows the structure of your document's headers in a sidebar, which is incredibly handy for reviewing that structure and for navigating the doc (click to jump).
  • It can optionally renumber the headers, though it only does this when explicitly invoked via a button, which you have to remember to do after inserting new headings or restructuring. The numbering is just inserted as ordinary text in the doc as part of each header so it’s crude and non-semantic.

Rather surprisingly, the add-on can be very slow indeed to do its thing – even clicking on a link often took 3+ seconds to actually jump to the location in a 27 page doc. This is hard to fathom, but most docs are fairly short and it behaves acceptably well. Add-ons are trivially easy to install – just go to the Add-ons menu in your doc – so I would recommend everyone to dive in. Once you have used this particular add-on once, it’s two clicks to turn it on for any doc from the menu.

Printing is a lame experience

In Safari when you hit cmd-P to print, nothing happens. This leaves you a little bewildered, so you try again, and then you try invoking the menu item with the mouse rather than using the keyboard shortcut. A few seconds after the initial attempt, you might notice a little icon swoop up to the downloads button in the Safari toolbar – and when you click up there to check you’ll find each of your print attempts have caused it to download a PDF of the doc, after a multi-second wait in each case, naturally. Then you curse, open the PDF in Preview and print it from there.

I suspect it’s a lot better in Chrome, but for my money there’s no excusing such a poor experience in Safari. At the very least it should give feedback to show that it’s received your request to print and is working on it, and then make it clear what it’s actually done.

You can’t have mixed orientation pages

I wanted to include a landscape format diagram on its own page. Tough – all pages in the doc must be the same orientation.

Pasting from a Google Spreadsheet doesn’t maintain formatting

This is a trivial little thing, but annoying: if I paste a table (of estimates breakdowns, say) from a Google Spreadsheet into a Google Doc, it drops some of the text alignment formatting – so cells that were left-aligned become right-aligned.

Really it’s a shame I can’t embed a Spreadsheet directly in the doc, especially where I just want to get totals added up for me.

It doesn’t have a concept of appendices

Then again, I always found Word rather wanting in handling appendices nicely.

Drawings don’t support gradients

I was shocked and dismayed (again) to see no gradients in Google Drawings. The whole story of these apps seems to be excruciating simplicity, which is great in a way, but the reluctance to gradually increase the feature set puzzles me when they’re genuinely trying to compete with Word.

In one case I resorted to rolling my own gradients by duplicating and offsetting a shape repeatedly with very low opacity (so the opacities gradually stack up), then grouping the results. You only want to try this in extreme circumstances where it’s really important to you.

Basically, it’s pretty awesome

All of those irritations aside, it’s still my go-to tool for bashing out docs, partly because I don’t have Word and am not in a hurry to acquire it. Learn the keyboard shortcuts, use the Table of contents add-on, and you can be quite effective. I suppose the simplicity may even help to concentrate on the content and structure.

That said, an online editor that had the same cloud storage, collaboration and a much improved feature set, would be a big draw. Frankly it’s probably out there if only I look, but Google have done just enough to grab and retain the market.

Sam Carr

Two Magnolias, one container

We are using Magnolia in a number of projects here at LShift. I have been feeling that Magnolia has a simple way to do most things, but often there are a number of other plausible alternatives that gradually lead you into wasting enormous amounts of time.

Here I want to present a simple way to get both author and public instances of Magnolia running in your dev environment in the same container. It may seem very obvious. If so, good. This was not the first way I tried, and it cost me a lot of time.

We will be aiming for:

  1. Easily deploying Magnolia onto a stage or production environment — one file, one or two configuration parameters only.
  2. Making it easy for a tester to launch local public and author instances of Magnolia that talk to each other correctly.
  3. Making it easy for a developer to debug Magnolia, having both instances running under the control of the IDE.


I will be assuming that you have a parent project with a child project that represents your webapp. I also will assume that you have copied the contents of src/main/webapp/WEB-INF/config from the magnolia-empty-webapp project into your own webapp project. The source for this is in the ce-bundle, but assuming you have magnolia-empty-webapp as a dependency (as recommended) you should be able to pick it up from your target directory.

I will be using Tomcat 7 as Tomcat is recommended by Magnolia and 7 is the latest stable version at the time of writing.

Deploying Magnolia to Stage or Production environments

For deployment to stage or production you don’t want both author and public deployed in the same container, or even on the same machine; so we only need to be able to configure a single running instance to be either author or public.

This is quite simple and well documented. In your webapp project, open your src/main/webapp/WEB-INF/web.xml (that you copied from the empty webapp project as described above) and look for the lines:
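In the stock empty webapp those lines are the magnolia.initialization.file context-param. Roughly, as a sketch from memory — the exact list of entries varies between Magnolia versions:

```xml
<context-param>
  <param-name>magnolia.initialization.file</param-name>
  <param-value>
    WEB-INF/config/${servername}/${contextPath}/
    WEB-INF/config/${servername}/
    WEB-INF/config/${contextPath}/
    WEB-INF/config/default/
    WEB-INF/config/
  </param-value>
</context-param>
```

Magnolia tries each of these locations in turn until it finds a properties file.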


You will need to add your own line at the top of the <param-value> section:
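Something like the following — note that the exact substitution syntax for reading an environment variable here is an assumption on my part; check the property-resolution syntax your Magnolia version supports:

```xml
WEB-INF/config/${env/instanceName}/
```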


Then when you deploy your WAR, you can simply set the instanceName environment variable to magnoliaAuthor or magnoliaPublic depending on what type of instance you want. As you can see from the fragment of web.xml above, this will make the settings in src/main/webapp/WEB-INF/config/magnoliaAuthor/ or src/main/webapp/WEB-INF/config/magnoliaPublic/ active, respectively. Ultimately you will want to make more files in more subdirectories (called, perhaps, stageAuthor, productionPublic and so on) with appropriate settings for those environments and you can simply make instanceName refer to the appropriate subdirectory.

Local Magnolia from the command line

Now, it would seem plausible that this method can be made to make your local testing environment work. Plausible, but wrong. This is the difficult way. You’ll start writing your context.xml files, then you’ll need a server.xml file, then before you know it you’ll be building your own Tomcat so that you can manage it all.

The “secret” is to use the fact that the web.xml already refers to the context path, in the form of the line:
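That is the ${contextPath} entry in the magnolia.initialization.file param-value — a sketch (the surrounding entries vary by version):

```xml
WEB-INF/config/${contextPath}/
```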


(as well as in another line which we won’t concern ourselves with). This means that, instead of using an environment variable, you can deploy the same WAR file to two different context paths and Magnolia will set itself up differently for each. And if you choose the paths /magnoliaAuthor and /magnoliaPublic you will automatically pick up the properties files provided by the empty webapp and all will be fine — Magnolia even sets up the author instance to point at http://localhost:8080/magnoliaPublic by default, so you won’t have to configure it yourself!

Well, actually, it’s not all fine. If you try this, you’ll find that one of your instances will refuse to start, complaining that its repository is already locked. Of course, they are trying to use the same repository. Fix this by adding a line similar to the following to magnoliaPublic/magnolia.properties:
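A sketch of the property in question — magnolia.repositories.home is the real Magnolia property name, but the subdirectory name here is just an illustration:

```properties
magnolia.repositories.home=${magnolia.home}/repositories-public
```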


The name of the subdirectory is not important. Note that, as it stands, this will change where the stage and production deployed Magnolias you configured above store their data. If that bothers you, now might be a good time to make your productionPublic/magnolia.properties and similar files.

So, how do we get that running painlessly so that your tester doesn’t keep asking you how to do it?

Add the Tomcat Maven plugin to your webapp’s pom.xml, and configure it to launch your WAR twice on two different context paths:
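A sketch of that configuration, from memory — the coordinates are placeholders, and you should check the `<webapps>` parameter of your tomcat7-maven-plugin version, which allows additional deployments of the same WAR at extra context paths:

```xml
<plugin>
  <groupId>org.apache.tomcat.maven</groupId>
  <artifactId>tomcat7-maven-plugin</artifactId>
  <version>2.2</version>
  <configuration>
    <!-- the project's own WAR deploys here -->
    <path>/magnoliaAuthor</path>
    <webapps>
      <!-- the same WAR again, at a second context path -->
      <webapp>
        <groupId>com.example</groupId>
        <artifactId>my-webapp</artifactId>
        <version>${project.version}</version>
        <type>war</type>
        <asWebapp>true</asWebapp>
        <contextPath>/magnoliaPublic</contextPath>
      </webapp>
    </webapps>
  </configuration>
</plugin>
```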


Replacing the group and artifact ids with your own webapp’s.

Now you can run your Magnolia simply with:

mvn tomcat7:run-war

For reasons best known to the Tomcat plugin, boring old mvn tomcat7:run doesn’t work — deploying only one Magnolia in its default location. Sorry.

The instances are available, of course, at http://localhost:8080/magnoliaAuthor and http://localhost:8080/magnoliaPublic.

Local Magnolia from your IDE

Now you’re on the home straight. Here’s how I configure the Tomcat plugin in Eclipse:

Firstly, you need to get Eclipse to know about Tomcat 7. The foolproof way to do this is as follows: Window -> Preferences -> Server -> Runtime Environments -> Add… -> Apache Tomcat v7.0 -> Next. Now give it a location that is writable by you in the “Tomcat installation directory” box and click “Download and Install…”; using your pre-existing Tomcat might not work if it isn’t laid out in the way Eclipse expects. Now Finish and open the Servers view.

You can now add a new Tomcat 7 server and double-click on it. Tick “Publish module contexts to separate XML files”, set the start timeout to something large like 480 seconds, and in the modules tab add your webapp project twice; once with the path /magnoliaAuthor and once with the path /magnoliaPublic.

Now you can launch and debug your two instances of Magnolia from within your IDE!

Tim Band

Using Debian Multiarch for cross-compiling

I’ve recently acquired a Raspberry Pi, and was considering using it for SNES
emulation. However, it turns out that Zsnes is x86-only, and that Snes9x got
kicked out of Debian a while back for having an annoying “no-commercial use”
license, so we’re into the compile-it-yourself options. As Snes9x is a
configure/makefile-type project, I should in theory be able to just compile it
on the Pi directly, but then we hit the problem that it hasn’t got enough RAM
to be able to do all the compiling… fine, fine, I’ll go back into the messy
world of cross-compiling.
Read more…

Tom Parker

Publishing your mercurial-server repositories to the Web

I got a couple of queries recently on how to make your mercurial-server repositories publicly readable over HTTP. Happily this isn’t hard to do, and doesn’t really touch on mercurial-server itself. Here’s how we do it on our Debian systems; in what follows I assume that you have installed mercurial-server on, and that you’re not already using that machine as a web server for anything else. First install these packages; note that they tend to have a lot of stuff you don’t need marked as recommended, so don’t install those things:

apt-get --no-install-recommends install apache2 libapache2-mod-fcgid python-flup

Create the following four files:


/etc/mercurial-server/hgweb.config:

[collections]
/var/lib/mercurial-server/repos = /var/lib/mercurial-server/repos


/etc/mercurial-server/hgweb.hgrc:

[web]
style = gitweb
allow_archive = bz2 gz zip
baseurl =
maxchanges = 200


/etc/mercurial-server/hgwebdir.fcgi:

#!/usr/bin/env python

from mercurial import demandimport; demandimport.enable()

import os
os.environ["HGENCODING"] = "UTF-8"
os.environ["HGRCPATH"] = "/etc/mercurial-server/hgweb.hgrc"

from mercurial.hgweb.hgwebdir_mod import hgwebdir
from mercurial.hgweb.request import wsgiapplication
from flup.server.fcgi import WSGIServer

def make_web_app():
    return hgwebdir("/etc/mercurial-server/hgweb.config")

WSGIServer(wsgiapplication(make_web_app)).run()


/etc/apache2/sites-available/hg:

<VirtualHost *>
    AddHandler fcgid-script .fcgi
    ScriptAlias / /etc/mercurial-server/hgwebdir.fcgi/
    ErrorLog /var/log/apache2/hg/error.log
    LogLevel warn
    CustomLog /var/log/apache2/hg/access.log combined
</VirtualHost>

Finally run these commands as root:

chmod +x /etc/mercurial-server/hgwebdir.fcgi
mkdir -p /var/log/apache2/hg
cd /etc/apache2/sites-enabled
rm 000-default
ln -s ../sites-available/hg
/etc/init.d/apache2 reload

Your files should now be served at . Sadly because of a design flaw in hgwebdir, there’s no easy way to get Apache to handle the static files it needs, but these are pretty small so there’s no harm in letting hgwebdir handle them. The “rm 000-default” thing seems pretty undesirable, but without it I can’t seem to get this recipe to work.

I’ve chosen FastCGI as the connector. This has the advantage that

  • unlike CGI, it doesn’t fork a new handler on every request
  • unlike mod_python, it keeps your Mercurial handler separate from your web server
  • unlike SCGI, it will automatically start the service for you if it’s not already running, which is a massive convenience

I’m not aware of any other way of working that offers all three advantages.

As soon as a version of lighttpd with this bug fixed makes it into Debian, I’ll add my recipe for that.

Paul Crowley

Network server programming with SML/NJ and CML

My experience with SML/NJ has been almost uniformly positive, over the years. We used it extensively in a previous project to write a compiler (targeting the .NET CLR) for a pi-calculus-based language, and it was fantastic. One drawback with it, though, is the lack of documentation. Finding out how to (a) compile for and (b) use CML takes real stamina. I’ve only just now, after several hours poring over webpages, mailing lists, and library source code, gotten to the point where I have a running socket server.

## Download source code, building, and running

The following example comprises a `.cm` file for building the program, and the `.sml` file itself. The complete sources:

* the `.cm` file
* `test.sml`


Running the following command compiles the project:

ml-build Testprog.main

The `ml-build` output is a heap file, with a file extension dependent on your architecture and operating system. For me, right now, it produces `test.x86-darwin`. To run the program:

sml @SMLload=test.x86-darwin

substituting the name of your `ml-build`-produced heap file as necessary.

On Ubuntu, you will need to have run `apt-get install smlnj libcml-smlnj libcmlutil-smlnj` to ensure both SML/NJ and CML are present on your system.

## The build control file

The `.cm` file contains

Group is
  $cml/basis.cm
  $cml/cml.cm
  $cml-lib/smlnj-lib.cm
  test.sml

which instructs the build system to use the CML variants of the basis and the standard SML/NJ library, as well as the core CML library itself and the source code of our program. For more information, see the documentation for CM, the SML/NJ compilation manager.

## The example source code

Turning to `test.sml` now, we first declare the ML structure (module) we’ll be constructing. The structure name is also part of one of the command-line arguments to `ml-build` above, telling it which function to use as the main function for the program.

structure Testprog = struct

Next, we bring the contents of the `TextIO` module into scope. This is necessary in order to use the `print` function with CML; if we use the standard version of `print`, the output is unreliable. The special CML variant is needed. We also declare a local alias `SU` for the global `SockUtil` structure.

open TextIO
structure SU = SockUtil

ML programs end up being written upside down, in a sense, because function definitions need to precede their use (unless mutually-recursive definitions are used). For this reason, the next chunk is `connMain`, the function called in a new lightweight thread when an inbound TCP connection has been accepted. Here, it simply prints out a countdown from 10 over the course of the next five seconds or so, before closing the socket. Multiple connections end up running connMain in independent threads of control, leading automatically to the natural and obvious interleaving of outputs on concurrent connections.

fun connMain s =
    let fun count 0 = SU.sendStr (s, "Bye!\r\n")
          | count n = (SU.sendStr (s, "Hello " ^ (Int.toString n) ^ "\r\n");
                       CML.sync (CML.timeOutEvt (Time.fromReal 0.5));
                       count (n - 1))
    in
        count 10;
        print "Closing the connection.\n";
        Socket.close s
    end

The function that depends on `connMain` is the accept loop, which repeatedly accepts a connection and spawns a connection thread for it.

fun acceptLoop server_sock =
    let val (s, _) = Socket.accept server_sock
    in
        print "Accepted a connection.\n";
        CML.spawn (fn () => connMain s);
        acceptLoop server_sock
    end

The next function is the primordial CML thread, responsible for creating the TCP server socket and entering the accept loop. We set `SO_REUSEADDR` on the socket, listen on port 8989 with a connection backlog of 5, and enter the accept loop.

fun cml_main (program_name, arglist) =
    let val s = INetSock.TCP.socket ()
    in
        Socket.Ctl.setREUSEADDR (s, true);
        Socket.bind (s, INetSock.any 8989);
        Socket.listen (s, 5);
        print "Entering accept loop...\n";
        acceptLoop s
    end

Finally, the function we told `ml-build` to use as the main entry point of the program. The only thing we do here is disable SIGPIPE (otherwise we get rudely killed if a remote client’s socket closes!) and start CML’s scheduler running with a primordial thread function. When the scheduler decides that everything is over and the program is complete, it returns control to us. (The lone `end` closes the `struct` definition way back at the top of the file.)

fun main (program_name, arglist) =
    (UnixSignals.setHandler (UnixSignals.sigPIPE, UnixSignals.IGNORE);
     RunCML.doit (fn () => cml_main (program_name, arglist), NONE))

end



HTML email from Squeak using Seaside

Recently, as part of a Seaside-based application running within Squeak, I wanted to send HTML-formatted notification emails when certain things happened within the application.

It turns out that Squeak has a built-in SMTP client library, which with a small amount of glue can be used with Seaside’s HTML renderer to send HTML formatted emails using code similar to that used when rendering Seaside website components.

sendHtmlEmailTo: toEmailAddressString
  from: fromEmailAddressString
  subject: subjectString
  with: aBlock
    | m b bodyHtml |
    m := MailMessage empty.
    m setField: 'from' toString: fromEmailAddressString.
    m setField: 'to' toString: toEmailAddressString.
    m setField: 'subject' toString: subjectString.
    m setField: 'content-type' toString: 'text/html'.
    b := WAHtmlBuilder new.
    b canvasClass: WARenderCanvas.
    b rootClass: WAHtmlRoot.
    bodyHtml := b render: aBlock.
    m body: (MIMEDocument contentType: 'text/html' content: bodyHtml).
    SMTPClient deliverMailFrom: m from
        to: {m to}
        text: m asSendableText
        usingServer: 'YOUR.SMTP.SERVER.EXAMPLE.COM'

The aBlock argument should be like the body of a WAComponent’s renderContentOn: method. Here’s an example:

  sendHtmlEmailTo: ''
  from: ''
  subject: 'Hello, world'
  with: [:html |
    html heading level3 with: 'This is a heading'.
    html paragraph with: 'Hi there!']

OpenAMQ’s JMS client with RabbitMQ server

OpenAMQ has released their JMS client for using JMS with AMQP-supporting brokers. This afternoon I experimented with getting it running with RabbitMQ.

After a simple, small patch to the JMS client code, to make it work with the AMQP 0-8 spec that RabbitMQ implements (rather than the 0-9 spec that OpenAMQ implements), the basic examples shipped with the JMS client library seemed to work fine. The devil is no doubt in the details, but no problems leapt out at me.

To get it going, I checked it out using Git (`git clone
git://`). Compilation was as simple as running `ant`. Kudos to the OpenAMQ team for making the build process so smooth! (Not to mention writing a great piece of software :-) )

The changes to make it work with AMQP 0-8 were:

– retrieving the 0-8 specification XML

– changing the JMS client library’s `build.xml` file to point to the downloaded file in its `generate.spec` variable

– changing one line of code in `src/org/openamq/client/`: in 0-8, the final `null` argument to `BasicConsumeBody.createAMQFrame` must be omitted

– re-running the `ant` build

After this, and creating a `/test` virtual-host using RabbitMQ’s `rabbitmqctl` program, the OpenAMQ JMS client examples worked fine, as far as I could tell.

rabbitmqctl add_vhost /test
rabbitmqctl set_permissions -p /test guest '.*' '.*' '.*'

You can download the patch file I applied to try it yourself. Note that you’ll need to put the correct location to your downloaded `amqp0-8.xml` file into build.xml.


Where did all my space go?

Over the last little while, I’ve started to suffer from lack of space on the hard disk in my laptop, which is ridiculous, since there’s an 80GB disk in there and there is no way I have that much data I need to hang on to. I decided to do something about it last week. The main part of the problem was to figure out what was eating all the space: du tells you exactly what’s using how much, but it’s hard to get a feel for where your space has gone by scanning through pages of du output. So I built a program to help.

spaceviz is a small Python program that takes the output of du -ak, and builds you a picture and HTML client-side imagemap of your space usage.

Running it against the output of du -ak / showed me very clearly where all the space had gone: not only did I have a few seasons of various TV shows on my disk (which I already knew were there), but I had 11 GB of unneeded gzipped RDF data left over from a project that finished earlier this year (that I had forgotten about). Instant win!
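The core of such a tool is small. Here’s a minimal sketch — in no way the actual spaceviz code — of the first step: parsing `du -ak` output and picking out the biggest entries.

```python
# Sketch only: parse `du -ak` output (size<TAB>path per line) into a
# dict of path -> size in kilobytes, then report the largest entries.

def parse_du(lines):
    """Parse lines of `du -ak` output into {path: size_in_kb}."""
    sizes = {}
    for line in lines:
        line = line.rstrip("\n")
        if not line:
            continue
        size_str, path = line.split("\t", 1)
        sizes[path] = int(size_str)
    return sizes

def biggest(sizes, n=3):
    """Return the n largest entries as (path, kb) tuples, largest first."""
    return sorted(sizes.items(), key=lambda kv: -kv[1])[:n]

sample = [
    "4\t/home/user/notes.txt",
    "11534336\t/home/user/rdf-dumps",
    "11534400\t/home/user",
]
print(biggest(parse_du(sample), 2))
# -> [('/home/user', 11534400), ('/home/user/rdf-dumps', 11534336)]
```

spaceviz goes further by laying these sizes out as nested rectangles in a picture, but the input handling is as simple as this.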

To run it for yourself, check out the mercurial repository, and run

make veryclean all ROOT=/

replacing the ROOT=/ with a definition of ROOT that points at the directory tree you want to generate usage data for. The makefile will take care of running du and for you. Edit the settings for WIDTH and HEIGHT in to change the dimensions of the generated picture.

The program runs not only on Linux without fuss, but also on OS X so long as you have the netpbm port installed to convert the python-generated PPM file to a browser-accessible (and much more efficiently compressed!) PNG.


Listening to your Webcam

Here’s a fun thing:

The Analysis & Resynthesis Sound Spectrograph, or ARSS, is a program that analyses a sound file into a spectrogram and is able to synthesise this spectrogram, or any other user-created image, back into a sound.

Upon discovery of this juicy little tool the other day, Andy and I fell to discussing potential applications. We have a few USB cameras around the office for use with camstream, our little RabbitMQ demo, so we started playing with using the feed of frames from the camera as input to ARSS.

The idea is that a frame captured from the camera can be used as a spectrogram of a few seconds’ worth of audio. While the system is playing through one frame, the next can be captured and processed, ready for playback. This could make an interesting kind of hybrid between dance performance and musical instrument, for example.

We didn’t want to spend a long time programming, so we whipped up a few shell scripts that convert a linux-based, USB-camera-enabled machine into a kind of visual synthesis tool.


Just below is a frame I just captured, and the processed form in which it is sent to ARSS for conversion to audio. Here’s the MP3 of what the frame sounds like.


Each frame is run through ImageMagick’s “charcoal” tool, which does a good job of finding edges in the picture; the result is inverted and passed through a minimum-brightness threshold. The resulting line-art-like frame is run through ARSS to produce a WAV file, which can then be played or converted to mp3.


You will need:

* one Debian, Ubuntu or other linux computer, with a fairly fast CPU (anything newer than ca. 2006 ought to do nicely).
* a USB webcam that you know works with linux.
* a copy of ARSS, compiled and running. Download it here.
* the program “webcam”, available in Debian and Ubuntu with apt-get install webcam, or otherwise as part of xawtv.
* “sox”, via apt-get install sox or the sox homepage.
* “convert”, apt-get install imagemagick or from ImageMagick.


The scripts are crude, but somewhat effective. Three processes run simultaneously, in a loop:

* webcam runs in the background, capturing images as fast as it can, and (over-)writing them to a single file, webcam.jpg.
* a shell script called grabframe runs in a loop, converting webcam.jpg through the pipeline illustrated above to a final wav file.
* a final shell script repeatedly converts the wav file to raw PCM data, and sends it to the sound card.

Here’s the contents of my ~/.webcamrc:

[grab]
delay = 0
text = ""

[ftp]
local = 1
tmp = uploading.jpg
file = webcam.jpg
dir = .
debug = 1

Here’s the grabframe script:



#!/bin/sh
# The parameter values below are illustrative; tune them to taste.
CHARCOAL_WIDTH=2
THRESHOLD="-threshold 30%"
LOG_BASE=2
MIN_FREQ=27.5
MAX_FREQ=20000
PIXELS_PER_SECOND=100

while [ ! -e webcam.jpg ]; do sleep 0.2; done
convert -charcoal $CHARCOAL_WIDTH -negate $THRESHOLD webcam.jpg frame.bmp
mv webcam.jpg frame.jpg
./arss frame.bmp frame.wav.tmp --log-base $LOG_BASE --sine --min-freq $MIN_FREQ --max-freq $MAX_FREQ --pps $PIXELS_PER_SECOND -f 16 --sample-rate 44100
mv frame.wav.tmp frame.wav

You can tweak the parameters and save the script while the whole thing is running, to experiment with different options during playback.

To start things running:

* In shell number one, run "webcam".
* In shell number two, run "while true; do ./grabframe ; done".
* In shell number three, run "(while true; do sox -r 44100 -c 2 -2 -s frame.wav frame.raw; cat frame.raw; done) | play -r 44100 -c 2 -2 -s -t raw -".

That last command repeatedly takes the contents of frame.wav, as output by grabframe, converts it to raw PCM, and pipes it into a long-running play process, which sends the PCM it receives on its standard input out through the sound card.

If you like, you can use esdcat instead of the play command in the pipeline run in shell number three. If you do, you can use extace to draw a spectrogram of the sound that is being played, so you can monitor what’s happening, and close the loop, arriving back at a spectrogram that should look something like the original captured images.


Late-binding with Erlang

Upon browsing the source to the excellent MochiWeb, I came across a call to a function that, when I looked, wasn’t defined anywhere. This, it turns out, was a clue: Erlang has undocumented syntactic support for late-bound method dispatch, i.e. lightweight object-oriented programming!

The following example, myclass.erl, is a parameterized module, a feature that arrived undocumented in a recent Erlang release. Parameterized modules are explored on the ‘net here and here. (The latter link is to a presentation that also covers an even more experimental module-based inheritance mechanism.)

-module(myclass, [Instvar1, Instvar2]).
-export([getInstvar1/0, getInstvar2/0]).
getInstvar1() -> Instvar1.
getInstvar2() -> Instvar2.

“Instances” of the “class” called myclass can be created with myclass:new(A, B) (which is automatically provided by the compiler, and does not appear in the source code), where A and B become values for the variables Instvar1 and Instvar2, which are implicitly scoped across the entirety of the myclass module body, available to all functions defined within it.

The result of a call to the new method is a simple tuple, much like a record, with the module name in the first position, and the instance variable values in order following it.

Eshell V5.6  (abort with ^G)
1> Handle = myclass:new(123, 234).
{myclass,123,234}
2> Handle:getInstvar1().
123
3> Handle:getInstvar2().
234

While this looks really similar to OO dispatch in other languages, it’s actually an extension to Erlang’s regular function call syntax, and works with other variations on that syntax, too:

4> {myclass,123,234}:getInstvar1().
123

The objects that this system provides are pure-functional objects, which is unusual: many object-oriented languages don’t clearly separate the two orthogonal features of late-binding and mutable state. A well-designed language should let you use one without the other, just as Erlang does here: in Erlang, using parameterized modules for method dispatch doesn’t change the way the usual mechanisms for managing mutable state are used. “Instance variables” of parameterized modules are always immutable, and regular state-threading has to be used to get the effects of mutable state.

I’d like to see this feature promoted to first-class, documented, supported status, and I’d also very much like to see it used to structure the standard library. Unfortunately, it’s not yet very well integrated with existing modules like gb_sets, ordsets and sets. For example, here’s what happens when you try it with a simple lists call:

5> lists:append([1, 2], [3, 4]).
[1,2,3,4]
6> {lists, [1, 2]}:append([3, 4]).
[3,4|{lists,[1,2]}]

Not exactly what we were after. (Although it does give a little insight into the current internals of the rewrites the system performs: a {foo, ...}:bar(zot) call is translated into foo:bar(zot, {foo, ...}) – that is, the this parameter is placed last in the argument lists.)






