
Archive for the ‘Howto’ Category

Two Magnolias, one container

We are using Magnolia in a number of projects here at LShift. I have been feeling that Magnolia has a simple way to do most things, but often there are a number of other plausible alternatives that gradually lead you into wasting enormous amounts of time.

Here I want to present a simple way to get both author and public instances of Magnolia running in your dev environment in the same container. It may seem very obvious. If so, good. This was not the first way I tried, and it cost me a lot of time.

We will be aiming for:

  1. Easily deploying Magnolia onto a stage or production environment — one file, one or two configuration parameters only.
  2. Making it easy for a tester to launch local public and author instances of Magnolia that talk to each other correctly.
  3. Making it easy for a developer to debug Magnolia, having both instances running under the control of the IDE.

Preconditions

I will be assuming that you have a parent project with a child project that represents your webapp. I also will assume that you have copied the contents of src/main/webapp/WEB-INF/config from the magnolia-empty-webapp project into your own webapp project. The source for this is in the ce-bundle at git.magnolia-cms.com/gitweb/?p=ce-bundle.pub.git, but assuming you have magnolia-empty-webapp as a dependency (as recommended) you should be able to pick it up from your target directory.

I will be using Tomcat 7, as Tomcat is recommended by Magnolia and 7 is the latest stable version at the time of writing.

Deploying Magnolia to Stage or Production environments

For deployment to stage or production you don’t want both author and public deployed in the same container, or even on the same machine; so we only need to be able to configure a single running instance to be either author or public.

This is quite simple and well documented. In your webapp project, open your src/main/webapp/WEB-INF/web.xml (that you copied from the empty webapp project as described above) and look for the lines:

  <context-param>
    <param-name>magnolia.initialization.file</param-name>
    <param-value>
      WEB-INF/config/${servername}/${contextPath}/magnolia.properties,
      WEB-INF/config/${servername}/${webapp}/magnolia.properties,
      WEB-INF/config/${servername}/magnolia.properties,
      WEB-INF/config/${contextPath}/magnolia.properties,
      WEB-INF/config/${webapp}/magnolia.properties,
      WEB-INF/config/default/magnolia.properties,
      WEB-INF/config/magnolia.properties
    </param-value>
  </context-param>

You will need to add your own line at the top of the <param-value> section:

      WEB-INF/config/${contextAttribute/instanceName}/magnolia.properties,

Then when you deploy your WAR, you can simply set the instanceName environment variable to magnoliaPublic or magnoliaAuthor depending on what type of instance you want. As you can see from the fragment of web.xml above, this will make the settings in src/main/webapp/WEB-INF/config/magnoliaPublic/magnolia.properties or src/main/webapp/WEB-INF/config/magnoliaAuthor/magnolia.properties active, respectively. Ultimately you will want to make more magnolia.properties files in more subdirectories (called, perhaps, stageAuthor, productionPublic and so on) with appropriate settings for those environments, and you can simply make instanceName refer to the appropriate subdirectory.

Local Magnolia from the command line

Now, it would seem plausible that this method can be made to make your local testing environment work. Plausible, but wrong. This is the difficult way. You’ll start writing your context.xml files, then you’ll need a server.xml file, then before you know it you’ll be building your own Tomcat so that you can manage it all.

The “secret” is to use the fact that the web.xml already refers to the context path, in the form of the line:

      WEB-INF/config/${contextPath}/magnolia.properties,

(as well as in another line which we won’t concern ourselves with). This means that, instead of using an environment variable, you can deploy the same WAR file to two different context paths and Magnolia will set itself up differently for each. And if you choose the paths /magnoliaAuthor and /magnoliaPublic you will automatically pick up the properties files provided by the empty webapp and all will be fine; Magnolia even sets up the author instance to point at http://localhost:8080/magnoliaPublic by default, so you won’t have to configure it yourself!

Well, actually, it’s not all fine. If you try this, you’ll find that one of your instances will refuse to start, complaining that its repository is already locked. Of course, they are trying to use the same repository. Fix this by adding a line similar to the following to magnoliaPublic/magnolia.properties:

magnolia.repositories.home=${magnolia.home}/repositories-public

The name of the subdirectory is not important. Note that, as it stands, this will change where the stage and production deployed Magnolias you configured above store their data. If that bothers you, now might be a good time to make your productionPublic/magnolia.properties and similar files.
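For example, a first cut of productionPublic/magnolia.properties might contain nothing but the repository override. This is just a sketch: the directory name below is illustrative, and any other production-specific settings would sit alongside it:

# productionPublic/magnolia.properties (sketch)
magnolia.repositories.home=${magnolia.home}/repositories-production-public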

So, how do we get that running painlessly so that your tester doesn’t keep asking you how to do it?

Add the Tomcat Maven plugin to your webapp’s pom.xml, and configure it to launch your WAR twice on two different context paths:

      <plugin>
        <groupId>org.apache.tomcat.maven</groupId>
        <artifactId>tomcat7-maven-plugin</artifactId>
        <version>2.2</version>
        <configuration>
          <webapps>
            <webapp>
              <groupId>com.my.group</groupId>
              <artifactId>my-webapp</artifactId>
              <version>1.0-SNAPSHOT</version>
              <type>war</type>
              <asWebapp>true</asWebapp>
              <contextPath>/magnoliaAuthor</contextPath>
            </webapp>
            <webapp>
              <groupId>com.my.group</groupId>
              <artifactId>my-webapp</artifactId>
              <version>1.0-SNAPSHOT</version>
              <type>war</type>
              <asWebapp>true</asWebapp>
              <contextPath>/magnoliaPublic</contextPath>
            </webapp>
          </webapps>
        </configuration>
      </plugin>

Replace com.my.group and my-webapp with your own webapp’s group and artifact IDs.

Now you can run your Magnolia simply with:

mvn tomcat7:run-war

For reasons best known to the Tomcat plugin, boring old mvn tomcat7:run doesn’t work; it deploys only one Magnolia, in its default location. Sorry.

The instances are available, of course, at http://localhost:8080/magnoliaAuthor and http://localhost:8080/magnoliaPublic.
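If you want a quick way for your tester to check that both instances have come up, here’s a little smoke-test sketch in Python (the port and context paths are just the defaults from above; everything else is illustrative):

#!/usr/bin/env python
# Poke both local Magnolia instances and report what they answer.
import urllib2

for url in ["http://localhost:8080/magnoliaAuthor",
            "http://localhost:8080/magnoliaPublic"]:
    try:
        print "%s -> HTTP %d" % (url, urllib2.urlopen(url).getcode())
    except urllib2.URLError, e:
        print "%s -> FAILED: %s" % (url, e)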

Local Magnolia from your IDE

Now you’re on the home straight. Here’s how I configure the Tomcat plugin in Eclipse:

Firstly, you need to get Eclipse to know about Tomcat 7. The foolproof way to do this is as follows: Window -> Preferences -> Server -> Runtime Environments -> Add… -> Apache Tomcat v7.0 -> Next. Now give it a location that is writable by you in the “Tomcat installation directory” box and click “Download and Install…”; using your pre-existing Tomcat might not work if it isn’t laid out in the way Eclipse expects. Now Finish and open the Servers view.

You can now add a new Tomcat 7 server and double-click on it. Tick “Publish module contexts to separate XML files”, set the start timeout to something large like 480 seconds, and in the modules tab add your webapp project twice; once with the path /magnoliaAuthor and once with the path /magnoliaPublic.

Now you can launch and debug your two instances of Magnolia from within your IDE!

by
Tim Band
on
03/03/14

Using Debian Multiarch for cross-compiling

I’ve recently acquired a [Raspberry Pi](http://www.raspberrypi.org/), and was considering using it for SNES
emulation. However, it turns out that [Zsnes](http://www.zsnes.com/) is
x86-only, and that [Snes9x](http://www.snes9x.com/) got kicked out of Debian a
while back for having an annoying “non-commercial use”
[license](http://en.wikipedia.org/wiki/Snes9x#License), so we’re into the
compile-it-yourself options. As Snes9x is a configure/makefile-type project, I
should in theory be able to just compile it on the Pi directly, but then we hit
the problem that the Pi hasn’t got enough RAM to do all the compiling…
fine, fine, I’ll go back into the messy world of cross-compiling.
Read more…

by
Tom Parker
on
17/06/12

Publishing your mercurial-server repositories to the Web

I got a couple of queries recently on how to make your mercurial-server repositories publicly readable over HTTP. Happily this isn’t hard to do, and doesn’t really touch on mercurial-server itself. Here’s how we do it on our Debian systems; in what follows I assume that you have installed mercurial-server on hg.example.com, and that you’re not already using that machine as a web server for anything else. First install these packages; note that they tend to have a lot of stuff you don’t need marked as recommended, so don’t install those things:

apt-get --no-install-recommends install apache2 libapache2-mod-fcgid python-flup

Create the following four files:

/etc/mercurial-server/hgweb.config:

[collections]
/var/lib/mercurial-server/repos = /var/lib/mercurial-server/repos

/etc/mercurial-server/hgweb.hgrc:

[web]
style = gitweb
allow_archive = bz2 gz zip
baseurl = http://hg.example.com/
maxchanges = 200

/etc/mercurial-server/hgwebdir.fcgi:

#!/usr/bin/env python

from mercurial import demandimport; demandimport.enable()

import os
os.environ["HGENCODING"] = "UTF-8"
os.environ["HGRCPATH"] = "/etc/mercurial-server/hgweb.hgrc"

from mercurial.hgweb.hgwebdir_mod import hgwebdir
from mercurial.hgweb.request import wsgiapplication
from flup.server.fcgi import WSGIServer

def make_web_app():
    return hgwebdir("/etc/mercurial-server/hgweb.config")

WSGIServer(wsgiapplication(make_web_app)).run()

/etc/apache2/sites-available/hg:

<VirtualHost *>
    ServerName hg.example.com
    AddHandler fcgid-script .fcgi
    ScriptAlias / /etc/mercurial-server/hgwebdir.fcgi/
    ErrorLog /var/log/apache2/hg/error.log
    LogLevel warn
    CustomLog /var/log/apache2/hg/access.log combined
</VirtualHost>

Finally run these commands as root:

chmod +x /etc/mercurial-server/hgwebdir.fcgi
mkdir -p /var/log/apache2/hg
cd /etc/apache2/sites-enabled
rm 000-default
ln -s ../sites-available/hg
/etc/init.d/apache2 reload

Your files should now be served at http://hg.example.com/. Sadly, because of a design flaw in hgwebdir, there’s no easy way to get Apache to handle the static files it needs, but these are pretty small so there’s no harm in letting hgwebdir handle them. The “rm 000-default” thing seems pretty undesirable, but without it I can’t seem to get this recipe to work.
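Once it’s up, read-only clones over plain HTTP work as usual (somerepo below is a stand-in for one of your repository paths):

hg clone http://hg.example.com/somerepo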

I’ve chosen FastCGI as the connector. This has the advantage that

  • unlike CGI, it doesn’t fork a new handler on every request
  • unlike mod_python, it keeps your Mercurial handler separate from your web server
  • unlike SCGI, it will automatically start the service for you if it’s not already running, which is a massive convenience

I’m not aware of any other way of working that offers all three advantages.

As soon as a version of lighttpd with this bug fixed makes it into Debian, I’ll add my recipe for that.

by
Paul Crowley
on
05/03/10

Network server programming with SML/NJ and CML

My experience with [SML/NJ](http://www.smlnj.org/) has been almost uniformly positive, over the years. We used it extensively in a previous project to write a compiler (targeting the .NET CLR) for a pi-calculus-based language, and it was fantastic. One drawback with it, though, is the lack of documentation. Finding out how to (a) compile for and (b) use [CML](http://cml.cs.uchicago.edu/) takes real stamina. I’ve only just now, after several hours poring over webpages, mailing lists, and library source code, gotten to the point where I have a running socket server.

## Download source code, building, and running

The following example consists of a `.cm` file for building the program, and the `.sml` file itself. The complete sources:

* [`test.cm`][cm]
* [`test.sml`][sml]

[cm]: http://dev.lshift.net/tonyg/test.cm
[sml]: http://dev.lshift.net/tonyg/test.sml

Running the following command compiles the project:

ml-build test.cm Testprog.main

The `ml-build` output is a heap file, with a file extension dependent on your architecture and operating system. For me, right now, it produces `test.x86-darwin`. To run the program:

sml @SMLload=test.x86-darwin

substituting the name of your `ml-build`-produced heap file as necessary.

On Ubuntu, you will need to have run `apt-get install smlnj libcml-smlnj libcmlutil-smlnj` to ensure both SML/NJ and CML are present on your system.

## The build control file

The [`test.cm`][cm] file contains

Group is
$cml/basis.cm
$cml/cml.cm
$cml-lib/smlnj-lib.cm
test.sml

which instructs the build system to use the CML variants of the basis and the standard SML/NJ library, as well as the core CML library itself and the source code of our program. For more information about the SML CM build control system, see [here](http://www.smlnj.org/doc/CM/index.html).

## The example source code

Turning to [`test.sml`][sml] now, we first declare the ML structure (module) we’ll be constructing. The structure name is also part of one of the command-line arguments to `ml-build` above, telling it which function to use as the main function for the program.

structure Testprog = struct

Next, we bring the contents of the `TextIO` module into scope. This is necessary in order to use the `print` function with CML; if we use the standard version of `print`, the output is unreliable. The special CML variant is needed. We also declare a local alias `SU` for the global `SockUtil` structure.

open TextIO
structure SU = SockUtil

ML programs end up being written upside down, in a sense, because function definitions need to precede their use (unless mutually-recursive definitions are used). For this reason, the next chunk is `connMain`, the function called in a new lightweight thread when an inbound TCP connection has been accepted. Here, it simply prints out a countdown from 10 over the course of the next five seconds or so, before closing the socket. Multiple connections end up running `connMain` in independent threads of control, leading automatically to the natural and obvious interleaving of outputs on concurrent connections.

fun connMain s =
    let fun count 0 = SU.sendStr (s, "Bye!\r\n")
          | count n = (SU.sendStr (s, "Hello " ^ (Int.toString n) ^ "\r\n");
                       CML.sync (CML.timeOutEvt (Time.fromReal 0.5));
                       count (n - 1))
    in
        count 10;
        print "Closing the connection.\n";
        Socket.close s
    end

The function that depends on `connMain` is the accept loop, which repeatedly accepts a connection and spawns a connection thread for it.

fun acceptLoop server_sock =
    let val (s, _) = Socket.accept server_sock
    in
        print "Accepted a connection.\n";
        CML.spawn (fn () => connMain(s));
        acceptLoop server_sock
    end

The next function is the primordial CML thread, responsible for creating the TCP server socket and entering the accept loop. We set `SO_REUSEADDR` on the socket, listen on port 8989 with a connection backlog of 5, and enter the accept loop.

fun cml_main (program_name, arglist) =
    let val s = INetSock.TCP.socket()
    in
        Socket.Ctl.setREUSEADDR (s, true);
        Socket.bind(s, INetSock.any 8989);
        Socket.listen(s, 5);
        print "Entering accept loop...\n";
        acceptLoop s
    end

Finally, the function we told `ml-build` to use as the main entry point of the program. The only thing we do here is disable SIGPIPE (otherwise we get rudely killed if a remote client’s socket closes!) and start CML’s scheduler running with a primordial thread function. When the scheduler decides that everything is over and the program is complete, it returns control to us. (The lone `end` closes the `struct` definition way back at the top of the file.)

fun main (program_name, arglist) =
    (UnixSignals.setHandler (UnixSignals.sigPIPE, UnixSignals.IGNORE);
     RunCML.doit (fn () => cml_main(program_name, arglist), NONE);
     OS.Process.success)

end
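To try it out, build and run the heap file as above, then connect from another shell:

telnet localhost 8989

You should see “Hello 10” down to “Hello 1” arrive at half-second intervals, then “Bye!” as the server closes the connection. A second, concurrent telnet gets its own independent countdown, courtesy of the per-connection threads.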

by
tonyg
on
01/01/10

HTML email from Squeak using Seaside

Recently, as part of a Seaside-based application running within Squeak, I wanted to send HTML-formatted notification emails when certain things happened within the application.

It turns out that Squeak has a built-in SMTP client library, which with a small amount of glue can be used with Seaside’s HTML renderer to send HTML formatted emails using code similar to that used when rendering Seaside website components.

sendHtmlEmailTo: toEmailAddressString
        from: fromEmailAddressString
        subject: subjectString
        with: aBlock
    | m b bodyHtml |
    m := MailMessage empty.
    m setField: 'from' toString: fromEmailAddressString.
    m setField: 'to' toString: toEmailAddressString.
    m setField: 'subject' toString: subjectString.
    m setField: 'content-type' toString: 'text/html'.

    b := WAHtmlBuilder new.
    b canvasClass: WARenderCanvas.
    b rootClass: WAHtmlRoot.
    bodyHtml := b render: aBlock.

    m body: (MIMEDocument contentType: 'text/html' content: bodyHtml).
    SMTPClient deliverMailFrom: m from
        to: {m to}
        text: m asSendableText
        usingServer: 'YOUR.SMTP.SERVER.EXAMPLE.COM'.

The aBlock argument should be like the body of a WAComponent’s renderContentOn: method. Here’s an example:

whateverObjectYouInstalledTheMethodOn
  sendHtmlEmailTo: 'target@example.com'
  from: 'source@example.org'
  subject: 'Hello, world'
  with: [:html |
    html heading level3 with: 'This is a heading'.
    html paragraph with: 'Hi there!']
by
tonyg
on
08/09/09

OpenAMQ’s JMS client with RabbitMQ server

OpenAMQ has released their JMS client for using JMS with AMQP-supporting brokers. This afternoon I experimented with getting it running with RabbitMQ.

After a simple, small patch to the JMS client code, to make it work with the AMQP 0-8 spec that RabbitMQ implements (rather than the 0-9 spec that OpenAMQ implements), the basic examples shipped with the JMS client library seemed to work fine. The devil is no doubt in the details, but no problems leapt out at me.

To get it going, I checked it out using Git (`git clone
git://github.com/pieterh/openamq-jms.git`). Compilation was as simple as running `ant`. Kudos to the OpenAMQ team for making the build process so smooth! (Not to mention writing a great piece of software :-) )

The changes to make it work with AMQP 0-8 were:

* retrieving the 0-8 specification XML
* changing the JMS client library’s `build.xml` file to point to the downloaded file in its `generate.spec` variable
* changing one line of code in `src/org/openamq/client/AMQSession.java`: in 0-8, the final `null` argument to `BasicConsumeBody.createAMQFrame` must be omitted
* re-running the `ant` build

After this, and creating a `/test` virtual-host using RabbitMQ’s `rabbitmqctl` program, the OpenAMQ JMS client examples worked fine, as far as I could tell.

rabbitmqctl add_vhost /test
rabbitmqctl set_permissions -p /test guest '.*' '.*' '.*'

You can download the patch file I applied to try it yourself. Note that you’ll need to put the correct location of your downloaded `amqp0-8.xml` file into build.xml.

by
tonyg
on
16/03/09

Where did all my space go?

Over the last little while, I’ve started to suffer from lack of space on the hard disk in my laptop, which is ridiculous, since there’s an 80GB disk in there and there is no way I have that much data I need to hang on to. I decided to do something about it last week. The main part of the problem was to figure out what was eating all the space: du tells you exactly what’s using how much, but it’s hard to get a feel for where your space has gone by scanning through pages of du output. So I built a program to help.

spaceviz is a small Python program that takes the output of du -ak, and builds you a picture and HTML client-side imagemap of your space usage.

Running it against the output of du -ak / showed me very clearly where all the space had gone: not only did I have a few seasons of various TV shows on my disk (which I already knew were there), but I had 11 GB of unneeded gzipped RDF data left over from a project that finished earlier this year (that I had forgotten about). Instant win!
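To get a feel for the input spaceviz chews on, here’s a tiny standalone sketch (not part of spaceviz; the script name below is made up) that reads du -ak output and prints the ten largest entries:

#!/usr/bin/env python
# Sketch: read `du -ak` output ("size-in-KB<TAB>path") from stdin
# and print the ten largest entries.
import sys

entries = []
for line in sys.stdin:
    size, path = line.rstrip("\n").split("\t", 1)
    entries.append((int(size), path))

entries.sort()
entries.reverse()
for size, path in entries[:10]:
    print "%10d KB  %s" % (size, path)

Run it with, say, du -ak ~ | python top10.py.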

To run it for yourself, check out the mercurial repository http://hg.opensource.lshift.net/spaceviz, and run

make veryclean all ROOT=/

replacing the ROOT=/ with a definition of ROOT that points at the directory tree you want to generate usage data for. The makefile will take care of running du and spaceviz.py for you. Edit the settings for WIDTH and HEIGHT in spaceviz.py to change the dimensions of the generated picture.

The program runs not only on Linux without fuss, but also on OS X so long as you have the netpbm port installed to convert the python-generated PPM file to a browser-accessible (and much more efficiently compressed!) PNG.

by
tonyg
on
29/10/08

Listening to your Webcam

Here’s a fun thing:

The Analysis & Resynthesis Sound Spectrograph, or ARSS, is a program that analyses a sound file into a spectrogram and is able to synthesise this spectrogram, or any other user-created image, back into a sound.

Upon discovery of this juicy little tool the other day, Andy and I fell to discussing potential applications. We have a few USB cameras around the office for use with camstream, our little RabbitMQ demo, so we started playing with using the feed of frames from the camera as input to ARSS.

The idea is that a frame captured from the camera can be used as a spectrogram of a few seconds’ worth of audio. While the system is playing through one frame, the next can be captured and processed, ready for playback. This could make an interesting kind of hybrid between dance performance and musical instrument, for example.

We didn’t want to spend a long time programming, so we whipped up a few shell scripts that convert a linux-based, USB-camera-enabled machine into a kind of visual synthesis tool.

[Image: webcam-arss-pipeline.png]

Just below is a frame I just captured, and the processed form in which it is sent to ARSS for conversion to audio. Here’s the MP3 of what the frame sounds like.

[Image: example-frame-smaller.png]

Each frame is run through ImageMagick’s “charcoal” tool (which does a good job of finding edges in the picture), inverted, and passed through a minimum-brightness threshold. The resulting line-art-like frame is run through ARSS to produce a WAV file, which can then be played or converted to mp3.

Ingredients

You will need:

* one Debian, Ubuntu or other linux computer, with a fairly fast CPU (anything newer than ca. 2006 ought to do nicely).
* a USB webcam that you know works with linux.
* a copy of ARSS, compiled and running. Download it here.
* the program “webcam”, available in Debian and Ubuntu with apt-get install webcam, or otherwise as part of xawtv.
* “sox”, via apt-get install sox or the sox homepage.
* “convert”, apt-get install imagemagick or from ImageMagick.

Method

The scripts are crude, but somewhat effective. Three processes run simultaneously, in a loop:

* webcam runs in the background, capturing images as fast as it can, and (over-)writing them to a single file, webcam.jpg.
* a shell script called grabframe runs in a loop, converting webcam.jpg through the pipeline illustrated above to a final wav file.
* a final shell script repeatedly converts the wav file to raw PCM data, and sends it to the sound card.

Here’s the contents of my ~/.webcamrc:

[grab]
delay = 0
text = ""

[ftp]
local = 1
tmp = uploading.jpg
file = webcam.jpg
dir = .
debug = 1

Here’s the grabframe script:

#!/bin/sh

# Image-processing and synthesis parameters; tweak to taste.
THRESHOLD_VALUE=32768
THRESHOLD="-black-threshold $THRESHOLD_VALUE"
CHARCOAL_WIDTH=1
LOG_BASE=2
MIN_FREQ=20
MAX_FREQ=22000
PIXELS_PER_SECOND=60

# Wait for the webcam program to write a fresh frame.
while [ ! -e webcam.jpg ]; do sleep 0.2; done
# Edge-detect with the charcoal effect, invert, and clamp dim pixels to black.
convert -charcoal $CHARCOAL_WIDTH -negate $THRESHOLD webcam.jpg frame.bmp
# Consume the frame so the next iteration waits for a new capture.
mv webcam.jpg frame.jpg
# Synthesise the frame-as-spectrogram into audio.
./arss frame.bmp frame.wav.tmp --log-base $LOG_BASE --sine --min-freq $MIN_FREQ --max-freq $MAX_FREQ --pps $PIXELS_PER_SECOND -f 16 --sample-rate 44100
# Publish the finished WAV atomically for the player loop.
mv frame.wav.tmp frame.wav

You can tweak the parameters and save the script while the whole thing is running, to experiment with different options during playback.

To start things running:

* In shell number one, run “webcam”.
* In shell number two, run “while true; do ./grabframe ; done”.
* In shell number three, run “(while true; do sox -r 44100 -c 2 -2 -s frame.wav frame.raw; cat frame.raw; done) | play -r 44100 -c 2 -2 -s -t raw -”.

That last command repeatedly takes the contents of frame.wav, as output by grabframe, converts it to raw PCM, and pipes it into a long-running play process, which sends the PCM it receives on its standard input out through the sound card.

If you like, you can use esdcat instead of the play command in the pipeline run in shell number three. If you do, you can use extace to draw a spectrogram of the sound that is being played, so you can monitor what’s happening, and close the loop, arriving back at a spectrogram that should look something like the original captured images.

by
tonyg
on
25/07/08

Late-binding with Erlang

Upon browsing the source to the excellent MochiWeb, I came across a call to a function that, when I looked, wasn’t defined anywhere. This, it turns out, was a clue: Erlang has undocumented syntactic support for late-bound method dispatch, i.e. lightweight object-oriented programming!

The following example, myclass.erl, is a parameterized module, a feature that arrived undocumented in a recent Erlang release. Parameterized modules are explored on the ‘net here and here. (The latter link is to a presentation that also covers an even more experimental module-based inheritance mechanism.)

-module(myclass, [Instvar1, Instvar2]).
-export([getInstvar1/0, getInstvar2/0]).
getInstvar1() -> Instvar1.
getInstvar2() -> Instvar2.

“Instances” of the “class” called myclass can be created with myclass:new(A, B) (which is automatically provided by the compiler, and does not appear in the source code), where A and B become values for the variables Instvar1 and Instvar2, which are implicitly scoped across the entirety of the myclass module body, available to all functions defined within it.

The result of a call to the new method is a simple tuple, much like a record, with the module name in the first position and the instance variable values in order following it.

Eshell V5.6  (abort with ^G)
1> Handle = myclass:new(123, 234).
{myclass,123,234}
2> Handle:getInstvar1().
123
3> Handle:getInstvar2().
234

While this looks really similar to OO dispatch in other languages, it’s actually an extension to Erlang’s regular function call syntax, and works with other variations on that syntax, too:

4> {myclass,123,234}:getInstvar1().
123

The objects that this system provides are pure-functional objects, which is unusual: many object-oriented languages don’t clearly separate the two orthogonal features of late-binding and mutable state. A well-designed language should let you use one without the other, just as Erlang does here: in Erlang, using parameterized modules for method dispatch doesn’t change the way the usual mechanisms for managing mutable state are used. “Instance variables” of parameterized modules are always immutable, and regular state-threading has to be used to get the effects of mutable state.

I’d like to see this feature promoted to first-class, documented, supported status, and I’d also very much like to see it used to structure the standard library. Unfortunately, it’s not yet very well integrated with existing modules like gb_sets, ordsets and sets. For example, here’s what happens when you try it with a simple lists call:

5> lists:append([1, 2], [3, 4]).
[1,2,3,4]
6> {lists, [1, 2]}:append([3, 4]).
[3,4|{lists,[1,2]}]

Not exactly what we were after. (Although it does give a little insight into the current internals of the rewrites the system performs: a {foo, ...}:bar(zot) call is translated into foo:bar(zot, {foo, ...}); that is, the this parameter is placed last in the argument list.)
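You can verify that rewrite by hand: performing the translated call directly produces exactly the same mangled result (the 7> prompt simply continues the session above):

7> lists:append([3, 4], {lists, [1, 2]}).
[3,4|{lists,[1,2]}]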

by
tonyg
on
18/05/08

A touchscreen mod for the Asus Eee 701

Tony and I bought Asus Eee PCs a couple of months ago, largely to experiment with. He’s been using his quite a bit, and had some fun installing Ubuntu. Mine has been languishing at home, waiting for its tooth on the cog to come around.

Recently, someone pointed me at a YouTube video of unrepentant modder jkkmobile fitting a touchscreen to his Eee PC. The cog turned, and I ordered a specialised touchscreen overlay kit from eBay; they are quite easy to find.

There are four bits to the kit: the touchscreen itself, which has a ribbon; a touchscreen controller PCB; and two cables, one from the ribbon to the PCB, and one from the PCB to USB. Some kits (like the one I bought) purport to be solderless; however, the cable meant to connect PCB to USB ended in the familiar A-type plug, and had a choke (which we had a bit of fun destroying).

The overlay is a 4-wire resistive touchscreen panel, meaning that it works with a finger, a (capped) pen, a PDA stylus, or whatever. One just has to be careful not to use anything sharp or inky.

The first task was to take it all apart and determine where and how to accommodate the controller. Taking it apart consists mostly of two activities: unscrewing screws and popping the various clips in the case. A fiddly bit is taking the keyboard connector out, which requires especial delicacy because the housing is quite sticky and it’s easy to scratch the ribbon; this goes for the touchpad connector too.

As jkkmobile mentions, there’s a bit of space here and there for the controller. I didn’t want to sacrifice a speaker, as he did, and I wasn’t planning to do any more modding, so we used the space-of-least-effort, which is beside the memory housing on the underside of the motherboard. We used a bit of insulation tape to separate the board from the motherboard; it took a few layers, since the pins on the underside wanted to poke through, and we had to trim the plug casings (other people have removed them altogether and soldered the cables on).

With the controller board in roughly the right place and cables plugged in, we routed the wires up through the gap between the motherboard and the case. It’s probably worth wrapping some tape around, so the edges don’t strip the sheath away.

The next choice was what to connect the USB endpoint to. USB cabling consists of four (sometimes five) wires: two for data, one (or two) earth, and one +5V power. On this, jkkmobile says that an external port would be fine except for the power lead, which ought to be connected to a power source that turns off when the PC does. I was happy to sacrifice one of the three external USB ports, so the earth, D+, and D- went to the pins of the lonesome port on the left-hand side of the case. tnkgirl helpfully provides a guide to the various USB traces on the motherboard, from which we chose a seemingly spare one (and verified its behaviour with a multimeter). We also had to check which colour wire corresponded to which pin, again with the multimeter; this is left to vendors by the USB specification, though there is a convention.

Then there was soldering. At this stage I would like to point out that the last time either Tony or I had abused a soldering iron was under heavy supervision when we were about twelve. Nonetheless, we were game, and a couple of hours, countless Watts and a burnt finger bought us some shiny conductive solder joints. We ended up using some conducting epoxy to glue the earth to the port shielding, but it could equally (health and safety concerns aside) have been soldered to the pin.

The penultimate task was to attach the touchscreen and plug it in. I used double-sided tape along each edge of the LCD display. I left the backing on while I cleaned the screen one last time, then lined up the panel and pressed it on, with the ribbon at the bottom where there’s a bit of room out of the way of the camera. The panel overlapped the LCD by about 5mm at the left edge, so I had to trim the plastic clip on the fascia and cope with there being a bit of a bulge on that side.

The cable from the controller to the screen had to go from the keyboard part of the case to the screen part of the case, so I routed it through the hinge along with the camera and left-speaker wires. It took some persuasion, but there’s actually plenty of room to spare, both through the hinge and over the motherboard (the mic and speaker plug housings rise a good few mm above the board, for example), certainly enough for the cable to go through a couple of 90° bends. The ribbon from the panel is robust enough to be folded too, though perhaps don’t score it first.

While putting it all back together, we had a heart-stopping moment as it started refusing to boot past the initial firmware. At first we thought it might be the keyboard, and in particular the keyboard connector ribbon. In general these are often very sensitive, and known to be so for the Eee. However, we are nothing if not scientific here at LShift, and we soon discovered that it was not the keyboard, but in fact some other problem that disappears when the motherboard is prodded in a certain way. After defouling some wiring and cycling its state of assembly, it booted consistently again.

Last came the software. The Xandros distribution happily recognises the USB controller, loads the usbtouchscreen module, and treats it as a mouse. However, it needs calibration, so it ends up being an insane teleporting mouse. A fair amount of interaction with Google led us to believe that the answer was to treat it as a different input device; that is, stop it looking like a mouse, and have a special input driver for it.

What normally happens is that it gets assigned something like /dev/mouse1, which is interleaved into /dev/input/mice, to which the X Window pointer input driver listens. To stop it looking like a mouse, it’s necessary to have a specific rule for it in /etc/udev/rules.d/:

KERNEL=="event*", SUBSYSTEM=="input", SYSFS{idVendor}=="0eef", SYSFS{idProduct}=="0001", SYMLINK+="input/evtouch_event"

(The values for idVendor and idProduct come from examining cat /proc/bus/usb/devices)

I compiled a specialised X input driver for it – to do that, I had to install gcc, and to do that I had to add repositories to /etc/apt/sources.list – and configured it as an input device.

To calibrate it, Tony wrote a tiny program to interpret the numbers from the event stream:

import sys
# The event stream's 16-bit values are little-endian, so each four-digit
# hex group arrives low byte first: swap the two hex-digit pairs, then
# print the value in decimal.
for line in sys.stdin.readlines():
  (d3, d4, d1, d2) = line[:4]
  digits = d1 + d2 + d3 + d4
  print int(digits, 16)

and a few pipes and filters later we had our min and max values:

xxd /dev/input/evtouch_event | grep '0300 0000' | cut -d' ' -f8 | python byteswap.py | sort -n

(byteswap.py is the file with the Python from above. The '0300 0000' filters for the event type and the X axis; the Y axis is '0300 0100'. I’ve doctored the original line to be a bit simpler, so it may need some hacking.)
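If the pipe-and-filter approach feels too fragile, here’s a small Python sketch that does the same job directly: it reads binary events from the device node created by the udev rule above and tracks the minimum and maximum seen on each axis. It assumes the standard Linux input_event layout (a struct timeval followed by 16-bit type and code and a 32-bit value) and the usual EV_ABS/ABS_X/ABS_Y codes; everything else about it is illustrative:

#!/usr/bin/env python
# Track min/max X and Y from the touchscreen's event device, for calibration.
import struct

EVENT_FORMAT = "llHHi"   # struct timeval (two longs), then type, code, value
EVENT_SIZE = struct.calcsize(EVENT_FORMAT)
EV_ABS, ABS_X, ABS_Y = 0x03, 0x00, 0x01

extremes = {}  # axis code -> (lowest, highest) value seen so far
f = open("/dev/input/evtouch_event", "rb")
try:
    while True:
        data = f.read(EVENT_SIZE)
        if len(data) < EVENT_SIZE:
            break
        _sec, _usec, etype, code, value = struct.unpack(EVENT_FORMAT, data)
        if etype == EV_ABS and code in (ABS_X, ABS_Y):
            lo, hi = extremes.get(code, (value, value))
            extremes[code] = (min(lo, value), max(hi, value))
            print "axis %d: min=%d max=%d" % (code, extremes[code][0], extremes[code][1])
finally:
    f.close()

Touch the corners of the screen while it runs, and the last line printed for each axis gives you the calibration extremes.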

I needed a bit of experimentation with the XSwap, YSwap and Rotate options in the input driver configuration in xorg.conf, with accompanying restarts of X, then it was done!

The resolution is such that it’s possible, with a bit of concentration, to browse web sites with a finger. A nice side-effect of the screen layout is that it’s possible to scroll a full-screen window by running a thumb along beside the bezel.

by
mikeb
on
15/05/08
