technology from back to front

Archive for December, 2005

Parallels between Smalltalk and Linux

Travis Griggs has some interesting things to say about the similarities between Smalltalk and Linux. Of course there are huge differences as well, but it’s interesting to hear a Smalltalker’s take on the similarities.

by tonyg on 20/12/05

FRP with STMs

I had an idea for an interesting application of Software Transactional Memory (STM) the other day: Functional Reactive Programming (FRP).

Shriram Krishnamurthi gave two excellent talks at SchemeUK. In the pub discussion afterwards he explained some of the implementation challenges and techniques in FrTime (pronounced “Father Time”), the Functional Reactive Programming system for PLT Scheme. It occurred to me then that STMs, in particular the flavour proposed in the Composable Memory Transactions paper, may provide a neat platform for FRP.

One of the challenges faced by FRP is glitches. Glitches can occur when a signal propagates via different routes at different speeds and the signals from these routes are combined: the combined signal may then be produced from values that are out of sync. More generally, glitches can occur whenever several signals that are affected by synchronised input signals are combined.

If signals were represented by STM memory cells, we could express synchrony through transaction atomicity. A change of value in one or several synchronised signals would propagate through the entire signalling network atomically.
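As a rough sketch of what this could look like on top of Haskell’s STM (the `Signal` type, `newSignal`, `combined` and `updateBoth` below are invented names for illustration, not part of any existing FRP library):

```haskell
import Control.Concurrent.STM

-- A signal is just an STM cell holding its current value.
type Signal a = TVar a

newSignal :: a -> STM (Signal a)
newSignal = newTVar

-- A derived value reads its inputs within a single transaction,
-- so it can never observe them out of sync with each other.
combined :: Signal Int -> Signal Int -> STM Int
combined x y = (+) <$> readTVar x <*> readTVar y

-- Updating several synchronised inputs in one transaction makes the
-- change visible to the rest of the network atomically.
updateBoth :: Signal Int -> Signal Int -> Int -> Int -> IO ()
updateBoth x y newX newY = atomically $ do
  writeTVar x newX
  writeTVar y newY
```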

Another FRP implementation challenge is change notification – signal values need to be recalculated whenever their inputs change, and those inputs may in turn be calculated from other inputs, and so on. FrTime employs weak pointers from signals to their consumers for this and broadcasts notifications along them, essentially implementing a “push” model of signal propagation.

In an STM implementation of FRP we should be able to exploit the retry and orElse constructs. The former lets us block until a change in the inputs produces a change in the output; the latter is used for composition.
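To make that concrete, here is a hedged sketch (`waitChange` and `eitherChanged` are invented names; a real implementation would probably track version numbers rather than compare values):

```haskell
import Control.Concurrent.STM

-- Block until the signal's value differs from the last value we saw.
-- retry aborts the transaction and re-runs it once one of the TVars
-- it has read is written to by another transaction.
waitChange :: Eq a => TVar a -> a -> STM a
waitChange sig lastSeen = do
  v <- readTVar sig
  if v == lastSeen then retry else return v

-- Composition with orElse: wait for whichever of two signals changes first.
eitherChanged :: Eq a => TVar a -> a -> TVar a -> a -> STM (Either a a)
eitherChanged s1 v1 s2 v2 =
  (Left <$> waitChange s1 v1) `orElse` (Right <$> waitChange s2 v2)
```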

I attempted a quick implementation of some of my ideas on top of Haskell’s STM. It didn’t take long to get something to work. However, that also revealed some fundamental design flaws in my initial approach. Here are some of the challenges:

  • Making sure that signal values don’t get lost; every value change must be propagated

  • Invoking all the right signal processing functions, in the right order, as part of single transactions

  • Dealing with feedback, i.e. cycles in the signal processing graph

At this stage it is not yet clear whether STMs really do make FRP implementations significantly easier – my efforts may come unstuck on the above, or on other problems I have yet to discover.

by matthias on 12/12/05

Best. Language comparison. Evar.

Someone has finally [figured out](http://www.iolanguage.com/about/simplicity/) the ultimate metric for language complexity. Legions of ivory-tower academics can now retire. It’s a solved problem. Who knew it would turn out to be that easy? I guess people have just been overlooking the obvious for years!

by tonyg on 07/12/05

Io – Prototypes for ordinary people

Some programming languages look like they were designed especially to secure their authors a place on LtU – sort of 15 minutes of fame for language mavens. That’s not the case with Io, although it does sport some neat features that are not common enough in today’s language universe (and it did make it to LtU, which is how I found out about it).

Io is a small, dead simple dynamic language. Its most important characteristic is that it is a prototype-based pure object-oriented language. Unlike many popular OO languages, Io lacks any data types that aren’t objects – that includes classes, which are common as a generalisation mechanism in other languages but are dangerous because they pollute a language with an entity that sits above the normal object space, often making it much less intuitive to reason about OO design and to interact with live systems. Instead, in prototype languages like Io (and like [Self](http://c2.com/cgi/wiki?SelfLanguage), the ancestor of all prototype languages, Javascript and Slate, to mention a few) new objects are created by cloning existing ones. Cloning does not really copy an entire object; it only creates a pointer to the original object, the prototype, and stores any changes to the new object later, if and when it is modified.

As I mentioned, Io is not the only prototype language out there, but it is unique in being very simple and geared towards down-to-earth programmers in need of a powerful tool. It is not a grand research project, a new operating system replacement or a domain-specific extension. Io is a ‘scripting’ language, in the same sense that Python, Perl and Ruby are. It is extremely dynamic and introspective, allowing a program to inspect and change any of its components at runtime, and it runs on top of a simple virtual machine written in portable C. In addition, Io is very easy to learn, thanks to the almost complete lack of special syntax (in Io even assignment is achieved by passing a message to an object).

After I found that the SGML parser bundled with Io is not good enough (the current distribution comes with a generous collection of libraries, though quality and maturity vary), I decided to try to build bindings for Gnome’s libxml. Libxml comes with XML definitions of its entire public interface, and using a bit of XSLT I transformed them into Io code, which looks very much like LISP S-expressions (sure, I would have loved to use Io itself for the task, but the lack of working XML facilities was an obstacle, of course). This is where the problems began. Libxml is rather big, and I found, to my great disappointment, that the version of Io I have cannot parse the entire generated program, running into some artificial memory limitations before completion. I was even more surprised to find that not only could I not configure this parameter at either run time or build time, but that the magic number I wanted to control was hard-coded (not even using a define) and buried deep in the source.

After many attempts (and many more similar problems) I discovered Io’s DynLib object – a useful little object that allows you to bind to shared C libraries at run time – and I am now directing my efforts in that direction. Just as I was finally able to get some basic results, yesterday’s so-called stable release rendered my experiments useless again. Languages in development are indeed a moving target, but this sits oddly with the language designer’s claim to have a 1.0 release baking, due out by the end of the year.

In conclusion, Io is a wonderful little language, and I think it has great potential in the same niche occupied today by other ‘scripting’ languages. I sure do hope to be working more with it in the future. However, judging by the current state of the codebase and the amount of change the language and its basic libraries have undergone even in the very short time I’ve been watching the development, we are still far away from a 1.0 release.

by Tom Berger

Service Google

At the recent [Architects Summit](http://www.looselycoupled.com/blog/lc00aa00127.html) there was a brief discussion on the idea of a *Service Google*, i.e. the ability to search for software components that offer a service with a particular behaviour. How might such a Service Google work?

### Google vs the Semantic Web

The [Semantic Web](http://www.semanticweb.org/) folks will tell you that we should define [ontologies](http://www.w3.org/TR/owl-features/) for service descriptions, and add metadata to our services that describes their behaviour in terms of these ontologies. That is *not* how Google works, and it is [the wrong way to go](http://www.shirky.com/writings/ontology_overrated.html): it requires a priori agreement on how to express service behaviour and does not permit any change in the notion of service behaviour over time.

A Service Google that works like the current Google would not require any up-front specification of service semantics. Instead all notion of meaning is deferred to indexing & search time. There is one important difference though: for the Service Google to be of any use for *automated* service discovery and binding the meaning must be *exact* and *verifiable*.

Note that ontologies do not have anything to offer us here. Firstly, they are not grounded in anything that could be called exact or precise. Secondly, they *impose* meaning on an artefact without providing any mechanism to verify that the artefact does in fact comply with that meaning.

So, if ontologies aren’t the answer, what is?

### Behavioural Types

Behavioural types are mathematically precise definitions of the behaviour of services – including SLAs, performance characteristics, cost, security and so on – together with proofs that the actual services do indeed exhibit that behaviour. Searching for behaviourally typed services is done by formulating queries as expressions in some suitable logic. Matching is the satisfaction relation, and hence exact. Verification is type checking.
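To give a flavour of what “matching is the satisfaction relation” means, here is a deliberately toy sketch in Haskell. A real behavioural type would describe protocols and come with proofs, not just a record of numbers; all the names below are made up for illustration.

```haskell
-- Toy model: a 'behavioural type' reduced to a few measurable properties,
-- and a query reduced to a list of constraints over them.
data ServiceDesc = ServiceDesc
  { maxLatencyMs :: Int
  , costPerCall  :: Double
  , encrypted    :: Bool
  }

data Constraint
  = LatencyAtMost Int
  | CostAtMost Double
  | RequiresEncryption

-- Matching is a satisfaction relation: a service either meets every
-- constraint in the query or it does not -- there are no partial matches.
satisfies :: ServiceDesc -> [Constraint] -> Bool
satisfies s = all check
  where
    check (LatencyAtMost n)  = maxLatencyMs s <= n
    check (CostAtMost c)     = costPerCall s <= c
    check RequiresEncryption = encrypted s

-- An exact-match search simply filters the index by satisfaction.
search :: [Constraint] -> [ServiceDesc] -> [ServiceDesc]
search q = filter (`satisfies` q)
```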

Types *express* the meaning of an artefact, rather than imposing it. Following Google’s lead, we ideally would want to avoid any explicit, upfront specification of types. Instead types should be *inferred* by the indexer. Unfortunately, type inference limits the expressiveness of
types and, to a degree, the artefacts (i.e. code) – a minor tweak to a programming language can break type inference.

Thus there is a real challenge here for the research community: devising behavioural type systems that capture interesting properties and are inferable for a large class of practical programs. The beauty of the inference approach is that we can evolve the notion of type, just like Google can evolve its indexing & search algorithms.

### Ranking

Another challenge for a Service Google based on behavioural types is that a large part of the value Google delivers is *ranking*. A prerequisite of that, in Google’s case, is the inexact nature of its indexing/search. The satisfaction relation, on the other hand, is exact. As a result a Service Google would only find exact matches, i.e. services that exactly meet the criteria we have specified. We can, of course, relax the criteria, but then we’d just get more hits – we still do not get any ranking of them. What is the Service Google equivalent of PageRank? Is it perhaps the notion of simulation, i.e. the more other matching services a service can simulate, the higher its rank?

An alternative is to come up with a distance measure of behaviour, i.e. the more one service behaves like another, the closer it is to that service. This allows us to keep our queries precise and let the search engine worry about digging up non-exact matches and ranking them. The problem then is how to bind to a service obtained in this way, since it matches our criteria only approximately, not exactly.
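Continuing the toy ServiceDesc/Constraint sketch from above (and again purely illustrative – a real distance would be defined between behaviours, for example via simulation, not between a service and a query), one crude way to rank near-misses is to penalise each constraint by how far the service misses it, so exact matches come out at distance zero:

```haskell
import Data.List (sortOn)

-- Assumes the ServiceDesc and Constraint types from the earlier sketch.
distance :: ServiceDesc -> [Constraint] -> Double
distance s = sum . map penalty
  where
    penalty (LatencyAtMost n)
      | maxLatencyMs s <= n = 0
      | otherwise           = fromIntegral (maxLatencyMs s - n)
    penalty (CostAtMost c)
      | costPerCall s <= c  = 0
      | otherwise           = costPerCall s - c
    penalty RequiresEncryption
      | encrypted s         = 0
      | otherwise           = 1000  -- arbitrary large penalty

-- Rank candidates by distance; exact matches appear first with distance 0.
rank :: [Constraint] -> [ServiceDesc] -> [ServiceDesc]
rank q = map snd . sortOn fst . map (\s -> (distance s q, s))
```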

by matthias on 06/12/05

Claim this Quote

Somewhere I read something along the lines of

> sharing, mutation, concurrency – pick any two

which is a marvellous little snippet of wisdom.

Since Google cannot find it I am claiming it for myself until the rightful owner steps forward.

by matthias on 03/12/05
