
Archive for September, 2005

MonetDB for Everybody

Is it just me, or should everybody be using [MonetDB](http://monetdb.cwi.nl/) instead of whatever database they are currently wrestling with? The feature list is impressive: fast, runs on everything, SQL and XQuery support, open-source. There are some [limitations](http://homepages.cwi.nl/~mk/MonetDB/When-not-to-consider-MonetDB.html), but overall this looks like a very capable product.

by matthias on 26/09/05

Transactions Everywhere

In an
[interview](http://www.ddj.com/documents/s=9776/ddj1126793370067/vb9.htm)
about the future of Visual Basic 9, Erik Meijer makes the following
prediction:
> We will use transactions everywhere; in-memory, database, XML, everywhere. We will have one way for dealing with concurrency.

This statement is made in the context of a discussion of Software Transactional Memory and the concurrency control mechanisms one can build on top of it. When I first read about [STMs](http://research.microsoft.com/Users/simonpj/papers/stm/stm.pdf), this was exactly the direction I was thinking in – I want to be able to have transactions that span memory in more than one system, several databases and other transaction-aware resources. And I want to be able to use these transactions for concurrency control.

This, of course, is nothing other than Distributed Transactions – an
area that has been researched for decades and for which several
commercially supported Distributed Transaction Protocols (DTPs) exist,
e.g. [XA](http://www.opengroup.org/public/pubs/catalog/s423.htm),
[CORBA OTS](http://www.omg.org/cgi-bin/doc?formal/2003-09-02),
[JTS](http://java.sun.com/products/jts/), and
[Jini](http://java.sun.com/products/jini/2.0.2/doc/specs/html/txn-spec.html),
plus a plethora of Web Services related specs.

The big question is: *Can STMs participate in a DTP?*

The first hurdle is that STMs used for concurrency control (though not plain STMs) require nested transactions in order to implement the ‘orelse’ choice construct. Nested transactions are not widely supported, and the main DTP standard, XA, does not include them. Other standards, such as CORBA OTS, Jini and JTS, do, but that is of little help when most of the potential DTP participants, notably the most popular databases, lack such support.
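
To make the connection concrete, here is a minimal sketch using Haskell's stm library, which grew out of the STM paper linked above. `orElse` must run its first alternative as a nested transaction so that, if it retries, its tentative effects can be discarded before the second alternative is attempted:

```haskell
import Control.Concurrent.STM

-- Take from whichever of two slots is full, blocking if both are empty.
-- 'orElse' runs 'takeTMVar a' as a nested transaction: if it calls
-- 'retry', its effects are rolled back before 'takeTMVar b' is tried.
takeEither :: TMVar x -> TMVar x -> STM x
takeEither a b = takeTMVar a `orElse` takeTMVar b

main :: IO ()
main = do
  a <- newEmptyTMVarIO
  b <- newEmptyTMVarIO
  atomically (putTMVar b "hello")
  msg <- atomically (takeEither a b)
  putStrLn msg  -- prints "hello": 'a' was empty, so orElse fell through
```

A DTP participant that cannot roll back just the first alternative cannot take part in such a composite transaction.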

The second hurdle is that STMs’ concurrency control requires a
mechanism to retry transactions when something has changed. This is
something that none of the existing DTPs support. However, I think the
addition of just a single operation to the transaction participant
API is sufficient to meet the requirement. The operation instructs the
participant to abort a transaction and notify the transaction manager
when some of the underlying data has been modified.
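
As a rough illustration of what I have in mind – all names here are hypothetical, and no existing DTP exposes such an operation – the participant interface would grow one method alongside the usual two-phase-commit trio:

```haskell
-- Hypothetical sketch of an extended DTP participant interface.
type TxnId = Int

class Participant p where
  -- The standard two-phase-commit operations:
  prepare :: p -> TxnId -> IO Bool   -- vote: True = ready to commit
  commit  :: p -> TxnId -> IO ()
  abort   :: p -> TxnId -> IO ()
  -- The proposed addition: abort the transaction, but run the supplied
  -- callback once any data the transaction read has been modified, so
  -- that the transaction manager can schedule an STM-style retry.
  abortAndNotify :: p -> TxnId -> IO () -> IO ()
```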

Retrofitting this operation onto existing DTP implementations is hard if one wants to be precise, i.e. ensure that the notification is only triggered when the relevant data has *really* changed. However, such precision is only required for optimal performance. There is no harm in making notification more coarse-grained. In fact, [some](http://www.cl.cam.ac.uk/~kaf24/papers/2003-ls4lt.pdf) STM implementations exploit this to reduce the amount of bookkeeping information. Applying this technique to, say, a legacy database, we could wrap the entire database interface and trigger notifications whenever a transaction has committed. That is the equivalent of having one ownership record for an entire STM.
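
A toy sketch of that coarsest scheme, with invented names: a single version counter stands in for the whole wrapped database, so a watcher is woken by any commit at all, precisely as if the entire store sat behind one ownership record.

```haskell
import Control.Concurrent (forkIO, threadDelay)
import Control.Concurrent.STM

-- One global version number for the whole wrapped database: the
-- equivalent of a single ownership record covering every row.
newtype Store = Store (TVar Integer)

newStore :: IO Store
newStore = Store <$> newTVarIO 0

-- Hook called whenever any transaction commits in the wrapped database.
commitHook :: Store -> IO ()
commitHook (Store v) = atomically (modifyTVar' v (+ 1))

-- Block until the store's version has moved past the one we last saw.
-- This over-approximates "my data changed" by "anything changed".
waitForChange :: Store -> Integer -> IO ()
waitForChange (Store v) seen = atomically $ do
  now <- readTVar v
  check (now > seen)

main :: IO ()
main = do
  store@(Store v) <- newStore
  seen <- readTVarIO v
  _ <- forkIO (threadDelay 100000 >> commitHook store)  -- unrelated commit
  waitForChange store seen
  putStrLn "notified: retry the aborted transaction"
```

The price of the coarse grain is spurious wake-ups; the benefit is that the wrapper needs no knowledge of the database's internals.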

The third, and final, hurdle is guaranteeing progress. STMs employ an optimistic concurrency model in which transactions execute without acquiring any locks, and the final commit fails only when data read by the transaction has been modified by the commit of another transaction. This guarantees progress in a single-machine setting. DTPs, however, employ 2PC, in which the decision on whether a transaction is committable is separated from the instruction to actually commit it. In a naive port of STMs to the DTP world this can destroy the progress guarantee. The problem is that once an STM has voted to commit a transaction, it cannot vote to commit another transaction that has read from any of the memory locations modified by the first until the first transaction has either committed or aborted. It must therefore either vote to abort the second transaction, which may result in lack of progress, or delay the decision, which may result in deadlock.

In conclusion, the transactional world envisaged by Meijer and others
may well be achievable, but clearly more research is needed before we
can tell for sure.

by matthias on
LShift Partners with DataPower

Following our very positive experience using DataPower’s XS40 XML security gateway on a recent Vodafone project, LShift are looking forward to deploying DataPower technology on other projects.

DataPower’s products deliver business and technical benefits in equal measure. Most companies that consider exposing business-critical data via Web Services are justifiably concerned about the security risks they could be introducing, and this has had a considerable effect on the adoption of service-oriented architectures. Separating the management and enforcement of security from application logic, and concentrating it in security appliances at the periphery of the network, reduces the risk significantly and helps network administrators stay on top of systems of ever-increasing complexity.

LShift are also excited about the XML acceleration capabilities of the DataPower technology. Performing complex XML processing at the high throughput and low latency often required in the telecoms and financial sectors has hitherto been a significant challenge, and one that only grows as the Web Services stack matures and more and more applications want to take advantage of it. Fortunately, many XML processing tasks lend themselves to parallelisation and implementation in hardware. DataPower’s XML accelerators therefore open the door to the deployment of XML-based services in areas where no cost-effective solutions previously existed.

DataPower provides enterprises with intelligent XML-Aware network (XAN) infrastructure to ensure unparalleled performance, security and manageability of next-generation applications and XML Web Services. DataPower’s patent-pending XML Generation Three (XG3™) technology powers the industry’s first wire-speed XML-aware networking devices that provide immediate return on technology investments while streamlining application deployments. Founded in 1999, DataPower is privately held and based in Cambridge, MA.

by Administrator on 23/09/05

XML tunnel-vision

In Ant 1.6, properties can be written in XML files. Can someone tell me why

    <property name="some.property" value="some.value"/>

is more desirable than

    some.property=some.value

?

*Update:* The import feature is what’s new in Ant 1.6 that makes this usage possible. So, the answer is, “because you can conditionally set properties in the imported files” (rather than conditionally evaluating property files). So really my gripe is (again) with the weird, stilted little language that is Ant.
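
For example – property names invented for illustration – an imported build file can set a property only when a condition holds, which a flat .properties file has no way to express:

```xml
<project name="imported-properties">
  <!-- Choose a launcher script based on the OS we are running on. -->
  <condition property="platform.launcher" value="launcher.sh">
    <os family="unix"/>
  </condition>
  <condition property="platform.launcher" value="launcher.bat">
    <os family="windows"/>
  </condition>
</project>
```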

by mikeb on 22/09/05

Understanding “Understanding Brute Force”

D J Bernstein’s draft paper *Understanding Brute Force* argues that the way we currently measure the cost of cryptanalytic attacks is highly misleading. The paper is a good example of Bernstein’s unconventional style, mixing quite informal writing with very formal and precise descriptions of cryptanalytic methods and costs. Though his conclusions are correct, I think he hasn’t quite put his finger on how people have come to be misled in the past, so I shall have a go here at arguing what I think is the same point in different words.

The traditional argument goes like this: an n-processor machine can never solve a problem more than n times faster than the same machine with only one processor, and sometimes it will be much less than n times faster, because some problems don’t parallelize well. So we give the attacker the best possible advantage by measuring the cost of their attack on a uniprocessor machine.

The premise is (approximately) correct, but the conclusion is false. The problem is that this reasoning ignores memory cost. Specifically, if your memory costs are high you might be better off spending less on memory and more on processors. By making this error, you can overestimate the efficacy of a cryptanalytic attack.

To highlight this, imagine an attack on a block cipher with a 128-bit key which takes 2^90 time and requires 2^90 storage. By current standards, this is a publishable attack which would be considered to “break” the block cipher. But compare this attack with a straightforward parallel brute-force attack: for far less cost, one could build a machine with 2^64 dedicated processors, which would brute-force through the keyspace of the cipher in 2^64 steps. Clearly, the cipher is not really any weaker for the existence of this attack.
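
Putting rough numbers on the comparison (assuming, for simplicity, that a processor and a unit of storage cost about the same):

$$\underbrace{2^{90} \cdot 2^{90}}_{\text{attack: } t \cdot h} = 2^{180} \qquad \text{versus} \qquad \underbrace{2^{64} \cdot 2^{64}}_{\text{brute force: } t \cdot h} = 2^{128}$$

By the time-hardware product discussed below, the “publishable” attack costs 2^52 times more than brute force.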

Or consider the attacks on triple DES presented in my colleague Stefan Lucks’ paper, *Attacking Triple Encryption*. The final attack presented requires only time equivalent to 2^90 DES encryptions, which may seem a huge step forward over the 2^168 steps required for a brute-force attack. However, it also requires 2^88 memory. Considered in the above light, this attack seems less impressive: if a dedicated Triple-DES key cracker can be built for the cost of 2^8 units of storage, then brute force will beat this attack on time and cost quite handily. As it is, the cost of such a unit is likely to be somewhat more, so this attack survives, but barely.
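
The arithmetic, under that assumption about the price of a cracker:

$$\frac{2^{88} \text{ storage units}}{2^{8} \text{ units per cracker}} = 2^{80} \text{ crackers}, \qquad \frac{2^{168} \text{ keys}}{2^{80} \text{ crackers}} = 2^{88} \text{ steps} < 2^{90}$$

That is, for the same hardware budget as the attack’s memory alone, brute force finishes first.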

This brings us to our first conclusion: the true measure of the cost of an attack is not time, but the product of time and hardware cost. Only by this measure are our comparisons with brute force fair.

Once we accept this measure, however, we open ourselves up to the reverse error: if we persist in considering only serial attacks, we will now underestimate the power available to attackers. Though I don’t understand it well enough to know for sure, it’s possible that the above Triple-DES attack might parallelize quite nicely. If 2^70 processors, each with 2^18 memory, can perform the attack with the full efficacy of parallelism, then only 2^20 steps are needed to perform the attack, and this is a far lower time and hardware cost than any brute-force search. Suddenly this attack becomes far more tempting – but not all attacks are amenable to such parallelization, and the details have to be considered carefully.
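
In time-hardware-product terms, using those hypothetical figures:

$$h = 2^{70} \cdot 2^{18} = 2^{88}, \qquad t = \frac{2^{90}}{2^{70}} = 2^{20}, \qquad ht = 2^{108} \ll 2^{168}$$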

From this we draw our second conclusion: it is insufficient to consider serial attacks, and assume that parallel attacks are implicitly covered. Parallel attacks must be considered explicitly. The existence of an attack with hardware cost h and time cost t (total cost ht) does not imply either the existence of an attack with hardware cost h/2 and time cost 2t or one with hardware cost 2h and time cost t/2.

However, one should be careful in asserting that a parallel attack can be “more powerful” than a serial attack. This assertion can be misleading, because it seems to contradict our opening premise which was correct. Rather, this assertion should be taken as the combination of our two conclusions: the correct measure of an attack is not time but the product of time and hardware cost, and it is by this measure that we might find that against some cryptosystems there exist parallel attacks that outperform any serial attack.

Bernstein’s paper also highlights a third point, which touches only tangentially on these arguments about parallelism but also adds to our understanding of brute force: if our attacker has multiple cryptanalytic targets, their life becomes proportionally easier. If our attacker need only break any one of 2^20 keys to defeat the system, this can make the job up to 2^20 times easier, because each guess at a key has a 2^20 times greater chance of being right. This insight was first presented by Biham in 1996 (*How to Forge DES-Encrypted Messages in 2^28 Steps*), and is a key component of the attack presented in *Extracting a 3DES key from an IBM 4758*: the attackers find a way to break into the system if any of 2^14 DES keys can be discovered, and this makes the job of brute-forcing DES sufficiently easy (only 2^42 steps) that it can be done on an off-the-shelf FPGA evaluation board costing under $1000. Bernstein demonstrates that these advantages can be preserved in a highly parallel attack, and goes on to argue convincingly that the best countermeasure is a larger keyspace – echoing the conclusion of Biham’s original paper on the subject, which suggests that our keys should be twice as long as they need to be to exhaust the computational resources of our attacker.
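
The underlying arithmetic: with k targets, each trial key can be checked against all k at little extra cost, so the expected work of brute force drops by a factor of k. For the 4758 attack:

$$\frac{2^{56} \text{ DES keys}}{2^{14} \text{ targets}} = 2^{42} \text{ trials}$$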

This leads to an interesting open problem in cryptography. The current effective working definition of what it means for a cipher to be “unbroken” is that there are no distinguishing attacks more effective than brute force. Bernstein suggests that we should modify this definition to allow a cipher to have a key length greater than its design strength; one could declare that a cipher with a 256-bit key has a design strength of only 128 bits, so an attack that required 2^160 resources would be considered irrelevant even though it greatly outperforms brute force. The purpose of this distinction would be to gain the resistance against multiple-target cryptanalysis that comes with larger keys without paying the performance cost associated with resisting the impossibly powerful attacker who can muster up to 2^256 computing resources.

However, this leaves the problem of defining exactly what security up to a design strength of 128 bits means. If one defines it in the straightforward way – that the cipher is indistinguishable from random by an attacker who can expend up to 2^128 resources – then one can achieve this by discarding half the key bits and using a strong 128-bit cipher, which would lend no extra resistance to multiple-target attacks and would defeat the purpose of extending the key. I see no more elegant way to do it than directly considering an attacker with access to multiple targets, but I hope that further research will shed more light on this question.

by Paul Crowley on 21/09/05

The Ashes

Stuart ran an RF cable up from the riser downstairs so we can have the last, deciding day of the Ashes Tests shown through the projector in our meeting room.

Exciting!

by mikeb on 12/09/05
