
Fair messaging in Erlang

One of the new projects that I’m working on involves a messaging infrastructure in Erlang. Without boring you with the details, the basic idea is that there are two types of messages, A and B, and both are sent to a thread (or a *process*, in Erlang terms). One A must be paired with one B before the pair can be discarded. Performance is an issue, so this pairing must be fast. Several approaches were developed in trying to make this go very fast:

1. The first approach is also the dumbest. All A and B messages are sent to the same process. The process deals with these messages in fifo order and thus must internally maintain queues of As and Bs, matching them as pairs become possible. The problem is that flooding the process with only As (or only Bs) cripples performance: every message must still be taken off the mailbox in fifo order while the internal queue of the flooded type grows without bound. So a DoS attack is very possible, and would often occur accidentally anyway (most traffic is bursty).
1. The next idea is not to process the messages in fifo order. This is possible because of Erlang’s *receive* statement, which can pattern-match on the messages in the queue. The problem is that each time through the receive block, the process must scan the message queue in fifo order until it finds a message that matches. So whilst the DoS attack can’t happen in quite the same way, a flood of one type still cripples performance: the queue must be scanned all the way to its end before the first message of the *other* type is found and a pairing can occur (a sketch of this naive pairer follows the list).
1. Use three processes. Send messages of one type to one process and messages of the other type to another: these processes are *buffers*. They process messages in fifo order (which is fast), putting everything they receive into an internal queue. The third process (the *pairer*) sends *fetch* messages to the buffers, which respond by sending a predetermined number of messages from their internal queues on to the pairer. The pairer then scans its messages in a non-fifo way, but because its message queue is effectively bounded, there is no significant performance drop (sketches of the buffer and pairer follow the list). The points are these:
* There is potentially a DoS attack which prevents the buffers from receiving the *fetch* message. However, firstly, because the buffers process their message queues in fifo order, the *fetch* message *will* be received, just possibly not immediately; and secondly, the buffers do very little work per message: they simply remove A or B messages from their message queues and place them in an internal queue. So getting through several thousand A or B messages does not take any real amount of time.
* The pairer asks for more messages (via *fetch* messages sent to the buffers) only once it has exhausted its quota: i.e. it asks for *n* messages from each buffer and will not ask again until it has received those *n* messages from each buffer. This means that its message queue will never be more than 2 *n* messages long.
* Careful testing reveals that *n* = 7 gives the best performance on the hardware I have available. This balances the cost of the pairer processing messages out of order against the extra round trips to the buffers. Lower values of *n* mean too many round trips sending *fetch* messages to the buffers, even though the non-fifo processing becomes really cheap. Higher values of *n* mean the pairer’s message queue gets too big, so the non-fifo processing costs too much.
* Using three processes makes good use of thread-level parallelism in the hardware: the two buffers can get through their message queues in parallel, as can the pairer. There’s therefore also the possibility of distribution across multiple machines.
* It’s very possible to alter the pairer so that it sends the *fetch* messages after it has achieved *m* pairings, where *m* < *n*. The pairer's message queue could then be up to 2 *n* + 2(*n* - *m*) messages long (e.g. for *n* = 7 and *m* = 5, at most 14 + 4 = 18 messages), but the delay between the pairer sending the *fetch* messages and the buffers being able to receive them and send messages on is reduced: effectively, the *fetch* messages end up higher up in the buffers' message queues.
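
To make the scanning cost in approach 2 concrete, here's a minimal sketch of that naive pairer. The `{a, Payload}` and `{b, Payload}` message shapes and the `paired/2` function are hypothetical stand-ins, not the real protocol:

```erlang
-module(naive_pairer).
-export([pairer/0]).

pairer() ->
    receive
        {a, _} = A ->
            %% Selective receive: scans the mailbox from the front for
            %% the first B, walking past every queued A. Under a flood
            %% of As this scan is linear in the mailbox length, and it
            %% happens on every single pairing.
            B = receive {b, _} = MB -> MB end,
            paired(A, B),
            pairer();
        {b, _} = B ->
            A = receive {a, _} = MA -> MA end,
            paired(A, B),
            pairer()
    end.

%% Hypothetical stand-in for whatever consumes a matched pair.
paired(_A, _B) -> ok.
```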
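
And a minimal sketch of approach 3, under the same hypothetical message shapes. Error handling, and the case where a buffer holds fewer than *n* messages when a *fetch* arrives (the real buffers would need to track the shortfall), are omitted:

```erlang
-module(fair_pairer).
-export([start/1, buffer/2]).

%% Spawn the two buffers and run the pairer in the calling process.
%% Clients send {a, _} messages to the first buffer and {b, _}
%% messages to the second.
start(N) ->
    Self = self(),
    BufA = spawn(?MODULE, buffer, [Self, queue:new()]),
    BufB = spawn(?MODULE, buffer, [Self, queue:new()]),
    pairer(BufA, BufB, N).

%% A buffer drains its mailbox in fifo order into an internal queue
%% (one cheap pattern match per message); on {fetch, N} it forwards up
%% to N queued messages on to the pairer.
buffer(Pairer, Q) ->
    receive
        {fetch, N} -> buffer(Pairer, forward(N, Pairer, Q));
        Msg        -> buffer(Pairer, queue:in(Msg, Q))
    end.

forward(0, _Pairer, Q) -> Q;
forward(N, Pairer, Q) ->
    case queue:out(Q) of
        {{value, Msg}, Q1} -> Pairer ! Msg,
                              forward(N - 1, Pairer, Q1);
        {empty, Q1}        -> Q1
    end.

%% The pairer asks each buffer for N messages, pairs them (the
%% selective receives are cheap because the mailbox holds at most 2N
%% messages), and only asks again once the quota is exhausted.
pairer(BufA, BufB, N) ->
    BufA ! {fetch, N},
    BufB ! {fetch, N},
    pair_quota(N),
    pairer(BufA, BufB, N).

pair_quota(0) -> ok;
pair_quota(N) ->
    A = receive {a, _} = MA -> MA end,
    B = receive {b, _} = MB -> MB end,
    paired(A, B),
    pair_quota(N - 1).

%% Hypothetical stand-in for whatever consumes a matched pair.
paired(_A, _B) -> ok.
```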

It’s quite ironic that a language that seems, at least on the face of it, ideal for messaging-type applications turns out to have implicit semantics that, if they don’t outright hinder you, at least make you think much harder when trying to implement certain requirements…

by matthew on 01/08/06
  1. Did you try any more process-oriented approaches? I can think of two, although I can’t even guess if they’d be faster:
    1. Each new message is a process: it broadcasts to all other processes (via a server list), and both processes die and de-register when a match is found. That’s n^2 scaling even if message passing is fast, so you’d have to hope the set size is fairly small… scratch this.
    2. Each new message gets passed to a tree of processes, which passes it down internally based on the message content until a match is found or a new leaf is created (the processes-as-a-data-structure idea)

    Getting things off the queue and into a process asap, esp. when in an SMP environment, has gotta be better.

    I’m too slack to implement this, although I am using the tree of processes idea to build a generic log file analyser (tree of time)

  2. Hi Matthew,

    Interesting that you write about this. I have been involved in a real-time matching engine in Java for a large financial institution and the problem is very similar, though a bit more complex.

    The interesting part is that I have been reading about Erlang a few months back and was intrigued by the way in which Erlang could be used to solve the same problem.

    In the current solution we are using the RETE algorithm to efficiently evaluate the matching rules. I was also considering how Erlang could be used to implement a distributed and fault-tolerant version of RETE.

    Anyway, thanks for the post!
