Thursday, February 07, 2008

The Erlang Challenge

In the spirit of Paul Graham's Arc challenge, and in spite of the "controversy" around it, I devised a programming challenge of my own. It's called the Erlang Challenge.

The goal is to implement an advanced "parallel map": a function (F1) that applies another function (F2) to every element (E) of a list (L1) concurrently and then gathers the results in a new list (L2). The order of results in L2 corresponds to the order of elements in L1, and the elements of L1 are guaranteed to remain unchanged no matter what function F2 does. To make things more interesting, F1 should also accept a list of cluster nodes (NL) onto which the computations can be distributed. Each application of F2 to an element of L1 should be done on a random node from NL, and the result should be sent back to the node on which F1 was called (N). If NL is empty, all computations should occur locally on N.

This challenge isn't just about succinctness, but, like most real-world systems, also about scalability and performance. Therefore, it has the following requirements:

- The implementation must be able to spawn at least 1 million concurrent processes on a modern server machine.
- On multi-core machines, the application's performance must increase (almost) linearly with the number of available CPUs.
- The application as a whole must exhibit soft real-time performance. Specifically, processes must not freeze for long periods during global garbage collection sweeps, and no process may block the entire VM to do IO or call native code, regardless of what F2 does. In other words, it must be impossible to write an F2 that blocks the scheduler while it executes.

Here's how I would implement it in Erlang (this is loosely based on Joe Armstrong's pmap implementation):


pmap(Fun, List, Nodes) ->
    SpawnFun =
        case length(Nodes) of
            %% No nodes given: spawn every worker on the local node.
            0 -> fun spawn/1;
            Length ->
                %% Pick a random node from Nodes for each worker.
                NextNode = fun() -> lists:nth(random:uniform(Length), Nodes) end,
                fun(F) -> spawn(NextNode(), F) end
        end,
    Parent = self(),
    %% Spawn one worker per element; each sends {Pid, Result} to the parent.
    Pids = [SpawnFun(fun() -> Parent ! {self(), (catch Fun(Elem))} end)
            || Elem <- List],
    %% Selectively receive each result in spawn order, preserving List's order.
    [receive {Pid, Val} -> Val end || Pid <- Pids].
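
For a quick sanity check, here is the same function wrapped in a self-contained module with a small local demo. The module name pmap_demo is my own choice, not part of the post, and this sketch uses rand:uniform/1 (the modern replacement for random:uniform/1) so it compiles on current Erlang/OTP releases:

```erlang
%% A self-contained sketch of the pmap/3 above; the module name
%% pmap_demo is invented for this example.
-module(pmap_demo).
-export([pmap/3, demo/0]).

pmap(Fun, List, Nodes) ->
    SpawnFun =
        case length(Nodes) of
            0 -> fun spawn/1;
            Length ->
                NextNode = fun() -> lists:nth(rand:uniform(Length), Nodes) end,
                fun(F) -> spawn(NextNode(), F) end
        end,
    Parent = self(),
    Pids = [SpawnFun(fun() -> Parent ! {self(), (catch Fun(Elem))} end)
            || Elem <- List],
    [receive {Pid, Val} -> Val end || Pid <- Pids].

%% With an empty node list, every worker runs on the calling node.
demo() ->
    [1, 4, 9, 16] = pmap(fun(X) -> X * X end, [1, 2, 3, 4], []),
    ok.
```

Distributing the work only requires a non-empty node list, e.g. pmap_demo:pmap(F, L, nodes()), provided the remote nodes are connected and have the same code loaded.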


How would you implement it in your favorite language?

Update: please read the followup.

20 comments:

Xichekolas said...

I probably just don't know Erlang well enough to see it (but I'm trying to learn it!), but what in your code puts the elements back in relative order? Do the worker nodes just return the results in the right order?

sjs said...

Xichekolas: The pids gathered are in the order of the elements in List. In the last list comprehension, the values are gathered in the same order as the pids, so they end up in the same order as the original List.

Nat said...

That's even more biased than the Arc challenge!

What meaning do you give the term "process" (without reference to Erlang)? Do you really mean function application? Why would I want to do 1 million function applications in parallel on the same machine? Surely I want to do the appropriate number to maximise the use of the available CPUs.

You're misusing the term "soft-realtime". It appears that you just mean "whatever I say is fast enough".

And this challenge doesn't play to Erlang's strengths (no distribution, no failover, etc.). It is most suitable for an array processing language, not an actor language.

Nona Myous said...

If there are a million elements in the list, the mailbox for the parent process can grow very large. Does Erlang handle this properly?

Kevin said...

@Nat: I would take "process" to mean any concurrency abstraction. The other two requirements (linear scaling wrt additional CPUs and soft-realtime) may constrain the implementations, though. (Cooperative threads seem to be out.) Also, this example will distribute to multiple machines. Just include remote nodes in Nodes. Finally, while you would probably want to distribute a million-item List across multiple machines, if the tasks involve, say, lots of waiting on the network, then you very well might want a large number running in parallel on a single machine.

ken said...

I think you just described the standard Fortress for-loop. Do you want me to write one for you? It looks pretty much like a Python loop.

jlouis said...

[ken: a Fortress loop] ... or a Manticore parallel list. Although the language is not out yet. The idea is basically that [| x, y, z |] is a list onto which we may operate in parallel. The system is free to spawn processes as it sees fit to get the maximum performance.

Of course there is a comprehending variant as well ;)

Garrett said...

pmap f []     = []
pmap f (x:xs) = y `par` (ys `pseq` (y : ys))
  where y  = f x
        ys = pmap f xs

David Mathers said...

update T1 set L2 = F2(L1)

The clustering and scaling is handled by the SQL implementation, not mixed in with the application logic.

horia314 said...

Touché, one might say. On the other hand, just like Arc is trying to prove it's the best Arc around, so is this post proving that Erlang is the best Erlang around.

Except for it, there isn't any production-ready programming language that can do the sort of massive parallelism that Erlang provides.

Kudos to it for being able to do that, but monopolies are bad things in programming languages as they are in business.

One thing in particular bothers me about Erlang: why is there a different method for process creation on the local machine than on another node? I mean, it's clearly something the runtime could handle, and it would spare you a fair amount of work on the programming side.

Bob Ippolito said...

It seems most people are missing the fact that this will distribute the work over all nodes (usually nodes are run on different machines). The data parallel languages cited in the comments will do a great job of this too, almost certainly better than Erlang, but I believe that most of them will only do computation on the local machine.

The equivalent single-node pmap for Erlang is short too, but less interesting (remove all but the last three statements, change SpawnFun to spawn, and remove Nodes from the argument list).

There isn't really a different method for process creation on the local node versus any other node. It looks like SpawnFun is just handling a case that it really doesn't need to (bad input). spawn(node(), F) would work just fine (node() is the local node). spawn(F) is just shorter to write, but it's not necessary.

Realistically you'd probably call pmap(Fun, List, [node() | nodes()]), which would distribute the computation over all nodes that the local node can see, including itself. He could've just as well checked for an empty Nodes and used [node()] instead.

As far as mailbox size goes, it depends on what you mean by "properly". The answer is probably not -- this isn't an optimal pmap for very large lists due to that issue.

Xichekolas said...

@sjs: Ahhh, yeah. That makes total sense.

wingedsubmariner said...

I wrote a library to do this a while ago:

http://code.google.com/p/plists/

It will also parallelize many other operations. It supports proper error handling as well, trying to make itself as invisible as possible, while still not sending large lists across processes except when necessary (separate heaps == potential inefficiency). It's capable of autodetecting the number of threads Erlang is configured to use on each node and spawning that many processes -- effectively maxing out a machine.

It's over one thousand lines of code though, so it fails succinctness.

Yariv said...

@Bob There is a slight difference between the use of SpawnFun in this example and calling pmap(Fun, List, [node() | nodes()]) -- in this example, if the list of nodes is not empty, the current node won't be included in the call to spawn(). Obviously it's not a very important difference -- when I designed the example, I was just trying to illustrate how in Erlang, spawning processes on remote VMs is as simple as spawning them on the local VM.

@wingedsubmariner I know about plists. It's great.

Yariv’s Blog » Blog Archive » More Erlang Fun: Distributed, Fault Tolerant MapReduce said...

[...] the Erlang Challenge posting, I showed how to implement a simple distributed parallel map function. You can pass it a [...]

Martial said...

I do like Erlang programming, but the Smalltalk continuation-based solution with Seaside looks like a simpler way:
http://www.lukas-renggli.ch/blog/take-the-arc-challenge
Erlang has the advantage of low memory consumption and 'infinite' scalability.

The (Unofficial) Erlang Blog » Blog Archive » Erlang Challenge Announced said...

[...] last week’s Arc release and subsequent Arc challenge, Yariv Sadan (ErlyWeb) has announced the Erlang Challenge. The goal is to design & program an advanced “parallel map” function that can [...]

developers.hover.in » Somethings to rejoice about said...

[...] compared some techniques from many pmap implementations out there, and here are some [...]