Sunday, May 24, 2009

Unifying dynamic and static types with LFE

In my last post I described how to use LFE to overcome some of the weaknesses of parameterized modules. Unfortunately, all is not rosy yet in the land of LFE types. Parameterized modules allow you to create only static types. The compiler doesn't do static type checking, but you have to define the properties of your types at compile time. This works in many cases, but sometimes you want totally dynamic containers that map keys to values. In Erlang, this is typically done with dicts. We could still use them with LFE, but I don't like having different methods of accessing the properties of objects depending on whether their types were defined at run time or compile time.
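For reference, here's what the dynamic approach looks like with the stdlib dict module (a minimal sketch; only standard dict calls are used):

```erlang
%% A dict maps keys to values and is built entirely at run time.
D0 = dict:new(),
D1 = dict:store(name, "Lola", D0),
{ok, "Lola"} = dict:find(name, D1),
%% dict:find/2 returns 'error' for a missing key
error = dict:find(size, D1).
```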

Let's use macros to solve the problem.

In my last post, I relied on the built-in 'call' function to access the properties of static objects. Let's create a wrapper to 'call' that lets us access the properties of dicts in exactly the same manner as we access the properties of other objects:
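The wrapper itself isn't shown here, but the idea can be sketched in plain Erlang (the name 'dot' matches the wrapper described below; is_dict/1 is a hypothetical helper that relies on dicts being tagged tuples whose first element is the atom 'dict'):

```erlang
%% Sketch: unified property access for dicts and parameterized-module
%% style objects such as {dog, "Lola", "big"}.
dot(Obj, Prop) ->
    case is_dict(Obj) of
        true  -> dict:fetch(Prop, Obj);
        %% getters generated for static objects have the form prop(Obj)
        false -> apply(element(1, Obj), Prop, [Obj])
    end.

dot(Obj, Prop, Val) ->
    case is_dict(Obj) of
        true  -> dict:store(Prop, Val, Obj);
        %% setters have the form prop(Val, Obj)
        false -> apply(element(1, Obj), Prop, [Val, Obj])
    end.

is_dict(Obj) -> is_tuple(Obj) andalso element(1, Obj) =:= dict.
```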

We can use 'dot' to get and set the properties of both dicts and static objects:

> (: dog test)
(lola rocky)

I think this is kind of cool, though to be honest I'm not entirely sure it's a great idea to obfuscate in the code whether we're dealing with dicts or static objects.

Sunday, May 10, 2009

Geeking out with Lisp Flavoured Erlang

One of the features I dislike the most about Erlang is records. They're ugly and they require too much typing. Erlang makes me write

Dog = #dog{name = "Lolo", parent = #dog{name = "Max"}},
Name = Dog#dog.name,
ParentName = (Dog#dog.parent)#dog.name,
Dog1 = Dog#dog{
    name = "Lola",
    parent = (Dog#dog.parent)#dog{name = "Rocky"}}

When I want to write

Dog = #dog{name = "Lolo", parent = #dog{name = "Max"}},
Name = Dog.name,
ParentName = Dog.parent.name,
Dog1 = Dog{
    name = "Lola",
    parent = Dog.parent{name = "Rocky"}}

In defense of records, they're just syntactic sugar over tuples, and as such they enable fast access to a tuple's properties despite Erlang's dynamic nature. At compile time, they're converted into fast element() and setelement() calls that don't require looking up the property's index in the tuple. Still, I dislike them because 1) in many cases, I'd rather optimize for more concise code than for faster execution and 2) if the Erlang compiler were smarter, it could allow us to write the more concise code above by inferring the types of variables in the program when possible. (I wrote a rant about this a long time ago, with a proposed solution in the form of a yet unfinished parse transform called Recless.)
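To make the sugar concrete, this is roughly what those record operations compile down to (a sketch; field indices start at 2 because element 1 holds the record tag):

```erlang
%% Given -record(dog, {name, parent}):
Dog = {dog, "Lolo", undefined},       %% what #dog{name = "Lolo"} builds
Name = element(2, Dog),               %% what Dog#dog.name becomes
Dog1 = setelement(2, Dog, "Lola").    %% what Dog#dog{name = "Lola"} becomes
```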

Parameterized modules provide a somewhat more elegant and dynamic way of dealing with types, but they still require you to do too much work. You can define a parameterized module like this:

-module(dog, [Name, Size]).

This creates a single function called 'new/2' in the module 'dog'. You can call it as follows:

Dog = dog:new("Lola", "big").

It returns a tuple of the form

{dog, "Lola", "big"}.

You can't set only a subset of the record's properties in the 'new' function. This doesn't work:

Dog = dog:new("Lola").

To access the record's properties you need to define your own getter functions, e.g.

name() ->
    Name.
which you can call as follows:

Name = Dog:name().

(This involves a runtime lookup of the 'name' function in the 'dog' module, which is slower than a record accessor).

There's no way to set a record's properties after it's created, short of altering the tuple directly with setelement(), which is lame and also quite brittle. To create a setter, you need to do the following:

name(Val) ->
setelement(2, THIS, Val).

Then you can call

Dog1 = Dog:name("Lola").

to change the object's name.

When LFE came out I was hoping it would provide a better way of dealing with the problem. Unfortunately, records in LFE are quite similar to Erlang records, though they are a bit nicer. Instead of adding syntactic sugar, LFE creates a bunch of macros you can use to create a record and access its properties.

(defrecord dog name)
(let* ((dog (make-dog (name "Lola")))
       (name (dog-name dog))
       (dog1 (set-dog-name dog "Lolo")))
  ;; do something
  )

LFE still requires too much typing when dealing with records for my taste, but LFE does give us a powerful tool to come up with our own solution to the problem: macros. We can use macros to generate all those repetitive, brittle parameterized-module getters and setters that in vanilla Erlang we have to write by hand. This can help us avoid much of the tedium involved in working with parameterized modules.

(ErlyWeb performs similar code generation on database modules, but it does direct manipulation of Erlang ASTs, which are gnarly.)

Let's start with the 'new' functions. We want to generate a bunch of 'new' functions that allow us to set only a subset of the record's properties, implicitly setting the rest to 'undefined'.

(defun make_constructors (name props)
  (let* (((props_acc1 constructors_acc1)
          (: lists foldl
            (match-lambda
              ((prop (props_acc constructors_acc))
               (let* ((params (: lists reverse props_acc))
                      (constructor
                       `(defun new ,params
                          (new ,@params 'undefined))))
                 (list (cons prop props_acc)
                       (cons constructor constructors_acc)))))
            (list () ())
            props))
         (main_constructor
          `(defun new ,props (tuple (quote ,name) ,@props))))
    (: lists reverse (cons main_constructor constructors_acc1))))

This function takes the module name and a list of properties. It returns a list of sexps of the form

(defun new (prop1 prop2 ... prop_n-m)
(new prop1 prop2 ... prop_n-m 'undefined))

as well as one sexp of the form

(defun new (prop1 prop2 ... prop_n)
  (tuple module_name prop1 prop2 ... prop_n))

The first set of 'new' functions successively call the next 'new' function in the chain, passing into it their list of parameters together with 'undefined' as the last parameter. The last 'new' function takes all the parameters needed to instantiate an object and returns a tuple whose first element is the module, and the rest are the object's property values. (We need to store the module name in the first element so we can use the parameterized modules calling convention, as you'll see later.)
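For a dog class with properties name and size, the generated chain is equivalent to this hand-written Erlang (an illustrative expansion, not code you would write yourself):

```erlang
%% Each 'new' delegates to the next, appending 'undefined';
%% the last one builds the tagged tuple.
new()           -> new(undefined).
new(Name)       -> new(Name, undefined).
new(Name, Size) -> {dog, Name, Size}.
```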

Now let's create a function that generates the getters and setters

(defun make_accessors (props)
  (let* (((accessors idx1)
          (: lists foldl
            (match-lambda
              ((prop (acc idx))
               (let* ((getter `(defun ,prop (obj) (element ,idx obj)))
                      (setter `(defun ,prop (val obj)
                                 (setelement ,idx obj val))))
                 (list (: lists append acc (list getter setter))
                       (+ idx 1)))))
            (list () 2)
            props)))
    accessors))

This function takes a list of properties and returns a list of sexps that implement the getters and setters for the module. Each getter takes the object and returns its (n + 1)th element, where n is the position of its property (the first element is reserved for the module name). Each setter takes the new value and the object and returns the tuple after setting its (n + 1)th element to the new value.
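Again for the dog example with properties name and size, the generated code is equivalent to:

```erlang
%% size/1 clashes with the auto-imported BIF, hence the directive.
-compile({no_auto_import, [size/1]}).

%% Getters read the stored value; setters return a new tuple.
name(Obj) -> element(2, Obj).
name(Val, Obj) -> setelement(2, Obj, Val).
size(Obj) -> element(3, Obj).
size(Val, Obj) -> setelement(3, Obj, Val).
```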

Now, let's tie this up with the module declaration. We need to create a macro that generates the module declaration, constructors, getters, and setters, all in one swoop. But first, we need to expose make_constructors and make_accessors to the macro by nesting them inside an (eval-when-compile) sexp.

(eval-when-compile
  (defun make_constructors ...)
  (defun make_accessors ...))

(defmacro defclass
  (((name . props) . modifiers)
   (let* ((constructors (make_constructors name props))
          (accessors (make_accessors props)))
     `(progn
        (defmodule ,name ,@modifiers)
        ,@constructors
        ,@accessors))))

(defclass returns a `(progn) sexp with a list of new macros that it defines. This is a trick Robert Virding taught me for creating a macro that itself creates multiple macros.)

Now, let's see how this thing works. Create the file dog.lfe with the following code

;; import the macros here

(defclass (dog name size)
(export all))

Compile it from the shell:

(c 'dog)

Create a dog with no properties

> (: dog new)
#(dog undefined undefined)

Create a new dog with just a name

> (: dog new '"Lola")
#(dog "Lola" undefined)

Create a new dog with a name and a size

> (: dog new '"Lola" '"medium")
#(dog "Lola" "medium")

Get and set the dog's properties

> (let* ((dog (: dog new))
(dog1 (call dog 'name '"Lola"))
(dog2 (call dog1 'size '"medium")))
(list (call dog2 'name) (call dog2 'size)))
("Lola" "medium")

This code uses the same parameterized-module function calling mechanism, which allows us to pass a tuple instead of a module name as the first parameter to the 'call' function. Erlang infers the name of the module that contains the function from the first element of the tuple.

As you can see, LFE is pretty powerful. I won't deny that sometimes I get the feeling of being lost in a sea of parentheses, but over time the parentheses have grown on me. The benefits of macros speak for themselves. They give you great flexibility to change the language as you see fit.

Sunday, May 03, 2009

How to work on cool stuff

I attended the Bay Area Erlang Factory last week. It was a great event. I met many Erlang hackers, attended interesting talks, learned about cool projects (CouchDB, QuickCheck, Nitrogen, Facebook Chat), gave a talk about ErlyWeb, and drank beer (without beer, it wouldn't be a true Erlang meetup).

My favorite talk was by Damien Katz. He told the story of how he had decided to take a risk, quit his job, and work on his then amorphous project. He wanted to work on cool stuff, and that was the only way he could do it. Even if nothing else came out of it, he knew it would have been a great learning exercise. Something great did eventually come out of it, as he created CouchDB (which looks awesome btw) and IBM eventually hired him to work on it full time.

Damien's story reminded me of the time I started working on ErlyWeb a few years ago. After I left the company I was working for at the time, I decided to take a few months and work on something cool. I didn't know what exactly it would be or how long it would take, but I knew that I wanted to build a product that would help people communicate in new ways, and I wanted to build it with my favorite tools. I knew the chance of failure was high, but I figured the learning alone would be worth it. I also viewed open source as an insurance policy of sorts. Even if I couldn't get a product off the ground, my code could live on and continue to provide value to people.

Doing it paid off. My savings dwindled, but I learned Erlang, created ErlyWeb and Vimagi, met many like-minded people, and it opened new doors. Now I work on cool stuff at Facebook, ErlyWeb lives on, and every day people are using Vimagi to create amazing art and share it with their friends.

The moral of the story: if you're not working on cool stuff, take a risk and try to make it happen. Don't worry about building the next Google or making lots of money, because you'll probably fail. But the lessons you learn and the connections you make will be worth it.

Monday, March 09, 2009

Parallel merge sort in Erlang

I've been thinking lately about the problem of scaling a service like Twitter or the Facebook news feed. When a user visits the site, you want to show her a list of all the recent updates from her friends, sorted by date. It's easy when the user doesn't have too many friends and all the updates are on a single database (as in Twoorl's case :P). You use this query:

"select * from update where uid in ([fid1], [fid2], ...) order by creation_date desc limit 20"

(After making sure you created an index on uid and creation_date, of course :) )

However, what do you do when the user has many thousands of friends, and each friend's updates are stored on a different database? Clearly, you should fetch those updates in parallel. In Erlang, it's easy. You use pmap():

fetch_updates(Uids) ->
    pmap(
      fun(Uid) ->
          Db = get_db_for_user(Uid),
          query(Db, [<<"select * from update where uid = ">>,
                     Uid, <<" order by creation_date desc limit 20">>])
      end, Uids).

%% Applies the function Fun to each element of the list in parallel
pmap(Fun, List) ->
    Parent = self(),
    %% spawn the processes
    Refs =
        lists:map(
          fun(Elem) ->
              Ref = make_ref(),
              spawn(
                fun() ->
                    Parent ! {Ref, Fun(Elem)}
                end),
              Ref
          end, List),

    %% collect the results
    lists:map(
      fun(Ref) ->
          receive
              {Ref, Elem} -> Elem
          end
      end, Refs).

Getting the updates is straightforward. However, what do you do once you've got them? Merging thousands of lists can take a long time, especially if you do it in a single process. The last thing you want is for your site's performance to grind to a halt when users add lots of friends.

Fortunately, merging a list of lists isn't too hard to do in parallel. Once you've implemented your nifty parallel merge algorithm, you can theoretically speed up response time by adding more cores to your web servers. This should help you maintain low latency even for very dense social graphs.

So, how do you merge a list of sorted lists in parallel in Erlang? There is probably more than one way of doing it, but this is what I came up with: you create a list of single-element lists. You scan through the main list, and for each pair of lists you spawn a process that merges the two lists and sends the result to the parent process. The parent process collects all the results, and repeats as long as there is more than one result. When only one result is left, the parent returns it.

Let's start with the base case of how to merge two lists:

%% Merges two sorted lists
merge(L1, L2) -> merge(L1, L2, []).

merge(L1, [], Acc) -> lists:reverse(Acc) ++ L1;
merge([], L2, Acc) -> lists:reverse(Acc) ++ L2;
merge(L1 = [Hd1 | Tl1], L2 = [Hd2 | Tl2], Acc) ->
    {Hd, L11, L21} =
        if Hd1 < Hd2 ->
                {Hd1, Tl1, L2};
           true ->
                {Hd2, L1, Tl2}
        end,
    merge(L11, L21, [Hd | Acc]).
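As a sanity check, the stdlib's lists:merge/2 does the same job for two sorted lists and can be used to cross-check the hand-rolled version:

```erlang
%% lists:merge/2 merges two sorted lists into one sorted list.
[1, 2, 3, 4, 5, 6] = lists:merge([1, 3, 5], [2, 4, 6]).
```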

Now, to the more interesting part: how to merge a list of sorted lists in parallel.

%% Merges all the lists in parallel
merge_all(Lists) ->
merge_all(Lists, 0).

%% When there are no lists to collect or to merge, return an
%% empty list.
merge_all([], 0) ->
    [];

%% When no lists are left to merge, we collect the results of
%% all the merges that were done in spawned processes
%% and recursively merge them.
merge_all([], N) ->
Lists = collect(N, []),
merge_all(Lists, 0);

%% If only one list remains, merge it with the result
%% of all the pair-wise merges
merge_all([L], N) ->
merge(L, merge_all([], N));

%% If two or more lists remain, spawn a process to merge
%% the first two lists and move on to the remaining lists
%% without blocking. Also, increment the number
%% of spawned processes so we know how many results
%% to collect later.
merge_all([L1, L2 | Tl], N) ->
    Parent = self(),
    spawn(
      fun() ->
          Res = merge(L1, L2),
          Parent ! Res
      end),
    merge_all(Tl, N + 1).

%% Collects the results of N merges (the order
%% doesn't matter).
collect(0, Acc) -> Acc;
collect(N, Acc) ->
    L = receive
            Res -> Res
        end,
    collect(N - 1, [L | Acc]).

So, how well does this perform? I ran a benchmark on my 2.5 GHz Core 2 Duo Macbook Pro. First, I created a list of a million random numbers, each between 1 and a million:

> L = [random:uniform(1000000) || N <- lists:seq(1, 1000000)].

Then, I timed how long it takes to sort the list, first with lists:sort() and then with my shiny new parallel merge function.

> timer:tc(lists, sort, [L]).

Less than a second. lists:sort() is pretty fast!

Before we can pass the list of numbers into merge_all(), we have to break it up into multiple lists with a single element in each list:

> Lists = [[E] || E <- L].

Now for the moment of truth:

> timer:tc(psort, merge_all, [Lists]).

About 8.2 seconds :(

It's not exactly an improvement, but at least we learned something. In this test case, the overhead of process spawning and inter-process communication outweighed the benefits of parallelism. It would be interesting to run the same test on machines that have more than two cores, but I don't have any at my disposal right now.

Another factor to consider is that lists:sort() is AFAIK implemented in C and therefore it has an unfair advantage over a function implemented in pure Erlang. Indeed, I tried sorting the list with the following pure Erlang quicksort function:

qsort([]) -> [];
qsort([H]) -> [H];
qsort([H | T]) ->
qsort([E || E <- T, E =< H]) ++
[H] ++
qsort([E || E <- T, E > H]).

> timer:tc(psort, qsort, [L]).

It took about 2 seconds to sort the million numbers.

The performance of merge_all() doesn't seem great, but consider that we spawned ~1,000,000 processes during this test. There were ~19 levels of recursion (log2 500,000), and at each level we spawned half the number of processes of the previous level, for a total of 500,000*(1 + 1/2 + 1/4 + ... + 1/2^18) ~= 1,000,000 processes. That works out to 8 seconds / 1,000,000 processes = 0.000008 seconds per process. It's actually quite impressive!

Let's go back to the original problem. It wasn't to sort one big list, but to merge a list of sorted lists with 20 items in each list. In this scenario, we still benefit from parallelism but we don't pay for the overhead of spawning hundreds of thousands of processes to merge tiny lists in the first few levels of recursion. Let's see how long it takes merge_all() to merge a million random numbers split between 50,000 sorted lists.

> Lists = [lists:sort([random:uniform(1000000) || N <- lists:seq(1, 20)])
    || N1 <- lists:seq(1, 50000)].
> timer:tc(psort, merge_all, [Lists]).

This function call took just over 2 seconds to run, roughly the same time as qsort(), yet it involved spawning 25,000*(1 - 0.5^15)/(1 - 0.5) ~= 50,000 processes! Now the benefits of concurrency start being more obvious.

Can you think of ways to improve performance further? Let me know!

Tuesday, January 13, 2009

Custom Tags for Facebook Platform

Check out the announcement on the developer blog about a project I've been working on at Facebook. It's a feature that lets you create your own FBML tags for Platform apps.