Monday, January 21, 2008

How to Use Concurrency to Improve Response Time in ErlyWeb Facebook Apps

When I was building the Vimagi Facebook app, I came across a common scenario where using concurrency can make your application more responsive for its users.

The typical flow of responding to requests coming from Facebook looks like this:

1) request arrives
2) do some stuff (mostly DB CRUD operations)
3) call Facebook API to send notifications / update newsfeeds and profile FBML
4) send response

When you're building a Facebook app with ErlyWeb, you can instead do the following:

1) request arrives
2) do some stuff (mostly DB CRUD operations)

spawn(fun() ->
  3) call Facebook API to send notifications / update newsfeeds and profile FBML
end)

4) send response


The Facebook API calls in step 3 are much more expensive than the typical ErlyWeb controller operations because these calls involve synchronous round trips to the Facebook servers plus XML processing for the responses. By performing the Facebook API calls in a new process we can return the rendered response to the browser immediately and let the Erlang VM schedule the Facebook API calls to happen leisurely in the background.
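To make this concrete, here is a rough sketch of what such a controller function might look like. The names save_painting, facebook_api:send_notification and facebook_api:set_profile_fbml are made up for illustration -- they're not part of ErlyWeb or any real library:

    %% hypothetical ErlyWeb controller action
    submit(A) ->
        Painting = save_painting(A),          % step 2: DB CRUD
        spawn(fun() ->
                  %% step 3: slow, synchronous Facebook round trips
                  facebook_api:send_notification(Painting),
                  facebook_api:set_profile_fbml(Painting)
              end),
        {data, Painting}.                     % step 4: respond immediately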

The only gotcha is that if an error occurs in the spawned process, we can't notify the user right away -- but this isn't really a problem, because we can log the errors and retry later, which is arguably a better approach from a usability standpoint anyway.
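If you want the spawned process to at least log its failures (so a later job can retry them), you can wrap the calls in a try/catch -- something along these lines, again with a hypothetical facebook_api module:

    spawn(fun() ->
              try
                  facebook_api:send_notification(Painting)
              catch
                  Class:Reason ->
                      %% log the failure; a separate process could retry later
                      error_logger:error_msg("Facebook call failed: ~p:~p~n",
                                             [Class, Reason])
              end
          end)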

It's really that easy! A simple call to spawn() makes our app *feel* much faster. This puts the debate around language performance comparisons in a new light: how do you take into account the observation that some languages make "cheating" so much easier? :)

6 comments:

Harish Mallipeddi said...

Forget about being responsive, you absolutely have to make those profile FBML and newsfeed calls asynchronously because Facebook imposes very stringent timeout periods. I was implementing my app in Python/Django and had to resort to writing my own little daemon to handle these calls.

Igor said...

Most languages these days include capabilities for kicking off background threads. In Java it takes half a dozen lines of boilerplate code (the number of lines is a testament to the language's verbosity). Python is comparable to Erlang.

The hard problem is making it reliable. What happens if FB is not available? What about the case where the machine running the background task crashes? Does Erlang help with any of these?

Kevin Smith said...

@Igor - Making the async call reliable in a Java webapp is, I think, a fairly ugly proposition. Actually, I think it's ugly in any language that doesn't have Erlang-like message-passing semantics.

In one of these other languages, say you're off in another thread making an FB call and Bad Things Happen. How do you propagate the failure back to the correct owner? I know how to do that in Erlang - just send a message. Java or Python will require me to write more infrastructure to handle the failure correctly.
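For example (a sketch only; facebook_api and handle_fb_error are made-up names), the background process simply reports back to whatever process owns the work:

    Owner = self(),
    spawn(fun() ->
              case facebook_api:send_notification(Msg) of
                  ok -> ok;
                  {error, Reason} -> Owner ! {fb_error, Reason}
              end
          end),
    %% the owner picks up failures whenever it's convenient,
    %% e.g. in a long-lived process's main loop
    receive
        {fb_error, Reason} -> handle_fb_error(Reason)
    after 5000 -> ok
    end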

Bryan said...

Totally with you, Yariv. I've been spawning new processes for many actions that might take a lot of time. Updating Facebook profiles, or even just flushing content to the database - as long as I can show the user a reasonably consistent view (by leaving the in-memory data temporarily out of sync), I let those updates happen in the background.

@Igor: Erlang *does* help with the failure cases. Even beyond Kevin's note that inter-process communication is simple, Erlang also has a method for linking processes together, such that even if the background process dies unexpectedly, you can handle the failure in a live process that might know what to do about it.
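A minimal sketch of that linking idea (the facebook_api call and the UserId/Fbml variables are placeholders): the watching process traps exits, spawn_links the background job, and receives an 'EXIT' message if it dies:

    process_flag(trap_exit, true),
    Pid = spawn_link(fun() ->
                         facebook_api:set_profile_fbml(UserId, Fbml)
                     end),
    receive
        {'EXIT', Pid, normal} -> ok;
        {'EXIT', Pid, Reason} ->
            error_logger:error_msg("background update died: ~p~n", [Reason])
    end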

Yariv said...

@Igor It's not just the capability of spawning processes in a few LOCs that makes Erlang interesting. It's also 1) the fact that those processes are very cheap, so there's no scalability or performance risk in spawning them at request processing time (the same can't be said for OS threads, which Python and Java use -- Google "apache vs yaws" to see what I mean), and 2) the inherent safety of the Erlang concurrency model, which guarantees that multiple threads will never try to modify the same data because all data is immutable. These two factors effectively give you carte blanche to spawn as many processes as you want without worrying about artificial scalability bottlenecks and thorny concurrency issues.

S said...

But Python is not limited to OS threads! What about Stackless and/or greenlets? Those do not cost as much as OS threads and do much the same as Erlang.