Sunday, July 13, 2014

ReactScriptLoader

I just published a small library called ReactScriptLoader to make it easier to load external scripts with React. Feedback is appreciated! https://github.com/yariv/ReactScriptLoader

Tuesday, June 03, 2014

Announcing HDMNode: a Node.JS based HDM Bitcoin wallet

I just released an open source library called HDMNode. It's a Node.JS based API server and client for hosted HDM (Hierarchical-Deterministic Multisig) Bitcoin wallets. If you're interested in using it or contributing to it, please read on!

The goal of HDMNode is to make it easier for developers to build HDM wallets. Such wallets have significant security and privacy advantages over most popular wallet types, and Bitcoin users (and the ecosystem) would benefit from a greater availability of high quality HDM wallet products. I believe there's a dearth of HDM wallets because they're fairly complicated to build. I hope that HDMNode will reduce the effort, as well as provide developers with a well audited and secure codebase on which they can safely rely. It would be a shame if every developer who wanted to build such a wallet had to face the same design issues and security pitfalls.

Why did I make HDMNode, and why am I releasing it now? When I started working on this project I intended to build a complete HDM wallet product. However, as time went by I realized I had bitten off more than I could chew in the timeframe I had allotted to the project. While I made a lot of progress on the backend and the API client, there was still a good amount of work to be done to make it production ready. In addition, the code needed many more eyeballs on it before I could be confident in its security. I didn't want to risk shipping a product that holds people's money with glaring flaws that I failed to catch. So, I decided that the best course of action was to open source the code and give other developers an opportunity to inspect it, to use it in their own products, and to hopefully contribute back to the project.

While I expect HDMNode to be mostly used by hosted wallet providers, if HDMNode evolves into a complete open source wallet (with UI) it could give users who want to protect their privacy the option to host their wallet on their own servers. I don’t expect this to be the primary use case but I also don’t think that users must be forced to choose between security and privacy. With HDMNode, they could have both.

If HDMNode gains traction, I hope that its JSON-RPC based API will be standardized, allowing users to mix and match clients and servers that they trust and want to use.

What makes HDM wallets so great, anyway? HDM wallets' strong security comes from their reliance on P2SH multisig addresses (as defined in BIP11 and BIP16). Such wallets store coins in addresses that are guarded by multiple private keys, each of which is generated on a different machine. The typical setup is 2-of-3, where the client, the server, and a backup machine each have a key, and at least 2 keys are required to sign off on every transaction. This is far more robust than non-multisig wallets, where the machine that holds the key protecting the coins is a single point of failure: if that machine gets compromised, the coins are gone.
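For reference, the 2-of-3 scheme boils down to a standard redeem script of the following shape (per BIP11; the P2SH address is derived from a hash of this script per BIP16, and spending requires valid signatures from any 2 of the 3 listed keys):

OP_2 <pubkey1> <pubkey2> <pubkey3> OP_3 OP_CHECKMULTISIG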

HDM wallets also offer much better privacy than non-HD multisig wallets. Such wallets rely on a fixed set or subset of keys to generate P2SH addresses. Anyone observing the blockchain can link those addresses to each other (at least after their coins are spent) and can therefore derive the user's balance and transaction history. HDM wallets don't have this weakness because they can generate an arbitrary number of addresses, each made from a unique set of keys, from a single randomly generated seed, as defined in BIP32. Without knowing the wallet's seed, it's impossible to associate those addresses with each other.
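Conceptually, the i-th address in a 2-of-3 HDM wallet is assembled roughly like this (a sketch; the m/0/i derivation path is illustrative and not necessarily the scheme HDMNode uses):

key_client(i) = derive(seed_client, m/0/i)
key_server(i) = derive(seed_server, m/0/i)
key_backup(i) = derive(seed_backup, m/0/i)
address(i)    = p2sh(2-of-3 multisig over the three keys above)

Each address is therefore guarded by a fresh triple of keys, and without the seeds an observer can't link address(i) to address(j).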

HDM wallets have a couple of additional benefits shared with their non-multisig HD counterparts. They make it easy for users to back up their wallet once, by backing up the wallet's seed, and to restore it fully at a later point regardless of the number of transactions the user has performed. This is possible because having the wallet's seed allows you to scan the blockchain and find all the transactions that sent or received coins from or to addresses that can be derived from that seed. HDM wallets also allow users to set up a hierarchical tree of sub-wallets, where having the parent wallet's keys allows you to derive the child wallets' keys but not vice versa. This feature can be useful for organizations or groups that want to give some members limited ability to spend the organization's coins or to observe incoming transactions to other branches of the tree.

HDM wallets have real benefits, but, as you might have guessed, they're not perfect. Besides their implementation complexity, the main downside of HDM wallets is the added friction when creating the wallet, at least compared to pure hosted wallets. Users have to pick a strong password and remember it (no password recovery!). If they forget their password or fail to properly back up their keys, they can lose their coins. Also, to get the full security benefits of HDM wallets, users should set up the backup key pair on a separate machine (ideally an offline one). If a user doesn't do that, and her machine is compromised at wallet creation time, a hacker could steal her coins once they're deposited into the wallet. Despite this weakness (which users can avoid without too much effort), HDM wallets are still much more secure than client side wallets, which expose the keys that guard the coins every time the user sends money, making their vulnerability window much bigger.

HDMNode is currently designed to support 2-of-3 multisig wallets, with one key on the server, one key on the client, and one key in backup (ideally offline), which I expect to be the most popular option for HDM wallets. This setup combines the best security features of wallets that store private keys client side and of hosted wallets that store private keys on the server. In HDMNode, the coins are safe if either the client or the server gets hacked (but not both). If the server disappears or becomes inaccessible, the user can recover her coins using the backup (offline) key. An attacker must compromise at least 2 of these systems to steal the user's coins. While the server can't steal the coins, it can act as a security service for the client by enforcing 2 factor auth and by refusing to sign off on transactions that seem suspicious or that violate user-defined rules such as daily spend limits. This protects the user against attacks where the attacker gains control over the user's device and tries to steal the user's coins by sending spend requests to the server.


If you’re sold on HDM wallets and you want to build one, I hope you use HDMNode. I’ll be happy to take contributions from anyone who wants to make Bitcoin wallets more secure and trusted!

How to secure your bitcoins

(This was originally posted a few months ago at https://medium.com/on-banking/how-to-secure-your-bitcoins-29b86892fc64.)

In the past few months, I’ve spent a good amount of time investigating different solutions for Bitcoin storage. I’m writing this to share the knowledge I’ve gained and to help you make informed choices about securing your bitcoins. I won’t cover all of the products in this space — that would require a much longer post — just the ones that I think are relevant to the average user.

Before I go into the details I want to emphasize that great solutions to this problem don't exist. Every solution involves different security/usability tradeoffs. The more usable ones put your coins at greater risk of theft, and the more secure ones put your coins at greater risk of loss (it's indeed possible to store your coins so securely that you'll end up securing them from yourself). With this in mind, I'll walk you through the options that I think strike the right balance for most people. The only condition for my advice is that if you follow it and you end up losing your coins, you won't blame me for it!

If you have a small amount of coins, if you need them available online for day-to-day spending, or if you want a user friendly option, use Coinbase or Blockchain.info. They've both been around for a few years (an eternity in Bitcoin terms). They both have mobile apps, at least for Android, but iOS users are currently out of luck (sadly, Apple has removed all Bitcoin wallets from the App Store). They allow you to easily access your account from multiple machines. They provide two factor auth, an important requirement for online wallet security. Coinbase also has daily spend limits after which a second two factor auth check kicks in, which is a nice security feature.

The main difference from a security perspective between Coinbase and Blockchain.info is that Coinbase holds the private keys for your bitcoins on their servers whereas Blockchain.info encrypts them on the client with your password and only stores the encrypted keys on the server.

As a consequence, if you use Coinbase and Coinbase gets hacked or disappears tomorrow due to some calamity, you will lose your coins.

This may sound alarming, but I believe the probability of this happening is low. Coinbase's team is competent, they follow strong security practices, and they're backed by some of the best VCs in the industry. However, the risk of loss does exist, so I wouldn't recommend keeping a significant chunk of your life savings in Coinbase.

(Remember: no Bitcoin wallet has the equivalent of FDIC insurance like a bank account. Once the coins are gone, they’re gone.)

Using Blockchain.info protects you if the company disappears or gets hacked: as long as you keep a wallet backup, you can decrypt your private keys locally as described here. However, you can still lose your coins if your phone or computer gets hacked, or if an attacker gets his or her hands on your encrypted wallet and you've chosen a weak password, which would let the attacker brute force your private keys and steal your coins.

(When using Bitcoin wallets, always choose a secure password. It should be long, it should contain a combination of letters, numbers and symbols, and it should be unique. Never reuse a password you use elsewhere, because doing so could seriously compromise your wallet's security.)

If you do use Blockchain.info, you should avoid their web based wallet. Only use their native apps or browser extensions (preferably the Chrome app), which you can download at https://blockchain.info/wallet/browser-extension. This is because Javascript cryptography in the browser is inherently insecure. In fact, you should never trust any wallet that is web based.

Open source is another important consideration. Blockchain.info has open sourced all of their client wallet code, so its security could be vetted by experts. This is a baseline requirement for any wallet application that directly handles private keys.

Coinbase has also open sourced their Android app and their now-delisted iOS app, but this is less relevant because, as opposed to Blockchain.info, their client apps don't touch private keys. Nonetheless, Coinbase should be given credit for open sourcing their apps, as it gives security experts the ability to at least rule out some possible attacks.

This covers the options that I consider user friendly yet still decently secure. Let’s move on to the options that would give you much greater control over the security of your coins at the cost of much greater complexity. As you’ve probably guessed, this involves putting them in cold storage.

The two main contenders in this arena are Armory and Electrum. They both let you generate your private keys on an offline machine and only transfer your public keys to an online machine, where they can receive bitcoins but not send them. Both clients are deterministic, which means that all their private keys can be generated from a single seed (a randomly generated 128 bit number or easily remembered passphrase), which makes them easy to back up. Being deterministic also allows them to generate new receiving addresses from the wallet's master public key. This is an important feature because in most cases you don't want to reuse addresses when receiving bitcoins, in order to protect the privacy of your wallet balance on the public blockchain.
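The resulting split looks roughly like this:

seed (stays offline)             -> master private key -> address private keys (can sign, i.e. spend)
master public key (copied online) -> address public keys -> receiving addresses (can't spend)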

The main difference between Armory and Electrum is that Armory downloads the full blockchain (Armory uses bitcoind as its backend) whereas Electrum uses a third party server to only receive information about the addresses it holds. This makes Armory slow, private, and secure, and Electrum fast, less private, and somewhat less secure. Electrum is less private because the remote server to which your client connects knows your IP address and which addresses your wallet requested. It's also less secure because a malicious remote server could lie to the client about its bitcoin balance. However, this weakness doesn't compromise your private keys, which is the primary concern for offline wallets.

Of course, both products are open source, which is a baseline security requirement for client side wallets.

MultiBit is another notable option because, like Armory, it relies on the P2P network rather than a third party server to query the state of the blockchain. MultiBit is also fast because it only queries the subset of the blockchain's blocks that are relevant to the addresses in the wallet (this is called SPV mode). This makes MultiBit more user friendly than Armory at the cost of some security, because an attacker that controls the internet connection could feed the client false information about the wallet's balance, similar to a malicious Electrum server.

MultiBit is also more private than Electrum because rather than querying the addresses it cares about directly it queries them using a bloom filter that matches a superset of those addresses. However, I don’t know precisely how much privacy this would give you because presumably the remote nodes could infer which addresses the user owns within some confidence interval. Maybe someone who’s an expert in SPV mode could expand on this.

I don’t recommend MultiBit at the moment because it’s not deterministic, which makes it harder to back up. The developers announced that a future release will enable deterministic wallets, at which point MultiBit will become a strong contender for secure yet usable offline bitcoin storage.

The most important thing you have to remember before you embark on this cold storage security journey (and it is a serious journey, as you’ll soon see), is that if your private keys ever touch a machine that’s connected to the internet you should assume they’re compromised and that your coins will be stolen.

This is because, if you’re truly paranoid, you should know that computers fundamentally cannot be trusted. It’s impossible to know what really goes on within a computer. It could have viruses, rootkits, software vulnerabilities, and even compromised hardware. If that computer has access to your private keys and it can send them over the internet, whoever controls your computer can steal your bitcoins.

With this warning in mind, let’s walk through the steps you should take to set up secure cold storage. I’m going to describe the Electrum method because I’m more familiar with it but Armory should be similar.

1) Get an old computer you won't use for anything else. Reformat it and install Linux on it. I recommend either Debian or Ubuntu. Make sure this computer never connects to the internet.

(When you format this machine, you should ideally encrypt your partitions, including your swap partition, for extra security. Otherwise, your OS could inadvertently write your private keys unencrypted to disk while it swaps them out of memory, allowing an attacker who gets his hands on the machine to steal your private keys.)

2) Download Electrum onto your online machine and copy it to your offline machine from a USB drive.

(Note that even USB drives can carry viruses, so it’s recommended to use a new USB drive that hasn’t touched any other machines. However, those viruses are unlikely to infect Linux machines, so if you followed step 1 you should be fairly safe).

3) Verify the binary’s GPG signature before you install it (these MultiBit instructions should apply to Electrum users too). Follow the installation instructions to install Electrum on both machines.

4) On the offline machine, create a new wallet. Choose a strong password for encrypting this wallet on disk. Write the wallet’s seed on a piece of paper.

5) Copy the public key from the offline machine to the Electrum client running on your online machine. At this point, Electrum on the online machine can generate addresses for receiving bitcoins but can't spend them, since it doesn't have access to the private keys, which are safely stored exclusively on the offline machine. (This process is described in more detail here.)

6) As a test, send a small amount of coins to the first few addresses generated by the wallet. Then create a new wallet on the online machine, enter the seed from the offline machine, and verify that your wallet has been completely restored and that you can spend the funds. Send the coins back to the online wallet from which you sent them to verify that Electrum can send them successfully.

7) If everything worked as expected, repeat steps 4-5. You should repeat those steps because the moment you entered your seed into the online machine, you may have compromised it, so you shouldn't rely on it anymore to protect your offline keys.

(If you're truly paranoid, you should use something like Diceware (http://world.std.com/~reinhold/diceware.html) to generate your seed, because there's a chance your computer's random number generator won't produce sufficient entropy. It sounds crazy, but this kind of bug has happened before with certain Android wallets.)

8) Send your remaining coins to the new wallet you created. Store the seed on paper in a secure place, or in multiple secure places as you see fit (e.g. in safes, bank vaults, or wherever else you feel safe).

9) For even stronger security of your paper backups, you should consider generating two factor paper backups for your wallet's seeds by encrypting them with your password, as described in BIP38. This would prevent anyone who gets your paper backups from stealing your coins if they don't also know your password. The process for doing this is left as an exercise to the reader.

If you’ve read this far, this is a good time to stop and ask yourself: are you sure you still want to own bitcoins? ☺

As you can see, Bitcoin security is quite complicated and hard to get right, even for people that have the understanding and the patience to put their coins in offline storage. In my opinion, this is one of the main reasons that Bitcoin isn’t quite ready for the mainstream.

That said, the future for Bitcoin security looks bright. New wallets that use multisig addresses to protect bitcoins behind multiple private keys are coming, and once they're vetted by the experts I'll update this post or write a new post with my latest recommendations. I've also started working on a project that I hope will improve security for Bitcoin users, but I'm not ready to make any promises or announcements about it. ☺

Got feedback? Join the conversation or follow me on Twitter.

Monday, February 10, 2014

Bitcoin's money supply and security

(This was originally posted at https://medium.com/@yarivs/bitcoins-money-supply-and-security-10cbf87ce39e.)

It’s evident that Bitcoin has been designed to reward hoarding by its early investors. It’s encoded in the protocol that the supply of new bitcoins will gradually diminish, halving every four years until the last bitcoin will be mined in 2140. In its first four years, each Bitcoin block rewarded the miner with 50 btc. Today, the reward is 25 btc; in 3-4 years, it will be 12.5 btc, and so on.
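Concretely, the reward schedule is a step function of the block height, halving every 210,000 blocks (roughly four years at one block per ten minutes):

block_reward(height) ≈ 50 / 2^floor(height / 210000) btc

so blocks 0-209,999 paid 50 btc each, blocks 210,000-419,999 pay 25 btc, and so on until the reward rounds down to zero around 2140.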

Furthermore, mining bitcoins used to be much more accessible. In the early days of the network the hash rate (the global hashing power dedicated to mining bitcoins) was much lower. Since less hashing power was competing for the discovery of new bitcoins it used to be possible to obtain a large number of new bitcoins by mining with commodity hardware. Today, however, it’s difficult to mine profitably without custom-built, bleeding edge ASICs and cheap electricity. The combination of cheap price, high block rewards, and low competition in mining has allowed the earliest bitcoin buyers and miners to accumulate large stakes of the currency.

The scheme seems to have worked as planned. Bitcoin’s price rose from ~$14 in early 2013 to more than $1,200/btc near the end of last year (it’s now hovering around $700), yielding fantastic returns to those who accumulated a large stake of bitcoins early in the game.

The sharp rise in bitcoin's price has led many people to call Bitcoin a bubble or a Ponzi scheme. I believe the Ponzi scheme accusation is misplaced because there's no apparent intent to enrich early investors at the expense of late adopters, or to cause late adopters any losses, which is characteristic of Ponzi schemes. In addition, while Bitcoin's price has been undeniably volatile, whether it's a bubble or not remains to be seen. If people see Bitcoin as a reliable store of value, like gold, it's quite possible its price will increase over time. Of course, it's also conceivable that its price will plummet if, say, someone invents an alternative crypto currency that's better than Bitcoin in every way and users adopt it in droves.

Whether Bitcoin’s skewed wealth distribution is fair or not is worth debating. I think that there are merits to both sides of the argument. Early investors should be rewarded for taking a risk, but if Bitcoin keeps appreciating it could be problematic that so much of its wealth should be concentrated in the hands of a few people. What is clear to me, however, is that if Bitcoin hadn’t rewarded hoarding by early investors, Bitcoin would have been much more vulnerable. In fact, it may not have been a viable new cryptocurrency at all.

The reason is that the hoarding behavior indirectly gives miners a much needed incentive to secure the network in its early days. Bitcoin can only be secure when a large amount of mining power is spent protecting the network from a 51% attack (an attack where a single entity controls 51% of the mining power and is then able to essentially rewrite the history of the blockchain and thereby launch double spend attacks). For a completely decentralized system designed for moving money, Bitcoin's security so far has been remarkably strong. It's unclear exactly how much it would cost to launch a 51% attack against Bitcoin, but I've heard estimates from a few hundred million to over a billion dollars. Regardless of the actual number, the aggregate amount of mining power dedicated to securing the network is so high that gaining control over 51% of it to launch such an attack would be very expensive.

Miners aren't volunteering their computers to the network altruistically. They have a dual incentive to mine: every time they mine a new block they earn new coins, as well as transaction fees from anyone whose transaction is contained in the block. Five years into Bitcoin's existence, the transaction fees miners collect are still a tiny fraction of what they earn in "block rewards" through minting new coins (only 0.29% of revenue, according to https://blockchain.info/stats).
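Put concretely, with the figures above the split per block is roughly:

block reward   = 25 btc
fees per block ≈ 25 btc * 0.0029 ≈ 0.07 btc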

The dollar value of the earnings from any given block is determined by Bitcoin's market cap at the moment the block is mined, and the basic forces of supply and demand are what determine the market cap. The demand for bitcoins is driven by new investors as well as by users who want to acquire bitcoins in order to send them to other people or to buy things. The supply is driven by miners who use the newly minted coins to pay for their operations, as well as by investors who sell their coins.

If Bitcoin’s incentives were inverted and investors were encouraged to sell their coins rather than hoard them, the supply of bitcoins would increase and the price would drop. Miners would lose a proportional incentive to contribute to the network the computational power that’s needed to secure it.

When early investors hoard their coins they're propping up the price, which increases Bitcoin's market cap and attracts more mining power to protect the blockchain. This serves the function of priming the network's mining power in its early days, before enough transactions go through the network to provide large numbers of miners with sufficient transaction fees to continue mining profitably. Because the minting of new coins dilutes the ownership stake of early investors, early investors who hoard their bitcoins are arguably paying (indirectly) for Bitcoin's security ahead of its readiness to be used as a currency by large numbers of people. In fact, according to https://blockchain.info/stats, the current price per transaction (measured by dividing miner revenues by the number of transactions) is $34.68, which is paid almost exclusively through the block reward, i.e., by existing bitcoin holders.
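To see where a number like $34.68 comes from, here's the back-of-the-envelope arithmetic (illustrative figures, using the ~$700 price mentioned above and the ~144 blocks mined on an average day):

miner revenue per day ≈ 144 blocks * 25 btc * $700 ≈ $2.5M
price per transaction = miner revenue / number of transactions
                      ≈ $2.5M / ~73,000 transactions ≈ $34.68

The ~73,000 transactions per day figure is simply what the two published numbers imply.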

The current price per transaction is unsustainably high. To ultimately succeed, Bitcoin must see growing transaction volumes and a gradual inversion of the relationship between transaction fees and block rewards. Barring these trends, it's likely that mining revenue will decline, causing miners to eventually drop out of the network. If a large portion of miners do so, the network's security will be at risk, causing investors to flee, the price to drop, block rewards to depreciate, more miners to leave, and so on, in a downward spiral.

While this dystopian scenario is possible, it’s unlikely in the near term: Bitcoin is only five years old and, by design, miners will continue being rewarded with new bitcoins for over a century. Until then, Bitcoin has plenty of time to gain popularity as a transactional currency. This will require much infrastructure to be built, from exchanges to wallets to payment providers, but much of the work is already under way. It will also require greater acceptance by merchants and users.

Thinking about this makes me appreciate the cleverness of Bitcoin's design. On the surface, it offers a novel solution to the problem of distributed consensus (aka the Byzantine Generals' Problem). But beyond that, it's also a very carefully orchestrated economic system that would have failed very easily, and quickly, if its creator(s) hadn't so deliberately aligned the incentives of investors, miners and users, seemingly with an eye towards maximizing the network's security throughout its stages of adoption.

Such conceptual cleverness, however, cannot guarantee lasting success in the real world, where it remains to be seen whether Bitcoin can withstand market, regulatory, and competitive forces. It's conceivable that someone will invent an altcoin that's even better optimized for rewarding miners (for example, an altcoin with a perpetual inflation rate that's high enough to give miners significant additional revenue but low enough not to scare away investors). If this happens, and this altcoin one day surpasses Bitcoin in mining power, will Bitcoin remain relevant? It's hard to say, so grab some popcorn and enjoy the ride.

(Full disclosure — I own some bitcoins, which I bought in late 2013.)

Thursday, June 20, 2013

Announcing the Mortgage Hacking Calculator

Mortgages can be quite confusing and comparing them can be difficult. That's why I created the Mortgage Hacking Calculator at http://www.mortgagehacking.com. Please check it out and let me know of any feedback you may have. I hope you find it useful!

Tuesday, April 27, 2010

PicLike

Check out PicLike, the new app I made using the Flickr API, Google App Engine and the Facebook Like button: http://piclike.appspot.com.

Sunday, May 24, 2009

Unifying dynamic and static types with LFE

In my last post I described how to use LFE to overcome some of the weaknesses of parameterized modules. Unfortunately, all is not rosy yet in the land of LFE types. Parameterized modules allow you to create only static types. The compiler doesn't do static type checking, but you have to define the properties of your types at compile time. This works in many cases, but sometimes you want totally dynamic containers that map keys to values. In Erlang, this is typically done with dicts. We could still use them with LFE, but I don't like having different methods of accessing the properties of objects depending on whether their types were defined at run time or compile time.

Let's use macros to solve the problem.

In my last post, I relied on the built-in 'call' function to access the properties of static objects. Let's create a wrapper to 'call' that lets us access the properties of dicts in exactly the same manner as we do properties of other objects:

We can use 'dot' to get and set the properties of both dicts and static objects:



> (: dog test)
(lola rocky)


I think this is kind of cool, though to be honest I'm not entirely sure it's a great idea to obfuscate in the code whether we're dealing with dicts or static objects.

Sunday, May 10, 2009

Geeking out with Lisp Flavoured Erlang

One of the features I dislike the most about Erlang is records. They're ugly and they require too much typing. Erlang makes me write


Dog = #dog{name = "Lolo", parent = #dog{name = "Max"}},
Name = Dog#dog.name,
ParentName = (Dog#dog.parent)#dog.name,
Dog1 = Dog#dog{
    name = "Lola",
    parent = (Dog#dog.parent)#dog{name = "Rocky"}}


When I want to write


Dog = #dog{name = "Lolo", parent = #dog{name = "Max"}},
Name = Dog.name,
ParentName = Dog.parent.name,
Dog1 = Dog{
    name = "Lola",
    parent = Dog.parent{name = "Rocky"}}


In records' defense, they're just syntactic sugar over tuples, and as such they enable fast access to a tuple's properties despite Erlang's dynamic nature. At compile time, they're converted into fast element() and setelement() calls that don't require looking up the property's index in the tuple. Still, I dislike them because 1) in many cases I'd rather optimize for more concise code than for faster execution, and 2) if the Erlang compiler were smarter, it could allow us to write the more concise code above by inferring the types of variables in the program when possible. (I wrote a rant about this a long time ago, with a proposed solution in the form of a yet unfinished parse transform called Recless.)
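For example, the record expressions above compile into code that's roughly equivalent to the following (a sketch; the index 2 assumes 'name' is the record's first field, since the tuple's first element holds the record tag):

%% Name = Dog#dog.name compiles to a direct tuple access:
Name = element(2, Dog),
%% Dog1 = Dog#dog{name = "Lola"} compiles to:
Dog1 = setelement(2, Dog, "Lola").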

Parameterized modules provide a somewhat more elegant and dynamic way of dealing with types, but they still require you to do too much work. You can define a parameterized module like this:

-module(dog, [Name, Size]).

This creates a single function called 'new/2' in the module 'dog'. You can call it as follows:


Dog = dog:new("Lola", "big").


It returns a tuple of the form


{dog, "Lola", "Big"}.


You can't set only a subset of the record's properties in the 'new' function. This doesn't work:


Dog = dog:new("Lola").


To access the record's properties you need to define your own getter functions, e.g.


name() ->
    Name.


which you can call as follows:


Name = Dog:name().


(This involves a runtime lookup of the 'name' function in the 'dog' module, which is slower than a record accessor).

There's no way to set a record's properties after it's created, short of altering the tuple directly with setelement(), which is lame and also quite brittle. To create a setter, you need to do the following:


name(Val) ->
    setelement(2, THIS, Val).


Then you can call


Dog1 = Dog:name("Lola").


to change the object's name.

When LFE came out I was hoping it would provide a better way of dealing with the problem. Unfortunately, records in LFE are quite similar to Erlang records, though they are a bit nicer. Instead of adding syntactic sugar, LFE creates a bunch of macros you can use to create a record and access its properties.


(record dog (name))
(let* ((dog (make-dog (name "Lola")))
       (name (dog-name dog))
       (dog1 (set-dog-name dog "Lolo")))
  ;; do something
  )


LFE still requires too much typing when dealing with records for my taste, but it does give us a powerful tool for coming up with our own solution to the problem: macros. We can use macros to generate all those repetitive, brittle parameterized module getters and setters that we have to write by hand in vanilla Erlang. This can help us avoid much of the tedium involved in working with parameterized modules.

(ErlyWeb performs similar code generation on database modules, but it does direct manipulation of Erlang ASTs, which are gnarly.)

Let's start with the 'new' functions. We want to generate a bunch of 'new' functions that allow us to set only a subset of the record's properties, implicitly setting the rest to 'undefined'.


(defun make_constructors (name props)
  (let* (((props_acc1 constructors_acc1)
          (: lists foldl
             (match-lambda
               ((prop (props_acc constructors_acc))
                (let* ((params (: lists reverse props_acc))
                       (constructor
                        `(defun new ,params
                           (new ,@params 'undefined))))
                  (list
                   (cons prop props_acc)
                   (cons constructor constructors_acc)))))
             (list () ())
             props))
         (main_constructor
          `(defun new ,props (tuple (quote ,name) ,@props))))
    (: lists reverse (cons main_constructor constructors_acc1))))


This function takes the module name and a list of properties. It returns a list of sexps of the form


(defun new (prop1 prop2 ... prop_n-m)
  (new prop1 prop2 ... prop_n-m 'undefined))


as well as one sexp of the form


(defun new (prop1 prop2 ... prop_n)
  (tuple 'module_name prop1 prop2 ... prop_n))


The first set of 'new' functions successively call the next 'new' function in the chain, passing into it their list of parameters together with 'undefined' as the last parameter. The last 'new' function takes all the parameters needed to instantiate an object and returns a tuple whose first element is the module, and the rest are the object's property values. (We need to store the module name in the first element so we can use the parameterized modules calling convention, as you'll see later.)

Now let's create a function that generates the getters and setters


(defun make_accessors (props)
  (let* (((accessors idx1)
          (: lists foldl
             (match-lambda
               ((prop (acc idx))
                (let* ((getter `(defun ,prop (obj) (element ,idx obj)))
                       (setter `(defun ,prop (val obj)
                                  (setelement ,idx obj val))))
                  (list (: lists append
                           (list getter setter)
                           acc)
                        (+ idx 1)))))
             (list () 2)
             props)))
    accessors))


This function takes a list of properties and returns a list of sexps that implement the getters and setters for the module. Each getter takes the object and returns its (n + 1)th element, where n is the position of its property (the first element is reserved for the module name). Each setter takes the new value and the object and returns a new tuple with its (n + 1)th element set to the new value.

Now, let's tie this up with the module declaration. We need to create a macro that generates the module declaration, constructors, getters, and setters, all in one swoop. But first, we need to expose make_constructors and make_accessors to the macro by nesting them inside an (eval-when-compile) sexp.


(eval-when-compile
  (defun make_constructors ...)
  (defun make_accessors ...))

(defmacro defclass
  (((name . props) . modifiers)
   (let* ((constructors (make_constructors name props))
          (accessors (make_accessors props)))
     `(progn
        (defmodule ,name ,@modifiers)
        ,@constructors
        ,@accessors))))


(defclass returns a `(progn ...) sexp with the list of new forms it defines. This is a trick Robert Virding taught me for creating a macro that in itself creates multiple definitions.)


Now, let's see how this thing works. Create the file dog.lfe with the following code


;; import the macros here

(defclass (dog name size)
  (export all))


Compile it from the shell:


(c 'dog)


Create a dog with no properties


> (: dog new)
#(dog undefined undefined)


Create a new dog with just a name


> (: dog new '"Lola")
#(dog "Lola" undefined)


Create a new dog with a name and a size


> (: dog new '"Lola" '"medium")
#(dog "Lola" "medium")


Get and set the dog's properties


> (let* ((dog (: dog new))
         (dog1 (call dog 'name '"Lola"))
         (dog2 (call dog1 'size '"medium")))
    (list (call dog2 'name) (call dog2 'size)))
("Lola" "medium")


This code uses the same parameterized module function calling mechanism that allows us to pass a tuple instead of an atom as the first parameter to the 'call' function. Erlang infers the name of the module that contains the function from the first element of the tuple.
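For reference, here's what that convention looked like from the vanilla Erlang side in that era (a sketch):

Dog = dog:new("Lola", "medium"),  %% returns {dog, "Lola", "medium"}
%% Erlang sees that Dog is a tuple, dispatches to the 'dog' module named
%% by its first element, and passes the tuple as the implicit last argument:
Name = Dog:name().                %% equivalent to dog:name(Dog)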

As you can see, LFE is pretty powerful. I won't deny that sometimes I get the feeling of being lost in a sea of parentheses, but over time the parentheses have grown on me. The benefits of macros speak for themselves: they give you great flexibility to change the language as you see fit.

Sunday, May 03, 2009

How to work on cool stuff

I attended the Bay Area Erlang Factory last week. It was a great event. I met many Erlang hackers, attended interesting talks, learned about cool projects (CouchDB, QuickCheck, Nitrogen, Facebook Chat), gave a talk about ErlyWeb, and drank beer (without beer, it wouldn't be a true Erlang meetup).

My favorite talk was by Damien Katz. He told the story of how he had decided to take a risk, quit his job, and work on his then amorphous project. He wanted to work on cool stuff, and that was the only way he could do it. Even if nothing else came out of it, he knew it would have been a great learning exercise. Something great did eventually come out of it, as he created CouchDB (which looks awesome btw) and IBM eventually hired him to work on it full time.

Damien's story reminded me of the time I started working on ErlyWeb a few years ago. After I left the company I was working for at the time, I decided to take a few months and work on something cool. I didn't know what exactly it would be or how long it would take, but I knew that I wanted to build a product that would help people communicate in new ways, and I wanted to build it with my favorite tools. I knew the chance of failure was high, but I figured the learning alone would be worth it. I also viewed open source as an insurance policy of sorts. Even if I couldn't get a product off the ground, my code could live on and continue to provide value to people.

Doing it paid off. My savings dwindled, but I learned Erlang, created ErlyWeb and Vimagi, met many like-minded people, and it opened new doors. Now I work on cool stuff at Facebook, ErlyWeb lives on, and every day people are using Vimagi to create amazing art and share it with their friends.

The moral of the story: if you're not working on cool stuff, take a risk and try to make it happen. Don't worry about building the next Google or making lots of money, because you'll probably fail. But the lessons you learn and the connections you make will be worth it.

Monday, March 09, 2009

Parallel merge sort in Erlang

I've been thinking lately about the problem of scaling a service like Twitter or the Facebook news feed. When a user visits the site, you want to show her a list of all the recent updates from her friends, sorted by date. It's easy when the user doesn't have too many friends and all the updates are on a single database (as in Twoorl's case :P). You use this query:


"select * from update where uid in ([fid1], [fid2], ...) order by creation_date desc limit 20"


(After making sure you created an index on uid and creation_date, of course :) )

However, what do you do when the user has many thousands of friends, and each friend's updates are stored on a different database? Clearly, you should fetch those updates in parallel. In Erlang, it's easy. You use pmap():


fetch_updates(Uids) ->
    pmap(
      fun(Uid) ->
              Db = get_db_for_user(Uid),
              query(Db, [<<"select * from update where uid =">>,
                         Uid, <<" order by creation_date desc limit 20">>])
      end, Uids).


%% Applies the function Fun to each element of the list in parallel
pmap(Fun, List) ->
    Parent = self(),
    %% spawn the processes
    Refs =
        lists:map(
          fun(Elem) ->
                  Ref = make_ref(),
                  spawn(
                    fun() ->
                            Parent ! {Ref, Fun(Elem)}
                    end),
                  Ref
          end, List),

    %% collect the results
    lists:map(
      fun(Ref) ->
              receive
                  {Ref, Elem} ->
                      Elem
              end
      end, Refs).


Getting the updates is straightforward. However, what do you do once you've got them? Merging thousands of lists can take a long time, especially if you do it in a single process. The last thing you want is for your site's performance to grind to a halt when users add lots of friends.

Fortunately, merging a list of lists isn't too hard to do in parallel. Once you've implemented your nifty parallel merge algorithm, you can theoretically speed up response time by adding more cores to your web servers. This should help you maintain low latency even for very dense social graphs.

So, how do you merge a list of sorted lists in parallel in Erlang? There is probably more than one way of doing it, but this is what I came up with: you create a list of single element lists. You scan through the main list, and for each pair of lists you spawn a process that merges the two lists and sends the result to the parent process. The parent process collects all the results and repeats as long as there is more than one result. When only one result is left, the parent returns it.

Let's start with the base case of how to merge two lists:


%% Merges two sorted lists
merge(L1, L2) -> merge(L1, L2, []).

merge(L1, [], Acc) -> lists:reverse(Acc) ++ L1;
merge([], L2, Acc) -> lists:reverse(Acc) ++ L2;
merge(L1 = [Hd1 | Tl1], L2 = [Hd2 | Tl2], Acc) ->
    {Hd, L11, L21} =
        if Hd1 < Hd2 ->
                {Hd1, Tl1, L2};
           true ->
                {Hd2, L1, Tl2}
        end,
    merge(L11, L21, [Hd | Acc]).


Now, to the more interesting part: how to merge a list of sorted lists in parallel.


%% Merges all the lists in parallel
merge_all(Lists) ->
    merge_all(Lists, 0).

%% When there are no lists to collect or to merge, return an
%% empty list.
merge_all([], 0) ->
    [];

%% When no lists are left to merge, we collect the results of
%% all the merges that were done in spawned processes
%% and recursively merge them.
merge_all([], N) ->
    Lists = collect(N, []),
    merge_all(Lists, 0);

%% If only one list remains, merge it with the result
%% of all the pair-wise merges
merge_all([L], N) ->
    merge(L, merge_all([], N));

%% If two or more lists remain, spawn a process to merge
%% the first two lists and move on to the remaining lists
%% without blocking. Also, increment the number
%% of spawned processes so we know how many results
%% to collect later.
merge_all([L1, L2 | Tl], N) ->
    Parent = self(),
    spawn(
      fun() ->
              Res = merge(L1, L2),
              Parent ! Res
      end),
    merge_all(Tl, N + 1).

%% Collects the results of N merges (the order
%% doesn't matter).
collect(0, Acc) -> Acc;
collect(N, Acc) ->
    L = receive
            Res -> Res
        end,
    collect(N - 1, [L | Acc]).


So, how well does this perform? I ran a benchmark on my 2.5 GHz Core 2 Duo Macbook Pro. First, I created a list of a million random numbers, each between 1 and a million:


> L = [random:uniform(1000000) || N <- lists:seq(1, 1000000)].


Then, I timed how long it takes to sort the list, first with lists:sort() and then with my shiny new parallel merge function.


> timer:tc(lists, sort, [L]).
{688149,
[1,1,1,1,2,6,7,8,10,11,11,11,13,13,14,15,15,16,17,17,20,21,
21,22,23,24,29|...]}


Less than a second. lists:sort() is pretty fast!

Before we can pass the list of numbers into merge_all(), we have to break it up into multiple lists with a single element in each list:


> Lists = [[E] || E <- L].


Now for the moment of truth:


> timer:tc(psort, merge_all, [Lists]).
{8187563,
[1,1,1,1,2,6,7,8,10,11,11,11,13,13,14,15,15,16,17,17,20,21,
21,22,23,24,29|...]}


About 8.2 seconds :(

It's not exactly an improvement, but at least we learned something. In this test case, the overhead of process spawning and inter-process communication outweighed the benefits of parallelism. It would be interesting to run the same test on machines that have more than two cores, but I don't have any at my disposal right now.

Another factor to consider is that lists:sort() is heavily optimized, so it has an unfair advantage over my hand-rolled functions. Indeed, I tried sorting the list with the following simple Erlang quicksort function:


qsort([]) -> [];
qsort([H]) -> [H];
qsort([H | T]) ->
    qsort([E || E <- T, E =< H]) ++
        [H] ++
        qsort([E || E <- T, E > H]).



> timer:tc(psort, qsort, [L]).
{2066387,
[1,2,3,3,4,5,6,6,7,7,8,8,10,10,10,12,13,14,14,15,17,18,19,
22,24,26,26|...]}


It took about ~2 seconds to sort the million numbers.

The performance of merge_all() doesn't seem great, but consider that we spawned ~1,000,000 processes during this test. It had ~19 levels of recursion (log2 of 500,000), and at each level we spawned half the number of processes of the previous level, so the total is 500,000*(1 + 1/2 + 1/4 + ... + 1/2^18) ~= 1,000,000 (http://en.wikipedia.org/wiki/Geometric_series). That works out to 8.2 seconds / 1,000,000 processes ~= 0.000008 seconds per process. It's actually quite impressive!

Let's go back to the original problem. It wasn't to sort one big list, but to merge a list of sorted lists with 20 items in each list. In this scenario, we still benefit from parallelism but we don't pay for the overhead of spawning hundreds of thousands of processes to merge tiny lists in the first few levels of recursion. Let's see how long it takes merge_all() to merge a million random numbers split between 50,000 sorted lists.


> Lists = [lists:sort([random:uniform(1000000) || N <- lists:seq(1, 20)])
           || N1 <- lists:seq(1, 50000)].
> timer:tc(psort, merge_all, [Lists]).
{2259870,
[1,1,4,5,8,10,10,12,12,13,14,14,14,16,16,17,18,18,18,18,18,
19,19,19,21,22,25|...]}


This function call took just over 2 seconds to run, roughly the same time as qsort(), yet it involved spawning 25,000*(1 - 0.5^15)/(1 - 0.5) ~= 50,000 processes! Now the benefits of concurrency start being more obvious.

Can you think of ways to improve performance further? Let me know!

Tuesday, January 13, 2009

Custom Tags for Facebook Platform

Check out the announcement on the developer blog about a project I've been working on at Facebook. It's a feature that lets you create your own FBML tags for Platform apps.

Sunday, June 29, 2008

Numenta

Last week, I attended the Numenta workshop. I didn't know much about Numenta before I went. My friend's excitement about the technology Numenta is building piqued my curiosity, so I decided to check it out. It seemed that almost everyone else at the conference had read Jeff Hawkins's On Intelligence and had at least experimented with Numenta's tools, so I felt like a real n00b. I'm happy I went, though, because I learned about some interesting ideas and technologies.

Jeff Hawkins, Numenta's founder, has been fascinated with the workings of the brain throughout his career, but only two decades into it, after he founded Palm and Handspring, was he able to devote his efforts to artificial intelligence. In On Intelligence, Hawkins discusses his theories on the brain's functions in detail. Numenta, a company he founded with Dileep George and Donna Dubinsky, aims to put these ideas to work in commercial and research applications.

Numenta is a platform company. The platform they develop is NuPIC (Numenta Platform for Intelligent Computing), a software toolkit essentially for building pattern classifiers. The fundamental concept behind NuPIC is called HTM (Hierarchical Temporal Memory). It postulates that the cortex learns to recognize patterns using a combination of two basic algorithms: hierarchical belief propagation, and the detection of invariants in a sequence of transformations in time. I won't get into what this all means because there's plenty of documentation on the Numenta website. I recommend browsing it if you find this interesting.

HTM is not just theory. Although NuPIC is in a very early stage, companies are applying NuPIC to a wide range of problems, including vision, voice recognition, finance, motion recognition (recognizing motion capture data to detect if a person is walking, running, sitting, etc) and games. This is just a small subset of its potential uses.

Using NuPIC in its current state isn't trivial. It provides the building blocks for HTM pattern classifiers, but application developers still have to do a good deal of work to tune the parameters of their HTM to their problem domain (How many nodes? How many levels in the hierarchy? How much training data to use? How many categories? What transformations to apply to the input over time to train the system?). Also, some important features haven't been implemented yet. For example, although NuPIC can be pretty effective at classifying images that contain a single object against a plain background (with enough training), it isn't designed to recognize objects in images with noisy backgrounds or with multiple objects. (The problem of how to identify interesting objects in a scene is called the "attention" problem. To solve it, you need a mechanism by which the top nodes can send feedback down to the bottom nodes. Hawkins said Numenta will tackle it in a future release.)

One reason I find Numenta so interesting is that I believe NuPIC, or something like it, will play a role in the evolution of the Web. The current generation of web applications is effective at aggregating massive amounts of data in different verticals (pictures, videos, bookmarks, status messages, paintings), slicing and dicing it in different ways, searching it, and displaying it in an organized fashion. Mashups provide additional context for the data gathered in the different silos of the web (Kosmix is a good example), but they don't add any real "intelligence" to the mix, i.e. they don't extract new knowledge from the data they aggregate. Numenta's technology could be used to implement a new layer of intelligence on top of existing services by training it to recognize spatial and temporal patterns in the data they've collected. For example, imagine a Flickr API that let you submit an image and Flickr would tell you what the objects in the image are and where the picture was taken. Or a Facebook API for identifying the people in a picture. Or a Skype API for recognizing the speaker from a voice sample (creepy, I know). Or a HotOrNot API for automatically classifying the hotness of a person (ok, bad example :) ). Or a YouTube API for identifying the objects and events in a video clip. Or an icanhascheezburger API for automatically classifying the LOLness of a cat (well... maybe not :) ).

If this happens, maybe some day a mashup of these web services will be used to build something that resembles real AI. If (when?) someone manages to build a real-life WALL*E (great movie!), I think there's a good chance its HTMs will be trained on the vast amounts of data gathered on the web.

Saturday, June 28, 2008

Twoorl Goes Multilingual

Since its launch, Twoorl users have helped translate it to Spanish, German, French, Korean, Polish, Portuguese (Brazilian) and Russian. This is an awesome contribution from the Twoorl community. Big thanks to everyone who contributed a translation!

If you're fluent in a language that Twoorl hasn't been translated into and you'd like to contribute a translation, obtain the file twoorl_eng.erl, translate the English strings, and email me the modified file. (Please make sure the file is encoded in UTF-8.) If you're familiar with Git, you can also clone the repository, make the changes, and send me a message through GitHub to pull your updates. Thanks in advance!

Wednesday, May 28, 2008

Announcing Twoorl: an open source ErlyWeb-based Twitter clone

With the recent brouhaha over Twitter's scalability problems, I thought, wouldn't it be fun to write a Twitter clone in Erlang?

Last weekend was cold and rainy here in Palo Alto, so I sat down and hacked one, and thus Twoorl was born. It took me one full day plus a couple of evenings. The codebase is about 1700 lines (including comments). You can get it at http://code.google.com/p/twoorl

(Screenshot: twoorl_screenshot.png)

Note: you need the trunk version of ErlyWeb to make it work (when released, it will be the 0.7.1 version).

Many people have written about Twitter's scalability problems and how to solve them. Some have blamed Rails (TechCrunch is among them), whereas others, including Blaine Cook, Twitter's Architect, have convincingly argued that you can scale a webapp written in any language/framework if you've figured out how to Just Add More Servers to handle the growing traffic. Eran Hammer-Lahav wrote some of the most insightful articles on the subject in On Scaling a Microblogging Service.

I have no idea why Twitter is having a hard time scaling. Well, I have some suspicions, but since I haven't been in the Twitter trenches, such speculation isn't worth wasting many pixels on.

I didn't write a Twitter clone in Erlang because I thought my implementation would be inherently more scalable than a Rails one (although it may be cheaper to scale because Erlang has very good performance). In fact, Twoorl right now wouldn't scale well at all, since I prioritized simplicity above all else.

The reasons I wrote Twoorl are:

- ErlyWeb needs more open source apps showing how to use the framework. It's hard to pick up how to use the framework just from the API docs.
- Twitter is awesome. Once you start using it, it becomes addictive. I thought it would be fun to write my own.
- Twitter is very popular, but I don't know of any open source clones. I figured somebody may actually want one!
- Some people think Erlang isn't a good language for building webapps. I like to prove them wrong :)
- Although you can scale pretty much anything, your choice of language can make a difference in terms of performance and stability, both of which lead to happy users.
- I think Erlang is a great language for writing a Twitter clone because Twitter's functionality offers interesting opportunities to benefit from concurrency. Here are a couple of ideas I thought of:

1) If you use sharding, the Tweets for different users would be stored in separated databases. When you render the page for someone's timeline, wouldn't it be advantageous to fetch the tweets for all the users she follows in parallel? In Ruby, you would probably do something like this:


def get_tweets(users)
  alltweets = []
  users.each do |user|
    alltweets.concat(user.fetch_tweets)
  end
  alltweets.sort
end


(Please forgive any language errors -- my Ruby is very rusty. Treat the above as pseudocode.)

This code would work well enough for a small number of tweet streams, but as the number gets large, it would take a very long time to execute.

In ErlyWeb, you could instead do the following:


get_tweets(Users) ->
    lists:sort(lists:flatten(pmap(fun(Usr) -> Usr:tweets() end, Users))).


This would spawn a process for each user the user follows, fetch the tweets for that user, then reassemble them in sorted order in the original process before rendering the page. (Think of it as map/reduce implemented directly in the application controller.) If a user follows hundreds of other users, querying their tweets in parallel can significantly reduce page rendering time.

2) Background tasks. When a user sends a tweet, the first thing you want to do is store it in the database. Then, depending on the features, you have to do a bunch of other stuff: send IM/SMS notifications, update RSS feeds, expire caches, etc. Why not do those tasks in different background processes? After the write to the DB, you can return an immediate reply to the user, giving him or her the perception of speed, and then let the background processes do all the extra work for processing the tweet.
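Here's a minimal sketch of this pattern in Erlang (the function and helper names are hypothetical, not Twoorl's actual API):

send_tweet(Tweet) ->
    %% the only step that must complete before we reply to the user
    ok = save_to_db(Tweet),
    %% everything else runs in separate background processes
    spawn(fun() -> send_im_sms_notifications(Tweet) end),
    spawn(fun() -> update_rss_feeds(Tweet) end),
    spawn(fun() -> expire_caches(Tweet) end),
    ok.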

(This technique works very well for Facebook apps, by the way. In Vimagi, when the user submits a painting, the app first saves the painting data, and then it spawns a new process to update the news feed and profile box, send notifications, etc.)

Anyway, I hope you enjoy Twoorl. It's still in very early alpha. It doesn't have many features and it probably has bugs. Please take Twoorl for a spin and give me your feedback! I'd also appreciate useful contributions :)

Sunday, May 18, 2008

Erlang vs. Scala

In my time wasting activities on geeky social news sites, I've been seeing more and more articles about Scala. The main reasons I became interested in Scala are 1) Scala is an OO/FP hybrid, and I think that any attempt to introduce more FP concepts into the OO world is a good thing and 2) Scala's Actors library is heavily influenced by Erlang, and Scala is sometimes mentioned in the same context as Erlang as a great language for building scalable concurrent applications.

A few times, I've seen the following take on the relative merits of Scala and Erlang: Erlang is great for concurrent programming and it has a great track record in its niche, but it's unlikely to become mainstream because it's foreign and it doesn't have as many libraries as Java. Scala, on the other hand, has the best of both worlds. It has functional semantics, its Actors library provides Erlang-style concurrency, it runs on the JVM, and it has access to all the Java libraries. This combination makes Scala a better choice for building concurrent applications, especially for companies that are invested in Java.

I haven't coded in Scala, but I did a good amount of research on it and it looks like a great language. Some of the best programmers I know rave about it. I think that Scala can be a great replacement for Java. Function objects, type inference, mixins and pattern matching are all great language features that Scala has and that are sorely missing from Java.

Although I believe Scala is a great language that is clearly superior to Java, Scala doesn't supersede Erlang as my language of choice for building high-availability, low latency, massively concurrent applications. Scala's Actors library is a big improvement over what Java has to offer in terms of concurrency, but it doesn't provide all the benefits of Erlang-style concurrency that make Erlang such a great tool for the job. I did a good amount of research into the matter and these are the important differences I think one should consider when choosing between Scala and Erlang. (If I missed something or got something wrong, please let me know. I don't profess to be a Scala expert by any means.)

Concurrent programming


Scala's Actors library does a good job at emulating Erlang-style message passing. Similar to Erlang processes, Scala actors send and receive messages through mailboxes. Like Erlang, Scala has pattern matching semantics for receiving messages, which results in elegant, concise code (although I think Erlang's simpler type system makes pattern matching easier in Erlang).

Scala's Actors library goes pretty far, but it doesn't (well, it can't) provide an important feature that makes concurrent programming so easy in Erlang: immutability. In Erlang, multiple processes can share the same data within the same VM, and the language guarantees that race conditions won't happen because this data is immutable. In Scala, though, you can send pointers to mutable objects between actors. This is the classic recipe for race conditions, and it leaves you just where you started: having to ensure synchronized access to shared memory.

If you're careful, you may be able to avoid this problem by copying all messages or by treating all sent objects as immutable, but the Scala language doesn't guarantee safe access to shared objects. Erlang does.

Hot code swapping


Hot code swapping is a killer feature. Not only does it (mostly) eliminate the downtime required to do code upgrades, it also makes a language much more productive because it allows for true interactive programming. With hot code swapping, you can immediately test the effects of code changes without stopping your server, recompiling your code, restarting your server (and losing the application's state), and going back to where you had been before the code change. Hot code swapping is one of the main reasons I like coding in Erlang.

The JVM has limited support for hot code swapping during development -- I believe it only lets you change a method's body at runtime (an improvement to this feature is in Sun's top 25 RFEs for Java). This capability is not as robust as Erlang's hot code swapping, which works for any code modification at any time.

A great aspect of Erlang's hot code swapping is that when you load new code, the VM keeps around the previous version of the code. This gives running processes an opportunity to receive a message to perform a code swap before the old version of the code is finally removed (which kills processes that didn't perform a code upgrade). This feature is unique to Erlang as far as I know.

Hot code swapping is even more important for real-time applications that enable synchronous communications between users. Restarting such servers would cause user sessions to disconnect, which would lead to poor user experience. Imagine playing World of Warcraft and, in the middle of a major battle, losing your connection because the developers wanted to add a log line somewhere in the code. It would be pretty upsetting.

Garbage collection


A common argument against GC'd languages is that they are unsuitable for low latency applications due to potential long GC sweeps that freeze the VM. Modern GC optimizations such as generational collection alleviate the problem somewhat, but not entirely. Occasionally, the old generation needs to be collected, which can trigger long sweeps.

Erlang was designed for building applications that have (soft) real-time performance, and Erlang's garbage collection is optimized for this end. In Erlang, processes have separate heaps that are GC'd separately, which minimizes the time a process could freeze for garbage collection. Erlang also has ets, an in-memory storage facility for storing large amounts of data without any garbage collection (you can find more information on Erlang GC at http://prog21.dadgum.com/16.html).

Erlang might not have a decisive advantage here. The JVM has a new concurrent garbage collector designed to minimize freeze times. This article and this whitepaper (PDF warning) have some information about how it works. This collector trades performance and memory overhead for shorter freezes. I haven't found any benchmarks showing how well it works in production apps, though, or whether it is as effective as Erlang's garbage collector for low-latency apps.

Scheduling


The Erlang VM schedules processes preemptively. Each process gets a certain number of reductions (roughly equivalent to function calls) before it's swapped out for another process. Erlang processes can't call blocking operations that freeze the scheduler for long periods. All file IO and communications with native libraries are done in separate OS threads (communications are done using ports). Similar to Erlang's per-process heaps, this design ensures that Erlang's lightweight processes can't block each other. The downside is some communications overhead due to data copying, but it's a worthwhile tradeoff.

Scala has two types of actors: thread-based and event-based. Thread-based actors execute in heavyweight OS threads. They never block each other, but they don't scale to more than a few thousand actors per VM. Event-based actors are simple objects. They are very lightweight, and, like Erlang processes, you can spawn millions of them on a modern machine. The difference from Erlang processes is that within each OS thread, event-based actors execute sequentially without preemptive scheduling. This makes it possible for an event-based actor to block its OS thread for a long period of time (perhaps indefinitely).

According to the Scala actors paper, the actors library also implements a unified model, in which event-based actors are executed in a thread pool that the library automatically resizes if all threads are blocked due to long-running operations. This is pretty much the best you can do without runtime support, but it's not as robust as the Erlang implementation, which guarantees low latency and fair use of resources. In a degenerate case, all actors would call blocking operations, which would grow the thread pool until it hits its ceiling of a few thousand threads.

This can't happen in Erlang. Erlang only allocates a fixed number of OS threads (typically, one per processor core). Idle processes don't impose any overhead on the scheduler. In addition, spawning Erlang processes is always a very cheap operation that happens very fast. I don't think the same applies to Scala when all existing threads are blocked, because this condition first needs to be detected, and then new OS threads need to be spawned to execute pending Actors. This can add significant latency (this is admittedly theoretical: only benchmarks can show the real impact).

Depending on what you're doing, the difference between process scheduling in Erlang and Scala may not impact performance much. However, I personally like knowing with certainty that the Erlang scheduler can gracefully handle pretty much anything I throw at it.

Distributed programming


One of Erlang's greatest strengths is that it unifies concurrent and distributed programming. Erlang lets you send a message to a process on the local VM or on a remote VM using exactly the same semantics (this is sometimes referred to as "location transparency"). Furthermore, Erlang's process spawning and linking/monitoring work seamlessly across nodes. This takes much of the pain out of building distributed, fault-tolerant applications.
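For instance, here's a small sketch (the node and process names are made up):


%% send to a process registered locally
stats_server ! {bump, page_views},

%% send to a process registered on another node -- same '!' semantics
{stats_server, 'stats@host2'} ! {bump, page_views},

%% spawn and monitor a process on a remote node with the usual APIs
Pid = spawn('stats@host2', stats_server, start, []),
erlang:monitor(process, Pid).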

The Scala Actors library has a RemoteActor type that apparently provides similar location transparency, but I haven't been able to find much information about it. According to this article, it's also possible to distribute Scala actors using Terracotta, which does distributed memory voodoo between nodes in a JVM cluster, but I'm not sure how well it works or how simple it is to set up. In Erlang, everything works out of the box, and it's so simple to get it working that it's in the language's Getting Started manual.

Mnesia


Lightweight concurrency with no shared memory and pure message passing semantics is a fantastic toolset for building concurrent applications... until you realize you need shared (transactional) memory. Imagine building a WoW server, where characters can buy and sell items between each other. This would be very hard to build without a transactional DBMS of sorts. This is exactly what Mnesia provides -- with a number of extra benefits such as distributed storage, table fragmentation, no impedance mismatch, no GC overhead (due to ets), hot updates, live backups, and multiple disc/memory storage options (you can read the Mnesia docs for more info). I don't think Scala/Java has anything quite like Mnesia, so if you use Scala you have to find some alternative. You would probably have to use an external DBMS such as MySQL Cluster, which may incur a higher overhead than a native solution that runs in the same VM.

Tail recursion


Functional programming and recursion go hand in hand. In fact, you could hardly write working Erlang programs without tail recursion because Erlang doesn't have loops -- it uses recursion for *everything* (which I believe is a good thing :) ). Tail recursion serves for more than just style -- it also facilitates hot code swapping. Erlang gen_servers call their loop() function recursively between calls to 'receive'. When a gen_server receives a code_change message, it can make a fully-qualified call (e.g. Module:loop()) to re-enter its main loop with the new code. Without tail recursion, this style of programming would quickly result in stack overflows.
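Here's a minimal hand-rolled server loop that illustrates the pattern (handle/2 is a stand-in for real logic):


loop(State) ->
    receive
        code_change ->
            %% a fully-qualified call is resolved against the newest
            %% loaded version of the module
            ?MODULE:loop(State);
        {From, Request} ->
            {Reply, NewState} = handle(Request, State),
            From ! {self(), Reply},
            %% a plain local tail call keeps running the current code
            loop(NewState)
    end.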

From my research, I learned that Scala has limited support for tail recursion due to bytecode restrictions in most JVMs. From http://www.scala-lang.org/docu/files/ScalaByExample.pdf:


In principle, tail calls can always re-use the stack frame of the calling function. However, some run-time environments (such as the Java VM) lack the primitives to make stack frame re-use for tail calls efficient. A production quality Scala implementation is therefore only required to re-use the stack frame of a directly tail-recursive function whose last action is a call to itself. Other tail calls might be optimized also, but one should not rely on this across implementations.


(If I understand the limitation correctly, tail call optimization in Scala only works within the same function: x() can make a tail-recursive call to x(), but if x() calls y(), y() couldn't make a tail call back to x().)

In Erlang, tail recursion Just Works.

Network IO


Erlang processes are tightly integrated with the Erlang VM's event-driven network IO core. Processes can "own" sockets and send and receive messages to/from sockets. This provides the elegance of concurrency-oriented programming plus the scalability of event-driven IO (the Erlang VM uses epoll/kqueue under the covers). From Googling around, I haven't found similar capabilities in Scala actors, although they may exist.
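For illustration, here's the basic shape of a process owning a socket (a minimal sketch; handle_data/1 is hypothetical):


serve(Port) ->
    {ok, Listen} = gen_tcp:listen(Port, [binary, {active, true}]),
    {ok, Socket} = gen_tcp:accept(Listen),
    %% incoming data arrives as ordinary Erlang messages
    receive
        {tcp, Socket, Data} ->
            handle_data(Data);
        {tcp_closed, Socket} ->
            ok
    end.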

Remote shell


In Erlang, you can get a remote shell into any running VM. This allows you to analyze the state of the VM at runtime. For example, you can check how many processes are running, how much memory they consume, what data is stored in Mnesia, etc.
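For example, you could attach to a node and poke around like this (the node names and cookie are made up):


$ erl -sname debug -remsh server@myhost -setcookie secret
(server@myhost)1> length(processes()).
(server@myhost)2> erlang:memory(total).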

The remote shell is also a powerful tool for discovering bugs in your code. When the server is in a bad state, you don't always have to try to reproduce the bug offline somehow to devise a fix. You can log right into it and see what's wrong. If it's not obvious, you can make quick code changes to add more logging and then revert them when you've discovered the problem. I haven't found a similar feature in Scala/Java from some Googling. It probably wouldn't be too hard to implement a remote shell for Scala, but without hot code swapping it would be much less useful.

Simplicity


Scala runs on the JVM, it can easily call any Java library, and it is therefore closer than Erlang to many programmers' comfort zones. However, I think that Erlang is very easy to learn -- definitely easier than Scala, which contains a greater total number of concepts you need to know in order to use the language effectively (especially if you consider the Java foundations on which Scala is built). This is to a large degree due to Erlang's dynamic typing and lack of object orientation. I personally prefer Erlang's more minimalist style, but this is a subjective matter and I don't want to get into religious debates here :)

Libraries


Java indeed has a lot of libraries -- many more than Erlang. However, this doesn't mean that Erlang has no batteries included. In fact, Erlang's libraries are quite sufficient for many applications (you'll have to decide for yourself if they are sufficient for you). If you really need to use a Java library that doesn't have an Erlang equivalent, you could call it using Jinterface. It may or may not be a suitable option for your application. This can indeed be a deal breaker for some people who are deciding between the two languages.

There's an important difference between Java/Scala and Erlang libraries besides their relative abundance: virtually all "big" Erlang libraries use Erlang's concurrency and fault tolerance features. In the Erlang ecosystem, you can get web servers, database connection pools, XMPP servers, and database servers, all of which use Erlang's lightweight concurrency, fault tolerance, etc. Most of Scala's libraries, on the other hand, are written in Java and don't use Scala actors. It will take Scala some time to catch up to Erlang in the availability of libraries based on actors.

Reliability and scalability


Erlang has been running massive systems for 20 years. Erlang-powered phone switches have run with nine nines availability -- only 31ms of downtime per year. Erlang also scales. From telecom apps to Facebook Chat, we have enough evidence that Erlang works as advertised. Scala, on the other hand, is a relatively new language, and as far as I know its actors implementation hasn't been tested in large-scale real-time systems.

Conclusion


I hope I did justice to Scala and Erlang in this comparison (which, by the way, took me way too long to write!). Regardless of these differences, though, I think that Scala has a good chance of being the more popular language of the two. Steve Yegge explains it better than I can:


Scala might have a chance. There's a guy giving a talk right down the hall about it, the inventor of – one of the inventors of Scala. And I think it's a great language and I wish him all the success in the world. Because it would be nice to have, you know, it would be nice to have that as an alternative to Java.

But when you're out in the industry, you can't. You get lynched for trying to use a language that the other engineers don't know. Trust me. I've tried it. I don't know how many of you guys here have actually been out in the industry, but I was talking about this with my intern. I was, and I think you [(point to audience member)] said this in the beginning: this is 80% politics and 20% technology, right? You know.

And [my intern] is, like, "well I understand the argument" and I'm like "No, no, no! You've never been in a company where there's an engineer with a Computer Science degree and ten years of experience, an architect, who's in your face screaming at you, with spittle flying on you, because you suggested using, you know... D. Or Haskell. Or Lisp, or Erlang, or take your pick."


Well, at least I'm not trying too hard to promote LFE... :)

Wednesday, May 14, 2008

Is Facebook running one of the world's biggest Erlang clusters?

I just read on the Facebook engineering blog this post describing how Facebook used Erlang to scale Facebook Chat to 70 million active users overnight. WOW.

This announcement should remove any doubts that Erlang is *the* platform for building scalable realtime (aka Comet) applications.

Monday, May 12, 2008

Erlang does have shared memory

I occasionally hear people say things such as "Erlang makes concurrency easy because it doesn't have shared memory" or "processes in Erlang communicate just by message passing." Just earlier today I came across one such comment on Steve Yegge's post Dynamic Languages Strike Back (you'll have some scrolling down to do -- it's a long article :) ).

For many purposes, it's "good enough" to think of Erlang as lacking shared memory. Erlang's built-in message passing semantics makes inter-process communications trivial to implement, and for many concurrent applications, it's all you need. Also, when you first learn Erlang, it's easy to get excited by the promise of not having to worry about the difficult shared memory problems you encounter in "traditional" concurrent programming. And that's for a good reason: Erlang indeed makes concurrent programming much easier (and more scalable, reliable, etc) than other languages. However, it's not true that Erlang processes can't share memory.

Erlang has (a kind of) shared memory: it's called ets.

From the ets documentation:


This module provides very limited support for concurrent updates. No locking is available, but the safe_fixtable/2 function can be used to guarantee that a sequence of first/1 and next/2 calls will traverse the table without errors and that each object in the table is visited exactly once, even if another process (or the same process) simultaneously deletes or inserts objects into the table. Nothing more is guaranteed; in particular any object inserted during a traversal may be visited in the traversal.


Multiple Erlang processes can simultaneously access and manipulate an ets table, which makes ets act very much like shared memory. ets isn't identical to shared memory in other languages, however. These are the main differences:

- Objects are copied when inserted into and looked-up from ets tables.
- Basic consistency is guaranteed. Individual ets records never get garbled.
- ets tables are not garbage collected (this lets you store massive amounts of data in RAM without incurring garbage collection penalties -- an important trait for soft real time performance.)

Despite these differences, as far as the programmer is concerned, ets is effectively shared memory. If you want to guarantee that multiple processes get a consistent snapshot of a set of objects in a plain ets table, you're out of luck.
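To make this concrete, here's a minimal sketch of two processes bumping a counter in a shared (public) ets table:


start() ->
    Tab = ets:new(counters, [set, public]),
    ets:insert(Tab, {hits, 0}),
    %% both processes mutate the same table concurrently
    [spawn(fun() -> ets:update_counter(Tab, hits, 1) end) || _ <- [1, 2]],
    timer:sleep(100),
    ets:lookup(Tab, hits).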

The good news is that Erlang doesn't leave you to your own devices to figure out some subtle solution involving locking around critical regions to access ets tables safely from multiple processes. Instead, it provides a very nice tool for working with ets: Mnesia. Mnesia is a kind of STM for Erlang, with some extra properties such as support for distributed and persistent storage. Mnesia has a simple transaction API you can use to ensure atomicity, isolation and consistency when accessing objects in an ets table.

Here's an example, taken from the Mnesia documentation, of how to raise an employee's salary in a transaction:


raise(Eno, Raise) ->
    F = fun() ->
            [E] = mnesia:read(employee, Eno, write),
            Salary = E#employee.salary + Raise,
            New = E#employee{salary = Salary},
            mnesia:write(New)
        end,
    mnesia:transaction(F).


So, Erlang has shared memory, and it also has Mnesia, which provides easy transactional access to ets. Does that mean that concurrent programming in Erlang is just like other languages, but with Mnesia for storing shared data? Not exactly. When you program in Erlang, you still use message passing in many situations where in other languages you would rely on semaphores/locks/monitors/signals/etc to enable inter-thread communications. In the simplest possible example, a producer/consumer application, you would use messaging in Erlang, not ets. In most other languages, you would probably use monitors to protect against concurrent access to a shared buffer (see the Wikipedia examples).
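Here's a minimal sketch of that producer/consumer pattern in Erlang (my own illustration):


consumer() ->
    receive
        {item, Item} ->
            io:format("consumed ~p~n", [Item]),
            consumer();
        stop ->
            ok
    end.

producer(_Consumer, 0) ->
    ok;
producer(Consumer, N) ->
    Consumer ! {item, N},
    producer(Consumer, N - 1).

start() ->
    %% no locks: the consumer's mailbox is the shared buffer
    Consumer = spawn(fun consumer/0),
    producer(Consumer, 10),
    Consumer ! stop.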

Erlang isn't the only language that has message queues. In fact, you could use the basic concurrency facilities in most languages to implement message queues as higher-level abstractions for inter-thread communications (although you probably wouldn't be able to replicate Erlang's selective receive using pattern matching, and it probably wouldn't scale or perform as well as Erlang). This is exactly what some of the actor libraries out there do. However, you would have to be very careful not to send pointers/references to objects that would end up being shared between threads, or you would potentially run into nasty bugs when multiple threads modify the same object. In Erlang, all data is immutable, which makes such bugs impossible. And even if you figure out how to ensure copy semantics for message passing, you would still have your work cut out for you to allow processes to communicate between VMs...

Implementing full Erlang style concurrency isn't trivial. I don't think it can be added as a library to a language that doesn't have it by design with support from the runtime.

Erlang takes you as close as possible to concurrent (and distributed!) programming bliss -- but it does have (a kind of) shared memory: ets.

Sunday, April 20, 2008

Startup School

I attended Startup School on Saturday. It was a great experience and a rare opportunity to hear such impressive speakers share their wisdom about technology and entrepreneurship. Some of my favorite talks were by David Heinemeier Hansson, Greg McAdoo, Marc Andreesen, Paul Buchheit and Michael Arrington. I met a bunch of programmers and entrepreneurs, and I also chatted briefly with DHH about 37signals and with Peter Norvig about new search startups and their chances of competing with Google. Many thanks to YCombinator for organizing such a great event!

If you haven't attended, you can see all the videos here. Highly recommended.

Monday, April 14, 2008

Concurrency and expressiveness

Damien Katz's article Lisp as Blub has sparked a lively debate on Hacker News on the relative merits of Erlang and other languages for building robust applications. Good points were made on all sides (except for the tired complaints about Erlang syntax -- I have a suspicion that most people who complain about it haven't done much coding in Erlang). Unfortunately, I think that a key point was lost in all the noise: all else being equal, a language with great support for concurrency and fault tolerance has a higher expressive power than a language that doesn't. It just lets you build concurrent applications much more easily (less code, fewer bugs, better scalability, yadda yadda).

Sunday, April 13, 2008

EC2 gets persistent block level storage

I just caught Amazon's announcement of the new persistent storage engine for EC2. This is great stuff. It lets you create persistent block-level storage devices ranging from 1GB to 1TB in size and attach them to EC2 instances in predetermined availability zones. This service complements Amazon's other storage services -- S3 and SimpleDB -- by providing raw block-level storage devices that are persistent, fast and local (so you don't have to worry about SimpleDB's eventual consistency issues). You can use these volumes for anything -- running a traditional DBMS (MySQL, Postgres) is the first thing that comes to mind.

This announcement is a departure from Amazon's tradition of announcing services only once they become available. It looks like Amazon is feeling the heat of competition from Google App Engine and is becoming more open to win over the hearts and minds of developers who are drawn to GAE for its auto-magical scalability. The ability to attach multiple terabyte-sized volumes on demand alleviates some of those concerns when deploying on Amazon's infrastructure. I'm sure it won't be long before someone creates an open source BigTable-like solution for applications that need massive scalability and redundancy on top of multiple persistent storage volumes (I think this would be a great application to write in Erlang, but I don't know how well Erlang performs in applications that require heavy disc IO).

I like what Amazon is doing. By providing the basic building blocks for scalable applications, it enables startups to create their own GAE competitors (Heroku is the first one that comes to mind) on top of Amazon's infrastructure. Smart move.

Google has the advantage of being able to provide APIs for tight integration with other Google services such as authentication and search (the latter is hypothetical as of now). We'll see how strongly this plays in Google's favor in the coming months.

Of course, price is still a question mark. Neither Amazon's persistent storage service nor GAE have had their prices announced.

Another missing detail is the Amazon storage service's reliability. If a disc fails, do you lose your data? What's the failure probability? Etc.

All this is great for developers. Competition between Amazon and Google means developers will enjoy more services at lower prices in the coming years.

Tag cloud in ErlyWeb howto

Nick Gerakines wrote a good tutorial on how to make tag clouds in ErlyWeb. Check it out at http://blog.socklabs.com/2008/04/tag_clouds_in_erlang_with_erly/.

Tuesday, April 01, 2008

ErlyWeb renamed "Erlang on Rails"

Erlang is *almost* at a tipping point. Thanks to reddit, many people are interested in it. However, it's not there yet. Despite being the only language that got concurrency right, and all its other standout features, many developers still use other languages. The only explanation I can think of is that Erlang hasn't had much PR over the years (the Erlang movie notwithstanding). I'm confident that a good PR boost will help push Erlang over the hill. Unfortunately, Ericsson doesn't seem interested in heavily promoting Erlang the way Microsoft promotes .NET and Sun promotes Java (this may be because many Ericsson employees have never heard of Erlang). So, I decided to take things into my own hands. I don't have a budget, so I need to get creative. The best way to succeed if you're small is to ride a big wave -- and what's a better wave to ride than Ruby on Rails?

Ruby on Rails is very popular -- much more so than ErlyWeb. I believe this popularity is due to the "on Rails" meme, which is just bursting with positive connotations. It sounds young, fresh, happy. It's the anti-enterprisy software. It emancipates you from burdensome type systems, explicit getters and setters, and (ugh) XML. Its metaprogramming wizardry is made of bliss. It evokes images of riding in environmentally friendly transportation, looking out the window at grassy meadows, rolling hills and sunny skies.

I think that renaming ErlyWeb to "Erlang on Rails" will help win over the hearts and minds of many programmers who are currently on the fence. They may be curious about Erlang but are turned off by its telecom image. "Erlang on Rails" conveys a more balanced feeling of industrial-strength applications from the telecom world mixed with the social Web 2.0 era of interconnectedness that celebrates the rise of individualism over grey corporate culture.

2008 will be the year of Erlang on Rails. I know it.

Update: This was an April Fool's joke, in case it's not obvious anymore :)

Monday, March 31, 2008

Announcing the Erlang FriendFeed API

I just hacked together an (alpha) implementation of the FriendFeed API for Erlang. You can get the code at http://code.google.com/p/erlang-friendfeed/. Enjoy!

By the way, my shiny new FriendFeed is at http://friendfeed.com/yariv.

Sunday, March 23, 2008

I Play WoW: A Cool Facebook App Built With ErlyWeb

Nick Gerakines, the author of Facebook Application Development, used ErlyWeb to create the very cool Facebook app I Play WoW. I Play WoW bridges between real people and the characters they play on World of Warcraft. Nick told me he has gotten a lot of feedback such as "Wow! I didn't know my brother-in-law is in my guild!" and "It's been 5 years since I talked to some of them, but a bunch of my friends from school play on these realms and I didn't even know they played".

Some facts:
- 53.5k installs
- 2.9k daily active users
- 1.3k application fans
- 200+ new users a day on average
- In the past 30 days it has gotten over 2 million page views, with users spending more than 5 minutes on average on the application
- Erlang application layout:
* Charstore w/ Mnesia: acts as the raw character store and cache for interactions with wowarmory.com
* I Play WoW w/ ErlyWeb + Mnesia: The front-end and ui for the application. The majority of the FB API calls are made here or are spawned from here.
* There is still one component in Perl that is yet to be ported over, mainly due to not having enough time. It's on the list of things to do.

(My note: it sounds like Nick is also using spawned processes to make FB API calls asynchronously. It's a great technique for reducing page load time and avoiding the annoying timeouts Facebook imposes on page renderings.)

If you play World of Warcraft (an addiction I've luckily been able to avoid thus far :) ) and you are on Facebook, give I Play WoW a try. You may discover that your boss is a level 10 ogre or something :)

Congrats, Nick, for creating such a successful app with ErlyWeb!

Sunday, March 16, 2008

SnapTalent Ads

You may have noticed I put SnapTalent (http://snaptalent) ads on my blog. This is the first time I have put ads on my blog. I've avoided them because ads from most ad networks are usually irritating/ugly, but the SnapTalent ones are nice looking, unobtrusive, and relevant to my blog's readers because they advertise hacker jobs at startups. I can't say those ads have generated substantial revenue for me thus far, but at least I've made enough money to buy the SnapTalent guys a round of beers :)

Sunday, March 09, 2008

In Response to "What Sucks About Erlang"

Damien Katz's latest blog post lists some ways in which he thinks Erlang sucks. I agree with some of his points but not with all of them. Below are my responses to some of his complaints:

1. Basic Syntax

I've heard many people express their dislike for the Erlang syntax. I found the syntax a bit weird when I started using it, but once I got used to it, it hasn't bothered me much. Sometimes I mess up and use the wrong expression terminator, and sometimes things break when I cut and paste code between function clauses, but it hasn't been a real pain point for me. I understand where the complaints are coming from, but IMHO it's a minor issue.

Since the release of LFE last week, if you don't like the Erlang syntax, you can write Erlang code using Lisp syntax, with full support for Lisp macros. If you prefer Lisp syntax to Erlang syntax, you now have a choice.

2. 'if' expressions

The first issue is that in an 'if' expression, the condition has to match one of the clauses, or an exception is thrown. This means you can't write simple code like


if Logging -> log("something") end


and instead you have to write


if Logging -> log("something"); true -> ok end


This requirement may seem annoying, but it is there for a good reason. In Erlang, all expressions (except for 'exit()') must return a value. You should always be able to write

A = foo()

and expect A to be bound to a value. There is no "null" value in Erlang (the 'undefined' atom usually takes its place).

Fortunately, Erlang lets you get around this issue with a one-line macro:


-define(my_if(Predicate, Expression), if Predicate -> Expression; true -> undefined end).


Then you can use it as follows:


?my_if(Logging, log("something"))


It's not that bad, is it?

This solution does have a shortcoming, though, which is that it only works for a single-clause 'if' expression. If it has multiple clauses, you're back where you started. That's where you should take a second look at LFE :)

The second issue with 'if' expressions is that you can't call any user-defined function in the conditional, just a subset of the Erlang BIFs. You can get around this limitation by using 'case', but again you have to provide a 'catch all' clause. For a single clause, you can simply change the macro I created to use a 'case' expression:


%% 'case' is a reserved word, so the macro needs a different name
-define(my_case(Predicate, Expression),
        case Predicate of true -> Expression; _ -> undefined end).


For an arbitrary number of clauses, a macro won't help, and this is something you'll just have to live with. If it really bothers you, use LFE.

3. Strings

The perennial complaint against Erlang is that it "sucks" for strings. Erlang represents strings as lists of integers, and for some reason many people are convinced that this is tantamount to suckage.


...you can't distinguish easily at runtime between a string and a list, and especially between a string and a list of integers.


A string *is* a list of integers -- why should we not represent it as such? If you care about the type of list you're dealing with, you should embed it in a tuple with a type description, e.g.


{string, "dog"},
{instruments, [guitar, bass, drums]}


But if you don't care what the type is, representing a string as a list makes a lot of sense because it lets you leverage all the tools Erlang has for working with lists.

A real issue with using lists is that it's a memory-hungry data structure, especially on 64 bit machines, where you need 128 bits = 16 bytes to store each character. If your application processes such massive amounts of string data that this becomes a bottleneck, you can always use binaries. In fact, you should always use binaries for "static" strings on which you don't need to do character-level manipulation in code. ErlTL, for example, compiles all static template data as binaries to save memory.
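For instance (a quick illustration of the two representations):


%% a list string uses a cons cell per character;
%% a binary stores the bytes contiguously
S = "dog",              %% a list of integers: [100, 111, 103]
B = <<"dog">>,          %% a binary: 3 bytes
S = binary_to_list(B),  %% the two representations convert freely
B = list_to_binary(S).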


Erlang string operations are just not as simple or easy as most languages with integrated string types. I personally wouldn't pick Erlang for most front-end web application work. I'd probably choose PHP or Python, or some other scripting language with integrated string handling.


I disagree with this statement, but instead of rebutting it directly, I'll suggest a new kind of Erlang challenge: build a webapp in Erlang and show me how string handling was a problem for you. I've heard a number of people say that Erlang's string handling is a hindrance in building webapps, but in my experience this simply isn't true. If you ran into real problems with strings when building a webapp, I would be very interested in hearing about them, but otherwise it's a waste of time hypothesizing about problems that don't exist.

4. Functional Programming Mismatch

The issue here is that Erlang's variable immutability supposedly makes writing test code difficult.


Immutable variables in Erlang are hard to deal with when you have code that tends to change a lot, like user application code, where you are often performing a bunch of arbitrary steps that need to be changed as needs evolve.

In C, let's say you have some code:

int f(int x) {
    x = foo(x);
    x = bar(x);
    return baz(x);
}

And you want to add a new step in the function:

int f(int x) {
    x = foo(x);
    x = fab(x);
    x = bar(x);
    return baz(x);
}

Only one line needs editing.

Consider the Erlang equivalent:

f(X) ->
    X1 = foo(X),
    X2 = bar(X1),
    baz(X2).

Now you want to add a new step, which requires editing every variable thereafter:

f(X) ->
    X1 = foo(X),
    X2 = fab(X1),
    X3 = bar(X2),
    baz(X3).



This is an issue that I ran into in a couple of places, and I agree that it can be annoying. However, discussing this consequence of immutability without mentioning its benefits is missing a big part of the picture. I really think that immutability is one of Erlang's best traits. Immutability makes code much more readable and easy to debug. For a trivial example, consider this Javascript code:


function test() {
    var a = {foo: 1, bar: 2};
    baz(a);
    return a.foo;
}


What does the function return? We have no idea. To answer this question, we have to read the code for baz() and recursively descend into all the functions that baz() calls with 'a' as a parameter. Even running the code doesn't help because it's possible that baz() only modifies 'a' based on some unpredictable event such as some user input.

Consider the Erlang version:


test() ->
    A = [{foo, 1}, {bar, 2}],
    baz(A),
    proplists:get_value(foo, A).


Because of variable immutability, we know that this function returns '1'.

I think that the guarantee that a variable's value will never change after it's bound is a great language feature, and it far outweighs the disadvantage of having to use unique variable names in functions that perform a series of modifications to some data.

If you're writing code like in Damien's example and you want to be able to insert lines without changing a bunch of variable names, I have a tip: increment by 10. This will prevent the big cascading variable renamings in most situations. Instead of the original code, write


f(X) ->
    X10 = foo(X),
    X20 = bar(X10),
    baz(X20).


then change it as follows when inserting a new line in the middle:


f(X) ->
    X10 = foo(X),
    X15 = fab(X10),
    X20 = bar(X15),
    baz(X20).


Yes, I know, it's not exactly beautiful, but in the rare cases where you need it, it's a useful trick.

This issue could be rephrased as a complaint against imperative languages: "I don't know if the function to which I pass my variable will change it! It's too hard to track down all the places in the code where my data could get mangled!" This may sound outlandish especially if you haven't coded in Erlang or Haskell, but that's how I really feel sometimes when I go back from Erlang to an imperative language.


Erlang wasn't a good match for tests and for the same reasons I don't think it's a good match for front-end web applications.


I don't understand this argument. Webapps need to be tested just like any other application. I don't see where the distinction lies.

5. Records

Many people hate records and on this topic I fully agree with Damien. I think the OTP team should just integrate Recless into the language and thereby solve most of the issues people have with records.

If you really hate records, another option is to use LFE, which automatically generates getters and setters for record properties.

Incidentally, if you use ErlyWeb with ErlyDB, you probably won't use records at all and you won't run into these annoyances. ErlyDB generates functions for accessing object properties which is much nicer than using the record syntax. ErlyDB also lets you access properties dynamically, which records don't allow, e.g.


P = person:new_with([{name, "Paul"}]),
Fields = person:db_field_names(),
[io:format("~p: ~p~n", [Field, person:Field(P)]) || Field <- Fields]


Records are ugly, but if you're creating an ErlyWeb app, you probably won't need them. If they do cause you a great deal of pain, you can go ahead and help me finish Recless and then bug the OTP team to integrate it into the language :)

6. Code organization


Every time you need to create something resembling a class (like an OTP generic process), you have to create a whole Erlang module, which means a whole new source file with a copyright banner plus the Erlang cruft at the top of each source file, and then it must be added to the build system and source control. The extra file creation artificially spreads out the code over the file system, making things harder to follow.


I think this issue occurs in many programming languages, and I don't think Erlang is the biggest offender here. Unlike Java, for instance, Erlang doesn't restrict you to defining a single data type per module. And Ruby (especially Rails) applications are also known for having multitudes of small files. In Erlang, you indeed have to create a module per gen_server and the other 'behaviors', but depending on the application this may not be an issue. However, I don't think there's anything wrong with keeping different gen_servers in different modules. It should make the code more organized, not less.

7. Uneven Libraries and Documentation

I haven't had a problem with most libraries, and in cases where they do have big shortcomings you can often find a good 3rd party tool. The documentation is a pain to browse and search, but gotapi.com makes some of this pain go away.


Summary

Is Erlang perfect? Certainly not. But sometimes people exaggerate Erlang's problems or they don't address the full picture.

Here are some suggestions I have for improving Erlang:

- Add a Recless-like functionality to make working with records less painful.
- Improve the online documentation by making it easier to browse and search.
- Make some of the string APIs (especially the regexp library) also work with binaries and iolists.
- Add support for overloading macros, just like functions.
- Add support for Smerl-style function inheritance between modules.

Like any language, Erlang has some warts. But if it were perfect, it would be boring, wouldn't it? :)