Monday, September 25, 2006

Mnesia? MySQL? It's The Same Transaction!

I have a confession: I didn't like my last attempt at adding transaction support to the Revamped MySQL Driver. Don't get me wrong -- it's better to have some transaction support in the driver than none, but the approach I took had two shortcomings: one minor and one major.



The minor shortcoming was that the transaction handling added a non-trivial level of complexity to the API by introducing a few new functions such as new_transaction(), add_query(), add_execute() and commit(). This made the MySQL transaction API much less elegant than the Mnesia transaction API, which, in its simplest form, boils down to a single function -- mnesia:transaction/1.



As an example, compare




T = mysql:new_transaction(pool_id),
T1 = mysql:add_query(T,
    <<"INSERT INTO person(name) VALUES ('bob')">>),
Res = mysql:commit(T1).


to




mnesia:transaction(fun() -> mnesia:write(Bob) end).


Even ignoring the ugliness of the SQL statement, it's easy to see that the Mnesia example is much clearer.



My attempt at simplifying the MySQL transaction handling by making it possible to write the above example as




mysql:transaction(pool_id, fun(T) ->
    add_query(T,
        <<"INSERT INTO person(name) VALUES ('bob')">>)
end)


was an improvement, but it still added more complexity to the driver's API than Mnesia's single transaction function.



Putting aesthetic concerns aside for a second, having a transaction API that's different from Mnesia's would make my life hard when implementing different ErlyDB drivers that must conform to the same transaction specifications. Having a consistent approach to transactions in all drivers was quite important for ErlyDB.



The bigger shortcoming with the original transaction interface in the MySQL driver was that although it was possible to execute a sequence of INSERT, UPDATE and DELETE statements in a transaction, it was impossible to execute SELECT statements in a transaction and actually do something with the results. For example, it was impossible to do the equivalent of




mysql:fetch(p1, <<"BEGIN">>),
mysql:fetch(p1,
    <<"INSERT INTO person(name) VALUES ('bob')">>),
Res = mysql:fetch(p1,
    <<"SELECT last_insert_id()">>),
[[Id]] = mysql:get_result_rows(Res),
mysql:fetch(p1,
    ["UPDATE person SET name='Jane' WHERE id=",
     integer_to_list(Id)]),
mysql:fetch(p1, <<"COMMIT">>).


Even worse, if you had more than one connection in the connection pool, there was no way at all to implement the above transaction because the connection pooling algorithm would execute each statement in a different connection!



Ouch.



I basically ran into two problems. First, Erlang functions are stateless: there was no way for mysql:fetch to know that it was part of a transaction unless it was given a parameter informing it of that fact. (This parameter could be either a transaction record, as in the original approach, or a continuation, letting the user implement transactions in continuation-passing style.) Second, the transaction function executed in the client process, but each SQL statement was executed in the process of a connection chosen by the dispatcher, and the only way to execute a group of statements in one connection was to send them to the dispatcher in a batch.



The first problem caused the API complexity; the second problem made using results from SELECT statements inside a transaction impossible.



Fortunately, I was able to find work-arounds to both problems.



To tackle the first issue, I dug deep into my memory of obscure Erlang documentation I had read. The Advanced Topics section of the Erlang course has a short section on the process dictionary. In Erlang, each process has a mutable data structure called the process dictionary. The process dictionary allows you to 'cheat' by storing values in it and later retrieving them in other functions without passing those values as explicit parameters. This feature is generally discouraged because it makes your code less readable and can introduce obscure bugs. (It's interesting how in most programming languages, mutable data is a fact of life, whereas functional languages such as Erlang and Haskell teach you to avoid it as much as possible :) ). I had never used the process dictionary before, and I will do my best to refrain from using it again, but for this specific feature it was very useful.
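
To give a sense of how it works, the whole API is basically put/2, get/1 and erase/1 (a tiny illustration; the in_transaction key is made up for this example, it's not the flag the driver actually uses):

set_context() ->
    put(in_transaction, true).

in_transaction() ->
    %% get/1 returns undefined if the key was never set
    get(in_transaction) =:= true.

clear_context() ->
    erase(in_transaction).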



Detour: It is actually possible to emulate the process dictionary in a "clean" way using an Erlang process that maintains state for all other processes. To get and set the state for the current process, you would make calls such as




State = gen_server:call(state_server,
    {get_state, self()}).


and




gen_server:cast(state_server,
    {set_state, self(), State})


(gen_server:call is blocking and gen_server:cast is non-blocking).



The main disadvantages of this approach are the extra overhead due to message passing and the fact that the state server needs to monitor other processes using erlang:monitor in order to garbage-collect state data belonging to processes that have died or crashed.
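
For the curious, a minimal sketch of such a state server could look like this (just an illustration of the idea, not code from the driver; the module and message names are made up):

-module(state_server).
-behaviour(gen_server).

-export([start_link/0]).
-export([init/1, handle_call/3, handle_cast/2, handle_info/2,
         terminate/2, code_change/3]).

start_link() ->
    gen_server:start_link({local, state_server}, ?MODULE, [], []).

%% The server's state is a dict mapping client pids to their stored state.
init([]) ->
    {ok, dict:new()}.

handle_call({get_state, Pid}, _From, States) ->
    Reply = case dict:find(Pid, States) of
                {ok, State} -> State;
                error -> undefined
            end,
    {reply, Reply, States}.

handle_cast({set_state, Pid, State}, States) ->
    %% monitor the client so its state can be garbage-collected if it dies
    erlang:monitor(process, Pid),
    {noreply, dict:store(Pid, State, States)}.

handle_info({'DOWN', _Ref, process, Pid, _Reason}, States) ->
    {noreply, dict:erase(Pid, States)};
handle_info(_Info, States) ->
    {noreply, States}.

terminate(_Reason, _States) -> ok.
code_change(_OldVsn, States, _Extra) -> {ok, States}.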



Back to the MySQL driver: With the process dictionary, mysql:fetch and mysql:execute could know that they were called inside a transaction, which made it possible to remove the mysql:add_query and mysql:add_execute functions. However, I still faced the problem that each call to mysql:fetch and mysql:execute would go to a different connection because of the connection pooling mechanism.



Thankfully, Erlang let me solve this problem quite elegantly: send the transaction function to the connection's process via the dispatcher, execute the function in the connection's process, and then send the result of the function to the calling process!



(For veteran Erlangers, this is probably far from groundbreaking, but for me, the notion of sending a function to a different process that executes it and sends back the result is pretty cool. This is another small way in which Erlang has changed the way I think about programming.)
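
If you haven't seen this pattern before, here's the idea in miniature, stripped of everything driver-specific (a toy sketch, not the driver's code):

%% A worker process that executes any fun it receives
%% and sends the result back to the caller.
worker() ->
    receive
        {run, From, Fun} ->
            From ! {result, Fun()},
            worker()
    end.

run_in_worker(Worker, Fun) ->
    Worker ! {run, self(), Fun},
    receive
        {result, Res} -> Res
    end.

%% Usage:
%% Worker = spawn(fun worker/0),
%% 3 = run_in_worker(Worker, fun() -> 1 + 2 end).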

This is the solution I implemented in the MySQL driver. When you call mysql:transaction(ConnPool, Fun), the Fun parameter is sent to the dispatcher, which picks the next connection from the connection pool and then sends the Fun to the connection's process. The connection process puts a flag in the process dictionary marking the existence of a transaction context, and then executes the Fun. Calls to mysql:fetch and mysql:execute inside the Fun pick up the transaction context, and instead of sending their SQL statements to the dispatcher, they call mysql_conn functions directly. This ensures that all statements in a transaction are executed in the same connection.
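
In pseudo-Erlang, the fetch side of this scheme looks roughly like the following (a simplified sketch of the idea only -- the key and helper names are placeholders, not the driver's actual internals):

fetch(PoolId, Query) ->
    case get(transaction_context) of
        undefined ->
            %% not inside a transaction: route through the dispatcher,
            %% which picks a connection from the pool
            dispatcher_fetch(PoolId, Query);
        ConnState ->
            %% inside a transaction: we're already running in the
            %% connection's process, so call the connection code directly
            connection_fetch(ConnState, Query)
    end.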



With this new approach, the MySQL driver now lets you write transaction handling code in the style of the Mnesia API. The new way of writing the above example is:




mysql:prepare(insert_developer,
    <<"INSERT INTO developer(name) VALUES (?)">>),
mysql:transaction(conn_pool,
    fun() ->
        mysql:execute(insert_developer, ["Bob"]),
        {data, Res} =
            mysql:fetch(<<"SELECT last_insert_id()">>),
        [[Id]] = mysql:get_result_rows(Res),
        mysql:fetch(
            [<<"UPDATE developer SET name='Jane' WHERE id=">>,
             integer_to_list(Id)])
    end).


Two notes about this example:



  • The PoolId parameter is no longer necessary in calls to mysql:fetch and mysql:execute when they are called inside a transaction. For consistency, you can keep the parameter in the calls -- it'll just be ignored.

  • The last call to mysql:fetch demonstrates the use of iolists, which are more efficient than string concatenation in certain cases, such as when you're sending the result to a socket (there's a short illustration right after this list).
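
If iolists are new to you, the point is that you can hand a nested list of strings and binaries straight to functions that write to a socket, without flattening or concatenating anything first (a tiny illustration, unrelated to the driver's internals):

Id = 7,
Query = [<<"UPDATE developer SET name='Jane' WHERE id=">>,
         integer_to_list(Id)],
%% The runtime writes the nested structure to the socket as-is;
%% for comparison, iolist_to_binary(Query) yields
%% <<"UPDATE developer SET name='Jane' WHERE id=7">>.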



I made another change to the MySQL driver besides the transaction API change: inspired by Joe Armstrong's latest article about transactional memory, I added versioning to prepared statements. In the previous version, when a prepared statement changed, the dispatcher would scan all connections and update the ones that had the statement marked as prepared. This solution is fine as long as there aren't too many database connections and the set of prepared statements is stable, but it doesn't scale too well.



In the latest version of the driver, when mysql:execute sends a request to the dispatcher, the dispatcher looks up the latest version of the statement in its memory and then forwards the request to execute the prepared statement at that version to a connection from the connection pool. If the connection hasn't prepared the statement, or if the version it has prepared is lower than the one in the dispatcher's request, the connection requests the latest version of the statement from the dispatcher. This ensures that all connections execute the latest version of each statement while keeping communications to a minimum.
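
On the connection side, the version check boils down to something like this (a simplified sketch of the idea; the function and field names are placeholders, not the driver's actual ones):

%% The dispatcher forwards {execute, Name, Version, Params}, where Version
%% is the latest version of the prepared statement that it knows about.
maybe_reprepare(Conn, Name, Version) ->
    case dict:find(Name, prepared_versions(Conn)) of
        {ok, Prepared} when Prepared >= Version ->
            %% our prepared copy is current -- nothing to do
            Conn;
        _ ->
            %% never prepared here, or our copy is stale: ask the
            %% dispatcher for the latest statement text and re-prepare it
            {Stmt, Latest} = request_latest_statement(Name),
            prepare_on_connection(Conn, Name, Stmt, Latest)
    end.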



(Things actually get more complicated when a call to mysql:execute is made in a transaction, but I'll spare you such messy details :) )



I put the new version for the driver in branches/v0.9. The old version is in branches/v0.8. Please get the latest version, give it a test drive and let me know if you find any problems (I tested it myself and it looks pretty stable).



Now that the MySQL driver looks solid, I can finally go back to ErlyDB :)



Update: I forgot to mention that transactions are automatically rolled back if the Fun returns or throws {error, Err}. For instance,




mysql:transaction(conn_pool1,
    fun() ->
        mysql:fetch(<<"DELETE FROM customer">>),
        throw({error, just_kidding})
    end).


would have no impact on the data because the transaction will be rolled back.

10 comments:

Rodrigo Kumpera said...

Out of curiosity, do you know how mnesia handles this situation?

Yariv said...

Good question. I'm not sure, actually. Maybe I'll go digging in the code and see what it does.

mikko.mattila@kirkas.com said...

Okay sorry about the reposts, I just pressed the submit button a few times because nothing happened when I pressed it.. sorry again

Yariv said...

Hi Mikko,

This is essentially how I first solved the transaction issue. I added a list of statements to the transaction prior to sending them to the dispatcher, which then executed them one by one. The problem was that the results of the SELECT statement were unavailable to the statements that followed it. This is also the case in your code, if I understand it correctly.


Don't worry about the double post -- I'll delete the redundant ones.


Thanks,
Yariv

mikkom said...

... And I have to add that I didn't mean that I would not need information from select (you can do this with subqueries) but that you would need to modify logic based on select statements.


In fact I think by doing this you are using SQL as kind of a variable alternative and that is not a good thing.


in fact I think it's quite hard to find examples where you need to modify program structure inside SQL transaction.

Yariv said...

Clarification: in this case, the program structure isn't modified, but the values returned from the SELECT statement need to be used in the following statement. In any case, I'm not convinced that the logic *never* has to be decided inside the transaction...

mikkom said...

I'm not convinced either, just could not get any good examples into my head.


The last_insert_id() example is basically the primary reason why I added the possibility to use multiple queries in the first place.

mikkom said...

I got many good examples into my head during the day of coding, just wanted to say that your way is the correct one. I'll add some kind of similar way to my framework too..

cheers,
Mikko

Richard said...

interesting... anyone have drivers for SQLITE3? How would you define the effort required? Guess I'm going to look at the code now.

Olivier said...

Hello Yariv, in some cases another way is to call MySql stored procedure. Is it possible with a simple mysql:fetch() ?