
Implementation of “Don’t Lose Your ets Tables”

May 8th, 2013  |  Published in erlang, reliability

A couple years ago I wrote about how to avoid losing your ets tables if the Erlang processes that own those tables crash. The original post resulted from accidentally losing an ets table full of video subscriber data during a debug session on a live customer production site. Oops.

If you’re looking for some code that implements what that post describes, look no further than DeadZen’s etsgive repository. DeadZen is a very experienced Erlang developer, so it’s no surprise that his example code is straightforward and clean.

Just as described in my post, DeadZen’s supervisor code starts two workers, one that manages the table and one that uses it. The table manager process takes the following steps (see the sketch after this list):

  1. traps exits
  2. links to the table user worker process (the supervisor starts the table user worker first)
  3. creates the ets table
  4. sets itself as the table heir process
  5. gives ownership of the table away to the table user worker process
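Here’s a minimal sketch of such a manager (my own illustration, not DeadZen’s actual code), assuming the table user worker registers itself under the hypothetical name table_user:

-module(table_mgr).
-export([start_link/0, init/0]).

start_link() ->
    {ok, spawn_link(?MODULE, init, [])}.

init() ->
    process_flag(trap_exit, true),               %% 1. trap exits
    User = wait_for_user(),                      %% 2. link to the table user
    link(User),
    Tab = ets:new(?MODULE, [set, private]),      %% 3. create the ets table
    ets:setopts(Tab, {heir, self(), transfer}),  %% 4. set self as the heir
    ets:give_away(Tab, User, transfer),          %% 5. give ownership away
    manage(Tab, User).

manage(Tab, User) ->
    receive
        {'EXIT', User, _Reason} ->
            %% the user died, so as heir we get the table back
            receive {'ETS-TRANSFER', Tab, User, transfer} -> ok end,
            NewUser = wait_for_user(),           %% wait for the restarted worker
            link(NewUser),
            ets:give_away(Tab, NewUser, transfer),
            manage(Tab, NewUser);
        stop ->
            ok
    end.

%% poll until the supervisor has (re)started the table user worker
wait_for_user() ->
    case whereis(table_user) of
        undefined -> timer:sleep(100), wait_for_user();
        Pid -> Pid
    end.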

The table user worker process first handles the transfer message resulting from the manager giving it ownership of the table. Once it becomes the owner, it can handle a call to increment a counter in the table and a call to check the table contents. It can also handle a call to die, which you would issue interactively from an Erlang shell to exercise the code. This causes the worker process to die, which means the table manager regains ownership of the table because it’s the table’s heir. The table manager then waits for the supervisor to start a new table user worker process, and once it’s up and running, the manager links to it and transfers table ownership to it.

This written description might be difficult to work through, but fear not: DeadZen also supplies an example Erlang shell session showing how it all works. Clone his repository and try it out for yourself!

Don’t Lose Your ets Tables

March 23rd, 2011  |  Published in erlang, reliability

In my recent QCon talk I talked about accidentally crashing an Erlang process on a customer’s subscription streaming video website running live in production. The code involved had not been used in production before, and the customer had decided somewhat unexpectedly to turn on a new feature that required it. The developer who wrote it had not tested it and had long since left the company.

The purpose of the code was to monitor bandwidth and session usage for each video subscriber to make sure they weren’t streaming more than they’d paid for. Concerned about the viability of the code, a colleague and I logged into the customer site (with their permission, of course) and chose a subscriber at random. In an Erlang shell, I interactively invoked a function in the code in question to check that subscriber’s current bandwidth and session count. After a second check, we saw the numbers dropping, potentially indicating the subscriber was logging out, and we wanted to make sure all went well when the subscriber completely stopped streaming. After waiting a bit, I interactively called the function again, and — BAM! — the process holding session state for all paying customers crashed.

The original developer had used an Erlang ets table, an in-memory data store, to hold the subscriber data, and wrote something like this for lookups:

[SubscriberData] = ets:lookup(Table, Subscriber),

My interactive call from the shell looked up a nonexistent subscriber, so the result was the empty list [] rather than [SubscriberData], which caused a pattern mismatch and a badmatch exception. Uncaught, the exception crashed the process. Since the process owned the ets table, when it went down it took the ets table and all subscriber session data with it. It wasn’t so bad, since all it meant was that for a few hours a few subscribers potentially got a bit more video than they’d paid for, but still, it’s not at all the kind of design Erlang’s “Let It Crash” philosophy actually encourages. Crashing a process when something unexpected occurs is perfectly fine, since coding defensively introduces problems of its own, but you can still avoid losing your ets tables like this relatively easily.

Name an Heir

When you create an ets table you can also name a process to inherit the table should the creating process die:

TableId = ets:new(my_table, [{heir, SomeOtherProcess, HeirData}]),

If the creating process dies, the process SomeOtherProcess will receive a message of the form

{'ETS-TRANSFER', TableId, OldOwner, HeirData}

where TableId is the table identifier returned from ets:new, OldOwner is the pid of the process that owned the table, and HeirData is the data provided with the heir option passed to ets:new. Once it receives this message, SomeOtherProcess owns the table.
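For example, the heir’s receive might look something like this (heir_loop and the io:format are just for illustration):

heir_loop() ->
    receive
        {'ETS-TRANSFER', TableId, OldOwner, HeirData} ->
            io:format("inherited ~p from ~p with ~p~n",
                      [TableId, OldOwner, HeirData]),
            %% we are now the owner; keep running so the table survives
            heir_loop()
    end.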

Give It Away

Alternatively, you can create an ets table and then give it to some other process to keep:

TableId = ets:new(my_table, []),
ets:give_away(TableId, SomeOtherProcess, GiftData),

If the creating process dies, the process SomeOtherProcess will receive a message of the form

{'ETS-TRANSFER', TableId, OldOwner, GiftData}

where TableId is the table identifier returned from ets:new, OldOwner is the pid of the process that owned the table, and GiftData is the data provided in the ets:give_away call. Once it receives this message, SomeOtherProcess owns the table.
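Put together, a minimal demo of the handoff might look like this (the module name, message tag, and io:format output are all hypothetical):

-module(giveaway_demo).
-export([start/0]).

start() ->
    Worker = spawn(fun worker/0),
    TableId = ets:new(my_table, []),
    ets:give_away(TableId, Worker, my_gift).

worker() ->
    receive
        {'ETS-TRANSFER', TableId, OldOwner, my_gift} ->
            io:format("received ~p from ~p~n", [TableId, OldOwner]),
            %% keep running, since the table dies with its owner
            receive stop -> ok end
    end.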

Table Manager

Instead of naming an heir or giving a table away, you can just have your Erlang supervisor process create a child process whose sole task is to own the table. This process creates the table as a named public table, thus allowing other processes to know its name and read/write it directly, with the built-in concurrency protection of ets dealing with any concurrency issues. Since the owner process does nothing more than create the table and then wait to be told to shut down, the likelihood of it crashing and taking the table with it is practically nil. The drawback here, though, is that the process actually using the table may have to coordinate with the owner process to ensure the table is available, and worse, it ends up using what is essentially a global variable — the table name — which can make code harder to read and maintain.
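A sketch of such an owner process, assuming a hypothetical table named my_table:

-module(table_owner).
-export([start_link/0, init/0]).

start_link() ->
    {ok, spawn_link(?MODULE, init, [])}.

init() ->
    %% named and public, so any process can read and write it directly
    ets:new(my_table, [named_table, public, set]),
    receive stop -> ok end.  %% do nothing; just stay alive to keep the table

Any other process can then call, say, ets:insert(my_table, {Key, Value}) or ets:lookup(my_table, Key) directly, without coordinating with the owner.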

A Combination Approach

A nice way of managing ets tables, though, is to use a combination of the three previous techniques:

  1. The Erlang supervisor creates a table manager process. Since all this process does is manage the table, the likelihood of it crashing is very low.
  2. The table manager links itself to the table user process and traps exits, allowing it to receive an EXIT message if the table user process dies unexpectedly.
  3. The table manager creates a table, names itself (self()) as the heir, and then gives it away to the table user process.
  4. If the table user process dies, the table manager is informed of the process death and also inherits the table back.

Once it inherits the table, the table manager can then, for example, wait until the supervisor recreates the table user process, and then repeat the steps above to give the table to the new table user process. Other variations on this approach, such as a small pool of child process clones that cooperate to transfer the table among themselves in case of error, are of course also possible. Even though there are still process coordination issues here (nothing difficult, though), I like this approach because it avoids global named tables and takes advantage of Erlang's supervision hierarchy.

The title of my QCon talk was "Let It Crash...Except When You Shouldn't." This scenario is an example of "when you shouldn't" — losing ets data due to a process crash is easily avoided.

Controlling Erlang’s Heart

February 22nd, 2009  |  Published in code, erlang, reliability

Erlang’s heart feature provides heartbeat-based monitoring for Erlang runtime systems, with the ability to restart a runtime system if it fails. It works reasonably well, but it has one issue: if an error causes repeated immediate crashes of the runtime, heart will happily keep restarting it over and over again, ad infinitum.

For yaws 1.80, released a few days ago on Feb. 12, I added a check to the heart setup in the yaws startup script to prevent endless restarts. I thought I’d share it here because it’s useful for Erlang systems in general and is in no way specific to yaws. It works by passing startup information from one incarnation to the next, checking that information to detect multiple restarts within a given time period. We track both the startup time and the restart count, and if we detect 5 restarts within a 60-second period, we stop completely. This is not to say that yaws is in dire need of this capability — it’s extremely stable in general and 1.80 in particular is a very good release — but I added it mainly because other Erlang apps sharing the same runtime instance as yaws may not enjoy that same high level of stability, especially while they’re still under development.

The command heart runs to start a new instance is set in the HEART_COMMAND environment variable. For yaws, it’s set like this (I’ve split this over multiple lines for clarity, but it’s just one line in the actual script):

HEART_COMMAND="${ENV_PGM} \
  HEART=true \
  YAWS_HEART_RESTARTS=$restarts \
  YAWS_HEART_START=$starttime \
  $program "${1+"$@"}

where

  • ${ENV_PGM} is /usr/bin/env, which allows us to set environment variables for the execution of a given command.
  • HEART is an environment variable that we use to indicate the command was launched by heart.
  • YAWS_HEART_RESTARTS is an environment variable that we use to track the number of restarts already seen. The yaws script initially sets this to 1 and increments it for each heart restart.
  • YAWS_HEART_START is an environment variable that we use to track the time of the current round of restarts. This is tracked as UNIX time, obtained by the script via the “date -u +%s” command.
  • $program is the yaws script itself, i.e., $0.
  • ${1+"$@"} is a specific shell construct that passes all the original arguments of the script unchanged along to $program.

The yaws script looks for HEART set to true, indicating that it was launched by heart. In that case, it checks YAWS_HEART_RESTARTS and YAWS_HEART_START to see how many restarts we’ve seen since the start time. We get the current UNIX time and subtract the YAWS_HEART_START time; if the difference is 60 seconds or less and the restart count has already reached 5, we exit completely without restarting the Erlang runtime. Otherwise we restart, first adjusting these environment variables. If the restart count within the 60-second window is less than 5, we increment it and set the new value into YAWS_HEART_RESTARTS but keep the same YAWS_HEART_START time. But if the current time is more than 60 seconds past the start time, we reset YAWS_HEART_RESTARTS to 1 and set a new start time for YAWS_HEART_START. Look at the yaws script to see the details of this logic — scroll down to the part starting with if [ "$HEART" = true ].
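Condensed and paraphrased (this isn’t the yaws script verbatim, and MAX_RESTARTS and WINDOW are names introduced here for clarity), the check looks roughly like this:

MAX_RESTARTS=5
WINDOW=60   # seconds

if [ "$HEART" = true ]; then
    now=`date -u +%s`
    elapsed=`expr $now - $YAWS_HEART_START`
    if [ $elapsed -le $WINDOW ]; then
        if [ $YAWS_HEART_RESTARTS -ge $MAX_RESTARTS ]; then
            echo "too many heart restarts within $WINDOW seconds, exiting" 1>&2
            exit 1
        fi
        restarts=`expr $YAWS_HEART_RESTARTS + 1`   # same window: bump the count
        starttime=$YAWS_HEART_START
    else
        restarts=1                                 # window expired: start a new one
        starttime=$now
    fi
else
    restarts=1                                     # first start, not launched by heart
    starttime=`date -u +%s`
fi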

Note that this approach is much like the way Erlang receive loops generally track state, by recursively passing state information to themselves.
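Compare the classic counter loop, where the state is simply the argument of the next recursive call:

count_loop(Count) ->
    receive
        bump ->
            count_loop(Count + 1);           %% new state, same loop
        {get, From} ->
            From ! {count, Count},
            count_loop(Count)
    end.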

Clearly Time To End This

May 18th, 2008  |  Published in commentary, distributed systems, erlang, reliability

A technical discussion stops being a vehicle for learning when the following start to occur:

  • Someone starts making stuff up.
  • Instead of answering questions put to them, someone starts pointing out “flaws” in the questions themselves.
  • One challenges the other to some sort of programming contest.
  • Name calling.

The first two aren’t so bad, but when either of the latter two appears, it’s time to stop. Unfortunately, the third item has now entered my back-and-forth with Ted Neward. Since Ted has given me the last word, I’ll take it, but it’s clearly time to move on.

Given that a number of statements Ted’s made about Erlang in this discussion simply aren’t true, it’s quite clear Ted has never written any production Erlang code. [Update: Patrick Logan has posted a detailed analysis of Ted’s misunderstandings of Erlang.] As a long-time author, I’m bothered when people write authoritatively on topics they have no business writing about, so my only goal with my responses in this conversation has been to set the record straight with respect to Erlang. Ted originally said Erlang was a study in concurrency; I merely pointed out that it was more importantly a study in reliability. That’s really not even debatable. Unfortunately, it’s turned into a frustrating one-sided conversation, because Ted lacks any detailed knowledge of Erlang, so he keeps unhelpfully trying to shift the focus elsewhere.

In his past two responses, Ted has picked at my questions like a grammar school English teacher, accusing me of conflating things, making bad assumptions, etc. I see that Patrick Logan is trying to clarify things, which might help. Yet Ted still hasn’t adequately explained why he’s taken such a hard stance against reliability being a fundamental feature of Erlang, how UNIX processes and Erlang processes are the same (as he keeps asserting), or why he thinks it’s much, much harder to make an Erlang application manageable and monitorable than it is to build Erlang’s reliability into other systems like the JVM or Scala.

But now, we see the worst: the “programmer challenge.” Ugh. Thankfully, I’m sure most readers know that a programming contest of the sort Ted proposes would prove absolutely nothing. I guess he proposed it because I mentioned how I recently spent a quarter of a day making an Erlang application monitorable, in response to his continued claims that doing so is really hard, so now he wants to make a competition of it. I’d rather that you just explain, Ted, the experiences you’ve had that have led you to claim that Erlang applications can’t be easily managed or monitored. Better yet, since you’re the one who wants a contest, and given that you’re the one making all the claims, why don’t you go off and see how quickly you can build Erlang’s reliability into Scala and the JVM, since you claim it’s so simple?

If you’re a regular reader of Ted’s blog, you know that Ted generally offers good advice and you can learn useful things from him. He’s a good writer and a wonderful conference presenter, as he can make hard concepts easier to grok and generally does so with humor to keep you awake. But I feel that anyone in Ted’s position has a responsibility to avoid passing off incorrect information to his readers as fact. My advice therefore is simply that you don’t take what Ted says as gospel for this particular topic. Let me assure you that Erlang offers far, far more value than just exceptional concurrency support, which is where Ted’s initial posting in this thread seemed to want to limit it, and which is all I objected to. Unlike Ted, I’ve written quite a bit of Erlang code, and I use it every single day. If you write distributed systems, you owe it to yourself to explore Erlang’s capabilities and features. I’ve been writing and researching middleware and distributed systems for nearly 20 years now, and I’ve seen a lot over the years. Erlang is by far the most innovative and sound approach to distributed systems development I’ve ever seen and experienced — the trade-offs its designers chose are simply excellent. Like I’ve said numerous times over the past year, I really wish I’d found Erlang a decade ago, because I know for certain it would have saved my teams and me countless hours of development time.

Thinking in Language, But Not Clearly

May 9th, 2008  |  Published in commentary, distributed systems, erlang, languages, reliability

Ted Neward finally responds to my comments about his remarks concerning Erlang. I really don’t mean to pick on Ted — I like Ted! — but unfortunately, this time around his response misses the mark in more ways than one.

First, Ted says:

Erlang’s reliability model–that is, the spawn-a-thousand-processes model–is not unique to Erlang. In fact, it’s been the model for Unix programs and servers, most notably the Apache web server, for decades. When building a robust system under Unix, a master-slave model, in which a master process spawns (and monitors) n number of child processes to do the actual work, offers that same kind of reliability and robustness. If one of these processes fail (due to corrupted memory access, operating system fault, or what-have-you), the process can simply die and be replaced by a new child process.

There’s really no comparison between the UNIX process model (which BTW I hold in very high regard) and Erlang’s approach to achieving high reliability. They are simply not at all the same, and there’s no way you can claim that UNIX “offers that same kind of reliability and robustness” as Erlang can. If it could, wouldn’t virtually every UNIX process be consistently yielding reliability of five nines or better?

Obviously, achieving high reliability requires at least two computers. On those systems, what part of the UNIX process model allows a process on one system to seamlessly fork child processes on another and monitor them over there? Yes, there are ways to do it, but would anyone claim they are as reliable and robust as Erlang’s approach? I sure wouldn’t. Also, UNIX pipes provide IPC for processes on the same host, but what about communicating with processes on other hosts? Yes, there are many, many ways to achieve that as well — after all, I’ve spent most of my career working on distributed computing systems, so I’m well aware of the myriad choices here — but that’s actually a problem in this case: too many choices, too many trade-offs, and far too many ways to get it wrong. Erlang can achieve high reliability in part because it solves these issues extremely well, along with a whole bunch of related ones such as live code upgrade/downgrade.
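For contrast, here’s roughly what remote process creation and monitoring look like in Erlang, assuming distribution is already running and a node named worker@otherhost exists (the node, module, and function names are hypothetical):

%% spawn a process on another node and monitor it from this one
Pid = spawn('worker@otherhost', mymod, myfun, []),
Ref = erlang:monitor(process, Pid),
receive
    {'DOWN', Ref, process, Pid, Reason} ->
        io:format("remote worker exited: ~p~n", [Reason])
end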

Ted continues:

There is no reason a VM (JVM, CLR, Parrot, etc) could not do this. In fact, here’s the kicker: it would be easier for a VM environment to do this, because VM’s, by their nature, seek to abstract away the details of the underlying platform that muddy up the picture.

In your original posting, Ted, you criticized Erlang for having its own VM, yet here you say that a VM approach can yield the best solution for this problem. Aren’t you contradicting yourself?

It would be relatively simple to take an Actors-based Java application, such as that currently being built in Scala, and move it away from a threads-based model and over to a process-based model (with the JVM constuction[sic]/teardown being handled entirely by underlying infrastructure) with little to no impact on the programming model.

Would it really be “relatively simple”? Even if what you describe really were relatively simple, which I strongly doubt, there’s still no guarantee that the result would help applications get anywhere near the levels of reliability they can achieve using Erlang.

As to Steve’s comment that the Erlang interpreter isn’t monitorable, I never said that–I said that Erlang was not monitorable using current IT operations monitoring tools. The JVM and CLR both have gone to great lengths to build infrastructure hooks that make it easy to keep an eye not only on what’s going on at the process level (“Is it up? Is it down?”) but also what’s going on inside the system (“How many requests have we processed in the last hour? How many of those were successful? How many database connections have been created?” and so on). Nothing says that Erlang–or any other system–can’t do that, but it requires the Erlang developer build that infrastructure him-or-herself, which usually means it’s either not going to get done, making life harder for the IT support staff, or else it gets done to a minimalist level, making life harder for the IT support staff.

I know what you meant in your original posting, Ted, and my objection still stands. Are you saying here that all Java and .NET applications are by default network-monitoring-friendly, whereas Erlang applications are not? I seem to recall quite a bit of effort spent by various teams at my previous employer to make sure our distributed computing products, including the Java-based products and .NET-based products, played reasonably well with network monitoring systems, and I sure don’t recall any of it being automatic. Yes, it’s nice that the Java and CLR guys have made their infrastructure monitorable, but that doesn’t relieve developers of the need to put actual effort into tying their applications into the monitoring system in a way that provides useful information that makes sense. There is no magic here, and in my experience, even with all this support, it still doesn’t guarantee that monitoring support will be done to the degree that the IT support staff would like to see.

And do you honestly believe Erlang — conceived, designed, implemented, and maintained by a large well-established telecommunications company for use in highly-reliable telecommunications systems — would offer nothing in the way of tying into network monitoring systems? I guess SNMP, for example, doesn’t count anymore?

(Coincidentally, I recently had to tie some of the Erlang stuff I’m currently working on into a monitoring system which isn’t written in Erlang, and it took me maybe a quarter of a workday to integrate them. I’m absolutely certain it would have taken longer in Java.)

But here’s the part of Ted’s response that I really don’t understand:

So given that an execution engine could easily adopt the model that gives Erlang its reliability, and that using Erlang means a lot more work to get the monitorability and manageability (which is a necessary side-effect requirement of accepting that failure happens), hopefully my reasons for saying that Erlang (or Ruby’s or any other native-implemented language) is a non-starter for me becomes more clear.

Ted, first you state that an execution engine could (emphasis mine) “easily adopt the model that gives Erlang its reliability,” and then you say that it’s “a lot more work” for anyone to write an Erlang application that can be monitored and managed? Aren’t you getting those backwards? It should be obvious that in reality, writing a monitorable Erlang app is not hard at all, whereas building Erlang-level reliability into another VM would be a considerably complicated and time-consuming undertaking.