
New Column: Yaws

July 1st, 2011  |  Published in column, erlang, web, yaws

The July/August 2011 issue of Internet Computing is out, and this time my column (PDF) covers the Erlang Yaws web server. Since Yaws has too many features to detail in just one column, I wrote it as an introductory piece, covering only those features that are most commonly used.

You might also take a look at the slides from my “A Decade of Yaws” talk (PDF) at Erlang Factory London 2011.

More Erlang Web Server Benchmarking

May 18th, 2011  |  Published in erlang, performance, testing, web

In my previous blog entry I questioned the value of most web server benchmarking, particularly as related to Erlang. Typical benchmarks are misleading, inaccurate, and poorly executed. Perhaps worse, the intent of publishing them seems to be to assert that the fastest web server (at least according to the tests performed) is of course also the best web server. You’d think this fallacy would be so obvious that nobody would fall for it, but think again: watching the delicious “erlang” tag over the past few days showed that the benchmarks my blog post referred to were among the most bookmarked Erlang-related pages during that timeframe.

Not surprisingly, though, it looks like I’m not the only one bothered by poor benchmarking practices. Over on his blog, Mark Nottingham just published a brilliant set of rules for HTTP load testing. It’s quite instructive to take your favorite set of published web server benchmarks and see just how many of Mark’s rules they violate.

Like I hinted last time, if you want benchmarks, you are best off by far running them yourself. That way, they’re much more likely to be relevant to the problems you’re addressing, and you can run them in an environment similar, or even identical, to the one on which you plan to deploy. You can also gear the benchmarks to much more closely resemble your applications and the loads you require them to handle. Doing the benchmarking work yourself gives you valuable hands-on experience with the servers and frameworks you’re considering, allowing you to get a feel for important factors such as feature completeness and correctness, ease of development, flexibility, and ease of deployment and runtime management/monitoring, none of which can be gauged by someone else’s performance benchmarks. Finally, by doing your own benchmarking you can help ensure the validity and usefulness of your results by following Mark’s load testing rules.

Erlang Web Server Benchmarking

May 9th, 2011  |  Published in code, erlang, performance, web

Over on his blog, Roberto Ostinelli published “A comparison between Misultin, Mochiweb, Cowboy, NodeJS and Tornadoweb.” I was going to write a reply comment there, but it got pretty long, so I decided to publish it here instead. I’m going to ignore the non-Erlang web servers it discusses and focus entirely on Erlang. I’m not really trying to single out Roberto here; rather, I decided to finally write something I’ve been meaning to write for a while now about Erlang web servers and benchmarking.

First, I second the request made by one commenter for including Yaws in the measurements. Roberto, if you need help with the code or setup, just let me know. If one insists on writing these kinds of benchmarks, which as you’ll learn if you read this whole entry is something I question, the least he or she could do is include Yaws since it’s the granddaddy of all Erlang web servers.

Based on the benchmark code Roberto published, I wrote the following simple Yaws module to conform to the problem statement and registered it in my Yaws configuration as a “/” appmod:

%% Yaws appmod for the benchmark: echo the "value" query parameter back
%% in a small XML document, or report an error when it's missing.
-module(yaws_bench).
-export([out/1]).

out(Arg) ->
    [{status, 200},
     {content, "text/xml",
      %% yaws_api:queryvar/2 returns {ok, Value} when the query string
      %% contains the parameter, undefined otherwise.
      case yaws_api:queryvar(Arg, "value") of
          {ok, Value} ->
              ["<http_test><value>", Value, "</value></http_test>"];
          _ ->
              "<http_test><error>no value specified</error></http_test>"
      end}].
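
For completeness, the appmod registration mentioned above lives in the Yaws configuration file. A minimal hypothetical server block (the server name, port, and docroot below are placeholders, not my actual setup) might look like this:

<server localhost>
        port = 8000
        listen = 0.0.0.0
        docroot = /tmp
        appmods = </, yaws_bench>
</server>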

I then measured it on my two-core Ubuntu 10.10 system, using Roberto’s published httperf command, against the misultin and Mochiweb code from his post, and found that Yaws definitely holds its own, even though it’s a full-featured web server and does not claim to be just a lightweight library offering (sometimes partial) HTTP support as some frameworks do. For some tests Yaws outperforms misultin, and for others it doesn’t. This is interesting, considering that neither Klacke nor I have made any attempts at performance improvements in Yaws recently.

Second, the benchmarks do not compare apples to apples. Both Mochiweb and Yaws, for example, produce replies that are larger in size than misultin’s replies, primarily because they both include Server and Date headers. As I’ve learned from years of helping maintain Yaws, date calculations can noticeably and surprisingly impact Erlang web server performance, yet simply leaving Date headers out isn’t an option for real-world apps since HTTP 1.1 pretty much requires them (section 13.2.3 of RFC 2616 states, “HTTP/1.1 requires origin servers to send a Date header, if possible, with every response, giving the time at which the response was generated…”). Caches use Date headers for several reasons, for example in the absence of cache-control headers to help heuristically calculate content expiration. Even ignoring the date calculation requirements, just creating and delivering larger replies due to the presence of the Server header will negatively impact any comparisons based on request/second measurements.
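
To illustrate the kind of shortcut servers take here, below is a minimal sketch, not the actual Yaws implementation, of caching the formatted date so it isn’t recomputed for every response; the module name and one-second refresh interval are just assumptions for this example:

-module(cached_date).
-export([start/0, date_header/0]).

%% Cache the RFC 1123 date string in a public ETS table and refresh it
%% once per second, so building a Date header doesn't require formatting
%% the current time for every single response.
start() ->
    ets:new(cached_date, [named_table, public, {read_concurrency, true}]),
    ets:insert(cached_date, {date, httpd_util:rfc1123_date()}),
    spawn(fun refresh/0),
    ok.

refresh() ->
    ets:insert(cached_date, {date, httpd_util:rfc1123_date()}),
    timer:sleep(1000),
    refresh().

%% Fetch the cached string for use in a "Date:" response header.
date_header() ->
    [{date, Date}] = ets:lookup(cached_date, date),
    Date.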

Third, the benchmarking approach includes no application “think time.” How many real-world apps just blast request after request down a connection without any intervening time to handle replies? If the goal is to measure something akin to real-world apps, then the benchmarks should at least be using something like httperf’s --wsess option to simulate client think time. And unfortunately doing that is hard to get right for generic benchmarks, since different client apps will have different think times.
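
For instance, an httperf invocation along the following lines (the host, URI, and numbers are purely illustrative, not Roberto’s actual command) would run sessions of 10 calls each with two seconds of client think time between calls:

httperf --server localhost --port 8080 --uri '/?value=test' \
        --wsess=1000,10,2 --rate 50 --timeout 5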

On a related note, what exactly is the goal of these benchmarks? To imply that faster is better? That’s unfortunately a commonly-held fallacy. Given that the blog entry states that the target is dynamic applications, then consider the fact that the performance of a real-world dynamic application is often dominated by something other than the web server — perhaps some back-end service from which page data is being fetched, for example. A real-world setup greatly concerned with performance is likely to have nginx out in front, probably with a local cache, to handle fast-path requests, shunting only those requests it can’t fulfill off to the slower back-end server. Such benchmarking games are therefore often misguided as far as real-world dynamic apps are concerned because they end up measuring something that isn’t even in the critical path in a real setup.

I don’t agree with Kyle Drake’s comment on Roberto’s blog about code ugliness, since the Erlang code posted there is very clear and would look like “garbage” only to someone who doesn’t know the language. But I do agree with the sentiment, which is that for dynamic apps, what often matters is what kind of code, and how much, you have to write and maintain to support your app. Given that Erlang web servers tend to make use of the underlying Erlang/OTP facilities for HTTP parsing and socket handling, all things considered you’re just not going to get a huge variation in performance among them, assuming they’re written halfway decently. What matters for dynamic apps are the stability of the web server/library and the programming model it offers. These are what Roberto should really be benchmarking, but of course that’s basically impossible since stability would take a long time to prove, and programming model is a matter of taste that can’t be conveniently measured using artificial benchmarking tools.

This reminds me of one of my old columns on this very issue as applied to enterprise middleware, entitled “The Performance Presumption” (PDF); the short version is that people often measure performance simply because performance is relatively easy to measure. The lesson is that you shouldn’t rely on generic benchmarks, but rather you should take the time to create specific benchmarks that mimic the app you want to develop, and base your decisions on the results of that exercise.

On top of all that, I don’t really understand the desire to keep writing new Erlang web frameworks for performance reasons. As I stated earlier, if a framework uses Erlang’s built-in packet decoding and socket handling, it won’t perform a great deal better than any other Erlang web framework. OTOH, if someone writes a new framework with the hope of providing a really nice new programming model — webmachine is a fantastic example of this — then they shouldn’t be “proving” how good the programming model is by trying to show how fast it is. Ever seen webmachine being advertised via performance benchmarks? Neither have I.
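
To give a concrete, if simplified, sense of that programming model, a trivial webmachine resource looks roughly like this (the module name and response body are made up; webmachine supplies sensible defaults for all the callbacks not shown):

-module(hello_resource).
-export([init/1, to_html/2]).

-include_lib("webmachine/include/webmachine.hrl").

%% webmachine drives the HTTP decision flow; a resource overrides only
%% the callbacks it cares about and inherits reasonable behavior for
%% everything else (status codes, conditional requests, and so on).
init([]) ->
    {ok, undefined}.

to_html(ReqData, State) ->
    {"<html><body>hello</body></html>", ReqData, State}.

A dispatch table entry maps a URL path to such a resource module, and that mapping plus the callbacks is essentially all the code you write.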

Let’s face it, the Erlang web development community isn’t large enough to support numerous web servers and frameworks. I’m sure some will disagree, but IMO publishing artificial benchmarks designed to “prove” which is best mostly just fragments the community. If you really have an itch to write a fast Erlang web server, you’d help the community much more by contributing to an existing one, including the inets web server that ships with Erlang/OTP and now powers the Erlang website. For Yaws, Klacke and I often take patches and suggestions from our users, and we gladly welcome solid contributions intended to improve Yaws performance. If you’re just dying to show off your chops, note that improving performance in a long-lived and highly stable codebase like Yaws without breaking anyone’s code is far more challenging than writing yet another new server that doesn’t differ much from what already exists.

Or perhaps better yet, contribute to the Erlang core. IMO the next major performance improvements in Erlang web servers will come not from minor tweaks in handling binaries or such things, but rather via radical improvements in the Erlang TCP driver or even from developing a whole new HTTP-specific driver. Unlike a war of artificial benchmarks among Erlang web servers, these approaches have a great chance to improve the lot of all Erlang web systems.

Warp: A Haskell Web Server

May 1st, 2011  |  Published in functional programming, haskell, performance, web

A new “Functional Web” column! Michael Snoyman contributed the column for the May/June 2011 issue, entitled Warp: A Haskell Web Server (PDF). This is a nice follow-up to the previous column on Snap, and it includes great detail on how Warp is designed as well as some nice performance graphs.

Process Bottlenecks in Erlang Web Apps

March 19th, 2011  |  Published in column, erlang, functional programming, performance, web

In my “Functional Web” March/April 2011 column (PDF), I explore potential process bottleneck problems within Erlang web applications, specifically around the issue of process fan-in. I’ve seen developers create Erlang web applications that suffer from bottlenecks related to interprocess communication as explained in the column, yet as far as I’ve seen this issue is rarely if ever discussed among Erlang developers.

As always, all constructive feedback on the column is welcomed. Just post your comments here or email me.