performance

Faster WF Still

October 21st, 2007  |  Published in erlang, performance

OK, so you could say that I’m a bit obsessed with Tim Bray’s Wide Finder project. Just a little. I mean, I started this blog just a few weeks ago, and so far almost every posting has been about it.

There’s also my Sept./Oct. Internet Computing column, Concurrency with Erlang, and sometime in early November, my next column, entitled Reliability with Erlang, will be published. Neither column is connected at all to the Wide Finder, but they just further reveal my current obsession with Erlang.

In my previous post I described my fastest solution up to that point, but here’s an even faster one: tbray16.erl, which is identical in every way to tbray15.erl from my previous post except that it uses wfbm4.erl, which provides all the performance gains. This version of Boyer-Moore searching includes two simple tweaks:

  • Uses hard-coded constants for string lengths, since the strings are fixed, rather than constantly recalculating them with length/1.
  • Fixes a nagging problem with my Boyer-Moore implementation: it wasn’t handling repeated characters in the fixed pattern very well. In the previous version I chose the lesser of the two shifts when both characters appeared in the pattern, which worked but isn’t technically correct, and it also meant two dict lookups to get the shift values rather than just one. Now I do the right thing: keep track of the number of comparisons made so far and subtract that from the shift value; if the result is positive, use it, otherwise just shift by 1 (see the sketch below).
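
Here’s a minimal sketch of that corrected shift computation. The function name is illustrative rather than the actual wfbm4.erl code, which keeps its shifts in the same kind of dict-based table as the wfbm.erl walkthrough further down this page:

compute_shift(MismatchChar, Matched, Tbl) ->
    %% Matched characters have already compared equal during the
    %% backwards scan, so reduce the table shift for the mismatched
    %% character by that count; if that wouldn't move us forward,
    %% just shift by 1.
    case dict:fetch(MismatchChar, Tbl) - Matched of
        Shift when Shift > 0 -> Shift;
        _ -> 1
    end.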

This version shaves another whole second off the previous version for about a 25% speedup. The fastest I’ve seen it run on my 8-core Linux box is:

real    0m3.107s
user    0m16.243s
sys     0m2.134s

Meanwhile, Anders Nygren has been exploring eliminating the dict altogether for the shift value lookup, but nothing I’ve tried along those lines has been an improvement. Still, thanks to Anders for prompting me to properly fix that Boyer-Moore code and at least eliminate one of the dict lookups.

OK, Just One More WF

October 18th, 2007  |  Published in erlang, performance

When writing my previous post I silently hoped I was finished contributing more Erlang solutions to Tim Bray’s Wide Finder project. Tim already told me my code was running really well on his T5120, and yet in my last post, I nearly doubled the speed of that code, so I figured I was in good shape. But then Caoyuan Deng came up with something faster. He asked me to run it on my 8-core Linux box, and sure enough, it was fast, but didn’t seem to be using the CPU that well.

So, I thought some more about the problem. Last time I said I was using Boyer-Moore searching, but only sort of. This is because I was using Erlang function argument pattern matching, which proceeds forward, not backward as Boyer-Moore does. I couldn’t help but think that I could get more speed by doing that right.

I was also concerned about the speed of reading the input file. Reading it in chunks seems like the obvious thing to do for such a large file (Tim’s sample dataset is 236140827 bytes). It turns out that reading in chunks can cumulatively take over a second using klacke’s bfile module, but it takes only about a third of a second to read the whole thing into memory in one shot. By my measurements, the bfile module is noticeably faster at doing this than the Erlang file:read_file/1. Even my MacBook Pro can read the whole dataset without significant trouble, so I imagine the T5120 can do it with ease.
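
As a rough way to check the whole-file timing (this uses the standard file module rather than bfile, whose API isn’t shown here), something like this in the Erlang shell does the trick:

%% time a single whole-file read into a binary
{Micros, {ok, Bin}} = timer:tc(file, read_file, ["o1000k.ap"]),
io:format("read ~p bytes in ~p ms~n", [size(Bin), Micros div 1000]).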

So, I changed tactics:

  • Read the whole dataset into an Erlang binary in one shot, then break it into chunks based on the number of schedulers that the Erlang system is using.
  • Stop breaking chunks into smaller blocks at newline boundaries. This took too much time. Instead, just grab a block, search from the end to find the final newline, and then process it for pattern matches.
  • Change the search module to do something much closer to Boyer-Moore regarding backwards searching, streamline the code that matches the variable portion of the search pattern, and be smarter about skipping ahead on failed matches.
  • Balance the parallelized collection of match data by multiple independent processes against the creation of many small dictionaries that later require merging.

This new version reads the whole dataset, takes the first chunk, finds the final newline, then kicks off one process to collect matches and a separate process to find the matches. It then moves immediately onto the next block, doing the same thing again. What that means is the main process spends its time finding newlines and launching processes while other processes look for matches and collect them. At the end, the main process collects the collections, merges them, and prints out the top ten.
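
Here’s a stripped-down sketch of that dispatch loop. It isn’t the actual tbray15.erl code: the workers here send their matches straight back to the caller rather than to separate collector processes, last_newline/2 is a hypothetical helper, and the (Binary, Table) argument order for wfbm3:find is an assumption.

scan(Bin, BlockSize, Tbl, Main) when size(Bin) =< BlockSize ->
    %% last piece: hand the whole remainder to a worker
    spawn(fun() -> Main ! {matches, wfbm3:find(Bin, Tbl)} end),
    1;
scan(Bin, BlockSize, Tbl, Main) ->
    %% cut the block back to its final newline; the partial trailing
    %% line carries over into the next block
    Cut = case last_newline(Bin, BlockSize) of
              0 -> BlockSize;   %% no newline in this block, split anyway
              N -> N
          end,
    {Head, Rest} = split_binary(Bin, Cut),
    spawn(fun() -> Main ! {matches, wfbm3:find(Head, Tbl)} end),
    1 + scan(Rest, BlockSize, Tbl, Main).

%% scan backwards from Pos for the offset just past the last newline
last_newline(Bin, Pos) when Pos > 0 ->
    case split_binary(Bin, Pos - 1) of
        {_, <<$\n, _/binary>>} -> Pos;
        _ -> last_newline(Bin, Pos - 1)
    end;
last_newline(_, 0) ->
    0.

The return value is the number of workers spawned, so the main process knows how many {matches, List} messages to wait for before merging the results.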

On my 8-core 2.33GHz Linux box with 8 GB of RAM:

$ time erl -smp -noshell -run tbray15 main o1000k.ap
2959: 2006/09/29/Dynamic-IDE
2059: 2006/07/28/Open-Data
1636: 2006/10/02/Cedric-on-Refactoring
1060: 2006/03/30/Teacup
942: 2006/01/31/Data-Protection
842: 2006/10/04/JIS-Reg-FD
838: 2006/10/06/On-Comments
817: 2006/10/02/Size-Matters
682: 2003/09/18/NXML
630: 2003/06/24/IntelligentSearch

real    0m4.124s
user    0m25.124s
sys     0m1.916s

At 4.124s, this is significantly faster than the 6.663s I saw with my previous version. The user time is 6x the elapsed time, so we’re using the cores well. What’s more, if you change the block size, which indirectly controls the number of Erlang processes that run, you can clearly see a pretty much linear speedup as more cores get used. Below is the output from a loop where the block size starts at the file size and then divides by two on each iteration (I’ve edited the output to make it more compact by flattening the time output into three columns: real, user, and sys):

$ ((x=236140827)) ; while ((x>32768))
do echo $x
    time erl -smp -noshell -run tbray15 main o1000k.ap $x >/dev/null
    ((x/=2))
done

236140827: 0m38.072s, 0m52.159s, 0m3.984s
118070413: 0m18.294s, 0m37.922s, 0m4.571s
59035206:  0m11.374s, 0m36.694s, 0m9.098s
29517603:  0m4.598s,  0m27.825s, 0m2.180s
14758801:  0m4.225s,  0m26.237s, 0m2.134s
7379400:   0m4.181s,  0m25.779s, 0m1.873s
3689700:   0m4.124s,  0m25.124s, 0m1.916s
1844850:   0m4.149s,  0m24.931s, 0m1.969s
922425:    0m4.132s,  0m24.894s, 0m1.822s
461212:    0m4.170s,  0m24.588s, 0m2.026s
230606:    0m4.185s,  0m24.548s, 0m2.035s
115303:    0m4.215s,  0m24.755s, 0m2.025s
57651:     0m4.317s,  0m25.199s, 0m1.985s

The elapsed times for the top four entries show the multiple cores kicking in, essentially doubling the performance each time. Once we hit the 4-second range, performance gains are small but steady roughly down to a block size of 922425, but then times start to creep up again. My guess is that this is because smaller blocks mean more Erlang dict instances being created to capture matches, and all those dictionaries then have to be merged to collect the final results. In the middle, where performance is best, the user time is roughly 6x the elapsed time as I already mentioned, which means that if Tim runs this on his T5120, he should see excellent performance there as well.

Feel free to grab the files tbray15.erl and wfbm3.erl if you want to try it out for yourself.

One More Erlang Wide Finder

October 14th, 2007  |  Published in erlang, performance

[Update: WordPress totally destroyed the original version of this posting, so I had to almost completely rewrite it. :-( ]

Since posting my second version of an Erlang solution to Tim Bray’s Wide Finder, which Tim’s apparently been getting some good performance out of, I haven’t had time to try anything new. I mean, I work for a startup, so you could say I’m a bit busy. But the fact that the output of my earlier solution was just a number of semi-matches, rather than the list of top ten matches that the original Ruby version produced, was gnawing at me. In short, I didn’t finish the job. So last night, as I watched the marathon Red Sox playoff game, I worked on getting the output to match that of the Ruby version.

The executive summary is that this version has output exactly like that of Ruby, and as an added bonus it runs almost twice as fast as my original tbray5.erl code even though it does more work. On my 8-core 2.33 GHz Intel Xeon Linux box, the best time I’ve seen is 6.663 sec. It has more lines of code, though.

You can grab tbray14.erl and wfbm.erl if you’d like to try them out. Or just run the following commands:

wget http://steve.vinoski.net/code/tbray14.erl http://steve.vinoski.net/code/wfbm.erl
erl -make -smp
erl -smp -noshell -run tbray14 main o1000k.ap

Below find the details of how it works.

Boyer-Moore string searching

Tim’s searching for data in his web logs that match this pattern:

GET /ongoing/When/\d\d\dx/(\d\d\d\d/\d\d/\d\d/[^ .]+)\s

There’s a trailing space at the end of the pattern, hence that last \s. Obviously, the first part of the pattern is fixed, the second part variable. The part in parentheses is what Tim wants to see in the final top ten output list.

One of the problems with my previous version was how it broke the data up so it could look for matches. It used Erlang’s string:tokens function to first break on newlines, and then called it again to divide each line into space-separated chunks. Using that function also meant first converting Erlang binaries to strings. All in all, too slow.
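
For reference, the older approach amounted to something like this (a simplified sketch, not the actual tbray5.erl code):

tokenize(Chunk) when is_binary(Chunk) ->
    %% convert the chunk to a string, break it into lines, then break
    %% each line into space-separated fields -- lots of intermediate
    %% lists, which is what made it slow
    Lines = string:tokens(binary_to_list(Chunk), "\n"),
    [string:tokens(Line, " ") || Line <- Lines].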

I decided to instead pursue solutions that let me leave the data in the form of an Erlang binary and search through it that way. I wrote a character-by-character thing that worked, but it was also too slow. I tried various regular expression Erlang packages, as well as just using Erlang’s built-in pattern matching, but they were too slow too.

I finally settled on a combination of Boyer-Moore and Erlang’s built-in matching. It lets me advance through the data relatively quickly looking for the fixed part of that pattern, and then use Erlang’s pattern matching to get the rest. The code to do this is in wfbm.erl; let’s break it down function by function.

Constants

First, some constants:

-define(STR, "GET /ongoing/When").
-define(REVSTR, "mehW/gniogno/ TEG").
-define(STRLEN, length(?STR)).
-define(MATCHHEADLEN, length("/200x/")).
-define(SKIP, length("/200x/2007/10/15/")).

The first one, STR, defines the fixed part of the pattern we’re looking for, while REVSTR is the same string, only backwards. Boyer-Moore works by searching backwards, so we need the backwards version to let us do that. STRLEN is just the length of the fixed search string. MATCHHEADLEN is the length of the text we need to drop off the variable part of the patterns we find, so that our final output strings match the original Ruby output. And finally, SKIP is just the length of the front part of the variable part of the pattern, which has variable content but is always the same length.

Shift table

Boyer-Moore searching shifts the search string along the text being searched based on which characters don’t match and where those characters appear in the search string. The following code precomputes a table that tells us how to shift the search string along:

set_shifts(_, Count, Tbl) when Count =:= ?STRLEN - 1 ->
    Tbl;
set_shifts([H|T], Count, Tbl) ->
    New = ?STRLEN - Count - 1,
    NTbl = dict:store(H, New, Tbl),
    set_shifts(T, Count+1, NTbl).

set_defaults([], Tbl) ->
    Tbl;
set_defaults([V|T], Tbl) ->
    set_defaults(T, dict:store(V, ?STRLEN, Tbl)).

init() ->
    set_shifts(?STR, 0, set_defaults(lists:seq(1, 255), dict:new())).

The init/0 function is called to initialize the shift table. Callers are expected to invoke this once up front, and then pass the table in whenever they want to search. The set_defaults/2 function just sets the shift amount for all characters to the length of the search string, and then the set_shifts/3 function sets the correct shift values in the same table for the characters in the search string.
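
For example, assuming init/0 is exported as described, the resulting table looks like this (the values follow from the code above: characters not in the search string get the full pattern length, 17, while characters in it get their distance from the end of the string):

Tbl = wfbm:init(),
17 = dict:fetch($X, Tbl),  %% 'X' never appears in "GET /ongoing/When"
16 = dict:fetch($G, Tbl),  %% 'G' is 16 characters from the end
3  = dict:fetch($W, Tbl).  %% 'W' is 3 characters from the end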

Finding matches

The exported find/2 function (not shown) calls find_matches/3 to get the work done. This function comes in three forms:

find_matches(<<?STR, Tail/binary>>, Tbl, Acc) ->
    case get_tail(Tail) of
        {ok, More} ->
            {H, Rest} = split_binary(Tail, More),
            {_, Match} = split_binary(H, ?MATCHHEADLEN),
            Result = binary_to_list(Match),
            find_matches(Rest, Tbl, [Result | Acc]);
        no_match ->
            find_matches(Tail, Tbl, Acc)
    end;
find_matches(Bin, _, Acc) when size(Bin) < ?STRLEN ->
    Acc;
find_matches(Bin, Tbl, Acc) ->
    {Front, _} = split_binary(Bin, ?STRLEN),
    Shift = get_shift(lists:reverse(binary_to_list(Front)), ?REVSTR, Tbl),
    {_, Next} = split_binary(Bin, Shift),
    find_matches(Next, Tbl, Acc).

The middle variant is invoked when we have searched to the end of the binary, and it’s too short to contain any more matches. This version just returns a list of the accumulated matches.

The first variant is invoked when the front of the binary matches the fixed portion of the pattern we’re searching for. Note that this isn’t strictly Boyer-Moore, since that algorithm compares in reverse, while Erlang argument pattern matching presumably proceeds forward. When the fixed part matches, we have to check what follows to ensure it matches the variable part of the pattern, and we call get_tail/1 to do that; it’s described below.

The last variant of find_matches/3 gets called when the front of the binary doesn’t match. It splits the binary, taking enough characters off the front to match against the fixed search string, converts that first part to a string, reverses it, and passes it to get_shift/3:

get_shift([C1|T1], [C2|T2], Tbl) when C1 =:= C2 ->
    get_shift(T1, T2, Tbl);
get_shift([C1|_], _, Tbl) ->
    dict:fetch(C1, Tbl).

These two clauses simply walk the reversed string character by character until they find a mismatch, then return the shift amount from the Boyer-Moore table for the mismatched character. The find_matches/3 function uses that shift amount to split the binary again at the right spot, then invokes itself recursively on the second half of the split binary to continue looking for matches.

Now, get_tail/1 is what find_matches/3 calls when the front of the binary matches the fixed part of the search pattern and we need to determine whether the tail of the binary matches the variable part of the search pattern. It has multiple variants. First, the easy ones:

get_tail(<<>>) ->
    no_match;
get_tail(Bin) ->
    get_tail(Bin, none, 0).

The first returns the atom no_match when an empty binary is passed in. The second variant calls get_tail/3, which does all the work. We pass in the atom none to initialize our search state, and we initialize the match length to zero.

The get_tail/3 function has a number of variants. The first four, shown below, just reject binaries that don’t match the variable portion of the search pattern:

get_tail(<<"/20",_:8,"x/",_:32,$/,_:16,$/,_:16,$/, Rest/binary>>, _, _)
  when size(Rest) =:= 0 ->
    no_match;
get_tail(<<"/20",_:8,"x/",_:32,$/,_:16,$/,_:16,$/,32:8, _/binary>>, _, _) ->
    no_match;
get_tail(<<"/19",_:8,"x/",_:32,$/,_:16,$/,_:16,$/, Rest/binary>>, _, _)
  when size(Rest) =:= 0 ->
    no_match;
get_tail(<<"/19",_:8,"x/",_:32,$/,_:16,$/,_:16,$/,32:8, _/binary>>, _, _) ->
    no_match;

We match the front of the variable portion of the pattern, where the date numbers appear, but we disallow anything that has an empty binary following it, or is followed immediately by a space character (shown here as 32:8, where 32 is the ASCII value for the space character). We do these matches twice, once for strings that start with "/20" and again for strings that start with "/19".

When the front of the binary matches the date portion of the variable part of our search pattern, we hit the following get_tail/3 variants:

get_tail(<<"/20",_:8,"x/",_:32,$/,M1:8,M0:8,$/,D1:8,D0:8,$/, Rest/binary>>,
         none, Len)
  when ((M1-$0)*10 + (M0-$0)) =< 12, ((D1-$0)*10 + (D0-$0)) =< 31 ->
    get_tail(Rest, almost, Len+?SKIP);
get_tail(<<"/19",_:8,"x/",_:32,$/,M1:8,M0:8,$/,D1:8,D0:8,$/, Rest/binary>>,
         none, Len)
  when ((M1-$0)*10 + (M0-$0)) =< 12, ((D1-$0)*10 + (D0-$0)) =< 31 ->
    get_tail(Rest, almost, Len+?SKIP);

These two indicate potentially good matches, so they change the search state from none to almost. They then recursively invoke the search with the Rest of the binary. Depending on what it holds, it will hit one of the following:

get_tail(<<32:8, _/binary>>, found, Len) ->
    {ok, Len};
get_tail(<<32:8, _/binary>>, _, _) ->
    no_match;
get_tail(<<$., _/binary>>, _, _) ->
    no_match;
get_tail(<<_:8, Rest/binary>>, almost, Len) ->
    get_tail(Rest, found, Len+1);
get_tail(<<_:8, Rest/binary>>, State, Len) ->
    get_tail(Rest, State, Len+1).

The first variant here looks for a space character at the front of the rest of the binary, but only when we’re in the found state. That marks the end of a successful search, so for this case, we return ok and the length of the match. The second variant also finds a space character, but in any state other than found; this is an error, so we return no_match.

The third variant here searches for a period/full stop character, written as $. in Erlang. This character isn’t allowed in our match, so if we see it, we return no_match.

The final two variants of get_tail/3 catch all other characters at the front of the binary. If we’re in the almost state, the first of these variants continues the search in the found state. Otherwise, the second variant just continues the search at the next character, keeping the same state.

Now that we’ve seen the get_tail/3 functions, let’s go back and look at the first variant of find_matches/3 again, to tie it all together:

find_matches(<<?STR, Tail/binary>>, Tbl, Acc) ->
    case get_tail(Tail) of
        {ok, More} ->
            {H, Rest} = split_binary(Tail, More),
            {_, Match} = split_binary(H, ?MATCHHEADLEN),
            Result = binary_to_list(Match),
            find_matches(Rest, Tbl, [Result | Acc]);
        no_match ->
            find_matches(Tail, Tbl, Acc)
    end;

If get_tail/1 indicates a match, we split the tail of the binary at More, which is the length of the match. We then take the head of that split and split it again to strip off the unwanted portion of the matched binary. This makes it look like the strings that Ruby prints out, corresponding to the parenthesized portion of Tim’s original regular expression. We then convert the matched binary to a string and store it in our accumulator list.
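
Here’s a hypothetical usage example to make that concrete. The exported find/2 isn’t shown in the post, so the (Binary, Table) argument order is an assumption, but the match itself follows directly from the clauses above:

Tbl = wfbm:init(),
Line = <<"GET /ongoing/When/200x/2006/10/06/On-Comments HTTP/1.1 200\n">>,
%% the fixed prefix matches, get_tail/1 accepts the date and article
%% name, and the "/200x/" head is stripped from the result
["2006/10/06/On-Comments"] = wfbm:find(Line, Tbl).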

The main code

The file tbray14.erl contains the main code that invokes the code described so far. It’s pretty much the same as the original tbray5.erl, which I’ve already described in detail, so I won’t repeat that description here. The main difference, other than calling wfbm:find/2 to find matches, is the management of those matches. The code uses Erlang dictionaries to track hit counts for each match, and there’s also code to merge the dictionaries created by multiple Erlang worker processes. Look in the file if you want to see that code.
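
The dictionary handling isn’t shown here, but a minimal sketch of the idea looks roughly like the following; the function names are illustrative, not the actual tbray14.erl helpers:

count_hits(Matches) ->
    %% add one hit per matched key
    lists:foldl(fun(Key, D) -> dict:update_counter(Key, 1, D) end,
                dict:new(), Matches).

merge_counts(Dicts) ->
    %% combine per-process dictionaries, summing counts for keys that
    %% appear in more than one
    lists:foldl(fun(D, Acc) ->
                        dict:merge(fun(_K, V1, V2) -> V1 + V2 end, D, Acc)
                end, dict:new(), Dicts).

top_ten(Dict) ->
    %% sort by count, descending, and keep the top ten
    lists:sublist(lists:reverse(lists:keysort(2, dict:to_list(Dict))), 10).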

Results

As I said earlier, the best time I’ve seen from this version is 6.663 seconds on Tim’s o1000k.ap dataset:

$ time erl -smp -noshell -run tbray14 main o1000k.ap
2959: 2006/09/29/Dynamic-IDE
2059: 2006/07/28/Open-Data
1636: 2006/10/02/Cedric-on-Refactoring
1060: 2006/03/30/Teacup
942: 2006/01/31/Data-Protection
842: 2006/10/04/JIS-Reg-FD
838: 2006/10/06/On-Comments
817: 2006/10/02/Size-Matters
682: 2003/09/18/NXML
630: 2003/06/24/IntelligentSearch

real    0m6.663s
user    0m34.530s
sys     0m12.010s

As you can see, the output matches the original Ruby version exactly, which was my goal for this version. The speedup is due to more efficient searching. I believe this efficiency shows in the CPU time, which is just over 5x the real time; for tbray5.erl, the CPU usage tends to be about 7x the real time. This version uses fewer Erlang processes as well. I found that it works best when reading 8MB blocks from the file, splitting each block into 2 chunks at a newline character, and then processing each chunk for matches in a separate Erlang process. Thus, tbray14:main/1 is set to these values by default. However, YMMV, so if you want to experiment with different chunk sizes and different file block sizes, do it from the command line like this:

time erl -smp -noshell -run tbray14 main chunkCount o1000k.ap blockSize

where chunkCount is the number of chunks to break each file block into, and blockSize is the size of the block to read from the input data file.

Hopefully Tim will get a chance to see how this version runs on his new machine.

Wide Finder in Python

October 7th, 2007  |  Published in performance, python

Fredrik Lundh has posted some really nice code, along with detailed explanations, showing Tim’s Wide Finder implemented in various ways in Python, using both threads and processes. I ran his wf-6.py on an 8-core 2.33 GHz Intel Xeon Linux box with 8GB of RAM, and it ran best at 5 processes, clocking in at 0.336 sec. Another process-based approach, wf-5.py, executed best with 8 processes, presumably one per core, in 0.358 sec. The multithreaded approach, wf-4.py, ran best with 5 threads, at 1.402 sec (but also got the same result with 19 threads, go figure). Using the same dataset, I get 11.8 sec from my best Erlang effort, which is obviously considerably slower. I used the bash shell time built-in for all measurements, for consistency.

I’ve been coding in Python for years, but lately I’ve been using it a lot, and can’t get enough of it. It’s so clean, and as Fredrick’s code shows, it’s got some extremely powerful capabilities that are also extremely easy to use. It’s obviously smoking fast as well. Some folks who commented on some of my previous blog entries seem to equate programs written using dynamic languages like Python with “unmaintainable one-off solutions.” If that’s your experience with such languages, blame the programmer, not the language.