About a year ago we had an internal coding challenge in the company, and I decided to take part. The goal was simple enough – we were given a huge file (a couple of GB) in a specific format, where each row contained an ID along with other attributes. The program was to read an ID from standard input and find the associated attributes (or, I think, return some error when the ID did not exist). The entries were sorted by ID.

The language was not specified, you could use any. I used C#.

With the data being more or less random it was very hard to do better than plain binary search. I tried various algorithms (interpolation search with a binary search cutoff was in some cases a little better) but ended up with a straight binary search. I did, however, have a small caching algorithm that built a sparse map of IDs to offsets in the original file, so that the initial part of the binary search could be performed in memory rather than by seeking in the heavy file. The cache only made sure it stayed sparse enough to keep its size below 512 KB. This was because the test procedure involved running the program many times on single-item inputs rather than once on a multi-item input, so I judged that loading and saving the cache could contribute too much if it was large.
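To make the idea more concrete, here is a minimal sketch of that kind of sparse index – not the original contest code, and the class and member names are made up:

using System;
using System.Collections.Generic;

class SparseIndex
{
    private const int MaxCacheBytes = 512 * 1024;        // the 512 KB budget mentioned above
    private const int BytesPerEntry = 2 * sizeof(long);  // one ID plus one file offset

    private readonly long[] ids;
    private readonly long[] offsets;

    // sampleIds/sampleOffsets are checkpoints collected from the file, ordered by ID
    // (the file itself is sorted by ID); they are thinned until they fit the budget.
    public SparseIndex(long[] sampleIds, long[] sampleOffsets)
    {
        int step = 1;
        while (sampleIds.Length / step * BytesPerEntry > MaxCacheBytes)
            step *= 2;

        var keptIds = new List<long>();
        var keptOffsets = new List<long>();
        for (int i = 0; i < sampleIds.Length; i += step)
        {
            keptIds.Add(sampleIds[i]);
            keptOffsets.Add(sampleOffsets[i]);
        }
        ids = keptIds.ToArray();
        offsets = keptOffsets.ToArray();
    }

    // In-memory binary search: returns the byte range of the file that can contain the ID,
    // so the on-disk binary search only has to seek within that range.
    public void Narrow(long id, long fileLength, out long low, out long high)
    {
        int pos = Array.BinarySearch(ids, id);
        if (pos >= 0) { low = high = offsets[pos]; return; }  // exact checkpoint hit

        int insert = ~pos;                                    // first checkpoint greater than id
        low = insert == 0 ? 0 : offsets[insert - 1];
        high = insert == ids.Length ? fileLength : offsets[insert];
    }
}

An index like this is what got serialized to disk between runs – which, as described below, turned out to be the problem.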

My benchmark was simply the number of disk reads needed before the answer could be given, and I managed to keep it at about 13, which was a bit better than raw binary search (16 without the cache).

Imagine my surprise when, after all the tests were run, my program took twice as long as the best one!

Of course I started analyzing why this happened, and there were two very interesting conclusions:

1. Loading and Saving

Hard disks nowadays have about 32 MB of cache. One thing I noticed was that when I ran my program twice on the same input, the second run returned instantly – simply because the data being accessed on the disk was already in the cache and could be returned in no time. I assumed the same (perhaps to an even higher degree) would happen to my cache being saved and loaded each time the process started.

I was wrong. Loading and especially saving 512 KB of data takes time. A lot of time if you do it frequently. In our test cases, the program was run a couple of hundred times with different inputs. That means a couple of hundred loads and possibly as many saves (the cache didn't always change, but thanks to some re-balancing it often did).

After removing the cache and reverting to raw binary search, the program was still 25% slower than the winning one – the cache was gone, but one major disadvantage remained.

2. Process startup time

Now we come to the main point that interested me in this story – .NET process startup time.

In native code, the binary EXE file is simply executed. The binary format (PE) of the EXE defines where the code starts (where the bootstrap code begins), so the OS can just go ahead and execute it. In the .NET world, however, all we have is a managed assembly. When you execute a managed EXE, it jumps to an initialization function in mscoree.dll (if I'm not mistaken, it's a Windows DLL, not a framework DLL, so it is present – and will tell you the framework is missing – even if you don't have the framework installed at all). There it is decided which CLR version should be initialized (1.0/2.0/4.0), and only after this initialization does the CLR JIT and execute your managed code. This initialization has to take time. But how much?

First – some baseline tests:

The most basic program, not doing anything – just a main function that exits immediately – executed 500 times using a batch script on my PC.

.NET version   DEBUG     RELEASE
2.0            39,08 s   39,55 s
4.0            37,55 s   36,08 s
4.5            36,25 s   35,51 s

Note here that 4.0 and 4.5 are not really different CLRs – only the framework libraries and the compiler differ.

Having this baseline I tried different things to speed up the startup – first, changing the build target to x86/x64 instead of Any CPU. Result – nothing. The numbers didn't differ by more than a single second (and sometimes in the wrong direction).

Next – removing all the referenced assemblies. Result – the same – no change.

After some more unsuccessful attempts with various optimization and compilation options, which yielded nothing, I found the only thing that made any difference in my tests: NGEN.

Pre-generating the native image (thus removing the need for JIT compilation) sped up the startup time by about 10% (the best-in-class 4.5 build reached 32,63 s). No other optimization attempt produced significant results.

For comparison I wrote a similar program in C++ (a main function that returns immediately) and used the same test procedure. The result – 26,85 s.

A bonus lesson learned (at least for me): before you submit your program to any competition, check the impact of the test procedure on it. It may just make all the difference.


Stability testing is important, but often overlooked – especially in stable products. We had a saying about one of the products I was working on: it had two bugs – one known and one hidden. And every time you fixed either of them, it immediately (and miraculously) returned to the stable state of having two bugs.

After spending about two weeks trying to find and fix one of them, we resolved to find and fix both. Sure enough, there were signs of the other – hidden – bug, namely the number of handles reported by Task Manager. After a couple of days it was reaching 15 000. That's a lot of handles even for a server application. Of course the worst thing was that after a weekend of idleness the handle count didn't go down (unless you restarted the application, after which it started growing again from a nominal ~150).

In situations like these WinDbg is an invaluable ally, and we quickly found out that the handles were mostly handles to stopped threads, not released because the finalizer thread was stuck.

Not only stuck but stuck with a very peculiar stacktrace:

Wait for single object? No wonder it's frozen. But what is it waiting on? One thing worth noticing is the last line of the abbreviated stacktrace, which seems to point at some COM object problem.

Some digging inside revealed the offending piece of code: a WMI query reading the service's start mode.

I really don't like WMI management objects – I find them hard to work with and the API somewhat cryptic – but sometimes there is no way around them. This particular code was being executed as part of the startup configuration on the main thread, which then proceeded to run the main program's timer loop. Now, the thing about management objects is that they sometimes have (or use something that has) destructors (finalizers), which are executed by the finalizer thread.
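The snippet itself isn't reproduced here, but the call in question – reading a service's start mode through WMI – looks roughly like this (a sketch; the class and method names are made up):

using System.Management;   // requires a reference to System.Management.dll

static class ServiceConfig
{
    // Reads a Windows service's start mode ("Auto", "Manual", "Disabled") through WMI.
    // The System.Management types wrap COM objects whose cleanup runs on the finalizer thread.
    public static string ReadStartMode(string serviceName)
    {
        string query = "SELECT StartMode FROM Win32_Service WHERE Name = '" + serviceName + "'";
        var searcher = new ManagementObjectSearcher(query);
        foreach (ManagementObject service in searcher.Get())
        {
            return (string)service["StartMode"];
        }
        return null;
    }
}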

Say your main method looks like this:
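Something along these lines – a sketch, with the loop body assumed:

using System;
using System.Threading;

static class Program
{
    [STAThread]   // the main thread becomes a Single-Threaded Apartment
    static void Main()
    {
        readConfiguration();   // creates COM/WMI objects on the STA main thread

        while (true)           // the main program's timer loop; it never services the STA
        {
            DoPeriodicWork();
            Thread.Sleep(1000);
        }
    }

    static void readConfiguration() { /* uses WMI, e.g. something like ReadStartMode above */ }
    static void DoPeriodicWork() { /* ... */ }
}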

I see a lot of methods like this (especially in software that is about 5-8 years old 😉). If your readConfiguration() method uses something that will access a COM object in a destructor, you're in trouble (your finalizer thread will show the same stacktrace as the one at the beginning of this post).

Now, why is that? The whole issue boils down to the attribute above the Main method – [STAThread]. Your main thread, while creating the COM object, associates it with its own Single-Threaded Apartment. Because of this, when the finalizer thread wants to do something with this COM object, it can't do it directly – it has to go through the thread that created the COM object: your main thread. Your main thread, however, is busy doing other things and not willing to proxy calls to the COM object (even if you do a Thread.Sleep). The end result is your finalizer thread frozen waiting, your handle count growing, and eventually a crash of the application.

There are multiple ways to alleviate this problem. The easiest fix is to just remove [STAThread], provided you don't need it for other COM objects. Another is to execute your COM-object-creating code on another thread that is MTA. I chose to avoid using WMI altogether – we found that reading the service start mode was completely unnecessary and was there only for legacy reasons.
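For the second option, a sketch of what that could look like (ReadStartMode is the hypothetical helper from above, and the service name is made up):

using System.Threading;

string startMode = null;
var worker = new Thread(() => startMode = ServiceConfig.ReadStartMode("SomeService"));
worker.SetApartmentState(ApartmentState.MTA);   // COM objects created here live in the MTA
worker.Start();
worker.Join();

Because the finalizable COM wrappers now belong to the multithreaded apartment, the finalizer thread can clean them up without having to marshal through the main thread.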

One interesting thing I noticed is that if you call GC.WaitForPendingFinalizers() on the main thread, it will indeed wait, but it will also release the finalizer thread from its WaitOne by interacting with the COM object on its behalf.

The other day I was watching a new series of lectures about least-significant-digit (LSD) radix sort given by Robert Sedgewick and Kevin Wayne on Coursera (https://www.coursera.org/course/algs4partII), and it got me thinking – why don't I use it anywhere?

I got so comfy with Java's mergesort and Comparable<T> that I never thought about optimizing that, even in performance-critical code. So I thought I should at least check what the performance impact can be.

LSD radix sort is only useful when the keys you sort on have a reasonable, fixed length, so for my tests I assumed long as THE key data type. The reasoning was the following kind of data type:
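The original class isn't reproduced here; as a stand-in, imagine something with this shape (the name LogEntry and the payload field are made up):

import java.util.Date;

// A record-like object whose natural sort key is its timestamp.
class LogEntry {
    final Date timestamp;
    final String payload;

    LogEntry(Date timestamp, String payload) {
        this.timestamp = timestamp;
        this.payload = payload;
    }
}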

Since a timestamp (a Date object) can be expressed as a long, and it's perfectly reasonable to sort by date, it will do.

There are two additional things I had to define to enable more general sorting:

An interface that allows extracting the sort key from an object (thus allowing various objects to be sorted by various keys – an equivalent of Comparator), and an implementation of it for my data type.
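Again as a sketch rather than the original listing (the names LongKeyExtractor and LogEntryTimestampExtractor are mine), the two pieces could look like this:

// The Comparator-like interface: pulls a 64-bit sort key out of an object.
interface LongKeyExtractor<T> {
    long extractKey(T item);
}

// The implementation for the stand-in data type above: sort by timestamp expressed as a long.
class LogEntryTimestampExtractor implements LongKeyExtractor<LogEntry> {
    @Override
    public long extractKey(LogEntry item) {
        return item.timestamp.getTime();
    }
}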

So now for the sort itself:
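The original listing isn't reproduced here; what follows is a self-contained sketch along the lines described below, using the LongKeyExtractor interface sketched above:

import java.util.Arrays;

final class LsdRadixSort {

    private static final int BITS_PER_PASS = 16;
    private static final int RADIX = 1 << BITS_PER_PASS;          // 65536 counting slots per pass
    private static final int MASK = RADIX - 1;
    private static final int PASSES = Long.SIZE / BITS_PER_PASS;   // 4 passes cover a 64-bit key

    static <T> void sort(T[] items, LongKeyExtractor<T> extractor) {
        int n = items.length;

        // Extract each key once up front; flipping the sign bit makes negative longs
        // order correctly when their digits are treated as unsigned values.
        long[] keys = new long[n];
        for (int i = 0; i < n; i++) {
            keys[i] = extractor.extractKey(items[i]) ^ Long.MIN_VALUE;
        }

        T[] auxItems = Arrays.copyOf(items, n);   // contents get overwritten on the first pass
        long[] auxKeys = new long[n];
        int[] count = new int[RADIX + 1];

        for (int pass = 0; pass < PASSES; pass++) {
            int shift = pass * BITS_PER_PASS;
            Arrays.fill(count, 0);

            // Key-indexed counting on the current 16-bit digit (stable).
            for (int i = 0; i < n; i++) {
                count[(int) ((keys[i] >>> shift) & MASK) + 1]++;
            }
            for (int d = 0; d < RADIX; d++) {
                count[d + 1] += count[d];
            }
            for (int i = 0; i < n; i++) {
                int pos = count[(int) ((keys[i] >>> shift) & MASK)]++;
                auxKeys[pos] = keys[i];
                auxItems[pos] = items[i];
            }

            // Swap source and auxiliary arrays instead of copying back; with an even
            // number of passes the sorted result ends up in the caller's array.
            long[] tmpKeys = keys;  keys = auxKeys;   auxKeys = tmpKeys;
            T[] tmpItems = items;   items = auxItems; auxItems = tmpItems;
        }
    }
}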

So how this works:

We start by extracting the keys into a separate array (to avoid extracting them on every pass). It's an optimization step, but it's worth doing, since calling the extractor method that many times would be expensive (there is, of course, a memory trade-off). Then, for each double octet (16 bits), starting from the least significant one, we do a key-indexed counting sort on the extracted keys, moving the objects themselves into an auxiliary array as well. The result of each pass is an array sorted by that double byte. Because the sort is stable, we can start from the least significant digits and move 'up'.

The method is not overly complicated, but there are a couple of subtleties:

The XOR with the sign bit (the most significant bit of the most significant byte) is required in order to sort negative numbers correctly.

Exchanging the auxiliary array with the initial array (or just the two aux arrays in the case of the keys) after each pass is needed to avoid copying all elements back from the aux array, which is what a typical one-pass key-indexed sort does.

Also note that, thanks to the even number of passes (4), the last pass writes into the input array, so we end up with the sorted elements in the original array and there is no need to copy them back from the aux array.

So, is this any good? Here are the results:

Size         Comparable   LSD Radix
10           0 ms.        10 ms.
100          1 ms.        9 ms.
1_000        3 ms.        6 ms.
10_000       37 ms.       10 ms.
100_000      178 ms.      65 ms.
1_000_000    466 ms.      179 ms.
10_000_000   8882 ms.     2219 ms.

For a small number of elements it's definitely not worth it, but that was expected. Above a certain threshold, though, it definitely makes sense – it's faster than the system sort, and since Java uses mergesort for objects anyway, its memory requirements aren't much bigger than the system sort's (the two aux key arrays – and in performance-critical code, if you want to sacrifice some purity for performance, even those can be avoided).

There is one low-hanging optimization that can double the performance for large arrays – if you can limit your key to a smaller number of bytes, you can get away with fewer passes. For example, I first tested with the Date converted to milliseconds since the start of the Unix epoch, which is a long, but afterwards did some tests with the key calculated as milliseconds relative to January 1st, 2000 (since I knew my data set couldn't contain dates prior to that) and lowered the resolution to seconds, which allowed me to fit it in a 4-byte int.
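As a sketch of that narrower key (the class name is mine; the epoch constant is the standard Unix timestamp of 2000-01-01):

import java.util.concurrent.TimeUnit;

// Seconds relative to 2000-01-01 (UTC) fit comfortably in a 4-byte int for this data set,
// so only two 16-bit passes are needed instead of four.
class NarrowTimestampExtractor implements LongKeyExtractor<LogEntry> {
    private static final long EPOCH_2000_MS = 946684800000L;   // 2000-01-01T00:00:00Z in Unix milliseconds

    @Override
    public long extractKey(LogEntry item) {
        return TimeUnit.MILLISECONDS.toSeconds(item.timestamp.getTime() - EPOCH_2000_MS);
    }
}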

It can't always be done, but as with many optimizations – a lot depends on the inherent properties of your data.

Everyone (OK, nearly everyone, or this wouldn't happen) knows that if you have a multithreaded application and different threads access shared resources (and some of them write), you have to synchronize the access somehow.

Microsoft introduced a neat set of concurrent collections in the framework to help with that, but before ConcurrentDictionary there was the plain old Dictionary. Some time ago I came across a piece of software that was crashing due to lack of proper synchronization (unfortunately this still happens a lot), and it got me thinking: what is the worst that could happen? So I started experimenting with multiple threads and a Dictionary to see in how many ways it can break. What I found surprised me. Apart from the duplicate-key exception and the key-not-found exception, which one may argue are not entirely tied to the lack of synchronization, two failure modes stood out.
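The experiment itself was nothing sophisticated – a sketch of the kind of loop I mean (this is not the exact test program; the thread and iteration counts are arbitrary):

using System;
using System.Collections.Generic;
using System.Threading.Tasks;

class DictionaryTorture
{
    static void Main()
    {
        for (int round = 0; round < 1000; round++)
        {
            var dictionary = new Dictionary<long, int>();   // no synchronization on purpose
            var tasks = new Task[4];
            for (int t = 0; t < tasks.Length; t++)
            {
                tasks[t] = Task.Run(() =>
                {
                    var random = new Random();
                    for (int i = 0; i < 100000; i++)
                    {
                        long key = random.Next(0, 1000);
                        dictionary[key] = i;        // concurrent writes
                        dictionary.Remove(key);     // concurrent removes
                    }
                });
            }
            Task.WaitAll(tasks);
        }
    }
}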

The first one is rather trivial (it's actually the exception I observed in the aforementioned piece of software, so no surprise there):

The collection is being resized while we're still trying to perform operations on it.

The second one is more interesting. At some point my small testing program simply didn't end. I looked at the Task Manager and this is what I saw:

[Task Manager screenshot]

25% CPU utilization – most likely one thread going crazy. I connected with WinDbg, and here are some of the things I saw:

So there is one offending thread – let’s see what it’s doing:

So it seems it's stuck on inserting – actually inside the framework's Dictionary class – spinning!

Let’s find this dictionary and see if it’s corrupted:

This is my dictionary – it was declared as Dictionary<long,int>. There are four of them, but that's because the program was running in a loop, creating dictionaries and trying to break them. The remaining three just haven't been garbage collected yet, as can be observed below.

Let’s see if that indeed is the case:

And sure enough, that's the only one with any GC root at all.

Before we look inside, a word about the structure of Dictionary – there are two important arrays inside:

  • int[] buckets – for each bucket it holds the index into the entries table of the first Entry whose hashCode maps to that bucket
  • Entry[] entries – contains all the entries in a linked-list fashion (the Entry class has a next field pointing to the next Entry)

So the way this works is that when you put in object A, a hashCode is calculated for it, then a bucket index is selected as hashCode % buckets.Length, and the int in the buckets table at that index points at the first entry of that 'bucket' in the entries table. If there is none, yours becomes the first. If there are some, you iterate the whole chain and insert after the last one, updating the 'next' pointer of the previously last element.

Knowing that, we can look at the structure of my Dictionary:

OK – just 10 elements inside the Dictionary. Let's see what the Entries look like:


For brevity I include only two relevant entries (actually only one is relevant). Entry number six – its next entry is number seven (so far so good). Entry number seven – its next is… number seven! Aha! That's really bad. Concurrent inserting (and deleting) without any synchronization has corrupted the Dictionary so that in the entries linked list one of the entries points at itself!

I would imagine Dictionary code for inserting would look something like this:
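Roughly like this – a heavily simplified sketch (not the real framework source; resizing, duplicate-key checks and so on are omitted) that follows the bucket/entries description above:

struct Entry
{
    public int hashCode;
    public int next;     // index of the next entry in the same bucket, -1 at the end of the chain
    public long key;
    public int value;
}

class SimplifiedDictionary
{
    private readonly int[] buckets;   // buckets[b] = index of the first entry in bucket b, -1 if empty
    private readonly Entry[] entries;
    private int count;

    public SimplifiedDictionary(int capacity)
    {
        buckets = new int[capacity];
        for (int i = 0; i < capacity; i++) buckets[i] = -1;
        entries = new Entry[capacity];
    }

    public void Insert(long key, int value)
    {
        int hashCode = key.GetHashCode() & 0x7FFFFFFF;
        int bucket = hashCode % buckets.Length;

        int newIndex = count++;
        entries[newIndex] = new Entry { hashCode = hashCode, next = -1, key = key, value = value };

        if (buckets[bucket] < 0)
        {
            buckets[bucket] = newIndex;   // first entry in this bucket
            return;
        }

        // Walk to the end of the chain and hook the new entry after the last element.
        // If an entry's 'next' ever points back at itself – like entry number seven above –
        // this loop never finds the end and spins forever.
        int i = buckets[bucket];
        while (entries[i].next >= 0)
        {
            i = entries[i].next;
        }
        entries[i].next = newIndex;
    }
}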

With such a corrupted structure the insertion falls into an infinite loop, spinning on next.next.next… (which is exactly what I observed).

If you know any other way this could go wrong, send me an email or leave a comment. I’m sure there has to be some other ill behavior resulting from lack of proper synchronization.

Save early, save often. It sounds almost like gospel, and having experienced various software crashes I usually make sure I save often, make backups and so on. The WordPress blog editor is a little different though. Being used to the fact that in a browser CTRL+S saves the HTML document of the page rather than anything else, I didn't use it (actually, in the WordPress post editor CTRL+S behaves as you would expect it to – it saves a draft).

So after writing for an hour while working on the previous post, jumping between Eclipse and Chrome to report on test results, I clicked 'Distraction Free Writing mode', which is basically a fullscreen mode. Instead of turning on fullscreen, the editor blinked and the edited text disappeared. CTRL+Z didn't work, there was nothing left of the post in the page source, and a brief look into the database showed there was nothing there for this post either. I was doomed – an hour of work gone.

Then I thought – sure, it disappeared, but it has to be somewhere in process memory still (some cache, not yet cleared). So I fired up WinDbg and attached to the Chrome process (Chrome runs tabs in separate processes, and it has a task manager which lets you see which process corresponds to which tab). Thankfully I remembered some parts of what I was writing, so I searched for "ArrayList implementation":

0:007> s -a 0 L?80000000 "ArrayList implementation"

Search for an ASCII string (-a) in addresses starting from 0, for 80000000 bytes (L signifies length), matching "ArrayList implementation". And sure enough I got lots of results (~20 of them).

All that was left to do was to trace back to the beginning of the text (“ArrayList implementation” was somewhere in the middle) and dump that memory:

0:007> da 08bdbd10 L40000

0a53d418 “I’ve always thought that Java is”
0a53d438 ” in the wrong by not allowing ge”
0a53d458 “neric collections of primitive t”

I actually managed to recover all of the text, thus saving myself an hour of work and five tons of frustration. After that I googled how to turn on auto-save and revisions.

BTW. If you know an easier way to do this – drop me a line, I’ll update the post to include the easiest method.


EDIT 1:

Jack Z suggested T-Search, which is really a cheating engine for games (one that lets you freeze your health, ammo or whatever) – it's got a nice UI and would probably be easier and faster to use than WinDbg. It has a memory browser with an ASCII search function.