Friday, January 14, 2011

Premature Optimization and Ruby's Singleton Class

Last week I wrote a blog post on how you can dynamically extend an object using Ruby's Singleton (or Eigen) class.  To me, the ability to dynamically add methods to an object instance is a powerful capability that sets Ruby apart from similar languages.  I was surprised by two themes that came up in comments on the post and in discussions I had with various other folks:
  1. Defining a class with the methods and instantiating it with the string is a better alternative.
  2. Extending objects dynamically is very slow and should be avoided for performance reasons.
Let's examine the first theme, which suggests that there is an inherent problem with dynamically modifying the object.  I find that this is where classical object-oriented training in statically typed languages clouds the mind.  Let us look at an example to illustrate the problem.  Suppose we want to find all of the vowels, consonants, digits, and upper- and lowercase letters in a randomly generated string.  A solution using the more traditional approach might be:

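A sketch of that class-based approach follows; the class name and the exact regexes are my own illustrative choices, not the original code:

```ruby
# Traditional approach: define a class, instantiate it with the string,
# and put the analysis methods on the class.
class AnalyzedString
  def initialize(str)
    @str = str
  end

  def vowels
    @str.scan(/[aeiou]/i).join
  end

  def consonants
    @str.scan(/[b-df-hj-np-tv-z]/i).join
  end

  def capitals
    @str.scan(/[A-Z]/).join
  end

  def lowercase
    @str.scan(/[a-z]/).join
  end

  def numbers
    @str.scan(/[0-9]/).join
  end
end

# Build a random alphanumeric string, then wrap it in the class.
chars  = [*'a'..'z', *'A'..'Z', *'0'..'9']
random = Array.new(20) { chars.sample }.join
str    = AnalyzedString.new(random)

puts "consonants: #{str.consonants}"
puts "vowels: #{str.vowels}"
puts "capitals: #{str.capitals}"
puts "lowercase: #{str.lowercase}"
puts "numbers: #{str.numbers}"
```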
We have created a random string and then used it as the source to instantiate a new object whose methods answer our questions in an expressive, readable manner.  There would be nothing wrong with this solution if we were writing the code in Java or C#, where the type of an object is important.  In Ruby, however, the creation of the class provides zero benefit to the desired solution.

The alternative that I proposed in my previous blog post would implement the current example in the following manner:

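A sketch of that alternative: the same methods are added directly to this one string instance at runtime through its singleton class, so no named class is defined (the regexes mirror the illustrative ones above):

```ruby
# Build the random string first.
chars  = [*'a'..'z', *'A'..'Z', *'0'..'9']
random = Array.new(20) { chars.sample }.join

# Open the singleton class of this one instance and add the methods.
# Inside these methods, self is the string itself, so we can call
# String#scan directly.  Other strings are unaffected.
class << random
  def vowels;     scan(/[aeiou]/i).join           end
  def consonants; scan(/[b-df-hj-np-tv-z]/i).join end
  def capitals;   scan(/[A-Z]/).join              end
  def lowercase;  scan(/[a-z]/).join              end
  def numbers;    scan(/[0-9]/).join              end
end

puts "consonants: #{random.consonants}"
puts "vowels: #{random.vowels}"
puts "capitals: #{random.capitals}"
puts "lowercase: #{random.lowercase}"
puts "numbers: #{random.numbers}"
```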
Both methods produce the same result:
consonants: ffpLCBVXBnfcgpy
vowels: Oi
capitals: LOCBVXB
lowercase: ffpnifcgpy
numbers: 222

So now that we have shown that we don't need to create a class to improve our code, let's address the second point.  To me this point touches on the rule that premature optimization is the root of all evil.  I remember a debate a few years back about whether Ruby was as fast as Perl.  Of course Perl was faster at the time, but my opinion was: who cares?  My argument was that 99% of the applications being developed would never appreciably notice the difference in execution time, so why make a decision based on a minuscule performance benefit?  That is not intelligent.  For the cases where it does matter, one thing we should have learned is that language improvements and Moore's Law will improve performance without changes to the code.  If you can't wait and still need to eke out every last ounce of performance, then you should probably be dusting off your C chops at that point.

For those who still disagree with me, let's go ahead and benchmark the various implementations: using a new class, using the object's Singleton class, and just invoking regex matches on the string object itself.  The following script was used for the benchmarking to gain insight into the performance of each implementation:

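A sketch of such a benchmark harness, using the standard library's Benchmark module; the fixed stand-in string is consistent with the sample output above, and the iteration count is lowered here so the sketch runs quickly (set N to 1_000_000 to match the runs reported below):

```ruby
require 'benchmark'

N = 100_000  # the runs reported below used 1,000,000 iterations

# Class-based implementation, as sketched earlier (vowels only, for brevity).
class AnalyzedString
  def initialize(str); @str = str end
  def vowels; @str.scan(/[aeiou]/i).join end
end

str = "ffpLOCBVXB2nif2cgpy2"

Benchmark.bm(14) do |b|
  # Extend a fresh copy of the string via its singleton class each time,
  # then call the added method.
  b.report('singleton:') do
    N.times do
      s = str.dup
      class << s
        def vowels; scan(/[aeiou]/i).join end
      end
      s.vowels
    end
  end

  # Run the regex directly on the string object itself.
  b.report('regex:') do
    N.times { str.scan(/[aeiou]/i).join }
  end

  # Instantiate the class with the string and call its method.
  b.report('regular class:') do
    N.times { AnalyzedString.new(str).vowels }
  end
end
```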
Here are the results for 1 million iterations using Ruby 1.8.7:
      user     system      total        real
singleton:     133.460000   0.260000 133.720000 (133.876499)
regex:         110.420000   0.190000 110.610000 (110.698841)
regular class: 114.500000   0.170000 114.670000 (114.721063)
and using Ruby 1.9.2:
      user     system      total        real
singleton:      84.540000   0.180000  84.720000 ( 84.629888)
regex:          62.440000   0.130000  62.570000 ( 62.509541)
regular class:  68.990000   0.150000  69.140000 ( 69.053286)
Looking at just the last set of numbers, for Ruby 1.9.2, the fastest method over the 1 million iterations was of course just using the regex directly on the random string.  Coming in 10.5% slower was the instantiation of a class from the string and invoking the methods defined on it.  The Singleton implementation was 22.5% slower than the Class approach.

This may seem like quite a significant reduction in performance, but another view is to look at the time difference per iteration.  The Singleton approach takes 0.0000155 seconds more per iteration than the Class one.  I would argue this is more than good enough for 99% of applications being developed.  It is even less of an issue when you put this in the context of a practical example.  This all started when I showed how code looping through files in a directory could be cleaned up by extending each String object holding a filename.  Most file systems allow only about 32k files in a directory, roughly 30 times fewer than our sample iteration size.  And this was benchmarked on my OS X laptop, so imagine the numbers running on a standard server.  The differences in that situation would be even more negligible.

It is interesting to point out how much faster 1.9.2 is than 1.8.7.  In this case, just switching to the latest version resulted in a 36% performance boost that was free.  Not every future release will bring a sizable benefit, but it is something to consider.

The main takeaway is that the code in the last blog post was a perfectly valid solution that takes advantage of the unique idea of using an object's Singleton class to add functionality to it at runtime.  The performance difference is totally negligible for most cases.  It might be interesting, from a purely academic viewpoint, to see where the interpreter is spending the extra time, but in the examples we are working with it isn't a consideration.  I think this would make Knuth proud!