Modeling and Performance

Scott Francis
The interpreted vs. compiled debate has been going on for a long time now.  Keith Swenson brings up a version of it on his Go Flow blog in this post on Model Strategy and Performance.  I think at any given point in time, the compilers have a very good argument, because they can always find situations where a higher degree of performance is required.  Even in the BPM world, if I can imagine a process that executes 10,000 times an hour, just picture that same process at Wal-mart running 10,000,000 times an hour!  However, the right design decisions for software are not judged at a single point in time; they are judged over time… and over time, the interpreters have a lot working in their favor:
  1. Over time, the interpreters have time to make their interpretations smarter.  Just-in-time compilation of Java is one example… finding better algorithms (in terms of big-O notation) is another approach…
  2. Over time, hardware gets faster.  My interpreted code will run some percentage faster each year.  Although compiled code will also increase in speed, the real time differential on any given problem will get smaller (and eventually, small enough that I won’t care).  Examples of this: graphical user interfaces, once too slow to even ponder using, now come with all kinds of bells and whistles – even though command-line interfaces are still faster!  Java is another example…
  3. Over time, hardware gets cheaper, especially if you are looking at cost per FLOP or per operation.  As a result, even if I have to buy more hardware, in a few years I can buy twice the CPUs at roughly half the cost… and those CPUs are faster too (or use multiple cores to get faster).
  4. Developer time and maintenance time, as costs, usually outweigh the cost of the additional hardware needed to provision a system.  When I was in college, a professor demonstrated that a human can still write better (faster) code than a compiler.  He had experts at the university (and named them) write performance-optimal solutions for a matrix fill routine.  The Lisp program was the simplest, but also the slowest.  The C routine was 3x faster than the Lisp version.  But then our professor wrote an assembly solution that was yet another third faster than the C version; he did the loop unrolling by hand, for example (see the sketch after this list).  Well, why don’t we just write all our important apps in assembly?  Because it would be an incredibly inefficient use of a scarce resource: namely, our software developers (and, in the case of BPM solutions, our business analysts)!
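
To make the loop-unrolling point concrete, here is a small, hypothetical sketch (not the professor’s actual code, and in Java rather than C or assembly) of the same idea: trading readability for fewer loop-overhead instructions by filling four elements per iteration.

```java
public class MatrixFill {

    // Straightforward fill: one element per iteration, easy to read.
    static void fillSimple(double[] row, double value) {
        for (int i = 0; i < row.length; i++) {
            row[i] = value;
        }
    }

    // Hand-unrolled fill: four elements per iteration, so the loop test and
    // increment are paid a quarter as often.  Harder to read and maintain.
    static void fillUnrolled(double[] row, double value) {
        int i = 0;
        int limit = row.length - (row.length % 4);
        for (; i < limit; i += 4) {
            row[i]     = value;
            row[i + 1] = value;
            row[i + 2] = value;
            row[i + 3] = value;
        }
        for (; i < row.length; i++) {   // pick up any leftover elements
            row[i] = value;
        }
    }

    public static void main(String[] args) {
        double[] row = new double[1_000];
        fillUnrolled(row, 1.0);
        System.out.println(row[999]);   // 1.0
    }
}
```

Modern compilers and JITs will often do this transformation for you, which is exactly the point: the hand-optimized version buys a one-time speedup at a permanent cost in clarity.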
Given those reasons (and others), my general thesis on interpretation vs. compiling is: first, come up with the best/right representation from the perspective of the authors and consumers of that format (typically the developers, and in the case of BPM, the business analysts, process owners, and process developers).  If performance is a problem, invest first in the bottlenecks or the areas with the most yield.  If necessary, figure out how to compile or just-in-time-compile the model for performance.  But this performance-enhancement work is a lot easier to do on the “right representation” than it is to bake performance in up front and still end up with the “right representation” for users… The short story is, transforming your model may yield performance benefits in the short term, but if those benefits come at the cost of a good representation of your process, then over time you’ll lose to those who invested in the right representation and gave the rest of the technology stack time to catch up on the performance front.
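
As a rough illustration of that thesis (all names here are hypothetical, not any particular BPM product’s API), the sketch below keeps the process model in a plain, author-friendly form and runs it with a small interpreter; a compilation or JIT step could later translate the same model into faster code without changing what the analysts author.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.UnaryOperator;

// Hypothetical sketch: the model stays close to how its authors think about it
// (an ordered list of named steps), and the engine simply interprets it.
public class ModelInterpreter {

    // One step of the process: a readable name plus the work it does to the case data.
    record Step(String name, UnaryOperator<Map<String, Object>> work) {}

    // Interpret the model directly, one step at a time.
    static Map<String, Object> run(List<Step> model, Map<String, Object> caseData) {
        Map<String, Object> data = caseData;
        for (Step step : model) {
            data = step.work().apply(data);
        }
        return data;
    }

    public static void main(String[] args) {
        List<Step> orderProcess = List.of(
            new Step("capture-order", d -> { d.put("status", "captured"); return d; }),
            new Step("approve-order", d -> { d.put("status", "approved"); return d; })
        );
        System.out.println(run(orderProcess, new HashMap<>())); // {status=approved}
    }
}
```

If that interpreter ever became the bottleneck, a later pass could translate the same list of steps into generated code – which is the “compile the model when you actually need to” half of the argument.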
