Over-Optimizing – Effective Business Process User Interfaces (Part 2)

Scott Francis

In Lean thinking, one of the cardinal sins is overproduction. In BPM thinking, we always assume that the requirements are wrong. When you take those two principles together, it should be clear that an iterative approach, with frequent testing of our hypotheses and direction, is warranted.

Over-optimizing is just one form of overproduction.  What does over-optimizing mean in the context of a BPM project?  Usually it means optimizing for a problem that might happen in the future, rather than for known problems in the present, often on the assumption that optimizing later will be harder than optimizing now.  Some common examples:

  • Assuming we will have performance challenges in the future, and designing and building for them up-front, at the cost of a clean design or of reusing off-the-shelf code or software.  I’ve seen lots of suboptimal technology decisions made because people assumed a performance problem would manifest, an assumption that was often proved wrong once the solution was implemented.
  • Assuming that we will have scalability challenges in the future, and designing and building for them up-front.  It is much more efficient to pay the cost of scaling only when the need for scaling arrives, rather than before we know whether our new process will be adopted.  A classic case is the “eventually we’ll have 100,000 users” scenario, when for the first year it is 300…
  • Assuming that we’ll need extremely efficient work routing, with layers of complexity to make that routing finely grained… in reality, we should leave the design simple and open-ended so it can adapt to whatever the future work-routing requirements turn out to be (a minimal sketch follows this list).
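
To make that last point concrete, here is a minimal sketch in Python of what “simple and open-ended” routing can look like. The queue names, categories, and `route` function are hypothetical examples, not taken from any particular BPM product: work is routed on a single attribute through one lookup table, and a new routing requirement costs one new entry rather than a redesign.

```python
# A minimal sketch of "simple and open-ended" work routing.
# All names here (queues, categories, route) are hypothetical examples.

DEFAULT_QUEUE = "general"

# One table to extend when new routing requirements actually arrive,
# instead of layers of routing rules built up front.
ROUTING_TABLE = {
    "claims": "claims_team",
    "underwriting": "underwriting_team",
}

def route(work_item: dict) -> str:
    """Return the queue for a work item based on a single attribute."""
    return ROUTING_TABLE.get(work_item.get("category"), DEFAULT_QUEUE)

# Usage:
#   route({"category": "claims"})   -> "claims_team"
#   route({"category": "unknown"})  -> "general"
```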

In each case, we’re in a sense borrowing trouble from the future.  We’re not skating to where the puck will be, we’re skating to one of the possible places a puck could go on the ice – and that’s not productive.

The most important quality of a BPM solution is that it can be easily understood and maintained by the organization responsible for the health of the business processes.  Any trade-offs against that goal should be made very cautiously, because the maintenance costs of a complex solution are ongoing.

So, the next time you’re worried about performance, take a two-pronged approach:

  1. First, make sure your solution is designed as cleanly as possible.  Should you suffer performance problems, a clean design means the system can be more easily understood, bottlenecks more easily identified, and solutions more easily proposed.  Spaghetti code is a killer when it comes to performance optimization.
  2. Second, construct a sandbox test so that you can get an idea of the order of magnitude of the problem (see the sketch after this list).  The cautionary note here is that you have to be very careful how you extrapolate (good or bad) from the test. A sandbox test can show you some of the likely issues or bottlenecks, but it isn’t necessarily going to reflect real-world use cases and real-world experiences, which could be better or worse.  Still, identifying problem areas early could help speed resolution later if they turn out to be real problems.
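
To illustrate the sandbox idea, here is a minimal sketch in Python. `process_work_item` and the batch size are hypothetical stand-ins for the real process step, and the number it prints is only a rough signal of where bottlenecks might be, not a prediction of real-world behavior.

```python
# A minimal sandbox-test sketch: time a batch of work items to get an
# order-of-magnitude throughput number. process_work_item is a
# hypothetical stand-in for the real process step under test.

import time

def process_work_item(item_id: int) -> None:
    # Placeholder workload; replace with a call into the real step.
    sum(i * i for i in range(10_000))

def sandbox_run(n_items: int = 1_000) -> float:
    """Process n_items and return the measured items-per-second rate."""
    start = time.perf_counter()
    for item_id in range(n_items):
        process_work_item(item_id)
    elapsed = time.perf_counter() - start
    return n_items / elapsed

if __name__ == "__main__":
    # Treat the result as a rough signal, not a real-world prediction.
    print(f"sandbox throughput: ~{sandbox_run():,.0f} items/second")
```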

Finally, make sure that *before* you fix a performance problem, you write a test that demonstrates it – and then prove that your fix makes the test faster or makes the test pass the requirement.  If you don’t test, you’re not dealing with real data and real numbers; you’re just dealing with product-knowledge lore.
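
Here is a minimal sketch of what that test-first step can look like, using Python’s unittest. The `myprocess` module, the `process_batch` function, and the two-second requirement are all hypothetical placeholders; substitute your own step and your own requirement.

```python
# A minimal sketch of writing the test before the fix. The module
# myprocess, the function process_batch, and the 2-second requirement
# are hypothetical placeholders.

import time
import unittest

from myprocess import process_batch  # hypothetical module under test

class BatchPerformanceTest(unittest.TestCase):
    REQUIREMENT_SECONDS = 2.0  # assumed requirement, for illustration only
    BATCH_SIZE = 1_000

    def test_batch_meets_requirement(self):
        start = time.perf_counter()
        process_batch(range(self.BATCH_SIZE))
        elapsed = time.perf_counter() - start
        # Run this before the fix to demonstrate the problem with real
        # numbers, then again after the fix to prove the improvement.
        self.assertLess(
            elapsed,
            self.REQUIREMENT_SECONDS,
            f"processed {self.BATCH_SIZE} items in {elapsed:.2f}s",
        )

if __name__ == "__main__":
    unittest.main()
```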
