Process Optimization Revisited

Scott Francis
Rashid Khan, formerly of Ultimus, recently posted a complaint about “The Hype about Simulation and Optimization”. He has a clear vendor perspective on this subject – he understands why prospects and analysts like simulation and optimization (and round-tripping) so much – but he laments that these two groups of people are so caught up in a set of features he views as largely hype rather than substance. He’s particularly hard on Sinur, but that’s not surprising – Sinur is simply the most vocal and published analyst on the subject of simulation, optimization, and their value to business, so he’s the natural foil for this dialog. I look at Sinur’s comments as more aspirational, attempting to move the market toward better simulation and optimization capabilities, rather than reflecting exactly what a user’s experience with simulation and optimization products will be. Mr. Khan rightly points out that if you are doing pure simulation, you have a lot of work to do:
Yes, BPMS can do simulation. However the results of simulation will depend entirely on the assumptions the business analyst makes about a large number of parameters. For example the business analyst has to make assumptions about the following for each step in the process:
  • The Task Time, which is the time to actually do the task at the step
  • Number of resources: How many people are available to do the task, and are they dedicated or shared with other tasks
  • The cost of each resource
  • The probability that the step will be activated, in case it is conditional
  • The probability that the user will send the case backwards, because in real life things often go backwards instead of always going forward as planned.
These assumptions have to be made for all the steps in the process. So if there are 30 steps, there are at least 150 assumptions! That is no small task that “business folks” will engage in lightly.
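To make that list concrete, here is a minimal sketch, in Python, of what those per-step assumptions look like once they become simulation inputs. Every name and number below is invented for illustration, and the model is deliberately naive – it ignores resource contention, which a real simulator would model as queueing.

```python
import random

# A toy Monte Carlo simulation of a three-step process, using the five
# per-step assumptions from Mr. Khan's list. All numbers are invented.
# Note: resource counts are listed but not used here; a real simulator
# would model queueing when all of a step's resources are busy.
steps = [
    # (name, task_time_hrs, resources, cost_per_hr, p_activated, p_sent_back)
    ("Intake review",  0.50, 3, 40.0, 1.00, 0.05),
    ("Credit check",   1.00, 2, 55.0, 0.80, 0.10),
    ("Final approval", 0.25, 1, 90.0, 1.00, 0.02),
]

def simulate_instance():
    """Walk one instance through the steps, rolling the dice on conditional
    activation and on rework, accumulating time and cost as we go."""
    total_time = total_cost = 0.0
    i = 0
    while i < len(steps):
        _, task_time, _, cost_per_hr, p_activated, p_sent_back = steps[i]
        if random.random() < p_activated:      # is this conditional step taken?
            total_time += task_time
            total_cost += task_time * cost_per_hr
            if i > 0 and random.random() < p_sent_back:
                i -= 1                          # the case goes backwards (rework)
                continue
        i += 1
    return total_time, total_cost

runs = [simulate_instance() for _ in range(10_000)]
print(f"avg cycle time: {sum(t for t, _ in runs) / len(runs):.2f} hrs")
print(f"avg cost:      ${sum(c for _, c in runs) / len(runs):.2f}")
```

Even this toy makes the point: three steps already mean fifteen assumptions, and every one of them shapes the answer.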
Of course, he’s right if you’re starting without historical data or statistics with which to populate your simulation. And if you make bad assumptions, you’ll get bad results. So far, so good. I’m with Mr. Khan in all respects – pure simulation is hard work. You have to know that the simulation has a probability of paying off before you take on this kind of homework. It isn’t a typical business-user task to fill in this kind of information accurately (though I have seen a six sigma practitioner provide this kind of data to good effect). However, Mr. Khan goes on to state that even round-tripping is a fantasy. Round-tripping, with respect to simulation/optimization, is the concept of using real data from your process execution to populate your simulation or optimization scenario. I believe Lombardi was the first vendor to include this round-tripping functionality in its core product offering, without having to use any third-party reporting, simulation, or analysis tools (thus, part of putting the S for Suite in BPMS). So what’s the advantage? Real, measured data is feeding your simulation or optimization exercise. Mr. Khan says not so fast:
The most important parameter for simulation and optimization is Task Time, which is the actual time required to perform a task. One has to know the actual Task Time in order to run any kind of simulation.  However, there is no BPM software that I know of that measures Task Time, because it is simply not possible to do it. Consider the challenge. Most user steps in a process use some type of electronic form. If the software has to measure how much time the user took to complete the task, what does it measure? It cannot simply measure the time the form was open, because the user could have the form open and be having a conversation with her colleague. Or the user could open the form, understand the task, close the form and be thinking or researching the task with the form closed. There is no accurate way to measure Task Time. Therefore BPM systems simply do not measure Task Time because they cannot!
But here I have to disagree. First, I’ll concede that actual task time cannot be measured by a BPMS, for just the reasons Mr. Khan suggests – and yet. And yet, why does it matter if the user took a phone call in the middle of the task? Won’t that get averaged in with tasks where they didn’t do any such thing? Isn’t that an accurate recording of how long the work took for an average worker of average productivity? We are not, after all, automatons. Some BPMS offerings will not capture the difference between a user taking two days to do a task, versus opening the form for 10 seconds and then again two days later for 30 seconds to finish the work. That *is* a problem for measuring task time. However, measuring the time that an average process sits at a particular task waiting to be finished (or started) can be useful to optimization, because from a process perspective, I don’t care why the task took a long time; I just want it to get done faster (if that is a goal of the process) and meet SLAs. So, by measuring it, I can manage it (see the sketch after this discussion).

Second, task time is not the most important parameter for simulation. Granted, lots of improvement projects focus on task time (is this a holdover from a manufacturing focus, or a misunderstanding of what makes a good process?). Most BPMS tools in fact encourage over-focus on task time by attempting to measure and report it – primarily because task time is something that all processes have. Not all processes have a loan amount, or a hire/no-hire decision, or a medical outcome. The most important parameter for simulation and optimization will depend on the process. For example, if you care most about the time to complete the process, then you want to measure TOTAL process time, not individual task time, and look for the factors in your process that correlate with longer process time. Those factors may be particular inputs: a particular contract type, a particular region of the country, the team the work was assigned to, the person assigned to a particular part of the process, etc. Those correlations are all going to be more interesting than task time in a vacuum.

But suppose you wish to optimize on throughput. If so, you’re looking for where instances of your process stack up, and how to smooth out those bottlenecks. You might look to augment the staff that handle the parts of the process that back up, or you might look to improve the process upstream to make their job easier, or to route some percentage of instances around that bottleneck through automated processing. Or, you might again go back to correlating business process variables with the instances that get stacked up in a bottleneck. But essentially a histogram of where your instances are waiting (or a heat map) will be your most useful tool here.

But what you might be trying to do is optimize or simulate around outcomes. And that, to me, is where things really get interesting. This is where we can attempt to understand the relationship between inputs and outputs and look to optimize on good values for those outputs. At the core, this is applying six sigma techniques to a process (at least, applying the statistical tools). But there is a bit of art to the science as well.

I do agree with Mr. Khan on many of his further points: simulation and optimization, while useful, are not easy in practice. You are likely to need the help of skilled specialists in either the tools or statistics, or both.
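Since the measurement argument above leans on event data, here is a rough sketch of what round-tripping can look like in practice, assuming your BPMS can export an event log of task assignments and completions. The field names, the log itself, and the numbers are all hypothetical – no particular vendor’s API is implied. The sketch computes the elapsed time each task sat in flight (the measurable, manageable number) and then groups total cycle time by an input attribute to hunt for the correlations discussed above.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical event log: (instance_id, task, assigned_hr, completed_hr, region).
# Timestamps are hours from process start, to keep the example tiny.
event_log = [
    (1, "Intake review",   0.0,  2.0, "west"),
    (1, "Credit check",    2.0, 30.0, "west"),
    (1, "Final approval", 30.0, 31.0, "west"),
    (2, "Intake review",   0.0,  1.0, "east"),
    (2, "Credit check",    1.0,  6.0, "east"),
    (2, "Final approval",  6.0,  7.0, "east"),
]

# 1. Elapsed time per task: not "true" task time (phone calls and research
#    get averaged in), but exactly the number you can manage against an SLA.
elapsed = defaultdict(list)
for _, task, assigned, completed, _ in event_log:
    elapsed[task].append(completed - assigned)
for task, times in sorted(elapsed.items()):
    print(f"{task:15s} avg elapsed: {mean(times):5.1f} hrs")

# 2. Total cycle time per instance, grouped by an input attribute (region),
#    to look for the correlations that beat task time in a vacuum.
spans = defaultdict(lambda: [float("inf"), 0.0, None])  # inst -> [start, end, region]
for inst, _, assigned, completed, region in event_log:
    start, end, _ = spans[inst]
    spans[inst] = [min(start, assigned), max(end, completed), region]
by_region = defaultdict(list)
for start, end, region in spans.values():
    by_region[region].append(end - start)
for region, times in sorted(by_region.items()):
    print(f"region {region}: avg cycle time {mean(times):5.1f} hrs")
```

The throughput view falls out of the same log: a histogram of where instances are waiting is just a count, per task, of work assigned but not yet completed.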
There IS too much hype around the vendor tools – many of them can’t handle the really complex or high-volume processes that customers are deploying. I, too, believe that actual adoption of simulation and optimization tools from BPMS suites is quite low – lots of playing with these tools, but not a lot of focused effort behind using them.

There’s a great anecdotal argument at the end of his post about Sinur’s view that the state of Arizona should run a simulation of what the effect of cutting $1.2 billion would be, with Mr. Khan pointing out how difficult that would be to simulate. I would say, indeed – with a BPMS tool, impossible! However, in Austin, Texas, the city often outsources simulation and modeling of scenarios to a group of people who have been quite successful at advising the city and businesses here: economists. And it makes sense – modeling and simulating economic outcomes is just different than modeling and simulating business processes!

A great, thought-provoking article from Mr. Khan; I just had to add my two cents. I think that simulation, optimization, and round-tripping have a lot of utility if you focus on the right things, and if you have help from people with the right skills (or you have them yourself).
