A Model's Beauty is in the Eye of the Beholder

  • August 21, 2008
  • Scott
  • 3 Comments

The case for modeling without thought of execution…

I recently came across a blog entry from IDS Scheer on their Aris BPM Blog. Thanks to Sandy Kemsley for pointing me to it from her blog. Upon first reading the article by Sebastian Stein, I was struck by the difference in perspective between those who implement processes and those who model them. For those who model (Modelers), the Model is the chief output; the goal is a Model that will survive the test of time. You can see that bias throughout the post. In fact, the core philosophy is embodied right here:

“A business process model, depicted in one of the popular notations like BPMN or EPC, should not contain any technical details. If the underlying IT infrastructure or implementation technology changes, the business process model should remain stable. Your warning bells should ring if you have to change your business process just because you changed the implementation technology used.”

The two key points:

  1. No technical details
  2. Stability with respect to technology changes

Something Overlooked by a Model-only Perspective…

But there are some problems with this.

First, all the BPMN/BPMS tools that I have worked with support layering of processes. This layering allows the user to create a model that reflects Business sensibilities at the top layer and, if needed, several layers of detail below it. So if your need is to model something without “any technical details,” the BPMN-oriented tools that I’ve used do not prevent you from doing so.

Second, when you get to a certain level of detail, the process design should be informed by Technology. How so? It is important to understand whether a transition is a manual or an automated one. Is it a non-value-added manual step? Then generally we want to automate it, or ideally remove it. A value-added manual step? Then generally we want to optimize around its constraints, and while automation won’t be the goal, we may still want to use technology to reduce errors, improve time-to-execute, etc. In the posting, Sebastian doesn’t go into detail as to what he considers a “technical detail,” but it does raise the question: what is too technical? How about input and output data from a step in the process? These are critical process design considerations (if you know that a piece of data is required as an input, but you’re not sure where it comes from, you have a problem to resolve in your process design). And those inputs and outputs help define the “contract” of an activity or subprocess (or even of the entire process).

Third, Modeling tools today make it exceedingly easy to change a Model to adapt to Process changes. While it seems like a good idea to have a Model that is “stable” with respect to technology changes, the fact is that business processes change faster and more often than the technologies and systems that support them. The real problem isn’t keeping the Process consistent across technology changes; the problem is that the underlying technology may not be flexible enough to adapt to the new process model! At the least, the technology layer is often not agile enough to do so at a sufficiently affordable price and on a sufficiently short timeline (unless, of course, that process technology layer is a good BPMS).

Fourth, the resilience one truly needs is with respect to performance data. Performance data analysis is what will drive my process improvement activities, or identify a process operating outside control limits. I need to be able to compare the performance of my process now to its performance next year, and to its performance last year. If my process changes dramatically, how do I do that? Note: I’m not saying the technology changed. The process changed. So what I need is a way to track data that will make sense even in the face of relatively substantial changes in my process. BPMS tools can provide this facility, either baked in or via smart modeling practices, by taking snapshots of data at key milestones in the process that are not likely to change semantically, even while the syntax (the specific steps) of the process may change. To this end, even though the order entry portion of the process may change dramatically, you can still track the number of orders in, the value of those orders, the time it takes to process them, etc., even as order entry goes from highly manual to highly automated to web self-service (or encompasses all three).
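To make the milestone-snapshot idea concrete, here is a minimal sketch in Java. All names here are hypothetical, and no particular BPMS API is implied; the point is simply that the metric is keyed to a stable semantic milestone rather than to any specific process step:

```java
import java.math.BigDecimal;
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;

// Hypothetical illustration: a snapshot captured at a semantic milestone
// ("order received") rather than at a specific process step, so the metric
// survives redesigns of the order-entry portion of the process.
record MilestoneSnapshot(
        String processInstanceId,
        String milestone,        // e.g. "ORDER_RECEIVED": stable across redesigns
        BigDecimal orderValue,   // the business data we want to trend over time
        Instant capturedAt) {}

class PerformanceTracker {
    private final List<MilestoneSnapshot> snapshots = new ArrayList<>();

    // Called from whatever step currently completes order entry,
    // whether manual, automated, or web self-service.
    void recordOrderReceived(String instanceId, BigDecimal value) {
        snapshots.add(new MilestoneSnapshot(
                instanceId, "ORDER_RECEIVED", value, Instant.now()));
    }

    // Year-over-year comparisons stay meaningful because they are keyed
    // to the milestone, not to the (changeable) steps that led up to it.
    long ordersReceived() {
        return snapshots.stream()
                .filter(s -> s.milestone().equals("ORDER_RECEIVED"))
                .count();
    }
}
```

Because the snapshot is keyed to the milestone rather than to specific steps, the same queries keep working as the order-entry process itself evolves.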

How do we Sum it up?

So the argument is that a modeling-only tool buys you a benefit (stability against technical change) that you don’t need, while not providing a benefit (technical agility with respect to business process changes) that you do need, and it still doesn’t address the key stability need: that of the measured process performance data. Moreover, the integration from most modeling tools to an actual functioning BPMS is, for the most part, non-existent from a practical perspective. Even when that integration exists, the model usually lacks process execution sensibilities. A model can faithfully represent the business needs and still be impossible to execute because of ambiguities and inconsistencies. For the best integrations I’ve seen so far, the products and the integration are all written by one vendor. (I’m definitely interested in seeing examples of this kind of tooling and integration, and I’d be happy to write up reviews for such.)

I’ve actually written an import to a BPMS suite using an Aris model as a starting point, and it’s hard! There is a ton of non-relevant data in the export (positioning information, for example), and other information you need is difficult to lay hands on (roles/ownership). To be fair, this wasn’t a BPMN diagram in Aris, but it WAS a diagram of a process, in a very unstructured environment. It wasn’t any easier than parsing it out of Visio VDX files. My recommendation is that if you are given a process modeled in a modeling-only tool, your first instinct should be to redraw that process in your execution modeling environment rather than try to import it (unless the importer ships with your product, in which case, give it a try!). You’ll be surprised how fast you can recreate the model in your execution environment.
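To give a flavor of why such imports are painful, here is a minimal sketch of the general problem, with invented element names (real Aris and Visio VDX exports are considerably messier and vary by version): walking an exported model file, discarding layout noise, and keeping only the flow-relevant elements.

```java
import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

// Hypothetical illustration of the import problem: the export mixes
// flow-relevant elements with layout noise, and information you actually
// need (e.g. roles/ownership) may not be present at all.
public class ModelImportSketch {
    public static void main(String[] args) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new File(args[0])); // the exported model file

        NodeList nodes = doc.getDocumentElement().getElementsByTagName("*");
        for (int i = 0; i < nodes.getLength(); i++) {
            Element el = (Element) nodes.item(i);
            switch (el.getTagName()) {
                // Layout-only noise: coordinates, sizes, colors. Skip it.
                case "Position", "Size", "Style" -> { /* ignore */ }
                // The parts an execution model actually needs.
                case "Activity", "Transition" ->
                    System.out.printf("%s: %s%n",
                            el.getTagName(), el.getAttribute("name"));
                default -> { /* everything else: inspect by hand */ }
            }
        }
    }
}
```

Even in this toy version, most of the effort goes into deciding what to throw away; in a real export, the ratio is worse.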

Now what? Does an Execution-Oriented Model still make sense?

Okay. Given the arguments Sebastian presents, it seems he is suggesting that if you don’t know what product you will use to implement, you should use Aris to model your process (in fairness, if you don’t know what execution environment you will use, paper, Visio, and Aris are all good options), and that, because it is “agnostic” with respect to the implementation tool you use, there is some derived benefit (this is really the point I disagree with). However, if you are going to build your solution in a completely different toolset, and you accept my premise that exports out of Aris (and other modeling tools) into execution BPMS suites leave a great deal to be desired, then you come to an interesting crossroads. Is he suggesting that, once given an Aris model, we should just write BPEL XML or some Java code to implement the process? Or that we should then use a BPMN-oriented modeling suite to re-model and then implement the process?

In our experience, just “writing code” to implement a process captured in a modeling tool is a mistake. For one, how can the business determine whether you have faithfully reproduced the process in your code? Extensive usability/UAT testing might reveal an answer, but it is a very expensive way to find out, and it only happens after all the code is written; any mistakes will be very expensive to fix at that point, because they could be simple mistakes or they could be conceptual or foundational ones. An Agile development process can help, but many organizations have trouble carrying off this approach with traditional software tools. If the technical team uses a BPMN execution environment (a BPMS) to build that process, then the business will be able to see the process in BPMN, a visual language they can understand, semantics and all. By visually inspecting the design, the business can eliminate the greatest proportion of future defects at the earliest part of the design phase. And the technical team will implement each portion of the process in the context of the business process at that point, which is critical for providing useful business context to the technical team at the time they most need it.
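To make the verification problem concrete, here is a minimal, hypothetical sketch (all names invented) of the kind of approval flow that ends up hand-coded. The routing logic is all there, but nothing in it can be visually inspected by a business analyst the way a BPMN diagram can:

```java
// Hypothetical illustration: the same approval flow a business user could
// read at a glance in BPMN, buried in ordinary Java control flow where
// only a developer can verify it against the intended process.
public class OrderApprovalProcess {

    record Order(String id, double value) {}

    public void run(Order order) {
        if (order.value() > 10_000) {             // gateway: high-value order?
            if (!requestManagerApproval(order)) { // manual approval task
                notifyRejection(order);           // notification task
                return;                           // end event, hidden in a return
            }
        }
        fulfill(order);                           // automated service task
    }

    private boolean requestManagerApproval(Order o) { return true; } // stub
    private void notifyRejection(Order o) {}                         // stub
    private void fulfill(Order o) {}                                 // stub
}
```

The gateway, the manual task, and the end event are all in there, but only someone reading the source can confirm that, and only after the code is written.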

Which Model is the Master?

And finally, now that your Process is implemented in an execution-oriented BPMS, as well as modeled in your modeling-only environment, which Model is the “Master”? Of course, you can make either answer work. But let’s be clear about the choice you make:

Option 1: The Model as drawn by the business in the modeling tool is the master. It does NOT reflect what is actually happening in the business, or within IT, but it does show what the business was hoping the process would look like when the project started. (Optionally, it may have even been revised and updated at the end to reflect some of the changes that implementation and testing revealed needed to be made.)

Option 2: The Model that works, as agreed to by IT and the Business, drawn and executed in the BPMS environment. This is the model that was actually tested by business users in UAT, by unit testing in IT, and by system testing in IT. This is the model that is actually running your business process in production, and it reflects reality.

Is it important that your original Model is resilient to technology change in this context? Is it relevant that your model doesn’t have any technical details in it? Or is it more interesting that there is now a BPMN model that represents what actually runs in your business every day, one that can be measured and analyzed over time? Does it matter that this BPMS is resilient to back-end technology changes (activities provide abstraction over the type of integration, and each integration can provide abstraction over which specific systems are being tapped)? Does it matter that this BPMS can support relatively rapid changes in process to adapt to your real business? Does it matter that you can map the data you are tracking to your Model, to generate heat maps and highlight problem areas?

Well, you can guess where our heads are at. Modeling is important, but Execution makes it relevant to the bottom line, and makes the Model itself more valuable. If you want help turning your models into reality, we can help.
