A Model’s Beauty is in the Eye of the Beholder

Scott Francis

The case for modeling without thought of execution…

I recently came across a blog entry from IDS Scheer on their ARIS BPM Blog. Thanks to Sandy Kemsley for pointing me to it from her blog. Upon first read of the article by Sebastian Stein, I was struck by the difference in perspective between those who implement processes and those who model them. For those who model (Modelers), the Model is the chief output; the goal is a Model that will survive the test of time. You can see that bias throughout the post. In fact, the core philosophy is embodied right here:

“A business process model, depicted in one of the popular notations like BPMN or EPC, should not contain any technical details. If the underlying IT infrastructure or implementation technology changes, the business process model should remain stable. Your warning bells should ring if you have to change your business process just because you changed the implementation technology used.”

The two key points:

  1. No technical details
  2. Stability with respect to technology changes

Something Overlooked by a Model-only Perspective…

But there are some problems with this.

First, all the BPMN/BPMS tools that I have worked with support layering of processes. This layering allows the user to create a model that reflects Business sensibilities at the top layer and, if needed, several layers of detail below. So, if your need is to model something without “any technical details,” you are not prevented from doing so in the BPMN-oriented tools I’ve used.

Second, when you get to a certain level of detail, the process design should be informed by Technology. How so? It is important to understand whether a transition is manual or automated. Is it a non-value-added manual step? Then generally we want to automate it, or ideally remove it. A value-added manual step? Then generally we want to optimize around its constraints; automation won’t be the goal, though we may want to use technology to reduce errors, improve time-to-execute, etc. In the posting, Sebastian doesn’t go into detail as to what he considers a “technical detail,” but it raises the question: what is too technical? How about input and output data from a step in the process? These are critical process design considerations (if you know that a piece of data is required as an input, but you’re not sure where it comes from, you have a problem to resolve in your process design). And those inputs and outputs help define the “contract” of an activity or subprocess (or even of the entire process).

Third, modeling tools today make it exceedingly easy to change a Model to adapt to process changes. While it seems like a good idea to have a Model that is “stable” with respect to technology changes, the fact is that business processes change faster and more often than the technologies and systems that support them. The real problem isn’t keeping the Process consistent across technology changes – the problem is that the underlying technology may not be flexible enough to adapt to the new process model! At the least, the technology layer is often not agile enough to do so at a sufficiently affordable price and on a sufficiently short timeline (unless, of course, that process technology layer is a good BPMS).

Fourth, the resilience that one truly needs is with respect to performance data. Performance data analysis is what will drive my process improvement activities, or identify a process operating outside control limits. I need to be able to compare the performance of my process now to its performance next year, and to its performance last year… If my process changes dramatically, how do I do that? Note: I’m not saying the technology changed. The process changed. So what I need is a way to track data that will make sense even in the face of relatively substantial changes in my process. BPMS tools can provide this facility, either baked in or via smart modeling practices, by taking snapshots of data at key milestones in the process – milestones that are not likely to change semantically, even while the syntax (specific steps) of the process may change. To this end, even though the order entry portion of the process may change dramatically, you can still track the number of orders in, the value of those orders, the time it takes to process them, etc., even as order entry goes from highly manual to highly automated to web self-service (or comes to encompass all three).
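To make the milestone idea concrete, here is a minimal sketch in Python (the function names and the in-memory event list are hypothetical stand-ins for whatever a real BPMS persists) of snapshotting performance data at semantically stable milestones, so that metrics stay comparable even as the steps between milestones change:

    from dataclasses import dataclass, field
    from datetime import datetime

    # Hypothetical milestone store -- a real BPMS persists these events.
    # The point is that the milestones themselves ("order_received",
    # "order_fulfilled") stay semantically stable even when the steps
    # between them change.
    @dataclass
    class MilestoneEvent:
        instance: str
        milestone: str
        timestamp: datetime
        data: dict = field(default_factory=dict)  # e.g. {"order_value": 1200.0}

    events = []

    def record_milestone(instance, milestone, **data):
        # Snapshot KPI data at a stable milestone, regardless of how the
        # intervening steps are implemented (manual, automated, self-service).
        events.append(MilestoneEvent(instance, milestone, datetime.now(), dict(data)))

    def cycle_time(instance, start, end):
        # Seconds between two milestones for one process instance.
        stamps = {e.milestone: e.timestamp for e in events if e.instance == instance}
        return (stamps[end] - stamps[start]).total_seconds()

    # The order-entry steps may change completely, but these two calls --
    # and the year-over-year comparison they enable -- stay the same.
    record_milestone("order-42", "order_received", order_value=1200.0)
    record_milestone("order-42", "order_fulfilled")
    print(cycle_time("order-42", "order_received", "order_fulfilled"))

The design point is that the record_milestone calls sit at the stable seams of the process; everything between them can be redrawn without breaking the comparison.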

How do we Sum it up?

So the argument is that a modeling-only tool buys you a benefit (stability against technical change) that you don’t need, while not providing a benefit (technical agility with respect to business process changes) that you do need… and it still doesn’t address the key stability need: that of measured process performance data.

Moreover, the integration from most modeling tools to an actual functioning BPMS is, for the most part, non-existent from a practical perspective. Even when that integration exists, the model usually lacks process execution sensibilities. There is a difference between drawing a model that represents the business needs and drawing one that cannot be executed because of ambiguities and inconsistencies. In the best integrations I’ve seen so far, the products and the integration are all written by one vendor. (I’m definitely interested in seeing examples of this kind of tooling and integration, and I’d be happy to write up reviews of such.)

I’ve actually written an import to a BPMS suite using an ARIS model as a starting point – and it’s hard! There is a ton of non-relevant data in the export – positioning information, for example – and other information you need is difficult to lay hands on (roles/ownership). To be fair, this wasn’t a BPMN diagram in ARIS, but it WAS a diagram of a process, in a very unstructured environment. It wasn’t any easier than parsing it out of Visio vdx files.

My recommendation: if you are given a process modeled in a modeling-only tool, your first instinct should be to redraw that process in your execution modeling environment rather than try to import it (unless the importer ships with your product, in which case, give it a try!). You’ll be surprised how fast you can recreate the model in your execution environment.
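For a sense of what such an import has to wade through, here is a generic sketch. The XML element names (node, position, connection) are hypothetical, and ARIS’s actual AML export differs, but the shape of the problem is the same: most of the file is layout noise, and some of what you need most isn’t cleanly there at all.

    import xml.etree.ElementTree as ET

    # A generic sketch of importing a process model from a modeling tool's
    # XML export. The element names here are hypothetical stand-ins.
    SAMPLE = """
    <model name="order-entry">
      <node id="a1" type="activity" label="Receive Order">
        <position x="120" y="40"/>  <!-- layout only: useless for execution -->
      </node>
      <node id="a2" type="activity" label="Check Credit">
        <position x="260" y="40"/>
      </node>
      <connection from="a1" to="a2"/>
    </model>
    """

    def import_model(xml_text):
        root = ET.fromstring(xml_text)
        # Keep only the semantic content: activities and the transitions
        # between them. Everything else gets filtered out.
        activities = {n.get("id"): n.get("label")
                      for n in root.iter("node") if n.get("type") == "activity"}
        transitions = [(c.get("from"), c.get("to")) for c in root.iter("connection")]
        # What the export does NOT give you cleanly -- roles, ownership,
        # data inputs/outputs -- is exactly what an executable model needs.
        return activities, transitions

    activities, transitions = import_model(SAMPLE)
    print(activities)     # {'a1': 'Receive Order', 'a2': 'Check Credit'}
    print(transitions)    # [('a1', 'a2')]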

Now what? Does an Execution-Oriented Model still make sense?

Okay. Given the arguments Sebastian presents, he seems to be suggesting that if you don’t know what product you will use to implement, you should use ARIS to model your process (in fairness, if you don’t know what execution environment you will use, paper, Visio, and ARIS are all good options). And that, because it is “agnostic” with respect to the implementation tool, there is some derived benefit (this is really the point I disagree with).

However, if you are going to build your solution in a completely different toolset, and you accept my premise that exports out of ARIS (and other modeling tools) into execution BPMS suites leave a great deal to be desired, then you come to an interesting crossroads. Is he suggesting that, once given an ARIS model, we should just write BPEL XML or some Java code to implement the process? Or that we should then use a BPMN-oriented modeling suite to re-model and then implement the process?

In our experience, just “writing code” to codify a process drawn in a modeling tool is a mistake. For one, how can the business determine whether you have faithfully reproduced the process in your code? Extensive usability/UAT testing might reveal an answer, but it is a very expensive way to find out, and it only happens after all the code is written – and any mistakes will be very expensive to fix at that point, because they could be simple mistakes or they could be conceptual or foundational mistakes. An Agile development process can help, but many organizations have trouble carrying off this approach with traditional software tools.

If the technical team uses a BPMN execution environment (a BPMS) to build that process, then the business will be able to see the process in BPMN – a drawing language they can understand – and grasp its semantics. By visually inspecting the design, the business can eliminate the greatest proportion of future defects at the earliest part of the design phase. And the technical team will implement each portion of the process in the context of the business process at that point – critical for providing useful business context to the technical team at the time they most need it.

Which Model is the Master?

And finally, now that your Process is implemented in an execution-oriented BPMS, as well as modeled in your modeling-only environment… which Model is the “Master”? Of course, you can make either answer work. But let’s be clear about the choice you make:

Option 1: The Model as drawn by the business in the modeling tool is the master. It does NOT reflect what is actually happening in the business, or within IT, but it does show what the business was hoping the process would look like when the project started. (Optionally, it may even have been revised at the end to reflect some of the changes that implementation and testing revealed needed to be made.)

Option 2: The Model that works, as agreed to by IT and the Business, drawn and executed in the BPMS environment. This is the model that was actually tested by business users in UAT, by unit testing in IT, and by system testing in IT. This is the model that is actually running your business process in production, and it reflects reality.

Is it important that your original Model is resilient to technology change in this context? Is it relevant that your model doesn’t have any technical details in it? Or is it more interesting that there is now a BPMN model that represents what actually runs in your business every day, and that can be measured and analyzed over time? Does it matter that this BPMS is resilient to back-end technology changes (activities provide abstraction as to the type of integration, and each integration can provide abstraction as to which specific systems are being tapped)? Does it matter that this BPMS can support relatively rapid changes in process to adapt to your real business? Does it matter that you can map the data you are tracking to your Model, to generate heat maps and highlight problem areas?

Well, you can guess where our heads are at. Modeling is important, but Execution makes it relevant to the bottom line, and makes the Model itself more valuable. If you want help turning your models into reality, we can help.
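On that last point, mapping tracked data back to the Model, here is a minimal sketch with made-up activity names and timings (real BPMS suites bake this facility in): aggregate the audit trail per model activity, and the hot spots fall out.

    from collections import defaultdict
    from statistics import mean

    # Hypothetical per-instance activity timings, as a BPMS audit trail
    # might record them: (instance_id, activity_name, duration_in_hours).
    audit_trail = [
        ("order-41", "Receive Order", 0.2),
        ("order-41", "Check Credit", 6.5),
        ("order-42", "Receive Order", 0.3),
        ("order-42", "Check Credit", 9.1),
        ("order-42", "Ship Order", 1.0),
    ]

    def heat_map(trail):
        # Average duration per model activity -- the "heat" on each node.
        by_activity = defaultdict(list)
        for _instance, activity, hours in trail:
            by_activity[activity].append(hours)
        return {a: mean(h) for a, h in by_activity.items()}

    for activity, avg in sorted(heat_map(audit_trail).items(),
                                key=lambda kv: kv[1], reverse=True):
        print(f"{activity:15s} {avg:5.2f} h")   # "Check Credit" stands out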

  • Pingback: Mixing Process Design and Implementation Details is Evil | ARIS BPM Blog

  • In response to Mr. Stein, I wrote a comment, awaiting moderation, on their blog. Here’s the text in case you don’t want to flip back and forth. We don’t (yet) moderate our own blog comments. We might very well implement a process change in the near future to do so, but this wouldn’t be a technology change – we’ll still use WordPress to implement our blog ;)

    ——
    Sebastian-
    I don’t think this is a debate between good and evil :) It isn’t quite so epic in proportion. Although it does feel like a debate between the ivory tower and the real-world implementation of processes.

    I’ll go point by point, since that is the structure you chose as well:
    1. You claimed my first two points were invalid because you said technical detail doesn’t belong in the process. Well, what if the process depends on something that is a nightly batch? That’s a technical detail. In or out of the process? What about inputs to a process activity or step? Outputs? Are those technical details or not? (It isn’t clear what counts, so it’s hard to say whether your line in the sand is correct.) It is fair to say that not all technical details belong at all levels of a business process (in my world, business processes may be nested, like a Russian doll), but at some level of abstraction, those technical details will make sense (especially if we’re in an implementation rendition of the process). Based on your exclusion of exception handling and data manipulation, it simply sounds like you are advocating for a “cleaner” business process diagram. In fact, to a point I agree with that – you hide these details at a lower level of the process definition (at a level that you would likely not even call a process, but one that the OMG would still describe as a process), because the implementation details of exception handling rarely, if ever, affect the top level or two of a process.

    2. You said you invalidated this point, but I didn’t see it in your argument…

    3. I have done work for companies whose mainframe systems are 50 years old and still a core part of their business. The business processes have changed substantially, but the core technology is still there… IT assets that are core to the business are difficult to change. The risk associated with change is high, and the cost associated with switching technologies is VERY high. BPMS solutions provide an opportunity to further leverage those assets while introducing a more agile process-oriented layer above them that allows the business to reconfigure its interactions with such back-end systems. A BPMS doesn’t replace all those technologies, but it allows you to put them to work in ways that suit business processes that are new, modified, or evolving. As for IT selecting a new service to implement some back-end function: you’re right, the business experts shouldn’t necessarily care, but that doesn’t mean that element isn’t part of an IT process that should be represented in a process-oriented (but executable) diagram. And they might care if that new service fails to meet previously expected SLAs or ToS…

    4. I realize that you never brought up stable performance indicators. I’m pointing out that this is the stability that really matters – and, to further illustrate, that stability of the business process is not really a goal that customers should be, or even are, attempting to achieve. They are attempting to achieve STANDARDIZATION of process, because it helps you achieve division of labor, apply the theory of constraints, and improve the process more effectively. But you presume wrongly when you assume I’m not dealing with heterogeneous environments. The environments of our customers are very much heterogeneous. However, our customers have chosen a BPMS to help them navigate that heterogeneity and extract a common (standard) process across it all. You sure as heck don’t want to implement the same process in two different middleware layers; that path leads to madness in IT. You implement one process and, at its integration points, integrate it to the appropriate systems. The integrations can be “smart” – knowing that in some cases you integrate to system A and in some places to system B, rather than always assuming system A – but these decisions are based on the context data of the process instance you are running. If someone said “here are these requirements, now go and implement them in Java and C++,” people would think it was crazy. Even though we’re talking “process” and not “code,” what you’re saying amounts to the same thing! Moreover, if the process did need to be reimplemented, the BPMS tools I have used support enough documentation to represent business requirements – the implementations at the BPMN level would be substantially the same. And the differences would only be nods to the differences between tools – which would have to be there anyway if I implemented the process twice… (Moreover, I would never suggest you just “change on the fly” – although by comparison to traditional 18-month development cycles for IT, a 3-4 month cycle for BPMS will feel that way. Requirements are still a valid thought process, but model-only isn’t necessarily a valid part of that thought process.)
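    To make the “smart integration” idea concrete, here is a minimal sketch; the system names and routing rules are made up, and a real BPMS would express this as a gateway or connector configuration rather than raw code:

        # Hypothetical integration router: one process, one integration
        # point, with the target system chosen from the process instance's
        # context data rather than hard-wired into the model. The system
        # names and the routing rules are made up for illustration.
        def route_credit_check(context):
            if context.get("region") == "EU":
                return "system_B"              # e.g. the EU back-end service
            if context.get("order_value", 0) > 10000:
                return "system_B"              # high-value orders: full check
            return "system_A"                  # default lightweight service

        print(route_credit_check({"region": "US", "order_value": 500}))   # system_A
        print(route_credit_check({"region": "EU", "order_value": 500}))   # system_B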

    Having participated in the implementation of over 100 production processes, I’ve found the “problem of explaining to business experts” the strange constructs of implementation to be minimal. However, I HAVE had to spend significant time explaining BPMN. It has some complexity to it, especially with respect to splits and joins – and this has NOTHING to do with the implementation subsystem, just the modeling notation itself. Most business experts aren’t familiar with Petri nets and concurrent programming, so I give them some rules of thumb to use around those elements, and fully expect that I’ll have to refine them to get to an executable model representation. My use case is different – I’m worried about how to get customers’ process ideas implemented in production (using whatever technologies make sense), and I just don’t see the model-only paradigm adding much value to that process. A company can spend a year modeling all of these things… and then start implementation… only by then the processes have changed… now what? And we’ve postponed the ROI that we could get by implementing quickly and starting the continuous process improvement cycle…
