It isn't easy to fill in the white space. It is harder to design a good software solution from scratch than to fix a bug in an otherwise working solution, or to design a small addition to a working piece of software. What if you could have tools that just help you right away, and then later infer the process (filling in the white space for you)? That's the promise of "process mining".
Along those lines, Dave Brakoniecki tackles the idea of "inverting the process life cycle", in response to a post by Keith Swenson on the subject:
Imagine a patient file or case. This is a favorite example in the ACM space since the expertise of the doctor defines the process or work to be completed. How useful to the doctor is a case management tool that has no information on the patient and no ability to schedule tests? Not very: all of the work would need to be done outside the tool and duplicated in the tool. Still, building integrations to the patient records and to the systems that organize blood work, for example, would be better done at design time than run time.
Even if this was possible at runtime, few doctors would be interested in doing it.
(Incidentally, I think this is why something like IBM Watson is getting good airplay in medical and healthcare circles: it has data and context on a subject domain.)
So, a lack of preloaded or pre-integrated data seems like a problem. But supposing you have this pre-existing data, there are a small number of firms prepared to help you discover the processes you are already executing without realizing it. It isn't yet clear to me how big these services projects are (there aren't any shrink-wrap solutions that require no services).
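To make "discovering the processes you are already executing" a little more concrete, here is a minimal sketch of the core step in most process mining: building a directly-follows graph from an event log. The log format and the case/activity names are hypothetical, purely for illustration; real tools do much more (filtering noise, detecting loops and parallelism), but this is the basic idea.

```python
from collections import Counter

# Hypothetical event log: (case_id, activity), ordered by timestamp within each case.
event_log = [
    ("case1", "register"), ("case1", "blood_work"), ("case1", "review"),
    ("case2", "register"), ("case2", "review"),
    ("case3", "register"), ("case3", "blood_work"), ("case3", "review"),
]

def directly_follows(log):
    """Count how often activity B directly follows activity A within a case."""
    by_case = {}
    for case_id, activity in log:
        by_case.setdefault(case_id, []).append(activity)
    edges = Counter()
    for trace in by_case.values():
        for a, b in zip(trace, trace[1:]):
            edges[(a, b)] += 1
    return edges

edges = directly_follows(event_log)
for (a, b), count in sorted(edges.items()):
    print(f"{a} -> {b}: {count}")
```

The edge counts sketch out the process that was actually followed: most cases route through blood work, but some skip straight to review, and a discovery tool would surface exactly that kind of variation.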
And Dave points out another issue:
In most organizations, what is the problem with their Sharepoint deployment, their Lotus Notes application or that little Access database application they wrote three years ago? In almost all these cases, the problem is the same. End users were given a powerful and flexible tool without training and ended up building a system that is impossible to maintain but essential to the business.
I have seen many successful projects start from this position: The end users actually asking for more help in managing the technology so they can spend more time doing their jobs.
This is summarized nicely as "the Sharepoint Effect" in a previous post on this blog. And I agree with Dave - many projects start exactly this way.
Then Dave gets into what might be an example of ACM in the wild: Basecamp. Although it doesn't bill itself as an ACM tool, one could argue that it is one, by accident. In which case:
Perhaps the most important reason for the ACM camp to try and adopt a solution like Basecamp is that it would give them immediate mainstream legitimacy with tangible customers who have already inverted the process life cycle and will do it again next week. It probably also indicates the delivery model and price point required to disrupt the markets they are targeting.
I'm not sure the ACM vendors are prepared to be at those price points, however.
Keith makes some interesting points in his original post, not all of which Dave takes up. Certainly measuring before improving is A Good Thing. We've been implementing "shadow processes" and listening to processes implemented in other systems for years, and using that data to inform our new process models. But because we're listening to real systems, we have to instrument the systems of record to broadcast (or let us listen for) those interesting transitions.
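As a rough sketch of what "broadcasting interesting transitions" means in practice, here is a tiny publish/subscribe pattern: the system of record publishes each state change, and a shadow process simply accumulates them as an event log. All of the names and states here are hypothetical; in a real deployment this would be a message queue or webhook, not an in-process list.

```python
import datetime

class TransitionBroadcaster:
    """Minimal sketch: a system of record publishes state transitions
    so a shadow process can listen and record them."""

    def __init__(self):
        self.listeners = []

    def subscribe(self, listener):
        self.listeners.append(listener)

    def publish(self, case_id, from_state, to_state):
        event = {
            "case": case_id,
            "from": from_state,
            "to": to_state,
            "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        }
        for listener in self.listeners:
            listener(event)

# A shadow-process log that just accumulates what it hears.
shadow_log = []
bus = TransitionBroadcaster()
bus.subscribe(shadow_log.append)

bus.publish("case1", "submitted", "under_review")
bus.publish("case1", "under_review", "approved")
print(f"captured {len(shadow_log)} transitions")
```

The point is that the shadow process never touches the system of record's internals; it only sees the transitions that system chooses to broadcast, which is why the instrumentation work has to happen on the system-of-record side.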
In short, there's no magic bullet, but you can certainly do better by measuring twice and cutting once, as they say. In fact, we can do better than that: we can measure any number of times, and we can get more than one "cut" at the new and improved process by leveraging A/B testing to determine what actually produces the best results.
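For the A/B-testing idea, a minimal sketch might look like the following: route each case deterministically to one of two process variants, then compare an outcome metric such as cycle time. The routing function, variant names, and cycle-time numbers are all hypothetical; a real experiment would also want a significance test, not just a comparison of means.

```python
import hashlib
import statistics

def route_case(case_id):
    """Deterministically assign each case to process variant A or B."""
    digest = hashlib.sha256(case_id.encode()).digest()
    return "A" if digest[0] % 2 == 0 else "B"

# Hypothetical observed cycle times (hours) per process variant.
cycle_times = {
    "A": [30, 28, 35, 31, 29],  # current process
    "B": [24, 26, 22, 27, 25],  # candidate improvement
}

means = {variant: statistics.mean(times) for variant, times in cycle_times.items()}
winner = min(means, key=means.get)
print(f"mean cycle times: {means}; faster variant: {winner}")
```

Because the routing is a stable hash of the case ID, the same case always lands in the same variant, which keeps the comparison clean across repeated measurements.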