BPMCAMP Sessions: Introduction to Process Mining
- October 27, 2015
In a session exploring an area of BPM that is still relatively nascent, David Brakoniecki presented on the topic and then demonstrated two of the best-known tools in the space: Fluxicon's Disco and ProM.
Starting off, the objective was to cover three topics, followed by Q&A:
- What is process mining
- A discussion of the tooling available
- A demo of the tooling
In its simplest terms, process mining is the application of big data and statistical techniques in a process context. While big data is hot right now, those techniques are often *not* applied in the context of process, or with a process lens. The hypothesis underpinning process mining is that they become more valuable when they are.
Much of process mining owes a debt to Professor Wil van der Aalst of Eindhoven University of Technology. Most of the vendors and thought leaders in the space either studied under him at Eindhoven or were heavily influenced by his thinking.
The key requirement for most process mining techniques is an event log, which contains at least three fields: a unique identifier, an activity (or status or action), and a timestamp. Each record is an event, and the collection of events sharing a particular unique identifier is a case.
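The event log structure described above can be sketched in a few lines of Python. The case IDs, activities, and timestamps below are hypothetical, invented purely to illustrate the three required fields and how events group into cases:

```python
from collections import defaultdict
from datetime import datetime

# A minimal event log: each event carries a unique identifier (case ID),
# an activity name, and a timestamp. Values here are illustrative only.
event_log = [
    ("order-1001", "Receive Order", datetime(2015, 10, 1, 9, 0)),
    ("order-1001", "Check Credit",  datetime(2015, 10, 1, 9, 30)),
    ("order-1002", "Receive Order", datetime(2015, 10, 1, 10, 0)),
    ("order-1002", "Check Credit",  datetime(2015, 10, 1, 11, 0)),
    ("order-1001", "Ship Goods",    datetime(2015, 10, 2, 14, 0)),
]

# A "case" is the sequence of events sharing one identifier, ordered by time.
cases = defaultdict(list)
for case_id, activity, ts in sorted(event_log, key=lambda e: e[2]):
    cases[case_id].append(activity)

print(cases["order-1001"])  # ['Receive Order', 'Check Credit', 'Ship Goods']
```

Real tools such as Disco ingest the same three columns, typically from a CSV export of a line-of-business system.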
By examining this data, a number of algorithms can reconstruct a process model, including detecting parallel flows. Three interesting scenarios were discussed:
- Play-out: This is what most BPM customers do today. The process is modeled first, and the model then produces the event log data through explicit modeling and integrations.
- Play-in: The event log is examined to derive a process model from the data (what is typically done with process mining approaches).
- Replay: The event log is replayed against a pre-defined model to see how well it fits, identify bottlenecks, etc.