Thinking about Data as the new Oil… Process Context to Refine it

  • November 7, 2013
  • Scott

In a post-dinner conversation a couple of weeks ago with some fellow BPM evangelists, Lance brought up something that the CEO of Tableau had said, if I’m not mistaken – that “data is the new oil”. The analogy makes some sense: data can be “mined,” in a sense, and a lot of value can be unlocked by mining it. Visualizing it is a step toward understanding it.

But if data is the raw material – the crude oil – then context is what you apply to refine that raw material, to reduce it down to something truly useful.

What better context to apply to this raw material than your business process context? Understanding vast quantities of data in the context of the business processes it supports goes a great distance toward improving the signal-to-noise ratio, and the understanding of process and process failure that it unlocks can drive dramatic improvements in a business.

If the data wasn’t generated by BPM suites, it can still be distilled into process models using products like Fluxicon Disco, or used to enrich process-centric data and visualization.
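The core of that distillation step can be sketched in a few lines. The following is a hypothetical minimal example – the event log, case IDs, and activity names are invented for illustration, and real process mining tools like Disco do far more than this – showing how raw event records can be grouped into cases and reduced to a “directly-follows” view of the process:

```python
from collections import Counter

# A toy event log: (case_id, step_order, activity) tuples.
# In practice this would come from application logs or a database export.
event_log = [
    ("case-1", 1, "Receive Order"), ("case-1", 2, "Check Credit"),
    ("case-1", 3, "Ship"),
    ("case-2", 1, "Receive Order"), ("case-2", 2, "Check Credit"),
    ("case-2", 3, "Reject"),
    ("case-3", 1, "Receive Order"), ("case-3", 2, "Ship"),
]

def directly_follows(log):
    """Count how often activity A is directly followed by B within a case."""
    by_case = {}
    for case, order, activity in sorted(log):  # sorts by case, then step order
        by_case.setdefault(case, []).append(activity)
    edges = Counter()
    for trace in by_case.values():
        for a, b in zip(trace, trace[1:]):
            edges[(a, b)] += 1
    return edges

for (a, b), n in directly_follows(event_log).most_common():
    print(f"{a} -> {b}: {n}")
```

Those edge counts are the raw material for the flow diagrams such tools draw – the point being that even data never touched by a BPM suite can be given process shape.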

Understanding your business, understanding your goals, understanding the processes that execute against those goals – this is what makes that raw data come to life. And this is going to motivate us to make some further investments in a process lens on the data mountain that is out there.



  • Emiel Kelly


    Data is indeed one of the important enablers in most processes. But be careful not to treat all data the same, as it comes at different levels in a process:

    – Data that is specific to individual cases (customer info, promises, etc.)
    – Data that tells you how your process is running now (monitoring all cases)
    – Data that tells you how your process performed in the past (throughput times, costs, etc.)

    You talked about a process mining product in your post. Those products focus mostly on the third type of data and make it possible to dive into individual case data. So that’s all very useful for process improvement.

    But customers don’t want you to improve your processes; they want those processes to do what you promise.

    In that sense, the second type of data is the most important: knowing what is happening now. Are the cases still on track? Monitoring data that really tells you something. Having said that, what’s a speedometer without a throttle or a brake? Useless. So the most important thing is that you’re able to act on that data.

    And that always seems hard in many organizations.
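That kind of live “are we still on track?” check can be sketched very simply. A hypothetical minimal example – the case IDs, timestamps, and 48-hour SLA below are invented for illustration, and a real monitor would act on the result rather than just list it:

```python
from datetime import datetime, timedelta

# Hypothetical in-flight cases with their start times, and a promised SLA.
SLA = timedelta(hours=48)
now = datetime(2013, 11, 7, 12, 0)

open_cases = {
    "case-1": datetime(2013, 11, 5, 9, 0),   # started ~51h ago -> off track
    "case-2": datetime(2013, 11, 6, 14, 0),  # ~22h ago -> within SLA
}

def cases_off_track(cases, now, sla):
    """Return IDs of open cases whose elapsed time has exceeded the SLA."""
    return [cid for cid, started in cases.items() if now - started > sla]

print(cases_off_track(open_cases, now, SLA))  # -> ['case-1']
```

The hard part, as the comment notes, isn’t computing this list – it’s wiring it to a throttle or a brake so someone actually acts on it.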

    That’s probably why they love process mining: we can see what we’ve done wrong… in the past.
    And of course good data is needed for that, and tools like process mining help you focus on the right data for process improvement.

    But for all the customers you didn’t serve well, it’s too late.

    So I would like to see data serve the live operation of a process. Just keep pushing the F5 button, and mining tools can help you with that 😉

    • Emiel – good points all, and thanks for commenting –
      Yes, I cited just one example – a product we have no financial interest in, for what it’s worth. The tools we use are better at the first two types of process context, so perhaps I felt a subconscious need to talk about the third type as well.

      An interesting thing about studying the past a bit: if you’re not changing anything, odds are that you are still doing now (on average) what you were doing then. All of these types of tools and data have their place – the key for me is that process context helps separate the wheat from the chaff in the data. Of course, I’m a bit biased pro-process!