Is Swarming a Legitimate Tactic for BPM?

Scott Francis

Lately there’s been some discussion of “swarming” as an organizational response to process needs.  Part of this thinking comes from the ACM crowd, which points to natural phenomena like birds in flight and argues that properly empowered organizations could likewise move in unison and swarm to solve problems.  It also got airplay recently at the BPM and Case Management conference (BPMCM15).  Sandy Kemsley once again provided great blow-by-blow coverage on her blog:

Sinur cited a number of examples of processes that are leveraging emerging technologies, including knowledge workers’ workbenches that incorporate smart automated agents and predictive analytics; and IoT applications in healthcare and farming. The idea is to create goal-driven and proactive “swarming” processes that figure out on their own how to accomplish a goal through both human and automated intelligence, then assemble the resources to do it.

To start with, of course there is work that benefits from “swarming” resources to the problem.  Disaster recovery probably benefits from construction workers who independently decide to relocate to the area and participate in the reconstruction, not to mention top-down organizations like Home Depot that proactively move materiel into the field of play at the same time.

But try swarming on a software project or deployment… or swarming on contract legal reviews.  Try it with building BPM solutions.  It doesn’t work.  Quality outcomes require commitment and skills: you don’t want your average software developer showing up to do construction work, and vice versa.  But it is more nuanced than that.  Adding the wrong software developer won’t merely fail to help a software project; the swarming will actually hurt the project and slow it down.

Swarming “compute” resources isn’t even really a relevant topic or analogy in my book.  Scaling IT resources (compute) in the cloud is a black box* to the consumers of each service, and it only works if the original services were designed for it, and if the problem they’re solving is one that actually benefits from additional compute, rather than one that can only run single-threaded.

*In most cases, if you need more compute, the requester of the compute doesn’t allocate the additional hardware; the implementer of the compute model does the allocation.  Otherwise, someone making a single request for compute might find they have to know how to allocate more compute power because the existing capacity is tapped out…
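That distinction — work designed to scale out versus work that is inherently sequential — is the crux of why “swarming” compute only helps some problems. A minimal Python sketch of the difference (the function names and toy data are my own illustration, not anything from the conference coverage):

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(chunks, workers):
    """Work designed for scale-out: each chunk is independent,
    so adding more workers can genuinely help."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Each chunk is summed independently; no chunk waits on another.
        return sum(pool.map(sum, chunks))

def sequential_chain(values):
    """Inherently single-threaded work: each step depends on the
    previous step's result, so extra workers cannot speed it up."""
    acc = 0
    for v in values:
        acc = acc + v  # step N needs the output of step N-1
    return acc

chunks = [[1, 2], [3, 4], [5, 6]]
print(parallel_sum(chunks, workers=3))          # 21
print(sequential_chain([1, 2, 3, 4, 5, 6]))     # 21
```

Both produce the same answer, but only the first shape of work gains anything from throwing more resources at it — which is exactly why scale-out has to be a design decision made by the service implementer, not something a consumer can bolt on after the fact.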