Testing and Performance – #bpmCamp 2010 @ Stanford
- April 19, 2010
We had two related sessions at bpmCamp – one in which Flournoy Henry presented findings and data on scaling Teamworks, and another in which Dave Knapp led a group discussion about testing in Teamworks. Of course, many of these points were not specific to Teamworks.
Some of the key points from the testing session:
There was a general consensus that too much attention is given to “User Acceptance Testing” as a phase at the end of a major project, rather than to continuous user acceptance testing. At the very least, it is critical to run user acceptance testing against portions of the solution at each iteration.
Because process applications sit near the top of a pyramid of integrations and business-system dependencies, a major challenge is testing them with all the external services required to make the process work. It isn’t just that the dependencies exist – one needs consistent test data across several systems to make testing realistic before going to production. Promoting process applications adds further complexity: all the directly BPM-related assets must be promoted in concert with any updates required in dependent systems.
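One common way to tame those external dependencies is to stub them out with fixtures that match the test data seeded into the dependent systems. The sketch below is purely illustrative (Teamworks has no Python API, and `CrmServiceStub`, `approve_order`, and the fixture data are all hypothetical); it just shows the pattern of swapping a real service for a stub that returns consistent test data.

```python
# Illustrative sketch only: the service, process step, and fixture data
# are assumptions, not part of any Teamworks API.

# Fixture data kept in sync with what was seeded into dependent systems.
CUSTOMER_FIXTURES = {
    "C-1001": {"name": "Acme Corp", "credit_limit": 50000},
}

class CrmServiceStub:
    """Stands in for a real CRM lookup service during process testing."""
    def get_customer(self, customer_id):
        try:
            return CUSTOMER_FIXTURES[customer_id]
        except KeyError:
            raise LookupError(f"No fixture for customer {customer_id}")

def approve_order(crm, customer_id, amount):
    """Hypothetical process step that depends on the external CRM."""
    customer = crm.get_customer(customer_id)
    return amount <= customer["credit_limit"]
```

In a real deployment the same process step would be wired to the live CRM service; only the stub and its fixture data change between test and production.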
There was also discussion around fail-over and roll-back testing. It reminds me of the old adage for backup systems: don’t tell me how often you back up your systems, tell me how often you’ve done a full restore from backup. Switching over smoothly requires practice – but practicing has cost and risk as well. This is part of why cloud computing is so compelling: High Availability software deployments are expensive when you want to do them right.
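The restore-drill idea can be made concrete. This is a minimal sketch under stated assumptions – the in-memory “database,” the checksum scheme, and the function names are all hypothetical – showing a drill that restores every record from a snapshot and fails loudly if anything doesn’t verify:

```python
# Illustrative sketch: a restore drill that verifies what it restores.
# The data model and checksum approach are assumptions for the example.
import hashlib

def checksum(data: bytes) -> str:
    """Digest used to detect corruption between backup and restore."""
    return hashlib.sha256(data).hexdigest()

def backup(source: dict) -> dict:
    """Snapshot each record along with a checksum taken at backup time."""
    return {key: (value, checksum(value)) for key, value in source.items()}

def restore_and_verify(snapshot: dict) -> dict:
    """Restore every record; raise if any record fails its checksum."""
    restored = {}
    for key, (value, digest) in snapshot.items():
        if checksum(value) != digest:
            raise RuntimeError(f"Corrupt backup record: {key}")
        restored[key] = value
    return restored
```

Running something like this on a schedule is the point of the adage: the drill, not the backup job, is what proves you can actually come back from a failure.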