Wishlist: A good promotion process

Scott Francis
In this case, I don’t mean promoting people; I mean promoting custom-developed solutions from a development environment to a test environment, and from a test environment to production. Everyone has such a process, but these processes usually leave a lot to be desired, because they exhibit some of the most common pitfalls of process implementations in general:
  • Overloaded fields: rather than having each field serve one and only one purpose, the owners of the process will, at some point, decide to overload the meaning of a field in order to extract more process behavior out of the same software, or the same number of input fields, without having to go back to IT for a change request.
  • Process data entered into free-text fields: this is an especially egregious form of the overloaded field. “Please put the name of approver X in your description, otherwise we have to deny your request.” Such business rules should be captured in first-order fields with obvious consequences. Keep the first-time user in mind: looking at the screen, do they know what information they need to provide and what the consequences are of not providing it (or of providing incorrect data)?
  • The process isn’t customer-facing: by this, I mean that the process is designed to optimize around the people who execute software promotions. They aren’t the customers. The software developers, consultants, and business users (hoping to realize ROI) are the customers. The process should be designed to help them promote successfully, and to help them realize when they shouldn’t promote. But most promotion processes are inward-facing: they raise as many barriers to entry as possible and push as much work as they can onto the outside world, in order to protect the hard-cost expenditures of the group responsible for promotions.
  • SLAs are measuring the wrong thing: usually SLAs for promotion are written so that they include only internal metrics, not customer metrics. A typical example is measuring the time to complete the request. Sounds good. But the measurement typically only starts once the request is approved! From a customer’s point of view this isn’t acceptable, because it encourages the promotions team to push back and reject promotion requests. Better to measure the percentage of successful promotion requests, or the time inclusive of the first attempted request (see the sketch after this list). The promotion team will scream that it isn’t fair, but it incents the right behavior (and who says the world is fair?!), which is getting good code promoted promptly.
  • Not automated in the right places: too many manual touch points that don’t add value.
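To make the SLA point concrete, here is a minimal sketch (in Python, with hypothetical field names, not taken from any particular tool) showing how the two measurements differ: the internal metric starts the clock at approval and quietly ignores rejected requests, while the customer-facing metrics start at first submission and count rejections against the success rate.

```python
from datetime import datetime

# Hypothetical promotion request records; field names are illustrative only.
requests = [
    {"submitted": datetime(2010, 5, 3, 9, 0),  "approved": datetime(2010, 5, 5, 14, 0),
     "completed": datetime(2010, 5, 5, 17, 0), "succeeded": True},
    {"submitted": datetime(2010, 5, 4, 9, 0),  "approved": None,   # rejected outright
     "completed": None,                        "succeeded": False},
]

# Internal metric: hours from approval to completion (rejected requests never count).
internal = [(r["completed"] - r["approved"]).total_seconds() / 3600
            for r in requests if r["approved"] and r["completed"]]

# Customer-facing metrics: success rate, and hours from *first submission* to completion.
success_rate = sum(r["succeeded"] for r in requests) / len(requests)
end_to_end = [(r["completed"] - r["submitted"]).total_seconds() / 3600
              for r in requests if r["succeeded"]]

print(f"internal avg hours: {sum(internal) / len(internal):.1f}")      # looks great
print(f"success rate: {success_rate:.0%}")                             # tells the real story
print(f"end-to-end avg hours: {sum(end_to_end) / len(end_to_end):.1f}")
```

On this toy data the internal metric reports a few hours per promotion, while the customer sees a 50% success rate and a multi-day turnaround, which is exactly the gap the SLA should expose.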
This seems like a general enough problem that someone could write a process around it using a BPMS package. Maybe we (BP3) will do that. There are a lot of software packages out there for managing this kind of problem, but I think most of them focus on doing the actual build of the software and moving binaries, and don’t do the whole package well. The whole package would consist of the following (a rough sketch of how the pieces fit together appears after the numbered list):
  1. Manage the approval process for each part of a deployment (e.g. DBA, app server, and BPM assets)
  2. Ensure nothing is deployed if any of the approvals fail.  Optionally allow auto-approval when an approver does not respond.
  3. Collect all the technical assets (db queries, install scripts, and manual instructions)
  4. Execute any of the automated tasks in the appropriate order, perhaps with human supervision.
  5. Provide a recipe for the user to execute manual tasks and perform appropriate validation
  6. Measure statistics for SLAs, etc.
  7. Provide a robust way to extend the “process” for different kinds of promotions (custom fields, custom routing, etc.).  Many of the tools allow you to do this, but you get painted into a corner in the process… Often the baked-in process is built around enterprise-level apps, but then you lose the flexibility to support less mission-critical apps efficiently.
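Here is a minimal sketch of how items 1–6 could hang together as a data model, assuming nothing about any particular BPMS; the class and field names are hypothetical and only meant to show the relationships between approvals, technical assets, ordered tasks, and SLA timestamps.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Callable, List, Optional

# Hypothetical model of a promotion request; names are illustrative, not any vendor's API.

@dataclass
class Approval:
    role: str                        # e.g. "DBA", "app server", "BPM"
    approved: Optional[bool] = None  # None = no response yet
    auto_approve: bool = False       # item 2: optionally approve on no response

    def effective(self) -> Optional[bool]:
        if self.approved is None and self.auto_approve:
            return True
        return self.approved

@dataclass
class Task:
    description: str                           # item 3: db query, install script, or manual step
    automated: bool
    run: Optional[Callable[[], None]] = None   # script to execute if automated
    validated: bool = False                    # item 5: manual validation result

@dataclass
class PromotionRequest:
    application: str
    approvals: List[Approval] = field(default_factory=list)  # item 1
    tasks: List[Task] = field(default_factory=list)          # items 3-5, in execution order
    submitted: Optional[datetime] = None                     # item 6: SLA clock starts here
    completed: Optional[datetime] = None

    def all_approved(self) -> bool:
        # Item 2: nothing deploys unless every approval passes (or auto-approves).
        return all(a.effective() is True for a in self.approvals)

    def execute(self) -> None:
        if not self.all_approved():
            raise RuntimeError("deployment blocked: approvals incomplete or denied")
        for task in self.tasks:                               # item 4: ordered execution
            if task.automated and task.run:
                task.run()
            else:
                print(f"MANUAL STEP: {task.description}")     # item 5: recipe for the operator
        self.completed = datetime.now()                       # item 6: SLA clock stops here
```

Item 7’s extension point is the part most tools get wrong: ideally the custom fields and routing would be configuration layered on top of a model like this, rather than a hard-coded enterprise workflow you have to fight.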
These issues are especially acute when you’re doing BPM deployments, because so often they are incremental or iterative in nature… and so often they touch so many other parts of the business.  You might think that because you’re integrated with the ESB or SOA stack, unit/system testing will involve fewer people as a result.  In practice, however, the ESB/SOA stack just adds one more testing component on top of the integrations to each of the systems your process connects with.  Those systems still have to be tested and validated by people, as does the ESB/SOA stack, as do the BPM changes.  A good process would not only be prescriptive for the various validation teams, but would also collect their results in an audit trail to inform the go-live/rollback decision.
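One possible shape for that audit trail, as a sketch with hypothetical names and assuming simply one record per validating team:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List

# Hypothetical validation record; one entry per team (connected system, ESB/SOA, BPM).
@dataclass
class ValidationResult:
    team: str          # e.g. "order system", "ESB", "BPM"
    passed: bool
    notes: str
    recorded: datetime

def go_live_decision(results: List[ValidationResult]) -> str:
    # Roll back if any validating team reports a failure or hasn't reported at all.
    return "go-live" if results and all(r.passed for r in results) else "rollback"
```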