Sandy Kemsley recently blogged about a session led by Bank of America's Peter Braun at CASCON. The juicy part of her coverage is:
They also have to deal with model governance to keep the WBM and IFW models in sync: each are used for different types of models at different points in the modeling lifecycle, and have specific strengths such that they want to continue using both tools. Because there are multiple non-integrated tools, they need to do model management to consider the interactions between model types, and there is a great deal of manual work to be done when a model is exported between the two environments. After the initial export/import, any changes in a model in one environment have to be manually made to match in the other modeling environment. They have issues with version management, since there is no proper repository being used for the models, and the modelers can end up overwriting each other's efforts in a shared area on a server. They've also looked at ILOG, and have further issues with managing consistency between the rules designed in WBM - which map to the process server - and when those rules are rewritten in ILOG in order to externalize them.
Well. If this isn't an argument for a model-preserving strategy, I don't know what is. If you're going to use different types of models, my advice would be to use them to model different types of things. For example, using an ERD to model a database schema makes perfect sense. Using Value Stream mapping makes perfect sense. But using two different process modeling notations only makes sense if you can keep one of them at a different "layer" than the other, or simply constrain each to a different level of detail.
Even then, you have to worry about managing versions. The paragraph above makes me think that seamlessly integrated version management of process artifacts is the way to move forward - because it provides a better user experience. (If it is integrated version management and *doesn't* provide a better experience, then I wouldn't recommend the tool.)