It is a simple idea, but one of those key differences that makes all the difference. We all know the traditional process life cycle: design the process, automate it, measure performance, and cycle around to improve the design. Instead, we should throw out the old process life cycle entirely. Don’t design a process; instead, give people a tool they use to get work done. Then, after the fact, we look and see what the process was.
Traditional Process Life Cycle
The traditional process life cycle for BPM has been set out in a very definite form. We start by designing a process — maybe laying out the boxes on a flow diagram. We interview people and ask them what they think the process is. Or maybe we use a tool that allows business people to collaborate directly on describing the process. Implementation then starts, either with a model-preserving strategy where the process diagram is interpreted directly, or by somehow transforming the diagram into an executable form. The completed application is tested in the standard manner, and finally deployed into actual production use.
After deploying, we can switch to a number of different tools to monitor and measure the success of the process: process analytics, history, or even just asking the users where the process works and where it does not. We use that insight to improve the process, and after testing, the improved application is deployed to production.
This is not just the BPM life cycle; everyone in the IT department knows this is the right and proper way to make an application or solution of any type. BPM offers some special capabilities through more powerful tools, but the life cycle is the same.
Flipping the Life Cycle Stands This Approach on Its Head
Step 1: deploy the system into production with real users. There is no need to develop an application or a solution; people simply start using it. The system itself is useful without any modification, the same way that a telephone or email is useful without building an application.
Step 2: after it has been in use for a while, use process mining to see what the process has been. Process mining gives you an aggregate picture across an organization of any size, picking out the most common processes and the exceptions. You have access to metrics telling you how long workers spent in any given step. It can show you what percentage of the time the work proceeded down one path or down another. From this you understand how to improve the work of the organization.
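To make the idea concrete, here is a minimal sketch of the kind of analysis process mining performs. The event log format, activity names, and timestamps are all hypothetical; real tools work on much larger logs and produce richer models, but at heart they reconstruct each case's trace, count how often each path variant occurs, and measure how long cases dwell in each step.

```python
# Hypothetical event log: (case_id, activity, timestamp) triples,
# the raw material that process mining works from.
from collections import Counter, defaultdict
from datetime import datetime

events = [
    ("case1", "receive", "2024-01-01 09:00"),
    ("case1", "review",  "2024-01-01 10:30"),
    ("case1", "approve", "2024-01-01 11:00"),
    ("case2", "receive", "2024-01-02 09:00"),
    ("case2", "review",  "2024-01-02 09:45"),
    ("case2", "reject",  "2024-01-02 10:00"),
]

# Reconstruct the trace for each case, ordered by timestamp.
traces = defaultdict(list)
for case_id, activity, ts in events:
    traces[case_id].append((datetime.fromisoformat(ts), activity))
for trace in traces.values():
    trace.sort()

# Count path variants: the most common ones are the "discovered" process,
# the rare ones are the exceptions.
variants = Counter(tuple(act for _, act in trace) for trace in traces.values())

# Time spent in each step, measured as the gap until the next event.
step_time = defaultdict(list)
for trace in traces.values():
    for (t0, act), (t1, _) in zip(trace, trace[1:]):
        step_time[act].append((t1 - t0).total_seconds() / 60)

for variant, count in variants.most_common():
    print(" -> ".join(variant), f"({count} cases)")
for act, minutes in step_time.items():
    print(act, f"avg {sum(minutes) / len(minutes):.0f} min")
```

From counts like these, the percentage of work that went down one path rather than another is just the variant count divided by the total number of cases.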
Flipping the life cycle is an extension to the basic Adaptive Case Management (ACM) pattern.
Why Flip the Life Cycle?
Flipping the life cycle is a useful technique in a number of situations:
- The process is too vast to automate. Consider a patient today interacting with a healthcare provider. There are literally thousands of reasons that a person might need to interact with a doctor today. It is simply not possible to ask everyone to stop getting healthcare until we have had time to automate every possible interaction. Parts of this interaction are being automated more and more, but the show must go on, and it will be a long, long time before that job is complete.
- The work is too complex to automate. You think that it is impossible for a process to be too complex? You think that a programming language can handle any degree of complexity? Tell that to someone who is negotiating the merger of two companies. The complexity is great, and the factors involved have not even all been enumerated. Case managers are doing these jobs today, and they will not be automated any time soon.
- The work is too unpredictable. Anyone who has followed my writings has seen plenty of examples of processes that are done only once and then thrown away: the board of directors that asks a company to shift the focus of a product line in a new direction, or to consolidate two different departments into one. It simply is not economical to automate processes that are different every time.
- The work requires a person who has knowledge specific to that particular situation. These are what Jacob Ukelson calls knowledge processes, and they do not have the same characteristics as routine business processes.
Flipping the process life cycle avoids the up-front expense and the lock-in to a suboptimal path, while still giving you many of the advantages of being able to measure process performance and find ways to improve the work.
What Difference Does It Make?
With Adaptive Case Management (ACM), we often say there is no life cycle, meaning that there is no separation between design and doing. There is no distinction between the development environment and the run-time environment. Planning is part of doing. In fact, you don’t really design a process; you just plan your work. Planning is somewhat like process design, but it is much less like programming, and a lot more like communicating.
Planning for knowledge work has to be done by the knowledge workers themselves, without specialized skill in designing processes. The doctor himself plans the treatment. The detective herself plans the investigation. These people are specialists in their field, but have no special skills in process design. ACM must not require specialized process skills.
Note that the traditional process life cycle enables different technical specialists to be involved at different times: at the very beginning, high-level requirements are drawn up by the business owner; then business workers help define the process; then the process specialist (a.k.a. process analyst) figures out how best to express this as a process model; then developers and testers may be involved in crafting the application, forms, and data persistence; finally, an administrator will deploy and manage the final application. The fact that you have specialists for process support means that you can have much more specialized tools, and that drives a kind of arms race of features for more powerful process support.
I have nothing against powerful tools for developing applications, but they quickly become so specialized that a typical doctor, policeman, lawyer, judge, or nurse can no longer use them effectively. The highly specialized tools distance the knowledge worker from the planning activity.
Many knowledge workers will plan and complete their own work. They are evaluated by the normal means: satisfied clients and financial measures. When warning signs appear, managers of large organizations will want to know more about what is going wrong. Post-facto process mining gives those managers many of the benefits of a rigorous process: they have visibility into what has been working well, and they have a visual representation of why particular processes did not go well.
The traditional life cycle introduces a delay between the design of the process and the actual use of the process. This is not a technical delay, as most BPM vendors will show you one-button deployment that can put a process on-line in seconds. Instead, the delay is caused by more mundane reasons having to do with approvals, sign-offs, testing, debugging, and simply the manpower available to do the improvement. By flipping the order, you can actually compare the work with a process that was designed after the work was done.
If you think about this last idea, it is really amazing: if your process lasts years, you can monitor progress, discover practices from cases that are going well, and then apply those practices to other cases — even when those techniques were not known when the work was started.
Pingback: What the Process Mining Manifesto means to ACM | Collaborative Planning & Social Business
I really liked this post, and it hits on a point that is critical for managing knowledge processes – iterative design and understanding how work actually gets done. If we want to improve knowledge worker processes, the first thing we need to do is collect the right data – not after-the-fact, symptomatic, biased data, but true “in-vivo” data of how the process is actually performed. Only then can we be effective at resolving the problems and speeding up the process. If we could view knowledge worker decision processes as they are executed, then most of the difficulty involved with speeding up such a process (or fixing a broken one) would go away… It would be simple to see where any given decision process stands at any given time, and to understand whether it is healthy or not. I blogged about this a while back:
This is a really good post. I suggest that process-centric analytics (“process mining” being a part of that) are perhaps more fundamental to BPM than design, modelling, and execution. Process-centric analytics can be applied in all scenarios, whether processes are structured or seemingly unstructured, like case management and/or “knowledge” work. You also point out that these kinds of analytics lead to the identification of best practices. Optimization techniques and tools are under-represented in a BPM solution space dominated by modelling and executing technology. Process-centric analytics will be an enabler of optimization tools. This is why the recent initiation of work by the Workflow Management Coalition (WfMC) Process Simulation and Optimization Working Group, established to create a new Process Simulation and Optimization Standard, is so timely.
John, thanks for the comment. I was glad to see you were participating in the simulation and optimization working group. It seems to me that that group is quite promising.
Very nice post. I like the idea of an iterative approach, not only to understand the actual process better, but to find the right balance between structure and freedom. A challenge we are facing is how to make the tool valuable by providing visibility and rigor, without making the tool so difficult to use that no one adopts it.
It would be interesting to explore what you mean by “rigor.” Do you have examples?
What I mean by rigor… is for a leader to have complete visibility into a request made, a task assigned, a decision implemented. The rigor would be in being able to know whether people have done what they said they will do. However, the system of record needs to be simple to use and not overly complex.
OK, like transparency and completeness of record so there is complete oversight of the activity.
“transparency and completeness of record so there is complete oversight of the activity” – Keith, this is very well said. I see now why you are the expert and blogger and I’m just commenting 🙂
So we need a tool that can provide that, that is as easy to use as email… I don’t think Outlook tasks will meet the needs. We’re exploring some BPM-like tools and project management ones with simple workflow.
Pingback: Process Mining Manifesto clarifies Market for APD « Fujitsu Interstage Blog
Pingback: Fill in the White Space, and Inverting the Process Life Cycle » Process for the Enterprise
A response to this was posted by David Brakoniecki at