In an earlier post, I introduced the concept of a “Model Preserving Strategy” versus a “Model Transforming Strategy” and defined them as two approaches that a BPMS can take in the lifecycle of a business process. I then described a couple of situations where the Model Preserving Strategy is the better choice, but it is not always the better choice. This post is dedicated to the situations where the Model Transforming Strategy shines.
The main reason for transforming a model into another form is to realize performance improvements.
Transforming for Execution
Processes and programs are written by people, but execute in a machine. Those machines (or execution environments) have certain characteristics: limitations to be worked around, and affordances for doing specific things well. Transforming the model can allow you to take advantage of those affordances, or work around the limitations, to produce a faster-running program. This is like an optimizing compiler that takes a third-generation language and converts it to machine code using specific capabilities of the target machine in order to run faster.
The model transforming strategy can do the same for business processes by taking in a diagram that is meaningful to a business analyst and producing an optimized output that runs faster in the execution environment.
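A toy sketch can make the distinction concrete. All names here are invented for illustration, and no real BPMS is this simple: the same process model can either be interpreted step by step at run time (model preserving) or transformed once into a specialized function for the execution environment (model transforming).

```python
# Hypothetical illustration only: a simple sequential process model,
# as a business analyst might draw it, is either walked at run time
# or "compiled" once into straight-line code.

process_model = ["validate_order", "check_credit", "ship_goods"]

# Invented step implementations keyed by activity name.
STEPS = {
    "validate_order": lambda case: {**case, "valid": True},
    "check_credit":   lambda case: {**case, "credit_ok": True},
    "ship_goods":     lambda case: {**case, "shipped": True},
}

def interpret(model, case):
    """Model preserving: walk the diagram itself on every execution."""
    for step in model:
        case = STEPS[step](case)
    return case

def transform(model):
    """Model transforming: generate a specialized function once,
    resolving the step lookups ahead of time."""
    fns = [STEPS[s] for s in model]
    def compiled(case):
        for fn in fns:
            case = fn(case)
        return case
    return compiled

run = transform(process_model)
print(run({"order_id": 1}))
```

The transformed version produces the same result as interpreting the diagram; the difference is that the lookup and dispatch work has been done ahead of time, which is the essence of optimizing for the execution environment.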
Transforming for the Developer
Another reason to transform the model is to put it in a form that is comfortable for the system integrator. If you have Java developers on staff, then transforming the BPMN model to Java allows those developers to be more effective, with less training, than if they were forced to learn to work within the constraints of BPMN.
This will not necessarily be from BPMN to a programming language. My post on “Human Facilitator Processes” illustrates a transformation from BPMN to BPMN. The difference is that the business domain model was a BPMN diagram representing the flow of responsibility, while the system domain model was a BPMN representation of the flow of data.
The point is that the transformation is from a form that is best for a business analyst to use, to a form that is best for the system integrator to use. If your business process requires a lot of system integration work, then optimizing the time of the developer might be significant.
Model Preserving Strategy
The entire reason for preserving the model across the lifecycle is to make end-to-end coordination easy at the expense of the intermediate steps. System engineers are not prevented from doing the system integration; they simply must do the work directly in the same model that the business person used. Execution of the model is not prevented either; it simply must execute from the same form that the business person used. This is the tradeoff.
How to Decide
If you have a BPM project that involves a lot of programming work to integrate systems, relatively less work from the business domain, and a low likelihood of a need to evolve the business model over time, then a Model Transforming Strategy will allow you to optimize the use of the systems engineers. They can work in a familiar way using tools and skills they already have.
If you anticipate a high rate of transaction processing, you may need to consider the Model Transforming Strategy for performance. How high is high? Start by counting the cases per month, multiply by the average number of activities per case, then divide by the number of hours per month to find the approximate number of activity executions per hour. If this works out to more than 1 million activity executions per hour, then you must consider carefully whether a compiled approach is required. If you anticipate 10,000 activity executions per hour or less, then there will be no problem using an interpreted approach, even on a very modest server. These are just ballpark figures; between these two you will have to consider the specifics of the implementation and the type of transactions being invoked. It is worth noting that many important business-critical processes involve fewer than 10,000 activity executions per hour, and performance is simply not an issue.
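The back-of-envelope calculation above can be written out in a few lines. The case counts are hypothetical, and 730 is roughly the number of hours in a month:

```python
# Rough throughput estimate for choosing between an interpreted
# (model-preserving) and compiled (model-transforming) approach.
# Thresholds are the ballpark figures from the text.

def activity_executions_per_hour(cases_per_month, avg_activities_per_case,
                                 hours_per_month=730):
    """Approximate number of activity executions per hour."""
    return cases_per_month * avg_activities_per_case / hours_per_month

# Hypothetical process: 50,000 cases/month, 12 activities per case.
rate = activity_executions_per_hour(50_000, 12)
print(round(rate))  # 822 activity executions per hour

if rate > 1_000_000:
    print("Carefully consider whether a compiled approach is required.")
elif rate <= 10_000:
    print("An interpreted approach is fine, even on a modest server.")
else:
    print("Depends on the implementation and the transactions invoked.")
```

At 822 executions per hour, this hypothetical process sits comfortably in the range where an interpreted, model-preserving approach poses no performance problem.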
Those knowledgeable about performance know that programs spend most of their time in 5% of the code. If you can optimize that 5%, you can often attain nearly all of the benefit of optimizing the entire code. This explains why interpreted BPM diagrams working at a high level, calling into functions which are themselves optimized, often perform just about as fast as compiled programs. Overall performance is thus less related to whether the code is compiled, and more related to whether the supporting functionality is a good fit for the need.
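The hot-spot argument can be sanity-checked with an Amdahl's-law style calculation. The fractions here are illustrative, not measurements:

```python
# Amdahl's-law check of the hot-spot claim: if the hot 5% of the code
# accounts for 95% of the run time, then optimizing only that hot code
# captures most of the benefit of optimizing everything.
# All numbers are illustrative assumptions, not measurements.

def speedup(time_fraction_optimized, factor):
    """Overall speedup when `time_fraction_optimized` of the run time
    is made `factor` times faster (Amdahl's law)."""
    return 1 / ((1 - time_fraction_optimized)
                + time_fraction_optimized / factor)

hot_only   = speedup(0.95, 2)   # optimize just the hot 5% of code (95% of run time)
everything = speedup(1.00, 2)   # optimize all of the code by the same factor

# Optimizing only the hot code captures about 95% of the total gain.
print(hot_only, everything, hot_only / everything)
```

This is why an interpreted process engine that delegates the heavy lifting to already-optimized functions gives up very little: the interpretation overhead lives in the cold fraction of the run time.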
The rise in popularity of interpreted languages for web applications is an indication that CPU performance is no longer the limiting factor in many web applications. Experience shows that servers are now limited more by IO bandwidth than by CPU usage, so overall performance can often be enhanced by spending CPU time interpreting a language if in return you can reduce the data transfer requirements.
If most of the project effort is going to be system integration, or you have an extremely high rate of activity execution, then the Model Transforming Strategy gives you the flexibility to optimize for the programmer and for the specific requirements of the execution environment.