Model Strategy & Performance

In an earlier post, I introduced the concept of a “Model Preserving Strategy” versus a “Model Transforming Strategy” and defined them as two approaches that a BPMS can take in the lifecycle of a business process. I then posted a couple of situations where the Model Preserving Strategy is the better choice, but it is not always so. This post is dedicated to those situations where the Model Transforming Strategy shines.

The main reason for transforming a model into another form is to realize performance improvements.

Transforming for Execution

Processes and programs are written by people, but execute in a machine. Those machines (or execution environments) have particular characteristics: affordances for doing specific things quickly, and limitations to work around. Transforming the model can allow you to take advantage of those capabilities, or work around the limitations, to get a faster-running program. This is like an optimizing compiler that takes a third-generation language and converts it to machine code that uses specific capabilities of the target machine in order to run faster.

The alternative to compiling a program to machine code is to interpret the program. Interpreted languages such as Lisp, Perl, PHP, Ruby, and JavaScript come to mind. Sometimes a “just in time” compiler is used to compile the code just before it is run, but the environment does this transparently, so the transformed version is never seen by anyone. Java is compiled to an intermediate bytecode form that is then interpreted (or JIT-compiled) at run time. The trade-off between using a compiled language and an interpreted language is much debated, and the right answer still depends upon how much performance you need. The core of a database server is a performance-critical component clearly requiring a highly optimized language. Many web application development environments use an interpreted language, because CPU performance is not a critical aspect of these types of applications.

The model transforming strategy can do the same for business processes by taking in a diagram that is meaningful to a business analyst and producing an optimized output that runs faster in the execution environment.

Transforming for the Developer

Another reason to transform the model is to put it in a form that is comfortable for the system integrator. If you have Java developers on staff, then transforming the BPMN model to Java would allow them to be more effective, with less training, than if they were forced to learn to work within the constraints of BPMN.

The transformation will not necessarily be from BPMN to a programming language. My post on “Human Facilitator Processes” illustrates a transformation from BPMN to BPMN. The difference is that the business domain model was a BPMN diagram representing the flow of responsibility, while the system domain model was a BPMN representation of the flow of data.

The point is that the transformation is from a form that is best for a business analyst to use, to a form that is best for the system integrator to use. If your business process requires a lot of system integration work, then optimizing the time of the developer might be significant.

Model Preserving Strategy

The entire reason for preserving the model across the lifecycle is to make end-to-end coordination easy, at the expense of the intermediate steps. System engineers are not prevented from doing the system integration, but they must do the work directly in the same model that the business person used. The execution of the model is not prevented either; it simply must execute from the same form that the business person used. This is the tradeoff.

How to Decide

If you have a BPM project that involves a lot of programming work to integrate systems, relatively little work in the business domain, and a low likelihood of needing to evolve the business model over time, then a Model Transforming Strategy will allow you to optimize the use of the systems engineers. They can work in a familiar way using tools and skills they already have.

If you anticipate a large rate of transaction processing, you may need to consider the Model Transforming Strategy for high performance. How large is large? Start by counting the cases per month, multiply by the average number of activities per case, then divide by the number of hours per month to find the approximate number of activity executions per hour. If this works out to be more than 1 million activity executions per hour, then you must consider carefully whether a compiled approach is required. If you anticipate 10,000 activity executions per hour or less, then there will be no problem using an interpreted approach, even on a very modest server. These are just ballpark figures; between these two points you will have to consider the specifics of the implementation and the type of transactions being invoked. It is worth noting that many important business-critical processes involve fewer than 10,000 activity executions per hour, and for them performance is simply not an issue.
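The back-of-the-envelope calculation above is easy to sketch in code. This is just the arithmetic described in the paragraph; the example volumes are made-up numbers, not figures from any real deployment:

```python
def activity_executions_per_hour(cases_per_month, avg_activities_per_case,
                                 hours_per_month=730):
    """Estimate throughput: (cases/month * activities/case) / hours/month.

    730 is the approximate number of hours in a month (365 * 24 / 12).
    """
    return cases_per_month * avg_activities_per_case / hours_per_month

# Hypothetical example: 100,000 cases/month, 20 activities per case
rate = activity_executions_per_hour(100_000, 20)
print(round(rate))  # about 2,740 -- well within interpreted territory
```

By this estimate, even a fairly busy process handling 100,000 cases a month stays two orders of magnitude below the 1-million-per-hour threshold where a compiled approach would need serious consideration.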

Those knowledgeable about performance know that programs spend most of their time in 5% of the code. If you can optimize that 5% of the code, you can often attain 99% of the benefit of optimizing the entire program. This explains why interpreted BPM diagrams, working at a high level by calling into functions which are themselves optimized, often perform just about as fast as compiled programs. Thus overall performance is less related to whether the code is compiled or not, and more related to whether the supporting functionality is a good fit for the need.

The rise in popularity of interpreted languages for web applications is an indication that CPU performance is no longer a limiting factor in many web applications. Experience shows that servers are more limited by IO bandwidth now, than they are by CPU usage, and so overall performance can often be enhanced by spending CPU time on interpreting a language if in return you can reduce the data transfer requirements.


If most of the project effort is going to be system integration, or you have an extremely high rate of activity execution, then the Model Transforming Strategy gives you the flexibility to optimize for the programmer and for the specific requirements of the execution environment.

3 thoughts on “Model Strategy & Performance”

  1. The interpreted vs. compiled debate has been going on for a long time now. I think at any given point in time, the compilers have a very good argument because they can find situations where a higher degree of performance is required. Even in the BPM world, if I can imagine a process that executes 10,000 times an hour, you can just imagine that same process at Wal-mart to picture that process running 10,000,000 times an hour!

    However, the right design decisions for software are not judged by a single point in time, these decisions are judged over time… and over time, the interpreters have a lot working in their favor… I think having a good representation is actually more important than having the most performant one at any given point in time…

  2. Pingback: Process for the Enterprise » Blog Archive » Modeling and Performance

  3. Pingback: Process for the Enterprise » Blog Archive » Keith Swenson on Model-Preservation vs. Model-Transformation
