Final Keynote EDOC 2014: Barbara Weber

Barbara Weber is a professor at the University of Innsbruck in Austria.  Next year she will host the BPM 2015 conference there.  She gave a talk on how her group studies the difficulties of process modeling.   My notes follow:

Most process model research focuses on the end product: the process models themselves. Studies have shown that a surprisingly large share of existing models, from 10% to 50%, contain errors.  Typically, process models are created and then the quality of the final model is measured, looking at factors such as model complexity, modeling notation, and secondary notation, and measuring accuracy, speed, and mental effort.   Other studies take collections of industrial models, measure size, control-flow complexity, and other metrics, and look for errors like deadlocks and livelocks.
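To make the kinds of metrics these studies compute a bit more concrete, here is a small sketch of mine (not from the talk): it counts elements and computes Cardoso's control-flow complexity (CFC) over a toy model structure, where an XOR split contributes its fan-out, an OR split contributes 2^n - 1, and an AND split contributes 1.

```python
# Sketch (not from the talk): simple size and control-flow-complexity (CFC)
# metrics over a toy process-model structure. The CFC weighting follows the
# commonly cited definition: an XOR split adds its fan-out, an OR split adds
# 2^n - 1, an AND split adds 1.

def model_metrics(nodes, edges):
    """nodes: dict of node_id -> type ('task', 'xor', 'or', 'and');
    edges: list of (source_id, target_id) pairs."""
    fan_out = {n: 0 for n in nodes}
    for src, _tgt in edges:
        fan_out[src] += 1

    cfc = 0
    for node_id, node_type in nodes.items():
        n = fan_out[node_id]
        if n <= 1:
            continue  # not a split, contributes nothing
        if node_type == "xor":
            cfc += n
        elif node_type == "or":
            cfc += 2 ** n - 1
        elif node_type == "and":
            cfc += 1

    return {"size": len(nodes), "edges": len(edges), "cfc": cfc}


if __name__ == "__main__":
    nodes = {"a": "task", "g1": "xor", "b": "task", "c": "task"}
    edges = [("a", "g1"), ("g1", "b"), ("g1", "c")]
    print(model_metrics(nodes, edges))  # {'size': 4, 'edges': 3, 'cfc': 2}
```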

The standard process modeling lifecycle is (1) elicitation followed by (2) formalization. The first part needs good communication skills; the second requires skill in a particular notation. She calls this the PPM (process of process modeling). Understanding it better would help both practice and teaching. It can be captured from several different perspectives:

1) logging of modeling interactions
2) tracking of eye movement
3) video and audio
4) biofeedback, such as heart rate

The Nautilus project focused on logging the modeling environment. The Cheetah Experimental Platform (CEP) guides modelers through sessions, records every interaction, and can replay the whole session later.  The resulting events can be imported into a process mining tool to analyze the process of process modeling.  She showed some details of the log file that is captured.
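To illustrate what such a replayable interaction log might look like, here is a minimal sketch of mine; the event format is made up and is not CEP's actual schema, but it shows the idea of recording timestamped create/move/delete events and replaying them to reconstruct the canvas at any point in time.

```python
# Sketch with a hypothetical event format (not CEP's actual schema):
# every user interaction is appended to a log, and replaying the log
# up to a timestamp reconstructs the model as it looked at that moment.
from dataclasses import dataclass

@dataclass
class ModelingEvent:
    timestamp: float          # seconds since session start
    kind: str                 # 'create', 'move', 'delete', ...
    element_id: str
    payload: dict             # e.g. element type, new position

def replay(events, up_to):
    """Rebuild the set of elements on the canvas at time `up_to`."""
    canvas = {}
    for ev in sorted(events, key=lambda e: e.timestamp):
        if ev.timestamp > up_to:
            break
        if ev.kind == "create":
            canvas[ev.element_id] = ev.payload
        elif ev.kind == "move":
            canvas[ev.element_id] = {**canvas[ev.element_id], **ev.payload}
        elif ev.kind == "delete":
            canvas.pop(ev.element_id, None)
    return canvas
```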

Analyzing the logs at this fine-grained level was not going anywhere, because the result looked like a spaghetti diagram.  They therefore broke the formalization stage into five phases:

  • Problem understanding: what the problem is, what has to be modeled, what notation to use
  • Method finding: how to map the things to be modeled into the modeling notation
  • Modeling: actually doing the drawing on the canvas
  • Reconciliation: improving the understandability of the model, through factoring, layout, and typographic cues, all of which make maintenance easier
  • Validation: searching for quality issues by comparing the external and internal representations, covering syntactic, semantic, and pragmatic quality issues

They validated this with users doing “think aloud” sessions.  They could then map the different kinds of events to these phases; for example, creating elements belongs to the modeling phase, while moving and editing existing elements is more often reconciliation.  She showed charts from two users: one spent a lot of time on problem understanding and then built quickly, while the other proceeded quite a bit more slowly, adding and removing things over time.
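A crude version of that mapping could look like the sketch below. This is my simplification, and the event kinds and thresholds are illustrative; their actual analysis works on bursts of events and pauses rather than classifying single events in isolation.

```python
# Sketch: a naive event-type-to-phase mapping. The real analysis groups
# bursts of events and also uses pauses (comprehension) rather than
# classifying each event on its own.
PHASE_BY_EVENT_KIND = {
    "create": "modeling",
    "connect": "modeling",
    "move": "reconciliation",
    "rename": "reconciliation",
    "layout": "reconciliation",
    "delete": "validation",   # assumption: deletes often follow spotting a problem
}

def phase_of(event_kind, pause_before_s):
    # Long pauses with no canvas activity are treated as comprehension
    # (problem understanding / method finding) in this toy version.
    if pause_before_s > 30:
        return "comprehension"
    return PHASE_BY_EVENT_KIND.get(event_kind, "unknown")

print(phase_of("create", 2))    # modeling
print(phase_of("move", 45))     # comprehension
```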

Looking at different users, they found (unsurprisingly) that less experienced users spend a lot more time in the ‘problem understanding’ phase.  In ‘method finding’ they found that people with a lot of domain knowledge were significantly more effective.  Toward the end, long understanding phases occur around the clean-up.  They did not look at ‘working memory capacity’ as a factor, even though it is well known to be a factor in most kinds of modeling.

The second project, “Modeling Mind”, looks at eye movements and other biofeedback while modeling.  These additional events in the log add more dimensions of analysis.  With eye tracking you measure the number of fixations and the mean fixation duration, and define areas of interest (modeling canvas, text description, etc.).  They found that eye-trace patterns matched the phases of modeling well.  During initial understanding, people spend a lot of time on the text description with quick glances elsewhere.  During the building of the model, naturally they look at the canvas and the toolbar.  During reconciliation there is a lot of looking from model to text and back.
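For readers unfamiliar with these eye-tracking measures, here is a small sketch of mine showing how fixation count and mean fixation duration per area of interest (AOI) could be computed from a list of fixations; the AOI labels and data layout are illustrative, not from the study.

```python
# Sketch: fixation count and mean fixation duration per area of interest
# (AOI). Field names and AOI labels are illustrative, not from the study.
from collections import defaultdict

def aoi_stats(fixations):
    """fixations: list of (aoi_label, duration_ms) tuples,
    e.g. ('task_description', 412) or ('canvas', 180)."""
    durations = defaultdict(list)
    for aoi, duration_ms in fixations:
        durations[aoi].append(duration_ms)
    return {
        aoi: {
            "fixation_count": len(ds),
            "mean_fixation_ms": sum(ds) / len(ds),
        }
        for aoi, ds in durations.items()
    }

print(aoi_stats([("task_description", 412), ("canvas", 180), ("canvas", 240)]))
```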

What they would like next is a continuous measure of mental effort.  That would indicate when people are working hard and when that changes, which could give some important clues.  Nothing is available at the moment to make this easy, but they are trying to capture it, for example by measuring pupil size.  Heart rate variability is another way to approximate it.
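As an example of what a simple physiological proxy could look like, here is a sketch of mine computing RMSSD, a standard short-term heart-rate-variability measure, from beat-to-beat (RR) intervals; lower variability over a window is often read as higher strain, though interpreting it directly as mental effort needs care.

```python
# Sketch: RMSSD (root mean square of successive differences), a common
# short-term heart-rate-variability measure, over beat-to-beat RR intervals.
# Lower HRV in a window is often read as higher mental strain, with caveats.
import math

def rmssd(rr_intervals_ms):
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

print(round(rmssd([812, 798, 805, 790, 820]), 1))  # about 18.5 ms
```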

Conclusion: it is not sufficient to look only at the results of process modeling, the models that are produced; we really need to look at the process of process modeling: what people are actually doing at the time, and how they arrive at the outcome.  This is what you need to know in order to build better modeling environments, better notations and tools, and ultimately increase the quality of process models.  It might also lead to ways to detect errors as they are being made during modeling, and possibly ways to avoid them.

Note that there was no discussion today of the elicitation phase (process discovery), but that is an area they are studying as well.

The tool they use (Cheetah) is open source, so there are opportunities for others to become involved.

Q&A

Can the modeling tool simulate a complete modeling environment?  Some of the advanced tools check at run time and don’t allow certain syntactic errors.  Can you simulate this? –  The editor models BPMN, and the way it interacts with the user can be configured quite extensively.

Sometimes it is unclear what is the model and what is the description of the model.  Is this kept clearly separated in your studies?  Do we need more effort to distinguish these in modelers?  – We consider that modeling consists of everything, including understanding what you have to do, sense-making, and then drawing the model.

This is similar to cognitive modeling.  Have you considered using brain imaging techniques?  – We will probably explore that.  A student is now starting to look at this. We need to think carefully about whether the subject is important enough for such a large investment.

Have you considered making small variations in the description, for example a tricky keyword, and seeing how this affects the task?  – We did one study where we had the same requirements, slightly modified, to model.  These variations can have a large effect.

You are starting from a greenfield scenario, right?  What about using these techniques for studying process improvement on existing models? – We have studied this a little.  The same approach should work well, and it would definitely be interesting to do more work on this.

 
