I have written many times about our cultural tendency to simplify problems by splitting them into separate parts and solving each part on its own: this is reductionism. Scientific management is based on this idea, and it is one of the ideas that leads to problematic BPM implementations when the process is truly complex. In this post I consider where reductionism came from.
Principle of Non-Contradiction
I have often attributed reductionism to the thinkers of the Enlightenment, but actually it starts much earlier than that.
Aristotle, while attempting to build a framework for logically understanding the universe, presented his “Principle of Non-Contradiction” in his major work “Metaphysics”, which drew on earlier work by Protagoras, Parmenides, and Socrates. The ontological version of this principle is:
It is impossible that the same thing belong and not belong to the same thing at the same time and in the same respect.
Aristotle uses this as an organizing paradigm for knowledge itself. Using this principle, Aristotle believes that all things can be divided into categories. You might imagine those categories as a branching tree.
What this says, quite simply, is that you can pick a quality such as “A” and divide all things into those that have the quality A, and those that do not. It is impossible for something to both have and not have the quality A. Then, of those things with A, you can pick another quality and divide again. We all know this from the game “20 questions” where you attempt to identify something based on a series of yes/no questions. Is it an animal? Does it have 4 legs? Can it fly? And so on.
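As a toy illustration of this partitioning, here is a minimal sketch in Python; the items and qualities are invented for the example, and the point is just that each yes/no question splits the remaining set in two:

```python
# A minimal sketch of Aristotle's category tree: each yes/no quality
# divides the remaining set of things in two. Items are illustrative.
things = {
    "sparrow":  {"animal", "flies"},
    "dog":      {"animal", "four_legs"},
    "airplane": {"flies"},
    "table":    {"four_legs"},
}

def partition(items, quality):
    """Divide items into those that have the quality and those that do not."""
    has = {name for name, qs in items.items() if quality in qs}
    return has, set(items) - has

animals, non_animals = partition(things, "animal")
fliers, walkers = partition({k: things[k] for k in animals}, "flies")
print(animals, non_animals)  # {'sparrow', 'dog'} {'airplane', 'table'} (set order may vary)
print(fliers, walkers)       # {'sparrow'} {'dog'}
```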
Aristotle, and all of history up to this day, has applied this principle to the organization of knowledge. The pursuit of knowledge is broken into fields, and those fields into subfields, and people working in them specialize to such a high degree that they really cannot communicate effectively with people outside their field. There is the occasional voice against this, such as E. O. Wilson, whose book “Consilience” makes a brilliant case for unifying knowledge as an alternate way of understanding.
Reasoning
Having built an ontology, we need to also be able to reason about it, and again we go back to Aristotle, where we find the basic building block of logic: the syllogism, the essence of deductive logic. All men are mortal. Socrates is a man. Therefore we can deduce that Socrates is mortal.
You have a major premise along with a minor premise, and from them you draw a conclusion. Link these into a chain and you get polysyllogisms, also known as sorites, where you deduce outcomes from many separate rules as the conclusion of one rule becomes the minor premise of the next. These chains of logic, also known as a logical discourse, became the basis of reasoning for thousands of years.
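Such a chain is easy to mimic in code: treat each rule as “if X then Y” and keep applying rules until nothing new can be concluded. A minimal sketch, with rules invented for the example:

```python
# A minimal forward-chaining sketch of a polysyllogism: the conclusion
# of one rule becomes the premise of the next. Rules are illustrative.
rules = [
    ("man", "animal"),       # all men are animals
    ("animal", "mortal"),    # all animals are mortal
    ("mortal", "will_die"),  # all mortals will die
]

facts = {"man"}  # minor premise: Socrates is a man

changed = True
while changed:
    changed = False
    for premise, conclusion in rules:
        if premise in facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # {'man', 'animal', 'mortal', 'will_die'}
```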
Enlightenment
So in the 1600s Descartes and Newton found themselves in a scientific tradition based on this classical logic. Descartes added the idea that intuition should not be trusted, and that one should instead reason only from established facts by pure deduction. Descartes’ belief in the simple mechanistic nature of the world led him to promote the idea of dualism, in which the source of free will is somehow external to the mechanism of the body.
Newton took things a step further. He showed that complex phenomena are based on fundamental, simple laws. For example “F = ma” is a marvelously simple equation that explains so much about motion that for the first time people were able to calculate the motion of things in the real world.
The idea that underneath complex physical phenomena lie simple basic rules was very compelling. What is more, the idea bore fruit. Gravity was a simple force proportional to the masses involved and inversely proportional to the square of the distance between them. With this, and the idea of momentum, the orbits of the planets could be calculated. Maxwell came up with a set of simple formulas that describe electromagnetic phenomena. All fields of science took up the idea that there is a simple underlying rule.
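For concreteness, here are the two laws in their familiar textbook form, together with the one-step derivation (for the simplified case of a circular orbit) that shows how orbital motion falls out of them:

```latex
F = ma, \qquad F_{\text{gravity}} = G\,\frac{m_1 m_2}{r^2}

% Equate gravity with the centripetal force m v^2 / r needed to hold
% a body of mass m in a circular orbit of radius r around a mass M:
G\,\frac{M m}{r^2} = \frac{m v^2}{r}
\quad\Longrightarrow\quad
v = \sqrt{\frac{G M}{r}}
```

Two compact rules, and suddenly the speed of a planet in its orbit is a matter of arithmetic. It is easy to see why the approach was so seductive.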
Scientific Management
Management science was no exception. Design what is to be produced, and then look at each step of production. Break those steps into sub-steps. Frederick Winslow Taylor watched workers carefully and recorded exactly how much time and how much motion was necessary. Reduce the amount of motion and the work gets easier. Reduce the time and you can be more productive. It is an idea that fits perfectly with industrial-age mechanization.
When Henry Ford designed his factory, he took all these ideas into account, and the result was the first automobile that could be mass-produced at a price that a large part of the population could afford. What you need to remember is that Ford was in complete control of the factory line. Every aspect of building a car within that environment could be carefully controlled, and outside influences during production could be eliminated. In such a controlled environment, reductionism works fairly well.
Over-generalization
What is less discussed is the attempt to use reductionism in fields where it does not work. For example, Freud and many other psychologists attempted to find the basic rules behind the workings of the brain, but however much they tried, they found that the various divisions are always deeply interdependent on each other in complex ways.
Nutritionists tried to isolate the “vitamins” that are the essential ingredients of life, and gave them simple letter names in the misguided belief that there would be only a few of them. Now we know that biological systems need a huge variety of ingredients, and that the processing of one ingredient can affect the processing of another in complex ways. The best ‘symbol’ of reductionism is a mechanical duck that portrays the functioning of the digestive system as a machine. We know that ecosystems are not reducible to simple relationships, as experiences such as the Yellowstone predator policy have shown.
In management science there is a similar divide between fields where reductionism is useful and where it fails. Distinguishing these is really the subject of other blog posts; however, if you will allow a brief (and possibly oversimplified) statement: reductionism works reasonably well where you have complete control over the working environment, and it fails in situations where organizations must respond to external pressures. But … more on this later.
Breakdown of Reductionism
It really is not until the 20th century that you see serious philosophers, such as Karl Popper, considering the limits of reductionism. Kurt Gödel published his incompleteness theorems in 1931, which showed that reductionism is questionable even in purely logical realms such as mathematics, precisely the place where the purity of reductionism should hold if it holds anywhere.
The concept that is hard to explain from a reductionist point of view is that of emergent phenomena. A whirlpool is a phenomenon that is hard to explain by breaking it down into component parts. Instead, the whirlpool depends in a non-trivial way on the shape of the riverbed that the water is flowing over. It is even hard to describe the exact edge where the whirlpool stops and the regular river begins.
In the 19th century some phenomena, like turbulence, were believed to be caused by the “breakdown” of the normal rules. Somehow, the simple rules exist up to a point, and then the rules stop applying.
The real advance against reductionism found a mathematical basis in chaos theory, which demonstrated that no such breakdown is necessary, and showed how regular iterative interactions can produce both smooth and complex phenomena. This led researchers in the field of complexity, supported very notably by the Santa Fe Institute, to ponder the nature of complexity and why it does not yield to reduction.
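A one-line iteration is enough to see this. The logistic map, x → r·x·(1 − x), is the textbook example: the very same rule, iterated, settles to a smooth fixed point for one parameter value and wanders chaotically for another. A minimal sketch:

```python
# The logistic map: one simple iterative rule that yields smooth
# behavior for some parameter values and chaos for others.
def logistic_orbit(r, x0=0.2, skip=200, keep=5):
    """Iterate x -> r*x*(1-x), discard transients, return the tail."""
    x = x0
    for _ in range(skip):
        x = r * x * (1 - x)
    tail = []
    for _ in range(keep):
        x = r * x * (1 - x)
        tail.append(round(x, 4))
    return tail

print(logistic_orbit(2.8))  # settles to a single fixed point near 0.6429
print(logistic_orbit(3.5))  # oscillates among four values
print(logistic_orbit(3.9))  # chaotic: never repeats, sensitive to x0
```

No rule “breaks down” anywhere in this program; complexity emerges from the iteration itself.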
It is my experience that the general population does not know enough about complexity. It remains an esoteric science and is widely misunderstood. That is the nature of all great advances in science while they are in the process of formation. Understanding complexity, and how to cope with it, remains possibly the most important goal for management and organizations today, particularly because they must overcome their intuition for reductionism.
Keith, this is a great post consolidating the core rationale behind Adaptive Case Management: ‘That human business interactions can be described using an ontology, but not prescribed or predicted using it for reductionist decomposition.’
For those who are interested in the Complexity-ACM link in more detail, here are the related posts since 2008:
The Elements of Applications – http://isismjpucher.wordpress.com/2008/08/07/70/
The Death of Process – http://isismjpucher.wordpress.com/2008/12/17/the-death-of-process/
Complex Adaptive Process – http://isismjpucher.wordpress.com/2009/11/02/complex-adaptive-business-process/
Predictive Analytics and Causality – http://isismjpucher.wordpress.com/2009/11/18/predictive-analysis-and-causality/
The Problem with BPM Flowcharts – http://isismjpucher.wordpress.com/2010/10/04/the-problem-with-bpm-flowcharts/
Process Evolution – http://isismjpucher.wordpress.com/2011/01/28/process-evolution-between-order-bpm-and-chaos-social/
The Complexity of Simplicity – http://isismjpucher.wordpress.com/2011/07/06/the-complexity-of-simplicity/
The Age of Complexity – http://isismjpucher.wordpress.com/2012/08/16/sapere-aude-an-age-of-enlightenment-for-business/
Thanks again, Keith, for putting it together so elegantly as I am always much too wordy …
All the best, Max
Thanks Max. I know you have been prolific in exploring the connection between complexity and how reductionist approaches fail. I believe you were the one who recommended Karl Popper’s work to me. Appreciate the links.
Keith,
Yes, very much so.
Some comments:
“It is impossible for something to both have and not have the quality A. ”
and
“leads to problematic BPM implementations”
Yes, and it goes beyond that. The relational DB paradigm and object-oriented design and development are based on a similar assumption. If it is not said, if there is no place to put it, it doesn’t exist. You can’t have an instance unless there is a class to instantiate, and so on. The closed-world assumption brought a lot of problems for interoperability in particular and for dealing with information in general. Learning some “new” ways takes more effort when you have to first unlearn “old” ones.
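[To make Ivo’s point concrete, here is a toy sketch. Python’s `__slots__` is one small-scale version of the closed-world assumption: declare a fixed schema up front and the object literally has no place to put anything else. The class and attributes are invented for illustration.]

```python
# A toy illustration of the closed-world assumption in OO design:
# the class declares every attribute an instance may ever have.
class Customer:
    __slots__ = ("name", "address")  # the schema is fixed up front

    def __init__(self, name, address):
        self.name = name
        self.address = address

c = Customer("Ada", "1 Main St")
c.address = "2 Elm St"  # fine: the schema has a place for this
try:
    c.preferred_language = "en"  # no slot for it, so it "doesn't exist"
except AttributeError as e:
    print("rejected:", e)
```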
“The real advance against reductionism found mathematical basis with the work of chaos theory”
I believe it started earlier, with Henri Poincaré. It’s funny how pre-disciplinary thinking could later only be continued by cross-disciplinary thinking.
“It is my experience that the general population does not know enough about complexity. ”
Yes, but it has never been easier to get that knowledge. For example, the SFI you mentioned provides free MOOCs, one available this year and more planned for the next.
Ivo, thanks for the comment. I am sure you are right on all these points. The tail end of the post gets a bit weak because I am trying to bring it to a close; I don’t want to write 80,000 words, and if I do anything less I am employing reductionist techniques to fit something reasonable in. I really should do a post on the roots of holistic thinking, which I am sure go back to prehistory as well.
I love the connection you make to object oriented programming. It had not occurred to me, but certainly it is there. It seems a kind of arrogance to say that you are going to know in advance all of the possible attributes of something, or that something is defined by a fixed set of attributes in the first place. Yet that is how we model the world.