Over time, we’ve developed methods to deal with one type of uncertainty in technical problems, known as fuzziness. In 1965, mathematician and early AI researcher Lotfi Zadeh introduced fuzzy sets to represent knowledge that is unclear or imprecise, which we call “fuzzy.” In classical set theory, something either belongs to a set or it doesn’t. Fuzzy sets, by contrast, allow partial membership: an element can belong to a set to a degree.

Fuzziness isn’t randomness. Randomness concerns whether an outcome occurs at all; fuzziness measures the degree to which it occurs. For example, using fuzzy logic, we might say a project activity takes “about 4 weeks.” Other possible durations, such as 3 or 5 weeks, are assigned different membership degrees, or “beliefs,” based on how close they are to the vague notion of “about 4 weeks.”
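As a rough sketch (illustrative only, not Zadeh’s original formulation), “about 4 weeks” can be encoded as a triangular membership function; the peak at 4 weeks and the 2-week spread below are assumptions chosen for this example.

```python
def about_4_weeks(weeks, peak=4.0, spread=2.0):
    """Triangular membership function for the fuzzy notion "about 4 weeks".

    Returns a membership degree in [0, 1]: 1.0 at the peak (4 weeks),
    falling linearly to 0.0 at peak +/- spread. The peak and spread
    are illustrative assumptions, not values fixed by fuzzy set theory.
    """
    return max(0.0, 1.0 - abs(weeks - peak) / spread)

# A classical ("crisp") set answers only yes (1) or no (0);
# a fuzzy set grades how well each duration fits the vague description.
for w in [2, 3, 4, 5, 6]:
    print(f"{w} weeks -> membership {about_4_weeks(w):.2f}")
# 2 weeks -> 0.00, 3 -> 0.50, 4 -> 1.00, 5 -> 0.50, 6 -> 0.00
```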

Our brains are hardwired to see uncertainty as a risk or threat; it’s physiologically normal to feel stress when faced with unfamiliar situations. Uncertainty affects everything we do, including how we create models and come up with solutions, so we must constantly balance the precision of a solution against the uncertainty involved. Whether we use approaches like PMBOK or ADKAR, we often seek certainty rather than focusing on what truly matters: a significant, meaningful result. Structured processes and logical models give us a sense of predictability and sometimes lead to success, but they don’t always offer real precision, only the illusion of it.

In pursuit of accuracy, we tend to over-rely on frameworks and methods. But in complex projects, how much precision do we actually need? Are we sacrificing time and significance, the things that really matter, just to feel a sense of certainty? In complex systems, the more detail we demand, the less exact our information tends to be anyway: precision, information, and complexity are closely linked.

These frameworks can lead us to a false belief in predictability or accuracy, and that kind of self-deception feels only slightly better than admitting we don’t know. The reality is that high precision often costs significant time or money. As change practitioners and project managers, we should ask ourselves: do our problems truly demand such extreme precision?

Kolata’s (1991) signal routing problem illustrates this well. Finding a route through a network of 100,000 nodes to within 1% of optimal takes a great deal of supercomputer time, and tightening the tolerance to just 0.75% can take months. On the other hand, if we accept 3.5% accuracy (still more precise than most problems require), even while growing the network to 1,000,000 nodes, the computing time drops to a few minutes. Slightly relaxing precision can save a huge amount of time and money without necessarily reducing the impact of the solution.
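Kolata’s figures come from specialized routing algorithms, but the same non-linear trade-off shows up in even the simplest approximation schemes. The toy sketch below (a generic Monte Carlo-style estimate, not the routing problem itself) assumes the error shrinks with the square root of the number of samples, so demanding roughly five times better accuracy costs roughly twenty-two times the work.

```python
# Toy illustration (not Kolata's routing problem): if the error of an
# estimate scales as 1/sqrt(n), the work needed grows with the square
# of the desired accuracy.
def samples_needed(target_error, base_error=1.0):
    """Samples required to reach target_error, assuming error ~ base_error / sqrt(n)."""
    return (base_error / target_error) ** 2

baseline = samples_needed(0.035)  # the "good enough" 3.5% target
for tolerance in [0.035, 0.01, 0.0075]:
    n = samples_needed(tolerance)
    print(f"target error {tolerance:.2%}: ~{n:,.0f} samples "
          f"(~{n / baseline:.0f}x the work of the 3.5% target)")
```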

So, can we accept a little less accuracy in our solutions? It depends on the situation, but for most of the challenges we face in daily and organizational life, the answer is a resounding yes. A good, timely answer will always have more value than a perfect one that arrives too late.
