
What to do when forecasts and estimates conflict

Updated: Jul 20, 2022



At 55 Degrees, we think probabilistic forecasting is great. Heck, it is a key feature in our products ActionableAgile Analytics and Portfolio Forecaster. By basing forecasts of future outcomes on your past outcomes, you can achieve high accuracy with minimal effort. Assuming, of course, that your current conditions are similar to your past conditions.


Recently a customer had a great question stemming from his difficulties getting buy-in for using Monte Carlo simulations at his workplace.


Histogram results for Monte Carlo in ActionableAgile Analytics
What happens if the forecasted likely outcomes from a Monte Carlo simulation fly in the face of the team’s experience? How do you prove the model and get buy-in?

Here’s our answer…

One approach you might try is to talk up the pros of using a technique such as Monte Carlo simulation:

  • Uses actual data from what you’ve accomplished before

  • Can quickly run thousands of simulations to see how wide the range of possible outcomes really is

  • Can give you a better understanding of which of those outcomes is really most likely

  • Shifts the focus away from whether a single outcome is right or wrong and toward how confident you need to be in any given forecast (see the sketch below)
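To make those points concrete, here is what a Monte Carlo throughput simulation can look like. This is a minimal sketch in Python, not the algorithm inside ActionableAgile Analytics, and the throughput history, backlog size, and confidence levels are all hypothetical:

```python
import random

# Hypothetical data: items completed in each of the last 12 weeks.
historical_throughput = [3, 5, 2, 6, 4, 4, 7, 3, 5, 2, 6, 4]

remaining_items = 40   # hypothetical backlog size
trials = 10_000
outcomes = []

for _ in range(trials):
    done, weeks = 0, 0
    while done < remaining_items:
        # Each simulated week draws a throughput value from our own history.
        done += random.choice(historical_throughput)
        weeks += 1
    outcomes.append(weeks)

# Sort the simulated outcomes so we can read off percentiles.
outcomes.sort()
for confidence in (50, 85, 95):
    weeks = outcomes[int(trials * confidence / 100) - 1]
    print(f"{confidence}% of simulations finished within {weeks} weeks")
```

Run it a few times and that last bullet becomes visible: the question stops being “what is the date?” and becomes “how confident do we need to be in a given date?”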

That may make some headway. But many hard-core proponents of effort estimates may need a bit more before they accept that a simulation could determine possible delivery dates better than the experts themselves.

What I try to remember is that the real goal is better outcomes. Our success doesn’t (usually) live or die on which path people take to get there.

Add a pinch of science

So, in this situation, why not let them take a more empirical approach and, potentially, prove it to themselves?

  • Have people form a hypothesis about which approach they think will be better, and why. Capture their assumptions.

  • Then, run the experiment by producing both effort estimates and probabilistic forecasts for the same work.

  • Finally, reflect and draw conclusions about which approach was better.

What do I mean by better? That’s a good question to ask yourself whenever you compare a new option to your previous way of doing something. If either of the following conditions applies, I consider the new option to be “better”:

  • More accurate (especially when it adds no extra difficulty)

  • Same accuracy but easier and/or less time-consuming
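When you reach the reflection step, “more accurate” is something you can measure directly. Here is a minimal sketch in Python of one way to score both approaches against what actually happened; the recorded dates are entirely made up for illustration:

```python
from datetime import date

# Hypothetical records from three completed batches of work:
# (estimated finish date, 85%-confidence forecast date, actual finish date)
records = [
    (date(2022, 3, 4),  date(2022, 3, 11), date(2022, 3, 14)),
    (date(2022, 4, 22), date(2022, 5, 2),  date(2022, 4, 29)),
    (date(2022, 6, 10), date(2022, 6, 6),  date(2022, 6, 8)),
]

def mean_abs_error_days(pairs):
    """Average absolute miss, in days, across (predicted, actual) pairs."""
    misses = [abs((predicted - actual).days) for predicted, actual in pairs]
    return sum(misses) / len(misses)

estimate_error = mean_abs_error_days([(est, actual) for est, _, actual in records])
forecast_error = mean_abs_error_days([(fc, actual) for _, fc, actual in records])

print(f"Effort estimates missed by {estimate_error:.1f} days on average")
print(f"Probabilistic forecasts missed by {forecast_error:.1f} days on average")
```

Whichever number is lower wins on accuracy. If they come out roughly equal, the second condition, which approach was easier or less time-consuming, breaks the tie.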

A true scientist would tell you to repeat the experiment and see whether the outcomes continue to support that conclusion.

The worst-case scenario is that a hypothesis, yours or theirs, proves to be wrong. Either way, everyone wins, because you’ve proven which approach is more accurate.

 

Have you done this experiment? Share your outcomes in the comments below! Want to try out probabilistic forecasting? Try ActionableAgile Analytics for free whenever you’re ready.
