What Does a Good Forecast Look Like?

“Does anyone in FP&A ever review the accuracy of their predictive financial models from time to time?” That’s the question I posed on LinkedIn recently, and it generated a surprisingly good discussion. It turns out that FP&A professionals generally care quite a lot about the accuracy of their forecasts.

One of the businesses I worked in was strongly linked to the travel industry, so our business tended to fluctuate seasonally. But we never seemed to be able to get our P&L and volume forecasts right. It was a low-margin, high-volume business, so volumes made a big difference. It was so frustrating trying to predict the effect of Easter being early or late, public holidays, school holidays, and other factors like the economy, the weather, and exchange rates.

When we did variance analysis, the Managing Director would sometimes get frustrated with us, because the explanation for a variance would occasionally be that we’d neglected to include a factor in the forecast that we really should have known about (like having a different number of Saturdays in a month). And so, at one stage, we even looked at SPSS statistical modelling techniques as a way of helping.
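To show how simple that kind of calendar check can be, here’s a minimal sketch of my own (not something we actually built), assuming Python and its standard calendar module; the year is purely illustrative:

```python
# A minimal sketch of a calendar-effect check: how many Saturdays fall in
# each month of a given year. The year 2024 is an illustrative assumption.
import calendar

def saturdays_in_month(year: int, month: int) -> int:
    """Count the Saturdays in the given month."""
    return sum(
        1
        for day, weekday in calendar.Calendar().itermonthdays2(year, month)
        if day != 0 and weekday == calendar.SATURDAY  # day == 0 pads adjacent months
    )

for month in range(1, 13):
    print(calendar.month_abbr[month], saturdays_in_month(2024, month))
```

A volume forecast for a business that trades heavily on Saturdays could then scale its monthly baseline by that count.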

I’m glad we didn’t go for computerised statistical modelling in the end. It was too expensive and I don’t think the end result would have been much more accurate. And even if it would have increased the accuracy of our forecasts, would that have helped us? Would it have led to different behaviour, different action plans?

Is the forecast wrong or the performance?

One of the things that complicates the comparison of actual performance against forecast is that we’re talking about performance, not purely experience.

When we forecast the weather, naturally there is nothing that anyone can do to influence the outcome. It just happens. It’s a forecast of experience. So, any variation between the weather we actually get and the forecast is due to the forecast being inaccurate. Improving weather forecast accuracy is all about understanding the factors that affect the weather, and working out how to build them into a statistical model.

When we forecast performance, the one performing is supposed to be trying to influence the outcome. And then it becomes complicated and slightly circular.

A forecast of performance is not like a forecast of nature or experience. For instance, if a couple of years ago I had predicted that Usain Bolt would win every 100m and 200m race he ran over the following two years, you might have said that was a pretty reasonable forecast. However, as we all know, he lost the last 100m race of his career. So, my forecast would have been wrong. But was that because of something wrong in my forecasting process? Or because of Bolt’s performance? Or should my forecasting process have predicted that Bolt’s performance would dip? Or was it less about Bolt’s performance, and more about other competitors improving?

It’s the same with business. When our sales turn out different to forecast, is that because our modelling wasn’t sophisticated enough? Were we over-optimistic? Or is it because the Marketing team have done really well or really badly? Are our salespeople on a winning streak? Is our pricing wrong?

Forecasts are always wrong

One person who commented on the LinkedIn post I mentioned earlier wondered “who should be blamed” for over-predicting gross revenue, for example.

My position is that no-one should be “blamed” for not forecasting what actually happened, for two reasons:

Firstly, as I used to say to my FP&A team, “forecasts and budgets are always wrong”. No one in the world is able to predict what’s going to happen with 100% accuracy.

And the main reason for making that rather obvious point is to stop ourselves from focusing the forecasting effort too heavily on the analysis and number crunching. We have to build the models and do the analysis, sure. That takes effort and time. But refining and refining and refining, working on the model the whole time, focusing on the number that comes out at the end, without actually discussing the implications with the business, is missing the point.

And that’s where my second reason comes in…

We have to remember the reasons why we do (P&L) forecasts:

  1. To predict whether we are going to make the return that the shareholders want, so that we can start to take corrective action and/or manage the message;
  2. To be confident that if conditions continue in the way that we assume/predict, we will still be able to afford the spending plans we have in mind;
  3. To give managers an idea what is expected of them, on the basis of certain assumptions, in terms of business and revenue generation;
  4. To give managers an avenue to tell Finance of any changes in their spending plans – these spending plans have an obvious effect on the P&L forecast, but they also may affect some of the assumptions for other parts of the forecast (e.g. advertising campaigns that are linked with discount plans and sales volume assumptions).

And we have to ask ourselves, would we fulfil these objectives any better if we were able to predict the future with total accuracy? And to be honest, the answer is ‘yes’. The better you can predict the future, the more confident you can be in planning, etc. And at one end of the scale, if it were possible to predict the future with perfect accuracy, we could give cast-iron guarantees to our shareholders, and have 100% confidence in carrying out the business plan. And that would mean a very successful business, with investors always willing to put more money in!

But the Pareto principle undoubtedly applies – we probably get 80% of the accuracy of our models from 20% of the effort we put in – so the last 20% of accuracy takes the remaining 80% of the effort: four times as much as everything that went before it.

And actually, the other economic principle – the “law of diminishing returns” – comes into play as well. It gets harder and harder (more time consuming and more expensive) to get more accurate. So, to push past our existing level of accuracy will be, on average, even harder to achieve.
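To put rough numbers on those two principles, here’s an illustrative back-of-the-envelope calculation (my own, using only the 80/20 split from above):

```python
# Illustrative arithmetic only: effort per percentage point of forecast
# accuracy, using the 80/20 split quoted in the text.
first_accuracy, first_effort = 80, 20  # 80% of the accuracy from 20% of the effort
last_accuracy, last_effort = 20, 80    # the last 20% takes the remaining effort

cost_first = first_effort / first_accuracy  # 0.25 units of effort per point
cost_last = last_effort / last_accuracy     # 4.0 units of effort per point

print(f"The last stretch costs {cost_last / cost_first:.0f}x more effort per point of accuracy")
```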

What I’m saying is that it becomes a cost/benefit question. Are our forecasts good enough? What additional benefit would we get out of investing (in time, expertise and sophistication) to get greater accuracy?

Business forecasting pro, Steve Morlidge, also talks about “Good enough forecasting” in an article on his blog. And if you want an even deeper look into better forecasts, I’d recommend another of his articles – Why Bother Forecasting? – and you may want to take a look at his book, Future Ready: How to Master Business Forecasting.

Spurious Accuracy

One of the weird terms that resonates with me is “spurious accuracy”. I kind of like the fact that it sounds like an oxymoron, but it isn’t. It conveys the fact that sometimes we make our forecast models (or analysis in general) so complex that it looks like the answer should be highly accurate. But often all we’ve done is increase the likelihood of getting it wrong, by increasing the number of factors involved in getting to “the answer”.

There is a temptation to think that adding more factors into a model will increase its accuracy. But that’s not normally the case, unless you’re very careful. You have to take into account the variability of each of the factors you use, because the variability of the final output is a function of the variability of all the factors used in the model.

What that means is that quite often the complexity of a model – the thing that makes it look like it should be accurate – is the very thing that makes it inaccurate – spurious accuracy.
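To make that concrete, here’s a minimal Monte Carlo sketch of my own (not from any particular forecasting tool), assuming each factor enters the model as an independent multiplier carrying around 10% uncertainty:

```python
# A minimal Monte Carlo sketch: the more uncertain factors a model multiplies
# together, the wider the spread of its output. All figures are illustrative.
import random

def output_spread(n_factors: int, runs: int = 20_000) -> float:
    """Relative spread (std dev / mean) of a forecast built from n factors."""
    outputs = []
    for _ in range(runs):
        value = 100.0  # a baseline forecast, e.g. revenue
        for _ in range(n_factors):
            value *= random.gauss(1.0, 0.10)  # each factor carries ~10% uncertainty
        outputs.append(value)
    mean = sum(outputs) / runs
    std = (sum((x - mean) ** 2 for x in outputs) / runs) ** 0.5
    return std / mean

for n in (1, 3, 6, 10):
    print(f"{n:>2} factors -> output spread of roughly {output_spread(n):.0%}")
```

In this sketch, a one-factor model is uncertain to about 10%, while a ten-factor model built from equally uncertain inputs comes out roughly three times as uncertain, even though it looks far more sophisticated.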

A beautiful model

But it’s not just P&L forecasting that needs a predictive financial model. And in fact, when I asked the question on LinkedIn I was thinking more about other financial models, such as the ones that support project business cases, acquisition appraisal, big contracts and partnership deals, etc.

When we’re trying to make a strategic decision about pricing a big contract, the viability of a project, or how much we should pay to acquire a business, accuracy is very important. We need to use financial models that build in all the relevant factors. But how do we know how accurate they are?

It would be good to know, wouldn’t it? Then we could refine the information we use, and the models we build, to make better decisions.

There are a few problems, though.

Firstly, these models are normally custom-built in spreadsheets. One-off spreadsheets tend to be throw-away, and they rarely get revisited after the decision is made.

Secondly, the output of the decision-making models is not normally comparable, directly or easily, with anything in the accounting or MI systems. What I mean is that if we predicted additional revenue from a deal, for example, there may be no way of tracing the additional revenue we actually get back to what caused it. How much of it was due to that deal? How much was due to other factors?

Another problem is that sometimes the assumptions we have to use are not verifiable. For example, we may have to rely on information given to us during due diligence in an acquisition process. And this is the reason that there must be a link between the financial modelling and the legal contract processes in a big deal. Key assumptions that affect pricing or valuation might need warranties in the contract, and such like.

If you want to go deeper into the best ways to do financial modelling, and discover the best practice standards that exist to help you, then I’d recommend an article that Anders Liu-Lindberg and Lance Rubin recently published on LinkedIn. That’s a good introduction, and it contains several helpful links to get further help.

The real value of reviewing the models

As we conclude, it may seem as if the accuracy of predictive models is too difficult to measure, assess and improve.

As we’ve seen above, it is difficult. Whether it’s too difficult to attempt is a question of cost vs benefit.

Having a forecast is better than not having a forecast. And having a more accurate forecast is better than a less accurate one.

And therefore, finding a systematic way of measuring the accuracy of your predictive models must also have some value.
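And the review doesn’t need anything sophisticated to start with. Here is a minimal sketch, with purely hypothetical figures, tracking just two standard measures: bias (do we systematically over- or under-forecast?) and MAPE (how big is the typical miss?):

```python
# A minimal sketch of a systematic accuracy review, with hypothetical figures.
# Bias exposes systematic over/under-forecasting; MAPE shows the typical miss.
forecasts = [100.0, 110.0, 95.0, 120.0]  # hypothetical monthly forecasts
actuals   = [ 92.0, 108.0, 99.0, 111.0]  # hypothetical actual results

errors = [f - a for f, a in zip(forecasts, actuals)]

bias = sum(errors) / sum(actuals)  # signed and relative: positive = over-forecasting
mape = sum(abs(e) / a for e, a in zip(errors, actuals)) / len(actuals)

print(f"Bias: {bias:+.1%} (positive = over-forecasting)")
print(f"MAPE: {mape:.1%} (average relative size of the misses)")
```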

Even if you don’t arrive at definitive conclusions, the process of the review will:

  • Increase the depth of your knowledge and understanding of forecasting best practice; and therefore,
  • Improve the accuracy of your forecasts and predictive financial models over time;
  • Help you to understand the business and the environment in which it operates; and therefore,
  • Help you to give better support to strategic business decisions.

All in all, with all its difficulties, reviewing and assessing the accuracy of predictive financial models should be seen as a rewarding and beneficial process.

About the author

Andy Burrows is a freelance writer and blogger in subjects relating to Financial Management and Business Accounting. He has more than 20 years’ experience in senior roles in Finance, and lives in Hampshire, England with his wife and four children. More of Andy’s blog articles, along with other material and online courses for Finance professionals, can be found at www.superchargedfinance.com
