How reliable are financial forecasts?

14 September 2009
By Robert Keavney

Robert Keavney draws back the curtains and sheds some statistical light on financial forecasts.

Every week there are hundreds of thousands, probably millions, of financial predictions made somewhere on the planet.

Fund managers, economists, journalists, financial advisers, and private citizens express views about what will happen to equities, interest rates, currencies, commodities, real estate, and so on.

What percentage of them will prove to be accurate? What percentage will have been based on reliable methodology, giving them a reasonable prospect of being accurate?

There has been research into the reliability of financial predictions, and the results are not flattering.

Despite this, and despite the well-known biases that bedevil financial predictions, they continue to flood forth from most financial services organisations, the media and many individuals.

A study by Tadeusz Tyszka and Piotr Zielonka (Centre for Market Psychology, Leon Koźmiński Academy of Entrepreneurship and Management, Warsaw, Poland) compared the predictions of financial analysts with those of weather forecasters, and drew on other research on related topics, suggesting interesting conclusions.

It is essential to understand that this article refers to forming a view on the markets, not on individual securities. Its conclusions will not apply to stock-picking, and are only relevant for asset allocation decisions.

The study noted that, where it is difficult or impossible to verify the accuracy of experts’ judgments against objective criteria, the judgments are often tested for inter-judge correlation.

To explain this, assume that 10 mathematicians were asked to solve an equation that was too complex for any layman to understand. It would be reassuring if all 10 came to the same answer, even though it is theoretically possible that all are wrong.

Conversely, if all 10 came to different answers, one would feel doubts about the answer given by any one of them, even though one might be correct.

The study reported that the inter-judge correlation among auditors was found to be around 0.70.

In simple terms, this means that when an auditor forms a view on an item, most other auditors would form the same view. No doubt this is comforting to the clients of the auditing profession.

Clients of financial forecasters have less cause for comfort. Their inter-judge reliability was found to be below 0.40.

This tells us that Polish forecasters can’t agree on anything! Actually, common sense (and the Racial Discrimination Act) suggests the problem will not be confined to forecasters whose names end in ‘ski’.

There is a lack of agreement among the predictions of ‘financial experts’, which any casual observer could have noticed without the need for academic research.

Consistent with the analogy of the 10 mathematicians, prima facie this suggests we might be well served to view individual financial forecasts with a little scepticism.

After all, few other ‘experts’ would agree with any individual’s opinion.

Studies have shown that various kinds of experts are subject to over-confidence.

Someone who claimed he was 80 per cent certain of his forecasts, but who was right only 45 per cent of the time, would be said to have demonstrated over-confidence.
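Calibration of this sort is simple to measure in principle. A minimal sketch, using a made-up forecast record that matches the figures above (80 per cent stated confidence, right 45 per cent of the time):

```python
# Hypothetical record: 20 forecasts, each made with 80% stated
# confidence, of which only 9 (45%) turned out to be correct.
forecasts = [(0.80, i < 9) for i in range(20)]

stated = sum(conf for conf, _ in forecasts) / len(forecasts)
hit_rate = sum(1 for _, correct in forecasts if correct) / len(forecasts)

# A positive gap between stated confidence and hit rate is
# over-confidence; zero would be perfect calibration.
overconfidence = stated - hit_rate
print(stated, hit_rate)
```

The difficulty in practice is not the arithmetic but assembling an honest, complete record of a forecaster’s calls.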

One interesting conclusion of the Tyszka and Zielonka study was that “the weather forecasters reveal a smaller over-confidence effect than the financial analysts”.

Perhaps financial forecasters themselves might be well served to feel more scepticism about the value of their own public pronouncements.

The study explored the reasons advanced by experts when their forecasts failed and noted differences between the two groups.

One of these was a “significantly higher” acknowledgment by weather forecasters “that there is no certainty that the event in question can be accurately predicted”.

Financial analysts were less willing to recognise the possibility that they may simply be unable to achieve what they are trying to do.

Tyszka and Zielonka hypothesised an explanation for the less realistic self-assessments of financial analysts. Following Dawes, Faust and Meehl (1989), they recognised two methods of making expert predictions.

One relies on judgments that emerge from the head of experts, based on whatever information they feel is relevant. We will refer to this method as subjective judgment.

The other method relies on “an external procedure that reflects … empirically established relations”. We will refer to this method as statistical.

The study reported that the common methods of weather forecasting are all statistical while most financial forecasts are subjective judgments. It noted that: “Experts in domains where statistical procedures are used, when forecasting uncertain events … manifest less over-confidence.” It is difficult for statisticians to be over-confident about something when their statistics show them that it only works a certain percentage of the time.

A simple example can illustrate a statistical approach: given the chance to make an even-money bet (ie, double or nothing) on whether a roll of a die will produce a six, a statistician will always bet against it, whereas a chronic gambler may feel a ‘certainty’ that a six is due, based on a little voice in his head that no one else hears.

Statisticians will know that there is a one in six chance of being wrong and a six turning up. But they’d be delighted to bet against a six on a thousand rolls, as the probability of making a profit on this strategy approaches 100 per cent.
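The arithmetic behind that confidence can be checked directly. A quick Monte Carlo sketch (the trial counts here are arbitrary): on each roll the statistician wins one unit if the die is not a six and loses one if it is, so a sequence of 1,000 rolls is profitable whenever more than 500 rolls avoid the six.

```python
import random

random.seed(42)

def profitable(rolls=1_000):
    # Bet one unit against a six on every roll; the strategy is
    # profitable if more than half the even-money bets are won.
    wins = sum(1 for _ in range(rolls) if random.randint(1, 6) != 6)
    return wins > rolls // 2

trials = 2_000
fraction = sum(profitable() for _ in range(trials)) / trials
print(fraction)
```

With roughly 833 expected wins per 1,000 rolls, a losing sequence is so improbable that the simulated fraction of profitable sequences sits at, or vanishingly close to, 1.0.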

The point is statistical methods are probabilistic. Statisticians aren’t over-confident of their accuracy in any one call, but they can objectively demonstrate when a call has a high probability — that is, it will be right more often than not.

Statistical methods require a certain character, one which is prepared to put more weight on empirical probabilities than one’s own gut hunches.

In fact, there is a saying among weather forecasters that “high quality forecasts should be made with the curtains drawn”.

Amazingly, many financial analysts express subjective judgments that are counter to established statistical relationships.

As examples, many express or imply a relationship between the economy and stock market prospects.

Statements along the lines of “the economy is recovering, so the prospects of the stock market are healthy” or “it is too early to buy shares as we expect the recession to worsen,” clearly presuppose GDP and equity returns are linked.

As a matter of statistical fact, there is virtually no relationship between equity returns and economic growth.

Equally, there is no meaningful relationship between cash rates and stock market returns.

If statistical methods were more prevalent among financial analysts it would, at the very least, suppress predictions based on demonstrably non-existent relationships.

It might also lead to better guidance about stock market prospects.

At the peak, many commentators offered reassurance about the value of the market, on the grounds that its PE (price to earnings ratio) was not extreme.

Not only has this reassurance proved to be meaningless, it could have been known to be meaningless at the time by simply testing whether there is a reliable relationship between PE using one year’s E and subsequent returns. As it happens, there is a very weak correlation.

Surely it is incumbent on anyone who expresses a view based on PE to test whether there is a basis for it. In any case, those who listen to such predictions can be armed with the facts.

The flaw with using one year’s E is exposed by the recent slashing of earnings. Standard & Poor’s reports US corporate earnings have imploded by around 90 per cent.

If fair value is an average multiple of current earnings then fair value has fallen by 90 per cent. The whole concept of fair value is destroyed if it fluctuates as wildly as this.

The only forms of PE that have useful predictive validity (ie, are meaningfully correlated with future returns) use long-term average E and medium/long-term real returns.

Another method with demonstrated merit is Tobin’s Q — the market’s price relative to the replacement value of its assets, loosely its price to ‘book’. Again this is useful only for medium/long-term returns.

At this point we have moved to statistical methods. This brings several advantages.

First, it involves methods that demonstrably work more often than not, which is likely to result in better advice for our clients.

As a secondary benefit, it provides an antidote to over-confidence, as the statistics demonstrate we are dealing in probabilities, and that there are always instances counter to the general trend.

Third, it will free us from widely accepted but demonstrably false theories, such as the claims that a growing economy produces strong equity markets, that low inflation justifies a higher PE, and that falling interest rates will cause stocks to rise.

Forecasters may well protest that they fully understand their predictions might not be accurate, though research suggests that generally they are over-confident.

Certainly this will vary among individuals. The only way to test an individual’s subjective judgmental ability would be by examining every individual forecast against chance, over a long-term time frame.

If this was done for most forecasters, I suspect that few would want their results published.

There are many predictions that no one, including the forecaster, can have good reason to believe. Examples are prognostications that some market will reach X level by Y date.

No possible basis for this statement of market timing can be demonstrated, so one is expected to rely on the super-human capacities of the purported seer.

This is not a basis on which to advise clients.

Perhaps financial forecasters might do well to be more like their colleagues in the weather profession, and keep their curtains drawn.

There may be occasions when one needs to peek through the shades — for example, there was recently a threat to the viability of the banking system, which was not something to be blind to, however rare it is to face such a circumstance.

If there were a Subjective Prediction Society, its motto might be ‘Often wrong, but never in doubt!’ Financial planners would do better to always be in doubt, but to use methods that have a reasonable probability of being right.

When it comes to the future, “I don’t know” is a perfectly respectable position to take. It is in accord with reality and is honest with one’s listeners/readers.

However, planners need to create portfolios — that is, in practice they need to take a position in regard to markets.

Generally, they would be much better placed to rely on robust statistical models than subjective judgments — both their own and those of ‘experts’. Even then, any judgment should be overlaid with a dose of caution.

As the world has just demonstrated, major downturns are hard for clients (and their advisers) to take.

I would also like to acknowledge and correct an error in my previous article (August 13, 2009). I implied that some awards nominate only one default fund.

This was the worst sort of error — one that could have been corrected by the requisite research. Apologies to all concerned.

Never again will I rely on so dubious a source as myself without checking the veracity of any views I express to myself.

Robert Keavney became a financial planner in 1982, and has played many roles since then, and still believes financial planning can be an honourable profession.
