Time to rate the researchers

8 March 2005
By Larissa Tuohy

The power of research houses was highlighted by the recent decision of UBS Global Asset Management to no longer allow its funds to be rated by van Eyk Research, which it sees as a competitor. Already, Count Financial has deleted UBS from its recommendations and others may follow. This reminds us that a good rating from a research house is critical in raising substantial funds under management via financial planners.

As such, I think the time has come to rank the quality of research houses, to assist financial planners when selecting superior providers. This would enhance advice and provide an additional level of protection in the event of a claim.

We could call the business: Whose Stars are Brightest. Modelling ourselves on research houses, Whose Stars would promote the notion that the Australian Securities and Investments Commission may require the use of such a service, to demonstrate a reasonable basis for selecting a research house to support product selection. We could even suggest it is professionally negligent for planners to do it themselves.

This article is not intended to reflect adversely on UBS or van Eyk (or any other organisation). Instead, it intends to highlight broad questions about the position and accountability of researchers.

Managers generally know their place in relation to research houses — powerless and sycophantic. At a recent asset manager’s presentation, the chief executive officer began with a recitation of the four and five-star ratings received from several research houses.

In private, this business expresses scorn for the quality of some of the research processes to which it is subject. However, the chief executive officer understood that advisers’ actions are determined by ratings, so he had to carry on the pretence that these ratings did demonstrate quality.

One wonders how research houses might respond if the shoe were on the other foot, and they had to speak sycophantically of Whose Stars, irrespective of their real views, in order to avoid its disapproval.

Different business models have been considered for Whose Stars. Each has its strengths but also poses challenges:

Whose Stars might charge companies a fee if they publicise our star ratings. However, this could create a bias toward granting more high ratings than low, as few ads boast of a single star. Also, if a highly-rated company has paid fees to promote this fact widely, it might be tempting to keep its rating high to maintain this revenue;

Whose Stars could charge those it rates. Again, this could lead to a bias to rate high, as low-rated companies are an unlikely source of repeat business. It could also leave first-class groups outside the research universe, simply because they do not feel the need to retain the service;

Finally, Whose Stars could rely on its research subscribers as the sole revenue source. However, precedent suggests there is inadequate profit in this, and it may have to move into another core business — for instance, funds management, with research a loss leader or marginally profitable area. This could cause the businesses it rates to fear a conflict of interest.

Quite rightly, planners are subject to scrutiny about their business model — for example, whether recommendations are influenced by institutional ownership or commission. Research houses should be subject to similar analysis, as they materially determine the industry’s fund inflows — and many work on one of the above models.

Whose Stars would be tempted to publish a table outlining its star ratings for each research house. This could garner free publicity and perhaps create an expectation among consumers that such a service will be used by planners.

However, to publish a poor rating might have a material impact on the business of a researcher. It would be irresponsible to do this without being certain, and able to demonstrate, that Whose Stars’ processes produce valid results. This is an attitude the research houses themselves should be encouraged to adopt towards fund managers.

It would be reasonable for the mainstream media to ask Whose Stars to demonstrate competence before reporting our judgements. Presumably this won’t occur, as they don’t ask the same of the star ratings now being given to funds.

Advisers used to believe fund managers were ‘experts’ who would produce superior returns, but became disillusioned. Now many believe research houses are the ‘experts’ — despite the failure of many to notice the absurd valuations during the tech boom.

As a sceptic with an analytic nature, I take the view that all ratings are subjective until there is evidence to the contrary. What would constitute evidence? How could planners test whether a research house has any research skills?

A researcher’s rating criteria will not just focus on superior returns. Perhaps volatility is a factor, or an assessment of an organisation’s capacity to do what it claims to do. Whatever the criteria, they must be capable of reasonably objective analysis — otherwise ratings represent untestable assertions.

Many planners don’t even know whether their researcher’s five-star funds have outperformed its one-star funds over the long term, or vice versa.

Why is there no publicly available hard data? Why is there no basis for comparing research houses, through publication of the track records of their recommendations?

I suspect Money Management would be delighted to conduct such an analysis, if all the research houses were prepared to give full disclosure of their complete history of ratings over the last seven years (a period pre-dating the collapse of the tech bubble).

One might expect that all research houses would already have this material, independently audited and published, as a powerful marketing tool. If they don’t, why not? Surely it can’t be that they haven’t actually analysed their own results, or that they don’t want the results known.

There are material differences in the processes of research houses. Some ratings models seem to chase recent past performance. In my view, this risks chronic under-performance. One-stars may really beat five-stars if a process leads to upgrades only after a good run (that is, pre-downturn?) and downgrades only after a bad period (pre-recovery?).

Tyndall is a real-world illustration of this. Its share funds have been exceptional over the long term, even though they experienced a two-year period of poor returns. They weren’t generally well ranked in 1990, but outpaced the All Ords by 189 per cent (absolute) from June 1990 to September 1997. Towards the end of this period, some research houses belatedly upgraded them.

They then experienced an extremely difficult time during the tech boom madness, falling 41 per cent behind the market to February 2000. This led to a number of downgrades. From then to January 2005, Tyndall has beaten the index by 87 per cent and has recently been given some upgrades.

Such a ‘research’ process would effectively discourage investors from holding this fund during two long, strong periods but encourage investment for one short, poor period. Yet, over the whole time it has beaten the index by 413 per cent.
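To make this mechanism concrete, here is a minimal sketch in Python, using purely invented numbers (not Tyndall’s actual figures), of how a rating rule that upgrades only after a strong run and downgrades only after a slump leaves the ratings-driven investor holding a fund for its worst stretch while missing its best:

```python
# A purely hypothetical illustration with invented numbers; it is not based
# on any real fund's (or Tyndall's) actual returns. The fund's annual return
# relative to the index, in per cent, alternates strong runs with a slump.
relative_returns = [8, 8, 8, -12, -12, 8, 8, 8]

def cumulative_relative(returns_pct, held):
    """Compound the relative return over only the years the fund is held."""
    total = 1.0
    for r, h in zip(returns_pct, held):
        if h:
            total *= 1 + r / 100
    return (total - 1) * 100

# A buy-and-hold investor holds the fund in every year.
buy_and_hold = cumulative_relative(relative_returns, [True] * 8)

# A ratings-driven investor buys only after the first strong run earns the
# fund an upgrade, then sells after the slump triggers a downgrade, so the
# only years actually held are the two slump years.
ratings_driven = cumulative_relative(
    relative_returns, [False, False, False, True, True, False, False, False]
)

print(f"Buy-and-hold, relative to index:   {buy_and_hold:+.1f}%")
print(f"Ratings-driven, relative to index: {ratings_driven:+.1f}%")
```

Under these assumed numbers, the buy-and-hold investor finishes roughly 23 per cent ahead of the index, while the ratings-driven investor finishes roughly 23 per cent behind it, despite holding the very same fund.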

Some researchers (like van Eyk) do provide some comparative historical performance data but, to the best of my knowledge, most have never been subjected to public measurement and reporting of their track record — that is, evidence about whether they do add value.

There is no Whose Stars to research the researchers — but there should be. Are they willing to subject themselves to measurement?

Robert Keavney is chief executive officer of Centrestone Wealth Advisers.
