Deconstructing fund ratings

21 January 2011
By Mark Hoven

Mark Hoven outlines the key points of difference in the various approaches to fund ratings and examines the need for improved adviser knowledge of these methodologies.

Financial advisers know that two seemingly identical financial products can produce very different investment outcomes, and failing to recognise the differences may have a material impact on a client’s portfolio.

It’s not necessarily that one product is superior to the other; it’s about understanding the nuances and how each is likely to perform in the future, then making the right investment decision according to the client’s individual circumstances and risk appetite.

Likewise, there are dangers in making simplistic comparisons between fund ratings from different research houses.

A closer inspection may show meaningful differences in the way that research houses assess funds and assign fund ratings.

More important still is understanding what the ratings ultimately mean: differences can exist even where two research houses use the same rating scale.

Until these differences are fully understood, treating all fund ratings alike runs the risk that advisers will use them in ways that were not intended.

To truly understand the nature of fund ratings and to enable a more effective comparison where more than one research house rating is being used, the following important building blocks need to be considered:

  • qualitative fund ratings versus quantitative rankings;
  • fund rating scales;
  • fund rating objectives;
  • peer group classifications;
  • fund ratings distribution;
  • mapping strategies to funds;
  • screening;
  • fund rating construction;
  • currency of research; and
  • research reports.

Qualitative fund ratings versus quantitative rankings

Let’s clear up the confusion between forward-looking qualitative fund ratings and backward-looking quantitative rankings.

In Australia, most research houses provide qualitative fund ratings using teams of research analysts conducting fundamental analysis to form an opinion about the investment merits of the product as it exists today and how it might perform in the future.

In contrast, quantitative rankings are generated based on how the product performed in the past.

Although there are many different techniques to extract predictive value from historical performance, and analysts performing qualitative analysis take this information into account when evaluating a manager, a ranking is ultimately about looking in the rear view mirror.

Unfortunately, confusion often arises when fund ratings and rankings use the same scale, typically 'stars' in Australia. Users should always check the definitions of the scale to understand which is which.
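
To illustrate the distinction, here is a minimal sketch of how a purely backward-looking quantitative ranking might be derived. The percentile cut-offs and the return figures are hypothetical assumptions for illustration, not any research house's actual methodology.

```python
# Illustrative sketch only: assigns 'star' rankings from historical
# returns by percentile. Cut-offs and inputs are hypothetical, not
# any research house's actual methodology.

def quantitative_ranking(trailing_returns: dict) -> dict:
    """Map each fund to a 1-5 star ranking by percentile of past return."""
    ordered = sorted(trailing_returns, key=trailing_returns.get)
    n = len(ordered)
    stars = {}
    for i, fund in enumerate(ordered):
        percentile = (i + 1) / n
        if percentile > 0.90:
            stars[fund] = 5
        elif percentile > 0.70:
            stars[fund] = 4
        elif percentile > 0.30:
            stars[fund] = 3
        elif percentile > 0.10:
            stars[fund] = 2
        else:
            stars[fund] = 1
    return stars

# Hypothetical three-year trailing returns
returns = {"Fund A": 0.082, "Fund B": 0.054, "Fund C": 0.067, "Fund D": 0.031}
print(quantitative_ranking(returns))
```

Note that nothing in this calculation looks forward; it is entirely a function of past performance, which is precisely why a ranking on its own says little about future prospects.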

Fund rating scales

Rating scales used by research houses in Australia are a mixed bunch of words, letters and stars. Gradations in rating scales also vary, from as few as three levels to as many as seven.

The symbology used for a fund rating scale is ultimately a business decision and often carries little meaning beyond distinguishing one rating level from another, and one research house’s scale from another’s.

However, the gradation of the fund rating scale has more significance in terms of understanding the research house’s opinion of a product, and may also dictate how a wealth manager classifies funds at the different rating levels.

Fund rating objectives

Rating objectives are arguably the most important building block, and the area where differences between research houses are most often overlooked.

Just as an investment strategy has an objective for the portfolio manager to achieve (eg, to outperform the S&P/ASX 200 by 3 per cent over five years with 2 per cent tracking error), fund ratings need an objective so that rating analysts know the context they should use to assess an investment.
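
To make that example objective concrete, the sketch below computes an annualised excess return and tracking error from a series of monthly returns. The monthly return figures are hypothetical; the annualisation convention (multiplying by the square root of 12) is the standard one.

```python
import statistics

# Hypothetical monthly returns for a fund and the S&P/ASX 200 benchmark.
fund_returns = [0.012, -0.004, 0.021, 0.008, -0.011, 0.015]
benchmark_returns = [0.010, -0.006, 0.018, 0.005, -0.013, 0.012]

# Active (excess) return each month is the fund return minus the benchmark.
active = [f - b for f, b in zip(fund_returns, benchmark_returns)]

# Tracking error is the annualised standard deviation of active returns.
monthly_te = statistics.stdev(active)
annual_te = monthly_te * 12 ** 0.5

# Annualised excess return (simple approximation from the monthly mean).
annual_excess = sum(active) / len(active) * 12

print(f"Annualised excess return: {annual_excess:.2%}")
print(f"Annualised tracking error: {annual_te:.2%}")
```

An adviser checking a fund against the objective above would want the realised excess return near 3 per cent and the tracking error near 2 per cent over the stated horizon.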

Various elements can underpin the assessment and research houses often combine several.

They may include:

  • How likely is the product to generate the promised returns?
  • How likely is the product to remain within the defined risk (or volatility) limits?
  • How likely is it that the promised outcomes will be achieved over the time horizon stated by the fund?
  • Is the fund’s stated objective (including the chosen benchmark) valid or likely to be achieved?
  • Are the product fees reasonable?
  • How does the product compare relative to peers?
  • Will the sector perform in the future?
  • How likely is the product to fail?
  • How likely is the investment manager to fail?
  • How likely is the product to be withdrawn?

Users should investigate which of these elements are incorporated into the fund ratings they commonly use and then decide whether an assessment on that basis meets their needs.

Peer group classifications

Most research houses classify the managed funds world into specific peer groups to simplify the comparison of funds.

These groups are defined by selecting natural boundaries within which funds are largely homogenous, or at least have more similarities than differences in terms of asset class, asset size, investment objectives, risk, and investment style.

Fund ratings are generally defined within the context of these peer groups. In other words, fund ratings can be compared within a peer group but not across peer groups.

For example, a highly rated Australian equity large-cap fund cannot meaningfully be compared with an equally rated emerging-markets equity fund.
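
The sketch below expresses this constraint in code: comparisons are permitted within a peer group and refused across groups. The fund names, peer groups and ratings are hypothetical.

```python
# Illustrative sketch: fund ratings are only comparable within a peer group.
# Fund names, peer groups and ratings below are hypothetical.
rated_funds = {
    "Fund A": ("Australian Equity Large Cap", 5),
    "Fund B": ("Australian Equity Large Cap", 3),
    "Fund C": ("Emerging Markets Equity", 5),
}

def compare_ratings(fund_a: str, fund_b: str) -> str:
    """Compare two fund ratings, refusing to compare across peer groups."""
    group_a, rating_a = rated_funds[fund_a]
    group_b, rating_b = rated_funds[fund_b]
    if group_a != group_b:
        return f"{fund_a} and {fund_b} are in different peer groups: not comparable"
    if rating_a == rating_b:
        return f"{fund_a} and {fund_b} are equally rated within {group_a}"
    winner = fund_a if rating_a > rating_b else fund_b
    return f"{winner} rates higher within {group_a}"

print(compare_ratings("Fund A", "Fund B"))  # comparable: same peer group
print(compare_ratings("Fund A", "Fund C"))  # not comparable
```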

Investors often misunderstand the level of risk attributable to different asset classes, looking at the fund rating in isolation rather than at the risk inherent in an investment and whether it is appropriate for them.

Unfortunately, there is no single or agreed method for classification of the funds management universe in Australia.

Peer group definitions vary between research houses, and from the standard peer groups established by fund data companies.

These differences are important because users may be comparing fund ratings from research houses that have classified the same funds differently. Allowing for the possibility that the fund rating objectives may differ too, such a comparison compounds one difference with another.

Coverage and fund ratings distribution

Coverage relates to how many investment strategies a research house chooses to research, either qualitatively or quantitatively. Investment strategies, or capabilities, are represented by the individual investment teams and pools of money behind the unit trusts that retail investors purchase.

There are as many different coverage models in Australia as there are research houses.

Models include rating only the top quartile of fund managers by performance, rating only 'investment-grade or higher' funds, rating all funds that meet qualifying criteria, and rating all funds on the approved product lists of wealth managers.

The drivers behind these coverage models may include the overarching philosophy (eg, ‘only rate the best’), the business model employed by the research houses, the size of research teams, and client requirements.

The combined effect of these different coverage models leads to overlapping coverage between the research houses.

In Australia, research house coverage broadly ranges from 300 to 600 investment strategies per firm. But most research houses assign ratings to funds, not investment strategies, so it is also traditional to count the number of funds rated. On this basis, research house coverage ranges from 2,000 to 4,000 funds.

However, these coverage models also lead to skews in coverage as some research houses, by design, have a bias toward higher-quality managers, some have a more balanced distribution of strong and weak managers, and others span a larger overall universe.

Mapping of strategies to funds

The relationship between investment strategies (essentially the ‘pooled’ vehicle) and corresponding funds (the ‘fund family’) is an important one that gets little attention in the Australian market.

There is no commercially available database in Australia that comprehensively defines these relationships, so each research house makes its own judgment about which funds belong to the same ‘family’, generally based on instruction from fund managers.

However, like all families, there are sometimes important differences among family members that should be considered before simplistically copying, or ‘mapping’, fund ratings to related (or sibling) funds.

Even though funds in the same family are underpinned by the same investment strategy, there may be material differences in gross performance between them due to differences in portfolios, even after adjusting for allowable differences in cash holdings and fees.
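
The sketch below illustrates the strategy-to-fund mapping and why a rating should not be copied across blindly: one strategy sits behind several sibling funds, and a materiality test flags funds for review before the rating is mapped. The names, fees and the cash-buffer test are purely illustrative assumptions, not an actual research house rule.

```python
# Hypothetical strategy-to-fund mapping: one investment strategy can sit
# behind several retail 'sibling' funds, each with different fees and
# cash holdings. Names and numbers are illustrative assumptions.
strategy_families = {
    "Acme Australian Equity Core": [
        {"fund": "Acme Aus Equity Wholesale",  "mer": 0.0090, "cash_buffer": 0.02},
        {"fund": "Acme Aus Equity Retail",     "mer": 0.0175, "cash_buffer": 0.02},
        {"fund": "Platform-Badged Aus Equity", "mer": 0.0150, "cash_buffer": 0.05},
    ],
}

def map_rating(strategy: str, strategy_rating: int, max_cash_gap: float = 0.02) -> None:
    """Copy a strategy rating to sibling funds, flagging material differences.

    Sketch only: the 'material difference' test here (a cash-buffer gap)
    is an illustrative assumption, not an actual research house rule.
    """
    siblings = strategy_families[strategy]
    base_cash = min(f["cash_buffer"] for f in siblings)
    for f in siblings:
        flagged = f["cash_buffer"] - base_cash > max_cash_gap
        note = " (review before mapping)" if flagged else ""
        print(f"{f['fund']}: {strategy_rating} stars{note}")

map_rating("Acme Australian Equity Core", 4)
```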

Since administration platforms began dominating Australian investment markets in the early 2000s, there has been an explosion of platform-badged versions of the same fund, many of which are in mandate form.

In these situations, the mandate issuer or the mandate issuer’s custodian owns and trades the fund investments in replication of the fund manager’s own portfolio construction and execution.

Simply put, there are now two different fund managers involved, and two different sets of business capabilities, risks, and execution considerations that need to be taken into account when assigning a fund rating to some platform-badged funds.

Screening

Research houses generally conduct a screening process to determine which funds to rate. The screens are the control mechanism for the coverage model, serving to limit the number and quality of funds ultimately submitted for qualitative assessment.

Not surprisingly, each research house handles this process differently. While most research houses have rating scales that span 'strong' to 'weak', some screens only pass funds that are expected to be assessed at the higher rating levels.

Screens can be both qualitative and quantitative. Qualitative factors may include state of fund readiness, demand from wealth management groups and platforms, and how many rated funds are in that peer group already. Quantitative factors include historical returns, volatility, performance ratios, age of fund, size of fund, and tenure of portfolio manager.
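
As a purely illustrative example, a quantitative screen might look like the sketch below. The thresholds are hypothetical assumptions, not any research house's actual qualifying criteria.

```python
# Illustrative quantitative screen: the thresholds below are hypothetical
# assumptions, not any research house's actual qualifying criteria.
MIN_TRACK_RECORD_YEARS = 3
MIN_FUND_SIZE_AUM = 50_000_000   # AUD
MIN_MANAGER_TENURE_YEARS = 2

def passes_screen(fund: dict) -> bool:
    """Return True if a fund qualifies for full qualitative assessment."""
    return (
        fund["track_record_years"] >= MIN_TRACK_RECORD_YEARS
        and fund["aum"] >= MIN_FUND_SIZE_AUM
        and fund["manager_tenure_years"] >= MIN_MANAGER_TENURE_YEARS
    )

candidates = [
    {"name": "Fund A", "track_record_years": 5, "aum": 120_000_000, "manager_tenure_years": 4},
    {"name": "Fund B", "track_record_years": 1, "aum": 220_000_000, "manager_tenure_years": 1},
]
shortlist = [f["name"] for f in candidates if passes_screen(f)]
print(shortlist)  # ['Fund A']
```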

Rating construction

After allowing for differences in objectives, coverage and screening, the mechanics of conducting a fund rating are broadly similar across research houses. The process typically consists of:

  • preparation and review of latest available information (desk research);
  • interview with investment team;
  • review by analytical peers, either formally via a rating committee process or informally;
  • forming a fund rating opinion; and
  • delivery of report and fund rating outcome.

Each research house may perform each of these basic steps in different ways, with different emphasis and different controls.

The combination of these steps, applied by teams of differing quality and emphasis, gives each research house its unique value proposition.

Currency of research

Advisers have an expectation that research houses maintain ongoing ‘surveillance’ on rated investment products and will update the research or fund ratings in a timely manner when there is material change.

Surveillance can be performed in a number of different ways that may not be transparent to the research user.

However, the frequency of surveillance can be judged through empirical evidence such as:

  • published peer review schedules;
  • analysis of the ageing of published research reports (see the sketch after this list);
  • rating actions related to positive and negative events;
  • rating actions related to new product launches;
  • rating actions related to product withdrawals;
  • announcements related to changes that are not ultimately material to the fund rating;
  • changes in recommendations for approved product lists and model portfolios;
  • published sector and thematic research;
  • research house involvement in overseas manager visits and dedicated trips of their own; and
  • research house attendance at fund manager briefings.
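
As an illustration of the report-ageing analysis mentioned in the list above, the sketch below summarises how stale a set of published reports is at a given date. The fund names and publication dates are hypothetical.

```python
from datetime import date

# Sketch of a simple report-ageing check: given publication dates for a
# research house's current reports, summarise how stale the coverage is.
# Fund names and dates are hypothetical illustrations.
report_dates = {
    "Fund A": date(2010, 11, 15),
    "Fund B": date(2009, 6, 2),
    "Fund C": date(2010, 3, 30),
}

as_at = date(2011, 1, 21)
ages_in_days = {fund: (as_at - d).days for fund, d in report_dates.items()}

stale = [fund for fund, age in ages_in_days.items() if age > 365]
avg_age = sum(ages_in_days.values()) / len(ages_in_days)

print(f"Average report age: {avg_age:.0f} days")
print(f"Reports older than 12 months: {stale}")
```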

Users of fund ratings should be aware of the way surveillance is conducted for the ratings they are using so they are able to form a view about the currency of the opinion.

Research reports

Research quality is best demonstrated through the written product research provided by research houses.

Research houses often provide more than one version of a report on each fund, so users of research need to make sure they are referring to the most appropriate version for their needs.

Are the most detailed reports comprehensive enough to describe the product, identify the risks, address investor suitability, and outline the research house’s view? Are all reports reasonably up to date? Do these reports exist for all rated products?

Transparency and education

In summary, there is significant diversity in the ways research houses approach investment research, and they do not always end up with the same outcomes. Material differences can exist even where the fund ratings look the same.

Ultimately, we believe a diversity of fund ratings opinions is healthy for the market, and research houses shouldn’t be forced to fit into a single mould or way of doing things.

However, with additional choice and diversity comes the need for improved education. Research houses should be transparent about their methods, and what their ratings mean.

Equally, advisers should ensure they understand the differences between ratings provided by different research houses and use fund rating outcomes appropriately.

Together, these actions will help maximise the value that research houses can bring to the retail advice process.

Mark Hoven is the managing director of Standard & Poor’s Fund Services.
