Research houses may claim to be taking an important step towards greater transparency in the release of their performance statistics, but the depth of this information has been questioned.
Standard & Poor’s (S&P) is following the lead of research houses such as Morningstar and van Eyk in that it will soon release performance tables that give an indication of how its recommendations have panned out compared to the relevant benchmark.
“It’s not the be-all-and-end-all, but I think [advisers] would like to know that the funds that the research house thinks are better than average are outperforming the average and the benchmarks they’re designed to outperform,” said head of S&P Fund Services Mark Hoven (pictured).
Morningstar co-head of fund research Tim Murphy said the point of Morningstar’s performance graphs was not to make the research house look like a hero, but that they were in line with a drive towards greater transparency.
“Would you invest in a fund if it never had to report on its performance?” Murphy asked. “Most people would say no to that, so why would you use research from someone without ever getting to see the performance of the recommendations that they’ve been making?”
Lonsec provides its performance statistics only to key clients, and its general manager of research, Grant Kennaway, said that would not change. However, he added that Lonsec was building a more detailed attribution model across the whole funds universe, not just its best ideas funds, which would give it a clearer idea of where its analysts’ strengths and weaknesses lay.
Hoven said perhaps some research houses did not publish their performance stats more widely because it could entail a business risk. Murphy suggested it might be due to the researcher’s business model, such as those that were paid by fund managers, and questioned the probability of such a researcher outperforming the benchmark if it recommended more than 70 per cent of the funds covered, for example.
Former researcher and brillient! publisher Graham Rich said research houses releasing their performance stats was a good start, but he felt that those that did not should not be criticised, because it was a very complex area.
“It is very easy to misinterpret the information, and it is very difficult to compare the information unless there is a set of standards applied across the different research houses,” he said.
Rich added that the rating methodologies between the research houses were so different that it was difficult to compare like-for-like ratings and, therefore, a researcher’s performance relative to its peers. He was also concerned that the performance tables and graphs being made available were too simplistic.
“If any fund tried to come out with the degree of simplicity that they come out with on rating themselves, the ratings house would say that’s not good enough,” he said.