
The Folly of Performance Reporting

We’ve all seen it: advisers flaunting their investment prowess by showing off the shiny performance data of their model portfolios, often in comparison to the relevant benchmarks. Have you noticed that in nearly all cases (especially in the industry press) the firm’s model portfolio outperforms the relevant benchmark?

Is it plausible to suggest to clients and to our peers that we outperform the benchmarks all the time? Surely that can’t be right? Statistically speaking, someone has to underperform. Is it that those who underperform don’t shout about it from the rooftops, or are we just too proud to acknowledge that we don’t always beat the market; that, in fact, the market beats most of us most of the time?

Indulge me and let me tell you how one firm (surely not yours?) manages to ‘beat the markets’ – at least on paper. Their fund selection process is based around star manager ratings and fund quartile rankings; basically, past performance. So, just after they have made those ‘tactical’ changes to the model portfolios, they produce the performance results for the last quarter, the last year and maybe the last 3 years. And BOOM! It’s a winner, in comparison to the benchmark! Only one problem: their clients weren’t in these shiny new model portfolios in the last quarter or the last year, let alone the last 3 years.

The fact is, the performance of many clients’ actual portfolios bears little resemblance to that of the ‘model’ portfolio. Unless you have discretionary permissions, when you make adjustments to the model portfolio you’ll need clients’ approval before those changes can be applied to their portfolios. Sometimes this doesn’t happen for months. Keeping track of all the different models is like juggling, keeping many balls in the air, especially if you make frequent tactical changes. So unless you time-weight the performance of these different models (see the sketch after this paragraph), it isn’t a true reflection of your performance. And even then, clients’ actual experience may be completely different, since you don’t adjust their individual portfolios immediately after these changes. Which leads me to ask: does it really matter? For most clients, how the model portfolio performs against the benchmark is almost a non-issue. What matters is how their own portfolio performs in relation to their plan. Do you think clients see through the smoke and mirrors when we send them shiny model portfolio reports but their own actual portfolio tells a different story?
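For the curious, here’s a minimal sketch of what I mean by time-weighting; the figures and the function name are hypothetical, purely for illustration. The idea is to chain-link the return of whichever model version was actually live in each sub-period, rather than quoting the back-history of the latest line-up.

```python
# Minimal sketch of time-weighted (chain-linked) returns across model changes.
# All figures and names below are hypothetical, for illustration only.

def time_weighted_return(subperiod_returns):
    """Geometrically link consecutive sub-period returns into one overall return.

    subperiod_returns: decimal returns (e.g. 0.02 for +2%), each taken from
    whichever model version was actually live during that sub-period.
    """
    growth = 1.0
    for r in subperiod_returns:
        growth *= 1.0 + r  # compound each sub-period's growth factor
    return growth - 1.0

# Example: model v1 live for Q1-Q2, then a 'tactical' change to v2 for Q3-Q4.
quarterly_returns = [0.020, -0.013, 0.031, 0.008]
print(f"Time-weighted return for the year: {time_weighted_return(quarterly_returns):.2%}")
```

Even that flatters nobody’s clients, of course, since their own money only moves into each new line-up once they’ve approved the switch.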

So, why are we flexing our ‘investment expertise’ muscle using performance data? Maybe we need to be a bit more humble and acknowledge that it’s OK that we sometimes underperform? Maybe we should follow the example of one planner who often tells their clients, ‘I won’t make you a killing on the stock market, but I probably won’t kill you either.’ It’s OK to let clients know that, like doctors, lawyers and pretty much every other professional, we are not omniscient. That, as with doctors, despite our best efforts, sometimes the patient dies.

Should we focus more on the process? That is, if we have a robust and defensible portfolio management process in place and communicate it clearly to clients, then how the model portfolio performs against the benchmark is almost a non-issue. Surely, if clients agree with the inputs and assumptions, they can’t reject the outcome? And there’d be little point in trying to persuade our peers and clients that we are better than we actually are.

So, next time you feel the need to flex your investment prowess muscle in the trade press, please don’t show us your shiny portfolio performance (thank you very much). Instead, tell us about the process. But if you must show performance, then let’s have it audited and verified by third parties applying industry-wide standards, such as the Global Investment Performance Standards (GIPS).

Please feel free to accuse me of being petty, of making a big deal out of something that’s meant to be a joke, because that’s really what it is – a joke! I know it, you know it, and the clients (probably) know it too!
