Op-Ed: Peer Rankings Matter—When Used Properly (Part Two)

It’s useful to study top-performing funds, but it’s not an apples-to-apples comparison, longtime investor Tony Waskiewicz says.
Reported by Tony Waskiewicz

In my last op-ed, I explored the pitfalls that come with relying on peer rankings to determine a CIO’s performance. Peer rankings are the result of many factors and thus are certainly not a simple reflection of the skills and talents of the investment team.

So, how should a CIO be judged? Several quantitative and qualitative factors should go into the evaluation of a CIO, but one effective way to judge investment skill is to examine the CIO’s performance over market cycles relative to the institution’s customized benchmark.

This customized benchmark should be investable. In other words, it should be something the institution could actually own. In previous articles, I have explained how an organization can go through the process of developing a portfolio-level benchmark that is investable.   

This investable benchmark is the return the organization would have achieved through passive implementation of its asset allocation strategy, and it can be compared with the return the institution’s investment office generates. The difference represents the contribution or detraction of the decisions the investment office makes.  
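To make that arithmetic concrete, here is a minimal sketch, in Python, of how value added might be computed against an investable policy benchmark. The asset-class weights, index returns, and portfolio return are hypothetical illustrations, not figures from any institution, and in practice the comparison would be run over rolling, multi-year periods with rebalancing.

```python
# Minimal sketch: value added versus an investable policy benchmark.
# All weights and returns below are hypothetical, for illustration only.

# Policy asset allocation (weights sum to 1.0) and the one-year return
# of an investable index chosen to represent each asset class.
policy_weights = {"global_equity": 0.60, "fixed_income": 0.30, "real_assets": 0.10}
index_returns = {"global_equity": 0.12, "fixed_income": 0.03, "real_assets": 0.06}

# Passive benchmark return: what the institution would have earned by
# simply holding the indexes at policy weights.
benchmark_return = sum(policy_weights[a] * index_returns[a] for a in policy_weights)

# Actual portfolio return reported by the investment office (hypothetical).
portfolio_return = 0.105

# The difference is the contribution (or detraction) of the office's decisions.
value_added = portfolio_return - benchmark_return
print(f"Benchmark: {benchmark_return:.2%}  Portfolio: {portfolio_return:.2%}  "
      f"Value added: {value_added:+.2%}")
```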

So, comparing “alphas” rather than absolute returns is a better way to compare and judge investment offices, right? As long as all institutions benchmark performance the same way, the answer could be “yes.” But we all know that organizations benchmark their portfolios differently, so comparing alphas could be tricky and misleading.

Does a peer ranking analysis have any use at all? Of course it does! Studying top-performing funds and learning from others can make us better. But putting two numbers side by side and declaring that the higher number means one CIO is better than the other is not sound or even reasonable. 

Take it from longtime industry insider Cynthia Steer, who points out that “comparing peer returns as a way to judge skill and to measure success is not a good practice. The circumstances, needs, and risks of institutions are all over the place. No two investors are alike.” 

And Steer should know—she serves on seven boards, including the investment committees of Smith College and Hartford HealthCare. 

She also notes that “business models, post COVID-19, are evolving. Boardrooms are having more conversations about stakeholder financial objectives rather than just shareholder returns, and these conversations need to extend into the office of the CIO. It makes sense so that stakeholder objectives and their expectations for relative performance are put into perspective for each institution.”

As such, Steer advocates for executives, boards, and committees to use peer data as a reference point to help determine how an organization should structure, resource, and govern the investment office. 

“Peer performance is not an effective way to judge the skill of the CIO, but comparing portfolio complexity across peers can provide important context for an institution trying to make wise decisions relating to the investment office budget, staff structure, team member compensation, and overall resourcing needs. Meeting complex objectives requires the appropriate resources, and stakeholders need to recognize when and how to adequately resource an investment function to meet its expectations. Many institutions have complicated circumstances and high expectations for outcomes, but do not allocate the budget or resources required to achieve the desired outcomes.” 

In other words, peer analysis can be an effective governance tool—one that helps shape the direction and practices of the institution. 

None of this is to say that CIOs whose portfolios have fared well in recent peer rankings should go unnoticed. In fact, just the opposite. 

Sure, some CIOs of high-performing funds may simply be benefiting from the strong financial position of their institutions (which enables risk-taking and higher budgets for the investment office), but many top-performing CIOs have worked incredibly hard to make wise and prudent decisions on behalf of their institutions. Alpha is hard to generate, and it does matter! And CIOs producing excess returns should be applauded. 

A good CIO can have a significant impact on the institution. However, a CIO’s skills and overall contributions may not always be revealed in a peer ranking that includes a group of investors with different needs, circumstances, risk tolerances, governance structures, investment policy statements (IPS), spending policies, resources, budgets, and investment office tenures. 

If you are a fiduciary wondering whether you have a good CIO, consider whether your institution has sound processes, manages its assets prudently, and consistently meets its objectives over longer-term, rolling periods. 

Also consider whether the investment office works collaboratively with its stakeholders, provides education and transparency, helps advance the institution’s mission, and is a good cultural fit. If all of this is so, then you likely have the right people in place. 

If this is still not good enough and you feel a need to push for a higher peer ranking, you may need to advocate for your institution to change its policy, risk targets, liquidity needs, operational framework, spending limits, investment office budget (including the compensation required to attract and retain talent), processes, and the time period for evaluation. A change in these factors will have the greatest impact on how your institution fares relative to its peers. If some of these factors cannot, or should not, be changed, then maybe everything is working just fine. In other words, maybe peer rankings are not the most effective tool for judging the success or failure of your program and your investment team in your circumstances.

I can only hope that CIOs who prudently discharged their leadership responsibilities over the past year are recognized for their exceptional work. Further, I hope institutional leaders and industry observers know how to see through the peer rankings to recognize when great work is being done. 

Addendum for Peer-Focused Institutions

Despite all the shortcomings of using peer rankings, if a board or committee would still like to use external factors to measure success, then it should also consider using external factors to develop the portfolio’s investment strategy. 

In other words, the institution should not set risk limits. It should cast aside its liquidity needs and discount circumstances such as how much the endowment supports the operating budget. A pension plan should not be concerned about the duration of its liabilities or its funded status. A health care system should disregard balance sheet metrics such as days cash on hand and cash-to-debt ratios. A foundation should not factor in its spending rate. 

Rather than incorporating these organization-specific factors into the strategy-setting process, a peer-focused organization should consider only what its peers are doing and make a board- or committee-endorsed decision about whether it would like to invest the same as, or differently from, the peers it studied. 

This same organization’s investment policy should also drop references to its mission, risk, return objective, and liquidity needs and replace them with language that makes “beating our peers” the clear mandate. While this approach may seem imprudent (hopefully you recognize it as such), it would resolve the mismatch of using internal factors to develop a strategy yet external factors to judge its success.

Regardless, any use of peer rankings as a measure of success must avoid short-term time periods, such as one-year results, as the basis for measurement. Institutions use different techniques to calculate and report performance (such as “gross” versus “net” manager returns and lagged versus unlagged private investment returns). These differences tend to even out over longer periods but can create significant skew over shorter ones.
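To illustrate one of those reporting effects, here is a minimal, hypothetical sketch in Python: two offices hold the identical private portfolio, but one reports it on a one-quarter lag, and their reported one-year numbers diverge materially even though nothing about skill or holdings differs. The quarterly return figures are invented purely for illustration.

```python
# Hypothetical illustration: two offices hold identical private portfolios, but
# one reports those holdings on a one-quarter lag. The quarterly returns below
# are invented solely to show how a single strong quarter skews a one-year number.

quarterly = [0.01, -0.03, 0.04, 0.02, 0.01, 0.03, 0.10]  # most recent quarter last

def chain(returns):
    """Geometrically link periodic returns into a cumulative return."""
    total = 1.0
    for r in returns:
        total *= 1 + r
    return total - 1

unlagged_1y = chain(quarterly[-4:])    # trailing four quarters, including the +10% quarter
lagged_1y = chain(quarterly[-5:-1])    # the one-quarter lag drops that quarter entirely

print(f"One-year return, unlagged: {unlagged_1y:.2%}")  # ~16.7%
print(f"One-year return, lagged:   {lagged_1y:.2%}")    # ~10.4%
```

Same portfolio, same decisions, yet the two reporting conventions produce one-year figures several percentage points apart; over rolling multi-year windows, a single lagged quarter matters proportionally far less.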

Tony Waskiewicz has nearly 30 years of financial services, investment advisory, and CIO experience and most recently served as chief investment officer for Mercy Health in St. Louis.

 This feature is to provide general information only, does not constitute legal or tax advice, and cannot be used or substituted for legal or tax advice. Any opinions of the author do not necessarily reflect the stance of Institutional Shareholder Services or its affiliates.

Related Stories:

Op-Ed: What Are We Really Measuring When We Use Peer Rankings? (Part One)

Op-Ed: Is Your Benchmark for Real? Probably Not

Benchmarking Addendum: But What’s the Best Way to Benchmark Alts?

Op-Ed: Which Injections Should Investors Watch in 2021?
