The field of solvency management/fiduciary management/delegated consulting/outsourced-CIO/implemented advice (referred to collectively as empowered practitioners (EP)) is growing fast. Parties interested in EP face a range of challenges, not least figuring out what is actually being offered. A distinction needs to be drawn between mandates where advice is given (but the client is responsible for taking the final decision) and those where the EP takes and executes decisions for which it is then accountable. Measuring the quality of advice is a long-standing challenge, and not one to be tackled in this piece. However, I propose that it is possible to measure and compare the quality of management.
An EP mandate is tailored to the client's specific circumstances: it is a service, not a product. Consequently, each client/EP agreement will likely have a different benchmark, performance target, risk metric, and attaching tolerance. These parameters frame each client's assessment, in isolation, of whether the EP has met the expectations created on appointment. There is also demand for comparison of EPs' delivery on expectations across clients, despite the range of parameters involved. Such a meaningful comparison is possible.
The first step is to consider whether the EP has delivered the excess return targeted relative to the benchmark. Comparability follows from calculating what percentage of the targeted excess return has been achieved. For example, if an excess return of 2% per annum is targeted and the mandate delivers 19% against a benchmark return of 18%, the realised excess is 1% and 50% of the target has been delivered.
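As a minimal sketch of this first step (in Python; the function and input names are illustrative, not from the original), the calculation might look like:

```python
def pct_of_return_target(portfolio_return, benchmark_return, target_excess):
    """Fraction of the targeted excess return actually delivered.
    Inputs are in percentage points (e.g. 19.0 for 19%)."""
    achieved_excess = portfolio_return - benchmark_return
    return achieved_excess / target_excess

# Worked example from the text: 19% delivered against an 18% benchmark,
# with a 2% p.a. excess-return target -> 0.5, i.e. 50% of target.
print(pct_of_return_target(19.0, 18.0, 2.0))  # 0.5
```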
The second step is to assess how much of the risk budget has been utilised in delivering that performance. Preferred definitions of risk (e.g. volatility, drawdowns) may differ by EP and client; there is no single "correct" measure. Some appointments might have multiple risk metrics, but a single metric needs to be "nominated" for use in the assessment process. The maximum permitted level of that metric is referred to as the "risk budget". Comparability is achieved by assessing what percentage of the risk budget has been used in delivering the returns; working in percentages renders the particular choice of risk measure irrelevant. If, for example, the strategy is run with a risk budget, defined as volatility, of 5% per annum, but realised volatility is only 2%, then 40% of the budget has been utilised.
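A similarly minimal sketch of the second step (again with illustrative names; volatility is assumed as the nominated metric here):

```python
def pct_of_risk_budget(realised_risk, risk_budget):
    """Fraction of the risk budget consumed, under whichever single
    risk metric the mandate has nominated (volatility in this example).
    Inputs are in percentage points."""
    return realised_risk / risk_budget

# Worked example from the text: realised volatility of 2% p.a.
# against a 5% p.a. budget -> 0.4, i.e. 40% utilised.
print(pct_of_risk_budget(2.0, 5.0))  # 0.4
```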
The third step is to control for whether the EP has remained within the risk budget. An EP might achieve a multiple of the return target by taking a multiple of the permitted risk; even if that risk-taking pays off, it is not what the client ordered. This control term is defined as:
{1 – % risk budget utilised compared to maximum anticipated}
If the risk budget is exceeded, this term is negative and detracts from the overall score. For example, if the mandate has a risk budget of 5% per annum but realised risk is 7.5%, then 150% of the risk budget has been utilised and this term has a value of -0.5.
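The control term can be sketched the same way (illustrative code, not the author's own):

```python
def risk_control_term(realised_risk, risk_budget):
    """Step three: 1 minus the fraction of the risk budget utilised.
    Negative once the budget is exceeded, so it detracts from the score."""
    return 1.0 - (realised_risk / risk_budget)

# Worked example from the text: realised risk of 7.5% against a
# 5% budget -> 150% utilised, so the term is -0.5.
print(risk_control_term(7.5, 5.0))  # -0.5
```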
I refer to the resulting metric as the “Modified Risk-Adjusted Outcome”, computed as follows:
{{% excess return achieved compared to targeted} ÷ {% risk budget utilised compared to maximum anticipated}} + {1 – % risk budget utilised compared to maximum anticipated}
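Putting the three steps together, a minimal sketch of the full metric (the function name is mine, after the article's own label) could be:

```python
def modified_risk_adjusted_outcome(portfolio_return, benchmark_return,
                                   target_excess, realised_risk, risk_budget):
    """Combine the three steps into a single comparable score.
    All inputs are in percentage points."""
    pct_return = (portfolio_return - benchmark_return) / target_excess
    pct_risk = realised_risk / risk_budget
    return pct_return / pct_risk + (1.0 - pct_risk)

# Using the earlier figures: 50% of the return target delivered using
# 40% of the risk budget -> 0.5 / 0.4 + (1 - 0.4) = 1.85.
print(modified_risk_adjusted_outcome(19.0, 18.0, 2.0, 2.0, 5.0))  # 1.85
```

On these definitions, delivering exactly the target while using exactly the full risk budget scores 1, which gives a natural reference point when comparing mandates.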
The metric gives an indication as to whether the risk and return expectations agreed between the EP and client have been met. Differences in benchmarks, return targets, definitions of risk, and/or risk budgets need not prevent meaningful assessment and comparison of EPs.
That was the easy part. Now for the terminology in this field to be simplified and made consistent…
Ralph Frank is the CEO of Charlton Frank.