Measuring Hedge Fund Risk

There are two standard approaches to measuring portfolio risk: the variance-based approach and the value-at-risk approach. The two approaches are not incompatible, and many portfolio managers use both.

The variance of a portfolio return is the expected squared deviation of the return from its mean. If the portfolio return has a normal distribution, the variance of the return completely describes the riskiness of the return. Although normality is not necessary for application of the variance-based approach, the approach becomes less useful if returns differ very sharply from a normal distribution. Derivative securities and portfolios that include derivatives are notable for their lack of normality.
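
As a rough illustration, the sketch below estimates the variance and volatility of a portfolio return from a short series of hypothetical weekly returns (the figures are invented for the example).

```python
import numpy as np

# Hypothetical weekly portfolio returns (illustrative figures only)
returns = np.array([0.012, -0.004, 0.007, 0.021, -0.015, 0.003, 0.009, -0.002])

mean_return = returns.mean()
# Variance: the expected squared deviation of the return from its mean
variance = ((returns - mean_return) ** 2).mean()
volatility = np.sqrt(variance)

print(f"mean = {mean_return:.4%}, variance = {variance:.6f}, volatility = {volatility:.4%}")
```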

The variance-based approach is most powerful if returns have a linear factor structure, so that the random return of each asset can be decomposed into linear responses to a small number of market-wide factors plus an asset-specific risk. A linear factor model works well for simple stock and bond portfolios, but not for portfolios that include derivatives. Derivatives have a non-linear relationship to their underlying securities, so a portfolio that includes derivatives (other than plain-vanilla futures contracts) cannot be captured by a linear factor model.
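
To make the decomposition concrete, the sketch below computes portfolio variance under an assumed two-factor model; the weights, factor exposures, factor covariances, and asset-specific variances are all hypothetical. Total variance splits into a systematic (factor-driven) part and an asset-specific part.

```python
import numpy as np

weights = np.array([0.5, 0.3, 0.2])            # portfolio weights, 3 assets
betas = np.array([[1.1, 0.2],                  # each row: one asset's exposures
                  [0.8, 0.5],                  # to 2 market-wide factors
                  [0.3, 1.4]])
factor_cov = np.array([[0.04, 0.01],           # covariance matrix of factor returns
                       [0.01, 0.02]])
specific_var = np.array([0.02, 0.03, 0.05])    # asset-specific (idiosyncratic) variances

# Portfolio-level factor exposures and the two components of portfolio variance
port_betas = weights @ betas
systematic_var = port_betas @ factor_cov @ port_betas
specific_part = (weights ** 2) @ specific_var

total_var = systematic_var + specific_part
print(f"systematic = {systematic_var:.5f}, asset-specific = {specific_part:.5f}, total = {total_var:.5f}")
```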

Because of the lack of normality and the inadequacy of factor models, variance-based approaches do not work well for portfolios that include derivatives, and most (but not all) hedge funds use them. Some hedge fund strategies, such as betting on currency or interest rate realignments, produce highly non-normal portfolio returns and a poor factor-model fit even without any derivatives exposure. Clearly, some other approach is needed, instead of (or in addition to) the variance-based approach, to measure the risk of hedge funds.

In the aftermath of the LTCM collapse, the President’s Working Group on Financial Markets (1999) recommended use of the value-at-risk (VaR) approach to monitor hedge fund risk and guard against extreme events. VaR is defined as the maximum loss to be sustained over a given time period at a given level of probability. For example, a hedge fund might have a 5-day, 1% VaR of $100,000, meaning that in only one trading week out of 100 will the fund lose $100,000 or more. VaR describes one feature of the return distribution: how far into the lower tail one must go to reach a chosen cumulative probability. Knowing VaR is equivalent to knowing variance only in the special case of a normal distribution.
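
A minimal sketch of estimating VaR by historical simulation, using a simulated series of weekly profit-and-loss figures in place of real fund data: the 5-day, 1% VaR is read off as the loss exceeded in only 1% of weeks.

```python
import numpy as np

# Hypothetical weekly P&L figures in dollars (simulated for the example)
weekly_pnl = np.random.default_rng(0).normal(loc=20_000, scale=60_000, size=1_000)

confidence = 0.99
# The 1st percentile of the P&L distribution; its negative is the VaR figure
var_99 = -np.percentile(weekly_pnl, (1 - confidence) * 100)

print(f"5-day, 1% VaR: approximately ${var_99:,.0f}")
```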

VaR is more difficult to estimate than variance, and there are no simple rules for determining the contribution to VaR of individual asset positions, as there are for variance. Linear factor models cannot be used to decompose VaR into a set of risk exposures and an asset-specific risk, as can be done for variance.
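
For contrast, the sketch below shows the simple rule that does exist for variance: each position’s contribution is its weight times its covariance with the portfolio return, and the contributions sum exactly to the portfolio variance (the weights and covariance matrix here are hypothetical).

```python
import numpy as np

weights = np.array([0.5, 0.3, 0.2])
cov = np.array([[0.040, 0.006, 0.004],    # hypothetical covariance matrix of asset returns
                [0.006, 0.090, 0.010],
                [0.004, 0.010, 0.160]])

port_var = weights @ cov @ weights
contributions = weights * (cov @ weights)  # w_i * Cov(r_i, r_portfolio)

# The position-level contributions add up exactly to the portfolio variance
print(contributions, contributions.sum(), port_var)
```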

The strength of VaR lies in its generality. It works for portfolios that include derivatives and other non-linear return patterns, and it does not rely on variance serving as a useful measure of dispersion. A fundamental problem with VaR is that the true probability of low-probability events is extremely difficult to estimate. Hedge funds therefore require additional risk assessment techniques, such as stress testing, to monitor the source and severity of extreme losses.

Stress tests are computer-based “what-if” simulations of a portfolio’s reaction to extreme adverse conditions. They examine the effect on portfolio value of simultaneous adverse changes in market prices, bond yields, exchange rates, volatilities, and correlations.
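
As an illustration only, the sketch below applies one adverse scenario to a set of hypothetical linear exposures. A real stress test would revalue each position, capturing the non-linear behaviour of any derivatives, rather than scaling fixed sensitivities.

```python
# Hypothetical linear exposures: dollar P&L per unit move in each factor
exposures = {
    "equity_index": 1_500_000,   # $ per 100% move in the equity index
    "rates": -45_000,            # $ per 1bp rise in yields
    "fx_eur_usd": 800_000,       # $ per 100% move in EUR/USD
    "volatility": -25_000,       # $ per one-point rise in implied volatility
}

# One adverse "what-if" scenario: several markets move against the fund at once
scenario = {
    "equity_index": -0.15,       # equities fall 15%
    "rates": 40,                 # yields rise 40bp
    "fx_eur_usd": -0.05,         # EUR falls 5% against USD
    "volatility": 10,            # implied volatility rises 10 points
}

stressed_pnl = sum(exposures[name] * scenario[name] for name in exposures)
print(f"Scenario P&L: ${stressed_pnl:,.0f}")
```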
