The Sharpe ratio is the most widely used measure of risk-adjusted investment return. Here's what it means, how to calculate it, and why a higher-return investment isn't always better.
Raw return figures are misleading without context. A fund returning 15% per year sounds better than one returning 10%, until you learn the first has wild swings that could wipe out 40% of your portfolio in a bad year. The Sharpe ratio adjusts returns for risk.
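The calculation itself is simple: subtract the risk-free rate from the portfolio's average return, then divide by the standard deviation of returns, annualizing both. Here's a minimal Python sketch; the two funds and their monthly return series are made up purely for illustration:

```python
import numpy as np

def sharpe_ratio(returns, risk_free_rate=0.0, periods_per_year=12):
    """Annualized Sharpe ratio from a series of periodic returns.

    returns: periodic (e.g. monthly) returns as decimals.
    risk_free_rate: the risk-free return over the same period.
    """
    excess = np.asarray(returns) - risk_free_rate
    # Annualize: mean return scales with n periods, std with sqrt(n).
    return excess.mean() / excess.std(ddof=1) * np.sqrt(periods_per_year)

# Two hypothetical funds (illustrative numbers, not real data):
steady = [0.010, 0.008, 0.012, 0.009, 0.011, 0.007]     # lower return, low vol
wild   = [0.060, -0.040, 0.070, -0.030, 0.050, -0.020]  # higher return, high vol

print(sharpe_ratio(steady))
print(sharpe_ratio(wild))
```

Despite the higher average return, the volatile fund scores a lower Sharpe ratio than the steady one, which is exactly the adjustment the raw percentages hide.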
The S&P 500 has historically had a Sharpe ratio of around 0.4–0.5 over long periods. Hedge funds and other actively managed strategies frequently advertise high Sharpe ratios, but many revert toward market-level ratios once fees are accounted for.
The ratio also has well-known limitations.

Assumes a normal distribution of returns: many investment strategies have asymmetric returns, small consistent gains punctuated by rare catastrophic losses. These score well on Sharpe until the disaster happens (this is the signature of strategies that sell options).
Uses historical volatility: Past standard deviation doesn't predict future risk, especially for concentrated or illiquid portfolios.
Can be gamed: Smoothing returns (e.g. through illiquid asset valuation) artificially reduces measured standard deviation and inflates the ratio.
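To see how smoothing games the ratio, consider a toy simulation (all numbers assumed; the exponential blend below is a crude stand-in for stale or managed valuations, not a model of any real fund): reporting each month as a mix of the current true return and last month's reported figure leaves the long-run return roughly unchanged but sharply reduces the measured standard deviation.

```python
import numpy as np

def sharpe(returns, periods_per_year=12):
    r = np.asarray(returns)
    return r.mean() / r.std(ddof=1) * np.sqrt(periods_per_year)

rng = np.random.default_rng(42)
true_returns = rng.normal(0.008, 0.04, size=120)  # 10 years of monthly marks (simulated)

# "Smoothed" reporting: each month blends the current true return with
# last month's reported number (alpha chosen arbitrarily for illustration).
alpha = 0.4
reported = np.empty_like(true_returns)
reported[0] = true_returns[0]
for i in range(1, len(true_returns)):
    reported[i] = alpha * true_returns[i] + (1 - alpha) * reported[i - 1]

# Same underlying economics, much lower measured volatility.
print(f"true std:     {true_returns.std(ddof=1):.4f}")
print(f"reported std: {reported.std(ddof=1):.4f}")
print(f"true Sharpe:     {sharpe(true_returns):.2f}")
print(f"reported Sharpe: {sharpe(reported):.2f}")
```

The smoothed series autocorrelates the reported returns, which cuts the measured standard deviation roughly in half here while the average return barely moves, so the reported Sharpe ratio is inflated relative to economic reality.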
The Sharpe ratio is most useful for comparing similar strategies or funds: two equity funds, two bond funds, two multi-asset strategies. Comparing a volatile tech growth fund to a stable dividend fund on the Sharpe ratio gives you the risk-adjusted comparison that raw returns obscure. It's less useful in absolute terms (is a 0.7 Sharpe "good"?) and more useful in relative terms (is fund A's 0.9 better than fund B's 0.6?).