Run thousands of portfolio scenarios in seconds with Sourcetable AI. Calculate VaR, stress test positions, and analyze risk without complex Excel formulas.
Andrew Grosser
February 16, 2026 • 14 min read
January 2024: $2 million portfolio, 60/30/10 stocks/bonds/alternatives. Run 10,000 Monte Carlo paths: 95% VaR shows $180k potential loss, worst 1% exceeds $320k, 22% chance of losing $100k+ this year. This isn't speculation—it's statistical reality showing your complete risk distribution.
Excel makes this impractical: 10,000 random scenarios, correlation matrices, Cholesky decomposition, MMULT array formulas—200,000+ cells for a 20-asset portfolio, all of which break when you change weights. Sourcetable eliminates this. Upload holdings, ask "Run 10,000-path Monte Carlo, show 95% VaR," and instantly get risk metrics with distribution charts. Sign up free to start simulating portfolio risk.
Why does generating correlated random returns require Cholesky decomposition?
Because independent random draws don't preserve the correlation structure between assets. If you generate separate NORM.INV(RAND()) calls for SPY and TLT returns, each simulation path treats them as uncorrelated—ignoring the historical -0.31 correlation that drives diversification benefits. The Cholesky decomposition transforms your correlation matrix into a lower triangular matrix L where L × Lᵀ equals the correlation matrix. Multiply your uncorrelated random draws by this matrix, and you get correlated returns that preserve historical relationships.
In Excel, this means creating a correlation matrix (CORREL function across all asset pairs), performing Cholesky decomposition (no native function—requires manual LDL decomposition or VBA), generating uncorrelated random normal variables (NORM.INV(RAND()) for each asset), multiplying the random vector by the Cholesky matrix (MMULT array formula), and scaling by asset means and standard deviations. For a 15-asset portfolio, your correlation matrix is 15×15 (225 cells), the Cholesky decomposition requires complex array formulas, and each simulation path needs 15 correlated random draws.
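Outside of Excel, the whole pipeline is a few lines. Here's a minimal sketch in Python with NumPy, using made-up means, volatilities, and correlations for three hypothetical assets:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 3-asset setup: annual means, vols, and a correlation matrix
mu = np.array([0.08, 0.03, 0.06])        # expected annual returns
sigma = np.array([0.18, 0.07, 0.22])     # annual volatilities
corr = np.array([[1.00, -0.31, 0.68],
                 [-0.31, 1.00, -0.20],
                 [0.68, -0.20, 1.00]])

# Cholesky factor L satisfies L @ L.T == corr
L = np.linalg.cholesky(corr)

# 10,000 paths of uncorrelated standard normal draws, one per asset
z = rng.standard_normal((10_000, 3))

# Impose the correlation structure, then scale by vol and add the mean
correlated = z @ L.T
returns = mu + correlated * sigma

# Sample correlation of the simulated returns should be near the target
print(np.round(np.corrcoef(returns, rowvar=False), 2))
```

Because correlation is unchanged by the mean and volatility scaling, the sample correlation matrix of the simulated returns should land close to the target matrix: a quick sanity check worth running before trusting any VaR number built on top.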
What happens when correlations shift during market stress?
Historical correlations underestimate tail risk because assets become more correlated during crashes. In normal markets, SPY and HYG (high-yield bonds) show 0.52 correlation. During March 2020, correlation spiked to 0.87—diversification disappeared exactly when you needed it most. Your Monte Carlo model using full-sample correlations will show better diversification benefits than you'll actually get in a crisis, leading to systematically underestimated VaR in the scenarios that matter.
Sourcetable handles this through conditional correlation analysis. Ask: "Run simulation using March 2020 correlation regime" or "Show VaR with correlations increased to 0.80 across all equity positions." The AI automatically adjusts the correlation matrix, recalculates Cholesky decomposition, generates new correlated random returns, and shows you how tail risk changes. What would require rebuilding your entire Excel model happens with a single question. You can even compare: "Show VaR side-by-side using normal vs stress correlations" and instantly see the difference—typically 30-50% higher VaR under stress conditions.
VaR tells you the threshold: "95% VaR of $200,000 means you have 5% chance of losing more than $200k." But it doesn't tell you how much worse things get in that worst 5%. Conditional VaR (CVaR, also called Expected Shortfall) answers the critical question: "Given that you've breached the VaR threshold, what's your average loss?" For portfolios with fat tails or leverage, CVaR can be 50-80% worse than VaR.
Take a $5 million long-short equity portfolio with 150% gross exposure (100% long, 50% short). Run 10,000 Monte Carlo paths and examine the tail metrics.
The 95% VaR of $187k looks manageable—under 4% of capital. But CVaR shows that when you breach that threshold (5% of the time), average loss is actually $298k—59% worse than VaR. And in the worst 1% of scenarios, you're looking at nearly $400k losses. This matters for position sizing: if you size positions assuming 4% maximum loss, you're severely underprepared for the tail outcomes that CVaR reveals.
How do you calculate CVaR from simulation results?
Sort all 10,000 simulated returns and find the 5th-percentile return (that threshold is your 95% VaR), then average all returns at or below it (CVaR). In Excel: sort your 10,000 portfolio returns ascending, use PERCENTILE.INC with 0.05 to find the threshold (around row 500 in your sorted list), and use AVERAGEIF to average all returns below it. Sounds simple, but when you're running this daily with portfolio composition changes, you're constantly updating correlation matrices, regenerating random draws, recalculating percentiles, and refreshing CVaR calculations.
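The sort-then-average logic is a few lines in Python. This sketch uses a placeholder normal distribution to stand in for the 10,000 simulated P&L figures; real output would come from the correlated Monte Carlo paths:

```python
import numpy as np

rng = np.random.default_rng(7)

# Stand-in for 10,000 simulated one-month P&L figures, in dollars
pnl = rng.normal(20_000, 120_000, size=10_000)

# 95% VaR is the 5th percentile of the P&L distribution
var_95 = np.percentile(pnl, 5)

# CVaR (expected shortfall): average P&L in the scenarios at or below VaR
cvar_95 = pnl[pnl <= var_95].mean()

print(f"95% VaR:  {var_95:,.0f}")
print(f"95% CVaR: {cvar_95:,.0f}")
```

By construction CVaR is always at least as bad as VaR; how much worse it is tells you how heavy the tail is.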
Sourcetable calculates both automatically and explains the gap: "Your 95% VaR is $187k, but 95% CVaR is $298k—average loss in the worst 5% is 59% higher than the VaR threshold. This suggests significant tail risk from leverage and short positions." The AI can also show the full distribution beyond VaR: "Plot returns for the worst 5% of scenarios" generates a histogram showing exactly how bad things get when you breach VaR. For risk committees asking "what's our worst-case loss?", this visualization is far more informative than a single VaR number.
Let's walk through using Monte Carlo simulation to assess risk for a $3 million growth portfolio on December 29, 2023, before the January 2024 tech correction. The portfolio is 90% equities (QQQ, ARKK, MSFT, NVDA, GOOGL) and 10% bonds (TLT).
Step 1: Upload historical returns and run baseline simulation
Upload daily return data for all positions from January 2022 through December 2023 (2 years, ~500 trading days). Calculate correlation matrix—QQQ/NVDA show 0.68 correlation, QQQ/TLT show -0.42 (negative correlation provides diversification), ARKK/NVDA show 0.71 (highly correlated growth exposure). Ask Sourcetable: "Run 10,000-path Monte Carlo simulation, show me one-month VaR at 90%, 95%, and 99% confidence levels."
Based on December 2023 market conditions, the simulation shows a 95% VaR of $261k (an 8.7% loss), suggesting moderate risk: there's a 5% chance of losing more than $261k over the next month. But notice the wide distribution: a standard deviation of 6.8% means outcomes ranging from +15% gains to -12% losses are well within the realm of possibility.
Step 2: Stress test for tech sector correction
Late December 2023, you're concerned about stretched tech valuations and rising yields. Ask Sourcetable: "Run stress scenario: QQQ down 8%, ARKK down 12%, MSFT down 7%, NVDA down 10%, GOOGL down 6%, TLT up 3%. Show portfolio impact."
The stress scenario shows a $230k loss: within your 95% VaR envelope, but squarely in the tail. The TLT position provides only $9k of offset (+3% on 10% of portfolio), insufficient to meaningfully reduce losses during a tech selloff. This suggests reconsidering your diversification strategy.
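The arithmetic behind a deterministic stress scenario is just weights times shocks. This sketch assumes illustrative weights (the article doesn't list the exact allocation), so the total won't match the $230k figure precisely:

```python
portfolio_value = 3_000_000

# Hypothetical weights -- illustrative only, not the article's allocation
weights = {"QQQ": 0.30, "ARKK": 0.10, "MSFT": 0.15,
           "NVDA": 0.20, "GOOGL": 0.15, "TLT": 0.10}

# Shocks from the stress scenario described above
shocks = {"QQQ": -0.08, "ARKK": -0.12, "MSFT": -0.07,
          "NVDA": -0.10, "GOOGL": -0.06, "TLT": 0.03}

# Dollar impact per position, then the portfolio total
impact = {t: portfolio_value * weights[t] * shocks[t] for t in weights}
total = sum(impact.values())

for ticker, dollars in impact.items():
    print(f"{ticker}: {dollars:+,.0f}")
print(f"Total stress P&L: {total:+,.0f}")
```

Note how the TLT line works out to +$9k (3% gain on 10% of $3M), matching the offset quoted above regardless of how the equity side is split.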
Step 3: Test increased bond allocation for protection
What if we increase bonds to 20% to reduce tail risk?
Ask Sourcetable: "Compare two allocations: current (90% equities/10% bonds) vs revised (80% equities/20% bonds). Show VaR and CVaR for both." The AI runs simulations for both allocations and presents a side-by-side comparison.
The revised allocation reduces tail risk by 16% at the cost of 0.3% monthly return—an acceptable tradeoff if you're concerned about near-term correction. The CVaR improvement ($54k) is particularly meaningful because it shows smaller losses in the worst scenarios.
Step 4: What actually happened in January 2024
Now compare the simulation against the actual returns from the first three weeks of January 2024.
The actual January pullback resulted in a $151.5k loss—within your simulated distribution (worse than 70th percentile, better than 95% VaR). Two surprises: NVDA held up better than expected (-2.4% vs -10% in stress scenario), and TLT declined (-2.8%) rather than providing diversification benefit. The Monte Carlo simulation correctly identified meaningful downside risk, though the specific scenario (bonds declining alongside equities) was outside your base case assumption.
Had you shifted to the 80/20 allocation based on simulation results, your loss would have been roughly $128k instead of $151k—saving approximately $23k. More importantly, knowing your 95% VaR was $261k meant you weren't surprised or panicked by a $151k decline; it fell within expected risk parameters and you could stay disciplined rather than panic-selling at the bottom.
How do you determine if your position sizes are too large?
Calculate the probability of exceeding your maximum acceptable loss threshold across your simulation paths. VaR tells you statistical percentiles, but "probability of ruin" tells you the chance of hitting losses that would force you to change your strategy—whether that's investor redemptions, margin calls, or psychological inability to continue.
For a hedge fund with $50 million AUM, acceptable maximum drawdown might be 15% ($7.5M loss) before triggering investor redemptions. Run 10,000 Monte Carlo paths over a 6-month horizon. If 230 paths show losses exceeding 15%, your probability of ruin is 2.3% (230/10,000). Is 2.3% acceptable? That depends on your risk tolerance and investor agreements, but you now have a concrete number to evaluate.
In Excel, calculating this requires: running all simulation paths (computationally expensive), counting paths where cumulative loss exceeds your threshold (COUNTIF across thousands of cells), calculating percentage (count/total paths), and repeating for different threshold levels to understand sensitivity. Want to test different position sizes? Rebuild the entire model with new weights and re-run.
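The counting logic itself is simple once the paths exist. A sketch with illustrative monthly return assumptions for the fund:

```python
import numpy as np

rng = np.random.default_rng(0)

ruin_threshold = -0.15            # 15% drawdown triggers redemptions
n_paths, n_months = 10_000, 6

# Illustrative monthly fund returns -- assumptions, not the article's data
monthly = rng.normal(0.006, 0.035, size=(n_paths, n_months))

# Compound each path and take its worst point along the way, since a
# redemption trigger watches drawdown from the starting value
growth = np.cumprod(1 + monthly, axis=1)
worst_point = growth.min(axis=1) - 1.0

ruined = worst_point <= ruin_threshold
prob_ruin = ruined.mean()

print(f"Probability of ruin: {prob_ruin:.1%}")
print(f"Average loss when ruined: {worst_point[ruined].mean():.1%}")
```

One subtlety the sketch makes explicit: checking only terminal values understates ruin risk, because a path can breach the threshold mid-horizon and recover. Using the path minimum captures the breach.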
Sourcetable makes this analysis conversational: "What's the probability of losing more than 15% over the next 6 months with my current portfolio?" Response: "2.3% chance (230 out of 10,000 paths). Worst scenario shows 27.8% loss. Average loss in scenarios exceeding 15% is 18.4%." Follow-up: "How does that probability change if I reduce equity exposure by 10%?" Response: "Probability drops to 1.4% with reduced exposure—saving $450k in average loss across the worst scenarios."
This approach transforms position sizing from guesswork into probability management. Instead of arbitrarily deciding "I'll allocate 20% to this position," you can ask: "What position size keeps my probability of 15% drawdown below 2%?" and let simulation results guide your decision. For leveraged portfolios, this is critical—leverage amplifies both returns and tail risk, and probability of ruin analysis shows you where the risk-reward tradeoff breaks down.
Standard Monte Carlo simulation assumes returns follow the historical distribution—typically modeled as normal distribution with mean and standard deviation from past data. This works reasonably well in stable markets but catastrophically fails during regime changes. The problem: market returns aren't normally distributed. They exhibit negative skewness (more frequent small gains, occasional large losses) and excess kurtosis (fat tails—extreme outcomes more common than normal distribution predicts).
Take the S&P 500 from 2010-2019 (bull market): mean daily return +0.051%, standard deviation 0.78%, skewness -0.47, kurtosis 6.2 (vs 3.0 for normal distribution). Run simulation using 2010-2019 parameters, and you'll systematically underestimate the probability of extreme drawdowns because your model doesn't capture the fat tails and negative skew.
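One common workaround is to draw shocks from a Student's t distribution instead of a normal, which produces fat tails while matching the same volatility. A sketch comparing tail probabilities under the two models (the mean and vol are the 2010-2019 figures above; the choice of 4 degrees of freedom is illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

mu, sigma = 0.00051, 0.0078       # S&P 500 daily mean/vol, 2010-2019
n = 1_000_000

# Student-t with df degrees of freedom has variance df/(df-2), so divide
# by sqrt(df/(df-2)) to match the normal model's volatility
df = 4
normal_draws = rng.normal(mu, sigma, n)
t_draws = mu + sigma * rng.standard_t(df, n) / np.sqrt(df / (df - 2))

# How often does each model produce a worse-than-4-sigma down day?
threshold = mu - 4 * sigma
print(f"Normal model:    {(normal_draws < threshold).mean():.5%}")
print(f"Student-t model: {(t_draws < threshold).mean():.5%}")
```

Both models share the same mean and standard deviation, yet the fat-tailed model produces 4-sigma down days far more often; that gap is exactly the tail risk a normal-based simulation quietly throws away.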
How do you account for regime changes in simulation?
Segment historical data into distinct regimes (bull market, bear market, high volatility, low volatility) and run separate simulations for each regime weighted by probability. Identify regimes using VIX levels, moving average trends, or economic indicators: for example, low volatility (VIX below 15), normal (VIX 15-25), and high volatility (VIX above 25), each weighted by its historical frequency.
Run Monte Carlo separately for each regime, then weight results by regime probability to get blended VaR. This produces more realistic tail risk estimates because it explicitly models that stress regimes have both negative mean returns and higher volatility—double impact on losses.
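A sketch of the mechanics: assign each path a regime by probability, draw from that regime's parameters, then read VaR off the blended and per-regime distributions. The 45/40/15 regime weights match the VIX example used in this article; the per-regime means and volatilities are made up:

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative monthly return parameters per VIX regime (assumptions)
regimes = {
    "low":    {"p": 0.45, "mu": 0.015,  "sigma": 0.025},
    "normal": {"p": 0.40, "mu": 0.008,  "sigma": 0.045},
    "high":   {"p": 0.15, "mu": -0.020, "sigma": 0.090},
}
n_paths = 10_000

# Assign each path a regime by probability, then draw from that regime
names = list(regimes)
probs = [regimes[k]["p"] for k in names]
labels = rng.choice(names, size=n_paths, p=probs)
returns = np.array([rng.normal(regimes[k]["mu"], regimes[k]["sigma"])
                    for k in labels])

# Blended VaR comes straight off the mixed distribution
blended_var = np.percentile(returns, 5)
print(f"Blended 95% VaR: {blended_var:.1%}")
for k in names:
    var_k = np.percentile(returns[labels == k], 5)
    print(f"{k} regime 95% VaR: {var_k:.1%}")
```

The blended VaR lands between the calm-regime and stress-regime figures, pulled toward the stress number because the high-volatility regime dominates the left tail even at 15% weight.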
In Excel, regime-based simulation requires: segmenting historical data by regime indicator, calculating separate correlation matrices for each regime, running 10,000 paths for each regime using regime-specific parameters, weighting results by regime probabilities, and aggregating to produce final distribution. Each regime needs its own set of formulas, and combining results requires complex array operations.
Sourcetable handles this through intelligent questioning: "Run simulation using three VIX regimes: low (VIX<15, 45% probability), normal (VIX 15-25, 40%), high (VIX>25, 15%). Show VaR for blended distribution and each regime separately." The AI automatically segments your historical data by VIX levels, calculates regime-specific parameters including correlations that shift during stress, runs 10,000 paths for each regime, weights by probability, and presents comparison table showing how VaR differs across regimes. Result: "Blended 95% VaR: -$287k. Low VIX regime only: -$198k. High VIX regime only: -$561k—showing tail risk is 2.8× worse in stress conditions."
Single-period Monte Carlo simulation (e.g., one-month horizon) is straightforward: generate correlated returns, apply to portfolio weights, calculate result. But multi-period simulation (e.g., 10-year retirement planning) requires deciding: do you rebalance back to target weights periodically, or let winners run and losers shrink? This decision dramatically affects simulation results because rebalancing enforces "buy low, sell high" discipline that improves long-term risk-adjusted returns.
Example: a 60/40 stock/bond portfolio with monthly rebalancing vs no rebalancing over 10 years, across 10,000 paths.
No rebalancing produces higher median return ($2.31M vs $2.18M) because you let equity winners run, but also higher volatility ($694k vs $487k StdDev) and worse tail outcomes ($1.05M vs $1.29M in worst 5%). For risk-averse investors, the lower median return from rebalancing is worth the significant tail risk reduction.
Implementing rebalancing in Excel Monte Carlo is complex: track portfolio weights after each period's returns, compare to target allocation, calculate trades needed to rebalance, account for transaction costs, apply rebalancing to each of 10,000 simulation paths, and repeat for every period (120 months for a 10-year simulation). For 10,000 paths × 120 months, you're performing 1.2 million rebalancing calculations.
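A vectorized sketch shows why this is heavy: even with NumPy doing the work, every month of every path needs a rebalancing step. The means and volatilities are illustrative, the two assets are drawn independently (no cross-asset correlation), and both strategies see identical shocks for a fair comparison:

```python
import numpy as np

n_paths, n_months = 10_000, 120          # 10 years, monthly steps
target = np.array([0.60, 0.40])          # 60/40 stock/bond target
mu = np.array([0.007, 0.003])            # illustrative monthly means
sigma = np.array([0.045, 0.012])         # illustrative monthly vols
start = 1_000_000

def simulate(rebalance: bool) -> np.ndarray:
    """Terminal wealth per path, with or without monthly rebalancing."""
    rng = np.random.default_rng(5)       # same shocks for both strategies
    holdings = np.tile(start * target, (n_paths, 1))
    for _ in range(n_months):
        shocks = rng.normal(mu, sigma, size=(n_paths, 2))
        holdings *= 1 + shocks
        if rebalance:
            total = holdings.sum(axis=1, keepdims=True)
            holdings = total * target    # restore 60/40 each month
    return holdings.sum(axis=1)

rebalanced = simulate(True)
drifting = simulate(False)
print(f"Rebalanced median:   {np.median(rebalanced):,.0f}")
print(f"No-rebalance median: {np.median(drifting):,.0f}")
print(f"Std: {rebalanced.std():,.0f} vs {drifting.std():,.0f}")
```

Even in this stripped-down version, the drifting portfolio ends with a wider terminal distribution, because high-return paths let the volatile equity sleeve grow to dominate the mix: the same tail-risk pattern the table above describes.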
How often should you rebalance in simulation?
Test multiple frequencies (monthly, quarterly, annually, threshold-based) and compare risk-return tradeoffs. More frequent rebalancing reduces volatility but increases transaction costs. Less frequent rebalancing lets trends develop but risks large deviations from target allocation.
Sourcetable lets you test rebalancing strategies without formula complexity: "Run 10-year simulation with monthly rebalancing to 60/40. Compare to quarterly rebalancing and 5% threshold rebalancing (rebalance only when allocation drifts >5%)." The AI runs all three strategies across 10,000 paths, tracks weights and rebalancing trades, accounts for transaction costs (you can specify: "assume 0.1% cost per rebalance"), and presents comparison table showing median return, volatility, worst-case, and total transaction costs for each approach.
Result might show: "Monthly rebalancing: $2.18M median, $487k StdDev, $28k total costs. Quarterly: $2.21M median, $512k StdDev, $11k costs. Threshold (5%): $2.24M median, $538k StdDev, $8k costs." Threshold-based rebalancing offers best median return with lowest costs, but slightly higher volatility—you can choose the tradeoff that fits your risk tolerance. For wealth managers presenting options to clients, this comparison clarifies the impact of rebalancing discipline in concrete dollar terms.