
Monte Carlo Portfolio Simulation Trading Strategy

Run thousands of portfolio scenarios in seconds with Sourcetable AI. Calculate VaR, stress test positions, and analyze risk without complex Excel formulas.

Andrew Grosser

February 16, 2026 • 14 min read

January 2024: $2 million portfolio, 60/30/10 stocks/bonds/alternatives. Run 10,000 Monte Carlo paths: 95% VaR shows $180k potential loss, worst 1% exceeds $320k, 22% chance of losing $100k+ this year. This isn't speculation—it's statistical reality showing your complete risk distribution.

Excel makes this impossible: 10,000 random scenarios, correlation matrices, Cholesky decomposition, MMULT array formulas—200,000+ cells for a 20-asset portfolio that break when you change weights. Sourcetable eliminates this. Upload holdings, ask "Run 10,000-path Monte Carlo, show 95% VaR," and instantly get risk metrics with distribution charts. Start simulating portfolio risk for free: sign up free.

The Correlation Matrix Problem That Breaks Excel Monte Carlo Models

Why does generating correlated random returns require Cholesky decomposition?

Because independent random draws don't preserve the correlation structure between assets. If you generate separate NORM.INV(RAND()) calls for SPY and TLT returns, each simulation path treats them as uncorrelated—ignoring the historical -0.31 correlation that drives diversification benefits. The Cholesky decomposition transforms your correlation matrix into a lower triangular matrix L where L × Lᵀ = Correlation Matrix. Multiply your uncorrelated random draws by this matrix, and you get correlated returns that preserve historical relationships.

In Excel, this means creating a correlation matrix (CORREL function across all asset pairs), performing Cholesky decomposition (no native function—requires manual LDL decomposition or VBA), generating uncorrelated random normal variables (NORM.INV(RAND()) for each asset), multiplying the random vector by the Cholesky matrix (MMULT array formula), and scaling by asset means and standard deviations. For a 15-asset portfolio, your correlation matrix is 15×15 (225 cells), the Cholesky decomposition requires complex array formulas, and each simulation path needs 15 correlated random draws.
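Outside of Excel, the whole pipeline is a few lines. Below is a minimal Python/NumPy sketch of the technique for a three-asset case; the -0.31 SPY/TLT correlation comes from the text above, but the other correlations, means, and volatilities are illustrative placeholders, not fitted values:

```python
import numpy as np

# Illustrative 3-asset setup (SPY, TLT, GLD); the -0.31 SPY/TLT correlation
# comes from the text, the other figures are placeholder assumptions
corr = np.array([
    [1.00, -0.31, 0.05],
    [-0.31, 1.00, 0.10],
    [0.05, 0.10, 1.00],
])
mu = np.array([0.0007, 0.0002, 0.0003])   # daily mean returns
sigma = np.array([0.012, 0.007, 0.009])   # daily volatilities

L = np.linalg.cholesky(corr)              # lower triangular, L @ L.T == corr

rng = np.random.default_rng(42)
z = rng.standard_normal((10_000, 3))      # uncorrelated standard normal draws
correlated = z @ L.T                      # now carries the target correlations
returns = mu + correlated * sigma         # scale by each asset's mean and vol

print(np.round(np.corrcoef(returns.T), 2))  # close to corr
```

One practical note: `np.linalg.cholesky` raises `LinAlgError` if the matrix is not positive definite, a common failure mode when correlations are estimated pairwise over mismatched date ranges.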

What happens when correlations shift during market stress?

Historical correlations underestimate tail risk because assets become more correlated during crashes. In normal markets, SPY and HYG (high-yield bonds) show 0.52 correlation. During March 2020, correlation spiked to 0.87—diversification disappeared exactly when you needed it most. Your Monte Carlo model using full-sample correlations will show better diversification benefits than you'll actually get in a crisis, leading to systematically underestimated VaR in the scenarios that matter.

Sourcetable handles this through conditional correlation analysis. Ask: "Run simulation using March 2020 correlation regime" or "Show VaR with correlations increased to 0.80 across all equity positions." The AI automatically adjusts the correlation matrix, recalculates Cholesky decomposition, generates new correlated random returns, and shows you how tail risk changes. What would require rebuilding your entire Excel model happens with a single question. You can even compare: "Show VaR side-by-side using normal vs stress correlations" and instantly see the difference—typically 30-50% higher VaR under stress conditions.

Value at Risk vs Conditional VaR: Why 95% VaR Hides Your Worst Losses

VaR tells you the threshold: "95% VaR of $200,000 means you have 5% chance of losing more than $200k." But it doesn't tell you how much worse things get in that worst 5%. Conditional VaR (CVaR, also called Expected Shortfall) answers the critical question: "Given that you've breached the VaR threshold, what's your average loss?" For portfolios with fat tails or leverage, CVaR can be 50-80% worse than VaR.

Take a $5 million long-short equity portfolio with 150% gross exposure (100% long, 50% short). Run 10,000 Monte Carlo paths and extract metrics:

  • 95% VaR: $187,000 loss (3.74% of equity)
  • 95% CVaR: $298,000 loss (5.96% of equity)
  • Maximum loss across all paths: $512,000 (10.24%)
  • Worst 1% of scenarios: Average loss $387,000 (7.74%)

The 95% VaR of $187k looks manageable—under 4% of capital. But CVaR shows that when you breach that threshold (5% of the time), average loss is actually $298k—59% worse than VaR. And in the worst 1% of scenarios, you're looking at nearly $400k losses. This matters for position sizing: if you size positions assuming 4% maximum loss, you're severely underprepared for the tail outcomes that CVaR reveals.

How do you calculate CVaR from simulation results?

Sort all 10,000 simulated returns ascending, identify the 5th percentile return (the 95% VaR threshold, row 500 in your sorted list), then average all returns at or below that threshold (CVaR). In Excel: sort your 10,000 portfolio returns ascending, use PERCENTILE.INC with k = 0.05 to find the VaR threshold, then use AVERAGEIF to average all returns below that threshold. Sounds simple, but when you're running this daily with portfolio composition changes, you're constantly updating correlation matrices, regenerating random draws, recalculating percentiles, and refreshing CVaR calculations.
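In code, the calculation collapses to two lines once simulated returns exist. This sketch uses normally distributed stand-in returns purely to have data to sort; with a real run you would reuse your simulation output. Note the 95% VaR threshold is the 5th percentile of the return distribution, i.e. the worst-5% cutoff:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for 10,000 simulated portfolio returns (assumed normal here just
# for illustration; substitute your actual simulation output)
sim_returns = rng.normal(loc=0.02, scale=0.07, size=10_000)

# 95% VaR threshold: the 5th percentile of returns (the worst-5% cutoff)
var_95 = np.percentile(sim_returns, 5)
# 95% CVaR: the average return across the paths at or below that cutoff
cvar_95 = sim_returns[sim_returns <= var_95].mean()

print(f"95% VaR : {var_95:.2%}")
print(f"95% CVaR: {cvar_95:.2%}")
```

CVaR is always at least as bad as VaR by construction, since it averages only the paths beyond the threshold.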

Sourcetable calculates both automatically and explains the gap: "Your 95% VaR is $187k, but 95% CVaR is $298k—average loss in the worst 5% is 59% higher than the VaR threshold. This suggests significant tail risk from leverage and short positions." The AI can also show the full distribution beyond VaR: "Plot returns for the worst 5% of scenarios" generates a histogram showing exactly how bad things get when you breach VaR. For risk committees asking "what's our worst-case loss?", this visualization is far more informative than a single VaR number.

Real-World Example: Simulating the January 2024 Tech Selloff Before It Happened

Let's walk through using Monte Carlo simulation to assess risk for a $3 million growth portfolio on December 29, 2023, before the January 2024 tech correction. Portfolio composition:

  • $900k QQQ (Nasdaq-100 ETF, 30% weight)
  • $600k ARKK (ARK Innovation ETF, 20%)
  • $450k MSFT (Microsoft, 15%)
  • $450k NVDA (NVIDIA, 15%)
  • $300k GOOGL (Alphabet, 10%)
  • $300k TLT (20-year Treasury ETF, 10% for diversification)

Step 1: Upload historical returns and run baseline simulation

Upload daily return data for all positions from January 2022 through December 2023 (2 years, ~500 trading days). Calculate correlation matrix—QQQ/NVDA show 0.68 correlation, QQQ/TLT show -0.42 (negative correlation provides diversification), ARKK/NVDA show 0.71 (highly correlated growth exposure). Ask Sourcetable: "Run 10,000-path Monte Carlo simulation, show me one-month VaR at 90%, 95%, and 99% confidence levels."

Results (based on December 2023 market conditions):

  • Expected return (one month): +2.1% ($63k gain)
  • Standard deviation: 6.8% ($204k)
  • 90% VaR: -6.2% (-$186k loss)
  • 95% VaR: -8.7% (-$261k loss)
  • 99% VaR: -13.4% (-$402k loss)
  • Probability of negative return: 38%

The 95% VaR of $261k (8.7% loss) suggests moderate risk—there's a 5% chance of losing more than $261k over the next month. But notice the wide distribution: standard deviation of 6.8% means outcomes ranging from +15% gains to -12% losses are well within possibility.
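A compact sketch of this baseline run: the weights match the example portfolio, but the one-month means, volatilities, and the one-factor correlation structure (loadings chosen so QQQ/NVDA is roughly 0.68 and QQQ/TLT roughly -0.42, and so the matrix is positive definite by construction) are illustrative assumptions, so the printed VaR figures will differ from the article's:

```python
import numpy as np

# Weights from the example portfolio: QQQ, ARKK, MSFT, NVDA, GOOGL, TLT
weights = np.array([0.30, 0.20, 0.15, 0.15, 0.10, 0.10])
# One-month mean returns and volatilities per asset (illustrative assumptions)
mu = np.array([0.020, 0.015, 0.022, 0.035, 0.018, 0.003])
sigma = np.array([0.055, 0.110, 0.060, 0.120, 0.065, 0.035])

# One-factor correlation structure: corr[i, j] = b[i] * b[j] off-diagonal,
# which guarantees a valid positive-definite matrix when |b| < 1
b = np.array([0.90, 0.80, 0.85, 0.76, 0.78, -0.47])
corr = np.outer(b, b)
np.fill_diagonal(corr, 1.0)

L = np.linalg.cholesky(corr)
rng = np.random.default_rng(1)
z = rng.standard_normal((10_000, 6)) @ L.T      # correlated standard normals
asset_returns = mu + z * sigma
port_returns = asset_returns @ weights          # one-month return per path

portfolio_value = 3_000_000
for conf in (90, 95, 99):
    var = -np.percentile(port_returns, 100 - conf)
    print(f"{conf}% VaR: {var:.1%} (${var * portfolio_value:,.0f})")
print(f"P(negative month): {(port_returns < 0).mean():.0%}")
```

The same array of `port_returns` feeds every later metric in this walkthrough: CVaR, probability of ruin, and the stress comparisons.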

Step 2: Stress test for tech sector correction

Late December 2023, you're concerned about stretched tech valuations and rising yields. Ask Sourcetable: "Run stress scenario: QQQ down 8%, ARKK down 12%, MSFT down 7%, NVDA down 10%, GOOGL down 6%, TLT up 3%. Show portfolio impact."

Scenario results:

  • QQQ position: -$72k (-8% × $900k)
  • ARKK position: -$72k (-12% × $600k)
  • MSFT position: -$31.5k (-7% × $450k)
  • NVDA position: -$45k (-10% × $450k)
  • GOOGL position: -$18k (-6% × $300k)
  • TLT position: +$9k (+3% × $300k)
  • Total portfolio impact: -$229.5k (-7.65%)

The stress scenario shows a $230k loss—within your 95% VaR envelope but definitely in the tail. The TLT position provides only $9k of offset (+3% on 10% of portfolio), insufficient to meaningfully reduce losses during a tech selloff. This suggests reconsidering your diversification strategy.
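The stress scenario itself is deterministic arithmetic: a fixed shock times each position value, summed. A sketch with the positions and shocks from the example above:

```python
# Deterministic stress test: fixed shock x position value, summed; these are
# the positions and shocks from the example above
positions = {
    "QQQ":   (900_000, -0.08),
    "ARKK":  (600_000, -0.12),
    "MSFT":  (450_000, -0.07),
    "NVDA":  (450_000, -0.10),
    "GOOGL": (300_000, -0.06),
    "TLT":   (300_000, +0.03),
}

total = 0.0
for name, (value, shock) in positions.items():
    impact = value * shock
    total += impact
    print(f"{name:>5}: ${impact:>+11,.0f}")

portfolio_value = sum(v for v, _ in positions.values())
print(f"Total: ${total:+,.0f} ({total / portfolio_value:+.2%})")
# Total: $-229,500 (-7.65%)
```

Because nothing is random here, the value of running it alongside the Monte Carlo output is locating this one scenario within the simulated distribution.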

Step 3: Test increased bond allocation for protection

What if we increase bonds to 20% to reduce tail risk?

Ask Sourcetable: "Compare two allocations: current (90% equities/10% bonds) vs revised (80% equities/20% bonds). Show VaR and CVaR for both." The AI runs simulations for both allocations and presents comparison:

  • Current 90/10: 95% VaR -$261k, 95% CVaR -$342k, Expected return +2.1%
  • Revised 80/20: 95% VaR -$219k, 95% CVaR -$288k, Expected return +1.8%
  • Risk reduction: VaR improves by $42k (16%), CVaR improves by $54k (16%)
  • Return tradeoff: Expected return declines by 0.3% ($9k per month)

The revised allocation reduces tail risk by 16% at the cost of 0.3% monthly return—an acceptable tradeoff if you're concerned about near-term correction. The CVaR improvement ($54k) is particularly meaningful because it shows smaller losses in the worst scenarios.

Step 4: What actually happened in January 2024

January 2024 returns (first 3 weeks):

  • QQQ: -4.2% (-$37.8k on your position)
  • ARKK: -9.1% (-$54.6k)
  • MSFT: -5.8% (-$26.1k)
  • NVDA: -2.4% (-$10.8k)
  • GOOGL: -4.6% (-$13.8k)
  • TLT: -2.8% (-$8.4k—bonds fell on inflation concerns)
  • Total actual loss: -$151.5k (-5.05%)

The actual January pullback resulted in a $151.5k loss—within your simulated distribution (worse than 70th percentile, better than 95% VaR). Two surprises: NVDA held up better than expected (-2.4% vs -10% in stress scenario), and TLT declined (-2.8%) rather than providing diversification benefit. The Monte Carlo simulation correctly identified meaningful downside risk, though the specific scenario (bonds declining alongside equities) was outside your base case assumption.

Had you shifted to the 80/20 allocation based on simulation results, your loss would have been roughly $128k instead of $151k—saving approximately $23k. More importantly, knowing your 95% VaR was $261k meant you weren't surprised or panicked by a $151k decline; it fell within expected risk parameters and you could stay disciplined rather than panic-selling at the bottom.

Probability of Ruin: The Metric That Matters More Than VaR for Position Sizing

How do you determine if your position sizes are too large?

Calculate the probability of exceeding your maximum acceptable loss threshold across your simulation paths. VaR tells you statistical percentiles, but "probability of ruin" tells you the chance of hitting losses that would force you to change your strategy—whether that's investor redemptions, margin calls, or psychological inability to continue.

For a hedge fund with $50 million AUM, acceptable maximum drawdown might be 15% ($7.5M loss) before triggering investor redemptions. Run 10,000 Monte Carlo paths over a 6-month horizon. If 230 paths show losses exceeding 15%, your probability of ruin is 2.3% (230/10,000). Is 2.3% acceptable? That depends on your risk tolerance and investor agreements, but you now have a concrete number to evaluate.
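The counting step is trivial once paths exist. The sketch below uses Student-t stand-in returns (an assumption, chosen to give the distribution a fat left tail) in place of real simulation output:

```python
import numpy as np

rng = np.random.default_rng(7)
# Stand-in for 10,000 simulated 6-month cumulative returns; the Student-t
# draws are an assumption used here to give the distribution a fat left tail
sim_returns = 0.04 + 0.09 * rng.standard_t(df=4, size=10_000)

max_drawdown = -0.15                  # ruin threshold: a 15% loss
ruin_paths = sim_returns <= max_drawdown
p_ruin = ruin_paths.mean()            # fraction of paths breaching the limit

print(f"Probability of ruin: {p_ruin:.1%}")
print(f"Worst path: {sim_returns.min():.1%}")
print(f"Average loss given ruin: {sim_returns[ruin_paths].mean():.1%}")
```

Re-running this over a grid of position sizes turns the "what size keeps ruin below 2%?" question into a simple lookup.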

In Excel, calculating this requires: running all simulation paths (computationally expensive), counting paths where cumulative loss exceeds your threshold (COUNTIF across thousands of cells), calculating percentage (count/total paths), and repeating for different threshold levels to understand sensitivity. Want to test different position sizes? Rebuild the entire model with new weights and re-run.

Sourcetable makes this analysis conversational: "What's the probability of losing more than 15% over the next 6 months with my current portfolio?" Response: "2.3% chance (230 out of 10,000 paths). Worst scenario shows 27.8% loss. Average loss in scenarios exceeding 15% is 18.4%." Follow-up: "How does that probability change if I reduce equity exposure by 10%?" Response: "Probability drops to 1.4% with reduced exposure—saving $450k in average loss across the worst scenarios."

This approach transforms position sizing from guesswork into probability management. Instead of arbitrarily deciding "I'll allocate 20% to this position," you can ask: "What position size keeps my probability of 15% drawdown below 2%?" and let simulation results guide your decision. For leveraged portfolios, this is critical—leverage amplifies both returns and tail risk, and probability of ruin analysis shows you where the risk-reward tradeoff breaks down.

Why Historical Return Distributions Fail (and How to Fix Them with Regime-Based Simulation)

Standard Monte Carlo simulation assumes returns follow the historical distribution—typically modeled as normal distribution with mean and standard deviation from past data. This works reasonably well in stable markets but catastrophically fails during regime changes. The problem: market returns aren't normally distributed. They exhibit negative skewness (more frequent small gains, occasional large losses) and excess kurtosis (fat tails—extreme outcomes more common than normal distribution predicts).

Take the S&P 500 from 2010-2019 (bull market): mean daily return +0.051%, standard deviation 0.78%, skewness -0.47, kurtosis 6.2 (vs 3.0 for normal distribution). Run simulation using 2010-2019 parameters, and you'll systematically underestimate the probability of extreme drawdowns because your model doesn't capture the fat tails and negative skew.

How do you account for regime changes in simulation?

Segment historical data into distinct regimes (bull market, bear market, high volatility, low volatility) and run separate simulations for each regime weighted by probability. Identify regimes using VIX levels, moving average trends, or economic indicators. For example:

  • Bull regime (VIX < 15): Mean +0.065%, StdDev 0.62%, probability 45%
  • Normal regime (VIX 15-25): Mean +0.038%, StdDev 0.88%, probability 40%
  • Stress regime (VIX > 25): Mean -0.021%, StdDev 1.94%, probability 15%

Run Monte Carlo separately for each regime, then weight results by regime probability to get blended VaR. This produces more realistic tail risk estimates because it explicitly models that stress regimes have both negative mean returns and higher volatility—double impact on losses.
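One way to implement the blend is to treat the regimes as a mixture: draw a regime for each path according to its probability, then sample that regime's distribution and pool the results. A sketch using the daily parameters from the list above:

```python
import numpy as np

rng = np.random.default_rng(3)
n_paths = 100_000

# Regime parameters from the list above: (daily mean, daily stdev, probability)
regimes = {
    "bull":   (0.00065, 0.0062, 0.45),
    "normal": (0.00038, 0.0088, 0.40),
    "stress": (-0.00021, 0.0194, 0.15),
}
names = list(regimes)
probs = [regimes[n][2] for n in names]

# Draw a regime per path, then sample that regime's distribution; pooling
# the results produces the blended (mixture) distribution
choice = rng.choice(len(names), size=n_paths, p=probs)
mus = np.array([regimes[n][0] for n in names])[choice]
sds = np.array([regimes[n][1] for n in names])[choice]
returns = rng.normal(mus, sds)

blended_var = -np.percentile(returns, 5)
stress_var = -np.percentile(returns[choice == names.index("stress")], 5)
print(f"Blended 95% VaR (daily): {blended_var:.2%}")
print(f"Stress-regime 95% VaR (daily): {stress_var:.2%}")
```

Sampling the mixture, rather than averaging the three regime VaRs, matters because VaR is a quantile and quantiles are not linear in the component distributions.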

In Excel, regime-based simulation requires: segmenting historical data by regime indicator, calculating separate correlation matrices for each regime, running 10,000 paths for each regime using regime-specific parameters, weighting results by regime probabilities, and aggregating to produce final distribution. Each regime needs its own set of formulas, and combining results requires complex array operations.

Sourcetable handles this through intelligent questioning: "Run simulation using three VIX regimes: low (VIX<15, 45% probability), normal (VIX 15-25, 40%), high (VIX>25, 15%). Show VaR for blended distribution and each regime separately." The AI automatically segments your historical data by VIX levels, calculates regime-specific parameters including correlations that shift during stress, runs 10,000 paths for each regime, weights by probability, and presents comparison table showing how VaR differs across regimes. Result: "Blended 95% VaR: -$287k. Low VIX regime only: -$198k. High VIX regime only: -$561k—showing tail risk is 2.8× worse in stress conditions."

Portfolio Rebalancing in Simulation: The Hidden Driver of Multi-Period Returns

Single-period Monte Carlo simulation (e.g., one-month horizon) is straightforward: generate correlated returns, apply to portfolio weights, calculate result. But multi-period simulation (e.g., 10-year retirement planning) requires deciding: do you rebalance back to target weights periodically, or let winners run and losers shrink? This decision dramatically affects simulation results because rebalancing enforces "buy low, sell high" discipline that improves long-term risk-adjusted returns.

Example: 60/40 stock/bond portfolio with monthly rebalancing vs no rebalancing over 10 years (10,000 paths):

  • With rebalancing: Final value $2.18M (median), StdDev $487k, worst 5%: $1.29M
  • No rebalancing: Final value $2.31M (median), StdDev $694k, worst 5%: $1.05M

No rebalancing produces higher median return ($2.31M vs $2.18M) because you let equity winners run, but also higher volatility ($694k vs $487k StdDev) and worse tail outcomes ($1.05M vs $1.29M in worst 5%). For risk-averse investors, the lower median return from rebalancing is worth the significant tail risk reduction.

Implementing rebalancing in Excel Monte Carlo is complex: track portfolio weights after each period's returns, compare to target allocation, calculate trades needed to rebalance, account for transaction costs, apply rebalancing to each of 10,000 simulation paths, and repeat for every period (120 months for a 10-year simulation). For 10,000 paths × 120 months, you're performing 1.2 million rebalancing calculations.
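Vectorized, those 1.2 million rebalancing operations collapse into a short loop over months. This sketch compares monthly rebalancing against drifting weights; the monthly means, volatilities, and the -0.20 stock/bond correlation are illustrative assumptions, and transaction costs are omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(11)
n_paths, n_months = 10_000, 120
target = np.array([0.60, 0.40])        # 60/40 stock/bond target weights
mu = np.array([0.008, 0.003])          # monthly mean returns (assumed)
sigma = np.array([0.045, 0.015])       # monthly volatilities (assumed)
rho = -0.20                            # stock/bond correlation (assumed)
L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
start = 1_000_000

def run(rebalance: bool) -> np.ndarray:
    # Dollar value of each sleeve, tracked along every path
    holdings = np.tile(start * target, (n_paths, 1))
    for _ in range(n_months):
        z = rng.standard_normal((n_paths, 2)) @ L.T
        holdings = holdings * (1.0 + mu + z * sigma)
        if rebalance:                  # reset sleeves to 60/40 every month
            holdings = holdings.sum(axis=1, keepdims=True) * target
    return holdings.sum(axis=1)

reb, drift = run(True), run(False)
print(f"Rebalanced: median ${np.median(reb):,.0f}, stdev ${reb.std():,.0f}")
print(f"Drifting  : median ${np.median(drift):,.0f}, stdev ${drift.std():,.0f}")
```

The drifting portfolio shows the wider terminal distribution described in the comparison above, because paths where equities outperform end up with an ever-larger equity weight.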

How often should you rebalance in simulation?

Test multiple frequencies (monthly, quarterly, annually, threshold-based) and compare risk-return tradeoffs. More frequent rebalancing reduces volatility but increases transaction costs. Less frequent rebalancing lets trends develop but risks large deviations from target allocation.

Sourcetable lets you test rebalancing strategies without formula complexity: "Run 10-year simulation with monthly rebalancing to 60/40. Compare to quarterly rebalancing and 5% threshold rebalancing (rebalance only when allocation drifts >5%)." The AI runs all three strategies across 10,000 paths, tracks weights and rebalancing trades, accounts for transaction costs (you can specify: "assume 0.1% cost per rebalance"), and presents comparison table showing median return, volatility, worst-case, and total transaction costs for each approach.

Result might show: "Monthly rebalancing: $2.18M median, $487k StdDev, $28k total costs. Quarterly: $2.21M median, $512k StdDev, $11k costs. Threshold (5%): $2.24M median, $538k StdDev, $8k costs." Threshold-based rebalancing offers best median return with lowest costs, but slightly higher volatility—you can choose the tradeoff that fits your risk tolerance. For wealth managers presenting options to clients, this comparison clarifies the impact of rebalancing discipline in concrete dollar terms.

Frequently Asked Questions

If your question is not covered here, you can contact our team.

How many Monte Carlo simulation paths are needed for stable 99% VaR estimates?
Convergence for 99% VaR requires approximately 10,000 paths for ±5% stability and 100,000 paths for ±1% stability. At 99% confidence you are estimating the 1% tail: only 100 out of 10,000 paths fall in the loss tail, creating high sampling variance. For 99.9% VaR (used in Basel Advanced approaches), you need 1 million or more paths for reliable estimates. Quasi-random (quasi-Monte Carlo) sequences such as Sobol can achieve similar precision with roughly 10× fewer paths than pseudo-random numbers, dramatically reducing computation time for large portfolios.
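You can see the sampling-variance effect directly by re-estimating 99% VaR many times at each path count and measuring the spread of the estimates; this sketch uses standard normal returns for simplicity:

```python
import numpy as np

rng = np.random.default_rng(2)

def var99_spread(n_paths: int, trials: int = 200) -> float:
    # Re-estimate 99% VaR `trials` times and return the estimator's stdev
    estimates = [
        -np.percentile(rng.standard_normal(n_paths), 1) for _ in range(trials)
    ]
    return float(np.std(estimates))

spreads = {n: var99_spread(n) for n in (1_000, 10_000, 100_000)}
for n, s in spreads.items():
    print(f"{n:>7} paths: 99% VaR estimate stdev = {s:.4f}")
```

The spread shrinks roughly as 1/√N for pseudo-random sampling, which is exactly why tail quantiles need so many more paths than central moments.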
How do you model fat-tailed return distributions in Monte Carlo simulations?
Replace the standard normal distribution with a Student-t distribution. Financial returns typically have kurtosis of 4-8 (vs. 3 for the normal). Using a t-distribution with 4 degrees of freedom adds realistic fat tails: the 99% quantile is roughly 22% larger than the normal equivalent. More sophisticated models use Gaussian copulas with marginal t-distributions to model joint tail behavior; the 2008 crisis demonstrated that Gaussian copulas dramatically understated co-default probability in CDOs. For equity portfolios, a multivariate t-distribution with 5 degrees of freedom and a DCC-GARCH covariance matrix typically fits historical loss distributions significantly better than a simple multivariate normal.
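A quick way to see the fat-tail effect: draw variance-matched normal and t(4) samples and compare their 1% left-tail quantiles. Rescaling the t draws by sqrt(df/(df-2)) puts both distributions at unit variance, so the comparison isolates tail shape:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 1_000_000

normal = rng.standard_normal(n)
# Student-t with 4 df, rescaled to unit variance (Var of t(df) = df/(df-2))
t4 = rng.standard_t(df=4, size=n) / np.sqrt(4 / (4 - 2))

q_normal = np.percentile(normal, 1)   # 1% left-tail quantile, approx -2.33
q_t4 = np.percentile(t4, 1)           # reaches noticeably further into the tail
print(f"Normal 1% quantile: {q_normal:.2f}")
print(f"t(4)   1% quantile: {q_t4:.2f}")
```

Substituting these draws into a simulation directly widens the loss tail without changing the portfolio's mean or variance inputs.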
What is the difference between cross-sectional and time-series Monte Carlo for portfolio simulation?
Cross-sectional (single-period) Monte Carlo simulates the distribution of portfolio returns over one time step by sampling from a multivariate joint distribution, which is ideal for 1-day or 10-day VaR. Time-series Monte Carlo simulates a portfolio over multiple time steps (e.g., daily for 1 year) using a process model, capturing path dependence and dynamic effects like option gamma, rebalancing, and option expiration. A 12-month simulation with 250 daily steps × 100,000 paths requires 25 million individual return draws. For portfolios with options, barrier features, or path-dependent strategies, time-series Monte Carlo is essential; cross-sectional Monte Carlo will misvalue these instruments.
How do you validate a Monte Carlo model through backtesting and model comparison?
Validation involves three tests: distributional testing (compare simulated return quantiles to historical quantiles using KS tests or Q-Q plots); VaR backtesting (count exceptions over 250 days and apply Kupiec/Christoffersen tests); and stress calibration (verify that losses on the scale of the 2008 or 2020 episodes fall within the simulated distribution). A properly calibrated model should pass the Kupiec test at 95% confidence and show Q-Q plots with points near the 45-degree line through the tails. If the model consistently underestimates losses in the 1-5% left tail, reduce the degrees of freedom of the t-distribution or add a jump component.
How does Monte Carlo simulation handle option Greeks and non-linear payoffs in a portfolio?
For options, each simulation path reprices the option using the full pricing formula (Black-Scholes, Heston, etc.) at the future date, capturing all non-linearities (gamma, vanna, and volga effects) that delta approximations miss. A portfolio with 1,000 options might require 100 million option pricing calculations per simulation run. Variance reduction techniques (control variates, importance sampling, stratified sampling) can reduce required paths by 5-10×. For American options, use least-squares Monte Carlo (Longstaff-Schwartz) to approximate the exercise boundary along each path.
What are the best variance reduction techniques and when should each be used?
Control variates use a correlated variable with known expected value to reduce simulation noise; they are effective when a good control exists (e.g., a European option as control for an Asian option). Antithetic variates pair each random path with its negative, cutting variance by 30-50% at zero extra path computation. Importance sampling shifts the sampling distribution toward the loss region of interest and is most effective for rare-event estimation (99.9% VaR), where standard Monte Carlo places few samples in the critical tail. Quasi-Monte Carlo (Sobol sequences) achieves convergence of roughly O(1/N) vs. O(1/√N) for standard Monte Carlo, making it about 10× more efficient for smooth payoff functions.
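Antithetic variates are the easiest of these to demonstrate. The sketch below estimates a European call payoff by plain Monte Carlo and with antithetic pairing; the GBM parameters (S0 = 100, r = 2%, vol = 20%, 1-year horizon, strike 100) are illustrative assumptions and discounting is omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(9)
n = 50_000                                 # number of base paths

z = rng.standard_normal(n)

def terminal(zs):
    # GBM endpoint after 1 year (assumed: S0=100, r=2%, vol=20%)
    return 100 * np.exp((0.02 - 0.5 * 0.20**2) + 0.20 * zs)

payoff = np.maximum(terminal(z) - 100, 0)        # call payoff, strike 100
payoff_anti = np.maximum(terminal(-z) - 100, 0)  # mirrored (antithetic) paths

plain = payoff.mean()
paired = 0.5 * (payoff + payoff_anti)            # one estimate per pair
print(f"Plain     : {plain:.3f} (stderr {payoff.std() / np.sqrt(n):.3f})")
print(f"Antithetic: {paired.mean():.3f} (stderr {paired.std() / np.sqrt(n):.3f})")
```

Because the call payoff is monotone in z, each path and its mirror are negatively correlated, so averaging the pair shrinks the estimator's standard error without drawing any additional random numbers.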
How do you incorporate stochastic volatility and jumps into Monte Carlo portfolio models?
The Heston stochastic volatility model adds a mean-reverting variance process: simulate both asset price and variance jointly using Euler or Milstein discretization, correcting for the non-central chi-squared distribution of variance. Merton's jump-diffusion model adds Poisson-distributed jumps; a crash frequency λ of 1-3 events per year with mean jump size -20% and standard deviation 15% are typical equity calibrations. The combined Bates model (Heston plus jumps) captures both the volatility smile and the leptokurtic return distribution. Calibrate to observed option prices using market data; calibrating to historical data alone typically underestimates implied jump risk premia by 30-50%.
Andrew Grosser

Founder, CTO @ Sourcetable

Sourcetable is the AI-powered spreadsheet that helps traders, analysts, and finance teams hypothesize, evaluate, validate, and iterate on trading strategies without writing code.


Ready to implement the Monte Carlo Portfolio Simulation strategy?

Backtest, validate, and execute the Monte Carlo Portfolio Simulation strategy with AI. No coding required.
