Going To The Past With The VMI To Predict The Future.

The Value Momentum Index (VMI) classifies playoff teams from player-level production signals alone, at an Area Under the Receiver Operating Characteristic Curve (AUC) of 0.81 across 88 team-seasons (2023–2025) under the current V3 specification. In plain terms: if you pull one playoff team and one eliminated team at random, Production Value ($PV) correctly identifies which is which 81% of the time.

Total payroll shows no statistically significant relationship with playoff qualification (p = 0.60).

These are the full version-stamped blind backtest results. The framework has evolved, and the numbers here reflect what the current system actually produces.

In truth, I had no intention of testing this; I had no idea it might even be a feature. I had all of this historical data and wanted to see how far it could go.

I wanted to know: can a framework built to value individual players also classify team outcomes? If I took all of the aggregated VMI data and pointed it backwards, without the season results, what would I get?

This isn’t a prediction engine. I built it to measure production value, grade contract efficiency, and project sustainability. That it classifies competitive outcomes from those inputs is structural validation and evidence that production economics drive competitive variance more reliably than payroll, roster construction philosophy, or market narrative.


88 Team-Seasons, Three Blind Seasons Under V3

VMI has been through two production specifications.

The original V2 ran six seasons (2020–2025) through each year’s contemporaneous calculation engine. In early 2026 I rebuilt the engine to integrate expected goals data and league-specific cap accounting structures. V3 is the current production system.

Retro-applying V3 to 2020–2022 degraded performance because those seasons lack the xG data the new engine requires. That’s the expected result of applying a richer specification to years where the richer inputs don’t exist. The V2 numbers for those seasons remain valid under their original specification, but the backtest I’m presenting here uses V3 across its valid era: 2023 through 2025.

88 team-seasons. 29–30 clubs per season. All three seasons tested blind. The VMI scores were calculated before outcomes were known, and the weights were set by design, not fitted to results.


AUC = 0.81 Across Three Seasons, 0.92 in 2024

This is the correct test for a binary outcome: did you make the playoffs or not?

It’s a classification diagnostic, not a linear correlation. AUC measures how well Production Value separates the two groups across every possible threshold simultaneously.
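For readers who want the mechanics, AUC reduces to the pairwise comparison described above: the share of (playoff, eliminated) pairs where the playoff team carries the higher score. A minimal sketch, using made-up $PV figures rather than the actual VMI dataset:

```python
def pairwise_auc(playoff_pv, eliminated_pv):
    """AUC via its probabilistic definition: the fraction of
    (playoff, eliminated) pairs where the playoff team has the
    higher $PV. Ties count as half a win."""
    wins = 0.0
    for p in playoff_pv:
        for e in eliminated_pv:
            if p > e:
                wins += 1
            elif p == e:
                wins += 0.5
    return wins / (len(playoff_pv) * len(eliminated_pv))

# Illustrative $PV figures in millions, not the real 88 team-seasons.
playoff = [26.1, 24.3, 23.6, 19.5, 17.0]
eliminated = [21.8, 18.8, 17.2, 15.9, 14.1]
print(pairwise_auc(playoff, eliminated))  # 21 of 25 pairs ordered correctly -> 0.84
```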

AUC = 0.81 pooled.

2024 alone hits 0.92, approaching clinical diagnostic accuracy.

2023: AUC = 0.77. 29 teams. Ranking correlation: 0.52. Playoff accuracy: 72.4% (21 of 29).

2024: AUC = 0.92. 29 teams. Ranking correlation: 0.74. Playoff accuracy: 86.2% (25 of 29).

2025: AUC = 0.79. 30 teams. Ranking correlation: 0.53. Playoff accuracy: 73.3% (22 of 30).

Overall PV-only playoff classification: 68 of 88 team-seasons, 77.3%.

Playoff teams carry a median $PV of $23.6M versus $18.8M for eliminated teams, a $4.8M separation (p < 0.001). The logistic relationship between Production Value and qualification probability is steep and well-calibrated — each incremental million dollars of PV produces a measurable, consistent shift in odds, with the inflection point sitting in the high teens and saturation reaching above $25M. Production Value explains 93 times more playoff variance than total payroll on the logistic model.
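The shape of that relationship can be sketched with a standard logistic curve. The midpoint and slope below are hypothetical placeholders chosen to echo the described behavior (inflection in the high teens, saturation above $25M); they are not the fitted V3 coefficients:

```python
import math

def playoff_prob(pv_millions, midpoint=18.5, slope=0.55):
    """Hypothetical logistic curve for P(playoffs | $PV).
    midpoint: $PV in millions where the probability crosses 50%.
    slope: how sharply the odds shift per extra million of PV.
    Both parameters are illustrative, not the fitted model."""
    return 1.0 / (1.0 + math.exp(-slope * (pv_millions - midpoint)))

for pv in (14.0, 18.5, 23.6, 27.0):
    print(f"$PV = {pv:>4}M -> P(playoffs) = {playoff_prob(pv):.2f}")
```

Under these placeholder parameters, a team in the low teens of $PV sits near the floor of the curve while a team above $25M is effectively saturated, which is the qualitative pattern described above.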


The V2 Baseline (2020–2025, Six Seasons)

For the record, the original V2 backtest across six seasons using each year's native specification produced: r = 0.704 (R² = 49.6%), 0.80 average ranking correlation, and 89.1% composite playoff classification from a three-signal blend of Production Value, VMI Alpha, and Value Delta. Total payroll explained 2.0%. Those numbers remain valid under the specification that produced them. V3 is a different engine measuring the same thing with richer inputs, and the backtest reflects that evolution.


$PV Explains 32.1% of Competitive Variance; Payroll Explains 1.3%

Production Value vs. league points: r = 0.566 (R² = 32.1%) across the three-season V3 era. A single production metric explains nearly a third of competitive variance across a salary-capped league with 30 different ownership structures. Total payroll explains 1.3%. Production Value explains competitive outcomes at 24 times the rate of total payroll on a linear basis, and 93 times on the logistic model.

The per-year signal tells the fuller story. 2024 alone: r = 0.718 (R² = 51.5%), ρ = 0.743, the strongest single-season correlation in the dataset under the current specification. 2023 and 2025 run at R² = 25–28%.
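For reference on how the reported figures relate: R² here is simply the Pearson r squared. A self-contained computation on illustrative team-season rows (the $PV and points values below are invented for demonstration, not VMI output):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between a production metric and league points."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative rows: ($PV in millions, league points).
pv = [14.7, 17.2, 18.8, 21.0, 23.6, 26.1]
pts = [34, 48, 41, 52, 55, 60]
r = pearson_r(pv, pts)
print(f"r = {r:.3f}, R^2 = {r * r:.1%}")  # R^2 is r squared
```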

The market allocates capital on the wrong axis.

You’re going to read that line a lot here.


40.9% of Teams Ranked Within 3 Positions of Actual Finish

The average rank error is 5.55 positions (median: 5.0). 40.9% of team-seasons were ranked within 3 positions of their actual finish, and 60.2% within 5 positions.

The VMI doesn’t predict exact final standing. It classifies which clubs are structurally positioned for playoff contention and which are not. The model is directionally accurate within a band, not surgically precise on exact placement. Production economics identify tiers, not seedings.

2024 was the tightest: mean error 4.41, 44.8% within 3, 72.4% within 5.
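The band metrics above (mean and median absolute rank error, share of teams within 3 and within 5 positions) are straightforward to reproduce. A sketch on a toy eight-team league, with ranks invented for illustration:

```python
def rank_error_summary(predicted, actual):
    """Mean/median absolute rank error plus the share of teams
    landing within 3 and within 5 positions of actual finish."""
    errors = sorted(abs(p - a) for p, a in zip(predicted, actual))
    n = len(errors)
    mid = n // 2
    median = errors[mid] if n % 2 else (errors[mid - 1] + errors[mid]) / 2
    return {
        "mean": sum(errors) / n,
        "median": median,
        "within_3": sum(e <= 3 for e in errors) / n,
        "within_5": sum(e <= 5 for e in errors) / n,
    }

# Illustrative predicted vs. actual finishing ranks, not VMI output.
predicted = [1, 4, 2, 8, 5, 3, 7, 6]
actual    = [2, 3, 1, 4, 6, 5, 8, 7]
print(rank_error_summary(predicted, actual))
```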


The Notable Misses Tell You Where The Model Has Edges

One of the two largest misses: 2025 Philadelphia, PV rank 24, actual finish 1st. A 23-position error. Philadelphia produced $19.69M in $PV, below the league median, and won the Supporters’ Shield. The V3 engine undervalued their output relative to their competitive return. That’s a calibration edge to investigate, not a reason to dismiss the framework.

2023 St. Louis is the largest miss in the dataset: PV rank 28, actual finish 4th, a 24-position error. $14.72M in Production Value and they won the Western Conference. A club whose output profile the V3 engine systematically underprices.

These outliers matter. They tell me where the CALC_ENGINE’s assumptions break down and where the next calibration improvement lives.


Production Value Is the Most Consistent Single Signal

I tested the individual signals independently against league points:

$PV: The most consistent predictor. Ranges from ρ = 0.52 (2023) to 0.74 (2024).

Roster Alpha (sustainability): Second most consistent. Ranges from ρ = 0.49 (2023) to 0.76 (2024).

Cap-Adjusted Delta: Strong in 2023 (ρ = 0.62) and 2024 (ρ = 0.71), dropped in 2025 (ρ = 0.40).

Efficiency Delta: The most volatile. Highest ceiling in 2023 (ρ = 0.74) but collapsed to near zero in 2025 (ρ = -0.02).

$PV and Roster Alpha are the load-bearing signals. The efficiency metrics swing year to year depending on how the cap structure interacts with production.
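The signal comparison above uses Spearman’s ρ, which correlates ranks rather than raw values, so it rewards getting the ordering right regardless of scale. A minimal implementation (no tie handling), run on illustrative data where one signal tracks points closely and one is noise, as Efficiency Delta was in 2025:

```python
def spearman_rho(xs, ys):
    """Spearman rank correlation (assumes no ties), via the
    closed form 1 - 6*sum(d^2) / (n*(n^2 - 1))."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0] * len(vals)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Illustrative values for six teams, not actual VMI signals.
points = [60, 55, 52, 48, 41, 34]
pv_signal = [26.1, 23.6, 21.0, 18.8, 17.2, 14.7]  # tracks points
eff_delta = [0.1, 0.9, -0.5, 0.7, -0.2, 0.4]      # noisy signal
print(spearman_rho(pv_signal, points))  # high rho
print(spearman_rho(eff_delta, points))  # near zero
```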


The Framework Evolves. The Signal Persists.

Player-level production economics, measured in dollars rather than goals or assists, contain enough competitive information to classify playoff teams at AUC = 0.81 and explain competitive variance at 24 to 93 times the rate of total payroll depending on the test used.

The VMI quantifies the mispricing gap between 32.1% and 1.3%.

It’s a framework built for individual player valuation, yet the underlying production measurement captures a real competitive signal. Front offices allocating roster budgets on payroll logic are operating on the weaker axis.

The VMI provides a structural alternative.

Methodology Note: All tests use V3 specification across the 2023–2025 valid era (88 team-seasons). V2 baseline (2020–2025, 169 team-seasons) reported for historical context under its original specification. AUC-ROC computed as probability that a randomly selected playoff team has higher PV than a randomly selected non-playoff team. Ranking correlation is Spearman ρ. Playoff spots per conference follow MLS format: 9 per conference (2023–2025). All tests blind — no retroactive application of end-of-season values.

Emmanuel Smith
Turnstile | Production, valuation, and in-season predictive intelligence for the MLS ecosystem.

Institutional access (clubs, funds, prediction markets): institutional@turnstilehq.co

© 2026 Turnstile Sports Business Intelligence, LLC. All rights reserved.
