How Our CS2 Projection Model Works
Every projection on this page is generated by the same statistical model, which we publish here in full so you can evaluate its assumptions. The model has four inputs and three adjustments. Inputs are observable data — what each player has actually done. Adjustments are context — what's likely to be true about the upcoming match.
Input 1: Recent map sample (last 10–20 maps)
For every active player we track their last 10 to 20 competitive maps, drawn from S-tier and A-tier tournaments only. We exclude online qualifier matches against unranked opposition because those numbers don't predict performance against tier-1 teams. Per-map kills, deaths, ADR, KAST and headshot percentage are stored individually so we can weight them by recency and tournament tier.
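The recency- and tier-weighting described above can be sketched as a weighted average. This is a minimal illustration, not our production code; the decay rate and tier weights below are hypothetical placeholder values, chosen only to show the mechanism.

```python
def weighted_recent_average(maps, decay=0.9, tier_weight=None):
    """Weighted average of a per-map stat.

    maps: list of (stat_value, tier) tuples, newest map first.
    decay: recency factor — each older map counts `decay` times as much
           as the one after it (0.9 is an illustrative assumption).
    tier_weight: relative weight per tournament tier (illustrative).
    """
    if tier_weight is None:
        tier_weight = {"S": 1.0, "A": 0.8}
    num = den = 0.0
    for i, (value, tier) in enumerate(maps):
        w = (decay ** i) * tier_weight[tier]
        num += w * value
        den += w
    return num / den
```

With `decay=0.5`, a player's most recent map counts twice as much as the one before it, so a 20-kill map followed by a 10-kill map averages to about 16.7 rather than 15.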
Input 2: Career baseline
The recent sample is anchored against the player's career baseline so a one-off slump or hot streak doesn't dominate. A player whose career rating is 1.18 with a recent 1.05 will see their projection pulled to roughly 1.08–1.12 depending on sample size, not dragged all the way down to 1.05. The exact weighting scales with how many recent maps we have: 5 recent maps get ~50% weight against career, 20 maps get ~75%.
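The blend can be written as a simple interpolation between the two stated anchor points (5 maps → 50% recent weight, 20 maps → 75%). A minimal sketch, assuming linear interpolation between those points; the function names are hypothetical:

```python
def recent_weight(n_maps, lo=(5, 0.50), hi=(20, 0.75)):
    """Recent-sample weight, linearly interpolated between the two
    published anchor points and clamped outside them."""
    n0, w0 = lo
    n1, w1 = hi
    n = min(max(n_maps, n0), n1)
    return w0 + (n - n0) * (w1 - w0) / (n1 - n0)

def blended_rating(recent, career, n_maps):
    """Projection baseline: recent form shrunk toward career average."""
    w = recent_weight(n_maps)
    return w * recent + (1 - w) * career
```

For the example in the text, a 1.18 career / 1.05 recent player lands at 1.115 with only 5 recent maps and at about 1.08 with a full 20-map sample.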
Input 3: Opponent rating differential
Stronger opponents suppress offensive output. We use each team's combined rating (sum of all 5 active players' career ratings) as the opponent strength index. A player facing a team 0.10 rating points stronger than their average opponent loses roughly 1.5 kills off their projection; the inverse boosts it. This adjustment is symmetric — weak opponents inflate projections by the same magnitude.
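Since the text states roughly 1.5 kills per 0.10 rating points of differential, the adjustment is linear with a slope of about 15 kills per full rating point. A sketch of that relationship, with the constant name being our own:

```python
KILLS_PER_RATING_POINT = 15.0  # ~1.5 kills per 0.10 rating differential

def opponent_adjustment(opp_index, avg_opp_index):
    """Kill adjustment from opponent strength.

    opp_index: upcoming opponent's combined team rating.
    avg_opp_index: combined rating of the player's average opponent.
    Negative when the opponent is stronger than usual; the relationship
    is symmetric, so weaker opponents add kills at the same rate.
    """
    return -KILLS_PER_RATING_POINT * (opp_index - avg_opp_index)
```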
Input 4: Map pool
Mirage and Dust2 historically produce 8–12% more kills per map than the slate average. Nuke, Ancient and Vertigo run 5–8% below. We don't know the actual veto until match start, so we use the team's typical pick/ban tendencies to weight the expected map pool. Our database tracks every map a team has played in the last 90 days — that becomes the prior on what they'll likely pick this match.
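Combining per-map kill multipliers with a team's 90-day pick frequencies gives an expected multiplier for the match. The multiplier values below are illustrative points inside the ranges quoted above (8–12% over, 5–8% under), not our exact figures:

```python
MAP_KILL_MULT = {
    # Above the slate average (text quotes 8-12% for these two):
    "Mirage": 1.10, "Dust2": 1.10,
    # Below average (text quotes 5-8% under for these three):
    "Nuke": 0.94, "Ancient": 0.94, "Vertigo": 0.94,
    # Treated as neutral in this sketch:
    "Inferno": 1.00, "Anubis": 1.00,
}

def expected_map_multiplier(pick_counts):
    """Expected kill multiplier given a team's recent map picks.

    pick_counts: {map_name: times played in the last 90 days},
    used as the prior on what they'll likely pick this match.
    """
    total = sum(pick_counts.values())
    return sum(MAP_KILL_MULT[m] * c / total for m, c in pick_counts.items())
```

A team splitting its recent picks evenly between Mirage and Nuke would project at a 1.02 multiplier: the Mirage boost slightly outweighs the Nuke discount.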
Adjustment 1: Series format multiplier
BO1 matches use a 1.0 multiplier (one map of expected output). BO3 uses 2.4: across all BO3 matches, the historical average is roughly 2.4 maps played, because series end 2-0 and 2-1 with similar frequency. BO5 finals use 4.0. The multiplier converts per-map projections into series projections, which is what PrizePicks lines actually represent.
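The conversion is a single lookup and multiply. A minimal sketch using the multipliers stated above:

```python
SERIES_MULT = {"BO1": 1.0, "BO3": 2.4, "BO5": 4.0}

def series_projection(per_map_kills, series_format):
    """Convert a per-map kill projection into a full-series projection."""
    return per_map_kills * SERIES_MULT[series_format]
```

So a player projected at 18.0 kills per map carries a 43.2-kill projection across a BO3 series.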
Adjustment 2: Role-aware variance
Entry fraggers and AWPers have wider variance than support players and IGLs because their roles produce more boom/bust performances. We don't change their projection — that would double-count — but we do flag them with wider confidence intervals. When the model is unsure, we say so.
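The role flag can be expressed as a multiplier on the interval width only, leaving the point projection untouched so nothing is double-counted. The role multipliers and z-value below are hypothetical illustrations of the mechanism:

```python
# Illustrative sigma multipliers — boom/bust roles get wider intervals.
ROLE_SIGMA_MULT = {"entry": 1.3, "awper": 1.3,
                   "rifler": 1.0, "support": 0.85, "igl": 0.85}

def confidence_interval(projection, base_sigma, role, z=1.28):
    """Approximate 80% interval around an unchanged point projection.

    base_sigma: the player's baseline per-series kill spread.
    Only the width scales with role; the center never moves.
    """
    sigma = base_sigma * ROLE_SIGMA_MULT[role]
    return (projection - z * sigma, projection + z * sigma)
```

An entry fragger and a support player with the same 20-kill projection get the same center but meaningfully different interval widths, which is exactly the "when the model is unsure, we say so" behavior.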
Adjustment 3: Tournament tier
LAN events produce different stat profiles than online matches even between identical opponents. LAN matches average ~3% lower kill counts because anti-cheat is tighter, network conditions are uniform, and players play more carefully. We apply a small downward adjustment to projections for LAN events to reflect this.
What the model can't see
Tilt, injuries, lineup changes within the last 48 hours and "we just lost yesterday and morale is in the dirt" effects are invisible to the model. So is "this is the player's home country and they're playing in front of family." We try to surface known roster moves on this page, but unknowable factors are unknowable. That's why we recommend the projection-vs-line gap as the signal, not raw projection size — when the gap is 2+ kills, even unknown-unknowns aren't enough to flip the value most of the time.
CS2 Projections FAQ
How accurate are CS2 player projections?
Across a large sample of matches, our projections track within 5–10% of actual outcomes for most players. Individual matches deviate significantly because of variance: a single CS2 map can run anywhere from 13 rounds (a 13–0 sweep) to 24 or more with overtime, and kill opportunities scale with rounds played, which no statistical model can perfectly anticipate. The right way to use projections is as a baseline for decision-making: when our projection is significantly above or below the posted line, that gap represents statistical edge worth acting on. When it's within 0.5 of the line, it's noise.
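The decision rule above (treat gaps under 0.5 as noise, treat large gaps as edge) can be sketched as a small classifier. The thresholds come from the text; the function name and the "lean"/"strong" labels are our own:

```python
def classify_edge(projection, line, noise_band=0.5, strong_gap=2.0):
    """Classify the projection-vs-line gap.

    Within the noise band: no edge, pass on the pick.
    Beyond it: a directional lean, upgraded to "strong" at 2+ kills
    (the gap size the section above treats as robust to unknowns).
    """
    gap = projection - line
    if abs(gap) < noise_band:
        return "pass"
    side = "over" if gap > 0 else "under"
    strength = "strong" if abs(gap) >= strong_gap else "lean"
    return f"{strength} {side}"
```

For example, a 19.8-kill projection against a 17.5 line is a strong over, while 18.0 against 18.3 is a pass.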
What's the difference between CS2 projections and CS2 predictions?
Projections are quantitative — they output a specific number (e.g., 19.8 kills) for an individual stat. Predictions are qualitative — they output a directional call (e.g., "Spirit wins 2-0"). Both are useful for different decisions. Use projections for player-prop entries (over/under on individual stats). Use predictions for moneyline match bets (who wins). The CS2 picks today page combines both.
How often are the projections updated?
Projections refresh every 30 minutes during active match days. Each finished match feeds new player_stats data into the system, the model recomputes for affected players, and the next pageload reflects the new projections. Roster changes (transfers, benchings) propagate within 24 hours. Form trends update map-by-map.
Why do projections sometimes contradict the posted PrizePicks line?
That's the entire point. PrizePicks sets lines based on their internal model plus market movement. Our model uses a different methodology and a different data window, so when our number differs meaningfully from theirs, one of us is missing something the other sees. Statistical edge lives in those disagreements. Of course, both models can be wrong about a specific match — that's why we recommend stacking 2–3 edges into one entry rather than going all-in on a single pick.
Does the model account for new patches and meta shifts?
Partially. Map-pool changes and major patch drops are reflected automatically once enough new matches have been played on the new patch (usually 7–10 days). Bigger meta shifts — like a fundamental rifle balance change — take longer because we need a meaningful sample of matches to confirm they actually changed player stat profiles. During the first week of any major patch, we widen confidence intervals manually until the new sample stabilizes.
Can I get projections for past matches?
Each individual player profile shows their last 10 maps with actual stat lines, which lets you see how recent reality compared to what the model would have projected. We don't archive every daily projection page because the projection is most useful pre-match — once a match finishes, the actual stats become the canonical record on the match page itself.
