OddsIQ
Methodology

How our models actually work

Transparent inputs, honest edge thresholds, public track records, continuous recalibration. Everything you'd want to know if you were betting with these models.

01

The short version

For each game, we compute a probability distribution over outcomes (moneyline, spread, total) using team and player inputs. We compare that probability to the sportsbook's implied probability. The difference is the edge. Edges above our per-market threshold become recommendations.

Edges below the threshold become "no bet." We'd rather skip a game than force a marginal pick.
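
The comparison itself is simple arithmetic. A minimal sketch (the function names and the -120 example are ours, not OddsIQ's published code):

```python
def implied_prob(american_odds: int) -> float:
    """Convert American odds to the book's implied win probability (vig included)."""
    if american_odds < 0:
        return -american_odds / (-american_odds + 100)
    return 100 / (american_odds + 100)

def edge(model_prob: float, american_odds: int) -> float:
    """Model probability minus the book's implied probability."""
    return model_prob - implied_prob(american_odds)

# A model that says 58% against a -120 price (implied ~54.5%) has ~3.5% edge.
```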

02

NBA moneyline model

The NBA engine is a 2,000+ line model that incorporates:

  • Team efficiency (offensive and defensive rating, pace)
  • Rolling efficiency over last 10 / last 5 games (weighted recency)
  • Player impact — top-8 rotation, adjusted for availability
  • Schedule context — rest days, back-to-back, travel miles
  • Injury adjustments from official injury reports
  • Line movement signal (where sharp money is moving the line)
  • Home/road splits and pace-adjusted matchups
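
As one illustration of how the season-long and rolling-window inputs above can feed a single rating, a weighted blend might look like this (the weights are purely illustrative placeholders, not the model's tuned values):

```python
def blended_rating(season: float, last10: float, last5: float,
                   w_season: float = 0.5, w10: float = 0.3, w5: float = 0.2) -> float:
    """Blend season-long efficiency with last-10 and last-5 rolling windows.
    Weights here are illustrative, not OddsIQ's actual recency weighting."""
    return w_season * season + w10 * last10 + w5 * last5

# A team at 110.0 on the season but 114.0 over its last 5 drifts upward.
```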

The output is a win probability from a sigmoid curve with a calibration factor (currently K = 0.25) tuned against historical closing lines. We cap maximum edge at 12% to avoid over-confident picks from thin samples.
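
The exact functional form isn't published; a plausible sketch of a sigmoid with calibration factor K and the 12% edge cap (the rating-differential input scale is our assumption):

```python
import math

K = 0.25          # calibration factor from the text
MAX_EDGE = 0.12   # edge cap from the text

def win_prob(rating_diff: float) -> float:
    """Map a net rating differential to a win probability via a sigmoid.
    The input scale (points per 100 possessions) is an assumption."""
    return 1.0 / (1.0 + math.exp(-K * rating_diff))

def capped_edge(model_prob: float, implied_prob: float) -> float:
    """Clip the model edge to +/-12% to avoid over-confident picks."""
    return max(-MAX_EDGE, min(MAX_EDGE, model_prob - implied_prob))
```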

03

MLB moneyline model

The MLB engine is a 1,000+ line model focused on pitcher matchups and composite scoring:

  • Starting pitcher ERA, xFIP, K/BB, adjusted for opponent lineup handedness
  • Bayesian prior blending for early-season uncertainty (20-start weight)
  • Lineup composition against the starter's arm side
  • Bullpen strength and recent usage (leveraged innings remaining)
  • Park factors for run environment
  • Weather and wind (for totals and parks like Wrigley)
  • Home plate umpire strike zone tendency
  • Line movement signal
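
The Bayesian prior blending above can be illustrated as simple shrinkage toward the league mean, where the prior carries the weight of 20 starts (the league-average ERA here is a placeholder, not a published constant):

```python
def blended_era(observed_era: float, starts: int,
                league_era: float = 4.20, prior_starts: int = 20) -> float:
    """Shrink an observed ERA toward the league mean.
    prior_starts=20 mirrors the 20-start weight; league_era is a placeholder."""
    return (starts * observed_era + prior_starts * league_era) / (starts + prior_starts)

# After 3 starts, a 2.00 ERA is pulled most of the way back toward league
# average; after 30 starts, the observed performance dominates.
```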

We publish NRFI probabilities separately because first-inning dynamics differ enough from full-game dynamics to deserve their own model.

04

CBB, CFB, and NFL models

These sports are slated for later in 2026. The base architecture mirrors NBA and MLB: team/player inputs, schedule context, injury adjustments, edge thresholds, public recalibration. Sport-specific wrinkles include recruiting-to-performance gap analysis for college football, pace and tempo adjustments for college basketball, and rest/travel situational edges for NFL.

When each sport launches, this page will expand with a dedicated methodology section. Until then, see the dashboards at /stats for the analytics backbone those models will sit on.

05

How we decide when to bet

Not every predicted edge is a bet. Our decision tree:

  • BET (edge ≥ 5%): confident recommendation
  • LEAN+ (edge 2–5%): borderline, use discretion
  • LEAN (edge 1–2%): weak signal, tracking only
  • PASS (edge < 1%): no recommendation

Thresholds are sport-specific and tuned against historical CLV (closing line value). We err toward PASS — a missed bet is recoverable; a bad bet isn't.
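
Under the baseline thresholds above, the tiering reduces to a small lookup (a sketch; real thresholds are sport-specific, per the text):

```python
def tier(edge: float) -> str:
    """Map a predicted edge to a recommendation tier.
    Baseline thresholds from the text; actual thresholds vary by sport."""
    if edge >= 0.05:
        return "BET"
    if edge >= 0.02:
        return "LEAN+"
    if edge >= 0.01:
        return "LEAN"
    return "PASS"
```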

06

Recalibration

Every Monday, the recalibration job:

  • Pulls last 7 days of finalized games
  • Compares model predictions to closing line value (CLV) and actual outcomes
  • Runs a grid search over key weights (efficiency, recency, injury impact)
  • If a reweighting reduces Brier score without overfitting, updates config
  • Logs the change to our public methodology changelog
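
The acceptance check hinges on the Brier score, the mean squared error of probabilistic predictions against 0/1 outcomes. A minimal implementation:

```python
def brier_score(probs, outcomes) -> float:
    """Mean squared error between predicted probabilities and 0/1 outcomes.
    Lower is better; always predicting 50% scores exactly 0.25."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)
```

A candidate reweighting that lowers this score on the holdout week, without degrading it on earlier weeks, is the kind of change the job would accept.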

Live recalibration results: /transparency →

07

Data sources

  • NBA Stats API (stats.nba.com) — team, player, schedule, injuries
  • MLB Stats API + Baseball Savant — pitcher splits, Statcast
  • CollegeFootballData.com, CollegeBasketballData.com — NCAA data
  • nflverse — NFL play-by-play and stats
  • The Odds API — live odds across 40+ sportsbooks
  • OpenWeatherMap — weather for outdoor sports
  • Public injury reports and team announcements

Full data-source list with update cadence: /data-sources →

08

What we don't do

Equally important is what we rule out:

  • We don't sell individual picks — all recommendations are model-derived and public
  • We don't offer "guaranteed winners" — nobody should
  • We don't retroactively edit historical picks
  • We don't rank sportsbooks by commission rates
  • We don't use unit sizing that amplifies losses ("10-unit locks")
  • We don't model anything where we lack sufficient historical data

09

Limitations — what you should know

Our models are tools, not oracles:

  • Early-season samples are noisy; we use Bayesian priors but confidence is still lower in March (MLB) and October (NBA)
  • Player-specific props are lower-confidence than team-level bets
  • Injuries can invalidate predictions between lineup drop and tipoff
  • Edges of 1–3% are statistically indistinguishable from zero in small samples
  • Sharp books (Pinnacle, Circa) price efficiently; our edge comes mostly from soft books
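
On the second-to-last point, a back-of-envelope calculation shows why small edges need large samples: the standard error of a win rate over n bets is sqrt(p(1-p)/n), so separating a 2% edge from zero at ~95% confidence takes on the order of 2,400 bets. This is our illustration, not OddsIQ's math:

```python
import math

def se_win_rate(p: float, n: int) -> float:
    """Standard error of an observed win rate over n independent bets."""
    return math.sqrt(p * (1 - p) / n)

def bets_to_detect(edge: float, p: float = 0.5, z: float = 1.96) -> int:
    """Rough bet count for an observed win rate to separate a true edge from
    zero at ~95% confidence (illustrative, not a formal power analysis)."""
    return math.ceil((z / edge) ** 2 * p * (1 - p))

# Resolving a 2% edge takes on the order of 2,400 bets; a 5% edge, under 400.
```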