Form in soccer: not always a winning formula

By Brandon Tan

One of the most discussed statistics in soccer leading up to a match is “league form”: the results of the team’s last six games. We see this statistic referenced again and again by commentators and pundits in their match previews and analyses. The phenomenon is all over the websites of sports news outlets, such as the Guardian.


However, is form a statistic that we should care about? Does being “in-form” really predict match outcomes?

To answer this question, I test whether there is a significant correlation between match outcome and league form. I compiled the fixture results from the English Premier League seasons 2010-11 to 2015-16 for each club and ran a simple linear regression with points earned (win = 3 points, draw = 1, loss = 0) as the response variable and form (the average points earned over the last six matches) as the explanatory variable, controlling for home advantage and the end-of-season rank of the opposing team (see Figure 1).


Figure 1: The (lack of) relationship between form and points in English Premier League soccer
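To make the setup concrete, here is a minimal sketch of this kind of regression in Python on simulated fixtures, not the actual Premier League data: points earned are generated from home advantage and opponent rank but not from form, mirroring the finding described here. The effect sizes, noise level, and variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 228  # six 38-game seasons of fixtures

home = rng.integers(0, 2, n).astype(float)       # 1 if playing at home
opp_rank = rng.integers(1, 21, n).astype(float)  # opponent's end-of-season rank
form = rng.uniform(0, 3, n)                      # avg points over last 6 games

# Simulated points earned: driven by home advantage and opponent rank,
# with NO dependence on form (by construction).
points = 1.0 + 0.8 * home + 0.05 * opp_rank + rng.normal(0, 0.8, n)

# Ordinary least squares with an intercept
X = np.column_stack([np.ones(n), home, opp_rank, form])
beta, *_ = np.linalg.lstsq(X, points, rcond=None)

# t-statistics from the classical OLS covariance estimate
resid = points - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])
se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
t_stats = beta / se
print(dict(zip(["const", "home", "rank", "form"], np.round(t_stats, 2))))
```

On data generated this way, home and rank come out clearly significant while form should come out insignificant, which is the qualitative pattern in Figure 2.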

What I found was that there is no statistically significant correlation (at the 5% level) between points earned and form for any club. For instance, consider the results below from running the regression on Manchester United’s fixtures (see Figure 2). Home advantage and rank are clearly significant, with p-values close to zero, while form isn’t even close, with a p-value of 0.837, far above the 5% threshold needed for a legitimate predictor.


Figure 2: Regression model summary in which factors form, rank, and home-field advantage predict points in English Premier League soccer

Someone might argue that 6-game form considers too many games, so I also ran the regression with form defined as the average points earned over the last 3 games. Again, I found no statistically significant correlations; the p-value from the regression on Manchester United fixtures was 0.494.

This analysis suggests that as soccer fans we really need to stop making such a big deal out of form, because it really doesn’t tell us anything at all.


Team        p-value (form = average points earned from last 6 games)
Man United  0.837
Liverpool   0.094
Chelsea     0.103
Tottenham   0.945
Arsenal     0.476
Man City    0.903



Dear Rob Manfred, The Millennials Are Leaving

By Max Kaplan, “The voice of the millennial sports fan”

We millennials are losing interest and it’s not our fault.

We can’t sit through another 4-hour MLB game with 11 pitching changes and 15 walks.

We groan every time a batter steps out of the box to re-adjust his batting gloves for the third time since the last pitch. Or when the pitcher starts pacing around the mound, fondling the rosin bag.

We think “get on with it” when the manager takes a full minute to decide whether to challenge a play and then challenges it – and the fans are gifted another 3-minute stoppage.

It’s not the 10-9 slugfest that’s the problem – it’s the 3-2 game that takes 3.5 hours where nothing happens.

In a game earlier this month, “Make Baseball Fun Again” Bryce Harper faced 27 pitches and didn’t swing at a single one. Great…

The baseball establishment mocks and shames the millennials for not watching the game the “right way.” They patronize our short attention spans and our “addiction” to social media. They say we don’t “respect” the game’s tradition.

I played baseball until high school and have attended over 200 MLB games across 23 different stadiums.

Rob, you need me and my friends – maybe not this year, but we are your future revenue stream – and I’m telling you, it ain’t looking good.

My observed reality: college students would rather watch the English Premier League (or literally any other sport) than an unwatchable baseball game on TV.

We think baseball is getting more boring and guess what? We’re right.

Baseball Boredom Index (BBI)

Everyone knows that MLB games are getting longer and longer. But there is also way less stuff happening.

I created a new statistic, called the “Baseball Boredom Index.” Or BBI for short. It is extremely easy to understand. The BBI is how many minutes you have to wait, on average, until something happens in a baseball game.

Let’s say an “action event” is a ball in play, or a stolen base attempt. This is a low bar for excitement. It includes sacrifice bunts, dribblers to 1B, and pop-outs to SS.

How long do you have to wait between these action events? Over three minutes! That’s a full commercial break between every single moment of ‘action.’
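The definition can be written down directly; here is a tiny sketch (the game numbers below are hypothetical):

```python
def bbi(game_minutes, balls_in_play, steal_attempts):
    """Baseball Boredom Index: average minutes between "action events"
    (balls in play plus stolen-base attempts)."""
    return game_minutes / (balls_in_play + steal_attempts)

# Hypothetical game: 3 hours of play, 55 balls in play, 1 steal attempt
print(round(bbi(180, 55, 1), 2))  # -> 3.21 minutes between action events
```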

And it has trended up ever since the dawn of the game. The last three seasons have been the slowest in MLB history. As teams incorporate sabermetrics, we are seeing record-level strikeout totals, leaving fewer balls in play and more pitches per game.



Mr. Manfred, I leave you with a bold challenge. The gauntlet has been thrown. Bring us back to 2.5 BBI. 1985 is not that long ago.

The pace of play changes in 2015 led to slightly shorter games, and a lower Baseball Boredom Index. The pitch clock experiment in the Minor Leagues proves we can cut another 10-15 minutes from time of game. It’s a start, but not enough to keep our attention. Please hurry!

Rob, the writing is on the wall. You will lose the attention of the millennials (and everyone else) unless more progress is made. In fact, I just got three text messages, a snap, six tweets, four fb notifications since you started reading this so this article is now over. Bye.

Max Kaplan, “The voice of the millennial sports fan”, is a graduating senior at Princeton University Engineering School majoring in Operations Research and Financial Engineering. Max’s “Curse of the Home Run Derby” article hit the front page in 2011. He has appeared on NFL Network. His favorite sport used to be baseball.

NFL Divisional Realignment for Earth Day

By Max Kaplan


[Late edit] I was featured in an on-air interview to defend this article on Earth Day.

Earth Day is coming up on April 22, and even the NFL can do its part to reduce its carbon footprint.

I mean, just look at the divisions. Why must the Patriots travel all the way down to Miami every year when there are over twenty teams closer? Talk about waste…

The Eagles, Giants, and Redskins are all cozy and close, but who decided to throw the Cowboys into the East?

Let’s look at the facts. Dallas isn’t in the east. Indianapolis isn’t in the south. This is not how Mother Nature intended. In the name of conservation, preservation, and environmentalism, I’ve come up with a solution!

Let’s realign the divisions. Not willy-nilly but with an eye towards protecting our environment. There’s no need for the Chiefs’ 1,500-mile annual commute to Oakland. Kansas City certainly isn’t in the west of the United States. This isn’t the age of American pioneers. We can do better.

By my calculation, the NFL could save over 165,000 gallons of jet fuel each season by realigning the divisions.

The current divisions are a legacy of the NFL-AFL merger of 1970 and while we have seen the NFL climate change over the last half-century, the warning signs have been evident and growing stronger. We cannot afford to let this problem get any worse. It is time to decommission the old and open the new clean divisions. Sustainability is all about leaving a better future for the next generation. We must act now!

Below are the geographically optimal divisions – in order to minimize overall divisional travel.

  • Southwest Division – ARI, DAL, DEN, SD
  • Pacific Division – LA, OAK, SF, SEA
  • South Division – ATL, HOU, NO, TEN
  • Heartland Division – CHI, GB, KC, MIN
  • Southeast Division – CAR, JAX, MIA, TB
  • Northeast Division – BUF, NYG, NYJ, NE
  • Atlantic Division – BAL, PHI, PIT, WAS
  • Midwest Division – CIN, CLE, DET, IND


By realigning the divisions geographically, we put teams back in their ecological niche with local rivalries. The Raiders have only played the 49ers five times since they moved to Oakland. The Jets have played the Giants only twice in the last ten years – and they share a stadium.


  • Unfortunately, no recycling. None of the divisions stayed the same.
  • But we do have hybrids. The Pacific, Northeast, and Heartland Divisions all run on three teams from the old, clunky divisions.
  • Sometimes the planet is out of equilibrium despite our best efforts. Three of the Heartland Division teams had double-digit win totals in 2015.
  • The Super Bowl is a renewable resource. The Northeast Division (Patriots, Giants) and Atlantic Division (Ravens, Steelers) account for ten of the last sixteen Super Bowl titles.

Problem Formulation

For anyone interested, I lay out the NFL minimal-travel problem below.
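The full formulation isn’t reproduced here, but the core idea (group teams to minimize total intra-division travel) can be brute-forced on a toy instance. The coordinates below are hypothetical, and a real NFL-sized instance (32 teams, 8 divisions) would need integer programming rather than enumeration:

```python
from itertools import combinations
import math

# Hypothetical coordinates for 8 "teams" forming two obvious clusters
teams = {
    "A": (0, 0), "B": (0, 1), "C": (1, 0), "D": (1, 1),
    "E": (10, 10), "F": (10, 11), "G": (11, 10), "H": (11, 11),
}

def dist(a, b):
    (x1, y1), (x2, y2) = teams[a], teams[b]
    return math.hypot(x1 - x2, y1 - y2)

def intra_cost(group):
    """Total pairwise distance inside one division."""
    return sum(dist(a, b) for a, b in combinations(group, 2))

# Try every way to split the 8 teams into two divisions of 4 and keep
# the split with the smallest combined intra-division travel.
names = sorted(teams)
best = min(
    (set(g) for g in combinations(names, 4)),
    key=lambda g: intra_cost(g) + intra_cost([t for t in names if t not in g]),
)
print(sorted(best))  # picks out one of the two geographic clusters
```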


Does Order Matter? An Analysis of Round 1 vs Round 2 Picks in the NBA

By Alex Vukasin

Are teams really making use of their first round picks? Have scouts been able to pinpoint the best talent with their first round picks, or is the draft round not a significant indicator of the talent and future of players in the league?


To answer this question, I analyzed the first and second round players drafted into the NBA from 2005 to 2014. All drafted players who played at least one game were included, restricting the analysis to players with actual NBA experience.

Performance Variables

The response variable throughout the analysis was the draft round, while the explanatory variables were games played, years played, minutes played in total, total rebounds, field goal percentage, three-point percentage, free throw percentage, minutes per game, points, points per game, rebounds per game, assists, and assists per game.

Advanced statistics used as explanatory variables in this study were “win shares” (the number of wins a player contributes to his team), win shares per 48 minutes, “box plus-minus” (the number of points per 100 possessions a player contributes to his team above an average player), and “value over replacement player” (the points per 100 team possessions a player contributes above a replacement-level player, translated to an average team over the 82-game schedule).

All of these variables share a common property: a higher value indicates a better career, while a lower value indicates a less successful one. Below is a summary of all the variables.


Correlation Analysis: Positive among Performance Indicators, Negative with Round

To test whether there is a relationship between “round” and all of these explanatory variables, I began by analyzing the correlation matrix of all the variables using STATA (Figures 1a and 1b). Although every variable has a negative correlation with “round”, none of the correlations is strong; no coefficient exceeds 0.5 in magnitude. Many of the explanatory variables are also positively correlated with each other, so it was not possible to include them all in a single regression without multicollinearity issues.

Regression Analysis: Performance Variables to Predict Round

Next, I ran some regressions to test which performance variables could help predict the player’s draft round, which would indeed suggest a relationship between the draft round and the player’s career performance.

In Figure 2a, field goal percentage, minutes per game, rebounds per game, and points per game all decrease as round increases, but minutes per game is the only statistically significant predictor (at the 0.05 level). This result is notable: it supports the negative correlations between the explanatory variables and “round”, and minutes per game also had the strongest correlation with “round” of any explanatory variable. To account for the high correlation among the explanatory variables, I then ran a single regression of “round” on each explanatory variable separately (these regressions are not shown). Every one of these tests produced a negative coefficient significant at the 0.05 level.

Since all of these factors have negative (if small) correlations with “round” and significant negative coefficients in the single regressions, it seems likely that the round in which a player is picked has a modest relationship with how his career turns out. Although the correlations are not especially strong, the consistent negative relationship across all of these variables suggests there may be other indicators of success, more strongly correlated with “round”, that I could study in a future analysis.


Figure 1a: Performance variables negatively correlated with Round


Figure 1b: Performance variables positively correlated with each other


Figure 2a: Regression of Round with Field Goal Percent, Minutes per Game, Rebounds per Game and Points per Game


Editor’s Note: Edits have been made for clarity.

Ode to the Great Bambino

How the Best of the Best Performed Relative to Their Time Period

By Keith Gladstone

Only the best players of a given era are inducted into the National Baseball Hall of Fame in Cooperstown, from classic names like Babe Ruth and Lou Gehrig, to the most recent nominees of Mike Piazza and Ken Griffey Jr. Since the MLB era tainted by PEDs saw unthinkable, sky-high hitting totals, the question of who deserves a seat in the Hall of Fame is open for debate. The great differences in eras alone can convolute our interpretation of the game’s statistics, so in this article I will introduce a method of comparison.  


Indeed, I did an analysis to normalize the career HR totals of all Hall of Famers based on their historical era. Babe Ruth held the career home run record at 714 upon retiring in 1935. Hank Aaron shattered the record almost 40 years later, but what does this actually mean? Was Hank Aaron better than Babe Ruth?

I calculated a new statistic to measure a player’s HR performance relative to the era in which they played. I call it the “Home Runs to Benchmark Ratio.”

HR to Benchmark Ratio = Annual Career HR Average / HR Era Benchmark

  • A ratio of 1 means the player was an average home run hitter in his own era.
  • A ratio of 2 means the player hit twice as many HR as the average player.
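The ratio itself is simple arithmetic; here is a sketch with made-up numbers (the actual era benchmarks come from the dataset behind this article):

```python
def hr_benchmark_ratio(career_hr, seasons, era_hr_per_season):
    """Annual career HR average divided by the era's per-season benchmark."""
    return (career_hr / seasons) / era_hr_per_season

# A player averaging 30 HR per year in an era whose benchmark is 15 HR/year
print(hr_benchmark_ratio(300, 10, 15))  # -> 2.0, twice the average hitter
```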

Pitching dominated the game in the “Dead Ball Era,” which ended with the emergence of Babe Ruth and the Bronx Bombers in the 1920s. Hitting 714 HR in an era when the average player hit only 100 HR in a career underscores how impressive Ruth’s prowess was.

The Home Run to Benchmark Ratio rankings below confirm this, with Babe Ruth miles above the rest, followed by other classic Yankee heroes Lou Gehrig and Joe DiMaggio. Stunningly, Hank Aaron does not even crack the top ten. His ratio is 2.48, leaving him 26th overall. The HR performances of The Great Bambino, The Iron Horse, and The Yankee Clipper relative to their contemporaries show just how incredible they must have been to watch.

MLB HOF All-time HR Rankings – Normalized by Era

Rank  Mid-Year  Name              Career HR  HR to Benchmark Ratio
1     1924      Babe Ruth         714        7.13
2     1931      Lou Gehrig        493        4.76
3     1944      Joe DiMaggio      361        4.45
25    1964      Harmon Killebrew  573        2.49
26    1965      Hank Aaron        755        2.48
27    1950      Ted Williams      521        2.43
28    1935      Earl Averill      238        2.37
29    1960      Mickey Mantle     536        2.33
30    1975      Johnny Bench      389        2.31


Below is a graph of career HR per game against the average HR per game in that era. Players that appear above the line toward the top-left have higher ratios. Babe Ruth is the top-left point.



The following assumptions were made for data collection and analysis:

  • Player performance is symmetrical over time with a peak in the middle of the player’s career
  • League averages are decent estimates of the “benchmark” over which a player could measure
  • This analysis will consider the modern era (Hall of Famers whose careers occurred mostly after 1900) and those with career batting averages above 0.250
  • Since Hall of Famers had relatively long careers, their statistics are reliable estimates of their abilities

Using the “middle year” as a barometer for a player’s peak

Since the number of players in this dataset is so large, we need a simplified way to capture a player’s peak. For this analysis, we take the player’s career totals and divide by the number of years played to get a yearly average, then measure this average against the benchmark for the middle year of the player’s career. While this is not perfectly rigorous, it still serves as a useful method for comparing players across eras. Put another way, the performance benchmark in 1995 should be similar enough to 1997’s, and the benchmarks in the 1990s are different enough from those in the 1920s, that a benchmark a few years off isn’t a significant issue.

Data Sources

The Mets’ World Series offensive collapse was inevitable

By Ben Ulene

After this year’s World Series ended in a Game 5 comeback win for the Royals, plenty of questions remain about what caused the Mets – who almost nobody [1] predicted would go home after just five games – to lose so quickly. While sloppy defense certainly contributed to their collapse, an even bigger liability was their offense, which only managed a meager 7 extra-base hits in the series.

Should we be surprised that the same team that had excelled at the plate during the NLCS, putting up 21 runs in a four-game sweep of the Cubs [2], could only manage 10 runs over their four losses to Kansas City? Probably not; as the statistics show, the Mets not only came into the World Series with a historically weak offense, but they also were up against a Kansas City bullpen that dominated games like perhaps no other bullpen before.

2015 Mets Offense
Statistic  Value  All-Time Rank (out of 202 WS teams since 1914)
BA         .244   200th
SO         1,290  201st
R / Game   4.22   177th
OPS+       97     181st


First, the Mets’ offense, for a pennant-winning team, had been weak throughout the regular season. The team’s .244 regular season batting average was the fourth-worst of any World Series team since 1914; on top of that, their 1,290 regular season strikeouts were more than any other pennant-winner aside from the 2013 Red Sox (who more than compensated with a .277 regular season team average).

The Mets’ regular season mark of 4.22 runs per game was also the third-lowest of any World Series team in the last twenty years – and the only two to score fewer played each other (the 2014 Royals and Giants).

Perhaps most strikingly, the team’s OPS+ for the season – a statistic that measures a team’s OPS (on-base percentage + slugging percentage) relative to the rest of the league, with 100 being the league average – was 97, putting it below average in the big leagues this year. Only 23 other teams have ever made it to the World Series with an OPS+ of 97 or lower; of those, only 9 managed to win the series, and none since the 1997 Florida Marlins.
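For reference, OPS+ can be sketched as follows. This version omits the park adjustment used in the official statistic, and the function name and sample inputs are my own:

```python
def ops_plus(obp, slg, lg_obp, lg_slg):
    """OPS+ without park adjustment: 100 * (OBP/lgOBP + SLG/lgSLG - 1)."""
    return 100 * (obp / lg_obp + slg / lg_slg - 1)

# A team hitting exactly at league average scores 100
print(ops_plus(0.320, 0.400, 0.320, 0.400))  # -> 100.0

# A hypothetical slightly below-average line lands under 100
print(round(ops_plus(0.312, 0.396, 0.318, 0.405)))
```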

All in all, this was not an offense that anybody should have expected to put up huge numbers against any pitching staff in the World Series.

2015 Royals Bullpen
Statistic Value All-Time Rank (out of 202 WS teams since 1914)
Innings 539 2/3 1st
ERA 2.72 22nd
BAA .214 8th
K/BB 2.63 6th
WHIP 1.13 12th
tOPS+ 78 4th
sOPS+ 80 29th


The Mets weren’t just facing any ordinary pitching unit in the World Series, however, but rather one with a historically dominant bullpen for a World Series team.

Not only did the Royals bullpen hold opposing batters to a .214 average during the regular season, the 8th lowest for any pennant-winning club, but it also posted a 2.63 strikeout-to-walk ratio, the 6th best regular season mark for a World Series team. The bullpen also maintained a 2.72 ERA during the regular season, the lowest for any World Series team since the 1990 Oakland A’s.

More complex statistics also reflect the dominance of the Royals’ bullpen. Its tOPS+ against – which reflects opposing hitters’ OPS relative to how they hit against starting pitching – was 78 (the 4th lowest for a World Series bullpen), making the Royals’ bullpen one of the best all-time at shutting down opposing offenses mid-game. And the bullpen’s sOPS+ against – which reflects opposing hitters’ OPS relative to the average OPS of hitters across the league – was 80, highlighting the bullpen’s excellence at shutting down hitters entirely.

While all of these numbers are impressive, what will go in the history books is how much manager Ned Yost used his bullpen. The Royals’ bullpen pitched 539 2/3 innings this season, more than any other pennant-winning team in history. It’s not surprising that winning teams generally use their bullpens less than average, since more bullpen innings generally signify bad starting pitching; in the Royals’ case, however, the bullpen was simply that effective.

During the World Series, Royals relievers pitched 23 2/3 innings, compared to their starters’ 28 1/3. Take away Franklin Morales’s 6th inning implosion in Game 3, and the numbers are staggering: 1 run and 14 hits in just over 23 innings (an ERA of 0.39), with 4 walks and 30 strikeouts. And given just how dominant those relievers had been all year – and how susceptible to offensive slumps the Mets had been – the Royals’ dominant and decisive showing might just have been a foregone conclusion.



The Hot Hand: NBA Shot Streaks and the Geometric Distribution

By Neil Rangwani

Each year, as the NBA season kicks off, the “hot hand” debate (or, according to Wikipedia, the hot hand fallacy) resurfaces – are streaks of made shots indicative of a player getting hot, or are they just random occurrences? Here at Princeton Sports Analytics, we’re not happy discussing this with just anecdotal evidence (I mean, did you see Steph last night?), so we did some analysis(!). It turns out that (surprise) the data show that NBA superstars Steph Curry, LeBron James, and James Harden don’t get hot any more than a coin that lands on heads 5 times in a row does.

The argument that shot streaks are random is based on probability. Any time an event with two outcomes is repeated many times, streaks are bound to occur. To study whether NBA shot streaks are random, or whether players have disproportionately long “hot” and “cold” streaks, I used shot-by-shot data from the 2014-2015 season and applied a geometric distribution framework.

The geometric distribution is a probability distribution that is used to model repeated trials of events that have two distinct outcomes, each of which occurs with a constant probability. NBA shots roughly fit these requirements – there are clearly two outcomes (makes and misses), and I’ll assume that a player’s season-long field goal percentage is the “true” probability that they make any given shot.

Using the geometric distribution, we can model the number of made shots in a row. Essentially, a streak means that a player makes a certain number of shots in a row and then misses the next. Mathematically, this is the probability of making k shots in a row (p^k) multiplied by the probability of missing the next shot (1 − p):

P(X = k) = p^k * (1 − p)
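This streak probability is easy to compute directly. Using a make probability of p = 0.487, which is consistent with the 24.98% length-1 streak probability for Curry in the table below:

```python
def streak_pmf(p, k):
    """Probability of exactly k consecutive makes followed by a miss,
    assuming independent shots made with constant probability p."""
    return p ** k * (1 - p)

print(round(streak_pmf(0.487, 1) * 100, 2))  # -> 24.98 (percent)
```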

Next, I applied this framework to the shot-by-shot data from last season for Steph Curry, LeBron James, and James Harden. Using the data and the geometric distribution, here are the expected and observed shot streaks.

                ---------- Curry ----------   ---------- James ----------   --------- Harden ----------
Streak Length   Prob.    Expected  Observed   Prob.    Expected  Observed   Prob.    Expected  Observed
1               24.98%   178.09    184        24.99%   174.15    176        24.64%   205.03    207
2               12.16%   86.69     85         12.19%   84.95     96         10.84%   90.17     88
3               5.92%    42.20     47         5.95%    41.44     38         4.77%    39.66     41
4               2.88%    20.54     17         2.90%    20.21     19         2.10%    17.44     18
5               1.40%    10.00     7          1.41%    9.86      7          0.92%    7.67      7
6               0.68%    4.87      4          0.69%    4.81      2          0.41%    3.37      5
7               0.33%    2.37      2          0.34%    2.35      1          0.18%    1.48      0
8               0.16%    1.15      1          0.16%    1.14      1          0.08%    0.65      0
9+              0.15%    1.09      0          0.16%    1.09      0          0.06%    0.51      0

Here’s a visual version of the same data:

Stephen Curry Shot Streaks vs Expectation

LeBron James Shot Streaks vs Expectation

James Harden Shot Streaks vs Expectation

While the observed and expected distributions look close, we can actually quantify the fit. We’ll use a chi-squared goodness-of-fit test to tell us whether the observed data are consistent with the geometric distribution.
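The chi-squared statistic is simple to compute by hand. Using Curry’s observed and expected counts from the table above:

```python
def chi2_stat(observed, expected):
    """Pearson chi-squared statistic: sum of (O - E)^2 / E over bins."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Curry's streak counts, lengths 1 through 9+
observed = [184, 85, 47, 17, 7, 4, 2, 1, 0]
expected = [178.09, 86.69, 42.20, 20.54, 10.00, 4.87, 2.37, 1.15, 1.09]
print(round(chi2_stat(observed, expected), 2))  # -> 3.61
```

Compared against a chi-squared distribution (8 degrees of freedom for 9 bins), a statistic this small corresponds to a p-value of roughly 0.89, in line with the results below. One caveat: the textbook test assumes expected counts of at least about 5 per bin, so in a stricter treatment the small tail bins would be pooled.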

Player  p-value  Chi-Squared Test  Conclusion
Curry   0.89     Fails to reject   Random
James   0.63     Fails to reject   Random
Harden  0.89     Fails to reject   Random

The p-value of a chi-squared test tells us how likely we would be to see deviations at least this large if the observed values really did come from the theoretical distribution. These high p-values mean we cannot reject the geometric model – the observed shot streaks are well explained by it.

Bringing this back to basketball… what we get from this analysis is that shot streaks match what we’d expect if they were truly random. Even NBA superstars don’t “get hot” – instead, since they tend to have higher field goal percentages in general, they naturally have longer streaks of made shots.

One thing to keep in mind, however, is that the model doesn’t account for timing. It’s totally possible that a player’s shot streaks are geometrically distributed overall, but when you isolate playoffs or overtime games, as examples, they tend to have longer streaks than expected. I mean, did you see LeBron in that Pistons game?

3v3 Overtime is Working

By Antonio Papa

This season, the NHL has initiated a rule change to create more overtime goals and fewer shootouts. Now, overtime play will be 3-on-3, instead of 4-on-4. A quick statistical analysis shows us that the new rule has – and will continue to – increase overtime scoring.

Shootouts were added after the 2004-05 lockout as an alternative to ties in the regular season, but they have been criticized as essentially flipping a coin to decide the winner. 3-on-3 play, in contrast, gives stronger teams an increased chance of scoring goals. In the early 1980s, the Edmonton Oilers even told their defenders to get into mutual roughing penalties on purpose so that the game would become 4-on-4 or 3-on-3. Then, Wayne Gretzky, Jari Kurri and Mark Messier would take over on the open ice. This was effective because players with superior skating ability gain an upper hand in 4-on-4 and 3-on-3 situations, resulting in more goals scored.

The NHL instituted the “Gretzky rule” in 1985 as a direct response to these shenanigans. The “Gretzky Rule” created the concept of coincidental minor penalties and allowed full strength play for offsetting penalties. A few years later, the NHL reversed the change in an attempt to reclaim some of that high-scoring open play. Expect this year’s 3-on-3 overtime to benefit top-heavy teams, like the Pittsburgh Penguins, who are sure to take advantage of the situation with skaters like Sidney Crosby, Evgeni Malkin and Phil Kessel.

Over the past eight seasons, 43% of overtime games had a goal and the other 57% needed a shootout (2,227 games). In this preseason’s overtime games with the rule change in place, 72% of overtime games had a goal and only 28% needed a shootout (24 games). Even with the small sample size, we can use a two-proportion z-test (difference of proportions) to determine whether this change is statistically significant. The standard binomial error is σ = .0105 for the regular season set and σ = .0926 for the preseason set. The result is that we are 99% confident that the new rule decreases the proportion of shootouts in overtime by 40%-58% in relative terms (about half) and should lead to high-octane teams winning more games in overtime.
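The standard errors quoted above can be reproduced directly. (The 0.28 and 0.57 inputs are the rounded proportions stated here, so the last digit of the preseason SE differs slightly from the figure above.)

```python
from math import sqrt

def prop_se(p, n):
    """Standard binomial error of a sample proportion."""
    return sqrt(p * (1 - p) / n)

se_old = prop_se(0.57, 2227)  # shootout rate under the old rules
se_new = prop_se(0.28, 24)    # shootout rate in the 3-on-3 preseason sample

# z-statistic for the difference in proportions (unpooled SE)
z = (0.57 - 0.28) / sqrt(se_old ** 2 + se_new ** 2)
print(round(se_old, 4), round(se_new, 4), round(z, 2))  # -> 0.0105 0.0917 3.14
```

A z of 3.14 exceeds the 2.576 cutoff for 99% confidence, supporting the conclusion that the drop in shootouts is statistically significant.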

[Editor’s Note: The last paragraph was edited post-publication to clarify the statistical test used]

AL Wild Card Live Probability Tracker

By Patrick Harrel

With three days left in the MLB season, there is still a lot to settle. The Astros looked strong as they sat atop the AL West for much of the year, but are now just trying to hold onto the second wild card spot. Meanwhile, the Angels have surged in September, and the Twins have also stayed in the race. With three games left, the Astros are holding on by 1 game over the Twins and Angels, and will look to stay ahead as they play in Arizona this weekend.

But how will the season finish? We will be tracking just that here, using live probabilities from Fangraphs’ win expectancy model. Follow here and see how your team’s chances at the playoffs change as the games go on tonight and into the weekend.