Methodological Reviews of Basketball Models – Part 2

by Andre Vizzoni


As outlined in the first post of this series, I will be reviewing several models, beginning with Wins Produced. The creators of Wins Produced are David Berri, Martin Schmidt, and Stacey Brook, three sports economists. Their work on the subject of basketball analysis spans many academic papers, especially Berri’s, and has also spawned two books and two blogs.

Books and Blogs

The first book to come from the study – The Wages of Wins – serves as a collection of the academic papers. In it, the authors’ conclusions are woven together to form a more complete and coherent narrative. The result is an enjoyable read, even if one disagrees with a lot of the authors’ conclusions. The second book, Stumbling on Wins, includes new data, new papers, and new conclusions. The Wages of Wins Journal ran from the launch of the first book until 2011. By then, the content had moved to a different site, which is now inactive. Aside from the authors, a number of other people contributed to the Journal.

Interlude – Probability Theory

When working with probability and statistics, there are those who believe strongly in the appropriateness or inappropriateness of a given method. My thinking does not agree with this tendency, though my disagreement does not mean that ‘anything goes.’ Rather, my contention has to do with the nature of probability. Larry Wasserman wrote in his book, All of Statistics, that ‘Probability is a mathematical language for quantifying uncertainty’. Probability Theory, then, tries to make chaos (uncertainty) more certain. It brings rules and structures to a landscape where there are none. Using instruments constructed by humans to measure what eludes human description is a losing battle. When the object involves people (like, say, a game where ten really tall people run around trying to put a ball through a hoop), it becomes even more of a Sisyphean task, as people are nothing if not unpredictable.

Is it possible to measure chaos? Or can we only work with reasonable approximations of it? I tend toward the latter response, although it is more difficult to define ‘right’ and ‘wrong’ when we take this view. Working with approximations makes matters unavoidably subjective (that is rich coming from someone who is going to do a methodological review of other people’s work, isn’t it?)

Despite the difficulty in distinguishing right from wrong with certainty, analysis and critique teach us a great deal. Review often uncovers clear flaws or strengths in a model which may not have been apparent at first blush. The task is worthy, but it seems advisable not to overestimate our ability to measure the unmeasurable.

Rather than simply giving up on modeling chaos, however, I affirm that we should consistently apply the ‘reality test’: taking our hypotheses out to the real world and testing them against observations to determine how close we are to being right (or how close we are to being wrong).

This means that if a model says something should be happening – or should have happened – a certain way, we must attend to what is actually happening – or has happened. Over time, if a model repeatedly fails the reality test, it may be time to retire the model.

The Authors

With that out of the way, let’s briefly introduce the authors and name a couple of assumptions regarding possible influences on their work. One would assume that the authors’ being college professors and researchers affects what they write and how they write. I also assume that Economics, as a research field, shares similarities with Statistics, among which is a focus on the principle of parsimony. The direct application of this principle means that the researcher prefers a simple model to a more complex one when the results are similar. In this case, it leads Dr. Berri et al. to look for a model that explains results in basketball as simply as possible.

The authors’ profession is also important due to the influence of the peer review process. Since the authors anticipated scholarly critique by peers in their initial publication(s) in journals, they were forced to take great care to avoid making assumptions without diligent proof.

Last but not least, one needs to take into account the authors’ goal in writing. The titles of the major works give good insight on this count. By ‘Taking Measure of the Many Myths in Modern Sport’, the ‘Two Economists Expose(d) the Pitfalls on the Road to Victory in Professional Sports’. The objective, then, is to point out what is done wrong in professional sports, from an economist’s point of view.

The Model

The Wins Produced model has the objective of explaining why teams win in basketball. The model defines basketball as a game where possessions are the currency that matters. The premise was based on previous work by John Hollinger and by Dr. Dean Oliver, according to the authors. In their first book, they affirm that ‘Points scored are determined by how often a team has the ball and its ability to convert possessions into points.’

Already we see that the authors adopt an economist’s way of looking at a problem. There are resources – possessions – and there is an objective – winning. The way one achieves the objective depends upon the deployment of resources. If a team generates more resources than its opponent, it has a better chance of winning. If the team converts its resources at a higher rate, by spending them more efficiently, it also benefits.

The authors try to prove that their view of basketball is accurate by analyzing game data from the 1987-88 through 2008-09 NBA seasons. The study concludes that teams do not consistently generate more possessions than their opponents. Thus, whether a team wins a game or not can be explained largely by how efficient they and their opponent are at spending possessions.

The Philosophy Behind the Model – Possessions

The authors run a regression model (clearly a linear regression). The response variable is the number of wins a team has. The two explanatory variables are the number of points scored per possession employed and the number of points allowed per possession employed by the opponent.

A regression model estimates how a group of variables affects one another. In the case of the Wins Produced model, this means that the model is intended to express the effect of scoring efficiency and opponent scoring efficiency on winning. As such, the model allows us to derive an estimate of how many more wins a team would be expected to have if its offensive or defensive efficiency improved.
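As a sketch of what such a regression looks like, the snippet below fits team wins against the two efficiency variables with ordinary least squares. The data and coefficients are synthetic and illustrative, not the authors’ actual inputs:

```python
import numpy as np

# Synthetic example of the Wins Produced regression setup: team wins
# regressed on offensive and defensive points per possession.
# All numbers here are made up for illustration.
rng = np.random.default_rng(0)
n_teams = 30
off_eff = rng.normal(1.02, 0.03, n_teams)  # points scored per possession
def_eff = rng.normal(1.02, 0.03, n_teams)  # points allowed per possession
wins = 41 + 400 * (off_eff - def_eff) + rng.normal(0, 2, n_teams)

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n_teams), off_eff, def_eff])
beta, *_ = np.linalg.lstsq(X, wins, rcond=None)
r2 = 1 - ((wins - X @ beta) ** 2).sum() / ((wins - wins.mean()) ** 2).sum()
print(beta, r2)  # offensive coefficient positive, defensive negative, high r-squared
```

With data generated this way, the fit recovers a positive coefficient on offensive efficiency, a negative one on defensive efficiency, and a high coefficient of determination – the same shape of result the authors report.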

Since the study found that NBA teams average 1.02 points per possession in the sample, we conclude that a possession is worth about the same as a point. This finding prompts the authors to consider that their regression can proceed with possessions as the explanatory variables. To wit, knowing the point value of a possession allows the model to estimate the “wins value” of a possession.

By finding out the value of a possession, the authors can estimate the values of box score statistics. One can think about any stat with regard to the spending of possessions. For example, to take one shot means to spend one possession. To rebound a ball means to generate a possession. And this goes on. If one looks at how many possessions a player generates and at how they spend them, one can infer how many wins the player creates for their team.
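That bookkeeping can be sketched as a tally of the possessions a stat line spends. The 0.44 free-throw weighting below is a common convention in possession estimates, not necessarily the authors’ exact formula:

```python
def possessions_spent(fga, fta, tov, orb):
    """Rough count of possessions a player uses up: shot attempts and
    turnovers end possessions, free-throw trips partially do (0.44 is a
    common weighting), and offensive rebounds win a possession back."""
    return fga + 0.44 * fta + tov - orb

# A player with 15 shots, 6 free throws, 2 turnovers, 3 offensive boards:
print(round(possessions_spent(15, 6, 2, 3), 2))  # 16.64
```

Once possessions spent and generated are tallied this way, the possession-to-wins exchange rate turns the box score into a win estimate.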



Next, Dr. Berri et al. model defensive value. In their study, the most important factor is how good the whole team is at defending. Since the traditional box score does not give proper credit to each player, the authors look for a compromise. The position they choose is to divide the credit for defensive work equally between the players on a team.

For an example, let’s take a game between Team A and Team B. If Team B scores on Team A, the number of wins generated by Team B is the same as the number of wins lost by Team A. All of the lost value is divided by five and given as a ‘demerit’ to every player on Team A. To put it in a reductive way, this fourth step is the team defensive adjustment. The model sees all players as equally responsible for the team’s defense – whether good or bad.
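A minimal sketch of that fourth step, with an illustrative wins-per-point rate standing in for the model’s conversion (my numbers, not the published coefficients):

```python
def defensive_demerit_per_player(points_allowed, wins_per_point=0.032, n_players=5):
    """Split the win value lost to the opponent's scoring equally among
    the five players on the floor, as the team defensive adjustment does.
    wins_per_point is illustrative here, standing in for the model's
    wins-per-possession value (a possession is worth about a point)."""
    return points_allowed * wins_per_point / n_players

# If Team B scores 110 points on Team A, each Team A player's demerit:
print(round(defensive_demerit_per_player(110), 3))  # 0.704
```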

Position Adjustment

The next step is to estimate how many wins every player produces. Doing so necessitates a position adjustment. Different positions lead to different expected amounts for each box score statistic. Centers tend to heavily outrebound Point Guards, while ballhandlers usually get more assists than pivots do. Hence, there is a need to adjust win production for the position a player plays.


After adjusting for position, we must take into account the fact that teammates’ blocks and assists may artificially inflate a player’s value. Thence, the final output for each player is a result of the player’s box score stats, a position adjustment, a teammate adjustment, and their credit for the team’s defensive play.

The Results

A possession is worth about 0.032 wins, according to the model. And the variation of scoring efficiency for both sides of the ball explains around 95% of the variation in wins. Reporting the results in this manner indicates that Dr. Berri et al. are using the coefficient of determination to evaluate their model. Given the high value for the coefficient, they conclude that their model is doing its job well. To be clear, the conclusion brought by this result is quite obvious and straightforward: scoring more points than your opponents means you will win more.

As previously mentioned, the value of a possession leads to the value of other stats. For assists, however, the situation is less clear. The authors look at what happens to a team’s production when one of their players gets an assist. In this separate regression, they found that every assist counts as around 0.67 points.
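Chaining the model’s exchange rates gives a back-of-envelope win value for an assist. The arithmetic below is my own sketch, not a published Wins Produced coefficient:

```python
WINS_PER_POSSESSION = 0.032   # from the authors' regression
POINTS_PER_ASSIST = 0.67      # from their separate assist regression
POINTS_PER_POSSESSION = 1.02  # league average in the sample

# Convert an assist's point value into possessions, then into wins.
assist_in_possessions = POINTS_PER_ASSIST / POINTS_PER_POSSESSION
assist_in_wins = assist_in_possessions * WINS_PER_POSSESSION
print(round(assist_in_wins, 3))  # 0.021
```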

At a later date, Dr. Berri also adjusted the value of a defensive rebound. Many times, a rebound is easily available to all of the defensive players. As a result, some rebounds are ‘stolen’ from teammates. Still, for the adjustment, one takes a player’s production and divides it by the mean production of their position.

Methodological Review

The Good

What I like about the model is its simplicity. One can understand its outputs without looking too deeply at the math. A team tends to win if it uses its possessions wisely. Every box score stat has a direct relationship with possessions. A player’s value derives from how many of each box score stat the player accrues. However, the position they play and the quality of their teammates affects the player’s own statistical profile. How much production the other team generates also has an impact. After adjusting for all of that, you have the value for a given player.

The model is simple to understand, and is also easy to reproduce. Every step to calculate its outputs is freely available. The model can also translate directly to other leagues. In fact, Dr. Berri and Dre Alvarez have employed the model with data from NCAA basketball and the WNBA. I have done the same for Brazilian basketball data. Portability is a clear plus for Wins Produced.

Finally, Wins Produced ends up being fairly accurate at evaluating a player’s production while being quite parsimonious. In fact, it is a lot like my model for sports results in its adherence to the principle of parsimony. Both get good results even though they are simplified representations of reality. At bottom, this is precisely what models do – treat their subject as though it were simpler than it truly is. To put it another way, all models are wrong but some are useful. Wins Produced seems to belong in the useful category.

Defensive Bias

Now, onto the biases presented by the model. The first and most prominent bias is the way the model handles defense. I think I understand the rationale – that playing defense is a team activity. For the authors, this implies that a defense is only as weak as its weakest link. That is a very reasonable assumption. I would even argue that the concept probably holds true in many sports, like offensive line play in American football or the sport of football (soccer) in toto, according to the authors of The Numbers Game.

For all that, when we are talking about specific players, the assumption seems fishy. Based on the model, one would conclude that the worst defender on a team has about the same defensive impact as its best. That conclusion is not reasonable, though Wins Produced is not the only model which leads to conclusions like this. In fact, the limitation is shared with a few other models, and it has a great deal to do with the kind of data that the modeler(s) have had access to. In 2006, when Wins Produced first appeared, better defensive data was not widely available.

Be that as it may, there is more access to defensive data now. Greg himself has a model that tries to account for both the difficulty of a player’s defensive role and how efficient the player is within that role, which I highly recommend. Greg also has a book which includes his own methodological review of basketball models. His review helped a great deal with mine. The book, as a whole, would be a great read for anyone interested in further evaluating the strengths and weaknesses of various models.

Role Bias

There is also explicit bias in the way the model adjusts for position. While I quite like the fact that the model uses a position adjustment at all, there are probably better ways to make that adjustment. In basketball, different systems ask for different things from their players. As a result, not all power forwards are asked to do the exact same things.

System A might ask of its power forwards that they involve themselves more in the passing game, on the perimeter. As a result it would be a bit harder for them to grab offensive boards than it would be for a PF on another team. Similarly, System B may put emphasis on rebounding, which would give its PFs more chances for rebounds.

Treating every player who plays the same position as equal is reasonable, but there is information lost. I remember that Ari Caroline, a contributor of The Wages of Wins Journal, actually tried to adjust Wins Produced based on his position algorithm. Using the algorithm, he was able to derive optimal lineup construction principles using Principal Component Analysis.

Usage Bias

Since the professors value every event in the sport in relation to the efficient expenditure of possessions, they see missed shots as wasteful. In consequence, the model penalizes players that miss lots of shots, owing to the possessions lost. In general, the penalty is reasonable. The problem with the demerit for misses is that it does not make allowance for a player’s usage. This Eustacchio Raulli tweet is a perfect example of why usage is a significant factor in this context. If a player is good enough as a shooter for their team to give them all of the difficult end-of-shot-clock shots, this says good things about their theoretical value. Yet, Wins Produced would punish the player for that, as they would lose value in the model for wasting too many possessions.

In a similar vein, the main ball handler of a team is punished for turnovers, with no allowance for how many more turnovers the team might commit if the ball-handling duties fell to another, less capable player.


In a methodological review, it is also necessary to explore questionable areas or concepts which call for further explanation or demonstration. One such aspect of this study is the use of the coefficient of determination as the authors’ sole ‘proof’ that their model does well what it intends to do. The coefficient of determination (or r-squared) is not the be-all and end-all of analyzing model fit.

There are also problems with how the model estimates the value of a possession. First off, 1.02 is not the same as 1. More importantly, the study does not evaluate how the scoring efficiency of the league varies throughout the years. There are years where that efficiency might be significantly more or less than 1.

Perhaps my biggest gripe with the model, however, is the choice of a linear regression. Linear regression assumes a continuous response with normally distributed errors. A team’s number of wins, though, is a discrete variable. To put it simply, one can win 2 or 3 games, but not 2.5. In spite of that, a linear regression treats that as possible.

That is why, in my opinion, the choice of regression model is inherently questionable. In the regression, the modelers look at full season wins, which are most probably binomially distributed. By the Central Limit Theorem (CLT), one can act as if win numbers follow distributions that are approximately normal. Why use approximations to run linear regressions, though, when one can just as well use the true values and run Binomial Regressions?
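A minimal numpy sketch of that alternative, with synthetic data and a hand-rolled gradient ascent in place of a statistics package’s GLM routine: season wins are drawn as Binomial(82, p), with p a logistic function of an efficiency margin.

```python
import numpy as np

# Synthetic seasons: wins ~ Binomial(82, p), p driven by efficiency margin.
# The data-generating numbers are illustrative, not estimated from the NBA.
rng = np.random.default_rng(1)
n_teams, games = 30, 82
eff_margin = rng.normal(0.0, 0.03, n_teams)  # points-per-possession margin
true_p = 1 / (1 + np.exp(-12 * eff_margin))  # hypothetical win probability
wins = rng.binomial(games, true_p)

# Fit the binomial regression by gradient ascent on the log-likelihood.
X = np.column_stack([np.ones(n_teams), eff_margin])
beta = np.zeros(2)
for _ in range(20000):
    p = 1 / (1 + np.exp(-X @ beta))
    beta += 1e-3 * X.T @ (wins - games * p)  # score function of the binomial GLM
print(beta)  # intercept near 0, positive slope on the efficiency margin
```

The fitted model respects the discreteness of wins (no 2.5-win seasons) and keeps predicted totals between 0 and 82 by construction, which a linear regression cannot guarantee.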

The Bottom Line

I went into this methodological review with a basic level of approval for the Wins Produced model, and the review has not changed that viewpoint. The model is both simple and mathematically robust. It has a few very reasonable assumptions, and the books and blogs bring a lot of insight into basketball. It also has a few questionable assumptions and a few biases, which I have illuminated here. These questionable areas do not invalidate the model, however; as with any type of scientific inquiry, they invite further research.

Up Next

In the next post of the series, I will do a methodological review of the Win Shares model, developed on Basketball Reference based on the work of Dean Oliver.

