At this time of year, NBA analysts, fans, and front offices are all concerned with cost efficiency. Free agent season generates near-constant conversation about whether each new contract is a good deal, a bad deal, or a fair deal. What is the basis of all that conversation, though? More specifically, what standard determines whether a player is overpaid, underpaid, or fairly paid? If the standard is subjective, then offseason “grades” merely reflect how closely a team’s offseason moves happen to match what I think each player is worth. That correlation is not valuable to anyone aside from me; nobody else can use grades that only reflect a subjective opinion.
Rubric for Offseason Grades
In the world of education, any grade is based on a rubric – a set of standards that defines what type of performance earns each grade. Rubrics allow a grade to be actionable by other people, since anyone can know what type of performance earns a “B,” and how much better it is than the type of performance that earns a “C.” In this article, I want to lay out the rubric I will use to grade offseason moves across the NBA in my 2019 Offseason Crunch, which drops tomorrow. The rubric begins with the following demarcation:

The table above sets the baseline. I then adjust the baseline by up to two partial letter grades to reflect the raw value of the player, independent of his contract (e.g., a “C” could be raised to a “C+” or to a “B-”). Making this adjustment allows us to avoid placing too much blame on teams for overpaying for good players, and also prevents us from giving teams too much credit for players signed out of the bargain bin.
Finally, there is an option to adjust the grade by one partial letter based on the fit of the player on the new team. For example, if a player is an exceptionally good fit on his new team, we might grade the move as a “C+” instead of a “C.”
How Much Do Wins Cost?
Now then, the next question to ask is, what is the basis of the comparison required for the rubric? In order to compare a player’s projected value with the value of his contract, we first need to know how much each player is worth. While my work in The Basketball Bible and on this site has determined a method for attributing a team’s wins to the players on the team, we still need to know the dollar values of those wins.
How much is a win worth? There are two ways to go about answering that question. The first way would be to sum team payrolls, then divide by the total available wins, giving us the value of dollars per win. Doing this with the financial data from 2018-19 yields a value of $2.95 million per win. There’s nothing intrinsically wrong with this way of approaching the question, but it is not the method I chose.
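If you want to reproduce that first approach yourself, it is a single division. Here is a minimal sketch; the league-wide payroll total is entered as a round placeholder, not the actual 2018-19 figure:

```python
# Naive dollars-per-win estimate: total league payroll divided by total available wins.
# The payroll figure below is a placeholder; substitute the actual 2018-19 total to
# reproduce the ~$2.95 million per win cited above.
total_payroll = 3_630_000_000     # hypothetical league-wide payroll, in dollars
total_wins = 30 * 82 // 2         # 1,230 wins are available across an 82-game season

dollars_per_win = total_payroll / total_wins
print(f"${dollars_per_win:,.0f} per win")   # about $2.95 million with the placeholder above
```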
Instead, I chose to evaluate the existing relationship between player wins and player salary. In the chart below, you can see the relationship between the wins credited to each player by my analysis in 2018-19 and the player’s salary in 2018-19.

Why the Relationship Between Wins and Salary Is Exponential
When we analyze the relationship between the two variables, it becomes clear that the relationship is not linear. Calculating the average cost per win, as in the first method, can only accurately represent a relationship of direct variation. If the dependent variable (salary) does not vary directly with the independent variable (wins), then finding the average cost per win is not helpful.
The relationship between wins and salary is exponential rather than linear. If the relationship were linear, a 2-win player would make twice as much as a 1-win player, a 4-win player would make twice as much as a 2-win player, an 8-win player would make twice as much as a 4-win player, and so on. What we actually observe, however, is that an 8-win player usually gets paid far more than twice as much as a 4-win player. Each additional win a player generates is worth more than the preceding win. To state the idea in overly simplified form: one win might be worth $1, two wins $2.50, three wins $5, four wins $8, and so on.
Why is this so? In the NBA, it is difficult to acquire a player who can put up seven or more wins in a season, and even more so to acquire an all-league talent who can contribute 10 or more wins. What does every single team in the league want? Answer: an all-league talent who can rack up 10 wins a season. The result is that demand for elite players far outstrips the supply of those players. That supply/demand imbalance inflates the salaries not only of the best players in the league, but of second-tier stars as well. This tremendous upward pressure elevates the salaries of high-level players beyond what a linear relationship would suggest.
The converse is of course also true. The players at the bottom of the league are both easily replaced and frequently replaced. There is a natural deflation on these players’ salaries – which is the reason for having a minimum salary, by the way. A player who generates one win is not ¼ as valuable as a player who puts up four wins. A player who puts up four wins is a good player, and usually a starter. A player who produces one win is generally a benchwarmer. Teams are not going to pay benchwarmers ¼ the salary of starters, for a very simple reason: paying bench players on such a model would use up too much of the team’s cap room, and leave them unable to sign or retain starters. What’s more, there is no reason for a team to pay that level of player more than the minimum, because there is always a supply of borderline NBA players available for the minimum salary. The situation at the bottom of the league is precisely the reverse of the situation at the top of the league. There is a copious supply of fringe NBA players (whether they are on NBA rosters, in the G-League, or overseas), but little demand for them. Having greater supply than demand creates a downward pressure on the salaries of such players.
Calculating the Cost of Wins
To describe the relationship between wins and salary, I use the exponential function f(Wins) = $2,000,000 × (1.38235)^Wins. Broadly speaking, this means that to find the dependent value (salary) corresponding to a given number of wins, we multiply $2 million by the base 1.38235 raised to the power of that number of wins. Using player wins from last season as the independent variable (x) and player salaries from last season as the dependent variable (y), I found that the growth rate describing the curve is 1.38235; that value serves as the base, raised to a power equal to the player’s wins.
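In code, the function is a one-liner. Here is a minimal sketch that simply restates the formula above; the two constants come straight from this article:

```python
GROWTH_RATE = 1.38235       # fitted growth factor per win (from 2018-19 data)
INITIAL_VALUE = 2_000_000   # baseline salary in dollars (discussed below)

def salary_for_wins(wins: float) -> float:
    """Suggested salary, in dollars, for a player credited with `wins` wins."""
    return INITIAL_VALUE * GROWTH_RATE ** wins
```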
The only other value to set is the initial value, for which I have used $2,000,000. In analyzing the data, I found that the curve describing the data passed through $2,000,000 at an x-value of 1.0 win. I chose this figure for the initial value in order to guard against results that would be unrealistic for the NBA; specifically, I wanted to make sure we did not end up saying that every minimum contract costs more than the player is worth, when we know that NBA teams routinely fill out their rosters with players on minimum contracts. As mentioned above, the minimum salary counteracts the downward pressure on the salaries of players at the bottom of the league, so we need an initial value that expresses a baseline salary level in order to “teach” our function to return values that are actually possible for players at the end of the bench.
If you think it would make more sense to use the minimum salary for the initial value, rather than the normal salary for a player producing one win, I can understand. Unfortunately, the minimum salary is scaled for years of experience, ranging from $898K for zero years of experience to $2.565 million for 10+ years of experience. Since this fact precludes us from having the same initial value for each player, I chose to use $2,000,000 instead on the assumption that 1.0 win is a level of performance that is easy to reproduce by giving minutes to another bench player.
How Much Wins Cost in Practice
For example, if you are like most fans on NBA Twitter recently and want to know how much Kyle Kuzma is worth, you could insert his average wins per season (4.2) into the function above and determine that his performance so far in his career has been worth $7.8 million per year ($7,791,589, if we’re being precise). Raising the growth rate to the power of 4.2 – Kuzma’s wins – gives us 3.896, and multiplying that result by the initial value – $2,000,000 – gives us Kuzma’s suggested salary.
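The same calculation, using the salary_for_wins sketch from earlier:

```python
kuzma_wins = 4.2  # career average wins per season, per the figure above
print(f"${salary_for_wins(kuzma_wins):,.0f}")  # roughly $7.79 million
```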
Now we know how to calculate the amount a player should be paid, and we have an actionable rubric with which to compare how much a player should be paid with how much he is being paid. With this method in hand, we are ready to grade offseason moves. Tune in tomorrow for grades and analysis on every move of the offseason!
Hey, Greg, I liked your model very much. I do have a couple of nits to pick, though. First of all, there was no need for the initial value to be $2M, as you could have run a generalized linear regression there. Your idea of examining the relationship between contract and wins produced is quite good, and would be well supported by a regression model. All you needed to do was either set the link function to “exp” or run the log function over the contract data. Additionally, you could have used a different intercept for every player, with the intercept being the minimum salary available to that player. Finally, I see you don’t take a player’s age into account in your model, which might hurt its predictive power when talking about very young or very old players. Though I’m bugging you about this stuff, I really liked your piece and your model, Greg. Keep up the good work.
Thanks for commenting, Andre, and for the thoughtfulness of your response. I agree that a generalized linear regression would have captured the geometric variation in the output, similar to the exponential function I employed. My concern is that a regression would have been better suited to tell us how well my model explains player salary. While that would certainly be an interesting endeavor, this article aimed to estimate how salary varies with performance. In other words, taking it as a given for the purposes of this study that the model accurately evaluates players, how much more does a six-win player make than a three-win player?
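(For anyone who wants to try the route Andre describes, a minimal sketch with made-up numbers – not my actual win estimates or salary data – might look like the following: fit a line to the log of salary, then exponentiate the slope back into a per-win growth factor.)

```python
import numpy as np

# Hypothetical (wins, salary) pairs standing in for real player data.
wins = np.array([0.5, 1.0, 2.0, 3.5, 5.0, 7.0, 9.0, 11.0])
salary = np.array([1.5e6, 2.0e6, 3.5e6, 6.0e6, 11.0e6, 20.0e6, 30.0e6, 38.0e6])

# Least-squares fit of log(salary) against wins: log(salary) = a + b * wins.
b, a = np.polyfit(wins, np.log(salary), 1)

initial_value = np.exp(a)   # analogous to the $2,000,000 baseline
growth_rate = np.exp(b)     # analogous to the 1.38235 growth factor
print(f"initial value ≈ ${initial_value:,.0f}, growth rate ≈ {growth_rate:.4f}")
```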
I was definitely tempted to try to determine a different intercept for every player. In a purely theoretical model, that would have been the way to go. In this system, however, there are hard and arbitrary bounds on the upper and lower end which constrain salary. To wit, a player’s minimum salary for his first four seasons is usually no different from his maximum salary; players on their rookie contracts make the slot value for their draft position. After the first four seasons, minimums and maximums could be modeled using terms such as (Seasons-4)+1, (Seasons-4)+2, … but only up through (Seasons-4)+6, since the minimum and maximum salaries stop scaling at 10 years of service.
Again, thanks for contributing!