As with the previous posts in this series which detailed the best defenders at each position, I start with the question “Who are the best defensive centers in the league?” In order to address the question properly when evaluating centers, however, it is necessary to answer a prior question. What is a center’s defensive role in the modern NBA? During some prior eras, the center could remain near the basket either defending a fellow behemoth on the low block or walling off the path of opposing drives. Rule changes constricted the scope of zone defense. Then, the 3-point shooting revolution caught even big men in its tantalizing web. For the first time in basketball history, forces conspired to pull centers away from the basket for good.

As always happens in games of strategy, however, there were side-effects to stylistic changes. The spacing afforded by stretch bigs has meant that dribble penetration continues to be an integral part of every team’s offensive attack. Having fewer big bodies in the way makes driving easier. As a result, the role of “rim protector” is still very much alive. Teams in today’s NBA still need interior defense from their center.
At the same time, teams also need their center to be able to cover quicker opponents on the perimeter. The center’s role now lies somewhere in between the inside and the outside. When we ask who the best defensive centers are, we are really asking two things: 1) Who is the best rim protector? 2) Who is the best at defending on the perimeter?
How Can We Tell Who the Best Defenders Are?
Until fairly recently, it was difficult to answer either of these questions accurately. Though 27 of the 40 Defensive Player of the Year awards in history have gone to centers, the awards have been based mostly on the accumulation of blocks and steals. A thoroughgoing analysis of individual defensive contribution was impossible to attain. Even “advanced stats” such as Box Plus/Minus, Wins Produced, and NBA Efficiency all base the majority of their defensive component on accumulating steals and blocks and avoiding personal fouls.
With the advent of tracking technology, however, it became possible to acquire a more nuanced view of defensive performance. In recent seasons, the NBA has made available a bevy of information including the conversion rate on shots taken “against” a defender on various types of actions (pick-n-roll, isolation, off-ball screens, etc.). We also now have more specific data regarding which offensive player a given defender guarded for the majority of each possession. Can this data help us address the question of who the best defensive centers in the league are?
Strengths of Tracking-Based Defense
I built a Tracking-Based Defense model which compares players’ performance defending each play type against position average, team average, and league average. After adjusting the data, we can properly assign credit for Opponent Missed FGs forced by each player. The model’s greatest strength lies in handling the high volume of shots taken with a center nearby: it does a very good job of capturing how well a center defends the basket. Rudy Gobert, Draymond Green, and Anthony Davis have consistently placed among the best defenders in the league by this model.
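To make that credit-assignment step concrete, here is a minimal Python sketch of the idea: compare a defender’s results on each play type against a blend of position, team, and league baselines, and credit him with the misses forced above that expectation. The play-type labels, dictionary fields, blending weights, and example numbers are hypothetical placeholders for illustration, not the model’s actual inputs or parameters.

```python
# Hypothetical sketch: blend position, team, and league baselines into an
# expected FG% for each play type, then credit the defender with the misses
# he forced above that expectation.

def expected_fg_pct(baseline, weights=(0.5, 0.3, 0.2)):
    """Blend position, team, and league FG% baselines into one expectation."""
    w_pos, w_team, w_lg = weights
    return (w_pos * baseline["position"]
            + w_team * baseline["team"]
            + w_lg * baseline["league"])

def missed_fgs_credited(defended_shots, baselines):
    """Sum misses forced above expectation across all play types."""
    credit = 0.0
    for play_type, shots in defended_shots.items():
        exp_makes = expected_fg_pct(baselines[play_type]) * shots["attempts"]
        credit += exp_makes - shots["makes"]  # positive = misses forced
    return credit

# Toy example: a center defending pick-and-rolls and isolations.
baselines = {
    "pick_and_roll": {"position": 0.48, "team": 0.50, "league": 0.49},
    "isolation": {"position": 0.42, "team": 0.44, "league": 0.43},
}
defended = {
    "pick_and_roll": {"attempts": 200, "makes": 88},
    "isolation": {"attempts": 60, "makes": 22},
}
print(missed_fgs_credited(defended, baselines))  # ~13.3 misses forced above expectation
```

In the real model the baselines would of course come from the league’s tracking data rather than hand-entered numbers.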

More importantly, the model recognizes the various levels of effectiveness among players performing similar roles. Plodding bigs like Nikola Jokic may force a lot of misses relative to the average player, but the model recognizes that they also have more opportunity to force misses as rim protectors. In offensive terms, we would call these “high-usage players.” Given his volume of opportunity, Jokic actually performed at league-average efficiency.
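As a rough illustration of that opportunity adjustment, the sketch below (with made-up numbers) scales a defender’s misses forced by his shots defended before comparing to the league rate; a high-volume rim protector can force a big raw total of misses and still grade out as merely average.

```python
# Hypothetical sketch of the opportunity adjustment: compare misses forced
# per shot defended to the league-average miss rate, rather than raw totals.

def efficiency_vs_league(misses_forced, shots_defended, league_miss_rate):
    """Positive = forcing misses at an above-average rate; ~0 = league average."""
    return misses_forced / shots_defended - league_miss_rate

# A high-volume center: a large raw total of misses forced...
print(efficiency_vs_league(misses_forced=310, shots_defended=620,
                           league_miss_rate=0.50))  # ...but 0.0 vs the league rate
```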
Limitations of Tracking-Based Defense
It is important to note that the Tracking-Based Model focuses only on the result of the possession. The model attributes credit or blame to the player who was the nearest defender at the time of the shot (as determined by the league’s tracking technology), whether or not that defender was assigned to the player who took the shot. The result is that, in small samples, players can get dinged for providing help defense. This effect occurs even when the opposing team would have been more likely to score had the player not helped.
Thus, a player takes an action (helping to the ball) that decreases the likelihood the offense will score, and the model interprets the event as him having performed poorly on defense (if he is the nearest defender on a made shot). In the aggregate this may even out: Player A gets dinged for Player B’s mistakes, but Player B also gets dinged sometimes for Player A’s mistakes. The thing is, we don’t know that this type of error will come out in the wash. While the majority of plays do not fit this description, we know that there are some errors in the data (perhaps “misappropriations” would be a better description than “errors”).
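A toy version of the attribution rule makes the problem easy to see. The field names below are hypothetical; the only point is that blame follows the nearest defender at the time of the shot, not the assigned defender, so a helper who rotates over to contest can be debited for a make he actually made less likely.

```python
# Hypothetical sketch of nearest-defender attribution on a made shot.

def charge_defender(shot_made, defenders):
    """Charge whoever is closest to the shooter at release, assigned or not."""
    nearest = min(defenders, key=lambda d: d["feet_from_shooter"])
    return nearest["name"], (1 if shot_made else 0)  # 1 = blame for a make

defenders = [
    {"name": "Assigned Defender", "feet_from_shooter": 7.0},  # lost his man
    {"name": "Help Defender", "feet_from_shooter": 2.5},      # rotated to contest
]
print(charge_defender(shot_made=True, defenders=defenders))   # blames the helper
```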
Nonetheless, Tracking-Based Defense is very reliable in distributing defensive credit among the players on a team. When we apply it to centers in particular, it accurately identifies the best rim defenders. The second part of the center’s job description, however, can be muted in the Tracking-Based Model. Since centers defend far more shots inside than they do outside, the sheer volume of shots defended near the basket can overwhelm the model. Even the strongest or weakest performance defending quicker opponents on the perimeter may not shine through due to the smaller number of opportunities.
Strengths of Matchup-Based Defense
My new Matchup-Based Defense model, by contrast, is more effective in identifying the performance of centers when they must range outside of the paint. Because the model keys in on which player a defender is matched up against, it is able to identify centers who effectively defend primary scorers far away from the basket. As I’ve done previously with point guards, shooting guards, small forwards, and power forwards, I find it useful to break each position group down into quartiles based on how much weight they carry on the defensive end. The results in the chart below demonstrate the limited range of centers.
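For the curious, the quartile split itself is simple. Below is a minimal pandas sketch, assuming a hypothetical defensive_possessions column as the measure of defensive load; the player names and numbers are placeholders.

```python
# Hypothetical sketch: bucket centers into quartiles by defensive load.
import pandas as pd

centers = pd.DataFrame({
    "player": ["Center A", "Center B", "Center C", "Center D",
               "Center E", "Center F", "Center G", "Center H"],
    "defensive_possessions": [5220, 4293, 3900, 3400, 2800, 2300, 1700, 1100],
})

# Q4 = the centers who carry the heaviest defensive load.
centers["load_quartile"] = pd.qcut(centers["defensive_possessions"], 4,
                                   labels=["Q1", "Q2", "Q3", "Q4"])
print(centers.sort_values("defensive_possessions", ascending=False))
```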

Let’s look now at the upper quartile of centers – the centers who carry the highest defensive load in the league. Who are the best defenders among that group?
Pretty clearly, Myles Turner was a high-impact defender last year. Without even accounting for rebounds, Turner saved the Pacers 15.9 points per 100 possessions last season, nearly three points more than runner-up Rudy Gobert. Gobert’s dominance in my Tracking-Based Defense model, and in nearly all defensive metrics, has been undeniable. His 2016-17 campaign, when he amassed a titanic 8.3 Defensive Wins, was the best defensive performance in the four-season data set. His impact on the most valuable shots in the game (shots at the rim) is evident in metrics as widely divergent as Defensive Win Shares, Defensive Box Plus/Minus, RAPM, DRAYMOND, and RAPTOR. Is it possible, or even conceivable, that Myles Turner was better last season?
Gobert vs. Turner
Relative to the number of points we would expect each player’s opponents to score, Turner’s matchups scored significantly fewer points than Gobert’s. The ratio of points against to expected points was 0.88 for Turner and 0.99 for Gobert. Both players put up good marks, but Turner was noticeably superior. Though Gobert carried a heavier load than Turner, Turner still saved 234.4 points from Base Shooting Defense on 4,293 possessions; Gobert saved 330.5 points in 5,220 possessions. Turner led the league in block percentage and bested Gobert in blocks per 100 possessions by a full block. Myles Turner also forced 3.97 turnovers per 100 possessions, second best in his quartile behind only Aron Baynes.
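As a back-of-the-envelope check on how those figures fit together, the sketch below assumes that “points saved” is simply expected points against minus actual points against, and that the ratio above is actual divided by expected. The numbers in the example are hypothetical, not the real matchup totals.

```python
# Hypothetical sketch relating points saved, the actual/expected ratio,
# and a per-100-possession rate.

def matchup_summary(expected_pts, actual_pts, possessions):
    points_saved = expected_pts - actual_pts
    ratio = actual_pts / expected_pts
    per_100 = 100 * points_saved / possessions
    return round(ratio, 2), round(points_saved, 1), round(per_100, 1)

# e.g. matchups expected to score 2000 points actually score 1760
print(matchup_summary(expected_pts=2000, actual_pts=1760, possessions=4300))
# -> (0.88, 240, 5.6)
```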

Though the matchup data favors Turner, other defensive metrics depict Gobert as slightly better. Both Defensive Win Shares and DBPM have Gobert higher than Turner, as do ESPN’s Defensive RPM and 538’s DRAYMOND and defensive RAPTOR. Even my own Tracking-Based Defense model rates Gobert as the superior defender in 2018-19. Does the matchup data recognize value that other methods overlook, or does it miss value that other models recognize?

As wishy-washy as it may sound, I believe there is solid evidence that both possibilities are partially accurate. Matchup-Based Defense does not take account of the type of shot a player is defending. Preventing good scorers from scoring while blocking shots and forcing turnovers requires defensive range. Protecting the rim, by contrast, does not require the center to stray outside of the paint. If it is true that modern centers have two roles (to stop shots at the rim and to defend opponents all over the floor), then perhaps Matchup-Based Defense can measure the second role more accurately than other metrics.
Limitations of the Model
What about role players? The table below includes all players that spent the majority of their minutes at center in the 2018-19 season.
If we evaluate all centers, Hassan Whiteside climbs up the ladder. But … this happened:
Why in the world would a statistical model rate Whiteside as an excellent defender? Models can only evaluate what happened, not what could or might have happened. Even a cursory viewing of Whiteside’s play reveals a tendency toward pouting when the ball doesn’t find him, and the most common consequence of this tendency is Whiteside getting benched. The result is that Whiteside’s weaknesses are masked by his coaches’ choices. The fact that Whiteside gets yanked when he begins to negatively impact the team’s chances creates an imbalance between his actual value and the data observed while he is in the game. Like an iceberg, much of Whiteside’s negative value is hidden beneath the water while he sulks on the bench.
Statistics can tell us a lot, and the data that has become available in the player-tracking era tells us more than we’ve ever known about defense. But there are some things statistics can’t tell us. They can’t tell us what would have happened if a player had been in the game when he wasn’t. They can’t tell us what would have happened if a player had been cast in a different role, or played with different teammates. The real key – the secret sauce, if you will – is to recognize the context that the data does provide, and to interpret and apply that context consistently. That is what Matchup-Based Defense strives to achieve, and what all reliable metrics need to do.