It's a really complicated problem to address relying on public-facing data. This article gets into it:
I like this:
For each player, the evaluation is really a three-step process:
- What is he being asked to do?
- How well does he do it?
- How much value does he provide relative to role?
None of these are easy questions, and we'll start with the last one first. At this point it seems reasonably well-settled that players on the larger end of the position spectrum provide more defensive value than smaller, because being large and in the way of the basket has proven to be a fairly major part of modern defense. Big surprise there. Beyond that, it's hard to contextualize value across roles at this point without resorting to catchall metrics, which we'll get to.
The article goes on to try to answer "what is he being asked to do?" by identifying how frequently a player guards a team's primary option (defined by usage rate). It then gets into "how well does he do it?" by looking at the pros and cons of different metrics.
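To make the "primary option by usage rate" idea concrete, here's a minimal sketch of that calculation: take the opposing player with the highest usage rate, then compute what share of a defender's matchup possessions were spent on him. All names, data shapes, and numbers here are invented for illustration; real matchup-tracking data would obviously be messier.

```python
# Hypothetical sketch: how often does each defender guard the opponent's
# "primary option," defined as the highest-usage opposing player?
from collections import defaultdict

# Invented usage rates for an opposing lineup
usage_rate = {"Star": 0.31, "Wing": 0.22, "Guard": 0.19, "Big": 0.16, "Role": 0.12}

# Invented matchup data: (defender, player guarded, possessions)
matchups = [
    ("A", "Star", 40), ("A", "Wing", 10),
    ("B", "Star", 5), ("B", "Guard", 45),
]

# Primary option = highest usage rate
primary = max(usage_rate, key=usage_rate.get)

totals = defaultdict(int)
on_primary = defaultdict(int)
for defender, guarded, poss in matchups:
    totals[defender] += poss
    if guarded == primary:
        on_primary[defender] += poss

# Share of each defender's possessions spent on the primary option
share = {d: on_primary[d] / totals[d] for d in totals}
print(share)  # → {'A': 0.8, 'B': 0.1}
```

With numbers like these you'd read defender A as the one "being asked to" take the toughest assignment, which is the role-context piece the article is after.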
Honestly, my takeaway is that a coach watching film and knowing what his scheme is supposed to accomplish will probably have a noticeable edge over any of the existing defensive impact models when it comes to accurately evaluating his players' defensive abilities. Especially if he can also marry some selective stats-y things, like shooting percentage allowed with [player] as closest defender, with that scheme knowledge.