One of the many things that’s interesting about the NFL Draft is the way draft analysts and experts create elaborate evaluations of players, try to predict what NFL teams will do, and then repeat the entire process the next year with very little accountability or reflection on what happened the prior year. I have previously written about how little similarity there tends to be between draft boards and draft reality. Despite this, there are a few experts touted as “the best in the business” in their fields, and I thought it was worth looking into this claim with respect to one particular grouping of players.
The reason for this focus is threefold. First, I highly value offensive line play and enjoy evaluating the play of offensive linemen myself. Second, it represents a rare balance in the draft where quality offensive linemen are found throughout the draft (unlike wide receivers or edge defenders, for example) but also where there are notable decreases in talent at each level of the draft (unlike safety, for instance). Third, there are notable “evaluators” of offensive line prospects to discuss. Specifically, Brandon Thorn is widely considered one of the best in the business, and people relentlessly cite Pro Football Focus evaluations of offensive linemen as if they are meaningful in some way despite very little independent confirmation of their results.
In order to perform this evaluation, I needed a pool of players and a way of evaluating their performance. Because Thorn began publicly contributing his offensive line prospect rankings to Bleacher Report in 2021, I pulled the big boards from 2021-2023 using the one that was published closest to the draft itself, normalizing all for relative rank instead of absolute rank (for example, while Teven Jenkins was the 12th-ranked player on Bleacher Report’s overall board, he was Thorn’s third-highest offensive lineman, so he was ranked 3rd). Pro Football Focus needed to be considered, as mentioned, given their visibility in the field. Additionally, Windy City Gridiron’s own Jacob Infante graciously allowed me to evaluate his boards. I have previously found that Daniel Jeremiah’s rankings align with the NFL draft itself better than almost anyone else, so I included him. To provide an overarching frame of reference, I also considered the NFL Mock Draft Database’s consensus big boards. Each board went through the same normalizing process as mentioned above.
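For readers curious what that normalization looks like mechanically, here is a minimal sketch. The board contents and player labels below are illustrative placeholders, not the actual 2021 rankings.

```python
def relative_ranks(board, position):
    """Convert an overall big board into position-relative ranks.

    board: list of (player, position) tuples in overall board order.
    Returns {player: rank within that position group}, 1-indexed.
    """
    group = [name for name, pos in board if pos == position]
    return {name: i + 1 for i, name in enumerate(group)}

# Illustrative: a player ranked well down the overall board can still
# be the 3rd-ranked offensive lineman on that same board.
board = [("QB1", "QB"), ("OL1", "OL"), ("EDGE1", "EDGE"),
         ("OL2", "OL"), ("WR1", "WR"), ("OL3", "OL")]
print(relative_ranks(board, "OL"))  # {'OL1': 1, 'OL2': 2, 'OL3': 3}
```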
Note that there is an annual draft guide that many people use which I am not including here except in a limited capacity. I typically try to use public data or information that was willingly shared with me by the creator, but because I know there will be questions I will refer to aspects of this work as well.
Of course, it’s not enough to simply compare these individuals to the draft itself, because front offices also make mistakes. As a Bears fan, I’m only too aware of how readily a team can draft a player high just to have him falter. Unfortunately, not only is there a shortage of good stats on offensive linemen, there’s a shortage of mediocre stats as well. As a simple measure, players in this sample were scored on what percentage of the available games they played in and also what percentage of the available games they started, receiving a 10% multiplier on this score for each Pro Bowl they earned in this time. Any ties were broken first by number of starts and then by a lower blown block percentage as reported by Sports Info Solutions.
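A rough sketch of that scoring, for those who want to see it spelled out. One assumption worth flagging: the article’s description doesn’t specify exactly how the played and started percentages are combined, so this sketch simply sums them; the field names are hypothetical.

```python
def performance_score(played, started, available, pro_bowls):
    """Score a prospect by playing time, boosted 10% per Pro Bowl.

    Assumes the played-percentage and started-percentage are summed;
    the exact combination isn't specified in the write-up.
    """
    base = played / available + started / available
    return base * (1 + 0.10 * pro_bowls)

def sort_key(p):
    # Rank by score (descending), breaking ties first by number of
    # starts (descending), then by blown block rate (ascending).
    return (-performance_score(p["played"], p["started"],
                               p["available"], p["pro_bowls"]),
            -p["started"], p["blown_block_pct"])
```

Sorting a list of player dicts with `sorted(players, key=sort_key)` then yields the weighted performance ranking.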
Because the 2021 class has had four years to play out while the 2023 class has only had two, this will create some bunchiness in the data, with the numbers from 2021 being a little more “true” than the numbers from 2023. Some people will argue that playing time is a poor surrogate for quality, as teams will simply play highly drafted players regardless of how well they play. However, there is only a moderate correlation (0.53) between draft rank and the weighted performance rank described above.
In order to prevent a single “off” ranking or result from dominating the statistics, all ranks lower than 20th were normed to 20th place. This will tend to make boards seem more accurate than they are. Remember that.
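The capping step is trivial, but since it shapes every number that follows, here is what it amounts to:

```python
def cap_rank(rank, cap=20):
    """Treat any rank worse than 20th as exactly 20th, so a single
    badly missed ranking can't dominate the error statistics."""
    return min(rank, cap)

print([cap_rank(r) for r in [3, 17, 22, 48]])  # [3, 17, 20, 20]
```

A miss of 28 ranks and a miss of 2 ranks past the cutoff look identical after capping, which is exactly why the boards come out looking a bit more accurate than they really are.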
The Consensus Board at the Mock Draft Database was, not surprisingly, pretty accurate. It had the second-highest correlation with draft order of any of the boards (0.85). It was slightly ahead of Brandon Thorn’s rankings (a 0.79 correlation) for these three years, and Thorn was functionally at a dead heat with Infante (0.78). All of them crushed Pro Football Focus (0.68), but Daniel Jeremiah was the winner here (0.86).
Remember that analyst with a comprehensive draft guide that is not publicly available? The correlation between that draft guide for 2021 and 2022 and the draft itself ties Jeremiah at 0.86, with an interesting caveat to be discussed later.
To be fair, all of these boards are narrowing things down from thousands of college candidates to just a few hundred evaluees. A few impulsive decisions from GMs can also skew these results. However, at least when it comes to guessing the order offensive linemen might be drafted, the power of the masses is hard to beat for mortals, even if Daniel Jeremiah just barely does it. The only other result that really stands out is how poorly Pro Football Focus does compared to the others.
In simple terms, on average Daniel Jeremiah’s own board rankings came within 1.6 ranks of the order offensive linemen were actually drafted in, and in fact half of Jeremiah’s top prospects (i.e. half of his top ten offensive linemen in each class) were taken within a single rank of where he placed them. Meanwhile, Pro Football Focus “misordered” draft picks by 3 full ranks compared to what really happened, and only a third of their top prospects were within a single rank of where they were selected.
Of course, one answer could be that Pro Football Focus focuses on evaluating the quality of these players and not where they will be drafted. Basically, maybe they’re right and everyone else—including the league front offices—might be wrong. If that were to be true, it might show up in whether or not these players eventually earn playing time or Pro Bowls.
The reality is that everyone struggles here, likely because of the number of variables involved ranging from injury to coaching–to say nothing of the inherent limitations of my own performance ranking system. However, Thorn (0.57) and Jeremiah (0.56) stand out as the best, with a difference between their average ranking and the actual performance ranking of around 3.7 spots per prospect. The draft itself and Jacob Infante are tied (0.53) at around 4 spots off, with the consensus board (0.52) in the same range. That leaves Pro Football Focus (0.48) as the least accurate of the public boards, off by just over 4.2 spots on average.
The wording here is deliberate, because it is not entirely true that Pro Football Focus puts out the least accurate board that I have looked at. For the sample from 2021 and 2022, the large guide I mentioned before has a correlation of only 0.41 with performance–it’s off by 4.6 spots on average. At least as judged by this methodology, its ranks are a worse match to how players perform in the NFL than any of the major boards mentioned by name.
Within the entire pool of 121 prospects, Jeremiah had the most players placed within 5 spots of their eventual performance ranks with 89 (74%), while of the public boards Pro Football Focus had the most true failures, with 18 prospects placed more than 10 ranks out of position. Given that only 40 or so offensive linemen are drafted each year, that’s an average of six players per year misplaced by at least a quarter of the draft class—and that’s with the blunting effect that the methodology creates.
Again, even PFF’s “6-misses-per-year rate” is beaten by the aforementioned analyst, with 13 missed by 11+ ranks in 2021 and 2022 alone. Interestingly, 12 of those 13 are ranked within three relative slots of where they were drafted as opposed to how they performed, and 10 of the 13 are within one rank. So that guide definitely seems to be tightly in line with NFL offices, even to its own detriment.
Finally, if we instead narrow the pool down to how well each board does only when it comes to adequately projecting their top ten candidates in each draft, Jeremiah again takes home the win with 15 of 30 being within five spots. Pro Football Focus is again the worst with 9 prospects placed more than 10 ranks out of position.
This study provides yet another reason to question the value of Pro Football Focus. They consistently deviate not only from what the draft itself values, but they also seem to value things that do not manifest in measurable performance once players are drafted–except of course in their own evaluations. To me, at least, this is another reason to look at all of their evaluations with even greater skepticism. If other experts view things differently than they do, and if the NFL itself views things differently than they do, then one of the only things they seem to have going for them is that they agree with their own opinions.
However, in an effort to be fair to Pro Football Focus, it’s worth considering one more check. I averaged the ranks of Jeremiah and the NFL Draft itself and compared them to PFF’s ranks, noting all scores where PFF was at least 5 ranks off of the composite score and also more than 5 ranks off of both the draft and Jeremiah. These ranks then are the ones where Pro Football Focus most disagreed with the “received wisdom.” This gave me a pool of 20 players, with nine preferred by PFF and 11 preferred by both the draft and Jeremiah. Which group has played better since being drafted?
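The filter described above can be sketched as follows. The threshold logic matches the description in the text; the field names and sample data are illustrative assumptions.

```python
def pff_disagreements(players, margin=5):
    """Flag players where PFF's rank is at least `margin` spots off the
    Jeremiah/draft composite AND more than `margin` spots off each of
    the two individually."""
    flagged = []
    for p in players:
        composite = (p["jeremiah"] + p["draft"]) / 2
        if (abs(p["pff"] - composite) >= margin
                and abs(p["pff"] - p["jeremiah"]) > margin
                and abs(p["pff"] - p["draft"]) > margin):
            flagged.append(p["name"])
    return flagged

# Illustrative data: "A" is a player PFF liked far more than the
# consensus did; "B" is a player everyone roughly agreed on.
sample = [
    {"name": "A", "pff": 1, "jeremiah": 10, "draft": 12},
    {"name": "B", "pff": 5, "jeremiah": 6, "draft": 7},
]
print(pff_disagreements(sample))  # ['A']
```

Whether a flagged player counts as “preferred by PFF” or “preferred by the consensus” then just depends on which side of the composite their PFF rank falls.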
Pro Football Focus had nine “favorite” players. One was William Sherman, who was drafted in 2021 but has to date only played 6 snaps on special teams. The other eight, however, were in Sports Information Solutions’ databases and had an average blown block rate of 3.3% and an average penalty rate of 0.7%, for an error rate of 4% (median 3.3%). Conversely, eleven of these players were ranked more highly by the mainstream but PFF instead found other players more worthy. They had an average blown block rate of 2.7% and an average penalty rate of 0.6% (with an average error rate of 3.4% because rounding works that way sometimes); the median error rate was 2.9%. In short, when reviewed by a third-party evaluator once they entered the NFL, the prospects uniquely favored by PFF played worse than the players uniquely devalued by them.
Out of the entire pool of 20, of the nine players at or better than a 2.9% error rate (the “Teven Jenkins” level), seven of them were more favored by conventional wisdom than PFF: Creed Humphrey (3-time Pro Bowler), Landon Dickerson (3-time Pro Bowler), Josh Myers, Joe Tippmann, Alijah Vera-Tucker, Cole Strange, and Teven Jenkins. The two relative successes for PFF were Tommy Doyle and Zach Tom. The most notable other player in the pool of 20 is Pro-Bowler Cam Jurgens, who also was ranked more poorly by PFF than by the draft or Jeremiah.
It’s worth asking: if their grades are so inaccurate when checked at other times, how valuable can their in-season grades be?
Prior to this study, I would have been inclined to say that I trusted Brandon Thorn’s work evaluating offensive line prospects more than that of any other analyst, and I will no doubt continue to give credit to his opinions. However, I am reminded once again that if I were forced to pick a single draft analyst to recommend, it would have to be Daniel Jeremiah. It seems like every time I put together one of these projects, no matter what weight is given to different factors or what I’m looking for, he’s toward the top of any list that uses objective measures.
There is enough wiggle room in this study to allow adherents of any one board to continue to defend it. Some will no doubt even find a way to defend PFF. However, while the Mock Draft Database, Jacob Infante, and Brandon Thorn all offer reasonable alignment with the draft and with player performance—and while all are seemingly superior to PFF—it’s the man from the NFL Network who continues to show he’s the best.