Spring Training is underway! There is new stuff coming out of camps every day now, a respite from the off-season doldrums where we hashed and re-hashed the same few unresolved debates. Who knew that they were running a special on hamate bone repair this month?

As is my wont, I tend to zig on writing topics when everyone else zags. New and fresh spring training camp content is hard to come by, so I’m going to sojourn off to previously plowed fields on drafting (and signing IFAs) and developing minor league players. As discussed in other venues, I have been working on some methodological changes to my series on draft-and-development. A lot of the focus has been on trying to better discriminate the draft part from the development part (difficult to do!). I have some preliminary results. They need more work, which will be greatly helped by your contributions in the comments section.

In pondering how to discriminate development from drafting, I decided the most intuitive way would be to evaluate how players improved (or not) from their first (draft) ranking (in FV terms) to where they have ended up. To be sure, one can’t actually determine whether a single player’s improvement from a 40+ grade to a 50 outcome is because of development or because the initial rating was artificially low. My sense was that although individual outcomes would be difficult to tease apart, I could look at organization-wide outcomes and see if there were trends that transcend individual rating misses.

For example, I might (in theory) see a team like the Cardinals consistently turn a higher percentage of 45 FV prospects into average (50 FV) MLB players. A single under-rated player is probably not uncommon. A repeating pattern of the same could suggest the fine hand of player development, no?

So, I endeavored to acquire prospect ranking data. I got as far back as 2017 from the FanGraphs prospect Board, so that defined my population (and timeline). This dataset contains nearly 19,000 prospect evaluations, so it is a deep set, albeit an error-riddled one. Ugh! Lots of data wrangling with this set. More to come, too.

You’ve heard me say that it is difficult to evaluate a draft before 7-10 years have passed, and yet I only have a nine-year data set to analyze. So, I also endeavored to shorten that window a bit, borrowing an idea from Ben Clemens of FG, who proposed that it is rational to take ZiPS three-year forward projections and append them to a young player’s short history to develop a more comprehensive view of said player/prospect. So, I joined 5,700 ZiPS projections that came out a few weeks ago with the 19,000 prospect ratings, covering some 4,300 different prospects. No easy feat. I’ve been working on this all winter, and I’m not done yet.
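For the mechanically curious, here is a rough sketch in Python/pandas of the kind of join involved. The file names and column names (player_id, fv, proj_war_3yr, and so on) are my own stand-ins for illustration, not the actual Board or ZiPS export formats.

```python
import pandas as pd

# Hypothetical file names and columns, not the real FanGraphs Board or ZiPS exports.
rankings = pd.read_csv("prospect_rankings_2017_2025.csv")  # ~19,000 rows: player_id, year, fv
zips = pd.read_csv("zips_3yr_projections.csv")             # ~5,700 rows: player_id, proj_war_3yr

# Collapse the rankings to one row per prospect: first (draft/signing-time) and latest FV grade.
per_player = (
    rankings.sort_values("year")
            .groupby("player_id")
            .agg(first_fv=("fv", "first"),
                 last_fv=("fv", "last"),
                 num_rankings=("fv", "size"))
            .reset_index()
)

# A left join keeps every ranked prospect; players with no ZiPS projection stay NaN
# (the "waste bin" group discussed later).
merged = per_player.merge(zips, on="player_id", how="left")
print(len(merged), "prospects,", int(merged["proj_war_3yr"].notna().sum()), "with a projection")
```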

In essence, as I describe players/prospects, I am describing what ZiPS+DC thinks this player is today and will be three years hence. If you accept projections as a reasonable basis for analysis, then I’ve shortened my window to 4-7 years, which gets me inside that 2017 cut-off (I can find no reliable electronic data source of prospect grades prior to that year).

I had to make a couple of other methodological choices, which I invite you to comment on. One is, I’ve calculated each player/prospect’s actual-plus-projected WAR value and divided that value by that player’s MLB seasons minus 1 to create an “average WAR.” ZiPS appears to forecast everyone three years out, including young-ish prospects such as Raniel Rodriguez, which I found handy. Thus, every player has a minimum of three seasons of data, more if they’ve made their MLB debut. I used that AverageWAR to assign an FV value for what that player is today and is expected to be in the future, as compared to his prospect peers (not all players). This value is completely driven by ZiPS projections plus actual production, and it stands in contrast to the scouting grades I compared it with.

Then, I distributed each player’s Average WAR along the 20-80 scale, using the guidance that each 10 points is one standard deviation. Ergo, 68% of all prospects will have what I term an “Adjusted FV” between 40 and 60, roughly 95% will fall between FV 30 and FV 70, and the remaining ~5% will occupy the nether regions of 20 and 80 FV. In practice, I ended up with more 20’s and 30’s because many prospects don’t make an MLB debut, don’t achieve a 35 or higher FV, and have no actual production nor any ZiPS projection, so they go into the waste bin.
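Here is a minimal sketch of that Average WAR to Adjusted FV conversion. The column names (career_war, proj_war_3yr, mlb_seasons) are again my own stand-ins, and the “MLB seasons minus 1” denominator is taken straight from the description above.

```python
import pandas as pd

def adjusted_fv(df: pd.DataFrame) -> pd.Series:
    """Sketch of the Average WAR -> Adjusted FV mapping described above.
    Columns (career_war, proj_war_3yr, mlb_seasons) are hypothetical stand-ins."""
    # Actual production plus the 3-year ZiPS projection, averaged per the text.
    avg_war = (df["career_war"] + df["proj_war_3yr"]) / (df["mlb_seasons"] - 1)

    # Standardize against prospect peers: 50 is average, every 10 FV points is 1 SD.
    z = (avg_war - avg_war.mean()) / avg_war.std()

    # Snap to the familiar half-grade buckets (40, 42.5, 45, ...) and cap at 20-80.
    return ((50 + 10 * z).clip(20, 80) / 2.5).round() * 2.5
```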

My first test was to evaluate the prospects/players who grade out at 80 from their performance and projection. A total of 5. The rarest of the rare, the top 0.3%. You can see the list below. These are definitely outlier performances (beyond 2 standard deviations from average). The list passes the eye test, no?

Prospects who perform at 80 grade (2 or more SD from average)

| PlayerName | primary_position_name | pitcher.type | careerWAR | projectedWAR | firstFV | lastFV | Adjusted.FV |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Shohei Ohtani | Two-Way Player | Starter | 49.6779 | 18.82581 | 70 | 70 | 80 |
| Tarik Skubal | Pitcher | Starter | 19.2694 | 17.80266 | 45 | 60 | 80 |
| Bobby Witt Jr. | Shortstop | Starter | 26.7328 | 17.60179 | 55 | 65 | 80 |
| Garrett Crochet | Pitcher | Reliever | 11.8924 | 15.56059 | 45 | 50 | 80 |
| Paul Skenes | Pitcher | Starter | 10.77 | 15.51154 | 60 | 65 | 80 |

Data courtesy of FanGraphs | ZiPS

I scaled the AdjustedFV value separately for starters, relievers, and position players. In the list above, you are seeing the top 0.3% of each group. No relievers performed at 80 FV. FG tends to scale all pitchers to WAR per 200 IP for comparison purposes, but I found the 200 IP benchmark a bit anachronistic (this is a modern data set, and who pitches 200 IP anymore?), and leverage varies a lot between starters and relievers, so I chose to scale within like cohorts. Tell me if you agree with a list that shows Devin Williams as more valuable since he broke in than, say, Dakota Hudson.
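To be concrete about that cohort-scaling choice, here is how it might look in code. The “cohort” label (starter / reliever / position player) is something I attach myself for this exercise, not a Board field, and the column names continue the hypothetical ones from the sketches above.

```python
import pandas as pd

def adjusted_fv_by_cohort(df: pd.DataFrame) -> pd.Series:
    """Same 20-80 mapping as above, but standardized within each cohort
    (starters, relievers, position players) instead of one pooled distribution."""
    def scale(avg_war: pd.Series) -> pd.Series:
        z = (avg_war - avg_war.mean()) / avg_war.std()
        return ((50 + 10 * z).clip(20, 80) / 2.5).round() * 2.5

    # 'cohort' and 'avg_war' are hypothetical columns carried over from the sketches above.
    return df.groupby("cohort")["avg_war"].transform(scale)
```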

If this passes the eye test, then the whole data set produces some MLB-wide averages we can begin to compare.

MLB-wide prospect outcomes since 2017

| Group | Total | MultipleRankings | TrendDown | TrendDownPct | TrendUp | TrendUpPct | Traded | TradedPct | NoChgPct | BeatProjection | BeatProjectionPct | UndershotProjection | UndershotPct | HitProjection | HitProjectionPct |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
|  | 4297 | 3710 | 920 | 25% | 910 | 25% | 713 | 19% | 51% | 1159 | 27% | 2846 | 66% | 292 | 7% |

Data courtesy of FanGraphs | ZiPS

Here we see the total of 4,297 players. Most (3,710) of the players have more than one ranking, so for most prospects we can see how they evolved a bit in the minor leagues. Note that as prospects are re-evaluated (annually or semi-annually), roughly 25% go up and roughly 25% go down. A nice even distribution. I find myself surprised that 50% of original rankings remain unchanged through a minor leaguer’s career. Realize that 3,700 players get 18,500 rankings, so that tells me initial FV grades remain pretty static across MiLB. Interesting. I expected more volatility.
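If you’re wondering how the trend columns fall out of the data, it is essentially a first-grade versus last-grade comparison per player. A rough sketch, continuing the hypothetical `merged` frame and column names from the join sketch above:

```python
import numpy as np

# Compare each player's first and last FV grade to bucket the trend.
# Only players with more than one ranking can trend at all.
multi = merged[merged["num_rankings"] > 1].copy()
multi["trend"] = np.select(
    [multi["last_fv"] > multi["first_fv"], multi["last_fv"] < multi["first_fv"]],
    ["up", "down"],
    default="no change",
)
print(multi["trend"].value_counts(normalize=True).round(2))
```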

Another tidbit to observe: about 15% of prospects get traded during their MiLB career. The true number is undoubtedly a bit higher, because in the data I only see prospects who 1) change teams and 2) have a high enough ranking that they are ranked in both organizations. Is 15% a surprisingly high or low number to you?

Here is the fun one. In spite of the somewhat sticky nature of the initial FV grades given, actual output plus current projection, when converted to the AdjustedFV, results in only a 6% hit rate. Said another way, 94% of prospects who go on to accumulate enough juice to collect actual fWAR or gain a 3-year projection come in at least ½ of a standard deviation off their initial FV grade (which half the time is their final FV grade, too).
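For clarity, the beat / hit / undershoot buckets can be read as a half-standard-deviation (5 FV points) tolerance around the first grade. A sketch, again with my hypothetical column names; whether a 2.5-point half-grade miss counts as a “hit” is my assumption here.

```python
import numpy as np

# A "hit" is an Adjusted FV within half a standard deviation (5 FV points) of the
# first grade; 5 or more points above is a beat, 5 or more below is an undershoot.
diff = merged["adjusted_fv"] - merged["first_fv"]
merged["outcome"] = np.select(
    [diff >= 5, diff <= -5],
    ["beat projection", "undershot projection"],
    default="hit projection",
)
print(merged["outcome"].value_counts(normalize=True).round(2))
```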

That is a league wide look across all prospects. To get closer to how the Cardinals are doing, I wanted to break it down by each FV grade.

| firstFVGroup | Total | MultipleRankings | TrendDown | TrendDownPct | TrendUp | TrendUpPct | Traded | TradedPct | NoChgPct | BeatProjection | BeatProjectionPct | UndershotProjection | UndershotPct | HitProjection | HitProjectionPct |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 35 | 2 | 2 | 0 | 0% | 2 | 100% | 0 | 0% | 0% | 1 | 50% | 1 | 50% | 0 | 0% |
| 37.5 | 1369 | 1194 | 2 | 0% | 360 | 30% | 195 | 16% | 70% | 338 | 25% | 1031 | 75% | 0 | 0% |
| 40 | 1722 | 1447 | 398 | 28% | 282 | 19% | 302 | 21% | 53% | 441 | 26% | 1029 | 60% | 252 | 15% |
| 42.5 | 421 | 388 | 172 | 44% | 91 | 23% | 73 | 19% | 32% | 97 | 23% | 317 | 75% | 7 | 2% |
| 45 | 472 | 409 | 213 | 52% | 107 | 26% | 89 | 22% | 22% | 175 | 37% | 287 | 61% | 10 | 2% |
| 47.5 | 77 | 70 | 25 | 36% | 27 | 39% | 6 | 9% | 26% | 22 | 29% | 52 | 68% | 3 | 4% |
| 50 | 140 | 125 | 63 | 50% | 27 | 22% | 32 | 26% | 28% | 60 | 43% | 75 | 54% | 5 | 4% |
| 55 | 65 | 55 | 38 | 69% | 10 | 18% | 13 | 24% | 13% | 22 | 34% | 40 | 62% | 3 | 5% |
| 60 | 22 | 19 | 8 | 42% | 4 | 21% | 3 | 16% | 37% | 1 | 5% | 11 | 50% | 10 | 45% |
| 65 | 3 | 1 | 1 | 100% | 0 | 0% | 0 | 0% | 0% | 0 | 0% | 3 | 100% | 0 | 0% |
| 70 | 4 | 0 | 0 | NA | 0 | NA | 0 | NA | NA | 2 | 50% | 0 | 0% | 2 | 50% |

Here, you see the same interesting tidbits, broken down by FV. We can safely ignore the extreme ends of the spectrum (35, 65, and 70) as small sample sizes, but in the middle there seems to be a story.

For instance, we can see in this view that most players traded (~500 of the total ~700) fall in the FV 35+ or FV 40 ranks. Very few teams trade 45+ and up players, partially because they don’t have many to trade. When evaluating trades for prospects, take note. Remember this when we get to the Cardinals.

Also note that the FV groups that tend to trend up the most are 35+, 45, and 45+. Almost universally, the trend-downs cluster in the upper echelons of the initial rankings. When evaluating draft picks, take note. FV 45 is an odd group: far and away the group of players most likely to move off an initial rating, going down about half the time.

Take a look at the “undershoot” column. These are the prospects who have performed (or are projected to perform) lower than their initial FV at draft/signing time. A rule of thumb would be that something like 70% of prospects undershoot. The percentages improve a bit with the FV 55 and up groups, but those numbers are so small that a large SSS stamp is posted on them.

This is a Cardinals blog, after all, so we should talk about them, no?

Here is the Cardinal prospect-only breakdown, following the same pattern.

| firstFVGroup | Total | MultipleRankings | TrendDown | TrendDownPct | TrendUp | TrendUpPct | Traded | TradedPct | NoChgPct | BeatProjection | BeatProjectionPct | UndershotProjection | UndershotPct | HitProjection | HitProjectionPct |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 37.5 | 47 | 41 | 0 | 0% | 14 | 34% | 10 | 24% | 66% | 11 | 23% | 36 | 77% | 0 | 0% |
| 40 | 55 | 52 | 9 | 17% | 12 | 23% | 21 | 40% | 60% | 17 | 31% | 32 | 58% | 6 | 11% |
| 42.5 | 13 | 12 | 3 | 25% | 5 | 42% | 3 | 25% | 33% | 4 | 31% | 9 | 69% | 0 | 0% |
| 45 | 14 | 13 | 8 | 62% | 1 | 8% | 6 | 46% | 31% | 7 | 50% | 5 | 36% | 2 | 14% |
| 47.5 | 1 | 1 | 0 | 0% | 0 | 0% | 0 | 0% | 100% | 0 | 0% | 1 | 100% | 0 | 0% |
| 50 | 12 | 11 | 3 | 27% | 2 | 18% | 6 | 55% | 55% | 7 | 58% | 5 | 42% | 0 | 0% |
| 55 | 2 | 1 | 1 | 100% | 0 | 0% | 1 | 100% | 0% | 0 | 0% | 2 | 100% | 0 | 0% |

See anything? First, the high-floor, low-ceiling draft approach jumps out. Interestingly, even now they are, percentage-wise, a little light in the 40+, 45, and 45+ ranges. That is after all the trades and last year’s draft.

One thing that stands out to me: the Cardinals have been involved in trades of their 45 and 50 FV prospects at double the league-average rate. Remember when I wrote earlier that teams don’t appear to like to trade these prospects? The Cardinals were involved in 6 of the 32 FV 50 trades over the period. I gather that is mostly a reflection of this past off-season, but I have not proven that yet. I would hate to see how this data looked for the Cardinals before, say, June 2025.

Although the numbers are small, it is noteworthy that 11 of 15 players graded 45+ or higher have come in (or are projected to come in) under their original draft projection. That would be about 70%, or right on league average for undershooting in that range. My takeaway on this? Probably that the Cardinals’ development program, in falling back, fell back to league average. Not good enough to sustain their competitive model, but not collapsed either.

I could go a lot of ways with this data. More clean-up is needed. I would love to backcast a bit farther. Thoughts? Questions? What does this make you wonder about that I can explore more?

I’m off to Florida early next week. I will report back while I’m there and recap after I return. If you have any questions for me to explore, put them in the comments and I will try. I have an extensive list. I believe my press credentials are ready.