Thursday, May 3, 2012

Mel, McShay, and Mayock Make As Many Mistakes As You and Me

        PunditTracker.com recently ranked the three most famous NFL draft prognosticators – Mel Kiper, Jr., Todd McShay, and Mike Mayock – on their individual “mock draft accuracy”. To accomplish this goal, PunditTracker “use[s] a variability measure based on the difference between when each player was projected to be picked and when he was actually picked.” This means that if a player was projected to be the 15th pick in the draft and ended up being the 20th pick, the variability measure is five. After completing this analysis, PunditTracker found McShay to be the best predictor of where draft picks would be selected for the fourth year in a row.
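For concreteness, the measure can be sketched in a few lines of Python. This is only our reading of PunditTracker's description – we are assuming the per-player differences are combined as a simple mean absolute difference, and the player names and pick numbers below are made up for illustration.

```python
# Illustrative sketch of a "variability measure": the mean absolute
# difference between where a player was projected to go and where he
# actually went. Assumes a simple average; players/picks are hypothetical.

def variability(projections, actuals):
    """Average absolute gap between projected and actual draft slots."""
    diffs = [abs(projections[player] - actuals[player]) for player in projections]
    return sum(diffs) / len(diffs)

projected = {"Player A": 15, "Player B": 3, "Player C": 22}
actual    = {"Player A": 20, "Player B": 3, "Player C": 10}

print(variability(projected, actual))  # (5 + 0 + 12) / 3 = ~5.67
```

Under this scoring, a prognosticator who nails every slot scores zero, and a lower score means a "better" mock draft – which is exactly the sense in which McShay came out on top.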

  There are a couple of issues with measuring the success of the prognosticators in this manner. The first is whether anyone, outside of the draft picks and their agents, really cares what number players are selected in any draft – especially the second after the draft is over. Quick, try to name all of the top-ten picks of this year’s NFL Draft in order. We are willing to bet that most cannot complete this task, nor do they care to try.

  The second issue is that PunditTracker.com should really be rating the prognosticators on the performance of the picks after they are selected, not on where they are drafted. Each of these prognosticators rates prospects both before and after the draft – with teams often receiving grades for their selections. PunditTracker.com could have evaluated how well Kiper, McShay, and Mayock’s draft grades / assessments matched players’ actual future performance in the NFL. This is likely the assessment that executives, coaches, fans, and players would be more interested in when evaluating prognosticators – not where players were selected.

         It is possible that this analysis has already occurred, and we would be happy if someone would send it to info@blocksixanalytics.com. However, we are comfortable stating that the predictive capabilities of Kiper, McShay, and Mayock are no better than those of experts in many other fields – very poor. In his book The Wisdom of Crowds, James Surowiecki shows that experts are constantly and consistently very poor predictors of future performance or behavior. In fact, he shows that crowds of people with limited knowledge of a field are often better able to make decisions than experts. For example, Surowiecki shows that a few hundred amateur participants in the Iowa Electronic Markets project at the University of Iowa, trading political campaigns' "stocks", more accurately predicted election results than Gallup polls did in the 1990s and early 2000s.
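The statistical intuition behind the wisdom-of-crowds result can be shown with a toy simulation. This is purely illustrative – we are assuming each amateur's guess is the true value plus independent noise, and all of the numbers are made up. When errors are independent, averaging hundreds of noisy guesses cancels most of the noise, while a single expert's error has nowhere to hide.

```python
# Toy illustration of the wisdom-of-crowds effect. Assumes each amateur's
# guess is the true value plus independent noise; all numbers are invented.
import random

random.seed(42)  # fixed seed so the illustration is reproducible
true_value = 50.0  # e.g., a candidate's true vote share

# One "expert": less noisy individually, but with a systematic bias
expert_guess = true_value + 3.0 + random.gauss(0.0, 5.0)

# A crowd of 300 amateurs: noisier individually, but unbiased on average
crowd_guesses = [true_value + random.gauss(0.0, 15.0) for _ in range(300)]
crowd_average = sum(crowd_guesses) / len(crowd_guesses)

print(abs(expert_guess - true_value))   # the expert's error
print(abs(crowd_average - true_value))  # the crowd's error, typically much smaller
```

With individual noise of 15 points, the crowd average's typical error shrinks to roughly 15 / sqrt(300), or under a point – which is the mechanism that lets an unglamorous market of amateurs outpace a polished forecaster.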

        For sports organizations, this is a particularly troubling finding. It is hard to find an industry that spends more money, time, and energy on its scouting / player development operations. For example, McShay, Kiper, and Mayock are just three of the hundreds of general managers, scouts, assistants, etc. who have watched thousands of hours of game film, attended hundreds of games, and completed numerous interviews with draft picks to determine the future professional performance of amateur players. More importantly, professional sports teams annually pay millions of dollars to draft picks in a variety of sports based on the ability of their experts to predict future performance. Even with all of these resources spent on evaluating college and high school players, drafts in virtually every sport are often called “inexact sciences”. This is a nice way of saying that teams are wrong a large portion of the time when it comes to selecting players based on their expected future performance. Virtually every fan of a professional sports team can point to a time a team spent millions of dollars on a draft pick and received bubkus in return (and bubkus is an “official” B6A term for zero).

        All of this should lead to a natural question: why are sports organizations paying so much money for experts in player evaluation if it is so difficult to predict future performance? In addition, what is really the value of relying on “experts” like McShay, Kiper, and Mayock if they are wrong so much of the time? McShay and Kiper in particular have risen to prominence for doing something (predicting the success of future draft picks) where they are no more likely to be successful than a group of “average” people sitting in a room making the same evaluations.

         Instead, sports organizations can change their approach to dealing with experts and their drafts. There is a large amount of evidence that experts are really good (and much better than amateurs) at dissecting situations that have occurred in the past. Having experts evaluate where and how teams may have made mistakes in past drafts would likely be extremely valuable to sports organizations. When it comes to the draft itself, teams would likely be as well served, if not better, by creating markets where their fans could vote, trade, and/or “bid” on players in a particular draft. If nothing else, teams definitely need to consider shifting money away from scouting operations, because predicting the future is really, really, really hard.





