PD’s Postulations: Recruiting Rankings Pt. II

Written by David Parker, January 11, 2013

In part one of this series, I discussed my analysis of the predictive power of team recruiting rankings on National Signing Day, measured by how well they forecast the success of those classes in the final polls over the four-year effective eligibility of that class. In this installment, I will get deeper into the specifics of the findings to demonstrate just how flawed their predictive powers are, and try to determine whether anyone should care about class rankings at all, and if so, why.

But first, I want to clarify the scope of this analysis. It is not a study of the predictive power of star ratings on individual players. There have been a couple of relatively well-circulated analyses of the accuracy of star ratings, and they have been very shallow in method and very hollow in findings. Overall I cannot argue with the very high-level takeaway that “Five-star good, no-star bad,” but beyond that very general conclusion, there isn’t a lot of meat on the bone. And that is not without reason. In order to perform a meaningful analysis of the accuracy of star ratings for individual players, the number of variables that would have to be controlled is far beyond the scope of sanity. In addition, there are so many fudge factors involved from the perspective of the recruiting services that the sizzle just isn’t worth the steak.

And I personally don’t believe the value of any one player is worth investigating. The exception would be a rare program-changer like Tim Tebow or Percy Harvin, and those players are well known to everyone as such. The tracking studies we have seen judging the accuracy of star ratings focus on individual statistics and honors at the college level and especially at the NFL level. But while it is always great to have an All-American or even a Heisman Trophy winner in your program, and of course it’s nice to follow NFL stars who played for your school, it doesn’t really matter in terms of making the program better — winning more games or championships — over the period of the player’s eligibility. Emmitt Smith shattered the Florida rushing record book and had a Hall of Fame professional career, breaking the NFL rushing yards and touchdown marks as well and placing himself permanently in the discussion as the best running back of all time. So if there had been a star rating system when he was a high school senior, his five-star score would have proven to be accurate. But over Emmitt’s three-year Florida career, the Gators outright stunk. The heavy reliance on the NFL aspect of a player’s career is ludicrous to begin with when judging the star ratings, because those stars are meant to project college careers, not NFL ones, and a college coach’s job in recruiting is to sign players who will succeed in his college program — not in the NFL. And so many players who starred or had strong careers in the NFL did next to squat in college (just off the top of my head, Will Hill, Antonio Cromartie, Jimmy Spencer, and almost every Georgia running back to make an NFL impact since Herschel Walker come to mind) that the measurement is invalid.

Ultimately, even if the individual player star rating systems were accurate at all three of the major recruiting services, my response is, “So what?” My interest in recruiting is how signing classes will help the Gators win more games and more championships, or how they will help other schools that compete with Florida do the same. If I cared about stockpiling great players who get all sorts of media buzz and adulation but almost never win championships, I would be a fan of FSU or USC. However, I am a fan of the Gators, and in Title Town, we measure excellence in wins and championships.

So you can see why I find it useful to conduct this analysis: does the ranking of entire recruiting classes against those of all other schools have any accuracy in projecting the success of those schools in terms of wins and championships, as demonstrated to a great extent by final poll rankings? The more wins you secure, the higher you will be ranked in general; if you win your conference title, you will usually receive a bump in the polls — especially in leagues that hold conference title games. It is not perfect, but it is the best team-to-team, poll-to-poll, apples-to-apples comparison for judging the meaning of any recruiting rankings out there. Now, with all that said, let’s look at some more of the specifics of the findings.

Da Debil in Dem Details

To set the stage again, the analysis examined the eight signing classes from 2002 to 2009, and measured their success by aggregating their final Coaches’ and AP poll rankings over the first four years on campus, the most effective eligibility window of the class. Let’s first examine a few years as an example of how random the signing class rankings can be in terms of predictive ability. In 2006, USC signed the top-ranked class, but wound up finishing fourth in four-year poll performance. Not terribly egregious. Florida was the king of that four-year span as far as success on the field, and was ranked second in 2006 recruiting, so neither the No. 1 team in the recruiting rankings nor the No. 1 team in the polls was a big miss. However, in that same year, there were some huge misses. Tracking forward, Notre Dame’s ranking was off by 34 spots (that is, ranked No. 6 in the 2006 aggregate recruiting rankings, but finished 40th in the nation in the combined four-year final poll rankings). The swing and miss on FSU’s ranking — another of 2006’s top 10 ranked signing classes — was even worse, over-ranking their class by 36 spots. Tracking backward (i.e., looking at the four-year final polls’ top 25 and seeing where those classes were ranked in 2006) uncovered even bigger blunders like Ohio State (10), Virginia Tech (25), TCU (46) and Boise State (50).

And those are just in the top 10, where the variance is smaller. In the rest of the top 25, it was like a Wild West gunfight in which the gunfighters were Don Knotts and Tim Conway (both in their current state of health), with misses like Missouri (33), Utah (38), Cincinnati (40) and West Virginia (45). The average margin of error in the 2006 recruiting rankings top 25 was 14 spots, while tracking backwards it was 19. In the top 10 alone it was 11 and 14, respectively.

In 2007, Florida won the recruiting title, but wound up finishing fifth in the aggregate polls over the next four years, as that span contained Urban’s only two years out of the last five that were not 13-1 finishes. Ohio State was the best-performing class of that year, after signing just the No. 13-ranked class in 2007. Overall that top 10 had an average miss of nearly 20 spots, with only one significant outlier (TCU by 53). Boise State, Notre Dame and North Carolina were all recruiting ranking failures of over 40 spots. Even when adjusting for outliers, you wouldn’t want to wager any of your own money on future success based on signing day rankings.

The least volatile top 10 in this eight-year span was the class of 2002 — the only class in the study with a single-digit margin of error in the top 10 looking both forwards and backwards. By comparison, the 2008 and 2009 signing classes were two of the worst in terms of predictive accuracy, so the rankings have become worse over the years, not better. Even as the best representative of class ranking accuracy in the last ten years, the 2002 class still had a margin of error of 14 spots tracking forward and 16 spots tracking backwards for the top 25. In the top 10, it snuck just under the double-digit ceiling at 9 and 7, respectively. Of the top 10 signing classes in 2002, five of the rankings missed their final poll predictions by only one spot, and six of them wound up finishing in the top 10 in the collective aggregate polls. That’s pretty darn solid. Had this been the norm, the findings would have looked much different. However, the very next year, the margin of error nearly doubled to 16, even for the top 10, where there were misses of 11, 14, 40, 41 and 44. The following year was another good one for the recruiting services, which actually projected the top two teams in the polls — USC and LSU — correctly. However, in 2005, the error rate ballooned again, this time shooting all the way up to 18 spots, and 21 for the top 10. That trend of very bad accuracy continued for the final four years of the analysis.

But a quick look at the anomalies explains why the recruiting services have any luck at all in predicting the success of signing classes, and why the casual observer often erroneously perceives the rankings to be generally on the money. Let’s consider the 2004 recruiting rankings. USC’s class was ranked No. 1, with LSU’s class ranked No. 2. Well, it should come as no surprise that the two teams that each claimed a national title just a couple of weeks before National Signing Day that year were USC (AP champions) and LSU (BCS champions). In fact, when you look at the entire top 10 in the aggregate final poll rankings for the 2003 season — released in January 2004, a matter of days before National Signing Day — the class of 2004 aggregate recruiting rankings varied a mere 4.9 positions from where those teams were ranked in the final polls of the season that had just ended. Over the eight years of this analysis, the average margin of error for top 10 recruiting class rankings that I discussed earlier was 14.5 spots. Over the same time, the average variance between the aggregate recruiting class rankings and the previous month’s final AP and Coaches Poll rankings was just 8.6. Over the eight years, the margin of error for top 10 recruiting class rankings nudged into single digits only twice; when looking at how closely the rankings mirrored (i.e., copied) the previous month’s final polls, the variance never reached double digits a single time.

The reason it appears to the casual observer that the class rankings seem to match the final polls is that they are looking at the same year’s final polls and recruiting rankings — not the final polls over the next four years that prove out the accuracy of the rankings.

Now obviously there is a natural national title bump that boosts a program’s recruiting in the signing class that immediately follows just a couple of weeks later, so you would expect that class ranking to be high, but that does not explain why the preponderance of top 10 finishers is mirrored in the recruiting rankings every year. Is it as simple as copying the top 10 or top 25 final poll rankings every year? No, that would be rather obvious. But it is not too different in concept: whichever teams are hot that year, their visit lists will be weighted more heavily in the recruiting rankings black box than those of teams that did not finish high in the polls.

To pick another glaring example of wild misses, tracking forward, the No. 1 signing class of 2005 was FSU; they didn’t even finish in the final top 25 in the country in either poll in any of the four years of the effective eligibility of that class. On the other side of the coin, Ohio State finished in the 30s or worse in every recruiting ranking in 2003, but finished No. 3 in the final poll rankings as a class over the next four years.

What it Means

These findings are not an indictment of the ability of these self-proclaimed recruiting gurus to judge talent, because truth be told they base their assessments on visit lists and buzz more than all other factors combined (and it is not very clear what “all other factors combined” even are). Given that fact, these findings may be an indictment of the talent evaluation of the upper third of the programs in the country, or more likely of their evaluation acumen and their ability to develop the talent once it is on campus. The outcome of the analysis should not serve as a blunt object with which to hammer the recruiting services, either. But it does demonstrate that their class rankings essentially have no predictive power or accuracy in forecasting the future of a program. And that seems to be the only reason to follow recruiting — or certainly the ranking systems. But that does not mean the rankings and the process of following them are useless to fans beyond the artificially created entertainment value. And it does not mean there is no significance to signing a highly rated recruit or a highly ranked class. It merely means that neither of those means much of anything by itself as far as ensuring the success of the program over that player’s or that class’s period of eligibility.

The primary takeaway here is that being ranked highly in recruiting IS significant — it means your program has a lot of very good raw materials with which to work over the next three to five years — but it is not a precise or even general predictor of future success in any capacity. The differentiating factors, as always, are matching the right players to the program, coaching them up and avoiding those unforeseen, largely uncontrollable impacts like injuries, transfers, coaching instability, etc. The analysis made no adjustment for this “program turbulence,” if you will, which might absolve the recruiting service rankings of some of the blame for their significant inaccuracies. It was ignored because the eight-year period is long enough to smooth out those influences on a program. We are in the era of almost instant program transformation. Urban Meyer won a national title in two years. So did Gene Chizik, Bob Stoops and Jim Tressel. Les Miles, Pete Carroll and Nick Saban all did it in three years, with Saban also doing it at LSU in four. Larry Coker even did it in his first year. All of those national titles were won since 2000, and in fact these coaches represent ALL of the national title-winning coaches since 2000 except Mack Brown, who took eight years to win his at Texas. And all the coaches on that list but Miles and Coker took over struggling programs. In 2011, Will Muschamp and Brian Kelly turned in national title-contending seasons just two and three years, respectively, after taking over programs that were in the absolute gutter. So eight years is by far a long enough sample duration to factor out any program transition troubles that might misrepresent a program’s recruiting success.

The Good News

The longer you follow recruiting, the more you realize the key to turning these signing classes into successful football programs over their period of eligibility goes far beyond the players signing each February. If you plan to win a lot of games and win championships, it is very important to sign great classes of athletes on a consistent basis. But the programs that use that raw talent to succeed are the ones with great coaching staffs who have not only an eye for talent, but an eye for talent that will fit their systems and their programs on a cultural level, and above all a high level of ability in developing that talent to play at the highest level within the system.

You see where I am going with this.

Florida currently sits atop two of the three major recruiting services’ class rankings and will likely finish there next month. In a vacuum, as the data tell us, that means nothing. But coupled with the fact that the head coach and his staff of assistants have in place a high-functioning machine of strength and conditioning, of education and development within their system, and of game planning in all three phases of the game, signing a No. 1 or otherwise highly ranked class at Florida right now is a very significant event. With system and staff stability in place, and their demonstrated ability, a highly ranked class can be a very good omen for future success. Not because of the evaluation of the recruiting services, but because in the case of schools like Florida, Alabama and LSU, to name a few, the hype and buzz surrounding highly rated prospects signing up for duty is well warranted. There is reason to celebrate a top-ranked class for the Gators this year.

It doesn’t matter what other programs do or where they rank this year — the success of this Gators class will be borne out by the strength of the staff’s abilities and the stability of the Florida program’s system. How much they achieve will depend on the program, and certainly not on the strength of the endorsement of a No. 1 class ranking in February. And for one last mention of predictive accuracy, I will predict that it will be very fun for Gators fans to watch this class and the rest of the program develop, perform and bring home serious hardware over the next four years. Since this class will rank among the all-time best in school history in terms of National Signing Day regard, we’ll definitely have to check back in four years and see how accurate that prediction is. Until then, remember that each day is a gift; that’s why they call it the present.

About David Parker

One of the original columnists when Gator Country first premiered, David “PD” Parker has been following and writing about the Gators since the eighties. From his years of regular contributions as a member of Gator Country to his weekly columns as a partner of the popular, now-defunct niche website Gator Gurus, PD has become known in Gator Nation for his analysis, insight and humor on all things Gator.
