A few days ago, I wrote that France closed out Belgium with a master class in wasting time. I appreciate all of the views it received and the Twitter dialogue it has generated, much of which has seemingly come from the two countries involved in the match. Many great questions arose – namely, how France's time-wasting in Tuesday's semifinal actually stacks up relative to other such performances in this World Cup and beyond.

That is a question I absolutely wanted to explore, and I'd certainly have done it sooner if I could wave a magic wand and have the start-stop-and-cause data for every match at my fingertips. But in reality, it requires that I re-watch the end of every match. (Maybe there's some algorithm out there that can do it, though I'd argue you sometimes need human judgment to determine the primary cause of certain delays.)

In any case, I went ahead and conducted the same "time-wasting" analysis for five additional matches from the 2018 FIFA World Cup knockout stage. I specifically focused on matches with a one-goal margin for all or nearly all of the 71st minute through the end of stoppage time, which mirrors the situation in France v. Belgium and is the most likely circumstance in which the side with the lead will waste time.

So the sample now includes France v. Belgium (1-0, Semifinal); Brazil v. Mexico (2-0, Round of 16); Belgium v. Brazil (2-1, Quarterfinal); Uruguay v. Portugal (2-1, Round of 16); Sweden v. Switzerland (1-0, Round of 16); and England v. Colombia (1-1, Round of 16). Brazil had a one-goal lead from the 51st to the 88th minute, and England had a one-goal lead from the 57th minute to the 93rd minute, so both of those qualified.

You can scroll down to see the play-by-play details for all six matches. The headline is that France did indeed waste time best – which is to say, most – among World Cup teams with a one-goal lead in this year's knockout stage. Brazil was not far behind in their defeat of Mexico. Given the differences in stoppage time, the best measurement to lean on is the percentage of match time after the 70th minute squandered by each side:

As we observed in the original story, France wasted 12 minutes and five seconds of the final 26 minutes and 13 seconds of their semifinal with Belgium, which represented 46% of all match time after the 70th minute. Brazil wasted 11:30 of the final 26:07 with Mexico, accounting for 44% of remaining time. Belgium milked some clock in their 2-1 quarterfinal victory too, but not nearly to the level of France and Brazil.
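
If you want to sanity-check the headline numbers yourself, the math is nothing more than mm:ss arithmetic on the figures quoted above. Here's a minimal sketch in Python – the helper names are mine, not part of my actual tracking sheet:

```python
def to_seconds(mmss: str) -> int:
    """Convert an 'MM:SS' string to total seconds."""
    minutes, seconds = mmss.split(":")
    return int(minutes) * 60 + int(seconds)

def wasted_share(wasted: str, window: str) -> float:
    """Fraction of the post-70th-minute window lost to stoppages."""
    return to_seconds(wasted) / to_seconds(window)

# Figures quoted above
print(f"France v. Belgium: {wasted_share('12:05', '26:13'):.0%}")  # ~46%
print(f"Brazil v. Mexico:  {wasted_share('11:30', '26:07'):.0%}")  # ~44%
```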

Brazil wasted time almost as well as France

Brazil-Mexico's final stretch was an entertaining re-watch – depending, of course, on what your definition of entertaining is. I fast-forwarded as always to the 70:00 mark, and sure enough, play had already been interrupted by an injury to Brazil's Willian back at 69:16. The action didn't resume until 70:16. (For purposes of this analysis, I only counted the 16 seconds of stoppage that occurred from the 70:00 point onward.)

Five seconds later (!!!!!), Neymar was writhing on the sideline in one of his finest performances of the World Cup. (I guess Miguel Layún stepped on him. You saw the GIFs and memes. You be the judge.) That resulted in a two-minute-and-two-second delay. Three-and-a-half minutes after that, he was back on the ground for a 48-second delay. Keeping track? That's 3:50 worth of Brazilian injuries in a span of 6:46.

As if to one-up Neymar, Brazil's Thiago Silva took 1:45 off the clock with an injury that started at 81:47. Play resumed at 83:32, and for all his agony, Silva was back on the pitch 10 seconds later. In the period studied, Brazil burned 5:35 on injuries, 2:12 on subs, and 1:17 after a goal. That's 9:04 in wasted time after 69:17 alone. Surely more was wasted earlier in the half. Yet only six minutes of stoppage time were added.

Four of Mexico's five fouls after the 70th minute sent a Brazilian player to the ground in histrionics. The only one that didn't came after Brazil secured a two-goal lead in the 88th. Yet somehow, when Brazil was trailing Belgium by a goal late in the next match, the Seleção avoided injury altogether. Brazil wasted only 2:34 after the 70th minute of that match – about one-fifth the amount they wasted against Mexico.

Everybody does it when they're winning (albeit to different degrees)

The winning side wasted more time than the losing side in all six of the knockout stage matches that carried a one-goal margin for all or nearly all of the end of regulation. France and Brazil were comically far ahead of the pack, followed by Belgium, and then Uruguay, Sweden, and England. (England conceded a 93rd-minute goal to Colombia, so maybe they should have taken a time-wasting page from France and Brazil.)

There's a ton of detail below, but I wanted to share it all because different folks will find different parts interesting. You'll note that winning teams were responsible for 71% of all time wasted after the 70th minute in these six matches. They were injured 11 times for a total of 13:59 (average of 1:16), whereas the trailing side was injured just once for a mere 22 seconds (Switzerland in the 75th minute against Sweden).

The winning side delayed the match with a sub nine times for a total of 8:53 (average of 0:59), while the losing team did so only twice for 1:19 (average of 0:39). (That doesn't mean losing sides didn't make subs after the 70th minute; it just means that when they did, an equal or greater factor was the primary cause of the delay.) Teams with the lead took twice as long on free kicks (31 seconds versus 15), and they took three times as long on goal kicks (31 seconds versus 11) and throw-ins (17 seconds versus six).

[Again, the data below only counts the primary cause of delay. Injuries and substitutions generally take longer than whatever precipitated them, so they supersede the other acts listed. If a sub were made during a goal kick, for example, it's counted in the sub category, not as a goal kick. If an injury occurred to draw a free kick, it's counted in the injury category and won't show up in the free kick data. So for the most part, you can trust the corner, free kick, goal kick, and throw-in times as not elongated by other factors.]
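
For anyone who'd want to automate that bookkeeping, the rule amounts to a simple precedence order in which injuries and substitutions outrank whatever restart happened to be on deck. Here's a rough sketch of that logic – the exact ordering below is my reading of the note above, not an official scheme:

```python
# Higher value = takes precedence when multiple causes overlap.
PRECEDENCE = {
    "injury": 5,
    "substitution": 4,
    "goal": 3,
    "free kick": 2,
    "corner": 2,
    "goal kick": 1,
    "throw-in": 1,
}

def primary_cause(causes: list[str]) -> str:
    """Return the single cause a stoppage gets charged to."""
    return max(causes, key=lambda c: PRECEDENCE.get(c, 0))

# A sub made during a goal kick is logged as a sub, not a goal kick.
print(primary_cause(["goal kick", "substitution"]))  # substitution
```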

Detail for each of the six matches

Match flow can affect the degree to which a team has the opportunity to waste time. In the Round of 16, Switzerland trailed Sweden 1-0 from the 66th minute onward. But the Swiss maintained pressure and won six corners in the final 20 minutes plus stoppage time alone, losing 2:34 to a worthy but ultimately futile cause. (No other team in these six matches won more than two corners during that final run of the match.) The Swiss attack led to only two Swedish goal kicks, giving the Swedes less chance to burn clock.

These charts are sorted by how much time the winning side wasted after the 70th minute. If you scroll all the way down to England v. Colombia, you'll see that I highlighted a continuous in-play stretch of three minutes and 31 seconds, from 74:18 to 77:39. That was by far the longest uninterrupted run of the periods studied in these six games. The next-highest was 1:41, during stoppage time of Sweden v. Switzerland.

The average continuous play was 29 seconds – followed by, on average, 29 seconds of stoppage or delay.

Times presented represent best estimates based on analysis of Fox Sports TV coverage. Three moments are asterisked because the camera cut away from the field. Other reasonable analyses might arrive at slightly different time estimates. Data was compiled and analyzed by ELDORADO. All charts and graphs herein were created by ELDORADO.

France defeated Belgium 1-0 in the first World Cup semifinal on Tuesday, withstanding a few late pushes from The Red Devils with the help of two non-calls outside the box – and a master class in wasting time.

I reviewed the final 26 minutes of the match (70:00 through stoppage time) – during which France's time-wasting seemed to be most pronounced – and tracked starts and stoppages, their causes, and which team controlled each restart. Those final 26 minutes featured only 11 minutes and 52 seconds of action.

You can see the details below. Keep in mind that FIFA considers certain stoppages like throw-ins and goal kicks to be "entirely natural," noting that a time allowance should be made only when delays are excessive. But at what point does "natural" for a goal kick or throw-in end and "unnatural" begin? FIFA separately instructs that injuries, substitutions, time-wasting, and celebrations should be factored into stoppage time.

France took four goal kicks during the final 26 minutes and shaved 30 seconds, 30 seconds, 45 seconds, and 27 seconds off the clock. (The third coincided with a French substitution). Their three late-match throw-ins burned 23 seconds, 15 seconds, and 23 seconds. Altogether, 3 minutes and 13 seconds of the final 26 minutes were paused for goal kicks and throw-ins – about one-eighth of the game's home stretch.

French Injuries (3) = 4 minutes and 8 seconds (71' on)

Injuries to France's Samuel Umtiti in the 73rd minute and Blaise Matuidi in the 81st and 85th minutes cost the match 50 seconds, 125 seconds, and 73 seconds, respectively – good for 4 minutes and 8 seconds in all. (Belgium's Eden Hazard was also hurt in the 81st minute, but he was back on his feet in 48 seconds.)

When the whistle blew after Matuidi and Hazard's collision, the clock read 80:57. After play resumed, France milked a 45-second goal kick that doubled as a substitution, Matuidi went down again, Belgium set up a long free kick, and France took another goal kick. The clock read 88:30. It was damn near the 90th minute. Over seven and a half minutes had melted off the game clock, and France and Belgium played for two of them.

Les Bleus successfully squandered another 3 minutes and 39 seconds after five Belgian fouls (one of which led to a Belgian substitution); Kylian Mbappé took 31 seconds for his yellow-card-inducing ball-bobbling antics in the 92nd minute; and France wasted 37 seconds before their 96th-minute corner. Six minutes of stoppage time were added. France and Belgium played for about two-and-a-half of them.

​From the 71st minute on (26 minutes and 13 seconds of match time), France and Belgium were paused or delayed for 14 minutes and 21 seconds – 55% of what should have been the most exciting part of the semifinal. France was primarily responsible for 12 minutes and 8 seconds (85%) of that wasted time.

I'm not saying all of that time should have been added back (though FiveThirtyEight found that World Cup stoppage time has been about half as long as it should be, even when you allow for the "natural" slippage associated with throw-ins and goal kicks). I don't even think you can really blame France. The rules are the rules, after all. Everybody wastes time when they're winning. France just did it extremely well today.

But it does kinda feel like FIFA needs to do something more. How about this: if you're down for one or two minutes, you have to be substituted? Wouldn't that make these dudes get up? Actual increased stoppage time? Quicker yellows for modest delays? A red for something overt? I'm an American who only obsessively watches every four years, so I hold little authority. But I would love to hear your thoughts ;)

Times presented represent best estimates based on analysis of Fox Sports TV coverage. Three moments are asterisked because the camera cut away from the field. Other reasonable analyses might arrive at slightly different time estimates. Data was compiled and analyzed by ELDORADO. All charts and graphs herein were created by ELDORADO.

Spring 2018 officially came to an end last Thursday morning in the United States, and by the time the Fourth of July barbecues are fired up next week, the season that so reliably brings us flowers, baseball, and hay fever will be forgotten. As it turns out, there wasn't much "spring" to remember this year, anyway.

Spring spanned 94 days – March 20 to June 21 – but in New York City and surrounding areas (and many other places, I'm sure), it felt like it came late and left early. If forced elevator conversation is a reliable gauge of people's opinions on the weather, then you might recall strangers and coworkers complaining about how cold it was in mid-April, or how hot it was in early May.

And if my own meandering observation is worth its salt, then I, too, felt robbed of that pleasant springtime stretch when it isn't too cold and it isn't too hot – when New Yorkers need not cover their faces from the bite of winter's cold or the sting of summer's stench. But were we really robbed? Did Mother Nature actually shortchange us of "spring"? Was there anything special about this past season?

From March 20 to June 21, New Yorkers experienced 40 days with a high of 60 to 79 degrees. That's the 10th-fewest "spring-weather" days since 1900. The other 54 days of spring were, of course, either colder (below 60) or hotter (80+) – decidedly "unspringlike" in the eyes of anyone who truly fancies the season.[1]

Another observation we can make is how many such days we had relative to how many we might expect. You may notice that the red trend line in the chart above tilts upward. In 1870, New Yorkers could expect about 45 days with highs in the 60s and 70s during the spring season. Rising temperatures have elevated that expectation to 51 days. In that way, today's New Yorkers generally enjoy the springiest springs of all.
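
For the curious, the "expected" figure comes straight off that trend line: fit spring-weather days against year, then read off the fitted value for the season in question. A minimal sketch of the idea, assuming an ordinary least-squares line like the one in the chart (the data points below are illustrative stand-ins, not the actual Central Park series):

```python
import numpy as np

# Stand-in series: (year, days with a high of 60-79 during spring)
years = np.array([1870, 1900, 1930, 1960, 1990, 2017])
spring_days = np.array([44, 46, 47, 49, 50, 52])  # illustrative only

# Straight-line fit, like the red trend line in the chart
slope, intercept = np.polyfit(years, spring_days, 1)

expected_2018 = slope * 2018 + intercept
actual_2018 = 40
print(f"Expected ~{expected_2018:.0f} days; actual {actual_2018}; "
      f"shortfall {expected_2018 - actual_2018:.0f}")
```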

[The average high temperature for a spring day in Central Park has increased from 63.5 degrees Fahrenheit in the 1870s to 68.2 degrees this decade. I figured temperatures had risen but was surprised by the magnitude of the increase. That trend has reduced the number of sub-60 degree daily spring highs from approximately 36 to 24, and it's increased the number of 80-plus degree daily spring highs from 14 to 20.]

Against that backdrop, our mere 40 spring days with highs of 60 to 79 degrees is even more noteworthy, as it's 11 days below expectation. That's the 8th-largest "below-expectation" recording since 1870, the first year for which complete data is available. It's also one of the few modern years on the list. ​​In other words, not only was spring 2018 unspringlike, it was also far less springy than we've come to expect spring to be:

Average high temperatures tell a similar story. The average high this April was 57.1 degrees, which ranks in the 18th percentile for the month since 1900 – aka cold for April. The average high in May was 75.5, which ranks in the 93rd percentile for that month since 1900 – hot for May. That 18.4-degree difference between average highs in April and May is among the most drastic in recorded New York City weather history:

[If you walked into your office on April 19 with contempt for Mother Nature, you likely weren't alone. The high in Central Park that day was 49 degrees, or 15 degrees below normal.[2] During the first three weeks of April, high temperatures in New York City generally rise from 55 degrees (April 1) to 64 degrees (April 20); this year, highs were in the 40s on nine of those 20 days. And they were 51, 51, and 50 on three others.]
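
The April and May percentile figures above are just a rank of this year's monthly average high against the same month in every year since 1900. Here's a quick sketch of that comparison – the history list is a stand-in for the real 1900-2017 record:

```python
def percentile_rank(value: float, history: list[float]) -> float:
    """Fraction of historical values that fall below the given value."""
    below = sum(1 for x in history if x < value)
    return below / len(history)

# Stand-in history of April average highs (illustrative only)
april_highs = [55.9, 58.0, 60.5, 62.1, 56.8, 61.8, 63.0, 64.2]
print(f"April 2018 percentile: {percentile_rank(57.1, april_highs):.0%}")  # low = cold
```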

And what of summer, only a few days young? So far, we're set to have our coldest June 21-to-26 stretch since 1992. The average high of 76 degrees over the six-day span is nearly six degrees below normal, and it's the city's 13th-coldest June 21-to-26 run since 1900.[3] So maybe spring lives on after all – until Friday, that is, when a four-day heat wave is expected to usher in highs of 90, 92, 94, and 92. Enjoy the summer!

Footnotes & Extras

[1] To maintain a consistent baseline of comparison, I defined the spring season as March 20 to June 21 for all years.

[2] From 1900 to 2018, the average high temperature in Central Park on April 19 was 63.7 degrees Fahrenheit.

[3] From 1900 to 2018, the average high temperature in Central Park from June 21 to 26 was 81.8 degrees Fahrenheit.

The decision paves the way for states to decide whether to offer legal sports betting. ESPN’s David Purdum reports that New Jersey, which brought the case, Mississippi, New York, Pennsylvania, and West Virginia could be among the first to do so. The Associated Press reports that as many as 14 states could act within the next two years, with another 18 states to follow.

So what does the decision mean for you? For starters, it’s likely to bring much of the estimated $150 billion-a-year black market in sports gambling above board.[1] You’ll conceivably be able to place a bet on your phone, at a local sportsbook, or even in an arena. Casinos should see a boost. And fans will have more reason to engage, which supports broadcasters, franchises, and leagues.

Most importantly, however, the Supreme Court’s decision on sports betting means that you will now be able to lose your money legally – which, either right away or over time, you are exceedingly likely to do.

​At the start of last NFL season, I walked through some of the success rates and financial dynamics of the most popular sports wager in the United States – the spread bet. I specifically looked at the against-the-spread performance of 60 individuals and 60 prediction models during the 2016 NFL season. I’ve since updated those statistics to include the 2017 NFL season.

​To the unfamiliar, here’s an example of how a spread bet works. Assume the Dallas Cowboys are 4.5-point favorites at home against the New York Giants. If you pick Dallas to win “against the spread,” they need to win by five points or more for you to win the bet (“Dallas -4.5”). If you pick New York, you win if the Giants win the game outright or lose by four points or less (“Giants +4.5”).
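
In code, settling a spread bet is a one-line comparison of the final margin against the line. A small sketch using the Cowboys-Giants example above (the function name is mine):

```python
def covers(team_points: int, opp_points: int, spread: float) -> bool:
    """True if a team laying or getting `spread` points covers the number."""
    return (team_points - opp_points) + spread > 0

# Dallas -4.5 at home against the Giants
print(covers(27, 21, spread=-4.5))  # Dallas wins by 6 -> covers
print(covers(24, 21, spread=-4.5))  # Dallas wins by 3 -> does not cover
# Same call for the other side: Giants +4.5, losing by 3
print(covers(21, 24, spread=+4.5))  # Giants cover
```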

In a typical spread bet, you risk 10% more money than you would win (-110), known as a 10% “vig.” Bet $110 and win, and you get $100. Bet $110 and lose, and you lose all $110. Historically, sportsbooks would set and adjust point spreads to attract and maintain equal action on both teams.[2] Doing so guarantees the books a profit on those bets, equal to 4.5% of the total amount wagered.
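
That 4.5% figure falls straight out of the -110 arithmetic once the action is balanced – the book collects two $110 bets and pays one winner $210. A quick check:

```python
# Equal action: one $110 bet on each side at -110
handle = 110 + 110            # total dollars wagered
payout = 110 + 100            # winner's stake returned plus $100 in winnings
hold = (handle - payout) / handle
print(f"Guaranteed book profit: ${handle - payout} on ${handle}, or {hold:.1%}")  # 4.5%
```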

​The chart below shows the results of a random group of individual bettors (60 in 2016 and 53 in 2017) and prediction models (60 in 2016 and 57 in 2017) “against the spread” over the course of the last two NFL regular seasons. As expected, both groups had an average success rate right around 50%. And while the sample size is still small, you can see a typical bell curve forming.

Half of the bettors won more games than they lost, and half lost more games than they won. But when losses cost $110 and wins only earn $100 – thanks to that 10% vig – things get pretty ugly pretty fast. Had everyone wagered real money on every game in equal amounts, only 30 out of 113 individuals (27%) would have netted a single-season profit. Nearly three-quarters of bettors would have lost money.[3]

Now consider this. Certain professional sports leagues have been lobbying to receive an “integrity fee” for all wagers placed on their respective games, theoretically compensating them for “[creating] the source of the activity” and “[bearing] the majority of the integrity risk.” Fee estimates range from 0.25% to 1.0% of money wagered, or 2.5% of profits, with other wrinkles attached.

Sports betting operators will also have to pay taxes – potentially as high as 12.5% of gross sports wagering revenue in one version of an Illinois bill. Against this backdrop, rumors began to circulate last fall that sportsbooks might consider covering these taxes and fees by raising the standard vig from 10% (-110, risk $110 to win $100) to 20% (-120, risk $120 to win $100).

If that happens, your chances of making money get even dimmer. With a 10% vig in the example above, we saw how roughly three out of four individual bettors would have lost money – already pretty tough sledding. With a 20% vig in the same example, nearly seven out of eight would have lost money on a single-season basis – worse still, 95% of the individuals who were part of the sample in both years would have lost money over the course of the two seasons combined.[4]
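
The underlying reason the vig stings is that it quietly raises the win rate you need just to break even – from roughly 52.4% at -110 to roughly 54.5% at -120. Here's a minimal sketch of that math, plus what a flat-betting season nets at a given record (bet sizes are illustrative):

```python
def breakeven_rate(risk: float, to_win: float = 100.0) -> float:
    """Win rate needed to break even when risking `risk` to win `to_win`."""
    return risk / (risk + to_win)

def season_profit(wins: int, losses: int, risk: float, to_win: float = 100.0) -> float:
    """Net dollars from betting every game flat at the given price."""
    return wins * to_win - losses * risk

print(f"Break-even at -110: {breakeven_rate(110):.1%}")  # ~52.4%
print(f"Break-even at -120: {breakeven_rate(120):.1%}")  # ~54.5%

# Even a 9-7 season (56% wins) barely survives the bigger vig
print(season_profit(9, 7, risk=110))  # +$130 at -110
print(season_profit(9, 7, risk=120))  # +$60 at -120
```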

Let's take a closer look at just how devastating the vig is. With a 10% vig, only three out of 113 individuals (2.7%) would have netted a single-season return-on-dollars-wagered of 10% or more. Meanwhile, 22 people (19.5%) would have had a 10%+ loss. Only 18 of the individuals would have earned a 3%+ profit, while 61 bettors would have lost at least that much. And the 20 worst performers would have lost over 2.0x the money that the 20 best performers gained.

Now let’s repeat those sentences assuming a 20% vig. With a 20% vig, only one out of 113 individuals (0.9%) would have netted a single-season return-on-dollars-wagered of 10% or more. Meanwhile, 50 people!!! (44.2%) would have had a 10%+ loss. Only four of the individuals would have earned a 3%+ profit, while 83 bettors would have lost at least that much. And the 20 worst performers would have lost almost 8.0x the money that the 20 best performers gained.

And again, those are single-season returns. It's even harder to stay in the black year over year. With a 10% vig, bettors in this example would have had to finish in the 90th percentile in 2017 just to offset the damage of finishing in the 50th percentile (i.e., being exactly average) the year before. (Remember that half of the bettors did even worse!)

With a 20% vig, they'd have had to finish in the 97th percentile in 2017 to recoup the money they lost by finishing in the 65th percentile (above average!) the year before. In other words, the reward for being good-but-not-great is a financial penalty and a requirement that you be exceptional next year to break even. You can deduce from the chart how ugly it gets when you're actually below average, like half of everybody is.

Despite all that, most folks kinda pretend the vig isn’t there, casually chalking it up as the cost of doing business. Ask your buddy how much he has on a game, and he’ll likely say “$100” or “$50,” not “$110” or “$55,” which is really what he’d lose. If he wins two bets and loses two others, he’ll probably tell you that he went two and two, not that he lost money.

Of course, not everyone puts money on every game. Bettors often target select games. But even then, one person's "best bet" is another's "trap of the week," and over a long enough period, the vast majority of casual bettors will lose money. There’s nothing special about the math. Sometimes we just need to see it all on paper.

[2] Evidence suggests that sportsbooks have grown increasingly comfortable straying from that 50-50, guaranteed-profit balance. When they do so – that is, when they set lines that attract significantly more money on one team – they’re effectively betting on the other team. In early 2017, certain books allowed big imbalances in nine of 10 NFL playoff games. The books lost all of them against the spread.

[3] In this particular sample, that's true both on a single-season basis and combined over the course of the two seasons. There were 60 individuals in 2016 and 53 individuals in 2017. Forty-two of them were the same across seasons. On a single-season basis, 83 out of 113 (73%) would have lost money with a 10% vig. On a combined basis over the two seasons, 31 out of 42 (74%) would have lost money with a 10% vig.

[4] Again, there were 60 individuals in 2016 and 53 individuals in 2017. Forty-two of them were the same across seasons. On a single-season basis, 97 out of 113 (86%) would have lost money with a 20% vig. On a combined basis over the two seasons, 40 out of 42 (95%) would have lost money with a 20% vig.

​(And other musings on the history, controversy, irony, stereotypes, cultural associations, and cultural complexity that the show and its cast have brought to light)

MTV’s Jersey Shore: Family Vacation airs its seventh episode on Thursday, and the reunion season has managed to reinsert the likes of Deena, JWoww, Pauly D, Ronnie, Snooki, The Situation, and Vinny back into certain corners of pop culture. (Sammi Sweetheart isn’t even there and yet she’s kinda back, too.)

Technically a new series – and already renewed for a second season – Family Vacation has drawn between 1.44 and 2.55 million viewers per episode, making it the highest-rated original cable show among 18-to-49 year-olds on all but two of the Thursday nights it’s aired. The only telecasts that topped it were the NFL Draft on April 26 and the NBA Playoffs on May 3.[1]

While strong, those numbers are a far cry from Jersey Shore’s original six-season run, which spanned 71 episodes over three years and peaked at 8 million-plus viewers per episode. At its apex, it was the second-most watched Thursday-night show on cable or network television for 18-to-49 year-olds, behind only American Idol on Fox.

The fact that Jersey Shore still commands attention and dominates its night on cable – nearly a decade after its premiere – is a testament to how much of a pop culture juggernaut it was. The show that gave us GTL, T-shirt time, and "the note" successfully brought contemporary guido culture to the mainstream. Its popularity surged despite controversy – or perhaps because of it – and in spite of another interesting and telling cultural fact, the significance of which extends far beyond Thursday nights on MTV.

Every American ethnic group has its stereotypes. But with Jersey Shore, we have a show built around people who actively associate their idiocy, however mindlessly entertaining, with the negative stereotypes of one particular group. The cast flies the proverbial flag – or paints it on their chests – and advances, explains, and justifies their antics as “Italian-American.” Worse yet, if you actually lift up the genealogical hood, you find that the Jersey Shore cast is barely over half Italian-American:

Only three of Jersey Shore’s eight original cast members claim full Italian ancestry – Paul DelVecchio ("Pauly D"), Mike Sorrentino ("The Situation"), and Vinny Guadagnino. Three others, Angelina Pivarnick, Ronnie Ortiz-Magro, and Sammi Giancola ("Sammi Sweetheart"), are half Italian-American. (Angelina is Polish on her father’s side, Ronnie is Puerto Rican on his mother’s side, and Sammi is Greek on her mother’s side.) Jenny Farley ("JWoww") and Nicole Polizzi ("Snooki") do not have any Italian blood at all.[2]

What we end up with is the bastardization of a culture by a group of people who, to a sizable extent, merely chose to represent it. And there’s an interesting irony to that. For generations, many Italian-Americans (and members of other immigrant groups) felt compelled, thought it an advantage, or otherwise elected to Americanize their names and whitewash their Italian ancestry, which American society viewed as somewhere between "too ethnic" and "criminal." (Sound familiar?) Some were stars. Others might have been your parents or grandparents.

As recently as 1983, The New York Times Magazine ran a long piece on “Italian-Americans Coming into Their Own,” highlighting that “Americans of Italian descent… [had] attained a kind of critical mass in terms of affluence, education, aspiration and self-acceptance.” The article opens in the offices of three-term New York Governor Mario Cuomo and offers a fascinating glimpse into how the Italian-American journey was felt and perceived at the time. It closes with the story of William D'Antonio visiting his daughter Laura at college. Laura comments that she's "the only 100% Italian in [her] dorm... but [she knows] at least a dozen people who wish they were Italian." William muses that 40 years earlier, "[he] would not have been able to admit that [he] was Italian, much less imagine any dozen people who wished they were.''

Progress had come. And so far gone are the days of Italian-Americans changing their names that today's performers and politicians not only keep them, some voluntarily adopt them for art or appeal. When New York City hosted the 60th Annual Grammy Awards in 2018, headline acts included Alessia Cara, Italian-Canadian; Stefani Joanne Angelina Germanotta, three-quarters Italian and better known as Lady Gaga; Donald Glover, who raps under the stage name Childish Gambino; and Logic, whose mixtape titles bear the names of Sinatra and Tarantino. Even the host city’s mayor, Bill de Blasio, opted into an Italian name – he was Warren Wilhelm until 1983 and Warren de Blasio-Wilhelm until 2001. We’re a long way from Anne Bancroft, Dean Martin, Tony Bennett, and my grandparents’ generation.

​But the Jersey Shore phenomenon still underscores a major cultural paradox. Many Italian-Americans don’t carry the hyperbolic physical or behavioral hallmarks that American society perceives to mean “Italian-American.” So when Italian-Americans are successful, that success is not closely associated with their ethnic or family background. If you walked into a room and chatted with Geraldine Ferraro (the first female vice-presidential candidate of a major U.S. political party), Samuel Alito (the 110th Justice of the Supreme Court), or Anthony Fauci (pioneering HIV/AIDS researcher and Director of the National Institute of Allergy and Infectious Diseases since 1984), would you walk out thinking they were all Italian-American? Unless you saw their names or discussed the topic of family origin, you might believe none of them was.

[It’s scary to wonder how many more people know who Snooki is than Geraldine Ferraro.]

Conversely, if you walked into a room and chatted with a bunch of Jersey Shore-style personalities, you’d likely walk out thinking of them as uniformly Italian-American – even though the Jersey Shore cast, the wannabe guidos at your New York-area high school, and the lines outside of Neptunes in the Hamptons (R.I.P.) and D’Jais in Belmar might be about half, if even that. Everyone else is opting in, flying the flag, painting it on their chest – either halfway or all the way.

[If you’re tempted to comment that those people and places are “more Italian-American than I think,” you’re only proving my point.]

My goal here is to offer perspective and gratitude, not complaints. After all, a little Jersey Shore bullshit is more nuisance for Italian-Americans than actual threat, which is what many other groups still face. Our threat days are over. So now it's time for that perspective and gratitude to extend beyond our own dinner tables – to support other groups who harbor the same multi-generational American hope, to those who were never given their fair shake in this country, and to those who pursue the Dream today.

Footnotes

[1] For added context, Jersey Shore: Family Vacation has drawn 2.2 to 3.3 times the total viewers per episode as FX's critically acclaimed Atlanta and 1.3 to 1.5 times the total viewers per episode as Bravo's popular Southern Charm. All three air on Thursday nights.

[2] Deena Cortese replaced Angelina after season two and is Italian-American on both sides of her family. JWoww is Irish- and Spanish-American. Snooki is Chilean-American but was adopted and raised by an Italian-American family. In a 2010 interview with Fox News, JWoww pointed out that she and Snooki are not Italian-American and that they're "not trying to be Italians." For what it's worth, it feels a little weird to analyze people’s ethnic backgrounds like this, but if the Jersey Shore cast wants to represent Italian-American culture – or at the very least if they're going to be universally associated with it – it's only proper for the rest of us to check the facts.

[3] The real hard work and sacrifice came in the generations before me. I am forever grateful for that.

Data was compiled and analyzed by ELDORADO. All charts and graphics herein were created by ELDORADO.

Developing that story kicked up a lot of other fun facts – including the number of active QB draft classes in the league in a given season, draft-class longevity, draft-class peaks, the worst draft classes in NFL history, and, more generally, the best quarterback seasons and careers based on our era adjustment.

- Quarterbacks from 18 different draft classes recorded passing statistics during the 2017 season, tying the 2004, 2003, 1975, and 1969 seasons for the most in NFL history. Of those, 2017 is the only one in which the 18 draft classes were consecutive (2000 straight through 2017). All of the others had gaps – 2004 and 2003 thanks to Doug Flutie (QB Class of ’85) and 1975 and 1969 thanks to George Blanda (QB Class of ’49).

- There have been between 14 and 18 QB draft classes active in every NFL season since 1961. The average over that period (and since the AFL-NFL merger) is 15.75. The seasons with 18 active QB classes were mentioned above. Interestingly, the seasons with 14 active QB classes have all come in pairs (or threes) – 2013 and 2012; 1992 and 1991; 1986, 1985, and 1984; 1981 and 1980; and 1962 and 1961 – and are generally due to a lack of longevity among what would, in those seasons, be the elder quarterback classes.

- The older end of the 2017 season's draft-class curve was made up of Tom Brady (2000), Drew Brees (2001), Josh McCown (2002), and Carson Palmer (2003), and each was the only member of his draft class to throw a pass in 2017. Palmer retired in January, so unless he or another member of the '03 class pulls a Vinny Testaverde and makes a spot start, we won’t see 19 different QB draft classes throw a pass in 2018.

The Longevity of QB Draft Classes

- The average post-merger QB draft class has recorded passing yards in 15.5 different seasons (excluding those that are still active). The following bullets cover those with the most and least longevity.

- The 1949 QB draft class recorded passing yards in a record 26 seasons thanks to the aforementioned George Blanda. Blanda played for 27 seasons, and though his final nine were spent as a kicker, he still registered passing stats in eight of those. The only season in which he did not attempt a pass was 1973.

- Four other QB draft classes have recorded passing yards in 20 or more seasons – 1956 (21 seasons via Earl Morrall), 1985 (21 via Randall Cunningham, Steve Bono, Frank Reich, and Doug Flutie), 1987 (21 via Vinny Testaverde), and 1991 (20 via Brett Favre). The longevity for 1949, 1956, 1987, and 1991 is by way of one man per class, so it's really more of an individual statistic masquerading as a draft-class statistic.

- The 1985 QB draft class's longevity is more interesting. Cunningham played through 2001 but "retired" for one season in 1996. Bono and Reich were still playing in 1996 before retiring in 1999 and 1998, respectively. Meanwhile, after starting 15 NFL games in the 1980s and playing eight seasons in the CFL, Flutie returned to the NFL in 1998 and played through the 2005 season. Their longevity was a team effort.

The Peak Seasons for QB Draft Classes

- The median "peak season" for a QB draft class is season number four (among fully retired classes, whether measured since 1936 or post-merger; the median is the same in both cases). By season four, stars and starters are often in the saddle, and there are usually enough mediocre starters, fringe starters, and backups still bouncing around the league. Together, they can put up strong cumulative numbers for their draft class.

- The 1985 QB draft class had the latest peak in history – year 14. In 1998, Randall Cunningham started 14 games for Minnesota, Doug Flutie started 10 for Buffalo, Steve Bono started two for St. Louis, and Frank Reich started two for Detroit. (The 1962 QB draft class had the 2nd-latest peak in history – year 12.)

- Four post-merger QB draft classes have peaked in their rookie year – and for very different reasons. The 2017 class has only played one season, the 2013 and 1974 classes peaked in year one because they're terrible, and the 2012 class peaked in year one because it was superb. As detailed in my original post, the 2012 QB class's rookie season is 5th-best all-time in era-adjusted yards for any draft class in any season.

The Worst QB Draft Classes in NFL History

- The 1996 QB draft class is the worst since the merger in terms of longevity and cumulative era-adjusted passing yards (among fully retired classes). Eight quarterbacks were selected in 1996, and four of them threw a pass in the NFL. Tony Banks (42nd overall) started 78 games over nine seasons and passed for 15,315 career yards. Danny Kanell (130th) started 24 games over six seasons and passed for 5,129 yards. Bobby Hoying started 13 games and Jeff Lewis attempted 54 passes. (The 1997 and 1976 QB draft classes are next with only 10 seasons to their name. They had more yards but lower peaks than the '96 group.)

- The 2013 QB draft class could give the 1996 class a run for its money as worst since the merger. The 2013 group currently trails the 1996 group by 7,779 modern-day equivalent yards. The 2013 class had quarterbacks in the league in 2017 – E.J. Manuel, Geno Smith, Mike Glennon, and Landry Jones. Good luck.

- Zero quarterbacks were selected in the first round of the 1996, 1988, 1985, 1984, and 1974 drafts – the only such drafts since 1942. In 1988 and 1974, no quarterbacks were drafted in rounds one or two.

The Best Individual Seasons and Careers: Era-Adjusted Yards

- The five best individual seasons since World War II in terms of era-adjusted passing yards belong to Dan Fouts (1982), Roman Gabriel (1973), Joe Namath (1967), Drew Brees (2008), and Drew Brees (2011). Fouts averaged 320 passing yards per game across 1982’s strike-shortened nine-game season – 1.45 times the NFL’s team average. Those 320 yards per game would translate to about 348 yards per game today. (These rankings are based on the same era-adjustments used and described in the original story.)

- The five best careers in terms of era-adjusted passing yards belong to Brett Favre (77,106 modern-day equivalent passing yards), Peyton Manning (74,861), Drew Brees (71,195), Fran Tarkenton (68,038), and Tom Brady (67,215). Compared to the actual list, Favre and Manning flip-flop at the top, Brees remains third all-time, Brady falls from 4th to 5th, and Tarkenton jumps from 11th to 4th. (Dan Marino falls from 5th to 6th.)

- I'll put out a separate post with these best era-adjusted season and career rankings this summer.

​The NFL Draft begins on Thursday night in Arlington, Texas, and onlookers expect between four and six quarterbacks to be taken in the first round. A number of mock drafts go so far as to project that four of those QBs will be among the draft’s first five or six picks, including USC’s Sam Darnold, Wyoming’s Josh Allen, UCLA’s Josh Rosen, and Oklahoma’s Baker Mayfield, winner of the 2017 Heisman Trophy.

Six first-round quarterbacks would tie the record set in the 1983 NFL Draft, when John Elway, Todd Blackledge, Jim Kelly, Tony Eason, Ken O’Brien, and Dan Marino were drafted. Five first-round quarterbacks would equal the 1999 NFL Draft for second most in history. Six other drafts saw four QBs taken in round one.

It will take some time before we know which QBs in the 2018 draft class were worthy of the first round and which weren’t, and even longer before we know how this group stacks up historically. Hell, for all we know, the draft’s best quarterback might end up being Washington State’s Luke Falk, Richmond’s Kyle Lauletta, or Western Kentucky’s Mike White, all of whom are projected to go in subsequent rounds. Sixth-round pick Tom Brady (2000) would be happy to provide a history lesson on that.

With so many quarterbacks in the mix, the 2018 NFL Draft is practically begging us to look back and rank the best NFL quarterback draft classes of all time. There are several such lists out there on the internet, but nearly all rely on the subjective views of their author. As always – be it with sports, movies, or politics – my objective is to remove opinion from the equation and take a purely empirical approach.

So I looked back at every NFL draft in history – the first was in 1936 – and tallied up the career passing yards thrown by the quarterbacks selected in each draft. To keep these QB draft class rankings as straightforward as possible while still accounting for era, I made two simple adjustments – the first based on league-wide passing trends and the second based on the number of regular-season games. The result is era-adjusted passing yards expressed on a 2017-equivalent basis. (See footnotes for more.)

The NFL’s most famous QB draft class is not its most prolific

The 1983 NFL quarterback draft class is arguably the league’s most famous. Six quarterbacks were drafted in the first round and three became Hall of Famers (Marino, Elway, and Kelly). When Marino retired in 1999 – the last of the three to do so – he was the NFL’s all-time passing leader. Elway was 2nd and Kelly was 10th. The class’s combined performance in 1986 remains the best in league history, as they threw for the modern equivalent of 21,648 yards and accounted for over 20% of all passing yards.

But once you factor in era, rule changes, and the number of regular-season games, the oh-so-famous 1983 QB draft class falls to second place in total era-adjusted passing yards, outgunned by 1971, “the original Year of the Quarterback.” That year, the Patriots selected Heisman Trophy-winner Jim Plunkett of Stanford first overall, the Saints took Ole Miss’s Archie Manning second, and the Oilers picked Santa Clara’s Dan Pastorini third – all quarterbacks.

Lynn Dickey and Ken Anderson were among four quarterbacks chosen in the third round, and Joe Theismann went one round later (99th overall). Anderson offers another history lesson for teams picking quarterbacks in 2018. He was the sixth QB selected in the 1971 draft but became its most prolific, ranking 7th in career passing yards when he retired in 1986 after 16 seasons with the Bengals.

The 1983 QB draft class may have had three Hall of Famers, Ken O’Brien, and a higher high, but the 1971 class had more depth and a longer peak. Collectively, the 1971 group produced over 15,000 era-adjusted yards in a single season as early as 1972 and as late as 1983 – a span of 12 seasons. The 1983 group first achieved that feat in 1984 and last did it in 1991 – a span of eight. You can compare those peaks and their duration in this chart:

​None of the 1971 NFL draft’s quarterbacks became Hall of Famers, but by the time they’d all retired in 1986, Anderson, Plunkett, Theismann, Manning, and Dickey owned five of the top 31 spots on the NFL’s all-time passing list. Pastorini was 48th. (For those curious, if you don’t adjust for regular-season games per season, the 1983 quarterback class ranks number one all-time.)

The 2004 class – headlined by Eli Manning (1st overall), Philip Rivers (4th), and Ben Roethlisberger (11th) and supported by Matt Schaub (90th) – currently ranks as the 3rd-most prolific QB class in NFL history. Through 2017, Manning, Roethlisberger, and Rivers are 6th, 8th, and 9th, respectively, in career passing yards. (They’ve played much of their careers in an unprecedented passing era.)

But the jury’s still out on whether the 2004 QB class can surpass the 1983 crew in total era-adjusted yards. If Manning, Rivers, and Roethlisberger remain starters and produce in line with last year, the 2004 class would sneak past the 1983 class in Week 16 of the 2019 season, basically two full seasons from now. Some of that will hinge on health, and some will depend on what the Giants, Chargers, and Steelers do over the next few days.

For now, the 2004 NFL quarterback class is well positioned in 3rd on the all-time list, the legendary 1983 class is 2nd, and the less-sexy 1971 class is number one thanks to depth, longevity, and era adjustment. Whether that means they're the "best" is ultimately up to you.

Postscript: What about the 2005 and 2012 QB draft classes?

​Other recent quarterback classes have had more spectacular peaks than the 2004 group, but with some pretty sharp declines thereafter. The 2005 QB draft class produced the 2nd, 3rd, and 4th best combined adjusted-yards performances of all time in 2010, 2008, and 2009, respectively, when Kyle Orton, Aaron Rodgers, Matt Cassel, Ryan Fitzpatrick, Alex Smith, Jason Campbell, and Derek Anderson – and sometimes Charlie Frye and Dan Orlovsky – were starting NFL games. (This seems insane now, but Kyle Orton threw for more yards per game than Aaron Rodgers in 2010.)

Meanwhile, the 2012 quarterback draft class owns four of the top 10 combined adjusted-yards performances in NFL history – 2012 (5th), 2013 (7th), 2015 (10th), and 2016 (6th). Their rookie season was their best, and it was led, in order, by Andrew Luck, Brandon Weeden (not a typo), Ryan Tannehill, Robert Griffin, Russell Wilson, and Nick Foles, along with four starts by Ryan Lindley, 33 completions by Kirk Cousins, and four pass attempts by Brock Osweiler. They fell off a cliff in 2017 but are still 21st all-time in era-adjusted yards, and they’ll continue to chip their way up the list.

The study includes the AFL (1960-69), excludes undrafted quarterbacks, and does not deduct yardage lost to sacks, which was not tracked for all of NFL history. Sack yardage is generally excluded from individual statistics anyway, though it is often included in team passing statistics. To create more reliable era adjustments, I've excluded yards lost to sacks entirely.

The first era adjustment is an “inflation adjustment” that accounts for league-wide passing trends throughout NFL history. Thanks in large part to a series of passing-friendly rule changes instituted in the 1970s, 1990s, and 2000s, today’s NFL teams have basically never passed more or rushed less. Passing’s peak came in 2015, when the average team threw for 259.2 yards per game. In 1990, that number was 211.4 yards. And in 1973, it was only 159.4. (All gross of yards lost to sacks.)

To account for this, I’ve adjusted each QB’s passing yards to a 2017 equivalent based on league-wide team averages in the seasons he played. For example, to convert 1990 passing yards (211.4 per team game) into 2017 passing yards (239.6 per team game), we need to multiply 1990 yardage by 1.13 (239.6 / 211.4). It’s kind of like converting 1990 dollars to 2017 dollars.

The second adjustment deals with the number of regular-season games played per season, which has generally increased over time. From 1936 to 1946, NFL teams played between 10 and 12 regular-season games. From 1947 to 1959/60, there were 12 games. From 1960/61 to 1977, there were 14 games per year. And since 1978, there have been 16 regular-season games. (In 1960, the AFL played 14 regular-season games and the NFL played 12 regular-season games.)
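
Put together, the two adjustments scale a season's yards by the ratio of 2017's league-wide passing average to that season's average, and prorate for schedule length. Here's a sketch of how that conversion might look – the straight per-game proration to 16 games is my assumption about how the second adjustment is applied, and the league averages are the ones quoted above:

```python
def era_adjusted_yards(yards: float, season_avg: float, season_games: int,
                       base_avg: float = 239.6, base_games: int = 16) -> float:
    """Convert raw passing yards into a 2017-equivalent figure.

    season_avg   -- league-wide team passing yards per game that season
    season_games -- regular-season games on that season's schedule
    """
    inflation = base_avg / season_avg     # league-wide passing "inflation"
    schedule = base_games / season_games  # prorate to a 16-game schedule
    return yards * inflation * schedule

# 3,000 yards thrown in 1990 (211.4 team yards/game, 16-game schedule)
print(round(era_adjusted_yards(3000, season_avg=211.4, season_games=16)))  # ~3,400
```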

The main data source for this article is pro-football-reference.com. Data includes the NFL and AFL regular seasons. Data was compiled and analyzed by ELDORADO. All charts and graphics herein were created by ELDORADO.