News

As evidence grew in 2002 to exonerate the Central Park Five, their supporters demanded an apology from Donald Trump, who, soon after their arrest, had called for the return of the death penalty. Credit: Frances Roberts

Let’s get the obvious stuff out of the way. Yes, Donald Trump is a vile racist. He regularly uses dehumanizing language about nonwhites, including members of Congress. And while some argue that this is a cynical strategy designed to turn out Trump’s base, it is at most a strategy that builds on Trump’s pre-existing bigotry. He would be saying these things regardless (and was saying such things long before he ran for president); his team is simply trying to turn bigoted lemons into political lemonade.

What I haven’t seen pointed out much, however, is that Trump’s racism rests on a vision of America that is decades out of date. In his mind it’s always 1989. And that’s not an accident: The ways America has changed over the past three decades, both good and bad, are utterly inconsistent with Trump-style racism.

Why 1989? That was the year he demanded the return of the death penalty in response to the case of the Central Park Five, black and Latino teenagers convicted of raping a white jogger in Central Park. They were, in fact, innocent; their convictions were vacated in 2002. Trump, nevertheless, has refused to apologize or admit that he was wrong.

His behavior then and later was vicious, and it is no excuse to acknowledge that at the time America was suffering from a crime wave. Still, there was indeed such a wave, and it was fairly common to talk about social collapse in inner-city communities.

But Trump doesn’t seem to be aware that times have changed. His vision of “American carnage” is one of a nation whose principal social problem is inner-city violence, perpetrated by nonwhites. That’s a comfortable vision if you’re a racist who considers nonwhites inferior. But it’s completely wrong as a picture of America today.

For one thing, violent crime has fallen drastically since the early 1990s, especially in big cities. Our cities certainly aren’t perfectly safe, and some cities — like Baltimore — haven’t shared in the progress. But the social state of urban America is vastly better than it was.

On the other hand, the social state of rural America — white rural America — is deteriorating. To the extent that there really is such a thing as American carnage — and we are in fact seeing rising age-adjusted mortality and declining life expectancy — it’s concentrated among less-educated whites, especially in rural areas, who are suffering from a surge in “deaths of despair” from opioids, suicide and alcohol that has pushed their mortality rates above those of African-Americans.

And indicators of social collapse, like the percentage of prime-age men not working, have also surged in the small town and rural areas of the “eastern heartland,” with its mostly white population.

What this says to me is that the racists, and even those who claimed that there was some peculiar problem with black culture, were wrong, and the sociologist William Julius Wilson was right.

When social collapse seemed to be basically a problem for inner-city blacks, it was possible to argue that its roots lay in some unique cultural dysfunction, and quite a few commentators hinted — or in some cases declared openly — that there was something about being nonwhite that predisposed people toward antisocial behavior.

What Wilson argued, however, was that social dysfunction was an effect, not a cause. His work, culminating in the justly celebrated book “When Work Disappears,” made the case that declining job opportunities for urban workers, rather than some underlying cultural or racial disposition, explained the decline in prime-age employment, the decline of the traditional family, and more.

How might one test Wilson’s hypothesis? Well, you could destroy job opportunities for a number of white people, and see if they experienced a decline in propensity to work, stopped forming stable families, and so on. And sure enough, that’s exactly what has happened to parts of nonmetropolitan America effectively stranded by a changing economy.

I’m not saying that there’s something wrong or inferior about the inhabitants of, say, eastern Kentucky (and no American politician would dare suggest such a thing). On the contrary: What the changing face of American social problems shows is that people are pretty much the same, whatever the color of their skin. Give them reasonable opportunities for economic and personal advancement, and they will thrive; deprive them of those opportunities, and they won’t.

Which brings us back to Trump and his attack on Representative Elijah Cummings, whom he accused of representing a district that is a “mess” where “no human being would want to live.” Actually, part of the district is quite affluent and well educated, and in any case, Trump is debasing his office by, in effect, asserting that some Americans don’t deserve political representation.

But the real irony is that if you ask which congressional districts really are “messes” in the sense of suffering from severe social problems, many — probably most — strongly supported Trump in 2016. And Trump is, of course, doing nothing to help those districts. All he has to offer is hate.

Paul Krugman has been an Opinion columnist since 2000 and is also a Distinguished Professor at the City University of New York Graduate Center. He won the 2008 Nobel Memorial Prize in Economic Sciences for his work on international trade and economic geography. @PaulKrugman

Growing inequality in the United States shows that the game is rigged.

By Annie Lowrey

Joshua Roberts / Reuters

Last month, Bloomberg reported that Jeff Bezos, the founder of Amazon and owner of the Washington Post, has accumulated a fortune worth $150 billion. That is the biggest nominal amount in modern history, and extraordinary any way you slice it. Bezos is the world’s lone hectobillionaire. He is worth nearly two million times what the average American family is. He has about 50 percent more money than Bill Gates, twice as much as Mark Zuckerberg, 50 times as much as Oprah, and perhaps 100 times as much as President Trump. (Who knows!) He has gotten $50 billion richer in less than a year. He needs to spend roughly $28 million a day just to keep from accumulating more wealth.
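That "$28 million a day" figure checks out with back-of-the-envelope arithmetic. A minimal sketch, assuming a roughly 7 percent annual return on invested wealth; the article does not say what rate underlies its estimate, so that number is an assumption here:

```python
# Rough check of the "$28 million a day" figure cited above.
# The 7% annual return is an assumption for illustration; the real growth
# of the fortune tracks Amazon's stock price, not a fixed rate.
fortune = 150e9          # reported net worth, in dollars
annual_return = 0.07     # assumed average annual return
daily_gain = fortune * annual_return / 365
print(f"${daily_gain / 1e6:.1f} million per day")  # → $28.8 million per day
```

At that assumed rate the fortune throws off about $28.8 million a day, in line with the article's figure; a higher assumed return would push the number higher still.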

This is a credit to Bezos’s ingenuity and his business acumen. Amazon is a marvel that has changed everything from how we read, to how we shop, to how we structure our neighborhoods, to how our postal system works. But his fortune is also a policy failure, an indictment of a tax and transfer system and a business and regulatory environment designed to supercharge the earnings of, and encourage wealth accumulation among, the few. Bezos did not just make his $150 billion. In some ways, we gave it to him, perhaps to the detriment of all of us.

Bezos and Amazon are in many ways ideal exemplars of the triumph of capital over labor, like the Waltons and Walmart and Rockefeller and Standard Oil before them. That the gap between executives at top companies and employees around the country is so large is in and of itself shocking. Bezos has argued that there is not enough philanthropic need on earth for him to spend his billions on. (The Amazon founder, unlike Gates or Zuckerberg, has given away only a tiny fraction of his fortune.) “The only way that I can see to deploy this much financial resource is by converting my Amazon winnings into space travel,” he said this spring. “I am going to use my financial lottery winnings from Amazon to fund that.”

In contrast, half of Amazon’s domestic employees make less than $28,446 a year, per the company’s legal filings. Some workers have complained of getting timed six-minute bathroom breaks. Warehouse workers need to pick goods and pack boxes at closely monitored speeds, handling up to 1,000 items and walking as many as 15 miles per shift. Contractors have repeatedly complained of wage-and-hour violations and argued that the company retaliates against whistleblowers. An Amazon temp died on the floor just a few years ago.

The impoverishment of the latter and the wealth of the former are linked by policy. Take taxes. The idea of America’s progressive income-tax system is that rich workers should pay higher tax rates than poor workers, with the top rate of 37 percent hitting earnings over $500,000. (The top marginal tax rate was 92 percent as recently as 1953.) But Bezos takes a paltry salary, in relative terms, given the number of shares he owns. That means his gains are subject to capital-gains taxes, which top out at just 20 percent; like Warren Buffett, he may well pay a lower effective tax rate than his secretary does.
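The gap between those two top rates is easy to quantify. A deliberately simplified sketch comparing the top ordinary-income rate with the top long-term capital-gains rate on a hypothetical $1 million gain; a real calculation would also involve brackets, deductions, payroll taxes and state taxes, all ignored here:

```python
# Simplified comparison of top-rate treatment only; actual tax liability
# depends on brackets, deductions, payroll and state taxes, ignored here.
gain = 1_000_000                    # hypothetical gain, for illustration
tax_as_wages = gain * 0.37          # top ordinary-income rate
tax_as_capital_gains = gain * 0.20  # top long-term capital-gains rate
savings = tax_as_wages - tax_as_capital_gains
print(f"${savings:,.0f} less tax when taken as capital gains")
```

On this stylized comparison, routing $1 million through capital gains rather than salary saves $170,000, which is the structural advantage the paragraph describes.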

Moreover, Amazon itself paid no federal corporate income taxes last year, despite making billions of dollars in profits. It has fought tooth and nail against state and local taxes, and has successfully cajoled cities into promising it billions and billions and billions in write-offs and investment incentives in exchange for placing jobs there. (Given that Bezos is a major Amazon shareholder, such tax-dodging redounds directly to his benefit.)

Or consider the country’s low minimum wage, a policy that again benefits corporations at the expense of workers. Amazon’s starting wage is about $5 an hour below the country’s national living wage, and its median full-time wage is a full dollar below it as well: The company is profitable and has money to invest in operations and expansions because its labor force is so cheap. Of course, it is not cheap for taxpayers, who ameliorate the effects of poverty wages with policies like the Earned Income Tax Credit, Medicaid, and the Supplemental Nutrition Assistance Program. One in three Amazon employees in the state of Arizona is reportedly on food stamps.

Noncompete agreements are another tool Amazon and other big companies use to suppress the costs of labor and to bolster their bottom lines, to the benefit of major shareholders. Amazon’s contracts have required employees to promise that they will not work for any company that “directly or indirectly” competes with Amazon for 18 months after leaving the firm. Given the breadth of Amazon’s business, that means taking a job with Bezos might have meant turning down a future job not just at Walmart, but also at postal companies, logistics businesses, warehouses, and retailers. “Amazon appears to be requiring temp workers to forswear a sizable portion of the global economy in exchange for a several-months-long hourly warehouse gig,” The Verge, which reported on the contracts, argued.

Such non-compete and no-poaching clauses used to be common only among executives and other high-income workers, but now roughly one in five workers is covered by them; more than half of major franchise businesses, like McDonald’s, include no-poaching agreements in their contracts. This suppresses wages by reducing competition for workers—and is now seen as one of several reasons wage growth has been so sluggish during the recovery.

Stripping workers of the right to move among employers is just one way that Amazon and other big businesses are flexing their monopoly and monopsony power—again with Uncle Sam helping companies at the expense of workers. Amazon’s dominance in e-commerce, particularly in markets like book-selling, has given it pricing power to squeeze both the companies it purchases goods from and its own employees. A recent study by The Economist found that Amazon opening a fulfillment center in a given community actually depresses warehouse wages: In counties without an Amazon center, warehouse workers earn an average of $45,000 a year, versus $41,000 a year in counties with an Amazon center. The data also show that in the two-and-a-half years after Amazon opens a new fulfillment center, local warehouse wages fall by 3 percent.

“In local labor markets that are highly concentrated, concentration contributes to lower wages,” said Sandeep Vaheesan, policy counsel at the Open Markets Institute, a Washington think tank that studies market competition. “Amazon wields a great deal of power over both its workers and its suppliers. Where Amazon distribution centers are located, especially in rural and more exurban areas, they are one of the [most] powerful local employers and likely have a great deal of wage-setting power—and so they can depress wages below what would exist in sort of more competitive and less concentrated market.”

Finally, there is the decline of unions. Since its founding in 1994, Amazon has again and again sought to prevent the unionization of its workforce, a development that would likely bolster wages and improve working conditions. Amazon has reportedly shut down operations where workers were seeking to organize, fired employees advocating for unionization, hired law firms to counter organizing drives at warehouses around the country, and given managers instructions on how to union-bust. (It has denied retaliating against workplaces seeking collective bargaining.) At the same time, the government, in its regulatory bodies and the courts, has again and again sided against unions and in favor of business owners.

All of these trends have shifted income upward, suppressing worker power and helping people higher up on the income ladder turn simple earnings into self-perpetuating, ever-growing wealth. “The period since 1973 has been characterized by falling purchasing power of the minimum wage,” said Mark Price, a labor economist at the Keystone Research Center. “It’s been characterized by a rapid decline in union density and by the falling top tax rate. It’s been characterized by no-poaching agreements among low-wage service employees.” As such, he said, it has been characterized by spiraling wealth and income inequality.

In recent months, the Trump administration has tilted policy to enhance these decades-long trends, rather than to counter them. President Trump himself has hammered Amazon for not paying high enough postage rates, and taken Bezos to task for the Washington Post’s Pulitzer-winning coverage of his administration. Yet his White House has slashed taxes for corporations and the rich, rather than for middle-income workers, all while preserving loopholes and deductions for investment income. It is now reportedly seeking to give away another $100 billion to investors via a capital-gains tax cut. It has reduced companies’ regulatory burdens and appointed the most pro-business Supreme Court in history. It has declined to push for higher minimum wages, or stronger workplace protections.

The result of these decades of trends and policy choices is that Jeff Bezos has accumulated a $150 billion fortune while the average American family is poorer than it was when the Great Recession hit. Concerns about such astonishing levels of inequality are not just about fairness, nor are they just sour grapes about runaway success. The point is not that Jeff Bezos himself has done wrong by accumulating such wealth, or creating such profitable and world-changing businesses. But wealth concentration is bad for the economy and the country itself, and the government has failed to counter it. Rising inequality fuels political polarization and partisan gridlock. It slows economic growth, and implies a lack of competition that fuels economic sclerosis. It makes the government less responsive to the demands of normal people, potentially putting our very democracy at risk. Bezos’s extraordinary fortune shows that the game is rigged. He just happened to play it better than anyone else.

The slogan, known as the “success sequence,” refers to a time-honored series of life events: graduating from high school (at least), getting a full-time job, and marrying before having kids (in that order). As the conservative columnist George Will wrote last year (in a piece headlined, in part, “Listen up, millennials”), “Of the several causes of descent … into the intergenerational transmission of poverty, one was paramount: family disintegration.” He called the success sequence “insurance against poverty” for young adults.

The success sequence has a powerful allure for its adherents. But just as strongly, the idea repels: A number of critics—many of whom are academics and have sturdy research to back up their position—reject it, not because following it is a bad idea, but rather because it traces a path that people already likely to succeed usually walk, as opposed to describing a technique that will lift people over systemic hurdles they face in doing so. The success sequence, trustworthy as it may sound, conveniently frames structural inequalities as matters of individual choice.

The concept of the success sequence has caught on for multiple reasons. “I think part of the appeal is it’s a fairly straightforward way of formulating a life script,” Brad Wilcox, the director of the National Marriage Project at the University of Virginia, a professor of sociology there, and the best-known advocate of the success sequence, explained. “It has some kind of connection to the way most people came up and they sort of see perennial wisdom. And I think the fact that there are three steps to follow. That is appealing.”

Stephanie Coontz, a professor of history and family studies at Evergreen State College in Olympia, Washington, the director of public education at the nonprofit Council on Contemporary Families, and the author of Marriage, a History: How Love Conquered Marriage, largely agrees with Wilcox. “It’s a 20-second sound bite anybody can agree with,” she says. “It’s advice everybody gives their kids. There’s nothing complicated about it.”

The way the success sequence rose to prominence as a prescription for poverty says a lot about the narratives that America tells itself about meritocracy and who’s deserving of “success.” As far as I’ve been able to determine, the first use of the term occurred in 2006, when the historian and writer Barbara Dafoe Whitehead and the sociologist Marline Pearson co-authored a report called “Making a Love Connection” for the National Campaign to Prevent Teen Pregnancy, a nonpartisan sexual-health advocacy group that’s now known as Power To Decide.

Whitehead and Pearson wrote that modern teenagers “lack what earlier generations took for granted: a normative sequence for the timing of sex, marriage and parenthood.” Then they put a slogan on such a sequence and described it. Teens, they wrote, “lack knowledge of what might be called the ‘success’ sequence: Finish high school, or better still, get a college degree; wait until your twenties to marry; and have children after you marry.”

So the phrase was born. In the years since, the sequence has been modified somewhat by its promoters to graduate high school, get a full-time job, and marry before having babies, and “success” has been defined down a little to mean “stay out of poverty.”

There has long been concern over the personal economic impact of non-marital parenthood, but a notable flare-up in the debate—one that presages the present-day success-sequence campaign—was sparked by a TV plotline. In May of 1992, the TV character Murphy Brown, of the eponymously titled show, gave birth to a child. She was single, and many social conservatives were outraged; Vice President Dan Quayle condemned the plot, saying that it “mock[ed] the importance of fathers.”

Four months later, with the Murphy Brown debate still fresh, Nicholas Zill, from a think tank called Child Trends, spoke on the topic of childhood poverty in testimony before the House Subcommittee on Human Resources. He never mentioned Brown, but he nonetheless gave powerful ammunition to social conservatives. In his talk, Zill presented data showing that 45 percent of children in single-parent families lived in poverty, versus 8 percent of children in married-couple families. He talked about both individual behavior—as in, being a single parent—and structural obstacles that perpetuate childhood poverty, such as racial discrimination, high unemployment, and too-low wages.

Why millennials are facing the scariest financial future of any generation since the Great Depression.

By Michael Hobbes

Like everyone in my generation, I am finding it increasingly difficult not to be scared about the future and angry about the past.

I am 35 years old—the oldest millennial, the first millennial—and for a decade now, I’ve been waiting for adulthood to kick in. My rent consumes nearly half my income, I haven’t had a steady job since Pluto was a planet and my savings are dwindling faster than the ice caps the baby boomers melted.

What’s a millennial anyway?

Unless otherwise noted, we mean anyone born between 1982 and 2004

We’ve all heard the statistics. More millennials live with their parents than with roommates. We are delaying partner-marrying and house-buying and kid-having for longer than any previous generation. And, according to The Olds, our problems are all our fault: We got the wrong degree. We spend money we don’t have on things we don’t need. We still haven’t learned to code. We killed cereal and department stores and golf and napkins and lunch. Mention “millennial” to anyone over 40 and the word “entitlement” will come back at you within seconds, our own intergenerational game of Marco Polo.

This is what it feels like to be young now. Not only are we screwed, but we have to listen to lectures about our laziness and our participation trophies from the people who screwed us.

But generalizations about millennials, like those about any other arbitrarily defined group of 75 million people, fall apart under the slightest scrutiny. Contrary to the cliché, the vast majority of millennials did not go to college, do not work as baristas and cannot lean on their parents for help. Every stereotype of our generation applies only to the tiniest, richest, whitest sliver of young people. And the circumstances we live in are more dire than most people realize.

• We’ve taken on at least 300% more student debt than our parents

(Source: The College Board, Trends in Student Aid 2013. Calculations based on average per-student borrowing in 1980 and 2010.)

• We’re about half as likely to own a home as young adults were in 1975

(Source: U.S. Census, young adults ages 24-35.)

• 1 in 5 of us is living in poverty

(Source: U.S. Census, young adults ages 18-34.)

• Based on current trends, many of us won’t be able to retire until we’re 75

(Source: Projection for the class of 2015 based on a NerdWallet analysis of federal data.)

But it’s not just the numbers.

What is different about us as individuals compared to previous generations is minor. What is different about the world around us is profound. Salaries have stagnated and entire sectors have cratered. At the same time, the cost of every prerequisite of a secure existence—education, housing and health care—has inflated into the stratosphere. From job security to the social safety net, all the structures that insulate us from ruin are eroding. And the opportunities leading to a middle-class life—the ones that boomers lucked into—are being lifted out of our reach. Add it all up and it’s no surprise that we’re the first generation in modern history to end up poorer than our parents.

This is why the touchstone experience of millennials, the thing that truly defines us, is not helicopter parenting or unpaid internships or Pokémon Go. It is uncertainty. “Some days I breathe and it feels like something is about to burst out of my chest,” says Jimmi Matsinger. “I’m 25 and I’m still in the same place I was when I earned minimum wage.” Four days a week she works at a dental office, Fridays she nannies, weekends she babysits. And still she couldn’t keep up with her rent, car lease and student loans. Earlier this year she had to borrow money to file for bankruptcy. I heard the same walls-closing-in anxiety from millennials around the country and across the income scale, from cashiers in Detroit to nurses in Seattle.

It’s tempting to look at the recession as the cause of all this, the Great Fuckening from which we are still waiting to recover. But what we are living through now, and what the recession merely accelerated, is a historic convergence of economic maladies, many of them decades in the making. Decision by decision, the economy has turned into a young people-screwing machine. And unless something changes, our calamity is going to become America’s.

Chapter 1

Never-ending Job Insecurity FTW

What Scott remembers are the group interviews.

Eight, 10 people in suits, a circle of folding chairs, a chirpy HR rep with a clipboard. Each applicant telling her, one by one, in front of all the others, why he’s the right candidate for this $11-an-hour job as a bank teller.

It was 2010, and Scott had just graduated from college with a bachelor’s in economics, a minor in business and $30,000 in student debt. At some of the interviews he was by far the least qualified person in the room. The other applicants described their corporate jobs and listed off graduate degrees. Some looked like they were in their 50s. “One time the HR rep told us she did these three times a week,” Scott says. “And I just knew I was never going to get a job.”

After six months of applying and interviewing and never hearing back, Scott returned to his high school job at The Old Spaghetti Factory. After that he bounced around—selling suits at a Nordstrom outlet, cleaning carpets, waiting tables—until he learned that city bus drivers earn $22 an hour and get full benefits. He’s been doing that for a year now. It’s the most money he’s ever made. He still lives at home, chipping in a few hundred bucks every month to help his mom pay the rent.

In theory, Scott could apply for banking jobs again. But his degree is almost eight years old and he has no relevant experience. He sometimes considers getting a master’s, but that would mean walking away from his salary and benefits for two years and taking on another five digits of debt—just to snag an entry-level position, at the age of 30, that would pay less than he makes driving a bus. At his current job, he’ll be able to move out in six months. And pay off his student loans in 20 years.

There are millions of Scotts in the modern economy. “A lot of workers were just 18 at the wrong time,” says William Spriggs, an economics professor at Howard University and an assistant secretary for policy at the Department of Labor in the Obama administration. “Employers didn’t say, ‘Oops, we missed a generation. In 2008 we weren’t hiring graduates, let’s hire all the people we passed over.’ No, they hired the class of 2012.”

You can even see this in the statistics, a divot from 2008 to 2012 where millions of jobs and billions in earnings should be. In 2007, more than 50 percent of college graduates had a job offer lined up. For the class of 2009, fewer than 20 percent of them did. According to a 2010 study, every 1 percent uptick in the unemployment rate the year you graduate college means a 6 to 8 percent drop in your starting salary—a disadvantage that can linger for decades. The same study found that workers who graduated during the 1981 recession were still making less than their counterparts who graduated 10 years later. “Every recession,” Spriggs says, “creates these cohorts that never recover.”

Sources: “Cashier or Consultant? Entry Labor Market Conditions, Field of Study, and Career Success,” by Joseph G. Altonji, Lisa B. Kahn & Jamin D. Speer, Journal of Labor Economics, 2016; and “The long-term labor market consequences of graduating from college in a bad economy,” by Lisa B. Kahn, Labour Economics, 2010. Projections assume initial earnings of $50,000 and are based on the researchers’ analysis of earnings during periods of growth and recession from 1980 to 2011.
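The scarring effect the study describes can be sketched numerically. A minimal illustration using the cited 6-to-8 percent penalty per point of unemployment and the projections' stated $50,000 baseline; the 4-point unemployment jump and the assumption that the penalty fades linearly over ten years are mine, for illustration only, not figures from the research:

```python
# Illustrative wage-scarring calculation. The per-point penalty comes from
# the Kahn study cited above; the size of the unemployment jump and the
# linear ten-year fade are assumptions made here for the example.
initial_salary = 50_000       # baseline from the projections above
penalty_per_point = 0.07      # midpoint of the 6-8% range
unemployment_jump = 4.0       # assumed recession-era rise, in points

start_penalty = penalty_per_point * unemployment_jump  # 28% lower start
for year in (0, 5, 10):
    fade = max(0.0, 1 - year / 10)       # assumed linear recovery
    salary = initial_salary * (1 - start_penalty * fade)
    print(f"year {year}: ${salary:,.0f}")
```

Under those assumptions a graduate starts around $36,000 instead of $50,000 and spends a decade clawing back the difference; the foregone earnings along the way are never repaid, which is the "never recover" dynamic Spriggs describes.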

By now, those unlucky millennials who graduated at the wrong time have cascaded downward through the economy. Some estimates show that 48 percent of workers with bachelor’s degrees are employed in jobs for which they’re overqualified. A university diploma has practically become a prerequisite for even the lowest-paying positions, just another piece of paper to flash in front of the hiring manager at Quiznos.

But the real victims of this credential inflation are the two-thirds of millennials who didn’t go to college. Since 2010, the economy has added 11.6 million jobs—and 11.5 million of them have gone to workers with at least some college education. In 2016, young workers with a high school diploma had roughly triple the unemployment rate and three and a half times the poverty rate of college grads.

Once you start tracing these trends backward, the recession starts to look less like a temporary setback and more like a culmination. Over the last 40 years, as politicians and parents and perky magazine listicles have been telling us to study hard and build our personal brands, the entire economy has transformed beneath us.

For decades, most of the job growth in America has been in low-wage, low-skilled, temporary and short-term jobs. The United States simply produces fewer and fewer of the kinds of jobs our parents had. This explains why the rates of “under-employment” among high school and college grads were rising steadily long before the recession. “The way to think about it,” says Jacob Hacker, a Yale political scientist and author of The Great Risk Shift, “is that there are waves in the economy, but the tide has been going out for a long time.”

The decline of the job has its primary origins in the 1970s, with a million little changes the boomers barely noticed. The Federal Reserve cracked down on inflation. Companies started paying executives in stock options. Pension funds invested in riskier assets. The cumulative result was money pouring into the stock market like jet fuel. Between 1960 and 2013, the average time that investors held stocks before flipping them went from eight years to around four months. Over roughly the same period, the financial sector became a sarlacc pit encompassing around a quarter of all corporate profits and completely warping companies’ incentives.

The pressure to deliver immediate returns became relentless. When stocks were long-term investments, shareholders let CEOs spend money on things like worker benefits because they contributed to the company’s long-term health. Once investors lost the ability to look beyond the next earnings report, however, any move that didn’t boost short-term profits was tantamount to treason.

The new paradigm took over corporate America. Private equity firms and commercial banks took corporations off the market, laid off or outsourced workers, then sold the businesses back to investors. In the 1980s alone, a quarter of the companies in the Fortune 500 were restructured. Companies were no longer single entities with responsibilities to their workers, retirees or communities.

They were Lego castles, clusters of distinct modules that could be separated, optimized, sold off and put back together.

Businesses applied the same chop-shop logic to their own operations. Executives came to see themselves as first and foremost in the shareholder-pleasing game. Higher staff salaries became luxuries to be slashed. Unions, the great negotiators of wages and benefits and the guarantors of severance pay, became enemy combatants. And eventually, employees themselves became liabilities. “Corporations decided that the fastest way to a higher stock price was hiring part-time workers, lowering wages and turning their existing employees into contractors,” says Rosemary Batt, a Cornell University economist.

Hours of minimum wage work needed to pay for four years of public college

Boomer: 306

Millennial: 4,459

(Source: National Center for Education Statistics. Calculations based on tuition for four-year public universities from 1973-1976 and 2003-2006.)

Thirty years ago, she says, you could walk into any hotel in America and everyone in the building, from the cleaners to the security guards to the bartenders, was a direct hire, each worker on the same pay scale and enjoying the same benefits as everyone else. Today, they’re almost all indirect hires, employees of random, anonymous contracting companies: Laundry Inc., Rent-A-Guard Inc., Watery Margarita Inc. In 2015, the Government Accountability Office estimated that 40 percent of American workers were employed under some sort of “contingent” arrangement like this—from barbers to midwives to nuclear waste inspectors to symphony cellists. Since the downturn, the industry that has added the most jobs is not tech or retail or nursing. It is “temporary help services”—all the small, no-brand contractors who recruit workers and rent them out to bigger companies.

The effect of all this “domestic outsourcing”—and, let’s be honest, its actual purpose—is that workers get a lot less out of their jobs than they used to. One of Batt’s papers found that employees lose up to 40 percent of their salary when they’re “re-classified” as contractors. In 2013, the city of Memphis reportedly cut wages from $15 an hour to $10 after it fired its school bus drivers and forced them to reapply through a staffing agency. Some Walmart “lumpers,” the warehouse workers who carry boxes from trucks to shelves, have to show up every morning but only get paid if there’s enough work for them that day.

“This is what’s really driving wage inequality,” says David Weil, the former head of the Wage and Hour Division of the Department of Labor and the author of The Fissured Workplace. “By shifting tasks to contractors, companies pay a price for a service rather than wages for work. That means they don’t have to think about training, career advancement or benefit provision.”

This transformation is affecting the entire economy, but millennials are on its front lines. Where previous generations were able to amass years of solid experience and income in the old economy, many of us will spend our entire working lives intermittently employed in the new one. We’ll get less training and fewer opportunities to negotiate benefits through unions (which used to cover 1 in 3 workers and are now down to around 1 in 10). Plus, as Uber and its “gig economy” ilk perfect their algorithms, we’ll be increasingly at the mercy of companies that only want to pay us for the time we’re generating revenue and not a second more.

But the blame doesn’t only fall on companies. Trade groups have responded to the dwindling number of secure jobs by digging a moat around the few that are left. Over the last 30 years, they’ve successfully lobbied state governments to require occupational licenses for dozens of jobs that never used to need them. It makes sense: The harder it is to become a plumber, the fewer plumbers there will be and the more each of them can charge. Nearly a third of American workers now need some kind of state license to do their jobs, compared to less than 5 percent in 1950. In most other developed countries, you don’t need official permission to cut hair or pour drinks. Here, those jobs can require up to $20,000 in schooling and 2,100 hours of instruction and unpaid practice.

In sum, nearly every path to a stable income now demands tens of thousands of dollars before you get your first paycheck or have any idea whether you’ve chosen the right career path. “I was literally paying to work,” says Elena, a 29-year-old dietician in Texas. (I’ve changed the names of some of the people in this story because they don’t want to get fired.) As part of her master’s degree, she was required to do a yearlong “internship” in a hospital. It was supposed to be training, but she says she worked the same hours and did the same tasks as paid staffers. “I took out an extra $20,000 in student loans to pay tuition for the year I was working for free,” she says.

All of these trends—the cost of education, the rise of contracting, the barriers to skilled occupations—add up to an economy that has deliberately shifted the risk of economic recession and industry disruption away from companies and onto individuals. For our parents, a job was a guarantee of a secure adulthood. For us, it is a gamble. And if we suffer a setback along the way, there’s so little to keep us from sliding into disaster.

Chapter 2

TFW They Broke The Safety Net

Becoming poor is not an event. It is a process.

Like a plane crash, poverty is rarely caused by one thing going wrong. Usually, it is a series of misfortunes—a job loss, then a car accident, then an eviction—that interact and compound.

I heard the most acute description of how this happens from Anirudh Krishna, a Duke University professor who has, over the last 15 years, interviewed more than 1,000 people who fell into poverty and escaped it. He started in India and Kenya, but eventually, his grad students talked him into doing the same thing in North Carolina. The mechanism, he discovered, was the same.

We often think of poverty in America as a pool, a fixed portion of the population that remains destitute for years. In fact, Krishna says, poverty is more like a lake, with streams flowing steadily in and out all the time. “The number of people in danger of becoming poor is far larger than the number of people who are actually poor,” he says.

We’re all living in a state of permanent volatility. Between 1970 and 2002, the probability that a working-age American would unexpectedly lose at least half her family income more than doubled. And the danger is particularly severe for young people. In the 1970s, when the boomers were our age, young workers had a 24 percent chance of falling below the poverty line. By the 1990s, that had risen to 37 percent. And the numbers only seem to be getting worse. From 1979 to 2014, the poverty rate among young workers with only a high school diploma more than tripled, to 22 percent. “Millennials feel like they can lose everything at any time,” says Jacob Hacker, a Yale political scientist. “And, increasingly, they can.”

Here’s what that downward slide looks like. Gabriel is 19 years old and lives in a small town in Oregon. He plays the piano and, until recently, was saving up to study music at an arts college. Last summer he was working at a health supplement company. It wasn’t the most glamorous job, lugging boxes and blending ingredients, but he made $12.50 an hour and he hoped he could step up to a better position if he proved himself.

Then his sister got into a car accident, T-boned turning into their driveway. “She couldn’t walk; she couldn’t think,” Gabriel says. His mom wasn’t able to take a day off without risking losing her job, so Gabriel called his boss and left a message saying he had to miss work for a day to get his sister home from the hospital.

Average annual stock market returns on 401(k) plans

Boomer: 6.3%

Millennial: 2.9%

(Source: “The changing equation: Building for retirement in a low return world,” BlackRock, October 2016. Percentages based on average returns from 1978-2016 for boomers and projected returns from 2016 onward for millennials.)

The next day, his temp agency called: He was fired. Though Gabriel says no one had told him, the company had a three-strikes policy for unplanned absences. He had already missed one day for a cold and another for a staph infection, so this was it. A former colleague told him that his absences meant he was unlikely to get a job there again.

So now Gabriel works at Taco Time and lives in a trailer with his mom and his sisters. Most of his paycheck goes to gas and groceries because his mom’s income is disappearing into the family’s medical bills. He still wants to go to college. But since he can barely keep his head above water, he’s set his sights on an electrician’s apprenticeship program offered by a local nonprofit. “I don’t understand why it’s so hard to do something with your life,” he tells me.

The answer is brutally simple. In an economy where wages are precarious and the safety net has been hacked into ribbons, one piece of bad luck can easily become a years-long struggle to get back to normal.

Over the last four decades, there has been a profound shift in the relationship between the government and its citizens. In The Age of Responsibility, Yascha Mounk, a political theorist, writes that before the 1980s, the idea of “responsibility” was understood as something each American owed to the people around them, a national project to keep the most vulnerable from falling below basic subsistence. Even Richard Nixon, not exactly known for lifting up the downtrodden, proposed a national welfare benefit and a version of a guaranteed income. But under Ronald Reagan and then Bill Clinton, the meaning of “responsibility” changed. It became individualized, a duty to earn the benefits your country offered you.

Since 1996, the percentage of poor families receiving cash assistance from the government has fallen from 68 percent to 23 percent. No state provides cash benefits that add up to the poverty line. Eligibility criteria have been surgically tightened, often with requirements that are counterproductive to actually escaping poverty. Take Temporary Assistance for Needy Families, which ostensibly supports poor families with children. Its predecessor (with a different acronym) had the goal of helping parents of kids under 7, usually through simple cash payments. These days, those benefits are explicitly geared toward getting mothers away from their children and into the workforce as soon as possible. A few states require women to enroll in training or start applying for jobs the day after they give birth.

The list goes on. Housing assistance, for many people the difference between losing a job and losing everything, has been slashed into oblivion. (To pick just one example, in 2014 Baltimore had 75,000 applicants for 1,500 rental vouchers.) Food stamps, the closest thing to universal benefits we have left, provide, on average, $1.40 per meal.

In what seems like some kind of perverse joke, nearly every form of welfare now available to young people is attached to traditional employment. Unemployment benefits and workers’ compensation are limited to employees. The only major expansions of welfare since 1980 have been to the Earned Income Tax Credit and the Child Tax Credit, both of which supplement wages that workers have already earned.

Back when we had decent jobs and strong unions, it (kind of) made sense to provide things like health care and retirement savings through employer benefits. But now, for freelancers and temps and short-term contractors—i.e., us—those benefits might as well be Monopoly money. Forty-one percent of working millennials aren’t even eligible for retirement plans through their companies.

And then there’s health care.

In 1980, 4 out of 5 employees got health insurance through their jobs. Now, just over half of them do. Millennials can stay on our parents’ plans until we turn 26. But the cohort right afterward, 26- to 34-year-olds, has the highest uninsured rate in the country and millennials—alarmingly—have more collective medical debt than the boomers. Even Obamacare, one of the few expansions of the safety net since man walked on the moon, still leaves us out in the open. Millennials who can afford to buy plans on the exchanges face premiums (next year mine will be $388 a month), deductibles ($850) and out-of-pocket limits ($5,000) that, for many young people, are too high to absorb without help. And of the events that precipitate the spiral into poverty, according to Krishna, an injury or illness is the most common trigger.

“All of us are one life event away from losing everything,” says Ashley Lauber, a bankruptcy lawyer in Seattle and an Old Millennial like me. For most of her clients under 35, she says, the slide toward bankruptcy starts with a car accident or a medical bill. “You can’t afford your deductible, so you go to Moneytree and take out a loan for a few hundred bucks. Then you miss your payments and the collectors start calling you at work, telling your boss you can’t pay. Then he gets sick of it and he fires you and it all gets worse.” For a lot of her millennial clients, Lauber says, the difference between escaping debt and going bankrupt comes down to the only safety net they have—their parents.

But this fail-safe, like all the others, isn’t equally available to everyone. The wealth gap between white and non-white families is massive. Since basically forever, almost every avenue of wealth creation—higher education, homeownership, access to credit—has been denied to minorities through discrimination both obvious and invisible. And the disparity has only grown wider since the recession. From 2007 to 2010, black families’ retirement accounts shrank by 35 percent, whereas white families, who are more likely to have other sources of money, saw their accounts grow by 9 percent.

The result is that millennials of color are even more exposed to disaster than their peers. Many white millennials have an iceberg of accumulated wealth from their parents and grandparents that they can draw on for help with tuition, rent or a place to stay during an unpaid internship. According to the Institute on Assets and Social Policy, white Americans are five times more likely to receive an inheritance than black Americans—which can be enough to make a down payment on a house or pay off student loans. By contrast, 67 percent of black families and 71 percent of Latino families don’t have enough money saved to cover three months of living expenses.

And so, instead of receiving help from their families, millennials of color are more likely to be called on to provide it. Any extra income from a new job or a raise tends to get swallowed by bills or debts that many white millennials had help with. Four years after graduation, black college graduates have, on average, nearly twice as much student debt as their white counterparts and are three times more likely to be behind on payments. This financial undertow is captured in one staggering statistic: Every extra dollar of income earned by a middle-class white family generates $5.19 in new wealth. For black families, it’s 69 cents.

Sources: “The Road to Zero Wealth,” Institute for Policy Studies, September 2017 and “Household Wealth Trends In The United States, 1962-2013: What Happened Over the Great Recession?” National Bureau of Economic Research, December 2014.

Want to get even more depressed? Sit down and think about what’s going to happen to us when we get old. Despite all the stories you read about flighty millennials refusing to plan for retirement (as if our grandparents were obsessing over the details of their pension plans when they were 25), the biggest problem we face is not financial illiteracy. It is compound interest.

In the coming decades, the returns on 401(k) plans are expected to fall by half. According to an analysis by the Employee Benefit Research Institute, a drop in stock market returns of just 2 percentage points means a 25-year-old would have to contribute more than double the amount to her retirement savings that a boomer did. Oh, and she’ll have to do it on lower wages. This scenario gets even more dire when you consider what’s going to happen to Social Security by the time we make it to 65. There, too, it seems inevitable that we’re going to get screwed by demography: In 1950, there were 17 American workers to support each retiree. When millennials retire, there will be just two.
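The compounding arithmetic behind that “more than double” claim can be checked with a short sketch. The 6.3 and 2.9 percent return figures come from the stat box above; the 40-year horizon and end-of-year contribution model are my own simplifying assumptions, not EBRI’s actual methodology:

```python
def annuity_factor(rate, years):
    """Future value of $1 contributed at the end of each year,
    compounded annually at the given rate."""
    return ((1 + rate) ** years - 1) / rate

# Saving from age 25 to 65 at boomer-era vs. projected returns
# (6.3% and 2.9%, per the BlackRock figures above).
boomer = annuity_factor(0.063, 40)
millennial = annuity_factor(0.029, 40)

# To end up with the same nest egg, the required annual contribution
# scales inversely with the annuity factor.
ratio = boomer / millennial
print(round(ratio, 2))  # a bit over 2: "more than double"
```

A two-point drop in returns, compounded over a working life, roughly cuts the final balance per dollar saved in half, which is why the required contribution more than doubles.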

There’s one way that many Americans have traditionally managed to build wealth for themselves, to achieve some kind of dignity and comfort in old age. I’m talking, of course, about homeownership. At least we’ve got a shot at that, right?

Chapter 3

RIP Your Chances of Affording a Home

Whenever Tyrone moves into a new apartment, he lies down naked on his living room floor.

It’s a ritual, a reminder of the years he spent without a floor underneath him or a ceiling above. He was homeless for four years in Georgia: sleeping on benches, biking to interviews in the heat, arriving an hour early so he wouldn’t be sweaty for the handshake. When he finally got a job, his co-workers found out that he washed himself in gas station bathrooms and made him so miserable he quit. “They said I ‘smelled homeless,’” he says.

Tyrone moved to Seattle six years ago, when he was 23, because he’d heard the minimum wage there was almost double what he made in Atlanta. He got a job at a grocery store and slept in a shelter while he saved. Since then, his income has gone up, but he’s been pushed farther and farther from the city. First stop was subsidized housing in Kirkland, 20 minutes east across the lake. Then a rented house in Tacoma, 45 minutes south, sharing a bedroom with his girlfriend and, eventually, a son. The breakup is why he’s now in Lakewood, even farther south, in a one-bedroom right next to a freeway entrance.

And it’s already such a strain. Tyrone earns $17 an hour as a security guard at a building site, his highest wage ever. But he’s a contractor (of course), so he doesn’t get sick leave or health insurance. His rent is $1,100 a month. It’s more than he can afford, but he could only find one building that would let him move in without paying the full deposit in advance.

Since rent is due on the 1st and he gets paid on the 7th, his landlord adds a $100 late fee to each month’s bill. After that and the car payments—it’s a two-hour bus ride from the suburb where he lives to the suburb where he works—he has $200 left over every month for food. The first time we met, it was the 27th of the month and Tyrone told me his account was already zeroed out. He had pawned his skateboard the previous night for gas money.

Despite the acres of news pages dedicated to the narrative that millennials refuse to grow up, there are twice as many young people like Tyrone—living on their own and earning less than $30,000 per year—as there are millennials living with their parents. The crisis of our generation cannot be separated from the crisis of affordable housing.

More people are renting homes than at any time since the late 1960s. But in the 40 years leading up to the recession, rents increased at more than twice the rate of incomes. Between 2001 and 2014, the number of “severely burdened” renters—households spending over half their incomes on rent—grew by more than 50 percent. Rather unsurprisingly, as housing prices have exploded, the number of 30- to 34-year-olds who own homes has plummeted.

Falling homeownership rates, on their own, aren’t necessarily a catastrophe. But our country has contrived an entire “Game of Life” sequence that hinges on being able to buy a home. You rent for a while to save up for a down payment, then you buy a starter home with your partner, then you move into a larger place and raise a family. Once you pay off the mortgage, your house is either an asset to sell or a cheap place to live in retirement. Fin.

This worked well when rents were low enough to save and homes were cheap enough to buy. In one of the most infuriating conversations I had for this article, my father breezily informed me that he bought his first house at 29. It was 1973, he had just moved to Seattle and his job as a university professor paid him (adjusted for inflation) around $76,000 a year. The house cost $124,000 — again, in today’s dollars. I am six years older now than my dad was then. I earn less than he did and the median home price in Seattle is around $730,000. My father’s first house cost him 20 months of his salary. My first house will cost more than 10 years of mine.
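The months-of-salary arithmetic in that comparison is easy to reproduce. The only extra assumption below is using the father’s $76,000 salary as a stand-in for the author’s; he says he earns less, so the real figure is even worse than the one computed:

```python
# Figures from the passage, all in today's dollars.
dad_salary = 76_000        # father's 1973 professor salary, inflation-adjusted
dad_house = 124_000        # father's first house, inflation-adjusted
seattle_median = 730_000   # median Seattle home price today

months_for_dad = dad_house / (dad_salary / 12)
print(round(months_for_dad))   # roughly 20 months of salary

# Assumption: price the median home against the father's salary.
# The author earns less, so his actual figure tops 10 years.
years_now = seattle_median / dad_salary
print(round(years_now, 1))
```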

Which prompts the question: How did housing in America become so freaking expensive?

Here’s your city. Here’s downtown. That’s where lots of the good jobs are.

Most people want to live fewer than 30 minutes from work. So, for much of the 20th century, big cities built housing close to jobs.

When the inner ring of suburbs filled up, cities built freeways to whisk workers to the next.

There’s a simple fix for this problem: Build more housing close to jobs.

For a long time, that’s what cities did. They built upward, divided homes into apartments and added duplexes and townhomes.

But in the 1970s, they stopped building. Cities kept adding jobs and people. But they didn’t add more housing. And that’s when prices started to climb.

So much of this can be explained by one word:

ZONING

At first, zoning was pretty modest. The point was to stop someone from buying your neighbor’s house and turning it into an oil refinery.

But eventually people realized they could use zoning for other purposes.

In the late 1960s, it finally became illegal to deny housing to minorities. So cities instituted weirdly specific rules that drove up the price of new houses and excluded poor people—who were, disproportionately, minorities.

Houses had to have massive backyards. They couldn’t be split into separate apartments. Basically, cities mandated McMansions.

We’re still living with that legacy. Across huge swaths of American cities, it’s pretty much illegal to build affordable housing.

And this problem is only getting worse.

That’s because all the urgency to build comes from people who need somewhere to live. But all the political power is held by people who already own homes.

For homeowners, there’s no such thing as a housing crisis.

Why?

Because when property values go up, so does their net worth. They have every reason to block new construction.

They do that by weaponizing environmental regulations and historical preservation rules.

They force buildings to be shorter so they don’t cast shadows. They demand two parking spaces for every single unit.

They complain that a new apartment building will destroy “neighborhood character” when the structure it’s replacing is… a parking garage. (True story.)

All this extra hassle means construction takes longer and costs more.

Which means that the only way most developers can make a profit is to build luxury condos.

So that’s why cities are so unaffordable. The entire system is structured to produce expensive housing when we desperately need the opposite.

The housing crisis in our most prosperous cities is now distorting the entire American economy. For most of the 20th century, the way many workers improved their financial fortunes was to move closer to opportunities. Rents were higher in the boomtowns, but so were wages.

Since the Great Recession, the “good” jobs—secure, non-temp, decent salary—have concentrated in cities like never before. America’s 100 largest metros have added 6 million jobs since the downturn. Rural areas, meanwhile, still have fewer jobs than they did in 2007. For young people trying to find work, moving to a major city is not an indulgence. It is a virtual necessity.

But the soaring rents in big cities are now canceling out the higher wages. Back in 1970, according to a Harvard study, an unskilled worker who moved from a low-income state to a high-income state kept 79 percent of his increased wages after he paid for housing. A worker who made the same move in 2010 kept just 36 percent. For the first time in U.S. history, says Daniel Shoag, one of the study’s co-authors, it no longer makes sense for an unskilled worker in Utah to head for New York in the hope of building a better life.

This leaves young people, especially those without a college degree, with an impossible choice. They can move to a city where there are good jobs but insane rents. Or they can move somewhere with low rents but few jobs that pay above the minimum wage.

This dilemma is feeding the inequality-generating woodchipper the U.S. economy has become. Rather than offering Americans a way to build wealth, cities are becoming concentrations of people who already have it. In the country’s 10 largest metros, residents earning more than $150,000 per year now outnumber those earning less than $30,000 per year.

Millennials who are able to relocate to these oases of opportunity get to enjoy their many advantages: better schools, more generous social services, more rungs on the career ladder to grab on to. Millennials who can’t afford to relocate to a big expensive city are … stuck. In 2016, the Census Bureau reported that young people were less likely to have lived at a different address a year earlier than at any time since 1963.

And so the real reason millennials can’t seem to achieve the adulthood our parents envisioned for us is that we’re trying to succeed within a system that no longer makes any sense. Homeownership and migration have been pitched to us as gateways to prosperity because, back when the boomers grew up, they were. But now, the rules have changed and we’re left playing a game that is impossible to win.

We start earning less money, later. We have more debt and higher rent.
Which means we aren’t able to save.
Which means we can’t buy a house or prepare for retirement.
Which means that unless something changes… all of us are headed for a very dark place.

Chapter 4

LOL Everything Matters

The most striking thing about the problems of millennials is how intertwined and self-reinforcing and everywhere they are.

Over the eight months I worked on this story, I spent a few evenings at a youth homeless shelter and met unpaid interns and gig-economy bike messengers saving for their first month of rent. During the days I interviewed people like Josh, a 33-year-old affordable housing developer who mentioned that his mother struggles to make ends meet as a contractor in a profession that used to be reliable government work. Every Thanksgiving, she reminds him that her retirement plan is a “401(j)”—J for Josh.

Fixing what has been done to us is going to take more than tinkering. Even if economic growth picks up and unemployment continues to fall, we’re still on a track toward ever more insecurity for young people. The “Leave It To Beaver” workforce, in which everyone has the same job from graduation until gold watch, is not coming back. Any attempt to recreate the economic conditions the boomers had is just sending lifeboats to a whirlpool.

But still, there is already a foot-long list of overdue federal policy changes that would at least begin to fortify our future and reknit the safety net. Even amid the awfulness of our political moment, we can start to build a platform to rally around. Raise the minimum wage and tie it to inflation. Roll back anti-union laws to give workers more leverage against companies that treat them as if they’re disposable. Tilt the tax code away from the wealthy. Right now, rich people can write off mortgage interest on their second home and expenses related to being a landlord or (I’m not kidding) owning a racehorse. The rest of us can’t even deduct student loans or the cost of getting an occupational license.

Some of the trendiest Big Policy Fixes these days are efforts to rebuild government services from the ground up. The ur-example is the Universal Basic Income, a no-questions-asked monthly cash payment to every single American. The idea is to establish a level of basic subsistence below which no one in a civilized country should be allowed to fall. The venture capital firm Y Combinator is planning a pilot program that would give $1,000 each month to 1,000 low- and middle-income participants. And while, yes, it’s inspiring that a pro-poor policy idea has won the support of D.C. wonks and Ayn Rand tech bros alike, it’s worth noting that existing programs like food stamps, TANF, public housing and government-subsidized day care are not inherently ineffective. They have been intentionally made so. It would be nice if the people excited by the shiny new programs would expend a little effort defending and expanding the ones we already have.

But they’re right about one thing: We’re going to need government structures that respond to the way we work now. “Portable benefits,” an idea that’s been bouncing around for years, attempts to break down the zero-sum distinction between full-time employees who get government-backed worker protections and independent contractors who get nothing. The way to solve this, when you think about it, is ridiculously simple: Attach benefits to work instead of jobs. The existing proposals vary, but the good ones are based on the same principle: For every hour you work, your boss chips in to a fund that pays out when you get sick, pregnant, old or fired. The fund follows you from job to job, and companies have to contribute to it whether you work there a day, a month or a year.
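The accrual principle described above can be sketched in a few lines. The 25-cents-per-hour rate and the class itself are invented for illustration; no actual proposal’s parameters are modeled here:

```python
class PortableBenefitsFund:
    """Toy model of a portable benefits fund: every employer
    contributes per hour worked, and the balance follows the
    worker from job to job."""

    def __init__(self, cents_per_hour=25):  # assumed rate, illustration only
        self.cents_per_hour = cents_per_hour
        self.balance_cents = 0
        self.contributions = {}  # employer -> total cents chipped in

    def record_hours(self, employer, hours):
        """Each employer chips in for every hour, whether the worker
        stays a day, a month or a year."""
        cents = int(hours * self.cents_per_hour)
        self.contributions[employer] = self.contributions.get(employer, 0) + cents
        self.balance_cents += cents

    def pay_out(self, amount_cents):
        """Draw on the fund when sick, pregnant, old or fired."""
        paid = min(amount_cents, self.balance_cents)
        self.balance_cents -= paid
        return paid

# One worker, three short gigs: the balance travels with them.
fund = PortableBenefitsFund()
for employer, hours in [("Laundry Inc.", 120),
                        ("Rent-A-Guard Inc.", 80),
                        ("Taco Time", 200)]:
    fund.record_hours(employer, hours)
print(fund.balance_cents)  # 400 hours at 25 cents/hour = 10,000 cents
```

The key design property is that the fund, not the employer, holds the balance: no single company can zero it out by firing or reclassifying the worker.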

Small-scale versions of this idea have been offsetting the inherent insecurity of the gig economy since long before we called it that. Some construction workers have an “hour bank” that fills up when they’re working and provides benefits even when they’re between jobs. Hollywood actors and technical staff have health and pension plans that follow them from movie to movie. In both cases, the benefits are negotiated by unions, but they don’t have to be. Since 1962, California has offered “elective coverage” insurance that allows independent contractors to file for payouts if their kids get sick or if they get injured on the job. “The offloading of risks onto workers and families was not a natural occurrence,” says Hacker, the Yale political scientist. “It was a deliberate effort. And we can roll it back the same way.”

Another no-brainer experiment is to expand jobs programs. As decent opportunities have dwindled and wage inequality has soared, the government’s message to the poorest citizens has remained exactly the same: You’re not trying hard enough. But at the same time, the government has not actually attempted to give people jobs on a large scale since the 1970s.

Because most of us grew up in a world without them, jobs programs can sound overly ambitious or suspiciously Leninist. In fact, they’re neither. In 2010, as part of the stimulus, Mississippi launched a program that simply reimbursed employers for the wages they paid to eligible new hires—100 percent at first, then tapering down to 25 percent. The initiative primarily reached low-income mothers and the long-term unemployed. Nearly half of the recipients were under 30.

The results were impressive. For the average participant, the subsidized wages lasted only 13 weeks. Yet the year after the program ended, long-term unemployed workers were still earning nearly nine times more than they had the previous year. Either they kept the jobs they got through the subsidies or the experience helped them find something new. Plus, the program was a bargain. Subsidizing more than 3,000 jobs cost $22 million, which existing businesses doled out to workers who weren’t required to get special training. It wasn’t an isolated success, either. A Georgetown Center on Poverty and Inequality review of 15 jobs programs from the past four decades concluded that they were “a proven, promising, and underutilized tool for lifting up disadvantaged workers.” The review found that subsidizing employment raised wages and reduced long-term unemployment. Children of the participants even did better at school.

But before I get carried away listing urgent and obvious solutions for the plight of millennials, let’s pause for a bit of reality: Who are we kidding? Donald Trump, Paul Ryan and Mitch McConnell are not interested in our innovative proposals to lift up the systemically disadvantaged. Their entire political agenda, from the Scrooge McDuck tax reform bill to the ongoing assassination attempt on Obamacare, is explicitly designed to turbocharge the forces that are causing this misery. Federally speaking, things are only going to get worse.

Which is why, for now, we need to take the fight to where we can win it.

Over the last decade, states and cities have made remarkable progress adapting to the new economy. Minimum-wage hikes have been passed by voters in nine states, even dark red rectangles like Nebraska and South Dakota. Following a long campaign by the Working Families Party and other activist organizations, eight states and the District of Columbia have instituted guaranteed sick leave. Bills to combat exploitative scheduling practices have been introduced in more than a dozen state legislatures. San Francisco now gives retail and fast-food workers the right to learn their schedules two weeks in advance and get compensated for sudden shift changes. Local initiatives are popular, effective and our best hope of preventing the country’s slide into “Mad Max”-style individualism.

The court system, the only branch of our government currently functioning, offers other encouraging avenues. Class-action lawsuits and state and federal investigations have resulted in a wave of judgments against companies that “misclassify” their workers as contractors. FedEx, which requires some of its drivers to buy their own trucks and then work as independent contractors, recently reached a $227 million settlement with more than 12,000 plaintiffs in 19 states. In 2014, a startup called Hello Alfred—Uber for chores, basically—announced that it would rely exclusively on direct hires instead of “1099s.” Part of the reason, its CEO told Fast Company, was that the legal and financial risk of relying on contractors had gotten too high. A tsunami of similar lawsuits over working conditions and wage theft would be enough to force the same calculation onto every CEO in America.

56% of millennials with student loans have delayed a major life event—including getting married or having kids—because of their debt

And then there’s housing, where the potential—and necessity—of local action is obvious. This doesn’t just mean showing up to city council hearings to drown out the NIMBYs (though let’s definitely do that). It also means ensuring that the entire system for approving new construction doesn’t prioritize homeowners at the expense of everyone else. Right now, permitting processes examine, in excruciating detail, how one new building will affect rents, noise, traffic, parking, shadows and squirrel populations. But they never investigate the consequences of not building anything—rising prices, displaced renters, low-wage workers commuting hours from outside the sprawl.

Some cities are finally acknowledging this reality. Portland and Denver have sped up approvals and streamlined permitting. In 2016, Seattle’s mayor announced that the city would cut ties with its mostly old, mostly white, very NIMBY district councils and establish a “community involvement commission.” The name is terrible, obviously, but the mandate is groundbreaking: Include renters, the poor, ethnic minorities—and everyone else unable to attend a consultation at 2 p.m. on a Wednesday—in construction decisions. For decades, politicians have been terrified of making the slightest twitch that might upset homeowners. But with renters now outnumbering owners in nine of America’s 11 largest cities, we have the potential to be a powerful political constituency.

The same logic could be applied to our entire generation. In 2018, there will be more millennials than boomers in the voting-age population. The problem, as you’ve already heard a million times, is that we don’t vote enough. Only 49 percent of Americans ages 18 to 35 turned out to vote in the last presidential election, compared to about 70 percent of boomers and Greatests. (It’s lower in midterm elections and positively dire in primaries.)

But like everything about millennials, once you dig into the numbers you find a more complicated story. Youth turnout is low, sure, but not universally. In 2012, it ranged from 68 percent in Mississippi (!) to 24 percent in West Virginia. And across the country, younger Americans who are registered to vote show up at the polls nearly as often as older Americans.

The fact is, it’s simply harder for us to vote. Consider that nearly half of millennials are minorities and that voter suppression efforts are laser-focused on blacks and Latinos. Or that the states with the simplest registration procedures have youth turnout rates significantly higher than the national average. (In Oregon it’s automatic, in Idaho you can do it the same day you vote and in North Dakota you don’t have to register at all.) Adopting voting rights as a cause—forcing politicians to listen to us like they do to the boomers—is the only way we’re ever going to get a shot at creating our own New Deal.

Or, as Shaun Scott, the author of Millennials and the Moments That Made Us, told me, “We can either do politics or we can have politics done to us.”

And that’s exactly it. The boomer-benefiting system we’ve inherited was not inevitable and it is not irreversible. There is still a choice here. For the generations ahead of us, it is whether to pass down some of the opportunities they enjoyed in their youth or to continue hoarding them. Since 1989, the median wealth of families headed by someone over 62 has increased 40 percent. The median wealth of families headed by someone under 40 has decreased by 28 percent. Boomers, it’s up to you: Do you want your children to have decent jobs and places to live and a non-Dickensian old age? Or do you want lower taxes and more parking?

Then there’s our responsibility. We’re used to feeling helpless because for most of our lives we’ve been subject to huge forces beyond our control. But pretty soon, we’ll actually be in charge. And the question, as we age into power, is whether our children will one day write the same article about us. We can let our economic infrastructure keep disintegrating and wait to see if the rising seas get us before our social contract dies. Or we can build an equitable future that reflects our values and our demographics and all the chances we wish we’d had. Maybe that sounds naïve, and maybe it is. But I think we’re entitled to it.

Global warming is a potentially devastating problem requiring urgent action by governments. However, to date the U.S. government has remained largely paralyzed. Now new Greenpeace research has shed light on the sources of paralysis, a multi-decade war on democracy by the kingpins of carbon – the coal, oil, and gas industries allied with a handful of self-interested libertarian billionaires.

Their strategy has aimed to (1) shrink, disable and paralyze progressive government and (2) manipulate the remaining levers of government power by (a) eliminating all restrictions on private money in elections and (b) disenfranchising blacks, Latinos, the young, the elderly, and the disabled, all of whom are presumed to favor Democrats. Since 1975, their strategy has rolled back New Deal programs, weakened labor unions, and reversed victories of the civil rights movement, undermining the strength and cohesion of the middle class, further enriching and empowering a tiny self-interested elite.

There’s a certain type of financial confessional that has had a way of going viral in the post-recession era. The University of Chicago law professor complaining his family was barely keeping their heads above water on $250,000 a year. This hypothetical family of three in San Francisco making $200,000, enjoying vacations to Maui, and living hand-to-mouth. This real New York couple making six figures and merely “scraping by.”

In all of these viral posts, denizens of the upper-middle class were attempting to make the case for their middle class-ness. Taxes are expensive. Cities are expensive. Tuition is expensive. Children are expensive. Travel is expensive. Tens of thousands of dollars a month evaporate like cold champagne spilled on a hot lanai, they argue. And the 20 percent are not the one percent.

A great, short book by Richard V. Reeves of the Brookings Institution helps to flesh out why these stories provoke such rage. In Dream Hoarders, released this week, Reeves agrees that the 20 percent are not the one percent: The higher you go up the income or wealth distribution, the bigger the gains made in the past three or four decades. Still, the top quintile of earners—those making more than roughly $112,000 a year—have been big beneficiaries of the country’s growth. To make matters worse, this group of Americans engages in a variety of practices that don’t just help their families, but harm the other 80 percent of Americans.

“I am not suggesting that the top one percent should be left alone. They need to pay more tax, perhaps much more,” Reeves writes. “But if we are serious about narrowing the gap between ‘the rich’ and everybody else, we need a broader conception of what it means to be rich.”

The book traces the way that the upper-middle class has pulled away from the middle class and the poor on five dimensions: income and wealth, educational attainment, family structure, geography, and health and longevity. The top 20 percent of earners might not have seen the kinds of income gains made by the top one percent and America’s billionaires. Still, their wage and investment increases have proven sizable. They dominate the country’s top colleges, sequester themselves in wealthy neighborhoods with excellent public schools and public services, and enjoy healthy bodies and long lives. “It would be an exaggeration to say that the upper-middle class is full of gluten-avoiding, normal-BMI joggers who are only marginally more likely to smoke a cigarette than to hit their children,” Reeves writes. “But it would be just that—an exaggeration, not a fiction.”

They then pass those advantages on to their children, with parents placing a “glass floor” under their kids. They ensure they grow up in nice zip codes, provide social connections that make a difference when entering the labor force, help with internships, aid with tuition and home-buying, and schmooze with college admissions officers. All the while, they support policies and practices that protect their economic position and prevent poorer kids from climbing the income ladder: legacy admissions, the preferential tax treatment of investment income, 529 college savings plans, exclusionary zoning, occupational licensing, and restrictions on the immigration of white-collar professionals.

“Anybody Could Fall Into Such Hardship”: A Photographer’s Look at Poverty in America

Joakim Eskildsen never considered himself to be an assignment photographer. That changed when Kira Pollack, Time’s director of photography, asked him to work on a project about poverty in the United States. Pollack had seen Eskildsen’s book The Roma Journeys, a detailed look into the lives of Roma Gypsies living in seven countries. So for a project in 2011, Pollack asked Eskildsen to photograph some of the most impoverished areas in New York, California, Louisiana, South Dakota, and Georgia over seven months.

During his first trip to Athens, Georgia, Eskildsen traveled by himself. He said that Time liked the work he produced, but he felt he needed someone to accompany him, since the subject was too intense to deal with alone. Throughout the rest of his travels, he was joined by journalist Natasha del Toro, who also wrote the texts for the photographs that eventually became part of his new book, American Realities, published by Steidl.

“The stories and the atmosphere were very depressive, and almost everyone we talked to or interviewed cried at some point,” he wrote via email. “It was in many ways very hard to face all this. I had the feeling anybody could fall into such hardship.”

Eskildsen said that he prefers to spend a significant amount of time with his subjects so that his presence feels less jarring to them. During this project, he said many of the encounters he had with his subjects were brief, lasting at most a few hours, so he returned to visit them several times.

A massive study of health and income found that smoking, obesity, and exercise are the most important determinants of longevity. Poor neighborhoods score worse in all of them. What’s going on?

“Geography is destiny.”

Economists once used this theory to try to explain the difference between rich and poor countries. But in the last few years, something like it has become a grand theory for rich and poor within the United States. Researchers have shown that where a family lives dramatically shapes children’s education, income, and their potential to earn more than their parents.

Geography’s most consequential legacy might be life itself. In a new study released Monday morning and reported in The New York Times, the life expectancy of the poorest Americans can differ by many years in neighborhoods that are fewer than 100 miles apart. In the belt running from Texas through Michigan, the typical lifespan has fallen behind the national average. Meanwhile, Americans living in cities, particularly on the coasts, are not only living longer but also have lower incidences of diabetes, stroke, heart attacks, and high blood pressure. Where one lives shapes when one dies.

Why are America’s poor dying young? According to the paper, the causes are internal. Most of the variation in life expectancy doesn’t come from “external factors,” like murder, but rather from medical causes, like heart disease and diabetes. The poor die early because they get sicker faster.

And why do the poor get sick? Several plausible variables failed to explain it: access to medical care, environment, income inequality, and labor-market conditions. Instead, the most important correlations were with behaviors, especially smoking and obesity (both highly negative) and exercise (positive, though the relationship was slightly weaker).

It sounds like the most predictable finding in the world. If somebody eats healthy food, exercises regularly, and doesn’t smoke, on average she’ll outlive someone who eats junk food and smokes on the couch in a dangerous part of town.

But it poses a question whose answer is anything but obvious: Why do the poor in some cities have such unhealthy lifestyles? Researchers know the places where the poor live the longest. They are: New York, San Jose, Santa Barbara, Santa Rosa, Los Angeles, San Francisco, San Diego, Miami, Newark, and Boston. Several of these cities—particularly San Jose and San Francisco—are also among the most expensive places to live. Is it possible that building more housing for poorer Americans to live in these areas might introduce them to environments where healthy behaviors are already the norm, improving not only their economic fortunes but also their health?

* * *

People act like the people around them. It’s well observed in sociology that behaviors ranging from an innocuous interest in fashion to a propensity for smoking can spread through local networks, like a virus. There is extremely strong evidence that every behavior identified in the health-inequality paper—smoking, obesity, and exercise—can spread infectiously through social groups.

“If your friends are obese, your risk of obesity is 45 percent higher,” says Nicholas Christakis, a Yale University professor who is one of the country’s leading researchers on socially contagious behaviors. “If your friend’s friends are obese, your risk of obesity is 25 percent higher. If your friend’s friend’s friend, someone you probably don’t even know, is obese, your risk of obesity is 10 percent higher.” Christakis has even found that the mere onset of obesity can be infectious. If somebody becomes obese, it increases his friend’s risk of obesity by about 57 percent.

The idea that half of all American children are in or near poverty is not one we’d be comfortable entertaining. It would suggest, for example, that our society isn’t taking care of its most vulnerable members. And that’s what a new report now tells us: that half of American children are either in poverty or near it. If true, it would be a terrible indictment of how society is organised.

Or, you know, perhaps not? Because it rather depends on what we define as poverty, and even more on what we define as near poverty. The definition actually used here all but guarantees that half of American children will count as poor or near poor, because the threshold for “near poverty” sits very close indeed to median household income. And, for those who don’t get Garrison Keillor’s joke, median income is the level that half the households in the country earn more than and half earn less than. So if children were equally distributed across households (they’re not) and we define near poverty at median income, then by definition half of all children will be in near poverty.
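
The definitional point above can be sketched with a tiny, purely illustrative simulation (the income distribution below is made up; only the logic matters): set the “near poverty” line at median household income, and roughly half of households land at or below it by construction, whatever the distribution looks like.

```python
import random

random.seed(0)
# Hypothetical household incomes; any distribution gives the same result,
# because the argument is about the definition, not the data.
incomes = sorted(random.lognormvariate(11, 0.7) for _ in range(10_001))
median = incomes[len(incomes) // 2]

# Define "near poverty" at (or very close to) median income...
near_poverty_line = median

# ...and by construction about half of households fall at or below it.
share_below = sum(i <= near_poverty_line for i in incomes) / len(incomes)
print(f"{share_below:.0%}")  # → 50%
```

The exercise makes no claim about the actual NCCP methodology; it just shows why a threshold near the median mechanically produces a “half are near poor” headline.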

Nearly half of children in the United States live dangerously close to the poverty line, according to new research from the National Center for Children in Poverty (NCCP) at Columbia University’s Mailman School of Public Health. Basic Facts about Low-Income Children, the center’s annual series of profiles on child poverty in America, illustrates the severity of economic instability and poverty conditions faced by more than 31 million children throughout the United States. Using the latest data from the American Community Survey, NCCP researchers found that while the total number of children in the U.S. has remained about the same since 2008, more children today are likely to live in families barely able to afford their most basic needs.

Sounds terrible if not even spooky, doesn’t it?

However, perhaps the first thing we should say about these numbers is that they are working from market incomes. In more detail, that’s any cash income from whatever work might be done in those households, plus cash welfare and social security. What these numbers do not include is the impact of the things that we do to reduce poverty as it is actually experienced. The effects of Medicaid, SNAP, the EITC, free school meals, Section 8 vouchers, in fact the whole plethora of some 80 poverty reduction programs (other than straight cash welfare payments), are entirely ignored here. So it’s not in fact true that nearly half of American children are living in poverty or near it. What is true is that if we didn’t have a benefits and welfare system, nearly half of American children would be living in what this report describes as poverty or near poverty. Those two are really not the same thing at all. Which, given that we spend some $800 billion a year or so on those poverty reduction programs, is probably a good thing.

It’s disturbing and puzzling news: Death rates are rising for white, less-educated Americans. The economists Anne Case and Angus Deaton reported in December that rates have been climbing since 1999 for non-Hispanic whites age 45 to 54, with the largest increase occurring among the least educated. An analysis of death certificates by The New York Times found similar trends and showed that the rise may extend to white women.

Both studies attributed the higher death rates to increases in poisonings and chronic liver disease, which mainly reflect drug overdoses and alcohol abuse, and to suicides. In contrast, death rates fell overall for blacks and Hispanics.

Why are whites overdosing or drinking themselves to death at higher rates than African-Americans and Hispanics in similar circumstances? Some observers have suggested that higher rates of chronic opioid prescriptions could be involved, along with whites’ greater pessimism about their finances.

Yet I’d like to propose a different answer: what social scientists call reference group theory. The term “reference group” was pioneered by the social psychologist Herbert H. Hyman in 1942, and the theory was developed by the Columbia sociologist Robert K. Merton in the 1950s. It tells us that to comprehend how people think and behave, it’s important to understand the standards to which they compare themselves.

How is your life going? For most of us, the answer to that question means comparing our lives to the lives our parents were able to lead. As children and adolescents, we closely observed our parents. They were our first reference group.

And here is one solution to the death-rate conundrum: It’s likely that many non-college-educated whites are comparing themselves to a generation that had more opportunities than they have, whereas many blacks and Hispanics are comparing themselves to a generation that had fewer opportunities.

The state in which you choose to live can play a big role in how far your paycheck stretches each month. Some states are just more expensive than others, forcing you to spend more of your paycheck on necessities.

GOBankingRates has identified the best — and worst — states for avoiding a paycheck-to-paycheck existence. Data was collected to rank all 50 states in a range of categories, including median household income and the cost of housing, food, transportation, utilities, and healthcare. We crunched those numbers to see which states’ residents had the biggest portion of their paycheck left at the end of the month. Click through to see if you live in a state that’s kind on your income or one that stretches your budget too thin.

The 10 States Where You’re Most Likely to Live Paycheck to Paycheck
There are certain states in the country where your paycheck just isn’t enough to make ends meet. The 10 states where you’re most likely to live payday to payday don’t necessarily have low median household incomes — in fact, a few states with some of the highest median incomes per paycheck in the nation made this list. It’s their high costs for things such as housing, utilities or transportation that eat up paychecks, suggesting that the incomes, though high compared with other states, are actually still disproportionate to the costs of living.

WASHINGTON (AP) — The income gap in major U.S. cities goes beyond the trend of rising paychecks for those at the top: Pay has plummeted for those at the bottom.

Many of the poorest households still earn just a fraction of what they made before the Great Recession began in late 2007. Even as the recovery gained momentum in 2014 with otherwise robust job growth, incomes for the bottom 20 percent slid in New York City, New Orleans, Cincinnati, Washington and St. Louis, according to an analysis of Census data released Thursday by the Brookings Institution, a Washington think tank.

“It’s really about the poor losing ground rather than these upper-class households pulling away,” said Alan Berube, a senior fellow at Brookings and deputy director of its metropolitan policy program.

Consider Cincinnati, home to such major companies as Procter & Gamble and Macy’s that are associated with middle class prosperity. Its bottom 20 percent earned just $10,454 in 2014. After inflation, that’s 3 percent less than what they earned in 2013 — and 25 percent below their incomes when the recession started eight years ago.

Cincinnati’s top 5 percent of earners made at least $164,410 in 2014, a figure that has increased since 2013, though it remains 7 percent below pre-recession levels.

The consequence is a widening income gap. The top 5 percent earned 15.7 times what the bottom 20 percent did in Cincinnati. Nationally, this ratio was 9.3 — the same as in 2013. Before the recession, the ratio was 8.5.
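The ratio quoted in the paragraph above is easy to check from the two Cincinnati figures the article gives (a quick sanity check, not part of the Brookings analysis):

```python
# Sanity-check the Cincinnati income ratio from the two figures cited.
top_5_floor = 164_410      # minimum income of the top 5% of earners, 2014
bottom_20_income = 10_454  # earnings of the bottom 20%, 2014

ratio = top_5_floor / bottom_20_income
print(round(ratio, 1))  # → 15.7, matching the figure in the article
```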

The poorest have clawed back some of their earning power since the economy officially began to recover 6½ years ago. But the analysis suggests that strong job growth and modest pay raises have failed to pull millions of Americans back up the economic ladder.

A federal tally of assistance to the poor missed 40% of food-stamp recipients in New York state.

By Robert Doar

Here’s good news for policy makers—on the right and left—concerned about poverty in the United States. A new study by economists Bruce Meyer of the University of Chicago and Nikolas Mittag of Charles University shows that public-assistance programs are far more effective in alleviating poverty than many government statistics suggest.

The problem lies in the way the U.S. Census Bureau measures poverty. According to the bureau’s website, the government’s “official poverty definition uses money income before taxes and does not include capital gains or noncash benefits (such as public housing, Medicaid, and food stamps).” This has long been known to underestimate income sources and material well-being in low-income households.

In 2011 experts working with the Census Bureau developed a new metric, the Supplemental Poverty Measure, to better measure the effects of government-assistance programs and tax policies on families. But the Meyer-Mittag study, published last month through the American Enterprise Institute, found a serious flaw common to both measures: reliance on a government survey of income that vastly understates the government assistance flowing to poor households.

The authors checked the Census Bureau’s Current Population Survey (CPS) responses from low-income Americans against the actual administrative records of government benefits disbursed to those same respondents in New York state. They discovered that “the survey data sharply understate the income of poor households,” and that “underreporting in the survey data also greatly understates the effects of anti-poverty programs.”

In New York state, the sole focus of the study, the CPS data missed a third of housing-assistance recipients, 40% of Supplemental Nutrition Assistance Program (SNAP) recipients (food stamps), and 60% of those who received cash assistance through Temporary Assistance for Needy Families (TANF) or General Assistance (New York state’s cash assistance program).

Among those who did report receiving benefits, the CPS underreported the average value of those benefits by 6% for SNAP, 40% for cash, and 74% for housing assistance. Remember, this is a primary survey on which the government bases much of its information about progress in reducing poverty.

Walmart employees are so poor that they are skipping lunch, sharing it or, in some cases, stealing it from their coworkers, some of the company’s workers claimed on Thursday while announcing a fast in protest of the company’s wages.

Starting Friday morning, over 100 Walmart associates who are members of Our Walmart, a workers organization, and about 1,000 supporters will begin a fast to shine a light on what they describe as Walmart’s “poverty pay”.

The protest comes in the run-up to Thanksgiving and the Black Friday shopping bonanza, one of Walmart’s busiest periods. Some of the workers will take their fast to the doorstep of Walmart heiress Alice Walton’s apartment in New York City.

Earlier this year, Walmart announced it was raising wages for about half a million of its employees, paying them at least $9 an hour – $1.75 above the federally mandated minimum wage. The company plans to further increase their pay to $10 an hour next year.

The workers say that’s still not enough and demand that they be paid $15 an hour and be given full-time schedules. The name of the 15-day fast is Fast for 15.

Tyfani Faulkner, a former Walmart customer service manager from Sacramento, California, who worked for the company for about five years, will be one of those fasting in protest.

“Every day there are associates who go to work with no lunch, or an unhealthy lunch, because that’s all they can afford. I have seen instances where some would eat another associate’s lunch from the refrigerator because they have nothing to eat,” said Faulkner.

Blond and Midwestern cheerful, Kathryn Edin could be a cruise director, except that instead of showing off the lido deck, she’s pointing out where the sex traffickers live off a run-down strip of East Camden, New Jersey. Her blue eyes sparkle as she highlights neighborhood landmarks: the scene of a hostage standoff where police shot a man after he’d murdered a couple in their home and abducted their four-year-old; the front yard where a guy was gunned down after trying to settle a dispute between his son and two other teens.

Edin, 51, talks to every stranger we pass. She chirps hello to some guys working on a car jacked up in their front yard, some dudes selling pot, and a little girl driving a pink plastic jeep on the sidewalk. Most of them look at her like she’s from another planet—which in a way, she is.

A sociologist at Johns Hopkins University, Edin is one of the nation’s preeminent poverty researchers. She has spent much of the past several decades studying some of the country’s most dangerous, impoverished neighborhoods. But unlike academics who draw conclusions about poverty from the ivory tower, Edin has gotten up close and personal with the people she studies—and in the process has shattered many myths about the poor, rocking sociology and public-policy circles.

This essay by New Yorker writer George Packer does an excellent job of laying out some of the reasons why we are seeing widening economic inequality in America and what this means for our country. In fact, this article is what inspired me to create the American Realities website. I strongly suggest you read it.

By George Packer, Foreign Affairs, November/December 2011 issue

Iraq was one of those wars where people actually put on pounds. A few years ago, I was eating lunch with another reporter at an American-style greasy spoon in Baghdad’s Green Zone. At a nearby table, a couple of American contractors were finishing off their burgers and fries. They were wearing the contractor’s uniform: khakis, polo shirts, baseball caps, and Department of Defense identity badges in plastic pouches hanging from nylon lanyards around their necks. The man who had served their food might have been the only Iraqi they spoke with all day. The Green Zone was set up to make you feel that Iraq was a hallucination and you were actually in Normal, Illinois. This narcotizing effect seeped into the consciousness of every American who hunkered down and worked and partied behind its blast walls — the soldier and the civilian, the diplomat and the journalist, the important and the obscure. Hardly anyone stayed longer than a year; almost everyone went home with a collection of exaggerated war stories, making an effort to forget that they were leaving behind shoddy, unfinished projects and a country spiraling downward into civil war. As the two contractors got up and ambled out of the restaurant, my friend looked at me and said, “We’re just not that good anymore.”

The Iraq war was a kind of stress test applied to the American body politic. And every major system and organ failed the test: the executive and legislative branches, the military, the intelligence world, the for-profits, the nonprofits, the media. It turned out that we were not in good shape at all — without even realizing it. Americans just hadn’t tried anything this hard in around half a century. It is easy, and completely justified, to blame certain individuals for the Iraq tragedy. But over the years, I’ve become more concerned with failures that went beyond individuals, and beyond Iraq — concerned with the growing arteriosclerosis of American institutions. Iraq was not an exceptional case. It was a vivid symptom of a long-term trend, one that worsens year by year. The same ailments that led to the disastrous occupation were on full display in Washington this past summer, during the debt-ceiling debacle: ideological rigidity bordering on fanaticism, an indifference to facts, an inability to think beyond the short term, the dissolution of national interest into partisan advantage.

Was it ever any different? Is it really true that we’re just not that good anymore? As a thought experiment, compare your life today with that of someone like you in 1978. Think of an educated, reasonably comfortable couple perched somewhere within the vast American middle class of that year. And think how much less pleasant their lives are than yours. The man is wearing a brown and gold polyester print shirt with a flared collar and oversize tortoiseshell glasses; she’s got on a high-waisted, V-neck rayon dress and platform clogs. Their morning coffee is Maxwell House filter drip. They drive an AMC Pacer hatchback, with a nonfunctioning air conditioner and a tape deck that keeps eating their eight-tracks. When she wants to make something a little daring for dinner, she puts together a pasta primavera. They type their letters on an IBM Selectric, the new model with the corrective ribbon. There is only antenna television, and the biggest thing on is Laverne and Shirley. Long-distance phone calls cost a dollar a minute on weekends; air travel is prohibitively expensive. The city they live near is no longer a place where they spend much time: trash on the sidewalks, junkies on the corner, vandalized pay phones, half-deserted subway cars covered in graffiti.

By contemporary standards, life in 1978 was inconvenient, constrained, and ugly. Things were badly made and didn’t work very well. Highly regulated industries, such as telecommunications and airlines, were costly and offered few choices. The industrial landscape was decaying, but the sleek information revolution had not yet emerged to take its place. Life before the Android, the Apple Store, FedEx, HBO, Twitter feeds, Whole Foods, Lipitor, air bags, the Emerging Markets Index Fund, and the pre-K Gifted and Talented Program prep course is not a world to which many of us would willingly return.

The surface of life has greatly improved, at least for educated, reasonably comfortable people — say, the top 20 percent, socioeconomically. Yet the deeper structures, the institutions that underpin a healthy democratic society, have fallen into a state of decadence. We have all the information in the universe at our fingertips, while our most basic problems go unsolved year after year: climate change, income inequality, wage stagnation, national debt, immigration, falling educational achievement, deteriorating infrastructure, declining news standards. All around, we see dazzling technological change, but no progress. Last year, a Wall Street company that few people have ever heard of dug an 800-mile trench under farms, rivers, and mountains between Chicago and New York and laid fiber-optic cable connecting the Chicago Mercantile Exchange and the New York Stock Exchange. This feat of infrastructure building, which cost $300 million, shaves three milliseconds off high-speed, high-volume automated trades — a big competitive advantage. But passenger trains between Chicago and New York run barely faster than they did in 1950, and the country no longer seems capable, at least politically, of building faster ones. Just ask people in Florida, Ohio, and Wisconsin, whose governors recently refused federal money for high-speed rail projects.

We can upgrade our iPhones, but we can’t fix our roads and bridges. We invented broadband, but we can’t extend it to 35 percent of the public. We can get 300 television channels on the iPad, but in the past decade 20 newspapers closed down all their foreign bureaus. We have touch-screen voting machines, but last year just 40 percent of registered voters turned out, and our political system is more polarized, more choked with its own bile, than at any time since the Civil War. There is nothing today like the personal destruction of the McCarthy era or the street fights of the 1960s. But in those periods, institutional forces still existed in politics, business, and the media that could hold the center together. It used to be called the establishment, and it no longer exists. Solving fundamental problems with a can-do practicality — the very thing the world used to associate with America, and that redeemed us from our vulgarity and arrogance — now seems beyond our reach.

It is a national moral disgrace that there are 14.7 million poor children and 6.5 million extremely poor children in the United States of America – the world’s largest economy. It is also unnecessary, costly and the greatest threat to our future national, economic and military security.

The 14.7 million poor children in our nation exceed the combined populations of 12 U.S. states: Alaska, Hawaii, Idaho, Maine, Montana, New Hampshire, North Dakota, Rhode Island, South Dakota, Vermont, West Virginia, and Wyoming, and are greater in number than the combined populations of Sweden and Costa Rica. Our nearly 6.5 million extremely poor children (living below half the poverty line) exceed the combined populations of Delaware, Montana, New Hampshire, Rhode Island, South Dakota, Vermont and Wyoming, and outnumber the population of Denmark or Finland.

The younger children are, the poorer they are, during their years of greatest brain development. Every other American baby is non-White, and 1 in 2 Black babies is poor, 150 years after slavery was legally abolished.

America’s poor children did not ask to be born; they did not choose their parents, country, state, neighborhood, race, color, or faith. In fact, if they had been born in one of 33 other Organization for Economic Cooperation and Development (OECD) countries, they would be less likely to be poor. Among these 35 countries, America ranks 34th in relative child poverty, ahead only of Romania, whose economy is 99 percent smaller than ours.

The United Kingdom, whose economy would rank just above Mississippi if it were an American state, according to the Washington Post, committed to cutting its child poverty rate in half within 10 years, and succeeded. It is about values and political will. Sadly, politics too often trumps good policy, moral decency, and responsibility to the next generation and the nation’s future. It is way past time for a critical mass of Americans to confront the hypocrisy of America’s pretension to be a fair playing field while almost 15 million children languish in poverty. This report calls for an end to child poverty in the richest nation on earth, beginning with an immediate 60 percent reduction. It shows that solutions to child poverty in our nation already exist. For the first time, this report shows how, by expanding investments in existing policies and programs that work, we can shrink overall child poverty 60 percent, Black child poverty 72 percent, and improve economic circumstances for 97 percent of poor children at a cost of $77.2 billion a year. These policies could be pursued immediately, improving the lives and futures of millions of children and eventually saving taxpayers hundreds of billions of dollars annually.

Child poverty is too expensive to continue. Keeping 14.7 million children in poverty costs our nation $500 billion every year, more than six times the $77 billion investment we propose to reduce child poverty by 60 percent. MIT Nobel Laureate economist and 2014 Presidential Medal of Freedom recipient Dr. Robert Solow, in his foreword to a 1994 CDF report, Wasting America’s Future, presciently wrote: “For many years Americans have allowed child poverty levels to remain astonishingly high…far higher than one would think a rich and ethical society would tolerate. The justification, when one is offered at all, has often been that action is expensive: ‘We have more will than wallet.’ I suspect that in fact our wallets exceed our will, but in any event this concern for the drain on our resources completely misses the other side of the equation: Inaction has its costs too…As an economist I believe that good things are worth paying for; and that even if curing children’s poverty were expensive, it would be hard to think of a better use in the world for money. If society cares about children, it should be willing to spend money on them.”

Source: http://www.childrensdefense.org/library/PovertyReport/EndingChildPovertyNow.html

I’m not someone who believes that poverty can ever truly be ended — I’m one of those “the poor will always be with you” types — but I do believe that the ranks of the poor can and must be shrunk and that the effects of poverty can and must be ameliorated.

And there is one area above all others where we should feel a moral obligation to reduce poverty as much as possible and to soften its bite: poverty among children.

People may disagree about the choices parents make — including premarital sex and out-of-wedlock births. People may disagree about access to methods of family planning — including contraception and abortion. People may disagree about the size and role of government — including the role of safety-net programs.