
What “the jobs problem” is depends on how far into the future you’re looking. Near-term, macroeconomic policy should suffice to create enough jobs. But long-term, employing everyone may be unrealistic, and a basic income program might be necessary. That will be such a change in our social psychology that we need to start preparing for it now.

Historical context. The first thing to recognize about unemployment is that it’s not a natural problem. Tribal hunter-gatherer cultures have no notion of it. No matter how tough survival might be during droughts or other hard times, nothing stops hunter-gatherers from continuing to hunt and gather. The tribe has a territory of field or forest or lake, and anyone can go to this commonly held territory to look for food.

Unemployment begins when the common territory becomes private property. Then hunting turns into poaching, gathering becomes stealing, and people who are perfectly willing to hunt or fish or gather edible plants may be forbidden to do so. At that point, those who don’t own enough land to support themselves need jobs; in other words, they need arrangements that trade their labor to an owner in exchange for access to the owned resources. The quality of such a job might vary from outright slavery to Clayton Kershaw’s nine-figure contract to pitch for the Dodgers, but the structure is the same: Somebody else owns the productive enterprise, and non-owners need to acquire the owner’s permission to participate in it.

So even if unemployment is not an inevitable part of the human condition, it is as old as private property. Beggars — people who have neither land nor jobs — appear in the Bible and other ancient texts.

But the nature of unemployment changed with the Industrial Revolution. With the development and continuous improvement of machines powered by rivers or steam or electricity, jobs in various human trades began to vanish; you might learn a promising trade (like spinning or weaving) in your youth, only to see that trade become obsolete in your lifetime.

So if the problem of technological unemployment is not exactly ancient, it’s still been around for centuries. As far back as 1819, the economist Jean Charles Léonard de Sismondi was wondering how far this process might go. With tongue in cheek he postulated one “ideal” future:

In truth then, there is nothing more to wish for than that the king, remaining alone on the island, by constantly turning a crank, might produce, through automata, all the output of England.

This possibility raises an obvious question: What, then, could the English people offer the king (or whichever oligarchy ended up owning the automata) in exchange for their livelihoods?

Maslow. What has kept that dystopian scenario from becoming reality is, basically, Maslow’s hierarchy of needs. As basic food, clothing, and shelter become easier and easier to provide, people develop other desires that are less easy to satisfy. Wikipedia estimates that currently only 2% of American workers are employed in agriculture, compared to 50% in 1870 and probably over 90% in colonial times. But those displaced 48% or 88% are not idle. They install air conditioners, design computer games, perform plastic surgery, and provide many other products and services our ancestors never knew they could want.

So although technology has continued to put people out of work — the railroads pushed out the stagecoach and steamboat operators, cars drastically lessened opportunities for stableboys and horse-breeders, and machines of all sorts displaced one set of skilled craftsmen after another — new professions have constantly emerged to take up the slack. The trade-off has never been one-for-one, and the new jobs have usually gone to different people than the ones whose trades became obsolete. But in the economy as a whole, the unemployment problem has mostly remained manageable.

Three myths. We commonly tell three falsehoods about this march of technology: First, that the new technologies themselves directly create the new jobs. But to the extent they do, they don’t create nearly enough of them. For example, factories that manufacture combines and other agricultural machinery do employ some assembly-line workers, but not nearly as many people as worked in the fields in the pre-mechanized era.

When the new jobs do arise, it is indirectly, through the general working of the economy satisfying new desires, which may have only a tangential relationship to the new technologies. The telephone puts messenger-boys out of business, and also enables the creation of jobs in pizza delivery. But messenger-boys don’t automatically get pizza-delivery jobs; they go into the general pool of the unemployed, and entrepreneurs who create new industries draw their workers from that pool. At times there may be a considerable lag between the old jobs going away and the new jobs appearing.

Second, the new jobs haven’t always required more education and skill than the old ones. One of the key points of Harry Braverman’s 1974 classic Labor and Monopoly Capital: The Degradation of Work in the Twentieth Century was that automation typically bifurcates the workforce into people who need to know a lot and people who need to know very little. Maybe building the first mechanized shoe factory required more knowledge and skill than a medieval cobbler had, but the operators of those machines needed considerably less knowledge and skill. The point of machinery was never just that it replaced human muscle-power with horsepower or waterpower or fossil fuels, but also that once the craftsman’s knowledge had been built into a machine, low-skill workers could replace high-skill workers.

And finally, technological progress by itself doesn’t always lead to general prosperity. It increases productivity, but that’s not the same thing. A technologically advanced economy can produce goods with less labor, so one possible outcome is that it could produce more goods for everybody. But it could also produce the same goods with less labor, or even fewer goods with much less labor. In Sismondi’s Dystopia, for example, why won’t the king stop turning his crank as soon as he has all the goods he wants, and leave everyone else to starve?

So whether a technological society is rich or not depends on social and political factors as much as economic ones. If a small number of people wind up owning the machines, patents, copyrights, and market platforms, the main thing technology will produce is massive inequality. What keeps that from happening is political change: progressive taxation, the social safety net, unions, shorter work-weeks, public education, minimum wages, and so on.

The easiest way to grasp this reality is to read Dickens: In his day, London was the most technologically advanced city in the world, but because political change hadn’t caught up, it was a hellhole for a large chunk of its population.

The fate of horses. Given the long history of technological unemployment, it’s tempting to see the current wave as just more of the same. Too bad for the stock brokers put out of work by automated internet stock-trading, but they’ll land somewhere. And if they don’t, they won’t wreck the economy any more than the obsolete clipper-ship captains did.

But what’s different about emerging technologies like robotics and artificial intelligence is that they don’t bifurcate the workforce any more: To a large extent, the unskilled labor just goes away. The shoe factory replaced cobblers with machine designers and assembly-line workers. But now picture an economy where you get new shoes by sending a scan of your feet to a web site which 3D-prints the shoes, packages them automatically, and then ships them to you via airborne drone or driverless delivery truck. There might be shoe designers or computer programmers back there someplace, but once the system is built, the amount of extra labor your order requires is zero.

In A Farewell to Alms, Gregory Clark draws this ominous parallel: In 1901, the British economy required more than 3 million working horses. Those jobs are done by machines now, and the UK maintains a far smaller number of horses (about 800K) for almost entirely recreational purposes.

There was always a wage at which all these horses could have remained employed. But that wage was so low that it did not pay for their feed.

By now, there is literally nothing that three million British horses can do more economically than machines. Could the same thing happen to humans? Maybe it will be a very long time before an AI can write a more riveting novel than Stephen King, but how many of us still have a genuinely irreplaceable talent?

Currently, the U.S. economy has something like 150 million jobs for humans. What if, at some point in the not-so-distant future, there is literally nothing of economic value that 150 million people can do better than some automated system?

Speed of adjustment. The counter-argument is subtle, but not without merit: You shouldn’t let your attention get transfixed by the new systems, because new systems never directly create as many jobs as they destroy. Most new jobs won’t come from maintaining 3D printers or manufacturing drones or programming driverless cars, they’ll come indirectly via Maslow’s hierarchy: People who get their old wants satisfied more easily will start to want new things, some of which will still require people. Properly managed, the economy can keep growing until all the people who need jobs have them.

The problem with that argument is speed. If technology were just a one-time burst, then no matter how big the revolution was, eventually our desires would grow to absorb the new productivity. But technology is continually improving, and could even be accelerating. And even though we humans are a greedy lot, we’re also creatures of habit. If the iPhone 117 hits the market a week after I got my new iPhone 116, maybe I won’t learn to appreciate its new features until the iPhone 118, 119, and 120 are already obsolete.

Or, to put the same idea in a historical context, what if technology had given us clipper ships on Monday, steamships on Tuesday, and 747s by Friday? Who would we have employed to do what?

You could imagine, then, a future where we constantly do want new things that employ people in new ways, but still the economy’s ability to create jobs keeps falling farther behind. Since we’re only human, we won’t have time either to appreciate the new possibilities technology offers us, or to learn the new skills we need to find jobs in those new industries — at least not before they also become obsolete.

Macroeconomics. Right now, though, we are still far from the situation where there’s nothing the unemployed could possibly do. Lots of things that need doing aren’t getting done, even as people who might do them are unemployed: Our roads and bridges are decaying. We need to prepare for climate change by insulating our buildings better and installing more solar panels. The electrical grid is vulnerable and doesn’t let us take advantage of the most efficient power-managing technologies. Addicts who want treatment aren’t getting it. Working parents need better daycare options. Students could benefit from more one-on-one or small-group attention from teachers. Hospital patients would like to see their nurses come around more often and respond to the call buttons more quickly. Many of our elderly are warehoused in inadequately staffed institutions.

Some inadequate staffing we’ve just gotten used to: We expect long lines at the DMV, and that it might take a while to catch a waitress’ eye. In stores, it’s hard to get anybody to answer your questions. But that’s just life, we think.

That combination of unmet needs and unemployed people isn’t a technological problem, it’s an economic problem. In other words, the problem is about money, not about what is or isn’t physically possible. Either the people with needs don’t have enough money to create effective demand in the market, or the workers who might satisfy the needs can’t afford the training they need, or the businessmen who might connect workers with consumers can’t raise the capital to get started.

When society invents a new technology that makes workers more efficient, it has two options: It can employ the same number of workers and produce more goods and services, or it can employ fewer workers to produce the same number of goods and services.

Jargon-filled media coverage makes this hard to see, but the Federal Reserve plays a central role in this decision. When the Fed pumps more money into the economy, people spend more and create more jobs. If the Fed fails to supply enough cash, then faster technological progress can lead to faster job losses — something we might be experiencing right now.

So if you’re worried that technological progress will lead to mass unemployment — and especially if you think this process is already underway — you should be very interested in what the Federal Reserve does.

Another option is for the government to directly subsidize the people whose needs would otherwise go unmet. That’s what the Affordable Care Act and Medicaid do: They subsidize healthcare for people who need it but otherwise couldn’t afford it, and so create jobs for doctors, nurses, and the people who manufacture drugs, devices, and the other stuff used in healthcare.

Finally, the government can directly invest in industries that otherwise can’t raise capital. The best model here is the New Deal’s investment in the rural electric co-ops that brought electricity to sparsely populated areas. It’s also what happens when governments build roads or mass-transit systems.

When you look at things this way, you realize that our recent job problems have as much to do with conservative macroeconomic policy as with technology. Since Reagan, we’ve been weakening all the political tools that distribute the benefits of productivity: progressive taxation, the social safety net, unions, shorter work-weeks, public education, the minimum wage. And the result has been exactly what we should have expected: For decades, increases in national wealth have gone almost entirely to owners rather than workers.

In short, we’ve been moving back towards Dickensian London.

The long-term jobs problem. But just because the Robot Apocalypse isn’t the sole source of our immediate unemployment problem, that doesn’t mean it’s not waiting in the middle-to-far future. Our children or grandchildren might well live in a world where the average person is economically superfluous, and only the rare genius has any marketable skills.

The main thing to realize about this future is that its problems are more social and psychological than economic. If we can solve the economic problem of distributing all this machine-created wealth, we could be talking about the Garden of Eden, or various visions of hunter-gatherer Heaven. People could spend their lives pursuing pleasure and other forms of satisfaction, without needing to work. But if we don’t solve the distribution problem, we could wind up in Sismondi’s Dystopia, where it’s up to the owners of the automata whether the rest of us live or die.

The solution to the economic problem is obvious: People need to receive some kind of basic income, whether their activities have any market value or not. The obvious question “Where will the money for this come from?” has an obvious answer “From the surplus productivity that makes their economic contribution unnecessary.” In the same way that we can feed everybody now (and export food) with only 2% of our population working in agriculture, across-the-board productivity could create enough wealth to support everyone at a decent level with only some small number of people working.

But the social/psychological problem is harder. Kurt Vonnegut was already exploring this in his 1952 novel Player Piano. People don’t just get money from their work; they also get their identities and sense of self-worth from it. For example, coal miners of that era may not have wanted to spend their days underground breathing coal dust and getting black lung disease, but many probably felt a sense of heroism in making these sacrifices to support their families and to give their children better opportunities. If they had suddenly all been replaced by machines and pensioned off, they could have achieved those same results with their pension money. But why, an ex-miner might wonder, should anyone love or appreciate him, rather than just his unearned money?

Like unemployment itself, the idea that the unemployed are worthless goes way back. St. Paul wrote:

This we commanded you, that if any would not work, neither should he eat.

It’s worth noticing, though, that many people are already successfully dealing with this psycho-social problem. Scions of rich families only work if they want to, and many of them seem quite happy. Millions of Americans are pleasantly retired, living off a combination of savings and Social Security. Millions of others are students, who may be working quite hard, but at things that have no current economic value. Housespouses work, but not at jobs that pay wages.

Countless people who have wage-paying jobs derive their identities from some other part of their lives: Whatever they might be doing for money, they see themselves as novelists, musicians, chess players, political activists, evangelists, long-distance runners, or bloggers. Giving them a work-free income would just enable them to do more of what they see as their calling.

Conservative and liberal views of basic income. If you talk to liberals about basic income, the conversation quickly shifts to all the marvelous things they would do themselves if they didn’t have to work. Conservatives may well have similar ambitions, but their attention quickly shifts to other people, who they are sure would lead soulless lives of drunken society-destroying hedonism. (This is similar to the split a century ago over Prohibition: Virtually no one thought that they themselves needed the government to protect them from the temptation of gin, but many believed that other people did.)

So far this argument is almost entirely speculative, with both sides arguing about what they imagine would happen based on their general ideas about human nature. However, we may get some experimental results before long.

GiveDirectly is an upstart charity funded by Silicon Valley money, and it has tossed aside the old teach-a-man-to-fish model of third-world aid in favor of the direct approach: Poor people lack money, so give them money. It has a plan to provide a poverty-avoiding basic income — about $22 a month — for 12 years to everybody in 40 poor villages in Kenya. Another 80 villages will get a 2-year basic income. Will this liberate the recipients’ creativity? Or trap them in soul-destroying dependence and rob them of self esteem?

My guess: a little bit of both, depending on who you look at. And both sides will feel vindicated by that outcome. We see that already in American programs like food stamps. For some conservatives, the fact that cheating exists at all invalidates the whole effort; that one guy laughing at us as he eats his subsidized lobster outweighs all the kids who now go to school with breakfast in their stomachs. Liberals may look at the same facts and come to the opposite conclusion: If I get to help some people who really need it, what does it matter if a few lazy lowlifes get a free ride?

So I’ll bet some of the Kenyans will gamble away their money or use it to stay permanently stoned, while others will finally get a little breathing room, escape self-reinforcing poverty traps, and make something of their lives. Which outcome matters to you?

Summing up. In the short run, there will be no Robot Apocalypse as long as we regain our understanding of macroeconomics. But we need to recognize that technological change combines badly with free-market dogma, leading to Dickensian London: Comparatively few people own the new technologies, so they capture the benefits while the rest of us lose our bargaining power as we become less and less necessary.

However, we’re still at the point in history where most people’s efforts have genuine economic value, and many things that people could do still need doing. So by using macroeconomic tools like progressive taxation, public investment, and money creation, the economy can expand so that technological productivity leads to more goods and services for all, rather than a drastic loss of jobs and livelihoods for most while a few become wealthy on a previously unheard-of scale.

At some point, though, we’re going to lose our competition with artificial intelligence and go the way of horses — at least economically. Maybe you believe that AIs will never be able to compete with your work as a psychotherapist, a minister, or a poet, but chess masters and truck drivers used to think that too. Sooner or later, it will happen.

Adjusting to that new reality will require not just economic and political change, but social and psychological change as well. Somehow, we will need to make meaningful lives for ourselves in a work-free technological Garden of Eden. When I put it that way, it sounds easy, but when you picture it in detail, it’s not. We will all need to attach our self-respect and self-esteem to something other than pulling our weight economically.

In the middle term, there are things we can do to adjust: We should be on the lookout for other roles, like student and retiree, that give people a socially acceptable story to tell about themselves even if they’re not earning a paycheck. Maybe the academic idea of a sabbatical needs to expand to the larger economy: Whatever you do, you should take a year or so off every decade. “I’m on sabbatical” might become a story more widely acceptable than “I’m unemployed.” College professors and ministers are expected to take sabbaticals; it’s the ones who don’t who have something to explain.

Already-existing trends that shrink the workforce, like retraining mid-career or retiring early, need to be celebrated rather than worried about. In the long run the workforce is going to shrink; that can be either a source of suffering or a cause for rejoicing, depending on how we construct it.

Most of all, we need to re-examine the stereotypes we attach to the unemployed: that they are lazy, undeserving, and useless. These stereotypes become self-fulfilling prophecies: If no one is willing to pay me, why shouldn’t I be useless?

Social roles are what we make them. The Bible does not report Adam and Eve feeling useless and purposeless in the Garden of Eden, and I suspect hunter-gatherer tribes that happened onto lands of plentiful game and endless forest handled that bounty relatively well. We could do the same. Or not.

The short version is that as the climate degrades and fossil fuels become simultaneously more expensive and less usable, each generation inherits from its more prosperous ancestors an infrastructure that it can’t afford to maintain. Society muddles through from year to year — sometimes even seeming to advance — until some part of that poorly maintained infrastructure snaps and causes major destruction. The destroyed area may get rebuilt, but not to its previous level. The resulting community has less infrastructure to maintain, but is also less prosperous, and so the cycle continues into the next generation.

New Orleans is one example. Hurricane Katrina was an act of Nature (and possibly a consequence of global warming), but the reason it destroyed so much of New Orleans was the failure of the city’s infrastructure. As Jed Horne reported in The Washington Post:

key levees, including the 17th Street and London Avenue canals in the heart of the city, failed with water well below levels they were designed to withstand. As the Army Corps [of Engineers] eventually conceded, they were breached because of flawed engineering and collapsed because they were junk. … The Corps and local levee boards that maintain flood barriers pinched pennies, and suddenly Katrina became the nation’s first $200 billion disaster.

The collapse of Detroit lacks a Katrina-level catastrophe, but follows a similar pattern: Detroit’s sinking tax base can’t maintain a major city, and every attempt to either raise taxes or spend less just exacerbates the decline.

At any particular moment, you can always find something else to blame: corruption, say, or mismanagement. But rising cities are also corrupt and mismanaged, maybe more so — see Tammany Hall. It’s not that declining communities lack virtue, it’s that flourishing communities can afford vice.

Greer imagines the same scenario on a planetary scale. He sees places like New Orleans and Detroit not as unique examples of dysfunction, but as coal-mine canaries. The same vicious cycles that are driving them downward will eventually manifest everywhere.

Dot 2. The suburban Ponzi scheme. In June, 2011, Charles Marohn published “The American suburbs are a giant Ponzi scheme” at Grist (also reviewed in the Sift). His point is that car-oriented suburbs create only the illusion of wealth. In the long run, they are enormous bad investments that create unmaintainable communities.

America’s early suburbs were outlying towns that were gradually engulfed by urban sprawl in a more-or-less natural way — Oak Park, Illinois and Arlington, Massachusetts come to mind. But the 20th century created car-oriented commuter towns out of nothing. Everything was new at the same time: new houses, new roads, new schools, new stores, new sidewalks, new bridges, new sewers, and so on.

As a result, nothing needed fixing right away, so taxes could be low. Sound accounting would have required these towns to build up big maintenance funds for the day when things started wearing out. But under sound accounting, those communities wouldn’t have been quite so attractive in the first place. And whatever the accountants said, why would voters tolerate higher taxes if the town was sitting on a pile of money?

As long as there was rapid growth — new subdivisions, new roads, new malls, etc. — the game could continue: Even after the potholes started, the tax base was still big compared to the relatively small part of the suburb that needed fixing. That’s why Marohn calls it a Ponzi scheme: Just as Ponzi’s later investors paid the dividends of the early investors, the suburb’s new neighborhoods pay for the maintenance of its old neighborhoods.

But trees don’t grow to the sky, so eventually a suburb reaches its carrying capacity. And when growth plateaus, the maintenance time bomb starts ticking. A decade or two later, everything seems to wear out at once, while the tax base stays comparatively flat. Now the local government faces a choice: raise taxes or let things start falling apart. Either option makes the town a less attractive destination for the high-income families and high-margin businesses it needs — especially in comparison with fresher suburbs still in their low-tax, low-maintenance, everything-is-new growth phase.

That starts a slow-but-steady decline, until eventually you have not just high tax rates, but also cracked sidewalks, pot-holed streets, underfunded schools, dingy libraries, litter-filled parks … and the kind of residents who can’t afford to live anywhere nicer.

When places like this hit the decline phase – which they inevitably do – they become absolutely despotic. This type of development doesn’t create wealth; it destroys it. The illusion of prosperity that it had early on fades away and we are left with places that can’t be maintained and a concentration of impoverished people poorly suited to live with such isolation. … Unfortunately, nothing I’ve brought up here is really unique to Ferguson. All of our auto-oriented places are somewhere on the predictable trajectory of growth, stagnation and decline. Racial elements aside, I think we are going to see rioting in a lot of places as this stuff unwinds.

Dot 3. The Ferguson revenue structure. As I’ve discussed before, Ferguson didn’t erupt simply because Darren Wilson shot Michael Brown. That was just the spark. Combustible anger had been building up in Ferguson for a long time.

Ferguson erupted because the less affluent black majority resented being in a predator/prey relationship with the mostly white police. It would be bad enough if that relationship were entirely based on racism or abuse of power, but it goes deeper than that: It’s economics. Reuters reports:

Traffic fines are the St. Louis suburb’s second-largest source of revenue and just about the only one that is growing appreciably. Municipal court fines, most of which arise from motor vehicle violations, accounted for 21 percent of general fund revenue and at $2.63 million last year, were the equivalent of more than 81 percent of police salaries before overtime.

Some of our municipalities are seeking to raise revenue through the use of their municipal courts. This is not about public safety. The courts in those municipalities are profit-seeking entities that systematically enforce municipal ordinance violations in a way that disproportionately impacts the indigent and communities of color.

Charles Mudede of Slog widens his view to include a statistic from AP: The “homicide clearance rate”, i.e., the percentage of murders that police solve in America, has dropped from 91% in 1963 to 61% in 2007. Mudede suggests a simple explanation:

Catching murderers costs money. Cities do not have money.

In other words, why have your police out spending the town’s money investigating serious crimes when they could be making money for the town by hassling jaywalkers like Michael Brown? In an era when your businesses are already moving away and your property values are stagnant or sinking, how else are you going to raise revenue?

Mostly, that revenue is going to come from poor people who can’t afford lawyers and have no place to move to. That may seem harsh, but if you change the practice, you’ll have to come up with an alternative revenue stream, preferably one that won’t chase away more businesses and more professional-class families. What could it be?

And so, concludes Reuters, changing the way Ferguson polices its people is going to be “easier said than done”.

Put it together. For the last couple centuries, we’ve had a simple formula for increasing wealth: Take something that people used to do with their muscles and figure out a way to do it by burning fossil fuels. Augmenting human effort with the energy stored in coal, oil, and gas has created a level of luxury that would have seemed magical to our ancestors.

During that time of increase, you didn’t have to worry much about either the fuel you were burning — there was more where that came from — or the atmosphere you were burning it into. Now we’re starting to hit limits on both sides. We have to go to extremes — deep in the ocean, deep underground, far into the polar regions — to find new fuel; and if we burn all that we have discovered, the change in our climate could be catastrophic.

So you don’t have to go all the way to Greer-level pessimism to realize that creating wealth will be trickier in the centuries to come. Many of the things that may look like wealth-creation actually aren’t; they just shove someone else into poverty, or create debt that will eventually have to be written off — like the “profits” investment banks booked during the housing bubble.

When generation-to-generation economic growth is large and reliable, you don’t have to worry too much about the long term, because your grandchildren will be rich enough to handle the messes you leave for them. So it makes a certain amount of sense to push costs off into the future. But if we genuinely don’t know whether generations-to-come will be richer or poorer than we are, then it’s important that we do our accounting right. It’s also important that we build robustly, so that communities are viable under a wide variety of scenarios. Assuming that everybody will have a car or that food can be imported cheaply creates brittle communities that someday may have to be abandoned. A flourishing society can afford such write-offs. But if maintaining the infrastructure we inherit is the difference between advance and decline, we’ll have to be smarter.

And finally, we need to figure out how to rebuild or write off the mistakes of the past. Places like Ferguson — and there are a lot of them — are not sustainable in their current form. They will never generate the capital to remake themselves, and the outside capital they attract will be mainly from vultures who want to squeeze the last bits of value out of the community’s decline and despair.

In the short term, the easiest way to deal with that dysfunction is to blame it on the people who live there and lack more viable options. Their local governments can figure out ever more inventive ways to squeeze money out of them and leave them in squalor, while the rest of us lecture them about their lack of middle-class values. But the fundamental mistakes are not theirs. Those mistakes were made decades ago, and have been quite literally set in stone.

Image vs. fact. The public debate around Food Stamp cuts has consisted almost entirely of imagery. Fox News’ hour-long special “The Great Food Stamps Binge” anointed lobster-buying surfer-musician Jason Greenslate “the new face of Food Stamps”, while MSNBC focused on kids and military families. Ezra Klein interviewed author and ex-sergeant Kayla Williams about growing up on Food Stamps, and quoted a blog post by an unemployed Afghanistan veteran currently receiving Food Stamps.

Each image is moving in its own way, but how well do any of them represent reality?

First, let’s establish some facts: We’re talking about the Supplemental Nutrition Assistance Program (SNAP), which cost the government $74.6 billion in FY 2012. As of last September, 47.7 million Americans — about 1 in 7 — were receiving SNAP benefits that averaged $134 a month. To be eligible for SNAP, your income must be lower than 130% of the poverty level, or about $30,000 for a family of four.

As you can see from the chart, the percentage of the population getting SNAP benefits fluctuated with the business cycle until Clinton’s welfare reform in 1996, then started increasing again when the 2002 Farm Bill loosened up eligibility. (The anomaly in the chart is the increase during the “Bush Boom” of 2002-2006.) It really took off when the Great Recession hit in 2008. Recently, the number of households receiving SNAP has roughly matched the USDA estimate of the number of households that are “food insecure”. Both numbers jumped between 2007 and 2009, and both are currently about 1 in 7.

The non-partisan Congressional Budget Office (CBO) estimates that the number of recipients would go back down to 34 million by 2023 even with no changes in eligibility. (I’d guess that follows from the assumption that the economy goes back to normal by then.) Benefits were increased in the stimulus bill of 2009, and those increases (a little less than 6%) will run out this November. (That’s already baked into the numbers and does not figure in the $39 billion of cuts.)

Lost in most of the discussion is the question of where the estimated $39 billion savings comes from. Anecdotes or even averages about SNAP recipients are meaningless in this discussion unless they apply specifically to the people who will lose their benefits.

The detailed CBO estimates show that most of the provisions of the House bill have little impact on cost. (It didn’t even bother to figure the savings from Section 110, “Ending supplemental nutrition assistance program benefits for lottery or gambling winners.”) The entire $39 billion comes from three changes.

Work requirements. The biggest chunk, $19 billion, comes from Section 109, “Repeal of state work program waiver authority.” That also accounts for the most immediate impact: $3.3 billion in FY 2015.

This sounds like the waivers in welfare work requirements that Mitt Romney so brazenly misrepresented in 2012, but it’s actually different. The SNAP rules say that able-bodied adults without children are limited to receiving 3 months of SNAP benefits every 3 years, unless they are spending at least 20 hours a week either working or participating in a job training program. The 1996 law that established that requirement allowed governors to apply for waivers if their states had high unemployment, figuring that it’s not fair to require hungry people to work if there are no jobs. That’s what’s being repealed.

That change shouldn’t affect any children, but it should cut off both Fox’s freeloading surfer and MSNBC’s unemployed Afghanistan veteran. (I didn’t find national estimates, but adults without children who don’t work 20 hours a week are about 8% of SNAP recipients in Texas, according to the Dallas Morning News.) How you feel about it largely depends on which one you think is more typical. I suspect the vet is more typical, but I don’t really know.

How you feel also depends on your mercy/severity bias. Some people would gladly feed ten freeloaders to save one person from going hungry through no fault of his or her own. Others feel justified in cutting off ten hungry innocents to force one Jason Greenslate back into the job market.

Categorical eligibility. The second biggest savings, $11.6 billion ($1.3 billion in 2015), comes from Section 105, “Updating Program Eligibility”, which eliminates something known as “categorical eligibility”. CE amounts to the idea that if you’ve already qualified for one needs-based government program, you can qualify automatically for some others, even if the eligibility requirements don’t match perfectly. This saves overhead costs for the government and shortens the lag time of waiting for your paperwork to go through, at the cost of giving benefits to people who might make a little more than 130% of the poverty level.

So the main folks this hurts are the working poor, those lucky couples with kids who get SNAP even though they make slightly over $30,000 a year. It hits them in multiple ways, because qualifying for SNAP can also automatically qualify their kids for free school lunches. Bread for the World estimates that 2-3 million people will lose SNAP benefits if CE is eliminated, and that 280,000 children will lose free school lunches. (It’s tricky, but not impossible, to make that estimate match the CBO’s $1.3 billion. Using the $133-a-month average benefit, we’d be talking about 10 million person-months. That could be 2 million people getting SNAP for an average of five months each during a year. My best guess, though, is that we’re more likely talking about 1 million people, with the other 1-2 million losing benefits only briefly while they re-apply and re-qualify.)
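The reconciliation in that parenthetical can be checked with back-of-the-envelope arithmetic. This sketch uses only the figures quoted above; the five-months-each scenario is my assumption, not a CBO number:

```python
# Back-of-the-envelope check of the CBO's categorical-eligibility savings,
# using the figures quoted in the text. The "2 million people" scenario
# is an assumption for illustration, not an official estimate.
cbo_savings_2015 = 1.3e9   # CBO's FY 2015 savings estimate, in dollars
avg_benefit = 133          # average SNAP benefit, dollars per person-month

person_months = cbo_savings_2015 / avg_benefit
print(round(person_months / 1e6, 1))   # roughly 9.8 million person-months

# If 2 million people lose benefits, that works out to about 5 months each:
people = 2e6
print(round(person_months / people, 1))
```

Roughly 10 million person-months divided among 2 million people is about five months per person, which is why a smaller number of year-round losers (with the rest losing benefits only briefly) fits the dollar figure just as well.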

Heat and eat. $8.7 billion in savings ($840 million in 2015) doesn’t actually concern food at all. It comes from eliminating the so-called “Heat and Eat” program, through which SNAP recipients can get assistance paying their utility bills. Bloomberg’s article says this would affect 850,000 people currently getting about $90 a month. (Again, I think you make that work with the $840-million-a-year CBO estimate by assuming not everybody gets assistance for the full 12 months.)
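That last assumption can be sanity-checked the same way: 850,000 people getting $90 a month for a full twelve months would cost more than the CBO's annual figure, so the average must run a bit short of a full year. The numbers below are just the ones quoted above:

```python
# Sanity check on the "Heat and Eat" figures quoted in the text.
people = 850_000
monthly = 90                       # dollars of assistance per month

full_year_cost = people * monthly * 12
print(full_year_cost)              # 918,000,000 — above the CBO's $840M

# Average months per person implied by the CBO's $840 million annual figure:
implied_months = 840e6 / (people * monthly)
print(round(implied_months, 1))    # about 11 months on average
```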

So that’s the whole $39 billion right there. Everything else in the bill is window dressing. For example, drug-testing recipients — which the House bill does not mandate but allows states to do — will almost certainly cost the government more for the tests than it can save by denying benefits to drug users. That was already true when Florida tried it for welfare applicants, and since SNAP benefits-per-person are much less, the loss should be even bigger.

Dependence. The Republican rhetoric on this issue revolves around the word dependence: dependence on government, creating dependence, and so on. The implicit assumption is that people who are getting aid would otherwise take matters in hand somehow. (And that we would approve of how they did it. After all, isn’t Breaking Bad the story of a man realizing that no one is going to help him and taking matters in hand?) And that in turn is based on the assumption that poverty is caused by poor people; if they’d just get out and work, they wouldn’t be poor. A third assumption is that it’s OK for children to suffer for the misbehavior of their parents; seeing their children hungry is part of what’s supposed to motivate the poor not to be poor.

I see two things going on here. First, what I like to call the Musical Chairs Fallacy, which is a version of the Composition Fallacy. If a particular child is always the first one out in musical chairs, you could train him/her to be quicker and more alert. But if you trained all the kids, someone would still be the first one out, because there aren’t enough chairs.

Similarly, you can imagine individual parents watching their children plead for more food and getting a burst of desperate energy that propels them into jobs they might otherwise not have found. But if all the poor get desperate at once, will that desperation create enough jobs to feed all their children? Or are a certain number of people going to go out of the game when the music stops (no matter how quick or alert everyone is) because there aren’t enough chairs?

And our economy is creating more and more part-time and minimum-wage jobs. The increasing number of people on food stamps is how we’re dealing with those trends. If you’re working 30 hours a week at WalMart, you can’t feed your kids. Politicians who are against raising the minimum wage and also against Food Stamps need to spell out their plan for those kids.

Summing up. The $39 billion saved by the House bill comes from three places: cutting off benefits for unemployed adults without kids and trusting that they will find legal jobs rather than go hungry or turn to crime; stopping benefits for the working poor who make slightly too much money; and letting poor families be hotter in the summer and colder in the winter.

What’s next? The Senate passed much smaller Food Stamp cuts (about $4 billion over ten years) back in June. That was part of a bipartisan farm bill that got 48 Democratic and 18 Republican votes. Now the House and Senate have to meet in a conference committee to work out a compromise bill, though it’s hard to imagine what that might look like. Like all the other spending bills that are hung up in this Congress, it has an October 1 deadline.