How we measure the poverty line(s)

The World Bank is rethinking where it sets the poverty line.

Until the mid-1960s, the U.S. didn't have an official, federal poverty line.

In 1963, the Social Security Administration asked one of its researchers, Mollie Orshansky, to report on child poverty. Orshansky quickly realized there was no way to tell exactly how many children were living in poverty, so she devised a simple calculation to determine who was poor.

She took the U.S. Department of Agriculture's "thrifty food plan," which estimated the minimum amount of food a cash-strapped family could survive on and still be healthy.

In 1963, that food cost $1,033 for the year. Survey data at the time showed the average family spent about a third of its income on food, so Orshansky took that $1,033 and multiplied it by three. Any family earning less than that amount was below the poverty line. Fifty years later, that is still how the federal government determines who is in poverty: the minimum you need for food, multiplied by three.
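Orshansky's formula is simple enough to express in a few lines. Here is a minimal sketch of the 1963 calculation, using the figure from the article (the function name and example incomes are illustrative, not part of any official methodology):

```python
# Orshansky's 1963 calculation: the poverty threshold is the minimum
# annual food budget multiplied by three, because surveys at the time
# showed the average family spent about a third of its income on food.

FOOD_BUDGET_1963 = 1033        # USDA minimum annual food cost, in dollars
FOOD_MULTIPLIER = 3            # food was roughly a third of family spending

poverty_line = FOOD_BUDGET_1963 * FOOD_MULTIPLIER  # $3,099 for the year

def is_below_poverty_line(family_income: float) -> bool:
    """A family counts as poor if it earns less than the threshold."""
    return family_income < poverty_line

print(poverty_line)                 # 3099
print(is_below_poverty_line(2500))  # True
print(is_below_poverty_line(5000))  # False
```

The same structure is still used today; only the food-budget figure is updated over time.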

Many poverty researchers find that formula problematic, because these days the average family spends about one-seventh of its income on food, not one-third, while other costs, like housing, medical care, child care and commuting, have risen.

For the past few years, the Census Bureau has published a Supplemental Poverty Measure, which takes those rising costs into account, along with whether people live in low-cost or high-cost areas. The supplemental measure also adds in the benefits that many low-income people receive, like SNAP and subsidized housing.
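The logic of the supplemental measure can be sketched as a simple accounting exercise: count benefits as resources, subtract necessary expenses, and adjust the threshold for local costs. The sketch below is a hypothetical simplification for illustration only; the function names, the cost-of-living factor, and the dollar figures are assumptions, not the Census Bureau's actual methodology:

```python
# A simplified, hypothetical illustration of the Supplemental Poverty
# Measure's logic (the Census Bureau's real methodology is far more
# detailed): benefits add to a family's resources, necessary expenses
# subtract from them, and the threshold varies with local costs.

def spm_resources(cash_income, snap=0.0, housing_subsidy=0.0,
                  medical_costs=0.0, childcare_costs=0.0,
                  commuting_costs=0.0):
    """Resources = cash income plus benefits, minus unavoidable expenses."""
    return (cash_income + snap + housing_subsidy
            - medical_costs - childcare_costs - commuting_costs)

def is_poor_spm(resources, base_threshold, local_cost_factor=1.0):
    """Threshold scales with local cost of living (above 1.0 in
    high-cost areas, below 1.0 in low-cost areas)."""
    return resources < base_threshold * local_cost_factor

# SNAP benefits can lift a family above the line...
print(is_poor_spm(spm_resources(24000, snap=4000), 26000))               # False
# ...while high child care costs can push another family below it.
print(is_poor_spm(spm_resources(27000, childcare_costs=5000), 26000))    # True
```

This is why the two measures can classify the same family differently: the official formula looks only at pre-tax cash income, while the supplemental measure looks at what a family actually has left to spend.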

Many poverty researchers agree the supplemental measure paints a more accurate picture of who is in poverty, but the government still uses Orshansky's original formula for its official measure.