Friday, March 29, 2013

Most infographics have too much information in them, or have too much text relative to actual information. This page has five really good ones - the screenshot above is perhaps my favorite, the periodic table according to abundance on Earth. The rest are pretty good too.

Thursday, March 28, 2013

Here's a link to another useful set of charts about the high costs of the American health care system. (You can see a previous post on this topic here.) This one shows prices in the US for various drugs, diagnostic tests, and procedures - like an appendectomy, illustrated in the screenshot - compared with other countries. Guess what? Every single one of them is highest in the US, by a lot. This Atlantic.com column by Derek Thompson offers the beginnings of an explanation:

Why is American health care so expensive? Books could be written about this topic. And books have been written about this topic. In The Healing of America, T. R. Reid explored why American medicine falls behind other countries in quality while it races far ahead in cost of care.

Near the end of the book, Reid expands on two big reasons why U.S. health care is so expensive: (1) Unlike other countries, the U.S. government doesn't manage prices; and (2) the complications created by our for-profit system add tremendous costs.

Wednesday, March 27, 2013

A friend recently pointed me to a couple of posts on Evan Miller's interesting blog - Miller is a grad student in economics who is interested in statistics for the non-academic trying to figure out what's going on in her world. Like me. He's also developed and is selling two applications that look extremely useful, Wizard and Magic Maps. Wizard, according to Miller's description, is a "powerful, easy-to-use tool for data analysis." I've embedded the brief but clear video describing it. Magic Maps is, as you might expect, a mapping tool.

I've downloaded test versions and will play with them over the next couple of weeks. Meanwhile, has anyone had any experience with these programs? If you have, please email me or comment.

Tuesday, March 26, 2013

Think the situation of Charlie's family in "Charlie and the Chocolate Factory" - where no one has quite enough to eat and the grandparents give up food so growing Charlie can have enough - no longer exists in the US? Think again - research shows that nearly 15% of US households are "food insecure," meaning that they "had difficulty at some time during the year providing enough food for all their members due to a lack of resources." Worse, one-third of these households had very low food security, meaning that "the food intake of some household members was reduced and normal eating patterns were disrupted at times during the year due to limited resources." All this is according to a report, Household Food Security in the United States in 2011, produced by the Economic Research Service of the US Department of Agriculture.

The USDA Economic Research Service has recently developed a Food Access Research Atlas - a map showing low-income census tracts whose residents have low access to food. The screenshot above is part of the New York City metropolitan area. The green areas are low-income census tracts where a significant proportion of residents lives more than 1 mile (urban tracts) or 10 miles (rural tracts) from the nearest supermarket. The orange areas identify tracts where a significant proportion of residents lives more than half a mile from the nearest supermarket (still 10 miles for rural tracts). I know this because the Atlas provides detailed data about each tract, including its character and how many residents have access to vehicles.

The atlas is a useful tool, though it could be a little easier to use for someone just beginning to learn about food security issues. What was your experience? Let me know in the comments. Do you agree that it's a great use of government data?

Thursday, March 21, 2013

The Guardian's ever-reliable data blog has produced an interesting interactive graph showing how weight varies by country, continent, and gender. The screenshot above is a comparison of obesity rates among women on different continents. Overall, rates of obesity increased between 2002 and 2010 on every continent. The graph highlights three countries that bucked the trend. The other graphs show obesity rates among women compared with men in 2010 and among men in 2002 and 2010.

I've written before about food and obesity rates, here for example, and here. What do you think might be causing the increase in obesity worldwide? Share your thoughts in the comments.

Update: Here is a link to an interview with Michael Moss about the place of salt, sugar, and fat in our diets.

Wednesday, March 20, 2013

These are tough times for people who run not-for-profits, with needs increasing and government revenues declining. So it was heartening to read that at least one, New York Foundling, has found a way to put its real estate assets to work for it. According to news coverage like this, NY Foundling has re-thought its use of the 14-story office building it built on Sixth Avenue in the early 1990s. Five years ago it sold off six floors to the School Construction Authority for an elementary school. Now it is contracting its operations into five of the remaining eight floors and will rent out the top three.

Creative use of real estate brings lots of options. For example, the New York Times reports that the Foundling is planning to offer staggered leases that will give it the option of reclaiming some of the space if it grows again. It's been able to use money from the sale of the lower floors to renovate the space it retains. And the hope is that the income from the rents will help alleviate some of the pressure to raise funds. Of course, operating as a landlord will bring its own challenges.

Other foster care agencies own valuable property in New York City and the surrounding counties. It will be interesting to see whether, and how, any of them change the way they make use of those properties.

Tuesday, March 19, 2013

Here's a link to a slideshow, from Climate Central, illustrating a week's worth of climate change news. Each slide links to a longer story. The screenshot above is a photo of fractures in the sea ice off Canada and Alaska - and the related story explains why the fractures are a problem.

The NSIDC (National Snow and Ice Data Center) said the fracturing is likely a sign of the prevalence of young and thin sea ice, which can be disturbed more easily by weather patterns and ocean currents, and also melts more easily when exposed to warm air and ocean temperatures during the melt season. As Arctic sea ice extent has plummeted since 1979, down to a record low in September 2012, first-year ice has become much more common across the Arctic, as thick, multiyear ice has declined.

Monday, March 18, 2013

You've done this instinctively: divided something that needed to be sorted - cards, for instance - into smaller groups, like suits, and then sorted so that each suit is in order. That operation is analogous to a computer algorithm, quicksort, which Wikipedia describes as:

Quicksort is a divide and conquer algorithm. Quicksort first divides a large list into two smaller sub-lists: the low elements and the high elements. Quicksort can then recursively sort the sub-lists. The steps are:

Pick an element, called a pivot, from the list.

Reorder the list so that all elements with values less than the pivot come before the pivot, while all elements with values greater than the pivot come after it (equal values can go either way). After this partitioning, the pivot is in its final position. This is called the partition operation.

Recursively apply the above steps to the sub-list of elements with smaller values and separately the sub-list of elements with greater values.
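The quoted steps translate almost directly into code. Here is a minimal Python sketch of my own - it builds new lists rather than partitioning in place, trading the usual in-place partition operation for readability:

```python
def quicksort(items):
    """Sort a list using the divide-and-conquer steps quoted above."""
    if len(items) <= 1:
        return items  # a list of zero or one elements is already sorted
    pivot = items[0]  # step 1: pick a pivot
    # step 2: partition the rest into elements below and at-or-above the pivot
    less = [x for x in items[1:] if x < pivot]
    greater = [x for x in items[1:] if x >= pivot]
    # step 3: recursively sort each sub-list, with the pivot in between
    return quicksort(less) + [pivot] + quicksort(greater)

print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```

The card-sorting analogy carries through: each recursive call works on a smaller pile until every pile is trivially in order.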

So what's the video? It's a very good illustration of the process. I found it yesterday via James Fallows' blog. Click the link for his gloss.

The same group also developed an illustration of a bubble sort - perhaps not as good an algorithm, but still a good dance.
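For contrast, here is an equally minimal bubble sort sketch (my own, not taken from the video): it repeatedly sweeps the list and swaps adjacent out-of-order pairs, which is why it generally does far more comparisons than quicksort on large lists.

```python
def bubble_sort(items):
    """Sort by repeatedly swapping adjacent out-of-order elements."""
    items = list(items)  # sort a copy; leave the caller's list alone
    n = len(items)
    for i in range(n):
        swapped = False
        # each pass bubbles the largest remaining element to the end
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
                swapped = True
        if not swapped:
            break  # no swaps on a full pass means the list is sorted
    return items

print(bubble_sort([6, 5, 3, 1, 8, 7, 2, 4]))  # [1, 2, 3, 4, 5, 6, 7, 8]
```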

Thursday, March 14, 2013

Here's a link to a thoughtful piece by Alnoor Ebrahim in the Harvard Business Review called "Let's Be Realistic about Measuring Impact." It compares the measurement processes of three different organizations, each of which uses a different approach. One measures outputs, another measures short-term outcomes and estimates long-term outcomes, while the third thinks hard about - and waits for - long-term outcomes. Each uses some form of proxy measures when necessary. Each organization puts the resources necessary into making its measurements meaningful. And they've figured out a way to get around, where necessary, their research limitations. Says Ebrahim:

Notice that none of these three organizations typically measures impact directly. They hypothesize what the outcomes and impacts might be but only in some instances are they able to follow through by commissioning their own research or multi-year evaluations. And these are sophisticated funders and investors who are much better positioned to measure long-term results than the front line organizations that contend with funding shortages and operational challenges every day.

I've written before about the importance of using outcome measures. It's hard to do well, but well worth the effort. Has your organization tried? What were the problem areas? What worked well?

Wednesday, March 13, 2013

Update, March 16: For more on the loss of the monarch habitat, see this Op-Ed in the New York Times.

Update, March 14: The New York Times is reporting that the monarch butterfly migration is the smallest in many years. If you have a subscription, read the article: first, I think it's fair to conclude that it illustrates that there are many unintended consequences from our decisions - herbicide-resistant plants mean fewer weeds, which means less food for the butterflies. And second, it illustrates the use of proxy measures - researchers cannot count the number of butterflies, so they count the amount of space the butterflies cover. The sharp declines are scary.

That is a graph, published last week in Science magazine, showing temperature ranges over the last 11,000 years - the Holocene period (the full abstract is here; the full article is behind a paywall). What the data show is that the earth is warmer today than it's been for most of that period. What's different about this paper? Several things: it goes back much further than previous research and it examines global, not just regional, temperatures. As Tim McDonnell of The Climate Desk puts it:

To be clear, the study finds that temperatures in about a fifth of this historical period were higher than they are today. But the key, said lead author Shaun Marcott of Oregon State University, is that temperatures are shooting through the roof faster than we've ever seen.

"What we found is that temperatures increased in the last hundred years as much as they had cooled in the last six or seven thousand," he said. "In other words, the rate of change is much greater than anything we've seen in the whole Holocene," referring to the current geologic time period, which began around 11,500 years ago.

How did we get here and where are we going? Here's a link to a good infographic that tells you what you need to know about how the continuing release of carbon dioxide will play out in different scenarios. It's not a pretty picture.

Tuesday, March 12, 2013

Often, as donors, we make a donation and then don't think about a charity - and what it does with our money - until we hear from it again. Donors should care about what happens after we donate, and Stern ably shows why. We should care because if we don’t care, then charities won’t care. They’ll continue to raise money, but may become bloated and unresponsive, like the Red Cross after Hurricane Katrina. Or they’ll keep on doing the same thing. What Stern could have addressed better is the reasons for the inertia. One reason is structural: restricted funding streams make it very difficult for charities to function differently, or respond to emerging needs. Our system is extremely inefficient, and Stern reminds us that it results in economic costs (for all our generosity a lot of people in the US live in poverty), increased competition for charitable dollars and confusion on the part of the public. But it’s also because charities are often ineffective at measurement.

Measurement, if done well, is exacting, difficult, time-consuming, and expensive. It may tell the managers of a charity something they do not want to know, as in the DARE example. Success means different things in different contexts, and coming up with a definition forces managers to grapple with existential questions: what does it mean to say that a charity, NPR for example, is successful? What about the Metropolitan Opera?

Most of all, though, measurement is expensive. Studies that follow many people for many years, like the High Scope Perry Preschool Studies of the long-term effects of early childhood education, are labor-intensive. It takes staff long hours to track down individuals, collect and analyze information, and explain to managers, boards and funders what a study means. And once it’s done, the analysis is not static. If you reach your goals, you have to reset them. If you don’t, you have to figure out why you did not, and whether you set the correct goals in the first place.

Not many charities can afford the investment in staff, follow-up, and data analysis a good study requires, but a few do. Moreover, though Stern does not discuss it, many foundation and government funders have been demanding outcome measures as part of their contracting process over the last decade. Unfortunately, because a funder can require outcome measures only for the program – or part of a program – it is funding, the result can be fragmentation, of efforts and of understanding. And when a charity is providing similar but not identical information to another funder, opportunities for manipulating the reports may be irresistible. Better accountability efforts by funders and board members are also necessary.

Stern, as befits the former CEO of NPR, tells a good story. He reveals a series of structural issues around the charitable sector: low barriers to entry into the charitable field mean that almost any cause or event, like a college town beer festival, can become a charity. Often executive salaries are high, though they are usually lower than those of executives managing comparably-sized private businesses. And “crooks gravitate to crises,” Stern says. After the Haiti earthquake, scam artists sent out hundreds of fake appeals on Facebook, Twitter, and by e-mail. Stern reports that the FBI estimated that more than 2300 fake charity sites solicited donations after Hurricane Katrina. Lately a disturbing trend that Stern calls celanthropy – celebrities setting up charities – has arisen.

Stern doesn’t really distinguish charities that provide social services from charities, like colleges, that can be said to serve donors. Their operations are very different, even if their tax status is similar. Stern doesn’t fully piece his arguments together. If charities had some kind of normal life cycle the way private businesses do, for example, government agencies might have the time to focus on the egregious cases. While Stern describes failed efforts to write sunset provisions into the charities laws in the 1970s, he never fully circles back to make the point.

Instead, Stern concludes that as governments retreat from supporting arts and social services donors have to be willing to invest more in charities, not less, and to invest differently. He identifies several private (and charitable) programs that create and enlarge effective charities by providing multi-year grants and consulting services, and urges that their work be expanded. This kind of social entrepreneurship is very rare in the US, and by itself is probably not enough.

Stern’s point would be considerably stronger if he had addressed the many new ways government is providing and funding social services, and recognized the promise these developments hold for the charitable sector. To give just one example, last year the federal Center for Medicare and Medicaid Innovation offered a competitive grant seeking innovative service and payment models for health care. Huge amounts of data are now being collected about social services (even the foster care system in New York City has automated, on-line records). Jim Manzi, in his book “Uncontrolled” (Basic Books 2012) (my review is here), suggests establishing an agency, akin to the NIH, that can oversee and fund the design and interpretation of randomized social policy experiments, harnessing the power of big data for social services.

Stern also could have engaged usefully with the new funding models social entrepreneurs have developed. Social impact bonds, in which a government contracts with a private bond issuer to pay for services based on outcomes or achieving performance targets, have been used in the UK and are starting to be used in the US. The bonds raise enough money for a rigorous program evaluation – and payment depends on success. Once results become available, effective programs can be ramped up quickly, while ineffective ones can be stopped. Health impact bonds function similarly, by providing preventive health care services. In his book “The Non Nonprofit” (Jossey-Bass Books, 2012) (my review is here) Steve Rothschild describes how his Minnesota charity used data to show a return on government and foundation investments, creating economic value from social benefit. Rothschild has now expanded the concept into something he calls Human Capital Performance Bonds, which operate like social impact bonds except that a government entity issues the bonds.

We have different kinds of charities in the US – large arts organizations, family foundations, tiny programs run out of church basements. They enjoy different funding mixes, ranging from almost entirely government funding to entirely private funding. “With Charity for All” raises some important issues about charities, their effectiveness, and our unique blend of public and private funding. It also includes some useful suggestions. But without a more analytical look at how different kinds of charities operate we, like the charities Stern describes, are going to continue doing what we’ve always done.

This is the second part of a two-part review. You can read the first part here. Yesterday, I briefly posted an incorrect version of Stern's name. I regret the error and have corrected it.

Monday, March 11, 2013

Yellow wristbands. Lists of donors in a program or report, categorized by donation amount. Names on buildings, stadiums, or rooms. Charity is large and public in the United States. In his timely book “With Charity for All” Ken Stern, formerly the CEO and COO of NPR, reports that 1.4 million foundations, philanthropies, and charities exist to accept our donations. Charities, also known as not-for-profit corporations, dominate education, health care, the arts (including museums and orchestras), environmental groups and social services.

Charity has been part of American life since the Pilgrims landed. Stern estimates that the sector adds up to 10% of the US economy today. Stern reports that in 2011, we gave nearly $300 billion to charity, with the largest share going to religious and educational institutions. Charities employ 13 million people (an additional 61 million people volunteer) and rake in $1.5 trillion in revenues each year, including approximately $500 billion in government grants that pay for services. And that’s before we think about the tax expenditure – what it costs the federal and state governments in taxes it foregoes when you and I deduct our gifts from our income when we calculate our taxes. But what are we getting for our large investment? What should we be getting? A third question, how can we tell, is implicit in the first two. Stern does a good job of explaining the context and importance of these questions. He could have gone farther in answering them.

Stern vividly describes the development and broad reach of charities in our country. Giving expanded from the religious to the private sector when the 19th century robber barons, having made their fortunes, made substantial gifts to their communities: libraries, universities, museums, cultural institutions. Endowments seem to have kept the institutions operating for half a century. In the second half of the 20th century we enacted tax policies that encouraged giving, and governments transferred funding and operational responsibilities for social services to private entities. The number of charities took off.

Unfortunately, the amount of government oversight did not. In recent years, Stern tells us, the IRS has granted 99.5% of the applications for charitable status under the Internal Revenue Code, which allows donors to deduct contributions. Most state Attorneys General require at least registration for charities to operate within a state, though it’s unclear how much other monitoring goes on.

Stern also addresses the challenging question of why we give. He describes the impulse to donate as a complex mixture of altruism and the possibility of obtaining personal benefits (remember those wristbands). Stern reports finding no study that explains donor behavior. The one thing that’s clear is that we respond to stories.

[The donor] would hear a story of need, often through the media or through her network of friends and associates, and a check could follow within hours. This method of giving is in fact the norm for many donors: reactive to news and events, and responsive to individual stories and needs. It reflects the intimate and individualistic nature of giving in this country. . .

As a result, he says, charities hone their narratives, not their services. “Charities know that they are rewarded not for finding cost-effective solutions to problems – nor solutions to problems at all – but for finding ways to personalize, humanize, and convey needs.”

Stern’s conclusion that charities focus on stories may be true for fundraising, but it is not true for management. For years, as Stern acknowledges, charities have been measuring their services. Accrediting agencies want to see a robust measurement program, including outcome measures, as do many funders, both foundation and government. It’s just we donors who often ignore the results. Stern uses the DARE anti-drug program as a prime example. Developed in the 1980s, DARE brings police officers into classrooms to educate middle and high school students about the dangers of drugs. Despite many studies that show DARE does not work, the program is still going strong – perhaps because it provides funding for several thousand police officers. DARE dismisses the research studies in favor of anecdotal evidence, but Stern exaggerates when he concludes that the entire sector suffers from a “medieval aversion to scientific scrutiny and accountability.”

Deciding what to measure can be a hurdle. It’s tempting to use measures that are easy and accessible. Test scores for elementary and middle school students come to mind. They are certainly a good measure of each child’s achievement at a particular point in time. There is even substantial evidence that SAT scores, especially in combination with grades, can predict a student’s first year college grades. But they are not the best measure of a teacher’s effectiveness, only an easily available one.

Unfortunately, measuring the wrong thing can result in misallocated resources. Stern’s example here, prevention of waterborne diseases in developing countries, is instructive. Waterborne diseases are endemic throughout the developing world, and digging a well feels like the obvious solution. Donating to support new wells is an understandable impulse, and Stern shows that the apparent simplicity of the solution increases the appeal. But, he asks, if the solution is so easy, why are waterborne diseases still occurring? Because each well requires a pump, and each pump requires maintenance. Stern points out that water charities don’t have the expertise, or the intent, to deal with the long-term problem of maintenance. In any case we donors don’t want to hear about it. We feel good when we give, and we don’t often care about what happens next.

Friday, March 8, 2013

The normally extremely clear writer James Fallows (his terrific blog on politics, technology, and beer, among other things, is here) has posted a letter from one of his readers. The full blog post is here; the excerpt I'm interested in is:

First, our public policy discussion has become too wonkish, by being entirely focused on measurable outcomes at the expense of all others. (Another example: the health care debate, the vast majority of which was about costs instead of the moral imperative of universal health care)...

It may just be the writing (to repeat, it's not Fallows' but a reader's comment), but this strikes me as an example of someone blaming the numbers, as opposed to the interpretation of the numbers, for the politics. Sometimes a mathematical model is our best chance of understanding what is happening in the world (even if that understanding is weak). But it's our interpretation of the numbers - the context we give them - that guides how we use them. Numbers are never just numbers.

Thursday, March 7, 2013

The child welfare system is as subject to the winds of fashion as any other field - in my time, uniform case records, family-centered practice, intensive preventive services, family group decision-making, concurrent planning, and community-based services are among the new approaches that have come and gone. And that's just in New York City. In fact, New York City is now moving its large preventive services program to an evidence-based set of programs, though it is not clear that those programs can be replicated here easily.

So it was with some interest that I read Elizabeth Bartholet's article "Creating a Child-Friendly Child Welfare System: Effective Early Intervention to Prevent Maltreatment and Protect Victimized Children." Bartholet is a professor at Harvard Law School, and here she focuses on the prevention of abuse and neglect, criticizing the current climate that values family preservation to the extent possible. She also highlights two promising approaches: early prevention through home visits using a public health model, and early protection - that is, monitoring for children at risk of maltreatment.

The problem is, there's a lot of short-term research (some of which Bartholet rightly criticizes) but not enough long-term research: what approach really is better for children. Removal? Foster care? Adoption? Open adoption? Because of ethical problems, we can't do random assignment of children or families into programs, but Bartholet calls over and over for comparative research. She's right. We also need longitudinal research. Long-term research with followups is expensive, but until we have it we will continue lurching from one approach to another - we may call it research-intensive or evidence-based, but Bartholet convincingly shows that most approaches are neither.

Monday, March 4, 2013

I've written before on the important issue of public access to research and data sets that have been supported by public funding. The Obama administration has taken an important step in the direction of expanding access. On February 22 John Holdren, Director of the Office of Science and Technology Policy, issued a policy statement, available here, instructing federal agencies with research budgets of more than $100 million to make the research results available within 12 months of publication. As the journal Nature puts it:

The policy applies to an estimated 19 federal agencies, which each spend more than US$100 million on research and development. It would roughly double the number of articles made publicly available each year to about 180,000, according to the Scholarly Publishing and Academic Resources Coalition, an open-access advocacy group in Washington DC, which called the memo a “landmark”. Until now, only the US National Institutes of Health (NIH) has required its research to be publicly available after 12 months.

You can read Nature's assessment of possible approaches in the US here.

But don't expect things to happen too quickly. The policy statement gives agencies six months to come up with a draft plan, and doesn't specify an implementation date. The new US policy meets what Nature calls the green standard: research results and data sets must be available within one year of publication. The gold standard, which the UK alone is pursuing, is to make research available immediately. You can read more about the issues involved in the difference here.