The assertion that a program will save more money than it costs in the long term is a difficult one to demonstrate.

The rising call for publicly funded, high-quality early-childhood education brings to mind one of my favorite public-administration exercises: the challenge of the "public ledger." For programs touted to "pay for themselves with savings downstream," public managers struggle to quantify those savings convincingly, and struggle even more to harvest them to support cost-effective interventions. If the pays-for-itself argument is to be more than rhetoric, we need to get more realistic about what's involved.

Think of the public ledger as a full accounting of public funding, over time, across multiple programs and jurisdictional lines within an area of public intervention. Juvenile crime prevention programs supposedly save money when youth are not incarcerated. Health systems save money when seniors do not fall. A city avoids substantial water treatment costs if its watershed is protected. Additional auditors will pay for themselves by finding more waste and abuse. Children arriving in kindergarten ready to learn are more likely to succeed in school and, if the advocates are correct, more likely to graduate from high school and move into successful adulthood.

Early-childhood education certainly appears to present a strong case for returning multiples of the original investment. A recent report synthesizing five decades of research concluded that there was compelling (though not indisputable) evidence that investment in quality pre-kindergarten programs pays for itself over the long term for lower-income, disadvantaged children unlikely to arrive in kindergarten ready to learn.

Yet expanding funding for high-quality pre-K programs has been slow and erratic. While ideological objections explain some of the challenges, the application of the public-ledger argument faces the same pitfalls that other programs encounter when arguing that investments will produce savings in future years:

• Flawed measurement of results: Many public programs intending social benefit struggle to quantify "success" -- or to measure impact at all. Absent control groups, others fail to rigorously distinguish between extraneous conditions and the direct effects of the intervention. Others seek to replicate programs whose success rests on a charismatic leader more than on the specific interventions. Or they assume that replicated programs will produce replicated results when they are in fact site- or condition-specific.

In the early-childhood field, critics have seized on a fade-out of academic benefits within a couple of years of attending a quality program. Many observers believe the "head-start" effect is lost in poor classroom experiences, where teachers struggle with children who are not prepared for school and allow school-ready children to coast.

• Targeting issues: The calculation of cost-effectiveness rests on the cost of the program per unit of service delivered, the number of individuals receiving the service, and the savings across the whole group of program recipients. Unlike an inexpensive vaccine universally distributed, social-benefit programs can be very expensive, and targeting the at-risk populations where savings are most reliable can make or break a cost-effectiveness argument. In Philadelphia, an intensive fall-prevention program did reduce injurious falls among at-risk seniors by 80 percent. But the program defined "at risk" in a tightly constrained way: Participants were over 85 or had previously experienced an injurious fall. Broad application of the program would have diluted its impact, multiplying costs while delivering little benefit to those less likely to fall.

The more tightly targeted the program, the lower the cost and the higher the return on investment, but also the narrower the public support and perceived social impact. Early-childhood-education advocates make strong arguments for universal pre-K, pointing to evidence that virtually all children benefit, but the budgetary cost rises even as the incremental benefit to less-disadvantaged children shrinks.
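The arithmetic behind this targeting trade-off can be made concrete with a toy calculation. All of the figures below are hypothetical, chosen only to illustrate the dynamic; none are drawn from the Philadelphia program or any pre-K study:

```python
# Hypothetical sketch: how targeting drives cost-effectiveness.
# Every number here is invented for illustration.

def net_savings(population, cost_per_person, baseline_risk,
                risk_reduction, cost_per_incident):
    """Return (program cost, avoided-incident savings, net) for one cohort."""
    program_cost = population * cost_per_person
    incidents_avoided = population * baseline_risk * risk_reduction
    savings = incidents_avoided * cost_per_incident
    return program_cost, savings, savings - program_cost

# Tightly targeted cohort: small, with a high baseline risk of a costly event.
targeted = net_savings(population=1_000, cost_per_person=2_000,
                       baseline_risk=0.30, risk_reduction=0.80,
                       cost_per_incident=40_000)

# Broad cohort: ten times larger, same per-person cost, much lower baseline risk.
broad = net_savings(population=10_000, cost_per_person=2_000,
                    baseline_risk=0.05, risk_reduction=0.80,
                    cost_per_incident=40_000)

for label, (cost, savings, net) in [("Targeted", targeted), ("Broad", broad)]:
    print(f"{label}: cost ${cost:,.0f}, savings ${savings:,.0f}, net ${net:,.0f}")
```

Under these invented numbers, the same intervention with the same 80 percent effectiveness nets roughly $7.6 million when aimed at the high-risk group but loses roughly $4 million when spread across the broad one, because the cost scales with headcount while the savings scale with baseline risk.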

• The complexity of tracing downstream savings: The Philadelphia fall-prevention program yielded substantial savings to Medicaid, Medicare, hospitals and insurers. But because of the complexity of payment systems, it was impossible to recapture savings from the institutions whose costs it was reducing and use them to capitalize the program. Ultimately, the program was embraced by institutions that both bore the costs of falls and reaped the savings within their own financial enterprises. In early childhood, savings are spread across myriad social programs: schools, family services and the like. Contrast this with New York City's Land Acquisition Program, which since the mid-1990s has acquired development easements on upstate wilderness acreage to protect the city's water supply and reduce the need for capital-intensive water-treatment projects. City investment brought tangible cost avoidance.

• Cross-program dynamics in a budget-constrained world: Those fabled program silos really do get in the way, especially when every agency is pressed to deliver basic services and will resist becoming the source of investments for another. The counterarguments can be persuasive: When will we see the savings? How can we explain to our program constituents that money is going to an unrelated program? What if the savings just don't pan out as promised?

Seemingly overwhelming obstacles work against the claim by a program's advocates that it can pay for itself with long-term savings. The key is leadership with confidence in programs that work, the determination to make room for disciplined investment, and the perseverance and patience to test the reality of the public ledger.