George Bernard Shaw quipped, “If all economists were laid end to end, they would not reach a conclusion.” However, economists should not be singled out on this account; an equal share of controversy awaits anyone who uses theories to solve social problems. While there is a great deal of theory-based research in the social sciences, it tends to be more theory than research, and with the universe of ideas dwarfing the available body of empirical evidence, there is little if any agreement on how to achieve practical results. This was summed up well by another master of the quip, Mark Twain, who observed that the fascinating thing about science is how “one gets such wholesale returns of conjecture out of such a trifling investment of fact.”

Recently, economists have been in the hot seat because of the stimulus package. However, it is the policymakers who depended on economic advice who are sweating because they were the ones who engaged in what I like to call data-free evaluation. This is the awkward art of judging the merit of untried or untested programs. Whether it takes the form of a president staunching an unprecedented financial crisis, funding agencies reviewing proposals for new initiatives, or individuals deciding whether to avail themselves of unfamiliar services, data-free evaluation is more the rule than the exception in the world of policies and programs.

It is a nervous lot who must take action when a social problem or proposed solution is new, or when, for any number of other reasons, there is little or no historical data on the effectiveness of a program under consideration. To be fair, sometimes the problem is not a lack of data but limited access to it. In either situation, it is the absence of data that forces policymakers to put their faith in something else, namely a theory. This could be one individual’s idiosyncratic lay theory or a widely accepted scientific theory, but it is always a theory of some sort. Consequently, data-free evaluation is always theory-based evaluation, though not always theory-based evaluation at its best.

The push for evidence-based policy stands in sharp contrast to data-free evaluation. By and large, there is much to be admired in evidence-based approaches. Certainly, if one has evidence one should consider it to the extent that it is deemed relevant and trustworthy. Yet relying only on available evidence can prevent us from imagining new and better solutions. Given that we have been using a limited repertoire of solutions to solve social problems that have plagued humankind for millennia, innovation is sorely needed. Data-free evaluation offers an advantage over all other forms of evaluation in this regard because it is a practical application of the scientific imagination.

This leads me to two questions that I am still mulling over. First, can data-free evaluation be rigorous? “Rigorous” is a problematic term, but in some sense I am asking how data-free evaluation can be conducted so as to imbue it with the desirable properties we associate with scientific inquiry. Second, how can we know that it is rigorous? This comes down to consequential validity: whether using the rigorous approach provides some benefit over not using it.

It may be impossible to answer these questions, but I would like to imagine that we can develop more rigorous methods of innovation. Given the mess(es) we are in, we could certainly use them.