
The 2014 Conference of the African Evaluation Association (AfrEA) was just opened. Organizers delayed the start of the opening ceremony, however, as they waited for the arrival of officials from the government of Cameroon. Fifteen minutes. Thirty minutes. An hour. More.

This may sound like a problem, but it wasn’t—the unofficial conference had already begun. Participants from around the world were mixing, laughing, and learning. I met evaluators from Kenya, South Africa, Sri Lanka, Europe, and America. I learned about health programs, education systems, evaluation use in government, and the development of evaluation as a profession across the continent. It was a truly delightful delay.

And it reflects the mindset I am finding here—a strong belief that commitment and community can overcome circumstance.

: : : : : : : : : : : :

During the opening ceremony, the former president of AGRA, Dr. Namanga Ngongi, stated that one of the greatest challenges facing development programs is finding enough qualified evaluators—those who not only have technical skills, but also the ability to help organizations increase their impact.

Where will these much-needed evaluators come from?

Historically, many evaluators have come from outside of Africa. The current push for made-in-Africa evaluations promises to change that by training more African evaluators.

Evaluators are trained in many ways, chief among them university programs, professional mentoring, practical experience, and ongoing professional development. The CLEAR initiative—Centers for Learning on Evaluation and Results—is a new approach. With centers in Anglophone and Francophone Africa, CLEAR has set out to strengthen monitoring, evaluation, performance management, and evaluation use at the country level.

While much of CLEAR’s work is face-to-face, a great many organizations have made training material available on the web. One can now piece together free resources online—webinars, documents, videos, correspondence, and even one-on-one meetings with experts—that can result in highly contextualized learning. This is what many of the African evaluators I have met are telling me they are doing.

What’s next? Perhaps consolidators who organize online and in-person content into high-quality curricula that are convenient, coherent, and comprehensive.

: : : : : : : : : : : :

Although the supply of evaluators may be limited in many parts of Africa, the demand for evaluation continues to increase. The history of evaluation in the US, Canada, and Europe suggests that demand grows when evaluation is required as a condition of funding or by law. From what I have seen, it appears that history is repeating itself in Africa. In large part this is due to the tremendous influence that funders from outside of Africa have.

An important exception is South Africa, where the government and evaluators work cooperatively to produce and use evaluations. I hope to learn more about this in the days to come.

“Tell me again why you are going to Cameroon?” my wife asked. I paused, searching for an answer. New business? Not really, although that is always welcome. Old connections? I have very few among those currently working in Africa. What should I say? How could I explain?

I decided to confess.

“Because I am curious. There is something exciting going on across Africa. The African Evaluation Association—AfrEA—is playing a critical role. I want to learn more about it. Support it. Maybe be a part of it.”

She found that perfectly reasonable. I suppose that is why I married her.

Then she asked more questions about the conference and how my work might be useful to practitioners in that part of the world. As it turns out, she was curious, too. I believe many are, especially evaluation practitioners.

It takes a certain irrational obsessiveness, however, to fly 32 hours because you are curious.

For those not yet prepared to follow their curiosity to such lengths, I will be blogging about the AfrEA Conference over the next week.

You can find guest posts about the previous AfrEA conference in Ghana two years ago here, here, here, and here.

Are you suffering from “post-parting depression” now that the conference of the American Evaluation Association has ended? Maybe this will help–a sampling of the professionals who attended the conference, along with their thoughts on the experience. Special thanks to Anna Fagergren who collected most of these photos and quotes.

Stefany Tobel Ramos, City Year

This is my first time here and I really enjoyed the professional development workshop Evaluation-Specific Methodology. I learned a lot and have new ideas about how to get a sense of students as a whole.

Jonathan Karanja, Independent Consultant with Nielsen, Kenya

This is my first time here and Nielsen is trying to get into the evaluation space, because that is what our clients want. The conference is a little overwhelming but I have a strategy – go to the not technically demanding, easy-to-digest sessions. Baby steps. I want to ensure that our company learns to not just apply market research techniques but to actually do evaluation.

George Julnes, University of Baltimore

When I attend AEA, I get to present to enthusiastic groups of evaluation professionals. It makes me feel like a rock star for a week. Then I go home and do the dishes.

Linda Pursley, Lesley University

I’m returning to the conference after some years away—it’s great to renew contact with acquaintances and colleagues. I am struck by the conference’s growth and the huge diversity of TIGs (topical interest groups), and I’m finding a lot of sessions of interest.

Pieta Blakely, Commonwealth Corporation

It’s my first time here and it’s a little overwhelming. I’m getting to know what I don’t know. But it’s also really exciting to see people working on youth engagement because I’m really interested in that.

Linda Stern, National Democratic Institute

I’ve been coming for many years, and I really like the two professional development workshops I took—Sampling and Empowerment Evaluation Strategies—and how they helped guide my way through the greater conference program.

Carsten Strømbæk Pedersen, National Board of Social Services, Denmark

John, I really like your blog. You have…how do you say it in English?…a twisted mind. I really like that.

Aske Graulund, National Board of Social Services, Denmark

Nina Middelboe, Oxford Research AS, Denmark

[nods of agreement]

No greater compliment, Carsten! And my compliments to all 3,500 professionals who participated in the conference.

Recursion is when your local bookstore opens a café inside the store in order to attract more readers, and then the café opens a bookstore inside itself to attract more coffee drinkers.

Chris Lysy at Freshspectrum.com noticed, laughed at, and illustrated (above) the same phenomenon as it relates to my blogging (or rather lack of it) during the American Evaluation Association Conference last week.

I intended to harness the power of recursion by blogging about blogging at the conference. I reckoned that would nudge a few others to blog at the conference, which in turn would nudge me to do the same.

I ended up blogging very little during those hectic days, and none of it was about blogging at the conference. Giving up on that idea, I landed on blogging about not blogging, then not blogging about not blogging, then blogging about not blogging about not blogging, and so on.

Once Chris opened my eyes to the recursive nature of recursion, I noticed it all around me at the conference.

For example, the Research on Evaluation TIG (Topical Interest Group) discussed using evaluation methods to evaluate how we evaluate. Is that merely academic navel gazing? It isn’t. I would argue that it may be the most important area of evaluation today.

As practitioners, we conduct evaluations because we believe they can make a positive impact in the world, and we choose how to evaluate in ways we believe produce the greatest impact. Ironically, we have little evidence upon which to base our choices. We rarely measure our own impact or study how we can best achieve it.

ROE (research on evaluation, for those in the know) is setting that right. And the growing community of ROE researchers and practitioners is attempting to do so in an organized fashion. I find it quite inspiring.

A great example of ROE and the power of recursion is the work of Tom Cook and his colleagues (chief among them Will Shadish). I must confess that Tom is a hero of mine. He is a wonderful person who finds tremendous joy in his work and shares that joy with others. So I can’t help but smile every time I think of him using experimental and quasi-experimental methods to evaluate experimental and quasi-experimental methods.

Experiments and quasi-experiments follow the same general logic. Create two (or more) comparable groups of people (or whatever may be of interest). Provide one experience to one group and a different experience to the other. Measure outcomes of interest for the two groups at the end of their experiences. Given that, differences in outcomes between the groups are attributable to differences in the experiences of the groups.

If one group received a program and the other did not, you have a very strong method for estimating program impacts. If one group received a program designed one way and the other a program designed another way, you have a strong basis for choosing between program designs.

Experiments and quasi-experiments differ principally in how they create comparable groups. Experiments assign people to groups at random. In essence, names are pulled from a hat (in reality, computers select names at random from a list). This yields two highly comparable but artificially constructed groups.

Quasi-experiments typically operate by allowing people to choose experiences as they do in everyday life. This yields naturally constructed groups that are less comparable. Why are they less comparable? The groups are comprised of people who made different choices, and these choices may be associated with other factors that affect outcomes. The good news is that the groups can be made more comparable–to some degree–by using a variety of statistical methods.

Is one approach better than another? At the AEA Conference, Tom described his involvement with efforts to answer that question. One way that is done is by randomly assigning people to one of two groups–a group that will be part of an experiment and a group that will be part of a quasi-experiment (referred to as an observational study in the picture above). Within the experimental group, participants are randomly assigned to either a treatment group (e.g., math training) or control group (vocabulary training). Within the quasi-experimental group, participants choose between the same two experiences, forming treatment and comparison groups according to their preference.

Program impact estimates are compared for the experimental and quasi-experimental groups. Differences at this level are attributable to the evaluation method and can indicate whether one method is biased with respect to the other. So far, there seems to be pretty good agreement between the methods (when implemented well–no small achievement), but much work remains to be done.
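The logic of this side-by-side comparison can be sketched with a toy simulation. Everything below is invented for illustration (the effect size, the outcome model, the selection rule); it is not Cook’s actual design, only the selection bias that design is meant to expose:

```python
import random
from statistics import fmean

random.seed(42)

TRUE_EFFECT = 5.0  # hypothetical benefit of the "math training" condition

def outcome(ability, treated):
    # Outcome rises with ability; treatment adds TRUE_EFFECT; noise is gaussian.
    return 10 * ability + (TRUE_EFFECT if treated else 0.0) + random.gauss(0, 1)

# Experimental arm: a coin flip assigns each person, so groups are comparable.
exp_treat, exp_ctrl = [], []
for _ in range(10_000):
    ability = random.random()
    treated = random.random() < 0.5
    (exp_treat if treated else exp_ctrl).append(outcome(ability, treated))

# Quasi-experimental arm: people choose for themselves, and here the choice
# is correlated with ability -- the selection problem described above.
quasi_treat, quasi_ctrl = [], []
for _ in range(10_000):
    ability = random.random()
    treated = ability > 0.5
    (quasi_treat if treated else quasi_ctrl).append(outcome(ability, treated))

exp_estimate = fmean(exp_treat) - fmean(exp_ctrl)        # near the true effect
quasi_estimate = fmean(quasi_treat) - fmean(quasi_ctrl)  # inflated by selection

print(f"true effect: {TRUE_EFFECT:.2f}")
print(f"experimental estimate: {exp_estimate:.2f}")
print(f"quasi-experimental estimate (unadjusted): {quasi_estimate:.2f}")
```

In this toy setup the unadjusted quasi-experimental estimate roughly doubles the true effect; the statistical adjustments mentioned above try to close exactly that gap.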

Perhaps the most important form of recursion at the AEA Conference is membership. AEA is comprised of members who manage themselves by forming groups of members who manage themselves by forming groups of members who manage themselves. The board of AEA, TIGs, local affiliates, task forces, working groups, volunteer committees, and conference sessions are all organized by and comprised of groups of members who manage themselves. That is the power of recursion–3,500 strangers coming together to create a community dedicated to making the world a better place. And what a joy to watch them pull it off.

Rodney Hopson, Professor, George Mason University (Past President of AEA)

I’m plotting. I’m always plotting. That’s how you make change in the world. You find the opportunities, great people to work with, and make things happen.

Tina Christie, Professor, UCLA

I’ve just finished three years on the AEA board with Rodney. The chance to connect with colleagues like Rodney–work with them, debate with them, laugh with them–is something I look forward to each year. It quickly starts to feel like family.

It’s true—I am addicted to conferences. While I read about evaluation, write about evaluation, and do evaluations in my day-to-day professional life, it’s not enough. To truly connect to the field and its swelling ranks of practitioners, researchers, and supporters, I need to attend conferences. Compulsively. Enthusiastically. Constantly.

Stop me if you’ve heard this one before. An evaluator uses data to assess the effectiveness of a program, arrives at a well-reasoned but disappointing conclusion, and finds that the conclusion is not embraced—perhaps ignored or even rejected—by those with a stake in the program.

People—even evaluators—have difficulty accepting new information if it contradicts their beliefs, desires, or interests. It’s unavoidable. When faced with empirical evidence, however, most people will open their minds. At least that has been my experience.

During the presidential election, reluctance to embrace empirical evidence was virtually universal. I began to wonder—had we entered the post-data age?

During the election season, I suspect that we engaged in more denial and distortion of data than at any time in human history.

The election was a particularly bad time for data and the people who love them—but there was a bright spot.

On election day I boarded a plane for London (after voting, of course). Although I had no access to news reports during the flight, I already knew the result—President Obama had about an 84% chance of winning reelection. When I stepped off the plane, I learned he had indeed won. No surprise.

How could I be so certain of the result when the election was hailed as too close to call? I read the FiveThirtyEight blog, that’s how. By using data—every available, well-implemented poll—and a strong statistical model, Nate Silver was able to produce a highly credible estimate of the likelihood that one or the other candidate would win.

Most importantly, the estimate did not depend on the analysts’—or anyone’s—desires regarding the outcome of the election.
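A stripped-down version of the idea is easy to sketch. This is not Silver’s model (which accounts for house effects, correlations between polls, and the electoral college); it is only a precision-weighted average of invented poll numbers under a simple normal sampling assumption:

```python
from math import erf, sqrt

# Hypothetical polls: (candidate's share in the poll, sample size).
polls = [(0.51, 800), (0.52, 1200), (0.49, 600), (0.515, 1000)]

# Weight each poll by its precision: 1 / sampling variance of its estimate.
weights, shares = [], []
for share, n in polls:
    var = share * (1 - share) / n
    weights.append(1 / var)
    shares.append(share)

avg = sum(w * s for w, s in zip(weights, shares)) / sum(weights)
se = sqrt(1 / sum(weights))

# Probability the true share exceeds 50%, assuming a normal sampling model.
z = (avg - 0.50) / se
p_win = 0.5 * (1 + erf(z / sqrt(2)))
print(f"weighted average share: {avg:.3f}")
print(f"probability above 50%: {p_win:.2f}")
```

Even this crude aggregate shows the key point: pooling all the polls yields a far steadier signal than debating any single poll in isolation.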

Although this first-rate work was available to all, television and print news was dominated by unsophisticated analysis of poll data. How often were the results of an individual poll—one data point—presented in a provocative way and its implications debated for as long as breath and column inches could sustain?

Isn’t this the way that we interpret evaluations?

News agencies were looking for the story. The advocates for each candidate were telling their stories. Nothing wrong with that. But when stories shape the particular bits of data that are presented to the public, rather than all of the data being used to shape the story, I fear that the post-data age is already upon us.

Are evaluators expected to do the same when they are asked to tell a program’s story?

Did you miss the Catapult Labs conference on May 19? Then you missed something extraordinary.

But don’t worry, you can get the recap here.

The event was sponsored by Catapult Design, a nonprofit firm in San Francisco that uses the process and products of design to alleviate poverty in marginalized communities. Their work spans the worlds of development, mechanical engineering, ethnography, product design, and evaluation.

That is really, really cool.

I find them remarkable and their approach refreshing. Even more so because they are not alone. The conference was very well attended by diverse professionals—from government, the nonprofit sector, the for-profit sector, and design—all doing similar work.

The day was divided into three sets of three concurrent sessions, each presented as hands-on labs. So, sadly, I could attend only one third of what was on offer. My apologies to those who presented and are not included here.

I started the day by attending Democratizing Design: Co-creating With Your Users presented by Catapult’s Heather Fleming. It provided an overview of techniques designers use to include stakeholders in the design process.

Evaluators go to great lengths to include stakeholders. We have broad, well-established approaches such as empowerment evaluation and participatory evaluation. But the techniques designers use are largely unknown to evaluators. I believe there is a great deal we can learn from designers in this area.

An example is games. Heather organized a game in which we used beans as money. Players chose which crops to plant, each with its own associated cost, risk profile, and potential return. The expected payoff varied by gender, which was arbitrarily assigned to players. After a few rounds the problem was clear—higher costs, lower returns, and greater risks for women increased their chances of financial ruin, and this had negative consequences for communities.

I believe that evaluators could put games to good use. Describing a social problem as a game requires stakeholders to express their cause-and-effect assumptions about the problem. Playing with a group allows others to understand those assumptions intimately, comment upon them, and offer suggestions about how to solve the problem within the rules of the game (or perhaps change the rules to make the problem solvable).

I have never met a group of people who were more sincere in their pursuit of positive change. And honest in their struggle to evaluate their impact. I believe that impact evaluation is an area where evaluators have something valuable to share with designers.

That was the purpose of my workshop Measuring Social Impact: How to Integrate Evaluation & Design. I presented a number of techniques and tools we use at Gargani + Company to design and evaluate programs. They are part of a more comprehensive program design approach that Stewart Donaldson and I will be sharing this summer and fall in workshops and publications (details to follow).

The hands-on format of the lab made for a great experience. I was able to watch participants work through the real-world design problems that I posed. And I was encouraged by how quickly they were able to use the tools and techniques I presented to find creative solutions.

That made my task of providing feedback on their designs a joy. We shared a common conceptual framework and were able to speak a common language. Given the abstract nature of social impact, I was very impressed with that—and their designs—after less than 90 minutes of interaction.

I wrapped up the conference by attending Three Cups, Rosa Parks, and the Polar Bear: Telling Stories that Work presented by Melanie Moore Kubo and Michaela Leslie-Rule from See Change. They use stories as a vehicle for conducting (primarily) qualitative evaluations. They call it story science. A nifty idea.

I liked this session for two reasons. First, Melanie and Michaela are expressive storytellers, so it was great fun listening to them speak. Second, they posed a simple question—Is this story true?—that turns out to be amazingly complex.

We summarize, simplify, and translate meaning all the time. Those of us who undertake (primarily) quantitative evaluations agonize over this because our standards for interpreting evidence are relatively clear but our standards for judging the quality of evidence are not.

For example, imagine that we perform a t-test to estimate a program’s impact. The t-test indicates that the impact is positive, meaningfully large, and statistically significant. We know how to interpret this result and what story we should tell—there is strong evidence that the program is effective.

But what if the outcome measure was not well aligned with the program’s activities? Or there were many cases with missing data? Would our story still be true? There is little consensus on where to draw the line between truth and fiction when quantitative evidence is flawed.
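A minimal sketch of that scenario, with made-up scores, writes out a Welch two-sample t statistic using only the standard library so the moving parts are visible (scipy’s `ttest_ind` would do the same job):

```python
from statistics import mean, variance
from math import sqrt

# Hypothetical outcome scores for program and comparison groups.
program = [78, 85, 82, 90, 74, 88, 81, 86, 79, 84]
comparison = [70, 75, 72, 68, 77, 71, 74, 69, 73, 76]

def welch_t(a, b):
    # Welch's t statistic: mean difference over its standard error,
    # without assuming the two groups have equal variances.
    va, vb = variance(a) / len(a), variance(b) / len(b)
    return (mean(a) - mean(b)) / sqrt(va + vb)

impact = mean(program) - mean(comparison)
t = welch_t(program, comparison)
print(f"estimated impact: {impact:.1f} points, t = {t:.2f}")
```

A large t statistic, of course, says nothing about whether the outcome measure was well aligned with the program or whether the missing data were handled well; that is exactly what the questions above are probing.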

As Melanie and Michaela pointed out, it is critical that we strive to tell stories that are true, but equally important to understand and communicate our standards for truth. Amen to that.

The icing on the cake was the conference evaluation. Perhaps the best conference evaluation I have come across.

Everyone received four post-it notes, each a different color. As a group, we were given a question to answer on a post-it of a particular color, and only a minute to answer the question. Immediately afterward, the post-its were collected and displayed for all to view, as one would view art in a gallery.

When I first read this I scratched my head. A conference that combined the interests of any two made sense to me. Combining the interests of all three seemed like a stretch. I found—much to my delight—that the conference worked very well because of its two-panel structure.

Panel 1 addressed the social and environmental impact of new ventures; Panel 2 addressed the impact of large, established corporations. This offered an opportunity to compare and contrast new with old, small with large, and risk takers with the risk averse.

Fascinating and enlightening. I explain why after I describe the panels.

Panel 1: Social Entrepreneurship/Innovation

The first panel considered how entrepreneurs and venture capitalists can promote positive environmental and social change.

Andrew D’Souza, Chief Revenue Officer at Top Hat Monocle, discussed how his company developed web-based clickers for classrooms and online homework tools that are designed to promote learning—a social benefit that can be directly monetized.

Mike Young, Director of Technology Development at Innova Dynamics, described how his company’s social mission drives their development and commercialization of “disruptive advanced materials technologies for a sustainable future.”

Amy Errett, Partner at the venture capital firm Maveron, emphasized the firm’s belief that businesses focusing on a social mission tend to achieve financial success.

Paul Dillinger, Senior Director-Global Design at Levi Strauss & Co., made an excellent presentation on the social and environmental consequences—positive and negative—of the fashion industry, and how the company is working to make a positive impact.

Barbara Kahn moderated. She wins the prize for having the longest title—the Patty & Jay H. Baker Professor, Professor of Marketing; Director, Jay H. Baker Retailing Center—and from what I could tell, she deserves every bit of the title.

Measuring Social Impact

I was thrilled to find corporations, new and old, concerned with making the world a better place. Business in general, and Wharton in particular, have certainly changed in the 20 years since I earned my MBA.

The unifying theme of the panels was impact. Inevitably, that discussion turned from how corporations were working to make social and environmental impacts to how they were measuring impacts. When it did, the word evaluation was largely absent, being replaced by metrics, measures, assessments, and indicators. Evaluation, as a field and a discipline, appears to be largely unknown to the corporate world.

Echoing what I heard at the Harvard Social Enterprise Conference (day 1 and day 2), impact measurement was characterized as nascent, difficult, and elusive. Everyone wants to do it; no one knows how.

I find this perplexing. Is the innovation, operational efficiency, and entrepreneurial spirit of American corporations insufficient to crack the nut of impact measurement?

Without a doubt, measuring impact is difficult—but not for the reasons one might expect. Perhaps the greatest challenge is defining what one means by impact. This venerable concept has become a buzzword, signifying both more and less than it should for different people in different settings. Clarifying what we mean simplifies the task of measurement considerably. In this setting, two meanings dominated the discussion.

One was the intended benefit of a product or service. Top Hat Monocle’s products are intended to increase learning. Annie’s foods are intended to promote health. Evaluators are familiar with this type of impact and how to measure it. Difficult? Yes. It poses practical and technical challenges, to be sure. Nascent and elusive? No. Evaluators have a wide range of tools and techniques that we use regularly to estimate impacts of this type.

The other dominant meaning was the consequences of operations. Evaluators are probably less familiar with this type of impact.

Consider Levi’s. In the past, 42 liters of fresh water were required to produce one pair of Levi’s jeans. According to Paul Dillinger, the company has since produced about 13 million pairs using a more water-efficient process, reducing the total water required for these jeans from roughly 546 million liters to 374 million liters—an estimated savings of 172 million liters.

Is that a lot? The Institute of Medicine estimates that one person requires about 1,000 liters of drinking water per year (2.2 to 3 liters per day, under a variety of assumptions)—so Levi’s saved enough drinking water for about 172,000 people for one year. Not bad.
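The arithmetic behind those figures is simple enough to check directly, using only the numbers reported above (the 1,000-liter figure is the rounded per-person estimate):

```python
liters_per_pair_old = 42            # fresh water per pair, old process
pairs = 13_000_000                  # pairs produced with the new process
old_total = liters_per_pair_old * pairs   # water the old process would need
new_total = 374_000_000                   # reported usage of new process, liters
savings = old_total - new_total           # liters saved

liters_per_person_year = 1_000      # rounded drinking-water need per person
people_year_equivalent = savings // liters_per_person_year
print(f"old: {old_total:,} L, saved: {savings:,} L")
print(f"about {people_year_equivalent:,} person-years of drinking water")
```

The totals line up: 546 million liters under the old process, 172 million liters saved, and roughly 172,000 person-years of drinking water.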

But operational impact is more complex than that. Levi’s still used the equivalent of a year’s drinking water for 374,000 people in places where potable water may be in short supply. The water that was saved cannot be easily moved to where it may be needed more for drinking, irrigation, or sanitation. If the water used in the production of jeans is not handled properly, it may contaminate larger supplies of fresh water, resulting in a net loss of potable water. And the availability of more fresh water in a region can change behavior in ways that negate the savings, such as attracting new industries that depend on water or inducing wasteful water consumption practices.

Is it difficult to measure operational impact? Yes. Even estimating something as tangible as water use is challenging. Elusive? No. We can produce impact estimates, although they may be rough. Nascent? Yes and no. Measuring operational impact depends on modeling systems, testing assumptions, and gauging human behavior. Evaluators have a long history of doing these things, although not in combination for the purpose of measuring operational impact.

It seems to me that evaluators and corporations could learn a great deal from each other. It is a shame these two worlds are so widely separated.

Designing Corporate Social Responsibility Programs

With all the attention given to estimating the value of corporate social responsibility programs, the values underlying them were not fully explored. Yet the varied and often conflicting values of shareholders and stakeholders pose the most significant challenge facing those designing these programs.

Why do I say that? Because it has been that way for over 100 years.

The concept of corporate social responsibility has deep roots. In 1909, William Tolman wrote about a trend he observed in manufacturing. Many industrialists, by his estimation, were taking steps to improve the working conditions, pay, health, and communities of their employees. He noted that these unprompted actions had various motives—a feeling that workers were owed the improvements, unqualified altruism, or the belief that the efforts would lead to greater profits.

Tolman placed a great deal of faith in the last motive. Too much faith. Twentieth-century industrial development was not characterized by rational, profit-maximizing companies competing to improve the lot of stakeholders in order to increase the wealth of shareholders. On the contrary, making the world a better place typically entailed tradeoffs that shareholders found unacceptable.

So these early efforts failed. The primary reason was that their designs did not align the values of shareholders and stakeholders.

Can the values of shareholders and stakeholders be more closely aligned today? I believe they can be. The founders of many new ventures, like Top Hat Monocle and Innova Dynamics, bring different values to their enterprises. For them, Tolman’s nobler motives—believing that people deserve a better life and a desire to do something decent in the world—are the cornerstones of their company cultures. Even in more established organizations—Safeway and Levi’s—there appears to be a cultural shift taking place. And many venture capital firms are willing to take a patient capital approach, waiting longer and accepting lower returns, if it means they can promote a greater social good.

This is change for the better. But I wonder if we, like Tolman, are putting too much faith in win-win scenarios in which we imagine shareholders profit and stakeholders benefit.

It is tempting to conclude that corporate social responsibility programs are win-win. The most visible examples, like those presented at this conference, are. What lies outside of our field of view, however, are the majority of rational, profit-seeking corporations that are not adopting similar programs. Are we to conclude that these enterprises are not as rational as they should be? Or have we yet to design corporate responsibility programs that resolve the shareholder-stakeholder tradeoffs that most companies face?

Again, there seems to be a great deal that program designers, who are experienced at balancing competing values, and corporations can learn from each other…if only the two worlds met.

Jargon is the name we give to big labels placed on little ideas. What should we call little labels placed on big ideas? Jongar, of course.

A good example of jongar in evaluation is the term mixed methods. I run hot and cold for mixed methods. I praise them in one breath and question them in the next, confusing those around me.

Why? Because mixed methods is jongar.

Recently, I received a number of comments through LinkedIn about my last post. A bewildered reader asked how I could write that almost every evaluation can claim to use a mixed-methods approach. It’s true, I believe that almost every evaluation can claim to be a mixed-methods evaluation, but I don’t believe that many—perhaps most—should.

Why? Because mixed methods is also jargon.

Confused? So were Abbas Tashakkori and John Creswell. In 2007, they put together a very nice editorial for the first issue of the Journal of Mixed Methods Research. In it, they discussed the difficulty they faced as editors who needed to define the term mixed methods. They wrote:

…we found it necessary to distinguish between mixed methods as a collection and analysis of two types of data (qualitative and quantitative) and mixed methods as the integration of two approaches to research (quantitative and qualitative).

By the first definition, mixed methods is jargon—almost every evaluation uses more than one type of data, so the definition attaches a special label to a trivial idea. This is the view that I expressed in my previous post.

By the second definition, which is closer to my own perspective, mixed methods is jongar—two simple words struggling to convey a complex concept.

My interpretation of the second definition is as follows:

A mixed-methods evaluation is one that establishes in advance a design that explicitly lays out a thoughtful, strategic integration of qualitative and quantitative methods to accomplish a critical purpose that either qualitative or quantitative methods alone could not.

Although I like this interpretation, it places a burden on the adjective mixed that it cannot support. In doing so, my interpretation trades one old problem—being able to distinguish mixed methods evaluations from other types of evaluation—for a number of new problems. Here are three of them:

Evaluators often amend their evaluation designs in response to unanticipated or dynamic circumstances—so what does it mean to establish a design in advance?

Integration is more than having quantitative and qualitative components in a study design—how much more and in what ways?

A mixed-methods design should be introduced when it provides a benefit that would not be realized otherwise—how do we establish the counterfactual?

These complex ideas are lurking behind simple words. That’s why the words are jongar and why the ideas they represent may be ignored.

Technical terms—especially jargon and jongar—can also be code. Code is the use of technical terms in real-world settings to convey a subtle, non-technical message, especially a controversial message.

For example, I have found that in practice funders and clients often propose mixed methods evaluations to signal—in code—that they seek an ideological compromise between qualitative and quantitative perspectives. This is common when program insiders put greater faith in qualitative methods and outsiders put greater faith in quantitative methods.

When this is the case, I believe that mixed methods provide an illusory compromise between imagined perspectives.

The compromise is illusory because mixed methods are not a middle ground between qualitative and quantitative methods, but a new method that emerges from the integration of the two. At least by the second definition of mixed methods that I prefer.

The perspectives are imagined because they concern how results based on particular methods may be incorrectly perceived or improperly used by others in the future. Rather than leap to a mixed-methods design, evaluators should discuss these imagined concerns with stakeholders in advance to determine how to best accommodate them—with or without mixed methods. In many funder-grantee-evaluator relationships, however, this sort of open dialogue may not be possible.

This is why I run hot and cold for mixed methods. I value them. I use them. Yet, I remain wary of labeling my work as such because the label can be…

jargon, in which case it communicates nothing;

jongar, in which case it cannot communicate enough; or

code, in which case it attempts to communicate through subtlety what should be communicated through open dialogue.