Government Evaluation

Happy Labor Day Week! I am Calista H. Smith, President of C H Smith & Associates, a project management consulting and evaluation firm in Ohio. C H Smith & Associates has done multiple evaluation projects for the Ohio Department of Education and designed evaluations for other clients related to public policy. In this work, it has been important to understand policymakers and the legislative decision-making process.

Lessons Learned:

Legislative processes may influence your evaluation design and timeline. Publicly sponsored projects may have reporting deadlines written into legislation or their funding streams may be subject to annual budgeting reviews. Projects sponsored by private philanthropy may also be influenced by the legislative cycle as findings may be helpful to craft or change public policy.

Policymakers may get data and information from a variety of sources. It was common for a policymaker to have visited a program site or talked extensively with program champions. Program critics may also be vocal to policymakers. External criticism may be based on program perceptions (rooted in experiences or in ideology), or a sense of competition for resources. Your evaluation data will need to be clear and easily accessible to cut through what may be noise.

You may need multiple reports of the same analysis. For one evaluation, we produced a one-pager of highlights for quick reference by high-level administrators and officials, a six-page summary of lessons to insert in a public annual report, and a full technical report with a more detailed explanation of methodology and data for staffers and stakeholders.

Hot Tips (or Cool Tricks):

Spend time refining research questions related to what legislative decision-makers want to or should know regarding the project and related policies.

Regardless of the scope of your program evaluation, identify what policies and funding streams impact the program. This understanding helps you to gain clarity on who the stakeholders are and their interests and constraints.

In your evaluation design, consider legislative timelines. Think about what data you may be able to reasonably collect, analyze, and report to provide insights to legislators in line with the legislative decision-making process.

Encourage your client to think independently from your evaluation about courses of productive action they may take if findings are less favorable than expected. Consider building in extra review time for analysis so that the client can process data and determine how to make lessons actionable or identify questions that may emerge from policymakers about the results or the evaluation approach.

Rad Resources:

The National Conference of State Legislatures has a program evaluation society for its state policy staff members. It is helpful to see what materials policy staff members may reference when they would like to implement or review an evaluation.

You may map out stakeholder interests, including policymakers’ interests, in your evaluations using a “power/interest matrix.”
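A power/interest matrix can be sketched in a few lines of code. The stakeholder names, scores, and threshold below are purely illustrative assumptions, not drawn from any actual evaluation:

```python
# Hypothetical sketch of a power/interest grid for stakeholder mapping.
# Scores and names are illustrative only.

def quadrant(power, interest, threshold=0.5):
    """Return the classic power/interest grid quadrant for 0-1 scores."""
    if power >= threshold and interest >= threshold:
        return "Manage closely"
    if power >= threshold:
        return "Keep satisfied"
    if interest >= threshold:
        return "Keep informed"
    return "Monitor"

# (power, interest) ratings a team might assign during a planning session
stakeholders = {
    "Legislative sponsor": (0.9, 0.8),
    "Program staff":       (0.3, 0.9),
    "Budget office":       (0.8, 0.3),
    "General public":      (0.2, 0.2),
}

for name, (power, interest) in stakeholders.items():
    print(f"{name}: {quadrant(power, interest)}")
```

The value is less in the code than in the conversation it forces: the team must agree on who holds power over the evaluation and who cares about its results.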

The American Evaluation Association is celebrating Labor Day Week in Evaluation: Honoring the WORK of evaluation. The contributions this week are tributes to the behind-the-scenes and often underappreciated work evaluators do. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Greetings and welcome from the Disabilities and Underrepresented Populations TIG week. We are June Gothberg, Chair, and Caitlyn Bukaty, Program Chair. This week we have a strong lineup of great resources, tips, and lessons learned for engaging typically underrepresented populations in evaluation efforts.

You might have noticed that we changed our name from Disabilities and Other Vulnerable Populations to Disabilities and Underrepresented Populations, and you may be wondering why. It came to our attention during 2016 that several of our members felt our previous name was inappropriate and had the potential to be offensive. Historically, a little under 50% of our TIG’s presentations have represented people with disabilities; the rest address a diverse range of groups, from migrants to teen parents. The following Wordle shows the categories represented in our TIG’s presentations.

Categories represented by the Disabilities and Underrepresented Populations presentations from 1989-2016

TIG members felt that the use of vulnerable in our name set up a negative and in some cases offensive label to the populations we represent. Thus, after discussion, communications, and coming to consensus we proposed to the AEA board that our name be changed to Disabilities and Underrepresented Populations.

Lessons Learned:

Words are important! Labels are even more important!

Words can hurt or empower; it’s up to you.

Language affects attitudes and attitudes affect actions.

Hot Tips:

If we are to be effective evaluators, we need to pay attention to the words we use in written and verbal communication.

Always put people first, labels last: for example, a student with a disability, a man with autism, a woman with dyslexia.

The nearly yearlong name-change process reminded us of the lengthy campaign to rid federal policy and documents of the R-word. If you happened to miss the Spread the Word to End the Word campaign, there are several great videos and other resources at r-word.org.

Bill S. 2781, signed into federal law as Rosa’s Law, takes its name and inspiration from 9-year-old Rosa Marcellino. It removes the terms “mental retardation” and “mentally retarded” from federal health, education, and labor policy and replaces them with the people-first language “individual with an intellectual disability” and “intellectual disability.” The signing of Rosa’s Law was a significant milestone in establishing dignity, inclusion, and respect for all people with intellectual disabilities.

We are Wanda Casillas and Heather Evanson, and we are part of Deloitte Consulting LLP’s Program Evaluation Center of Excellence (PE CoE). Many of our team members and colleagues are privileged to work with a variety of federal agencies on program evaluation and performance measurement and, throughout this week, will share some of their lessons learned and ideas about potential opportunities to help federal agencies expand the value of evaluations.

This week members of our team will share lessons learned about working remotely on federal evaluations, the use of qualitative methods in federal programs that don’t always appreciate the value of mixed methods, the potential for federal programs to be more “selfish” in program planning, the value of conducting evaluation and performance measurement for federal programs, and making the most out of data commonly collected in federal programs. In the coming weeks, readers will find an additional article on scaling up federal evaluations.

Lesson Learned: Many federal clients use performance measurement, monitoring, evaluation, assessment, and other similar terms interchangeably; however, evaluators and clients don’t always have the same definitions, and therefore expectations, in mind for what these terms mean. It’s important to learn as much as possible about your federal client’s experiences and history with evaluation through research and conversations with relevant stakeholders in order to make sure you can deliver on a given agency’s needs.

Lesson Learned: Clients sometimes see evaluation or performance measurement as a requirement rather than an opportunity to understand how to improve upon or expand an existing program. As evaluation consultants, we sometimes have to work with clients to help them understand how evaluation can benefit them even after responding to a request for proposals.

The American Evaluation Association is celebrating Deloitte Consulting LLP’s Program Evaluation Center of Excellence (PE CoE) week. The contributions all this week to aea365 come from PE CoE team members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

We’re Sarah Brewer, Elise Garvey, Ted Kniker, and Krystal Tomlin, the current leadership of AEA’s Government Evaluation Topical Interest Group (TIG). To finish out Government Evaluation Week on AEA365, we decided to offer a glimpse into the future of government evaluation.

Since 2015 marked the 25th anniversary of the Government Evaluation TIG, we wanted to forecast into the next 25 years, so we sponsored a Birds of a Feather session at AEA 2015 on predicting the future of evaluation and identifying innovations we might make. Using an abbreviated scenario-planning exercise, we set the context that scenario planning is about “stories” that illuminate the drivers of change. We asked the group to brainstorm about what government evaluation could look like in 25 years: What innovations will emerge? What are the drivers of change in government evaluation? What is the future they imagine? A very positive shared vision emerged.

Increased use of open data and crowd sourcing for data to support evaluation. Government evaluation can lead the way to democratize data to understand how interventions succeed and can be used by more people.

Diffusion of Evaluation capability to more government personnel – not concentrated in one Performance/Evaluation office. Organizational capacity building, organizational learning, and teaching of evaluation.

Data and Evaluations are integrated across levels of government and across agency. More collaboration and networking of evaluation.

The US would have a federal evaluation policy and/or more evaluations would be written into program authorizing legislation. AEA taking the lead.

Improved technology for the capture, structuring, and analysis of qualitative data (e.g., voice recording). How can we take what’s been learned from shared, portable music and apply it to data collection, analysis, and reporting?

Increased demand for evaluation capacity at all levels of government, especially at the county and city level. The more we innovate on the first five ideas, the more we can influence this one. The demand will increase.

Get Involved: The Government Evaluation TIG is taking these ideas, cross-walking them to our strategic planning goals to turn these possibilities into probabilities. Join us!

The American Evaluation Association is celebrating Gov’t Eval TIG Week with our colleagues in the Government Evaluation Topical Interest Group. The contributions all this week to aea365 come from our Gov’t Eval TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hello Everyone! I’m Ted Kniker, Senior Vice President of Enlighteneering, and Chair of AEA’s Government Evaluation Topical Interest Group (TIG). During our 25th anniversary year, the TIG sponsored a lively reflection on What is “Government” Evaluation from a multi-cultural perspective? The term “government evaluation” can mean so many different things.

Lessons Learned:

For example, does it mean federal, state, or local? The TIG was originally started 25 years ago as a State and Local Government group, and it has expanded to include evaluators from the federal government, evaluation contractors, nonprofit evaluators affected by government policies and practices, as well as managers in various organizations responsible for issues of organizational performance.

Does it mean funded, sponsored, or conducted? The think tank session attendees agreed that a government evaluation focuses on a program that is either funded by or administered by a public sector entity. However, we struggled with whether a definition like that is still too limiting or even needed. When the ideas of policy and usage are introduced, government evaluation quickly includes a much larger universe of projects and evaluators.

What does it mean internationally? As part of the discussion we learned from our friends from Japan that government evaluation means evaluating the government, and looking particularly for its inefficiencies. While many of us see government as context, others define it as the evaluand. We were reminded of the broadness of the term.

What does the definition mean for the populations being evaluated? Does it carry connotations that affect credibility, validity, and participation? The group agreed that government evaluation requires the same standards of excellence in practice as any evaluation. But one population that seems to go unexamined is ourselves. A question that generated a lot of reflection was: when we conduct an evaluation in a government context, do we consider ourselves government evaluators? While members of other methodological and contextual groupings often refer to themselves in those terms (e.g., qualitative evaluation has qualitative evaluators), why not government?

Lesson Learned: Government evaluation is inclusive. The attendees agreed that evaluators may have very narrow definitions of what government evaluation is and whether it applies to them, but that in reality it is far more expansive, has greater reach, and can include multiple contexts, evaluands, and methodologies. Far more evaluations can influence or be influenced by the government evaluation context. Therefore, government evaluation is a larger contextual group than might initially be thought. Have you worked in a government evaluation context but haven’t participated in the Government Evaluation TIG or attended its sponsored sessions? If so, we’d like to hear from you, or better yet, come join us! Here is our LinkedIn link: https://www.linkedin.com/grps/AEA-Government-Evaluation-TIG-6945047/about


My name is Lauren Supplee and I work in the Office of Planning, Research and Evaluation at the Administration for Children and Families. Recent media and academic attention to transparency, replication, trust in science, and the lack of replication of findings in medical research and psychology raises issues for evaluation, as seen in articles in Nature Medicine and The Guardian. While evaluators can debate the concept of replication, one of its core issues is trust in the evidence evaluation generates as a condition of whether it is used in policy or practice. As an evaluator, I know that the perceived utility of my work to policy and practice is only as strong as the user’s trust in my findings.

While the evaluation field can’t address all of the aspects involved in the public’s trust in research and evaluation, we can proactively address building confidence and trust in design, analysis and interpretation of findings.

Hot Tips: Registering studies: A colleague and I recently wrote a commentary on the Society for Prevention Research’s revised evidence standards for prevention science. In the commentary, we noted our disappointment that the new standards did not take transparency and trust head-on. We stated that the field needs to seriously consider engaging in practices such as pre-registering studies, pre-specifying analytic plans, and sharing data with other evaluators to allow for replication of findings by independent analysts. There are multiple registries, including the Open Science Framework, which allows for publicly sharing multiple aspects of project design and analysis; and for clinical trials, new registries have been created by the American Economic Association, the Registry of Clinical Trials on What Works Clearinghouse, and clinicaltrials.gov.

Issues related to analysis: While pre-registering analysis plans may not always be appropriate for every study, the lack of adjustment for multiple comparisons, or of pre-specification of primary versus secondary outcome variables, does not increase the public’s and policymakers’ trust in our findings. Another factor in the lack of replication is under-powered studies. A recent article in American Psychologist discusses this aspect and proposes that the field consider statistical techniques such as Bayesian methods.
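As a concrete illustration of what adjusting for multiple comparisons looks like in practice, here is a minimal, standard-library-only sketch of the Holm-Bonferroni step-down procedure; the p-values are hypothetical, and real analyses would typically use an established statistics package rather than hand-rolled code:

```python
# Illustrative sketch (not from the article): Holm-Bonferroni step-down
# adjustment for multiple comparisons, using only the standard library.

def holm_bonferroni(p_values, alpha=0.05):
    """Return a parallel list of booleans: reject H0 after Holm's step-down."""
    m = len(p_values)
    # Sort p-values ascending, remembering their original positions.
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        # Compare the (rank+1)-th smallest p-value against alpha / (m - rank).
        if p_values[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break  # step-down: stop at the first non-rejection
    return reject

# Five hypothetical outcome tests from one study.
ps = [0.001, 0.011, 0.02, 0.04, 0.3]
print(holm_bonferroni(ps))  # only the smallest p-values survive adjustment
```

The point for trust-building is visible in the example: outcomes that look "significant" at an unadjusted 0.05 threshold may no longer be so once the full family of comparisons is accounted for.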

Interpretation of findings: My colleague who does work in tribal communities emphasizes the importance of having the community’s input in the interpretation of findings. In community-based participatory work, the partnership is embedded from the start and can naturally include this step. In some “high-stakes” policy-evaluation, a firewall has been built between the evaluator and the evaluated to gain independence of the findings.

Get Involved: How can we broaden the conversation to the larger community? What other ways can we build trust in evaluation findings, and ensure clear guidance on how to benefit from participant interpretation while still maintaining trust in the findings?


My name is Diana Harbison and I’m the Director of the Program Monitoring and Evaluation Office at the U.S. Trade and Development Agency, which links U.S. businesses to global infrastructure opportunities. According to a recent survey, USTDA has some of the most engaged employees in the United States government. There are countless articles – and entire consulting businesses – built around the concept of “employee engagement,” but I think the reason USTDA is successful is, in part, that our employees are engaged in evaluation.

My office, as well as the rest of the Agency’s staff, collects feedback from our partners – over 2,000 last year – to evaluate the commercial and development results of the activities we have funded. We use this data to inform our daily, project-specific decisions. We also gather as a group once a year to review our results and discuss where we should focus our resources. This allows us to prioritize the countries and sectors where we work, and to identify new approaches for collaborating with our stakeholders – including our most important customers, the American people. We often use data to communicate how our partners have benefited or could benefit from our programs.

We also love to tell stories, like the time a South African pilot stood up and told an audience that she had been unsure about her career path but after participating in an aviation workshop we hosted, knew what she wanted to do next and was excited about the future. Or the time a small business owner told me that his first USTDA contract helped him expand his business in just three years, and he now has hundreds of millions of dollars in business, working with new clients. We have so many stories about our accomplishments that we have begun sharing them publicly on our website as staff commentaries.

My colleagues are committed to our mission and engaged in their work every day. Instead of simply doing what is required, they utilize our results to go beyond and do what is possible. So when I’m asked how USTDA continuously drives performance results and maintains such an engaged staff, I say it’s because everyone values – and evaluates – their work.


We are Kathy Newcomer, Director of the Trachtenberg School of Public Policy and Administration at George Washington University and President-Elect of AEA, and Nick Hart, a PhD candidate at GWU and Board Member of Washington Evaluators. We both have extensive experience working with Federal agencies to implement evaluation and performance measurement initiatives, providing insights about lessons learned over the past 15 years as well as lessons that could have been learned, but were not.

The George W. Bush and Barack Obama Administrations both advocated for the generation and use of evidence to guide and improve government management. The two presidents brought very different experiences, views and advisors to the Federal bureaucracy, yet their management agendas established similar expectations and initiatives. For example, each administration focused both on delivering better results for the American public and improving accountability. But while the Bush evaluation and performance management agenda relied on the use of central oversight offices to establish ambitious goals and to coordinate implementation, the Obama Administration’s approach provided agencies flexibility and focused on decentralized institutionalization.

Lessons Learned: Below, we highlight eight lessons that were learned and/or re-learned in implementing the Bush and Obama initiatives. Each of these lessons can inform future efforts to improve government performance, organizational learning, and accountability.

#1: The role of central oversight offices in the Federal government must be calibrated to meet agency needs, providing sufficient oversight with an appropriate level of ownership among agencies.

#2: Establishing and sustaining an audience for the performance measurement and evaluation initiatives is challenging, but critical.

#4: Development of case studies to highlight success stories can help articulate the usefulness of performance initiatives.

#5: Sufficient evaluation capacity is necessary to support initiatives over the long-term.

#6: Additional emphasis is needed on creating and institutionalizing synergies between performance measurement and evaluation offices and staff within agencies.

#7: Training new political appointees and senior managers about their role in leading evaluation and performance measurement initiatives will help improve the institutional support needed to effectively implement management agendas.

#8: More consultation with intended users of the initiatives’ products will help better align the information provided by agencies to the actual needs of policy-makers.


I am Elise Garvey, Management Auditor with the King County Auditor’s Office in Seattle, Washington, and I serve as co-chair of the Government Evaluation Topical Interest Group (TIG). In 2015, the Government Evaluation TIG is celebrating its 25th anniversary, and there is nothing like an anniversary to motivate a time of reflection and inspire a look to the future. At this year’s conference, the Government TIG hosted a session called “Defining Government Evaluation: What Is ‘Government’ Evaluation from a Multi-Cultural Perspective?” One of our posts later this week will provide a recap of that think tank, but this post is intended to introduce you to a type of government evaluation that could expand your professional network and resources: performance auditing.

Lessons Learned: The term “auditing” generally conjures up images of finances and taxes, but there is a branch called performance auditing that is fundamentally similar to evaluation. Our guiding document, the Yellow Book, defines performance auditing as “audits that provide findings or conclusions based on an evaluation of sufficient, appropriate evidence against criteria.” Performance audits cover a wide range of topics, including housing and homelessness, libraries, climate action, capital projects and infrastructure, and emergency medical services, among many others.

Rad Resources: The Association of Local Government Auditors (ALGA) is one of several professional organizations in the auditing world. Check out the ALGA website to learn more about performance auditing and to connect with people working in local governments across the U.S. and Canada with a growing presence from countries across the world. If your evaluation will involve working with local government, there may be a performance auditor you can reach out to for helpful information or resources!


We are David J. Bernstein, a Senior Study Director with Westat, founding chair of the AEA Government Evaluation Topical Interest Group, and President-Elect of Washington Evaluators, the DC-area affiliate of AEA, and Kathy Newcomer, Director of the Trachtenberg School of Public Policy and Administration at George Washington University, a former AEA Board Member, and a Past President of Washington Evaluators. We both have a long-standing interest in improving how the U.S. Federal Government contracts for evaluation services.

Problem: The vast majority of United States Federal government evaluations are conducted by contractors, but effective contracting is rarely examined. Government evaluation is not rocket science, but it is complicated.

A. Procurement regulations are detailed and may be outdated.

B. Agency practices differ across the Federal government.

C. There appears to be a lack of research focused on contracting for Federal evaluation work (although there are GAO and other studies on Federal contracting).

Solution: At the 2014 AEA Conference, a panel of government evaluators, contractors, and academics addressed 5 questions related to evaluation contracting and how it can be done more effectively. At a July 2015 Washington Evaluators Brown Bag, we presented a summary of the AEA session and asked the audience for opinions and examples on the 5 questions:

Name one legal and/or regulatory obstacle that can affect the quality of contracted evaluations. Potential solutions?

Do Requests for Expressions of Interest and question and answer processes improve the quality of evaluation Requests for Proposals (RFPs)?

How do government estimates of level of effort (or lack thereof) and time frames influence evaluation budgets and the conduct of evaluations?

How do contractors decide to bid or not? Do certain practices discourage bidding?

What are the pros and cons of performance-based contracting? Is it possible or desirable for contracting evaluation services?

Interested in government evaluation contracting? Look for a session on Exemplary Practices in Contracting for Government Evaluation at Evaluation 2015 in Chicago, IL.

The American Evaluation Association is celebrating Washington Evaluators (WE) Affiliate Week. The contributions all this week to aea365 come from WE Affiliate members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.