Evaluation in the Wild: Serial Podcast Season 3

The Vital Question of Effectiveness Versus Efficiency

By Elena Harman and Laura Sundstrom

“This is possibly the most profound and least examined question in the building: what works? ... we just don’t know ... The court keeps extensive data regarding efficiency ... I’m not knocking efficiency, it’s important ... but there is no database either locally or nationally that shows what works. So each judge in the building has to muddle it out for him or herself.”

Wait, what? *I rewind the podcast and listen again.* Serial, my favorite podcast, is talking about my profession. Oh. My. Goodness. My excitement is uncontainable. Evaluation has finally hit the mainstream! If Sarah Koenig is talking about it, everyone must be talking about it.

I rapid-fire text everyone I know, telling them that Serial is talking about evaluation and they need to listen. And they do. But they don't understand what it has to do with what I do for a living, because Sarah never actually uses the word evaluation… My excitement dissipates, and I realize evaluation might still live a great distance from the mainstream.

Let me take a few moments to build a bridge.

When Sarah asks “what works?” ... that is evaluation.

Evaluation is how we learn what works and what doesn't in the social sector. Evaluators start with questions that have the potential to help program staff (e.g., judges and the court system) make better decisions (e.g., sentencing) to help improve outcomes (e.g., reduce recidivism, increase rehabilitation) for those they serve (e.g., the Cleveland community).

Once we know the questions, we use all sorts of traditional and emerging research methods to answer them. Then we use the findings to help programs improve their services and better support our communities. As Sarah points out, we don't just want programs to be efficient; we want to understand their effectiveness.

How asking “what works” works.

In the criminal justice world, drug courts are a great example of how asking what works can shift the way systems operate and improve programs. They were developed in response to the large number of people with substance use issues cycling through the system again and again. People started asking questions: is there a better way to work with this population, one that will reduce recidivism and the cost to the system?

Drug courts connect judicial, law enforcement, and treatment communities to focus on treatment for non-violent offenders with substance use problems, rather than sentencing them to prison. They take a holistic approach to get at the root cause of the crimes and find the best way to break the cycle of recidivism.

But are drug courts actually effective?

Since the first drug court opened in 1989, numerous evaluations have been conducted on the effectiveness of the model. And what they found was clear: drug courts reduce recidivism, reduce drug relapses and substance use problems, and are cost-effective.

Drug courts have shifted how local judicial systems operate. More and more are popping up, with more than 2,500 now operating across the U.S. And because the problem-solving court model that drug courts use has proven so effective, it is being extended to other populations through Mental Health Courts and Human Trafficking Courts.

If the judicial system had just asked how to more efficiently process substance abuse cases, none of this would have been possible.

Why haven't we asked what works more often?

Evaluation originally emerged from a desire to hold social programs accountable. As a result, outcomes that can easily be quantified, like financial performance and efficiency, are often prioritized over the outcomes that actually matter. This creates misaligned incentives for really important social programs.

Unfortunately, accountability and learning cannot coexist happily. When we focus first on holding organizations accountable for meeting certain standards, and only then ask them to learn about what works and what doesn't, the second ask falls on deaf ears. We don't know more about what works because, for so long, the accountability questions have come first.

How can we change that?

First, keep asking “what works” questions. Every voice that joins Sarah in asking deeper questions about the effectiveness of our social programs starts to put a dent in the dominance of accountability-oriented systems.

Second, support politicians, organizations, and leaders who ask these questions, fund the inquiry into their answers, and base future decisions on what they learn. It is easy to stay attached to a program because it sounds promising, without reevaluating once the evidence shows otherwise. The classic example is D.A.R.E., the darling of 1980s drug prevention strategy. The program was widely touted by leaders all the way up to the First Lady, and it continued to be praised and expanded well after a 1991 evaluation found that D.A.R.E. did not reduce adolescent substance use.*

Finally, be a critical consumer of information. Sometimes the wrong questions are highlighted because a focus on accountability has taken priority over a focus on positive change. By understanding who is asking the questions, who is providing the answers, and what those answers are being used for, you’ll get a clearer picture of how programs are working. Keep those considerations in mind when you are supporting (either financially or otherwise) politicians, organizations, and leaders.

For more tips on focusing on effectiveness rather than efficiency, check out *The Great Nonprofit Evaluation Reboot: A new approach every staff member can understand*.


*The program has since been redesigned to address these concerns, and modern-day D.A.R.E. bears little resemblance to the version my generation experienced.


Elena Harman, PhD | CEO

Elena takes the big-picture view of how Vantage's work transforms the way evaluation is used and perceived. She pushes everyone around her to think bigger about what evaluation can be and how it can help improve our communities. With an encyclopedic knowledge of research and evaluation methods, Elena supports and advises the evaluation team on all projects. She connects the dots between data sources and projects. Elena has dedicated her life to Colorado and to evaluation as a means to improve the lives of state residents. She brings deep expertise in Colorado's systems, nonprofits, and foundations, as well as in how to engage diverse audiences in a productive conversation about evaluation.

Laura Sundstrom, MSW
Evaluator

Laura specializes in building evaluation capacity and helping clients understand the “why” behind evaluation tasks, leaving an impressive trail of evaluation skills wherever she goes.