Why AI Can’t Save Nonprofit Evaluation

The true hero comes from within

By Elena Harman

A few times a year, I get approached by someone wondering why nonprofits don’t use Artificial Intelligence (AI) for evaluation. They believe AI could be a cost-efficient, accurate way for nonprofits to measure their impact without added staff. And often, they have developed a product that is going to use AI (or some other form of automated data analysis) to save nonprofit evaluation.

So let me share with you what I share with them: AI can’t save nonprofit evaluation. Nonprofit evaluation doesn't need to be saved from the outside, or from technology. It needs to be saved from the inside, with the people working in nonprofits.

Nonprofit evaluation is most effective when it includes strong engagement from staff and stakeholders (people), space for ongoing reflection (process), and the means to analyze the data (technology). Even the best automated analysis solutions only address technology.

People Matter

Nonprofit evaluation doesn’t need to be saved from the outside, or from technology. It needs to be saved from the inside, with the people working in nonprofits.

This is probably the number one reason AI cannot “fix” evaluation. When you involve your team, other key stakeholders, and program beneficiaries in the evaluation process, you uncover goals, challenges, perspectives, and even resources that are central to understanding how to improve your work.

You can’t identify what data to collect without first figuring out which questions matter most. You’ll need to engage the people at the heart of the work to surface the evaluation’s priorities, and that takes a facilitated conversation to really dig into what information is needed and what success looks like.

Further, nonprofit and social programs have intricacies that require thoughtful, and sometimes evolving, measurement. The success of a blockbuster release can be measured by sales. It is not as simple, however, to identify the data that would indicate the success of a movie meant to create social change, like one intended to start a movement to improve our health care system.

To make matters more complicated, the data you’d want to look at differs depending on the context of the program. A movie designed to motivate health insurance enrollment would dictate very different measures of success than one designed to motivate environmental conservation, whereas an action blockbuster and a drama blockbuster are both measured by sales.

You can’t automate the selection of data sources because the work nonprofits do is not simple.

Even if AI could somehow figure out the most important questions and account for the nuances of different contexts, we would still need people to collect the data. There is no point-of-sale system for tracking social change. I am assuming, of course, that our Amazon Alexas and Apple Siris are not secretly collecting data that could be used to understand changes in attitude toward environmental conservation. (Seriously, that’s not happening, is it?) People have to ask willing participants and document that attitude and perception data, or it can’t be analyzed.

Process Matters

Second, evaluation is at its best when nonprofits are able to use the information to inform ongoing strategic improvements to their programming. That only happens when there is a process in place to reflect on data and identify opportunities for improvement.

Unfortunately, these opportunities are few and far between at most nonprofits. New nonprofit professionals are trained in how to do their jobs, often direct service, but few of their training opportunities include continuous learning processes. The same is true at the leadership level: very few nonprofit leadership programs focus on building a culture of learning, so it gets left out and left behind! And even if AI conducts the best analysis on the best data, it can’t force a process on an organization. It can set calendar reminders and send push notifications. But it can’t facilitate a discussion that builds trust and understanding among the people involved in a program, and it can’t ensure all voices are heard when interpreting the data.

Even using evaluation results requires individuals who are “data literate” and able to draw out the implications for their own work. We can collect all the data we want on our movie’s impact on health insurance enrollment, but unless we have the time to process what the data means and how it can inform our decisions at work, it means nothing. AI cannot do your strategic thinking for you.

Technology Matters?

OK, so AI cannot solve the people part of evaluation or the process part of evaluation. Can it solve the technology part? Sometimes. AI requires a tremendous amount of data, and rare is the nonprofit or social program that has the quantity and quality of data to make AI a realistic option. Most organizations don’t meet that bar; if yours does, more power to you and full steam ahead with AI as your tech solution. But please, please remember that the best analysis in the world means nothing if it’s unrelated to the things you most want to know and if you don’t have a continuous process to learn from the findings and improve based on them.

Nonprofit evaluation deserves adequate funding and staff time, and we need to give nonprofits space to do what they do best. Only nonprofit staff can identify what matters most and use the answers to those questions to improve services for our communities.

One easy, analog way to start? “The Great Nonprofit Evaluation Reboot: A New Approach Every Staff Member Can Understand” has worksheets and tips for finding the right questions and building in reflection time.



Elena Harman, PhD | CEO

Elena takes the big-picture view of how Vantage’s work transforms how evaluation is used and perceived. She pushes everyone around her to think bigger about what evaluation can be and how it can help improve our communities. With an encyclopedic knowledge of research and evaluation methods, Elena supports and advises the evaluation team on all projects. She connects the dots between data sources and projects. Elena has dedicated her life to Colorado and to evaluation as a means to improve the lives of state residents. She brings deep expertise in systems, nonprofits, and foundations in Colorado, as well as in how to engage diverse audiences in a productive conversation about evaluation.

