Funders’ Learning from Evaluation

It’s that time of year: federal grant deadlines for NSF, IMLS, NOAA, NEH, and NEA are looming large, and many informal learning organizations are eyeing those federal dollars. While government agencies (and private foundations) often require evaluation, we relish working on projects with team members who integrate evaluation into their work process because they are interested in professional learning rather than just fulfilling a grant requirement. Sadly, evaluation is often thought of and used as a judgment tool, which is why funders require it for accountability purposes. That said, I am not anti-accountability. In fact, I am pro-accountability, and I am also pro-learning.

Image: Andy Warhol, Dollar Sign (1982)

This fairly simple idea, learning from evaluation, is actually quite idealistic because I sense an uncomfortable tension between program failure and professional learning. Not all of the projects we evaluate achieve their programmatic intentions, in part because the projects are often complicated endeavors involving many partners who strive to communicate complex and difficult-to-understand ideas to the public. Challenging projects, though, often achieve another kind of outcome: professional and organizational learning, especially in situations where audience outcomes are ambitious.

When projects fail to achieve their audience outcomes, what happens between the grantee and funder? If the evaluation requirement is focused on reporting results from an accountability perspective, the organization might send the funder a rosy report without any mention of the project’s inherent challenges, outcomes that fell short of expectations, or the evaluator’s analysis identifying realities that deserve focused attention next time. The grantee acknowledges its willingness to take on a complicated and ambitious project and notes a modest increase in staff knowledge (because any further admission might suggest that the staff wasn’t as accomplished as the submitted proposal claimed). The dance is delicate because some grantees believe that any admission of not-so-rosy results is reprehensible and punishable by never receiving funding again!

Instead, what if the evaluation requirement were to embrace both audience outcomes and professional and organizational learning? The report to the funder might note that staff members were disappointed that their project did not achieve its intended audience outcomes, but that they found the evaluation results insightful and took time to process them. It might explain how what they learned, now part of their own and their organization’s knowledge bank, will help them reframe and improve the next iteration of the project, and how they look forward to continuing to hone their skills and improve their work. The report might also include a link to the full evaluation report. I have observed that funders are very interested in organizations that reflect on their work and take evaluation results to heart. I have also noticed that funders are open to thinking and learning about alternative approaches to evaluation, outcomes, measurement, and knowledge generation.

Most of the practitioners I know want opportunities to advance their professional knowledge; yet some feel embarrassed when asked to talk about a project that may not have fared well, even when their professional learning soared. When funders speak, most grantees pay close attention. How might our collective knowledge grow if funders invited their grantees to reflect on their professional learning? What might happen if funders explicitly requested that organizations acknowledge their learning and write a paper summarizing how they will approach their work differently next time? If a project doesn’t achieve its audience outcomes and no one learns from the experience, that would be reprehensible.

Isn’t it time that funding agencies and organizations embrace evaluation for its enormous learning potential and respect it as a professional learning tool?  Isn’t it time to place professional and organizational learning at funders’ evaluation tables alongside accountability?  In my opinion, it is.
