No. 11 | Published 2002

The Value in Evaluation: Finding Approaches that Work

by William Keens, President
Wolf, Keens & Company

Over the course of the last twenty years, the trend toward outcome-based evaluation in philanthropy has grown from a trickle into a tsunami. Woe to the organization or program officer who hasn’t seen it coming. This movement – complete with adherents, acolytes, and its own special language – has made believers of many and sharp critics of others. In any case, it is now a fact of philanthropic life.

In this Working Paper, we begin by sharing some thoughts about evaluation gleaned from our work with both grantees and grant makers, and then turn to comments from the field, in the interest of inviting a wider and more robust discussion.

A Common Framework Is Essential

Evaluation is easier from the start when the organization and the evaluator agree on the shape and scope of the “activity landscape.” A successful evaluation process starts with a common framework that describes anticipated outcomes, when and how those outcomes will be evident, and an evaluation methodology to assess them.

A simple mapping tool may help. In any field, the activities being evaluated vary by domain and depth – with “domain” ranging from local to global, and “depth” ranging from shallow to deep. [See the accompanying graphic.] Goals in the upper left quadrant are macro-aspirational (e.g., eliminate racism worldwide), while the goals in the lower right quadrant are micro-specific (e.g., sell a certain number of concert tickets). By extension, the upper right quadrant encompasses goals that are shallow but global (e.g., conduct a census of all the members of a particular faith), and in the lower left the goals address deep conditions but are geographically circumscribed (e.g., persuade more parents in a community to read regularly to their children). Degrees of depth or domain can be pinpointed along either scale.

Locating program activities on this map can be the first step in developing a common framework for evaluation. It allows all parties to discuss, reconcile, and build on their views of what outcomes, time frame, and manifestations of success – and therefore, what evaluation methods – are most reasonable.
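For readers who prefer a concrete rendering, the sketch below encodes the map in Python. It is an illustration only: the Goal structure, the 0-to-1 scales, the 0.5 threshold, and the placement of the example goals are assumptions introduced here, not part of the Working Paper's tool.

from dataclasses import dataclass

@dataclass
class Goal:
    description: str
    domain: float  # 0.0 = purely local, 1.0 = global (assumed scale)
    depth: float   # 0.0 = shallow, 1.0 = deep (assumed scale)

def quadrant(goal: Goal) -> str:
    """Name the quadrant a goal occupies on the activity landscape."""
    if goal.domain >= 0.5 and goal.depth >= 0.5:
        return "macro-aspirational (global and deep)"
    if goal.domain >= 0.5:
        return "global but shallow"
    if goal.depth >= 0.5:
        return "deep but geographically circumscribed"
    return "micro-specific (local and shallow)"

# The four example goals from the text, placed roughly on the map:
goals = [
    Goal("Eliminate racism worldwide", domain=1.0, depth=1.0),
    Goal("Census of the members of a particular faith", domain=1.0, depth=0.2),
    Goal("More parents reading regularly to their children", domain=0.1, depth=0.9),
    Goal("Sell a certain number of concert tickets", domain=0.1, depth=0.1),
]

for g in goals:
    print(f"{g.description}: {quadrant(g)}")

Running the sketch reproduces the four quadrants described above; the point is simply that once a goal has coordinates, all parties can see, and argue about, where it sits.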

Three Reasons Why Evaluation Is Useful…

The benefits of a well-constructed evaluation process are readily apparent to those who have experienced it.

1. Evaluation is a learning tool, and the organization emerges with more knowledge, self-assurance, and ability to improve in the future.

2. Subsequent organizational and program planning is increasingly grounded in an understanding of what works.

3. The true return on investment is clearer; that information can be used to build a stronger case for support.

…And Why It Is Not

On the other hand, those who have experienced a poorly executed evaluation have their own reasons for resenting the intrusion.

1. Grantees feel at the mercy of the evaluation process, which often places more value on quantitative measurement than on intuitive and qualitative ways of knowing.

2. Donors who require evaluation often fund for a few years only – they don’t stay involved long enough to apply what is learned.

3. The lessons from organizational evaluations don’t accumulate in the field, and with no common repository for what is learned, every subsequent program design and evaluation seems de novo.

Paul Brest, President
The William and Flora Hewlett Foundation

Bill Keens makes cogent observations about the value of evaluation. His chart showing the “mapping tool” is designed to help overcome the most fundamental barrier to evaluation: lack of clarity about the organization’s intended outcomes. Not as a substitute, but as a complement, I want to propose the utility of another, more linear chart that sets out how the organization plans to achieve its intended outcomes or objectives:

Inputs → activities and outputs → outcomes

Inputs consist of resources the organization plans to deploy for the project; activities and outputs are what the organization actually does or delivers; and outcomes are the ultimate results it plans to achieve. The causal chain moves from left to right, showing how the organization plans to get from here to there. However, designing a project begins on the right side – specifying where “there” is.

Planning for evaluation thus begins by understanding how, at least in principle, you would know that you have achieved your ultimate outcome, and how you will know if you are on the path to success. The actual measurement of many outcomes – for example, improving children’s life opportunities through a mentoring program – requires the gathering and analysis of social science data beyond the capacity of most nonprofit organizations. Specifying what outcomes you would measure if you could is nonetheless essential to knowing just what you hope to accomplish. And intermediate indicators of progress that are actually measurable, and that you commit to measuring from the outset, are essential to knowing if you’re heading in the right direction.
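As one way to make the chain concrete, the sketch below encodes a logic model in Python and builds it outcomes-first, reflecting the point that design begins on the right side. The LogicModel and plan names, the field layout, and every detail of the mentoring example are hypothetical, introduced here for illustration only.

from dataclasses import dataclass

@dataclass
class LogicModel:
    # Fields ordered as the causal chain reads, left to right.
    inputs: list[str]                  # resources the organization deploys
    activities_and_outputs: list[str]  # what it actually does or delivers
    indicators: list[str]              # measurable intermediate signs of progress
    outcomes: list[str]                # the ultimate results it plans to achieve

def plan(outcomes, indicators, activities_and_outputs, inputs):
    """Design starts on the right: name the outcomes first, then work back."""
    return LogicModel(inputs, activities_and_outputs, indicators, outcomes)

# Hypothetical mentoring example, echoing the one in the text.
mentoring = plan(
    outcomes=["Improved life opportunities for mentored children"],
    indicators=["Mentor-mentee meeting frequency", "School attendance"],
    activities_and_outputs=["Recruit and train mentors", "Match mentor pairs"],
    inputs=["Program staff", "Grant funds", "Volunteer mentors"],
)

# Once built, the causal chain reads left to right:
print(" -> ".join(["; ".join(mentoring.inputs),
                   "; ".join(mentoring.activities_and_outputs),
                   "; ".join(mentoring.outcomes)]))

Note that the indicators sit between outputs and outcomes precisely because they are what the organization commits to measuring while the ultimate outcome remains out of reach.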

Judith H. Kidd
Assistant Dean of Harvard College for Public Service
Instructor, “Managing Nonprofit Organizations,” Harvard Extension School

Funders and nonprofit organizations have a shared need to determine organizational and program effectiveness. That said, I try to avoid using the word “evaluation.” It is tinged with the sense of final judgment and, even in academia, few people can agree on what constitutes a valid process. Even when everyone can agree on the process, it may be too rigorous and unforgiving to justify the time and expense involved.

“Outcomes measurement” is a better term perhaps, but it requires quantifiable results that may be difficult to demonstrate. At the ground level, people need maximum flexibility for individual cases, even if this runs counter to good evaluation design. Direct service providers are not, as a rule, opposed to precision or the need to create better programs; but they doubt whether rigorous evaluations are strongly correlated with long-term funding. Some believe, probably justifiably, that evaluations meet more of the funders’ needs for accountability than the programs’ needs for advice.

Nonprofit staff can find it challenging to cooperate with a time-consuming evaluation that takes them away from their work. This is especially true when they fear that the process will not prove the effectiveness of a program that, day-to-day, on the ground level, they can see making a difference. Perhaps that difference is occurring one person at a time, though not in a sufficiently systemic or acceptably quantifiable way. Furthermore, those on the front line may have had little input into the grand claims or numbers built into their organizations’ strategic plans or grant proposals that are used as the basis for the evaluations.

Perhaps we should spend less time evaluating the direct service providers and more time evaluating those advocacy and governmental agencies that are supposed to work on systemic social change.

Elizabeth T. Miller, Director, New Ventures in Philanthropy
Washington Regional Association of Grantmakers

In my role as Director of New Ventures in Philanthropy, I am a grantee, and as a member of the staff of the Washington Regional Association of Grantmakers, I work closely with many grantmakers. I believe that evaluation can be an excellent learning tool for both grantmakers and their grantees, so I was appalled to hear a foundation director lament at a recent conference of grantmakers: “I just don’t have time to read those long evaluation reports.” This comment revealed an essential dilemma about the level of effort required both to produce evaluation reports and to make them a useful tool for philanthropists, and leads to these reflections on evaluation from the grantee’s perspective.

1. Funders need to communicate clearly with grantees what is being evaluated, why this information is needed, and how it will be used. If feasible, grantees should be involved in designing the evaluation process so that they are assured that the evaluation will be valuable to them. Funders should make it clear that the evaluation report is not an audition for the next round of funding, but that they value an honest appraisal of the positives and negatives of a grantee’s experience.

2. Funders should provide grantees with the technical assistance they need to do evaluation well. Foundation staff and evaluation consultants engaged by funders should help grantees determine outcomes and develop data-collection tools.

3. A strategic and well-constructed evaluation can be an incredibly valuable asset to both funders and grantees, if the knowledge gained is shared. Convening meetings of grantees to facilitate the exchange of information among the grantees is a great way for us to learn from each other, and a painless way for the funder to digest the evaluation “learnings” without having to read a lengthy report.
