No. 29 | published 2010
Evaluation as More than a Report Card
by Dr. Dennie Palmer Wolf, Principal
Evaluation can be one of the most widely misunderstood aspects of designing and conducting programs, especially in the area of arts education. Many approach it simply as a means of securing a (hopefully good) grade for their activities. Difficulties, mistakes, and uneven data get smoothed over. But meaningful evaluation should engage all stakeholders in the hard – but essential – work of improvement.
For several years, I have been fortunate to work with Big Thought, a city-wide consortium of arts and cultural organizations in Dallas that enriches the learning of elementary students throughout the city. In a new Big Thought book that I wrote with Jennifer Bransom (Big Thought’s Director for Program Accountability) and Katy Denson (an experienced program evaluator), we share our ideas on what it takes to make evaluation an engine of improvement. Here are some highlights from our book, More than Measuring:
1: Tailor the evaluation to the context.
In Dallas, even evidence of a growing number of clients and services, and of increased grants and donations, was not sufficient in a time of increasing accountability for measurable effects. The evaluation design had to reflect stakeholders’ and funders’ desire to see demonstrable changes in children’s learning.
2: Create community-wide investment in improvement.
From Day 1, the stakeholders agreed that the evaluation should be a candid and constructive examination of what worked and what needed improvement.
3: Engage a full range of stakeholders in key decisions.
Long before data collection began, we asked all participating organizations to help define a shared outcome – literacy, defined as children’s increased ability to express themselves powerfully. After that, all horses pulled in the same direction.
4: Design the evaluation so it enhances the capacity of all participants.
We invited staff members from all the participating organizations to help us collect and analyze the data. Teachers worked on program design; others took on different roles.
5: Plan for midcourse corrections.
Partway into the evaluation, we realized we were looking for effects in the wrong places. This required a public reworking of our design, tools, predictions, and analyses. But the change modeled the habit of learning from mistakes.
6: Grapple with uneven findings.
The data showed that different arts and cultural experiences had uneven effects on student learning. Rather than viewing this as evidence of “failure” or “noise,” we treated the unevenness as data, harvesting information about what does and doesn’t work.
7: Stay alert to surprises.
Students who were interviewed by researchers turned out to be the most affected by the programs. This was an unintended effect, but we learned that placing young people in the role of informants accelerated their learning.
8: Share and use the findings for improvement.
The evaluation was a four-year longitudinal investigation, but we held annual meetings with teachers, principals, District Board members, and provider organizations. We discussed what the program was – and wasn’t yet – accomplishing and used those discussions to set program and evaluation goals for the coming year.
Gigi Antoni
Executive Director, Big Thought
Taking the plunge into major evaluation – with all its risks and benefits – may have been the single most important thing that propelled Big Thought to become the organization it is today. Initially, our decision to make a major investment in research and evaluation came about because we had no choice – funders were insisting on it. But we soon saw how important it was to get beyond “stars in kids’ eyes” as our only measure of accountability.
In the process of gaining feedback from evaluation, our programs got much stronger. We asked ourselves hard questions about how to design our arts integration work for maximum effect on the children we served, and we had to go after the reasons beneath the choices we made.
It was a huge transition for us. We created a division of program accountability and we started building evaluation dollars into all of our budgets. Everyone in the organization had to be able to explain these costs to our partners and our funders. We had to make peace with the fact that those dollars weren’t going directly to children and artists.
Ultimately, we were taken much more seriously. We became contenders in major grant competitions at the federal level. This last time around, our proposal received the highest possible score in the research design category when it was reviewed at the U.S. Department of Education. When you consider that less than ten years ago we would have seen evaluation as taking something away from our programs, the distance we have traveled is quite significant.
David Dik
Managing Director, Metropolitan Opera Guild
For any arts education institution, the rationale for evaluation and assessment should be clear – to improve the teaching and learning of all those involved. Though data and the collection of data are critical, evaluation requires an initial question or hypothesis to be tested.
Without a common set of guiding beliefs, evaluation devolves into simple program documentation and quickly proves inadequate as “proof.”
As an example from our work at the Metropolitan Opera Guild, we have a guiding belief for our programs that also informs our evaluation design. We believe the arts can have an immeasurable impact on literacy when we expand the definition of literacy to include expression and articulation in multiple languages – in the case of the operatic art form, the combination of sound, sight, dialogue, and movement.
At the Guild, we also follow another of Dennie Wolf’s design principles: the key role of partnerships in evaluation. Partnerships need to be developed among educators, administrators, cultural and community-based organizations, and parents to ensure that an engaging arts education program realizes its fullest lasting impact. In our efforts, we strive to engage teachers, teaching artists, and students as our primary gatherers of information. We use the process and the work that we create to assess our impact on the lives of students and teachers. When evaluating our programs, we find it critical that the artistic work is genuine and authentic. What better way to do so than to engage students and teachers not only in the process of making art, but in the assessment process as well?
Moy Eng
Program Director, Performing Arts, William and Flora Hewlett Foundation
Why should evaluation in the arts and in arts education be such a provocative topic? Why do arts practitioners and teaching artists often resist taking it on?
Some say that the arts are about creating magic and beauty. The arts are ephemeral and simply cannot be reduced to quantitative metrics. Others argue that the arts represent one of the last refuges from a testing culture in our schools, a culture only made worse by the No Child Left Behind legislation. Evaluation, they say, reduces the arts to the same multiple-choice testing mentality that pervades so much of what a child experiences in school.
At our foundation, evaluation is an organization-wide philosophy and practice, and the arts are not exempted.
We do not try to claim that the arts are different when it comes to demonstrating effectiveness. We ask all of our grantees in all program areas to articulate a theory of change – from mission to outcomes – and then to tell us the strategies in between that will move them along a path toward those outcomes. We ask: “How will you know you are moving in the right direction and making progress toward the results you seek? How will you measure change?”
Those of us who work at the Hewlett Foundation view part of our job as helping our grantees think about these questions and come up with evaluation strategies that build their capacity. As Dennie Wolf says, evaluation is far more than an exercise in measuring. At its best, it strengthens organizations and programs and ensures better results.