Much learning does not teach understanding.
-Heraclitus
As we approach the 30th anniversary of Exchange magazine, we enjoy looking back at our very first articles. One article, from our very first year, caught my eye. In our November 1978 issue, Cambridge researcher Dick Rowe contributed “Making Evaluation Work in Child Care.” It's one of those insightful articles that stand the test of time. Today I would like to share excerpts from this article in which he discusses reasons evaluations often fail...
- Lack of significant effects.
Frequently the evaluations of social programs reveal no significant effects of even the most innovative programs.... Differences that do exist commonly are so small or so mixed in with other variables that cancel out or dampen the effects you are looking for, that when all is said and done, when you look at the data, you don't see any program impact on the lives of the people....
- Lack of clear goals.
Evaluations also fail to identify significant results because we often don't know what we are trying to achieve. There is a political rhetoric that is needed to get money from Congress, foundations, or other funding sources. Then there is what happens in the program. These two are often quite different from each other.
- Lack of appropriate measures.
The results of evaluations are often invalid due to our inability to measure what we want to measure. Child care programs often have objectives such as promoting warmth and caring. Yet it is very hard to operationalize these objectives in such a way that you can measure them with reasonable precision.
- Failure of effects to last.
Evaluations also fail to demonstrate results because any effect that a program has tends to decay over time unless that program continues to be there or unless there are other supporting mechanisms that maintain or enhance the effect. There are very few changes that, once started, strengthen their effect over time, particularly if the environment hasn't changed.
- Failure to note unanticipated outcomes.
In measuring the expected outcomes of a program, evaluators often permit themselves to be blinded to unexpected outcomes. These unanticipated outcomes are sometimes more important than the ones the program originally set out to accomplish.
Rowe’s complete article can be accessed in one of two ways:
Comments (2)
MBay, British Columbia, Canada
Re: Why Evaluations Fail
I've been most interested in this issue... of diminishing effects, the resistance to follow-through, and the impact on teams.
My search has led me to Chip Heath and Dan Heath's volume "Made to Stick: Why Some Ideas Survive and Others Die".
Since this text does not have an ECE focus, I'm engaged in some crossover translation.
Fairfield, CA, United States
Reading this article made me think about Emmi Pikler and the Pikler Institute in Budapest, Hungary. She made evaluation of her approach to caring for orphans in an institution work, and now I'm curious to go through each aspect of the article and try to figure out how she managed to overcome the barriers that the article points out.