08/24/2007
Why Evaluations Fail
Much learning does not teach understanding.
Heraclitus
As we approach the 30th anniversary of Exchange magazine, we enjoy looking back at our earliest articles. One piece, from our very first year, caught my eye today. In our November 1978 issue, Cambridge researcher Dick Rowe contributed “Making Evaluation Work in Child Care.” It is one of those insightful articles that stand the test of time. Today I would like to share excerpts in which he discusses the reasons evaluations often fail...
- Lack of significant effects.
Frequently the evaluations of social programs reveal no significant effects of even the most innovative programs.... Differences that do exist are commonly so small, or so mixed in with other variables that cancel out or dampen the effects you are looking for, that when all is said and done, when you look at the data, you don’t see any program impact on the lives of the people....
- Lack of clear goals.
Evaluations also fail to identify significant results because we often don’t know what we are trying to achieve. There is the political rhetoric needed to get money from Congress, foundations, or other funding sources, and then there is what actually happens in the program. These two are often quite different from each other.
- Lack of appropriate measures.
The results of evaluations are often invalid due to our inability to measure what we want to measure. Child care programs often have objectives such as promoting warmth and caring, yet it is very hard to operationalize these objectives in such a way that you can measure them with reasonable precision.
- Failure of effects to last.
Evaluations also fail to demonstrate results because any effect a program has tends to decay over time unless the program continues to be there, or unless other supporting mechanisms maintain or enhance the effect. There are very few changes that, once started, strengthen their effect over time, particularly if the environment hasn’t changed.
- Failure to note unanticipated outcomes.
In measuring the expected outcomes of a program, evaluators often permit themselves to be blinded to unexpected outcomes. These unanticipated outcomes are sometimes more important than the ones a program originally set out to accomplish.
Rowe’s complete article can be accessed in one of two ways:
For more information about Exchange's magazine, books, and other products pertaining to ECE, go to www.ccie.com.