Evaluation is a relatively young discipline, still growing in theory and practice. There are many theories and approaches to evaluation practice, but fundamentally, the discipline involves determining the merit, worth and value of things. Unlike pure social research, the practice of evaluation is about making a judgement on the worth of something.
One of the most traditional and straightforward approaches to program evaluation is to assess a program’s worth by the extent to which it delivered on its stated goals or objectives. This is how I have approached most of my evaluations to date, mostly because it has been a response to a client’s brief. But even when conducting an evaluation that directly explores a program’s outcomes in relation to its stated goals, there are invariably a number of unintended outcomes that are often as important as the stated goals, and sometimes even more important. This is because they reveal the true, actual worth of the program, rather than simply its intended worth.
Michael Scriven is one of the most highly respected thinkers in the discipline of evaluation. He has become increasingly uneasy about the separation between goals and unintended outcomes, which he calls ‘side-effects’. What should concern us, he argues, is what side effects the program actually had, and evaluating those, whether or not they were intended.
Scriven goes further, arguing that considering the stated goals of a program when evaluating it is not only unnecessary, but dangerous. He suggests an alternative approach: evaluating the actual effects against a profile of demonstrated need. This is goal-free evaluation.
Intended goals are different from real goals
Scriven states what I have experienced in practice: that the alleged goals of a program are often different from its real goals. That is, what the program designer articulated as the important outcomes when the program was being designed may not be so important on the ground. So why should the evaluator get into the messy job of trying to disentangle that knot? Scriven observes that intended goals are often stated vaguely by almost anyone’s standards, and that when pressed to articulate what would determine a program’s measure of success, program designers often find it difficult to say. He goes on to ask why we should try to find out what was really intended at all. This is an interesting, if controversial, question.
Criticisms of goal-free evaluation
Scriven identifies six common criticisms of the goal-free evaluation approach, and offers counterclaims.
Goal-free evaluation simply substitutes its own goals for those of the project
No, he claims. A goal-free evaluation may use program goals as standards, but goal-free evaluation is not limited to examining whether those goals were met.
Great idea, but impractical
Scriven argues that an evaluator may find it comforting to reach for a security blanket of goals, but one should learn to do without them.
There is a chance that some of the most important effects will be missed
This is acknowledged to be a trade-off. A goal-free evaluation does not ignore what everyone else already knows; its value is that it also notices what everyone else has overlooked, or produces a novel yet important perspective.
This approach can only lead to poor planning
Goal-free evaluation does involve formulating goals, and formulating them in testable terms. But the determination of what those goals are should not necessarily be limited to the client’s belief about what they should be.
You can’t test for all possible effects
This is true to an extent, but the role of the evaluator is to look for side-effects, and there should be no limitation on where or in what form they might crop up. The main job of the evaluator is to evaluate achievement, whether expected or unexpected.
Goal-free evaluation is seen as a threat by many program designers
This is also considered to be true, because goal-free evaluation is less under the control of management, and there is much more scope in what an assessment of their program may reveal.
Ultimately, if the aim of good evaluation is to uncover the true and important merit or worth of a program, this approach cannot be overlooked. Although I have never ignored the ‘side effects’ revealed in an evaluation, Michael Scriven has made me feel freer to legitimately explore the merit of the actual outcomes rather than simply the intended ones. And that can only result in more meaningful and useful evaluations.