The practice of evaluation involves arriving at succinct answers to important questions; this is known as evaluative reasoning. Evaluative reasoning is what distinguishes evaluation from research. Evaluations involve conducting research, but they go further, determining merit or worth with the aim of reaching evaluative conclusions. To evaluate is to make reasoned arguments and claims about how good, valuable or worthwhile an initiative is.
What is ‘good’?
To determine what is good, and how good 'good' is, is a challenging task, and one an evaluator faces constantly. It is the central issue in evaluation. Programs can be evaluated against all kinds of criteria, and this process can generate a wide range of perfectly logical evaluations. But unless we use criteria that are relevant and that represent a genuine claim of social value, the outcomes of an evaluation may be misleading.
How well did the initiative perform?
Conclusions should start from factual information about the program's performance, and from clarity about which aspects of that performance matter most to the client.
For example, an initiative may be judged on its appropriateness; that is, the extent to which a program is relevant to the needs of the community, and how it was designed to address the community’s most significant problems. The most successful implementation means nothing if the intervention is not socially relevant.
Or perhaps the key measure is efficiency: value for money. Success may relate to how well the sequence of operations was managed to deliver the outcomes. Often a successful program is one that uses its resources to produce as much output as possible.
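As a sketch only, value for money is often expressed as a cost-per-outcome ratio; the programs, figures and function name below are invented for illustration:

```python
# Illustrative only: comparing two hypothetical programs on value for money.
# All figures are invented; real evaluations must also agree on what counts
# as one "outcome" before ratios like this are comparable.

def cost_per_outcome(total_cost: float, outcomes_delivered: int) -> float:
    """Return the cost of producing one unit of outcome."""
    return total_cost / outcomes_delivered

program_a = cost_per_outcome(120_000, 400)  # 300.0 dollars per outcome
program_b = cost_per_outcome(90_000, 250)   # 360.0 dollars per outcome

# The lower ratio (program A) represents better value for money,
# even though program A's total budget was larger.
```

A lower total budget does not by itself signal efficiency; the ratio of resources consumed to outcomes produced is what the criterion measures.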
Thirdly, a program's effectiveness may be used to determine its success; that is, the extent to which the initiative generated its intended outcomes. A program may deliver its intended outcomes yet do so at too high a cost, in which case the evaluation may conclude it was less than entirely successful.
How good is good enough?
Once evidence about the program's various levels of performance has been gathered and examined, it is the evaluator's role to determine how good is good enough: what 'excellent' looks like, what counts as 'good' or 'good enough', and what defines 'poor'. These answers come from detailed discussion with the client about their expectations and what the program intended to achieve, from comparing outcomes against benchmarks such as previous similar initiatives, and sometimes from the evaluator's own assessment of the public good of the initiative.
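Such judgements are often formalised as a rubric that maps evidence of performance onto evaluative labels. The thresholds and labels below are invented for this sketch; in practice they are negotiated with the client and benchmarked against similar initiatives:

```python
# Illustrative only: a hypothetical rubric translating a performance score
# (e.g. percentage of intended outcomes achieved) into an evaluative rating.
# Thresholds are checked from highest to lowest.
RUBRIC = [
    (90, "excellent"),
    (75, "good"),
    (60, "good enough"),
    (0,  "poor"),
]

def rate(score: float) -> str:
    """Map a 0-100 performance score to a rating using the rubric."""
    for threshold, label in RUBRIC:
        if score >= threshold:
            return label
    return "poor"

# rate(92) -> "excellent"; rate(70) -> "good enough"; rate(40) -> "poor"
```

Making the thresholds explicit in this way forces the "how good is good enough" conversation to happen before the results are in, rather than after.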
For example, a program that shows evidence of having achieved all of its intended outcomes may, on face value, be considered 'good' or even 'excellent'. But if the program's benefits to society are few, it was costly to deliver, and it was funded as a priority over other initiatives that might have delivered a greater public good, then the evaluator may be wise to judge it less favourably. It may be prudent to carry out a needs assessment before a program is delivered, to help determine whether the program is likely to meet the needs of the community it is intended to serve.
Ultimately, arriving at well-reasoned, direct answers to complex questions is challenging. But even with limited evidence, determining what is good enough comes down to well-considered reasoning.