As evaluators, it is sometimes clear at the outset of a project that there is an expectation of positive outcomes and recommendations that will help the project secure ongoing funding (perhaps from government, philanthropic organisations, social enterprises or other foundations). There are even times when this is explicitly articulated in the project brief.
Evaluators may be tempted to provide overly positive findings to avoid conflict with clients and to secure future work. Such temptations fall most heavily on independent consultants who rely on future contracts. It is easy to sympathise with the client’s situation. Who doesn’t want positive results? Who doesn’t want to leverage ongoing funding? But as a client, it is worth reconsidering the role of the evaluator and the true value they can add when you remain open to a range of outcomes.
Principles of honesty and transparency
As members of the Australasian Evaluation Society, we are committed to operating according to the following principles of honesty and transparency, as outlined in the Society’s Guidelines for the Ethical Conduct of Evaluations.
1. Being clear at the outset
At the start of a project, evaluators have a responsibility to discuss the project brief with the client. We should understand the purpose of your evaluation, how you intend to use the findings, and the intended audience for the report. We also need to distinguish between the interests of the commissioner and those of other stakeholders in the evaluation, including the general public and taxpayers.
2. Telling the truth
Independent evaluators must fully report negative findings. We are obliged to adhere to professional standards of practice, which include reporting results that truthfully and fully reflect our findings. We must not influence outcomes or tamper with results in ways that breach the integrity of our research, regardless of any subtle or even unspoken pressure we may feel to report positive findings. If findings are negative, we are obliged to explain with as much certainty as possible what the results are and what they mean.
There are a number of reasons why results may be negative, including:
Targets have been missed
If targets have been missed, we are obliged to be clear about that fact and also to clarify how negative the results are. We must report as carefully as possible when things have gone wrong so that similar errors in programming or delivery approach can be avoided in the future.
Data may be inaccurate or inconclusive
It is possible that the results are inaccurate due to faulty data collection or analysis, or other evaluation missteps. Because of project limitations imposed by budget or scope, data collection may sometimes be incomplete, which means the results may not indicate clearly what happened or what the next steps should be.
Not fit for purpose
A program or activity may appear to be working well, but for the wrong audience or community. We need to be able to determine whether a program has failed to address the needs of your community of interest.
Evaluation is not marketing
A truthful, transparent, independent evaluation is more valuable than one designed as a marketing tool. Good summative evaluation is an essential source of useful in-house information for learning about what works, building competitive advantage, and making prudent decisions (Davidson, 2005).
It is not the responsibility of the evaluator to prepare a marketing tool for a client; we must retain the freedom to openly discuss any weaknesses or issues associated with the program or activity being evaluated.
What can a client do with negative findings?
Negative findings can be valuable to you, particularly if you are hoping to leverage more funding or support for your program. You can demonstrate that you have remedied what wasn’t working so well, showing that you fully understand your community of interest and have responded accordingly. You may be in a better position to seek funding support for the stronger elements of your program, without investing time and resources in aspects that are not working so well. This demonstrates that you are reflective and have a thorough understanding of the range of impacts of your program on your community.
You may not realise it, but you need us to tell it as it really is. And that can be a positive thing, without the positive bias.
Davidson, E. J. (2005). Evaluation Methodology Basics: The Nuts and Bolts of Sound Evaluation. Thousand Oaks, CA: SAGE.