Creating a chain of evidence can help you demonstrate the value of a program across its entire system, using qualitative and quantitative data, when it is not easy or possible to go to the end source to find evidence of program outputs.
We are currently conducting an evaluation of The Climate Reality Project for the Australian Conservation Foundation (ACF). The Project has been running in Australia since 2006. Its objectives are to build public understanding and support for action on climate change, and to strengthen political will. The Project relies on a ‘snowball’ model of impact: individuals attend a three-day intensive training program about the impact of climate change (delivered personally by Al Gore), and are then supported by ACF to go out into their communities to educate and engage people about the impact of climate change on our society.
The Climate Reality Project supports the work of close to 450 presenters in Australia. One in 62 Australians has attended a presentation or discussion about climate change led by someone who has participated in the Project.
Ideally, the evaluation would involve going to the source, the end users in the community, to explore the extent to which community attitudes and behaviour have changed as a result of the Project, along with the influence it has had on political policy. But this would require population-based surveys, among other things, and, as with all projects, limitations on time and budget made this unfeasible.
Instead, we adopted a ‘chain of evidence’ approach, as outlined in the Kirkpatrick model of evaluation. Donald Kirkpatrick, Professor Emeritus at the University of Wisconsin, first published his theory in 1959, and it has become one of the most widely used models for evaluating training and learning.
This approach combines self-reporting by participants with the program participation and output data held by ACF. Using the Kirkpatrick Model’s ‘chain of evidence’, we explored the following through self-reporting:
Reaction: the degree to which participants react favourably to the training. This includes the degree to which participants are actively involved in and contributing to the experience, and the degree to which they will have the opportunity to use or apply what they learned in training.

Learning: the extent to which participants acquire the intended knowledge (they know the relevant information), skills (they can do it right now), attitudes (they believe it will be worthwhile to do on the job), confidence (they think they can do it on the job) and commitment (they intend to do it on the job) based on their participation in the training event.

Behaviour: the degree to which participants apply what they learned during training when they are out in their communities. For this to occur, there need to be processes and systems in place that reinforce, encourage and reward performance of critical behaviours in the community.

Results: the degree to which outcomes occur as a result of the training event and subsequent reinforcement. This relies on self-reporting by participants, and validation of the outcomes and facts presented.
One of the main limitations of the chain of evidence approach is its reliance on accurate and honest self-reporting, which is, by nature, subjective. But, combined with accurate and current program participation and outcomes data, this approach is useful when time and budget are limited.