Causation is a basic concept in everyday life. If you slip on a banana peel you will likely fall, so it makes sense to say that the banana peel in your path caused you to fall.
But things can also get more complex than that.
If you think about what you have achieved in your life, you may come up with what seems to be a simple causal chain of events: one job led to another opportunity, which led to a major decision, which led to a change in career. This assumes there is only one causal pathway and that those single events were the only things influencing your decisions. What about timing? Or the people who helped you? Or your health and wellbeing at the time? All of these things may have contributed to what you achieved, yet it is easy to overlook the full range of influences.
In the field of evaluation, causation is just as complex, and it is an important concept to understand, unpack and prove. Understanding cause and effect is the only way of demonstrating that a particular outcome is directly attributable to a program or initiative. To ignore this is to make assumptions that can be costly, or even serious. It may seem obvious that a community has changed as a result of a program or intervention, but what if those changes occurred for some other reason? Making recommendations about the future of a program without knowing this carries real risk.
It is not always possible to measure causation accurately, or even to be certain about it, yet it is often possible to establish it beyond reasonable doubt. In a previous post, I posed the question: Is qualitative evidence legitimate for proving cause and effect? You can read that post here.
Internationally recognised evaluation thought leader Jane Davidson explains causation clearly. She says:
There are two basic principles relating to causation. First, look for evidence for and against the suspected cause (the thing being evaluated).
Second, look for evidence for and against any alternative causes, that is, rival explanations. As evaluators, this means putting ourselves in the shoes of the harshest critic and looking for anything that could disprove what we have assumed to be the causal case.
Davidson outlines eight possible strategies for exploring causation. Some involve qualitative inquiry, others are quantitative. Many are extremely simple and seemingly obvious, but they are important to carry out rather than assuming causation cannot be proven.
The qualitative strategies include talking with key stakeholders and program users to ask how things came about, checking whether the content of the program matches the observed outcomes, and looking for obvious signs that may suggest one cause or another.
More complex strategies, which may involve quantitative inquiry, include making comparisons with a ‘control’ or ‘comparison’ group, or introducing statistical controls for extraneous variables.
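The two quantitative strategies above can be sketched in a few lines of code. This is a minimal illustration only, using made-up outcome scores and a hypothetical extraneous variable (prior experience); a real evaluation would use proper sampling and statistical testing.

```python
from statistics import mean

# Hypothetical outcome scores (illustrative data, not from a real evaluation).
program_group = [72, 68, 75, 80, 70]     # people who went through the program
comparison_group = [65, 63, 70, 66, 61]  # similar people who did not

# Strategy 1: compare outcomes against a comparison group.
raw_effect = mean(program_group) - mean(comparison_group)
print(f"Unadjusted difference in mean outcomes: {raw_effect:.1f}")

# Strategy 2: a crude statistical control for an extraneous variable,
# here by stratifying both groups on prior experience ('low' vs 'high')
# and averaging the within-stratum differences.
program_by_stratum = {"low": [68, 70], "high": [72, 75, 80]}
comparison_by_stratum = {"low": [61, 63, 65], "high": [66, 70]}

adjusted_effect = mean(
    mean(program_by_stratum[s]) - mean(comparison_by_stratum[s])
    for s in ("low", "high")
)
print(f"Stratum-adjusted difference: {adjusted_effect:.1f}")
```

The point of the second step is the same as any statistical control: if the adjusted difference shrinks substantially relative to the raw one, some of the apparent program effect was being carried by the extraneous variable, a rival explanation of exactly the kind Davidson tells us to hunt for.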
It’s important to know how certain you need to be about cause and effect. There are times when an evaluator needs to be watertight in proving causation. At other times it may be sufficient to examine how the program’s mechanisms work and, given the observed outcomes, weigh the likelihood that the program itself caused the changes against the likelihood that other external factors in the community were responsible.
It is too easy to make assumptions about cause and effect, particularly when the purpose of an evaluation is to focus on outcomes. We can lazily assume that the outcomes we have observed came about directly from the program or initiative, when there may be little evidence to suggest that is the case. Instead, it is important to consider whether the program being evaluated can be shown to have added a practically significant impact, above and beyond anything else happening at the time, that is large enough to justify its existence.
If understanding cause and effect is important in an evaluation inquiry, it may be prudent to employ a combination of these strategies. As with every evaluation project, the approach taken will be determined by the purpose of the study––particularly the level of certainty required about cause and effect––as well as the scope and the budget.
Davidson, E.J. (2005). Evaluation Methodology Basics: The Nuts and Bolts of Sound Evaluation. Chapter 5: Dealing with the Causation Issue. Los Angeles, California: SAGE.