
Is all error bad?

 


Evaluation and research inquiries are about getting at the truth, and in particular about difference, also known as ‘variance’. For example:

What are the different ways that this program or service delivered its outcomes?
How did different community groups differ in their perception of the new client service?

These are both fairly common evaluative questions, and common to both of them is the word ‘difference’. Understanding difference (or variance) is often at the heart of an evaluative or research inquiry. We may be looking for a specific type of difference, or we may simply want to describe differences as they emerge in our inquiry, using an emergent design. Other times we are trying to create that difference, through experiments or interventions, or we may be looking to describe the causes of the differences we have observed.

However, in every study there will be many other differences (variances) that are not necessarily related to our areas of interest. That is a euphemistic way of saying we have ERROR.

Before we can know if all error is bad, we need to identify some common types of error that may emerge in our research:

Random errors

Random errors are caused by unknown and unpredictable changes in a research study. Even though they occur, they are not related to the differences we are trying to understand in our inquiry. Some examples of random error include:

 – The mood levels and alertness of respondents

 – The noise in the room in which an interview is being conducted

 – The time of day we interview respondents

These are considered errors because they are not in our control. Yet they should not affect the outcome of the inquiry in a systematic way: if we could see all of the errors that affected the study, we would expect them to average out (some having a negative impact on the study, others a positive impact). So random error is not bad error. Averaged out across a study, random error can simply be treated as ‘noise’ in the data.
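
To make that concrete, here is a minimal sketch (in Python, with made-up numbers) of what ‘averaging out’ looks like: a fixed true score is measured many times with zero-mean random error added, and the average error across all observations ends up close to zero.

```python
import random

# Made-up numbers: a fixed "true" score of 50 is measured 10,000 times,
# each time with zero-mean random error added.
random.seed(42)
true_score = 50.0

observations = [true_score + random.gauss(0, 5) for _ in range(10_000)]
mean_error = sum(obs - true_score for obs in observations) / len(observations)

print(f"Average random error across the study: {mean_error:.3f}")
# The positive and negative deviations largely cancel, so the average
# error sits close to zero: this is the 'noise' referred to above.
```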

Systematic errors

Systematic errors, on the other hand, are not ideal and are important to minimise. They are errors associated with a flaw in the equipment or the design of the study. Systematic errors should be easier to estimate than random errors, as they are within our control, but unlike random errors they can push the outcome of the inquiry or experiment in one direction (a contrast sketched after the examples below). Examples of systematic errors include:

 – A faulty measurement instrument such as a stopwatch

 – Interviewer bias, including the use of leading questions in interviews

 – Evaluators imposing their own values on the interpretation of outcomes
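
By contrast with random error, a systematic error does not average out. Below is a small sketch (again with invented numbers) of a hypothetical mis-calibrated instrument that always reads a few units too high: the average error across the whole study stays at that offset, pulling the findings with it.

```python
from statistics import mean
import random

# Invented numbers: the same "true" score of 50, but measured with a
# (hypothetical) instrument that always reads 3 units too high, on top
# of the usual zero-mean random noise.
random.seed(1)
true_score = 50.0

readings = [true_score + 3.0 + random.gauss(0, 5) for _ in range(10_000)]
mean_error = mean(r - true_score for r in readings)

print(f"Average error across the study: {mean_error:.2f}")
# The constant bias does not cancel out: the average error stays near +3,
# so every conclusion drawn from these readings is shifted with it.
```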

Reducing measurement error

Although not all error is bad, it is good practice to reduce as much error as possible. There are a number of ways to do this:

  • Pilot test survey instruments to gain feedback from respondents on how easy or hard the measure was to complete and how the testing environment affected their performance;
  • Train interviewers thoroughly to ensure they do not inadvertently introduce bias;
  • Verify any quantitative data;
  • If using quantitative measures, use statistical procedures to adjust for measurement error (one such procedure is sketched below);
  • Triangulate the findings, that is, collect data from more than one source, in more than one way, to get a more accurate sense of what is happening.
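
As one example of such a statistical adjustment (the figures below are purely hypothetical), the following sketch applies Spearman’s correction for attenuation: it estimates how strongly two constructs are correlated once the dampening effect of measurement error in each instrument is accounted for, using the reliability of each measure.

```python
from math import sqrt

def correct_for_attenuation(observed_r: float,
                            reliability_x: float,
                            reliability_y: float) -> float:
    """Spearman's correction for attenuation: estimate the 'true' correlation
    between two constructs by dividing the observed correlation by the square
    root of the product of each measure's reliability (e.g. Cronbach's alpha)."""
    return observed_r / sqrt(reliability_x * reliability_y)

# Hypothetical figures: an observed correlation of 0.42 between two survey
# scales whose reliabilities are 0.80 and 0.75.
print(round(correct_for_attenuation(0.42, 0.80, 0.75), 2))  # roughly 0.54
```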

Just like good fats and bad fats, it is important to understand which types of error are an acceptable part of research design and which may adversely affect the validity of a study. Once we are aware of this, we can embrace the good error and eliminate the bad.
