Evaluation in the regional arts sector – current thinking

[Image: Dookie silos]

Over the last decade or so, regional arts programs have been designed as powerful tools for engaging communities in various levels of change. They have been delivered to address regional renewal, health outcomes, quality of life, sense of place, transformation, social development and marginalisation. Yet the literature reveals deficiencies in the quality of evidence provided to support the need for arts programs.

The topic of conducting evaluations within the arts and cultural sector is a niche area of research, attracting a few key contributors from Australia and overseas, including cultural researcher Kim Dunphy, British freelance writer and arts researcher Francois Matarasso, American researcher Dr Maria-Rosario Jackson, and New Zealand researcher in social innovation Dr Emma Blomkamp. These contributors to the literature have written extensively about common problems associated with the evaluation of arts-based initiatives.

Problem 1: Deficiencies in the quality of evidence

Jackson (2002) argues that the direct impact of arts, culture and creative expression on communities is neither well documented nor well understood in the arts or community-building fields. Dunphy (2010) agrees with this stance and goes on to discuss how few arts organisations take an integrated approach to outcome measurement, in which broader concepts such as theoretical considerations or the internal mechanisms of a program are considered. This, she claims, is the case both in Australia and internationally, and the wider literature supports her claim.

Problem 2: The limitations of participation and engagement data

Evaluations of arts and cultural initiatives have traditionally focused on participation and engagement data. Although participation and engagement can be useful measures, Dunphy (2010) argues that they limit evaluations to examining inputs (what was done) against outputs (what was achieved); few provide real evidence regarding the effectiveness of the proposed strategies. This absence of evidence-based studies means that recommendations for action can only be speculative (Dunphy, 2010). This view is shared by Matarasso (1996) and Blomkamp (2014), as cited by Dunphy (2010), who also argue that evaluation strategies for arts participation traditionally focus only on outputs rather than broader community outcomes. Rife et al. (2014) identified a gap between those in the arts world who are interested in audience and participation rates, and people in other fields who have identified indicators that measure broader impacts of the arts, such as health, wellbeing and social cohesion.

Problem 3: Methodological weaknesses

Because of this common emphasis on collecting participation and engagement data, Dunphy (2010) argues, evaluations of arts programs are often critiqued for their methodological weakness: small sample sizes, reliance on anecdote, limited hypothesis testing and little attention to longitudinal measures.

Given these problems, let’s consider the merits of theory-based evaluation.

Theory-based evaluation

Theory-based evaluation is a well-established and well-documented approach to evaluation, although it is rarely used in the arts sector. It calls on program managers and evaluators to first engage in rigorous thinking about the internal working mechanisms of a program, which is then used to develop a hypothesis about what the program is expected to achieve and in what way. This includes identifying causal mechanisms, that is, thinking about what it is about the arts program that is intended to cause particular outcomes. An evaluation is then designed to test whether this theory is supported or refuted. This differs from the more common approach in regional arts evaluations of simply monitoring and collecting data. By using theory-based evaluation, the evidence unearthed in the process has greater transferability and validity.
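To make the idea concrete, the chain from mechanisms to testable hypotheses can be sketched as a simple data structure. This is a minimal, hypothetical illustration only: the program, mechanisms, outcomes and indicators named below are invented for the example and do not come from the evaluation literature discussed here.

```python
# A hypothetical sketch of a program theory ("logic model") for a regional
# arts program. All names and indicators are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class ProgramTheory:
    inputs: list                # what was done (e.g. workshops delivered)
    mechanisms: list            # why change is expected (causal mechanisms)
    outcomes: list              # what the program is expected to achieve
    indicators: dict = field(default_factory=dict)  # outcome -> how measured

    def hypotheses(self):
        """Pair each causal mechanism with each intended outcome,
        producing the testable claims an evaluation would examine."""
        return [f"Because {m}, we expect: {o}"
                for m in self.mechanisms
                for o in self.outcomes]


# Hypothetical example: a community mural program
theory = ProgramTheory(
    inputs=["10 community mural workshops"],
    mechanisms=["shared creative work builds local connections"],
    outcomes=["increased sense of place", "stronger social cohesion"],
    indicators={"stronger social cohesion": "pre/post community survey"},
)

for h in theory.hypotheses():
    print(h)
```

The point of the sketch is the separation it forces: inputs and outputs are recorded as before, but the mechanisms and outcomes make explicit what the evaluation must actually test, rather than leaving the evaluator with participation counts alone.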

But this approach is not without its limitations. There are issues associated with the existing capacity to undertake such extensive evaluations in the regional arts sector, the availability of expertise, the challenges of articulating program theories, and the time and effort required to carry out a theory-based evaluation.

Even with these challenges in mind, however, practitioners can assess, on a case-by-case basis, whether theory-based evaluation may benefit the evaluation process. This would be an advance on the current situation, in which theory-based evaluation is rarely considered as an option in this field at all, and the wider benefits of arts-based initiatives therefore often go unrecognised.


References

Blomkamp, E. (2014). Meanings and measures of urban cultural policy: local government, art and community wellbeing in Australia and New Zealand.

Dunphy, K. (2010). Planning and Evaluation: How Can the Impact of Cultural Development Work in Local Government Be Measured?: Towards More Effective Planning and Evaluation Strategies. Local-Global: Identity, Security, Community, 7100.

Jackson, M. (2002).  Culture Counts in Communities: A framework for measurement. The Urban Institute. USA.

Matarasso, F. (1996). Defining Values: Evaluating Arts Programs. London: Comedia.

Rife, M., King, D., Thomas, S., & Rose, L. (2014). Measuring Cultural Engagement: A quest for new terms, tools, and techniques. National Endowment for the Arts. Washington, DC.
