performing impact

work in progress

evaluation as accountability

As practising arts, humanities and social sciences researchers ourselves, in the middle of writing up ‘Impact Case Studies’ for the forthcoming national Research Excellence Framework exercise (REF 2014), we are well aware of the value placed on public engagement in our work. ‘Public’ in this sense refers to complex and sometimes overlapping communities of non-academic ‘beneficiaries’. Research Council grant applications now require an impact statement, and an evaluation plan to be in place, before any monies are awarded, and we have become more practised at building into projects engagements with external partners: from the creative industries, from the heritage and museum sector, from publishers to galleries. Often these partnerships already existed, or were part of the ways by which we tested and disseminated our work or by which it found purchase in so-called ‘real-life contexts’, but it would perhaps be fair to say that most of us would not now imagine an application that did not have a built-in impact element, however small-scale.

Conversations with theatre practitioners during our recent two-day workshop revealed an almost parallel experience in Arts Council funding contexts. There was lively discussion of the extent to which the very nature of grant writing had to predetermine success, identify its beneficiaries and guarantee certain outcomes, both experiential and financial. An interesting question posed by these observations was what kinds of projects might get lost in this scenario. Did the evaluation of funded projects (usually, of course, completed at the end of a predetermined period of activity) require almost as a necessity a narrative of success, of measurable outcomes and of communities enriched and enhanced, in the specific context of community theatre practice?

We asked then whether this purely summative evaluation was merely evaluation as accountability: evaluation produced to reassure funders, who in turn often had to justify their own fiscal activities to government. Once again the kinship between the Arts and Humanities Research Council, which generously funded our own project and for which we will in due time need to produce an evaluation, and the Arts Council was very clear, and meant there was real value in the dialogue around the room between artists and academics on the day.

Other posts on this blog have considered, and will consider, possible alternative methods and modes of evaluation, but one of the turning points of the debate was whether evaluation was, or indeed should be, purely summative, even when it was undertaken in part for reasons of accountability.

We were struck by a statement on the UCL Public Engagement website. The site is a toolkit of materials ‘intended as a guide to encourage those running public engagement activities to think through and choose the most appropriate methods and techniques to evaluate the delivery and impact of their activities.’

The site asserts that: ‘Evaluation is a systematic way of reflecting on and assessing the value of what is being done (i.e. a project, a programme, an event). Evaluation is commonly interpreted as an end product or an activity taking place at the end of a project. However evaluation should be considered as a process, taking place across all phases of a project, used to determine what has happened and whether the initial aims of the project have been carried out and achieved. Evaluation is more than assessing and measuring; it helps set the stage for a culture of learning, change and improvement.’

It was exactly this idea of process that our workshop sought to explore and respond to, as well as asking questions about evaluation and audience: who are we writing evaluation for, exactly, and where does it go? A tangible outcome of the two days of discussion was the plan on the part of several participants simply to be more proactive in speaking to funders on this point, and in seeking access to the ‘archive’ of evaluation, if such a thing exists, since the value of comparison was only too clear from the work we were engaged in at the workshop itself. The sharing of best practice and concerns in the room was the best ‘impact’ we ourselves might have hoped for from the event.

With the focus firmly on process and formative evaluation, we looked at some of the methods and modes of evaluation that the UCL site refers to: field notes and the value of documentary filmmaking were both explored as potentially rich means of producing and engaging with the evaluation process. The next step is to trial some of these methods with our partner practitioners over the coming few months to see what different kinds of evaluation this might produce.



This entry was posted on October 13, 2012 in accountability, comparison, evaluation framework, formative evaluation and process.
