The following text has been adapted from and inspired by part of a report written within the PALETTE project.
In complex situations, evaluation helps handle complexity by providing provisional stabilities that assist decision-making; it can also combat entropy by enabling a simpler order to emerge, albeit temporarily, from complexity.
Understanding and improving evaluation in complex situations requires extending the notion of evaluation to include moments of evaluation, that is to say, embedded evaluative processes not necessarily carried out by evaluation experts.
Many people are unaware that evaluation involves and concerns all actors in a complex situation, not just evaluation experts. Evaluative moments are invariably to be found among those actors' professional practices.
The understanding of this extended notion of evaluation can be enhanced by looking at evaluation as a series of practices developed within one or more communities of practice.
As evaluative practices may differ from one community to another, working in trans-disciplinary and multi-cultural contexts requires that attention be paid to the resulting differences in evaluative perspective, as potential sources both of misunderstanding and of learning.
These differences in perspective need to be ‘surfaced’ in order to collectively understand, appreciate and manage them in a productive way.
The difference between experts and laypeople in evaluation can be understood from a social practice perspective: evaluation experts belong to communities of practice centred on evaluation, whereas lay users of evaluation belong to communities that are not centred on evaluation but whose practices include some evaluation-related activities.
In a participative context that is heavily dependent on knowledge, the relation between experts and laypeople is necessarily problematic because of the imbalance of power due to differences in the perceived legitimacy of their respective knowledge and expertise.
The practice perspective throws new light on the way knowledge is given form and shape and is made to last as a process of learning. It raises the question of the extent to which evaluative processes and their outcomes are recorded and made available and whether this might contribute to improved efficiency or impact.
Understanding evaluation from a practice perspective suggests ways in which innovative practices can be adopted thanks to the efforts of those people who gravitate to the boundaries of such communities and act as go-betweens or knowledge brokers, bringing new ideas with them. These activities need to be encouraged and enhanced if we seek to innovate in ways of working.
The concept of usability, which addresses the relationship between the ‘design’ of an evaluative activity and the use to which it is put, seen in the light of its role and place in a wider context or process, is a useful lever for improving evaluation and evaluative moments.
Usability has its limits, however, in particular the extent to which evaluation is able to question the framework in which it is taking place.
To enable the critical evaluation of fundamental questions, assumptions and frameworks, thought should be given to creating ‘places’ and roles in which transgression (asking questions that are generally not allowed) would be possible, such that transgression could be innovative and constructive.