Without some form of feedback and subsequent evaluation, no learning – in the very broadest sense of the word – can take place. Life is not possible without learning, as life in all its facets involves ongoing processes of change and adaptation that require us to learn. Most of this learning requires no conscious intervention on our part. Generally, however, when we think of learning we think in terms of institutionalised learning, as in schools and courses, and, more recently, individual “life-long” learning. This article sets out to explore two institutional forms of evaluation – summative and formative – and to see how these fit into learning in a broader context. In particular, it asks what light these ideas could shed on project evaluation and management.
The most common institutional form of evaluation, the one that has been deeply ingrained in our minds by the experience of school, is that which seeks to indicate the level of our knowledge or ability in relation to predetermined standards or criteria. Called “summative” evaluation, this form is epitomised by the test or examination. Summative evaluation can be limited to indicating whether or not the required competences have been acquired. In many cases, however, summative evaluation goes much further, as performance is equated with a number (“accurate” to the decimal point) via an elaborate but nonetheless extremely imprecise procedure that one could call the “school bookkeeping system”.
Marks über alles
The procedure of attributing marks can play such a central role in schools that preparing and doing tests takes up a major part of the time. At the same time, pupils rapidly end up seeing their activities essentially in terms of the marks they are attributed, the averages of which dictate their success or failure in school, if not in their future life. Such is the importance of tests and examinations that one wonders whether they haven’t replaced learning as the main activity in schools. In many cases, information is delivered in schools and then subsequently tested. The “learning” itself is done elsewhere: at home, for example, with school children memorising what has been dictated at school.
Bookkeeping or learning?
The cynic might say that this form of “bookkeeping”, with the “arbitrary” attribution of a quantity to a quality, is the major lesson taught by schools; that it prepares the way for acceptance of the logic of the marketplace, where a monetary value is attributed to anything and everything the market can lay its hands on. The relationship between market logic and the attribution of marks becomes all the stronger as we move to the so-called “knowledge economy”. In order to be able to trade in competences, market logic requires some form of equivalence between competences, which are fundamentally qualitative, and payment, which is “purely” quantitative. This is why there is such interest in certification in relation to so-called “life-long” learning. This is also why others, less numerous and less vociferous, argue in favour of a parallel non-monetary economy.
“Formative” evaluation, as its name suggests, has to do with learning as a process and not with the certification of end results, nor the attribution of a prize or a price. There are no grades or marks with formative evaluation. The aim is for each actor or group of actors to take a closer look at competences and performance with a view to improving them. There should be no judgement of the failings and inadequacies pointed out during formative evaluation, unless it be the longer-term sanction of not being able to understand, decide or act appropriately when the time comes. As such, formative evaluation comes much closer to the natural course taken by learning, as far as evaluation is concerned, in which we continually assess what we know and what we do in relation to the world around us so as to be able to understand and act more appropriately. If formative evaluation does differ substantially from the way we “naturally” evaluate what we do, it is in the attention given to the process of learning itself. The hypothesis behind this position is that being aware of how we learn speeds up and improves our ability to learn. A philosopher might well ask, “Whence and whither this desire to accelerate learning?”
Learning or administering learning?
A serious difficulty arises when the distinction between summative and formative evaluation is confused, misunderstood or ignored. Successful use of formative evaluation requires a relationship of confidence. It involves taking a risk on the part of the learner, accepting that he or she does not know everything. This is a quite different perspective from proving that you know what you were supposed to learn. When the comments made in the context of formative evaluation are (mis)used for the purpose of certification, the learner may well have the deep-seated feeling that he or she has been seriously misled, if not abused, and any further use of formative evaluation becomes much more difficult. In the long run, summative evaluation is not concerned with learning so much as with administering a defined number of circumscribed competences. And, as mentioned above, there is a very strong tendency to replace learning with the administration of units of competence. These comments apply not only to the school context but also to the way the European Union and others evaluate projects. Learning in this project-based context could be epitomised by the development of best practices, but the latter are generally seen more as shortcuts, discrete recipes to be applied systematically, rather than an increasing awareness and understanding of ongoing processes. “More haste, less speed”, as the adage says.
One form of evaluation often used by the European Union in judging projects is the “periodic review”. An expert or a group of experts studies the project documents and may question the people involved in the project. On the basis of what they learn, they evaluate the project, generally pointing to a number of aspects they consider need to be improved. Having been both reviewer and reviewed, I have to admit that the procedure leaves me ill at ease. The source of this discomfort no doubt lies in the discordance between the perceptions and judgement of the reviewers and those of the actors involved in the project, and in the relationship of both parties to those who provide the funding. Reviewers are chosen because they are not directly involved in the project, although they generally do have experience in the same field and/or in similar projects. This mixture of expertise and ignorance (their knowledge of the project being limited to what they can read about it and to the brief answers participants can provide) is supposed to produce a sound, unbiased judgement. Is this really what happens? Although reviewers very often raise stimulating questions about the project, these questions are only fully pertinent in an absolute context. Trying to apply reviewers’ suggestions within the real-life constraints under which the project took place can turn out to be quite inappropriate. In other words, as long as this review work remains within the context of formative evaluation, its value can be considerable. The moment, however, it becomes a tool for judgement whose outcomes dictate the fate of the project, there should be serious doubts about its usefulness. I can hear my philosopher friend asking, “Why do other people seek to control what and how we each learn?” Certainly in the case of project reviewing, it is a question of trying to get the best value for money. “Natural”, you might say.
But what if that desire to control insidiously gets in the way of learning, innovation and creativity and, as a result, hinders finding the best solutions?
Towards a new system of values?
The very nature of the evaluation and the requirements put on project evaluators bias everybody’s perceptions of the project and the judgements reached. There is a tendency to think in terms of discrete elements that can be weighed up and which are dissociated from the overall context of which they are, in reality, an integral and inseparable part. Talking to Andrea Ricci the other day about potential projects for the European Union, he insisted that “you need to provide tangible, quantifiable results when doing a project so that those who pay are reassured that their investment has been worthwhile”. This is even more the case when the return is not financial but “political” or “strategic”. No doubt Andrea’s advice is indeed wise in the current context. But I can’t help wondering about the impact of such a logic on the nature of projects themselves. If the sole measure of value is quantitative, or at best qualitative but in terms of discrete, disconnected parts, many “valuable” activities risk being seen as valueless. In addition, this piecemeal perception cannot but mislead in a context that is essentially systemic in nature. Is there not an extremely pressing need to reintroduce or invent a more extensive system of evaluation, based on values that cannot necessarily be reduced to numbers, and which takes into consideration the process seen as part of a whole? My philosopher friend would no doubt say, “You may well be right, but why all this hurry?”