Evaluation by Any Other Means … Is Still Evaluation

At the LCO Symposium last Wednesday, a number of law reform-related topics were discussed, but one that applies to law reform commissions and other institutions as well related to evaluation. The LCO has a particular interest in this because we’ll soon be evaluated on our performance so far. And of course, there hasn’t been much time to develop a track record!

The concern expressed won’t surprise anyone: how can you get away from bean counting in evaluating organizations that engage in activities that just aren’t susceptible to measurement by numbers? For the LCO, traditional evaluation would focus on how many of our recommendations have been accepted by the government. For community clinics, it might be how many cases they have won, or their clients would be asked how satisfied they were with the outcome of their case (admittedly, with the right questions, this can be a useful measure to some extent). If we use non-traditional media (such as a play about a law reform project, as the Law Commission of Canada did), how is that measured as contributing to law reform? As one of our staff at the Symposium asked, how do you equate that kind of activity to the number of recommendations adopted? (Her example was an image, and she asked how you equate an image with words.)

The kinds of measures used to assess the performance of law commissions and clinics can’t always be equated to traditional measures. They must be assessed in terms of their own value. To what extent has a clinic raised awareness, among those who benefit from its services, of their rights? To what extent has a law commission carried out its work in a manner consistent with the values it espouses? As I said before, this isn’t a new issue. And it’s not news that the measures used to evaluate an organization can determine in large measure how it decides to carry out its mandate. So law commissions focus on technical statutory amendments: they are easy to measure. Clinics concentrate on one-by-one client representation: that is relatively easy to measure. And they don’t spend time and resources, or much time and resources at any rate, on raising awareness of the need for participatory law reform or on raising community awareness of rights.

This reveals a crucial part of the evaluation process: the education (if necessary, and I don’t want to jump to conclusions) of those who will use the evaluation to make decisions about the organization. They have to see evaluation as somewhat more complex and ambiguous than a counting exercise. How do we satisfy people that it’s possible to be rigorous without a focus on numbers? We’ll face this challenge over the next few months.