“…now is not a good time for control freaks” – Eric Young
In my last post for Slaw I wrote about the importance of creating the conditions for justice innovation by building the skills needed to work in multidisciplinary teams and to collaborate with, rather than merely “consult,” justice system users. In this post I want to focus on another important part of creating those conditions: how we might support innovators by rethinking our problem-solving approaches and the methods we use to evaluate justice innovation initiatives.
How we evaluate the success (or failure) of a project is intimately tied to our understanding of how problems should be solved.
In his recent book, The Social Labs Revolution, Zaid Hassan argues that our current models of problem solving and evaluation support a linear, expert-planning approach. This approach typically looks something like this: 1) identify the problem, 2) develop a plan to deal with the problem, 3) implement the plan exactly as conceived, and 4) evaluate the results. Hassan calls this the “business as usual” approach, and it is guided by three key assumptions (here I am paraphrasing Hassan):
- That the problem solvers (i.e. experts) don’t have, or don’t need, “skin in the game”. Experts solve problems; they don’t experience them.
- That a plan to solve the problem can be put together in advance. Once execution has begun, the plan does. not. change.
- That only those elements of the problem that can be measured and expressed on paper can be taken into consideration.
This model also makes particular assumptions about the problem itself, namely that:
- The problem is well understood.
- The problem is bounded (either conceptually, geographically – or both).
- There are limited solutions to the problem and one solution will be better than the others.
- The problem is static and will remain unchanged as the solution is implemented.
In short, this linear problem solving method is neat, tidy and logical. But, it is also inflexible, unresponsive and unsuitable for tackling complex or “wicked” social problems – like those plaguing the justice system.
Complex problems laugh in the face of all of these assumptions because of one simple fact: they are unpredictable (yes, I said it!). Complex problems are not linear or bounded; they are dynamic, emergent, slippery and constantly in flux. They are the result of the continuous interactions between multiple independent actors and factors that are repeatedly pushing and pulling on the system in different ways.
The problems we face in the justice system – namely access to justice problems – are complex. This was made clear by both the Final Report of the Action Committee on Access to Justice in Civil and Family Matters, and the Equal Justice report by the Canadian Bar Association. As both reports highlight, access to justice is a complex problem that involves multiple actors, is interwoven with personal, social and political issues, and cannot be solved without working collaboratively. People’s legal issues are shaped by social and economic factors like poverty, marginalization, mental health issues, and familial relationships, among others, all of which contribute to the complexity of the problem. This means that business as usual problem-solving approaches that are linear and inflexible simply will not work.
Enter innovation. To tackle complex problems we need to rethink our approach to problem solving. Methods like design thinking, social innovation and change labs, and emergent strategy planning are all tools that are increasingly being experimented with and used by justice innovators – whether individuals or organizations (e.g. HiiL, Stanford, Winkler Institute, etc.). Innovation approaches don’t fit neatly into our current problem-solving paradigm. For example, many of these approaches rely on “strategic learning”, which means that as a project unfolds, the “plan” is constantly being adapted in response to real-time feedback the innovators are receiving; innovators are frequently changing their plans as they learn “what works”. Innovation processes also rely heavily on creativity, ideation, problem definition and prototyping, all of which sit uncomfortably within more traditional planning models that ask project leaders to predict in advance what both the problem and the solution are.
Yet, even as we recognize that the work of justice innovation is typically very different from what we’ve done before, our evaluation methods – which often determine the “success” or “failure” of a project, and the potential for securing future funding – continue to align with business as usual problem-solving approaches. We often ask innovators to predict, from the outset, what the outcome of their innovation process will be. We also ask them to provide a detailed plan of what they are going to do and how they are going to do it.
This can be frustrating for those working with innovation methods and techniques that don’t lend themselves to linear approaches to problem solving or to typical methods of evaluation. Often, in order to receive funding, innovators spend too much time trying to fit their “round” projects into the “square” holes of funding applications built on a planning-based approach.
That said, traditional problem-solving approaches can be equally frustrating for funders who want to fund innovative projects but also need reliable ways to make sure that their limited funding resources are supporting projects that are making a difference.
My apologies if you were hoping that I would end this post with an answer – I don’t have one. What I do know is that, put simply, our heavy focus on linear, outcomes-based, business-as-usual approaches has not produced meaningful change. A shift in focus from outcomes to process – innovative process (including how we fund, execute and evaluate) – is required.
The good news: there are evaluation methods and strategies out there that are well suited to innovation, and that have helped me start thinking differently about it.
The better news: many of these methods not only produce reliable data – they can actually improve the results of the innovation process. Enjoy!
- “Strategic Philanthropy for a Complex World”. In this article, published in the Stanford Social Innovation Review, the authors discuss why the predictive model of strategy planning traditionally used by funders needs to give way to a model of emergent strategy that “better aligns with the complex nature of social progress”.
- “A Developmental Evaluation Primer”. This great resource by Jamie A.A. Gamble not only provides a guide on how to carry out developmental evaluation – an evaluation model that helps innovators continually improve their process – it also addresses many of the myths around developmental evaluation, including the myth that developmental evaluation is a “soft method”.
- “Evaluating Collective Impact: Five Simple Rules”. Aside from providing “five simple rules” that practitioners, funders, and evaluators should consider in their own evaluation efforts, this article provides a great comparison of the principles that guide traditional evaluation models and complexity-based models.
- Finally, for those who may have missed this gem when it was published in 2007, your next read should be “Getting to Maybe”, an extremely readable and inspiring account of how to tackle complex problems in an increasingly complex world. You will have no trouble seeing how it applies to justice innovation! For a specific look at evaluation, see pages 46–54.
— Nicole Aylwin