Column

Technology Is Changing, and So Should Our Approach to the Self-Representation Problem: Artificial Intelligence for SRLs

By David Lundgren, University of Toronto student researcher, in partnership with the NSRLP

In Canada, self-represented litigants (SRLs) are generally disadvantaged from the outset of their case and throughout the legal process. Litigants are often driven to self-representation by financial constraints or a lack of available resources. Cultural and linguistic barriers, mistrust of the justice system, and negative socioeconomic factors also influence the decision to self-represent. These disadvantages shape SRL experiences and persist throughout their cases. In court, self-represented litigants tend to fare worse; they are misperceived as vexatious and misinformed, or simply made to feel they do not belong in the courtroom. Their shared experiences with the legal system tell a story of desperation, betrayal, and need.

Despite this, policymakers are doing little to tailor solutions to the specific issues plaguing SRLs. At the root of this crisis, there is significant concern over inadequate data tracking SRLs, which prevents decision makers from creating evidence- and data-based policies to treat the self-representation problem. This neglect leads to misperceptions of who SRLs are and the challenges they face.

However, there is still hope to address these issues. Innovative technologies and the rise of Artificial Intelligence (AI) enable us to unlock more valuable and relevant insights from conventional data. By redefining the boundaries of our human capabilities, we can now apply empirical analyses and tools to uncover more than meets the eye. As the world transforms into an increasingly data-driven and digital space, self-represented litigants cannot be left behind.

AI is fundamentally different from the inferential tools previously used to characterize the self-representation problem. To reveal more in-depth insights, we need more in-depth data. It is time to examine how AI can be used to investigate the self-representation issue in Canada and how big-data learning overcomes the challenges that impede conventional statistical methods.

How can AI address the self-representation problem?

AI thrives on data. Whether it is numbers, texts, images, or even audio, AI tools are remarkably good at extracting hidden information that humans fail to register. However, the field of self-representation is characterized by a lack of data, specifically quantitative data. Quantitative data refers to countable or measurable numerical values and is often what most people think of when they hear the word data. This is also what most conventional statistical tools rely upon to extract information. The reason for this is that computers think in their own numerical language. Therefore, quantitative data is more or less a computer’s bread and butter.
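To make this concrete, here is a minimal, purely illustrative sketch (not taken from any NSRLP analysis) of how free-text survey answers might be translated into the numbers that statistical tools work with, using the scikit-learn library; the sample responses are invented.

```python
# Illustrative sketch: converting invented free-text survey answers into a
# numerical (TF-IDF) representation that conventional tools can work with.
from sklearn.feature_extraction.text import TfidfVectorizer

responses = [
    "I could not afford a lawyer and felt lost in court.",
    "The judge was patient and explained the procedure clearly.",
    "The court forms were confusing and nobody would help me.",
]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(responses)  # one row per response, one column per word

print(matrix.shape)                        # (3, number of distinct words)
print(vectorizer.get_feature_names_out())  # the vocabulary behind each numeric column
```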

However, the lack of quantitative data on SRLs makes it challenging to understand the entire scope of the problem and to identify trends and patterns that could inform policy decisions. While some quantitative data is available, it is often limited and does not provide a comprehensive picture of SRL experiences. Because conventional tools rely on access to large amounts of quantitative data, this scarcity restricts the inferential and predictive power of traditional analyses. But AI is far from traditional.

AI outstrips traditional statistical methods through its ‘intelligence.’ Indeed, AI pushes past the boundaries of quantitative analysis by also understanding qualitative sources, such as texts, images, or audio. Fortunately, the wealth of qualitative data available, including surveys, interviews, blog posts, and social media discussions, can overcome the limitations of sparse quantifiable data and provide valuable insight into the experiences of SRLs.

This is where AI can play a significant role in addressing the issue. AI can better analyse the qualitative data and provide a more nuanced and comprehensive understanding of who self-represented litigants are and the issues they face. This could include identifying common themes in their experiences, such as the challenges of navigating the legal system without legal representation, the impact on their mental health and well-being, and disparities in access to justice based on socioeconomic status.
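As a hypothetical example of what this theme-finding could look like in practice (a sketch of one generic approach, not the method used in any NSRLP study), a simple topic model can group invented survey responses into recurring themes for a human reviewer to interpret:

```python
# Sketch of one possible theme-extraction approach (topic modelling with LDA).
# The responses are invented; a real analysis would use far more data and care.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

responses = [
    "Legal aid turned me down and I could not afford a lawyer.",
    "I did not understand the filing deadlines or the court forms.",
    "Representing myself has been exhausting and hard on my health.",
    "Duty counsel had only a few minutes and could not explain the process.",
]

vec = CountVectorizer(stop_words="english")
counts = vec.fit_transform(responses)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

# Print the most indicative words for each discovered theme; a person still has
# to decide whether a theme reflects costs, procedure, well-being, and so on.
words = vec.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top_words = [words[j] for j in topic.argsort()[-5:]]
    print(f"theme {i}: {top_words}")
```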

Furthermore, AI can also identify specific needs of self-represented litigants, such as the need for legal education and resources and greater support and guidance from the courts. This could inform policy decisions and initiatives to improve access to justice.

In addition to informing policymakers and supporting a data-first approach, AI can be deployed directly to SRLs. Virtual assistants and chatbots, such as ChatGPT, could provide self-represented litigants with answers to common legal questions, help them draft documents, and offer tips and guidance on courtroom formalities. This could mitigate the undersupply of legal services, geographic and linguistic barriers, and the costs of existing legal options.
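For illustration only, here is roughly what a programmatic query to such a chatbot could look like, using the OpenAI Python library as one example; the model name and prompts are placeholders, and, as the comments below underline, any answer would still need to be verified against official sources.

```python
# Illustrative sketch only: asking a general-purpose chatbot for legal
# *information* (not legal advice). Model name and prompts are placeholders,
# and chatbot answers can be confidently wrong, so they must be verified.
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

reply = client.chat.completions.create(
    model="gpt-4o-mini",  # example model choice for this sketch
    messages=[
        {"role": "system",
         "content": "Give general legal information, not legal advice, and "
                    "remind the user to confirm details with official sources."},
        {"role": "user",
         "content": "What steps are usually involved in filing a small claims case?"},
    ],
)

print(reply.choices[0].message.content)
```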

While AI has the potential to benefit both SRL advocates and research groups, and self-represented litigants themselves, it is of utmost importance to account for ethical considerations. AI is primarily intended to augment human intelligence, not replace it. While it is a fantastic way to extract patterns from large-scale and unconventional data, these patterns also need to be interpreted and checked by people. For instance, there is a risk of bias in the data used to train the AI algorithms, which could lead to inaccurate or unfair conclusions. There is also a risk of privacy breaches if the AI analyses personal information without proper safeguards. Specifically, in the case of direct use for self-represented litigants, it is crucial to note that AI can provide legal information but cannot offer legal advice. As such, it is essential to approach AI in this context with caution and prioritize ethical considerations throughout the process.

How can AI benefit the NSRLP?

The National Self-Represented Litigants Project (NSRLP) is at the forefront of organizations advocating for a data-oriented approach to address the self-representation problem. It is, therefore, no surprise that they are looking toward AI to uncover more information on how to help SRLs. Over the past nine months, I have been fortunate to work with the NSRLP to re-examine their annual litigant intake data using AI. My full final report on this work can be found here.

“Inaccessible Justice” is a qualitative and quantitative analysis of demographics, socioeconomics, and experiences of self-represented litigants in Canada. The report utilizes AI to extract sentiments behind self-reported experiences of litigants and correlate positive or negative experiences with other characteristics, such as income, age, gender, and ethnicity. It uses various AI models and tools to analyse SRL data comprehensively. It further promotes the NSRLP’s mission to adopt an evidence-based framework.
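To give a sense of the kind of analysis this involves (a simplified sketch with invented data, not the report’s actual pipeline), a pre-trained sentiment model can score self-reported experiences, which can then be cross-tabulated against demographic variables:

```python
# Simplified sketch with invented data: score the sentiment of self-reported
# experiences with a pre-trained model, then compare results across a
# demographic variable. Not the report's actual pipeline.
import pandas as pd
from transformers import pipeline

data = pd.DataFrame({
    "experience": [
        "The clerk helped me file everything on time.",
        "I felt dismissed the moment the judge learned I had no lawyer.",
        "No one would explain what the motion meant.",
    ],
    "income_bracket": ["under_30k", "under_30k", "30k_to_60k"],
})

sentiment = pipeline("sentiment-analysis")  # downloads a default pre-trained model
scores = sentiment(list(data["experience"]))
data["sentiment"] = [s["label"] for s in scores]

# Cross-tabulate sentiment against income; any apparent pattern still needs
# human interpretation and proper statistical testing before drawing conclusions.
print(pd.crosstab(data["income_bracket"], data["sentiment"]))
```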

The report also includes an introduction to AI written in plain and easy-to-understand language for anyone interested in learning what AI is, how it works, and its applications. My goal in writing the report was not only to uncover new and hidden conclusions about the self-representation problem but also to introduce how AI can be used in this field. As such, my report suggests avenues for future research and highlights that new AI and digital approaches can support the NSRLP in raising awareness of the needs and experiences of self-represented litigants.

In short, the inadequate data tracking of self-represented litigants in Canada is a significant issue that leads to incomplete perceptions of who they are and the challenges they face. However, by using AI to analyse qualitative data, we can gain a more realistic understanding of their experiences and needs, which can inform policy decisions and initiatives to improve access to justice for all.

Comments

  1. Agree that AI will become a valuable tool for SRLs. Disagree that it will solve their problems. Using AI is like using a calculator: it is a helpful tool. But it cannot replace support, advice and guidance from a legal professional. That’s why I support legal coaching – a very affordable service that SRLs can use to coach them through their legal problem – yes, even the use of AI. Otherwise the “tool” might be misused or misinterpreted by untrained people. Legal coaches bridge the gap and empower people.

  2. I hope that AI can become a useful tool for self-represented people, but I’d strongly advise against using ChatGPT for legal research currently. I tried it out using questions I often get from self-represented people and the answers were very unreliable. Some truth and some misinformation, but the average person would not be able to tell the difference, and when I asked it to give me precedents to support my argument it created completely fake cases with real-looking citations. I’d say it is currently more dangerous than helpful.

  3. Oskar L. Dunklee

    I really appreciated how you emphasized the importance of data in understanding and addressing the self-representation problem. While traditional statistical methods rely on quantitative data, the lack of such data on SRLs limits the effectiveness of conventional analyses. However, like you mentioned, AI can analyze qualitative data, such as surveys, interviews, and social media discussions, to gain a more comprehensive understanding of SRL experiences. AI can identify common themes, disparities in access to justice, and specific needs of SRLs, enabling policymakers to make informed decisions and improve access to justice.

    Your mention of the ethical considerations associated with using AI, and your emphasis on the need for human interpretation and oversight, was necessary, I believe: AI should augment human intelligence rather than replace it, and privacy and bias concerns must be addressed.
