Artificial intelligence has been in the news since late last year, when OpenAI released ChatGPT, a large language model chatbot that provides surprisingly good responses to questions. This system can write essays, draft legal documents, and produce computer code.
To get a sense of the potential, consider Canada’s citizenship test. Applicants for Canadian citizenship must pass an online multiple-choice exam with questions about Canada’s history, geography, economy, government, laws, and important symbols. The government maintains a study guide to help applicants prepare, and many organizations, including the Toronto Public Library, offer online practice questions. To see whether AI can pass this exam, I wrote a computer program to scrape the Toronto Public Library’s practice questions and answers. The program then uses another OpenAI large language model, GPT-3, to answer the test questions.
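The basic pipeline described above (collect multiple-choice questions, prompt the model, compare its answers to the key) can be sketched in a few lines of Python. This is a minimal illustration, not the author’s actual code: the question format, prompt wording, and model call shown here are assumptions.

```python
def build_prompt(question, choices):
    """Format one multiple-choice question as a text prompt for the model."""
    letters = "ABCD"
    lines = [question]
    for letter, choice in zip(letters, choices):
        lines.append(f"{letter}. {choice}")
    lines.append("Answer with a single letter:")
    return "\n".join(lines)

def score(predicted, answer_key):
    """Fraction of predicted answer letters that match the answer key."""
    correct = sum(p == a for p, a in zip(predicted, answer_key))
    return correct / len(answer_key)

# Querying the model itself requires an OpenAI API key; at the time,
# a call via the (now-legacy) completions endpoint looked roughly like:
#
#   import openai
#   response = openai.Completion.create(
#       model="text-davinci-003",  # a GPT-3 model; illustrative choice
#       prompt=build_prompt(question, choices),
#       max_tokens=1,
#   )
```

With the model’s single-letter answers collected into a list, `score` gives the percentage used below.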
So, how did the AI do on the citizenship practice questions?
It passed with ease. Running the program several times led to scores between 88% and 92%. As a comparison, a poll conducted in 2019 found that 9 out of 10 Canadian citizens would not achieve the 75% passing grade.
What is surprising is not just that AI surpasses average human performance on a test that is supposed to measure whether a person has “adequate knowledge of Canada and the responsibilities and privileges of citizenship”. Rather, it is how easy it is to get AI to do this.
A few years ago, getting AI to successfully answer citizenship test questions would have taken skill and effort. A custom machine learning model would have had to be trained, requiring advanced technical knowledge, time, and resources.
Compare that with today. I am a law professor with an interest in technology and border control. I have intermediate, amateur coding skills. Despite my lack of formal training and my modest skills, coding for this project took only a couple of hours, with the bulk of that effort going towards collecting and cleaning the data. Moreover, with ChatGPT and a bit of copy-pasting, it’s possible to achieve similar results without any coding. If you’d like to try out the code, or the no-code alternative, instructions are available via my Refugee Law Lab.
What can we learn from all of this?
First, perhaps it’s time to reconsider Canada’s citizenship test. If most Canadian citizens can’t pass the test, while a machine learning system can, then maybe whatever the test is testing isn’t that important. Besides, we know that the test represents a barrier to some marginalized groups of permanent residents, and it’s hard to imagine what benefit there is from excluding them from citizenship.
Second, as this kind of technology becomes increasingly powerful and accessible, we may need to rethink all sorts of evaluation methods. It’s one thing if an AI system that most people have never engaged with (knowingly at least) can pass a multiple-choice citizenship test involving general knowledge about Canada. But what about similar systems that can take on more complex tasks and that are as easily available as a Google search? Already, for example, GPT-3 can pass some lawyer licensing exams. As a law professor, I wonder what it will mean for me and my students when AI that is readily available at everyone’s fingertips can easily answer law school fact-pattern and essay questions.
Third, those of us interested in law and social justice need to urgently work on ensuring that this technology, which has enormous potential to increase access to justice, doesn’t instead make things worse. In the legal field, economic incentives for the development of this technology risk driving us in directions that will exacerbate power imbalances. These incentives encourage the development of tools built mostly for lawyers whose clients are powerful and well-resourced. That means lawyers for corporations trying to classify their employees as contractors to avoid employment law obligations, not lawyers for employees trying to enforce those obligations. It means lawyers for landlords seeking to evict tenants, not lawyers for tenants trying to stay in their homes. It means government lawyers trying to deport non-citizens, not immigration lawyers trying to get them status.
To avoid such a scenario, we need to create alternative incentives, such as significant public funding for AI research involving legal technologies that are not about commercialization but that instead aim to advance public interests. And we need to make sure that the bulk legal data required to build these technologies is available to non-profit researchers and organizations – not just to corporations and the government, as is currently the case.
The study guide for the Canadian citizenship test says: “Lawyers can help you with legal problems and act for you in court. If you cannot pay for a lawyer, in most communities there are legal aid services available free of charge or at a low cost.”
As we enter an era where lawyers will increasingly rely on AI, we need to make sure that those providing free and low-cost legal services have fair and equitable access to this emerging technology.
— Sean Rehaag
Associate Professor, Osgoode Hall Law School