Using AI to Address Court Delays and Adjudicate Claims

There is a large backlog in our courts and tribunals. But simply converting our current system with minor tweaks does not go far enough to improve access to justice. We need a larger change. We need to harness AI to assist in adjudicating claims.
Tribunals, rather than the courts, will likely be the first bodies to use AI tools. The discrete areas of law that tribunals address are well suited to AI tools and algorithms deciding routine, interlocutory matters and some minor cases on the merits. As a result, tribunals may be forced to adopt emerging AI technologies to address their ever-growing backlogs. If successful, the technology could then move up the value chain to more complicated matters.
In developing AI tools, the recommendations of the Law Commission of Ontario's report, Accountable AI, should be followed, including the one below:
The provincial government should develop and adopt a Directive for provincial government agencies that can guide provincial decision-making and serve as a template for other public organizations under provincial jurisdiction. Factors that the Directive could address include:
• What constitutes “notice” when an AI system is used by the provincial government, a municipality or provincial agency?
• Do impacted parties need to know only that an AI system was used in the process of the decision, or do they need more information about how the system works, such as what data it relied on, how the data was sorted and weighted, or whether a human was involved in the decision at a certain stage?
• Do parties need to participate in a decision made in part or in whole by an AI system?
• Do parties need to be consulted in the creation of a system?
• What type of “reasons” or “explainability” is required from a system?
• How to assess the “reasonableness” of a decision made or influenced by AI? What is the standard an AI system will be held to?
• Can an AI system be explained sufficiently in a way that impacted parties can understand the decision made against them, and are meaningful appeal options possible?
In my opinion, the use of AI tools to assist in adjudicating claims is inevitable. We need to design these tools to be fair and free from bias. We shouldn’t insist on maintaining the tradition of physical hearings if online courts and AI tools could greatly and reliably increase access to justice. We should aim for proportionate justice.
The expense, speed, complexity and the extent of combativeness should be proportionate to the scale of the case. Most cases can be and should be adequately addressed online, with assistance where possible from AI technologies.

Comments

  1. I’m not a lawyer. I have no experience as an adjudicator and I’ve never written anything in any software language.

    But I may be qualified to have an opinion about this matter because of my experience as a litigant.

    And my opinion is that I’d trust an AI-driven car to get me safely to my destination before I’d trust some AI entity to provide a fair hearing of any legal matter.

    The efforts to prematurely deliver AI-driven transportation have already resulted in some tragic accidents. We are living in a world in which technology is being forced upon us. We need to solve some of these problems the old-fashioned way – with non-artificial intelligence. And common sense.

  2. Alex Heshmaty’s article, “AI legal technology: fact vs fiction” suggests that “references to artificial intelligence within the context of legal technology should generally be viewed, in the words familiar to everyone who studied law, as ‘mere puff’”.

  3. RE: “We need to design these tools to be fair and free from bias.”

    Ontario courts have yet to actualize the promise (in the “Rules”) that expert witnesses will be “fair and free from bias.” Thus we see “hired gun” experts perpetually thrive in a civil justice system in which judges fall short in their expert-gatekeeping responsibility. How would AI resolve this issue?

    RE: “The expense, speed, complexity and the extent of combativeness should be proportionate to the scale of the case.”

    How would one operationalize a method to assess the “scale” of a case in order to sort out which ones deserve human oversight and involvement? For example, would AI be suitable for a personal injury auto accident case in which a catastrophically injured litigant is seeking to compel a Goliath auto insurer to abide by its policy obligations?

    AI sounds plausible when the system is described from the macro view (30,000 feet up). But from a micro (on-the-ground) perspective, it is far from clear how this could become anything more than an aspirational goal. Especially given that even in the current non-AI system, at least in terms of expert witnesses and Rule 49, the promise of properly qualified and impartial expert witnesses in the personal injury context remains to this day an aspirational goal (if not a complete hoax on the public).