Machines Regulating Humans: The Q&A

This is a follow-up to my previous post on Benjamin Alarie's talk about the potential of using machine learning to regulate human activity. The presentation was followed by a great question-and-answer period, and I thought I'd share my notes with you.

Q. When you say you are achieving 90% accuracy in evaluating tests like whether someone is an employee or an independent contractor, what are you using to determine that the outcome is correct?

Alarie: use what the courts say to check whether the algorithm is right; train the algorithm using 70% of the data then use the remaining 30% to see if algorithm can predict what the court decided; achieving over 90% success rate; not making any “normative claims” about how things should be decided; rather predicting what the courts are likely to say
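Alarie doesn't describe the model itself, but the validation procedure he sketches — train on 70% of decided cases, then test against the court outcomes in the held-out 30% — is a standard holdout split. A minimal sketch, using toy data and a placeholder majority-class "model" (all data and names here are illustrative, not Alarie's actual system):

```python
import random

# Hypothetical dataset: each case is (facts, court_outcome), where the
# outcome is the court's actual ruling (1 = employee, 0 = contractor).
random.seed(0)
cases = [([random.random() for _ in range(3)], random.randint(0, 1))
         for _ in range(100)]

# 70/30 split: train on 70% of decided cases, hold out 30% for testing.
random.shuffle(cases)
cut = int(len(cases) * 0.7)
train, test = cases[:cut], cases[cut:]

# Stand-in "model": predict the majority outcome seen in training.
# (A real system would fit a classifier to the case facts.)
majority = round(sum(outcome for _, outcome in train) / len(train))

def predict(facts):
    return majority

# Accuracy = fraction of held-out court decisions the model matches.
accuracy = sum(predict(f) == o for f, o in test) / len(test)
print(f"held-out accuracy: {accuracy:.0%}")
```

The point of the held-out 30% is that the model is scored only on decisions it never saw during training, which is what justifies reading the 90% figure as predictive accuracy rather than memorization.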

Q. Is the machine learning how to update the questionnaire?

Alarie: technology is not there yet; we're at an early stage in the development of these algorithms, compare the evolution of F1 racing simulation video games; natural language processing will get better at interpreting information, and will improve along with human-machine interaction; try to be faithful to how judges think about these legal tests, facts, legal merits; important to get the questions right

Q. The end point is not codification; you want the algorithm to keep learning.

Alarie: use this to go up to the boundary and not go beyond; improve detection/tax compliance; identify the boundaries more clearly; better articulation of the law; issues of “misspecification” of the law, coping mechanisms: statutory interpretation, prosecutorial discretion; bad facts make bad law

Q. Applications to criminal law? A system that determines guilt or innocence? Ethical line? Locks in the status quo? Can an algorithm overcome this to become a progressive instrument of change?

Alarie: issues with the quality of current decision making; heuristics and the implicit bias of judges can be mitigated in algorithms; you control the information that you expose the algorithm to, curate the information; still problems: features may be correlated with things we don't want factored in, e.g. race, gender, other human-rights type attributes; how to cleanse the data appropriately; self-driving cars just need to be better than humans, i.e. don't hold them to a standard of perfection; short-term gains to be had

Q. Given the subjectivity of evidence-based processes, are these algorithms more suited to mechanical questions, like a TurboTax for law?

Alarie: still need triers of fact; evidence, credibility of witnesses, etc.; this is more of an appellate level kind of tool; once you establish the facts you can map them; algorithms can assist with this

Q. Potential normative retrenchment? Outcomes are still tied to the facts you curate into or out of the equation? Where do a sense of justice and policy come into play? Not everything is reported.

Alarie: feedback for the algorithm, a conversation between all players involved; data, time, learning from new cases; learn to trust the algorithms; a mistake to rush to acceptance but also to ignore these technologies; co-evolution; pay-off: more transparency, better access to justice, consistency, validation; becomes a public utility?; the law belongs to all of us and should be clear

Q. Regulating for whom? Who will benefit? The machine advances whatever exists; if there is a problem with the law, can this technology fix it?

Alarie: misspecification of the law; increase in computing power, exponential developments leading to emergent systems; influence our collective governance; machine learning about normative inconsistencies; thought experiment: learning about values to influence law reform, based on pairwise comparisons that are voted on in parliament; learn about normative inconsistencies; recommend legislative changes based on collective values; it will probably be better than that; machine: "trust me, I know you better than you do"; dark-side potential, cf. OpenAI and AI safety; recommends Superintelligence by Nick Bostrom

Q. False answers to questions? Proof beyond reasonable doubt? Ramifications of false statements?

Alarie: asymmetric outcomes; a research tool to help people make better decisions; the law itself has many grey areas; shouldn't rush to democratize this; a sophisticated tool in the hands of a sophisticated user helps to make sure the results are valid

Q. Interpretation of law that has changed over time so past precedents are no longer relevant? Also, if cases not going to court what data to use?

Alarie: we deal with this now; judgments change interpretation; the machine will deal with this the same way we do, as new information is added; cases that end up being adjudicated are already a fraction of the disputes that arise, i.e. many are settled or abandoned; the courts' time is better spent dealing with marginal cases; opportunity to spend more time considering these cases more carefully, better decisions, and more informative; many cases should not go to court

Q. Tension between predicting the law vs. algorithms becoming the law?

Alarie: decisions made through heuristics; what can the machines teach us about our human practices?; exposing, surfacing things happening in the legal system that we need to address; how the legal system is meant to develop; assist in governing ourselves

Q. Who owns/controls these algorithms? Troublesome if owned by private company.

Alarie: technology doesn't invent itself; it takes time, money, and risk to develop this; need incentives for investing in it; no agenda other than to hold the mirror up to ourselves; best if academics are involved, i.e. no commercial interests; sees potential to nationalize this as a public good if a natural monopoly emerges, like energy from nuclear power; governments aren't doing this research
