Responsible AI: A Review

ITechLaw, C. Morgan, ed., Responsible AI: A Global Policy Framework, 2019

Can technology lawyers think outside the box? They may be better at it than some of their legal colleagues because the box itself is redesigned so frequently, the walls knocked down and rebuilt in different places, the interactions among the sections rethought, the whole picture scarcely recognizable over the years.

In this spirit, perhaps, a number of members of ITechLaw, the international body once known as the Computer Law Association, addressed their minds to the legal and policy challenges of artificial intelligence, still known as AI.

This field is of course quickly evolving and hard to predict, but it is nonetheless on the verge of being all-pervasive in our society. That was the essence of the challenge: to set out principles, guidelines and reasoned rules by which AI could be brought to develop “responsibly”, to minimize the harm to traditional values and maximize its social benefit.

Given the numbers of places where AI is being developed around the world, and the size of the investment in it in both private and public sectors, how could it be possible to rein it in, harmonize it, change its course, perhaps? The effort to do so would have to be knowledgeable, imaginative and global.

The team assembled by ITechLaw was all of those. The authors of this book number over fifty, from 27 law firms in 16 countries on five continents, plus academics and industry representatives. They drew on experience and speculative thinking from their diverse origins and their wide reading.

The authors are well aware of the benefits of AI but also of its threats to important social and even moral values. Their work, Responsible AI, accentuates the positive. It sets out eight areas of focus, most of them stating the positive principle that the book promotes. The threats are dealt with in detail, but the focus is firmly on the kind of beneficial outcome the authors seek.

Thus we find chapters on the Ethical Purpose and Societal Benefit that AI should have, the need for accountability for the use and effects of AI, the desirability of transparency of AI processes and the explainability of their results in the face of challenges, and the demands of fairness and non-discrimination in its operations.

Other chapters focus on the safety and reliability of the use of AI, the benefits of open data and fair competition, privacy and intellectual property.

The authors describe in an Introduction the history of AI and how it tends to work today. For a long time ambitions, and works of fiction, treated what they call “general AI”, meaning a kind of intelligence that mimicked human intelligence, with its calculations, emotions, and sense of self. This kind of AI is still a long way off, they say. However, “narrow AI”, the kind that performs particular kinds of tasks, has made great strides in the past quarter-century. Before that, AI tended to be a promise never kept, if not a dead end for technology careers.

The recent successful period of narrow AI first depended on what they call “good old-fashioned AI”. This was characterized by intensive programming of existing human knowledge, entering reams of information in careful and complicated patterns that could be called into use by super-fast search and retrieval. “Deep Blue”, the IBM computer that beat world chess champion Garry Kasparov in 1997, was this type.

More recently, AI does not try to stuff existing knowledge into computers. It turns them loose on large sets of data using machine learning and deep learning. The computers teach themselves the “knowledge”: they figure out the patterns on their own, though their capacity to do so is built into them by their owners and designers, and the results depend on the data they have to learn from.
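The contrast can be sketched in a few lines of code. The toy perceptron below is an illustration of my own, not anything from the book: it is never told the rule it enforces, but infers the pattern from labelled examples, just as the paragraph describes, and its conclusions are only as good as the data it learns from.

```python
# Illustrative sketch of machine learning in miniature: a single
# perceptron that learns a classification rule purely from labelled
# data, rather than being programmed with the rule ("good old-fashioned
# AI" would have encoded the rule directly).

def train_perceptron(examples, epochs=50, lr=0.1):
    """Learn weights from (features, label) pairs; labels are 0 or 1."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred
            # Nudge the weights toward the pattern in the data.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# The "data set" implicitly encodes the rule x + y > 1;
# the rule itself is never written into the program.
data = [((0.1, 0.2), 0), ((0.9, 0.9), 1), ((0.4, 0.3), 0),
        ((0.8, 0.7), 1), ((0.2, 0.1), 0), ((0.6, 0.9), 1)]
w, b = train_perceptron(data)
```

After training, the model classifies new points it has never seen, without any notion of *why* the boundary sits where it does. If the training data were skewed or mislabelled, the learned rule would be skewed too, which is the point the authors make about data quality and bias.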

Another issue with the deep learning model is that the computers do not “understand” the matters they are dealing with, or the consequences of their conclusions. As the text says,

… the answer is arrived at essentially using mathematical techniques which identify the output most likely to be correct, without reflecting upon the “meaning” of the outcome, including its ethical merits or societal policy implications.

Consider also: do algorithms lie? They certainly can arrive at conclusions without regard to the “rules” that a human researcher or policy analyst would take into account. Whatever works, works, unless the algorithm contains the appropriate limits. Cheating has no meaning in itself. As my high-school geometry teacher used to say, “All’s fair in love, war and mathematics.”

As a result, much interest has been shown in recent years in figuring out how to “govern” AI, in the broadest sense. What rules should apply to it, and what is the source of the rules? Data scientists and engineers are asking, human rights organizations are asking, and governments are asking. This book is a comprehensive attempt at answering those questions.

The authors do not say that their eight principles are listed in descending order of importance, but one could come to that conclusion. The discussions of the principles cross-reference the others as required, since they are inter-related. Each chapter sums up its key recommendations, the total collection of which appears at the end of the volume as a Responsible AI Policy Framework.

The first principle supported by the text is that AI should be developed with an ethical purpose and for societal benefit. The discussion sets out criteria of “beneficence” and “non-maleficence” and examines the impact of current or proposed AI applications on employment, the environment, weapons systems and fake news. Each topic is examined with subtlety in several parts. It is clear in many places that AI offers benefits as well as drawbacks. Doesn’t everything? But the novelty of how AI offers them makes for fascinating reading.

“The ramifications and unseen consequences of a new technology are often harder to deal with than the technology itself.” The book aims to unravel and make visible as many consequences as possible.

The second principle is accountability – some legally recognizable entity must be responsible, in policy and in law, for the results that AI produces, whether or not the people who launch an AI program can trace exactly how the program comes to its conclusions. The authors call this a “human-centric approach.” It invokes

not only well-developed existing frameworks of civil and criminal liability (where applicable), but also principles of good corporate governance…. Our goal is to ensure that AI systems do not go “on a frolic of their own” … and that in the event of adverse outcomes, there is someone in charge to take remedial measures and provide reparations.

This does not, they say, mean giving AI systems their own legal personality. “… this remains for now in the realm of science fiction and the AI-as-tool paradigm should be modified only with the greatest prudence.”

One of the pillars of any legal system is its liability framework…. It embodies into law moral principles for a stable society.

The chapter goes on to analyse the activity of three kinds of “stakeholders” of AI – governments, corporations and individuals. “Accountability should be divided among stakeholders according to the degree of control a stakeholder has over the development, deployment, use and evolution of an AI system.”

The authors do not favour an overarching “law of AI”. Existing regulators and governing bodies are in the best position to adapt laws and regulations to the new realities. Some changes will clearly be needed – to deal with autonomous vehicles, for example – but not a wholesale displacement.

To facilitate accountability, Principle 3 recommends transparency and explainability. These are essential ingredients of trust.

As with non-AI decision-making, people will only accept this ever-increasing presence of AI … if they trust the results, the means through which the results are produced and the institutions that present them.

Transparency would let people know that an AI system was going to affect their treatment in some way. They need to know they are dealing with a robot. The authors compare the principle to that of privacy law and informed consent to collection, use or disclosure of personal information. For example, the EU’s General Data Protection Regulation (GDPR) requires data controllers to inform data subjects that their data will be analysed by an automated decision-making system.

Explainability (no, my word-processing spell checker does not recognize the word either) lets people know exactly how an output was produced. The chapter goes on to discuss

The circumstances where the ability to use an AI system should depend on the system being explainable, [and] to define what level of explanation should be required, who should produce the explanation and who should have access to the explanation.

It sets out a case study from the private sector (a bank deciding whether to make a loan) and a public sector study (the U.S. immigration service’s analysis of the risk that an illegal immigrant will commit a crime). Explainability in both cases can help detect bias in the programming assumptions, or in the data sets used to train the machines.

Public or private use of AI systems should not get a free pass on compliance with a society’s ethical or legal standards simply because they may constitute the latest, most technologically advanced tools or because “the issue is hard.”

The chapter sets out a number of factors that may influence decisions of what gets to be explainable at what level and to whom. It also describes limits to transparency and explainability based on private interests like trade secrets and public interests like avoiding letting people game the system because they know exactly how it works. A balance is required; an algorithm audit may help strike it.

The fourth major principle is fairness and non-discrimination. This idea is one of the easiest to understand intuitively, though demonstrating compliance can be complex. Much depends on the nature of machine learning and the integrity of the data sets from which the machine learns what is right. Risks and benefits are reviewed for risk assessment and sentencing in criminal justice, predictive policing, health policy, facial recognition systems, labour relations, insurance and advertising.

While the appearance of fairness is important, AI systems tend to be “black boxes”, processes where it is hard to know what is going on inside. Thus the importance of explainability in the previous chapter. Here, independent reviewing and testing are said to be critical to acceptance, along with oversight and regulation. Human rights law as well as privacy law can be a model.

Fifth come safety and reliability. To some extent, the evaluation of risk is an ethical or a moral question. Such questions can vary across countries and over time. The authors outline approaches to these issues, which depend greatly on the area in which AI is applied. They review the cases of autonomous vehicles, robotic surgery, quality appraisal and control in manufacturing, and the use of voice and face recognition – decoupling reliability from safety. The chapter closes with a review of vertical regulation (by legislation) and horizontal (by civil liability). Conclusion: “it will likely be quite some time before society settles in on a stable regime to address safety and reliability of AI systems.”

The sixth chapter deals with open data and fair competition. Both are of course considered desirable; the how, and how far, are matters for debate in the text. “Like any other new technology, the commercial development of … AI-based solutions takes place within the standard legal frameworks for business activities.” How to solve “this compliance jigsaw”?

The first part of the chapter reviews access to data, market power and the ways that open data may be encouraged or guaranteed. There follows an analysis of the operation of AI if it is treated as a proprietary technology (the algorithms are owned and protected), a collaborative technology (co-owned, perhaps), and as an open-source technology. Governments should lead by example, it is said. Public sector investment models may stimulate both growth and competition.

Can the use of AI harm competition? Perhaps by monopolies over essential data sets from which computers learn to perform valuable functions. Can access to data be prescribed to prevent this, even private collections of data? Must the mass collectors of data, such as Google or Amazon or Facebook, be required to share what they collect with potential competitors who want to use them to develop their own algorithms? A number of studies are referred to – the EU, the UK, Germany, Canada, the OECD – considering how competition law may need to be adapted to deal properly with AI.

Principle 7 is privacy, which is given a brief outline before the chapter dives into the impact of AI on the traditional privacy values: anonymization; a lawful basis for collection, including consent (consent being hard to manage for bulk data gathered from everywhere, and a known legal basis often hard to substitute for it); the need to identify the purpose of collection (difficult for a machine, or a business, that learns of potential uses only as it analyses data already collected); and others. Can AI be used to meet privacy requirements, foreseeing and dealing with threats? The authors are hopeful. In any event the tension between AI and privacy laws, which apply to AI systems for better or worse (notably in enforcement), is not close to being resolved.

The final chapter touches on intellectual property: who owns the masses of data behind AI or produced by it? How does the usual law of copyright or patents apply, or trade secrets? Can copyrighted material be used – via “fair use” or “fair dealing” – to “train” the deep learning of AI? Are the creations of more-or-less autonomous machines themselves eligible for copyright protection? Are AI systems akin to business processes, which have struggled to maintain patent protection? Is there a role for trade-mark-based brand protection? Should there be a sui generis right to protect algorithms? On this last question, the authors are not persuaded.

Legislation may resolve these uncertainties, ideally devised at an international level to ensure cross-border data flows. The book here, as in most of its chapters, raises numerous questions it does not answer. Asking them in a systematic way, in a coherent and constructive framework, is already an achievement.

Overall, the book is very well written. One notes with pleasure a book written by lawyers without legalese, and generally straightforward and accessible. (A couple of chapters are at worst a little mushy.) The principles in focus are always kept plain and clear, with the key thought at the beginning of each chapter, the principle itself summed up in point form at the end of each chapter, then the whole Policy Framework collected at the end. Each chapter has essentially the same structure, though the authorial team varied for each, so the reader has a good notion of where the authors are leading in each discussion.

The analysis is careful, knowledgeable and balanced, though the desire for an ethical legal regime drove the project. “AI by Design” is a frequent theme, its echo of the privacy world’s “Privacy by Design” acknowledged. Given the origin of the work, in ITechLaw, much attention is paid to legislation, regulation and the role of laws in framing the ethical AI regime.

Sources of authority and ideas are as diverse and global as the authors. One gets the idea that these are the main ideas in the world, not just in the U.S. or Europe or Asia. Both the Montreal Declaration and the Toronto Declaration on AI policy are noted. (Having a Canadian as head of the team no doubt helped.) The extensive and helpful endnotes to each chapter include sources as recent as early 2019.

Attention has been paid to graphics, so the wide pages have a lot of white space (but not wasted – lawyers are sometimes tempted to present a wall of intimidating text), with colour-coded headings (the blue ones are at a different level of analysis than the brown ones) and occasional side-bar diagrams or definitions to show the flow of ideas or influences. It all works very well.

Anyone interested in the future of AI in society would find the book well worth reading. It points to promising directions, and does it in the framework of ethical and legal principles that have a good deal of strength thanks to the collaborative, global and perceptive analysis that has produced them.

One may wonder if the whole enterprise is too naïve or optimistic. Given the economic and social forces promoting the rapid development of AI systems, will anybody really want to listen to voices saying “let’s analyze this carefully, let’s get this right”? Especially if they rock some commercial boats…

It is clear from the text that many specialists in AI themselves have issued warnings about the direction their discipline may be heading, and have made proposals for restraint. Many national and regional governments have also joined the conversation in the public interest. Perhaps the answer depends, among other things, on you!

(The book is currently available to buy or to download at the ITechLaw website. In addition, the authors – and ITechLaw – are seeking feedback on their proposals, ideally by mid-September of this year to prepare a second edition.)
