Osgoode AI Sets Vision for Canadian Leadership

When the Government of Canada earmarked $125 million for a Pan-Canadian Artificial Intelligence Strategy, it was premised on the assumption that this area would be a future engine of economic growth:

Artificial intelligence is a burgeoning area of research with implications for everything from better medical diagnoses to self-driving cars. The market for artificial intelligence-related products is predicted to reach $47 billion in 2020, and the field has attracted significant investment from Google, Facebook, Baidu and other major technology players.

The 2018 Budget reaffirmed this commitment, with one of the five innovation superclusters focusing on artificial intelligence-powered supply chains intended to create jobs and develop research that improves the standard of living. The private partners in this plan include at least one major law firm, which has launched a microsite covering some of the legal and ethical risks involved.

While the potential of AI is undisputed, how quickly these technologies can practically be implemented in society remains debated. In part this is due to legal and ethical challenges, but it also reflects skepticism about how effective AI will actually be in addressing existing industry problems.

Earlier this year, Osgoode Hall held a conference that brought together academics from around the world to explore these issues. Some, such as Prof. Ian Kerr, Canada Research Chair in Ethics, Law and Technology, indicated that although we may have to brace for the impact of AI in some aspects of society, the vast majority of society will not immediately be affected by these technologies.

AI has been around for decades, since at least 1956, and periods of productivity in AI research have been followed by periods of stagnation. Although there have been significant advances since 2001, and what looks like the edge of an exponential curve, Kerr claimed an equally plausible explanation is that we are approaching another phase of stagnation that will not result in societal transformation.

What AI has been successful at so far is outperforming humans in very specific and narrow tasks (and even then, there have been exceptions). Artificial General Intelligence (AGI) still appears elusive with our current technologies. Researchers in the area have indicated this year that deep learning techniques must be supplemented by other approaches if AGI is to be achieved, and that AI research may actually have hit a wall:

Now, psychologist Gary Marcus of New York University — formerly director of Uber’s AI labs — argues that the lack of progress isn’t surprising, as researchers are running up against a host of new challenges.

Algorithms also can’t engage in what Marcus calls “open-ended inference,” which entails bringing background knowledge to bear on a question. We all know the difference between “John promised Mary to leave” and “John promised to leave Mary.” We make the distinction using information that isn’t explicitly included in either phrase. Researchers haven’t made much progress in getting computers to do the same.

Carole Piovesan of McCarthy Tétrault LLP spoke on how policy and law are always a step behind innovation, almost by design, because both are mechanisms for identifying and addressing problems that already exist. The challenge is that current innovations are occurring at such a rapid pace that policy makers cannot properly anticipate issues that do not yet exist. Businesses face the same challenge: to innovate, to be seen as innovators, and to identify and address risks when deploying technologies into the mass market.

Traditional notions of copyright are premised on the idea that innovation is incentivised by recognition and reward. These underlying IP principles don’t necessarily hold true for AI systems that create independently, irrespective of any reward, which could create difficulties for our applications of intellectual property, especially where AI is no longer the tool but the inventor. Instead, Piovesan suggests dealing with IP issues by looking at the nature of these systems and their purpose.

Mark Perry and Thomas Margoni, writing in the Computer Law and Security Review, explore who should receive the reward for computer-generated works, considering whether it should be allocated to the author, the user, or even the program itself, before turning to another alternative:

The last of the identified solutions is: nobody. A basic principle of economics is to allocate scarce resources in an efficient way, that is to say in a way in which those given scarce resources are able to produce the maximum benefit for the whole society (maximisation of allocative efficiency). So long as there is no subject who can be identified in terms of incentive theory as an efficient allocation, then the question becomes why should they be allocated “randomly” just for the sake of allocation, and not on the basis of any theoretical justification, being it utilitarian or naturalistic? This would produce property rights where economically speaking such a solution can be identified as inefficient, not optimal, or even worse, as leading to market failures. Market failures such as these are avoidable by falling back to a specific morphology of property rights that seems to be often overlooked as a feasible and many times more allocative-efficient solution: the Public Domain. This point seems to have been recognised by other authors “If there is no human author of the computer-generated work, the intellectual property system has assumed no one deserves to be rewarded for it. If there is no human author of such a work, how can any human be motivated to create it? If it is not clearly necessary to grant the exclusive rights to stimulate creativity, traditional principles would seem to argue that the set of exclusive rights not be awarded to anyone”.

Ann Cavoukian of Ryerson University emphasized that Privacy by Design allows for abandoning zero-sum models: privacy and security can be used in conjunction, without any trade-offs or either/or propositions. Earlier this year she spoke to IT World Canada on how this could be achieved through legislative amendments to the Personal Information Protection and Electronic Documents Act (PIPEDA):

While the US struggles with rampant surveillance and no independent commissioners, Cavoukian doesn’t see Canada going down the same path as its southern neighbour.

“We do fare much better than the US and I don’t think we’ll go in that direction. Over my dead body,” she laughs. “We’re much more closely aligned with the EU on privacy measures and I’m pleased with federal commissioner Therrien’s approach in trying to update Canada’s regulations.”

Cavoukian is hopeful for the future, convinced that emerging tech like AI will help Canadians find the right balance between security and privacy.

“We’re looking to fund research into embedding privacy in new technology because we need to pave the way and prove this is doable. My view is we have to believe this is possible; we don’t have a choice. If you value freedom and liberty, then you value privacy,” she concludes.

Jutta Treviranus, Director of the Inclusive Design Research Centre (IDRC) and Professor at OCAD University, described the problems with developing a norm or average, highlighting that the danger of creating artificial norms is accentuated by technologies that aggregate averages across a population. The greatest innovation, however, occurs with the unexpected black swans: the events or data that happen at the edges.

We assume that the data doesn’t lie, but the data isn’t actually objective, and we risk reliving the myth of mechanical objectivity. The law, especially when dealing with protected groups, requires individualized treatment of outliers, rather than assumptions that gravitate towards the greatest clusters. Evidence-based governance presumes homogeneity, which perpetuates or even amplifies vicious cycles of exclusion.
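Treviranus’s point about designing for the average can be illustrated with a toy numerical sketch. The “need” scores, the population split, and the one-size-fits-all setting below are purely hypothetical and are not from her talk; they simply show that a design tuned to the population mean serves the dense middle well and the edges poorly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "need" scores: most people cluster near the middle,
# while a small number of outliers sit at the edges of the range.
majority = rng.normal(loc=50, scale=5, size=950)
outliers = rng.uniform(low=5, high=95, size=50)
population = np.concatenate([majority, outliers])

# "Design for the average": a single setting tuned to the population mean.
average_setting = population.mean()

# How far each person's actual need is from the one-size-fits-all setting.
majority_error = np.abs(majority - average_setting).mean()
outlier_error = np.abs(outliers - average_setting).mean()

print(f"mean mismatch for the dense cluster: {majority_error:.1f}")
print(f"mean mismatch for the edge cases:    {outlier_error:.1f}")
# The averaged design fits the cluster closely and the edges badly:
# the exclusion of outliers that Treviranus warns about.
```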

Brandie M. Nonnecke, Research & Development Manager for CITRIS, UC Berkeley, focused on how AI could be used to promote social good. AI allows for analysis of big data in ways that humans cannot, but addressing bias creep in these models requires scrutiny. Nonnecke wrote about this last year in the World Education Forum:

We must recognize that algorithms are not neutral. They reflect the data and assumptions inherent in their calculations. If prejudiced data is fed into an algorithm or factors that reflect existing social biases are prioritized, discriminatory results will follow. Algorithms function by prioritizing certain factors—identifying statistical patterns from observed and latent variables and subsequently offering “if this, then that” conclusions. By assuming that certain factors are appropriate predictors of an outcome and historical trends will be repeated, an algorithm can exhibit a self-reinforcing bias. For those who are over-, under- or misrepresented in the data and calculations, decisions made on their behalf can perpetuate inequality.
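Nonnecke’s “self-reinforcing bias” can be made concrete with a minimal sketch. The hiring scenario, the variables and the data below are entirely invented for illustration, and scikit-learn’s LogisticRegression merely stands in for whatever model a real system would use; the point is only that a model trained to reproduce prejudiced historical decisions learns the prejudice as just another pattern.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000

# Invented historical data: "skill" is the legitimate factor, "group" is a
# protected attribute. Past decisions favoured group 0 regardless of skill.
skill = rng.normal(size=n)
group = rng.integers(0, 2, size=n)
hired_historically = ((skill > 0.5) | (group == 0)).astype(int)

# Train a model to reproduce the past decisions.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired_historically)

# Two candidates with identical (average) skill, differing only in group.
candidates = np.array([[0.0, 0], [0.0, 1]])
print(model.predict_proba(candidates)[:, 1])
# The group-0 candidate receives a far higher predicted "hire" probability:
# the historical bias is repeated, and decisions based on it reinforce it.
```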

She proposes that incorporating diversity into every stage of the design, deployment and review of AI can minimize its inadvertent discriminatory effects. These interdisciplinary teams should include lawyers and social scientists, as well as coders and computer engineers.


Danielle Goldfarb & Candice Faktor described in Maclean’s how Canada can stay a leader in artificial intelligence:

  1. Develop Canadian-grown startups, think of global opportunities
  2. Address fears of job losses and disruptions head on
  3. Participate in the global discussions on AI policies to promote social good
  4. Open Canada to talent, collaboration and investment from outside the country

The talks from the Osgoode conference, which go well beyond the summary above, provide some insight into how all of the above can be achieved. Videos from the conference are available online, along with downloadable audio files for listening on the go.
