Column

Privacy and Artificial Intelligence

The increased use of artificial intelligence (AI) has given rise to numerous privacy concerns.[1] For example, “the data protection principle of limiting collection may be incompatible with the basic functionality of AI systems”. AI systems generally rely on large amounts of personal data to train and test algorithms, and limiting that data can reduce the quality and utility of the output.[2]

Another issue is that organizations using AI for advanced data analytics may not know ahead of time how information processed by an AI system will be used or what kinds of insights the system may reveal. This calls into question the practicality of the principle requiring specification of the purpose for which personal information is collected, used or disclosed.[3] It also challenges organizations’ ability to limit use and disclosure to the purposes for which the information was collected.[4]

To address these challenges posed by AI, the Office of the Privacy Commissioner of Canada (OPC) has proposed enhancements to PIPEDA and has sought public feedback on those proposals.[5] Those proposals are:

  1. Incorporate a definition of AI within the law that would serve to clarify which legal rules would apply only to it, while other rules would apply to all processing, including AI.
  2. Adopt a rights-based approach in the law, whereby data protection principles are implemented as a means to protect a broader right to privacy—recognized as a fundamental human right and as foundational to the exercise of other human rights.[6]
  3. Create a right in the law to object to automated decision-making and not to be subject to decisions based solely on automated processing, subject to certain exceptions.[7]
  4. Provide individuals with a right to explanation and increased transparency when they interact with, or are subject to, automated processing.[8]
  5. Require the application of Privacy by Design and Human Rights by Design in all phases of processing, including data collection.[9]
  6. Make compliance with purpose specification and data minimization principles in the AI context both realistic and effective.[10]
  7. Include in the law alternative grounds for processing and solutions to protect privacy when obtaining meaningful consent is not practicable.[11]
  8. Establish rules that allow for flexibility in using information that has been rendered non-identifiable, while ensuring there are enhanced measures to protect against re-identification.[12]
  9. Require organizations to ensure data and algorithmic traceability, including in relation to datasets, processes and decisions made during the AI system lifecycle.[13]
  10. Mandate demonstrable accountability for the development and implementation of AI processing.[14]
  11. Empower the OPC to issue binding orders and financial penalties to organizations for non-compliance with the law.[15]

Canada has the advantage of seeing how other major trading nations address the issues of AI. Based on the proposals, it is reasonable to predict that Canada’s privacy law will move closer to some of the principles of the EU General Data Protection Regulation (GDPR).

 

_________________________________

[1] See, for example, Office of the Privacy Commissioner of Canada, “Consultation on the OPC’s Proposals for ensuring appropriate regulation of artificial intelligence”, closing March 13, 2020 [OPC AI Consultation]. Online: https://priv.gc.ca/en/about-the-opc/what-we-do/consultations/consultation-ai/pos_ai_202001/

[2] See OPC AI Consultation, supra, citing Centre for Information Policy Leadership, First Report: Artificial Intelligence and Data Protection in Tension, October 2018, pp. 12-13; and Office of the Victorian Information Commissioner, Artificial Intelligence and Privacy, 2018.

[3] See OPC AI Consultation, supra, citing Office of the Victorian Information Commissioner, Artificial Intelligence and Privacy, 2018. See also the blog post by Doug Garnett, AI & Big Data Question: What happened to the distinction between primary and secondary research?, March 22, 2019.

[4] See OPC AI Consultation, supra, citing Ian Kerr, “Robots and Artificial Intelligence in Health Care,” Canadian Health Law and Policy, 5th edition, 2017, p. 279.

[5] OPC AI Consultation, supra.

[6] The OPC AI Consultation, supra, tracks the EU General Data Protection Regulation (GDPR), which takes a human rights-based approach to data protection.

[7] The OPC AI Consultation, supra, again takes its lead from the GDPR, which, in Article 22, grants individuals “the right not to be subject to automated decision-making, including profiling, except when an automated decision is necessary for a contract; authorized by law; or explicit consent is obtained. Article 22 also contains the caveat that where significant automated decisions are taken on the basis of a legitimate grounds for processing, the data subject still has the right to obtain human intervention, to contest the decision, and to express his or her point of view.”

[8] As promoted by the Department of Innovation, Government of Canada, in Strengthening Privacy for the Digital Age: Proposals to modernize the Personal Information Protection and Electronic Documents Act, May 2019.

[9] The OPC AI Consultation, supra, again cites Article 25 of the GDPR, “Data Protection by Design and by Default”, stating “Article 25 discusses a number of elements of this obligation, including putting in place appropriate technical and organizational measures designed to implement the data protection principles and safeguard individual rights and freedoms.” Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 (GDPR).

[10] The OPC AI Consultation, supra, notes that there is an inherent tension between purpose specification (and limiting uses to that purpose) and the benefits to an AI system of having access to vast amounts of data.

[11] While consent is the organizing theme of PIPEDA, other jurisdictions provide additional bases for regulating the collection, use and disclosure of personal information. The GDPR, Regulation (EU) 2016/679, Articles 6(1)(e) and 6(1)(f), provides additional bases for processing personal information, such as “when processing is necessary for the performance of a task carried out in the public interest, and when the processing is necessary for the purposes of the ‘legitimate interests’ pursued by the controller or by a third party, except where such interests are overridden by the interests or fundamental rights and freedoms of the data subject (in particular where the data subject is a child).”

[12] The OPC AI Consultation, supra, notes that there is divergence internationally, as many countries view de-identified or anonymized data as falling outside the category of personal information and thus not addressed by the applicable privacy law.

[13] The OPC AI Consultation, supra, notes that a “requirement for algorithmic traceability would facilitate the application of several principles, including accountability, accuracy, transparency, data minimization as well as access and correction.”

[14] The OPC AI Consultation, supra, notes “While Principle 4.1 of PIPEDA requires organizations to be accountable for the personal information under their control, we propose that the principle be reframed to require “demonstrable” accountability on the part of organizations.”

[15] See, for example, the order-making powers and penalties available under the British Columbia and Alberta Personal Information Protection Acts.
