Bill C-12: Safeguarding Canadians’ Personal Information Act – Eroding Privacy in the Name of Privacy

The Government has recently announced its intention to focus on Bill C-12, the Safeguarding Canadians’ Personal Information Act, its attempt to update PIPEDA in accordance with the statute’s last five-year review (which, incidentally, was conducted over 6 years ago). Bill C-12 is a lackluster piece of privacy protection that, in spite of its name, arguably does far more to erode privacy than it does to enhance it. One commentator even dubbed its last incarnation the ‘anti-privacy privacy bill’. As the legislation can be expected to reemerge as early as two weeks hence, a few of its more problematic elements are worth a brief look. I leave out some of the more beneficial elements of the legislation here, such as Clause 5, which will strengthen the consent requirements in PIPEDA. But these are few and far between, and heavily outnumbered by the privacy-corrosive elements of the Bill.

a. ‘voluntary’ surveillance of customers

While it appears that Bill C-30, the latest legislative attempt to update Canada’s state surveillance capacity, is temporarily on hiatus, Bill C-12 touches on a number of the same issues by expanding the conditions under which companies can voluntarily share information with police. Currently, section 7(3)(c.1) of PIPEDA arguably operates as a blocking statute, preventing organizations from voluntarily handing over customer information in situations where police have failed to demonstrate their ‘lawful authority’ to request it.

The exact parameters of what ‘lawful authority’ means remain unclear. As one commentator notes:

…the term “lawful authority” has been fraught with conflicting interpretations, with some TSPs taking the position that it means a warrant or court order, and with courts struggling to determine its scope.

Regardless of this ongoing confusion, the provision is important because it obligates telecommunications service providers (ISPs, mobile service providers, Google, Yahoo, YouTube, blog hosting sites, etc.) to at least assess police demands for customer data with a bit of skepticism. Requests for vast amounts of sensitive data (‘we want to be able to copy all of your users’ emails and web browsing at will, without a warrant’; ‘we want to know which of your customers were in downtown Ottawa from 5pm to 10pm last Thursday’) are likely to be refused.

As matters currently stand, statistics confirm that TSPs tend to be cooperative, handing over user data in response to 94% of RCMP requests. This ‘voluntary sharing’ regime is troubling because it avoids even the most rudimentary oversight and tracking typically associated with police surveillance. Reports suggest tens of thousands of requests for information are voluntarily complied with each year, and there is little information on the scope or parameters of these requests (although the majority are currently limited to subscriber identification information). Indeed, the only real oversight of this information sharing regime is the obligation on TSPs to limit disclosures to scenarios where ‘lawful authority’ has been identified.

Bill C-12 erodes this obligation in a few ways. First, it broadens the entities covered by 7(3)(c.1) to include not only law enforcement agencies, but those requesting the information in order to perform ‘policing services’ (Clause 6(6)). While purportedly aimed at facilitating community policing, this broadening is deeply concerning in light of increasing attempts in other jurisdictions to outsource policing tasks to private security firms, or even to privatize policing altogether. It also raises concerns in light of the vast amounts of customer surveillance currently envisioned for TSPs by other jurisdictions in the name of cybersecurity.

Second, it defines ‘lawful authority’ in a rather vague manner, as referring to ‘any lawful authority other than’ a subpoena, warrant or court order (Clause 6(12)). This suggests that merely displaying a police badge might be enough to meet the ‘lawful authority’ criterion, leaving TSPs free to give away their customers’ data at will.

Finally, and perhaps most concerning, Bill C-12 immunizes organizations from any obligation whatsoever to even verify the validity of any lawful authority offered (Clause 6(12)).

Taken together, these provisions set the stage for a dramatic expansion of the TSP voluntary sharing regime currently in place, well beyond the limited amount of subscriber identification information that is its current focus. This is in spite of the fact that the constitutionality of this information sharing has been questioned in R. v. Trapp, 2011 SKCA 143, where the Saskatchewan Court of Appeal found a reasonable expectation of privacy exists in voluntarily disclosed subscriber data (although see Trapp’s sister case, R. v. Spencer, 2011 SKCA 144, which concludes there is no reasonable expectation of privacy in similar contexts, the ISP contractual terms being the operative difference). The incentives to expand this regime are already operating in other jurisdictions, and are sure to manifest in Canada soon. They may already be operative here. We would not know: far from imposing even rudimentary reporting obligations, PIPEDA prevents TSPs from ever disclosing that such sharing has occurred without first seeking permission from the requesting agency (section 9(2.1) of PIPEDA).

b. bypassing safeguards in the discovery process

Another concerning element of Bill C-12 is the way in which it removes current PIPEDA provisions that limit the conditions under which TSPs and other organizations can hand over customer information in order to help someone sue one of their customers.

Clause 6(9) puts in place a ‘litigation exception’ that will effectively bypass privacy safeguards carefully built into the discovery process. Currently, in order to obtain information relevant to a lawsuit from a non-party such as an ISP, litigants must convince the Court the information is relevant, that their lawsuit is bona fide and/or that they have a prima facie case, and that the information cannot be obtained from a more readily available source (see: BMG Canada Inc. v. Doe, 2005 FCA 193; Warman v. Wilkins-Fournier, 2010 ONSC 2126; and more recently Voltage Pictures LLC v. Doe, 2011 FC 1024).

These safeguards are critical to ensuring that the civil litigation system does not impact rights to privacy and anonymous expression in a disproportionate manner. Prohibitions on voluntary information disclosures in this context are equally critical. Often, requests addressed to TSPs and similar companies for identification information are accompanied by legal threats (‘if you don’t comply, we will include you in the lawsuit’). There are few incentives in place for TSPs to undertake costly legal fights in these contexts. Absent legal protection, most are likely to simply hand over the data and be rid of the matter.

This is why other jurisdictions (and, until Bill C-12 came along, Canada) have in place legal protections preventing such disclosures. In the U.S., for example, absent a court order, handing over customer information to a civil litigant is a violation of the Stored Communications Act, 18 U.S.C. § 2702(a)(1), which holds that anyone “providing an electronic communication service to the public shall not knowingly divulge to any person or entity the contents of a communication while in electronic storage by that service.” No litigation exception exists.

c. illusory breach notification obligations

Finally, where Bill C-12 attempts to add some privacy protective measures by putting in place a desperately needed breach notification regime, it falls flat by severely undermining the effectiveness of that regime. One study of the regime, conducted by the Public Interest Advocacy Centre, concludes that it is “too weak” to ensure user expectations with respect to notification are met. This is partly because the legislation was drafted a few years back and has not kept up with proposals in other jurisdictions, and partly due to apparent efforts to avoid obligations to put in place costly technical safeguards to address breaches.

Breach notification is critical for a number of reasons. To begin with, it allows users of services to take corrective measures to avoid identity theft and related harms in scenarios where their personal data has been exposed. Second, it provides a valuable source of information regarding the scope and nature of cyber security breaches and the effectiveness of data security procedures. Finally, it puts in place important incentives for organizations to take adequate steps to ensure their users’ data is secure. The objectives of security breach notification (SBN) are summarized in the White House’s recent privacy protection framework:

Notice helps consumers protect themselves against harms such as identity theft. It also provides companies with incentives to establish better data security in the first place. The SBN model is also gaining acceptance internationally as a performance-based requirement that effectively protects consumers.

While breach notification was historically focused largely on the first of these three objectives (facilitating corrective action by individuals), the other two rationales (reporting and incentivizing stronger technical safeguards) have grown in importance in recent years. The OECD describes the important transformative effect breach notification obligations have had on enhancing cyber security:

Today, privacy impact assessments (PIAs) are helping organisations analyse the “life cycle” of personal data and take privacy into account before introducing new technologies or programmes. Such efforts can be seen as part of an overall privacy management framework and are an integral part of a mature security risk assessment. This has meant a new focus on information security that recognises that personal data is an asset that requires sustained protection. This transformation of the risk assessment and recognition of the parties potentially harmed from threats to information systems are very significant developments, and, in several countries, are largely a result of data breaches and the consequences that follow under data breach notification laws (i.e. fines, the costs of providing notice to affected individuals, and reputational harm).

Currently, an organization that discovers a breach in its safeguards has little incentive to tell anyone it has occurred. The counter-incentives to disclosure are high: great embarrassment for the organization, potential loss of customers, potential regulatory scrutiny of existing safeguards to see if the problem has been adequately addressed, and potential added regulatory obligations to adopt further safeguards beyond those already adopted at great cost. Under such conditions, the only real incentive to disclose is the risk of getting caught knowingly hiding a breach.

The idea of implementing a breach notification regime is a good one, as the vast majority of U.S. states have found. Indeed, the U.S. has plans to enact federal breach notification legislation as part of its cybersecurity strategy. The Europeans are similarly in the process of adopting EU-wide data breach notification obligations.

The problem is that the breach notification regime sketched out in Clause 11 of Bill C-12 is designed in a manner that constrains subjective organizational decision-making so minimally as to be almost counterproductive.

To begin with, the standards employed by Bill C-12 are so high and so subjective that they will provide minimal real incentive for companies to disclose specific breaches as they occur. C-12 employs a two-tier reporting structure: all “material breaches” must be reported to the Privacy Commissioner, while affected individuals must be notified whenever a breach poses a “real risk of significant harm” to them.

tier 2: real risk of significant harm (disclose to affected individuals)

As a starting point, the second, ‘real risk of significant harm’, reporting tier is problematic. The standard itself is significantly higher than those under consideration elsewhere. The US proposal (which will only apply to organizations with over 10,000 users) obligates user notification whenever it is reasonably believed that sensitive identifiable information has been accessed or acquired, unless there is no reasonable risk of harm or fraud whatsoever. The EU proposal goes further, obligating organizations to notify users in any scenario where a data breach “is likely to adversely affect the protection of the personal data or privacy” of an individual (proposed Article 29).

Second, the standard is far too subjective to be effective. Even well-meaning organizations faced with a decision (disclose this breach or not) will find it difficult to err on the side of disclosure given the many counter-incentives in place. In this sense, subjectivity is likely to work against disclosure, particularly in scenarios that threaten the reputation of an organization or that may lead to forced regulatory adoption of expensive technical safeguards. Yet this second standard is subjective in the extreme, as organizations are given an array of factors to weigh and counter-weigh.

‘Significant harm’ is defined to include ‘bodily harm, humiliation, damage to reputation or relationships, loss of employment, business or professional opportunities, financial loss, identity theft, negative effects on credit record and damage to or loss of property’ (proposed sub-section 10.2(2)). This is a solid definition of significant harm, and should be credited for its inclusion of oft-ignored privacy harms such as humiliation, damage to relationships, and loss of professional opportunities. However, it remains left to organizations to decide which of these factors are implicated and whether the information in question is capable of causing, say, humiliation, or merely embarrassment.

To further complicate matters, proposed sub-section 10.2(3) of Bill C-12 defines ‘real risk of significant harm’ as a product of two other factors: (a) the sensitivity of the information involved in the breach, and (b) the probability that the information will be misused. Of these, the ‘sensitivity’ factor is one that, historically, has proven a workable standard. Indeed, Principle 4.3.4 of PIPEDA provides guidance on how to determine ‘sensitive information’:

Although some information (for example, medical records and income records) is almost always considered to be sensitive, any information can be sensitive, depending on the context. For example, the names and addresses of subscribers to a newsmagazine would generally not be considered sensitive information. However, the names and addresses of subscribers to some special-interest magazines might be considered sensitive.

This has been supplemented by court decisions as well as many findings by Federal and Provincial Privacy Commissioners. In this particular context, organizations will be able to draw on additional guidance from proposed sub-section 10.2(2).

The second factor — (b) the probability that the information will be misused — is not only subjective and difficult to quantify, but will in most cases be more or less impossible to determine. To begin with, the very term ‘misuse’ undermines, to some extent, the list of factors defining ‘significant harm’ found in sub-section 10.2(2). Is it a misuse of information to reveal true facts that may harm an individual’s reputation? Is it a misuse of information to expose an individual’s true religious beliefs in a way that might humiliate her?

More problematic is the fact that the probability of misuse will depend on whether the information falls into the hands of someone who is willing to take advantage of it. Excepting scenarios where it is clear that a specific set of information was purposefully targeted for access by an individual with nefarious intentions, this will by no means be evident. It is a staple of cybersecurity breaches that it is often unclear who breached a system and for what purpose. In other types of breaches, such as the paradigmatic ‘lost USB key full of data’ scenario, it is equally uncertain precisely who ends up with the data and what they might do with it.

The worst impact of this factor, however, will be on cybersecurity. Many cybersecurity vulnerabilities are of a nature that can be exploited without any trace being left behind. When the vulnerability is eventually discovered, by an industrious security researcher or otherwise, there will typically be no indication of whether it has been exploited, even where it involves very sensitive financial information that would be highly useful for fraud and identity theft. Given such ambiguities, this second factor is likely broad enough to defeat notification in many if not all data breach scenarios, particularly in situations where a vulnerability has been discovered but it is not clear whether it has yet been exploited.

Why is this problematic? For one thing, the incentive to audit technical safeguards with a great deal of due diligence or face public outcry is significantly diminished where organizations know they will be able to address breach discovery internally. Second, where one organization discovers a breach, there is no obligation to disclose it to the public so as to ensure that others do not repeat the same mistake. If the breach is discovered by an external security researcher, there is always the possibility the researcher will make the issue public, but in many cases organizations are more likely to respond with legal threats against disclosure than with assistance in the reporting process.

The U.S. and E.U. proposals set the ‘report to affected individuals’ bar far lower in an attempt to better capture these types of scenarios. However, there are problems associated with over-reporting to individuals as well — notification fatigue. Breaches have become frequent enough that individuals are likely to receive numerous notifications over time, and the likelihood of a remedial user response diminishes with each subsequent notification.

tier 1: material breach (report to OPC)

Now, a properly calibrated two-tier breach notification system can alleviate these concerns. The first tier controls disclosure to a third party — the Privacy Commissioner of Canada, in our case — and is set at a significantly lower level in order to provide objective oversight of organizational decisions to ‘go public’. Further, more rigorous reporting of breaches to a central body is important so that we can better understand the scope and nature of cybersecurity vulnerabilities. Finally, even low-level breaches can be indicative of security flaws that need to be addressed in order to avoid a more serious breach. This may strain OPC resources, but given the benefits of a reporting regime of this type, it is well worth the added effort.

Bill C-12 adopts a two-tier reporting obligation of this nature. However, the first reporting tier is not designed to meet this objective (seemingly in response to TSP concerns raised by the specter of costly regulator-imposed technical safeguards to remedy vulnerabilities revealed by breaches reported to the Privacy Commissioner). Instead of setting the first-tier bar low, so as to catch the majority of breaches that might be of interest in light of the objectives identified above, Bill C-12 is carefully designed to ensure the first tier imposes neither a strictly higher nor a strictly lower standard than that found in tier two. This means that some breaches may qualify as posing a ‘real risk of significant harm’ (tier 2: disclose to individuals) but not as a ‘material breach’ (tier 1: report to OPC), or vice versa.

This is achieved by adopting a different, but overlapping, set of standards for tier 1 and tier 2 disclosures. Tier 1 obligates organizations to report ‘any material breach of security safeguards’ to the Commissioner (proposed sub-section 10.1(1)). The factors defining ‘material breach’ include: the sensitivity of the information, the number of individuals involved, and ‘an assessment by the organization that the cause of the breach or a pattern of breaches indicates a systemic problem.’

‘Sensitivity of the information’ overlaps with the same factor found in the tier 2 definition of ‘real risk of significant harm’; however, in this instance it is not modified by the descriptive factors found in proposed sub-section 10.2(2). This means that, for any given breach, an organization may assess the ‘sensitivity’ factor differently for tier 1 and tier 2 reporting, and whether the outcome favours disclosure to the Privacy Commissioner over disclosure to the individual will depend on the specific scenario. ‘Number of individuals affected’ is fairly straightforward, but is notable in that it is not a factor at all in the assessment of ‘real risk of significant harm’.

The final ‘systemic’ factor is the most problematic element of the ‘material breach’ definition. Where a breach involves sensitive information and affects a large number of individuals, the organization may still avoid disclosure if, in its own assessment, it does not deem the breach to be indicative of a ‘systemic’ problem. As with the ‘number of individuals’ factor, the ‘systemic’ criterion does not in any way overlap with any tier 2 ‘real risk of significant harm’ factors.
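To make the divergence between the two tiers concrete, a minimal sketch follows. It is an illustration only: the field names, the numeric thresholds, and the conjunctive weighing of the tier 1 factors are all my own assumptions, since the Bill lists factors but leaves the actual weighing to each organization’s own judgment.

```python
# A sketch of Bill C-12's two-tier breach assessment as described above.
# Thresholds and the way the factors are combined are hypothetical; the
# Bill leaves both assessments to the organization's own discretion.

from dataclasses import dataclass

@dataclass
class Breach:
    sensitivity: float          # 0.0 (innocuous) to 1.0 (highly sensitive)
    misuse_probability: float   # the organization's own estimate, often unknowable
    individuals_affected: int
    deemed_systemic: bool       # the organization's self-assessment

def real_risk_of_significant_harm(b: Breach) -> bool:
    """Tier 2 (proposed sub-section 10.2(3)): sensitivity combined with the
    probability of misuse. Triggers notification of affected individuals."""
    return b.sensitivity * b.misuse_probability > 0.5   # threshold assumed

def material_breach(b: Breach) -> bool:
    """Tier 1 (proposed sub-section 10.1): sensitivity, number of individuals,
    and the 'systemic problem' self-assessment. Triggers a report to the
    Privacy Commissioner. Nothing in the Bill stops an organization from
    weighing the factors conjunctively, as modelled here."""
    return (b.sensitivity > 0.5
            and b.individuals_affected > 100            # cutoff assumed
            and b.deemed_systemic)

# The scenario discussed in the next paragraph: one person's highly sensitive
# data exposed, misuse all but certain, but judged a one-off by the organization.
b = Breach(sensitivity=0.9, misuse_probability=0.9,
           individuals_affected=1, deemed_systemic=False)

print(real_risk_of_significant_harm(b))   # True  -> the individual is notified
print(material_breach(b))                 # False -> the OPC never hears of it
```

Because neither test subsumes the other, each tier can trigger without the other, which is precisely the design flaw discussed next.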

This means that in cases where sensitive information relating to one individual alone has been breached, in a scenario where a high likelihood of misuse is evident, the individual might be notified, but not the OPC or the general public. This is problematic because there will be no record of the breach and, more importantly, no one to verify objectively whether the organization effectively dealt with the cause of the breach in a manner likely to prevent its repetition. The OPC can only initiate an investigation into a suspected breach of Principle 4.7 of PIPEDA (the obligation to put in place reasonable technical safeguards) if it is first notified of the breach. The EU proposal, by comparison, would obligate organizations to disclose any personal data breach to a regulator, along with information on the cause of the breach and on how the organization intends to prevent it from recurring (proposed Article 28.4).

Similarly, the Uniform Law Conference of Canada’s draft Breach Notification Act obligates organizations to “promptly file a report respecting any unauthorized access or disclosure with the Commissioner” (sub-section 101(3)). As with the E.U. proposal, the ULC draft requires details regarding the nature of the breach, and the steps taken to remedy it, to be included in the report (section 102).

In conclusion, the great innovation of the two-tier notification system employed by Bill C-12 is entirely undermined by its use of overly high, overly subjective and overlapping standards. As designed, the first tier cannot operate as the much-needed oversight mechanism meant to ensure that “the judgment about the degree of risk is subject to review by the Commissioner, and not left solely to the holder, who may have conflicting interests.”

no incentive to comply with the Act

Bill C-12 is also lacking in the ‘teeth’ department. To begin with, in those scenarios where an organization decides a breach is ‘material’ but does not pose a ‘real risk of significant harm’, the OPC is not granted the power to force the company to notify affected individuals. If it wanted to do this, it would need to initiate an investigation (which can take as long as 16 months [page 72]) and then find the organization in non-compliance.

While the Finding itself may act as a ‘notification’ to the public (assuming the OPC drops its historic aversion to ‘naming and shaming’ wrongdoers), the OPC does not at this time have the power to issue fines for non-compliance or to award penalties for damages caused.

If these were to be sought, the OPC (or a complainant) would need to take the complaint to Federal Court (a costly process involving a trial de novo under section 14 of PIPEDA). But even then, no damages-based incentive to comply will be forthcoming. PIPEDA currently allows judges to issue three types of remedies for breaches of the statute: an order that the organization correct its practices (16(a)); an order that it publish notice of the corrective actions taken (16(b)); and an order for damages, including for humiliation suffered (16(c)). Unfortunately, while Clause 14 of Bill C-12 expands subsection 16(a) to include remedies for elements of the data breach notification regime, it does not do so for subsections 16(b)-(c). One wonders why a complainant would ever bother taking non-compliance to court or, for that matter, why an organization would err on the side of compliance.

Finally, with respect to failures to disclose tier 1 ‘material breaches’ (assuming such failures are ever discovered), Bill C-12 does not even permit the Federal Court to order an organization to correct its future practices with respect to compliance with this obligation. Subsection 16(a) is only extended to mandate compliance with the tier 2 ‘real risk of significant harm’ obligations, while violations of the ‘material breach’ reporting obligation are expressly excluded from 16(a) remedies. This means that the OPC, and even the Federal Court, will be powerless to stop organizational notification policies designed in a way that allows for the repeated misclassification of ‘material breaches’.

The penalty for failing to disclose a material breach or a breach posing a ‘real risk of significant harm’ is... nothing. As currently designed, the worst possible result of non-compliance is delayed public exposure of the fact that an organization failed to notify anyone of a breach; and that incentive is already in place. While the OPC has recently signaled its intention to call for order-making and fine-imposing powers in general (presumably these would cover the breach notification regime as well as existing PIPEDA obligations), there appears to be little Government will to update PIPEDA or improve privacy protections. The delays over PIPEDA’s mandatory five-year review (slated to have been completed last year) and over Bill C-12 itself are evidence of that.
