Column

Incentives for Internet Security

Almost everything of social or financial value is now online in some form, benefitting in many ways from interconnection with the world and, in many other ways, tempting the world’s thieves and saboteurs. As a result, Internet security has never been more important to personal, corporate and political interests than it is now.

Yet we read weekly of new damage done to online resources: legal service firms taken offline by ransomware, virtual currencies hijacked, endless personal records stolen from enterprises in all lines of business. It is remarkable how rarely critical infrastructure – power supplies, transportation, communications – is taken down, given the damage that such attacks could do. (It has happened in other countries, though – notably Estonia, Georgia and Ukraine, where Russia has been blamed. And election systems are critical infrastructure in their way as well.) Perhaps that’s because there is less money in attacking critical infrastructure than in other forms of harm, and hackers with strategic rather than monetary interests are either biding their time or wary of a “kinetic” response (i.e. physical rather than electronic retaliation).

Despite the sophistication of some of the attacks, a significant number seem to succeed because of sloppy conduct by the victims: failure to change default passwords, failure to patch software, inattention to the kinds of devices connected to important data banks, and persistence in clicking on dangerous links or attaching insecure hardware.

In short, a lot of the harm could be avoided by known defensive conduct. The bad guys can still succeed with well-known attacks and with vulnerabilities for which patches have long been available.

A good argument can be made that more effort is needed to ensure that custodians of important online resources defend them properly. Question: what kind of effort? What incentives will work? What should be done to raise standards of conduct in information security?

This topic was canvassed by a couple of panels at the recent RSA 2020 Conference.

Nanny State panel

The first discussed a range of options “from the nanny state to the invisible hand.” The former is represented by detailed, top-down regulation, often backed by criminal consequences or administrative penalties. The latter could take the form of lost market share, if not outright bankruptcy, from a loss of consumer or business-partner confidence. Civil litigation sits toward the invisible-hand end of the spectrum, though less so where the standard of care is established by regulation.

A chart was used as the basis of discussion, showing measures mainly but not exclusively in the US laid out along a scale from market-driven on the left side to strict regulation on the right.

Source: Gilbert Sorebo, Accenture

One notes that privacy regulation is listed in the middle, though the EU’s General Data Protection Regulation might be shown as further along toward the nanny state end than, say, PIPEDA. (The GDPR is a ‘spare the rod…’ kind of nanny.) Health information protection tends to be stricter than that given to other forms of personally identifiable information.

Members of the panel, including a civil litigator and a health law expert, elaborated on the experience with the different techniques and incentives. It was noted that Canada has a National Cyber Security Strategy and a five-year Cyber Security Action Plan. Both rely heavily on cooperation between government and the private sector, with three points of focus: resilience, innovation and collaboration. Very definitely not the nanny state, except for the availability of money to help build and maintain cybersecurity facilities. Canada’s nanny holds the purse strings but not the rod.

The discussion ended with a sense that nothing was really working very well. Whatever is tried, and even though security technology keeps improving, the number of security failures does not go down. The program had invited the panel to make policy recommendations to improve security, but no one felt very confident that any would make a difference.

In particular, panel members were not optimistic about the role of cyberinsurance. While policies have been offered for some years now, nobody thought that insurers had a serious risk model to hold their clients to. Policies may refer to known standards (those of the National Institute of Standards and Technology – NIST – and the International Organization for Standardization – ISO – being the main ones), but they also tend to require adherence to “reasonable best practices.” Insurers’ expertise lay more in avoiding paying out on policies than in setting standards of conduct that clients could follow to reduce premiums or to ensure coverage when losses occurred.

Investors panel

Another panel asked whether investors care about information security. The examples given were of venture capitalists and private equity funders rather than the general market investor. This panel was more optimistic than the “nanny state” panel, asserting that funders did pay attention to security and that businesses, whether startups or mature companies, did better at raising money if they could demonstrate good security practices.

There was little detail, however, about what sort of cybersecurity due diligence these investors practice when they say security is a priority for them. Do they hire an independent firm to assess the security – to dig into how devices are configured and where information is stored, and to review the effectiveness of the controls in place? Do they apply the same standards as insurers?

Mailing list discussion

Before the Conference, I asked members of a mailing list on electronic communications law and policy what incentives would work. A number of interesting responses followed. It was pointed out, on the ‘invisible hand’ end, that no Canadian court had ever held anyone civilly liable for negligent information security, though some class actions had been settled. Proving that damages resulted from any particular incident or breach of security was normally too difficult.

Information security is usually thought of as having three components: confidentiality, integrity and availability. A case was made at RSA for adding safety (where corrupted or manipulated information could harm people’s health, for example, or where the electricity grid could be knocked out of service), resilience and recovery.

Data breach legislation, and the powers of the Privacy Commissioner, may be thought to focus only on confidentiality. The availability and integrity of data should not be overlooked in the regulatory agenda.

Members of the list seemed to agree, however, that mandatory reporting of data breaches was desirable. The threat to the reputation of those reporting would be some incentive to better security. PIPEDA has had such a requirement since November 2018, and the Investment Industry Regulatory Organization of Canada (IIROC) has new cybersecurity reporting rules as well, which require reports on what steps have been taken to prevent a repeat incident. The IIROC obligation goes beyond privacy breaches. (The Fasken law firm has a comparative chart of reporting obligations that may be of interest.)

On the other hand, one wonders whether the volume of such reports tends to dull the senses, i.e. one stops paying attention, and reputation is not very seriously harmed – so the incentive is not very strong. Most US states have had mandatory breach reporting for some years, and the breaches continue.

The suggestions for promoting information security that attracted the most support on the list (among a relatively small number of participants) were:

  • Holding corporate directors personally liable for damages caused by security breaches. Security breaches do sometimes affect share prices, and that might be a form of damage to be compensated, even if direct personal harm from information leaks is harder to show. Share prices may fall at news of a breach, but they tend to recover.
  • Making one or more senior officers certify that proper information security is in place. This would parallel the requirement under the U.S. Sarbanes-Oxley Act that the Chief Executive Officer and Chief Financial Officer certify that the audited financial statements are accurate. It was noted, however, that auditors in Canada have not succeeded in using disclosure of security levels as an effective lever for consistent adoption of information security governance, risk management and compliance.

Companies these days tend to mention information security risks among their statements of material risks that could affect the future performance of the company (and its shares). However, many of these mentions are very vague, essentially along the lines of “we might get hacked and that could cost us”, with some non-specific assurance that information security is important to the enterprise. Investors may find such statements of small comfort.

  • Giving the Privacy Commissioner of Canada (and presumably any provincial counterparts who need it) the power to make orders of compliance with privacy statutes and to issue administrative monetary penalties. While personal information is not the only thing of value in information security – other types of information could be more important – the data governance patterns revealed by leaks of personally identifiable information may reflect more general practices in the enterprise. Privacy statutes usually contain at least a clear statement of the duty to keep personal information secure. The legal obligation to keep other information secure is really one of general prudence rather than of statute.

Conclusion

Threats to information security are not going away, and despite the progress of technology to fight them, few experts are prepared to say that defenders of information are winning the war. That said, good data governance is very valuable to business and government. The law needs to promote this objective, in the ways already tried and perhaps in new ones.

Afterword

A personal and corporate perspective on keeping information secure was offered at RSA by Frank Abagnale, of “Catch Me if You Can” fame. Mr. Abagnale has been consulting with the FBI and other law enforcement agencies for the past forty years, and he had some thoughts on how individuals can improve their information security practices. He recently published a book for the American Association of Retired Persons (AARP) called “Scam Me if You Can”, a collection of knowledgeable warnings. His session was a useful reminder that information security should start at home.