Further Legal Snapshots From the Internet of Things

Interconnected computers – computers that talk to each other – are no longer a novelty. These days, one is more inclined to be surprised by an electronic device, or even an electrical device, that is not part of a network.

We looked at some legal implications of interconnections a few years back. Here are several more, roughly divided into issues about privacy and security (which tend to overlap). Feel free to add others in a comment.


By definition, interconnected devices communicate information about themselves or their environment, or both, to other devices. That information can and usually does flow through servers owned by the creators of the devices. What do the creators do with that information, besides making it available to other devices? In many cases, who knows?

Smart homes

If you buy a ‘smart’ lightbulb, does it come with a privacy policy? Anyone who wants the smartness has no choice but to accept the policy, even supposing it could be found. Will people have to sign a consent form when they visit the hardware store?

Why should I care if someone – even a lot of people – knows when my lights are on or off? Someone wanting to know whether I was home might … but a smart bulb can be set to stay on even when I’m not. The more serious issue comes with big data, i.e. the aggregation of many points of information to create a detailed picture of a person’s habits – a picture that can reveal interests, opinions, and activities.

Hearing voices

Information about their environment goes well beyond heat and light data. Devices that record and transmit speech are common, and personal assistants increasingly recognize human voice commands. Where does that information go, and what can be done with it?

We looked at interactive dolls last year. They haven’t shut up since; they’ve become even more problematic – or maybe the problems are being more widely recognized. Germany has banned the sale of one kind of doll for this reason. Lisa Lifshitz has reviewed this and other possibly creepy examples from the world of our children.

Listening and talking devices for adults are increasingly common – Siri, Alexa, Cortana, Google Home, and no doubt more. Law enforcement has been interested in what they may have heard – even if they were turned on, i.e. started recording, without the owner of the device choosing to activate them. Is this the kind of evidence that should be usable against someone, whether or not it was collected with a warrant? Is it different from having a tape recorder running? Some tape recorders were voice-activated and could have presented a similar problem – but they were not ubiquitous, and they did not put the ‘evidence’ in the cloud, or in somebody else’s hands. Do numbers make a difference to the principle?


We have also looked at wearable technology. The creators of a device are not the only ones interested in how your body is doing – and, by analysis, in what it is doing. Insurance companies would probably pay to find out about their potential policyholders.

The data have found their way into courts too. A woman complaining of rape was charged with obstruction of justice because her Fitbit-recorded activity did not align with her story about the alleged assault. A man was charged with arson and insurance fraud when his pacemaker did not support his story about how his house burned down. And a man was charged with murdering his wife when his alibi was disbelieved because of the evidence of her Fitbit, which showed she was alive after his story said she was dead.

And the ultimate in wearables may be the device reported on here by Omar Ha-Redeye last month, the nature of which is suggested in his title ‘the intimate of things’… and the lawsuit it provoked.


Privacy concerns are often framed as intrusions into one’s personal space, and intrusion is also the language of security. Offences against privacy may result from weaknesses in security, but such weaknesses may have much broader consequences.

Connectivity above all

The Internet of Things is notoriously insecure, and it is not getting much better. The people who build and sell the devices are more interested in connectivity than in security. Some devices have little room for much code, so elements of access control such as passwords, along with the capacity for updates, patches and the like, are simply not included. Software with known vulnerabilities may be incorporated, even when more recent patched versions are available, out of convenience or because the developers did not take the time to research the point.

Smart but unsafe homes

A recent example involved Aga stoves, a big brand in the United Kingdom. Aga developed stoves that could be turned on and adjusted by text message – but did not build in security. As a result, a hacker could turn on a stove, and not just ruin dinner but possibly burn down the house.

And speaking of turning on the device: Burger King got into some controversy, not accidentally, by running a TV ad that called out the ‘code’ to activate Google Home – at which point the device would read out the Wikipedia article on the Whopper. A small technology war ensued, and at last notice the home devices no longer respond to such commands. But given the amount of information that people access through the devices, it is a striking vulnerability.

Nonetheless, as this article points out, “because voice assistants are so new and limited in scope, more established connected devices such as webcams, routers and printers pose more of a threat for now.” Printers? I have been surprised at how often my phone, searching for available wi-fi networks, has picked up printers, as well as other people’s phones. They are password-protected, but passwords are notoriously vulnerable, at least as most people use them.

The generality of the risk – i.e. how widespread the damage could be – was underscored by the collapse of large parts of the Internet in North America in the autumn of 2016, through an attack launched via smart home devices.

Private protections

Some steps are being taken by some manufacturers, it must be said. According to the article above on the main threats, “Amazon already has options for setting up security codes to shop, make financial transactions or unlock and start cars.” Progress is being made to have the personal assistant devices respond only to authorized voices.

But proprietary systems may not always be the best route. Apple has been promoting security for the smart home through its HomeKit, but all devices must be Apple devices or connected by intermediary devices sold by Apple. One understands the need to have consistent standards, but consistency can lead to captive markets too. The difficulties with the Apple system have been criticized on that ground. “What frustrates me is that HomeKit ignores all previous work done to standardize the Internet of Things, leaving thousands of useful products incompatible.”

One might note here some “private enterprise” security – “white hat” computer experts who hack into interconnected devices and disable their capacity to be taken over by the “black hats”. Two such operations have recently been noted: BrickerBot and Hajime. The authors of the article cited deplore the illegality of such hacking and describe its authors as “gray hats”. Most of the comments on the article seem to favour their operations, in the absence of better security from the manufacturers. (The legality of “hacking back” against attacks has been discussed here and here.)


Here as elsewhere, standards may be the answer. Are the manufacturers of connected devices interested in the arduous work of developing standards, however, or just in marketing the latest flashy controllable device (Lightbulbs! Stoves! Toys!), or in pulling all the multitudes of the Internet of Things into their own feudal domains?

Some early work that could lead to standards is being done on the legal front by the American Bar Association Section of Science and Technology and by the insurance industry concerned about liability rules. (In the latter document, see the discussion topics on wearable computing and the Internet of Things, among others.) Likewise, leading legislation by important jurisdictions such as this bill in California (a state of early adopters, as well as developers, of technology) could set other jurisdictions on an accepted path.

In addition, some have speculated that major buyers of connected technology could insist on security by contract, and the influence of large contracts could set a standard in practice.


Technology development is never-ending. The likely route for the Internet of Things is increasing awareness of the vulnerabilities it creates for privacy and security, without any slowing of innovation. Battles will be fought here, as elsewhere online, between feudal or proprietary solutions on the one hand and publicly developed open standards on the other. The former appear to have the upper hand at present. The imposition of civil liability for unduly insecure devices – now at a very early stage – may promote improved practices but is unlikely to resolve the private vs. public debate.

Meanwhile, it may be wise not to be too smart.


  1. The United States Government Accountability Office (GAO) has recently issued a comprehensive study of the Internet of Things. A summary is here.

    Its overview of the challenges:

    Information security. The IoT brings the risks inherent in potentially unsecured information technology systems into homes, factories, and communities. IoT devices, networks, or the cloud servers where they store data can be compromised in a cyberattack. For example, in 2016, hundreds of thousands of weakly-secured IoT devices were accessed and hacked, disrupting traffic on the Internet.

    Privacy. Smart devices that monitor public spaces may collect information about individuals without their knowledge or consent. For example, fitness trackers link the data they collect to online user accounts, which generally include personally identifiable information, such as names, email addresses, and dates of birth. Such information could be used in ways that the consumer did not anticipate. For example, that data could be sold to companies to target consumers with advertising or to determine insurance rates.

    Safety. Researchers have demonstrated that IoT devices such as connected automobiles and medical devices can be hacked, potentially endangering the health and safety of their owners. For example, in 2015, hackers gained remote access to a car through its connected entertainment system and were able to cut the brakes and disable the transmission.

    Standards. IoT devices and systems must be able to communicate easily. Technical standards to enable this communication will need to be developed and implemented effectively.

    Economic issues. While impacts such as positive growth for industries that can use the IoT to reduce costs and provide better services to customers are likely, economic disruptions are also possible, such as reducing the need for certain types of businesses and jobs that rely on individual interventions, including assembly line work or commercial vehicle deliveries.

  2. And to add to the vulnerability list on the security side: here Bruce Schneier contemplates ransomware applied to IoT devices – so you can’t start your car and your thermostat won’t work, and the devices aren’t patchable …

    He suggests that software developers should be liable for such problems, to give them an incentive to fix them, even though the market rewards the swift and cheap rather than the complex and more costly.

    Do you agree?