What if Your Personal Digital Assistant Defames Somebody?
We recently had a discussion about police access to the recordings made by in-home digital assistants like Amazon’s Alexa and its (her?) ilk.
Now our focus turns to the actions of these devices if they do bad things themselves. This story reports that Siri, Apple’s version, routinely answered requests in Toronto for prostitutes by referring the inquirer to an “eSports bar” – one where clients play electronic sports games. Apparently the word may be too close to “escorts” for Siri’s sense of discrimination. It is clear – take it as established for the present discussion – that the bar is NOT a hangout for prostitutes.
So: Has a tort been committed? Even in these days of largely legal prostitution, being known as a hangout for prostitutes can lower a bar in the estimation of public opinion. So: defamation.
Who is liable, if anyone? Canada’s rules of intermediary liability are a bit vague, but tend to impose liability once the intermediary knows of the defamation. Apple has been told.
Even in the US, where s. 230 of the Communications Decency Act gives very broad immunity to intermediaries, that immunity is for content provided by others. Siri’s algorithms for finding answers to questions are proprietary to Apple – so the company is not really an intermediary, is it? It’s a primary party.
There are jurisdictional issues, of course, but can somebody sell products and services (the device is probably both) in Canada and escape all liability for how it works? Would this be a defamation action or a product liability action? (The bar owner does not currently seem inclined to sue, but that does not stop us from speculating.)
Is this a good case study for the Law Commission of Ontario’s current project on online defamation? Or is it open-and-shut except for the novel medium of publication?
Interesting! I’m actually working (with Emily Laidlaw at UCalgary) on a paper for the LCO on internet intermediary liability. So this is something we’ve been thinking a lot about — not the Siri example specifically, but it’s quite similar to the examples of Google search results or autocompletes, which are the subject of case law.
My first thought is that it’s not obvious it’s defamation at all. Defamation requires an intent to convey meaning, and courts have disagreed about whether there’s intent when a machine generates defamatory content, albeit operating as it is programmed to do. Most UK cases have said Google isn’t a publisher of search engine results — at least before notice. Some Australian courts have said the opposite, but most of the academic discussion favours the UK approach. In Canada, it’s unclear what the outcome would be, but the trend seems to be toward publication requiring knowledge. Arguably, then, in Canada Siri/Apple wouldn’t be a publisher of the content. That said, once notice is received, if the conduct continues, under current law there would be publication.
In terms of where the law is heading, my personal preference is for the common law of defamation to require knowledge of particular words for publication. If that were the law again, Siri/Apple wouldn’t be a publisher — not even after notice. Emily and I think that the best way to deal with intermediary liability is through regulation of intermediaries (outside the realm of defamation law), with rules similar (but not identical) to copyright’s notice-and-notice regime. Then again, that might not be helpful in this particular example since Apple is both the intermediary and the content creator.