Big Brother in Your TV? 10 “Freaky Line” Things to Think About

There has been a big kerfuffle in the last few days over the thought that Samsung smart TVs are listening to and recording TV watchers’ conversations via their voice command feature. That arose from a clause in Samsung’s privacy policy that said in part “…if your spoken words include personal or other sensitive information, that information will be among the data captured and transmitted to a third party through your use of Voice Recognition.”

Samsung has since clarified this language, explaining that some voice commands may be transmitted to third parties to convert the command to text and make it work, and pointing out that you can simply turn the feature off. That is similar to how Siri, Google Now, Cortana, and other voice command platforms work. Some voice commands are processed locally, and some may require processing in the cloud. How much is done locally and how much in the cloud varies depending on the platform and the nature of the command.

While one should never reach conclusions based on press reports alone, the probability is that this issue was way overblown. But it does show how challenging privacy issues can get when it comes to technology and the internet of things (IoT).

Issues to ponder include:

  1. The importance of designing privacy into tech – often called “Privacy by Design” – rather than trying to bolt it on later.
  2. How complex privacy is in the context of modern and future technology, where massive amounts of data are being collected on us from almost everything – fitness trackers, web browsers, smartphones, cars, thermostats, and appliances. Not to mention government surveillance by agencies such as the NSA and the Canadian CSE.
  3. The mothership issue – meaning where does all that information about us go, how much is anonymised, what happens to it when it gets there, and who gets to see or use it?
  4. How difficult it is to draft privacy language that protects the business from claims it has acted outside its policy, without suggesting that it does unwanted things with information, all while remaining clear and concise.
  5. How difficult it is for the average person to understand what is really happening with their information, and how much comfort comes – or doesn’t come – from a trust factor rather than a technical explanation.
  6. How easy it is for a business that may not be doing anything technically wrong, or may simply be doing the same as everyone else, to become vilified for perceived privacy issues.
  7. Have we lost the privacy war? Are we headed to a big brother world where governments and businesses amass huge amounts of information about us, with creeping (and creepy) uses for it?
  8. Are we in a world of tech haves and have-nots, where those making the most use of tech will be the ones willing to cross the “freaky line” – the point at which the good from the use outweighs the bad from a privacy perspective?
  9. Are we headed to more situations where we don’t have control over our personal freaky line?
  10. Where is your personal freaky line?


Comments

  1. A lot of the most popular smartphone apps ask (in the terms of use that most people accept without reading) for lots of permissions that make people nervous if they do read them. Some may have innocent explanations. Others may invoke that “freaky line” (or freak-out line?).