Big data and privacy was one of the topics discussed at the Canadian IT Law Association conference this week. Some of the issues worth pondering include:
- Privacy principles say to collect only what you need, and keep it only as long as needed. Big data says collect and retain as much as possible in case it proves useful.
- Accuracy is a basic privacy principle – but with big data, accuracy is being replaced by probability.
- A fundamental privacy notion is informed consent to the use of one’s personal information. How do you obtain informed consent and preserve control for big data uses when you don’t know what the information might be used for or combined with?
- Probability means that the inferences drawn may not always be accurate. How do we, as individuals, deal with erroneous inferences about us?
- If the results are based on information that may itself be questionable, the results may be questionable. (The old garbage in, garbage out concept.) It has been proposed that for big data and AI, we might want to add to Asimov’s three laws of robotics requirements that a system not discriminate, and that it disclose its algorithm.
- If AI reaches conclusions that lead to discriminatory results, is that going to be dealt with by privacy regulators, human rights regulators, or some combination?
- Should some of this be dealt with by ethical layers on top of privacy principles? Perhaps no-go zones for practices felt to be improper, such as capturing audio and video without notice, charging to remove or amend information, or re-identifying anonymized information.