US Federal Trade Commission Publishes Guidelines on Facial Recognition

The American Federal Trade Commission has published a staff report entitled Best Practices for Common Uses of Facial Recognition Technologies [PDF]. The FTC has jurisdiction under 15 USC § 45(m) to make rules “respecting unfair or deceptive acts or practices” in commerce. As I’m sure you’ll know, facial recognition technology is advancing fast and has already found its way into software such as that used to organize photographs by the identity of the people in them (and, presumably, into the various operations of the authorities concerned with security — something left untouched by the report, of course).

The report makes a number of recommendations intended to operate as “best practices” or guidelines:

  • “. . . companies using facial recognition technologies [should] design their services with privacy in mind . . . . maintain reasonable data security protections . . . . [and] consider putting protections in place that would prevent unauthorized scraping . . . .”
  • “. . . companies should establish and maintain appropriate retention and disposal practices . . . .”
  • “. . . companies should consider the sensitivity of information when developing their facial recognition products and services.”
  • “. . . provide consumers with simplified choices and increase the transparency of their practices . . . . [e.g.] clear notice that technologies are in use . . . .”
  • “. . . obtain consumers’ affirmative express consent before collecting or using biometric data from facial images” [in at least two scenarios: where the use is materially different from what was represented at the time of collection; where an otherwise anonymous image of a consumer would be identified to someone who could not otherwise identify him or her]

There is a dissenting statement by one of the FTC Commissioners at the end of the report.

It strikes me that this is pretty mild stuff, given the potential for the near eradication of privacy, at least as currently understood, that facial recognition software presents. On the other hand, I’m unaware of any similar guidelines issuing from a Canadian government.

[hat tip: @randypicker]

Comments

  1. David Collier-Brown

    They completely miss the problem we saw at Siemens, working with the German security services: false positives!

    If you’re looking for one “bad guy” in an airport, and the program claims “only one error in a thousand”, then you can expect to pull one person out of line for every thousand passengers.

    If, on the other hand, you’re looking for a thousand bad guys, you can end up getting a “check this guy” alert for everyone who walks in front of the camera. Including grandma!

    It’s related to the birthday paradox. If you have 23 people in a room, there’s a 50% chance that at least two of them share a birthday. That’s because comparing each person against the other 22 yields (23 × 22) / 2 = 253 distinct pairs.

    Now compare 1000 passengers against a watchlist of 1000 bad guys for similar facial features, and you will have 1000 × 1000 = 1,000,000 comparisons. At a 1/1000 chance of a false identification, that’s roughly 1,000 people the program will think it recognizes and have you check manually.
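    (The arithmetic above can be sketched in a few lines of Python. This is just an illustration of the commenter’s reasoning, assuming a uniform 1-in-1000 false-match rate per comparison and a cross-comparison of every passenger against every watchlist entry; real matchers don’t behave this cleanly.)

    ```python
    from math import comb

    # Birthday paradox: 23 people form C(23, 2) = 253 distinct pairs.
    pairs_23 = comb(23, 2)

    # Probability that at least one of those 253 pairs shares a birthday,
    # treating each pair as an independent 1/365 coincidence (a close
    # approximation): about 50%.
    p_shared = 1 - (1 - 1 / 365) ** pairs_23

    # Screening: each of 1000 passengers compared against a 1000-entry
    # watchlist gives 1,000,000 comparisons.
    comparisons = 1000 * 1000
    false_match_rate = 1 / 1000

    # Expected number of false "check this guy" alerts.
    expected_false_hits = comparisons * false_match_rate
    ```

    Running it gives 253 pairs, a shared-birthday probability of about 0.50, and 1,000 expected false alerts — which is why a “one in a thousand” error rate sounds fine for one suspect and collapses for a thousand.
    
    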

    There are some genuinely good uses of facial recognition: Ann Cavoukian, the Ontario Information and Privacy Commissioner, came up with one that’s privacy-preserving and fairly insensitive to false positives, in cooperation with the casinos of Ontario.

    However, if you apply any approximate matching scheme without knowing about the birthday paradox, you can get some interesting results.

    The classic is having one’s grandma identified as a possible member of the Baader-Meinhof gang. That, if the story is true, was the tip-off to the FSS that something was very wrong in the state of Germany (:-))

    –dave