AI Requires a Different Lens Than Human Decisions

There has been a lot of consternation over artificial intelligence, and rightly so. The technology will be disruptive in many ways, and there are ethical issues to be solved, embedded bias for example. Machine learning, the form of AI behind much of the current wave, learns by examining large data sets. Those data sets could cover how to play a game, whom to hire, or even whom to convict.

If the data set contains bias, the AI will learn and propagate that bias. This problem has already produced unexpected results, some of them obvious and some of them subtle.
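The mechanism is easy to see in miniature. Below is a deliberately simple sketch with hypothetical data: a naive frequency-based "model" that learns hiring decisions from biased historical records. The group labels, data, and function names are all invented for illustration; real systems are far more complex, but the principle is the same: whatever pattern is in the training data comes out in the predictions.

```python
from collections import defaultdict

# Hypothetical historical records: (group, hired). Past human
# decisions favored group "A", and the data encodes that bias.
training_data = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def learn_hire_rates(records):
    """Learn the observed hire rate for each group from the data."""
    counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
    for group, hired in records:
        counts[group][0] += int(hired)
        counts[group][1] += 1
    return {g: hired / total for g, (hired, total) in counts.items()}

rates = learn_hire_rates(training_data)

def predict_hire(group):
    """'Decide' by imitating the historical rate: hire if rate > 0.5."""
    return rates[group] > 0.5

print(rates)              # {'A': 0.75, 'B': 0.25}
print(predict_hire("A"))  # True  -- the learned model favors group A
print(predict_hire("B"))  # False -- and penalizes group B
```

Nothing in the code mentions fairness or intent; the "model" simply imitates its training data, which is exactly how biased data sets become biased decisions at scale.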

In some ways, this challenge is similar to the earlier analogue-to-digital conversion.

When new tech comes along, it is natural to view it through the same lens as existing tech. That doesn't always work, and we need to think about how it is different.

An example from the analogue-to-digital conversion is the concept of practical obscurity. Many records kept on paper were considered public records, court records for example. But few people actually looked at them, because doing so meant going to the courthouse, asking somebody, digging through documents, and generally investing a lot of time. Some jurisdictions decided to put court records online for anyone to see. Because they were now so easy to browse, the "nosy neighbor" problem forced a rethinking of whether those records should really be that public. In other words, the digital world took away the practical obscurity that existed in the paper world, prompting a rethink of public vs. private.

When adopting AI, we also must think about the lens we view it through. After all, AI just makes the same decisions a human would make, right? But that framing doesn't account for how the AI learned to make those decisions. And it doesn't account for the speed of those decisions, or for the human ability to pause and reflect on other relevant factors before acting.

While the lens issue is similar, AI raises it to a higher level of importance.
