An AI to Make Me Smarter

At Kobo last week, we had a guest-talk about machine learning by Sheldon Fernandez of Infusion.

After a good grounding in deep learning systems, which mimic the human brain to a degree, we got to the interesting stuff: inscrutability, hidden factors, and confounding variables. All of these pose problems for people trying to use AI, and illustrate why others fear it.

Inscrutability in AI is exactly what it is in people: an AI often cannot tell us how it arrived at a decision, or its description is so convoluted that it is almost worthless.

When we dig in to the explanations it offers, we find two other interesting things:

Hidden factors.

In an HR experiment, an AI noticed that people who live only a few blocks from the office stay with the company significantly longer, and its operators therefore proposed that as a criterion for hiring.
A human HR person immediately objected: ‘What, you want me to only hire rich people who can afford to live downtown?’

The program had detected a real factor that the HR people had used in the data it was trained on, and exposed it. Unfortunately, the factor represented a bias; fortunately, the reviewers were sharp enough to recognize it.
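To make the mechanism concrete, here is a toy simulation (all numbers are invented for illustration) in which income drives both how close someone can afford to live to a downtown office and how long they stay, so that "blocks from the office" becomes a proxy for income:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical income, in arbitrary units; it drives both variables below.
income = rng.normal(100, 20, n)

# Higher income -> fewer blocks from the downtown office.
blocks_from_office = np.clip(60 - 0.4 * income + rng.normal(0, 5, n), 1, None)

# Tenure in months, driven by income, not by distance itself.
tenure = 24 + 0.3 * income + rng.normal(0, 6, n)

# A model sees a strong signal in blocks_from_office...
print("corr(blocks, tenure):", np.corrcoef(blocks_from_office, tenure)[0, 1])
# ...but the feature is really a stand-in for income.
print("corr(blocks, income):", np.corrcoef(blocks_from_office, income)[0, 1])
```

In this toy world, distance predicts tenure very well, yet acting on it would simply be hiring by income with extra steps.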

Confounding variables.

In a related example, a (human-run) study appeared to argue that coffee leads to heart attacks. In this, as in some AI cases, the reason was a "confounding" variable. The class of coffee drinkers happens to include the great majority of smokers, and only a small subset of non-smokers. This confounded the actual evidence, and gave the impression that coffee was a contributing factor in heart attacks, rather than the correct observation: that smoking is the contributing factor.
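To see how a confounder produces this illusion, here is a toy simulation in which heart attacks depend on smoking only, yet a naive comparison makes coffee look dangerous. All of the rates are invented purely to illustrate the mechanism:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

smoker = rng.random(n) < 0.3
# Smokers are much more likely to drink coffee in this toy world.
coffee = rng.random(n) < np.where(smoker, 0.9, 0.4)
# Heart attacks depend on smoking only, never on coffee.
heart_attack = rng.random(n) < np.where(smoker, 0.15, 0.03)

def rate(mask):
    return heart_attack[mask].mean()

# Naive comparison: coffee looks dangerous.
print(f"P(HA | coffee)    = {rate(coffee):.3f}")
print(f"P(HA | no coffee) = {rate(~coffee):.3f}")

# Stratify by the confounder and the "effect" of coffee vanishes.
for s in (True, False):
    grp = smoker == s
    print(f"smoker={s}: coffee {rate(coffee & grp):.3f}"
          f" vs no coffee {rate(~coffee & grp):.3f}")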

All of these make the advice of an AI untrustworthy: it doesn’t know how it reasons, it’s very sensitive to biases in the data leading it astray, and it can simply get the wrong answer when the training data contains confounding variables.

The EU is sensitive to this, and to a set of related algorithmic problems, and posits, in the preamble to the GDPR, that an individual has the right to ‘an explanation of the decision reached after [algorithmic] assessment’ (Goodman and Flaxman, as cited by Ethan Chiel).

All in all, not a good picture for those wanting to take advantage of AI in law.

Now turn this on its head.

If AI has trouble understanding itself, then human intelligence may be what is needed to help it introspect. At the very least, a human can write a deep learning system to learn how deep learning systems make decisions.
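In the same spirit, though simpler than a second deep learning system, a common introspection technique is the global surrogate model: train an interpretable model to imitate the black box's outputs, then read the simple model's rules. A minimal sketch using scikit-learn, with a random forest standing in for the inscrutable model and a built-in dataset as a stand-in for real data:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y = data.data, data.target

# The inscrutable model: hundreds of trees, no single readable rule.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The surrogate is trained to imitate the black box's outputs,
# not the original labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the shallow tree agrees with the black box.
print("fidelity:", surrogate.score(X, black_box.predict(X)))
# A short, human-readable approximation of the black box's logic.
print(export_text(surrogate, feature_names=list(data.feature_names)))
```

The surrogate is only an approximation, but it gives a human something legible to interrogate.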

If AI can expose hidden factors, then it can help us learn what those factors are. Consider the “lives close” case. In a typical US city, that could be a bias against black people living in a lower-rent district. If humans see that factors used in their business are really disguised biases, they can inspect all the factors and weed out the unwarranted ones before they’re accidentally used.

If AI can expose confounding variables, it can tell us why our logic sometimes goes all wonky. If my AI claims the evidence shows something that’s arguably false, I can inspect the evidence I use and filter out the confounding factors, so I can reason to sound conclusions.

It makes perfect sense to buy a deep learning AI to learn how people make decisions, and then give it my decisions to comment on. If my data contains misleading evidence, looking for confounding variables can expose them, and if my reasoning contains unstated assumptions, looking for hidden variables can expose them. In effect, the AI is helping me to make my reasoning better, and is inspecting my data for land-mines.

I want that kind of AI: owning it can make me smarter.

Comments

  1. “It makes perfect sense to buy a deep learning AI to learn how people make decisions, and then give it my decisions to comment on. If my data contains misleading evidence, looking for confounding variables can expose them, and if my reasoning contains unstated assumptions, looking for hidden variables can expose them. In effect, the AI is helping me to make my reasoning better, and is inspecting my data for land-mines.” Couldn’t agree more. However, depending on the data collected (amount and quality) AI could also be used as a tool to manipulate and obfuscate information particularly to influence decision-making, for instance, if applied to the electorate of any society. Such is the power of AI, and “with great power, comes great responsibility.” So will applying AI to the justice system make the system more just or just more legal for some?

  2. David Collier-Brown

    I think that very much depends on who’s using the AI. The earliest adopters are academics, followed by some businesses, and then sometimes followed by crooks (;-)). After passing the trough of disillusionment*, other businesses will come on board, eventually followed by government, the courts and very conservative businesses.

    In the presence of crooks, the trough of disillusionment can easily turn into the slough of despond…

    –dave
    [* https://en.wikipedia.org/wiki/Hype_cycle]
