Outsmarting AI - Brennan Pursell - Page 14

Myth 4: AI Has Insight


People claim that AI “perceives,” “learns,” “understands,” “comprehends,” and, worst of all, “discerns hidden patterns” in data, as if it had some kind of inherent insight. Referring to groups of AI algorithms as “deep learning” and “deep belief networks” doesn’t help.

AI algorithms churn through numbers without a clue as to what they refer to. They have no idea about the difference between correlation and causation, they have no understanding of context, and they are notoriously bad at analyzing what-ifs—how things might be if we imagine circumstances different from what they are.

AI applications should be predictable, transparent, explicable, rational, and, above all, accurate. No one has any need for more software that classifies things incorrectly, returns false answers, and makes bad predictions.

The backpropagation algorithms on which AI, “deep learning,” and “neural networks” are based take input numbers, perform calculations on them in “hidden layers,” and generate output numbers. You “train” the system by telling it what outputs it should produce for given inputs. The algorithm then automatically adjusts the calculations in the “hidden layers” to produce the desired outputs. There can be anywhere from one or two hidden layers to a great many. I’ll provide examples in chapter 2, but for now: this is obviously not insight.
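A toy network makes the point concrete. The sketch below is pure Python with made-up starting weights and a trivial target (the logical OR function), chosen only for illustration, not taken from the book. Training nudges the numbers in the hidden layer until the outputs match the targets; the error shrinks, but nothing resembling understanding appears anywhere:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# A 2-input, 2-hidden-unit, 1-output network with arbitrary starting
# weights. The numbers mean nothing to the program; training just
# nudges them until outputs match targets.
w_h = [[0.5, -0.4], [0.3, 0.6]]   # hidden-layer weights
b_h = [0.0, 0.0]                  # hidden-layer biases
w_o = [0.2, -0.3]                 # output weights
b_o = 0.0

# Training data: the logical OR function, expressed as numbers.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

def forward(x):
    h = [sigmoid(w_h[j][0] * x[0] + w_h[j][1] * x[1] + b_h[j]) for j in range(2)]
    y = sigmoid(w_o[0] * h[0] + w_o[1] * h[1] + b_o)
    return h, y

def loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

initial = loss()
lr = 0.5
for _ in range(2000):
    for x, t in data:
        h, y = forward(x)
        # Backpropagation: the chain rule pushes the output error
        # backward to compute how much each weight should change.
        d_y = 2 * (y - t) * y * (1 - y)
        for j in range(2):
            d_h = d_y * w_o[j] * h[j] * (1 - h[j])
            w_o[j] -= lr * d_y * h[j]
            w_h[j][0] -= lr * d_h * x[0]
            w_h[j][1] -= lr * d_h * x[1]
            b_h[j] -= lr * d_h
        b_o -= lr * d_y

print(initial, loss())  # the error shrinks; no understanding appears
```

After training, the “knowledge” consists of eleven adjusted numbers. Nothing in the system knows what OR means, or that it computed it.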

Computers don’t know what they are doing and don’t know when they are dead wrong. People have to catch the errors and retrain the system for improvement. AI algorithms adjust their hidden layers by trial and error. If, in this work, they figure out a “hidden pattern,” we may never know how they did it, any more than the computer does, given the number and sheer complexity of the layers. Much of AI’s calculation goes on in a “black box.”

AI’s blindness to its own workings is as bad as its brainlessness. It is a huge problem for compliance with law, especially in the European Union, where people have the right to know why the algorithm did what it did—why, for example, their application for a loan or insurance or a job was rejected.

But sometimes interesting trends do emerge. One bank determined that among its customer base, those who filled out the loan application in all caps were riskier—that is, defaulted at a higher rate—than those who used both upper- and lower-case letters (correctly, we assume). This is an example of AI exposing a hidden pattern, but it takes a human to interpret and act on it. And the correlation probably has nothing to do with causation. What to do with this information is up to the bank. Should the system be configured to accept only those applications that use upper- and lower-case letters? Should applicants be warned not to use all caps? Bank personnel will have insights on this matter, not the AI.
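The bank’s finding is easy to mimic in a few lines. The data below is invented purely for illustration, not drawn from any real bank; the point is that computing the correlation is trivial arithmetic, while deciding what it means, and what to do about it, remains the human’s job:

```python
# Hypothetical toy data: (application text, defaulted?) pairs,
# invented for illustration only -- not from any real bank.
applications = [
    ("PLEASE APPROVE MY LOAN", True),
    ("REQUESTING FUNDS NOW", True),
    ("I NEED A LOAN FAST", False),
    ("Requesting a small business loan", False),
    ("Application for a home loan", False),
    ("Please consider my application", True),
]

def default_rate(rows):
    # Fraction of applicants in this group who defaulted.
    return sum(defaulted for _, defaulted in rows) / len(rows)

all_caps = [(t, d) for t, d in applications if t.isupper()]
mixed    = [(t, d) for t, d in applications if not t.isupper()]

# The "hidden pattern" is just a difference between two averages.
print(default_rate(all_caps), default_rate(mixed))
```

The program outputs two numbers. Whether that difference reflects anything causal, and whether to warn applicants or reconfigure the system, is a question only the bank’s people can answer.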

There is a set of “unsupervised learning” algorithms that conduct statistical analysis of data to identify relationships among data entries: clusters, associations, regressions, time series patterns, and the like. These are actually standard data-mining tools of the data scientist, not a mysterious form of insight.
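A minimal clustering sketch shows how ordinary the “unsupervised” statistics are. This is a bare-bones k-means, assuming one-dimensional toy data (values invented for illustration) and a deterministic initialization; it does nothing but repeatedly compute averages and distances:

```python
# Toy 1-D data with two obvious groups; values invented for illustration.
points = [1.0, 1.2, 0.8, 0.9, 10.0, 10.3, 9.7, 10.1]

def kmeans(data, k=2, iters=10):
    # Deterministic initialization for this sketch (works for k=2):
    # start the centers at the first and last data points.
    centers = [data[0], data[-1]]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in data:
            # Assign each point to its nearest center.
            nearest = min(range(k), key=lambda j: abs(p - centers[j]))
            clusters[nearest].append(p)
        # New center = the plain arithmetic mean of the assigned points.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

print(kmeans(points))  # two cluster means, one near 1 and one near 10
```

The “discovered” clusters are averages. Naming them, judging whether they matter, and acting on them is the data scientist’s work.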

So, if you ever meet an AI vendor who claims their algorithms think better than you do, jack up the BS sensor.

