Computers generally do well at tasks that are repetitive and involve keeping track of lots of individual bits of information. People get bored doing the same things over and over again, and tend to prefer abstracting over lots of information rather than tracking every detail. Computers don’t get distracted or tired and can track every detail if they have enough memory, so they are indeed better than people at some tasks.

When it is claimed that algorithms are more “perfect” than people, this really means that the algorithm is better at some task than individual humans, not better than all people put together. Just as there is a “wisdom of the crowd”, algorithms benefit from pooling across many examples to derive their patterns. In fact, such claims are almost always based on comparisons with humans; it is human-level performance that we are striving for. When we measure how good an algorithm is, the “right” answer is always based on what a human would say. Humans are the “gold standard” against which we compare the algorithms. This assumes that a majority of humans agree on a given answer.

Because artificial intelligence algorithms can find patterns in very complex data sets that are difficult for humans to see, there are certainly many decisions that can benefit from being supported with AI. I personally find the advances in the sophistication and accuracy of algorithms on many tasks very exciting, and can see real advantages to using computers to sift through increasingly large data sets to find the important patterns. The ability to look deeply into this data is something of a dream: it could help us make decisions with the confidence that we haven’t missed critical factors that should influence those decisions.

But we shouldn’t proceed blindly. Increasingly, we are aware of the potential for algorithms to introduce bias because they see only a fraction of the data that actually matters, or to reinforce inequities because they focus attention only on the things that they know how to recognise, or have seen before. We have to understand the limitations of these algorithms, and make sure that checks are in place to mitigate those limitations. We need strategies for flagging the decisions that are hard for the algorithms to make, so that human judgement can be applied. Humans are better at dealing with context and nuance, and at recognising exceptions. We should think in terms of humans making decisions with the help of AI, rather than one or the other taking over.

Prof Karin Verspoor
School of Computing and Information Systems, The University of Melbourne