
Artificial intelligence isn't as smart as it thinks

This article is part of a special report on artificial intelligence, The AI Issue.

Digital personal assistants, software that can trounce board game champions, algorithms serving up customized online advertising — wherever you turn, artificial intelligence appears to be taking over the world.

But look past the self-driving cars and facial-recognition cameras, and you'll see that the technology is a lot less intelligent than it may at first appear. It's likely to be decades, at best, before even the smartest forms of AI can outdo humans in the complex tasks that make up daily life.

“The real world is a complicated, messy place,” said Michael Wooldridge, program co-director at the Alan Turing Institute, the United Kingdom's national center of excellence for data science and artificial intelligence. “What it would take to create general human levels of artificial intelligence is something like the Apollo 11 program. It will take decades to evolve.”

That hasn't stopped people from trying.

Venture capitalists and tech giants are plowing in billions in investment. Policymakers are falling over themselves to write new rules for the sector. Global leaders are jostling to lead the pack in how AI develops.

“You can train a self-driving car. But that car can't tell if people want to be dead or not” — Stuart Russell, AI professor at UC Berkeley in California

So far, however, the technology is limited to narrow tasks that require human oversight and mountains of data that are often skewed in ways that lead to unexpected, or even biased, results. The holy grail — so-called general artificial intelligence that can flit between various jobs, mimicking human behavior — is still more a myth than a reality: more than half of almost 400 AI experts recently surveyed said such technology will not become feasible until at least 2060, if then.

Getting there will be an uphill challenge, involving a step change in how computers can siphon and interpret data.

Current artificial intelligence systems may be able to whistle through complex tasks, as long as they've been given set parameters and enough data to crunch almost all the possible eventualities. But ask them to do something new and the technology quickly goes from acting superhuman to falling at the first hurdle.

“With a computer program, it can be designed to do something well and fail at everything else,” said Stuart Russell, an AI professor at UC Berkeley in California, whose new book, Human Compatible, argues for a shift in how people use the technology. “You can train a self-driving car. But that car can't tell if people want to be dead or not.”

Right now, AI is more myth than reality | Isaac Lawrence/AFP via Getty Images

Overcoming that limitation will require increased computer power, complex data-crunching and sophisticated artificial intelligence algorithms that have yet to be developed.

Current limits: data bias

Even in its current niche uses, artificial intelligence can quickly get off track if it's not marshaled by an army of data scientists.

In part, that's because the digital information underpinning these complex systems carries inherent biases: the datasets used to train AI often mirror the prejudices of wider society.

When the data itself is based on biased assumptions — many law enforcement agencies' databases, for instance, skew against minority groups — the use of sophisticated algorithms can lead to dangerous results, said James Manyika, chairman of the McKinsey Global Institute, the research unit of the global consultancy. “If there are any biases, the real risk is that we bake them into decision-making at scale,” he added.

On a recent sunny winter morning in East London, the limits of artificial intelligence were on show as the city's police force set up a blue truck with several antennas and cameras — the latest trial of facial-recognition technology that local law enforcement

