AI: Should We Fear the Demon? (Not Yet)

Are driverless cars a looming threat?

The world of Artificial Intelligence (AI) may have no figure more well-known than Elon Musk, the entrepreneurial head of Tesla and SpaceX. In a 2014 interview, Musk stated: “With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah, he’s sure he can control the demon. Didn’t work out.”

Musk’s concerns haven’t been proven entirely unfounded, but AI today functions strictly within the realm it is programmed for. Becoming a “demon” with expanding abilities and intelligence beyond our control is currently outside its limited capacity. Even AI designed to “learn” does so only within the confines of what it has been specifically programmed for, and as driverless cars show us, learning and doing are two different things.

Driverless Limitation

Autonomous vehicles offer one of the most glaring examples of AI’s current limitations. General Motors, Uber, Waymo, and Tesla are only a few of the contenders in the race to true self-driving cars. Driverless car software relies on machine learning, and that software is far from perfect.

Recognizing a bicycle and then anticipating which way it’s going to go is just too complicated to boil down to a series of instructions. Instead, programmers use machine learning to train their software. They might show it thousands of photographs of different bikes, from various angles and in many contexts. They might also show it some motorcycles or unicycles, so it learns the difference. Over time, the machine works out its own rules for interpreting what it sees. – Zachary Mider, Bloomberg
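
To make Mider’s description concrete, here is a minimal sketch of that kind of supervised training, in Python with scikit-learn. Synthetic feature vectors stand in for real photographs, and all names and numbers are illustrative assumptions; real driverless-car systems run deep neural networks on raw camera images, but the principle of labeled examples in, learned rules out, is the same.

```python
# A toy "bike vs. motorcycle" classifier. Synthetic 64-number feature
# vectors stand in for photographs; the two classes differ only in
# their average feature values. No recognition rules are hand-written:
# the model derives them from the labeled examples.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

bikes = rng.normal(loc=0.0, scale=1.0, size=(500, 64))         # label 0
motorcycles = rng.normal(loc=0.6, scale=1.0, size=(500, 64))   # label 1

X = np.vstack([bikes, motorcycles])
y = np.array([0] * 500 + [1] * 500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```

The point is the one Mider makes: nobody writes the recognition rules. The model works them out from examples, and it knows nothing beyond those examples.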

But as much as a computer can learn about what potential hazards might look and act like, computers still have no grasp of three basic concepts: time, space, and causality. This approach, while it keeps a self-driving car from becoming KITT or Herbie, also keeps self-driving cars from being truly 100 percent error free. Unlike human drivers, these cars will never fully understand consequences or anticipate unpredictable behavior arising from factors they haven’t been programmed to recognize.

And what about scenarios that self-driving cars aren’t being trained for? In October 2019, testing by AAA indicated that dummy pedestrians crossing the road were hit 60 percent of the time, even in daylight and at low speeds. Such obvious limitations should, and do, give users pause.

While self-driving technology may be improving in some ways, seven in ten Americans still have no interest in using—or even sharing the road with—driverless cars. Driver-assist technology, on the other hand, is far more popular. Human input and supervision, it seems, still offer something that machine learning cannot replicate. Driver-assist technology allows users to benefit from the areas where AI has excelled, such as alerting drivers when they veer from the center of a lane or back up too close to an obstacle, but humans still stay in control to make split-second decisions on the road.
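
Under the hood, alert logic like this can be as simple as comparing a sensed value to a safety threshold and warning the driver, who stays in control. The sketch below is purely illustrative; the function names and threshold values are assumptions for the example, not any manufacturer’s actual parameters.

```python
# Illustrative driver-assist alerts: measure, compare to a threshold,
# warn. The thresholds below are made up for the example.
def lane_departure_warning(offset_from_center_m: float,
                           max_offset_m: float = 0.5) -> bool:
    """Warn when the car drifts too far from the lane center."""
    return abs(offset_from_center_m) > max_offset_m

def backup_proximity_warning(distance_to_obstacle_m: float,
                             min_distance_m: float = 0.6) -> bool:
    """Warn when reversing too close to an obstacle."""
    return distance_to_obstacle_m < min_distance_m

print(lane_departure_warning(0.7))    # True: drifting out of the lane
print(backup_proximity_warning(0.4))  # True: obstacle too close
```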

Functioning in Limited Capacities

While driverless car technology dominates headlines, AI is being used successfully in more limited capacities. Where machine learning does excel is in analyzing data sets far too large for human processing.

Using publicly available data and AI, university student Anne Dattilo discovered two exoplanets. Data from the Kepler space telescope tracked “100,000 stars in its field of view.” Dattilo’s modified AI was programmed to identify and flag stars whose fluctuations in brightness suggested orbiting planets. Dattilo and her human colleagues then confirmed the findings.
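
The core idea, flagging stars whose brightness periodically dips, can be sketched in a few lines. The toy example below is not Dattilo’s actual pipeline (hers involved a modified neural network run on real Kepler data); it simply simulates one star’s readings and flags suspicious dips.

```python
# Toy transit flagging: simulate one star's brightness readings, then
# flag samples that dip well below the star's typical level. Real
# Kepler light curves are far noisier and messier than this.
import numpy as np

rng = np.random.default_rng(1)

flux = 1.0 + rng.normal(0, 0.001, size=2000)  # mostly flat brightness
flux[100::400] -= 0.01                        # periodic transit dips

baseline = np.median(flux)
noise = np.std(flux)
dips = np.where(flux < baseline - 5 * noise)[0]

print(f"candidate transit points at samples: {dips}")
```

A real search must contend with stellar variability and instrument artifacts, which is why human confirmation of every candidate still mattered.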

Elsewhere, scientists are using AI to sift through audio recordings for the sounds of elephants in the rainforests. This data is then used to count the animals, helping to provide a more accurate picture of population and poaching rates.
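
A crude version of such a detector might watch for energy in the low-frequency band where elephant rumbles sit. The sketch below is an assumption-laden toy: the sample rate, the 15 to 35 Hz band, and the simulated audio are all stand-ins, and real systems rely on trained classifiers rather than a simple energy threshold.

```python
# Toy rumble detector: watch for energy in a low-frequency band.
# All values here are assumptions made for the example.
import numpy as np
from scipy.signal import spectrogram

rate = 1000  # samples per second (assumed)
t = np.linspace(0, 60, 60 * rate, endpoint=False)

# One simulated minute of "rainforest" noise with a 25 Hz rumble
# between seconds 20 and 25.
audio = np.random.default_rng(2).normal(0, 0.1, t.size)
rumble = (t > 20) & (t < 25)
audio[rumble] += 0.5 * np.sin(2 * np.pi * 25 * t[rumble])

freqs, times, power = spectrogram(audio, fs=rate)
band = (freqs >= 15) & (freqs <= 35)
band_energy = power[band].sum(axis=0)

threshold = 10 * np.median(band_energy)
hits = times[band_energy > threshold]
print(f"candidate rumbles near seconds: {np.round(hits, 1)}")
```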

These highly focused uses of AI have proven successful at accomplishing what humans alone could not. Success in the sciences, however, doesn’t seem to translate to highways or to a grander intelligence.

A Constrained Approach

Some researchers believe that machine learning and AI as they exist now simply may not hold the keys to the change promised by innovators and fiction writers, and the ways AI has been used so far seem to support this.

According to skeptics like [Gary] Marcus, [professor of cognitive psychology at NYU,] deep learning is greedy, brittle, opaque, and shallow. The systems are greedy because they demand huge sets of training data. Brittle because when a neural net is given a “transfer test”—confronted with scenarios that differ from the examples used in training—it cannot contextualize the situation and frequently breaks. They are opaque because, unlike traditional programs with their formal, debuggable code, the parameters of neural networks can only be interpreted in terms of their weights within a mathematical geography. Consequently, they are black boxes, whose outputs cannot be explained, raising doubts about their reliability and biases. Finally, they are shallow because they are programmed with little innate knowledge and possess no common sense about the world or human psychology. – Jason Pontin, Wired
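
The “brittle” point in particular is easy to demonstrate with the earlier toy classifier. The sketch below trains on one synthetic “context,” then evaluates on data whose distribution has shifted, the way new lighting or unfamiliar scenery might shift real images. The shift value is an arbitrary assumption for the example.

```python
# Toy "transfer test": train the classifier in one context, then test
# on data whose distribution has shifted.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

X_train = np.vstack([rng.normal(0.0, 1.0, (500, 64)),
                     rng.normal(0.6, 1.0, (500, 64))])
y = np.array([0] * 500 + [1] * 500)
model = LogisticRegression(max_iter=1000).fit(X_train, y)

shift = 1.5  # every feature drifts, as if the context changed
X_shifted = np.vstack([rng.normal(0.0 + shift, 1.0, (500, 64)),
                       rng.normal(0.6 + shift, 1.0, (500, 64))])

print(f"accuracy after shift: {model.score(X_shifted, y):.2f}")
```

In its training distribution, the same model scores nearly perfectly; nudged a little outside it, the model is reduced to guessing.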

While AI may provide some incredible shortcuts in daily life and help fill deep wells of information, these limitations mean that fears of an all-powerful, all-knowing “demon” remain far from tangible. The need for oversight and limits on AI is no joke, and Elon Musk may yet prove to be a technological prophet of sorts. But the truth is, we have no reason to fear our robot overlords today.

Are you interested in the ways AI can enhance your life and business? Are IoT devices creating gaps in your carefully maintained cyber security? Contact Anderson Technologies today for enlightened solutions to all your technological troubles. We can be reached by phone at 314.394.3001 or email at info@andersontech.com.