Understanding AI Before It Overtakes Us

The great misconception about artificial intelligence (AI) is that it will overtake the world suddenly, replacing the human mind with legions of sentient robots. Perhaps in some distant future, AI will reach a level of sophistication that allows for a race of artificial minds. But we aren’t remotely close to that stage yet.

Instead, AI developments are taking place gradually. This isn’t to diminish the powerful advancements AI is enabling in the scientific and technology sectors, but we need to collectively rethink how we understand the impact of AI. The technologies associated with AI have given humanity better tools, but they haven’t changed how we think. And without oversight, particularly of AI technology in warfare, those tools could present grave dangers.

Because AI systems develop gradually, it can be difficult to fully grasp how radically the technology is changing society. Over the past two decades, AI systems have become crucial to human endeavors, helping us explore spaces that we couldn’t previously access.

In biochemistry, for example, researchers are using AI to develop new neural networks that “hallucinate” proteins with new, stable structures. This breakthrough expands our capacity to understand how proteins are constructed. In another discipline, researchers at the University of Oxford are working with Google’s sister AI company DeepMind to develop fundamentally new techniques in mathematics. They have established a new theorem in knot theory by connecting algebraic and geometric invariants of knots.

NASA scientists are even applying AI advancements to data from the Kepler spacecraft. Using a neural network called ExoMiner, scientists have discovered 301 planets outside our solar system. These developments demonstrate that we don’t need to wait for AI to create artificial minds to see profound shifts in technology. By enhancing our capacity, AI transforms how we look at and understand the world.

While many of these developments are positive and could potentially advance human civilization, progress has a dark side. When it comes to AI on the battlefield and in cyber warfare, advancements are accelerating the development of deadly and reckless technologies. Stuart Russell, the prolific AI researcher, writer, and professor at the University of California, Berkeley, recently met with British defense officials to raise serious concerns about the use of unchecked AI in warfare.

Russell believes that AI used to locate and kill human targets could eventually wipe out humanity. This sentiment is shared by more than 400 German AI researchers, who took their concerns to the German government in an open letter. “The killing of humans should never be automated based on algorithmic formulas,” the letter said in no uncertain terms. “Such dehumanisation of life and death decision-making by autonomous weapons systems must be outlawed worldwide.”

The challenge is that existing regulations, at both the international and national levels, aren’t written in a way that accounts for the tremendous potential of automated AI systems to transform warfare. Major military powers, including the United States, Russia, and Israel, have repeatedly resisted calls for new international treaties governing the use of autonomous weapons systems. At the same time, arms manufacturers continue to develop more advanced autonomous killing machines. STM, a Turkish arms manufacturer, has developed autonomous killer drones little bigger than a rugby ball that can carry out targeted hits based on image and facial recognition.

This trajectory will continue as long as mainstream society continues to think of AI in rudimentary terms that have more to do with science fiction than actual science. Put simply, we need a new code of ethics governing the use of these new killing machines and technologies. AI researchers such as Stuart Russell are right to warn that we are designing technologies that could one day be used against us.

The first step to solving the problem is creating a fresh debate about AI that helps society understand the profound but subtle changes taking place right now. We need a better understanding of how AI is quietly changing everyday life, from how we write (as with Google’s predictive algorithms) to how we receive information (through the recommendation algorithms of YouTube and Netflix).

A deeper understanding of the infamous link between arms manufacturers and technology companies will also expand our knowledge of the potential hazards in AI. The revelations about how Israel’s NSO Group has used its technology to hack into cellphones and monitor the communications of various world leaders are one sordid example of how technology is upending the nature of statecraft and warfare.

It’s time to listen to the thinkers and researchers who wrote the AI programs they are now warning about. When used constructively, AI will help propel humanity forward. Our challenge now is making sure that we can keep the dark side of progress at bay.

Joseph Dana is the senior editor of Exponential View, a weekly newsletter about technology and its impact on society. He was formerly the editor-in-chief of emerge85, a lab exploring change in emerging markets and its global impact.