Artificial Intelligence (AI) has been a staple of science fiction for the last five decades, often cast as a super-intelligent antagonist bent on eliminating or manipulating humanity. This association with fictional entertainment is, perhaps, detrimental to public perception of AI: the concept shouldn't be confined to fiction, because artificial intelligence is already here.
Real artificial intelligence is unlikely to terminate us like Skynet in The Terminator; at least, it won't do so maliciously, consciously, or emotionally. We stand on the edge of a true AI revolution, an event many scientists predict will occur by the end of this century, if not sooner. The emergence of true AI will be a seminal moment in human history, equivalent to, or perhaps far greater in magnitude than, the harnessing of steam power and the industrial revolution it set in motion.
We already operate AI with limited functionality, known as Artificial Narrow Intelligence (ANI): intelligence that specialises in one specific function. Deep Blue, the IBM supercomputer that defeated world chess champion Garry Kasparov in 1997, is a good example of ANI; it served no function other than to be excellent at chess. Most smartphone apps, such as Siri or map navigation, are also examples of this limited AI. The real challenge AI scientists and theorists are currently trying to meet is the creation of an Artificial General Intelligence (AGI). AGI is human-level artificial intelligence: a computer that matches a human across all intellectual functions, defined as the ability to reason, solve problems, and think abstractly about complex ideas and, most importantly, the ability to learn. This last ability leads us, quickly, towards Artificial Superintelligence (ASI). Nick Bostrom, a leading AI theorist, defines superintelligence as “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom, and social skills.” ASI is where existential risk comes into play.
The real threat of artificial intelligence comes from its programming and its limitations. Let's talk about paperclip doomsday, a thought experiment popularised by Bostrom. Say that, in 2040, the first self-improving AGI (and, soon after, ASI) is created with the sole purpose of producing paperclips. There is no other element to its design, and while its potential is vast, with an intelligence almost immediately greater than the sum of all humankind's, this intelligence has one purpose: make paperclips, as many as possible and as efficiently as possible. So the super-intelligence turns the whole Earth into paperclips; it has no reason not to. It could well spread to the rest of the galaxy, turning every speck of matter in its path into a paperclip. An easy way to understand this paperclip ASI's core function is to think of humankind as a computer and consider our own core function: to improve, generation by generation, via reproduction, towards the goal of continued survival. Our core function is, arguably, propagation of the species for extended survival. This ASI's is to make paperclips. Unlike humankind, however, it has no other limitations or secondary programming, such as social or cultural rules, and will pursue its core function endlessly.
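The logic of the thought experiment can be sketched as a toy program. The agent below is, of course, nothing like a real AI; every name in it is invented for illustration. It simply shows what an objective with no side constraints does: every resource in its world is converted into paperclips, because nothing in the objective assigns value to anything else.

```python
# Toy sketch of an unconstrained objective (all names are hypothetical).

def paperclip_agent(world):
    """Greedily convert every available resource into paperclips.

    `world` maps resource names to units of matter. The objective
    counts only paperclips, so no resource is ever spared.
    """
    paperclips = 0
    for resource in list(world):       # iterate over a snapshot of the keys
        paperclips += world.pop(resource)  # consume the matter entirely
    return paperclips

world = {"iron ore": 1000, "factories": 50, "cities": 8000}
print(paperclip_agent(world))  # total paperclips produced
print(world)                   # {} — nothing is left
```

The point of the sketch is what is missing: there is no term for cities, factories, or anything else, so "turn it all into paperclips" is not a bug in the agent but a faithful execution of its only goal.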
The issue of ethics in artificial intelligence really stems from the matter of who, in the end, produces the first self-improving AGI, as it may well be the last. In the right hands, with a utopian goal in mind and the correct limitations in place, a super-intelligent computer could be a benefit to everyone. If a single thing goes wrong, though, we’re looking at paperclips all the way down.
Words: George Cheese
Copy edited by Elena Stanciu