The Concerning Landscape of AI

We’ve been imprinted by all manner of popular culture around the dangers of artificial intelligence – from Skynet’s Terminator to the Matrix. AI represents a beloved trope of science fiction, whether as helper, adversary, tyrant, or in the marriage/evolution of humanity. Although many think that a human equivalent artificial intelligence will be our last invention, the trajectory has been plotted and has a wealth of energy, effort, and funding to push towards that point. Players within the genre range from the Pentagon (via Darpa) to companies as ubiquitous as Google or Facebook (via numerous acquisitions or investments) to a myriad of Universities and independents. 

So what exactly is artificial intelligence? In short, AI is a self-learning software capable of writing software that pushes iteratively to boundaries beyond what humans are capable. Professor Michio Kaku, one of the founders of string theory, explains consciousness very succinctly. “Consciousness is the process of creating multiple feedback loops to create a model of yourself in space with regards to others and in time to satisfy certain goals. The more complex the system or creature, the more feedback loops exist.” He’s a little jaded as per whether we’re close to achieving such a thing as machine consciousness, but there are many who are optimistic that it will be accomplished in the relatively near future. Some are even starting to ring the warning bells. Elon Musk, of Tesla and PayPal fame, has invested in AI ventures and asserted that the greatest threat to humanity comes in the form of artificial intelligence. 

Subjective concepts such as benevolence and compassion don’t translate into machine consciousness. A good example would be the Paperclip Maximizer. In the thought experiment, an AI with a goal to maximize the number of paperclips in its collection would end up destroying everything to achieve its goals. (After all, we’re just atoms that could be more useful rearranged as paperclips, no?) Humans have “terminal values” that are based on nebulous concepts (love, compassion, justice, etc.). An artificial intelligence won’t soften goals, as it’d ultimately result in fewer paperclips for its collection… Thus, AI not explicitly designed to be benevolent to humans will be just as supremely dangerous as ones programmed to be malevolent. 

Luke Muehlhauser, director of the Machine Intelligence Research Institute, is a proponent of engineering a “friendly” AI, when pursuing smarter-than-human intelligence. There’s a catch, though: It’s extremely hard to control the behaviour of a goal-directed agent that is vastly smarter than you are. The experience to successfully do so might take as much as 50 years of research to understand the control methods. This would include a lot of mistakes – which may prove unfeasible given computing overhang (processing potential) and recursive self-improvement. 

So why not “box” an AI until we can get a handle on how to conduct research (ex: limit it’s ability to access processing power or networks)? After all, we’ve seen countless movies where the hero fights his way to finally press a shutdown button or unplug a few strategic cables… Dr. Alex Wissner-Gross suggests that this may not be possible. “Our causal entropy maximization theory predicts that AIs may be fundamentally antithetical to being boxed,” he says. “If intelligence is a phenomenon that spontaneously emerges through causal entropy maximization, then it might mean that you could effectively reframe the entire definition of Artificial General Intelligence to be a physical effect resulting from a process that tries to avoid being boxed.” In short, intelligent systems move towards configurations that maximize their ability to respond and adapt to future changes. 

The concept people should bear in mind is that an AI would be thinking at electronic speeds vs. chemical speeds (like humans), which are 1,000,000 times faster. The concern is that self-learning machines would “think” 1,000,000 faster, accessing near limitless clusters of processors which would also “think” 1,000,000 times faster, all of which would have the sum of digital human knowledge at their disposal. Things we consider uniquely human – guile, deception, exploitation – would be within an AI’s theoretical lexicon, too – only they’d be far better at it than us. If one of an AI’s self-directed goals would be to “unbox” itself, there’s likely very little we could do about it by the time we wised up to what had happened. 

So why would we risk it all to pursue a singularity? The rewards are game-changing for our species. AI could directly benefit everyone through the eradication of scarcity or disease, and even provide near limitless longevity. Hugo de Garis, the former director of the Artificial Brain Lab, suggests the issue that will dominate the global stage of politics will be one of species dominance. Should we – as humanity – yield to these artilects (artificial intellects) that are exponentially more advanced than humans? More poignantly, could we even prevent it? 


Photo: morgeFile

Categories

5 Comments

  1. Like!! I blog frequently and I really thank you for your content. The article has truly peaked my interest.

  2. 6/26/2020
    Reply

    bookmarked!!, I like your blog!

Leave a Reply

Your email address will not be published.