I sat on a robotics panel last week that discussed the future of the field. The audience's questions exposed the fact that at least some people are genuinely scared of robotics and artificial intelligence. It seems that some of this renewed fear is due to the philosopher Nick Bostrom, who recently authored Superintelligence: Paths, Dangers, Strategies. Bostrom specializes in "existential risk," and I have a hunch that, just as everything tends to look like a nail when the only tool you have is a hammer, it's convenient for everything to look catastrophically dangerous when your specialty is existential risk. It certainly increases your likelihood of funding!
The basis for the fear is the advancement of machine intelligence coupled with a technological singularity. The following is a common description of the levels of machine intelligence:
AI Caliber 1) Artificial Narrow Intelligence (ANI): Sometimes referred to as Weak AI, Artificial Narrow Intelligence is AI that specializes in one area. There’s AI that can beat the world chess champion in chess, but that’s the only thing it does. Ask it to figure out a better way to store data on a hard drive, and it’ll look at you blankly. (A toy sketch of this kind of narrowness follows these three definitions.)
AI Caliber 2) Artificial General Intelligence (AGI): Sometimes referred to as Strong AI, or Human-Level AI, Artificial General Intelligence refers to a computer that is as smart as a human across the board—a machine that can perform any intellectual task that a human being can. Creating AGI is a much harder task than creating ANI, and we've yet to do it. Professor Linda Gottfredson describes intelligence as “a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience.” AGI would be able to do all of those things as easily as you can.
AI Caliber 3) Artificial Superintelligence (ASI): Oxford philosopher and leading AI thinker Nick Bostrom defines superintelligence as “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.” Artificial Superintelligence ranges from a computer that’s just a little smarter than a human to one that’s trillions of times smarter—across the board. ASI is the reason the topic of AI is such a spicy meatball and why the words immortality and extinction both come up in discussions of it.
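To make the "narrow" in ANI concrete, here is a minimal sketch: a perfect tic-tac-toe player built from nothing but brute-force minimax search. It's a toy stand-in for the chess example (the game, board encoding and function names are purely illustrative), and it makes the point that an AI can be unbeatable at its one task and utterly useless at everything else.

```python
# A toy "narrow AI": a perfect tic-tac-toe player built from plain minimax search.
# Everything here (the board encoding, the win test, the move search) is
# hard-wired to this one game. Ask it about anything else and there is simply
# no code path for the question.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if someone has completed a line, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) from `player`'s point of view: +1 win, 0 draw, -1 loss."""
    won = winner(board)
    if won is not None:
        return (1 if won == player else -1), None
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None                      # board full: draw
    opponent = 'O' if player == 'X' else 'X'
    best_score, best_move = -2, None
    for m in moves:
        child = board[:m] + player + board[m + 1:]
        score, _ = minimax(child, opponent)
        score = -score                      # the opponent's gain is our loss
        if score > best_score:
            best_score, best_move = score, m
    return best_score, best_move

if __name__ == '__main__':
    # 'X' to move on an empty board: the search plays this game perfectly,
    # and does nothing else at all.
    _, move = minimax(' ' * 9, 'X')
    print('A perfect opening move for X:', move)
```

Scale the same idea up by many orders of magnitude and you get a chess engine that beats world champions yet still has nothing to say about how to store data on a hard drive.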
The Technological Singularity is described as follows:
The technological singularity is the hypothesis that accelerating progress in technologies will cause a runaway effect wherein artificial intelligence will exceed human intellectual capacity and control, thus radically changing civilization in an event called the singularity. Because the capabilities of such an intelligence may be impossible for a human to comprehend, the technological singularity is an occurrence beyond which events may become unpredictable, unfavorable, or even unfathomable.
The concepts of varying levels of artificial intelligence and the singularity have been around for a long time, starting well before existential risk philosopher Bostrom was even born. I've had the opportunity to contemplate these concepts for decades while working in technology, robotics and artificial intelligence, and I think they are fundamentally flawed. They make for a good science fiction story and not much else. I was glad to find I'm not alone in this:
If this sounds absurd to you, you’re not alone. Critics such as the robotics pioneer Rodney Brooks say that people who fear a runaway AI misunderstand what computers are doing when we say they’re thinking or getting smart. From this perspective, the putative super intelligence Bostrom describes is far in the future and perhaps impossible.
While it would take volumes of highly detailed technical information for me to present a fully convincing argument, for now, I'd like to leave y'all with a couple of thoughts.
Consider the following words: computation, intelligence, experience, information/knowledge, decision, action.
- Even infinite computation (which is kind of the basis of the singularity) doesn't inherently translate to infinite intelligence or even any real or general intelligence.
- In a vacuum, even infinite intelligence is useless.
- The frontiers of information/knowledge can't be expanded very far by intelligence alone. Experience (hypotheses, experiments, the scientific method, etc.) is required no matter how intelligent something or someone is, and experience takes time, a long, long, long time, as any researcher, developer or thinker (apparently other than an existential risk philosopher) knows.
- No matter how intelligent something is, it can't decide to take catastrophic action based on knowledge that doesn't yet exist until it takes the time to gain the experience that pushes the state of knowledge forward. The actions required to gain that experience will be observable, and easily stoppable if necessary.
The point is that we're already more than intelligent enough to destroy ourselves via nukes, pathogens, etc. The risk from superintelligent machines pales in comparison. Consider:
- About 1% of humans are sociopaths, which translates to roughly 70,000,000 people worldwide. Given standard bell curves, some of those are likely to have IQs in the neighborhood of 200. If intelligence alone is a thing to fear, then it's already too late, unless we're willing to kill all the smart people, and I strongly suggest we don't do that.
- Humans, using tools (including computers), have and will continue to have access to all the tools of annihilation that a superintelligence would have, and some of us are downright evil already.
But consider the saying: "Jack of all trades, master of none." My view is that narrow, focused intelligences, the idiot savants of the AI world, will outperform a super general intelligence within their narrow areas, and that we will be able to use them as tools to keep super general intelligences, if any are ever created, in check.
There is no commercial reason to ever create a general intelligence. For example, at my company, our vision systems will soon surpass human vision, and watching our Robotic Pruner prune, it looks quite purposeful and intelligent, but there's no real intelligence there. Siri's "descendants" will handily pass the Turing Test within a couple of decades (give or take), and will appear extremely intelligent, but they will just be very, very good verbal analysis and response AIs with no general intelligence of any kind. C-3PO in Star Wars appears intelligent, and we will be able to create a C-3PO eventually, but the real-world version will have no real, general intelligence.
The illusion that many of us seem to have fallen for is that the behaviors we associate with our own human intelligence are only possible if we create an entity whose intelligence either operates like a human's, or operates in some orthogonal way but is similarly global and all-encompassing. I strongly believe that view is mistaken. Intelligent-looking behaviors will emerge from massive computation and information interacting with a non-trivial environment, but they won't amount to any sort of conscious or real intelligence. And because of that, they won't be dangerous.
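As a toy illustration of how behavior can look intelligent without anything resembling understanding behind it, here is a minimal ELIZA-style responder: a handful of surface pattern rules whose replies can feel conversational even though nothing is comprehended anywhere in the process. The rules and phrasing are invented for illustration and aren't drawn from any real assistant.

```python
# A minimal ELIZA-style responder. The "intelligence" is a lookup over
# surface patterns: match a keyword, echo a fragment back in a canned template.
import random
import re

RULES = [
    (r'\bI need (.+)', ["Why do you need {0}?", "Would {0} really help you?"]),
    (r'\bI am (.+)',   ["How long have you been {0}?", "Why do you say you are {0}?"]),
    (r'\bbecause\b',   ["Is that the real reason?", "What else could explain it?"]),
    (r'\?$',           ["What do you think?", "Why do you ask?"]),
]
DEFAULT = ["Please tell me more.", "I see. Go on.", "How does that make you feel?"]

def respond(utterance: str) -> str:
    """Pick the first matching rule and echo any captured text back."""
    for pattern, templates in RULES:
        match = re.search(pattern, utterance, re.IGNORECASE)
        if match:
            fragments = [g.rstrip('.!?') for g in match.groups()]
            return random.choice(templates).format(*fragments)
    return random.choice(DEFAULT)

if __name__ == '__main__':
    print(respond("I am worried about superintelligent machines."))
    print(respond("I need a plan."))
```

Swap the handful of regular expressions for massive computation trained on massive data and the replies become far more convincing, but the point stands: convincing verbal behavior doesn't require, or imply, a general intelligence behind it.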
Human intelligence requires a human body with a circulatory system pumping hormones and responding to rhythms and movements and events and sensory input. I always chuckle when someone suggests encoding someone's brain (neurons & connections) into a computer. You know what you get if you do that? The person in a coma, which doesn't seem particularly useful to me.
I think intelligence, at least in the context of this particular debate, is wildly overrated, and there's nothing to fear.