
Monday, February 16, 2015

Fear of Intelligence

I sat on a robotics panel last week that discussed the future of robotics. The audience's questions exposed the fact that at least some people are really scared of robotics and Artificial Intelligence. It seems that some of this renewed fear is due to the philosopher Nick Bostrom, who recently authored Superintelligence: Paths, Dangers, Strategies. Bostrom specializes in "existential risk" and I have a hunch that, just as everything tends to look like a nail when the only tool you have is a hammer, it's convenient for everything to look catastrophically dangerous when your specialty is existential risk. It certainly increases your likelihood of funding!

The basis for the fear is the advancement of machine intelligence coupled with a technology singularity. The following is a description of levels of machine intelligence:
AI Caliber 1) Artificial Narrow Intelligence (ANI): Sometimes referred to as Weak AI, Artificial Narrow Intelligence is AI that specializes in one area. There’s AI that can beat the world chess champion in chess, but that’s the only thing it does. Ask it to figure out a better way to store data on a hard drive, and it’ll look at you blankly. 
AI Caliber 2) Artificial General Intelligence (AGI): Sometimes referred to as Strong AI, or Human-Level AI, Artificial General Intelligence refers to a computer that is as smart as a human across the board—a machine that can perform any intellectual task that a human being can. Creating AGI is a much harder task than creating ANI, and we’re yet to do it. Professor Linda Gottfredson describes intelligence as “a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience.” AGI would be able to do all of those things as easily as you can.
AI Caliber 3) Artificial Superintelligence (ASI): Oxford philosopher and leading AI thinker Nick Bostrom defines superintelligence as “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.” Artificial Superintelligence ranges from a computer that’s just a little smarter than a human to one that’s trillions of times smarter—across the board. ASI is the reason the topic of AI is such a spicy meatball and why the words immortality and extinction will both appear in these posts multiple times.
The Technological Singularity is described as follows:
The technological singularity is the hypothesis that accelerating progress in technologies will cause a runaway effect wherein artificial intelligence will exceed human intellectual capacity and control, thus radically changing civilization in an event called the singularity. Because the capabilities of such an intelligence may be impossible for a human to comprehend, the technological singularity is an occurrence beyond which events may become unpredictable, unfavorable, or even unfathomable.
The concepts of varying levels of artificial intelligence and the singularity have been around for a long time, starting well before existential risk philosopher Bostrom was even born. I've had the opportunity to contemplate these concepts for decades while working in technology, robotics and artificial intelligence, and I think they are fundamentally flawed. They make for a good science fiction story and not much else. I was glad to find I'm not alone in this:
If this sounds absurd to you, you’re not alone. Critics such as the robotics pioneer Rodney Brooks say that people who fear a runaway AI misunderstand what computers are doing when we say they’re thinking or getting smart. From this perspective, the putative super intelligence Bostrom describes is far in the future and perhaps impossible.
While it would take volumes of highly detailed technical information for me to present a fully convincing argument, for now, I'd like to leave y'all with a couple of thoughts.

Consider the following words: computation, intelligence, experience, information/knowledge, decision, action.

  • Even infinite computation (which is kind of the basis of the singularity) doesn't inherently translate to infinite intelligence or even any real or general intelligence.
  • In a vacuum, even infinite intelligence is useless.
  • The frontiers of information/knowledge can't be expanded very much with intelligence alone - experience (hypotheses, experiments, the scientific method, etc.) is required no matter how intelligent something or someone is, and experience takes time, a long, long, long time, as any researcher, developer or thinker (apparently other than an existential risk philosopher) knows.
  • No matter how intelligent something is, it can't make decisions to take catastrophic actions based on currently unknown knowledge until it takes the time to gain the experience that pushes the state of knowledge forward. The actions required to gain that experience will be observable and easily stopped if necessary.
On the other hand, consider a nuclear-tipped cruise missile. It can perform some computation and can maneuver in its very narrowly intelligent way; it has none of its own experience (it's a one-shot deal, after all); it has some information/knowledge in terms of maps; someone else made the decision to launch; but its action is quite devastating. 10,000 of them could destroy most of the advanced life on Earth. When I was a child, we had air raid drills in school because we thought some crazy Soviet might do exactly that.

The point being that we're already more than intelligent enough to destroy ourselves via nukes, pathogens, etc.  The risk from super intelligent machines pales in comparison. Consider:

  • About 1% of humans are sociopaths and that translates to about 70,000,000 people worldwide. Given standard bell curves, some of those are likely to have IQs in the neighborhood of 200. If intelligence alone is a thing to fear, then it's too late unless we're willing to kill all the smart people, and I strongly suggest we don't do that.
  • Humans, using tools (including computers), have and will continue to have access to all the tools of annihilation that a super intelligence would have and some of us are downright evil already.
Part of the runaway AI fear is based on the concept of a single Artificial Super Intelligence emerging in a winner-takes-all scenario, where it redesigns and rebuilds itself so fast that nothing else will ever be able to outthink it and disable it, so we'd better hope it's beneficent.

But consider the saying: "Jack-of-all-trades, master of none." My view is that narrow, focused intelligences, the idiot savants of the AI world, will in their narrow areas outperform a super general intelligence, and enable us to use them as tools to keep super general intelligences, if any are ever created, in check.

There is no commercial reason to ever create a general intelligence. For example, at my company, our vision systems will soon surpass human vision systems, and watching our Robotic Pruner prune, it looks quite purposeful and intelligent, but there's no real intelligence there. Siri's "descendants" will far surpass the Turing Test in a couple of decades (or sooner or later), and will appear extremely intelligent, but will be just a very, very good verbal analysis and response AI and will have no general intelligence of any kind. C-3PO in Star Wars appears intelligent, and we will be able to create a C-3PO eventually, but the real-world version will have no real, general intelligence.

The illusion that many of us seem to have fallen for is that the behaviors we associate with our own intelligence are only possible if we create an entity whose intelligence somehow operates like a human's, or operates orthogonally to the way human intelligence operates but is similarly global and all-encompassing. I strongly believe that view is mistaken, an illusion. Seemingly intelligent behaviors will emerge from massive computation and information interacting with a non-trivial environment, but they won't be any sort of conscious or real intelligence. And because of that, they won't be dangerous.

Human intelligence requires a human body with a circulatory system pumping hormones and responding to rhythms and movements and events and sensory input. I always chuckle when someone suggests encoding someone's brain (neurons & connections) into a computer. You know what you get if you do that? The person in a coma, which doesn't seem particularly useful to me.

I think intelligence, especially within this particular topic, is wildly overrated, and there's nothing to fear.

60 comments:

erp said...

Bravo Bret!

I especially like this quote: ... Nick Bostrom defines superintelligence as “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.”

He left out fairness and equality and redistribution of income and getting the good looking babes and ...

BTW and just asking, why would robots need social skills? :-)

Bret said...

erp asks: "...why would robots need social skills?"

Perhaps you're asking a joking question, but it's a good one.

Before considering the realm of social skills, let's consider the psychology of appearance. Many humans tend to get a little creeped out by robots and creatures that start to look human. For example, I contend that one of the underlying issues of the evolution debate is that the discovery of the gorilla in 1847, right around the time Darwin was theorizing away, totally creeped people out - gorillas just look too much like us - and people naturally revolted against (and were revolted by) the concept of gorillas being somehow "related" to humans.

People who study robots see the same thing. If I make a cuddly stuffed animal robot, you won't have a problem. If I make a robot indistinguishable from a human in every way, you won't have a problem either. If I make a robot that looks and acts kinda like a human, you may find it quite creepy.

With the "social skills" realm, the same sort of logic applies. Siri is clearly not human and not creepy. The AI Samantha in the movie "her" has perfectly human social skills (verbally at least) and is also not creepy. Samantha is more useful. So we have to get from Siri to Samantha and jump over that creepy point that's somewhere in between.

That's quite possible without Samantha being a general AI.

erp said...

Bret, I am abysmally ignorant of modern pop culture, so although I read the novel, "She," before recorded history, I don't know the movie, "her," nor do I know Siri (voice on smart phones?) or Samantha.

My question is semi-serious. It is my hope that we humans maintain our sovereignty over our machines (my washing machine already thinks it knows more than I do and constantly changes the setting to suit itself and I am reluctant to replace my ailing dishwasher for fear a new one will team up with the washer and wrest total control from me).

Robots may be made to look and act like us, but I don't think appearance is what makes us human. It's intuition, imagination, genius, emotion, the unimaginable differences in the way our experiences color our thinking ... Why we see someone and instantly fall in love with them and stay with them for 58 years and counting, and the reaction to the smiles of my six month old granddaughter – more beautiful a sight doesn't exist.

I’ve seen pictures of Japanese robots that are very life-like and they creep me out. I’d rather have a bucket of bolts doing housework for me, than a cute little robot in a French maid’s outfit. Not so sure about my roomie.

Thanks for keeping us informed about progress in the genius ranks.

Anonymous said...

Bret, I think you're talking about the "uncanny valley", the existence of which is disputed.

I'm not sure whether you mean to strongly imply, as you do, that human style general intelligence cannot be done with non-biological computational systems. As for no commercial utility, our reality is filled with things built that do not have that property.

erp;

AIs will need social skills for the same reason we do - either to interact with humans or to interact with each other.

erp said...

Aog, do you think robots will be our equals, not merely tools that do our bidding?

Anonymous said...

erp;

I think biological life forms are just a phase and the long-term future belongs to originally artificial intelligences. I don't see how humans can successfully compete over evolutionarily significant spans. At that point the AIs will see us as we see the original multi-cellular creatures.

erp said...

... but robots or AI's have no cells. They're made of man-made materials, not flesh, blood and bone.

I dearly hope you are only joshing with the old party.

Anonymous said...

I meant as an inferior life form that was a necessary precursor.

erp said...

But aog, robots/AI aren't life forms. They aren't alive. They're machines and what they "think" is what some human has planted into their innards.

BTW I totally agree with your uncanny valley link. I've always been creeped out by dolls, especially the ones that try to be most realistic ditto some cartoons.

Anonymous said...

They are not now. But I see no reason they can't become such. Biological life started at some point from non-living material, so why can't the same happen for cybernetic life?

As for an AI thinking only what its programmers intended, it depends on how much you really believe in free will. If you think we humans have that, then we (as a species) acquired it from precursors who did not. Again, why can't that happen for cybernetic life as well?

I should be clear that I am thinking in large time scales, thousands to millions of years.

erp said...

Cybernetic life?

If one is not religious, there is no explanation for anything in the universe. Something, to our feeble brains, cannot have no beginning and no end = globe, circle? Makes no sense to us, but we're here? Big bang from a microscopic speck - of what - where did it come from - who or what blew it up? Gazillions of alternate universes? Why not? Does our universe sit on the head of a pin in an old lady’s sewing basket? ¿QuiĆ©n sabe?

Interesting theories abound.

One thing we do know is that every living thing on earth came from the same primeval ooze. How it got here and what caused one errant cell to change direction and start the ball rolling. Unknown.

What seems clear to me is that if we make robotic (AI still means Artificial Intelligence?) objects that take over the earth and replace us, there is no point in making them like us in appearance or in any other way since their “bodies” needn’t be concerned with our biological needs for sustenance, shelter, community, childcare, etc.

The new order, just as we did, will make their world to suit their needs and I wish them luck of making a better job of it than we’ve done.

Bret said...

erp wrote: "I’ve seen pictures of Japanese robots that are very life-like and they creep me out."

That's the point. If they looked and acted perfectly human in every way, they wouldn't creep you out (by definition, you wouldn't be able to tell the difference). It's when it's close but no cigar that it's really creepy.

erp said...

Bret, I'd like to think people would always be able to distinguish life from robotics, but maybe the coming generations will think it quite natural and not at all unusual that some of their cohort don't share their biology.

Anonymous said...

erp;

I don't see cybernetic life much liking life on a terrestrial planet, so I doubt they'd take over the Earth. One reason I expect cybernetic life to do so much better is that it will be able to live in so much more of the Universe, rather than a very localized and rare habitat like the surface of Earth. There are those who think it's already happened via alien species and we don't notice because we live in a (for cybernetic life) very undesirable location.

Bret said...

aog wrote: "I think you're talking about the "uncanny valley" the existence of which is disputed."

The existence is disputed? Well, let me end the dispute. At least some people (such as myself and erp in a previous comment) are creeped out by things that look, sound and/or act almost, but not quite, human. On the other hand, no doubt some people aren't bothered at all.

aog wrote: "...that human style general intelligence cannot be done with non-biological computational systems. ..."

I believe that entities can be created that seem to have human-like intelligence to a human observer. In other words, they far surpass the Turing test. However, when one looks under the "hood," one would find that the observable behavior that seems intelligent emerges from something completely different than what's human. So then my question is what do you mean by "human style?"

An example. When Deep Blue beat Kasparov at chess in the 1990s, was Deep Blue intelligent? Observably so in the realm of chess. Under the hood, not so much.

A second example: confabulation-based writing. No intelligence there, but the output is remarkably sophisticated.

I don't think human-like intelligence can be created just by simulating a human brain. I think you have to simulate the rest of the human as well, and a virtual reality for that simulated human to develop in. Otherwise your potential human intelligence gets stuck in a coma.

aog wrote: "...our reality is filled with things built that do not have that property."

Not totally sure where you're going with that, but my point is that out-of-control greed will not be the motivator behind some super general intelligence.

erp said...

I'm afraid our betters are betting on AI to enslave us cogs, but I'm betting that if it happens, the cybernetics will not differentiate among us biologics and they will also be among the enslaved.

BTW - Does anyone know why we need or want self-driving cars? Have all those who enjoy driving left the building?

Bret said...

erp,

Regarding self-driving cars:

1. Older and other people with vision problems.

2. Long and short haul trucking would be cheaper, reducing transportation and therefore product costs.

3. Uber/cabs/etc. will be much cheaper.

4. For those of us who spend our commute stuck in traffic, I'd much rather have the car drive itself while I blog. :-)

5. Transporting children to violin lessons.

6. Eventually, self-driving cars will be so much safer than humans that traffic fatalities will be reduced to a small fraction of the current number.

7. etc.

In talks I gave in the early 2000s, I gave 2020 as the year that the technology for self-driving vehicles would be feasible. Given that Google, Apple, Uber, Mercedes, Tesla, and many other entities are putting significant effort into it right now, I think my prediction will be pretty close.

Bret said...

Popular Mechanics agrees with aog:

http://www.popularmechanics.com/technology/robots/a13310/robot-universe-dominant-lifeform-17549081/?src=TrueAnth_POPMECHANICS_TW&utm_campaign=trueAnthem:+Trending+Content&utm_content=d3VRgg&utm_medium=trueAnthem&utm_source=twitter#!d3VRgg

erp said...

Bret, the problem with the notion of self-driving cars is it will quickly go from a novelty to a requirement.

Of course everybody agrees with aog, I just don't like the idea of humans being incorporated into the Borg.

Anonymous said...

Bret;

Popular Mechanics agrees with aog

That doesn't mean I'm wrong.

Let's replace "human style general intelligence" with just "general intelligence". Is your view that this is possible only with biological systems?

I think you have to simulate the rest of the human as well, and a virtual reality for that simulated human to develop in

Why? Humans don't need a virtual reality to do so, why would a cybernetic intelligence? Why is the real world not sufficient?

A contrary view on the uncanny valley. It's hard for me to judge because I have never experienced it.

erp;

I simply can't see cybernetics enslaving biologicals. It doesn't make any sense. They might keep us as well-treated pets though (see "The Culture" books by Iain M. Banks or "The Golden Transcendence" by John C. Wright for fictional examples). I tend to think "Newton's Wake" has the better view, where the cybernetics interact with us briefly, until they get bored and leave for a better place.

I personally would love a self-driving car. To me, driving is a boring waste of time which I do only because I need to move from point A to point B. Alternatively, why does anyone want a chauffeur? A self-driving car is simply chauffeurs for the masses.

Bret said...

aog wrote: "Let's replace "human style general intelligence" with just "general intelligence". Is your view that this is possible only with biological systems?"

Possible? Anything's possible.

aog wrote: "Why is the real world not sufficient?"

Let's specify time frame. I've been mostly discussing AI possibilities in the near future (say, within 50 years) as part of the "Singularity." In one of your comments, you wrote that you're "thinking in large time scales, thousands to millions of years."

I think that the answers are completely different in those two time frames. In 50 years, in my opinion, the "real world" is definitely not sufficient to cause evolution of an AI super intelligence. That's why a very, very sophisticated virtual reality would be needed and I'm very skeptical that either that sort of VR or a super intelligence will exist in that time frame.

In a million years? Maybe.

And maybe not. The reason I say maybe not is that I think intelligence is hugely overrated and I am doubtful that there are any evolutionary processes that would push the creation of a super intelligence.

Let me ask you this. Let's assume that humans don't destroy themselves and computers/robots never get particularly intelligent and we're left in peace to evolve for 100 million years. Will humans (or the descendants of homo sapiens) be substantially smarter on average than we are now? If so, what will drive that evolution? Once bad decisions don't kill us, why would we evolve to be more intelligent on average? From what I can tell, if anything, more intelligent people are having fewer children, which probably won't lead to a more intelligent population.

To me, what limits the usefulness of intelligence is the combinatorial problem. Increased intelligence enables thinking about more complicated subjects. The complexity of a subject is defined by the interrelations of the factors of the subject. Let's say someone is just smart enough to think through a subject with 10 factors and each factor is affected by every other factor. The complexity of the subject is basically 10 factorial. Now let's say that person wants to add a single additional factor to the subject. They'd have to have 11 times the intelligence (or cognitive capacity) to do so! To merely increase the factors they're considering by 10%!

Everybody's so excited about the "Singularity" supposedly being brought about by the exponential underlying Moore's law. Exponentials grow, well, exponentially. But exponential growth is basically zero compared to combinatorial growth. Zero.

Plot an exponential versus a gamma function (a continuous, factorial-like function). There's a tiny range where the exponential is observable above the origin while the gamma function is still below the top of the chart. I believe intelligence versus the subjects to be understood is in that narrow range now, and there is very limited use for intelligence, human or machine based, to evolve much further.
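
To see how lopsided that comparison is, here's a minimal Python sketch (using the standard library's math.gamma as the continuous factorial; the cutoff of 25 is arbitrary):

    # Exponential vs. combinatorial growth: 2^n against n! = gamma(n+1).
    import math

    for n in range(5, 26, 5):
        print(f"n={n:2d}  2^n={2.0 ** n:.2e}  n!={math.gamma(n + 1):.2e}")

    # By n = 25, 2^n is ~3.4e7 while n! is ~1.6e25. On any chart scaled
    # to show the gamma function, the exponential hugs the axis.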

P.S. I may rewrite part of this comment and make it into a post at some point.

Bret said...

Also related. I haven't read the paper yet, but I will:

http://blogs.wsj.com/economics/2015/02/17/the-robots-are-coming-for-your-paycheck/?mod=WSJBlog

erp said...

I like driving, but perhaps that's because my commuting was done on the NYC subway, which is an entirely other subject. My other experience was in rural Vermont, where perhaps robot drivers would be useful, if only to scrape the icy windshield and dig the car out of snow drifts.

The danger I see if we don't get rid of the fascists, is that robot cars will be easy to program from a central point and we will be herded where our masters want us to go and kept out of places they don't want us to see.

Bret, I haven't kept abreast. Are self-driving cars just that, or will we have a robot-of-all-work to handle the driving as well as other menial tasks we humans abjure?

Howard said...

The number one reason to have "Fear of Intelligence" is yet another opportunity to be an Apocaholic.

Anonymous said...

Bret;

I don't think humans will get much smarter, but that IMHO is due to inherent limitations of biological systems that I don't think apply to cybernetic ones (for instance, you can't really increase the clock speed on biological processes). Further, cybernetic life would come with its own design instructions, which would make it far easier to have directed evolution rather than random, and that favors ever increasing intelligence.

subject with 10 factors and each factor is affected by every other factor.

That's an n^2 problem, not a factorial one.

I'm not a big fan of the Singularity, although for different reasons (I think rate of adoption will be the limiting factor - you must deploy generation N in the field and observe before you can build a clearly better generation N+1, which gums up the works). I'm not sure we're 50 years away from a real cybernetic AI, but that is at the low end of my time scale estimates. I'm also not sure I claimed "super-intelligences", although I do think cybernetic AIs will in the long run become such, as noted above.

erp said...

Howard, I didn't keep up with the earlier fears because I had great optimism that We, the People, could and would overcome all odds. Not in my wildest nightmares could I have foreseen how our country could be turned and twisted into the fascist state it is now.

It's not intelligence that is to be feared, but its opposite, blind adherence to the leader a la the Borg.

BTW - I miss the old crowd and wonder how Oro is doing these days.

erp said...

You guys probably all saw this already, but I think it's amazing.

Clovis said...

Bret,

---
Everybody's so excited about the "Singularity" supposedly being brought about by the exponential underlying Moore's law.
---
Not to mention that Moore's law is in a coma. Unless we learn new ways to do computers (e.g. quantum computers), silicon-based technology cannot keep up with the exponential growth of computation anymore.

---
To me, what limits the usefulness of intelligence is the combinatorial problem.
---
A very good point. It touches on the P vs. NP problem.

The weakness in your argument resides in the very reason we still do not recognize computers as intelligent beings.
Their approach to a general problem is by default a combinatorial one, by analysis of all possibilities. That's clearly not *our* approach to general problems. The standard example is chess: at higher levels, chess masters are not counting every possibility in advance. They develop, through experience and tactics, ways to tackle the problem more directly, some NP-to-P reduction of sorts.

In other words, what we recognize as human intelligence is the skill of turning complex (sometimes combinatorially complex) problems into solvable ones. We often do it not through direct computation, like a computer, but by this emergent property of our brains that we generically call sentience.

Our ignorance of how that property emerges from our basic computational cells is the big unknown here. If a computer can achieve that, we have no clue yet how superior its capacity to turn NP problems into P ones would be. If that capacity can be greatly enhanced, then the Singularity people may have a point. Otherwise, I'd say your views would win.

Anyway, the question is far from settled.

Bret said...

aog wrote: "That's an n^2 problem, not a factorial one."

Right you are. Yes, I'm embarrassed for writing something so stupid. Oops.

erp said...

See, that's how I feel when I put an apostrophe where it doesn't belong! Too many things on your mind.

Barry Meislin said...

Fear of intelligence (sic) indeed:

http://www.weeklystandard.com/blogs/gruber-pad_864995.html

Question:
Can robots be programmed to be corrupt? Hilariously stupid? Inane?

Hey Skipper said...

I'm playing some serious catchup here.

Checking out of one country and going through a month and a half of intensive training will do that. Never mind 0200 simulator show times.

[OP:] … in a winner-takes-all scenario, where it redesigns and rebuilds itself so fast that nothing else will ever be able to outthink it and disable it, so we'd better hope it's beneficent.

There's a quandary in neuroscience: Any brain simple enough to understand is too simple to understand itself.

But there's an implied, but wholly unacknowledged elephant standing right in the center of the china shop: an invocation of evolution that completely contradicts evolution.

Living systems changed over time through recursion plus random variation in the context of a geophysical environment. No plan, no design, no "rebuilding".

Instead, the OP gives us an antecedent that is creationism on steroids. It requires intelligence a step before intelligence is possible. At which step does a non-biological entity go from being a mere Turing machine to something that can redesign itself, without having to be capable of redesigning itself before it can possibly redesign itself? Worse, whatever It is that redesigns and rebuilds itself has to move from a state where It is completely incapable of either of those things. Yet the only example we have of such a thing never involved that leap in the first place.

It's kind of like this.

Hey Skipper said...

[OP:] Siri's "descendants" will far surpass the Turing Test in a couple of decades (or sooner or later), and will appear extremely intelligent, but will be just a very, very good verbal analysis and response AI and will have no general intelligence of any kind.

That's a claim that has been made for a couple decades now. I have an MS in Computer Science. All the classes I took, save one, were pertinent, rigorous, and useful. The sole exception was Artificial Intelligence. It was three semester credits' worth of ignoring that there is no solving an incomprehensible problem with computers.

[OP:] The illusion that many of us seem to have fallen for is that the behaviors we associate with our own intelligence are only possible if we create an entity whose intelligence somehow operates like a human's, or operates orthogonally to the way human intelligence operates but is similarly global and all-encompassing.

Coincidentally, the NYT had a pretty good article on this subject, Outing AI: Beyond the Turing Test, just a week after the post — it is a crime they didn't hat tip you.

Unless we assume that humanlike intelligence represents all possible forms of intelligence – a whopper of an assumption – why define an advanced A.I. by its resemblance to ours? After all, “intelligence” is notoriously difficult to define, and human intelligence simply can’t exhaust the possibilities. Granted, doing so may at times have practical value in the laboratory, but in cultural terms it is self-defeating, unethical and perhaps even dangerous.

[AOG:] I'm not sure whether you mean to strongly imply, as you do, that human style general intelligence cannot be done with non-biological computational systems.

[erp:] Bret, I'd like to think people would always be able to distinguish life from robotics …

Animals can distinguish life from non-life, and discriminate among life forms. Basic stuff.

A/I isn't within screaming distance.


[Clovis:] Not to mention that Moore's law is in a coma. Unless we learn new ways to do computers (e.g. quantum computers), silicon-based technology cannot keep up with the exponential growth of computation anymore.

DING DING DING Thread Winner!

So long as computation is two dimensional and binary, AI will never happen.

Anonymous said...

Skipper;

At which step does a non-biological entity go from being a mere Turing machine to something that can redesign itself, without having to be capable of redesigning itself before it can possibly redesign itself?

Two paths. One presupposes intelligence, but only on the part of the humans who create the first cybernetic system capable of self-modification.

The other is first building self-replicating systems and then waiting. Not even cybernetic reproduction would be perfect, so you'd get normal evolution there.

A possible third path is similar to Bret's - much chip design is now done by software, we may reasonably presume that this gets more and more so. At what point does this kind of thing become sufficiently self referential to be self sustaining, when mated with automated fabs? Although you might consider this a subcase of path 1.

Clovis said...

I need to ask, Skipper: what is a 0200 simulator?

Clovis said...

AOG,

---
A possible third path is similar to Bret's - much chip design is now done by software, we may reasonably presume that this gets more and more so.
---
A good point, though I'd argue it will hardly happen as long as those "automated" systems keep being human-serviced.

By which I mean: if their input of energy and raw materials depends on us feeding it, I don't think that provides for an evolutionary path.

If at some point those automated systems can work the whole production line, including the energy and materials, all by themselves, then soon enough they may be one more species competing for basic survival. At which point I think your view of them as "outgrowing our planet" goes south. If anything, they will need more energy than we do; every step up the scale of complexity has been like that.

Anonymous said...

Clovis;

Yes, the presumption is a system sufficiently automated that no human input is required. See "Von Neumann Automata" for more details.

It's unclear why you think such a system would need more energy; that's not the current technological trend. As for outgrowing our planet, I think it almost certain that such systems would be deployed in space, not on Earth at all (this is in fact Skipper's argument for no human space travel, that we'll just build such systems to get the resources without going ourselves).

Hey Skipper said...

[Clovis:] I need to ask, Skipper: what is a 0200 simulator?

Sorry, that was a bit cryptic. I was referring to a full-flight simulator period that started at two in the morning, with a two-hour pre-brief, four hours of dial-a-disaster, then an hour of debrief.

For reasons, the B757 simulators are running 24/7, so my schedule over the last month has been all over the place — a week of 0200 sims, then a couple starting at a more human hour (0600), then back to 0200 again.

So after the goofy hours and the studying, what already marginal intellectual horsepower I had was all used up.

[AOG:] A possible third path is similar to Bret's - much chip design is now done by software, we may reasonably presume that this gets more and more so.

You are arguing there is an evolutionary path from mere machines-in-silicon to some new form of life and intelligence.

Yet in doing so, you are invoking a deus ex machina: some agent from outside the environment creates this new form of self-propagating intelligence.

I had no idea you are a creationist.

Also, I doubt you are taking into account volume and time. We have no idea how life arose from mere chemistry, but it did so after at least several hundred million years of simultaneous trials occurring over most of the earth's surface, and in the oceans. In comparison, the experimental volume available to computers is at least 12 orders of magnitude smaller. (As a SWAG, the volume in which life evolved — excluding the atmosphere — was roughly 5*10^18 cubic feet.)

That's a lot of volume available for massively parallel chemistry experiments to produce life.

… this is in fact Skipper's argument for no human space travel, that we'll just build such systems to get the resources without going ourselves.

No, my argument is that whatever resources there might be in space are at such low density compared to Earth as to make them prohibitively expensive.

Never mind that the long term population growth trend is negative.

Anonymous said...

Skipper;

I had no idea you are a creationist

In speculating about cybernetic life I am.

I doubt you are taking into account volume and time

Quite the opposite; I think it is you who are not. The solar system is vastly larger than Earth and will be around longer than Earth will be. Earth, terrestrial life, lives in the smaller amount of time and space.

whatever resources there might be in space are at such low density compared to Earth as to make them prohibitively expensive

Like solar energy, which is much denser here on Earth's surface than anywhere else in the solar system?

Clovis said...

AOG,

---
Like solar energy, which is much denser here on Earth's surface than anywhere else in the solar system?
---

No, like the materials that are needed to extract and make use of that energy in any machine.

---
It's unclear why you think such a system would need more energy; that's not the current technological trend.
---

You can't possibly back up that affirmation with data. I guess you may be thinking in terms of the processors' use of energy, while that is far from being the representative piece of the energy pie needed to actually produce and run a world of computers and robots.

Anonymous said...

Skipper;

No other planetary body is as dense as Earth? Nor any dwarf planet or planetoid? You can't possibly back up that affirmation with data.

On the flip side, I would point to not only processor and component energy consumption trends, but also nano-engineering trends. It depends on what you mean by "more energy". I presumed you meant for the smallest units of replication; otherwise, given the much larger volume and energy density available off Earth, it would seem an argument in favor of my viewpoint. A society that needs that much energy will go to space to get it.

erp said...

aog, I think Skipper meant no other planetary body in our solar system is as dense as the earth.

Anonymous said...

erp;

Perhaps, but that would still be wrong. Mercury is denser than Earth. That's still a red herring, because being only 50% as dense as Earth isn't a big deal in terms of resource gathering. I would think that if you live on Earth it would be desirable to have your heavy resource extraction some place else, where it can't mess up the environment you live in. I would also note moving resources *to* Earth is cheap - it's getting them into space that's expensive. That, however, would be a capital cost, and once your automata were in place you'd basically get free resources with almost no environmental impact. I can't imagine not preferring that to our current mining and manufacturing.

erp said...

aog, as long as private investors are doing it, it's fine with me. I just want to stop spending our money on politically correct schemes designed to enrich cronies of the compassionates aka collectivists.

Don't you think if people were left to their own devices, there would be any manner of things unknown to us now that might negate even the need to mine on Mercury or anywhere else?

One of my favorite quotes: There are more things in heaven and earth, Horatio, Than are dreamt of in your philosophy and I don't think we're anywhere close to figuring it all out.

Hey Skipper said...

(Trigger warning: if you are subject to traumatic feelings due to inadequate proof reading, close this browser window STAT.)

[Hey Skipper:] No, my argument is that whatever resources there might be in space are at such low density compared to Earth as to make them prohibitively expensive.

[AOG:] The solar system is vastly larger than Earth and will be around longer than Earth will be. Earth, terrestial life, lives in the smaller amount of time and space.


The volume of the solar system is some incomprehensibly huge number, depending upon where one wishes to draw the boundary.

But that's not the entire answer.

(Keeping in mind that I'm two martinis into Saturday night, so be gentle with me) The rest of the answer is mass:volume.

Ignoring which kind of mass we are measuring, the mass:volume ratio for the solar system is close as dammit to zero.

In contrast, after normalizing units (NB: two martinis into the evening, I'm not about to prove this mathematically), the mass:volume ratio of Earth is 1. One divided by near as dammit to zero is practically infinity.

That is what I mean when I say that, compared to Earth, the volume density of the solar system's material resources is vanishingly small. And that is before striking off Jupiter, Saturn and Uranus. Never mind the energy required to get there and back, the gravitational fields of each are so immense as to render our means forever pathetic.

So redefine the solar system's volume to be that within Jupiter's orbit. Mass:volume is still nearly zero.
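
Back-of-the-envelope, in Python (a sketch only; the Sun's mass and the orbital radii are standard approximate figures):

    # Rough mass:volume comparison, Earth vs. solar system.
    import math

    M_SUN = 1.99e30      # kg; the Sun holds ~99.8% of the system's mass
    R_NEPTUNE = 4.5e12   # m; one common outer boundary
    R_JUPITER = 7.8e11   # m; the tighter boundary suggested above
    RHO_EARTH = 5514.0   # kg/m^3; Earth's mean density

    for name, r in (("out to Neptune", R_NEPTUNE), ("out to Jupiter", R_JUPITER)):
        rho = M_SUN / ((4.0 / 3.0) * math.pi * r ** 3)
        print(f"{name}: ~{rho:.0e} kg/m^3, Earth:system ratio ~{RHO_EARTH / rho:.0e}")

    # Even counting the Sun itself, the ratio runs ~1e9 to ~1e12:
    # next to Earth, the solar system's mass:volume really is nearly zero.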

Therefore going to space for anything involving mass is a loser from the git go. Unless meteorites are unrepresentative of objects in the solar system, there are no pure platinum spheroids marking off orbital time waiting for us to show up and tow them to their destiny. (And that is completely ignoring the energy cost of doing so.)

And even if there are — assuming so is to make a leap into religious belief — there aren't enough of them in a suitable volume to make it worth getting them. Of course, one can posit all sorts of magic to wish away mass and energy, but wishes aren't reality. Until getting enough mass out to an object and shifting the desirable mass back to Earth is less costly than getting the same mass right here on Earth — I'll bet dinner that the difference today is at least 9 orders of magnitude — then it is a non-starter.

You almost sound like Paul Ehrlich, or The Club of Rome. Yes, I know those are fighting words. But both posited crippling shortfalls in basic necessities. And both elevated wrong to an entirely new level.

But isn't that what you are doing?

Worse than that, you're taking what is valuable today as what will be valuable forever. Assume, just for a moment, that graphene becomes economical in large quantities.

Anonymous said...

Skipper;

the mass:volume ratio for the solar system is close as dammit to zero

Doesn't your logic apply just as well to Earth, which therefore has no usable mass? Or is Earth not in the solar system?

Jupiter, Saturn and Uranus

As it turns out, those are not the only bodies of significant mass in the Solar System.

I would also note you're ignoring most of the context of this discussion, which was about automation, AI, and Von Neumann automata. Given those, it seems very likely to me the cost of mass from off Earth will end up cheaper than on Earth. Besides ignoring dozens of massive bodies in the Solar System, you also seem to be overlooking that transport costs are (1) much cheaper to Earth than from it and (2) there's a huge free-for-use fusion reactor which can provide almost (if not entirely) all of the power needed.

But both posited crippling shortfalls in basic necessities. And both elevated wrong to an entirely new level.

But isn't that what you are doing?


No. Please point anywhere I posited a shortfall in basic necessities. Even if space resources remain more expensive, as our society gets richer we will value a clean environment relatively more (it's a luxury good) and will be willing to make the extra effort in that regard.

Anonymous said...

P.S. Skipper, here's a fun math puzzle for you. Presume our civilization continues to increase its energy use. For a given rate of increase, how long until the Earth becomes uninhabitable due to the increase in temperature necessary to dump that waste heat?

Clovis said...

AOG,

Sorry but that math puzzle is a bit of a fallacy.

Since we only want the long-term temperature, it is fair to ignore spatial heat propagation and solve the heat equation with a heat source [solar and human-generated, meaning external solar energy input, natural nuclear decay sources, and human-induced fission and fusion (and, temporarily, chemical sources too)] and a heat sink (loss to external space at zero temperature, by the Stefan-Boltzmann law) proportional to T^4.

So the asymptotic temperature will be proportional to the fourth root of (natural heat + human heat). IOW, if we could produce as much energy as the sun delivers to us [keep in mind it gives us more energy in one hour than we use in one year], that would give us an increase of a mere 19% in the (long-term) Earth average temperature.

You pose an ever increasing rate of energy use, but that must be reasonably limited, and limiting it to as much energy as we get from the Sun (which is huge!), we still have a fairly habitable planet.
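
To make the scaling concrete, here's a minimal Python sketch of that fourth-root relationship (the 288 K baseline is a rough standard figure for Earth's mean surface temperature):

    # Equilibrium temperature under the Stefan-Boltzmann law:
    # radiated power scales as T^4, so equilibrium T scales as
    # (total heat input)^(1/4).
    T_BASE = 288.0  # K; rough present-day mean surface temperature

    for multiple in (1.1, 1.5, 2.0):
        t_new = T_BASE * multiple ** 0.25
        print(f"heat input x{multiple}: T ~ {t_new:.0f} K ({t_new - T_BASE:+.0f} K)")

    # Doubling the input (human output equal to all solar input) raises T
    # by a factor of 2**0.25, i.e. ~19%, or roughly 55 K on a 288 K baseline.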

Anonymous said...

This was Freeman Dyson's insight, that you would eventually hit such a limit and still want more energy, at which point you start building your Dyson Sphere.

Bret said...

Hey Skipper wrote: "That's a claim that has been made for a couple decades now."

My recollection is that a couple of decades ago, that claim had stopped being made. You got any links?

Hey Skipper wrote: "A/I isn't within screaming distance."

AI by some definitions is already here and then some. Natural language understanding has far surpassed its definition of 50 years ago. It's just that now that it's done, it no longer seems intelligent.

Hey Skipper wrote: "So long as computation is two dimensional and binary, AI will never happen."

Never is a long time.

Clovis said...

AOG,

Fair enough, but then if you have the means to build a Dyson Sphere, you also have the means to redistribute heat such that Earth keeps being cooled, like we do with a room with air conditioning.

A common source of paradoxes is to pose a very advanced situation (e.g. we producing and using all that energy) while sticking to a supposed problem (heating of Earth) that most probably would be trivial to solve at that level of technological advancement.

Anonymous said...

Clovis;

Air conditioning works by physically transferring heat from region A to region B. For cooling the Earth, what is region B, and how would you physically transport the heat there? Space elevator heat pipes? Refrigeration lasers?

Anonymous said...

Skipper;

You sly dog, you almost got that past me. Let's see, a 19% increase of Earth's mean temperature is more than 50°C (~300 K * 0.19 ≈ 57). I think it's quite debatable whether that is still a human-habitable planet.

Hey Skipper said...

[AOG:] Doesn't your logic apply just as well to Earth, which therefore has no usable mass? Or is Earth not in the solar system?

That's not my logic. You are the one who said the solar system is vastly larger than Earth. No duh.

But we can play around with units so as to assign Earth a mass:volume ratio of 1. Why? Because you haven't begun to make a case for anything worthwhile existing where the M:V is zero. Which means that while the solar system is vastly larger than Earth, its M:V is very nearly zero.

I'm sure there are volumes in the solar system whose M:V ratio is close to 1: Mars, for instance. Mercury might even be a little north of one. But so long as humans aren't engaging in wholesale conversion of mass into energy, very little of what we "consume" here on Earth is actually gone. Why go to Mars for what is just as plentiful here, and far easier to get to?

I would also note you're ignoring most of the context of this discussion, which was about automation, AI, and Von Neumann automata.

I'm not ignoring them, I'm just not impressed by substance-free claims. With a great deal of history on my side, I think it is far more likely that increases in computational power will follow the same horizontal-S curve that has characterized all other realms of technological development we know of.

No. Please point anywhere I posited a shortfall in basic necessities. Even if space resources remain more expensive, as our society gets richer we will value a clean environment relatively more (it's a luxury good) and will be willing to make the extra effort in that regard.

You have posited a sufficient shortfall in what we are increasingly valuing as a basic necessity — a clean environment — without demonstrating that modern manufacturing or mining techniques are sufficiently foul to ever justify the expense required to shift them off the planet.

As a matter of routine, I get to see how much of Earth is completely depopulated. Most of Canada, all of Siberia. Whacking great chunks of the US midwest. Saudi Arabia's empty quarter. I have driven Nevada's great basin at night and not seen a man-made light of any kind for almost two hours. The Pacific Ocean is miles upon miles of nothing but miles upon miles.

If the human population was to have a sustained positive increase, no matter how small, for a sufficiently long time, then at some point those spaces would be full.

But there is precisely no evidence for that supposition, and a great deal that precisely the opposite is happening.

Which means that the time when we won't be able to shove objectionable processes out of sight and mind here on good old Terra Firma is somewhere between nearly never and forever.

Hey Skipper said...

P.S. Skipper, here's a fun math puzzle for you. Presume our civilization continues to increase its energy use. For a given rate of increase, how long until the Earth becomes uninhabitable due to the increase in temperature necessary to dump that waste heat?

As I asserted above, I reject your initial premise: without an increasing population, our energy use will not continue to increase:

According to the Energy Information Administration's statistics, the per-capita energy consumption in the US has been somewhat consistent from the 1970s to today. The average has been 334 million British thermal units (BTUs) per person from 1980 to 2010.

So I'm not sure where you get the notion that our civilization will continue to increase its energy use. Widespread adoption of LED lighting, among other things, will create the opposite effect.

Then there is the presumption that the Earth can't dump that waste heat. Thunderstorms and hurricanes are giant heat engines, shifting warm air from near the surface to the stratosphere, where the heat can radiate nearly directly to outer space: that's how the waste heat will get dumped. (And is also problematic for warmenists, because if there was substantial warming actually going on, then it should show up in increased cyclonic energy and other severe convective weather. Ooops.)

AI by some definitions is already here and then some. Natural language understanding has far surpassed its definition of 50 years ago. It's just that now that it's done, it no longer seems intelligent.

In very, very restricted ways. For questions that get asked sufficiently often, within narrow bounds, sure.

But try putting this to the Jeopardy computer: "The blue and white thing on Alex Trebek."

The computer will cogitate until the end of time. Meanwhile, the humans would be at the limits of their reaction times to say: "What is his necktie?"

Hey Skipper said...

Hey Skipper wrote: "So long as computation is two dimensional and binary, AI will never happen."

Never is a long time.


Yes, it is. Bold claim, I know.

How much Flatland does it take to reach the volume of a single human brain?

Bret said...

Hey Skipper asked: "How much Flatland does it take to reach the volume of a single human brain?"

Why is that question relevant?

The human brain has the processing capacity of approximately 10^15 instructions per second, give or take an order of magnitude. Take an i7-5670X. Full out, using all cores and vector processors at peak efficiency, it hits about 3.4e11 instructions per second on a 17.6 x 20.2 mm die. That's more or less 10^15 instructions per second per square meter, so 1 square meter of i7-5670X chips equals the processing power of the brain. Of course, that square meter would cost about $3 million for the chips alone. But that's at 22nm etching. They think they can get down to 5nm etching, which gives another factor of 20. So then it will be a mere 20th of a square meter of flatland to equal the processing in a human brain, and a mere $100,000 for a human equivalent, probably achieved by the end of the decade. Now wait a few centuries and the $100,000 will drop to $1,000 with manufacturing efficiencies, no extensions of Moore's law required.
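
For anyone who wants to check that arithmetic, a quick Python sketch using the same rough figures (all of them assumptions from the paragraph above, not measurements):

    # Back-of-the-envelope check of the brain-vs.-chip numbers.
    die_ips = 3.4e11                 # instructions/sec, all cores flat out
    die_area_m2 = 0.0176 * 0.0202    # 17.6 mm x 20.2 mm die

    ips_per_m2 = die_ips / die_area_m2
    print(f"~{ips_per_m2:.1e} instructions/sec per m^2")          # ~1e15

    brain_ips = 1e15                 # rough human-brain estimate
    print(f"area for brain parity: ~{brain_ips / ips_per_m2:.2f} m^2")

    shrink = (22.0 / 5.0) ** 2       # areal density gain, 22 nm -> 5 nm
    print(f"22nm -> 5nm density factor: ~{shrink:.0f}x")          # ~20x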

Anonymous said...

Skipper;

I think it is far more likely that increases computational power will follow the same horizontal-S curve that have characterized all other realms of technological development we know of.

Such as our manufacturing power? At what point would you say that hit the upper curve?

Hey Skipper said...

Such as our manufacturing power? At what point would you say that hit the upper curve?

What, specifically? Cars, ships, planes, houses?

Why is [the Flatland] question relevant?

Because every intelligence we know of -- and there are plenty that aren't human -- uses a massively interconnected, three-dimensional organ that almost certainly doesn't rely upon binary computations.

In other words, something that is absolutely not a Turing machine.

Yet you are positing that a Turing machine, made fast enough, can be intelligent.

That is an extraordinary claim.

Anonymous said...

Skipper;

What, specifically?

As specifically as you used the term "computational power".

I would note your analysis depends on Earth launch capabilities never becoming much cheaper. That's a rather bold statement as well.

you are positing that a Turing machine, made fast enough, can be intelligent.

That is an extraordinary claim


Why? The counterclaim requires either (1) sentience is not a computable process, or (2) biological brains perform an unknown form of computation that is not available to a Turing machine. Any evidence for either?