
Thursday, August 09, 2018

You Are Here (Update)

In 2005, I posted about a new PowerPC processor we were starting to use in some of our robots. With a processing capability of around 200 billion floating-point operations per second (200 gigaFLOPS), it was big news at the time, as it put the potential intelligence of our robots into the range of "mouse" instead of "lizard," as shown by the blue dot in the graph below.


I was thinking about this graph recently with the announcement of NVidia's new Xavier system-on-a-chip computer. With a processing capability of 30,000,000,000,000 (30 trillion) operations per second at a price of $1,299, it can be represented by the magenta square labeled "You Are Here" on the above graph. We will have this new processor incorporated into a new product prototype this October, so just like the PowerPC of yesteryear, this is something real and used by actual developers, as opposed to some theoretical gadget.

It shows the potential intelligence of this device to be somewhere between monkey and human. What does that mean? First, while it's not known how most of the brain works, the functionality of some parts of the brain is known really well, for example, the first part of the visual cortex. It's then straightforward to estimate how many computations are required for that functionality. The weight of that portion of the brain is known, and it is assumed (this is the leap of faith) that the rest of the brain's processing happens with approximately the same efficiency. Divide the weight of the organism's brain by the weight of that part of the visual cortex, multiply by the number of operations required for that part of the visual cortex, and voilà: you have an estimate of the total number of operations per second required for the entire brain of a given organism.
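To make the arithmetic concrete, here's a minimal sketch of that scaling estimate in Python. Every number below is an illustrative placeholder I made up for the example, not a measured value from the graph's underlying research.

```python
# Back-of-the-envelope brain-scaling estimate described above.
# All numbers are illustrative placeholders, not measured values.

known_region_ops_per_sec = 1e9   # ops/sec to replicate the well-understood region (assumed)
known_region_weight_g = 0.02     # weight of that region in grams (assumed)
brain_weight_g = 1350.0          # whole-brain weight of the organism in grams (assumed)

# The leap of faith: the rest of the brain computes with the same
# ops-per-gram efficiency as the well-understood region.
ops_per_gram = known_region_ops_per_sec / known_region_weight_g
whole_brain_ops_per_sec = ops_per_gram * brain_weight_g

print(f"Estimated whole-brain processing: {whole_brain_ops_per_sec:.2e} ops/sec")
```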

A reasonable reaction is, "Yeah, sure, whatever, but without the appropriate software, how will this computer be intelligent at all, much less at a monkey or human level?" That was my first reaction as well, but I've since concluded it was misguided. Just like nobody has to know how a human brain works for it to work just fine, nobody has to know how a neural net within a computer works in order for it to work just fine. In fact, that's exactly what's happening, and at a very rapid rate: a number of research groups are trying different structures and computation approaches and are steadily improving the functionality and accuracy of neural nets without really understanding how they work! It's basically a trial-and-error evolutionary approach.
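To illustrate the trial-and-error flavor of that process, here's a toy Python sketch: sample candidate network structures at random, "train" each one, and keep the best. The train_and_score function is a made-up stand-in for a real training run, just so the sketch is self-contained.

```python
import random

def train_and_score(layers: int, width: int) -> float:
    """Made-up stand-in for training a network with this structure and
    measuring its accuracy; in real research this is a full training run."""
    return 1.0 / (1.0 + abs(layers - 8) + abs(width - 256) / 64.0) + random.gauss(0.0, 0.01)

best_structure, best_score = None, float("-inf")
for trial in range(50):
    # Sample a candidate structure; keep it if it scores better.
    candidate = (random.randint(1, 16), random.choice([64, 128, 256, 512]))
    score = train_and_score(*candidate)
    if score > best_score:
        best_structure, best_score = candidate, score

print(f"Best structure found: layers={best_structure[0]}, "
      f"width={best_structure[1]}, score={best_score:.3f}")
```

Nobody in the loop needs to understand why the winning structure works; the search simply keeps whatever scores better.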

Using already-known neural nets, this new Xavier processor will be able to recognize the objects in an image with a high degree of accuracy, and tell you which pixel belongs to which object, 100 times per second. Is that intelligence? Well, when a person does it, we think that's a form of intelligence, and since nobody can tell you how the neural net works, I don't think we can say whether or not it's intelligent. If it is intelligent, it's certainly an alien intelligence, but watching it operate, it looks to me as if it's intelligent.
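For the curious, this kind of per-pixel labeling is called semantic segmentation, and a generic sketch of it looks like the following. It uses an off-the-shelf PyTorch/torchvision model purely as an illustration; it is not the software stack our robots or the Xavier actually run, and "scene.jpg" is just a placeholder image path.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Load a pretrained semantic-segmentation network (illustrative choice).
model = models.segmentation.fcn_resnet50(pretrained=True).eval()

# Standard ImageNet-style preprocessing expected by the model.
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open("scene.jpg").convert("RGB")  # placeholder image path
batch = preprocess(img).unsqueeze(0)          # shape: [1, 3, H, W]

with torch.no_grad():
    out = model(batch)["out"]                 # shape: [1, num_classes, H, W]

# For every pixel, pick the most likely class: "which pixel
# belongs to which object".
labels = out.argmax(dim=1)[0]                 # shape: [H, W]
print(labels.shape, labels.unique())
```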

For me personally, if something seems intelligent, then it is intelligent.

11 comments:

Clovis said...

Pretty amazing.

---
And that was my first reaction as well, but I've since concluded it was misguided. Just like nobody has to know how a human brain works for it to work just fine, nobody has to know how a neural net within a computer works in order for it to work just fine
---

This is the kind of insight that looks obvious after the fact, but that I would hardly arrive at by myself. Thanks.

Hey Skipper said...

[OP:] I was thinking about this graph recently with the announcement of NVidia's new Xavier system-on-a-chip computer. With a processing capability of 30,000,000,000,000 (30 trillion) operations per second at a price of $1,299, it can be represented by the magenta square labeled "You Are Here" on the above graph.

Where is the system that can do what a honey bee does?

Peter said...

Or tell a good joke.

Hey Skipper said...

Never mind a good joke.

That graphic is an excellent example of question begging: it takes as proven the assumption that brain activity can be equated in some regard to binary instructions.

But if the brain isn't a binary machine, then all bets are off.

If brain function is reducible to MIPS, then somewhere in the 1990s there should have been a machine that could be a bee as well as a bee can be a bee.

Yet twenty years later, with three orders of magnitude more processing power, my bet would be safe if I were to throw it down on the side of the bee that isn't a machine bee.

And that is entirely aside from the equally inconvenient fact that there is no hope of packaging whatever would be a machine bee into a bee bee.

Bret said...

Hey Skipper,

Can you do what a honey bee does? If so, please send a video of you flying about and making honey. That'd be pretty funny, but I don't think it means you're less intelligent than a honey bee.

I have no idea what you're trying to say.

Peter said...

But Bret, isn't Skipper's point that if you say this computer is intelligent, you have to say the same about a honey bee? I'm not sure I completely follow him either, but it seems to me some effort to define intelligence is called for, no? Are you not placing it foursquare in the realm of calculations and problem-solving? Would you call someone who was hopeless with numbers and technical stuff but was a brilliant painter intelligent? How about a screamingly funny comedian who can't screw in a light bulb or do grade school math, which was what I had in mind with that quip?

Bret said...

Peter,

I'd say all of your examples exhibit certain forms of intelligence.

I'm getting that this is just a definitional argument? Ok, fine. Then don't consider a device that can look at a complex arbitrary image and identify what's in it as intelligent. Don't consider a device that can hold a conversation with you at a more sophisticated level than a child intelligent. Come up with another word sequence for it then. Perhaps "device of remarkable computational complexity exhibiting some degree of functionality of monkeys, primates, and humans" (acronym: dorccesdofompah).

Peter said...

Oh gawd, Bret, not that cacophonous gobbledygook, please. I take it all back. Intelligence is fine. :-)

I understand that scientists and boffins define terms in a strict technical way. "Define your terms," sayeth the Master. That's fine, but as the representative of the humanities here, the problem comes when terms that are used by the general population in an everyday and less than systematic way are used by our scientific savants in a circumscribed professional sense, especially if they get into advocacy or popular writing. Most of AI is beyond my ken, but I've read some stuff on it to get a general sense of what is going on. A lot of the semi-popular stuff is either utopian or dystopian: either computers are going to cure all disease and make us all prosperous, or they're going to revolt and destroy us. There doesn't seem to be much room for nerd computers who follow us everywhere asking if they can be our friend. In other words, the notion of "computer intelligence" can carry with it a whiff of menace.

Anyway, it seems to me that when we little people talk about intelligence, we aren't just talking about computational marvels; there is also an element of will and choice implied. I have to wonder whether either the Xavier or a honey bee can be considered intelligent if neither has any choice but to do what it does. Can the honey bee decide to take a break for the good of his family, or work out a more efficient way of doing what honey bees do? (Yes, I know what strict Darwinists would say, which is why I have so little time for strict Darwinists.)

So, unless your post was intended to claim computers are getting closer and closer to deciding to eat me, I'm not sure it's the right word.

Hey Skipper said...

[Peter:] But Bret, isn't Skipper's point that if you say this computer is intelligent, you have to say the same about a honey bee?

Oh for Peter's sake.

I monkeyed up the pronoun reference so badly that I completely obscured what I meant to say.

Which, optimistically, is this:

The graphic explicitly states that increases in processing speed yield increases in intelligence.

That can't possibly be true, because it is, in effect, devising a ratio between completely unlike concepts. (And that is even before wondering what a dimensionless number — dollars — has to do with anything.)

I am typing this on a 2013 iMac running at 3.1 GHz, which, if I am reading your graph correctly, puts it off the charts, somewhat smarter than a human.

But it isn't, not even close. Like all other computers, it is a Turing machine. As such, it is no smarter than a 1970 Burroughs 5000; it just gets to the end of the tape one heck of a lot faster.

Which is why I brought up the humble bee. (see what I did there?)

If the correlation between MIPS and intelligence is valid, then somewhere around the mid 1990s, computer "intelligence" should have matched a bee's. Ignoring all that annoying packaging stuff, we should have long since had computers that could mimic the entire repertoire of beehavior.

Yet such a thing does not exist.

Which renders the whole concept very suspect.

Intelligence, whether that of a tree or a human, does not derive from synchronous binary state machines. We have utterly no idea how intelligence works, but surely it is not that.

Which I happened to mention here:

In an absolute sense [Noctilucent clouds] are rare, and the very restricted observation conditions make seeing them far rarer still; this was a first for me. … The big question is how it is that I was able to, within three-quarters of a second, use tenuous visual information to retrieve a term I had read at least a half dozen years prior. Humans do this all the time; it is extremely difficult to imagine AI ever managing it.

How much computer power will it take to mimic the autonomous flight of a bee while it searches for the right pollen, gathers the pollen, returns to the hive, and communicates the direction to the pollen source?

Peter said...

But Skipper, isn't Bret's point that there is, or may be, some qualitative change here that humans can't understand or explain or control? Presumably even Turing would have foreseen faster, more efficient Turing machines.

Hey Skipper said...

[OP:] It shows the potential intelligence of this device to be somewhere between monkey and human. What does that mean? First, while it's not known how most of the brain works, the functionality of some parts of the brain is known really well, for example, the first part of the visual cortex. It's then straightforward to estimate how many computations are required for that functionality. The weight of that portion of the brain is known, and it is assumed (this is the leap of faith) that the rest of the brain's processing happens with approximately the same efficiency. Divide the weight of the organism's brain by the weight of that part of the visual cortex, multiply by the number of operations required for that part of the visual cortex, and voilà: you have an estimate of the total number of operations per second required for the entire brain of a given organism.

The parenthetical "leap of faith" is the point in contention. It is one thing to assume some kind of functional correlation in the realm of sensory input -- photons and sound waves being specific physical phenomena. So it should come as no surprise that processing such discrete phenomena is relatively simple to functionally model (i.e., to create an analog that performs the same functions, even though the manner of performing them might be entirely different).

But extrapolating that further isn't merely a leap of faith, it is smack dab in the realm of fantasy.

Without knowing how intelligence works -- and I'm betting we don't even know how a bee thinks -- it is folly to assume that a Turing machine can ever be "intelligent", no matter how many gigaMIPS it has.

The begged question is the assumption that intelligence doesn't require processes completely outside the realm of Turing machines, no matter how "neural networked" they are.

Which is why I bring up the bee. If intelligence and MIPS had any meaningful correlation, we'd long since have been able to mimic bee intelligence in silicon.

But we can't.

Which should bring the whole assumption into serious question.