
Showing posts with label Robots. Show all posts

Thursday, August 09, 2018

You Are Here (Update)

In 2005, I posted about a new PowerPC processor we were starting to use in some of our robots. With a processing capability of around 200 billion floating point operations per second (200 GigaFLOPS), it was big news at the time, as it put the potential intelligence of our robots into the range of "mouse" instead of "lizard," as shown by the blue dot in the graph below.


I was thinking about this graph recently with the announcement of Nvidia's new Xavier system-on-a-chip computer. With a processing capability of 30 trillion (30,000,000,000,000) operations per second at a price of $1,299, it can be represented by the magenta square labeled "You Are Here" on the graph above. We will have this new processor incorporated into a new product prototype this October, so just like the PowerPC of yesteryear, this is something that's real and used by actual developers, as opposed to some theoretical gadget.

It shows the potential intelligence of this device to be somewhere between monkey and human. What does that mean? First, while it's not known how most of the brain works, the functionality of a few parts of the brain is known really well, for example the first stage of the visual cortex. It's then straightforward to estimate how many computations are required to replicate that functionality. The weight of that portion of the brain is also known, and it is assumed (this is the leap of faith) that the rest of the brain processes with approximately the same efficiency. Divide the weight of an organism's brain by the weight of that piece of visual cortex, multiply by the number of operations that piece requires, and voila!, you have an estimate of the total number of operations per second required for the entire brain of a given organism.
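As a back-of-envelope illustration, that scaling recipe can be written out as arithmetic. Every number below is a hypothetical placeholder, not a measured value:

```python
# Moravec-style estimate: scale a well-characterized brain region's compute
# rate up to the whole brain by weight. All inputs are illustrative only.

def estimate_brain_ops(part_ops_per_sec, part_weight_g, brain_weight_g):
    """Estimate whole-brain ops/sec from one well-understood region,
    assuming (the leap of faith) the rest of the brain computes with
    roughly the same efficiency per gram."""
    ops_per_gram = part_ops_per_sec / part_weight_g
    return ops_per_gram * brain_weight_g

# Hypothetical inputs: a 0.2 g slice of visual cortex doing 1e8 ops/sec,
# scaled to a 1,400 g human brain.
total = estimate_brain_ops(part_ops_per_sec=1e8, part_weight_g=0.2,
                           brain_weight_g=1400.0)
print(f"Estimated whole-brain rate: {total:.2e} ops/sec")
```

Swap in whatever region-level numbers you trust; the structure of the estimate stays the same.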

A reasonable reaction is, "yeah, sure, whatever, but without the appropriate software, how will this computer be intelligent at all, much less at a monkey or human level?" And that was my first reaction as well, but I've since concluded it was misguided. Just like nobody has to know how a human brain works for it to work just fine, nobody has to know how a neural net within a computer works in order for it to work just fine, and, in fact, that's exactly what's happening, and at a very rapid rate. A number of research groups are trying different structures and computation approaches and are steadily improving the functionality and accuracy of the neural nets without really understanding how they work! It's basically a trial-and-error evolutionary approach.

Using already-known neural nets, this new Xavier processor will be able to recognize objects in an image with a high degree of accuracy, and tell you which pixel belongs to which object, 100 times per second. Is that intelligence? Well, when a person does it, we think that's a form of intelligence, and since nobody can tell you how the neural net works, I don't think we can say whether or not it's intelligent. If it is intelligent, it's certainly an alien intelligence, but watching it operate, it looks to me as if it's intelligent.

For me personally, if something seems intelligent, then it is intelligent.

Friday, May 18, 2018

The Intelligent Dodder Weeder

I've been really busy lately at work building BIG machines. My latest creation is the Intelligent Dodder Weeder shown here ambling down the road between fields:



It's 40 feet wide when the wings are down and weighs about 20,000 pounds including the tractor. It finds and kills dodder weed in fields of safflower at a rate of about 20 acres an hour, replacing a crew of roughly 100 people.

With a network of 54 computers providing a total computational power of 75 trillion (75,000,000,000,000) operations per second (75 teraops), it processes 720 images per second coming from 36 cameras and identifies the dodder weed using a traditional machine vision algorithm coupled with a deep convolutional neural net recognition system. Wherever the weed is found, it's sprayed with an herbicide to kill it.
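A quick back-of-envelope from those figures gives a sense of the per-frame workload each camera and computer has to keep up with:

```python
# Throughput implied by the Weeder's specs as quoted above:
# 36 cameras, 720 images/sec total, 75 teraops of total compute.

cameras = 36
images_per_sec = 720
total_ops_per_sec = 75e12

fps_per_camera = images_per_sec / cameras           # frames per camera per second
ops_per_image = total_ops_per_sec / images_per_sec  # compute budget per frame

print(f"{fps_per_camera:.0f} fps per camera")
print(f"{ops_per_image:.2e} ops available per image")
```

That works out to 20 fps per camera, with roughly a hundred gigaops of compute budget available for each frame.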

At this moment in time, it may well be the most advanced mobile agricultural machine in the world in commercial operation. There are probably experimental machines that are at least as advanced, but our Weeder is operating 12 hours per day, 6 days per week in actual working conditions.

It was fun to design and develop this latest machine, but it was a huge amount of work to get it up and running.

Tuesday, March 20, 2018

The Precarious Future of Jobs

When someone quotes a dubious statistic in conversation, I often reply by interjecting the somewhat humorous, "Did you know that 47% of all statistics are made up on the spot?" I always use 47% as the made up number - it's a nice prime number and has a good ring to it in my opinion.

So I almost always chuckle when someone puts forth 47 percent as an actual statistic. In this case:
...a now-famous Oxford University analysis forecasted that 47 percent of all jobs are threatened in the United States [by robots and automation].
Yup, that definitely elicited a chuckle. Yet while the exact percentage of jobs that are threatened is of course completely unknown and the statistic is meaningless in any case without a timeframe associated with it (threatened by next week? Next year? A million years?), the concept is serious and perhaps deadly serious.

As a roboticist, I do see automation based on AI and increasingly intelligent and flexible computing accelerating. For example, autonomous vehicles alone could replace several million workers within ten years (maybe more, maybe less, who knows?).

I predict my company of 4 people will eliminate thousands or maybe even tens of thousands of jobs in agriculture over the next 5 years. And the problem is that as workers move to new jobs, we'll end up automating those too, making it difficult for them to ever get back to a stable job situation. That's potentially different than in the past. Sure, buggy whip workers lost their jobs once upon a time but then went on to work in the automobile industry which was stable for the rest of their careers. Maybe new industries and opportunities will develop as old jobs are automated away, but it looks to me like the destruction of jobs in Schumpeter's Creative Destruction process will far outpace the creation, especially the creation of lower to middle skilled occupations.

While automation could promise ever more plentiful availability of goods, it's possible that we'll face widespread poverty as more and more workers find it impossible to land stable and reasonably well-paying jobs. One straightforward way of mitigating that is the Universal Basic Income where all citizens get a fixed monthly stipend, no strings attached. That would at least keep people from starving in the streets. And if so much is automated and there's so much wealth being produced, the UBI would be easily affordable.

But what about work itself? Can humans live without working? Or is work part of what humans need to be fulfilled? Are idle hands the devil's workshop? Will opioid and other drug addiction become even more widespread? Will all these things lead to the collapse of civilization?

Or will we all become barbershop quartet singers (I'm the guy on the right) and live happily ever after?


Tuesday, November 14, 2017

Leaps in Artificial Intelligence

One of the holy grails of Artificial Intelligence research has been to understand how the human brain works. The idea is that if we knew how the brain works we could simulate its processing using computers and those computers would then be intelligent. Alas, remarkably little is known about the brain.

In the last few years, AI researchers have tried a different, but related approach. They've simulated neural network topologies in a computer that are sorta based on neural connectivity in a brain. The idea is that even though nobody knows how those neurons in the brain work when connected like that, perhaps those topologies will do something useful anyway.

And much to my utter amazement, that approach has made some really jaw dropping (well, my jaw, anyway) breakthroughs in a wide range of areas from vision to self-driving cars to emerging intelligence to self-directed learning and more. There's not time and space for me to get into all of these areas, but I'll touch on a couple.

The first is image recognition. A huge goal of AI has been to have a computer look at an image and tell you what's in it (for example, a car or sword or shark or poppy or fighter-jet or ...). And since a human can distinguish between hundreds of thousands of different types of objects, wouldn't it be nice if the computer could distinguish between that many different things as well?

As of 2010, that level of ability for image recognition by computers was a pipe dream, and nothing more. As of today, a mere 7 years later, computers are now really good at that. Not quite as good as humans, but rapidly closing in as shown by the following graph:



Some background is required for the graph above. In 2006, some researchers got the idea to create a database of 14,000,000+ images (they hope to have 100 million images eventually) of tens of thousands of different objects, each image labeled with the object(s) it contains and bounding boxes of each object. With this database, neural nets can be trained to recognize the objects. Then, when shown an arbitrary image, the neural net will identify the objects in it.

The database is called ImageNet and was first ready for use in 2010. A contest was created to see who, if anybody, could create and train neural nets to distinguish between the tens of thousands of objects in the database. In 2010, the results were dismal, with most contestants guessing right less than half the time. But by trial-and-error and building on the best successes (evolution?), each year the results got better, a lot better, to the point where if you show one of the better nets a reasonably clear picture (with a few other minor caveats), it will correctly identify the main object(s) in the image (again, out of tens of thousands of possible different objects) the vast majority of the time.

And anyone can download these trained nets and utilize them with open source software such as Google's TensorFlow. While it takes weeks and weeks of cloud computing to train these networks using 14,000,000 images, once trained, a typical desktop can recognize the objects in an image in a few tens of seconds, and in less than a second if it has a sufficiently powerful GPU (it turns out that graphics cards happen to be nearly optimal for processing neural nets).
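To give a flavor of how little effort "download and utilize" takes, here is a minimal sketch using one of the ImageNet-pretrained nets that ships with TensorFlow's Keras (the random array is a stand-in for loading a real photo):

```python
import numpy as np
import tensorflow as tf

# Load a network pre-trained on ImageNet (weights download automatically
# on first use).
model = tf.keras.applications.MobileNetV2(weights="imagenet")

# Stand-in for a real photo: a random 224x224 RGB image.
x = np.random.uniform(0, 255, (1, 224, 224, 3)).astype("float32")
x = tf.keras.applications.mobilenet_v2.preprocess_input(x)

# One forward pass yields a score for every ImageNet class; print the top 3.
preds = model.predict(x)
for _, label, score in tf.keras.applications.mobilenet_v2.decode_predictions(preds, top=3)[0]:
    print(label, f"{score:.3f}")
```

Replace the random array with a properly resized photo and the printed labels become the net's best guesses at the objects in the image.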

These image recognition nets are called "deep learning convolutional nets" and nobody really knows how they work, only that they do. Sorta like how we don't know how the human brain works - only that it does. Some modifications of these nets have enabled a lot of different applications to be addressed. For example, a while back, an AI beat the world's Go champion. Ho hum, chess had already fallen to computers, so not a big deal, right? But it got a little more interesting a few weeks ago:
A new paper published in Nature today describes how the artificially intelligent system that defeated Go grandmaster Lee Sedol in 2016 got its digital ass kicked by a new-and-improved version of itself. And it didn’t just lose by a little—it couldn’t even muster a single win after playing a hundred games. Incredibly, it took AlphaGo Zero (AGZ) just three days to train itself from scratch and acquire literally thousands of years of human Go knowledge simply by playing itself.
Self-learning artificial intelligence. Pretty nifty.

Many of these techniques (and many more) are used in self-driving cars. They will soon teach themselves to drive really well - "literally thousands of years of human" driving experience. Bigger nets will be able to incorporate millions of years of human driving experience. It may take years to train them, but once trained, they can be downloaded to all cars. Humans may be bested by AI in a wide range of applications in my children's lifetimes, not just in relatively trivial things like chess and Go (which only 10 years ago were not considered at all trivial).

I'll leave you with what I think is a very interesting video. I'm sure you've all seen faces morph from one person to another, but I think you'll find that the morphing is qualitatively different starting at the 1:50 mark on the video. All of those faces are simulated by the neural net which has been trained to "know" what a face is. The morphing from one face to another, even radically different faces in different poses, tends to stay pretty realistic throughout the transition. And the scene morphing, also completely simulated, maintains a surprisingly realistic rendition even when changing between radically different scenes, for example the bedrooms just after 4:00. Enjoy!
More information on the video is here.

Friday, September 15, 2017

What The Heck Is That?

Consider the gizmo in my hand below. It's called a vortex tube and is around 4 inches long and weighs maybe half a pound. I had never heard of these until recently. Had you?

There are no moving parts. You connect the inlet near my thumb to an ambient (room) temperature air supply. Really, really hot air comes out the end near my forefinger. Really, really cold air comes out the other end near the base of my palm. That little orange label says "CAUTION: Hot and cold surfaces" and it's not kidding. With an air supply of 8 cubic feet per minute at 100 psi, the hot end is over 100°F hotter than the input air temperature, the cold end is over 100°F colder than the input air temperature, and the tube can provide over 500 BTU per hour of cooling.
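As a sanity check on those quoted figures, a rough Q = m·cp·ΔT calculation lands in the same ballpark. The cold-end split is my assumption here, not something from the tube's spec:

```python
# Back-of-envelope cooling capacity of the vortex tube, imperial units.
# Assumption (mine): roughly half the inlet air exits the cold end.

air_density_lb_ft3 = 0.075   # air near room conditions
cp_btu_lb_f = 0.24           # specific heat of air, BTU/(lb*F)
inlet_cfm = 8.0              # air supply from the post
cold_fraction = 0.5          # assumed cold-end share of the flow
delta_t_f = 100.0            # cold end ~100F below inlet temperature

cold_flow_lb_hr = inlet_cfm * 60 * air_density_lb_ft3 * cold_fraction
cooling_btu_hr = cold_flow_lb_hr * cp_btu_lb_f * delta_t_f
print(f"~{cooling_btu_hr:.0f} BTU/hr of cooling")
```

That comes out to roughly 430 BTU/hr, consistent with the manufacturer's "over 500" once you account for the cold end being somewhat more than 100°F below the inlet.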

That's a pretty neat trick for something with no moving parts. Another neat trick is that until recently folks were still debating the physics behind how it works:
...for a long time the empirical studies made the vortex tube effect appear enigmatic and its explanation – a matter of debate.
In fact, the science wasn't totally settled until 2012:
This equation was published in 2012; it explains the fundamental operating principle of vortex tubes. The search for this explanation began in 1933 when the vortex tube was discovered and continued for more than 80 years.
So I don't feel bad that I'd never heard of it and had no idea how it worked. And I'll admit when I read the explanation that I still only have a vague notion of how it works.

Why did I discover it now? We have robotic machines that work in agricultural environments. Those machines have computers. We use computers and systems that can withstand temperatures up to about 105°F. 99.8% of the time, the ambient temperature is below that. Unfortunately, the other 0.2% of the time it gets hotter than that, the crops still need to be tended, and the machines fail or even die if they're run above 105°F. Yet for that 0.2% of the time, adding air conditioning to every single computer cabinet is expensive and bulky, and the added complexity makes the system less robust.

On the other hand, putting a vortex tube in each system isn't expensive, bulky, or complex. On those days when it's really hot, the grower can just attach an air supply from a compressor to the vortex tube and voila!, they can run our systems even when it's ridiculously hot. Most growers have compressors available, but even if they don't, it's straightforward to rent one on short notice. Problem solved!

Monday, February 16, 2015

Fear of Intelligence

I sat on a robotics panel last week that discussed the future of robotics. The audience's questions exposed the fact that at least some people are really scared of robotics and Artificial Intelligence. It seems that some of this renewed fear is due to the philosopher Nick Bostrom, who recently authored Superintelligence: Paths, Dangers, Strategies. Bostrom specializes in "existential risk" and I have a hunch that, just like everything tends to look like a nail when the only tool you have is a hammer, it's convenient for everything to look catastrophically dangerous when your specialty is existential risk. It certainly increases your likelihood of funding!

The basis for the fear is the advancement of machine intelligence coupled with a technology singularity. The following is a description of levels of machine intelligence:
AI Caliber 1) Artificial Narrow Intelligence (ANI): Sometimes referred to as Weak AI, Artificial Narrow Intelligence is AI that specializes in one area. There’s AI that can beat the world chess champion in chess, but that’s the only thing it does. Ask it to figure out a better way to store data on a hard drive, and it’ll look at you blankly. 
AI Caliber 2) Artificial General Intelligence (AGI): Sometimes referred to as Strong AI, or Human-Level AI, Artificial General Intelligence refers to a computer that is as smart as a human across the board—a machine that can perform any intellectual task that a human being can. Creating AGI is a much harder task than creating ANI, and we’re yet to do it. Professor Linda Gottfredson describes intelligence as “a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience.” AGI would be able to do all of those things as easily as you can.
AI Caliber 3) Artificial Superintelligence (ASI): Oxford philosopher and leading AI thinker Nick Bostrom defines superintelligence as “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.” Artificial Superintelligence ranges from a computer that’s just a little smarter than a human to one that’s trillions of times smarter—across the board. ASI is the reason the topic of AI is such a spicy meatball and why the words immortality and extinction will both appear in these posts multiple times.
The Technological Singularity is described as follows:
The technological singularity is the hypothesis that accelerating progress in technologies will cause a runaway effect wherein artificial intelligence will exceed human intellectual capacity and control, thus radically changing civilization in an event called the singularity.[1] Because the capabilities of such an intelligence may be impossible for a human to comprehend, the technological singularity is an occurrence beyond which events may become unpredictable, unfavorable, or even unfathomable.[2]
The concepts of varying levels of artificial intelligence and the singularity have been around for a long time, starting well before existential risk philosopher Bostrom was even born. I've had the opportunity to contemplate these concepts for decades while I've worked in technology, robotics and artificial intelligence, and I think these concepts are fundamentally flawed. They make for a good science fiction story and not much else. I was glad to find I'm not alone in this:
If this sounds absurd to you, you’re not alone. Critics such as the robotics pioneer Rodney Brooks say that people who fear a runaway AI misunderstand what computers are doing when we say they’re thinking or getting smart. From this perspective, the putative super intelligence Bostrom describes is far in the future and perhaps impossible.
While it would take volumes of highly detailed technical information for me to present a fully convincing argument, for now, I'd like to leave y'all with a couple of thoughts.

Consider the following words: computation, intelligence, experience, information/knowledge, decision, action.

  • Even infinite computation (which is kind of the basis of the singularity) doesn't inherently translate to infinite intelligence or even any real or general intelligence.
  • In a vacuum, even infinite intelligence is useless.
  • The frontiers of information/knowledge can't be expanded very much with intelligence alone - experience (hypotheses, experiments, the scientific method, etc.) is required no matter how intelligent something or someone is, and experience takes time, a long, long, long time, as any researcher, developer or thinker (apparently other than an existential risk philosopher) knows.
  • No matter how intelligent something is, it can't make decisions to take catastrophic actions based on currently unknown knowledge until it takes the time to gain experience to push the state of knowledge. The actions required to gain that experience will be observable and easily stoppable if necessary.
On the other hand, consider a nuclear-tipped cruise missile. It can perform some computation and can maneuver in its very narrowly intelligent way, has none of its own experience (it's a one-shot deal, after all), and has some information/knowledge in terms of maps; someone else made the decision to launch, but its action is quite devastating. 10,000 of them could destroy most of the advanced life on earth. When I was a child, we had air raid drills in school because we thought some crazy Soviet might do exactly that.

The point being that we're already more than intelligent enough to destroy ourselves via nukes, pathogens, etc.  The risk from super intelligent machines pales in comparison. Consider:

  • About 1% of humans are sociopaths and that translates to about 70,000,000 people worldwide. Given standard bell curves, some of those are likely to have IQs in the neighborhood of 200. If intelligence alone is a thing to fear, then it's too late unless we're willing to kill all the smart people, and I strongly suggest we don't do that.
  • Humans, using tools (including computers), have and will continue to have access to all the tools of annihilation that a super intelligence would have and some of us are downright evil already.
Part of the runaway AI fear is based on the concept of a single Artificial Super Intelligence emerging in a winner-takes-all scenario, where it redesigns and rebuilds itself so fast that nothing else will ever be able to outthink it and disable it, so we'd better hope it's beneficent.

But consider the saying: "Jack-of-all-trades, master of none." My view is that narrow, focused intelligence, sort of the idiot-savants of the AI world, in their narrow area, will outperform a super general intelligence, and enable us to use them as tools to keep super general intelligences, if any are ever created, in check.

There is no commercial reason to ever create a general intelligence. For example, at my company, our vision systems will soon surpass human vision systems, and watching our Robotic Pruner prune, it looks quite purposeful and intelligent, but there's no real intelligence there. Siri's "descendants" will far surpass the Turing Test in a couple of decades (or sooner or later), and will appear extremely intelligent, but will be just a very, very good verbal analysis and response AI and will have no general intelligence of any kind. C-3PO in Star Wars appears intelligent and we will be able to create a C-3PO eventually, but the real world version will have no real, general intelligence.

The illusion that many of us seem to have fallen for is that many behaviors that we associate with our own anthropomorphic intelligence are only possible if we create an entity with intelligence that somehow operates like a human's, or is orthogonal to the way human intelligence operates, but is similarly global and all encompassing. I strongly believe that view is mistaken and that it is just an illusion. Seemingly intelligent looking behaviors will emerge from massive computation and information interacting with a non-trivial environment, but it won't be any sort of conscious or real intelligence. And because of that, it won't be dangerous.

Human intelligence requires a human body with a circulatory system pumping hormones and responding to rhythms and movements and events and sensory input. I always chuckle when someone suggests encoding someone's brain (neurons & connections) into a computer. You know what you get if you do that? The person in a coma, which doesn't seem particularly useful to me.

I think intelligence, especially within this particular topic, is wildly overrated, and there's nothing to fear.

Wednesday, February 11, 2015

War of the Sexes: Part 10 - Karma?

In The End of Men: And the Rise of Women, Hanna Rosin created a metaphor for modern men and women:
Throughout my reporting, a certain imaginary comic book duo kept presenting themselves to me: Plastic Woman and Cardboard Man. Plastic Woman has during the last century performed superhuman feats of flexibility. She has gone from barely working at all to working only until she got married to working while married and then working with children, even babies. If a space opens up for her to make more money than her husband, she grabs it. If she is no longer required by ladylike standards to restrain her temper, she starts a brawl at the bar. If she can get away with staying unmarried and living as she pleases deep into her thirties, she will do that too. And if the era calls for sexual adventurousness, she is game. [...]
Cardboard Man, meanwhile, hardly changes at all. A century can go by and his lifestyle and ambitions remain largely the same. There are many professions that have gone from all-male to female, and almost none that have gone the other way. For most of the century men derived their sense of manliness from their work, or their role as head of the family. A 'coalminer' or 'rigger' used to be a complete identity, connecting a man to a long lineage of men.
Clearly the female gender is far superior with its "superhuman" flexibility and success while "centuries" can go by with men mired in the same muck. But wait! In the uncountable centuries prior to the last couple, where was that vaunted flexibility and success of the female gender? It seems like women were rather stuck in the mud with the same "ambitions" century after century as well, just like men.

Ah, but I'm sure it was the oppression of the evil patriarchy that kept "Plastic Woman" from launching into her meteoric rise all these millennia. After all, it's always the patriarchy's fault. Of course, then we have to wonder why the patriarchy suddenly became incompetent at oppression in the last couple of centuries.

To play a little with a famous saying, god created man and woman, but Sam Colt made them equal. When the armaments of an age are the broadsword and longbow, which require a lot of strength to use effectively, wielding weapons of war and defense is best left to the physically stronger sex. With the development of hand guns and rifles, a woman wielding a weapon became every bit as dangerous as most men. I don't think it's a coincidence that Plastic Woman seems to have emerged with the development of modern, light, and powerful weapons. The cost of oppression was suddenly much higher and the need for male defenders swinging broadswords was suddenly much lower.

Weapons technology was one technology behind the emergence of Plastic Woman, but virtually every technology has moved all aspects of life towards matching women's nature. I realize that only an evil patriarch like myself would even dare to suggest that women and men have different natures, but I'm the evil author of this post, so deal with it.

Natural attributes of men, such as physical strength and willingness to engage in physical and even mortal danger, have been rendered nearly useless by inventing machines and refining them to tame their potentially violent and dangerous or deadly force. Though there is debate on the issue, my opinion is that one would have to be deaf to not know that women are more verbal than men (I'm using up all of my words for at least three days just to write this post), and with the increasing importance of information and complex networks in all aspects of society, verbal competency has become ever more important, playing to women's strengths. Being naturally nurturing in nursing and other jobs in the growing service sector also moves the world towards women.

Therefore, it's not that Plastic Woman necessarily has superhuman flexibility, but rather the world was dumped in her lap and she could hardly help but flourish. On the other hand, Cardboard Man has pretty much had everything taken away from the sweet spot of his abilities. From this perspective, women haven't been flexible at all and while men may not have flexed enough to keep up with the stunningly rapid change of the last two centuries, they have flexed quite a bit. Other than a few things like female prostitution, there really aren't any jobs a woman can do that a man can't, and men have made at least some inroads into most existing careers.

Ms. Rosin can continue to gloat for her gender for a while longer. But what goes around comes back around eventually, and technology, which has so far destroyed mostly only men's livelihoods while creating new opportunities for women, is relentlessly marching towards eliminating women's work as well. Within decades, computers will learn to speak, and not just the rote responses you get on the phone or from Apple's Siri. They will learn to understand what they hear and respond with knowledgeable and seemingly empathetic responses. My estimate is that this strong AI will start to be well developed within two decades.

Along the way, computers, coupled with sensors and actuators, will become better doctors, nurses and therapists than humans; better administrators; better at customer service; and better at sales. They may eventually even be better at prostitution, though robot sex may not be considered prostitution, I suppose. At that point we'll get to see if Plastic Woman is really fantastically flexible or if women also are ejected unceremoniously from the workforce and end up sitting around watching soap operas all day.

Then Ms. Rosin can write a new book with the title "The Age of Spiritual Machines: When Computers Exceed Human Intelligence." Oh wait! Somebody already wrote that book.

Monday, July 15, 2013

Vision Robotics on the News

My company, Vision Robotics Corp., was on the San Diego Channel 10 6PM news this evening. If you're curious, visit http://www.10news.com/video and find the link for "Robots created by San Diego company" to see the segment (I haven't been able to figure out how to link directly to the video).

10news did a pretty good job - especially considering that they did everything (contacted us, shot the video, did the editing, and produced it) in a span of just over 6 hours.  The first three steps were all done by one guy.

The only noticeable mistake is that they call it a "Lettuce Trimmer" when it's actually a "Lettuce Thinner."

Thursday, February 07, 2013

As Long As We're Talking About Lettuce...

I might as well post about some recent handiwork by my company (Vision Robotics Corp.), a robotic lettuce thinner...

When growers plant lettuce, they plant a lettuce seed every 2 to 3 inches.  Ultimately, they want a lettuce plant every 10 to 12 inches.  The reason for the overplanting is that many of the lettuce seeds either don't come up at all, or don't survive the first couple of weeks.  Even so, growers end up with too many plants and need to thin the survivors.  Except for our new Robotic Thinner, the thinning is currently done by laborers with hoes.

Our Thinner uses cameras to detect the lettuce and decide which plants to keep.  It actuates sprayers which spray fertilizer on the lettuce plants to be removed.  It turns out that fertilizer kills the plants it's sprayed on directly, then is absorbed into the soil and fertilizes the "keepers."  The Thinner is mounted on a tractor and uses the tractor's tanks, pump system, and electrical power.
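A hypothetical sketch of the core thinning decision (not our actual production code, and the spacing parameter name is mine): greedily keep the first detected plant in a row and every later one at least the target spacing downstream, and spray the rest.

```python
# Greedy keeper selection for a single row of detected lettuce plants.

def choose_keepers(positions_in, min_spacing=10.0):
    """Given detected plant positions along a row (inches), return
    (keepers, sprayed): keep a plant only if it's at least min_spacing
    past the last kept plant; everything else gets the fertilizer spray."""
    keepers, sprayed = [], []
    last_kept = None
    for pos in sorted(positions_in):
        if last_kept is None or pos - last_kept >= min_spacing:
            keepers.append(pos)
            last_kept = pos
        else:
            sprayed.append(pos)
    return keepers, sprayed

# Seeds every ~2-3 inches, with gaps where seeds didn't come up.
plants = [0, 2.5, 5, 7.5, 12, 14, 22, 24.5, 27]
keep, spray = choose_keepers(plants)
print("keep: ", keep)   # [0, 12, 22]
print("spray:", spray)  # [2.5, 5, 7.5, 14, 24.5, 27]
```

The real decision also has to cope with detection noise and misses, but the keep-or-spray logic reduces to something of roughly this shape.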

Here is a link to a 30 second youtube video of it in operation:
http://www.youtube.com/watch?v=xlw_OBpwtFs
In the video, as the tractor goes by, you can see the Thinner mounted to the back and the banding of the fertilizer.  The dark areas are where the fertilizer is sprayed to thin the lettuce, and the light bands are where the "keepers" are located.

The pictures below show the effects of thinning the lettuce.  In addition to costing about a third as much as manual thinning with hoes, notice that the ground is left undisturbed by the Thinner.  According to the growers using it, this allows the "keepers" to be more consistent in size and health, since their roots are undisturbed.  In addition, fewer weeds are able to get started in the unbroken ground.

In the "4 days after thinning" shot, if you look closely, you can see little brown dead lettuces that have been thinned by fertilizer in between the remaining green keepers.  In the "4-5 weeks old" shot, the lettuce has been cultivated (weeded) so at this point the ground is disturbed but the lettuce is way ahead of any possible future weeds.

Each Thinner takes the place of roughly 30 to 50 people with hoes.  The world (well, North America) only needs about 150 Robotic Thinners to thin all of its lettuce, so it's a pretty small market.  That will displace about 5,000 laborers, but it's really terrible work and the growers are having trouble finding people willing to do it anymore.

It's been a fun project.  I know a LOT about growing and processing lettuce now.

Monday, May 21, 2012

Intelligence: Muscles of Thought

When I watch a hand pick a piece of fruit, I marvel at the intelligence exhibited.  It doesn't matter if it's a human hand or the hand of another primate.  I find it exquisite to watch the coordination of countless neurons carefully controlling a huge number of individual muscle fibers within the numerous muscle groups of the fingers purposely grabbing the fruit, exerting just enough force to remove it from the tree, but not so much force as to crush it.  The movement is purposeful, controlled, and coordinated.  In other words, it's intelligent.

E-mails are dispatches without the physical form of mail. Somewhat analogously, e-motions are dispatches or signals without the physical motion, and in fact emotions are often coupled with actual motion (fear->fight or flight, happiness->smiling, anger->violence, etc.).  As the brain processes get more and more decoupled from motion, those processes are less emotional and more what we think of as thoughts.  Even these thoughts are generally coupled, at least loosely, with emotion and sometimes motion.

When I look at the numerous theories (all unproven) of how the brain is fundamentally structured, I personally find the analogy of the brain being the muscle of thought the most compelling:
...that cognition is a phylogenetic outgrowth of movement and that cognition utilizes the same neural circuitry that was originally developed for movement.

Movement relies on the deliberate, smooth, properly sequenced and coordinated, graded, contractions of selected ensembles of discrete muscles. Therefore, the neural circuitry of movement was specialized for this purpose. Soon, a new design possibility emerged: the elaborate neuronal machinery of movement control could be applied to brain tissue itself. In particular, discrete brain structures, modules, emerged that could be controlled exactly like individual muscles. ... By manipulating these modules in properly coordinated 'movements' (thought processes), valuable information processing (cognition) could be carried out – thereby further enhancing animal competitive success and diversity. 
From this point of view, intelligence in thinking is little different from intelligence in controlling motion.  In other words, it's the ability to coordinate the actions of different muscle groups, but in this case, without the muscles.

Wednesday, April 01, 2009

Robots in Action

Some of you may have noticed (or not) that posting was pretty sparse (non-existent) earlier this year. I was working 'round-the-clock on a robotic grape vine pruner, which we demonstrated in mid-March in vineyards in Lodi, California. The following video shows the pruner and the demo and describes the technology. It's been the most fun and technically interesting project of my career.



Update: A common question seems to be, "So just what is this thing?"

The task at hand is pruning grape vines. After the grapes are harvested and the grape vines drop their leaves, the vines need to be pruned during the winter to ensure the optimal balance between maximizing fruit yield and obtaining optimum fruit quality during the next spring and summer. This is currently done by hand and is by far the most costly part of growing grapes.

The pruning rules vary between growers and varieties, but the general approach is to leave approximately eight Canes (the vertical shoots) on each Cordon (the horizontal portion of the vine). All the other Canes are to be removed as close as possible to the Cordon and the Canes that are kept should be pruned a bit above the 2nd bud.

The operational concept of the pruner is to use multiple sets of stereo cameras to collect images of the vine and then process those images to create a detailed model of the vine. The pruning rules are applied to that model and a plan is generated to make the cuts with the robot arms. There are also cameras on the arms that guide them in real time to make the necessary intricate cuts.
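Applying the pruning rules to the vine model can be sketched in a few lines. Everything here (the scoring field, the units, the half-inch offset above the second bud) is a hypothetical simplification of my own for illustration, not our actual planner:

```python
def plan_cuts(cordon_canes, keep_per_cordon=8):
    """Apply simplified pruning rules to the canes on one cordon.

    cordon_canes: list of canes, each a dict with a 'score' (how desirable
    the cane is to keep) and 'bud_positions' (distances of each bud along
    the cane, in inches from the cordon).
    Returns a list of planned cuts: (cane_index, cut_position).
    """
    # Rank canes and keep the best ones (roughly eight per cordon).
    ranked = sorted(range(len(cordon_canes)),
                    key=lambda i: cordon_canes[i]['score'], reverse=True)
    keep = set(ranked[:keep_per_cordon])

    cuts = []
    for i, cane in enumerate(cordon_canes):
        if i in keep:
            # Keepers are pruned a bit above the 2nd bud.
            cuts.append((i, cane['bud_positions'][1] + 0.5))
        else:
            # All other canes are removed as close to the cordon as possible.
            cuts.append((i, 0.0))
    return cuts

canes = [{'score': 0.9, 'bud_positions': [1.0, 2.0, 3.0]},
         {'score': 0.2, 'bud_positions': [1.0, 2.0]},
         {'score': 0.7, 'bud_positions': [1.5, 2.5]}]
cut_plan = plan_cuts(canes, keep_per_cordon=2)
```

With two keepers allowed, the plan keeps canes 0 and 2 (cut just above their second buds) and removes cane 1 at the cordon.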

This one-of-a-kind prototype was deployed in the field for this demo only 4 months after it was built. The production unit will be much faster, gentler (to the vine), and more accurate. But the prototype in the video is a pretty good start and certainly demonstrated the concept to the growers.

Monday, October 15, 2007

Confabulation Theory

How do we think? How does human cognition work? How can intelligence emanate from a bunch of cells seemingly randomly connected together? Even if the cells (neurons) and their interconnections are carefully arranged, how can that possibly be the basis for thinking?

Dr. Robert Hecht-Nielsen1 is certain he has the answers to all these questions. These answers are contained in his recently released book, Confabulation Theory: The Mechanism of Thought.

There is absolutely no consensus in the neuroscience and artificial intelligence communities supporting Robert's theories. However, seeking consensus is not one of Robert's strong points. For example, he was a leading proponent of artificial neural networks (as they were called at the time) for various applications even though they had fallen into disfavor among most researchers (and funders) after Minsky proved that the popular neural net configuration of the time (the "perceptron") was incapable of something as simple as an "exclusive or" operation:
This was followed in 1962 by the perceptron model, devised by Rosenblatt, which generated much interest because of its ability to solve some simple pattern classification problems. This interest started to fade in 1969 when Minsky and Papert [1969] provided mathematical proofs of the limitations of the perceptron and pointed out its weakness in computation. In particular, it is incapable of solving the classic exclusive-or (XOR) problem...
In my opinion, Minsky's main motivation for "proving" the ineffectiveness of neural networks was that his expertise lay in other areas of artificial intelligence research and he wanted to stifle funding of neural net research so that his own funding would increase.
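The XOR limitation is easy to see concretely. The sketch below (my own illustration, not Minsky's actual proof) hand-wires threshold units: no single perceptron can separate XOR's outputs because they aren't linearly separable, but adding one hidden layer computes it as AND(OR(a, b), NAND(a, b)):

```python
def step(x):
    # Threshold activation: fire if the weighted sum is positive.
    return 1 if x > 0 else 0

def perceptron(weights, bias, inputs):
    # A single threshold unit: step(w . x + b).
    return step(sum(w * x for w, x in zip(weights, inputs)) + bias)

def xor_net(a, b):
    """Two-layer network for XOR; weights chosen by hand."""
    h_or = perceptron([1, 1], -0.5, [a, b])     # hidden unit: OR
    h_nand = perceptron([-1, -1], 1.5, [a, b])  # hidden unit: NAND
    return perceptron([1, 1], -1.5, [h_or, h_nand])  # output: AND

truth_table = [xor_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]]
# → [0, 1, 1, 0]
```

The key point is that the output unit alone, fed the raw inputs, has no choice of weights that produces this truth table; the hidden layer is what makes it possible.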

Robert persevered against the prevailing opinion and founded HNC Software in 1986 to develop neural net applications. By the late 1990s, HNC Software had gone public and achieved a market valuation of over a billion dollars because of its neural net applications in many areas such as credit card fraud detection (not to mention good timing relative to the Internet bubble). HNC Software later merged with Fair Isaac to continue as a market leader in these and other areas.

What I've learned is that while Robert's theories are often far-fetched and his certitude regarding those theories is sometimes unwarranted, it's still a bad idea to bet against him.

So what is the Confabulation Theory of cognition? Here's a fairly concise description for the layperson:

Hecht-Nielsen proposed a theory based on four key elements to account for all aspects of cognition. The first hypothesizes that the human cerebral cortex is divided into about 4,000 ‘modules,’ each of which is responsible for describing one ‘attribute’ that an object of the mental universe may possess. An object attribute is described by activating the one ‘symbol’ that is the most apt for that object. (Each module has thousands of symbols, each represented by a collection of about 60 neurons.) “Symbols represent the stable ‘terms of reference’ for describing the objects of the mental universe which clearly must exist if knowledge is to be accumulated over decades,” argued Hecht-Nielsen. “Past theories have avoided such ‘hard’ and ‘discrete’ terms of reference because they seem – but are not – at odds with the widely assumed ‘mushy’ or ‘fuzzy’ qualities of neuronal stimulus response.”

The theory’s second key element is also a hypothesis: that each item of cognitive knowledge takes the form of axonal links between pairs of symbols. These ‘knowledge links,’ the theory posits, are implemented using a two-stage version of the “synfire chain” structure hypothesized by Israeli neuroscientist Moshe Abeles. According to Hecht-Nielsen’s theory, the average human possesses billions of these links – a claim which, if true, would make humans enormously smarter than currently believed by philosophers, psychologists, and educators.

The third foundation of Hecht-Nielsen’s theory is that thinking is divided up into simple, discrete winner-take-all competitions called ‘confabulations.’ Each confabulation is carried out by a cortical module when it receives an externally-supplied ‘thought command’ input. The winner is whichever symbol of the module happens to have the highest level of excitation supplied to the symbols of the module by incoming knowledge links. “The winning symbol is the conclusion of the confabulation, and this simple confabulation operation is believed to happen in less than a tenth of a second,” said Hecht-Nielsen. “It is a widely applicable, general purpose, decision-making procedure, and the theory argues that all aspects of cognition can be carried out by means of a few tens of these confabulation operations per second, many of them in parallel.”

The final element of the UCSD neuroscientist’s Confabulation Theory hypothesizes that every time a confabulation reaches a conclusion, a ‘behavior’ (a set of thought processes and/or movement processes) is instantly launched. “This explains how humans seem to launch many behaviors during each waking moment,” said Hecht-Nielsen. “In other words, every conclusion reached by a confabulation represents a changed state of the mental world, and a behavioral response associated with that changed state is instantly launched.” The associations between each conclusion symbol and its ‘action commands’ are termed ‘skill knowledge,’ which decays rapidly if unused because it is learned by repeated practice trials (since skill learning is managed by a deeply buried part of the brain called the basal ganglia). Cognitive knowledge, on the other hand, is long lasting because knowledge links form in response to meaningful co-occurrence of the involved symbols (an astoundingly prescient idea first advanced by Canadian neuroscientist Donald Hebb over 50 years ago).

Using these elements, explains Hecht-Nielsen, brains can apply millions of relevant knowledge items in parallel to arrive at an optimal conclusion – in less than a tenth of a second. “Confabulation is an alien kind of information processing with no analogue in today’s computer science,” he noted. “Tens of these confabulation operations happen in our minds every waking second, with each conclusion reached launching new behaviors, and this occurs all day long.”
That's all there is to it!
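To make the winner-take-all operation concrete, here is a toy sketch of a single confabulation. It's my own drastic simplification (symbols as strings, hand-picked link strengths), not Hecht-Nielsen's actual architecture:

```python
def confabulate(module_symbols, knowledge_links, active_symbols):
    """One winner-take-all 'confabulation' in a toy cortical module.

    module_symbols: the candidate symbols in this module.
    knowledge_links: dict mapping (source_symbol, target_symbol) -> strength.
    active_symbols: symbols already concluded in other modules.
    Returns the symbol with the highest total excitation (the 'conclusion').
    """
    def excitation(sym):
        # Sum the strengths of all incoming knowledge links to this symbol.
        return sum(knowledge_links.get((src, sym), 0.0)
                   for src in active_symbols)
    return max(module_symbols, key=excitation)

links = {('storm', 'evacuate'): 0.9, ('storm', 'picnic'): 0.1,
         ('florida', 'evacuate'): 0.6, ('florida', 'beach'): 0.5}
conclusion = confabulate(['evacuate', 'picnic', 'beach'],
                         links, ['storm', 'florida'])
# → 'evacuate'
```

With 'storm' and 'florida' active, 'evacuate' receives the most excitation (0.9 + 0.6) and wins the competition.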

Robert has done a number of very impressive computer simulations to back up his theory. My favorite is the "Plausible Next Sentence" experiment. In this experiment, two sentences are presented to a confabulation architecture consisting of the equivalent of a few hundred million neurons with a potential of around a trillion connections between them. The simulation then confabulates a "Plausible Next Sentence". Here's one example:

Input Sentences: Michelle strengthened from a Category 2 to a Category 4 storm Saturday, with winds reaching 140 mph, but it was expected to weaken before it reached Florida. The storm or its effects could strike the Keys and South Florida tonight or early Monday, said Krissy Williams, a meteorologist at the National Hurricane Center in Miami.

Confabulation Result: Forecasters warned residents to evacuate their homes as a precaution.
The confabulation result is a very reasonable and plausible next sentence. It clearly makes sense within the context of the previous sentences showing that the confabulation "understood" those input sentences. The confabulation result is completely novel - the simulation had never previously seen the result sentence. Other than "to" and "a", not a single word in the result is present in the input sentences. Yet it "knows" that those words make "sense" given the current context.

Also, note that the grammar, capitalization, and punctuation are perfect. This is particularly surprising because in this simulation architecture there are no:
  1. Algorithms
  2. Software routines (beyond the simulations of the functional elements)
  3. Rules
  4. Ontologies
  5. Priors
  6. Bayesian networks
  7. Parsers
From this research, grammar, including capitalization and punctuation, seems to be an emergent property of language comprehension. No effort or mechanism was included to explicitly understand grammar in order to output the resulting sentence.

All knowledge contained within this system was derived from "reading" a corpus containing several billion words representing hundreds of millions of sentences contained in numerous novels, magazine articles, and other reference materials. In this case, "reading" consisted of translating each word and punctuation mark into a unique symbol number (for example, "the" might be symbol number 43,219) and strengthening the simulated axonal connections between symbols in various modules in the confabulation architecture as described above. It took the simulation several weeks to "read" the corpus.
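That "reading" process can be sketched as follows. This is my own toy approximation (tokenizing on whitespace, strengthening links by within-sentence co-occurrence), not the actual training procedure:

```python
from collections import defaultdict
from itertools import combinations

def read_corpus(sentences):
    """'Read' a corpus: assign each token a unique symbol number and
    strengthen links between symbols that co-occur in the same sentence
    (a crude Hebbian stand-in for the axonal strengthening described above).
    """
    symbol_of = {}
    link_strength = defaultdict(float)
    for sentence in sentences:
        tokens = sentence.lower().split()
        for t in tokens:
            symbol_of.setdefault(t, len(symbol_of))  # next free symbol number
        # Strengthen the link between every pair of co-occurring symbols.
        for a, b in combinations(sorted(set(tokens)), 2):
            link_strength[(symbol_of[a], symbol_of[b])] += 1.0
    return symbol_of, link_strength

symbols, strengths = read_corpus(["the storm hit florida",
                                  "the storm weakened"])
```

After reading these two sentences, the link between the symbols for "storm" and "the" is twice as strong as any other, because they co-occurred twice.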

The confabulator needed to "know" an impressive amount for the above example. It "knows" that there are homes in "South Florida" or the "Keys". It "knows" that it could be a good idea to evacuate in the case of storms like this. However, it "knows" that it's only a "precaution" because the storm is "expected" to "weaken". It "knows" that "forecasters" are probably involved and that "authorities" will be the ones to suggest the "evacuation".

In summary, the response was creative, knowledgeable, and intelligent. Anyone below a tenth grade reading comprehension level would be hard pressed to do as well and anyone, regardless of level, would be challenged to do materially better.

This is, by far, the most advanced example of a system based on a brain-like architecture. This is the only example I've seen of creative and intelligent language skills emerging from a massive number of simple connected symbolic elements. In fact, it's the only example I've seen of this level of language understanding by any computer ever. To me, this seems like the beginning of true artificial intelligence.

Here are several more examples from the Plausible Next Sentence experiment. For sure, some of the results are better than others, but in each case, the sentence is plausible and reasonably intelligent. If you take time to think about it, each result shows a remarkable amount of knowledge about the context and about the world given the vast diversity of topics.
Input Sentences: Several other centenarians at Maria Manor had talked about trying to live until 2000, but only Wegner made it. Her niece said that Wegner had always been a character – former glove model, buyer for Macy's, owner of Lydia's Smart Gifts downtown during the 1950s and '60s – and that she was determined to see 2000.

Confabulation Result: She was born in the Bronx Borough of New York City.
Input Sentences: A total of 22 defendants were convicted after the five-month trial of possessing explosives and plotting terrorist acts, but all were acquitted on charges that they were linked to the Al Qaeda terrorist network. Jordanian authorities now have a second chance on the Hijazi case.

Confabulation Result: The defendants are accused of conspiring with the outlawed rebel group.

Input Sentences: The doctrine is frank about Russia's economic weaknesses, calling for efforts to strengthen the economy in order for the country to remain a major power. It acknowledges that it is in Russia's interest to maintain its economic links to the outside world and there is no suggestion that it intends to abandon free market principles.

Confabulation Result: President Boris Yeltsin has expressed his willingness to compromise.

Input Sentences: Investigators say one man who got his license through a fixed test was Ricardo Guzman, the driver of a truck involved in a 1994 wreck in Wisconsin that killed six children in a burning minivan. Prosecutors say Bauer, now retired, hastily shut down the probe of the accident and blocked other investigations that might have embarrassed Ryan.

Confabulation Result: The driver fled the scene after the collision.

Input Sentences: The shortage has been attributed to rapid expansion of the prison system, low pay, a booming economy that makes the prospect of spending the day guarding convicts less attractive, and the risks of dealing with inmates who seem to be getting meaner and more violent. Prison officials are scrambling to keep penitentiaries staffed, recruiting at schools and from the Internet.

Confabulation Result: Prison officials are still debating what they have to do.

Input Sentences: Outside investigators announced the conclusions Tuesday as NASA's top scientist confirmed that the agency will cancel plans to launch a robot spacecraft in 2001 on a mission to land on Mars and indefinitely postpone all future launches to Mars, with one exception: a 2001 mission. With only its aging Mars Global Surveyor in orbit around Mars, the agency is reassessing its entire approach to the exploration of the planet after losing all four of its spacecraft bound for Mars last year – a package totaling $360 million.

Confabulation Result: Mars Global Surveyor will be mapping out the planet.

Input Sentences: Seeing us in a desperate situation, the Lahore airport authorities switched on the runway lights and allowed us to land with barely one to two minutes of fuel left in the aircraft, he said. At Lahore, Pakistani authorities denied Saran's request to accept wounded passengers and women and children, but they refueled the plane.

Confabulation Result: Airport authorities said they were not consulted beforehand.

Input Sentences: People protesting the INS decision gathered today in front of the agency's offices in Miami and the home of relatives who have cared for the boy since he was rescued on Thanksgiving Day. The boy was found clinging to an inner tube at sea after his mother and nine others died when their boat sank during their effort to leave Cuba for Florida.

Confabulation Result: Elian's mother and her grandparents perished in concentration camps.

Input Sentences: But the constant air and artillery attacks that precede the advance of Russian troops have left civilians trapped in southern mountain villages, afraid to venture under the bombs and shells raining on the roads, Chechen officials and civilians said. Residents of the capital Grozny who had fled the city in hopes of escaping to Georgia, which borders Chechnya to the south, have been stuck in the villages of Itum-Kale, 50 miles south of Grozny, and Shatoi, 35 miles south of Grozny.

Confabulation Result: Russian forces pounded the strongholds in the breakaway republic.

Input Sentences: The National Corn Growers Association says Gore is likely to have an ear of corn following him too if EPA sides with California officials, who oppose using ethanol. Ten days before the Iowa caucuses, Gore was more than 20 points ahead of Bradley in various Iowa presidential polls.

Confabulation Result: Gore's aides said they would not have any problems.

Input Sentences: The incident threatens relations between the Americans and Kosovo civilians, whom the peacekeepers were sent to protect after the 78-day NATO bombing campaign. We don't want them here to give us security if they are going to do this, said Muharram Samakova, a neighbor of the girl's family.

Confabulation Result: NATO has struck a military airfield near Pale.

Input Sentences: Now, I must admit that I'm not so sure the Palestinians really wanted to reach a framework agreement, Eran said Tuesday. Eran wondered aloud whether the Palestinian strategy might be to negotiate as much land as possible in the remaining transfers, then declare statehood unilaterally – as the Palestinians have threatened to do before when talks bog down.

Confabulation Result: Netanyahu said the Palestinians would be barred from jobs in Israel.
Ultimately, as impressed as I am with this research, I can't recommend Robert's book. It's a mish-mash of presentation slides and Robert's research papers which, since research is incremental, are rather repetitive. Also, links to all the papers and presentations are available for free from Robert's website if you're interested in more of the details.

Robert addresses why his theory hadn't been already discovered and embraced. Here's his answer from the book:
Since the mathematics of confabulation is simple, an obvious question is "Why wasn't confabulation theory discovered long ago?" A key reason is a decades-long intellectual constipation brought about by what might be called the "Bayesian religion."
Well, Robert, tell us how you really feel about other researchers! Robert's definitely not the most political or consensus oriented guy that ever lived, but he's quite a confabulator. And quite a humorist too (though not intentionally) - I laughed out loud when I read the above quote.

---

1Full disclosure: Robert is a friend of mine, and yes, I am very biased regarding this particular topic so the reader should significantly discount everything in this post.

Tuesday, August 14, 2007

The Future of Autonomous Automobiles

Cars will, one day, drive themselves.

The technology is not an issue. Between the DARPA Grand and Urban Challenges, the ability of an automobile to get about safely on its own will surpass that of a human driver in the next few years. Actually, relative to some drivers I've known, robotic cars are already far better. Add another five to ten years beyond that to make the technology cheap (less than $1,000 per vehicle) and autonomous automobiles will be ready to roll by 2020 (more or less).

But will cars drive themselves by 2020? Some might, but I doubt that very many will. There are many cultural issues that will stymie the adoption of robotic vehicles.

The biggest problem is that even a car that has perfect software will still occasionally get in an accident. Its frequency of accidents will be far below that of an average human, but it will still get in accidents. Driving on roads is sheer chaos. Sensors will stop working, joggers will jump in front of the cars, bicyclists will bound on by, meters will malfunction, streets will be unexpectedly slippery, etc. It's not clear how blame for accidents involving robotic cars will be assigned and how the damage will be paid for (I've given one humorous example here). My observations lead me to believe it will be many decades, perhaps even centuries before these critical issues are resolved. If every time someone is killed in an autonomous automobile accident (and it will happen), the software company is bankrupted via lawsuits, robotic vehicles will never gain any traction. That would be true even if they reduced automobile accidents by a factor of ten overall.

Ultimately, I think demand for such vehicles will be so strong that we'll figure out the legal and cultural aspects. The military will drive the technology whether or not there ever are non-military robotic cars. After all, the military uses far more dangerous weapons and people are killed all the time. For the military, safer and higher performance are better, even if people still die.

The elderly will increasingly need cars that drive themselves. Many states are now taking away licenses of older drivers who are likely to be high risk (for example, because they can't see). But taking away licenses has a huge cost of its own. It consigns the elderly to their houses, making it very difficult for them to get out and receive the stimulation required to maintain physical and mental health. It also makes it difficult for them to get groceries, buy clothes, and otherwise take care of themselves, leading to earlier transfer of these citizens into expensive assisted living situations.

For the elderly, cars that can drive themselves are the perfect solution. They needn't endanger themselves and others, yet they still would have the mobility they need. In fact, their mobility will likely increase, since many elderly drivers avoid going out at night because of vision issues.

So that's where I see robotic cars getting their toehold for non-military applications. The clout of the elderly voting bloc (and their children) will force the legislation required to enable the elderly to have such vehicles. I'm hoping by the time I'm old enough to need a car to drive me (approximately 2040), that we'll have gotten through all of the necessary societal hurdles.

Once this happens, I predict the flood gates will open. Taxis will drive themselves, greatly reducing cab fares. In fact, I predict that cab fares will be so greatly reduced that more and more people will take taxis everywhere and won't even bother owning a car. The autonomous taxis will be so smart that there will always be one of just the right size available for you and the group you're with, right when and where you need it. Eventually, almost nobody will own a car, since a car sitting in your garage is a great deal of capital equipment sitting idle.

Cars will talk to each other (electronically). As a result, they'll be able to drive much closer together, which will enable far more of them to fit on existing roads even while traveling at much higher speeds. Most cars will have one or two seats, enabling even more cars to fit on the roads. Fuel efficiency will increase further since the cars will "draft" off of each other. It will be possible to fit approximately five times as many cars on a typical freeway.

I see all of this happening within twenty years of when cars begin driving the elderly about. The first step is the hardest, the rest will happen very quickly.

Friday, August 03, 2007

Autonomous Automobiles

The DARPA Grand Challenge was a 132-mile race for autonomous (robotic) vehicles, sponsored by DARPA. The winner (which turned out to be a team from Stanford) received a $2 million prize.

I recently learned from a colleague on one of the teams that entered a robotic car in the competition that possibly the first collision between a robotic car and a private vehicle has already occurred. They were testing the robot in a parking lot where somebody had mistakenly left his van, and the robot collided with the van.

Apparently, the interaction between the insurance adjuster ("Adjuster") assigned to the case and the owner of the van ("Owner") didn't go very smoothly. Though the exact conversation wasn't recorded, I imagine it might've gone something like the following:

Adjuster: Who was driving the car that hit your van?
Owner: Um, er, well, nobody.
Adjuster: Nobody was driving? Was there anybody in the car?
Owner: Uh - no.
Adjuster: Oh, is this one of those cases where somebody forgot to put on the parking brake...
Owner: Yeah, that's it, the parking brake definitely was not on!
Adjuster: ...and forgot to leave it in gear...
Owner: Um, no, er, well, the car was in gear.
Adjuster: Then why was it moving?
Owner: Well, er, because, um, the engine was running.
Adjuster [looking confused]: The car was on?
Owner: Um, well, yeah.
Adjuster: And it was in gear?
Owner: Yeah, um, yeah.
Adjuster: And nobody was in it?
Owner: Yeah, nobody was in it.
Adjuster: And it rolled across the parking lot and hit your van?
Owner: Um, yeah, that's right.
Adjuster: So did someone start it, put it in gear and jump out?
Owner: No, no, nothing like that.
Adjuster: Okay, so give me a hint. How did this car come to be rolling across the parking lot, engine on, in gear, with no driver?
Owner: Well, er, they, um, asked it to do that.
Adjuster: Pardon?
Owner: They, er, asked the car to drive around the parking lot.
Adjuster: They asked the car to drive around the parking lot.
Owner: Yes!
Adjuster: I'm not following. Who might 'they' be?
Owner: You know, the people who own the car.
Adjuster: And did 'they' talk to the car and say, "Hey car, drive around the parking lot"?
Owner: Er, no, they sent it a message.
Adjuster: Like an email?
Owner: Well, kinda like an email, I suppose.
Adjuster: And the message said, "Hey car, drive around the parking lot"?
Owner: Well, um, it sorta did say that, yeah.
Adjuster: Why didn't the message also say something like, "Hey car, don't hit that van over there."
Owner: It's supposed to avoid other vehicles without being told.
Adjuster: So then why did it hit your van?
Owner: I think it just didn't see my van.
Adjuster: See it? The car has eyes?
Owner: Oh yeah, of course! It has lots of senses.
Adjuster: Well, why didn't it see your van?
Owner: It made a mistake.
Adjuster: The car made a mistake.
Owner: Yes, of course, it was an accident, it didn't hit my van on purpose.
Adjuster: And how long have you been under the impression that cars can be asked to drive around, see, think, and "make mistakes"?
Owner: Oh, well, for a few years, I guess.
Adjuster: I think you should see a psychiatrist. I know a good one.

Well, maybe it didn't go quite that badly, but I guess there was quite a problem categorizing the accident since there isn't a category for autonomous vehicle collisions. At least not yet.

In a future post, I'll present my predictions for when cars that drive themselves will be available. I'll argue that the technology will be ready in the next ten years or so, but the legal, social, and cultural adjustments could take many decades.

Thursday, June 21, 2007

Agricultural Robots

Update: It was great to make Wired magazine, but now Instapundit has linked to the article too. Now Vision Robotics has really hit the big time!

In case you're curious (how could you not be?), a recent Wired Magazine article (Farms Fund Robots to Replace Migrant Fruit Pickers) describes some of my company's handiwork in the area of robots for agriculture. As I write this, the article actually headlines on the main www.wired.com page.

The title is rather silly. The farms aren't funding the robots (which don't exist yet); the farms are funding us (Vision Robotics). The primary reason for the funding is to get their fruit picked, not to replace the migrant workers. Other than that, for a random magazine article, it's surprisingly accurate.

In the past, I've almost never posted about robotics and artificial intelligence, my main two areas of expertise. After all, why write about something you know about? I find it much more fun to write about topics about which I am clueless (or semi-clueless).

That's going to change soon. I'm about to start a series of essays (next month) on how to use Artificial Intelligence techniques to make millions in the stock market. No really! I'm not kidding! You'll see!