In the last few years, AI researchers have tried a different but related approach. They've simulated neural network topologies in a computer that are loosely based on neural connectivity in a brain. The idea is that even though nobody knows how the neurons in a brain work when connected like that, perhaps those topologies will do something useful anyway.
And much to my utter amazement, that approach has made some really jaw-dropping (well, my jaw, anyway) breakthroughs in a wide range of areas, from vision to self-driving cars to emergent intelligence to self-directed learning and more. There isn't time or space for me to get into all of these areas, but I'll touch on a couple.
The first is image recognition. A huge goal of AI has been to have a computer look at an image and tell you what's in it (for example, a car or sword or shark or poppy or fighter jet or ...). And since a human can distinguish between hundreds of thousands of different types of objects, wouldn't it be nice if the computer could distinguish between that many different things as well?
As of 2010, that level of image-recognition ability in computers was nothing more than a pipe dream. As of today, a mere 7 years later, computers are really good at it. Not quite as good as humans, but rapidly closing in, as shown by the following graph:
Some background is required for the graph above. In 2006, some researchers got the idea to create a database of 14,000,000+ images (they hope to have 100 million eventually) covering tens of thousands of different objects, each image labeled with the object(s) it contains and a bounding box for each object. With this database, neural nets can be trained to recognize the objects. Then, when shown an arbitrary image, the neural net will identify the objects in it.
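To give a flavor of what these nets actually compute: their core building block is a small filter ("kernel") slid across the image. Here's a minimal sketch in plain Python/NumPy - not any particular library, and the kernel values are made up for illustration - of that sliding-window operation:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation: the basic operation of a convolutional layer."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            # Multiply the kernel against one window of the image and sum.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A hand-made vertical-edge kernel applied to a tiny image whose
# right half is bright: the response peaks where dark meets bright.
image = np.zeros((5, 5))
image[:, 3:] = 1.0
kernel = np.array([[-1.0, 1.0],
                   [-1.0, 1.0]])
response = conv2d(image, kernel)
print(response)  # column 2 of the output holds the peak value 2.0
```

Real nets stack hundreds of such filters in many layers, and - this is the key part - they learn the kernel values from the training images rather than having them hand-picked.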
The database is called ImageNet and was first ready for use in 2010. A contest was created to see who, if anybody, could create and train neural nets to distinguish between the tens of thousands of objects in the database. In 2010, the results were dismal, with most contestants guessing right less than half the time. But by trial and error and by building on the best successes (evolution?), the results got better each year - a lot better - to the point where, if you show one of the better nets a reasonably clear picture (with a few other minor caveats), it will correctly identify the main object(s) in it - again, out of tens of thousands of possibilities - the vast majority of the time.

And anyone can download these trained nets and use them with open source software such as Google's TensorFlow. While it takes weeks and weeks of cloud computing to train these networks on 14,000,000 images, once trained, a typical desktop can recognize the objects in an image in a few tens of seconds - and in less than a second if it has a sufficiently powerful GPU (it turns out that graphics cards happen to be nearly optimal for processing neural nets).
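As a concrete sketch of what downloading and using a trained net looks like, here's roughly how you'd load a pretrained ImageNet classifier through TensorFlow's Keras API. MobileNetV2 is just one of the available models, and I'm feeding it random pixels as a stand-in for a real photo:

```python
import numpy as np
import tensorflow as tf

# Download a net already trained on ImageNet (the weeks of cloud
# computing have been done for you; only the weights are fetched).
model = tf.keras.applications.MobileNetV2(weights="imagenet")

# A 224x224 RGB "image" of random pixels, standing in for a real photo.
img = np.random.rand(1, 224, 224, 3).astype("float32") * 255.0
x = tf.keras.applications.mobilenet_v2.preprocess_input(img)

# One forward pass yields a score for each of the 1,000 contest classes.
preds = model.predict(x)
top = tf.keras.applications.mobilenet_v2.decode_predictions(preds, top=3)[0]
for _, label, score in top:
    print(label, round(float(score), 3))
</antml>```

On a real photo, the top labels would be the net's best guesses at the main objects in the image; on random noise like this, the guesses are meaningless but the mechanics are identical.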
These image recognition nets are called "deep learning convolutional nets," and nobody really knows how they work, only that they do. Sorta like how we don't know how the human brain works - only that it does. Modifications of these nets have enabled a wide range of other applications. For example, a while back, an AI beat the world's Go champion. Ho hum, chess had already fallen to computers, so not a big deal, right? But it got a little more interesting a few weeks ago:
A new paper published in Nature today describes how the artificially intelligent system that defeated Go grandmaster Lee Sedol in 2016 got its digital ass kicked by a new-and-improved version of itself. And it didn’t just lose by a little - it couldn’t even muster a single win after playing a hundred games. Incredibly, it took AlphaGo Zero (AGZ) just three days to train itself from scratch and acquire literally thousands of years of human Go knowledge simply by playing itself.

Self-learning artificial intelligence. Pretty nifty.
Many of these techniques (and many more) are used in self-driving cars. They will soon teach themselves to drive really well - "literally thousands of years of human" driving experience. Bigger nets will be able to incorporate millions of years of human driving experience. It may take years to train them, but once trained, they can be downloaded to all cars. Humans may be bested by AI in a wide range of applications within my children's lifetimes, not just at relatively trivial things like chess and Go (which only 10 years ago were not considered at all trivial).
I'll leave you with what I think is a very interesting video. I'm sure you've all seen faces morph from one person to another, but I think you'll find that the morphing is qualitatively different starting at the 1:50 mark in the video. All of those faces are generated by a neural net that has been trained to "know" what a face is. The morphing from one face to another, even between radically different faces in different poses, stays pretty realistic throughout the transition. And the scene morphing, also completely generated, maintains a surprisingly realistic rendition even when changing between radically different scenes - for example, the bedrooms just after 4:00. Enjoy!