My post on Moravec’s paradox felt a bit incomplete. What I wanted to expand on was the probable causes behind the paradox, as well as to add some more thoughts on Yann LeCun’s theory of what it would take to bridge the gap between the current state of LLMs and true “AGI” (Artificial General Intelligence).
The most accepted theory behind Moravec’s paradox is that evolution determines what we humans consider easy versus hard. The things we consider easy, all the sensory-motor stuff, are actually extremely complex, but humans, and all our ancestors, have had millions of years to fine-tune them and make them seem simple. The things we consider hard, all the intellectual tasks that take effort, are, evolutionarily speaking, very recent: perhaps a few hundred thousand years old. Evolution has not yet perfected them, and so they take a lot of effort for us. Hence the paradox.
So, what does this have to do with AGI? Since the beginning of AI research, people have tried to figure out how to mimic the human brain in silicon. A lot of the early symbolic-reasoning efforts were based on how our brains think about the world. The problem is that we do not have one overarching theory or framework of how the brain works, because we still don’t understand much of it. Researchers in this field have done plenty of work, and we have great insights into many parts of the brain, but on the whole, our brain/mind remains a black box.
One great insight into how our brain/mind works was explained by Daniel Kahneman in his book “Thinking, Fast and Slow”. Full disclosure: I have not read the book. The key insight, however, is that the brain has two modes of operating. “System 1” thinking happens near-instantaneously, driven by instinct. “System 2” thinking takes a lot of conscious effort. Yann LeCun’s efforts at arriving at AGI involve figuring out how to incorporate this “System 1/2” kind of behavior into AI systems. The hypothesis is that if we figure out how to do that, then AI can truly reason through problems.
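To make the two-mode idea concrete, here is a minimal, entirely hypothetical sketch of what such a dispatch might look like: a fast “System 1” path that answers instantly from cached associations, and a slow “System 2” path that falls back to deliberate computation when the fast path isn’t confident. Everything here (the function names, the confidence threshold, the toy arithmetic “deliberation”) is my own illustrative assumption, not anything from LeCun’s actual proposals.

```python
# Hypothetical illustration of "System 1/2" dispatch in an AI system.
# This does NOT reflect any real architecture; it only shows the
# fast-instinct / slow-deliberation split in miniature.

def answer_fast(question, memory):
    """System 1: near-instant lookup of a cached association."""
    if question in memory:
        return memory[question], 0.9   # high confidence when memorized
    return None, 0.0                   # no instinctive answer available

def answer_slow(question):
    """System 2: slow, effortful computation (here, literal arithmetic)."""
    # Toy "deliberation": parse and evaluate a simple "a+b" question.
    a, b = (int(x) for x in question.split("+"))
    return a + b

def respond(question, memory, threshold=0.5):
    """Try the instinctive path first; deliberate only when unsure."""
    guess, confidence = answer_fast(question, memory)
    if confidence >= threshold:
        return guess
    result = answer_slow(question)
    memory[question] = result          # practice moves skills into System 1
    return result

memory = {"2+2": 4}                    # an already-internalized fact
print(respond("2+2", memory))          # fast path: instant recall
print(respond("17+25", memory))        # slow path: deliberate computation
print(respond("17+25", memory))        # now fast: it has been "learned"
```

Note how the caching step echoes the evolutionary argument above: a task that starts out as slow, effortful “System 2” work becomes, with enough repetition, a fast “System 1” reflex.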
One common theme in AI since the beginning is that AI is always 20 years away. No matter how much progress we make, there will always be more to do. Will “System 1/2” thinking and encoding physical reality into AI models give us true AGI? Time will tell.