I recently learned about Moravec’s Paradox. First articulated by Hans Moravec, it is the observation that AI (and computers in general) is good at tasks we humans consider complex but bad at tasks we consider very simple.
For example, intellectual tasks such as writing, mathematics, and creating art, images, and music are hard for humans. Yet AI has made the most progress on exactly these tasks, and some of them, such as complex arithmetic, are trivially easy for computers.
Now consider tasks such as walking, doing the dishes, or folding laundry. These are tasks humans can do without even thinking, yet there are no AI products on the market that can do them for us. Personally, I would much prefer an AI that can do the dishes and clean the house over one that can write reports or create videos.
This gap between AI’s progress on tasks we humans consider difficult and on those we consider easy is the paradox. I’d recommend the Wikipedia page and this detailed Reddit post for a more in-depth overview of Moravec’s paradox, including some of the evolution-based explanations for it.
So why has progress on AI products for physical tasks been so slow?
There are many LLMs on the market, and all of them claim to perform very well in these intellectual areas. Most of us are familiar with the names: OpenAI’s GPT models, Gemini, Claude, Mistral, etc. However, the only company I can think of that makes robots that move around the physical world like humans is Boston Dynamics, of robot dancing video fame.
One of the arguments Yann LeCun has been making is that to get to true Artificial General Intelligence (AGI), we need machines that have an innate representation of the physical world. Animals evolved intelligence by interacting with the physical world, and our brains have an intuitive grasp of how it works. For example, most toddlers figure out that if they throw a ball, it is going to fall back down; they have an intuitive understanding of gravity and physics. The theory is that the better we get at teaching AI to understand the physical world, the better it will get at interacting with it. It will be interesting to see how the next generation of AI technology tackles these problems.
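To make the toddler’s intuition concrete, here is a minimal sketch of what a hand-coded “world model” for this one example might look like: a few lines of Python that predict where a thrown ball lands under gravity. The function name and parameters are my own illustration, not anything from LeCun’s actual proposals, which involve learned world models rather than explicit physics equations.

```python
import math

GRAVITY = 9.81  # m/s^2, downward acceleration near Earth's surface

def predict_landing(speed: float, angle_deg: float, height: float = 1.0) -> float:
    """Predict how far (in meters) a thrown ball travels before landing.

    A toddler does this implicitly; here the 'world model' is explicit
    projectile physics: constant gravity, no air resistance.
    """
    angle = math.radians(angle_deg)
    vx = speed * math.cos(angle)   # horizontal velocity
    vy = speed * math.sin(angle)   # initial vertical velocity
    # Solve height + vy*t - 0.5*g*t^2 = 0 for the time of landing.
    t_land = (vy + math.sqrt(vy**2 + 2 * GRAVITY * height)) / GRAVITY
    return vx * t_land

# A ball thrown at 5 m/s, 45 degrees, from 1 m up lands about 3.3 m away.
print(f"Lands about {predict_landing(5.0, 45.0):.1f} m away")
```

The point of the sketch is only that prediction requires some model of the world: a toddler carries one around implicitly, while current LLMs, trained mostly on text, do not.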