The courage to face what we do not understand
Sadly, it is easy to forget that what we know is but a tiny fraction of all there is to know. Human beings naturally focus on what they understand. The more you learn, the stronger this tendency becomes.
Irrespective of any biological mechanism, I believe that it is a form of “aging” in the sense that it makes you increasingly inflexible. The more you know, the less open you are to new experiences. In effect, the dumber you get.
Let us call this “cognitive rigidity”.
This almost seems unavoidable, doesn’t it?
I believe that there are at least three factors driving cognitive rigidity:
- Economics often favors specialization. Suppose that you have been programming in Java for the last 5 years. You have an enormous advantage over anyone who is just starting out in Java. So you have every incentive to ignore new programming languages.
- Skills and experience are often poorly transferable. For example, I speak French and English fluently. If I try to learn Chinese, it will take me years of hard work just to catch up with a 6-year-old child born in China.
- The longer you focus on one thing, the more likely you are to hit diminishing returns. There is very little difference between spending 5 years programming in Java and spending 20 years doing the same.
What is interesting is that cognitive rigidity is an entirely general process.
For example, there is no reason to believe that an artificial intelligence would not suffer from cognitive rigidity. Suppose that you train a machine for some task over many years. The machine has gotten very good at it, and any small change is likely to make it worse at its job. At some point, the machine stops learning. It has fallen into a “local extremum”. Yet a whole other piece of software can come along and surpass the old machine because it starts from new assumptions. The old machine could have explored those new assumptions, but doing so would likely have provided no immediate gain.
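As a toy illustration (my own sketch, not a claim about any particular system), consider a greedy optimizer that only ever accepts small changes that improve its score. Started near a modest peak, it settles there and stops; the same procedure started from different assumptions reaches a much better peak that the first one could never reach through small improvements.

```python
# Minimal sketch: greedy hill climbing on a function with two peaks.
# The "old machine" only accepts small changes that improve its score,
# so it stops at the nearer, lower peak; starting from new assumptions
# (a different initial point) reaches the higher peak.

def score(x):
    # Two peaks: a small one near x=2 (height 4) and a taller one near x=8 (height 9).
    return max(4 - (x - 2) ** 2, 9 - (x - 8) ** 2)

def hill_climb(x, step=0.1, iterations=1000):
    for _ in range(iterations):
        best = max(x - step, x, x + step, key=score)
        if score(best) <= score(x):  # no small change helps: we are stuck
            break
        x = best
    return x, score(x)

old_machine = hill_climb(1.0)  # starts near the small peak
new_machine = hill_climb(7.0)  # starts from different assumptions
print(old_machine)  # roughly (2.0, 4.0): a local extremum
print(new_machine)  # roughly (8.0, 9.0): better, but unreachable by small steps from x=2
```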
Organizations, communities, and even whole countries might also be subject to cognitive rigidity. For example, IBM famously missed the PC revolution… then Microsoft nearly missed the Internet revolution, and squarely missed the mobile revolution.
Where we should be really concerned is that I believe humanity as a whole can fall victim to cognitive rigidity. I have recently “reinvented” myself as an advocate for techno-optimism. I did so when I realized that even among people who should know better, there was a massive failure of imagination. Even young computer scientists fall for it. I find that people universally imagine the future as the present, with a few more gadgets. I do not share that pessimism, but the pessimism itself is not what troubles me. What troubles me is that people assume we have reached worthwhile extrema. We haven’t!
- We do not know what intelligence is. We can emulate some of human intelligence in computers. We can “measure” intelligence using IQ tests… but we do not know what it is. Not really. We cannot even reproduce the visual recognition abilities of a rodent, despite having more than enough CPU cycles to do it. In a very fundamental way, we do not know anything about intelligence.
- We are nowhere close to figuring out the laws of the universe. At a high level, we have two systems, quantum mechanics and relativity, that we glue together somehow. It is an ugly hack.
- We really do not know much about biology. We have had the (nearly) complete human genome for many years now. So we can touch the binary code of life. We can change it. We can tune it. But we have no idea how it works. Not really. We don’t know why we age. We don’t know why we get cancers while other animals don’t.
To make matters worse, there is a hidden form of cognitive rigidity when people consider biology. There is a strong assumption that whatever natural evolution produced, it must be ideal… and so tinkering with it is dangerous. For example, I was telling my neighbor about the existence of genes that make people stronger, or more resilient to cancer. His first reaction was that these genes must come at a sinister cost, otherwise we would all have them. This is, of course, a fallacy. You could equally say that whatever product has not yet been marketed must not be profitable, otherwise someone else would have already marketed it.
- Our best practices in politics and economics are based on debatable heuristics that work “ok”, but they are probably nowhere close to optimal. Alastair Reynolds, in his novel Revelation Space, depicts a highly advanced human society whose people have adopted radically different forms of politics and economics. His novel hypothesizes that this led to a surge of prosperity never seen before. Yet almost any debate that calls current politics into question is a non-starter. People simply assume that whatever they have must be the best that can be had.
So we have these giant gaps. What really worries me is that most people do not even see these huge gaps in our knowledge. And these are just the beginning of a long list. If you drill down on any given issue, you find that we know nothing, and we often know less than nothing.
I do not think that cognitive rigidity is unavoidable. I don’t think that there is a law that says you have to fall prey to it. For example, while Intel has had every opportunity to stumble the way IBM and Microsoft did, it has, time and time again, managed to adopt new techniques. We still look at Intel today to determine whether Moore’s law is holding, 40 years after the law was formulated.
The key to progress is to face what we do not understand, and that takes courage.