Common sense in artificial intelligence… by 2026?
Common sense is nothing more than a deposit of prejudices laid down in the mind before age eighteen (Albert Einstein)
Lots of people want to judge machine intelligence against human intelligence. This goes back to Turing, who proposed his eponymous test: can machines “pass” as human beings? Turing, being clever, was aware of how biased this test was:
If the man were to try and pretend to be the machine he would clearly make a very poor showing. He would be given away at once by slowness and inaccuracy in arithmetic. May not machines carry out something which ought to be described as thinking but which is very different from what a man does?
I expect that we will eventually outgrow our anthropocentrism and see what machines really offer: a new kind of intelligence. In any case, from an economics perspective, it matters a great deal whether machines can do exactly what human beings can do. Calum Chace has published a new book on this topic, The Economic Singularity: Artificial Intelligence and the Death of Capitalism. Chace’s excellent book is the latest in a stream of books hinting that we may soon all be unemployable simply because machines are better than us at most jobs.
To replace human beings at most jobs, machines need to exhibit what we intuitively call “common sense”. For example, if someone just bought a toaster… you do not try to sell them another toaster (as so many online ad systems do today).
Common sense is basic knowledge about how the world of human beings works. It is not rule-based. It is not entirely logical. It is a set of heuristics almost all human beings quickly acquire. If computers could be granted a generous measure of common sense, many believe that they could make better employees than human beings. Whatever one might think about economics, there is an interesting objective question… can machines achieve “common sense” in the near future?
It seems that Geoff Hinton, a famous computer scientist, predicted that within a decade we would build computers with common sense. These are not computers that are smarter than all of us at all tasks. These are not computers with a soul. They are merely computers with a working knowledge of the world of human beings… computers that know our conventions: they know that stoves are hot, that people don’t usually own twelve toasters, and so forth.
Chace recently placed a bet with a famous economist, Robin Hanson, at 50-to-1 odds that Hinton is right. This means that Hanson is very confident that computers will be unable to achieve common sense in the near future. Hanson is not exactly a Luddite who believes that technology will stall. In fact, Hanson has written an excellent book of his own, The Age of Em, that describes a world where brains have been replaced with digital computers: our entire civilization is made of software. I have covered some of the content of Hanson’s book on my blog before… for example, Hanson believes that software grows old and becomes senile.
I think that both Hanson and Chace are very well informed on the issues, but they have different biases.
What is my own take? The challenge for people like Chace who allude to an economic singularity where machines take over the economy… is that we have little to no evidence that such a thing is coming. For all the talk about massive unemployment on the way… unemployment rates are really not that high. Geoff Hinton thinks that machines will soon acquire common sense… but does it look like an easy problem? We have no clue right now how to go about solving it. It is hard even to define it.
As for Hanson, the problem is that betting against what we will be able to do 10 years from now is very risky. Ten years ago, we did not have iPhones. Today’s iPhone is more powerful than a PC from ten years ago. People at the beginning of the 20th century thought that it would take a million years to build a working aeroplane, whereas it took a mere ten years…
I must say that, despite the challenge, I am with Chace. At 50-to-1 odds, I would bet on the software industry. The incentive to deliver common sense is great: after all, you can’t drive a car, clean a house or serve burgers without some of it. What the deep learning craze has taught us is that we do not need to understand how the software works for it to be effective. With enough data, enough computing power and trial and error, there is no telling what we can find!
Let us be more precise… what could we expect from software with common sense? Common sense is hard to define because it is a collection of small pieces… all of which are easy to program individually. For example, if you are lying on the floor yelling “I’m hurt”, common sense dictates that we call emergency services… but Apple’s Siri may already be able to do this. We have the Winograd Schema Challenge, but it seems tightly tied to natural language processing… and I am not sure that understanding language and common sense are the same thing. For example, many human beings are illiterate and yet they can be said to have common sense.
So I offer the following “test”. Every year, new original video games come out. Most of them come with no instructions whatsoever. You start playing and you figure it out as you go… using “common sense”. So I think that if some piece of software is able to pick up a decent game from Apple’s AppStore and figure out how to play competently within minutes… without playing thousands of games… then it will have an interesting form of common sense. It is not necessary for the software to play at “human level”. For example, it would be fine if it only played simple games at the level of a 5-year-old. The key to this test is diversity. There are a great many different games, and even when they share the same underlying mechanic, they can look quite different.
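To make the test a little more concrete, here is a minimal sketch, in Python, of how such an evaluation could be scored. Everything in it, the Game and Agent interfaces, the average_score and passes_common_sense_test functions, the five-episode budget, is a hypothetical illustration of my own, not an established benchmark or API.

# A sketch of scoring the proposed "common sense" game test, assuming
# hypothetical Game/Agent interfaces of my own design.

import random
from typing import List, Protocol, Tuple

class Game(Protocol):
    num_actions: int
    def reset(self) -> object: ...                                   # initial observation
    def step(self, action: int) -> Tuple[object, float, bool]: ...   # (observation, reward, done)

class Agent(Protocol):
    def act(self, observation: object) -> int: ...
    def observe(self, observation: object, reward: float, done: bool) -> None: ...

def average_score(agent: Agent, game: Game, episodes: int = 5) -> float:
    # The small episode budget is the whole point: the agent must become
    # competent within minutes, not after thousands of plays.
    total = 0.0
    for _ in range(episodes):
        obs, done = game.reset(), False
        while not done:
            action = agent.act(obs)
            obs, reward, done = game.step(action)
            agent.observe(obs, reward, done)
            total += reward
    return total / episodes

class RandomAgent:
    # A baseline with no common sense at all: it presses buttons at random.
    def __init__(self, num_actions: int) -> None:
        self.num_actions = num_actions
    def act(self, observation: object) -> int:
        return random.randrange(self.num_actions)
    def observe(self, observation: object, reward: float, done: bool) -> None:
        pass

def passes_common_sense_test(agent: Agent, games: List[Game],
                             child_level_scores: List[float]) -> bool:
    # Pass if the agent beats a weak reference player (say, a 5-year-old)
    # on every game in a diverse collection of previously unseen games.
    return all(average_score(agent, g) >= s
               for g, s in zip(games, child_level_scores))

The hard part, of course, is not the harness but the agent: the claim is precisely that no current software can fill the Agent slot across a genuinely diverse collection of unseen games on such a small budget.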
Is it fair to test software intelligence using games? I think so. Games are how we learn about the world. And, frankly, office work is not all that different from a (bad) video game.