Daniel Lemire's blog


Common sense in artificial intelligence… by 2026?

8 thoughts on “Common sense in artificial intelligence… by 2026?”

  1. ok says:

    When a “common sense” thing learns its knowledge from something else, say another “common sense” thing that learned its own “common sense” from yet another, and so on, what is it then? And how can it identify itself? Can it also shorten the path and communicate directly? Or will it always learn from a “black box”, from scratch?

  2. Diego Alonso says:

    “…if some piece of software is able to pick up a decent game […] and figure out how to play competently within minutes…”

    Isn’t this what DeepMind’s Atari-playing, reinforcement-learning-based software does? It knows nothing about controls, games, life, or anything… and eventually plays better than humans.

    Great blog, btw!

  3. Atreyu says:
  4. @Diego @Atreyu

    I was thinking of DeepMind, yes. But DeepMind currently falls short of my test. DeepMind’s AI needs extensive training to figure things out. It does not use common sense. It might need to play a game thousands or even millions of times until it figures out how to play. (See the small Q-learning sketch after the comments.)

  5. Ankur says:

    Humans, though, design machines (hence the anthropocentrism). I would back a machine with common sense any day, but can it be built? Difficult … because, as far as I know, we have not yet figured out how we get our own common sense. It certainly does not come from scholastic education. Common sense is a gut feeling (and an invaluable one, better than all laurels and degrees): some have it in oodles, some don’t. But why? We may be bad at doing arithmetic, but that is simply because we cannot handle enough data at one moment; we do know how to add 2 and 2 and why it gives 4, so we can design computers to do that. Lovely blog, though. I wouldn’t take a bet, but I don’t think the odds of a machine with common sense are good, at least not yet.

  6. Anonymous says:

    >Geoff Hinton thinks that machines will soon acquire common sense… and it looks like an easy problem? But we have no clue right now how to go about solving this problem. It is hard to even define it.

    I think this problem is quite well approximated by various state-of-the-art benchmarks from Facebook AI Research:
    https://research.facebook.com/research/babi/
    and for the visual part there are http://visualqa.org/ and https://visualgenome.org/ (also see a nice machine learning model that tackles visual QA: http://arxiv.org/abs/1603.01417).
    These are supervised learning tasks (a toy example in that style appears after the comments).

    Playing games is, IMHO, a more general reinforcement learning problem, though eventually the above-mentioned tasks should also be solvable in a reinforcement learning setting.

    There is also an interesting paper that outlines Facebook’s research direction in general reinforcement learning, which includes common sense: http://arxiv.org/abs/1511.08130

    1. Thanks for the informative comment.

  7. Insofe says:

    Many people want to judge machine intelligence based on human intelligence. Common sense is basic knowledge about how the world of human beings works. It is not rule-based and it is not totally logical. Besides, we are not even close to achieving the awesome capabilities of human intelligence. Even getting a machine to be as smart as a mouse would be a historic breakthrough. That alone would be highly useful. Reaching human intelligence is a good target, but anything in between would be just as useful.
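
To make the sample-inefficiency point in the reply to @Diego and @Atreyu concrete, here is a minimal sketch, written from scratch and in no way DeepMind's system: tabular Q-learning on a ten-cell toy corridor. Even on this tiny problem the agent needs many episodes of trial and error before its greedy policy becomes reliable, because every state-action value has to be estimated from repeated experience.

```python
# A minimal sketch, not DeepMind's system: tabular Q-learning on a toy
# ten-cell corridor. The only reward sits at the rightmost cell, and the
# agent has to discover it by repeated trial and error, which is why it
# takes many episodes even on a problem this small.
import random

N_STATES = 10                      # corridor cells 0..9
ACTIONS = (-1, +1)                 # step left or step right
EPISODES = 5000                    # thousands of plays for a ten-cell world
MAX_STEPS = 500                    # safety cap on episode length
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def env_step(state, action):
    """Toy environment: move along the corridor, +1 reward at the right end."""
    nxt = max(0, min(N_STATES - 1, state + action))
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else 0.0), done

def greedy(state):
    """Pick the highest-valued action, breaking ties at random."""
    best = max(Q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(state, a)] == best])

for _ in range(EPISODES):
    state = 0
    for _ in range(MAX_STEPS):
        # Epsilon-greedy: mostly exploit current estimates, sometimes explore.
        action = random.choice(ACTIONS) if random.random() < EPSILON else greedy(state)
        nxt, reward, done = env_step(state, action)
        # Standard Q-learning update toward the bootstrapped target.
        target = reward + GAMMA * max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (target - Q[(state, action)])
        state = nxt
        if done:
            break

# After training, the greedy action in every non-terminal cell should be +1.
print([greedy(s) for s in range(N_STATES - 1)])
```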
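
For readers unfamiliar with the bAbI-style benchmarks linked in the comment above, here is a hand-written toy example in that spirit (not taken from the actual dataset). The supervision is a (story, question) pair with its answer; the throwaway rule-based solver below works only because the rule it needs is hard-coded, which is exactly what the learned models are expected to discover from data.

```python
# A hand-written toy example in the style of the bAbI tasks (not taken from
# the dataset): statement lines carry an index and a sentence, question lines
# carry the question, the answer, and the supporting fact index, separated by
# tabs. The trivial "solver" answers "Where is X?" questions by remembering
# each person's last reported location.
STORY = """\
1 Mary moved to the bathroom.
2 John went to the hallway.
3 Where is Mary?\tbathroom\t1
"""

def answer_where_is(lines):
    """Track each person's last location and answer the question lines."""
    last_location = {}
    results = []
    for line in lines:
        _, text = line.split(" ", 1)       # drop the leading line index
        if "\t" in text:                   # question \t answer \t supporting fact
            question, expected, _ = text.split("\t")
            person = question.split()[-1].rstrip("?")
            results.append((question, last_location.get(person), expected))
        else:                              # "<person> <verb> to the <place>."
            words = text.rstrip(".").split()
            last_location[words[0]] = words[-1]
    return results

for question, predicted, expected in answer_where_is(STORY.splitlines()):
    print(question, "->", predicted, "| expected:", expected)
```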