Daniel Lemire's blog


How close are AI systems to human-level intelligence? The Allen AI challenge.

10 thoughts on “How close are AI systems to human-level intelligence? The Allen AI challenge.”

  1. Ben says:

    I dislike the phrase “human-level intelligence” because it implies the existence of a simple, objective intelligence metric (which, I think Daniel would agree, does not exist).

    A modest proposal: What about the phrase “human-mimic intelligence”? I think this more accurately reflects the questions that sometimes get asked (“humans can do X; can AIs do X?”), and has a nice mildly pejorative ring to my ear.

    1. Right. So I think that “human-level intelligence” is a subjective and needlessly debatable term.

      That’s like saying… “we’ll agree that you have flying machines when they are indistinguishable from birds… otherwise you do not have bird-level flying machines”.

      What we actually want is superhuman intelligence… like how any smartphone can locate the nearest McDonald’s anywhere in the world in seconds and tell you exactly how to get there. There is nothing “human” about it but it is clearly “intelligence”.

      1. I would in turn argue that the focus on super-human intelligence is wrong. For one thing, there is no good way to define intelligence. For another, there is no good way to define what super-human is. A calculator is super-human (Oren Etzioni), so what? Human brains are surprisingly weak in some areas, but likewise surprisingly strong in others.

        A focus on super-human “intelligence” is also wrong, because we should be solving real problems instead. This focus creates incentives to beat humans on some tests, but the consequences of these wins are not clear. A calculator beats humans in math, Deep Blue beats humans in chess, IBM Watson beats humans in Jeopardy, and AlphaGo beats humans in Go. Translating these wins into real-life applications turns out to be super complex.

        The reason for this is that, speaking in machine-learning terms, we are overfitting to specific problems, specific data sets, etc. One should be careful not to do so. The real focus should be on real-world problems.
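
        To make the machine-learning analogy concrete, here is a minimal, hypothetical sketch of overfitting in the statistical sense (the data and polynomial degrees below are made up for illustration and have nothing to do with the Allen AI challenge): a model flexible enough to fit a handful of noisy training points exactly will often predict new points worse than a simpler model.

            # Toy overfitting demo (synthetic data only): a degree-7 polynomial
            # fits 8 noisy training points exactly, yet a modest degree-3 fit
            # usually generalizes better to unseen points.
            import numpy as np

            rng = np.random.default_rng(0)
            x_train = np.linspace(0, 1, 8)
            y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, size=8)

            x_test = np.linspace(0, 1, 200)
            y_test = np.sin(2 * np.pi * x_test)  # the true underlying signal

            for degree in (3, 7):
                coeffs = np.polyfit(x_train, y_train, degree)
                rmse = np.sqrt(np.mean((np.polyval(coeffs, x_test) - y_test) ** 2))
                print(f"degree {degree}: test RMSE = {rmse:.3f}")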

        1. A focus on super-human “intelligence” is also wrong, because we should be solving real problems instead.

          I agree with the second part of your statement.

          As for the first part, I agree with you that it needs care. When I use the term, I refer to technology that extends the capabilities of human beings… but, of course, all technologies do that in a way… starting with the hammer. Maybe I should be more careful with the term.

          This focus creates incentives to beat humans on some tests, but the consequences of these wins are not clear.

          Regarding AlphaGo, it did show conclusively that deep learning is a powerful tool (for some problems). Regarding the current Allen Institute test, it did show the power of information retrieval. Basically, a finely tuned search engine can nearly pass (59%) science tests.
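
          To illustrate what that information-retrieval approach looks like, here is a minimal, hypothetical sketch (the tiny corpus, question and scoring rule are invented for illustration and are not the Allen Institute winners' systems): score each answer choice by how strongly it co-occurs with the question's words in a reference text, and pick the best-scoring choice.

              # Hypothetical information-retrieval baseline for multiple-choice
              # science questions: choose the answer whose words co-occur most
              # with the question's words in a reference corpus. The "corpus",
              # question and choices below are made up for illustration.
              import re

              corpus = [
                  "photosynthesis converts sunlight water and carbon dioxide into glucose and oxygen",
                  "the mitochondria are the organelles that produce most of the cell's energy",
                  "condensation occurs when water vapor cools and changes into liquid water",
              ]

              def tokens(text):
                  return set(re.findall(r"[a-z']+", text.lower()))

              def score(question, choice):
                  """Sum, over corpus sentences mentioning the choice, of how many
                  question words that sentence also contains (crude co-occurrence)."""
                  q, c = tokens(question), tokens(choice)
                  return sum(len(q & tokens(s)) for s in corpus if c <= tokens(s))

              question = "Which process changes water vapor into liquid water?"
              choices = ["evaporation", "condensation", "photosynthesis", "erosion"]
              print(max(choices, key=lambda ch: score(question, ch)))  # condensation

          Even this crude scheme answers the toy question correctly, which gives a sense of why a finely tuned search engine over a large corpus can already reach about 59%.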

          Translating these wins into real-life applications turns out to be super complex.

          Yes. Thankfully, we have tens of thousands of brilliant engineers on the job.

          The reason for this is that, speaking in machine-learning terms, we are overfitting to specific problems, specific data sets, etc. One should be careful not to do so. The real focus should be on real-world problems.

          The ability of machines to specialize is not necessarily a fault. I like hammers, but I also use screwdrivers.

          Right now, if you have open-ended problems, you need human beings… but we have no shortage of human beings, so that’s ok.

          I am sure we will get to a point where the same machine that learned to play Go can play tennis thanks to a robotic body… but I am not sure I care.

          Probably, the machine that maps the route I need to take to get to the dentist has little to do with the machine that tells me about the latest movies… but I don’t need all of these machines to work the same, to be based on the same principles.

          Nature was limited. It could not evolve one brain to make paintings, another brain to hunt, another to care for the young… it needed to integrate all functions into one machine. Moreover, this machine could not use too much energy and it had to be robust (with respect to injuries, diseases and so forth).

          We are not similarly limited.

          I would add that since we already have the brains we do, the last thing we need is machines that can replace us. Rather, we need specialized machines that can extend our reach.

          It is not that general artificial intelligence would not be interesting… but I think it would be interesting mostly from a philosophical point of view.

  2. Atul Mehta says:

    Fei-Fei Li of Stanford has said that supervised deep learning (using deep CNNs) is, at its best, somewhere around the intelligence of a 3-year-old, as it relates to the large-scale visual recognition ImageNet challenge.

    Designing domain-specific AI tailored to pass 8th-grade science tests shouldn’t be as hard, considering that all three winners thought that “a deeper, semantic level of reasoning with scientific knowledge to the questions and answers would be key to achieving scores of 80% and beyond.”

    LSTMs (https://en.wikipedia.org/wiki/Long_short-term_memory) and Google’s n-gram model appear promising, but a fundamental problem often overlooked by researchers is that a high degree of semantic reasoning requires semantic unambiguity. English is an ambiguous language, and anyone who has tried using voice search (Alexa, Cortana, et al.) can attest to it.
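
    For what it is worth, here is a minimal, hypothetical sketch of how an n-gram model can be applied to such questions (the reference text, stem and candidates are toy stand-ins, not Google's actual n-gram corpus): rank candidate completions by how frequent their adjacent word pairs are in a reference text.

        # Hypothetical bigram-count ranking of answer candidates: the completion
        # whose adjacent word pairs are most frequent in a reference text wins.
        # The reference text, stem and candidates below are made up.
        from collections import Counter

        reference = ("water vapor cools and condenses into liquid water "
                     "plants use sunlight to turn carbon dioxide into glucose").split()
        bigrams = Counter(zip(reference, reference[1:]))

        def plausibility(text):
            """Sum of reference bigram counts over adjacent word pairs in `text`."""
            words = text.lower().split()
            return sum(bigrams[pair] for pair in zip(words, words[1:]))

        stem = "water vapor cools and condenses into liquid"
        for candidate in ("rock", "water"):
            print(candidate, plausibility(f"{stem} {candidate}"))  # "water" scores higher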

    My guess is that machine learning would probably take a magic leap forward after a novel English-like unambiguous context-sensitive intermediate mapping language is invented. Here’s an interesting read on Quora: https://www.quora.com/What-is-the-reason-behind-saying-that-Sanskrit-is-the-most-suitable-language-for-programming

    1. supervised deep learning (using deep CNNs) is, at its best, somewhere around the intelligence of a 3-year-old

      Deep learning was a key ingredient of AlphaGo, and AlphaGo appears to be far beyond the abilities of a 3-year-old child.

      a high degree of semantic reasoning requires semantic unambiguity (…) My guess is that machine learning would probably take a magic leap forward after a novel English-like unambiguous context-sensitive intermediate mapping language is invented.

      I would rather think that we are doing away with formal reasoning as the cornerstone of intelligence.

      Further reading:

      When bad ideas will not die: from classical AI to Linked Data
      http://lemire.me/blog/2014/12/02/when-bad-ideas-will-not-die-from-classical-ai-to-linked-data/

  3. Shane Greunke says:

    I think the question of an “AI” is, at its core, more about quicker human thinking. If we can get a computer brain to do most of the thinking we do (except much faster), then we humans can focus on the things only a human mind can do. This way, in theory, we would be able to use this technology to advance faster than ever before.
    As an analogy, consider a person using Google search to apply information to a situation or task. Alone, the person would take much longer to think of or learn all of the necessary information and apply it. With Google search acting like a hive mind and information hub, the information is found and available in seconds; nearly all information known to man is there. The person can then bring that information to the task almost immediately and complete it far more efficiently than a lone person could. This, but on a higher technological scale, is what we need from an “AI.”

    P.S. Could quantum computing possibly be applied here, once we advance quantum processing?

    1. Travis says:

      Well, quite frankly, most people think of A.I. as something that ‘feels’; they think of robots like WALL-E or Chappie. But the real question is: how can we decide whether a being is sentient if we ourselves do not fully understand what consciousness is? The real task at hand is creating something with a base amount of human input (the equivalent of natural instincts for us Homo sapiens) and then allowing the program to write itself, just as our brains react independently to the data collected by our sensory organs.

  4. Bill Everitt says:

    It is interesting to note all the recent chatter in the media about Artificial Intelligence (AI). The fact is that the scientific community has been trying for decades to get from AI level 4 to level 5. Level 5 has been defined as the ability of a computer to REASON as a human brain can. We are definitely not there yet, and even getting there by the end of this century seems optimistic at best.
    Somehow the media has got hold of this and is giving the impression that science has finally reached the ultimate level 5 AI, where machines can reason. Previously, level 4 AI was never referred to as artificial intelligence, as that label was reserved for level 5 – an elusive goal that science has not yet been able to achieve.
    For sure, level 4 AI has great computing power, but it is NOT artificial intelligence as the media would have us believe.
    There should be a better description of the 4 levels of AI and why we are still stuck at level 4. Clearing the hurdle of getting a machine to REASON is many years away.

  5. subash says:

    So imagine AI smart enough to expand and develop itself, producing smarter and more complex computers that humans find too sophisticated to even understand.