Daniel Lemire's blog


What are we going to do about ChatGPT?

7 thoughts on “What are we going to do about ChatGPT?”

  1. Patrick says:

    So far ChatGPT does not seem to be working with actual concepts, dealing with meaning, or producing actual information. Instead, it merely produces “information-shaped sentences,” as Neil Gaiman puts it on Twitter.

    Case in point: Consider this comment posted on my friend’s “April” project:

    https://github.com/phantomics/april/issues/269#issuecomment-1491731824

    There the commenter quoted GPT4, which surmised that an observed bug in April was due to it comparing memory addresses instead of doing a deep comparison. Its diagnosis sounds very plausible. Problem is, it’s just not true.

    My friend replies below that April does not in fact compare memory locations. Apparently GPT4 just saw a whole bunch of discussions out there and assembled output with the most probable statistical correlations. It is merely “generative,” as in a generative grammar that can generate an infinite set of strings from some grammar.
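    The identity-versus-deep-comparison distinction that GPT4 invoked is a real one, even though its diagnosis of April happened to be wrong. A minimal sketch in Python (April itself is written in Common Lisp; this is purely an illustration of the concept):

    ```python
    # Identity comparison asks: are these the same object in memory?
    # Deep (structural) comparison asks: do these have equal contents?
    a = [1, [2, 3]]
    b = [1, [2, 3]]  # structurally identical, but a separate object

    print(a is b)  # False: 'is' compares object identity (memory addresses)
    print(a == b)  # True: '==' recurses into the structure, element by element
    ```

    A model that had actually read April's source could check which kind of comparison it performs; a model that has only seen many similar bug discussions can produce a plausible-sounding diagnosis of either kind.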

    As Colin Wright puts it on Twitter: “GPT can produce things that are right, but it also produces things that look like they’re probably right, but are absolutely wrong. So someone needs to check it.”

    I initially thought that GPT might be using something like “Conceptual Dependency Networks,” which I learned about at GA Tech in the 80s via Janet Kolodner. We wrote programs which “knew” about common scenarios like dining in a restaurant, and you could tell it a story and then ask questions about what happened in it. At least that was dealing with concepts of meaning. But I don’t think that’s how GPT works.

    1. It can generate good-looking content, but, as you say, it is not intelligent in the way a smart human being could be.

  2. Dong Xie says:

    Before even talking about banning AI and models, we should really understand what they are. A generative model that can generate some coherent-looking sentences does not mean it has “intelligence”. Rather than panicking about how “smart” it has become, we should worry more about whether it is actually very stupid. The appearance of more powerful models is not scary; blindly trusting them without reasoning is.

    1. Ian Beauregard says:

      A generative model that can generate some coherent-looking sentences
      does not mean it has “intelligence”.

      You could say that about human beings as well.

    2. Patrick says:

      I could see GPT being very useful in search engines, so it might do a good job looking up facts about the 1964 Buick Skylark and the 1963 Pontiac Tempest. But I don’t think it (yet) has the conceptual knowledge needed to present a coherent argument like this:

      https://www.youtube.com/watch?v=W7YoxrKa4f0

      I think that’s a very different process from generating sentences which sound convincing merely because they are produced by what is in effect a “grammar” derived from observed probabilities during massive neural network training.

      AI can also be very useful in eliminating mental grunt-work, for example in symbolic algebra and calculus. But note there that the concepts are already well established and the outcomes are virtually guaranteed correct.

      I realize that there is a growing shortage of capability in human conceptual thought, but the impulse to eliminate the need for it altogether is very dangerous. People can die as a result.

      I’m not even sure neural networks by themselves can produce competent self-driving cars. Sure, they don’t get tired or distracted like human drivers do, but (so far) they lack the conceptual capacity of human drivers.

  3. al butlerian says:

    When it comes to A.I. my fears are mostly related to the confusion, violence and abuse it might bring.

    After seeing the “Trump arrested” and the “Fashion pope” pictures, I imagine it won’t be long until people start to harass others by generating pornography and violent material featuring their “victims”.
    Fake news backed by “generated” evidence will become a thing. It will be hard to distinguish truth from falsehood. It’s hard even now, but give it 5-10 years.
    Online scams will flourish in creative and generative ways. Imagine being called by your “grandkids” asking you to send them money because they had an accident.
    Plagiarism in education will become the norm.

    Banning A.I. doesn’t solve the problem. We will just have to deal with new challenges and, as a society, be creative enough to adapt to the disruption. It will be hard, and things are moving fast.

  4. Mitchell Porter says:

    You should be asking, what will GPT-n do about us?