• slazer2au@lemmy.world · 1 year ago

    Is it really AI? LLMs aren’t really creating something new: they take their training data, throw some probability at it, and return what is already in that training data.
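
    As a rough sketch of what I mean (toy numbers here, not a real model): the model scores every candidate next token and samples from that distribution, so everything it emits traces back to patterns already present in the training data.

    ```python
    import random

    # Made-up probabilities standing in for what a trained model would assign
    # to each candidate next token, given the text so far.
    next_token_probs = {"cat": 0.55, "dog": 0.30, "fox": 0.10, "idea": 0.05}

    def sample_next_token(probs):
        """Pick one token at random, weighted by the model's scores."""
        tokens, weights = zip(*probs.items())
        return random.choices(tokens, weights=weights, k=1)[0]

    # "Generation" is just this step repeated: nothing new is created,
    # only recombinations of what the probabilities already encode.
    print(sample_next_token(next_token_probs))
    ```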

    • Eggyhead@lemmy.world · 1 year ago

      I’m glad you brought this up. I imagine true AI would be able to take information and actually infer original ideas from it. LLMs simply restate what is already known using natural language. A human is still necessary if any new discoveries are expected to be made with that information.

      To answer OP’s question, the only thing that concerns me about “the rise of AI” is that people are convincing themselves it’s actual AI and then assuming it can make serious decisions on their behalf.

      • Lvxferre@lemmy.ml · 1 year ago

        > Is that not what we do?

        No. For two reasons:

        1. We associate words with concepts and the things they refer to; the model only handles tokens, with no conceptualisation behind them.
        2. Our utterances are produced with some pragmatic purpose in mind; the model’s output has none.

        And some might say “hey, look at this output, it resembles human output!”, but the difference gets especially obvious when either side gets things wrong (human brainfarts still show some sort of purpose and conceptualisation; LLM hallucinations are often plain gibberish).

        • bionicjoey@lemmy.ca · 1 year ago

          For anyone doubting this, I encourage you to have GPT generate some riddles for you. It is remarkable how quickly the illusion is broken, because the model doesn’t understand enough about the concepts underpinning a word to create a good riddle.
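
          If you want to try it yourself, here’s a minimal sketch (assuming the current openai Python SDK and an API key in your environment; the model name is just an example):

          ```python
          from openai import OpenAI

          client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

          # Ask for an original riddle plus its answer, then check whether the
          # stated answer actually fits the clues. In my experience it often doesn't.
          response = client.chat.completions.create(
              model="gpt-4o-mini",  # example model name; any chat model will do
              messages=[
                  {"role": "user", "content": "Write an original riddle and give its answer."}
              ],
          )
          print(response.choices[0].message.content)
          ```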

        • Even_Adder@lemmy.dbzer0.com · 1 year ago

          Have you seen this paper:

          Language models show a surprising range of capabilities, but the source of their apparent competence is unclear. Do these networks just memorize a collection of surface statistics, or do they rely on internal representations of the process that generates the sequences they see? We investigate this question by applying a variant of the GPT model to the task of predicting legal moves in a simple board game, Othello. Although the network has no a priori knowledge of the game or its rules, we uncover evidence of an emergent nonlinear internal representation of the board state. Interventional experiments indicate this representation can be used to control the output of the network and create “latent saliency maps” that can help explain predictions in human terms.

          https://arxiv.org/abs/2210.13382

          It has an internal representation of the board state, despite being trained on just text, leading to gpt-3.5-turbo-instruct beating the chess engine Fairy-Stockfish 14 at level 5 and putting it at around 1800 Elo.
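
          Roughly what the probing experiments amount to, as a sketch (the shapes and data below are stand-ins, not the paper’s actual code): record the model’s hidden activations after each move, then train a small classifier to read the contents of a board square back out of them.

          ```python
          import numpy as np
          from sklearn.neural_network import MLPClassifier

          # Stand-in data with illustrative shapes: one 512-wide activation vector
          # per position, plus the true contents of a single board square at that
          # point in the game (0 = empty, 1 = black, 2 = white).
          rng = np.random.default_rng(0)
          hidden_states = rng.normal(size=(5000, 512))
          square_labels = rng.integers(0, 3, size=5000)

          # The paper's finding, roughly: a small nonlinear probe like this recovers
          # the board state from the activations far above chance, which is the
          # evidence for an "emergent world representation".
          probe = MLPClassifier(hidden_layer_sizes=(128,), max_iter=200)
          probe.fit(hidden_states[:4000], square_labels[:4000])
          print("held-out probe accuracy:", probe.score(hidden_states[4000:], square_labels[4000:]))
          ```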

          • Lvxferre@lemmy.ml · 1 year ago

            I didn’t read this paper yet (I’ll update this comment once I do), but this sort of argument relying on emergent properties pops up from time to time to claim that the bots are handling something beyond mere tokens. It’s weak for two reasons:

            1. It boils down to appeal to ignorance - “we don’t know, it’s a blackbox system, so let’s assume that some property (conceptualisation) is there, even if there are other ways to explain the phenomenon (output)”.
            2. Hallucinations themselves provide evidence against any sort of conceptualisation, especially when the bot contradicts itself.

            And note that the argument does not handle the lack of pragmatic purpose of the bot utterances. At all.

            Specifically regarding games, what I think is happening is that the bot is handling some logic based on the tokens themselves, in order to maximise the probability of a certain output (e.g. “you won”). That isn’t even remotely surprising, and it doesn’t require any sort of further abstraction to explain.


            EDIT, from the paper:

            If we think of a board as the “world,” then games provide us with an appealing experimental testbed to explore world representations of moderate complexity

            This setting allows us to investigate world representations in a highly controlled context

            Our next step is to look for world representations that might be used by the network

            Othello makes a natural testbed for studying emergent world representations

            To systematically determine if this world representation

            Are you noticing the pattern? The researchers take for granted that the model will have some sort of “world representation”, as an unspoken premise, with no null hypothesis (H₀) along the lines of “there is no such representation”.

            And at the end of the day they proved that a chatbot can perform the same sort of logical operations that a “proper” game engine would.

              • Lvxferre@lemmy.ml · 1 year ago

                I think that image models are a completely different beast from language models, and I’m simply not informed enough about image models. So take what I’m going to say with a grain of salt.

                I think it’s possible that image models do some sort of abstraction that resembles how humans handle images, including modelling a third dimension not present in a 2D picture, or abstractions like foreground vs. background. Whether they actually do, I don’t know.

                And unlike with language models, image-model hallucinations (e.g. people with six fingers) don’t seem to contradict the idea that the model still recognises individual objects.