Interesting discussion on HN.

  • BravoVictor@programming.dev · 8 points · 1 year ago

    That was interesting. I'm very much in the 'meh' camp on the AI hype train, though LLMs are going to make absolutely incredible interfaces.

    I feel like the kicker will come when additional unique training data is needed and the readily available online content is 85% LLM-generated.

  • kraegar@programming.dev · 8 points · 1 year ago

    I think there is definitely some echo-chamber effect, since the average person isn't generally aware of AI. At the same time, mainstream media has been picking up on the hype a lot recently.

    When people hear that my grad school studies involve AI/ML, I instantly get bombarded with questions about ChatGPT.

    • DoubleEndedIterator@programming.dev · 11 points · 1 year ago (edited)

      I was talking to a friend of my mother's, and she was genuinely shocked when I told her that ChatGPT sometimes just makes things up. I was a little taken aback by how deeply she thanked me for telling her that she needs to double-check anything ChatGPT suggests. She had been completely trusting the AI output.

      A friend of mine sent me some code and asked me if I could help him get it to work. It turns out he doesn't know any programming and just had ChatGPT generate it for him. He wanted it to do something for which there was no Python library available, so ChatGPT just hallucinated the API. It took me a while to explain to him that what he was trying to do was just not going to work, but I was able to point him to an existing software solution.
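
      For anyone who hasn't seen this failure mode up close, it tends to look something like the sketch below. The package and function names are made up purely for illustration (not the actual code he sent me); the point is that everything looks plausible right up until you try to install it.

      ```python
      # The style of code ChatGPT produced: clean and confident looking,
      # but built on a package that simply does not exist.
      import imaginary_pdf_tools  # made-up name for illustration; nothing to pip install

      def convert(path: str) -> str:
          # The call looks plausible, but no library ever shipped this API.
          deck = imaginary_pdf_tools.load(path)
          return deck.export(format="pptx")

      convert("report.pdf")
      # Fails on the first line, before any logic runs:
      # ModuleNotFoundError: No module named 'imaginary_pdf_tools'
      ```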

      So yeah, I think the average person is not aware of it, and many of the people who are aware of it overestimate the technology.

      • kraegar@programming.dev · 4 points · 1 year ago

        This has been my experience too. A junior dev at my last company kept trying to use ChatGPT to generate docker-compose files and wondered why they generally didn't work.

        My research has been on time series forecasting, which is tangentially related to NLP. People are shocked when I point out to them that all these models do is predict the next token. Weather forecasting has been a good analogy for why long AI-generated texts are extra bad: forecasts get worse as the horizon increases.
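
        A deliberately crude back-of-the-envelope version of that analogy: if each token has some fixed chance of being "good" and errors compound, the chance that a long continuation stays on track decays exponentially with its length. The 98% figure below is assumed purely to show the shape of the curve, not measured from any model.

        ```python
        # Crude model of compounding error in next-token prediction:
        # if each token is "good" with probability p and mistakes compound,
        # an n-token continuation stays on track with probability p ** n.
        p = 0.98  # assumed per-token accuracy, purely illustrative

        for n in (10, 50, 200, 1000):
            print(f"{n:>4} tokens: {p ** n:6.1%} chance of staying on track")

        # Output:
        #   10 tokens:  81.7% chance of staying on track
        #   50 tokens:  36.4% chance of staying on track
        #  200 tokens:   1.8% chance of staying on track
        # 1000 tokens:   0.0% chance of staying on track
        ```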

        Despite all my gripes about LLMs, I must say that Copilot has saved me from writing TONS of boilerplate code and unit tests.
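
        To be concrete about what I mean by boilerplate: repetitive scaffolding like the tests below, where once the first case exists the rest almost writes itself. The helper and test cases here are generic examples I made up, not code from my actual projects.

        ```python
        import unittest

        def slugify(title: str) -> str:
            # Tiny made-up helper under test.
            return title.strip().lower().replace(" ", "-")

        class TestSlugify(unittest.TestCase):
            # Once the first test exists, completion tools are very good at
            # suggesting the remaining near-identical cases.
            def test_lowercases(self):
                self.assertEqual(slugify("Hello"), "hello")

            def test_replaces_spaces(self):
                self.assertEqual(slugify("hello world"), "hello-world")

            def test_strips_surrounding_whitespace(self):
                self.assertEqual(slugify("  hello  "), "hello")

        if __name__ == "__main__":
            unittest.main()
        ```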