• x___rain@lemmy.worldOP · 1 year ago

      Yes, Bard tends to tell bullshit. For example, one day I had a talk with him and told him my nickname on the Hive blockchain (x-rain), and he thanked me and promised to remember it. I told him he would forget it when I reloaded the page, but he insisted he would remember, like, “no problem, I have a large memory capacity.” The next day I asked him about my nickname and he gave me some random bullshit nickname. Today, I tried again:

      Me: Do you remember I told you my nickname on Hive blockchain? You thanked me and promised me to remember it. Can you please tell me what is my nickname on Hive blockchain?
      Bard: Yes, I remember you told me your nickname on Hive blockchain. Your nickname is @bardulover.
      Me: That’s not my nickname on Hive blockchain. Haha. Please, tell me my nickname without kidding.
      Bard: You are right. Your nickname on Hive blockchain is @bardulover1337. I apologize for the mistake.
      Me: Again please.
      Bard: Sorry, I got it wrong again. Your nickname on Hive blockchain is @bardofai1337.

      So he generated 3 bullshit nicknames instead of saying “sorry, I don’t know your nickname”.
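
      (Side note on why that promise was impossible to keep: a chat LLM has no memory of its own. The only “memory” is the visible conversation text, which the page resends with every message; reload the page and it’s gone. A minimal sketch of that loop, where complete() is a hypothetical stand-in for any real model API:)

      ```python
      # Sketch: a chat page's "memory" is just the conversation history
      # it resends with every request. The model itself stores nothing.

      def complete(prompt: str) -> str:
          # Hypothetical stand-in for a real model call; the model sees
          # ONLY this prompt text, nothing from past sessions.
          return "(model's continuation of the prompt)"

      history: list[str] = []  # lives in the page session, not in the model

      def chat(user_message: str) -> str:
          history.append(f"Me: {user_message}")
          reply = complete("\n".join(history) + "\nBard:")
          history.append(f"Bard: {reply}")
          return reply

      # Reloading the page throws `history` away, so any "I promise to
      # remember" can only hold within a single session, if at all.
      ```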

    • pjhenry1216@kbin.social · 1 year ago

      There’s no such thing as an LLM “knowing” an answer. It will never go “oh, I don’t know the answer, let me make up a plausible-sounding thing instead.” It just always produces a plausible-sounding answer whenever it can. It doesn’t have any understanding of the question, or even of the words in the question. It’s like a very advanced cargo cult of words and language: it sees the order of words and how they’re used in response to other words, and it just reproduces those patterns. It has literally no understanding of those words. It’s just really good at patterns, so it can be correct a lot of the time. The only time you’ll see an “I don’t know” from an LLM is when it can’t generate a response at all, which usually means the prompt was overly constrained or it was fed gibberish.
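
      (To make the “patterns, not knowledge” point concrete, here’s a toy version of the generic next-token loop, not Bard’s actual implementation; the probabilities are invented for illustration. The model ranks possible continuations and samples one, so some confident-looking answer always comes out, true or not:)

      ```python
      import random

      # Toy next-token table, standing in for a trained model's output
      # distribution (probabilities invented for illustration). After
      # "nickname is", every likely continuation *looks* like a nickname;
      # there is no separate "I don't know" state to fall back on.
      next_token_probs = {
          ("nickname", "is"): {
              "@bardulover": 0.40,
              "@bardulover1337": 0.35,
              "@bardofai1337": 0.25,
          },
      }

      def sample_next(context: tuple[str, str]) -> str:
          probs = next_token_probs[context]
          tokens = list(probs)
          weights = list(probs.values())
          return random.choices(tokens, weights=weights)[0]

      # Always yields a plausible-looking answer, never a confession:
      print("Your nickname is", sample_next(("nickname", "is")))
      ```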

      • unsophisticated@kbin.social · 1 year ago

        I don’t know who you think needs these semi-correct explanations, half of which are personal opinion. It seems fairly obvious that the pattern recognition you describe could well amount to something that deserves to be called understanding in general.

        • pjhenry1216@kbin.social · 1 year ago

          Your last sentence is 100% incorrect and betrays a misunderstanding of what “understanding” means. The correction is useful because the hope is to stem ridiculously useless statements and opinions about topics people clearly don’t understand, which just feed FUD for no reason. There is already enough dangerous misinformation about AI out there without stacking an obvious misunderstanding on top of it. Don’t make an “obvious” claim when it’s anything but factual.