• slazer2au@lemmy.world · 1 year ago

    Is it really AI? LLMs are not really creating something new; they take their training data, throw some probability at it, and return what is already in that training data.
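
    For the curious, here’s that idea as a toy sketch: a bigram model that can only ever re-emit its own training data, with next-token probabilities set by counting. Purely illustrative (real LLMs are transformers with billions of parameters), but the next-token-probability principle is the same.

        # Toy "language model": sample the next word in proportion to how
        # often it followed the current word in the training data.
        import random
        from collections import Counter, defaultdict

        training_data = "the cat sat on the mat the cat ate".split()

        follows = defaultdict(Counter)  # "training": count bigrams
        for prev, nxt in zip(training_data, training_data[1:]):
            follows[prev][nxt] += 1

        def next_word(word):
            counts = follows[word]
            if not counts:  # the word never had a successor in training
                return None
            return random.choices(list(counts), weights=list(counts.values()))[0]

        word, output = "the", ["the"]
        for _ in range(5):
            word = next_word(word)
            if word is None:
                break
            output.append(word)
        print(" ".join(output))  # every token comes straight from the training data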

    • Eggyhead@lemmy.world · 1 year ago

      I’m glad you brought this up. I imagine true AI would be able to take information and actually derive original ideas from it. LLMs simply restate what is already known using natural language. A human is still necessary if any new discoveries are expected to be made with that information.

      To answer OP’s question, the only thing that concerns me about “the rise of AI” is that people are convincing themselves it’s actual AI and then assuming it can make serious decisions on their behalf.

        • Lvxferre@lemmy.ml · 1 year ago

        Is that not what we do?

        No. For two reasons:

        1. We conceptualise the things that we talk about; LLMs only handle the tokens themselves.
        2. Our utterances have a pragmatic purpose; LLM output doesn’t.

        And some might say “hey, look at this output, it resembles human output!”, but the difference gets especially obvious when either side gets things wrong (human brainfarts still show some sort of purpose and conceptualisation; LLM hallucinations are often plain gibberish).

          • bionicjoey@lemmy.ca · 1 year ago

          For anyone doubting this, I encourage you to have GPT generate some riddles for you. It’s remarkable how quickly the illusion breaks, because it doesn’t understand enough about the concepts underpinning a word to create a good riddle.

          • Even_Adder@lemmy.dbzer0.com · 1 year ago

          Have you seen this paper:

          Language models show a surprising range of capabilities, but the source of their apparent competence is unclear. Do these networks just memorize a collection of surface statistics, or do they rely on internal representations of the process that generates the sequences they see? We investigate this question by applying a variant of the GPT model to the task of predicting legal moves in a simple board game, Othello. Although the network has no a priori knowledge of the game or its rules, we uncover evidence of an emergent nonlinear internal representation of the board state. Interventional experiments indicate this representation can be used to control the output of the network and create “latent saliency maps” that can help explain predictions in human terms.

          https://arxiv.org/abs/2210.13382

          It has an internal representation of the board state, despite training on just text. Along the same lines, gpt-3.5-turbo-instruct beats the chess engine Fairy-Stockfish 14 at level 5, putting it at around 1800 Elo.
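
          In case it helps, the paper’s evidence comes from probing: training a small classifier to predict each board square’s state from the GPT’s hidden activations. A rough sketch of the idea with synthetic stand-in data (the real version extracts activations from a GPT trained on Othello move sequences):

              # Probing sketch: can a board square's state be read off a model's
              # hidden activations? Stand-in data here; the paper probes a GPT
              # trained on Othello move sequences.
              import numpy as np
              from sklearn.model_selection import train_test_split
              from sklearn.neural_network import MLPClassifier

              rng = np.random.default_rng(0)
              n_positions, hidden_dim = 2000, 512

              activations = rng.normal(size=(n_positions, hidden_dim))  # stand-in hidden states
              square_state = rng.integers(0, 3, size=n_positions)       # 0=empty, 1=black, 2=white

              X_tr, X_te, y_tr, y_te = train_test_split(
                  activations, square_state, random_state=0)

              # The paper found *nonlinear* probes recover the board, hence an MLP.
              probe = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
              probe.fit(X_tr, y_tr)

              # With real activations, accuracy far above the ~33% chance level is
              # the evidence for an internal board representation; on noise, ~0.33.
              print(f"probe accuracy: {probe.score(X_te, y_te):.2f}")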

            • Lvxferre@lemmy.ml · 1 year ago

            I didn’t read this paper (I’ll update this comment once I read it), but this sort of argument relying on emergent properties pops up from time to time to claim that the bots are handling something beyond mere tokens. It’s weak for two reasons:

            1. It boils down to an appeal to ignorance: “we don’t know, it’s a black-box system, so let’s assume that some property (conceptualisation) is there, even if there are other ways to explain the phenomenon (output)”.
            2. Hallucinations themselves provide evidence against any sort of conceptualisation, especially when the bot contradicts itself.

            And note that the argument does not handle the lack of pragmatic purpose of the bot utterances. At all.

            Specifically regarding games, what I think is happening is that the bot is handling some logic based on the tokens themselves, in order to maximise the probability of a certain output (e.g. “you won”). That isn’t even remotely surprising, and it doesn’t require any further abstraction to explain.


            EDIT, from the paper:

            If we think of a board as the “world,” then games provide us with an appealing experimental testbed to explore world representations of moderate complexity

            This setting allows us to investigate world representations in a highly controlled context

            Our next step is to look for world representations that might be used by the network

            Othello makes a natural testbed for studying emergent world representations

            To systematically determine if this world representation

            Are you noticing the pattern? The researchers take it for granted, as an unspoken premise, that the model will have some sort of “world representation”, with no null hypothesis (H₀) like “there is no such thing”.

            And at the end of the day they proved that a chatbot can perform the same sort of logical operations that a “proper” game engine would.
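
            (To make that H₀ concrete: one standard control in the probing literature is to run the exact same probe on a randomly initialised, untrained copy of the network; if both probes score similarly, the “world representation” claim loses its support. A sketch of what that comparison could look like, again with synthetic stand-in data:)

                # Null-hypothesis control: probe an untrained network the same way
                # as the trained one and compare.
                import numpy as np
                from sklearn.linear_model import LogisticRegression

                rng = np.random.default_rng(1)
                n, d = 2000, 512
                labels = rng.integers(0, 3, size=n)  # stand-in board-square states

                def probe_accuracy(acts):
                    # Fit a probe on the first half, score on the second half.
                    clf = LogisticRegression(max_iter=1000)
                    clf.fit(acts[: n // 2], labels[: n // 2])
                    return clf.score(acts[n // 2 :], labels[n // 2 :])

                trained_acts = rng.normal(size=(n, d))    # stand-in: trained model
                untrained_acts = rng.normal(size=(n, d))  # stand-in: random init (H0)

                print("trained:  ", probe_accuracy(trained_acts))
                print("untrained:", probe_accuracy(untrained_acts))  # H0: no real gap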

              • Lvxferre@lemmy.ml · 1 year ago

                I think that image models are a completely different beast from language models, and I’m simply not informed enough about image models. So take what I’m going to say with a grain of salt.

                I think it’s possible that image models do some sort of abstraction that resembles how humans handle images, including modelling a third dimension not present in a 2D picture, or abstractions like foreground vs. background. Whether they do or not, I don’t know.

                And unlike for language models, the image model hallucinations (e.g. people with six fingers) don’t seem to contradict the idea that the model still recognises individual objects.

  • TheAlbatross@lemmy.blahaj.zone · 1 year ago

    Bad.

    There’s no way this results in better lives and outcomes for anyone but the wealthy. In the perpetual race to the bottom with margins and budgets, quality human work will be replaced with cookie-cutter AI garbage and all media will suffer. Ads are only going to get more annoying and lifeless, and corporate copy of any kind will become even more wordy bullshit.

    I hear people talk about how it will free up people’s time for more important things. Those people are fucking morons who don’t understand it’ll be used to simply pay people less and make the world a blander place to live.

    • Mr_Magpie@lemmy.world · 1 year ago

      Copywriter here. I couldn’t agree more. The problem is many corporate management types will view it this way because it requires too much thinking to understand and they don’t do that sort of thing.

      Bless them, the majority I have worked with have only just about got their heads around SEO and Google ranking, so with AI, a lot of them are already asking me what the difference is.

      The difference is the objective and the input. An AI can’t yet truly understand the idea of meeting objectives in a creative way. Likewise, its input doesn’t come with the decades of cultural understanding that a human has evolved to pick up.

      The problem, and the same could be said for most things, is rich people with poor brains.

      Still, no doubt many are already losing their jobs to it, only for management to realise they fucked up big time when ChatGPT goes down just as they need priority copy at short notice.

    • jcarax@beehaw.org · 1 year ago

      I agree completely. Technology has been making us more efficient for all of human history, and at an absolutely absurd pace since the transistor. Yet we don’t see the benefit; we see more work for less, and a degrading quality of life in favor of increasingly empty convenience.

  • Izzy@lemmy.ml · 1 year ago

    I’m not convinced it is as intelligent as people are making it out to be. What most people in the media are referring to as AI is actually complex language models. This technology seems incredible to me, but I am wary of using it in anything of critical importance, at least not without it being thoroughly reviewed by a human. For example, I would never get into a car that is being driven autonomously by an AI.

    Also this is just a random personal opinion I have, but I wish people would stop referring to AI unless they are referring to AGI. We should go back to calling it machine learning or more specifically large language models.

  • PeepinGoodArgs@reddthat.com · 1 year ago

    As a student, I love it. It’s saving me a ton of time.

    But we’re also at the beginning of the age of AI as a business. It might get better for a bit, even for a while. But, inevitably, once managers see that consumers are addicted to it or it’s in some way integral to their lives, they’ll enshittify it.

    • Izzy@lemmy.ml · 1 year ago

      Isn’t the main goal of being a student to learn things? Personally I find this more important than any kind of certification that a school could give you.

      • PeepinGoodArgs@reddthat.com · 1 year ago

        Isn’t the main goal of being a student to learn things?

        If I wasn’t doing this degree for my job, then yeah. And I still learn things. But, honestly, fuck everything to do with business. The faster I can be done with learning shit about it, the better.

    • DogMuffins@discuss.tchncs.de · 1 year ago

      You’re right, but the manner of enshittification is unclear. I think the LLMs we’re using now are a very, very early iteration.

  • Contramuffin@lemmy.world · 1 year ago

    Slightly overblown. Don’t get me wrong, it’s a powerful tool. But it’s just a tool; it’s not some sort of sentient being. In my field of work (research), we found out pretty quickly that ChatGPT was virtually worthless, since the stuff that we were doing was so new and novel that ChatGPT had no training data on it yet. But you could use it as a glorified Google and ask it questions if there was some part of a protocol that you didn’t understand. And honestly that pretty much encapsulates my stance on the matter: good at rehashing and summarizing old information, but terrible at generating or organizing new information.

    Honestly, what I’m worried about is that the hype around AI is causing too many people to over-rely on it without realizing its limitations until it’s too late. A good example is the case that was in the news a month or two ago about the lawyer who got in trouble because he used ChatGPT to write his case for him, and it made up court cases for citations. Suppose some company puts an LLM in charge as its CEO (and I feel fairly confident some techbro is doing this somewhere in the world). The company may be able to coast during fair weather and times of good business, but I am concerned that things will crash and burn when they aren’t going well. And I think that’s really the crux of it: AI is good enough that it looks competent when it’s not being stretched to its limits, and that’s leading too many people to think that AI is competent, always.

  • Omega_Haxors@lemmy.ml · 1 year ago

    It’s not really rapid advancement; a lot of it is smoke and mirrors. A lot of execs are about to learn that the hard way after they fire their entire workforce thinking it’s no longer needed. Corporations are marketing language models (like ChatGPT, which is glorified text suggestion on full autopilot) as being way bigger than they actually are. I thought it would have been obvious after how they hyped up NFTs and Web3.

    Now there IS potential for even a language model to become bigger than the sum of its parts, but once capitalists started feeding its garbage outputs straight into their ‘make money’ machine, the reference material for these predictors became garbage as well, and any hope of that becoming a reality was dashed. In a socialist future, a successor to ChatGPT would eventually have achieved sapience (no joke: the underlying technology is loosely modelled on the brain), but because we live in a wasteland of a system, any future attempts are going to produce increasingly useless outputs as garbage data continues to accumulate uncontrollably.

  • TotallyHuman@lemmy.ca · 1 year ago

    I don’t know where it’s going. We’re in the middle of a hype cycle. It could be anywhere from “mildly useful tool that reduces busywork and revolutionizes the clickbait industry” to “paradigm shift comparable to the printing press, radio, or Internet”. Either way, I predict that the hype will wear off, and some time later the effects will be felt – but I could be wrong.

  • kinther@lemmy.world · 1 year ago

    It helped me pick out a Christmas gift for my wife. She said it was the most thoughtful gift she had ever gotten.

  • vrighter@discuss.tchncs.de · 1 year ago

    What advancements? All LLMs use pretty much the same architecture, and better models aren’t better because they have better tech; they’re just bigger (and slower, with much higher energy consumption).
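
    To put rough numbers on “just bigger”: for a standard decoder-only transformer, parameter count scales roughly as 12 × layers × d_model², so successive models mostly just turn those two dials. A back-of-the-envelope sketch (the layer/width figures are published; everything else is approximate):

        # Rough transformer sizing: each block holds ~12*d^2 weights
        # (attention ~4*d^2, MLP ~8*d^2), so total ~ 12 * layers * d^2,
        # ignoring embeddings. Same architecture; only two dials change.
        def approx_params(n_layers: int, d_model: int) -> float:
            return 12 * n_layers * d_model ** 2

        for name, layers, d in [
            ("GPT-2 small", 12, 768),
            ("GPT-2 XL", 48, 1600),
            ("GPT-3", 96, 12288),
        ]:
            print(f"{name:12s} ~{approx_params(layers, d) / 1e9:.1f}B params")
        # -> ~0.1B, ~1.5B, ~173.9B, close to the published 124M / 1.5B / 175B
        #    once embeddings are included.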

      • vrighter@discuss.tchncs.de · 1 year ago

        Proving my point. The training set can be improved (until it’s irreversibly tainted with LLM-generated data); the tech is not improving. Even with a huge dataset, LLMs will still have today’s limitations.

  • electrogamerman@lemmy.world · 1 year ago

    Like all technologies, in good hands it can do so much good. In the wrong hands, it can do so much damage.

    And we all know which is going to happen.

  • Even_Adder@lemmy.dbzer0.com · 1 year ago

    As far as generative art goes, I think we’re seeing the birth of a new medium for expression that can and should be explored by anyone, regardless of any experience or skill level.

    Generative art allows more people to communicate with others in ways they couldn’t before. People want to broadly treat this stuff as just pressing a button and getting a random result, rather than focusing on the creativity, curiosity, experimentation, and refinement that goes into getting good results. It also requires learning skills they may not have had before, in order to express themselves effectively with new tools that are rapidly evolving and improving.

    We can’t put a lid on this, but what we can do is keep making open-source models that are effective and affordable for the public. Mega-corps will have their own models, no matter the cost. They already have their own datasets, and they have the money to buy whatever other ones they want. They can also make users sign predatory ToS granting them exclusive access to user data, effectively selling our own data back to us.

    Remember: It costs nothing to encourage an artist, and the potential benefits are staggering. A pat on the back to an artist now could one day result in your favorite film, or the cartoon you love to get stoned watching, or the song that saves your life. Discourage an artist, you get absolutely nothing in return, ever.

    ― Kevin Smith, Tough Shit: Life Advice from a Fat, Lazy Slob Who Did Good

    I believe that generative art, warts and all, is a vital new form of art that is shaking things up, challenging preconceptions, and getting people angry - just like art should. And if you see someone post some malformed monstrosity somewhere, cut them some slack, they’re just learning.

    For further reading, I recommend this article by Kit Walsh, a senior staff attorney at the EFF, if you haven’t read it already. The EFF is a digital rights group that most recently won a historic case: border guards now need a warrant to search your phone.

    You should also read this open letter by artists that have been using generative AI for years, some for decades.

  • KeenFlame@feddit.nu · 1 year ago

    Foremost, the interesting part is how many laymen have uninformed yet intense opinions on what it is and can do. As someone who has followed and worked with (and created) machine learning algorithms for twenty years, I can honestly say that I have no possible way of keeping up with all the advancements, or of fully understanding how it all works. I also cannot stress enough how exciting and strange it is to have created technology whose behaviour we cannot fully understand and predict, unlike previous technology. Nobody can say for sure whether these models create an internal model of the world itself or not, and weirdly, everything points to them doing so. This is something I wish many of you with strong feelings about them would understand. Even the top researchers who work with them daily do not know what you claim to know. Please don’t spread false information just because you feel you know IT or programming; this isn’t the same.

    And secondly, it is not AI. I think it was a big mistake to start calling these models that, because it has generated mass misunderstanding and misinformation surrounding them.

  • WeLoveCastingSpellz@lemmy.fmhy.net · 1 year ago

    I will plug the essay that I have written for school: Technology is advancing rapidly, and while this creates lots of new possibilities for humanity, such as automating jobs that require heavy labor and hopefully making life easier for people, our society can’t always keep up with all these technological advancements. The reasons for workers’ increasing skepticism towards these technologies are also important to take a look at.

    First of all, the benefits of new technologies that are coming out, such as artificial intelligence and machine learning, are hard to ignore. Even a mundane and boring task such as replying to work emails can be automated. This helps people free up their schedules and lets them spend more time with their friends and family.

    Another pretty significant benefit of artificial intelligence is the automation of hard labor such as construction work. Thanks to AI, both simple and tedious tasks that people don’t want to complete can be automated; therefore, the improvement of these technologies is very important for our society.

    Even though the benefits of ever-improving technology and automation are significant, there is another, more sinister side to them. Automation is supposed to take over difficult tasks and help free up time for workers, but instead it hurts workers by encouraging employers to replace them with artificial intelligence. Since AI is cheaper than real workers, people are losing their jobs or working below a living wage to be able to compete with AI.

    Moreover, the huge corporations developing these automation technologies aren’t trustworthy. The most popular and mature artificial intelligence models are profit-driven, which means their makers don’t care about customers except for the money in their pockets and the data they can provide to develop the AI further.

    All in all, the new technologies that provide automation can on paper be incredibly advantageous for people, but under a profit-driven capitalist society, the benefits they bring go mostly to corporations, not to regular people.