Slightly overblown. Don’t get me wrong, it’s a powerful tool. But it’s just a tool, not some sort of sentient being. In my field of work (research), we found out pretty quickly that ChatGPT was virtually worthless, since the stuff we were doing was so new and novel that ChatGPT had no training data on it yet. But you could use it as a glorified Google and ask it questions if there was some part of a protocol you didn’t understand. And honestly, that pretty much encapsulates my stance on the matter: good at rehashing and summarizing old information, but terrible at generating or organizing new information.
Honestly, what I’m worried about is that the hype around AI is causing too many people to over-rely on it without realizing its limitations until it’s too late. A good example is the case that was in the news a month or two ago about the lawyer who got in trouble because he used ChatGPT to write his brief and it made up court cases for citations. Or suppose some company installs an LLM as its CEO (which I feel fairly confident some techbro is doing somewhere in the world). The company might be able to coast during fair weather and times of good business, but I’d expect things to crash and burn when things aren’t going well. And I think that’s really the crux of it: AI is good enough that it looks competent when it isn’t being stretched to its limits, and that’s leading too many people to think AI is competent, always.