ChatGPT is leaking private conversations that include login credentials and other personal details of unrelated users, screenshots submitted by an Ars reader on Monday indicated.
Two of the seven screenshots the reader submitted stood out in particular. Both contained multiple pairs of usernames and passwords that appeared to be connected to a support system used by employees of a pharmacy prescription drug portal. An employee using the AI chatbot seemed to be troubleshooting problems they encountered while using the portal.
That’s on the users. It straight up tells you not to give it sensitive information.
Every website/IT department/whatever has said from the beginning not to give out your login credentials to anyone.
I’m sorry, but if you’re stupid enough to give ChatGPT your passwords, you deserve every bad thing that happens because of that.
This is not a ChatGPT problem, it’s a PEBKAC one.
It is a user problem and an OpenAI problem. Some data shouldn’t be getting shoved into ChatGPT, without a doubt.
ChatGPT is pulling from history data that should be isolated to each user. That’s starting to hint at some exceedingly bad design around their AI.
Any time that ChatGPT is “broken” with creative prompts, a new filter is put in front of, or after, the AI model. (The model itself doesn’t change as it would be too expensive to re-train.) The bot then refuses specific input or clips potentially bad output. Life goes on.
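For anyone curious, here’s a minimal sketch of that filter-in-front, filter-behind pattern. Everything in it is hypothetical: BLOCKED_PATTERNS, CREDENTIAL_RE, model_api, and guarded_chat are stand-in names for illustration, not OpenAI’s actual implementation.

```python
import re

# Hypothetical input filter: phrasings the bot should refuse outright.
BLOCKED_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
]

# Hypothetical output filter: clip anything that looks like a credential.
CREDENTIAL_RE = re.compile(r"(password|passwd|pwd)\s*[:=]\s*\S+", re.IGNORECASE)

def model_api(prompt: str) -> str:
    """Stand-in for the frozen model; retraining it is the expensive part."""
    return "model output for: " + prompt

def guarded_chat(prompt: str) -> str:
    # Filter in front of the model: refuse known-bad input.
    if any(p.search(prompt) for p in BLOCKED_PATTERNS):
        return "I can't help with that."
    reply = model_api(prompt)
    # Filter after the model: redact potentially bad output.
    return CREDENTIAL_RE.sub("[redacted]", reply)
```

The point of the pattern is exactly what the comment says: the model itself never changes, only the guardrails bolted on around it.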
Any data repositories that are used for chat should be physically separated from user history, and they aren’t. That implies a ton of different things, but it would all be speculation.
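For what it’s worth, here’s a minimal sketch of what per-user isolation could look like, under assumed names (ChatStore is made up for illustration): history keyed by user ID, with every read re-checking ownership so one account’s session can’t be served another account’s conversations.

```python
from collections import defaultdict

class ChatStore:
    """Hypothetical store: history keyed per user, reads fail closed."""

    def __init__(self) -> None:
        self._by_user: dict[str, list[str]] = defaultdict(list)

    def append(self, user_id: str, message: str) -> None:
        self._by_user[user_id].append(message)

    def history(self, requesting_user: str, owner: str) -> list[str]:
        # The isolation guarantee: you can only read your own history.
        if requesting_user != owner:
            raise PermissionError("chat history is isolated per user")
        return list(self._by_user[owner])
```

If the screenshots are what they appear to be, a check like that either doesn’t exist or got bypassed somewhere.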
I really think there’s a great deal more fuckery going on than what OpenAI is showing the public. Regardless of the technology, there’s always a ton of fakery going on at any company.
That will become a problem in the future. People will start putting highly sensitive and confidential information into ChatGPT and the like. And of course that data will get used. Industrial espionage might get as easy as asking a common LLM for help with a specific problem.
That’d be amazing if it could take all the data that’s fed to it and readily produce solutions like that.
What a time to be alive.
I’ve been using ChatGPT as a poor man’s psychological analyst.
Does this mean my conversations about my deepest fears are not safe??
People are using it as a partner; that’s already been found to be true. Probably teenagers, which is kind of worse.
Like sexual partner? Tell me more.
There are a lot of lonely people in this world; there was some mention of it in an article a few weeks back.
Because no one reads the article:
“From what we discovered, we consider it an account take over in that it’s consistent with activity we see where someone is contributing to a ‘pool’ of identities that an external community or proxy server uses to distribute free access,” the representative wrote. “The investigation observed that conversations were created recently from Sri Lanka. These conversations are in the same time frame as successful logins from Sri Lanka.”
Compromised account being used as a free access endpoint for GPT.
Doesn’t this mean that overwhelming it with non-factual information would skew the results of ChatGPT?
No. As noted above, the model itself doesn’t change between training runs, so garbage typed into chats doesn’t feed back into everyone else’s results.