One of Spez’s answers in the infamous Reddit AMA struck me:
Two things happened at the same time: the LLM explosion put all Reddit data use at the forefront, and our continuing efforts to rein in costs…
I am beginning to think all they wanted to do was get their share of the AI pie, since we know Reddit’s data is one of the major datasets for training conversational models. But they are such a bunch of bumbling fools, as well as being chronically understaffed, that the whole thing blew up in their faces. At this stage their only chance of survival may well be to be bought out by OpenAI…
The value of LLMs has shifted drastically in favor of open source since the Meta weights leak. The proprietary model looks pretty much wrecked now, at least as far as I understand the leaked internal memo from a Google researcher last month.
https://www.semianalysis.com/p/google-we-have-no-moat-and-neither
Oh, I’m not saying they are doing the right thing or that it was the correct decision. I'm just speculating whether LLMs are what kicked off the whole thing.
I’m saying the premise that LLMs have anything to do with it is either an incompetent failure to keep up with LLM developments, or a pack of lies.
I disagree; it’s still too early, and a bit presumptuous, to make such conclusive statements.
This is a fascinating read, thank you very much for sharing.