Sure, that’ll help… While currently these companies are in the lead, even Google internally admits they’ve lost to open source.
Stable Diffusion is not as plug and play as Midjourney, DALL-E, etc., but damn is it much more powerful. And no one is gonna watermark open source (because someone is gonna release a fork without the watermark like three and a half seconds after the commit adding the watermark is pushed).
It won’t be long before AI content is indistinguishable from human-made content. So how they intend to flag one from the other, I do not know. What about human-made content where AI is used as a tool?
How?
Very simple. They will use water from their cloud vaporware.
Condensationware!
Presumably you watermark all the training data.
At least, that’s my first instinct.
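To make the "watermark the training data" idea concrete, here is a toy sketch (not any vendor's actual scheme; all names are hypothetical) of the simplest kind of invisible mark: hiding a known tag in the least significant bits of an image's pixel bytes. It also shows why a fork can strip it trivially.

```python
# Toy LSB watermark sketch. Hypothetical tag and functions, purely
# illustrative; real schemes are far more robust than this.

TAG = b"WM1"  # hypothetical watermark tag

def embed(pixels: bytes, tag: bytes = TAG) -> bytes:
    """Overwrite the LSB of the first len(tag)*8 pixel bytes with the tag bits."""
    bits = [(byte >> i) & 1 for byte in tag for i in range(8)]  # LSB-first
    out = bytearray(pixels)
    for idx, bit in enumerate(bits):
        out[idx] = (out[idx] & 0xFE) | bit  # replace the least significant bit
    return bytes(out)

def extract(pixels: bytes, length: int = len(TAG)) -> bytes:
    """Read the LSBs back out and reassemble the tag bytes."""
    bits = [pixels[i] & 1 for i in range(length * 8)]
    out = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for j in range(8):
            byte |= bits[i + j] << j
        out.append(byte)
    return bytes(out)

marked = embed(bytes(range(64)))
assert extract(marked) == TAG  # the tag survives in the marked copy
```

Note that a fork only has to re-randomize (or zero) those LSBs and the mark is gone, with no visible change to the image, which is the point being made above about watermarking open source.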
Sure, that’ll work. Not like a nation state with a large budget would be able to replicate these very well-understood technologies and release falsified NON-watermarked media that now has a ring of “truth” because it doesn’t have Le Goog’s magic sigil in it…
Putting on my business hat, I’m guessing that this is more of a “Looking like a hostile non-state actor does not fit into our business model” thing than it is a “This is totally a long term and impregnable solution and all of our problems are solved forever and we got a cake and a pony” thing.
I mean, that's a great thing, but open source research kinda defeats the purpose; people are gonna use stuff for evil.