For regulators trying to put guardrails on AI, it’s mostly about the arithmetic. Specifically, an AI model trained using 10 to the 26th power floating-point operations must now be reported to the U.S. government and could soon trigger even stricter requirements in California.
Say what? Well, if you’re counting the zeroes, that’s 100,000,000,000,000,000,000,000,000, or 100 septillion, calculations over the whole training run, a total measured in floating-point operations, or flops.
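For anyone who’d rather not count zeroes by hand, here’s a quick, purely illustrative Python sketch that checks the arithmetic:

```python
# The regulatory threshold: 10^26 floating-point operations of training compute.
threshold = 10**26

print(f"{threshold:,}")           # 100,000,000,000,000,000,000,000,000
print(len(str(threshold)) - 1)    # 26 -- zeros after the leading 1
# A septillion is 10^24, so the threshold is indeed 100 septillion.
print(threshold == 100 * 10**24)  # True
```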
What it signals to some lawmakers and AI safety advocates is a level of computing power that might enable rapidly advancing AI technology to create or proliferate weapons of mass destruction, or conduct catastrophic cyberattacks.
It doesn’t have to be “powerful” to be dangerous. People just have to believe that it is, or believe whatever it craps out without fact-checking it.
In one study, participants rated the same text as less credible when presented as a Wikipedia entry than when presented as output from ChatGPT or Alexa.
Then there’s also the fact that it’s driving up demand for energy and keeping dirty power plants online.