• lagomorphlecture@lemm.ee
    1 year ago

    Without bothering to read the article or investigate, I’m going to say: absolutely nothing. Without legislation and steep fines, they will do what they want. But if they’re under scrutiny they might try to be less blatant and hide it better, so there’s that, I guess.

    • soulifix@lemmy.world
      1 year ago

      We need a more up-to-date government that actually understands technology, the history of these companies, and their practices. Until we get that government, this is all just fluff. Unfortunately, in the back of my mind I suspect it’ll take decades before we get one; for now, these tech companies have been running wild, knowing that our current government has absolutely zero understanding of technology.

    • eguidarelli@lemmy.world
      1 year ago

      The four tech giants, along with ChatGPT-maker OpenAI and startups Anthropic and Inflection, have committed to security testing “carried out in part by independent experts” to guard against major risks, such as to biosecurity and cybersecurity, the White House said in a statement.

      That testing will also examine the potential for societal harms, such as bias and discrimination, and more theoretical dangers about advanced AI systems that could gain control of physical systems or “self-replicate” by making copies of themselves.

      The companies have also committed to methods for reporting vulnerabilities in their systems and to using digital watermarking to help distinguish real images and audio from AI-generated ones, known as deepfakes.

      These commitments are faster to secure, while slower steps, like creating regulations through legislation, can come after.