Given how Reddit now makes money by selling its data to AI companies, I was wondering what the situation is for the fediverse. Typically you can block AI crawlers using robots.txt (The Verge reported on this recently: https://www.theverge.com/24067997/robots-txt-ai-text-file-web-crawlers-spiders). But this only works per domain/server, and the fediverse is about many different servers interacting with each other.
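
For example, OpenAI documents GPTBot as the user agent for its training crawler, so a minimal sketch of a robots.txt entry asking it to stay away from a server would look like this (purely advisory, since crawlers comply voluntarily, and other AI crawlers use different user-agent strings that would each need their own entry):

    User-agent: GPTBot
    Disallow: /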

So if my kbin/lemmy or Mastodon server blocks OpenAI’s crawler via robots.txt, what does that even mean when people on other servers that don’t block this crawler boost me on Mastodon, or when I reply to their posts? I suspect that unless all the servers I interact with block the same AI crawlers, I can’t prevent my posts from being used as AI training data.

  • cecep@fedia.io (OP) · 9 months ago

    I don’t expect anything; I was merely asking a question to clarify this.

    • FaceDeer@kbin.social · 9 months ago

      Well, I hope my answer clarifies it. You can’t prevent LLMs from being trained on your public posts.