• kirklennon@kbin.social · 11 months ago

    Do you happen to know a good source for information on this?

    Apple released detailed whitepapers and technical information when the feature was originally proposed, but since they shelved it, I don’t think those documents are still readily available.

    One in a trillion sounds like a probability of a hash collision.

    Basically yes, but they deliberately assume a much higher likelihood of any single hash collision. The system would upload a receipt of the on-device scan along with each photo, and a threshold number of matches would be set to achieve the one-in-a-trillion confidence level. I believe the initial estimate was roughly 30 images. In other words, you’d need to be uploading literally dozens of CSAM images for your account to get flagged. And these accompanying receipts use advanced cryptography, so it’s not like they’re seeing “oh, this account has 5 potential matches and that one has 10”; anything below the threshold would produce zero flags. Only when enough “bad” receipts showed up for the same account would they collectively flag it.
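
    To get a feel for why a threshold pushes the account-level error rate down so far, here’s a back-of-the-envelope sketch. The per-image false-match rate and library size below are made-up numbers, and the independent-matches binomial model is a simplification rather than Apple’s actual analysis; only the ~30-image threshold comes from the estimate above.

    ```python
    from math import exp, lgamma, log, log1p

    def log_binom_pmf(n: int, k: int, p: float) -> float:
        """log of the Binomial(n, p) probability of exactly k matches."""
        return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
                + k * log(p) + (n - k) * log1p(-p))

    def flag_probability(n_photos: int, per_image_fp: float, threshold: int) -> float:
        """P(an innocent library of n_photos produces >= threshold false matches),
        assuming each photo matches independently. The terms decay very fast,
        so summing a few dozen past the threshold is plenty."""
        upper = min(n_photos, threshold + 64)
        return sum(exp(log_binom_pmf(n_photos, k, per_image_fp))
                   for k in range(threshold, upper + 1))

    # Made-up illustrative inputs: 50,000 photos, a 1-in-a-million per-image
    # false-match rate, and the ~30-image threshold mentioned above.
    print(flag_probability(50_000, 1e-6, 30))   # on the order of 1e-72
    ```

    The point is just that even a fairly pessimistic per-image collision rate gets crushed once dozens of independent matches are required before anything is flagged.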

    And I was under the impression that iPhones connected to iCloud sync the pictures by default?

    This is for people who use iCloud Photo Library, which you have to turn on.

    • rufus@discuss.tchncs.de · 11 months ago

      Thank you very much for typing that out for me! This seems to be the first sound solution I’ve read about. I think I would happily deploy something like that on my own (potential) server. I’ll have to think about it and try to dig up more information.

      Lately I’ve been following the news about EU data retention, and all they come up with are solutions that amount to outright surveillance of every citizen, a slippery slope with many downsides. The justification is always “won’t somebody please think of the children,” and the proposed solution is to break end-to-end encryption for everyone. They could have just implemented something like this instead. Okay, I do actually know why they don’t… There is a lobby pushing for general surveillance, and protecting children is just the superficial argument used to gain acceptance for it. So they’re not interested in effective solutions to the specific problem at all; they want something that actually is a slippery slope and can also be used for other purposes later on.

      Such a hash table would at least detect known illegal content, and it wouldn’t trigger on legal content at all, for example if someone underage consensually sends nudes to their partner. And requiring multiple matches makes it less likely that someone can be attacked by being sent a single illegal picture (or having evidence planted) and instantly getting flagged and raided by police. In all the real cases I’ve read about, they find hundreds of images on the criminal’s hard disk. And the police have already said they can’t handle loads of false positives: they’re understaffed and overworked, and a solution that generated many false positives would eat up their time and leave even less of it for the actual criminals.
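
      For concreteness, here’s a rough sketch of that matching-plus-threshold idea. Everything in it is a stand-in: the real design used a perceptual hash (NeuralHash) computed on-device plus cryptographic safety vouchers so the server can’t even count matches below the threshold, whereas this plain version only shows the flag-only-above-a-threshold logic.

      ```python
      import hashlib

      # Hypothetical database of hashes of known illegal images (empty placeholder).
      KNOWN_HASHES: set[str] = set()
      FLAG_THRESHOLD = 30  # the ~30-match threshold discussed earlier

      def image_hash(image_bytes: bytes) -> str:
          """Stand-in for a perceptual hash such as NeuralHash; an exact
          SHA-256 is used here only so the sketch runs."""
          return hashlib.sha256(image_bytes).hexdigest()

      def account_should_be_flagged(uploaded_images: list[bytes]) -> bool:
          """Flag an account only once the number of uploads matching the
          known-hash database reaches the threshold, so a single planted
          image or one stray false match never triggers a report by itself."""
          matches = sum(1 for img in uploaded_images
                        if image_hash(img) in KNOWN_HASHES)
          return matches >= FLAG_THRESHOLD
      ```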

      So this sounds like a solution Apple actually put some thought into. It addresses a lot of the issues that previously made me argue against CSAM filters.