I was completely unaware of this, but I still think Apple's take on this is far more invasive than Google's. According to the link you provided, Google scans images sent and received through Gmail, which means they're not actively scanning people's devices. Apple wants to proactively scan your photo library.
While I agree that their intentions seem good, how are they going to rule out false positives, like a picture taken on a hot day at the beach? And even worse: how can I be sure they won't scan more stuff on my phone without my consent? While I don't have anything to hide, I'd rather keep my stuff private.
It's not quite like that - they're not actively "scanning" and then using AI or whatever; they simply calculate a file hash (checksum) and compare it against a database of hashes of known (confirmed) CP imagery ... that's probably about the most robust and least questionable way of implementing this kind of thing.
Hey, thanks for the answer! So this means they won't get the images, just the hashes, right? Any place I can read more about how it'll work? I'm really interested in understanding more.
Yup, it's here in their FAQ - it explains very clearly how it works:
apple.com/child-safety/pdf/Expande...
What they do is download a little database (list) of those hashes to your iPhone (the list gets updated regularly); then, when a file is uploaded to iCloud, they simply compute its hash/checksum and compare it to the list.
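As a toy illustration of that matching step (just the mechanism being described here - a plain SHA-256 checksum against a local list; the hash values and the function name are made up for the example):

```python
import hashlib

# Hypothetical on-device list of known-bad hashes
# (in the real system this list would be shipped and updated by Apple).
known_hashes = {
    # sha256 of the bytes b"foo", standing in for a known image's hash
    "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",
}

def file_matches(data: bytes, blocklist: set[str]) -> bool:
    """Compute the file's checksum and check it against the list.

    Note: no AI, no content analysis - only an exact hash lookup.
    """
    digest = hashlib.sha256(data).hexdigest()
    return digest in blocklist

print(file_matches(b"foo", known_hashes))  # True: hash is on the list
print(file_matches(b"bar", known_hashes))  # False: unknown file, nothing happens
```

The key property is that the comparison never needs to "look at" the image - only identical known files can ever match.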
There's even more: an account doesn't get flagged immediately, only once it crosses a certain threshold of flagged images. And even then they don't share the image itself with the reviewers, only a low-resolution version of it.
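Sketched in the same toy style, the threshold logic would look something like this (the threshold value here is made up - the point is just that a single match does nothing on its own):

```python
THRESHOLD = 30  # hypothetical number, for illustration only

def should_escalate(flagged_count: int, threshold: int = THRESHOLD) -> bool:
    """Escalate for human review only once the number of matched
    images crosses the threshold; isolated matches are never acted on."""
    return flagged_count >= threshold

print(should_escalate(1))   # False: one match alone triggers nothing
print(should_escalate(30))  # True: threshold crossed, escalate to review
```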
To be honest it's about as unobtrusive and privacy-aware as such a feature can be implemented, and the media coverage isn't quite doing that justice.
Ah really? Yes, that is in fact quite lenient ... I agree with you on all counts, including the media coverage. It's never good to jump to conclusions too fast and vilify a company or an individual before knowing all the facts.
Thanks a lot for this. I'm going to read it in full, because just the first page got me even more interested - the idea of matching known hashes on-device is clever, and now that I've learnt a bit more about it I'm not as concerned about privacy as I was yesterday.
Yes, I agree that this seems a clean approach - if they did shady things like unleashing AI algorithms on images to guess what they show, and then framing people based on that, that would definitely be a big no-no. The hash approach, though, seems about the cleanest you could come up with.
Lots of details here: daringfireball.net/2021/08/apple_c...