Last week, Apple put a dent in its reputation for privacy by announcing that it would begin scanning photos uploaded to iCloud for child sexual abuse material (CSAM).
While few would make cheerily defending sex offenders the hill they wish to die on, privacy advocates, including Edward Snowden and cryptography academics, have sounded the alarm that this could be the thin end of the wedge, and that Apple could be strong-armed into reporting all kinds of content using the same technology.
“Apple can explain at length how its technical implementation will preserve privacy and security in its proposed backdoor,” wrote the Electronic Frontier Foundation in its critique, “but at the end of the day, even a thoroughly documented, carefully thought-out, and narrowly-scoped backdoor is still a backdoor.”
It appears Apple has been rattled by the backlash, and has today published a six-page rebuttal in the form of an FAQ [PDF] for worried parties. The FAQ aims to “address these [privacy] questions and provide more clarity and transparency in the process.”
One thing Apple makes clear in the document is that the scanning only looks for known CSAM images. In other words, it “only works with CSAM image hashes provided by NCMEC [National Center for Missing & Exploited Children] and other child safety organizations.”
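For the technically curious, the rough shape of that claim looks something like the sketch below, written in Swift with placeholder hashes and names rather than Apple’s actual NeuralHash and private set intersection machinery: a photo only registers if its fingerprint is already on the supplied list, and photos that aren’t on the list reveal nothing.

```swift
import Foundation

// Illustrative sketch only: placeholder hashes and names, not Apple's
// real NeuralHash / private set intersection pipeline.

/// Stand-in for a perceptual hash (fingerprint) of a photo.
typealias ImageHash = String

/// The list of known-CSAM hashes supplied by NCMEC and other child
/// safety organizations. The values below are placeholders.
let knownHashes: Set<ImageHash> = ["hash-001", "hash-002", "hash-003"]

/// A photo counts as a match only if its hash is already on the list;
/// photos that aren't on the list reveal nothing about their content.
func matchesKnownList(_ photoHash: ImageHash) -> Bool {
    knownHashes.contains(photoHash)
}

// Example: only the second upload is on the list.
let uploads: [ImageHash] = ["hash-xyz", "hash-002", "hash-abc"]
let matched = uploads.filter(matchesKnownList)
print("Matched \(matched.count) of \(uploads.count) uploads")
```

In other words, the system is a lookup against a fixed list, not a classifier deciding for itself what an image depicts.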
This is presumably why Apple can afford to be bullish about the chances of it incorrectly flagging innocent photos as child abuse. Indeed, Apple puts the odds of this happening at less than one in a trillion per year, which are admittedly the kind of odds only a problem gambler would be tempted by.
“In addition, any time an account is flagged by the system, Apple conducts human review before making a report to NCMEC,” the FAQ adds, proving there is such a thing as a horrible job at the world’s richest company. “As a result, system errors or attacks will not result in innocent people being reported to NCMEC.”
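For a sense of how odds that long can arise even before a human gets involved: if an account is only flagged once a whole collection of photos matches the list (the FAQ itself speaks of “collections of known CSAM images”), the probability of an innocent account tripping the system collapses very quickly. The back-of-the-envelope sketch below uses entirely made-up numbers, a hypothetical per-photo false-match rate and a hypothetical match threshold, since the FAQ doesn’t publish Apple’s actual parameters.

```swift
import Foundation

// Back-of-the-envelope illustration with made-up numbers; the FAQ does
// not spell out Apple's per-photo false-match rate or match threshold.

/// Natural log of the binomial coefficient C(n, k), built up iteratively
/// so nothing overflows a Double.
func logBinomial(_ n: Int, _ k: Int) -> Double {
    var result = 0.0
    for i in 0..<k {
        result += log(Double(n - i)) - log(Double(i + 1))
    }
    return result
}

/// Probability that at least `threshold` of `photos` independently produce
/// a false match, given a per-photo false-match probability `p`. Terms more
/// than a few dozen past the threshold are vanishingly small here, so the
/// sum is truncated there.
func falseFlagProbability(photos: Int, threshold: Int, p: Double) -> Double {
    var total = 0.0
    for k in threshold...min(photos, threshold + 50) {
        total += exp(logBinomial(photos, k)
                     + Double(k) * log(p)
                     + Double(photos - k) * log(1 - p))
    }
    return total
}

// Hypothetical example: a library of 10,000 photos, a one-in-a-million
// per-photo false-match rate, and a flag raised only after 30 matches.
let odds = falseFlagProbability(photos: 10_000, threshold: 30, p: 1e-6)
print(odds)  // effectively zero, many orders of magnitude below one in a trillion
```

Whether Apple’s real parameters justify the one-in-a-trillion figure is exactly the sort of thing critics would like to see audited, but the arithmetic shows why requiring multiple matches makes such claims at least plausible.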
Apple also explains that while users can’t opt out of this, as such, turning off iCloud Photos disables the feature. “By design, this feature only applies to photos that the user chooses to upload to iCloud Photos, and even then Apple only learns about accounts that are storing collections of known CSAM images, and only the images that match to known CSAM,” the FAQ explains. “The system does not work for users who have iCloud Photos disabled,” it continues. “This feature does not work on your private iPhone photo library on the device.”
But what about the slippery slope argument? Apple tackles this head-on. “Could governments force Apple to add non-CSAM images to the hash list?” the FAQ asks, before answering itself: “Apple will refuse any such demands. We have faced demands to build and deploy government-mandated changes that degrade the privacy of users before, and have steadfastly refused those demands. We will continue to refuse them in the future.”
While Apple has indeed had many a high-profile fight with the US government over privacy, not everyone is convinced that it’s quite so brave when taking on China.
In the past, Apple has tried hard to make its privacy credentials a black-and-white issue, most notably at CES 2019, where it planted the billboard above opposite the exhibition center (even though the company had no active presence inside).
This FAQ shows just how grey things can become when taking on another seemingly black-and-white issue: the protection of children. No wonder Facebook, so often the target of Apple’s privacy attacks, couldn’t resist putting the boot in.