“Apple plans to use its new CSAM Detection system to monitor users and identify those who store child pornography on their devices.”
In early August 2021, Apple unveiled its new system for identifying photos containing child abuse images. While Apple’s motive, curbing the spread of child pornography, is arguably well-intentioned, the announcement was immediately criticized.
Apple has long cultivated an image of itself as a device maker that cares about user privacy. The new features planned for iOS 15 and iPadOS 15 have already put a serious dent in that reputation, but the company isn’t backing down. Here’s what happened and how it will affect average iPhone and iPad users.
What is CSAM Detection?
Apple’s plans are outlined on the company’s website. The company has developed a system called CSAM Detection that scans users’ devices for “child sexual abuse material,” also known as CSAM.
While “child pornography” is synonymous with CSAM, the National Center for Missing and Exploited Children (NCMEC), which helps find and rescue missing and exploited children in the United States, considers “CSAM” to be the more appropriate term. NCMEC provides Apple and other tech companies with information about known CSAM images.
Apple has introduced CSAM Detection along with several other features that expand parental controls on Apple's mobile devices. For example, parents will receive a notification if someone sends their child a sexually explicit photo in Apple's iMessage app.
The simultaneous unveiling of multiple technologies resulted in some confusion, and many people felt that Apple would monitor all users all the time. That is not the case.
CSAM Detection Implementation Timeline
CSAM Detection will be part of the iOS 15 and iPadOS 15 mobile operating systems, which will be available to users of all current iPhones and iPads (iPhone 6S, fifth-generation iPad, and later) this fall. While the feature is theoretically available on Apple mobile devices worldwide, the system will be fully functional only in the United States for now.
How CSAM Detection will work
CSAM Detection only works in conjunction with iCloud Photos, which is part of the iCloud service that uploads photos from a smartphone or tablet to Apple servers. It also makes them accessible on the user's other devices.
If a user turns off photo syncing in the settings, CSAM Detection stops working. Does this mean that photos are only compared to criminal databases in the cloud? Not exactly. The system is deliberately complex; Apple is trying to ensure the necessary level of privacy.
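To picture that gating behavior, here is a minimal sketch; the type and function names below are invented for illustration and are not Apple’s actual API:

```swift
// Minimal sketch of the gating described above; the names are illustrative only.
struct PhotoLibrarySettings {
    var iCloudPhotosSyncEnabled: Bool
}

func shouldRunCSAMMatching(for settings: PhotoLibrarySettings) -> Bool {
    // If the user turns off iCloud Photos syncing, no on-device matching takes place.
    settings.iCloudPhotosSyncEnabled
}
```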
As Apple explains, CSAM Detection works by scanning photos on a device to determine whether they match photos in the databases of NCMEC or other similar organizations.
Simplified diagram of how CSAM Detection works
The detection method uses NeuralHash technology, which essentially creates digital identifiers, or hashes, for photos based on their content. If a hash matches one in the database of known child exploitation images, the image and its hash are uploaded to Apple's servers. Apple performs another check before officially registering the image.
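The matching step can be pictured with a short, heavily simplified sketch. NeuralHash itself is a proprietary model, so the hash function below is a stand-in, and the hard-coded database entries and function names are placeholders rather than anything Apple ships:

```swift
import Foundation

typealias PerceptualHash = String

// Stand-in for NeuralHash: a real perceptual hash is derived from the image's
// visual content, so the same picture yields the same identifier even after
// resizing or recompression. This placeholder has no such property.
func perceptualHashStandIn(for imageData: Data) -> PerceptualHash {
    String(imageData.hashValue, radix: 16)
}

// Hashes of known CSAM images, provided by NCMEC and similar organizations.
// On a real device the database ships in blinded (encrypted) form; these
// hard-coded values are placeholders for illustration only.
let knownImageHashes: Set<PerceptualHash> = ["3f9ab01c77de42aa", "b71c09e4d2f81355"]

// A photo is flagged only if its hash appears in the known-image database.
func matchesKnownImage(_ imageData: Data) -> Bool {
    knownImageHashes.contains(perceptualHashStandIn(for: imageData))
}
```

As described above, photos are compared only against a database of known images; the system does not attempt to classify new, previously unknown photos on its own.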
Another component of the system, a cryptographic technique called private set intersection, encrypts the CSAM Detection scan results in such a way that Apple can decrypt them only if a series of criteria are met. In theory, this should prevent the system from being misused, whether by a company employee abusing it or by images being handed over at the request of government agencies.
In an August 13 interview with the Wall Street Journal, Craig Federighi, Apple’s senior vice president of software engineering, spelled out the main safeguard of the private set intersection protocol: to alert Apple, 30 photos need to match images in the NCMEC database. As the diagram below shows, the private set intersection system will not allow the dataset (information about CSAM Detection’s operation and the photos) to be decrypted until that threshold is met. According to Apple, the flagging threshold is so high that a false alarm is extremely unlikely: a “one in a trillion” chance.
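A heavily simplified sketch of that threshold rule follows. In the real protocol the cryptography itself withholds the decryption key until enough matches accumulate; the code below only illustrates the counting logic, and the type and field names are assumptions (the term “safety voucher” comes from Apple’s published technical description, not from this article):

```swift
// The threshold Federighi cited in the interview above.
let matchThreshold = 30

// Each uploaded photo is accompanied by an encrypted record (a "safety voucher"
// in Apple's terminology) that the server cannot read on its own. The Boolean
// below is exposed only for illustration; in the real protocol the match status
// stays hidden inside the cryptography until the threshold is crossed.
struct SafetyVoucher {
    let encryptedMatchMetadata: [UInt8]
    let matchedKnownImage: Bool
}

// Manual review, and decryption of the matching vouchers, becomes possible only
// once at least 30 vouchers correspond to known-CSAM matches.
func reviewThresholdReached(_ vouchers: [SafetyVoucher]) -> Bool {
    vouchers.filter(\.matchedKnownImage).count >= matchThreshold
}
```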