Facebook's Former Security Chief Discusses Controversy Around Apple's Planned Child Safety Features

Amid the ongoing controversy around Apple's plans to implement new child safety features that would involve scanning messages and users' photo libraries, Facebook's former security chief, Alex Stamos, has weighed in on the debate with criticisms of multiple parties involved and suggestions for the future.

In an extensive Twitter thread, Stamos said that there are "no easy answers" in the debate around child protection versus personal privacy.

Stamos expressed his frustration with the way in which Apple handled the announcement of the new features and criticized the company for not engaging in wider industry discussions around the safety and privacy aspects of end-to-end encryption in recent years.

Apple was invited but declined to participate in these discussions, and with this announcement they just busted into the balancing debate and pushed everybody into the furthest corners with no public consultation or debate.

Likewise, Stamos said that he was disappointed with various NGOs, such as the Electronic Frontier Foundation (EFF) and the National Center for Missing & Exploited Children (NCMEC), for leaving little room for discussion in their public statements. The NCMEC, for example, called Apple employees who questioned the privacy implications of the new features "the screeching voices of the minority." "Apple's public move has pushed them to advocate for their equities to the extreme," Stamos explained.

Stamos urged security researchers and campaigners who were surprised at Apple's announcement to pay closer attention to the global regulatory environment, and speculated that the UK's Online Safety Bill and the EU's Digital Services Act were instrumental in Apple's move to implement the new child safety features.

One of the basic problems with Apple's approach is that they seem desperate to avoid building a real trust and safety function for their communications products. There is no mechanism to report spam, death threats, hate speech, NCII, or any other kinds of abuse on iMessage.

Arguing that Apple lacks a sufficient trust and safety function, he encouraged the company to create a reporting system in iMessage, roll out client-side ML to prompt users to report abusive content, and staff a child safety team to investigate the worst reports.
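Stamos's suggestion of client-side ML that prompts users to report could be sketched roughly as follows. This is purely a hypothetical illustration: the toy classifier, threshold, and prompt flow are assumptions, not any real Apple design, and the key property is that the decision to report stays with the user.

```python
# Hypothetical sketch of a client-side "prompt to report" flow.
# The classifier below is a stand-in for an on-device ML model;
# nothing leaves the device unless the user chooses to report.

def classify_abuse_score(message_text: str) -> float:
    """Stand-in for an on-device model scoring likely abuse (0.0-1.0)."""
    abusive_terms = {"threat", "kys"}  # toy feature set for illustration
    words = set(message_text.lower().split())
    return 1.0 if words & abusive_terms else 0.0

def maybe_prompt_to_report(message_text: str, threshold: float = 0.8):
    """If the local model flags a message, ask the *user* whether to
    report it; return None when no prompt should be shown."""
    score = classify_abuse_score(message_text)
    if score < threshold:
        return None  # below threshold: no prompt
    return {
        "prompt": "This message may be abusive. Report it to Trust & Safety?",
        "score": score,
    }

print(maybe_prompt_to_report("hello there"))       # None
print(maybe_prompt_to_report("this is a threat"))  # prompt shown
```

The point of the design, as Stamos frames it, is that the classifier only surfaces a choice; reporting and any human review happen downstream of an explicit user action.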

Instead, we get an ML system that is only targeted at (under) 13 year-olds (not the largest group of sextortion/grooming targets in my experience), that gives kids a choice they aren't equipped to make, and notifies parents instead of Apple T&S.

Stamos said that he did not understand why Apple is scanning for CSAM locally unless iCloud backup encryption is in the works, and warned that Apple may have "poisoned" opinion against client-side classifiers.

I also don't understand why Apple is pushing the CSAM scanning for iCloud into the device, unless it is in preparation for real encryption of iCloud backups. A reasonable target should be scanning shared iCloud albums, which could be implemented server-side.

In any case, coming out of the gate with non-consensual scanning of local photos, and creating client-side ML that won't provide a lot of real harm prevention, means that Apple might have just poisoned the well against any use of client-side classifiers to protect users.

Nevertheless, Stamos highlighted that by scanning for images with known matches for CSAM, Facebook caught 4.5 million users posting child abuse images, and that this is likely only a proportion of the overall number of offenders.
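The "known match" scanning Stamos refers to is, at its core, a comparison of image fingerprints against a database of hashes of previously identified material. Production systems (such as PhotoDNA, or Apple's NeuralHash) use perceptual hashes that survive resizing and re-encoding; the exact-hash version below is only a minimal sketch of the matching step, with made-up data:

```python
import hashlib

# Toy illustration of known-hash matching. Real deployments use
# perceptual hashes so that resized or re-encoded copies still match;
# a plain SHA-256, as here, only catches byte-identical files.

def fingerprint(image_bytes: bytes) -> str:
    """Return a hex digest used as the image's fingerprint."""
    return hashlib.sha256(image_bytes).hexdigest()

# Database of fingerprints of previously identified images (made up here).
known_hashes = {fingerprint(b"previously-identified-image")}

def is_known_match(image_bytes: bytes) -> bool:
    """True if the image's fingerprint appears in the known-hash set."""
    return fingerprint(image_bytes) in known_hashes

print(is_known_match(b"previously-identified-image"))  # True
print(is_known_match(b"some-new-photo"))               # False
```

Because only fingerprints of already-identified material can match, this approach detects redistribution of known images rather than classifying new content, which is the distinction Stamos draws between hash matching and client-side ML classifiers.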

Top Rated Comments

Wesd1234 (7 weeks ago, 37 votes):
I’m always amazed when Facebook and its former staff want to talk about privacy in public. Do they know what reputation they have regarding security and privacy?

joelhinch (7 weeks ago, 31 votes):
All of you missed the “Former” part, didn’t you? ^

MJaP (7 weeks ago, 28 votes):
Wow, it's like a mini cancel-culture starting to form here... "he's from Facebook so his views should be mocked with a snide comment and disregarded"... you learn by listening, not by shutting down conversations.

Abazigal (7 weeks ago, 28 votes), quoting Wesd1234:
Regardless, this guy seems to know his stuff. I guess it’s one thing to be good at your job, and another to know when to toe the line when it comes to a company like Facebook.

mw360 (7 weeks ago, 20 votes), quoting Wesd1234:
We’ve graduated to not even reading the headlines now, I see.

Geert76 (7 weeks ago, 20 votes):
hahah Facebook and privacy… the irony
