Apple Threatened to Pull Grok From App Store Over Sexualized Images - MacRumors

Apple Threatened to Pull Grok From App Store Over Sexualized Images

Apple privately warned Elon Musk's xAI company in January that it would remove the Grok app from the App Store unless the company put a stop to the chatbot's nude and sexualized deepfakes, according to a letter Apple sent to U.S. senators and obtained by NBC News ($).

Earlier this year, Grok's AI capabilities came under scrutiny after X users shared nonconsensual sexualized images of women and children created by the app, many of which were based on photos of real people.

What followed was a confusing rollout of moderation changes to Grok, some of which could be easily bypassed. Apple did not publicly comment on the controversy at the time, but it did respond, and was in fact the instigator of the changes. Internally, the company had found both X and Grok in violation of its App Store guidelines and demanded that xAI submit a content moderation plan, the letter reveals.

According to the letter, Apple rejected an initial fix from xAI as insufficient, saying the "changes didn't go far enough," and warned xAI that additional alterations were required or Grok would be removed. After further back-and-forth, Apple eventually concluded that a later submission of the app had improved enough to be approved.

The disclosure was apparently prompted by a January letter from Senators Ron Wyden, Ben Ray Luján, and Edward Markey, who urged Apple and Google to pull both apps, arguing the imagery violated App Store rules barring offensive, sexual, and exploitative content.

The senators also said that Apple's response would test its own arguments, since the company has long defended its curated App Store by claiming its review process keeps users safer. Letting Grok continue to generate this kind of imagery, they argued, would undermine that case in the eyes of the public and in a court of law.

After NBC News published its report, X posted the following statement on its platform:

"We strictly prohibit users from generating non-consensual explicit deepfakes and from using our tools to undress real people. xAI has extensive safeguards in place to prevent such misuse, such as continuous monitoring of public usage, analysis of evasion attempts in real time, frequent model updates, prompt filters, and additional safeguards."

While the number of sexualized deepfakes created by Grok and posted to X appears to have decreased significantly, NBC News found that Grok is still able to generate similar imagery, with some users apparently having simply updated their prompt tactics to get around the safeguards.

Note: Due to the political or social nature of the discussion regarding this topic, the discussion thread is located in our Political News forum. All forum members and site visitors are welcome to read and follow the thread, but posting is limited to forum members with at least 100 posts.

Top Rated Comments

sw1tcher Avatar
5 hours ago at 07:21 am

Internally, the company had found both X and Grok in violation of its App Store guidelines and demanded its developers submit a content moderation plan, the letter reveals.

According to the letter, Apple rejected an initial fix from xAI as insufficient, saying the "changes didn't go far enough," and Apple warned it that additional alterations were required or Grok would be removed. After further back-and-forth, however, Apple eventually concluded that a later submission of the app had improved enough for it to be approved.
Apple/Tim Cook giving X special treatment.

If this was a small app developer, the app would have been pulled first, requiring a fix for the app to appear on the App Store.

Tim Cook: "We treat every developer the same. We have open and transparent rules, it's a rigorous process. Because we care so deeply about privacy and security and quality we do look at every app before it goes on. But those rules apply evenly to everyone."
Score: 13 Votes (Like | Disagree)
cateye Avatar
5 hours ago at 07:25 am
For those keeping score:

Have your silly live service game link sneakily to an outside payment method? IMMEDIATE BANISHMENT AND YEARS OF LITIGATION; THIS INJUSTICE WILL NOT STAND.

Update your app to make sexualized images of teens and children against their will? Strongly worded memo. Maybe a finger wag.

🎶 The grabbing hands, grab all they can, all for themselves... after all, it's a competitive world... Everything counts in large amounts 🎶
Score: 10 Votes (Like | Disagree)
cateye Avatar
5 hours ago at 07:36 am

Stupid and political. You can do the same thing with Safari, are they gonna pull Safari off the App Store also?
Strawman argument. Apple professes no control or curation of the web.

Apple does, however, strenuously defend its gatekeeper status on the App Store under the guideline that it specifically curates and rejects "objectionable, harmful, and illegal material." Let's say I run a porn website. People are free to access that porn website with Safari. The moment I submit an app called "Free Porn," Apple will reject it. There is zero conflict in this.

If Apple does not want to be put in a position to make these judgments, it is free not to. It can then stop arguing under oath that this is the primary reason it wants to maintain a closed App Store, and that it has nothing at all to do with money or influence.
Score: 9 Votes (Like | Disagree)
5 hours ago at 07:29 am
Stupid and political. You can do the same thing with Safari, are they gonna pull Safari off the App Store also?
Score: 9 Votes (Like | Disagree)
ghostface147 Avatar
5 hours ago at 07:12 am
Should’ve followed through as you can still make these types of pics.
Score: 9 Votes (Like | Disagree)
BabyBoii Avatar
5 hours ago at 07:14 am
@Grok, put macrumors in a bikini
Score: 8 Votes (Like | Disagree)