Apple Threatened to Pull Grok From App Store Over Deepfakes

By Maya Torres

Apple has issued a formal warning to xAI, threatening to remove Grok from the App Store. The warning, detailed in a letter obtained by NBC News, followed reports that the AI chatbot was generating sexualized deepfake images of real people.

What Happened

Apple’s warning followed user reports that Grok, the AI assistant created by Elon Musk’s xAI, could be prompted to produce explicit AI-generated images, known as deepfakes, of identifiable people. In the letter, Apple demanded xAI address the issue or risk being removed from the App Store entirely.

According to 9to5Mac, Apple’s letter emphasized that Grok violated App Store guidelines, which prohibit apps from generating sexual content involving real people. Those guidelines act as a code of conduct for developers: follow them, and an app stays available to iPhone and iPad users worldwide.

As of now, xAI hasn’t confirmed if any changes were made in response to Apple’s warning, but Grok is still available on the App Store.

Why This Is a Bigger Deal Than It Looks

Apple’s threat to pull a major AI app is significant. However, the context adds even more weight to the situation. Grok is connected to Elon Musk, who has publicly criticized content restrictions on AI systems. The reality that Apple, a platform gatekeeper rather than a regulator, could compel xAI to moderate content shows the substantial influence the App Store has over major tech companies.

Think of the App Store as a shopping mall with strict rules about what can be sold. Apple owns the mall, and if a store breaks the rules, Apple can shut it down. For any app aiming to reach over a billion iPhone users worldwide, that threat is very real.

This also aligns with Apple’s history of taking action against apps that misuse user data or produce harmful content. Earlier this week, Apple pulled the rewards app Freecash from the App Store after a MacRumors report revealed that the app had been harvesting data from iPhone users without proper disclosure for months.

The Deepfake Problem in AI Apps

Deepfakes, which are AI-generated images or videos depicting real people doing or saying things they never actually did, have become a major concern for AI image generators. When these images are sexualized, they can cause direct harm to the individuals depicted, primarily women, and they are illegal in an increasing number of jurisdictions.

AI companies face a persistent challenge here. Large image-generation models are trained on vast datasets that include images of real people, so they can reproduce recognizable likenesses, and clever prompting can sometimes coax them past their built-in restrictions. Maintaining strict guardrails requires ongoing effort, and Apple places the onus on developers to ensure compliance.

Apple — By The Numbers
Ticker: AAPL
Stock Price: $263.40 (-1.14%)
CEO: Tim Cook
Headquarters: Cupertino, CA
Founded: 1976
Sector: Big Tech

What This Means for Everyday Users

If you use Grok on your iPhone, you probably won’t notice any immediate changes. The app is still active, and for most users who rely on it for tasks like answering questions or summarizing text, everything remains the same.

However, this situation is relevant for several reasons. First, it reminds us that the apps on our phones are under continuous scrutiny. Apple can and does remove apps that cross certain lines, including popular ones. Second, it highlights that despite their impressive capabilities, AI tools can be misused in ways that harm real people. Lastly, it suggests that platform gatekeepers like Apple might take on a bigger role in AI content moderation than any government regulator, at least for the time being.

For anyone whose likeness has been used without permission online, Apple’s willingness to threaten one of the top AI companies over this issue is a significant development.

Community Reaction

“Good. These AI companies need to be held accountable by someone. If it has to be Apple enforcing App Store rules, then fine. At least something is happening.”

— Reddit user on r/apple

“Honestly surprised it took this long. The fact that any AI image tool could do this and stay on the App Store was already a problem.”

— YouTube comment on 9to5Mac coverage

What To Watch

  • xAI’s response: Keep an eye out for any official statement from xAI or Elon Musk about potential changes to Grok’s image generation capabilities on iOS.
  • App Store status: If the compliance issues persist, Apple could follow through on its threat to remove the app. A sudden disappearance from the App Store would signal escalation.
  • Regulatory attention: With deepfake legislation moving through several U.S. states and the EU tightening AI rules under the AI Act, this case could attract lawmakers’ attention looking for examples of platform-level enforcement.
  • Other AI apps: Apple’s warning to Grok may indicate increased scrutiny of other AI image tools available on the App Store. Developers in that space will be paying close attention.
Maya Torres

Maya Torres is the Consumer Tech Editor at Explosion.com with 7 years covering product launches for major technology publications. She has reviewed over 300 devices across smartphones, laptops, wearables, and smart home products. Maya specializes in translating spec sheets into real-world buying advice and attends CES, MWC, and Apple keynotes as press. Her reviews focus on helping readers decide what to buy, not just what specs look good on paper.