TL;DR
A new report from the Tech Transparency Project reveals that Apple and Google continue to profit from apps that generate nonconsensual sexualized images, despite explicit platform policies banning such content. This exposes a critical enforcement gap at a time when AI-generated abuse is surging and lawmakers are scrutinizing the power and responsibility of app store gatekeepers.
What Happened
A damning investigation has caught the world's two largest app store operators in a stark contradiction. The Tech Transparency Project (TTP) found that Apple’s App Store and Google’s Play Store are hosting and profiting from applications designed to create nonconsensual, sexually explicit “deepfake” images, even though both companies have public policies that expressly prohibit apps built to create such abusive content.
Key Facts
- The report, published Wednesday, April 15, 2026, by the non-profit watchdog Tech Transparency Project (TTP), analyzed apps available for download as of early April 2026.
- Researchers identified over a dozen apps across both stores that explicitly advertised the ability to “undress” people in photos or create nonconsensual nude imagery using artificial intelligence.
- These apps are not hidden; they appear in search results for terms like “undress” or “nudify” and often carry high user ratings, which suggests substantial download volumes.
- Both Apple and Google have publicly stated policies against apps that facilitate “creepshots” or the creation of nonconsensual sexually explicit content. Apple’s App Review Guidelines, for instance, ban apps that “may be used to abuse, intimidate, or bully.”
- The app stores generate revenue from these applications through standard commission fees—typically 15% to 30% of sales—meaning the platforms are financially benefiting from the distribution of tools for digital sexual abuse.
- The TTP report notes that the apps often target younger users, with some featuring cartoonish avatars or marketing that downplays their harmful potential.
- This is not the first time such apps have been reported; both companies have faced previous criticism and have removed specific apps, but the TTP findings show the problem is systemic and ongoing.
Breaking It Down
The core revelation here is not the existence of these malicious apps—that is an unfortunate symptom of advancing AI technology. The scandal is the demonstrable failure of Apple and Google’s enforcement regimes. These companies position themselves as curated, safe marketplaces, with Apple famously touting its “walled garden” approach. Their policies are clear on paper, but the TTP’s findings show those policies are not being applied consistently or effectively. This creates a dangerous environment where abusers can easily access powerful tools, and victims have little recourse, all while the platform giants collect a fee.
The platforms do not merely host these tools of digital sexual abuse; they profit from them, collecting standard 15-30% commission fees on sales.
This financial incentive is the most analytically significant element of the story. It transforms the issue from one of mere policy failure to one of potential perverse economic alignment. Apple and Google’s app store revenue models are built on taking a cut of all transactions. While they would never publicly endorse such apps, the current system lacks a proactive mechanism to prevent monetization of clearly prohibited content. This creates a scenario where, until an app is flagged and removed, it is generating revenue for the platform. The TTP’s work suggests that the review processes—both automated and human—are failing to catch these apps at the gate, allowing them to enter the revenue stream.
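To make that incentive concrete, here is a back-of-the-envelope sketch of how commission accrues while a prohibited app stays live. Only the 15% and 30% commission tiers come from the report; the function name, daily sales figure, and days-live value are hypothetical illustrations.

```python
# Back-of-the-envelope sketch of commission accrual while a prohibited
# app remains live. All inputs are hypothetical except the 15% / 30%
# commission tiers cited in the report.

def platform_commission(daily_sales_usd: float, days_live: int,
                        small_business_tier: bool = False) -> float:
    """Estimate the store's cut of an app's sales before takedown.

    Apple and Google both take roughly 15% (small-developer tier)
    or 30% (standard tier) of app-store transactions.
    """
    rate = 0.15 if small_business_tier else 0.30
    return daily_sales_usd * days_live * rate

# Example: an app selling $500/day of subscriptions that survives
# review for 90 days nets the platform $13,500 at the standard rate.
if __name__ == "__main__":
    print(f"${platform_commission(500, 90):,.0f}")  # -> $13,500
```

Even at modest sales volumes, a few months of lax review translates into five-figure platform revenue per app, which is the perverse economic alignment at issue.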
The report also highlights a critical asymmetry in platform power and accountability. Apple and Google hold unilateral power to set and enforce rules for millions of developers, yet they face little transparency or external audit of those enforcement actions. When they do remove an app, the process is opaque. The TTP’s methodology of searching for live, available apps provides a rare objective snapshot that contradicts the companies’ assurances. This evidence challenges the narrative that the app store model is inherently safer than more open distribution systems.
Furthermore, the targeting of younger users by some of these apps indicates a sophisticated understanding of how to evade scrutiny and exploit vulnerabilities. By using innocuous branding or cartoon graphics, developers attempt to fly under the radar of both platform reviewers and parents. This tactic underscores the need for enforcement that looks beyond superficial app presentation to the core functionality and its potential for harm.
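A toy example illustrates why screening that stops at app presentation falls short. The sketch below is purely illustrative: neither company's actual review pipeline is public, and `flags_metadata` and `BANNED_TERMS` are invented for this example, seeded with the search terms the TTP used.

```python
# Illustrative only: a naive metadata screen of the kind that
# innocuous branding is designed to slip past. The flagged terms come
# from the report's search methodology ("undress", "nudify").

BANNED_TERMS = {"undress", "nudify", "deepfake nude"}

def flags_metadata(title: str, description: str) -> bool:
    """Return True if an app's store listing contains a banned term."""
    text = f"{title} {description}".lower()
    return any(term in text for term in BANNED_TERMS)

# A listing that markets itself openly gets caught...
assert flags_metadata("Undress AI", "Remove clothes from any photo")
# ...but one hiding behind cartoonish branding sails through, which is
# why review must test core functionality, not just the storefront copy.
assert not flags_metadata("FunToon Avatar", "Cute cartoon photo filters")
```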
What Comes Next
The publication of this report will trigger a series of immediate pressures on Apple and Google, with ramifications extending through 2026.
- Forced Platform Action: Within days, expect both companies to issue statements and begin a purge of the identified apps. The key metric to watch will be the speed and completeness of this removal. More importantly, observers will scrutinize whether similar apps simply re-appear under new names in the following weeks, indicating a superficial “whack-a-mole” response rather than a systemic fix to their review algorithms and human oversight processes.
- Regulatory and Legislative Scrutiny: This report lands on the desks of lawmakers already focused on platform accountability. In the United States, it will fuel ongoing efforts around bills like the Kids Online Safety Act (KOSA) and arguments for Digital Services Act (DSA)-style regulations that hold “very large online platforms” accountable for systemic risks. In Europe, the DSA itself grants the European Commission direct power to audit platforms’ risk assessments; this report provides a clear case study for such an audit. Hearings on Capitol Hill and in the European Parliament referencing the TTP findings are likely before the end of Q2 2026.
- Legal and Financial Repercussions: The report strengthens potential legal arguments against the platforms. Lawsuits could be filed by victims or advocacy groups, alleging that by hosting and profiting from these apps, the companies facilitated harm. Shareholders may also raise questions about the material risk posed by persistent policy enforcement failures, potentially affecting the companies’ coveted reputations for user safety and privacy.
- Developer Policy Clarification: By the end of 2026, both Apple and Google will be pressured to update and clarify their developer guidelines with far more explicit language and examples regarding AI-generated nonconsensual imagery. They may also be forced to disclose more about their app review and takedown processes to regain trust.
The Bigger Picture
This incident is an acute symptom of three chronic, intersecting issues in the technology sector. First, it exemplifies the AI Ethics Enforcement Gap. The speed of innovation in generative AI consistently outpaces the governance frameworks meant to contain it. Companies release powerful AI models and APIs to developers, then disclaim responsibility for how those tools are used, leaving app store policies as a last, often failing, line of defense.
Second, it underscores the crisis of Platform Accountability. For years, Apple and Google have argued their app stores are private marketplaces with their own rules. This report demonstrates that when those rules are not enforced, especially when there is revenue at stake, the result is tangible societal harm. It provides concrete evidence for regulators arguing that the sheer scale and influence of these platforms require external oversight and enforceable standards, not just self-policing.
Finally, it is a brutal data point in the escalating epidemic of Technology-Facilitated Sexual Abuse. The accessibility of “nudify” apps dramatically lowers the barrier to committing a deeply violating act, moving it from the realm of skilled hackers to anyone with a smartphone. This report forces a public conversation about whether the primary distribution channels for mobile software are doing enough to stem the flow of weapons in this digital abuse crisis.
Key Takeaways
- Policy vs. Practice: Apple and Google’s published content policies are not being effectively enforced, allowing apps that generate nonconsensual sexual imagery to remain available and profitable on their official stores.
- Financial Incentive: The app store business model creates a conflict of interest, as the platforms earn commission fees from abusive apps until they are belatedly removed.
- Systemic Failure: The presence of these apps indicates a breakdown in both automated and human app review processes, challenging the core promise of safety offered by curated app ecosystems.
- Regulatory Catalyst: This report provides concrete evidence for lawmakers worldwide pushing for stronger digital platform accountability, likely accelerating legislative and legal actions in 2026.

