Artificial intelligence (AI) has become an undeniable part of modern life, delivering significant benefits to people around the world. But a darker side is emerging: the technology's ubiquity, combined with criminal intent and unethical practices, is increasingly producing harmful results.

From revelations about websites using AI to generate fake non-consensual nude images to multi-billion-dollar government and corporate investments in AI infrastructure, artificial intelligence is once again at the center of political, ethical, and technological debate.

A new investigation has shed light on websites that use artificial intelligence to "undress" women and girls without their consent. According to a report by Indicator, an investigative journalism outlet covering digital deception, 85 such sites, broadly dubbed "nudify" or "undress" generators, are highly popular: they attract more than 18.5 million monthly visitors and generate estimated annual revenues of up to $36 million.

Shockingly, many of these sites rely on infrastructure provided by tech giants, including Meta, Google, Amazon Web Services, and Cloudflare.

The report highlights a particularly disturbing aspect: many of the images involve underage girls, pushing these services dangerously close to producing child sexual abuse material (CSAM). Despite crackdowns by some governments and online platforms, demand and revenues continue to grow. The revelations have reignited concerns over the lack of ethical oversight and underscored the urgent need for global legislation regulating the non-consensual use of AI.