Microsoft Takes a Stand Against Synthetic Revenge Porn with New Tool
Generative AI has brought with it a disturbing consequence: the widespread circulation of synthetic nude images that mimic real individuals. In response to this growing problem, Microsoft has taken significant steps to assist victims of revenge porn through its Bing search engine.
A Groundbreaking Partnership
On Thursday, Microsoft announced a collaboration with StopNCII, an organization dedicated to helping victims of non-consensual intimate imagery. Through this partnership, individuals affected by revenge porn can create a unique digital fingerprint, known as a “hash,” of explicit images, whether real or AI-generated. StopNCII’s partner platforms then use these hashes to detect and remove matching images, without the images themselves ever having to be uploaded or shared.
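To illustrate the idea of a hash-based fingerprint, here is a minimal sketch of a perceptual “difference hash” (dHash) in Python. This is not StopNCII’s actual algorithm (industry systems typically use more robust perceptual hashes, such as Meta’s PDQ); it only demonstrates the core concept: a compact fingerprint that can be compared across platforms while the original image never leaves the victim’s device. The image is assumed to be a 2D list of grayscale brightness values, a simplification for this sketch.

```python
def dhash(pixels, hash_size=8):
    """Compute a simple difference hash of a grayscale image.

    `pixels` is a 2D list of brightness values (0-255). The image is
    downscaled to a (hash_size) x (hash_size + 1) grid, and each bit of
    the hash records whether a pixel is darker than its right neighbour.
    """
    rows, cols = len(pixels), len(pixels[0])
    # Nearest-neighbour downscale to a small grid.
    small = [
        [pixels[r * rows // hash_size][c * cols // (hash_size + 1)]
         for c in range(hash_size + 1)]
        for r in range(hash_size)
    ]
    bits = 0
    for row in small:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left < right else 0)
    return bits  # a 64-bit integer fingerprint when hash_size=8


def hamming(a, b):
    """Count differing bits: a small distance means visually similar images."""
    return bin(a ^ b).count("1")
```

A platform holding only the hash can check new uploads against it: identical images produce identical hashes (Hamming distance 0), and lightly altered copies tend to produce nearby hashes, which is what makes perceptual hashing more useful than a cryptographic hash for this purpose.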
Notably, Microsoft joins an impressive roster of tech giants including Facebook, Instagram, Threads, TikTok, Snapchat, Reddit, PornHub, and OnlyFans, all of which are committed to utilizing StopNCII’s system to combat the spread of revenge porn.
Effective Action Taken
In a recent blog update, Microsoft shared that it has already made notable progress. Since initiating a pilot program at the end of August, the tech giant has acted on approximately 268,000 explicit images flagged in Bing’s image search, leveraging StopNCII’s extensive database. Microsoft previously offered users a direct reporting mechanism; however, feedback from victims and experts indicated that this approach alone wasn’t sufficient to tackle the problem effectively.
“We have heard from victims and experts that user reporting without a systemic approach may not be effective in addressing the significant volume of harmful imagery that can easily be accessed through search engines,” Microsoft stated in its blog announcement.
The Looming Issue of AI Deepfakes
While Microsoft has made progress, the threat of synthetic nude images continues to grow, particularly in the absence of comprehensive U.S. legislation covering AI-generated deepfakes. StopNCII’s tools currently serve only individuals over the age of 18, yet concern is mounting that teenagers are increasingly being targeted as “undressing” websites proliferate.
According to reports, San Francisco prosecutors have filed a lawsuit seeking to shut down 16 of the most-visited “undressing” sites. Meanwhile, since 2020, Google users in South Korea have reported roughly 170,000 links to unwanted sexual content across Google Search and YouTube. Yet Google has faced scrutiny for declining to partner with StopNCII, as highlighted by an investigation from Wired.
The State of Legislation
The challenge of combating non-consensual deepfakes varies significantly across the United States, which currently operates under a patchwork of state and local laws. While 23 states have enacted laws addressing non-consensual deepfake content, proposals in nine states have been turned down, highlighting the inconsistent legal framework for protecting individuals’ rights and privacy.
Conclusion
As AI-generated imagery continues to evolve and pose new threats, Microsoft’s proactive measures against synthetic revenge porn are vital steps toward a safer online environment. By collaborating with organizations such as StopNCII, tech giants demonstrate a commitment to combating the misuse of technology and supporting victims. Still, more comprehensive legislation remains essential to ensure justice and protection against the growing menace of AI deepfakes. As society grapples with the implications of generative AI, ongoing dialogue and action are needed to safeguard individual rights and personal dignity in the digital age.