Combatting Adversarial Content: A Call for Transparency and Collective Vigilance in the Age of AI

Propagandists are using AI, too

OpenAI’s recent report highlights the threat of adversarial content in the field of artificial intelligence. Researchers have begun compiling databases of misuse, such as the AI Incident Database and the Political Deepfakes Incident Database, to track changes over time and compare different types of misuse. Detecting misuse from the outside, however, remains challenging. As AI tools become more capable and widespread, policymakers need to understand how they are being used and exploited.

OpenAI’s initial report provided a broad overview and specific examples, but expanding data-sharing partnerships with researchers is a necessary next step toward deeper insight into adversarial content and behavior. Policymakers are not the only ones with a role to play: online users are also a significant line of defense against influence operations and AI misuse. Individuals should verify the authenticity of content and accounts before sharing them on social media.

As OpenAI highlights, threat actors work across the internet, so it is crucial that we do the same. In this new era of AI-driven influence operations, we must address common challenges through transparency, data sharing, and collective vigilance to build a more resilient digital ecosystem.

Renée DiResta is a research manager at the Stanford Internet Observatory and the author of “Invisible Rulers: The People Who Turn Lies into Reality.” Josh A. Goldstein is a research fellow on the CyberAI Project at Georgetown University’s Center for Security and Emerging Technology (CSET).