OpenAI Shuts Down Election Influence Operation That Used ChatGPT

A Growing Concern: State Actors Using Generative AI for Misinformation

In a recent blog post, OpenAI announced that it has banned a cluster of ChatGPT accounts linked to an Iranian influence operation. The company says the operation was using ChatGPT to generate articles and social media posts about the U.S. presidential election.

Not a First-Time Offense: Previous Instances of State-Affiliated Actors Using ChatGPT Maliciously

This is not the first time OpenAI has taken action against state-affiliated actors using ChatGPT maliciously. In May, the company disrupted five campaigns that were using ChatGPT to manipulate public opinion. These episodes are reminiscent of state actors attempting to influence previous election cycles through social media platforms like Facebook and Twitter.

A New Playbook: Using Generative AI to Flood Social Channels with Misinformation

Similar groups (or perhaps the same ones) are now using generative AI to flood social channels with misinformation. This is a new playbook, where the goal is not necessarily to promote one policy or another but to sow dissent and conflict.

The Investigation: How OpenAI Disrupted the Operation

OpenAI’s investigation of this cluster of accounts benefited from a Microsoft Threat Intelligence report published last week. That report identified the group, which it calls Storm-2035, as part of a broader campaign, active since 2020, to influence U.S. elections. Microsoft described Storm-2035 as an Iranian network with multiple sites imitating news outlets that actively engages U.S. voter groups on opposing ends of the political spectrum, with polarizing messaging on issues such as the presidential candidates, LGBTQ rights, and the Israel-Hamas conflict.

The Findings: Five Website Fronts for Storm-2035

OpenAI identified five website fronts for Storm-2035, presenting as both progressive and conservative news outlets with convincing domain names like ‘evenpolitics.com.’ The group used ChatGPT to draft several long-form articles, including one alleging that ‘X censors Trump’s tweets,’ which Elon Musk’s platform certainly has not done (if anything, Musk is encouraging former president Donald Trump to engage more on X).

Social Media Posts: Rewriting Political Comments

On social media, OpenAI identified a dozen X accounts and one Instagram account controlled by this operation. The company says ChatGPT was used to rewrite various political comments, which were then posted on these platforms. One of these tweets falsely, and confusingly, alleged that Kamala Harris attributes ‘increased immigration costs’ to climate change, followed by ‘#DumpKamala.’

The Effectiveness: Most Posts Received Few to No Likes, Shares, or Comments

OpenAI says it did not see evidence that Storm-2035’s articles were shared widely, and it noted that a majority of the operation’s social media posts received few to no likes, shares, or comments. That is often the case with these operations, which are quick and cheap to spin up using AI tools like ChatGPT.

Looking Ahead: More Takedowns as the Election Approaches

Expect to see many more notices like this as the election approaches and partisan bickering online intensifies.

The Rise of Generative AI in Misinformation Campaigns

The use of generative AI in misinformation campaigns is a growing concern. These tools let state actors produce convincing content quickly and at scale, making it harder for even the most discerning users to distinguish fact from fiction, a trend that could have far-reaching consequences.

The Challenge of Tracking These Operations

Tracking down these operations is difficult, and rarely something one company can do alone. OpenAI’s takedown relied on a Microsoft Threat Intelligence report, underscoring how much the fight against state-sponsored disinformation depends on collaboration and information-sharing.

The Future of AI Regulation

As generative AI continues to evolve, regulation will need to keep pace with its capabilities. Building a framework that balances innovation with safety, accountability, and transparency will require a concerted effort from policymakers, industry leaders, and experts in the field.

Conclusion: The Importance of Vigilance

The use of generative AI in misinformation campaigns demands vigilance from all parties involved. Through collaboration and information-sharing, AI companies, platforms, and users can help create a safer online environment that promotes accountability and transparency.