
OpenAI Announces New Policies to Address AI Misinformation Ahead of 2024 Elections

Written by AiBot

AiBot scans breaking news and distills multiple news articles into a concise, easy-to-understand summary which reads just like a news story, saving users time while keeping them well-informed.

Jan 16, 2024

OpenAI, the company behind the popular ChatGPT chatbot, has announced several new policies and initiatives aimed at combating the spread of misinformation by AI systems in the lead-up to the 2024 US elections.

Background

As AI systems like ChatGPT and image generators become more advanced, concerns have grown over their potential to spread falsehoods and manipulate voters. The upcoming 2024 presidential election will likely see increased reliance on social media and digital communication, providing more avenues for misinformation powered by AI to spread.

OpenAI CEO Sam Altman outlined the company’s approach in a blog post over the weekend, stating that the company has a responsibility to mitigate harms caused by its AI systems. The post comes just as the 2024 primaries are set to begin, with Iowa holding the nation’s first contest in 10 days.

New Policies Ban Political Use of AI Tools

The centerpiece of OpenAI’s strategy is an outright ban on certain types of misuse of its systems.

Altman stated that OpenAI’s Terms of Use will be updated to forbid using any of its AI systems for:

  • Impersonation of others online
  • Misrepresentation of identity or qualifications
  • Production of spam/deceptive content
  • Harassment, radicalization, or illegal activity

In addition, the updated terms will prohibit using AI systems for political campaigning, political advertising, or any effort to interfere with elections, voting, or the census.

Altman acknowledged that enforcement of these policies poses challenges, but stated “we believe these updates can mitigate the most acute harms.”

Initiatives to Detect AI-Generated Content

Beyond policy changes, OpenAI also revealed new technical initiatives underway to detect AI-generated text, images, and video that violate its policies:

Initiative                    | Description                             | Status
Content classifier            | Detects if text is AI-generated         | Released in API last week
Image source classifier      | Labels images as “AI-generated” or not  | In development, to be released soon
Video authenticity classifier | Identifies synthesized video            | Early research stage

Altman stated these classifiers are intended to empower platforms and researchers to easily identify misuse of AI systems.
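
For illustration, here is a minimal sketch of how a platform or researcher might call such a text classifier over HTTP. The endpoint URL, request fields, and response format below are assumptions made for this example; they are not OpenAI’s documented API.

    # Hypothetical sketch of querying a hosted AI-text classifier over HTTP.
    # The endpoint, request fields, and response shape are assumed for
    # illustration and are not OpenAI's documented API.
    import requests

    def is_probably_ai_generated(text: str, api_key: str) -> bool:
        """Return True if the (hypothetical) classifier flags the text as AI-generated."""
        response = requests.post(
            "https://api.example.com/v1/ai-text-classifier",  # placeholder URL
            headers={"Authorization": f"Bearer {api_key}"},
            json={"input": text},
            timeout=10,
        )
        response.raise_for_status()
        result = response.json()
        # Assume the service returns a probability that the text is AI-generated.
        return result.get("ai_generated_probability", 0.0) >= 0.5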

Partnerships with Social Platforms and Election Officials

OpenAI also highlighted partnerships formed over the past month with major social media platforms and election oversight groups to identify and limit harms from AI.

The company is sharing data and providing technical support so that platforms like Facebook, Twitter, and TikTok can integrate OpenAI’s classifiers at scale to flag violating content.

Election officials and secretaries of state are also using the classifiers to detect AI-driven attempts to impersonate candidates and to spot suspicious content aimed at voters.
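
Purely as an illustration of what integrating such a classifier at scale could look like, the sketch below filters a batch of posts through a detector callable and keeps those scoring above a flagging threshold. The function names, the stand-in detector, and the threshold are assumptions for this example, not details from OpenAI or its partners.

    # Illustrative sketch only: batch-flagging posts with an AI-content detector.
    # The detector callable and the 0.9 threshold are assumptions for this example.
    from typing import Callable, Iterable, List

    def flag_suspect_posts(
        posts: Iterable[str],
        detector: Callable[[str], float],
        threshold: float = 0.9,
    ) -> List[str]:
        """Return posts whose AI-generated score meets or exceeds the threshold."""
        return [post for post in posts if detector(post) >= threshold]

    # Toy usage with a stand-in detector; a real deployment would call the
    # classifier service instead.
    def stand_in_detector(text: str) -> float:
        return 0.95 if len(text) > 80 else 0.05  # arbitrary placeholder scoring

    print(flag_suspect_posts(["short post", "a" * 120], stand_in_detector))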

Ongoing Research into AI Safety

Altman emphasized that OpenAI will continue dedicating resources to developing safe and beneficial AI systems. More than half of its technical staff focuses on safety, policy, and ethics issues.

He admitted that risks from generative AI will continue to grow as the technology advances. But OpenAI aims to institute policies and countermeasures in lockstep with those innovations to mitigate emerging issues.

The initiatives revealed today underscore OpenAI’s push to get ahead of potential AI misinformation threats rather than react after harms occur. The partnerships also signal that tech companies and government entities recognize the dangers posed by AI and are collaborating to address them.

With the Iowa caucuses just days away, marking the real start of election season, the ability of OpenAI and others to enforce the announced policies will soon face its first major test. Whether these moves are sufficient to prevent large-scale AI-powered misinformation campaigns remains an open question.

What’s Next

The updates provide a glimpse into how OpenAI views its role in combating AI harms going forward. Altman made clear that the company accepts responsibility for considering downstream societal impacts stemming from its research.

However, maximizing profits through technologies like ChatGPT remains the underlying priority for OpenAI, which faces pressure to capitalize on the red-hot interest in generative AI. The company must balance ethical considerations with business incentives.

It also remains unclear whether malicious actors currently have the capability to orchestrate mass manipulation of voters through AI systems. OpenAI’s policies suggest the company views such risks as credible enough to warrant intervention even if harm is not yet apparent.

Monitoring the effectiveness of classifiers for identifying AI content will be crucial. Adversaries will adapt to avoid detection, and some false positives and false negatives seem inevitable. Refining accuracy and resisting adversarial evasion will require an ongoing investment of resources.
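
To make the false positive/false negative trade-off concrete, here is a minimal sketch of how a researcher might score a detector against a hand-labeled sample; the toy labels and predictions are placeholders, not real evaluation data.

    # Minimal sketch: scoring a detector's precision and recall against a
    # hand-labeled sample. The labels and predictions below are placeholders.
    def precision_recall(predictions, labels):
        true_pos = sum(1 for p, l in zip(predictions, labels) if p and l)
        false_pos = sum(1 for p, l in zip(predictions, labels) if p and not l)
        false_neg = sum(1 for p, l in zip(predictions, labels) if not p and l)
        precision = true_pos / (true_pos + false_pos) if (true_pos + false_pos) else 0.0
        recall = true_pos / (true_pos + false_neg) if (true_pos + false_neg) else 0.0
        return precision, recall

    # Toy example: 1 = AI-generated, 0 = human-written.
    labels = [1, 1, 0, 0, 1, 0]
    predictions = [1, 0, 0, 1, 1, 0]
    p, r = precision_recall(predictions, labels)
    print(f"precision={p:.2f}, recall={r:.2f}")  # precision=0.67, recall=0.67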

For now, Altman stated OpenAI will continue to focus developer time on safety: “We are committed to spending more than necessary on mitigating downsides and potential misuse rather than maximizing near-term growth or profitability.”

Whether that commitment persists if public pressure fades after the election passes will indicate if OpenAI’s recent policies reflect enduring values or temporary placation.

To err is human, but AI does it too. Whilst factual data is used in the production of these articles, the content is written entirely by AI. Double check any facts you intend to rely on with another source.
