
Anthropic has detailed a politically motivated influence campaign and a series of AI-powered cybercrime cases in a new report.

The AI company found that its Claude chatbot was used by threat actors to automate political messaging, manage fake social media personas and support other malicious activities.

What’s new is that Claude was not only used to generate content but also to decide how and when fake accounts should engage with real users. This included commenting, liking and sharing posts based on specific political objectives.

Anthropic said more than 100 AI-driven personas were created to interact with tens of thousands of authentic accounts across Facebook and X.

“The operation engaged with tens of thousands of authentic social media accounts,” the company said.

“No content achieved viral status. However, the actor strategically focused on sustained long-term engagement promoting moderate political perspectives rather than pursuing virality.”

The campaign pushed narratives that were favorable to countries including the UAE, Iran, Kenya and several European nations.

The campaign was structured around a programmatic framework that enforced consistent behavior across accounts, making the bots appear more human and harder to detect.

Read more on AI-enabled social media manipulation: China Using AI-Generated Content to Sow Division in US, Microsoft Finds

In addition to political influence, Anthropic reported misuse of Claude in other areas:

  • A credential-stuffing scheme that targeted internet-connected security cameras
  • A recruitment scam aimed at job seekers in Eastern Europe
  • A low-skill actor using Claude to build advanced malware, including dark web scanning tools and persistent access systems

Anthropic has since banned the accounts involved but warned that such abuse reflects a broader trend.

As generative AI lowers the barrier to entry, more actors – state-linked or otherwise – can launch sophisticated digital operations with minimal resources.

The company called for stronger safeguards and industry collaboration to prevent future misuse of frontier AI models.

Image credit: Koshiro K / Shutterstock.com
