Sam Altman Questions Authenticity of Social Media Posts Amid Growing AI Presence

Altman thinks that AI bots are blurring the lines of authenticity on social media.

OpenAI CEO Sam Altman recently voiced concerns about the authenticity of social media content, questioning whether bots were behind a good deal of it.

In an eye-opening X post, Altman described scrolling through discussions in the r/Claudecode subreddit, where users were complimenting OpenAI's Codex. The experience made him wonder whether real people were actually writing those posts.

The Rise of AI and Its Effect on Social Media Credibility

Altman's observation came after he saw a stream of posts from users saying they had switched to OpenAI's Codex, a programming tool that competes with Anthropic's Claude Code. One Reddit user even joked about whether it was possible to switch to Codex without announcing it publicly on Reddit.

This prompted Altman's epiphany: how many of those posts were actually written by humans? He admitted that his gut reaction was that many of them were fake or bot-written, even though he knew Codex's growth was real and organic.

"I think there are a bunch of things going on: real people have picked up quirks of LLM-speak, the Extremely Online crowd drifts together in very correlated ways, the hype cycle has a very "it's so over/we're so back" extremism, optimization pressure from social platforms on juicing engagement and the related way that creator monetization works, other companies have astroturfed us so i'm extra sensitive to it, and a bunch more (including probably some bots)."

Are Humans Imitating LLMs?

What Altman is pointing to is the irony that humans have begun to sound like large language models (LLMs), which were created to mimic human speech. OpenAI's Codex and other LLM-based tools are now sophisticated enough to blur the distinction between human-written and AI-written material.

Altman noted that the quirks of "LLM-speak" are spreading among social media users, making it even harder to distinguish genuine human voices from AI-generated responses.

According to TechCrunch, his admission also points to the pervasive culture of social media fandoms and the pressure on creators to produce engaging content that can be monetized. That system tends to encourage hyperbolic behavior and widespread use of bots to drive engagement, which Altman identifies as an underlying cause of the growing feeling that social media is "fake."

Astroturfing and the Influence of AI Bots

In the same post, Altman raised the possibility of "astroturfing," the practice of paying people or deploying AI bots to write posts that create a false impression of grassroots support for a product.

Although there is no concrete evidence of astroturfing in the Codex-related posts, Altman's remark highlights legitimate concerns about the authenticity of online discussions. Bots and AI are already deeply embedded in social media platforms, making it hard to know who, or what, is behind the content we consume.

The Change in Social Media Dynamics

Altman observed that the social media environment, particularly within the AI industry, now feels "very fake" compared to one or two years ago. He attributed this shift to the advancement of AI models like GPT, which can generate content so convincingly that it's difficult to tell whether a post is written by a human or a machine.

Despite concerns over the authenticity of AI-generated posts, Altman expects AI to continue its explosive growth, even if that growth comes with high subscription costs for users.
