OpenAI CEO and Reddit shareholder Sam Altman recently raised concerns about the authenticity of social media content. He noted that the growing presence of bots makes it increasingly difficult to distinguish between posts created by humans and those generated by AI.
Observation on r/Claudecode
Altman noticed this phenomenon while browsing the r/Claudecode subreddit, a community dedicated to Anthropic's Claude Code. Many posts there praised switching to Codex, OpenAI's programming assistant launched in May to compete with Claude Code, prompting him to question how many of the contributions were genuinely human. "I have had the strangest experience reading this: I assume it's all fake/bots, even though in this case I know Codex growth is really strong and the trend here is real," he admitted on X. He offered several explanations: humans are beginning to adopt LLM-style language patterns, online communities behave in highly correlated ways, social media hype cycles exaggerate trends, platform incentives amplify engagement, and some content may even be astroturfed by competitors.
Broader Implications
Altman's observations highlight a larger challenge: AI-driven social media feels less genuine than it did in previous years. The problem extends beyond social platforms to journalism, education, and legal systems. Reports indicate that over half of internet traffic in 2024 was non-human, largely due to AI, while estimates put the number of bot accounts on X in the hundreds of millions. Even in networks composed entirely of bots, AI accounts can form cliques and echo chambers, mirroring human social dynamics. Altman's remarks underscore the difficulty of maintaining authentic human interaction in today's increasingly AI-saturated digital landscape.
