Reddit, once known as the “front page of the internet,” is clearly facing a growing bot problem that threatens the very nature of the site as a place for authentic human discussion and content sharing.

A recent exposé revealed that many Reddit threads are being reposted comment by comment, word for word, by networks of bot accounts. This lazy form of astroturfing lets bad actors generate fake engagement and make certain topics or products appear more popular than they really are. It's a brute-force tactic, but an effective one at Reddit's scale.
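Verbatim reposting is at least easy to spot in principle: normalise each comment body, hash it, and flag the same text appearing under different accounts. A minimal sketch of that idea, assuming you already have comment data from some dump or export (the `author`/`body`/`permalink` field names here are hypothetical, not Reddit's actual API schema):

```python
import hashlib
from collections import defaultdict


def normalize(text: str) -> str:
    """Lowercase and collapse whitespace so trivial edits don't hide a copy."""
    return " ".join(text.lower().split())


def find_verbatim_reposts(comments):
    """Group comments whose bodies are identical after normalization.

    `comments` is an iterable of dicts with hypothetical keys
    'author', 'body', and 'permalink' -- adapt to whatever data
    source you actually have.
    """
    by_hash = defaultdict(list)
    for c in comments:
        digest = hashlib.sha256(normalize(c["body"]).encode("utf-8")).hexdigest()
        by_hash[digest].append(c)

    # The same body posted word for word by more than one account is suspicious.
    return [
        group
        for group in by_hash.values()
        if len({c["author"] for c in group}) > 1
    ]


if __name__ == "__main__":
    sample = [
        {"author": "alice", "body": "Great point, I agree!", "permalink": "/r/x/1"},
        {"author": "bot_4821", "body": "Great point,  I agree!", "permalink": "/r/y/2"},
        {"author": "bob", "body": "Interesting thread.", "permalink": "/r/x/3"},
    ]
    for group in find_verbatim_reposts(sample):
        print([c["author"] for c in group])
```

The catch, of course, is that checks like this need broad access to comment data in the first place.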

While bots and spam are nothing new to Reddit, the problem appears to be accelerating, especially since Reddit restricted API access last year, hampering the third-party tools that helped detect inauthentic behavior. The emergence of LLMs also enables more sophisticated bot-generated comments that are harder to distinguish from those written by genuine humans.

Some fear we may be headed toward a “dead internet” where most content is bot-generated or bot-upvoted. Wikipedia remains a bastion of human-curated information, but for how long? The “information age” could give way to an age of weaponised misinformation and AI-powered noise.

But all is not lost. Smaller niche communities, invite-only Discords, and old-school web forums may be the path forward: places where real identity and personal connections matter more than fake internet points. A return to the internet's roots as a place to congregate with like-minded people.

Reddit itself must decide how much it really values authentic human users vs. inflated engagement metrics that look good to advertisers and investors. Will it take strong action to detect and ban bots, even at the expense of short-term growth? No, probably not.