Insights by Infegy

The AI Hangover: How the World Fell Out of Love With Artificial Intelligence

When OpenAI released ChatGPT in late 2022, the world reacted with something close to collective awe. Within weeks, millions of people were using it to draft emails, write code, brainstorm ideas, and explore questions they'd never thought to Google. The volume of online conversation about artificial intelligence exploded, and so did the optimism among users.

But that optimism, according to three years of social listening data, has been quietly eroding. Conversation volume around AI has never been higher, yet the tone of those conversations tells a different story: one of growing unease, distrust, and fatigue. Using Infegy Starscape, we explore these growing negative feelings toward a once-promising technology.

AI Social Conversations: More Noise, Less Enthusiasm

The first thing to understand is that AI isn't fading from public conversation; it's louder than ever. Post volume tracking shows that monthly discussions about AI have climbed from roughly 4–5 million posts in early 2023 to nearly 10 million by early 2026. News events have kept the topic perpetually in the headlines: AI-driven layoffs, controversies within the U.S. Department of Defense, and high-profile product misfires have all ensured that people keep talking.

Figure 1: Conversation volume regarding AI, (January 1, 2023 - March 11, 2026); Infegy Social Dataset.

But volume and sentiment are two very different things. More conversation doesn't mean more enthusiasm. And since May 2024, the net sentiment score for AI-related content has been on a clear downward slope. The early days were defined by users sharing how AI had improved their productivity, sparked their creativity, or solved a problem they'd been stuck on. Those voices haven't disappeared — but they're increasingly sharing the stage with skeptics, critics, and people burned by the technology's failings.

Figure 2: Net sentiment for AI related conversations (January 1, 2023 - March 11, 2026); Infegy Social Dataset.

From Wonder to Wariness

The narrative web emerging from social conversations captures this shift vividly. AI discussions are still largely centered on use cases: how to prompt, what tools to use, which workflows have improved. But orbiting that core conversation are growing clusters of distrust: concerns about the people building these systems, questions about whose interests the technology actually serves, and a palpable fear of replacement.

Figure 3: Narratives for AI related conversations with key topics colored by sentiment, (January 1, 2023 - March 11, 2026); Infegy Social Dataset.

The feeling isn't irrational. AI has contributed to documented waves of layoffs across multiple industries. Content creators have watched their work used to train models without consent or compensation. Workers in creative fields have seen clients replace human labor with AI outputs. The abstract threat that once felt distant has, for many people, become concrete and personal.

The Three Major AI Platforms: A Tale of Diverging Fortunes

Not all AI providers are experiencing the same reputational trajectory. When you break down sentiment for the three major players (OpenAI, Google's Gemini, and Anthropic), clear differences emerge.

Figure 4: Share of Voice of the three major providers of generative AI products within social media conversations, (January 1, 2023 - March 11, 2026); Infegy Social Dataset.

Figure 5: Social conversation volume about OpenAI, Anthropic, and Gemini, (January 1, 2023 - March 11, 2026); Infegy Social Dataset.

OpenAI dominated AI conversation from 2023 through mid-2025, commanding the largest share of posts by a wide margin. That dominance came with a cost: as the most visible target, it absorbed the most criticism. Net sentiment turned sharply negative beginning in late 2024, driven in large part by controversies around its video products' ability to reproduce copyrighted intellectual property, and compounded by its February 2026 decision to take a U.S. Department of Defense contract that another provider had declined on ethical grounds.

Figure 6: Net sentiment of OpenAI, Gemini, and Anthropic, (January 1, 2023 - March 11, 2026); Infegy Social Dataset.

Anthropic remained a quieter presence in public conversation until 2026, when it became the center of a very public dispute: the company reportedly refused to allow its AI to be used for mass surveillance and autonomous weapons systems, losing a U.S. Department of Defense contract to OpenAI as a result. That stance won Anthropic considerable goodwill, but the publicity also brought a broader audience that began scrutinizing the company more closely, and its sentiment scores have shown a notable decline in the most recent data.

Gemini, Google's flagship AI, has maintained the highest and most stable net sentiment of the three. It occupies a quieter corner of the conversation: it hasn't been at the center of the same ethical controversies or product misfires, though it also hasn't yet captured the cultural imagination the way its competitors have. A product update, referred to internally as "Nano Banana," gave it a modest traffic bump, but it remains the third voice in a conversation dominated by the other two.

What This Means Going Forward

The data points to a technology at an inflection point. AI isn't going away; if anything, its footprint in daily life continues to grow. But the era of unconditional enthusiasm appears to be over. What's replacing it is something more complicated: continued adoption paired with growing scrutiny, utility alongside distrust.

For the companies building these products, the challenge ahead is less about capability and more about trust. The public isn't just evaluating what AI can do; it's evaluating who's building it, who benefits, and at whose expense. The sentiment data suggests that the organizations seen as operating with genuine ethical commitments, even at commercial cost, are better positioned in the court of public opinion than those chasing every available contract and use case.

Three years in, the AI conversation has grown up. It's louder, more complex, and considerably more skeptical than the one that erupted in the wide-eyed weeks after ChatGPT first appeared. Whether the technology can win back the public's trust — or whether it even needs to — remains one of the defining questions of the next chapter.

Interested in learning more about an industry that matters to you? Schedule a demo.

Key Takeaways

  1. Volume is up, sentiment is down. AI conversation has never been louder, but the tone has been on a downward trajectory since May 2024. More people are talking about AI, but with increasing skepticism and distrust.

  2. The public mood has shifted from excitement to anxiety. Early conversations were dominated by people sharing how AI improved their lives and work. Now, those voices share space with growing fears about job replacement, distrust of AI's leaders, and concerns about how the technology is being used, particularly by governments and militaries.

  3. Ethics is becoming a reputational differentiator. The three major providers are diverging sharply in public sentiment. Gemini is the most stable, OpenAI has gone net negative due to IP controversies and its DoD contract, and Anthropic, despite losing that same contract by refusing to allow its AI to be used for mass surveillance and autonomous weapons, earned a degree of public goodwill for taking a principled stance. Trust, not capability, is becoming the defining competitive battleground.