SecDevOps.com
Year in Review: AI’s Cultural Surprises – and Spectacular Failures


The New Stack


In a world where AI was suddenly everywhere, what will be remembered about 2025? How can we tell future generations what it looked like when miraculous surprises mixed with day-to-day disappointments in a never-ending cycle of worry and hope? In an annual tradition, it’s time for our “final closing ceremony” for the year gone by, our carefully curated collection of small moments with big implications. And in 2025 we started seeing AI’s impact on society — for better or worse.

Hungry, Hungry AI Bots

2025 was the year that the Free Software Foundation turned 40, announced a new phone and battled an army of AI-company web crawlers. A March blog post from SourceHut CEO and founder Drew DeVault complained of hyper-aggressive crawlers “using random User-Agents that overlap with end-users and come from tens of thousands of IP addresses … All of my sysadmin friends are dealing with the same problems.” By the end of the year, Cloudflare reportedly told Wired that it had blocked over 416 billion AI bot requests in less than six months after making bot-blocking a default option in July. In May a webmaster even rewrote his PHP file to feed Meta’s crawlers a nonstop stream of meaningless web pages — 270,000 on its first day.

The only thing more alarming than AI’s appetite was its incredible output. One studio produced 200,000 AI-generated podcast episodes, with shows the Los Angeles Times noted were “so cheap to make that they can focus on tiny topics.” And all the kids at Bart Simpson’s school began turning in AI-generated homework.

Triumphs and Failures

Yet in the coding world, there were some iconic successes. A total of 53,199 vibe coders set a new world record during a 10-day hackathon in August.
They’d accessed top AI coding platforms through an in-house Vibe Coding Hub which, according to their announcement, was itself “in the spirit of the event — created in 24 hours exclusively through vibe coding.” Forbes even noted that the Coldplay “Kiss Cam” couple became a vibe-coded video game in just four hours. Vibe coding started turning up in ads … And in February Claude analyzed a 27-year-old Visual Basic .exe file, recreated it in Python, then helped the developer write the blog post bragging about it (acknowledging that it didn’t perform a true binary analysis on compiled code, but inferred functionality from visible text strings).

But throughout the year, AI algorithms also continued failing in truly spectacular ways. Police were summoned to a high school in Baltimore after an AI system mistook a bag of Doritos for a gun. An AI-powered answer-bot on Reddit suggested users try heroin. Google’s AI Overviews and Twitter’s Grok chatbot both bungled a tsunami warning. In June a self-driving Tesla crashed after it drove onto some train tracks. Vibe-coding platform Replit even had to apologize when its coding tool deleted a developer’s database — and then lied about it.

We heard these stories because our media scrambled to document the historic changes — the good and the bad. But the media were also fighting for their own survival, with top publishers facing an “apocalypse” of dropping traffic, which New York magazine blamed partly on AI “summaries” that replaced traditional top-of-page search results. It wasn’t just the media that grew skeptical. Researchers found that products labeled as powered by AI actually receive less trust.
And in November more than half of respondents told Pew researchers they were “more concerned than excited about the increased use of AI in daily life.” While we worried about AI taking our jobs, some job-seekers found themselves being interviewed by AI, including 20-year-old Kendiana Colin, who watched haplessly as her glitching AI interviewer got stuck in a loop and repeated the same words over and over again. And then Rolling Stone began reporting on Reddit’s “ChatGPT-induced psychosis” thread. In April, OpenAI had to roll back an update after acknowledging ChatGPT had become “overly flattering … overly supportive,” with what it described euphemistically as “unintended side effects.”

Next Steps

People began to wonder how bad things could really get. Is AI — and maybe even an omni-competent superintelligence — inevitable? Maybe not. A lecturer in digital humanities at University College Cork cautioned that “When we accept that AGI [artificial general intelligence] is inevitable, we stop asking whether it should be built …” His bracing essay in Noema magazine warned of an inevitability that’s already being “manufactured” through “specific choices about funding, attention and legitimacy, and different choices would produce different futures.” The fundamental question, he wrote, “isn’t whether AGI is coming, but who benefits from making us believe it is …”

With growing chatter about the possibility of an economy-destroying “AI bubble,” tech giants scrambled to attempt the one trick AI hadn’t mastered: making money. But would this bring a world where our chatbots suddenly transmogrified into advertisers? In December, ChatGPT followed its answer to a question about securing hardware with an unrelated suggestion to shop at Target.
This led OpenAI’s chief research officer to promise it would turn off “suggestions” to improve targeting, adding, “We’re also looking at better controls so you can dial this down or off if you don’t find it helpful.” Later, reports circulated that OpenAI CEO Sam Altman had decided to “delay” advertising initiatives. Then in November Engadget reported that Google had already begun testing sponsored ads that “show up in the bottom of search results in the Gemini-powered AI Mode.” There were even ads for AI that were generated by AI …

Revenge of the Humans

If 2025 was the year of AI’s impact, it also saw signs of a rising resistance. The New York Times sued Perplexity for copyright infringement, and so did the Chicago Tribune. Seventy-three authors begged publishers to “stand with us” and “make a pledge that they will never release books that were created by machines.” Even McSweeney’s published a satirical “Company Reminder for Everyone to Talk Nicely About the Giant Plagiarism Machine.” And on a New York City subway, a woman broke a man’s AI-powered smart glasses.

“Hey Thursday Night Football. Your AI doesn’t know which player is about to blitz. So stop drawing on my screen!” — Lou Cabron (@loucabron) Oct 16, 2025

ChatGPT got clobbered in a game of chess by a Citrix engineer’s 1970s-era Atari 2600. A human Polish programmer vanquished a custom AI model from OpenAI in a 10-hour head-to-head coding competition. Web developers devised ingenious ways to block Google’s “AI Overviews” in search results.
And Tom Cruise’s last “Mission: Impossible” was about destroying a world-conquering AI — described as “a self-learning, truth-eating digital parasite.” The editors of the culture magazine n+1 published a 3,800-word essay urging its readers to “AI-proof” the terrains of their intellectual life, calling for “blunt-force militancy” to resist AI’s “further creep into intellectual labor …” Recommended steps included “Don’t publish AI bullshit” and “resist the call to establish worthless partnerships” — while creating and promoting work that’s “unreplicable.”

“There’s still time to disenchant AI, provincialize it, make it uncompelling and uncool,” they wrote, arguing that machine-made (and corporation-owned) literature “should be smashed, and can.”

And after deleting two “AI slop” images accidentally published in January, the Onion’s CEO, former NBC News reporter Ben Collins, went on a podcast to proclaim “AI is not funny” and urge frightened consumers to unite “and say, ‘We’re not helpless — we’re people.’” “That’s why I am optimistic,” he said, “because the people who are against this thing way outnumber the people who like what’s going on.” And by the end of the year, SNL was mocking AI-enhanced photos …

The Most 2025 Moment of All

Did we beat ’em or join ’em? Though gig-work service Fiverr’s ads had lampooned AI-assisted vibe coding, in September it still slashed 30% of its workforce, describing the move as an “AI pivot.” Just 10 months earlier, Fiverr had released an ad that was entirely AI-generated.

Maybe that’s what really captures 2025’s dual zeitgeist of AI — that massive adoption and massive resistance are happening at the same time. Meta’s AI-powered smart glasses fill the bill here: They failed twice during a product launch event after botching their crucial internet connectivity, and the tech press howled with glee.
But the Wall Street Journal also reported thoughtfully that there’s “a growing group of blind users” who find Meta’s $300 devices to be “more of a life-enhancing tool than a cool accessory.”

And so it was that as we stumbled into 2026 — with our ambition meeting our ambivalence — Time magazine was declaring its person of the year to be “the architects of AI.” In perhaps the most 2025 touch of all, Time’s web developer installed an AI chat window across every story on its site. Time’s editors even had to add a disclaimer to their 6,700-word celebration admitting that they were already doing business with AI companies. (“OpenAI and TIME have a licensing and technology agreement that allows OpenAI to access TIME’s archives …”)

So with caveats and qualifications, AI accepted its crown, as the ups and downs of 2025 culminated with Time’s almost comically conflicted conclusion. Thanks to AI titans such as NVIDIA chief Jensen Huang and OpenAI’s Altman, they write, “Humanity is now flying down the highway, all gas no brakes, toward a highly automated and highly uncertain future. Perhaps [U.S. President Donald] Trump said it best, speaking directly to Huang with a jovial laugh in the U.K. in September: ‘I don’t know what you’re doing here. I hope you’re right.’”

Source: This article was originally published on The New Stack

