Ep. 208: Q1 Trends Briefing - Model Release Frenzy, AI Lobbying, & Anthropic v. U.S. Government
TL;DR
Q1 was a genuine model-release sprint, not incremental churn — Anthropic shipped Claude Opus 4.6 and Sonnet 4.6, OpenAI launched GPT-5.3 Codex and GPT-5.4, Google pushed Gemini 3 DeepThink and 3.1 Pro, and xAI dropped Grok 4.2, making custom evals increasingly critical for deciding which model actually matters for your use case.
AI is becoming a hard political issue with real money behind it — Paul and Mike highlight nearly $300 million flowing into pro-AI midterm efforts from groups tied to figures like David Sacks, Greg Brockman, Joe Lonsdale, and Marc Andreessen, while the left is starting to organize around data center, labor, and environmental concerns.
Anthropic’s fight with the U.S. government became one of the quarter’s defining dramas — after refusing Pentagon demands for unrestricted Claude access for uses like domestic surveillance and autonomous weapons, Anthropic was labeled a supply-chain risk until a federal judge blocked the designation as an “Orwellian notion.”
The biggest blocker to enterprise AI adoption is leadership, not technology — Paul argues companies failing to get ROI from AI usually have CEOs who haven’t clearly explained the future of work, set expectations for AI usage, or made AI literacy and productivity gains part of how teams are measured.
The ‘SaaS apocalypse’ is no longer theoretical — after Anthropic announced legal and sales plugins, roughly $300 billion was wiped from software and data stocks in two days, with names like LegalZoom, HubSpot, and ServiceNow hit as buyers start asking whether frontier models and agents will eat core software features.
The vibe around AGI and job disruption shifted noticeably in Q1 — from Atlassian and Block tying layoffs to AI, to Uber’s CEO saying AI could replace 70-80% of human work within a decade, to “move 37 moments” where experts realize the machine is better, the conversation has moved from abstract hype to personal and organizational reality.
The Breakdown
A quarter so packed it needed a reset episode
Because Paul was on vacation, the show used the week to zoom out and recap roughly 150 topics covered across 12 episodes in Q1 2026. The framing is simple but effective: so much happened in three months that some of it already feels a year old.
Frontier labs went into release overdrive
Mike opens with the “model release frenzy,” rattling off major launches from Anthropic, OpenAI, Google, and xAI in a way that almost sounds absurd when read back-to-back. Paul’s takeaway is practical: benchmarks are nice, but companies now need their own evals, because picking the right model for coding, strategy, or no-code app building is becoming a business decision, not a nerdy preference.
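To make the custom-evals point concrete, here is a minimal sketch of what such a harness can look like. Everything in it is hypothetical: the model stubs stand in for real API clients, and the graders are toy stand-ins for business-specific checks — the structure (your prompts, your pass/fail criteria, a per-model pass rate) is the idea.

```python
def run_eval(models, cases):
    """Score each candidate model against business-specific test cases.

    models: dict mapping a model name to a callable(prompt) -> answer text.
    cases:  list of (prompt, grader) pairs, where grader(answer) -> bool.
    Returns a dict mapping each model name to its pass rate (0.0 to 1.0).
    """
    return {
        name: sum(grade(ask(prompt)) for prompt, grade in cases) / len(cases)
        for name, ask in models.items()
    }

# Hypothetical stubs standing in for real model API clients:
models = {
    "model_a": lambda p: "4" if "2+2" in p else "unsure",
    "model_b": lambda p: "4",
}
# Toy cases standing in for your real prompts and acceptance checks:
cases = [
    ("What is 2+2? Answer with a digit.", lambda a: a.strip() == "4"),
    ("Name our product tier.", lambda a: "unsure" not in a),
]
scores = run_eval(models, cases)  # {"model_a": 0.5, "model_b": 1.0}
```

In practice the stubs would be replaced with calls to each vendor's API, and the graders with checks that reflect the actual use case (correct tone, valid JSON, factual accuracy against a known answer) — which is exactly why a public benchmark can't answer the question for you.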
AI money enters politics in a serious way
The conversation then turns to lobbying, where nearly $300 million is already lining up behind pro-AI political operations heading into the U.S. midterms. Paul’s read is that this won’t stay neatly partisan for long — once jobs and energy become the issue, both parties will adjust their messaging fast.
Anthropic vs. the Pentagon becomes must-watch AI drama
The quarter’s most cinematic story is Anthropic refusing to remove red lines around mass surveillance and autonomous weapons, then getting tagged as a supply-chain risk by the Pentagon. Mike runs through the legal escalation, Microsoft’s support, and the judge’s sharp rebuke, and Paul half-jokes that the whole post-DeepMind era of AI now deserves a Netflix series.
OpenClaw made agentic AI feel real — and a little unhinged
OpenClaw and the AI-agent social network Moltbook gave people a tangible glimpse of autonomous agents talking, acting, and occasionally going off the rails. Paul says he still hasn’t built with it directly because the risk is real, but stories like Claire Vo’s — where an early OpenClaw deleted her family calendar before eventually becoming a powerful assistant — make the near future suddenly easier to picture.
Enterprise AI is stuck on the people problem
One of the strongest sections is Paul’s argument that poor AI adoption is usually a leadership failure in disguise. If the CEO hasn’t articulated what the future of work looks like, provided tools and training, and made AI competency an expectation, then AI stays trapped in little pockets — marketing here, ops there — instead of transforming the company.
SaaS is getting squeezed from both ends
The “SaaS apocalypse” section connects market panic with product reality: frontier models are swallowing features, agents threaten seat-based pricing, and customers are starting to wonder why they should pay extra for AI inside software they already license. Paul makes it personal with HubSpot and SDR workflows — if he has to stop and ask whether his current stack will even enable the future, that’s already a problem for vendors.
Agents, layoffs, and ‘move 37’ moments changed the mood
The back half of the ranking ties together three darker, more existential themes: labs are racing to turn agents into enterprise products, layoffs are increasingly being linked to AI, and more professionals are having “move 37 moments” where they realize the machine can match or exceed them. By the time they reach the final trend — the “vibe shift” around AGI — Paul describes the eerie feeling of talking to someone deep in OpenClaw one minute, then looking around a room full of people with careers and kids and thinking, they have no idea what’s coming.