What I Watched
March 8–16, 2026
Claude Code matured into a platform, local AI went mainstream, and open-source models started a price war nobody else can win.

When Your Feed Becomes a Signal
Every week I let my YouTube algorithm run without interference — no manual curation, no pruning, just wherever my curiosity takes me. Then I look back at the full picture. This week, I watched 199 videos across nine days. Most were noise. But the signal hiding inside the noise was unusually clear.
The top channels this week — Chase AI, NetworkChuck, Nate Herk, Simon Scrapes, Nick Saraev, Matthew Berman, Alex Ziskind, Brock Mesarich — aren't random. They represent the current frontier of how working developers and technical entrepreneurs are actually using AI tooling right now. Not hype cycles. Implementation.
Four themes emerged that I think matter. One of them — the maturation of Claude Code into a genuine platform — dominated so completely that I almost missed the other three. But all four are connected, and together they're pointing somewhere interesting.
Claude Code Stopped Being a Tool and Became a Platform
The volume of Claude Code content this week was striking — not just in quantity but in sophistication. We've moved past 'here’s how to get started' tutorials into 'here’s how to build self-improving systems on top of it.' Simon Scrapes' video 'Build Self-Improving Claude Code Skills. The Results Are Crazy.' captures the shift perfectly: Skills — the reusable instruction bundles you attach to Claude — can now be designed to evaluate their own performance and rewrite themselves. That’s a qualitatively different kind of tooling.
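To make that concrete, here is a minimal sketch of the pattern in Python. It's my illustration, not Simon's implementation and not a Claude Code API: the skill path, the `score_output` rubric, and the `call_model` helper are all hypothetical stand-ins.

```python
from pathlib import Path

SKILL_FILE = Path("skills/summarize/SKILL.md")  # hypothetical skill location

def call_model(instructions: str, task: str) -> str:
    """Stand-in for invoking the model with the skill's instructions attached."""
    raise NotImplementedError("wire this to your actual model call")

def score_output(output: str) -> float:
    """Hypothetical rubric; in practice this could be a second model call
    grading the output against the skill's stated success criteria."""
    checks = [
        len(output) < 2000,            # stayed concise
        "TODO" not in output,          # actually finished the job
        output.strip().endswith("."),  # didn't get cut off mid-thought
    ]
    return sum(checks) / len(checks)

def run_with_self_improvement(task: str, threshold: float = 0.8) -> str:
    output = call_model(SKILL_FILE.read_text(), task)
    if score_output(output) < threshold:
        # The "self-improving" part: the failure gets written back into the
        # skill's instructions, so the fix persists across future sessions.
        lesson = (f"\n## Lesson learned\nA past run on '{task}' failed review. "
                  "Stay concise and finish every section.\n")
        SKILL_FILE.write_text(SKILL_FILE.read_text() + lesson)
        output = call_model(SKILL_FILE.read_text(), task)  # retry, amended
    return output
```

The design choice that matters is where the feedback lands: in the instructions themselves, so the improvement persists across sessions instead of evaporating with one chat's context.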
Chase AI tracked several major capability upgrades in rapid succession: the context window grew 5x ('Claude Code’s Context Window Just Got 5x Bigger'), a new visualizer feature apparently replaced entire startup products ('The NEW Claude Visualizer Just Replaced Entire Startups'), and Playwright browser automation became a first-class workflow ('Claude Code + Playwright = INSANE Browser Automations'). Brock Mesarich’s Cowork-focused content — including 'I Set Up Claude Cowork to Work While I Sleep' and 'How to Build $10,000+ Animated Websites with Claude Cowork (INSTANTLY)' — showed non-developers picking up capabilities that used to require engineering teams.
Nick Saraev’s full course on 'How to Build Claude Skills that Generate Revenue' and Simon Scrapes’ breakdown of 'The Claude Code Skills Trap (Most People Fall For This)' together frame a nuanced picture: the ecosystem is maturing fast enough that there are now wrong ways to build in it. That’s a sign of genuine platform status.
Claude Code is no longer a productivity tool — it’s an extensible platform with an ecosystem, and the builders who understand that distinction are already operating at a different leverage point.
Local AI Crossed the Line from Hobby to Infrastructure
A year ago, running a capable LLM locally was a hobbyist flex. This week’s content makes the case that it’s becoming standard infrastructure for serious builders. Alex Ziskind’s 'Your local LLM is 10x slower than it should be' is required viewing — it turns out most people running local models are leaving enormous performance on the table due to configuration choices that are easy to fix once you know what to look for.
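The specifics are worth watching in full, but the flavor of fix is easy to show. A minimal sketch using llama-cpp-python, which is my assumed stack rather than the video's; the path and numbers are placeholders, and the right values depend on your hardware:

```python
from llama_cpp import Llama

# A common slow default: CPU-only inference with a small context window.
# llm = Llama(model_path="models/my-model.gguf")

# The fixed version: offload layers to the GPU and size everything deliberately.
llm = Llama(
    model_path="models/my-model.gguf",  # placeholder path
    n_gpu_layers=-1,  # -1 = offload every layer to the GPU
    n_ctx=8192,       # pick the context window explicitly
    n_threads=8,      # match your physical core count for CPU-side work
)

out = llm("Q: Why is my local LLM slow?\nA:", max_tokens=64)
print(out["choices"][0]["text"])
```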
The hardware side of the conversation is accelerating in parallel. Heavy Metal Cloud’s head-to-head of Mac Studio versus dedicated Nvidia hardware for local inference landed alongside multiple reviews of 128GB machines like the Minisforum MS-S1 Max. NetworkChuck covered not just how to host AI locally but how to integrate Open WebUI with LiteLLM to build a more flexible local stack ('I’m changing how I use AI (Open WebUI + LiteLLM)'). The hidden theme: 'local AI' is no longer just about privacy or cost — it’s about latency, control, and the ability to run agent workflows that can’t be rate-limited by a third-party API.
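The LiteLLM piece of that stack is easy to underestimate: it puts one call signature in front of both local and hosted models, which is what makes a hybrid setup practical. A minimal sketch, assuming an Ollama server on its default port; the model names and prompt are placeholders:

```python
from litellm import completion

messages = [{"role": "user", "content": "One-line summary of this week's AI news."}]

# Local model served by Ollama: no rate limits, no per-token billing.
local = completion(
    model="ollama/llama3",
    messages=messages,
    api_base="http://localhost:11434",  # Ollama's default endpoint
)
print(local.choices[0].message.content)

# Swapping to a hosted model is a one-line change, not a rewrite:
# hosted = completion(model="gpt-4o-mini", messages=messages)
```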
Daniel Jindoo’s 'Run AI Agents Locally? Here’s Why Companies Pay $15K for This' connects the dots. The reason enterprises pay consultants that much to set up local agent infrastructure isn’t the hardware — it’s the configuration knowledge. That knowledge is now being shared freely, which means the arbitrage window for anyone who builds this expertise early is closing.
Local AI is no longer about avoiding cloud costs — it’s about building agent systems with the latency, control, and reliability profile that cloud APIs can’t match.
AI Agents Are Eating Real Business Workflows
The agent content this week moved decisively away from toy demos into documented real-world deployment. iampauljames’ 'Gemini Voice AI Agent WIPED OUT $297/Month Answering Services' isn’t a proof-of-concept — it’s a field report from someone who replaced an actual vendor contract. Jacob Uldall’s walkthrough on building a voice receptionist you can sell to local car detailing businesses follows the same pattern: specific niche, specific price point, working implementation.
Jono Catliff’s 'The Only 12 n8n AI Automations You’ll Ever Need (Steal These)' and Nate Herk’s 'This Invoice Agent Analyzes Images in n8n' represent the workflow automation angle — n8n as the connective tissue between AI capabilities and business systems. Sabrina Ramonov’s '$10M AI Agent Business Idea Would Print Money' zooms out to the business model level, asking what the commercial opportunity looks like when autonomous agents can handle work that used to require FTEs.
The most philosophically interesting piece was Nick Vasilescu’s 'Why have just one agent inside of a computer when you can have many?' — a short but dense exploration of multi-agent computer use. The implication is that the right model isn’t 'one AI assistant per human' but 'swarms of specialized agents per workflow.' Brian Casel’s 'Managing AI via Chat is a Nightmare' captures the other side: as agent deployments scale, the interface for managing them needs to evolve beyond conversation.
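A toy sketch of what that structure looks like, with the agents and pipeline entirely hypothetical: the unit of design is the workflow, and each step gets a narrow specialist rather than one generalist handling everything.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    """A narrow specialist: one role, one job."""
    name: str
    instructions: str

    def run(self, task: str) -> str:
        # Stand-in for a real model call with this agent's instructions.
        return f"[{self.name}] handled: {task}"

# The unit of design is the workflow; each step gets its own specialist.
PIPELINE = [
    Agent("researcher", "Find and cite primary sources."),
    Agent("drafter", "Write a first draft from the research."),
    Agent("reviewer", "Flag factual and tone problems before publishing."),
]

def run_workflow(task: str) -> str:
    result = task
    for agent in PIPELINE:
        result = agent.run(result)  # each specialist consumes the previous output
    return result

print(run_workflow("weekly AI tooling recap"))
```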
The agent economy isn’t coming — it’s deploying. The businesses winning right now are the ones treating agent configuration as a core operational competency, not an IT project.
Open-Source Models Are Waging a Price War Nobody Else Can Win
Three major open-source model stories landed within the same week, and the combined signal is hard to ignore. Matthew Berman’s coverage of Grok going fully open-source ('Elon Musk Open Sources Grok! Uncensored and Massive') marked a significant moment — xAI releasing a frontier-class model with no usage restrictions. Mike’s AI Forge made the case that Gemma 3 is 'pocket-sized' enough to run locally while still performing well enough to make paid API calls unnecessary for many use cases.
The most commercially pointed video was iampauljames’ take on Qwen 3.5: 'China’s Qwen 3.5 AI OBLITERATED The $97/Month Tool Market 😱 (Freelancers Are Switching Fast).' The framing matters — this isn’t just about capabilities, it’s about what happens to subscription-based AI tools when capable open-source alternatives become trivially deployable. The answer is: the tools built on top of proprietary models get repriced or disappear.
The strategic implication for builders: if your product is a wrapper around a single proprietary model, the floor is dropping under you. If your product is the workflow, the integration, the context — the parts that require domain knowledge and implementation work — you’re more defensible than you might think.
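In code terms, the difference is roughly this. If the model sits behind an interface your product owns, a new open-source release is an upgrade, not an extinction event. A minimal sketch with hypothetical stub providers:

```python
from typing import Protocol

class ModelProvider(Protocol):
    """The seam your product owns: swap implementations, keep the workflow."""
    def complete(self, prompt: str) -> str: ...

class HostedModel:
    """Stub standing in for a proprietary API client."""
    def complete(self, prompt: str) -> str:
        return f"(hosted) {prompt[:40]}..."

class LocalOpenModel:
    """Stub standing in for a locally served open-source model."""
    def complete(self, prompt: str) -> str:
        return f"(local) {prompt[:40]}..."

def invoice_workflow(provider: ModelProvider, invoice_text: str) -> str:
    # The defensible part lives here: domain prompts, validation, integrations.
    prompt = f"Extract vendor, total, and due date from:\n{invoice_text}"
    return provider.complete(prompt)

# A cheaper open-source release changes the provider, not the product.
print(invoice_workflow(LocalOpenModel(), "ACME Corp, $1,200, due 2026-04-01"))
```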
Open-source model releases aren’t just technical milestones — they’re market structure events that compress the value of pure API-wrapper businesses and amplify the value of implementation expertise.
Where It All Converges
These four themes aren’t independent trends — they’re different faces of the same shift. Claude Code becoming a platform, local AI becoming infrastructure, agents taking over real workflows, and open-source models commoditizing raw capability: all of these point toward a world where the leverage is no longer in accessing AI, but in knowing what to build with it.
The implication for anyone building a career or business in this space: the moat isn’t model access, it’s configuration knowledge, workflow design, and the ability to compose systems that do real work reliably. The content creators dominating my feed this week aren’t AI researchers — they’re implementers. That tells you something about where the practical value is concentrating right now.
What This Means for My Content
Tracking what I consume forces a kind of honesty. My feed isn’t lying — it reflects where my attention actually lives. This week it lived in the intersection of platform-level AI tooling, local infrastructure, and the emerging economics of agent deployment. That’s not an accident. That’s the frontier I’m building toward.
If you’re not tracking what you consume, you’re missing one of the cheapest feedback loops available.
The content you consume shapes the mental models you build with. Make it intentional!