In partnership with

Watch or listen to this newsletter.

Voice dictation that doesn't mangle your syntax.

Most dictation tools choke on technical language. Wispr Flow doesn't. It understands code syntax, framework names, and developer jargon, so you can dictate directly into your IDE and hit send without fixing anything.

Use it everywhere: Cursor, VS Code, Slack, Linear, Notion, your browser. Flow sits at the system level, so there's nothing to install per-app. Tap and talk.

Developers use Flow to write documentation 4x faster, give coding agents richer context, and respond to Slack without breaking focus. 89% of messages go out with zero edits.

Millions of users worldwide. Now on Mac, Windows, iPhone, and Android (free and unlimited on Android during launch).

Welcome to Next in Dev

What's up, everyone? Welcome to Next in Dev, a weekly overview of all the news I could find in the modern web dev industry. This week, Cloudflare rebuilt Next.js with AI in a week, Vercel's CEO called it "vibe-coded," and I have some thoughts about glass houses. Plus, Cursor agents can now control their own computers, and ChatGPT ads are already way more aggressive than anyone expected.



CLOUDFLARE

One engineer at Cloudflare rebuilt Next.js from scratch on top of Vite in under a week. The result is called vinext, a planned drop-in replacement for Next.js that deploys to Cloudflare Workers with a single command. Replace next with vinext in your scripts and everything else stays the same. Your app directory, pages directory, and next.config all work as-is.
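If the drop-in claim holds, the migration would be a one-line swap in package.json. A sketch of what that might look like, assuming vinext mirrors the next CLI's dev/build/start subcommands (the exact deploy subcommand name is my assumption, not confirmed by Cloudflare's announcement):

```json
{
  "scripts": {
    "dev": "vinext dev",
    "build": "vinext build",
    "start": "vinext start",
    "deploy": "vinext deploy"
  }
}
```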

This isn't a wrapper around Next.js output like OpenNext. It's a reimplementation of the API surface: routing, server rendering, React Server Components, server actions, caching, and middleware, all built as a Vite plugin. Early benchmarks on a moderately sized app show faster builds with Vite 8 and Rolldown, and much smaller client bundles. It covers almost all of the Next.js 16 API surface with over 2,000 tests, and Cloudflare claims it's already running in production on at least one customer site. But it is experimental, and the whole project cost about $1,100 in API tokens.

The framing is all collaboration and open source, but the subtext is clear: Cloudflare, and all of us, are tired of waiting for Next.js adapters. So they decided to just reimplement the framework. The "one engineer, one week" narrative is great marketing, but the real question is whether one engineer can maintain something of this size, not whether one engineer can rebuild a framework. Building it is the fun part. Keeping up is the job.

Of course, Vercel responded. Vercel's CEO posted that Vercel's security team identified and responsibly disclosed 7 security vulnerabilities in vinext, calling it "Cloudflare's vibe-coded framework." He framed it as Vercel extending help in the interest of the public internet's security.

Look...responsible disclosure is good. Finding vulnerabilities is good. But the framing here is dripping with condescension. We spent an entire summer upgrading vulnerable Next.js projects. Glass houses, Guillermo.

Vinext is explicitly labeled experimental. It's less than two weeks old. Attacking it as if Cloudflare is shipping it as a production replacement is an immature move. And calling it "vibe-coded"? This is coming from a CEO who shills AI, builds with AI, and continues to enable AI across Vercel's entire product line with Vercel's AI gateway. So does he not like AI anymore?

Separately, Cloudflare disclosed a 6-hour outage on February 20, where a cleanup task meant to automate prefix removal accidentally deleted 25% of all bring-your-own-IP prefixes. The root cause: a query parameter with no value evaluated to an empty string, skipped the filter, returned everything, and the system deleted it all.
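It's a classic bug class: an empty filter value is falsy, so the filter silently becomes a wildcard. This is an illustrative reconstruction of the pattern, not Cloudflare's actual code; the names and data are made up:

```python
# Toy table standing in for the bring-your-own-IP prefix records.
PREFIXES = [
    {"account": "a1", "cidr": "203.0.113.0/24"},
    {"account": "a2", "cidr": "198.51.100.0/24"},
]

def select_for_deletion(account_id):
    # Buggy version: "" is falsy, so the filter branch is skipped
    # and the cleanup targets every prefix in the table.
    if account_id:
        return [p for p in PREFIXES if p["account"] == account_id]
    return list(PREFIXES)

def select_for_deletion_safe(account_id):
    # Fail small: a missing or empty filter is an error, never a wildcard.
    if not account_id:
        raise ValueError("refusing to run cleanup without an explicit filter")
    return [p for p in PREFIXES if p["account"] == account_id]
```

The fix is boring but universal: destructive operations should treat an absent filter as a reason to abort, not a license to match everything.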

The real irony is this bug shipped as part of their "Fail Small" initiative, designed specifically to prevent large blast-radius failures.

RECENT VIDEO:

I recently reviewed Strapi and compared it to Payload. Check out all the ways Strapi falls short:

FIGMA

Figma Make added new connectors, plus support for any remote MCP server via custom connectors. Organization admins can manage which connectors are available and who gets access, which matters for larger teams worried about data flowing freely between tools.

Figma is making a clear play to become the hub everything else plugs into, not just the place where design happens.

NEXT.JS

Next.js 16.2.0 has burned through 62 canary releases and counting, with work spread across performance, caching, and the new Instant Navigation feature. Highlights include a new disk cache for next/image with a configurable max size, Turbopack Subresource Integrity support, an Instant Navigation dev tools toggle, a dedicated @next/playwright testing helper, and in the latest canary, cached navigations that serve segments instantly on repeat visits. On the Turbopack side, there's better sourcemap handling, reduced memory usage during compaction, and an update to Rust edition 2024.

Sixty-two canaries for a single minor version raises the question of whether this pace serves developers or just makes the project harder to follow.

CURSOR

Cursor cloud agents now get their own virtual machines where they write code, build it, open a browser, click around to verify, and produce video demos. You can trigger them from the desktop app, web, mobile, Slack, or GitHub, and they produce PRs with demo artifacts attached. You can also remote into the agent's desktop to test the modified software yourself without checking out the branch locally. Cursor says they're already merging PRs from these agents.

The big question is how this scales beyond Cursor's own codebase. Dogfooding is the easiest possible test case because the team building the agent also controls the repo, the tooling, and the definition of done. Still, if even half of it translates to external projects, the gap between Cursor and everyone else in the AI editor space just got a lot wider.

AI

ChatGPT is now serving ads on the very first prompt, not deep in conversations as predicted. A user asking about booking a weekend trip got sponsored placements immediately, complete with brand favicons and "Sponsored" labels. That's just search ads with a chatbot skin.

Anthropic released the third version of its Responsible Scaling Policy, the framework it uses to decide what safeguards are required before training or deploying increasingly capable models. The big structural change: separating what Anthropic can do alone from what it thinks the entire industry needs to do collectively. They're introducing regular Risk Reports every 3-6 months with external third-party review, and a public Frontier Safety Roadmap they'll grade themselves on.

The post is candid about what hasn't worked. Anthropic admits capability thresholds turned out more ambiguous than expected, government regulation has moved slower than hoped, and higher-level safeguards may be impossible for any single company to implement alone.

The core admission is that the political environment isn't cooperating and voluntary frameworks have limits. This is important because it's the gap every AI safety effort runs into.

RAILWAY

Railway is partnering with Fastly to add DDoS protection to every public service on the platform, enabled by default with zero configuration. All public traffic now routes through Fastly's 100+ points of presence where malicious traffic gets filtered before it reaches your infrastructure. No proxy to set up, no plan to upgrade, no feature flag to enable. They're also building CDN caching on top of this partnership.

On top of that, Railway is rolling out domain purchasing directly inside the platform. Search, buy, and attach domains to services without leaving Railway. DNS configures itself automatically, WHOIS privacy is included, and pricing is at cost. No more bouncing between your registrar, Railway, and your DNS provider just to set up a domain. This is currently in beta, but I anticipate this moving fast.

The DDoS partnership is the bigger story. Railway previously told customers to bring their own protection, which meant juggling external providers and nameservers. With AI-driven bot traffic, scraping at scale, and the general increase in automated attacks, "bring your own DDoS" was becoming a real liability for a platform targeting smaller teams without a dedicated infrastructure person. Smart move to make it table stakes rather than an upsell.

Use my affiliate code to sign up for Railway if you want.

CLAUDE CODE

Claude Code pushed updates through version 2.1.59, heavily focused on stability and memory management. The standout new feature is auto-memory. Claude now automatically saves useful context from your sessions, manageable via the /memory command. There's also a new /copy command with an interactive picker for selecting individual code blocks from responses, and smarter always-allow prefix suggestions for compound bash commands.

They also released a large volume of fixes: unbounded memory growth was patched, and bugs in task state, shell execution, circular buffers, file history snapshots, and agent team sessions were resolved. Windows users got crash fixes, multi-agent sessions got memory improvements, and a race condition with MCP OAuth token refresh when running multiple instances was fixed. If Claude Code has been getting sluggish during long sessions, update immediately.

What did I miss? There’s so much happening in modern web dev that I know I missed something. Please share your thoughts in the comments or reply to this email. I want to address your suggestions and may include them in future newsletters.

Thanks for reading. See you next time.

How did I do?

Tell me what you thought of this newsletter. All feedback makes me better, which makes this better for you!


Free email without sacrificing your privacy

Gmail tracks you. Proton doesn’t. Get private email that puts your data — and your privacy — first.
