
Privacy-first email. Built for real protection.
Proton Mail offers what others won’t:
End-to-end encryption by default
Zero access to your data
Open-source and independently audited
Based in Switzerland with strong privacy laws
Free to start, no ads
We don’t scan your emails. We don’t sell your data. And we don’t make you dig through settings to find basic security. Proton is built for people who want control, not compromise.
Simple, secure, and free.

Welcome to Next in Dev
What's up, everyone? Welcome to Next in Dev. In this edition: Payload patches critical SQL injection vulnerabilities, OpenAI tests ads in ChatGPT, and Anthropic commits to covering electricity costs from AI infrastructure.
PAYLOAD
First up: the Payload team issued two vulnerability announcements last week. The two vulnerabilities are patched as of versions 3.73 and 3.74, and in both cases you're unaffected if you use MongoDB as your database.
The first vulnerability is a critical SQL injection flaw. You're affected if you use a Drizzle-based database adapter and have a JSON or richText field whose read access is set to true or to a Where query. If that applies to you, upgrade to version 3.74 or higher.
Technically, that first one is fixed in version 3.73. But the second vulnerability affects access control in the Postgres and SQLite database adapters when they use serial IDs. It only applies if you have multiple auth collections and users in those collections share the same numeric ID. Beyond Mongo databases being unaffected, you're also safe if you have only one auth collection or you use UUIDs. Upgrade to 3.74 or later to address this one.
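For context, here's a minimal sketch (with a hypothetical collection and field name) of the kind of configuration the first advisory describes: a richText or JSON field with open read access, running on a Drizzle-based adapter. It's an illustration of the affected pattern, not a reproduction of the exploit.

```ts
import type { CollectionConfig } from 'payload'

// Hypothetical collection illustrating the affected pattern: a richText (or JSON)
// field with open read access on a Drizzle-based (Postgres/SQLite) adapter.
// Versions before the patch were vulnerable to SQL injection through fields like this.
export const Posts: CollectionConfig = {
  slug: 'posts',
  fields: [
    {
      name: 'content',
      type: 'richText',
      access: {
        // Open field-level read access (the advisory also flags read access
        // set to a Where query).
        read: () => true,
      },
    },
  ],
}
```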
Now back to the fun stuff.
The team released versions 3.76 and 3.76.1. Version 3.76 adds an export and import limit feature to the import/export plugin, giving you a new per-collection limit control.
Both releases address a few bugs as well. Key fixes include an updated build config that resolves a compilation issue with Vue's live preview, dropped support for Next.js versions with known CVEs, and CSP headers added to SVG uploads.
RECENT VIDEO:
I built a website in 2 days using the tools I discuss here in my newsletter and the technologies I use in my videos. Here's the process I used.
SHADCN
Shadcn released all its blocks for both Radix and Base UI. Blocks will use your chosen library if you've already set up your project, so there's no real change to your workflow. Simply add blocks as you would with any component.
NEXT.JS
This past week, the Next.js team released 10 canary versions (v16.2.0-canary.28 through .37) focused heavily on Turbopack infrastructure improvements and developer experience enhancements. The team upgraded React three times, added instant validation features for both development and client navigation, and finished adding server-side Hot Module Replacement (HMR) infrastructure to Turbopack's Node.js runtime. Notable bug fixes include deprecating a Node.js utility replacement, resolving middleware adapter consistency issues, fixing an image optimization bug that broke low-quality settings, and plugging browser memory leaks caused by unclosed prefetch streams. The releases also introduce AI-focused features by bundling documentation directly in Next.js and auto-generating AGENTS.md files in new projects.
These releases show Next.js is still investing heavily in build performance through Turbopack optimizations like persistent caching improvements and filesystem benchmarking fixes. The completed server HMR implementation means developers get instant feedback on both client and server code changes without full rebuilds. The new turbopackIgnoreIssue config option and experimental type: "text" support give developers more control over their build processes. The bundled documentation and AI tooling integration shows Next.js is preparing for AI-assisted development workflows, making framework conventions immediately accessible to coding assistants without external API calls.
CLOUDFLARE
Cloudflare launched "Markdown for Agents," a feature that converts HTML pages to markdown when AI agents request it by sending an Accept: text/markdown header. The conversion happens on the fly at Cloudflare's edge, and Cloudflare claims it reduces token usage by up to 80%. It's available now in beta for Pro, Business, and Enterprise plans at no additional cost.
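To picture the mechanism from the agent side, here's a minimal sketch in TypeScript, assuming the feature works through standard content negotiation as the announcement describes. The URL is a placeholder.

```ts
// Minimal agent-side sketch: the client advertises that it prefers markdown,
// and Cloudflare converts the HTML at the edge before it reaches the agent.
async function fetchAsMarkdown(url: string): Promise<string> {
  const res = await fetch(url, {
    headers: {
      // Tell Cloudflare the agent wants markdown instead of HTML.
      Accept: 'text/markdown',
    },
  })
  // If the site sits behind Cloudflare with Markdown for Agents enabled,
  // the body should come back as markdown; otherwise you get the usual HTML.
  return res.text()
}

fetchAsMarkdown('https://example.com/docs/getting-started').then((md) =>
  console.log(md.slice(0, 500)),
)
```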
This matters because it addresses a fundamental inefficiency in how AI systems browse the web. Every AI agent currently has to strip away navigation bars, styling, and other HTML overhead to get at the actual content. This wastes computation, adds latency, and burns tokens on packaging rather than substance. By handling the conversion at the source, Cloudflare makes the entire ecosystem more efficient and gives content creators control over how their content is structured for AI consumption. As web traffic increasingly comes from AI agents rather than human browsers, treating agents as first-class citizens with purpose-built content delivery becomes critical for businesses that want to be discovered and properly understood by AI systems.
It's starting to sound like we're getting close to "dead-internet" theory, where the internet exists only for bots.
AWS
AWS rolled out several infrastructure upgrades last week, including new EC2 instances with Intel Xeon 6 processors that AWS claims deliver up to 43% better performance, plus Network Firewall price cuts and expanded support for container deployments. Amazon also enhanced authentication options by adding Sign in with Apple for AWS Builder ID and introduced mutual TLS support for CloudFront origins. On the AI front, Amazon Bedrock now offers Claude Opus 4.6 and structured outputs for more reliable JSON responses from models.
This update touches multiple layers of the development stack. Backend developers get more powerful compute options and better database replication across accounts, frontend teams benefit from improved CDN security with mutual TLS, and anyone building AI-powered features gets access to more reliable model outputs through structured JSON schemas. The authentication improvements also simplify access management across AWS services, which reduces friction for teams managing multiple AWS accounts or working with federated identities.
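If "structured outputs" is unfamiliar, here's a rough illustration of the idea in TypeScript: you describe the JSON shape you want with a schema, and the model is constrained to return conforming JSON. The schema and the Invoice type below are invented for this example, and the exact Bedrock API field for attaching a schema isn't shown.

```ts
// Illustrative only: a JSON Schema describing the shape the model must return.
const invoiceSchema = {
  type: 'object',
  properties: {
    vendor: { type: 'string' },
    total: { type: 'number' },
    currency: { type: 'string', enum: ['USD', 'EUR', 'GBP'] },
    lineItems: {
      type: 'array',
      items: {
        type: 'object',
        properties: {
          description: { type: 'string' },
          amount: { type: 'number' },
        },
        required: ['description', 'amount'],
      },
    },
  },
  required: ['vendor', 'total', 'currency'],
} as const

// With structured outputs enabled, the response is guaranteed to parse as JSON
// matching the schema, so downstream code can rely on a type like this without
// defensive re-validation.
interface Invoice {
  vendor: string
  total: number
  currency: 'USD' | 'EUR' | 'GBP'
  lineItems?: { description: string; amount: number }[]
}
```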
DOKPLOY
Dokploy v0.27.0 focuses heavily on stability and developer experience improvements, patching multiple security vulnerabilities (including 12 CVEs across Next.js, Hono, and other dependencies) and fixing critical deployment bugs like stuck remote server deployments and preview deployment regressions. The release adds practical features like health check hooks, better container error visibility in logs, and optional internal URLs for GitLab and Gitea integrations. It also introduces a license key system and account linking for the cloud version.
AI
OpenAI
OpenAI is testing ads in ChatGPT for U.S. users on the Free and Go tiers. Paid plans currently remain ad-free. OpenAI claims the ads are clearly labeled, won't influence ChatGPT's answers, and are matched to conversation topics without sharing chat details with advertisers. I don't know how that's possible while sharing intent-based data. But sure.
Users will be able to dismiss ads, delete ad data, or manage personalization settings at any time. OpenAI frames this as funding infrastructure to keep free access fast and reliable while maintaining conversation privacy. If you can't tell, I'm skeptical.
Cursor
Cursor released Composer 1.5, an agentic coding model trained with 20x more reinforcement learning than Composer 1, exceeding the compute used to train the base model itself. Cursor claims the model adapts its "thinking" depth based on problem difficulty and includes self-summarization capabilities that let it continue working when it hits context limits.
Cursor restructured its pricing model around two usage pools: one for their own Auto and Composer 1.5 models and another for external API models at standard rates. Composer 1.5 now has 3x the usage of Composer 1, which reflects Cursor's belief that developers are shifting from autocomplete to full-codebase agentic coding. They position Composer 1.5 as scoring above Claude Sonnet 4.5 on agent benchmarks but below top frontier models like Opus and Codex, offering a middle ground between cost and usefulness.
Anthropic
Anthropic quietly upgraded nonprofit access to include Claude Opus 4.6 at no additional cost for Team and Enterprise plans, a change from previous restrictions that limited nonprofits to Sonnet or lower tiers.
This matters because it gives access to top-tier AI for organizations that might not otherwise afford it, which could influence how nonprofits approach technical challenges. For developers working with or in the nonprofit sector, this opens up more capabilities without budget constraints. It's also worth noting as a broader industry signal: as AI companies compete for market share, strategic pricing for specific sectors is becoming a differentiation point beyond raw model performance.

Anthropic also announced they will cover electricity price increases that consumers face because of their data centers, including paying 100% of grid infrastructure upgrade costs and procuring new power generation to offset demand-driven price hikes. They also say they're investing in ways to reduce grid strain during peak demand, deploying water-efficient cooling, and creating local jobs, all while pushing for federal permitting reform to accelerate energy infrastructure development. The commitment recognizes that training frontier AI models will soon require gigawatts of power, with the AI sector needing at least 50 gigawatts over the next several years.
This matters because it addresses a growing tension between AI's infrastructure needs and the public burden they create. Data centers can significantly raise local electricity costs through both infrastructure upgrades and market demand. There are societal, economic, ethical, and environmental impacts to AI, and it's important to address them. It's nice having these tools at our disposal, but it's short-sighted to ignore the impact they can have on the world around us.
Claude Code
This past week, the team behind Claude Code released five updates (versions 2.1.34 through 2.1.39) with significant improvements across security, performance, and usability. Version 2.1.36 introduced fast mode for Opus 4.6, giving developers different options for speed. Version 2.1.37 fixed availability issues in fast mode after enabling /extra-usage. Version 2.1.38 addressed critical security concerns by improving bash permission matching for commands with environment variable wrappers, while also resolving VS Code terminal scroll regressions and duplicate session bugs. Version 2.1.39 brought the most extensive fixes: preventing nested Claude Code sessions, fixing MCP tool image streaming crashes, improving error visibility, and enhancing terminal rendering performance with fixes for character loss at screen boundaries.
Gemini
Google released Gemini 3 Deep Think, an upgrade to their reasoning mode designed for science, research, and engineering challenges where problems lack clear guardrails and data is messy or incomplete. They claim that the model achieves breakthrough performance. It's available now to Google AI Ultra subscribers and through the Gemini API via an early access program for researchers and enterprises.
RAILWAY
Updates include one-click DNS setup for Cloudflare-managed domains. This eliminates the copy-paste CNAME workflow by integrating directly with Cloudflare to configure DNS records automatically.
They also introduced horizontal scaling without a redeploy. You can now add or remove replicas instantly, within a region or across multiple regions, without waiting for a full deployment cycle.
You can now use TXT record verification for trusted domains. This removes the previous requirement to deploy a throwaway service just to verify company email domains for auto-workspace onboarding.
Lastly, the team overhauled the Railway documentation, moving away from CSS-in-JS to a more modern, easily navigated structure that's open source and accepts contributions.
Railway also launched an Agent Directory that provides integration guides for eight major AI coding agents. Each agent connects to Railway through Skills, MCP Server, CLI, or GitHub Autodeploys depending on the tool. The directory includes detailed comparison charts showing which agents support IDE, Terminal, Cloud, Extension, Standalone, and Open Source deployment modes.
This is important because Railway is positioning itself as the primary choice for deployment for agentic workflows, addressing a real infrastructure gap as coding agents become more mainstream.
Use my affiliate code to sign up for Railway if you want.
What did I miss? There’s so much happening in modern web dev that I’m sure I have missed something. Please share your thoughts in the comments or reply to this email. I want to address your suggestions and may include them in future newsletters.
Thanks for reading. See you next time.
How did I do?
Turn AI Into Your Income Stream
The AI economy is booming, and smart entrepreneurs are already profiting. Subscribe to Mindstream and get instant access to 200+ proven strategies to monetize AI tools like ChatGPT, Midjourney, and more. From content creation to automation services, discover actionable ways to build your AI-powered income. No coding required, just practical strategies that work.




