The future belongs to SWE

You were told you’d be replaced.

You’ve heard it a hundred times by now. AI will replace software engineers. It writes code faster than you, it doesn’t need sleep, and it never argues in code review. The people saying this are mostly LinkedIn influencers, CEOs on earnings calls, and commentators who have never deployed anything more complex than a WordPress plugin. They look at AI and see a magic box that does computer stuff. Infrastructure, orchestration, deployment pipelines, failure modes, the thousand decisions that go into putting software in front of actual users: invisible to them.

Here’s what’s funny about the “developers are toast” narrative: the people making the claim have no idea how the thing they’re hyping actually works. They see AI generate a function and think the job is done. Getting that function into production, keeping it running, handling the edge cases? That’s where the actual work lives. The gap between “AI can write code” and “AI can replace the people who understand systems” is where the entire argument falls apart.

If you actually understand how these systems work, you’re the one everyone else will depend on. Software engineers are becoming the most important people in the room.

Bigger Than Fire

There’s a detailed scenario published at ai-2027.com, written by Daniel Kokotajlo (former OpenAI researcher), Scott Alexander, and a group of top forecasters. Their prediction is straightforward: AI’s impact will exceed the Industrial Revolution. The trajectory they lay out goes from today’s unreliable agents to systems that autonomously conduct research, write and ship code, and make strategic decisions, all within a few years. They describe trillion-dollar infrastructure buildouts, geopolitical races for compute, and AI systems that can improve their own capabilities.

Maybe their timeline is aggressive. Maybe it takes twice as long. The direction is clear even if the speed is debatable.

Here’s what I find more compelling than any forecast: if you’ve spent real time building with these models, you’ve had the moment. The moment where it does something you were absolutely sure it couldn’t do. You prompt it on a whim, expecting garbage, and it returns something that makes you sit back in your chair. Those moments used to happen every few months. Now they happen every week. The comparison to fire earns itself here. Fire reshaped what humans could do. AI is reshaping what humans need to do. And the scale of investment behind it, hundreds of billions flowing into data centers, custom chips, and model training, tells you that the people writing the checks believe this too.

AI Has Learned to Act

Most people still think of AI as a text generator. You type a prompt, you get an email draft or a summary. That mental model is already outdated.

AI is becoming agentic. It takes action in the real world. OpenAI’s Operator navigates web browsers, clicking through interfaces, filling forms, and completing multi-step tasks. Anthropic’s Claude can see and interact with a desktop environment, reading screens and controlling applications the way a person would. Coding agents like Claude Code and Cursor take a task description and produce working, tested, deployed code. Home Assistant, paired with local language models, turns a self-hosted smart home into something that responds to natural language commands without ever phoning home to a cloud provider.

Once AI started taking action instead of just producing text, everything changed. Picture this concretely: an AI that orders your groceries based on what’s running low, deploys your application to production after running the test suite, adjusts your thermostat based on your calendar and the weather forecast, and triages your email before you wake up. These capabilities exist today in various stages of maturity. They’re clunky in places, sure. Some of them break in ways that are almost comical. But the trajectory is steep, and the gap between “demo” and “reliable” is closing fast.

And all of these agentic systems need to be built, deployed, connected, and maintained. Which raises the real question: by whom, and on whose terms?

Dependents and Sovereigns

Society is splitting into two groups, defined by how they relate to AI.

The first group, call them dependents, consume AI through big tech platforms. They use ChatGPT, Copilot, Gemini. The experience is convenient and polished, but it’s controlled. They’re subject to pricing changes, content policies, rate limits, data collection, and whatever features the platform decides to ship or kill next quarter. If OpenAI doubles their API pricing tomorrow, dependents eat the cost or scramble for alternatives. If Google discontinues a product (and Google will discontinue a product), dependents lose whatever they built on top of it.

The second group, call them sovereigns, self-host, customize, and own their AI stack. They run models locally using tools like Ollama on their own hardware. They use open-weight models from Meta (Llama), Mistral, and DeepSeek that are approaching the quality of closed models. They deploy custom agents on their own infrastructure. They run Home Assistant with local AI, so their smart home works without depending on Amazon or Google’s cloud staying online and benevolent.

This is the same dynamic as the person who can fix their own car versus the person who takes it to the dealer for everything. There’s nothing wrong with using a service. But the person who can do it themselves operates with a fundamentally different level of freedom. When something breaks, they fix it. When they want something new, they build it.

Software engineers are the natural DIY class of the AI era. You already know how to set up infrastructure, debug systems, read documentation for tools you’ve never used before, and stitch components together into something that works. The jump from “I can deploy a web app” to “I can deploy a local LLM with a custom agent layer” is smaller than most people think. You’re already most of the way there.
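To make that jump concrete, here is a minimal sketch of talking to a self-hosted model through Ollama’s local HTTP API. It assumes an Ollama instance running on its default port (`11434`) with a model already pulled (e.g. `ollama pull llama3`); the helper names `build_request` and `ask_local_model` are illustrative, not part of any library.

```python
import json
import urllib.request

# Ollama's default local endpoint for single-shot generation
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_request(model: str, prompt: str) -> dict:
    """Build the JSON payload for Ollama's /api/generate endpoint."""
    # stream=False asks for one complete JSON response instead of chunks
    return {"model": model, "prompt": prompt, "stream": False}


def ask_local_model(model: str, prompt: str) -> str:
    """Send a prompt to a locally hosted model and return its response text."""
    payload = json.dumps(build_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


# Usage (requires `ollama serve` running locally):
# print(ask_local_model("llama3", "Why does self-hosting matter? One sentence."))
```

No API key, no rate limits, no pricing changes: the whole round trip stays on your own machine, which is exactly the sovereignty the rest of this piece is about.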

Using ChatGPT is fine. Most people will be dependents, and for their needs that works. But the power dynamic matters. When you depend entirely on a platform, you inherit all of that platform’s constraints, costs, and risks. When you own your stack, you answer to yourself.

Knowledge Is Power

Every technological revolution has made the people who understood it indispensable. Right now, “understanding AI” means something very specific: being able to run it yourself, modify it, connect it to real-world systems, and keep it running when things go wrong. That’s sovereignty, and it maps almost exactly to what software engineers already do every day.

The new tech elite won’t be wealthy founders or VC-backed startup operators. Picture individuals or small teams who can stand up AI infrastructure, fine-tune models, and build autonomous agents independently of big tech platforms. The kind of people who, when a new open-weight model drops, have it running locally by the evening. One person with the right skills can now build what used to require a team of twenty. AI multiplies that leverage, and the engineer who wields these tools on their own terms, on their own infrastructure, has a kind of independence that compounds over time.

You were told your skills would become irrelevant. In reality, those skills are exactly what separate a sovereign from a dependent. The ability to self-host, to debug, to integrate, to ship: that’s the new literacy, and you already speak the language.

The Window Is Open

If the ai-2027.com predictions are even half right, the landscape solidifies fast. The infrastructure layer of the AI era is being built right now, and the positions are being established. In a few years, the lines will be drawn: builders and consumers, sovereigns and dependents. Latecomers will be users, not creators.

The window to develop these skills, to learn self-hosting, to experiment with open models, to build agents, to understand the stack top to bottom, is open today. It won’t stay open forever. The early movers in every technological shift are the ones who define the next era, and this shift is moving faster than any that came before.

You were told you’d be replaced. The opposite is true. The future belongs to the people who know how to wield AI.

The future belongs to SWE.


Thanks for reading!
