The AI landscape is moving fast. New tools, new models, new startups. Every week brings another "revolutionary" announcement.
And with it comes the fear. The fear of falling behind. The fear that if you don't adopt the latest tool right now, you'll become obsolete. Your competitors will outpace you. Your skills will rot.
This is mostly poppycock.
The fundamentals of your craft haven't changed. The people who were good at what they do before AI are still good at it. The people who weren't are now just faster at producing mediocre output. The skill gap hasn't closed; it's just been reframed.
And here's the thing: there's nothing to "catch up" on. Whatever specific tool or workflow exists today will likely be irrelevant in six months. It's like Microsoft Word: you don't need to have used version 1 to use version 20. You can jump in whenever and be fine. Just understand the primitives of a word processor (editing text, formatting, page layout) and you'll be caught up in no time. The people who started earlier have no meaningful advantage over you.
What about the fear of being replaced entirely? I don't know. Maybe AI will take my job. But the solution isn't to look for job security in any specific tool or process. Software development might not look the same in the future, but the fundamentals of problem solving aren't changing any time soon.
Microsoft Word might get replaced by GenAI, but true storytelling? Figuring out what message I want to relay, and bringing my own experience and style to it? That's not going anywhere.
Same with software. This ties into what I believe our job as software developers even is: it's not about the code. We may love writing it, but it's not the point. Code doesn't bring any value in and of itself. Quite the contrary, it's mostly a liability. The value we bring lies in our ability to solve problems. Sometimes this takes the form of a feature with lots of code, in which case our job is to minimize the liability (make the code maintainable, tested, etc.). Sometimes it takes the form of reframing the problem to allow a simple solution that requires no code at all.
AI can handle the mechanics: syntax, boilerplate, wiring things together. But understanding the actual problem you're solving? Knowing which problems are worth solving in the first place? Recognizing when a requirement doesn't make sense, anticipating how users will misuse your system, making tradeoffs between speed and maintainability? That's the real work. AI can write the code, but it can't tell you if you're building the right thing. You can use it to drive your problem solving, but it won't be creative in the way humans can be. It won't reframe the problem for you.
What's actually worth understanding #
So if the specific tools don't matter, what does? Let's start with what LLMs actually are.
The Magic Trick #
LLMs are a magic trick. Like all magic, the feeling of awe you get is real, but the magic itself is not. We know it's a trick. So why do we still feel the wonder?
That's just how our brains evolved. For survival on the Serengeti, pattern recognition was our brain's top priority, even at the expense of false positives. We are hard-wired to put a face on everything, including a computer algorithm. If it feels like a human, we will automatically treat it like one, even when the data shows us that this way of thinking leads us astray.
In the moment, we can't help it. But we can be aware of it.
A fancy word association calculator #
So what's the magic trick? Where's the illusion?
No matter how impressive the demos get, your AI agent is fundamentally a fancy word association calculator. It has seen a lot of words (and code) in a lot of different contexts. When you give it some words as input, it is remarkably good at pulling out related words. It has seen far more contexts than any human can possibly retain. Its vast training data, plus its ability to iterate fast and adjust based on feedback, is what creates the illusion that it is "thinking" or "understanding".
But here's the key insight: it is not creating anything new. It is juxtaposing related words together based on patterns it has observed.
This is why when you ask it to work in a very esoteric or novel context, its answers can feel like utter made-up garbage. It simply does not have the right related words in its big bag of words to pull from. There is no reasoning happening. Just pattern matching against a void.
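To see the trick without the neural-network glitter, here's a toy sketch in Python. This is my illustration, not how real models are built (they operate on tokens with learned weights, not literal word tables), but the spirit is the same: predict the next word purely from associations seen in training text.

```python
# Toy "word association calculator": a bigram model built from raw counts.
# Illustration only; real LLMs are neural networks, but the core idea of
# "pull out words that followed these words before" is the same.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat ate the fish".split()

# Count which words have followed which.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def next_word(word: str) -> str:
    """Pick a likely next word based only on observed associations."""
    candidates = following.get(word)
    if not candidates:
        return "<nothing to associate>"  # the esoteric-context failure mode
    words, counts = zip(*candidates.items())
    return random.choices(words, weights=counts, k=1)[0]

print(next_word("the"))      # "cat", "mat", or "fish": whatever it has seen
print(next_word("quantum"))  # never seen it, so nothing to pull from
```

Scale the bag of words up by a few trillion tokens and add a fast feedback loop, and you get the illusion described above.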
There's no ghost in the machine #
It is not a he, she, or they. It's an algorithm. A very fancy one, but still an algorithm.
Do not make the mistake of giving it feelings, intentions, or moods. When your sorting algorithm makes a mistake, you don't get angry at it, because you understand that either there's a bug, or the input is wrong. The algorithm didn't fail you. It just did what algorithms do: exactly what the inputs dictated.
The same applies to LLMs. They can only put related words together. We call their mistakes "hallucinations" because we want to believe they have minds that can dream. They're just errors.
Treating them as anything more leads to frustration, misplaced trust, and poor decisions about when and how to use them. In fact, many of the "failure modes" we criticize in LLMs (limited context, narrow training, hallucination, failure to generalize) are problems we exhibit ourselves.
The Hype Machine #
Do not be fooled by hype. The industry is bubbling because there is a lot of money to be made out of basically nothing. We are easily fooled by magic tricks, and we like the feeling they give us, so we pay people who make us feel this way.
The reality: most AI startups are just wrappers around the big LLM models. They add a UI, some prompt engineering, maybe a database, and call it innovation. Don't marry any of them. Focus on the underlying technology and the big picture.
There's also a proliferation of attempts to standardize prompting and configuration: spec files, agent files, MCP, skills, custom instructions... Tons of markdown files in various formats that nobody can guarantee your LLM will even read or follow consistently.
You are under no obligation to use someone else's standard. Adopt what makes sense for your workflow, throw out the rest. Keep it simple.
The hype promises an omniscient god. Reality gives us a context window. #
That said, the hype isn't completely devoid of value. There are real technical limitations being worked on, and solving them is worthwhile. The key is to understand the limitations themselves, not which vendor claims to solve them best.
LLMs have hard constraints. The most important one is the context window: the maximum amount of text they can "remember" at once.
When you throw too many words at a model, it runs out of room and has to do something about it. Usually that means it will simply "forget" details, often important ones.
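To make that concrete, here's a rough sketch of staying under the window. It assumes the tiktoken tokenizer and a made-up budget; the actual limit and the right strategy depend on your model and your tooling.

```python
# Sketch: keep a conversation under a context budget by dropping the
# oldest messages first. Assumes the `tiktoken` package; the budget of
# 8000 tokens is arbitrary and model-dependent.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
CONTEXT_BUDGET = 8000  # tokens

def count_tokens(text: str) -> int:
    return len(enc.encode(text))

def fit_to_window(messages: list[str], budget: int = CONTEXT_BUDGET) -> list[str]:
    """Drop the oldest messages until the total fits the budget."""
    kept = list(messages)
    while kept and sum(count_tokens(m) for m in kept) > budget:
        kept.pop(0)  # the earliest details get "forgotten" first
    return kept
```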
The other constraint: garbage in, garbage out. Or put positively, context is king. The more you let your LLM into your world, the better context it will have, and the better its output will become. Let it into your logs, your metrics, your project management tool. Let it read files from related projects. Let it see what you see.
These two constraints together explain a lot of what you see in AI tooling. Why do agent tools fork off subagents? Context management. Why do they summarize conversations? Context management. It's all just trying to maintain the illusion within these bounds.
Most work on AI tooling is fundamentally about this: how do we manage context so that the magic is maintained?
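Here's a minimal sketch of the summarization variant. The llm() function is a placeholder for whatever client you actually use, and the thresholds are arbitrary; real tools vary in the details.

```python
# Sketch: collapse the older half of a long conversation into a summary.
# `llm` is a placeholder; swap in your actual model call.
def llm(prompt: str) -> str:
    raise NotImplementedError("call your model of choice here")

def compact_history(messages: list[str], max_messages: int = 40) -> list[str]:
    """Replace the older half of a long transcript with a short summary."""
    if len(messages) <= max_messages:
        return messages
    half = len(messages) // 2
    older, recent = messages[:half], messages[half:]
    summary = llm("Summarize this conversation so far:\n" + "\n".join(older))
    return ["[summary of earlier conversation] " + summary] + recent
```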
Understanding these constraints changes how you work with these tools. You learn to be concise. You learn to front-load important information. You learn when to start fresh. Most importantly, you learn why your tools work the way they do, which gives you better command over them.
You are the driver #
Don't get me wrong: I love AI. It has done more for my personal motivation and productivity than any other tool before it.
But I don't let it dictate how I work. It is a tool. I am the driver.
I allow the feeling of magic to drive my motivation to keep going. That sense of possibility, of having a tireless collaborator, is genuinely energizing. But that's where I draw the line. I do not worship it or any of its output.
I use AI to amplify my own learning, not to replace it. I use it to explore ideas faster, to get unstuck, to handle tedious tasks. But the judgment, the direction, the final decisions? Those remain mine.
The moment you hand over your kingdom to an algorithm, you've stopped being a driver and become a passenger. And passengers don't learn the terrain.
The most important practical tip #
So what does being the driver actually look like in practice?
If you're using LLMs for coding or technical tasks, here's the most important lesson I've learned: give the agent a way to verify its own work.
Have it write a test. Have it create an automation script. Have it take screenshots. Then tell it to use these validations in its feedback cycle. An LLM that can check its own output is dramatically more useful than one flying blind.
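As a sketch, that feedback cycle can be as simple as the loop below. The ask_agent() function is a placeholder for whatever agent or API you use, and pytest is just one example of a verification step; the point is that failures flow back into the next attempt.

```python
# Sketch of a verify-and-retry loop. `ask_agent` is a placeholder for your
# agent of choice; `pytest` stands in for whatever verification you set up.
import subprocess

def ask_agent(prompt: str) -> str:
    raise NotImplementedError("call your coding agent here")

def build_with_verification(task: str, max_attempts: int = 3) -> bool:
    feedback = ""
    for _ in range(max_attempts):
        ask_agent(f"Task: {task}\n\nPrevious verification output:\n{feedback}")
        result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
        if result.returncode == 0:
            return True  # verification passed; we're done
        feedback = result.stdout + result.stderr  # close the feedback loop
    return False
```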
If you're unsure how to verify the task you're giving it, ask it to come up with verification criteria itself. Often it will suggest something reasonable.
And if verification still feels unsuitable or impossible? That's feedback for you, not the tool. Do you actually understand the requirements? Can you break the work down further? Is the task well-defined enough to delegate at all?
Use the AI to help you through this thinking work too. Sometimes the most valuable output isn't code. It's clarity about what you're actually trying to build.
This is what it means to be the driver: you set the destination, you define success, and you keep your hands on the wheel.
The tools will keep changing. The hype will keep churning. But if you understand what's actually happening under the hood, you can ride the wave instead of being swept away by it.
Yes, I used AI to help write this. It fleshed out my notes, rephrased ideas, and structured the argument. The perspective is mine.