Q/A: Am I falling behind because I don’t want to adopt vibe coding in my development process?
I already use AI to some degree when I’m programming—mainly to look up functions and get quick examples. At the end of the day, my projects are for learning, and I’d rather understand how different frameworks, languages, and concepts actually work and how they’re applied.
Even in the enterprise domain, my team, especially my team lead, would look down on you if you vibe-coded anything. However, I've heard the complete opposite from devs, data scientists, and engineers at other firms.
I keep hearing tech gurus (aside from Primeagen) say that as a software engineer, you’ll have to choose between writing clean code and using AI—and that you should always choose AI, since “it knows everything.”
In my experience, I’d much rather debug clean, structured code than vibe code that feels like slop on top of slop. Maybe I don’t fully understand how vibe coding actually works, but I guess I’m worried that fully adopting it will come at the cost of skill atrophy.
This is an interesting question and it raises three points worth discussing.
Point 1: AI-usage and vibe-coding are not the same thing
Vibe-coding means ping-ponging with an AI to develop something without a clear understanding of what you're doing. AI-usage, on the other hand, can be done responsibly and with deliberation.
I won't dwell on this, since others have elaborated on it before: the key difference is whether you're offloading operational work (like typing) or decision-making.
Never offload decision-making.
Point 2: AI knows about everything, but knows nothing
It’s processed millions of examples, sure. But it doesn’t know your system, your constraints, or your goals. There’s no understanding there, only probabilistic word generation. It predicts what looks right and that’s not engineering, it’s powerful autocompletion.
Engineering starts with deliberate thought. Clean code is just the side effect of this. I’ve said this before. Engineers are creative, disagreeable, have a bias for action, communicate well, and don’t settle for makeshift solutions.
Out of those five, AI maybe gets one and a half. It can act. It can sort of communicate. But even then, it hallucinates, lies, and disobeys non-negotiable constraints.
Give it a novel problem, and it’ll serve you confident garbage. And no, power prompts are not the solution. It's simply not the right tool for deliberate thinking.
This is where we thrive, I believe. By staying sharp, asking better questions, and positioning ourselves as problem-solvers, not just code-generators.
Point 3: Reckless AI-usage leads to skill atrophy
Over-dependence on anything leads to skill atrophy. From this perspective, offloading all your critical thinking onto a senior colleague leads to the same result: your skills and knowledge decline.
If you’re an experienced engineer, you’ve probably had this “aha!” moment before. At some point you’ve realized that you have to start battling the unknowns yourself, and stop pulling people by their sleeve every time you’re stuck. You’ve realized this is how we learn and grow professionally. Through struggling and perseverance, over and over again until it sticks. AI-usage is no different.
A fairly recent study, The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers, reaches a similar conclusion. 62% of participants reported engaging in less critical thinking when using AI. There are similar studies with similar results regarding creativity and writing.
It’s a tool and you get to choose whether and how you’ll use it. If you do decide to use it, make sure it’s for narrow-scoped and well-defined problems. The key thing is that you use it for operational work, rather than critical thinking. Again, never offload your critical thinking to anyone but yourself. Stay sharp and you’ll stay confident.
Bonus point: don’t believe the hype
Where there's hype, there's profit. That's not new, and it's definitely true for AI. Whether it's AWS, Microsoft, or any of the tech gurus, everyone benefits from keeping the hype alive. Why? Because AI isn't just a breakthrough, it's a business. A big one, for sure.
Take AWS and Microsoft, for example. They’ve poured billions into training foundation models, building APIs, and launching AI-as-a-service platforms. The more hype there is, the more people and companies feel pressured to “not fall behind”. Just like yourself.
Then you’ve got the gurus. The creators telling you “if you’re not using AI 10 hours a day, you’re already obsolete” or the YouTubers selling AI tools and repackaging ChatGPT prompts into $500 courses. AI is the next gold rush, and everyone’s trying to sell shovels. We've seen this before, but the scale is enormous now. This time we have the general public jumping on the train as well, which makes it feel way worse.
Hype wants you to feel behind, and scarcity mentality wants you to act fast. Both want you to spend. Sometimes it's money, sometimes it's time. Either way, they want you invested in their business, because the more you're invested, the more their investors are. Again, it's a business. Don't mistake it for anything else.
Resist the urge to jump on the hype train. Keep learning, stay curious, explore and have fun with it. But as I've repeated multiple times before: don’t ever outsource your thinking.
AI is a threat to developers
I believe AI is a threat to developers, but not to engineers. I might be wrong, but this is my rationale.
That's a cool point about the overhyping of AI.
Everyone pretends to be an AI expert, so they just parrot the same message as everyone else.
And since social media thrives on sensationalism and exaggerated claims, we get bombarded with titles like "Use these 10 prompts to replace 80% of your employees, or your users will replace you. Number 7 will shock you."
I wonder if something similar was happening before the dotcom bubble burst 🤔
In reality, sure, you need to get with the program and leverage AI, or risk lagging behind the people who do. Beyond that… no one knows.