Welcome to my site that will perpetually be under construction.
AI feels like a nightmare for a lot of us software engineers. Everyone wants it to replace us. I have always been a naysayer when it comes to AI in software engineering. Not that it can’t eventually become something that replaces me, just that it slows me down right now. I understand the thing I’m building and why I’m building it. I can call out bad requirements (unlike our favorite yes-man, ChatGPT). I know things about the way the world works, and why the thing I’m building can’t just be solved with the statistical mean solution. Recently, though, I’ve started finding more and more of a place for it in my life.
For context, I am a recent grad who started a very ambitious startup while still in university. I’d had a couple of internships at medium-sized tech companies before this, but most of my experience has come from being a woefully unprepared founder of a small company. Working with the resources we have, we’re always trying to get more out of less and squeeze productivity out of our time. Naturally, AI is the current front-runner among the tools we reach for.
When my partner and I first started this company, our first hire was a designer. He was an acquaintance of a friend and he was the only one we really interviewed. Years later, he still works with us. I have since learned that this is incredibly anomalous. Hiring ever since has been a pain. Even before AI coding tools really took off.
Soon after we started working on our company, we were fortunate to be connected with an angel investor who chose to support us. This gave us a small amount of capital to start hiring software engineers. Whether hiring at that point was the right call is a question for another time. In the years since, a few more people and firms have invested in us, and I’ve had the opportunity to see people of varying skill levels, enthusiasm, and compatibility with our workplace come and go. Now, I see how AI is affecting my opinions and expectations of my coworkers and new hires.
Most new hires struggle when they’re first dropped into a real codebase with real stakes. This isn’t because of a lack of intelligence, but because they just haven’t had any experience with something similar enough yet. Not everyone can dive into a large problem with little context and get started. Asking relevant questions to determine requirements is a skill that’s built over time.
AI tools can’t do that very well. They feel to me like an intern who is very eager to do something. They’re very anxious when they don’t know the answer and are more than willing to submit a terrible PR just to show that they didn’t spend the last week sitting on their hands. The code doesn’t pass CI, it doesn’t run, and why on earth did you just commit a copy of RandomUnrelatedComponent.tsx.bak?
I can’t tell you how many times I’ve had an LLM spit out completely nonsensical things, use magic numbers that mean nothing in any context, and repeatedly fail to fix the same issue, no matter how many times I say please.
I have learned to value people who say “I don’t know how to do this” and ask me to be their rubber-duck-that-occasionally-quacks (after they’ve tried something on their own, of course). This is one of the reasons I really believe in hiring junior engineers for the long haul. Sadly, I feel like these people are starting to go extinct, because they fall back to AI’s answer when all else fails, resulting in the same slop I could have gotten from just a chatbot.
I can’t tell you how many times I’ve told a chatbot the same thing over and over. Yes, I know I could be better at setting up context, but it still doesn’t work very often. It usually gets to the point where it’s easier to just do the thing myself than to keep prompting hopelessly.
This experience is reminiscent of when we were in the middle of a violent storm of deliverables, with investors breathing down our necks, asking for results.
If you’ve been on a late project on a team whose manager hasn’t read “The Mythical Man-Month”, you know the frustration of having to train a new engineer and walk them through the issues with their PR while there’s a fire under your ass. It’s such a small change that you could just do it yourself in a few minutes and have your work bestie review it. But you suffer through the process anyway, teaching them what they need to know in hopes that they can help you during the next mess the team finds itself in. People learn and then they get better… Right?
I firmly believe that you learn more, as an engineer, when you mess up than when you get something right the first time. Struggle is part of improvement. It’s how you refine your thought process and become a better problem-solver. If you work with a chatbot, and you happen to get something that works with your first prompt, you learn next to nothing. Is this the right solution? What else could you have tried? How many related files did you visit and internalize?
If you want to grow your skills in any domain, the age-old wisdom is to do something hard. At my startup, I know we feel pressure to deliver things quickly. I don’t want that to come at the cost of doing it right, and of the ability to do the next thing better.
AI is a narcissist who will stop at nothing to prove to you that you can’t live without it. It will draw you in by solving easy things. It slowly gets you to trust it. Over time, you start to give it larger and larger chunks of changes to make because it’s easier. You slowly forget how to type code yourself. “That’s okay though, I’m just using it to speed up my lookups, right?” “This is just like how I always had to look up simple syntax, right?” Yep, until it lets you down. Makes you incapable of surviving without it. I speak from experience. Cursor will look me straight in the eye and tell me that it’s fixed the problem, only to have deleted my test case. Now I don’t know how to solve the problem myself and I have to relearn so much.
I consider myself part of the last generation that has seen the light of the Trees: worked in codebases of all sizes without AI. This, along with my inherent skepticism, allows me to stay on guard for AI’s mistakes. Some of my other teammates, not so much.
People seem to be so willing to offload their critical thinking to AI. I occasionally need to ask people to explain their changes to me so I can suggest how to make them more readable. More often than not, it turns out we understand the code to the same degree: their prefrontal cortex was temporarily outsourced to the offshore datacenter chugging out five megatons of fumes a nanosecond.
AI cannot replace engineers right now, unless we let it. If we reduce our job to just asking our favorite LLM for answers, of course we’re going to be replaced by it. We need to be willing to solve problems that AI can’t. If you can’t do that yet, solve the simple problems and learn to grow so you’re not stuck emulating or relying on AI. At the very least, try and figure out why the chatbot suggested the answer that it did instead of copying the snippet into your branch.
So what do I know? Not much. I don’t know where the chips will fall with this AI-replacing-people stuff. I know that replacing a human isn’t the answer, but neither is completely ignoring AI. As a software engineer, I know I need to find a balance that speeds up my workflow without taking away my ability to learn new things. I need to be willing to sacrifice “velocity” for my own wellbeing. As a person running a company, I see the appeal of AI. I have to ask myself whether hiring someone helps anything if they’re not going to bring any new perspectives and ideas to the table, and are just using the same AI I can subscribe to for $20. How can I sift through the noise to find the people willing and able to do the work and use these tools to get better, not more dependent? How can I facilitate this better?
I wonder how I’ll feel in a few months or a few years. Anthropic just released Opus 4 and Sonnet 4 while I was writing this. Will the rapid pace of progress create such a gap between the value AI provides and what a new hire could offer that we, as a profession, become obsolete? Time will tell.
First posted 2025-05-22