My Thoughts on AI, Code, and Craft

This post reflects my personal journey and experience with AI tools.


Introduction

I remember the end of 2021, when one of the maintainers of a project gave me access to GitHub Copilot. I gave it a try. Those were the early days of AI coding tools. I'd start coding, define a function, write the first few lines, and it would suggest a few edge cases and more code; I'd hit Tab. It was a magical moment. In that moment I felt, for the first time, that here was something that could massively increase the amount of code I could produce.
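It went something like this (a hypothetical reconstruction, not the actual code from back then; the function and names are made up for illustration):

    def parse_port(value: str) -> int:
        # I typed the signature and the first line;
        # Copilot suggested the range check and the error message.
        port = int(value)
        if not 1 <= port <= 65535:
            raise ValueError(f"port out of range: {port}")
        return port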

I continued to use AI for a while, and over the next 3–4 years a lot of improvements happened. I have always been a fan of LSPs and autocomplete. One personal favorite is Supermaven, not least because it integrated with nvim.

The Positive Case

LLMs can write a large fraction of all the tedious code you’ll ever need to write. And most code on most projects is tedious. LLMs drastically reduce the number of things you’ll ever need to Google; they look things up themselves. Most importantly, they don’t get tired; they’re immune to inertia.

Think of anything you wanted to build but didn’t. You tried to home in on some first steps. If you’d been in the limerent phase of a new programming language, you’d have started writing. But you weren’t, so you put it off for a day, a year, or your whole career. I can feel my blood pressure rising thinking of all the bookkeeping, Googling, and dependency drama of a new project. An LLM can be instructed to just figure all that shit out. Often, it will drop you precisely at that golden moment where shit almost works, and development becomes tweaking code and immediately seeing things work better. That dopamine hit is why I code.

LLMs are powerful for scaffolding, prototyping, automation, and reducing friction. They make initial implementations and exploration much faster. In many cases, they’re perfect for throwaway code, experiments, demos, and scripts.

Hype, CEOs, and the Damage

I am totally with the idea that AI will change the way we program. But I don’t buy the way some CEOs who have never written code project this change. They say AI will replace programmers, and that messaging did damage. You never really know the true motives behind such statements.

If you are a beginner who saw this AI hype at the start of your developer journey, you might enjoy coding at first, but the hype can make you rethink your career decision.

Some people get glazed about AI after building a ToDo app or an “Uber for cats”, and then start preaching: “If you can’t solve it with AI, you’re not doing it right,” or “you lack promptmaxing skill.” Try your prompting skills on a large, real codebase: improve quality, squeeze performance, optimize, and solve fundamental, complex problems before claiming mastery.

Did I mention the problem of the expert beginner? See Erik Dietrich’s “The Rise of the Expert Beginner”.

Wrong use of AI gives a false sense of confidence. If you rely on AI to make decisions for you, you should seriously reconsider your life choices.

Writing Code Was Never The Bottleneck

For years, I’ve felt that writing lines of code was never the bottleneck in software engineering. The actual bottlenecks were, and still are, code reviews, knowledge transfer through mentoring and pairing, testing, debugging, and the human overhead of coordination and communication, all of it wrapped inside the maze of tickets, planning meetings, and agile rituals.

Now, with LLMs making it easy to generate working code faster than ever, a new narrative has emerged: that writing code was the bottleneck, and we’ve finally cracked it. But that’s not quite right.

The marginal cost of adding new software is approaching zero, especially with LLMs. But what is the price of understanding, testing, and trusting that code? It’s higher than ever. LLMs shift the workload; they don’t remove it. Tools can speed up initial implementation, but the result is often more code flowing through systems and more pressure on the people responsible for reviewing, integrating, and maintaining it.

This becomes especially clear when:

  • It’s unclear whether the author fully understands what they submitted.
  • The generated code introduces unfamiliar patterns or breaks established conventions.
  • Edge cases and unintended side effects aren’t obvious.

We end up in a situation where code is easier to produce but more complex to verify. That doesn’t necessarily make teams move faster overall. The cost of making sense of the code together as a team remains the bottleneck. Let’s not pretend it isn’t.

Throwaway Code vs Production Code

Throwaway code: shitty scripts for benchmarks, prototypes for new features, demos.

Production code: the main infra for the main project.

Throwaway code written by LLMs is incredibly useful for testing, for getting initial feedback, and wherever code longevity doesn’t matter. I don’t feel bad about a side project that died in two weeks. But a project generated with an LLM is not the same as production-quality work, and people’s inability to distinguish between the two is a very real problem.
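To make the distinction concrete, here’s the kind of throwaway script I mean (a minimal sketch; the function being timed is made up):

    import time

    def bench(fn, n=100_000):
        # Quick-and-dirty timing: good enough for a rough comparison,
        # nowhere near rigorous enough to ship as a real benchmark.
        start = time.perf_counter()
        for _ in range(n):
            fn()
        return time.perf_counter() - start

    print(bench(lambda: sorted([5, 3, 1, 4, 2])))

Letting an LLM write all of that is a pure win. The main project’s infra deserves a different standard.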

More code ≠ better app.

Wrong use of AI makes the wrong parts easier.

Humans vs AI

AI can do X things better or faster than humans. Humans can do Y things better than AI. Pick the good parts from both and make something good. It’s a matter of perspective: how you see AI, and how you let it enhance your productivity.

AI should replace repetitive, mindless tasks, the ones that should be done by machines anyway. If you’re just going through the motions without comprehension, you’re already acting like a machine, and AI will do that job better than you.

AI is a desire multiplier. Was your desire simply to get things done, or was it to learn? If your desire is outcome only, AI will make that outcome easier. If your desire is to learn and craft, AI can help you scale your craft, but only if you don’t let it erode your core skills. Make sure your desire is aligned with what you actually want to build and become.

What Really Matters

In the era of AI coding, it’s easy to make beautiful UIs; everything looks catchy and standard. What makes a difference is quality, optimization, security, maintainability, scalability, performance, reliability, user experience, and more. The problem was never the amount of code; it was the quality of the thing. More code doesn’t mean a better app.

LLMs don’t fix fundamentals. They amplify both good and bad habits. They’ll give you scaffolding quickly, but clear thinking, thoughtful design, careful review, and testing become even more important.

How to Use AI Well

  • Use LLMs for scaffolding, prototyping, and to remove boring friction.
  • Don’t outsource your judgment. Review generated code carefully.
  • Differentiate where AI is appropriate (experiments, demos, tests) and where it isn’t (critical production paths, core architecture).
  • Keep learning the craft: deep problem solving, optimization, and system design are human strengths.
  • Mentor and pair program; human knowledge transfer still matters.

Closing Thoughts

I’m not anti-AI. But I hate the low-IQ, high-energy people who tell everyone how they did something with AI while masking a lack of understanding. It’s not just about writing code and building X; it’s about solving complex problems.

AI isn’t the problem; poor usage is. If you can’t use AI without giving up your ability to control and understand your systems, you will hurt your craft. Pick the good parts from both and make something meaningful.

If I had given in to the AI hype, I would not have been able to do outstanding things myself. I would not have learned the things I learned. Talking about things and actually doing and experiencing them are two different things.

For those who don’t believe AI is affecting them: take the tasks you currently do only with AI and try doing them without it for a week.