Edited 2026-01-16

My Approach to AI-Assisted Coding (mid 2025)


Intro

I believe coding is, to a large degree, "solved". Not engineering, but coding. If I have a task, can explain the problem to the model, and can give some direction for how the problem should be solved, then in nine cases out of ten the model comes up with the code needed to solve the problem the way I described. AI can write code faster than you and often at the same quality, as long as you give it the right information and tools. Of course, there are specific domains and edge cases where AI won't have enough training data, but that's not the norm. The engineer, however, is still very much needed. Agents need to be steered, and they need the right contextual knowledge. Left to their own devices, they often produce so-called "slop".

What

Let's get our terms straight. AI-assisted coding != Vibe Coding.

"Vibe coding" was introduced by Andrej Karpathy in a tweet from early 2025. Basically, it's when you prompt and forget. You don't read the generated code. You accept all changes automatically. Errors are mitigated by a loop of copying the error message into another prompt, repeated until (hopefully) the program works. Eventually, the code base grows beyond the vibe coder's comprehension. It's fun, but not the most productive approach for serious projects.

AI-assisted coding, on the other hand, is about producing high-quality code that can be deployed and maintained long-term. You read and understand the generated code. You (or the agent itself) run tests to validate the output. You use context to make the agent follow desired patterns, generate documentation, and align with your organisation's guidelines.

Why

Why not just write the code yourself?

AI writes code much faster than you do. This matters because it lets you tighten feedback loops: iterate quickly and produce something of value sooner. Build something demonstrating the desired functionality, get feedback, improve, repeat until done. Need to build a data pipeline to transform and load data into your warehouse? Instead of spending a day writing Airflow DAGs and debugging SQL transformations, spend 30 minutes having AI generate the pipeline structure, then use the saved time to validate data quality, optimize performance, and handle edge cases you discover in production data.
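As a minimal sketch of the shape such a generated pipeline skeleton can take (all names and data here are invented; a real version would use Airflow operators and actual SQL transformations):

```python
# Hypothetical extract -> transform -> load skeleton of the kind an agent
# can draft in minutes, leaving the engineer time for validation instead
# of boilerplate. All names and records are invented for illustration.

def extract(rows):
    """Pretend source: a real pipeline would query an API or database."""
    return [r for r in rows if r is not None]

def transform(rows):
    """Normalize records; this is where generated SQL/DAG logic would live."""
    return [{"id": r["id"], "amount": round(r["amount"], 2)} for r in rows]

def load(rows, warehouse):
    """Append to the 'warehouse' (here just a list standing in for a table)."""
    warehouse.extend(rows)
    return len(rows)

def run_pipeline(source, warehouse):
    """Wire the three stages together and report how many rows landed."""
    return load(transform(extract(source)), warehouse)

if __name__ == "__main__":
    warehouse = []
    loaded = run_pipeline(
        [{"id": 1, "amount": 9.999}, None, {"id": 2, "amount": 5.0}],
        warehouse,
    )
    print(loaded)        # 2
    print(warehouse[0])  # {'id': 1, 'amount': 10.0}
```

The skeleton is deliberately boring; the saved half-day goes into the data-quality checks around it.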

After all, the goal isn't to write code, it's to achieve a business outcome. Code is just a means to an end. If your identity is built around being good at writing code, it might be time to re-evaluate.

AI lowers the barrier to entry. With lower opportunity costs, you can try things and iterate much faster. You don't pay the upfront cost of researching and understanding how to implement something. This lets you explore different paths, increasing the chances of discovering something worthwhile. You can try new frameworks, solutions, methods, etc. that used to be too time-consuming to be worth it. Now it's worth it.

Getting Started

To get started you'll need to pick an agentic tool. They usually start at about $20 a month. The most common options are:

  • Cursor
  • Claude Code
  • OpenAI Codex
  • GitHub Copilot

It doesn't really matter which one you pick. I started with Cursor, then fell in love with Claude Code, and now I prefer Codex.

These tools are not really comparable to AI platforms such as Lovable or Replit. Agentic tools assist you as a developer by completing code and generating it specifically from your requests. You still largely control the context, flow, and deployment. In contrast, AI platforms usually try to scaffold and build the entire system for you based on natural language. They're more hands-off and more targeted towards non-developers who still want to build things with code. You can still iterate and build specific features, but these platforms tend to combine development, runtime, and deployment under one roof.

To get started with AI-assisted coding, I recommend looking for low-hanging fruit. Generate unit tests, refactor existing modules, generate boilerplate and other repetitive code. This will allow you to put more time into the stuff that actually creates differentiation from the competition.
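For instance, the unit-test case can look something like this: given a small existing helper, the agent drafts tests that you review and prune before committing (the function and the test names here are invented):

```python
# Hypothetical existing helper the agent is asked to cover with tests.
def slugify(title: str) -> str:
    """Turn 'Hello World' into 'hello-world'."""
    return "-".join(title.lower().split())

# The kind of tests an agent typically drafts; you review them and keep
# only the ones that pin behaviour you actually care about.
def test_slugify_basic():
    assert slugify("Hello World") == "hello-world"

def test_slugify_collapses_whitespace():
    assert slugify("  a   b ") == "a-b"

def test_slugify_already_slug():
    assert slugify("done") == "done"
```

Reviewing a handful of generated assertions is far cheaper than writing them all from scratch, and it keeps you reading the code.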

Context Engineering

Context engineering is everything. In short, it's about providing high-quality information that the model can use to perform the task at hand. It takes AI agents from POC creators to production contributors. You can't expect an AI to just know the details of your code base or system, how things relate, or how they're defined, unless you actually tell it! And to avoid repeating all of this every time, you create reusable documentation files, such as AGENTS.md, plus any additional docs that you refer to when suitable. Remember, each document eats context and can lead to context rot, so be somewhat selective.

Short Feedback Loops

Ship in small iterations. Ask the model for one function, write a focused test, run it, then iterate. This keeps surprises small and makes it easy to roll back if the AI veers off. Let the agent draft assertions and fixtures while you decide the behavioural boundaries. Commit to Git often so code isn't lost and mistakes can be recovered.
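One such iteration might look like this, with invented names: the agent drafts a single function, you pin the behavioural boundary in a focused test, run it, and either commit or roll back:

```python
# Hypothetical agent-drafted helper: turn '$1,234.50' into 1234.5.
def parse_price(text: str) -> float:
    return float(text.strip().lstrip("$").replace(",", ""))

def test_parse_price():
    # The behavioural boundary you decide; the agent drafts the fixtures.
    assert parse_price("$1,234.50") == 1234.5
    assert parse_price("99") == 99.0

if __name__ == "__main__":
    test_parse_price()
    print("ok")
```

If the test passes, commit and move to the next function; if it doesn't, the failure is confined to a dozen lines instead of a sprawling diff.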

Practical Application

Start by running your LLM agent's "init" command (usually a built-in command, ready to go). The agent will scan your codebase and create an AGENTS.md or CLAUDE.md file covering the most important aspects of your codebase. The AGENTS.md file is like an entrypoint to your codebase for the coding agent. It fulfills a similar purpose to the README.md file. However, while the README file is for humans, AGENTS.md complements it with additional contextual information the agent needs to do its job well. Avoid making it overly detailed, as it will be read by the coding agent every session; keep information that is relevant and applicable throughout your codebase, not information that's only required for specific features or parts of it. Examples of such instructions include build steps, tests, conventions, and other technical aspects that aren't relevant in a README file but that you still need the agent to consider when coding.
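As an illustration, a lean AGENTS.md might look something like this (the project, commands, and conventions are invented; yours will differ):

```markdown
# AGENTS.md

## Project
Invoicing service: Python 3.12, FastAPI, Postgres.

## Build & test
- `make install` sets up the virtualenv
- `make test` runs the full test suite; run it before committing

## Conventions
- All database access goes through `app/repositories/`; never query from route handlers
- Commit messages follow Conventional Commits
```

Short enough to be read every session, yet it answers the questions the agent would otherwise guess at.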

Then, as you go, start adding important contextual details, often knowledge that you have gained through experience and that is specific to this project or organisation, and embed that in your agent file(s). For example, you might need to define specific terms, establish relationships between entities, document sources for different things, etc.

As mentioned in the section about context engineering, you can't expect the model to know things unless you explicitly tell it or it's easy to infer from the existing code. If you have specific knowledge or mental models that you draw on when building or coding, embed that into the agent by documenting it in the instructions file.

As you develop, you might notice that your AGENTS.md (or CLAUDE.md) file gets very large, or that there is context you only need to provide the agent in certain scenarios or when working on specific parts of your code. For that, you can either add directory-specific AGENTS.md files (most agents will automatically fetch those when working in the corresponding directory) or add documentation in a docs/ folder that you manually reference when prompting the agent. The key takeaway is to start documenting much more and to iterate on the documentation as you learn, so that it becomes very easy to provide the necessary context to the agent (context engineering).
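One way such a layout can look (the directory and file names are illustrative):

```text
repo/
├── AGENTS.md              # global: build steps, tests, conventions
├── docs/
│   └── billing-domain.md  # referenced manually when relevant
└── services/
    └── ingest/
        └── AGENTS.md      # directory-specific: picked up when working here
```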

I almost never prompt the agent without adding contextual information through @<file_name> tags.
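A typical prompt then looks something like this (the file names are made up):

```text
Add retry logic with exponential backoff to the upload client.
Relevant context: @docs/storage-conventions.md @src/storage/uploader.py
```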

Custom Commands

Use custom commands (either native to the agent framework or copy-and-paste prompts) to perform recurring, chore-like tasks such as:

  • Commit changes
  • Create onboarding file for a task
  • Create task-list for a task

Here are some of my favorites:

onboard.md (this one I got from McKay Wrigley):

# Onboard

You are given the following context:
$ARGUMENTS

## Instructions

"AI models are geniuses who start from scratch on every task." - Noam Brown

Your job is to "onboard" yourself to the current task.
Do this by:
- Using ultrathink
- Exploring the codebase
- Asking me questions if needed

The goal is to get you fully prepared to start working on the task.

Take as long as you need to get yourself ready. Overdoing it is better than underdoing it.

Record everything in an agent-tasks/[TASK_ID]/onboarding.md file. This file will be used to onboard you to the task in a new session if needed, so make sure it's comprehensive.

The purpose of the onboarding file is to preserve the insights from a thorough investigation of the codebase, identifying all files and modules that matter for a given task. Why do we need this? Many tasks are comprehensive enough that they won't be completed in a single chat session. In a new session, we can quickly get the coding agent up to speed by referencing the onboarding file. It will immediately have the context needed to proceed. The onboarding file can be complemented by a task file, which is generated by the following prompt:

create-tasks.md:

# Create Tasks

You are given the following context:
$ARGUMENTS

## Instructions

Your job is to create a task-list for the current task.
Do this by:
- Breaking down the task into smaller tasks
- Including tasks for how to test the functionality and the expected results
- Creating a checklist of atomic tasks
- Making sure the task-list is well-organized and can be picked up to continue working on the task in a new session if needed

Most likely, there is already an agent-tasks/[TASK_ID]/onboarding.md file. If there is, use it to get a sense of the task.

Take as long as you need to get yourself ready. Overdoing it is better than underdoing it.

Record everything in an agent-tasks/[TASK_ID]/tasks.md file under the same [TASK_ID] folder as the onboarding.md file.

The create-tasks prompt simply uses the onboarding file we've already created to generate an actionable task list that can be checked off iteratively as the agent works on the task. Similarly to the onboarding prompt, this helps when we're working on something across agent sessions. If we just provide the onboarding file, the agent still has to figure out what has already been done versus what's in progress. With the task file, it can get to work on the right subtask from the get-go.

commit.md:

# Commit

You are given the following context:
$ARGUMENTS

## Instructions

Commit any unstaged changes that appear to be complete.

Do this by:
- Not using `git add .`
- Creating logical groupings of changes in commits
- Following husky commit hook standards
- Only committing changes that appear to be complete, leaving other changes alone
- Asking me questions if needed