I’ve written several articles on techniques I apply when using Claude Code to get the most out of it. However, a topic I’ve spent less time covering is how to improve my Claude Code usage in general: how do I optimize the way I interact with my Claude Code instances, and the way Claude Code operates within the repositories I’m updating?
In this article, I want to highlight how I continually refine both the way I interact with Claude Code and the way Claude Code itself operates, making me and my coding agent more and more effective over time.
The concept of continual learning is incredibly powerful: if you can improve just a few percent every day, the cumulative effect over weeks and months becomes enormous. You can end up vastly more efficient than the out-of-the-box version of Claude Code or any other coding agent.
Why perform continual learning?
I always try to cover why a topic is important, why you should care about it, and how it can help you. The reason you should perform continual learning is simple: if you’re using only the out-of-the-box version of Claude Code, Codex, or any other coding agent, you’re losing out. Of course, those models are incredibly powerful, and compared to just a few years ago, even the stock experience can make you many times more efficient than before.
However, that baseline isn’t the point. What matters is that applying continual learning on top of it provides yet another massive efficiency boost.
In this article, I’ll cover one very simple technique for making Claude Code improve itself every single day. I’ll also give you insight into how I try to optimize my own interactions with Claude Code to make the human-coding-agent collaboration as effective as possible.
Making Claude Code learn from itself
I’ll start off with a simple technique you can adopt right now, one that is almost certainly going to improve how your Claude Code performs.
You can simply make a skill within Claude Code that goes something like this:
Review my last interactions with Claude Code from the last 24 hours.
Look for any problems that I encountered, things that weren't working
efficiently, and unnecessary tool calling. Look for common mistakes
Claude Code was making and other things that can be optimized.
Look thoroughly through all conversations and make a plan for how we
can optimize our flow in the future, both within each repository and
across repositories. Also look for insights that would be useful for the
coding agent to know beforehand, both before entering a repository and
when working in multiple repositories at the same time.
Let’s say we call this skill review-past-performance. Then set up a cron job to trigger it at 2 a.m. every night, or at some other time when you know you’re not actively interacting with your agents.
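To make this concrete, here’s a minimal sketch of how I’d wire it up. I’m assuming the personal-skill layout (~/.claude/skills/&lt;name&gt;/SKILL.md) and the headless -p flag that Claude Code currently supports; verify both against the docs for your version, and note that an unattended run may need different permission settings than an interactive session.

```bash
# Minimal sketch, assuming the personal-skill layout and headless mode;
# double-check both against the Claude Code docs for your version.
mkdir -p ~/.claude/skills/review-past-performance
cat > ~/.claude/skills/review-past-performance/SKILL.md <<'EOF'
---
name: review-past-performance
description: Review the last 24 hours of Claude Code conversations, find
  inefficiencies and repeated mistakes, and implement optimizations.
---
<paste the prompt from above here>
EOF

# Schedule it nightly at 2 am (crontab -e). Use the full path to the
# claude binary if cron's PATH can't find it.
# 0 2 * * * claude -p "Run the review-past-performance skill" >> ~/claude-review.log 2>&1
```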
When you implement and run this skill, Claude will go through all the conversations you’ve had over the last 24 hours. It will look at the threads and see where you got stuck with Claude Code (i.e., where you spent more time than you should have), and where Claude Code itself got stuck making incorrect tool calls or incorrect assumptions, or simply didn’t have the context it needed to perform the task effectively.
It will then make a plan for how to avoid these issues in the future and make Claude Code work more effectively. It will implement changes such as:
- Adding more information to agents.md or similar generic markdown files
- Creating specific skills that the agent can either load on demand or run on demand when dealing with certain tasks
- Implementing specific scripts or tooling, such as pre-commit hooks and testing scripts, to prevent mistakes from recurring (see the sketch after this list)
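As a hypothetical example of that last point, here’s the kind of pre-commit hook Claude Code might write for itself after noticing that it keeps committing unformatted or failing code. Both commands are placeholders for whatever tools your stack actually uses:

```bash
#!/bin/sh
# .git/hooks/pre-commit (make it executable: chmod +x .git/hooks/pre-commit)
# Hypothetical sketch; both checks are placeholders for your own tooling.
set -e

# Reject the commit if any files are unformatted.
npx prettier --check .

# Reject the commit if the test suite fails.
npm test
```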
The best part about running this skill from a daily cron job is that you don’t have to interact with the agents at all. Claude Code can self-reflect efficiently, discover inefficiencies, tweak them, and thus improve itself over time. Another benefit is that Claude Code becomes customized to your specific use cases: you might have a particular tech stack or particular preferences when working in repositories, and running this skill will surface those preferences and optimize for them.
By simply running this cron job every night, I’ve unlocked massive efficiency gains. My coding agents have become a lot stronger than they used to be, simply because they make fewer mistakes, are more aware of the correct approach to a task, and overall follow my preferences better.
Improving human interaction with coding agents
Another more complicated thing to optimize is the human interaction with coding agents. I spend a lot of time thinking and reflecting on how to most effectively communicate with my agents to make them implement the code I want as quickly and as efficiently as possible.
Clearly, this is not a solved problem yet, as new tools and platforms are still coming out to make coding agents, and our interactions with them, easier, better, and more efficient. In this section, I’ll cover some of my reflections on the human interaction with coding agents and how I try to optimize it myself.
Note that the techniques I’ll cover are, of course, optimized and tuned for my own workflows. I urge you to learn from them and think about how they apply to your own.
Running 7+ agents at once
I often find myself running a lot of agents at once, simply because I have a lot of tasks that can be worked on in parallel. Of course, external factors decide whether running this many agents at once is possible or even relevant for me. When the situation allows it and it makes sense efficiency-wise, I’ll run as many agents in parallel as possible.
However, I have found that once I go beyond seven agents at once, I lose control. I’m no longer able to effectively context switch between them, keep up with what each agent is doing, and answer each agent promptly when it asks me questions.
I’ve tried a lot of different tools and platforms to make this interaction more efficient. I’m currently using Warp, where I use split panes within a tab for the parallel agents working in one repository, and start a new tab for each additional repository I’m working on. This works relatively well, even though, as mentioned, I get stuck when running more than seven agents at once.
I’ve also tried more IDE-based approaches like Conductor or Omnara, but I don’t feel like they give me any productivity gains over what Warp can provide me.
My takeaway from this section is a few techniques that let you run as many agents at once as possible. First of all, the situation has to allow it: you need enough tasks that can genuinely run in parallel, and each agent must be able to run long enough that you’re not constantly interrupted. Step one is simply that the tasks you’re completing have to permit it.
Second, a very powerful aid when working with many agents in parallel is the recap. Claude Code has started providing a recap at the bottom of the chat, which gives you a super brief overview of what that chat is doing and lets you quickly rebuild context when you return to an agent. I urge you to enable recaps and actively use them whenever you need to read up on the context of a specific thread.
Lastly, I’d note that, as of the writing of this article, an agent view has just been released in Claude Code: a view that should make it easier to keep an overview of all your agents at once. I haven’t tried it myself yet, though it looks to address exactly the problem I’m describing in this section. I’ll definitely be trying it out and writing an article on it in the future.
Let the agent ask you questions, not the other way around
This subsection is an interesting one, because the common way to interact with AI models, at least in the beginning, was to ask them questions and have them reply concisely. However, this completely shifts once you start dealing with long-running code sessions. You no longer want to ask the model questions; you want it to work as independently as possible for as long as possible, and only stop when it has to ask you something.
This is therefore something I recommend putting into the prompts of your coding agents: they should run as long and as independently as possible, and only stop implementing once they have to ask you a question. This, of course, ties strongly into letting Claude Code validate its own work; to run for a long time, the agent needs a way of verifying what it produces. I covered this in another Towards Data Science article. Check it out below:

How to Make Claude Code Validate its own Work
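To illustrate, here’s the kind of standing instruction I mean. The wording is my own sketch, not a canonical prompt; adapt it to your CLAUDE.md, agents.md, or wherever you keep your agent instructions:

Work as independently as possible for as long as possible. Before asking
me a question, try to answer it yourself by reading the code, running
the tests, and verifying your own changes. Only stop and ask me when you
are genuinely blocked on a decision that requires my input.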
Conclusion
In this article, I covered how I continually improve my Claude Code setup, both by making Claude Code improve itself through nightly self-reflection and by improving the human interaction with Claude Code and other coding agents. Both are areas you should try to optimize as an engineer to make your coding more effective. You should always be looking for the next bottleneck: what is slowing you down the most, and what would unlock the greatest productivity boost? For me, that turned out to be:
- Claude Code repeating mistakes, which is addressed by the self-reflection skill in the first section of this article
- The human interaction with Claude Code, which I covered in the second section of this article
I urge you to constantly be looking for such bottlenecks and try to remove them as quickly as possible to make your coding efforts as productive as they can be.