How to Effectively Review Claude Code Output


Claude Code can produce an incredible amount of content in a short span of time. That may be building new features, reviewing production logs, or fixing a bug report.

The bottleneck in software engineering and data science has moved from writing code to reviewing what the coding agents are building. In this article, I discuss how I effectively review Claude Code output to become an even more efficient engineer.

This infographic highlights the main point of this article: how to review the output of coding agents more efficiently, so you can become an even more efficient engineer. Image by ChatGPT.

Why optimize output reviewing

You may wonder why you should optimize reviewing code and output at all. Just a couple of years ago, the biggest bottleneck (by far) was writing the code that produces results. Now, however, we can produce code by simply prompting a coding agent like Claude Code.

Producing code is simply not the bottleneck anymore

Thus, since engineers are always striving to identify and minimize bottlenecks, we move on to the next bottleneck: reviewing the output of Claude Code.

Naturally, we need to review the code it produces through pull requests. However, there is much more output to review if you're using Claude Code to solve every possible task (which you certainly should be doing). You need to review:

  • The report Claude Code generated
  • The errors Claude Code found in your production logs
  • The emails Claude Code drafted for your outreach

You should be trying to use coding agents for absolutely every task you're doing: not only programming tasks, but all of your business work, making presentations, reviewing logs, and everything in between. Thus, we need special techniques to review all this content faster.

In the next section, I'll cover some of the techniques I use to review the output of Claude Code.

Techniques to review output

The review technique I use varies by task, but I'll cover specific examples in the following subsections. I'll keep them as specific as possible to my exact use cases, and you can then try to generalize them to your own tasks.

Reviewing code

Obviously, reviewing code is one of the most common tasks you do as an engineer, especially now that coding agents have become so quick and efficient at producing code.

To perform code reviews more effectively, I've done two main things:

  • Set up a custom code review skill that contains a full overview of how to perform a code review efficiently, what to look for, and so on.
  • Have an OpenClaw agent automatically run this skill whenever I'm tagged in a pull request.

Thus, whenever someone tags me in a pull request, my agent automatically sends me a message with the code review it performed and proposes to post that review to GitHub. All I need to do then is look at the summary of the pull request and, if I want to, simply press send on the proposed review. This uncovers a lot of issues that would have reached production if left undetected.

This is probably the most valuable, time-critical reviewing technique I'm using, and I'd argue efficient code reviews are one of the most important things companies can focus on right now to increase speed, considering the increased output of code from coding agents.
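As a rough sketch of what such an automation could look like: the script below fetches a pull request diff with the GitHub CLI and pipes it to the Claude Code CLI in non-interactive mode. The checklist contents are hypothetical, and the exact skill and OpenClaw wiring in my setup are more involved; this only illustrates the shape of the flow (assumes `gh` and `claude` are installed and authenticated).

```python
import subprocess

# Hypothetical review checklist; a real review skill would be far more detailed.
REVIEW_CHECKLIST = [
    "Does the change include tests for new behavior?",
    "Are errors handled and logged consistently?",
    "Is any security-sensitive code (auth, input validation, secrets) touched?",
]


def build_review_prompt(diff: str) -> str:
    """Combine the review checklist and the PR diff into a single prompt."""
    checklist = "\n".join(f"- {item}" for item in REVIEW_CHECKLIST)
    return (
        "Review the following pull request diff.\n"
        f"Checklist:\n{checklist}\n\n"
        f"Diff:\n{diff}"
    )


def review_pr(pr_number: int) -> str:
    """Fetch the PR diff via the GitHub CLI and ask Claude Code to review it."""
    diff = subprocess.run(
        ["gh", "pr", "diff", str(pr_number)],
        capture_output=True, text=True, check=True,
    ).stdout
    review = subprocess.run(
        ["claude", "-p", build_review_prompt(diff)],  # -p: print mode, no REPL
        capture_output=True, text=True, check=True,
    )
    return review.stdout
```

In my actual setup the agent is triggered by the pull request tag rather than run by hand, but the prompt-plus-diff structure is the core of it.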

Reviewing generated emails

This image shows some example emails (not real data) that I'm previewing in HTML, which makes it super efficient to inspect the output my cold outreach agent produced and quickly give feedback to the agent. To make the feedback process even more efficient, I dictate feedback while browsing the emails, using Superwhisper to record my voice, and then paste the transcribed feedback into Claude Code directly. Image by the author.

Another common task of mine is generating emails that I send out through a cold outreach tool, or emails replying to people. Often I want to review these emails with their formatting as well, for instance if they contain links, bold lettering, and so on.

Reviewing this in a text-only interface such as Slack is not ideal. First of all, it creates a lot of clutter in the Slack channel, and Slack also isn't always able to format the content correctly.

Thus, one of the most efficient ways I've found to review generated emails, and formatted text in general, is to ask Claude Code to generate an HTML file and open it in your browser.

This lets Claude Code generate formatted content incredibly quickly, making it super easy for you to review. Claude can not only show the formatted emails but also present them in a very nice way, showing which person is receiving which email, and if you're sending email sequences, those are super easy to format as well.
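A minimal sketch of the kind of preview script Claude Code might generate for this (the email data here is made up for illustration):

```python
import webbrowser
from html import escape
from pathlib import Path

# Hypothetical emails to preview; in practice the agent generates these.
EMAILS = [
    {"to": "jane@example.com", "subject": "Quick question",
     "body": "<p>Hi Jane,</p><p>I saw your post about <b>LLM agents</b>...</p>"},
    {"to": "bob@example.com", "subject": "Follow-up",
     "body": "<p>Hi Bob,</p><p>Following up on my last email.</p>"},
]


def render_preview(emails: list[dict]) -> str:
    """Render each email as a card: recipient/subject escaped, body kept as HTML."""
    cards = "\n".join(
        "<div style='border:1px solid #ccc;margin:1em;padding:1em'>"
        f"<p><b>To:</b> {escape(e['to'])}<br>"
        f"<b>Subject:</b> {escape(e['subject'])}</p>"
        f"{e['body']}</div>"
        for e in emails
    )
    return f"<!DOCTYPE html><html><body><h1>Email preview</h1>{cards}</body></html>"


if __name__ == "__main__":
    out = Path("email_preview.html")
    out.write_text(render_preview(EMAILS), encoding="utf-8")
    webbrowser.open(out.resolve().as_uri())  # pop the preview open in the browser
```

One email per card makes it easy to skim a whole outreach sequence at once, with the real link and bold formatting rendered exactly as the recipient will see it.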

Using HTML to review outputs is one of the key hacks that saves me hours of time every week.

Reviewing production log reports

Another very common task I use Claude Code for is reviewing production log reports. I typically run a daily query where I analyze production logs, looking for errors and things I should be aware of, or even just log warnings in the code.

This is incredibly useful because reporting services that send alerts on errors are often very noisy, and you end up getting a lot of false alerts.

Thus, I instead prefer to have a daily report sent to me, which I can then analyze. This report is sent by an OpenClaw agent, but the way I preview the results is incredibly important, and this is where HTML formatting comes in again.

When reviewing these production logs, there is a lot of information. First, you have the different error messages themselves. Second, you have the number of times each error message has occurred. You also have the different IDs referring to each error message, which you want to display in a simple way as well. All of this information is very difficult to present nicely in plain-text formatting, such as in Slack, but it's incredibly nice to preview in an HTML file.

Thus, after my agent has reviewed the production logs, I ask it to produce a report and present it as an HTML file, which makes it super easy for me to review all of the output and quickly get an overview of what's important, what I can skip, and so on.
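As a sketch, such a report generator might group parsed log lines by error message and render counts plus IDs as an HTML table. The log entries and field names below are assumptions for illustration; a real log parser sits in front of this.

```python
from collections import defaultdict
from html import escape

# Hypothetical parsed log entries: (error message, request ID).
LOG_ENTRIES = [
    ("TimeoutError: upstream call failed", "req-481"),
    ("TimeoutError: upstream call failed", "req-527"),
    ("KeyError: 'user_id'", "req-530"),
]


def build_report(entries: list[tuple[str, str]]) -> str:
    """Group entries by error message; render counts and IDs as an HTML table."""
    grouped: dict[str, list[str]] = defaultdict(list)
    for message, request_id in entries:
        grouped[message].append(request_id)
    rows = "\n".join(
        f"<tr><td>{escape(msg)}</td><td>{len(ids)}</td>"
        f"<td>{escape(', '.join(ids))}</td></tr>"
        # most frequent errors first, so the important rows are at the top
        for msg, ids in sorted(grouped.items(), key=lambda kv: -len(kv[1]))
    )
    return (
        "<!DOCTYPE html><html><body><h1>Daily error report</h1>"
        "<table border='1'><tr><th>Error</th><th>Count</th><th>IDs</th></tr>"
        f"{rows}</table></body></html>"
    )
```

Sorting by occurrence count puts the noisiest errors first, which is exactly the overview I want when deciding what to dig into and what to skip.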


Another pro tip is to not only generate the HTML file but also ask Claude Code to open it in your specific browser, which it does automatically, so you quickly get an overview. You also effectively get notified whenever the agent is done, because the browser pops up on your computer with a new tab holding the generated HTML file.

Conclusion

In this article, I've covered some of the specific techniques I use to review Claude Code output. I discussed why it's so important to optimize output reviewing, highlighting how the bottleneck in software engineering has shifted from producing code to analyzing the results of that code. Since the bottleneck is now the reviewing part, we want to make it as efficient as possible, which is the topic I've discussed here. I also talked about the different use cases I use Claude Code for and how I efficiently analyze the results.

Further improving the way you analyze the output of your coding agents will be incredibly important going forward, and I urge you to spend time optimizing this process and thinking about how you can make reviewing coding agent output more efficient. I've covered some techniques I use on a day-to-day basis, but of course there are many other techniques you can use, and you will have your own set of tasks that require their own techniques, different from mine.

👉 My free eBook and Webinar:

🚀 10x Your Engineering with LLMs (Free 3-Day Email Course)

📚 Get my free Vision Language Models ebook

💻 My webinar on Vision Language Models

👉 Find me on socials:

💌 Substack

🔗 LinkedIn

🐦 X / Twitter
