
Code Reviews with Adrienne Tacke, author of Looks Good to Me

March 27, 2025
Code reviews
Ankit speaks with Adrienne Tacke about the intricacies of code reviews, their importance in developer experience, and how to use metrics effectively without compromising the review process.
Hosted by
Ankit Jain
Co-founder at Aviator
Guest
Adrienne Tacke
Sr. Developer Advocate

About Adrienne Tacke

Adrienne is a Senior Developer Advocate at Viam. She's just published "Looks Good To Me: Constructive Code Reviews", a labor of love that she hopes will improve code reviews everywhere. She also spends way too much money on coffee and ungodly amounts of time playing Age of Empires II.

Adrienne on LinkedIn, GitHub, Bluesky, Website

If you asked any developer in the world if they’re happy with their code review process, you would probably not hear a yes. At the same time, if you asked 50 developers how code reviews should be done, you’d get 50 different responses.

In this episode of the Hangar DX podcast, Ankit speaks with Adrienne Tacke, the author of "Looks Good To Me: Constructive Code Reviews," a book that she hopes will improve code reviews everywhere.

Adrienne advocates for code reviews as the team's record-keeping function:

Yes, there is documentation, Jira, or code comments, but code reviews are great as the baseline record of the changes in the code because you have to do them anyway.
It’s part of the software development process, so why not use it to improve the code, share knowledge within the team, and have a clear record of how the code changed historically?

Adrienne also shares her blueprint for establishing a good code review culture:

1. What goals do you want to accomplish with code review?

Consider the bottlenecks and frustrations that happen because there is no code review process, and take those as the starting goals. These can be, for example, making sure knowledge is not siloed within the team.

2. Set guidelines for what the code review process should be

This is the stage where most teams are unhappy with code reviews: there might be some process, but everybody has a different understanding of what that process is. Create guidelines, which can be as simple as listing which issues are blocking vs. non-blocking or how many approvals a change needs (see the sketch after this list).

3. Do not set it in stone

If some of the guidelines don’t work or if something is missing, change them! And change them as a team; the point of this step is to get team members talking and agreeing more.
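To make the guidelines in step 2 concrete, here is a minimal sketch of what a team’s codified review guidelines could look like, assuming they are kept as a small piece of versioned configuration that a checklist or review bot could read. The labels, categories, and approval count below are hypothetical illustrations, not recommendations from the episode.

```python
# Hypothetical sketch: review guidelines codified as data the team can
# version alongside its code and change together (see step 3).
from dataclasses import dataclass, field


@dataclass
class ReviewGuidelines:
    required_approvals: int = 2
    # Issues that block a merge until resolved (illustrative labels)
    blocking: set = field(default_factory=lambda: {
        "correctness", "security", "breaking-api-change", "missing-tests",
    })
    # Issues worth raising but left to the author's discretion
    non_blocking: set = field(default_factory=lambda: {
        "naming", "style", "nitpick", "future-refactor",
    })

    def is_blocking(self, label: str) -> bool:
        return label in self.blocking


if __name__ == "__main__":
    guidelines = ReviewGuidelines()
    print(guidelines.is_blocking("security"))  # True: must be fixed before merge
    print(guidelines.is_blocking("nitpick"))   # False: non-blocking feedback
```

Keeping guidelines in a file like this makes them explicit and easy to change as a team, which is exactly what step 3 asks for.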

Can code reviews be done by AI?

Adrienne says that with all the code that is generated by AI, it’s even more important for humans to look at it:

Every tool that generates code comes with a disclaimer: Make sure you review it!
Absolutely use AI to take care of the more mundane tasks, like generating the description of what is happening in your PR or letting it do a first-pass review and making sure things are syntactically correct.
I’d rather have my computer tell me to fix formatting and style than see a comment about it from a colleague in code review. But AI can’t replace human judgment, at least at this point, even with all the improvements in LLMs and agents.
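As an illustration of handing that low-level feedback to tooling, here is a minimal sketch of a pre-review check script, assuming the team has standardized on Black for formatting and Ruff for linting; the script itself is hypothetical, and whatever formatter and linter your team already uses would work the same way. Run locally or in CI, it lets the machine flag formatting and style before a human reviewer ever opens the PR.

```python
#!/usr/bin/env python3
"""Hypothetical pre-review check: let tools handle formatting and style
so human reviewers can focus on design and correctness.

Assumes the project uses Black and Ruff; substitute your own linters."""
import subprocess
import sys

CHECKS = [
    ["black", "--check", "."],  # formatting only; non-zero exit if changes are needed
    ["ruff", "check", "."],     # lint rules: unused imports, style issues, etc.
]


def main() -> int:
    failed = False
    for cmd in CHECKS:
        # Each tool prints its own findings; we only track the exit status.
        result = subprocess.run(cmd)
        if result.returncode != 0:
            failed = True
    return 1 if failed else 0


if __name__ == "__main__":
    sys.exit(main())
```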

Adrienne and Ankit also discuss:

  • Commit history vs. code reviews
  • The value of code reviews in pair programming
  • Code reviews as bottlenecks
  • Linters vs. code reviews
  • Metrics and how to measure the quality of code reviews

Chapters

00:00 Introduction to Code Reviews
03:01 The Importance of Team Dynamics in Code Reviews
05:58 Establishing a Code Review Culture
09:04 The Role of Code Reviews in Knowledge Sharing
11:48 AI and the Future of Code Reviews
14:48 Metrics and Their Impact on Code Review Quality


Takeaways:

  • There’s no one-size-fits-all code review process that works for every team.
  • A good code review culture starts with clear goals and team-agreed guidelines.
  • AI can assist in code reviews but cannot replace human judgment.
  • Metrics should be used to track trends over time, not to judge individual performance.
  • The value of code reviews as a record-keeping function.
  • Pair programming can speed up feedback but still requires broader team reviews.
  • Low-level feedback can and should be automated.
  • Developers should be encouraged to engage with unfamiliar code during reviews.
  • A diverse reviewer pool can improve code quality and team knowledge.
  • Large PRs and a small pool of reviewers are the most common reasons code reviews become bottlenecks.
