The Anatomy of Slow Code Reviews
I’ve written about code reviews before, but this is another nice article. It considers the following points:
- Code Review Time
- The Social Aspects
  - Incentives
  - Review Styles
  - Ownership
- What to Improve
  - Code Complexity
  - Iteration Time
- What Not To Improve
  - Lead Time for Changes
  - Test Coverage Percentage
- How to Improve
  - Review SLO (service level objective)
The article then goes into detail about the SLO. Maybe that’s the pun of the title: slow/SLO?!
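As a minimal sketch of what tracking a review SLO could look like in practice (the review records and the 24-hour target below are made up for illustration, not taken from the article):

```python
from datetime import datetime, timedelta

# Hypothetical review records: (review requested, first reviewer response)
reviews = [
    (datetime(2024, 1, 1, 9, 0), datetime(2024, 1, 1, 15, 0)),   # 6 hours
    (datetime(2024, 1, 2, 9, 0), datetime(2024, 1, 4, 9, 0)),    # 48 hours
    (datetime(2024, 1, 3, 9, 0), datetime(2024, 1, 3, 10, 30)),  # 1.5 hours
]

# Example target: first reviewer response within 24 hours
SLO_WINDOW = timedelta(hours=24)

within_slo = sum(1 for requested, responded in reviews
                 if responded - requested <= SLO_WINDOW)
attainment = within_slo / len(reviews)
print(f"SLO attainment: {attainment:.0%}")  # 2 of 3 reviews met the target
```

The point of an SLO like this is that it measures the team against an agreed response time rather than blaming individual reviewers for individual slow reviews.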
I think the “what not to improve” section is what makes this article stand out as a new, perhaps controversial, approach, but I agree with it.
Again, Google’s code review process is referenced; it has sections for the developer and a larger section for the reviewer.
At NIPO, these are the problems I see, specifically with cross-team code reviews:
- incentives to do a review:
  - getting someone to do a code review at all
  - authors taking a long time to implement feedback
- the fallacy of perceived seniority:
  - not reviewing because someone else already approved it
  - not reviewing because you have nothing to say
- the intent and perception of reviews:
  - it’s hard to communicate in writing, and sometimes comments come across as more (or less) instructive than intended
I wonder if we should consider participating in a code review workshop at NIPO to get everyone on the same page?
I think everyone has different ideas about the importance and style of code reviews.
My recommendation would be “Dr McKayla”’s Code Review Workshops.
She also has some free online content that probably covers the same material, but a workshop would be more immersive and interactive.