Code review strategies to ensure the highest standards of production code

    Code reviews form a very important part of the software development process. Most modern software development processes try to get a second or third pair of eyes on any code change to ensure that the code going into production is of high quality, meets certain standards and follows best practices. There have been arguments about who, and how many people, should review a particular piece of code. Personally, I believe that in modern software development pair programming is the norm, and the pair partner must always be part of the code review (CR). Apart from them, if the team has a subject matter expert (SME), they should also be included in the CR; if not, a senior engineer may be made part of the CR.

    The other problem teams face with a CR is the reviewer's lack of context, which hinders their ability to review the code closely. I believe the only way to resolve this is for the team to hold regular knowledge transfer sessions, along with demos and design reviews, so that every team member has at least high-level context on what the others are doing. This also reduces the dependency on any one person for a CR, so that reviews are not blocked when that person is unavailable. People also complain about "too many cooks spoiling the broth": adding too many reviewers slows down the shipping of the code and often makes the change susceptible to over-engineering and what I call "engineering adventurism", where the reviewers don't just want the code to be good but want it to be ideal, which seldom happens.

    Anyway, CRs are an integral part of any software development cycle, and based on my limited understanding, I have come up with the following points that I try to keep in mind when I am reviewing a piece of code:
  1. Review the code iteratively. This makes sure that we do not miss important points. The usual iteration pattern that I follow is as follows:
    1. Check for fundamental best practices. For example: naming conventions for variables, classes etc.; declaration of final variables and use of constants wherever possible; separation of concerns when declaring classes, with a separate utility class where appropriate; modularity and length of each function; OOP principles being followed; dependency injection wherever possible; making sure no code tries to reinvent the wheel; proper comments wherever needed; and proper documentation attached to the CR.
    2. Try to follow the 150-150 rule: a maximum of 150 lines of business logic and 150 lines of tests per CR. As they say, "Give a developer 10 lines to review and he will give 10 comments; give him 200 lines to review and he will say looks fine!" Longer CRs tend to make reviewers skip things, so keep each CR under 150 lines of code wherever possible.
    3. Check for business logic correctness. For example: coverage of corner cases; enough unit tests and integration tests to meet at least a certain code coverage threshold; making sure the code is bug-free and takes an optimized approach in terms of space and time; that input and output are as per the contract and the code is safe in terms of security; that the code is thread-safe wherever applicable; and that exceptions are handled properly with graceful recovery/retry wherever possible.
    4. Check for non-business-logic code. For example, does the code come with alarms, dashboards, metrics etc. for the new feature or code change? Does it have enough logging and metric emission that any failure can be caught easily and recovered from?
    5. A final end-to-end look: does everything fit together well, and is it exactly as expected based on the low-level design and implementation plan? If everything looks good, we may then approve it. Other things may also be relevant, but mostly on a case-by-case basis.
  2. Differentiate between "must-have" and "good-to-have" things in code in cases of emergency. Sometimes, when we find a bug or failure in production, or the deadline for a deliverable is very tight, speed of delivery becomes critical. I am not a fan of the "quick and dirty" approach to delivering anything, as the quickness is soon forgotten while the dirtiness haunts the people who maintain the software. But when we do need quicker delivery, we must ensure that the "must-have" things are present in the code, while for the more ambitious "good-to-have" things we may create a backlog and drive it to completion soon after the "emergency" is dealt with. This way we avoid extending deadlines by much.
  3. Don't just point out the problem; if possible, also suggest a solution in the CR comments. The best way, I feel, is to have a small one-on-one sync-up and explain what is wrong and how it can be corrected. It reduces the back and forth and ensures things are done faster (it is much easier if everyone is in the office than in a WFH setup).
  4. Allow creativity and trust the developer (at least to some extent). This is an arguable point, as the whole point of a CR is to have a neutral view of the code that is going to end up in production. When I say trust the developer, I mean trust their creativity when it comes to trivial things like naming a variable, certain types of comments, a certain way of looping etc. I have seen people leave comments tagged "nit" for these. For example, I have come across these nits:
    1. nit: Can we rename "submissionTimestamp" to "submittedTimestamp"?
    2. nit: Maybe we can make the comment a few lines shorter.
    3. nit: Can we use a for-each loop instead of a for loop?

      Comments of this sort are obviously aimed at making the code "perfect", but I think a piece of code can be perfect and still carry the signature of the developer, like an artist's. These nits are almost always very annoying for the developer. I would not dare suggest never adding nitpicks or comments of this type, but we might try to minimize them when they target the developer's "signature moves" rather than "incorrect/poor moves". The code has to be "as good as possible", not "as you would have written it". It's just a thought, and I leave it up to the readers for discussion!
  5. Lastly, make sure that you are as confident as (if not more confident than) the author that the code will work in the production environment before approving it.
    These steps consume a lot of time, but every minute is worth it. As the saying goes, "The more you sweat during training, the less you bleed during the war." I would rather give delivery dates with a bit of extra buffer for the CR and deployment than try to fast-track the CR process, unless the deployment is an "emergency" situation. If we strictly follow at least these basic checks for every CR, code quality, coverage, confidence and structure improve manifold, reducing maintenance effort later on. Ideally, these should be followed starting from the first commit in a package, but every day is a good day to start improving!
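To make the "fundamental best practices" in step 1.i a little more concrete, here is a minimal Java sketch of what I look for: named constants instead of magic numbers, a focused utility class, short single-purpose methods, and constructor-based dependency injection. The names (PaymentService, TaxCalculator, InvoiceStore) and the tax rate are hypothetical, made up for this illustration.

```java
import java.util.List;

final class TaxCalculator {                       // separate, focused utility class
    private TaxCalculator() {}                    // no instances: static utility only
    static final double STANDARD_TAX_RATE = 0.18; // named constant, not a magic number

    static double taxOn(double amount) {
        return amount * STANDARD_TAX_RATE;
    }
}

interface InvoiceStore {                          // abstraction for the injected dependency
    void save(double total);
}

final class PaymentService {
    private final InvoiceStore store;             // final field, dependency injected

    PaymentService(InvoiceStore store) {          // constructor injection: easy to mock in tests
        this.store = store;
    }

    // Short, single-purpose method with a descriptive name.
    double totalWithTax(List<Double> amounts) {
        double subtotal = amounts.stream().mapToDouble(Double::doubleValue).sum();
        double total = subtotal + TaxCalculator.taxOn(subtotal);
        store.save(total);                        // persist through the injected abstraction
        return total;
    }
}
```

Because the store is injected as an interface, a reviewer can immediately see that the class is testable with a fake implementation and that nothing is hard-wired.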
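The "graceful recovery/retry" check from step 1.iii can be sketched as a small bounded-retry helper: retry a transient failure a limited number of times with backoff, then surface the real cause instead of swallowing it. This is one possible shape, not a prescription; the class and parameter names are illustrative.

```java
import java.util.concurrent.Callable;

final class Retry {
    private Retry() {}

    // Run the call, retrying up to maxAttempts times with simple linear backoff.
    static <T> T withRetries(Callable<T> call, int maxAttempts, long backoffMillis)
            throws Exception {
        if (maxAttempts < 1) {
            throw new IllegalArgumentException("maxAttempts must be >= 1");
        }
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return call.call();                        // happy path: return immediately
            } catch (Exception e) {
                last = e;                                  // remember the failure for the caller
                if (attempt < maxAttempts) {
                    Thread.sleep(backoffMillis * attempt); // linear backoff between attempts
                }
            }
        }
        throw last;                                        // exhausted: surface the real cause
    }
}
```

The reviewer-relevant points: the retry count is bounded, the delay grows rather than hammering a struggling dependency, and the original exception is rethrown so failures stay diagnosable.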
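And for the non-business-logic checks in step 1.iv, a rough sketch of wrapping an operation with logging and metric emission, so that success rates feed a dashboard and failures leave enough of a trail to be caught and debugged. The MetricSink interface here is a hypothetical stand-in for whatever metrics client the team actually uses.

```java
import java.util.function.Supplier;
import java.util.logging.Logger;

interface MetricSink {
    void increment(String name);                       // e.g. a counter behind a dashboard/alarm
}

final class InstrumentedRunner {
    private static final Logger LOG =
            Logger.getLogger(InstrumentedRunner.class.getName());
    private final MetricSink metrics;

    InstrumentedRunner(MetricSink metrics) {           // inject the metrics client
        this.metrics = metrics;
    }

    <T> T run(String operation, Supplier<T> body) {
        try {
            T result = body.get();
            metrics.increment(operation + ".success"); // feeds success-rate dashboards
            return result;
        } catch (RuntimeException e) {
            metrics.increment(operation + ".failure"); // alarm on this counter
            LOG.severe(operation + " failed: " + e.getMessage()); // context for on-call
            throw e;                                   // never swallow the failure
        }
    }
}
```

When reviewing, I look for exactly this pattern around new features: a success counter, a failure counter an alarm can watch, and a log line with enough context to debug without re-deploying.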

I found this one very interesting: CR review pyramid

Amrit Raj
