I have a pre-commit hook that can make sure properties are set on files when they're committed, and that those properties are set to particular values.
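For concreteness, here's a rough sketch of what such a hook can look like. This is just an illustration in Python, not the actual hook; it assumes svnlook is on the PATH and uses the CodeReview = Done property from the example below:

```python
#!/usr/bin/env python3
# Illustrative pre-commit hook: reject the commit unless every added or
# updated file in the transaction has CodeReview set to "Done".
import subprocess
import sys

REPOS, TXN = sys.argv[1], sys.argv[2]   # Subversion passes these to the hook
PROP, REQUIRED = "CodeReview", "Done"

def prop_value(path):
    try:
        return subprocess.check_output(
            ["svnlook", "propget", "-t", TXN, REPOS, PROP, path],
            stderr=subprocess.DEVNULL, text=True).strip()
    except subprocess.CalledProcessError:
        return ""                        # property not set on this path

changed = subprocess.check_output(
    ["svnlook", "changed", "-t", TXN, REPOS], text=True)

missing = []
for line in changed.splitlines():
    action, path = line[:4].strip(), line[4:]
    if action.startswith("D") or path.endswith("/"):
        continue                         # ignore deletions and directories
    if prop_value(path) != REQUIRED:
        missing.append(path)

if missing:
    sys.stderr.write("%s=%s is not set on:\n  %s\n"
                     % (PROP, REQUIRED, "\n  ".join(missing)))
    sys.exit(1)                          # non-zero exit rejects the commit
```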
However, there are several issues:
- File properties are revisioned just like files. If I set `CodeReview = Done` on my file at revision 14, it will still say `CodeReview = Done` when the next person commits their change to Subversion. Not exactly what you want.
- There isn't an easy way to query file properties to see which files have the value `Done` and which don't (see the sketch below).
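Just to show how clunky that query gets, here's roughly what you end up doing against a checked-out working copy. This is my sketch, one svn call per file, not some supported query interface:

```python
#!/usr/bin/env python3
# Hunt for files whose CodeReview property is not "Done" by asking svn
# about every versioned file, one at a time. Slow and awkward, which is
# exactly the problem.
import subprocess

def svn(*args):
    return subprocess.run(["svn"] + list(args),
                          capture_output=True, text=True).stdout

not_done = [
    path
    for path in svn("list", "-R", ".").splitlines()
    if not path.endswith("/")                      # skip directories
    and svn("propget", "CodeReview", path).strip() != "Done"
]

print("Files not marked CodeReview=Done:")
print("\n".join(not_done) if not_done else "(none)")
```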
My humbly submitted recommendation: You're heading into a world of hurt. Instead, set up Jenkins as a continuous build server. Each time someone commits a change to Subversion, Jenkins will fire off a build recording what got changed and by whom. Then, instead of marking individual files as reviewed or not reviewed, you review the Jenkins build, which covers an entire changeset (which really makes more sense anyway).
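As a sketch of what that buys you, each build carries its changeset, which you can pull back out of the Jenkins JSON API. The server, job name and build number below are placeholders, and the exact field names can vary with your Jenkins version and SCM plugin:

```python
#!/usr/bin/env python3
# Sketch: list who changed what in a given Jenkins build via the JSON API.
import json
from urllib.request import urlopen

url = "http://jenkins.example.com/job/my-project/42/api/json"
build = json.load(urlopen(url))

for change in build.get("changeSet", {}).get("items", []):
    author = change.get("author", {}).get("fullName", "unknown")
    print("%s: %s" % (author, change.get("msg", "").strip()))
```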
You can use the Promoted Builds Plugin to mark whether a particular build has been reviewed and whether it passed the code review. You can also attach to each build a comment describing whatever issues were discovered in your code review.
Now, excuse me as I get on my soapbox here...
Most code reviews are pretty worthless. There are three major aspects of code review:
- Did the user follow the corporate standard practices?
- Is the logic embedded in the program any good?
- Did the developer make a silly error?
Too many code reviews focus on Issue #1 and leave little time for the rest of the review, or even for development. I've spent hours in a code review because a developer called a variable `CustomerServiceResponseCode` and not `customerServiceResponseCode`. We've even argued over whether it should be `responseCodeCustomerService`. Hours of my life I will never get back that could have gone to more important things, like playing Angry Birds.
As for Point #2: this should be done before any code is written. It's a bit too late to tell a developer "You did it all wrong" after they've spent 5 hours coding it.
Now, let's get to Point #3.
Before code reviews:
Developer #1: Hey, there's a bug in your program!
Developer #2: Wow, yeah. I forgot to fomberbrate the verbex before I plugdix it. Boy that's a stupid mistake.
Results: One developer who feels incompetent and stupid.
After code reviews:
Developer #1: We just got a report back that there's a bug in the program!
Developer #2: Wow, we forgot to fomberbrate the verbex before we plugdix it. How did this slip by in code review?
Results: A whole team of developers who feel incompetent and stupid.
I have rarely seen a code review that actually catches a bug -- even a simple one -- before approving the code. It's someone else's code and most of the developers have other things on their minds. They do a quick look over, see the overall logic looks right, and then to show they're paying attention, complain that the indentation isn't correct.
What to do?
As part of your process, all new development should be discussed before the code is written. This isn't code review; it's part of your issue-tracking system.
Use automated tools to catch things. In Java, we have Checkstyle (checks that the code is formatted correctly), FindBugs and PMD (both catch potential bugs), copy/paste detectors, etc. Jenkins can run these programs and build all sorts of charts and graphs. Even better, it can be part of a Continuous Integration game: you get points for fixing Checkstyle and FindBugs issues, and lose points for creating new ones. Also insist that compiler warnings and deprecation notices are fixed; again, Jenkins can track them and award points for cleaning them up.
Use unit testing. When I head a project, I insist that all unit tests are written before a lick of code is created. This gives developers a concrete goal to aim for while developing, and the resulting code is usually better, with fewer bugs. Use coverage tools to see how well your unit tests are covering your code.
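Here's a toy illustration of the test-first idea, sketched with Python's unittest for brevity (in a Java shop this would be JUnit); the discount() function is entirely made up:

```python
import unittest

# The tests are written first and pin down the behaviour; the function is
# then filled in until they pass. discount() is a made-up example.
def discount(total, is_member):
    """Members get 10% off orders over 100."""
    if is_member and total > 100:
        return round(total * 0.9, 2)
    return total

class DiscountTest(unittest.TestCase):
    def test_member_over_threshold_gets_ten_percent_off(self):
        self.assertEqual(discount(200.0, is_member=True), 180.0)

    def test_non_member_pays_full_price(self):
        self.assertEqual(discount(200.0, is_member=False), 200.0)

    def test_member_at_threshold_pays_full_price(self):
        self.assertEqual(discount(100.0, is_member=True), 100.0)

if __name__ == "__main__":
    unittest.main()
```

A coverage tool (coverage.py here, JaCoCo or Cobertura on the Java side) then tells you how much of discount() the tests actually exercise.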
Automated tools usually catch more errors than manual code reviews can, and automated tests can flag problems in the code's logic.
This doesn't mean code reviews are useless. It simply allows for code reviews to be overall reviews of the code. For example:
- What did the developer think about the coding? Did they find the implementation more complex than it should have been because of poor design somewhere else?
- Is the program still readable and understandable? Are there too many exceptions and `if` clauses to get around implementation issues?
- Was the general strategy followed? Did the developer have trouble using a base class or API because of a bad implementation elsewhere?
- Did the developer follow the agreed upon logic?
- Did the developer depend on, say, one third-party JSON library when another third-party library is already used for that in the rest of the code?
Don't do code reviews for the sake of code reviews. Don't waste too much time on the mechanics of doing them. And, for God's own sake, do not do pre-commit code reviews; those leave developers stuck waiting for approvals before they can do any more work.