Static Analysis tools > Experience
When developers choose to enforce static analysis tools (e.g. JTest or PMD) they may have the best intentions at heart, but they can actually make the code worse in the long run, because the value of the programmer's experience and expertise is not considered. Static analysis tools are automated programs that run a set of rules against source code to produce metrics and give advice on best practices. So why does using these tools lead to bad code?
The Heartache

You're trying to finish some down-to-the-wire coding to hand off to your testers, and your deadline is approaching. You have one last class to change, and it is a legacy class, written before the static analysis tools were required to run against the code base. You go in to make a simple change, like adding a null check. You then run the static analysis tool and get 30 violations, from incorrectly named fields to cyclomatic complexity above the threshold. You have two options: A) take the time to fix all the violations and leave the code better than you found it, or B) take the lazy, easier route and try to beat the rules by working around them. What do you choose?
Reaction to the pain

Depending on the option you chose, you left the code beautiful, a little better, or in shambles and worse than you received it. By choosing option A you decided to be pragmatic and actually fix the cyclomatic complexity. Being the pragmatic programmer you are, you also need to consider unit testing. You take the time to understand the violating method and make the appropriate refactoring. You hope the developer before you tested this method, but often that is not the case, and even when it is, did they write a good test, or just enough to pass the code coverage standard? Again you have options: take the time to use TDD, or refactor without tests and hope you don't break anything.

If you chose option B, you decided to hack the code to bypass the rule. By circumventing the system you are making the code worse: harder to understand and more likely broken. You figured out that extracting pieces of the complex method into other methods bypasses the rule. This doesn't truly fix the cyclomatic complexity; it just hides it from the tool. And again you have to decide whether to test.

Let's look at a less extreme example. You're now forced to fix a violation because the code uses the '!' operator more than the maximum of two times mandated by the rule. You might change the code to "== false", or flip the condition so that the if branch checks for true and the else catches the false case. This works on a case-by-case basis, but sometimes flipping the expression makes the code harder to read, and the change loses the developer's intent. You are probably making the code worse by trying to beat the system or appease the rule. Is the code still easily understood after the change? Taking the Dreyfus Model into account will help us understand why a developer would respond in these different ways.
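To make the negation workaround concrete, here is a hypothetical sketch of the same guard clause written three ways. The rule about '!' usage and the class and method names are invented for illustration; the point is that all three versions behave identically, so the only thing the rewrites change is readability.

```java
// Hypothetical example: three behaviorally identical ways to write a guard
// clause, illustrating the workarounds described above for a rule that
// limits use of the '!' operator.
public class NegationExample {

    // Original: the '!' states the intent directly (reject invalid input).
    public static String checkOriginal(boolean valid) {
        if (!valid) {
            return "rejected";
        }
        return "accepted";
    }

    // Workaround 1: "== false" silences the rule but adds noise.
    public static String checkEqualsFalse(boolean valid) {
        if (valid == false) {
            return "rejected";
        }
        return "accepted";
    }

    // Workaround 2: flipping the branches hides the guard-clause intent
    // behind an else block the reader now has to hold in their head.
    public static String checkFlipped(boolean valid) {
        if (valid) {
            return "accepted";
        } else {
            return "rejected";
        }
    }
}
```

All three compile cleanly and return the same results, which is exactly the problem: the tool is satisfied while the code drifts away from the author's original intent.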
Dreyfus Model

The Dreyfus Model rates developers across five stages: Novice, Advanced Beginner, Competent, Proficient, and Expert. Novices are driven by rules and need guidance to complete tasks, while Experts are past rules and let intuition drive their decisions. Notably, Experts often can't explain why they do things the way they do. A Novice, being rule-driven, will likely fix the problem without hesitation, or may simply be too lazy to attempt the exploratory fix. Experts, on the other hand, will question a rule violation against their code. They will not blindly fix the error if they believe their code is correct, even a work of art. That is not ego talking, but experience, intuition, and instinct. Not all developers are made equal, so why do we think code should be? The Expert will likely be unable to verbally justify why the violation should be suppressed; to them, it's simply clearer and better their way. By not trusting their instincts you are degrading them, and they are likely to push back against the rule and toy with the enforcer. Forcing rules on Experts is the wrong way to treat them and their knowledge. Rule violations shouldn't be blindly enforced; they should be taken case by case.
Learning Tool

I do believe static analysis tools make a great learning tool for best practices. Novices and Advanced Beginners can benefit from seeing what they did incorrectly when they violate a rule and learning from it as they progress in their careers. But much of the time you are doing maintenance on code someone else wrote before the tool was enforced, and you are now held accountable for fixing the violations when you only came into the class to add a new field. First, this leads developers to avoid making code changes even when they know they are for the best. Second, you are now accountable for someone else's debt. The original developer may have moved to another team and never sees the debt they incurred, so they can't learn from the experience. Ideally, when a rule violation is found in someone else's code, it is shown to them so they get to see the debt they incur. Developers should be taught to write good code in the first place, but without good mentors that is hard, especially when the Experts can't explain why they do what they do. Static analysis tools should not replace code reviews or agile pair programming. Code reviews should still be done and should require at least one more set of eyes on every line of code written; this fosters mentorship by pairing less experienced programmers with more experienced developers.
Conclusion

Use static analysis tools as guidelines, not a mandate. Use them as training guides for younger developers and for the advice they are built to give. Enforce rules case by case, not wholesale. Blanket enforcement causes developers to rebel, make the code worse, and avoid making code changes at all, because they become responsible for fixing rules they may have good reason to disagree with, passionately. Please let us know how you have dealt with these tools or situations in the past. What pain have you felt from them?

References: Dreyfus model: Developing Expertise: Herding Racehorses, Racing Sheep by Dave Thomas
Posted in Best Practices
Use FindBugs, and consult Bill Pugh's JavaOne 2008 talk "Using FindBugs in Anger" for what to do with large code bases (in short: use your discretion about which bugs can be left alone and which need to be fixed).
Some comments about PMD:
Out of the box, PMD can be overly verbose, and it is easy to let it run more rules than necessary. One of the "expert" developers should set up the rulesets that will be enforced, and even then the rules should still be treated as guidelines. As the article mentions, there are often good reasons for a particular code construct that violates a PMD rule. PMD supports both annotations and special comments that let the developer indicate to PMD (and to other developers) that they are ignoring a rule on purpose; a good developer will also add a comment explaining why. It is also possible to suppress PMD warnings for an entire class, so in the example from the article the developer has a better choice: make the fix for the NPE, then suppress PMD warnings for the class. This should also serve as an indication that the class needs to be revisited, but it gives the developer a way to make a critical fix on a tight deadline without having to "fix" a bunch of legacy code that has worked fine for years.
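A minimal sketch of what that suppression looks like, assuming a hypothetical legacy class. The `@SuppressWarnings("PMD")` annotation and the `// NOPMD` line marker are PMD's real suppression mechanisms; the class, method, and the NPE fix shown are invented for illustration.

```java
// Sketch of PMD's class-level and line-level suppression, applied to an
// imagined legacy class. Suppressing "PMD" wholesale covers the entire
// class; PMD also accepts rule-specific values for narrower suppression.
@SuppressWarnings("PMD") // legacy class predates the ruleset; revisit when refactoring
public class LegacyBilling {

    public static String describe(Object account) {
        if (account == null) { // the critical NPE fix we actually came to make
            return "unknown account";
        }
        return account.toString(); // NOPMD - example of a single-line suppression
    }
}
```

The comment after the annotation is the part a good developer adds: it tells the next reader that the suppression is deliberate debt, not an oversight.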
Sorry, but I think you are wrong here.
1) Many static code analysis tools are useless without some configuration; PMD is one of them. FindBugs, on the other hand, is pretty much fine out of the box. Experts tune these tools until they have the right set of rules, then use them to make sure juniors don't screw up (which they otherwise will). Almost every day I find something with Checkstyle or FindBugs that one of the guys on my team committed, and I make them fix it. This ranges from stupid stuff to otherwise extremely hard-to-spot concurrency bugs. This alone is a reason never to go without static code analysis.
2) The tools don't lie, and most of the rules (those that survive your configuration) are based on sound principles. An expert understands these principles and will recognize rule violations as a problem.
3) Arguing against well-established practices is a sign of an advanced beginner, not of an expert. It's very simple: either the rule is always right or it is always wrong. In the latter case, configure your tool not to apply the rule; that's fine, we all do it. In the former case, zero tolerance is the only right approach, and the tool makes that dead easy.
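For concreteness, "configure your tool not to apply the rule" in PMD means maintaining a team ruleset file. The sketch below is a hypothetical ruleset that keeps the complexity rule but raises its threshold; note that rule reference paths and property names vary between PMD versions (older releases use paths like `rulesets/codesize.xml/...`, newer ones `category/java/design.xml/...`), so treat this as a shape, not an exact recipe.

```xml
<?xml version="1.0"?>
<!-- Hypothetical team ruleset: opt in only to rules the team agrees on. -->
<ruleset name="team-rules">
  <description>Rules the whole team has agreed to enforce.</description>

  <!-- Keep the complexity check, but raise the threshold so legacy
       code does not drown the report. Path shown is the pre-PMD-6 style. -->
  <rule ref="rulesets/codesize.xml/CyclomaticComplexity">
    <properties>
      <property name="reportLevel" value="15"/>
    </properties>
  </rule>
</ruleset>
```

Checking a file like this into version control makes the "always right" rules explicit and makes disabling an "always wrong" rule a reviewable decision rather than a per-developer workaround.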
I recently inherited a project that was a big pile of stinking crap, and all my static code analysis tools confirmed it (though technically I didn't need them to). When maintaining big stinking piles of code, analysis tools, test tools, etc. are your friends.
Step 1: Fix all of the FindBugs issues: dereferencing null values, redundant null checks, misguided use of the synchronized keyword, and so on.
Step 2: Use code coverage tools to figure out what code is not being tested.
Step 3: Fix that, ruthlessly deleting dead code in the process (if it has no test and nobody seems to use it: delete and ask questions later).
Step 4: Refactor the hell out of the project. Breaking things will happen from time to time, but once you fix them, your code will be better.
Step 5: Work on the code until you have zero warnings with the analysis tools running on every save in Eclipse (no expert would turn those off). This probably requires some decisions about which rules to keep and which to drop.
Step 6: Turn on code cleanup on save in Eclipse. Really, this saves so much time and arguing. Our Eclipse configuration comes straight from Maven, so all team members use it, and it hooks up the analysis tools as well. Great stuff.
Indeed, none of this replaces code reviews or unit, integration, and manual end-to-end testing. But nobody claims it does, and you shouldn't rely on those processes to find trivial rule violations either; let them focus on the correctness and design of the code.
This is a never-ending debate. Although I accept the argument that experts will evade tool enforcement, what guarantees that experts don't make mistakes? There is no reason tool usage must always end in a dreadful situation; in fact, I find that not using a tool leads to more pain than using one. Unit tests and code metrics are measures of code quality, no matter how expert the developers are.
What matters is how the tools are used. In a highly automated environment, where builds happen more than once a day, monitoring commits and generating reports, it makes a lot of sense to integrate these tools. Having this in place is already half the work done.
I think enforcing tool usage (by making it part of the commit) is the real issue. Instead, senior members of the team could take responsibility for doing smart code reviews. This is as good as using the tool itself (and at times better). However, the advantages of unit testing and code metrics tools are too great to ignore.