Starting with Java Ecosystem version 2.2 (compatible with SonarQube version 4.2+), we no longer drive the execution of unit tests during Maven analysis. Dropping this feature seemed like such a natural step to us that we were a little surprised when people asked us why we’d taken it.
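In practice, this means unit tests must already have been executed, typically by your build or CI server, before the Sonar analysis runs, so that the test and coverage reports exist for Sonar to pick up. A minimal sketch for a standard Maven project:

```shell
# Run the build first: this executes the unit tests and produces the
# surefire and coverage reports under target/.
mvn clean install

# Then launch the analysis; starting with Java Ecosystem 2.2 it reuses
# the existing reports instead of re-running the tests itself.
mvn sonar:sonar
```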
Code coverage by unit tests is one of the practices that we push the most at SonarSource. We currently have more than 10,000 unit tests running daily to cover the platform, and we keep adding more every day. Of course, Sonar is not the place to start to do Test Driven Development: Sonar is not an IDE, Sonar will not create unit tests for you and Sonar will not replace your precious Agile developers. But once the first batch of code has been committed, you should certainly use it to manage unit tests and code coverage. The objective of this post is to show how to use Sonar to meet this requirement.
You probably already know that JaCoCo is one of the most performant code coverage engines. But you might not know that you can now combine it with Sonar to assess code coverage by integration tests. This was the most voted Sonar issue (SONAR-613), and the latest version of the Sonar JaCoCo Plugin solves it. I am now going to explain how.
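As a sketch of how this can be wired up with Maven (the profile name and file path below are assumptions for illustration; check the Sonar JaCoCo Plugin documentation for the exact property names your version expects), you run the integration tests with the JaCoCo agent attached, then point the analysis at the resulting execution data file:

```shell
# 1. Run the integration tests with the JaCoCo agent attached so that
#    execution data is written to a file. The "it-coverage" profile is a
#    hypothetical profile that configures the agent on the IT JVM.
mvn clean verify -Pit-coverage

# 2. Tell the Sonar JaCoCo plugin where the integration-test execution
#    data lives, so it can report integration-test coverage separately.
mvn sonar:sonar -Dsonar.jacoco.itReportPath=target/jacoco-it.exec
```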
A debate took place within the team about a year ago to decide whether Sonar should be able to report the error details of failing unit tests. One year later, it sounds bizarre that this debate could even have happened, given the comprehensive solution Sonar now provides to review and improve unit tests and code coverage.
Here is a one-minute video to summarize this year of effort:
At a glance, you get it all on the Sonar project dashboard: total number of tests, tests in error, tests in failure, skipped tests, code coverage and total duration of unit tests. On each of these measures, you not only get an indication of the short-term trend through the usual green and red arrows, but also alerts if any threshold has been crossed.
Then, to get the bigger picture, you can open the TimeMachine to see the evolution across the latest versions/snapshots. In the following example, the number of unit tests and the code coverage are increasing while the unit test duration is decreasing, which is pretty good:
Obviously, once you have seen all the numbers, you will want to understand how they break down. So you can click on any measure to drill down.
In this example, you see that a unit test fails in FindbugsRulesRepositoryTest.java, but what exactly is wrong?
And what’s true for unit tests in error is also true for code coverage: drill down, then display the code coverage detail:
In fact, Sonar offers multiple entry points (think of them as hunting tools) to improve code coverage. Which one to use really depends on what you are looking for.
1. To make sure coverage is homogeneous across packages, simply use the treemap
2. To understand why a module has low coverage, drill down on the percentage
3. For quick wins, check the classes cloud: any big red class is a target
4. To reduce risk on the project, the best entry point is the “Most complex & less tested files” widget in the Hotspots, which shows the files with the highest uncovered cyclomatic complexity
To get all that, you just have to:
1. Download Sonar
2. Unzip Sonar
3. Launch the Sonar web server (sonar.xx start)
4. Launch the Sonar Maven plugin (mvn sonar:sonar)
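The four steps above can be sketched as follows (the download URL, version number and platform directory are assumptions for illustration; adjust them to the distribution you actually download):

```shell
# 1-2. Download and unzip the Sonar distribution (URL and version are
#      hypothetical placeholders).
wget https://example.org/downloads/sonar-2.x.zip
unzip sonar-2.x.zip

# 3. Start the Sonar web server with the launcher for your platform
#    (the directory name varies: linux-x86-64, windows-x86-32, ...).
./sonar-2.x/bin/linux-x86-64/sonar.sh start

# 4. From your Maven project, run the analysis; results are published
#    to the freshly started server (http://localhost:9000 by default).
mvn sonar:sonar
```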
A lot of work has been done over the last 40 years to enable the emergence and formalization of models for evaluating software quality. The SEI Maintainability Index and the ISO 9126 standard (9126-3 for source code quality) are the results of several decades of research. These models bring value to the field of software analysis and they deserve respect for that. Yet when discussing source code quality, I often see people treating these models as bibles: they do not exercise their own judgment and sometimes simply forget to apply the models with common sense.
About a year ago, I met an expert in charge of implementing a commercial tool to evaluate source code quality. After he explained to me the advanced model used to calculate metrics on usability, reliability, maintainability… I asked him a naive question: how do you integrate Continuous Integration (CI) and Test Driven Development (TDD) practices, or even the latest emerging object-oriented metrics, into your quality evaluation process? The answer came straight back with a smile: “that is a developer thing”. In other words, the model is the model and should be taken as is.
But what about the evolution of practices, improvements to build infrastructure and new languages? Should they simply be ignored when evaluating the quality of source code with a model that was established 15 years ago?
I have been working on Sonar for several years now, and I do believe in the product and see the value the platform brings to development teams on a daily basis. But increasing the quality of development does not simply consist in analyzing source code quality. The approach we advocate is very much Return On Investment oriented: at any time, you should know what your next action will be, based on ROI. Now that I have implemented a solid development infrastructure including a CI engine, now that TDD is part of the developers’ culture, now that functional traceability is part of my process, I can focus on improving the quality of the source code and build a real action plan.