Code Coverage with significance
83.9% - what does that even mean?
Conversations about unit test coverage usually sound like this:
A: “What’s your coverage?”
B: “About 83.9%”
C: “Meh. Solid.”
A: “Solid? That’s incredibly high!”
D: “Ours is 40% but we have a lot of generated code so it’s still high.”
This shows that the perception of code coverage is highly subjective, and that a precise percentage like "83.9%" usually suggests more informative value than it actually has. As soon as your project contains code that is deliberately left untested (like generated code) but is still included in the coverage analysis, the indicator loses its significance. You can only tell that some of your code is covered, and perhaps infer from the percentage that coverage is "high" or "low". You can tell that your coverage is going up or down, but even that is not dependable, because it does not take into account how much of the new code is purposefully untested. How many tests are missing? What risk do you take on with every new release? You cannot give a satisfactory answer to these questions.
Opinions diverge on what "good" code coverage is; in my experience the answers range somewhere between 60% and 90%. My claim: your unit test coverage is good when 100% of the code you want to test is covered by unit tests. In my opinion this is both measurable and achievable, and it makes the coverage indicator significant again.
Purposefully untested code
In every larger project there are portions of code that you do not want to test with unit tests. The first step towards a meaningful code coverage indicator is to identify these portions. Coverage tools like JaCoCo and reporting tools like SonarQube can then help you exclude them from the coverage report - usually by defining exclusion patterns.
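As an illustration, in SonarQube such exclusions can be declared in a `sonar-project.properties` file via the `sonar.coverage.exclusions` property. The concrete patterns below are examples for a hypothetical project layout, not recommendations:

```
# Exclude generated code, DTOs and configuration classes from the
# coverage calculation (illustrative patterns - adapt to your layout).
sonar.coverage.exclusions=**/generated/**,**/*Dto.*,**/dto/**,**/*Config.*
```

JaCoCo offers equivalent exclusion settings in its Maven and Gradle plugin configuration.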
Code of 3rd party libraries
Just stating the obvious: since you are not responsible for changes to this code, you are not responsible for its tests either. In a normal project setup, 3rd party libraries are not included in the coverage report by default.
Test code
Code that is intended for testing purposes only is usually separated from the production code, which makes it easy to exclude. There may be exceptions like mock objects or testing infrastructure that live close to the production code; in that case you should explicitly exclude them from your coverage report.
Generated code
Changes to this code are made by a code generator. You can assume that the generated code is correct as long as the generator works correctly. (That said, if you wrote the generator yourself, you obviously still have to test the generator.) Excluding generated code is often easy, as it is usually located in a separate package, so a simple exclusion pattern covers it.
Sometimes it is a bit more complicated, for example when you use a convenience framework like Lombok in Java that generates accessors, constructors and the like for you. The result is bytecode in which handwritten and generated code live in the same class, which makes it nearly impossible to exclude only the generated methods. Most of the time Lombok is used in objects like DTOs, JPA entity classes etc. that only hold property fields and contain no business logic. One possible solution is to declare these whole classes as "not to be tested" and make them easily identifiable with a consistent naming convention, or by moving them into dedicated packages, so you can use patterns like "**/*Dto.*" or "**/dto/**" to exclude them completely.
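To make the problem concrete, here is a sketch of roughly what Lombok's @Data expands to for a simple DTO (the class and field names are made up). With Lombok, only the two field declarations would be handwritten - yet the generated accessors land in the same class file, so they cannot be excluded separately:

```java
// Hypothetical DTO: with Lombok, everything below the fields would be
// generated at compile time, invisible in the source but present in
// the bytecode that the coverage tool measures.
public class CustomerDto {
    private String name;
    private String email;

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public String getEmail() { return email; }
    public void setEmail(String email) { this.email = email; }
}
```

The "*Dto" suffix is what makes a pattern like "**/*Dto.*" work.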
A little off topic: when you don't use Lombok and write your getters, setters and constructors/builders yourself, that does not mean you have to unit test them explicitly. When all of your business code is unit tested, all needed getters, setters and constructors are tested implicitly and show up as covered in the coverage report. If they don't, then either the tests for your business code are incomplete, or the getters/setters are not actually used and you should just delete them. Unfortunately, methods like equals() and hashCode() are a whole different story because of their high cyclomatic complexity.
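A minimal sketch of that implicit coverage, with a hypothetical class: a unit test for the business method grossWith() also executes the constructor and getNet(), so neither needs a dedicated test to show up as covered.

```java
// Hypothetical example: testing the business logic covers the
// accessors it relies on as a side effect.
public class Invoice {
    private final double net;

    public Invoice(double net) {
        this.net = net;
    }

    public double getNet() {
        return net;
    }

    // Business logic under test - calls getNet() internally, so a
    // test for grossWith() also marks getNet() as covered.
    public double grossWith(double taxRate) {
        return getNet() * (1 + taxRate);
    }
}
```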
Code better covered by other types of tests
A good example of code that is better tested by non-unit tests are repository classes that access your database. Writing unit tests for them is possible but cumbersome, because they usually call a lot of framework API that has to be mocked. Also, repository classes typically don't contain much logic besides the database access itself, so unit tests don't test that much anyway. IMO it is better to write tests that actually integrate with a (more or less) real database and to do without unit tests here.
Another example are classes like Spring configurations annotated with @Configuration. They are not meant to be tested by unit tests; their purpose is to construct a working application context, which is better verified by good integration tests. This is why I usually exclude "**/*Config.*" from the unit test coverage report.
The new meaning of 83.9%
When you put a little effort into defining the correct exclusions for your code coverage, you are rewarded with an actually meaningful coverage percentage. 83.9% code coverage now means that you are missing exactly the 16.1% of unit tests that you should still write - that's a valuable piece of information! It also means that reaching 100% unit test coverage becomes genuinely possible. How cool is that?
Well, to be honest: realistically it is still nearly impossible to reach 100% in a larger project. There are always things like private constructors that prevent instantiation, code paths that cannot be reached in tests because some static framework dependency cannot be mocked, and probably a dozen other reasons that prevent reasonable unit tests in some odd cases. But if that leads to "only" 98% test coverage, at least you know that exactly 2% of your production code carries the risk of breaking without you noticing - and you can consciously accept, assess and communicate that risk.
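As an example of such an unreachable line, consider a utility class (hypothetical names) whose private constructor exists only to prevent instantiation - its body can never be hit by a regular unit test, so it stays uncovered no matter how complete the tests are:

```java
// Hypothetical utility class: the private constructor is never
// invoked, so its line remains uncovered in the coverage report
// even when every public method is fully tested.
public final class StringUtil {
    private StringUtil() {
        // prevents instantiation; unreachable from normal test code
    }

    public static boolean isBlank(String s) {
        return s == null || s.trim().isEmpty();
    }
}
```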