Five common mistakes in unit-testing
Having worked with unit-testing and test-driven development for the past three years, I repeatedly see people making the same common mistakes. These mistakes don’t necessarily undermine the benefits of unit-testing, but they do make the process less effective than it could be:
- Writing monolithic tests. More often than not, I see people writing one test method for each method they are testing, with all their asserts contained within that one test method. While not incorrect, this impedes testing because most frameworks abort a test method at its first failed assert. For example, if a test method has ten asserts and the first one fails, nothing can be concluded about the remaining nine; the first failure must be fixed before the rest can even run. This is particularly frustrating when tests are automated and run on a regular basis. Consider splitting your tests by requirement, with no more than the minimum asserts necessary to verify the associated requirement in each test method (see the first sketch after this list).
- Not running the tests on a regular basis or altering them as requirements change. Once tests are written and pass, many developers never run them again. Months later, changes in requirements and code lead to a drift between the tests and what they are testing. Should someone run the tests again, the code under test will appear to be broken and confusion will result. It is therefore important to run tests frequently, ideally automatically as part of the build, and to keep them in line with the current requirements.
- Improper description of what test cases are testing. Unit tests cannot exist in a vacuum: mislabelled or undescribed tests only obscure which requirements they are verifying. Sometimes asserts are self-explanatory, but more often they are not and require further output to describe failures. Consider naming your test methods to describe what they are testing (e.g. `testThatMethodFooResultsInCondition`) and/or including a written explanation inside the assert statement to be printed alongside the failure (see the second sketch after this list).
- Noisy test output. Depending on the testing library you are using, your tests may already be generating needless output, and adding more will only make the results harder to read. Positive test results should never write data to the test log. Negative test results should use the output facilities of the testing framework so that failures are formatted uniformly.
- Improper use of the test case API. This is a small detail but deserves to be mentioned. I have seen many test cases that do not make good use of the testing framework under which they are running, leading to unhelpful output. For instance, in JUnit, `assertTrue(foo == 3)` is not equivalent to `assertEquals(3, foo)`: the former outputs a generic failure message while the latter explicitly reports the expected and actual values. Pay careful attention to the test framework’s assert method signatures, as argument order often matters. In JUnit, `assertEquals(3, foo)` is not equivalent to `assertEquals(foo, 3)`, and the two produce different failure messages (the third sketch after this list shows all three variants).
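To illustrate the first mistake, here is a minimal JUnit 4 sketch. The `Counter` class is hypothetical, included only so the example compiles; the point is the contrast between one monolithic test and one test per requirement.

```java
import static org.junit.Assert.*;

import org.junit.Test;

public class CounterTest {

    // Hypothetical class under test, included so the sketch compiles.
    static class Counter {
        private int value = 0;
        int value()      { return value; }
        void increment() { value++; }
        void reset()     { value = 0; }
    }

    // Monolithic style: if the first assert fails, nothing is learned
    // about the remaining requirements in this run.
    @Test
    public void testCounter() {
        Counter c = new Counter();
        assertEquals(0, c.value());   // requirement: starts at zero
        c.increment();
        assertEquals(1, c.value());   // requirement: increment adds one
        c.reset();
        assertEquals(0, c.value());   // requirement: reset returns to zero
    }

    // Split style: one test per requirement, so a single failure does
    // not hide the results of the others.
    @Test
    public void testThatNewCounterStartsAtZero() {
        assertEquals(0, new Counter().value());
    }

    @Test
    public void testThatIncrementAddsOne() {
        Counter c = new Counter();
        c.increment();
        assertEquals(1, c.value());
    }

    @Test
    public void testThatResetReturnsCounterToZero() {
        Counter c = new Counter();
        c.increment();
        c.reset();
        assertEquals(0, c.value());
    }
}
```

With the split style, a bug in `reset()` shows up as exactly one failed test while the other requirements still report their own results.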
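For the third mistake, a sketch along the same lines; `parseAge` is a hypothetical function under test. The method name states the requirement, and the optional message argument of JUnit's asserts is printed alongside the failure.

```java
import static org.junit.Assert.*;

import org.junit.Test;

public class AgeParserTest {

    // Hypothetical function under test.
    static int parseAge(String s) {
        return Integer.parseInt(s.trim());
    }

    // The method name describes the requirement being tested, and the
    // message argument (the optional first parameter of JUnit's asserts)
    // is printed on failure, so the test log explains itself.
    @Test
    public void testThatParseAgeIgnoresSurroundingWhitespace() {
        assertEquals("parseAge should strip whitespace before parsing",
                     42, parseAge(" 42 "));
    }
}
```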
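And for the fifth mistake, a sketch of the three assert variants discussed above. The comments give roughly the failure output each produces in JUnit 4; exact wording varies by version.

```java
import static org.junit.Assert.*;

import org.junit.Test;

public class AssertStyleTest {

    int foo = 5;  // deliberately wrong so every assert below fails

    @Test
    public void genericFailure() {
        // Fails with a bare AssertionError: no values are shown.
        assertTrue(foo == 3);
    }

    @Test
    public void informativeFailure() {
        // Fails with "expected:<3> but was:<5>": expected first, actual second.
        assertEquals(3, foo);
    }

    @Test
    public void misleadingFailure() {
        // Arguments swapped: fails with "expected:<5> but was:<3>",
        // which misreports which value was the expectation.
        assertEquals(foo, 3);
    }
}
```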