Automated software testing, part 2: the types

In part 1, I explained some of the reasoning behind automated software testing. Now let's explore a few of the most common types of automated tests.

An overview of the common test types

As I mentioned in the previous post, the common types that I'll focus on are unit testing, integration testing, and system testing.

A unit test, as the name implies, is intended to test a small "unit" of code. Most often, in object-oriented programming, such a unit is equivalent to a class. It is very important that these tests exercise only that unit and do not expand their scope beyond it.

An integration test's purpose, fundamentally, is to test the interaction between two or more units. In practice, an integration test instantiates several units (often from multiple tiers, to borrow a term from n-tier architecture) and verifies that a specific task executes successfully with all of the relevant units talking to each other.

Finally, a system test runs the entire software product, unmodified, and exercises it to verify that the application actually works in the "real world" and that its critical functionality is accessible and working.

Determine when each type is appropriate

It's very important to test various pieces of functionality at the correct level. Unfortunately, I know of no simple set of rules to guarantee success in this decision. There are always unique circumstances that influence how a particular feature or its parts should be tested. However, I can provide some general guidelines.

Unit tests should not see "the big picture", as it were. They should be blissfully unaware of anything outside the scope of the unit - with the important exception of the interfaces with which the unit interacts. Such unit tests are, by their very nature, rather limited. Pretty much all they can, and should, do is verify that the unit under test performs its calculations or other data and state manipulations correctly under the various circumstances that could conceivably arise. A useful thing to remember when designing a unit (and writing its tests) is that it might not always be used in the context you're designing it for now. Other people might instantiate it in another part of the product and pass it all kinds of input that you never expected. Writing robust unit tests helps to flesh out those edge cases and handle them explicitly.
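
To make that concrete, here's a minimal sketch in Python using the standard unittest module. The DiscountCalculator class and its rules are invented purely for illustration; the point is that the tests poke at nothing but this one unit, including edge cases some future caller might hit.

import unittest


# A hypothetical unit under test: applies a percentage discount to a price.
class DiscountCalculator:
    def apply(self, price, percent):
        if price < 0:
            raise ValueError("price cannot be negative")
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return price * (100 - percent) / 100


class DiscountCalculatorTest(unittest.TestCase):
    def setUp(self):
        self.calc = DiscountCalculator()

    def test_typical_discount(self):
        self.assertAlmostEqual(self.calc.apply(200.0, 25), 150.0)

    def test_zero_percent_leaves_price_unchanged(self):
        self.assertAlmostEqual(self.calc.apply(99.99, 0), 99.99)

    def test_negative_price_is_rejected(self):
        # An edge case another part of the product might trigger one day.
        with self.assertRaises(ValueError):
            self.calc.apply(-1.0, 10)


if __name__ == "__main__":
    unittest.main()

Notice that nothing here touches a database, the network, or any other unit - the tests know only about DiscountCalculator and its interface.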

Integration tests should be written when non-trivial interactions occur between units. They help to verify that a somewhat larger piece of "the big picture" works as intended. For example, when one unit receives data, modifies it according to its rules, and hands it to another unit for storage in a database, you probably want to be sure that whatever ended up in the database is what you expected. This isn't guaranteed even when the two units have good tests of their own: the database-handling unit could be expecting a slightly different data format than the first unit produces, and when it receives the unexpected format, it stores it incorrectly. This is where an integration test can help to spot the problem.
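
Here's a sketch of what that might look like, again with invented names: a UserImporter unit normalizes incoming data and hands it to a UserRepository unit for storage. An in-memory SQLite database (available in Python's standard library) stands in for the real database.

import sqlite3
import unittest


# A hypothetical storage unit: persists user emails to a database table.
class UserRepository:
    def __init__(self, connection):
        self.connection = connection
        self.connection.execute(
            "CREATE TABLE IF NOT EXISTS users (email TEXT PRIMARY KEY)")

    def save(self, email):
        self.connection.execute(
            "INSERT INTO users (email) VALUES (?)", (email,))

    def find(self, email):
        row = self.connection.execute(
            "SELECT email FROM users WHERE email = ?", (email,)).fetchone()
        return row[0] if row else None


# A hypothetical processing unit: normalizes raw input before storing it.
class UserImporter:
    def __init__(self, repository):
        self.repository = repository

    def import_user(self, raw_email):
        self.repository.save(raw_email.strip().lower())


class UserImportIntegrationTest(unittest.TestCase):
    def test_imported_email_is_stored_normalized(self):
        repository = UserRepository(sqlite3.connect(":memory:"))
        importer = UserImporter(repository)

        importer.import_user("  Alice@Example.COM ")

        # Verifies that what actually landed in the database is what we expected.
        self.assertEqual(repository.find("alice@example.com"), "alice@example.com")


if __name__ == "__main__":
    unittest.main()

If the importer and the repository ever disagreed about the data format, this is the test that would catch it, even while each unit's own tests stayed green.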

System tests should only be written as necessary, as they generally take a relatively long time to run (i.e., not milliseconds) and are often more difficult to write and maintain, especially when the entire system is itself complex. These tests should exercise "the big picture". They should cover the major tasks that the product is designed to handle and, unlike integration tests, they should run in a real (or at least as realistic as possible) environment, with the entire application, as I mentioned above. These tests can verify that all the interactions between the various internal and external components of your product happen as expected. Yes, external components factor into system tests as well. For example, if your product is a website that has a Twitter feed, a system test could ensure that the Twitter feed is displayed correctly. This makes system tests more susceptible to failures caused by external issues (such as the Twitter feed being inaccessible), but the upside is increased confidence in the overall product.
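
As a rough sketch, a system test for that website example might look something like the following. The URL and the marker string are placeholders, and the test assumes the whole product is already up and running in a realistic test environment; only the standard library's urllib is used to talk to it from the outside.

import unittest
import urllib.request


class HomePageSystemTest(unittest.TestCase):
    # Hypothetical address of the running application under test.
    BASE_URL = "http://localhost:8000"

    def test_home_page_shows_twitter_feed(self):
        # Talks to the real, fully assembled application over HTTP.
        with urllib.request.urlopen(self.BASE_URL + "/") as response:
            self.assertEqual(response.status, 200)
            body = response.read().decode("utf-8")

        # Checks that the externally supplied feed actually made it onto the page.
        self.assertIn("twitter-feed", body)


if __name__ == "__main__":
    unittest.main()

Nothing is stubbed or mocked here, which is exactly why this kind of test is slower and more fragile, but also why it buys so much confidence.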

Generally speaking, you should test at the lowest responsible level. If a unit test will do, then don't put your behavior verification in an integration test. And if an integration test is good enough to verify a certain flow, then don't create a system test for it. It can be difficult to determine what the lowest responsible level is for a particular piece of functionality, but referring to the above distinctions can help.

Another way to guide this decision is to think of the levels as a pyramid:

               /\
              /  \
             /    \
            /      \
           / SYSTEM \
          /__________\
         /            \
        /              \
       /  INTEGRATION   \
      /                  \
     /____________________\
    /                      \
   /                        \
  /           UNIT           \
 /                            \
/______________________________\

The topmost level, system testing, gives you the best view of the world (if you were to climb this pyramid, and if it were not just an ASCII drawing) and it has the smallest area, so system tests should be the fewest in number. Consequently, there should be more integration tests than system tests, and they should have a more limited view. And last but not least, there should be a whole lot of unit tests, and they should see very little.

Don't overdo it

Sometimes it's very easy to get carried away with testing and write way too many tests that either don't provide much value, or even decrease value. As I mentioned in the last post, brittle tests cause maintainability problems. Testing every little aspect of a unit in a unit test, or testing a lot of the same functionality in both unit and integration tests, is a waste of time and productivity. When application behavior needs to be modified later on, you (or someone else) will be spending too much time fixing all the tests that broke just because a single code path was modified. Naturally, it's probably a good idea to write a test for the new code path, but that's different.

Along the same lines, don't be afraid to delete tests that have lost their value. You'll speed up your test runs and remove cruft. Just be sure that the tests you are removing are indeed useless and aren't there to exercise some obscure code path. (This is where test naming becomes very important, but that's a topic for another time.)

On a related note, I recommend not paying much attention to code coverage metrics. At best, they can function as merely a hint that something might be wrong. For example, 20% code coverage is probably bad. Crucially, 100% code coverage is also bad! Remember all that stuff about brittle tests? That's what 100% coverage will usually get you.

Design matters

The design and architecture of your application can make or break your ability to write good tests. In general, a more decoupled design leads to greater testability. Dependency injection, in particular, makes stubbing and mocking downright simple (even fun) and lets you easily isolate units for testing.
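
Here's a small sketch of what I mean, with made-up names. Because the OrderService receives its payment gateway through the constructor instead of creating one itself, the test can hand it a mock (from the standard library's unittest.mock) and never touch a real payment provider.

import unittest
from unittest.mock import Mock


# A hypothetical unit that depends on a payment gateway.
class OrderService:
    def __init__(self, payment_gateway):
        # The dependency is injected rather than constructed internally.
        self.payment_gateway = payment_gateway

    def place_order(self, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        return self.payment_gateway.charge(amount)


class OrderServiceTest(unittest.TestCase):
    def test_place_order_charges_the_gateway(self):
        gateway = Mock()
        gateway.charge.return_value = "receipt-001"
        service = OrderService(gateway)

        self.assertEqual(service.place_order(50), "receipt-001")
        gateway.charge.assert_called_once_with(50)


if __name__ == "__main__":
    unittest.main()

Had OrderService constructed its own gateway internally, isolating it like this would have required much uglier tricks.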

Next steps

All of this theory is nice, but until you get enough practice determining what tests to write and how to write them, you'll make silly mistakes. It's okay, we've all been there. Keep practicing and you'll get better at it!

Check out part 3, a more in-depth discussion of the first type, unit tests.

 