1. Introduction
In an earlier article (see Test Infected: Programmers Love Writing Tests, Java Report, July 1998, Volume 3, Number 7), we described how to use a simple framework to write repeatable tests. In this article, we will take a peek under the covers and show you how the framework itself is constructed.
We carefully studied the JUnit framework and reflected on how we constructed it. We found lessons at many different levels. In this article we will try to communicate them all at once, a hopeless task, but at least we will do it in the context of showing you the design and construction of a piece of software with proven value.
We open with a discussion of the goals of the framework. The goals will reappear in many small details during the presentation of the framework itself. Following this, we present the design and implementation of the framework. The design will be described in terms of patterns (surprise, surprise), the implementation as a literate program. We conclude with a few choice thoughts about framework development.
2. Goals
What are the goals of JUnit?
First, we have to get back to the assumptions of development. If a program feature lacks an automated test, we assume it doesn't work. This seems much safer than the prevailing assumption, that if a developer assures us a program feature works, then it works now and forever.
From this perspective, developers aren't done when they write and debug the code; they must also write tests that demonstrate that the program works. However, everybody is too busy, they have too much to do, they don't have enough time, to screw around with testing. I have too much code to write already, how am I supposed to write test code, too? Answer me that, Mr. Hard-case Project Manager.
So, the number one goal is to write a framework within which we have some glimmer of hope that developers will actually write tests. The framework has to use familiar tools, so there is little new to learn. It has to require no more work than absolutely necessary to write a new test. It has to eliminate duplicated effort.
If this was all tests had to do, you would be done just by writing expressions in a debugger. However, this isn't sufficient for testing. Telling me that your program works now doesn't help me, because it doesn't assure me that your program will work one minute from now after I integrate, and it doesn't assure me that your program will still work in five years, when you are long gone.
So, the second goal of testing is creating tests that retain their value over time. Someone other than the original author has to be able to execute the tests and interpret the results. It should be possible to combine tests from various authors and run them together without fear of interference.
Finally, it has to be possible to leverage existing tests to create new ones. Creating a setup or fixture is expensive and a framework has to enable reusing fixtures to run different tests. Oh, is that all?
3. The Design of JUnit
The design of JUnit will be presented in a style first used in (see "Patterns Generate Architectures", Kent Beck and Ralph Johnson, ECOOP 94). The idea is to explain the design of a system by starting with nothing and applying patterns, one after another, until you have the architecture of the system. We will present the architectural problem to be solved, summarize the pattern that solves it, and then show how the pattern was applied to JUnit.
3.1 Getting started- TestCase
First we have to make an object to represent our basic concept, the TestCase. Developers often have test cases in mind, but they realize them in many different ways.
The Command pattern (see Gamma, E., et al. Design Patterns: Elements of Reusable Object-Oriented Software, Addison-Wesley, Reading, MA, 1995) fits our needs quite nicely. Quoting from the intent, "Encapsulate a request as an object, thereby letting you… queue or log requests…" Command tells us to create an object for an operation and give it a method "execute". Here is the code for the class definition of TestCase:
Every TestCase is created with a name, so if a test fails, you can identify which test failed.
public abstract class TestCase {
    private String fName;

    public TestCase(String name) {
        fName= name;
    }

    public abstract void run();
    …
}
Figure 1: TestCase applies Command
3.2 Blanks to fill in- run()
The next problem to solve is giving the developer a convenient "place" to put their fixture code and their test code. The declaration of TestCase as abstract says that the developer is expected to reuse TestCase by subclassing. However, if all we could do was provide a superclass with one variable and no behavior, we wouldn't be doing much to satisfy our first goal, making tests easier to write.
Fortunately, there is a common structure to all tests- they set up a test fixture, run some code against the fixture, check some results, and then clean up the fixture. This means that each test will run with a fresh fixture and the results of one test can't influence the result of another. This supports the goal of maximizing the value of the tests.
Template Method addresses our problem quite nicely. Quoting from the intent, "Define the skeleton of an algorithm in an operation, deferring some steps to subclasses. Template Method lets subclasses redefine certain steps of an algorithm without changing the algorithm's structure." This is exactly right. We want the developer to be able to separately consider how to write the fixture (set up and tear down) code and how to write the testing code. The execution of this sequence, however, will remain the same for all tests, no matter how the fixture code is written or how the testing code is written.
Here is the template method. The test code itself goes in runTest(), which subclasses override (we return to its default implementation when we discuss Pluggable Selector below); the fixture methods setUp() and tearDown() default to doing nothing:
public void run() {
    setUp();
    runTest();
    tearDown();
}

protected void setUp() {
}

protected void tearDown() {
}
Figure 2: TestCase.run() applies Template Method
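To make the division of labor concrete, here is a sketch of what a test case subclass might look like. It is illustration only, not code from the framework: the Money class, its constructor and add method, and the hand-written check are assumptions made for the example.
public class MoneyTest extends TestCase {
    private Money f12CHF;   // the fixture, rebuilt for every test

    public MoneyTest(String name) {
        super(name);
    }

    protected void setUp() {
        f12CHF= new Money(12, "CHF");   // create the fixture
    }

    protected void runTest() {
        // exercise the fixture and check the result; a real test would use the
        // framework's assertion support, which signals failures by throwing
        // AssertionFailedError (described in the next section)
        Money expected= new Money(24, "CHF");
        if (!expected.equals(f12CHF.add(f12CHF)))
            throw new AssertionFailedError();
    }
}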
3.3 Reporting results- TestResult
If a TestCase runs in a forest, does anyone care about the result? Sure- you run tests to make sure they run. After the test has run, you want a record, a summary of what did and didnít work.
If tests had equal chances of succeeding or failing, or if we only ever ran one test, we could just set a flag in the TestCase object and go look at the flag when the test completed. However, tests are (intended to be) highly asymmetric- they usually work. Therefore, we only want to record the failures and a highly condensed summary of the successes.
The Smalltalk Best Practice Patterns (see Beck, K. Smalltalk Best Practice Patterns, Prentice Hall, 1996) has a pattern that is applicable. It is called Collecting Parameter. It suggests that when you need to collect results over several methods, you should add a parameter to the method and pass an object that will collect the results for you. We create a new object, TestResult, to collect the results of running tests.
public class TestResult {
    protected int fRunTests;

    public TestResult() {
        fRunTests= 0;
    }
}
A TestCase creates the TestResult it runs against in a separate method, so that subclasses of TestCase can substitute a different TestResult implementation:
protected TestResult createResult() {
    return new TestResult();
}
Figure 3: TestResult applies Collecting Parameter
If tests always ran correctly, then we wouldn't have to write them. Tests are interesting when they fail, especially if we didn't expect them to fail. What's more, tests can fail in ways that we expect, for example by computing an incorrect result, or they can fail in more spectacular ways, for example by writing outside the bounds of an array. No matter how the test fails, we want to execute the following tests.
JUnit distinguishes between failures and errors. The possibility of a failure is anticipated and checked for with assertions. Errors are unanticipated problems like an ArrayIndexOutOfBoundsException. Failures are signaled with an AssertionFailedError. To distinguish an unanticipated error from a failure, failures are caught in an extra catch clause (1). The second clause (2) catches all other exceptions and ensures that our test run continues.
public void run(TestResult result) {
    setUp();
    try {
        runTest();
    }
    catch (AssertionFailedError e) { // 1
        result.addFailure(this, e);
    }
    catch (Throwable e) { // 2
        result.addError(this, e);
    }
    finally {
        tearDown();
    }
}
TestResult collects each failure into a list of TestFailure objects:
public synchronized void addFailure(Test test, Throwable t) {
    fFailures.addElement(new TestFailure(test, t));
}
TestResult is an extension point of the framework. Clients can define their own custom TestResult classes; for example, an HTMLTestResult reports the results as an HTML document.
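As an illustration of this extension point, here is a sketch of what such a subclass might look like. It is not code from the framework, and it assumes that addError has the same signature as addFailure:
public class HTMLTestResult extends TestResult {
    private StringBuffer fBuffer= new StringBuffer();

    public synchronized void addFailure(Test test, Throwable t) {
        super.addFailure(test, t);   // keep the normal bookkeeping
        fBuffer.append("<li>Failure in " + test + ": " + t.getMessage() + "</li>");
    }

    public synchronized void addError(Test test, Throwable t) {
        super.addError(test, t);
        fBuffer.append("<li>Error in " + test + ": " + t.getMessage() + "</li>");
    }

    public String toHTML() {
        return "<ul>" + fBuffer.toString() + "</ul>";
    }
}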
3.4 No stupid subclasses - TestCase again
We have applied Command to represent a test. Command relies on a single method like execute() (called run() in TestCase) to invoke it. This simple interface allows us to invoke different implementations of a command through the same interface.
We need an interface to generically run our tests. However, all test cases are implemented as different methods in the same class. This avoids the unnecessary proliferation of classes. A given test case class may implement many different methods, each defining a single test case. Each test case has a descriptive name like testMoneyEquals or testMoneyAdd. The test cases don't conform to a simple command interface. Different instances of the same Command class need to be invoked with different methods. Therefore our next problem is to make all the test cases look the same from the point of view of the invoker of the test.
Reviewing the problems addressed by available design patterns, the Adapter pattern springs to mind. Adapter has the following intent "Convert the interface of a class into another interface clients expect". This sounds like a good match. Adapter tells us different ways to do this. One of them is a class adapter, which uses subclassing to adapt the interface. For example, to adapt testMoneyEquals to runTest we implement a subclass of MoneyTest and override runTest to invoke testMoneyEquals.
Java provides anonymous inner classes which provide an interesting Java-specific solution to the class naming problem. With anonymous inner classes we can create an Adapter without having to invent a class name:
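// a sketch for the testMoneyEquals example: the anonymous subclass of
// MoneyTest overrides the runTest hook to call the adapted test method
TestCase test= new MoneyTest("testMoneyEquals") {
    protected void runTest() {
        testMoneyEquals();
    }
};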
The simplest form of pluggable behavior is the Pluggable Selector. Pluggable Selector stores a Smalltalk method selector in an instance variable. This idea is not limited to Smalltalk. It is also applicable to Java. In Java there is no notion of a method selector. However, the Java reflection API allows us to invoke a method from a string representing the method's name. We can use this feature to implement a pluggable selector in Java. As an aside, we usually don't use reflection in ordinary application code. In our case we are dealing with an infrastructure framework and it is therefore OK to wear the reflection hat.
JUnit offers the client the choice of using pluggable selector or implementing an anonymous adapter class as shown above. To do so, we provide the pluggable selector as the default implementation of the runTest method. In this case the name of the test case has to correspond to the name of a test method. We use reflection to invoke the method as shown below. First we look up the Method object. Once we have the method object we can invoke it and pass its arguments. Since our test methods take no arguments we can pass an empty argument array:
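// a minimal sketch of the reflective default implementation of runTest();
// it assumes the test's name is stored in fName and that java.lang.reflect.Method
// and java.lang.reflect.InvocationTargetException are imported; the shipped
// implementation adds more error handling
protected void runTest() throws Throwable {
    // look up the public no-argument method whose name matches the test's name
    Method runMethod= getClass().getMethod(fName, new Class[0]);
    try {
        // invoke it on this test case instance with an empty argument list
        runMethod.invoke(this, new Object[0]);
    } catch (InvocationTargetException e) {
        // unwrap and rethrow the failure or error raised inside the test method
        throw e.getTargetException();
    }
}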
Here is the next design snapshot, with Adapter and Pluggable Selector added.
Figure 4: TestCase applies either Adapter with an anonymous inner class or Pluggable Selector
3.5 Don't care about one or many - TestSuite
To get confidence in the state of a system we need to run many tests. Up to this point JUnit can run a single test case and report the result in a TestResult. Our next challenge is to extend it so that it can run many different tests. This problem can be solved easily when the invoker of the tests doesnít have to care about whether it runs one or many test cases. A popular pattern to pull out in such a situation is Composite. To quote its intent "Compose objects into tree structures to represent part-whole hierarchies. Composite lets clients treat individual objects and compositions of objects uniformly." The point about part-whole hierarchies is of interest here. We want to support suites of suites of suites of tests.
Composite introduces the following participants:
- Component: declares the common interface we use to interact with tests, whether single cases or whole collections of them.
- Composite: implements this interface and maintains a collection of child tests.
- Leaf: represents a single test case that conforms to the Component interface.
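In JUnit the Component role is played by a common Test interface. Here is a minimal sketch, assuming only the run operation we have seen so far (the Test type already appears in addFailure's signature):
public interface Test {
    public abstract void run(TestResult result);
}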
Next, we introduce the Composite participant. We name the class TestSuite. A TestSuite keeps its child tests in a Vector:
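// a sketch of the Composite participant: TestSuite implements the same
// common Test interface and delegates run() to each of its children
// (uses java.util.Vector and java.util.Enumeration)
public class TestSuite implements Test {
    private Vector fTests= new Vector();

    public void run(TestResult result) {
        for (Enumeration e= fTests.elements(); e.hasMoreElements(); ) {
            Test test= (Test) e.nextElement();
            test.run(result);
        }
    }
}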
Figure 5: TestSuite applies Composite
Finally, clients have to be able to add tests to a suite; they can do so with the method addTest:
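// a sketch; a child test is simply appended to the Vector
public void addTest(Test test) {
    fTests.addElement(test);
}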
Here is an example of creating a TestSuite:
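// a sketch using the MoneyTest case names mentioned earlier
TestSuite suite= new TestSuite();
suite.addTest(new MoneyTest("testMoneyEquals"));
suite.addTest(new MoneyTest("testMoneyAdd"));
TestResult result= new TestResult();
suite.run(result);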
3.6 Summary
We are at the end of our cook's tour through JUnit. The following figure shows the design of JUnit at a glance, explained with patterns.
Figure 6: JUnit Patterns Summary
Notice how TestCase, the central abstraction in the framework, is involved in four patterns. Pictures of mature object designs show this same "pattern density". The star of the design has a rich set of relationships with the supporting players.
Here is another way of looking at all of the patterns in JUnit. In this storyboard you see an abstract representation of the effect of each of the patterns in turn. So, the Command pattern creates the TestCase class, the Template Method pattern creates the run method, and so on. (The notation of the storyboard is the notation of figure 6 with all the text deleted).
Figure 7: JUnit Pattern Storyboard
One point to notice about the storyboard is how the complexity of the picture jumps when we apply Composite. This is pictorial corroboration for our intuition that Composite is a powerful pattern, but that it "complicates the picture." It should therefore be used with caution.
4. Conclusion
To conclude, let's make some general observations:
5. Acknowledgements
Thanks to John Vlissides, Ralph Johnson, and Nick Edgar for careful reading and gentle correction.