Wednesday, June 23, 2010

[Xcode] Test-Driving Your Code with OCUnit

Here's a common development scenario: You've just finished building your application. You're confident that it works. You've put it through the paces, exercising every feature and checking every output. You're proud to say that it's stable, at least for the moment. But when you think about the next feature you need to add, you can't deny that low-priority thread of fear running in the back of your mind: “If I change the code, what might break?” You could certainly try to manually test out all the features of your application after each change, but after the second time doing this you're sure to become bored and make a mistake. Ironically, you feel like you don't have time to test.

If that scenario is familiar to you, here's the good news: Your computer has the cycles to keep that low-priority fear thread in check. If you supplement your visual inspection with automated tests, the computer will happily check the results for you, as often as you like. It's understood that it isn't always easy to write an automated test, but the effort you spend is a one-time investment that just keeps paying you back every time the test is run. And if that's not enough, writing the test might reveal opportunities for a better design.

Note that you don't need to wait until your project is complete to unit test; in fact, it's best to integrate it into your development cycle from the start.

This article introduces you to unit testing with OCUnit—a unit testing framework for Objective-C that integrates with Xcode. Step by step, we'll walk through how to write automated OCUnit tests for an Xcode project.

Why Unit Test?

Before we start writing unit tests, you might be wondering what value it adds beyond the obvious: reducing the number of defects. Well, if you think writing unit tests is like eating your vegetables—probably a good thing, but you'd rather reach for the dessert—then you're missing out on some very tasty dishes. Here are just a few quick reasons to start writing more unit tests:

  • Healthier Software. It's one thing to say that your code works today, but how confident are you that future changes won't break something? Automated unit tests serve as change detectors that give you confidence to add new features, fix bugs, and refactor code with impunity.

  • Design Improvements. When you write a test for code you've yet to write, you have a unique opportunity to eat your own dog food, as they say. Indeed, the test is the very first client of your code, and if writing the test for an API is difficult then using the API will be equally difficult. You notice undesirable coupling immediately, before it begins to pollute your design.

  • Solid Foundation. Your code may depend on components developed outside of your project. If one of those components breaks or changes in an incompatible way, your code's foundation crumbles. Tests quickly detect when something underneath you has changed unexpectedly. Moreover, a failing test is a quantifiable test case that verifies the existence of a problem, and validates when it's been fixed.

  • Executable Documentation. Tests document how code is intended to be used by providing working examples of the code in action. Simply put, tests don't lie. Not only do they show how to call a method, or a series of methods, they also document correct and exceptional usage patterns. For example, unit tests that demonstrate various usage patterns without leaking memory give you confidence beyond “the code works.”

  • Accelerated Development Pace. It seems counter-intuitive to think that writing more tests will help you deliver better software faster, but it's true. Running tests tightens up the code-test-deploy cycle by eliminating rework and alerting you to problems before they start to compound.

Testing Frameworks

You've probably already heard of at least one testing framework in the xUnit family. There's JUnit for Java, Test::Unit for Ruby, PyUnit for Python, CppUnit for C++, and many others. Most of these are ports inspired by the original SUnit for Smalltalk, and they all share a few things in common. You write test cases that include assertions about the code under test. Test suites are collections of test cases that are run uniformly by a test runner. When the tests are run, they check their own results and provide an unambiguous pass or fail message.

This article focuses on using OCUnit to unit test Objective-C code. It's not the only unit testing framework for Objective-C—for example, UnitKit is another excellent choice. They're both easy to use and integrate transparently with Xcode. But we'll use OCUnit in this article—it's considerably more mature than most, has great features and stability, and has a lot of momentum in the Apple community.

That being said, OCUnit is just a tool. What unit testing tool you decide to use isn't nearly as important as actually writing and running unit tests. So let's get started.

Installing OCUnit

The latest version of OCUnit for Xcode at the time of this writing (v39) comes in two flavors: an installer package that automatically installs OCUnit into root-level directories on your boot volume, and a script that builds OCUnit from scratch and copies OCUnit files into subdirectories of your home directory.

For a quick and easy start, use the root installer package. Simply download the most recent version of OCUnitRoot, a DMG file, from the www.Sente.ch website. Once you've downloaded the DMG, run the root installer package. Before it installs anything, you get to preview the root-level directories where the OCUnit files will live. The documentation, for example, can be found in the /Developer/Source/OCUnit/Documentation directory when the installation is complete.

Adding OCUnit to an Xcode Project

Now that you've installed OCUnit, you're ready to add OCUnit tests to your Xcode project. To demonstrate how that's done, we'll add tests to an example project that's distributed with Xcode so you get the feel for how this works.

Start by copying the Temperature Converter project located in the /Developer/Examples/AppKit/TemperatureConverter directory to a directory of your choosing. The copied version of the project will be our testing playground, and you can always go back to the original project for a fresh start.

Then open the copied version of the project and run it by clicking the Build and Go button. You should see the user interface shown in Figure 1.

TemperatureConverter.jpg

Figure 1: The Temperature Converter Application.

You can now enter temperature values in any of four different units—Kelvin, Centigrade, Fahrenheit, and Rankine—and the display is updated to show the temperature in the other three units. It's a trivial application, which makes it a good starting point for writing unit tests. Indeed, after entering a few temperature values you may already have ideas for tests that you'd write to continually check that temperatures are being converted correctly.

But before we can write tests, we need to add OCUnit to the Xcode project. This lets us add OCUnit tests to the same project, keeping the code and its tests in close proximity. We'll run the tests in a separate target of the project.

Follow these steps to add OCUnit to the Temperature Converter project:

  1. Create a new target to run the tests. Choose "Project > New Target" and select the "Cocoa > Test Framework" target type. After clicking the "Next" button, name the new target "Test". Click the "Finish" button to create the new target in the "Targets" group of Xcode's "Groups & Files" browser.

  2. Add the OCUnit framework. In Xcode's "Groups & Files" browser, open the "Frameworks" folder inside of the "Temperature Converter" folder. Then, right-click on the "Linked Frameworks" folder and choose "Add > Existing Frameworks". Select /Library/Frameworks/SenTestingKit.framework, then click the "Add" button. In the dialog that appears, check the "Test" target and uncheck the "Temperature Converter" target. Finally, click the "Add" button.

  3. Create a new group for the test cases. Right-click on "Temperature Converter" at the top of Xcode's "Groups & Files" browser and select "Add > New Group". Rename the newly created group to "Test Cases".

When you're done, Xcode's "Groups & Files" browser should look similar to Figure 2. Now we're ready to write some tests.

AddToProject.jpg

Figure 2: The OCUnit Framework Added to the Project.

Writing a Test

Let's start by writing a test to check a few temperature values displayed in Figure 1. We could test the values through the user interface, but it's likely to be more volatile than the underlying code. And, in general, testing GUI components can be difficult. But don't let that be a deterrent to starting to write good automated tests.

We can leverage the fact that Cocoa applications cleanly separate the user interface code from the underlying business logic. Each field of the display has a binding that is connected to an appropriate temperature transformer. If a transformer returns the expected temperature value for a given input temperature value, then any field whose value is bound to the transformer will display the correct result. So rather than testing that the field bindings are hooked up correctly, we'll focus on testing that the underlying transformers produce the correct temperature values to display.
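Each transformer follows Cocoa's NSValueTransformer pattern, exposing transformedValue: and reverseTransformedValue: methods. The CentigradeValueTransformer we'll exercise below ships with the example project; the following is only a rough sketch of what such a transformer might look like (the class name and the exact conversion arithmetic here are our own, not the project's):

#import <Foundation/Foundation.h>

// A sketch of a Kelvin-to-Centigrade value transformer. The real
// CentigradeValueTransformer in the example project may differ in detail;
// this only illustrates the NSValueTransformer shape the tests rely on.
@interface SketchCentigradeTransformer : NSValueTransformer
@end

@implementation SketchCentigradeTransformer

+ (Class)transformedValueClass
{
    return [NSNumber class];
}

+ (BOOL)allowsReverseTransformation
{
    return YES;
}

// Kelvin in, Centigrade out: C = K - 273.15
- (id)transformedValue:(id)value
{
    return [NSNumber numberWithDouble:[value doubleValue] - 273.15];
}

// Centigrade in, Kelvin out: K = C + 273.15
- (id)reverseTransformedValue:(id)value
{
    return [NSNumber numberWithDouble:[value doubleValue] + 273.15];
}

@end

Because the transformers are plain objects with no ties to the user interface, a test can instantiate one directly and check its output.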

Creating a Test Case

To create an OCUnit test case, create a subclass of SenTestCase, as follows:

  1. Right-click on the "Test Cases" group in Xcode's "Groups & Files" browser and select "Add > New File".
  2. In the dialog that appears, select "Cocoa > Objective-C SenTestCase subclass", then click the "Next" button.
  3. In the dialog that appears, name the test case file TemperatureTest.m. Make sure to check the box that automatically creates the TemperatureTest.h file.
  4. Add the test case to the "Test" target by checking the "Test" target and unchecking the "Temperature Converter" target. Click the "Finish" button to create two files in the "Test Cases" group: TemperatureTest.h and TemperatureTest.m.

When you're done, Xcode's "Groups & Files" browser should look similar to Figure 3.

CreateTestCase.jpg

Figure 3: The Test Cases Created.

Implementing a Test Method

The next step is to add a test method to our test case. A test method includes assertions that check whether the code under test meets your expectations. For example, the laws of nature dictate that a value of 273.15 on the Kelvin temperature scale—the freezing point of water—should always convert to a value of 0.0 on the Centigrade temperature scale. Let's write a test for that.

Modify the TemperatureTest.m file as follows:

#import "TemperatureTest.h"
#import "CentigradeValueTransformer.h"

@implementation TemperatureTest

- (void) testCentigradeFreezingPoint
{
CentigradeValueTransformer *transformer =
[[CentigradeValueTransformer alloc] init];

NSString *kelvinFreezingPoint = @"273";

NSNumber *centigradeFreezingPoint =
[transformer transformedValue:kelvinFreezingPoint];

STAssertEquals(32, [centigradeFreezingPoint intValue],
@"Centigrade freezing point should be 32, but was %d instead!",
[centigradeFreezingPoint intValue]);

[transformer release];
}

@end

The Temperature Converter project includes a CentigradeValueTransformer class that knows how to convert from Kelvin to Centigrade, and back again. The test uses the STAssertEquals macro to assert that the transformedValue: method of the CentigradeValueTransformer returns a temperature value of 32 given a Kelvin temperature value of 273.

By convention, the first parameter of the STAssertEquals macro is the expected value and the second parameter is the actual value. The assertion passes if the expected value matches the actual value, as defined by the == operator. The third parameter specifies an optional message to display if the assertion fails. Passing in a value of nil causes the assertion to print a default message if the assertion fails. Unfortunately, the default message isn't always properly formatted. To make the test as informative as possible, we use a format string as the third parameter and a variable for the format string as the fourth parameter. (All STAssert*() macros take a message argument and variadic parameters that are passed to NSString's stringWithFormat: method).

Running the Test

Before we can run the test, we need to add any classes referenced by the test case to the "Test" target. To do that, click on the CentigradeValueTransformer.m file in the "Classes" folder and drag it into the "Sources" folder of the "Test" target.

Next, make sure that the "Test" target is selected as the active target in the upper left corner of the Xcode window. Then build the "Test" target by clicking the "Build" button.

Not surprisingly, the test fails with the following error displayed in the "Errors and Warnings" Smart Group, as shown in Figure 4:

- [TemperatureTest testCentigradeFreezingPoint] : '<>' should be equal to '<>'
Centigrade freezing point should be 32, but was 0 instead!

FailedTest.jpg

Figure 4: A Failed Test Selected in the Xcode Editor.

When you click on the error, the assertion that failed is highlighted in the code editor. Test results are also logged in the "Build Results" window (choose "Build > Detailed Build Results"), as shown in Figure 5.

BuildWindow.jpg

Figure 5: A Failed Test Logged in the Xcode Build Results Window.

What Went Wrong?

Obviously the test failed because we mixed up our temperature scales. The assertion expects a temperature value of 32, which is correct on the Fahrenheit scale. But the CentigradeValueTransformer returns the temperature on the Centigrade scale.

To make the test pass, modify the test method to use the following assertion:

STAssertEquals(0, [centigradeFreezingPoint intValue],
    @"Centigrade freezing point should be 0, but was %d instead!",
    [centigradeFreezingPoint intValue]);

Now re-run the test by clicking the "Build" button, and the build should succeed.

Creating Test Fixtures

The CentigradeValueTransformer has a reverseTransformedValue: method that converts from Centigrade back to Kelvin. That sounds like something we should test.

Add the following test method to the TemperatureTest.m file:

- (void) testKelvinFreezingPoint
{
    CentigradeValueTransformer *transformer =
        [[CentigradeValueTransformer alloc] init];

    NSString *centigradeFreezingPoint = @"0";

    NSNumber *kelvinFreezingPoint =
        [transformer reverseTransformedValue:centigradeFreezingPoint];

    STAssertEqualObjects([NSNumber numberWithInt:273],
        [NSNumber numberWithInt:[kelvinFreezingPoint intValue]],
        @"Kelvin freezing point should be 273, but was %d instead!",
        [kelvinFreezingPoint intValue]);

    [transformer release];
}

This test method uses the STAssertEqualObjects macro to assert that the reverseTransformedValue: method of the CentigradeValueTransformer returns a Kelvin temperature value of 273 given a Centigrade temperature value of 0. The STAssertEqualObjects macro passes if the expected and actual objects—two NSNumber objects in this case—are equal to each other as defined by the isEqual: method of the expected object.

Run the test to make sure it passes. The build should succeed, but our work is not done. Writing a second test method revealed the worst of all code smells: duplication. Notice that each test method creates a new transformer object at the beginning and releases it at the end. If we continue down this path, we'll have to remember to add this lifecycle code to each test method. And at some point we'll forget to do that and our tests will start to leak memory. In the safety of a passing test, we can refactor to remove the duplication by creating a test fixture to run multiple tests.

First, define a transformer instance variable in the TemperatureTest.h file, as follows:

#import <SenTestingKit/SenTestingKit.h>

#import "CentigradeValueTransformer.h"

@interface TemperatureTest : SenTestCase
{
    CentigradeValueTransformer *transformer;
}

@end

Next, in the TemperatureTest.m file, extract the object creation code from each test method into a setUp method and the object cleanup code into a tearDown method, as follows:

#import "TemperatureTest.h"

@implementation TemperatureTest

- (void) setUp
{
transformer = [[CentigradeValueTransformer alloc] init];
}

- (void) tearDown
{
[transformer release];
}

- (void) testCentigradeFreezingPoint
{
NSString *kelvinFreezingPoint = @"273";

NSNumber *centigradeFreezingPoint =
[transformer transformedValue:kelvinFreezingPoint];


STAssertEquals(0, [centigradeFreezingPoint intValue],
@"Centigrade freezing point should be 0, but was %d instead!",
[centigradeFreezingPoint intValue]);
}

- (void) testKelvinFreezingPoint
{
NSString *centrigradeFreezingPoint = @"0";

NSNumber *kelvinFreezingPoint =
[transformer reverseTransformedValue:centrigradeFreezingPoint];

STAssertEqualObjects([NSNumber numberWithInt:273],
[NSNumber numberWithInt:[kelvinFreezingPoint intValue]],
@"Kelvin freezing point should be 273, but was %d instead!",
[kelvinFreezingPoint intValue]);
}

@end

Finally, run the test to make sure that creating a test fixture didn't break anything.

It's important to note that each test method is run in a unique instance of the TemperatureTest class. That is, when you run the test case, a separate instance of TemperatureTest is created for each test method (two instances at this point). Because each test method therefore gets its own fresh set of instance variables, the test methods must not assume any pre-conditions or set any post-conditions that other tests depend upon.
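To make that isolation concrete, here's a small hypothetical test case (not part of the Temperature Converter project) in which both methods pass only because each one runs against its own fresh instance:

#import <SenTestingKit/SenTestingKit.h>

// Hypothetical example: both tests pass because each test method gets its
// own freshly allocated IsolationTest instance, so 'counter' starts at 0
// every time and no state leaks from one test method to the next.
@interface IsolationTest : SenTestCase
{
    int counter;
}
@end

@implementation IsolationTest

- (void) testFirstIncrement
{
    counter++;
    STAssertEquals(1, counter, @"counter should have started at 0");
}

- (void) testSecondIncrement
{
    counter++;
    STAssertEquals(1, counter,
        @"the increment from the other test must not be visible here");
}

@end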

Venturing Off the Happy Path

OK, so we've tested that a valid temperature value is converted as expected. Sadly, things don't always go according to plan, so let's be good little programmers by testing at least one boundary condition.

The transformedValue: method of the CentigradeValueTransformer class will throw an exception if the object passed as the parameter doesn't respond to the doubleValue method. (Our previous tests used NSString objects, which do respond to doubleValue.) By supplying an NSObject instance, which doesn't respond to doubleValue, we can test that the transformedValue: method guards against bad values.

Add the following test method to the TemperatureTest.m file:

- (void) testBadValueThrowsException
{
    NSObject *badValue = [[NSObject alloc] init];

    STAssertThrows([transformer transformedValue:badValue],
        @"Should raise exception!");

    [badValue release];
}

The test uses the STAssertThrows macro to assert that an exception is raised when the transformedValue: method is invoked with an object that doesn't respond to doubleValue. (If you're using the new @throw-style exceptions in Objective-C to throw very specific exception classes, use the STAssertThrowsSpecific macro to assert that the code raises a specific exception.)
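For instance, if the transformer threw a custom exception class, say a hypothetical BadTemperatureException subclass of NSException (the example project doesn't actually define one), a test could pin down the exact exception type:

// Hypothetical: assumes the code under test @throws an instance of a
// custom BadTemperatureException class, which this project does not define.
- (void) testBadValueThrowsSpecificException
{
    NSObject *badValue = [[NSObject alloc] init];

    STAssertThrowsSpecific([transformer transformedValue:badValue],
        BadTemperatureException,
        @"Should raise a BadTemperatureException!");

    [badValue release];
}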

We've used several assertion macros: STAssertEquals, STAssertEqualObjects, and STAssertThrows. OCUnit provides several more assertions, defined in SenTestCase.h, including

  • STAssertNotNil(object, message, ...)
  • STAssertTrue(expression, message, ...)
  • STAssertFalse(expression, message, ...)
  • STAssertThrowsSpecific(expression, exception, message, ...)
  • STAssertNoThrow(expression, message, ...)
  • STFail(message, ...)

Your test methods can use one or more of these assertions, each of which must pass for the test method to pass. If you find yourself using the same sequence of assertions in multiple test methods, consider writing a custom assertion macro that encapsulates several OCUnit-provided assertions for reuse across tests.
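As a rough sketch of that idea, a project-specific macro (the name and the checks here are our own, not part of OCUnit) might bundle a nil check and an equality check into a single assertion:

// Hypothetical convenience macro built on top of OCUnit's assertions.
// It checks that a transformed value is non-nil and equal to an expected
// integer, so each test method needs only one line.
#define STAssertTransformsToInt(expected, actualNumber) \
    do { \
        STAssertNotNil((actualNumber), @"transformer returned nil"); \
        STAssertEquals((expected), [(actualNumber) intValue], \
            @"expected %d but was %d", (expected), [(actualNumber) intValue]); \
    } while (0)

A test could then write STAssertTransformsToInt(0, [transformer transformedValue:@"273"]); instead of repeating both assertions.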

Writing a Test Suite

A test suite is simply a collection of test cases to be run together. When you run an Xcode target that contains OCUnit tests, a default test suite is created that includes all the test cases found in the runtime environment. That is, the OCUnit framework automatically finds and runs all methods with names beginning with test, taking no parameters, and returning no value, which are defined in subclasses of SenTestCase.

If you need more control, you can programmatically arrange tests into a custom test suite. The following contrived example demonstrates how OCUnit tests can be arbitrarily nested—a test suite containing one test method and a test suite with all test methods from a test case:

SenTestSuite *suite = [SenTestSuite testSuiteWithName:@"My Tests"];

[suite addTest:
    [TemperatureTest testCaseWithSelector:@selector(testCentigradeFreezingPoint)]];

SenTestSuite *anotherSuite =
    [SenTestSuite testSuiteForTestCaseClass:[TemperatureTest class]];
[suite addTest:anotherSuite];

Running the Tests with Your Build Process

Writing just one test is an investment in your application's future, but you need to keep running the tests to realize the return on that investment. Every time the tests are run, they pay you back by verifying that your code, and code written by others, continues to meet expectations.

Before checking in code changes to your version control system, run a quick suite of localized tests that cover those changes. If the tests don't pass, the code isn't ready to be committed to version control. Even when the local tests do pass, there's always a probability that your changes will adversely affect an area of code that the localized tests don't cover. And as the size of your project grows it may become impractical to run all the tests before checking in code.

An automated, unattended build process can help you capitalize on your testing investment. You have better things to do than manually run builds all day, so put a computer to work running tests asynchronously for you on a regular interval. The following script, for example, uses the xcodebuild command to run the "Test" target and call the notify.sh script if the build fails:

#!/bin/sh

xcodebuild -target Test

if [ $? != 0 ]
then
    sh notify.sh
    exit 1
fi

Schedule the script to be run by cron, for example, to continually integrate your project. If a build fails, the notify.sh script can send email to the team, make your cell phone beep with a text message, light up a visual device such as a red lava lamp, or anything else that alerts the team that they're now working with tainted goods.
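For example, assuming the script above is saved as runtests.sh next to the project (the path and schedule below are placeholders to adapt to your setup), a crontab entry like this would run the tests at the top of every hour:

# Hypothetical crontab entry; adjust the path and schedule to your setup.
# Runs the test script at minute 0 of every hour.
0 * * * * cd /path/to/TemperatureConverter && sh runtests.sh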

Test-Driven Development

So far we've been writing tests for code that's already written. Test-driven development turns that around: you write a test first, and then write the code that makes the test pass. In other words, you write a test to think through what you're trying to build, and how you'll know when you're done, before diving into the implementation. That thought process often influences your design in surprising ways.

Writing a Test

Say, for example, we want to add a feature to our trivial Temperature Converter application that lets a user type in the name of a city, and the application calculates the boiling point of water in that city. The temperature at which water boils is a function of the current barometric pressure being reported in the city, as defined by the following formula:

Boiling Point of Water = 49.161 * Ln(Barometric Pressure) + 44.932

If the barometric pressure in Denver, Colorado is currently 24.896 inches of mercury, then the boiling point of water is approximately 202 degrees Fahrenheit (49.161 × ln(24.896) + 44.932 ≈ 202.97, which the test below truncates to 202 via intValue). Before thinking about how to write code to generate that answer, put a stake in the ground by writing a test:

- (void) testBoilingPointOfWaterInDenver
{
    TemperatureCalculator *calculator =
        [[TemperatureCalculator alloc] init];

    NSNumber *boilingPoint =
        [calculator boilingPointOfWaterInCity:@"Denver"];

    STAssertEquals(202, [boilingPoint intValue],
        @"Boiling point should be 202, but was %d instead!",
        [boilingPoint intValue]);

    [calculator release];
}

The test invokes the boilingPointOfWaterInCity: method of the TemperatureCalculator class. The test won't compile until we create that class and method. But first, you might be thinking "How are we going to test that?" It's an important question to be asking, because if code is difficult to test then it's likely going to be difficult to use.

Listening to Design Feedback

The problem here is the results are nondeterministic—the test will only pass if the pressure in Denver is 24.896 when the test is run. This is problematic because the implementation we had in mind was to have the TemperatureCalculator look up the current pressure by accessing The Weather Channel over the network. Perhaps we should rethink that decision.

Thankfully, the test offers the design insight that it would be more convenient if the TemperatureCalculator were decoupled from any particular weather lookup implementation. To do that, we define an informal WeatherLookupService protocol:

@interface NSObject (WeatherLookupService)

- (double)currentBarometricPressure:(NSString*)city;

@end

Then we modify the TemperatureCalculator class to include a setWeatherLookupService: method:

@interface TemperatureCalculator : NSObject
{
    id weatherLookupService;
}

- (void)setWeatherLookupService:(id)delegate;
- (NSNumber*)boilingPointOfWaterInCity:(NSString*)city;

@end

The boilingPointOfWaterInCity: method could then use the specified weather lookup service to look up the current barometric pressure for the city.
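As a sketch of what that might look like inside TemperatureCalculator (this is our own illustration of the formula above, not code from the example project):

#include <math.h>

// Sketch only: one plausible implementation of the calculator against the
// informal WeatherLookupService protocol, using the formula given earlier.
@implementation TemperatureCalculator

- (void)setWeatherLookupService:(id)delegate
{
    weatherLookupService = delegate;
}

- (NSNumber*)boilingPointOfWaterInCity:(NSString*)city
{
    double pressure =
        [weatherLookupService currentBarometricPressure:city];

    // Boiling Point of Water = 49.161 * Ln(Barometric Pressure) + 44.932
    double boilingPoint = 49.161 * log(pressure) + 44.932;

    return [NSNumber numberWithDouble:boilingPoint];
}

@end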

Making the Test Pass

This design decision opens up a fissure in the API that lets us bypass connecting to a network. Our test can simply define a currentBarometricPressure: method so that the test conforms to the informal WeatherLookupService protocol. In other words, the test poses as an object that can provide weather data. But rather than connecting to the network for data, the test simply returns canned data:

- (double)currentBarometricPressure:(NSString*)city {
    return 24.896;
}

The test then needs to set itself as the WeatherLookupService of the TemperatureCalculator in order for the currentBarometricPressure: method above to be called when the TemperatureCalculator asks for the barometric pressure:

- (void) testBoilingPointOfWaterInDenver
{
    TemperatureCalculator *calculator =
        [[TemperatureCalculator alloc] init];

    [calculator setWeatherLookupService:self];

    NSNumber *boilingPoint =
        [calculator boilingPointOfWaterInCity:@"Denver"];

    STAssertEquals(202, [boilingPoint intValue],
        @"Boiling point should be 202, but was %d instead!",
        [boilingPoint intValue]);

    [calculator release];
}

Notice that we're only testing that the TemperatureCalculator interacts with an object conforming to the WeatherLookupService protocol. We've effectively fooled the TemperatureCalculator into thinking that it was sending messages to the network-aware weather lookup service. We'll need to test the "real" weather lookup service at some point, but in the meantime the unit test helps us focus on getting one thing working at a time. And in the process, we ended up creating a better design, as shown in Figure 6.

Classes.jpg

Figure 6: Testing Yields Better Designs

Conclusion

Writing (and running) unit tests not only improves the quality of your application today and makes it more economical to change your code in the future, it also improves your designs through tangible feedback. This all adds up to more time for you to write quality code. Indeed, testing helps you write better software, faster.

Reference: http://developer.apple.com/tools/unittest.html
