
Archive for the ‘test-driven’ Category

New Tutorial: Getting Started With UISpec on Xcode 4

July 8th, 2011 Comments off

One of the most popular pages on this website is the tutorial that helps you get started with UISpec; I guess there's a great demand for acceptance testing frameworks in Objective-C.

Anyway, I have just made the switch from Xcode 3 to Xcode 4, a huge improvement. A lot has changed in the user interface, so I decided to upgrade the UISpec tutorial. I kept the old one in case anyone prefers the old IDE.

Cheers!

Automatic Acceptance Testing for iPhone

February 23rd, 2011 Comments off

I've been playing around with Xcode and the iOS SDK lately, particularly with various alternatives for achieving automatic acceptance testing. (Which I believe is the only long-term solution to fight the regression testing problem.)

One of the most promising solutions I’ve found is UISpec, modeled after Ruby’s mighty popular RSpec. Even though it still feels a bit rough cut, UISpec provides an easy way to write tests that invoke your application’s user interface; no recording, just plain Objective-C.

Anyway, in case you’re interested, I published a tutorial on how to set UISpec up for your project: Getting Started With UISpec.


The Swamp of Regression Testing

January 17th, 2011 14 comments

Have you ever read a call for automation of acceptance testing? Did you find the idea really compelling, but then came to the conclusion that it was too steep a mountain for your team to climb, that you weren't in a position to make it happen? Maybe you gave it a half-hearted try that you were forced to abandon when the team was pressured to deliver "business value"? Then chances are, like me, you heard but didn't really listen.
You must automate your acceptance testing!

Why? Because otherwise you’re bound to get bogged down by the swamp of regression testing.

Regression Testing in Incremental Development

Suppose we're building a system using a traditional method: we design it, build it and then test it, in that order. For the sake of simplicity, assume that the effort of acceptance testing the system is illustrated by the box below.

[Figure: the amount of acceptance testing using a traditional approach, depicted as a single box representing the specified system]

Now, suppose we're creating the same system using an agile approach: instead of building it all in one chunk, we develop the system incrementally, for instance in three iterations.

The system is divided into parts (in this example three parts: A, B and C).

[Figure: an agile approach; the system is developed in increments, with new functionality added in each iteration]

Potential Releasability

Now, since we're "agile", acceptance testing the new functionality after each iteration is not enough. To ensure "potential releasability" of our system increments, we also need to check all previously added functionality, to make sure we haven't accidentally broken anything we've already built. Test people call this regression testing.

Can you spot the problem? Incremental development requires more testing. A lot more testing. For our three-iteration example, twice as much testing.

[Figure: the theoretical amount of acceptance testing using an agile approach with three iterations is twice that of the waterfall approach]

Unfortunately, it's even worse than that. The total amount of acceptance testing grows quadratically with the number of increments: with n increments of equal size you end up doing 1 + 2 + … + n = n(n+1)/2 units of testing, compared to n units for a single test phase at the end. Five iterations require three times the testing of a single pass, nine require five times, and so on.

[Figure: the amount of acceptance testing increases rapidly with the number of increments]

This is the reason many agile teams lose velocity as they go: either they yield to an increasing burden of testing (if they do what they should), or they spend an increasing amount of time fixing bugs that cost more to fix because they're discovered late. (The latter is the symptom of an inadequately tested system.)

A Negative Force

Of course, this heavily simplified model of acceptance testing is full of flaws. For example:

  • The nature of acceptance testing in a waterfall method is very different from acceptance testing in agile. (One might say, though, that waterfall hides behind a false assumption, that all will go well, which looks good in plans but turns ugly in reality.)
  • One can't compare the initial acceptance testing of newly implemented features with regression testing of the same features. The former is real testing, while the latter is more like checking. (By the way, read this interesting discussion on the difference between testing and checking: Which end of the horse.)

Still, I think it’s safe to say that there is a force working against us, a force that gets greater with time. The burden of regression testing challenges any team doing incremental development.

So, should we drop the whole agile thing and run for the nearest waterfall? Of course not! We still want to build the right system. And finding that out last is not an option.

The system we want is never the system we thought we wanted.

OK, but can’t we just skip the regression testing? In practice, this is what many teams do. But it’s not a good idea. Because around The Swamp of Regression Testing lies The Forest of Poor Quality. We don’t want to go there. It’s an unpleasant and unpredictable place to be.

We really need to face the swamp. Otherwise, how can we be potentially releasable?

The only real solution is to automate testing. And I'm not talking about unit testing here. Unit testing is great but has a different purpose: unit tests verify that the code does what you designed it to do. Acceptance tests, on the other hand, verify that the code does what the customer (or product owner; use your favorite word for whoever pays for the stuff you build) asked for. That's what we need to test to achieve potential releasability.
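
To make the distinction concrete, here's a minimal sketch of what an automated acceptance test might look like in Java with JUnit, driving the application through its service layer instead of the GUI. All names here (OrderService, Order, the discount rule) are hypothetical, invented for illustration:

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Hypothetical domain code, standing in for the real system's
// entry point below the GUI layer.
class Order {
    private final double total;
    Order(double total) { this.total = total; }
    double total() { return total; }
}

class OrderService {
    // Business rule under test: returning customers get 10% off.
    Order placeOrder(boolean returningCustomer, double amount) {
        double discount = returningCustomer ? 0.10 : 0.0;
        return new Order(amount * (1.0 - discount));
    }
}

// The acceptance test expresses the customer's requirement in terms
// of behavior ("a returning customer gets a 10% discount"), not in
// terms of GUI widgets.
public class ReturningCustomerDiscountTest {
    @Test
    public void returningCustomerGetsTenPercentDiscount() {
        Order order = new OrderService().placeOrder(true, 100.00);
        assertEquals(90.00, order.total(), 0.001);
    }
}
```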

So, get out there and find a way to automate your acceptance testing. Because if you don’t, you can’t be agile.

Cheers!

Did I say don’t unit-test GUIs?

January 3rd, 2008 1 comment

Isn’t life funny? Two weeks ago I stated my opinion that unit-testing graphical user interfaces isn’t worth the trouble. Now I find myself doing it, writing unit-tests for GUI components.

What happened, did I come to my senses or go crazy (pick the expression that fits your point of view) during the Christmas holidays? No, I still think that unit-testing user interfaces is most likely a waste of time, but on some special occasions, like this one, it pays off.

My task is to make changes to a toolbox control in a legacy application. The control is full of application logic, and it has a strong and complicated relationship with an edit area control. There are two things I need to do: first I have to pull the application logic out of the toolbox and break the tight relationship to the edit area; then I need to push that application logic deeper into the application core, so that the toolbox can be used via the plug-in framework.

I figured I had two options: I could either refactor the toolbox and make the changes in small steps, or I could rebuild the complete toolbox logic and then change the controls to use the new programming interface. I found the rebuild alternative too risky. The code base is full of booby traps, and I have to be very careful not to introduce bugs. Therefore I decided to go with refactoring.

But refactoring requires a unit-testing harness, which of course this application doesn't have. Trust me: you don't want to refactor without extensive unit-testing. So here I am, setting up the tests around the involved GUI controls. It's a lot of work, especially since I don't have a decent framework for creating mock objects, but it'll be worth the effort once I start messing around with the code.
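
To give you an idea of the kind of harness I mean, here's a heavily simplified sketch in Java (the real application isn't Java, and every name here is hypothetical): the application logic is pulled out behind an interface, so a hand-rolled mock can stand in for the edit area control:

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

// The logic extracted from the toolbox control talks to the edit
// area through an interface, so tests can substitute a mock for it.
interface EditArea {
    void activateTool(String toolName);
}

// Hand-rolled mock: records the interaction instead of touching the GUI.
class MockEditArea implements EditArea {
    String activatedTool;
    public void activateTool(String toolName) { activatedTool = toolName; }
}

class ToolboxLogic {
    private final EditArea editArea;
    ToolboxLogic(EditArea editArea) { this.editArea = editArea; }
    void selectTool(String toolName) { editArea.activateTool(toolName); }
}

public class ToolboxLogicTest {
    @Test
    public void selectingAToolActivatesItInTheEditArea() {
        MockEditArea editArea = new MockEditArea();
        new ToolboxLogic(editArea).selectTool("pen");
        assertEquals("pen", editArea.activatedTool);
    }
}
```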

As a conclusion I'd like to refine the "Don't unit-test GUI" statement to read: "Don't unit-test GUI unless you're refactoring it and there's no easier way to make sure your changes don't break anything."

Cheers!

Don’t unit-test GUI

December 20th, 2007 21 comments

I'm currently rereading parts of the book Test-Driven Development: A Practical Guide by David Astels. It's a great book in many ways, well worth reading, but I have objections to one particular section, in which the author tries to convince me that developing my user interfaces using a test-driven approach is a good thing to do. I disagree.

I love TDD; it's one of the most powerful development techniques I know, but it's not without limitations. For one thing, code with unit-tests attached is more difficult to change. This is often an acceptable price to pay, since the benefit of producing and having the unit-tests is usually greater.

But the return on investment isn't always bigger than the price, and sometimes the cost of change exceeds the benefit of protection. That's why most developers won't use TDD for experimental code, and that's why I'm not using it to test my user interfaces.

I prefer to develop GUIs in a RAD environment, visually dragging and dropping components, moving them around, exchanging them for better ones as I find them. In that process, unit-testing just gets in my way. After all, the GUI is supposed to be a thin layer without business logic, so there is only so much to test.

One could theoretically test that the form contains certain key components, that they are visible, have a certain position or layout, stuff like that – but I find that kind of testing too specific for my taste.

In my opinion, unit-testing should test functionality, not usability. It shouldn't dictate whether I show a set of items in a plain list or in a combo-box. What it should do is test that the business logic of my code produces the correct set of items, and leave the graphical worries to the testers.
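
Here's a minimal Java sketch of that division of labor (the example is made up): the unit-test pins down which items the logic produces, while the choice between a plain list and a combo-box stays out of its reach:

```java
import static org.junit.Assert.assertEquals;
import java.util.Arrays;
import java.util.List;
import org.junit.Test;

// Business logic under test: which items should be offered at all.
class CountryModel {
    List<String> availableCountries() {
        return Arrays.asList("Denmark", "Norway", "Sweden");
    }
}

public class CountryModelTest {
    @Test
    public void producesTheCorrectSetOfItems() {
        // We verify the items themselves; whether the GUI renders them
        // in a plain list or a combo-box is left to human testers.
        assertEquals(Arrays.asList("Denmark", "Norway", "Sweden"),
                     new CountryModel().availableCountries());
    }
}
```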

This brings us to something that is sometimes misunderstood: Unit-testing can never replace conventional testing. Some things are just better left to humans. Like testing user interfaces.

Cheers!

Tools of The Effective Developer: Make It Work – First!

October 29th, 2007 15 comments

I've come across many different types of developers during my nearly two decades in the business. In my experience there are two extremes of developer character: the ones who always settle for the simplest solution, and the ones who seek the perfect solution, perfect in terms of efficiency, readability or code elegance.

Developers from the first group constantly create mess and agony among fellow developers. The second group contains developers who never produce anything of value, because they care more about the code than about the result. The optimal balance is somewhere in between, but regardless of what type of developer you are, you should always start by making it work, meaning: implement the simplest solution first.

Why spend time on an implementation that isn’t likely to be the final one, you might ask. Here’s why:

  1. The simple solution helps evolve the unit-testing safety net (see the sketch after this list).
  2. The simple solution provides rapid feedback, and may prevent extensive coding of the wrong feature. It is like a prototype at the code level.
  3. The simple solution is often good enough, and, with a working solution ready, you are less inclined to proceed and implement a more complex solution unless you really have to. Thus you avoid premature optimization and premature design, which make you add features that might only be needed in the future.
  4. With the simple solution in place, most of the integration and programming-interface work is done. That makes it easier to implement a more complex solution later.
  5. While implementing the simple solution, your knowledge of the system increases. This helps you make good decisions later on.
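
As a contrived Java illustration of points 1 and 3 (the example is my own, not from any real project): start with the obviously correct version and let the tests pin it down, so a cleverer version can be swapped in behind the same interface later, only if it's ever needed:

```java
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;
import org.junit.Test;

// Step one: the simplest thing that works. A quadratic scan is
// obviously correct and gets the safety net in place. If profiling
// later shows it matters, a HashSet-based version can replace it
// with the tests already green.
class Duplicates {
    static boolean hasDuplicates(int[] values) {
        for (int i = 0; i < values.length; i++)
            for (int j = i + 1; j < values.length; j++)
                if (values[i] == values[j]) return true;
        return false;
    }
}

public class DuplicatesTest {
    @Test
    public void detectsDuplicates() {
        assertTrue(Duplicates.hasDuplicates(new int[] {1, 2, 1}));
        assertFalse(Duplicates.hasDuplicates(new int[] {1, 2, 3}));
    }
}
```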

This may all sound simple enough. After all, the habit of Making It Work First comes naturally to many developers. Unfortunately, I'm not one of them. I still let more or less insignificant design issues consume an unnecessary amount of time. The thing is, it's hard to find the perfect design on the first try. The perfect design may not even exist, or it may cost too much to be worth the effort.

That is why I struggle to attain the habit of Making It Work First.

Previous posts in the Tools of The Effective Developer series:

  1. Tools of The Effective Developer: Personal Logs
  2. Tools of The Effective Developer: Personal Planning
  3. Tools of The Effective Developer: Programming By Intention
  4. Tools of The Effective Developer: Customer View
  5. Tools of The Effective Developer: Fail Fast!

I’m back!

October 28th, 2007 Comments off

I’m back from my three week vacation!

I had a great time, but as suspected I wasn’t able to stay away from computers. In the warm evenings, just for fun, I started to implement a ray tracer in the D Programming Language.

I have been looking for a suitable project that would give me a chance to get deep into D, and a ray tracer seems to be the perfect fit. D is supposed to be great at floating point programming and now I have the chance to find out on my own.

To make it a little more interesting I have used a more top-down, breadth-first kind of approach than I normally do. I want to see how that affects the test-driven development technique. As part of the experiment I keep a detailed development log, which I plan to share with you when I reach a certain point. It could be within a week, or it could take several months, depending on work load and inspiration level.

So stay tuned. I'll be back with ray tracing, or other topics that come across my sphere of interest.

Cheers!

How to automate acceptance tests

October 5th, 2007 Comments off

In a comment to my previous post, AC wonders how I automate acceptance testing; he considers that a job for real testers. Well, he's absolutely right. I expressed myself a bit sloppily, so let me use this post to explain what I meant to say.

Acceptance testing is done by the customer to make sure she got what she ordered, to make sure you delivered the right system. This process cannot and should not be automated. What you can do – and this was what I meant to say – is to automate acceptance tests, and use them during construction.

The customer defines the acceptance tests. These are valuable to the developer, since they can be used to validate the system as it develops. The sooner you get hold of these test cases the better, so make sure you press the customer to produce them early. Better yet, help the customer in the process. That way you can help make the test cases automatable.

So, how do you design a test so that it can be automated? First of all, you need to stop thinking in terms of the user interface. Instead, describe the test in plain text, in terms of action, function and result. Look for possible test data, edge cases and exceptional uses, and describe those as well.

The second thing you need to do is build the system in such a way that testing can be automated. Separating the GUI from the business logic is often all you need to do to achieve that. You then automate the acceptance tests the same way you automate integration tests; in fact, they could even become part of your integration testing harness, the only difference being that they are defined by the customer.
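
Here's a sketch of that structure in Java (every name is hypothetical; our actual system is a GIS application, but the point is the fixture, action, result shape, with no GUI involved):

```java
import static org.junit.Assert.assertEquals;
import java.util.HashMap;
import java.util.Map;
import org.junit.Before;
import org.junit.Test;

// Hypothetical facade below the GUI layer, standing in for the kind
// of entry point you get once GUI and business logic are separated.
class MapModel {
    private final Map<String, double[]> features = new HashMap<>();
    void addFeature(String id, double x, double y) { features.put(id, new double[] {x, y}); }
    void moveFeature(String id, double x, double y) { features.put(id, new double[] {x, y}); }
    double featureX(String id) { return features.get(id)[0]; }
    double featureY(String id) { return features.get(id)[1]; }
}

public class MoveFeatureAcceptanceTest {
    private MapModel map;

    @Before
    public void fixture() {
        // Fixture: a map containing one known feature.
        map = new MapModel();
        map.addFeature("road-1", 10.0, 20.0);
    }

    @Test
    public void movingAFeatureUpdatesItsPosition() {
        // Action: the same operation a user would trigger from the GUI.
        map.moveFeature("road-1", 15.0, 25.0);

        // Result: checked against the customer-defined expectation.
        assertEquals(15.0, map.featureX("road-1"), 0.001);
        assertEquals(25.0, map.featureY("road-1"), 0.001);
    }
}
```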

I'm not saying this is easy. (Here's a nice post which discusses when to automate.) I'm not saying it can be done for all of your acceptance tests. What I am saying is: it can be done for far more tests than you might think. For example, we are customizing a GIS system for an internal customer's needs. The application is user-controlled and involves plenty of complex actions, modifying graphical data and properties. Still, we were able to automate most of the acceptance tests.

We spent a lot of time writing code to set up fixtures, initiate actions and check the results. It was worth every second, though. You see, manually running one of our acceptance test cases usually takes several minutes. By having the computer do all the work, that time is reduced significantly, freeing up developer time. We use the automated test cases individually, almost like an extended compiler, to verify features as we implement them. And at night we run all of our acceptance tests to get feedback on how far we've come and to spot unexpected problems.

But the real acceptance testing is still going to be done manually, by the customer, at the end of the project. Just like AC pointed out.

Cheers!


Tools of The Effective Developer: Fail Fast!

October 2nd, 2007 5 comments

It's a well known fact that we regularly introduce errors with the code we write. Chances are slim that we get it right on the first try. And if we do, the risk is great that changing requirements and murderous deadlines will mess things up later on.

It’s also well known that the cost of failure increases with time. The sooner you discover the flaw, the easier it is to fix. In other words, if we are going to fail, there are good reasons to do it fast.

When developers talk about failing fast they usually refer to the defensive coding technique based on assertions and exception handling. It's true that assertions are the very foundation of failing fast; they should be your first line of defense against bugs. But it doesn't stop there. Failing fast should pervade your whole process of making software. You should fail fast on all levels.
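
In code, that first line of defense looks something like this minimal Java sketch (my own example): reject the bad value at the point where it appears, instead of letting it propagate and blow up far from its source:

```java
// Guard clause: fail immediately where the bad value shows up,
// with a message that names the offending input.
class Account {
    private long balanceInCents;

    void deposit(long amountInCents) {
        if (amountInCents <= 0) {
            throw new IllegalArgumentException(
                "deposit must be positive, was " + amountInCents);
        }
        balanceInCents += amountInCents;
    }
}
```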

The most effective fail-fast technique is automated testing, the fastest way to get feedback. Be sure to write the tests first. And don't just automate unit-testing; integration and acceptance testing are often easier to automate than you might think. The key is to isolate your code using mock objects.

The fail-fast property doesn't apply to code and systems alone. It should be used at the project level too. By using agile practices like short iterations, small releases, and on-site customers, you create an environment of continuous feedback. It will help you steer the project to success or, by failing fast, avoid a disaster. Kate Gregory puts it this way in a nice post from 2005:

“Failure can be a good thing. If it saves you from following a doomed path for a year, you’re glad to have failed early rather than late.”

Failing takes courage, failing increases your knowledge, failing calls you to action. That's why I like the habit of Failing Fast.

This was the fifth post in this series. Here are the other Tools of The Effective Developer posts:

  1. Tools of The Effective Developer: Personal Logs
  2. Tools of The Effective Developer: Personal Planning
  3. Tools of The Effective Developer: Programming By Intention
  4. Tools of The Effective Developer: Customer View

Quit Debugging!

September 17th, 2007 21 comments

I have a confession to make: I used to be addicted to debugging. Yes, it's true. When I got hooked – damn you, Delphi – I wasn't able to see the dark side, the demonic side of the debugger. It lured me onto the path of quick fixes. Heed my warning: debuggers are bad!

Fortunately I'm one of the lucky few who have been able to recover from this particularly addictive behavior. I've been clean – thank you, JUnit – for almost five years now. And you can do it too: you can let go of the safety zone that integrated debuggers provide and break free, just like I did.

The first thing to do is to realize that there is a better alternative: test-driven development. To get rid of a bug, the right thing to do is not to fire up your debugger, but to write a unit-test that reveals it. If necessary, keep writing tests, going deeper and deeper into your code. Eventually the tests will tell you what is wrong, and they'll even point you toward a solution.
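
For example (a made-up bug, sketched in Java with JUnit): rather than stepping through the code, write a test that reproduces the failure, watch it go red, fix the code, and keep the test forever:

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Hypothetical code containing a reported bug: the discount is
// accidentally applied twice for large orders.
class PriceCalculator {
    double priceWithDiscount(double amount, double discount) {
        double discounted = amount * (1.0 - discount);
        if (amount >= 1000.0) {
            discounted *= (1.0 - discount); // the bug a debugger would chase
        }
        return discounted;
    }
}

public class PriceCalculatorTest {
    // This test reproduces the failure: it goes red, points at the
    // faulty branch, and once the bug is fixed it stays in the
    // harness, guarding against a relapse.
    @Test
    public void discountIsAppliedOnlyOnce() {
        assertEquals(900.00,
                     new PriceCalculator().priceWithDiscount(1000.00, 0.10),
                     0.001);
    }
}
```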

I know that using a debugger may seem like a faster way to find and extinguish a bug, but that is just an illusion. Here are the reasons:

  1. TDD improves the design. Being forced to think about testability tends to divide your code into small, manageable pieces. This makes your code a poor breeding ground for bugs.
  2. Tests remain useful for a long time. They become part of your testing harness, which helps protect your code against future infestations. The work spent in a debugging session can never be reused.
  3. Unit-testing saves time, a lot of it! While this isn't immediately obvious, the long-term effects are huge. Think of it: all those debugging sessions can be rerun automatically at your command, whenever you want, as often as you want, in just a matter of seconds. All you have to do to achieve this is let go of the debugger and write relevant tests.
  4. Unit-testing gives you courage. There's nothing like a good harness to make you feel invincible. I still remember the first time I felt the real power of unit-testing. I was working on a huge legacy application and had developed a new set of functionality, using TDD for the very first time. Several months later I realized I had to do a major rewrite. The rewrite was risky business and took me a couple of days to complete. When I finished, I ran the unit-tests, and they all came out green! I could be confident the program worked just as it did before the rewrite. And the best part: I drew that conclusion from just five seconds of testing. Boy, I still get goose bumps.

[PREACHING OFF]

Of course debuggers are useful tools; in certain situations they are even invaluable. For someone who's new to a piece of software, they provide a great way of getting to know it. The problem is that a debugger makes you lazy. So be sure to get rid of it as soon as you've identified a testing strategy.

Cheers!