Archive for the ‘software development’ Category

New Tutorial: Getting Started With UISpec on XCode 4

July 8th, 2011 No comments

One of the most popular pages on this website is the tutorial that helps you get started with UISpec; I guess there’s a great demand for acceptance testing frameworks for Objective-C.

Anyway, I have just made the switch from XCode 3 to XCode 4, a huge improvement. A lot has changed in the user interface, so I decided to upgrade the UISpec tutorial, but kept the old one in case someone prefers the old IDE:


Incremental Development and Regression Testing Paper Released

May 17th, 2011 No comments

As you might know I wrote about the regression testing problem in The Swamp of Regression Testing. One of the commenters of that post, Ricky Clarkson, thought that the extensive use of the agile word got in the way of the underlying message.

I’d love to see a copy of this article without “if you don’t do X, you won’t be agile”, because where I work, agile isn’t a goal, but quality and quick turnaround is. If I passed this around, the word ‘agile’ would get in the way of an otherwise great message.

That made so much sense that I decided to do just that. I wrote a new version, a complete rework of the original post, and published it as a separate document so that it can be easily passed around.

You’ll find it under Shared Stuff on my website, or by following the link below.

Incremental Development and Regression Testing (PDF)

Please don’t hesitate to contact me if you have any feedback to share.


Categories: Announcements, software development Tags:

Automatic Acceptance Testing for iPhone

February 23rd, 2011 No comments

I’ve been playing around with XCode and the iOS SDK lately, particularly with various alternatives for achieving automatic acceptance testing. (Which I believe is the only long-term solution to fight the regression testing problem.)

One of the most promising solutions I’ve found is UISpec, modeled after Ruby’s mighty popular RSpec. Even though it still feels a bit rough cut, UISpec provides an easy way to write tests that invoke your application’s user interface; no recording, just plain Objective-C.

Anyway, in case you’re interested, I published a tutorial on how to set UISpec up for your project: Getting Started With UISpec.

Categories: software development, test-driven Tags:

The Swamp of Regression Testing

January 17th, 2011 14 comments

Have you ever read a call for automation of acceptance testing? Did you find the idea really compelling but then came to the conclusion that it was too steep a mountain for your team to climb, that you weren’t in a position to be able to make it happen? Maybe you gave it a half-hearted try that you were forced to abandon when the team was pressured to deliver “business value”? Then chances are, like me, you heard but didn’t really listen.
You must automate your acceptance testing!

Why? Because otherwise you’re bound to get bogged down by the swamp of regression testing.

Regression Testing in Incremental Development

Suppose we’re building a system using a traditional method: we design it, build it and then test it – in that order. For the sake of simplicity, assume that the effort of acceptance testing the system is illustrated by the box below.

Box depicting a specified system

The amount of acceptance testing using a traditional approach

Now, suppose we’re creating the same system using an agile approach: instead of building it all in one chunk, we develop the system incrementally, for instance in three iterations.

The system is divided into parts (in this example three parts: A, B and C).

An agile approach. The system is developed in increments.

New functionality is added with each iteration

Potential Releasability

Now, since we’re “agile”, acceptance testing the new functionality after each iteration is not enough. To ensure “potential releasability” for our system increments, we also need to check all previously added functionality so that we didn’t accidentally break anything we’ve already built. Test people call this regression testing.

Can you spot the problem? Incremental development requires more testing. A lot more testing. For our three-iteration example, twice as much testing.

The theoretical amount of acceptance testing using an agile approach and three iterations is twice as much as in Waterfall

Unfortunately, it’s even worse than that. The amount of acceptance testing grows quadratically with the number of increments. Five iterations require three times the amount of testing, nine require five times, and so on.

The amount of acceptance testing increases rapidly with the number of increments.
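The arithmetic behind these figures is easy to check. A quick sketch in Python (assuming, as in the figures, that each increment adds an equally sized part and each iteration re-tests everything built so far):

```python
def total_testing_effort(iterations):
    """Each iteration tests its new part plus re-tests all earlier parts,
    so the total effort is 1 + 2 + ... + n equally sized parts."""
    return sum(range(1, iterations + 1))

for n in (3, 5, 9):
    # Compare against testing each part exactly once (the one-chunk case).
    print(n, total_testing_effort(n), total_testing_effort(n) / n)
```

Three iterations take 6 parts of testing instead of 3 (twice as much), five take 15 instead of 5 (three times), and nine take 45 instead of 9 (five times).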

This is the reason many agile teams lose velocity as they go: either by yielding to an increasing burden of testing (if they do what they should) or by spending an increasing amount of time fixing bugs that cost more to fix because they’re discovered late. (Which is the symptom of an inadequately tested system.)

A Negative Force

Of course, this heavily simplified model of acceptance testing is full of flaws. For example:

  • The nature of acceptance testing in a Waterfall method is very different from acceptance testing in agile. (One might say, though, that Waterfall hides behind a false assumption (that all will go well) which looks good in plans but becomes ugly in reality.)
  • One can’t compare the initial acceptance testing of newly implemented features with regression testing of the same features. The former is real testing, while the latter is more like checking. (By the way, read this interesting discussion on the difference between testing and checking: Which end of the horse.)

Still, I think it’s safe to say that there is a force working against us, a force that gets greater with time. The burden of regression testing challenges any team doing incremental development.

So, should we drop the whole agile thing and run for the nearest waterfall? Of course not! We still want to build the right system. And finding that out last is not an option.

The system we want is never the system we thought we wanted.

OK, but can’t we just skip the regression testing? In practice, this is what many teams do. But it’s not a good idea. Because around The Swamp of Regression Testing lies The Forest of Poor Quality. We don’t want to go there. It’s an unpleasant and unpredictable place to be.

We really need to face the swamp. Otherwise, how can we be potentially releasable?

The only real solution is to automate testing. And I’m not talking about unit testing here. Unit testing is great but has a different purpose: unit testing verifies that the code does what you designed it to do. Acceptance testing, on the other hand, verifies that our code does what the customer (or product owner, or your favorite word for the one who pays for the stuff you build) asked for. That’s what we need to test to achieve potential releasability.
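To make the distinction concrete, here is a minimal Python sketch (all names hypothetical, not from any real project): the unit test exercises an internal function directly, while the acceptance test drives the application through its outward-facing entry point, phrased in the customer’s terms.

```python
def parse_amount(text):
    """Internal detail: parse a user-entered amount."""
    return int(text.strip())

def transfer(command):
    """The 'whole application': takes a user command, returns what the user sees."""
    amount = parse_amount(command)
    return f"Transferred {amount} kr"

# Unit test: does the code do what we designed it to do?
assert parse_amount(" 100 ") == 100

# Acceptance test: does the application do what the customer asked for?
# ("When I transfer 100 kr, the system confirms the transfer.")
assert transfer("100\n") == "Transferred 100 kr"
```

The unit test can pass while the acceptance test fails (or vice versa), which is why only the latter tells you anything about releasability.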

So, get out there and find a way to automate your acceptance testing. Because if you don’t, you can’t be agile.


Scrum Masters Should Be Cross-Functional

December 7th, 2010 No comments

I have enjoyed a two day Certified Scrum Master course with Chet Hendrickson and Henrik Kniberg as teachers. A big part of the course was dedicated to questions and answers (implemented as a Scrum exercise) and lots of interesting subjects were elaborated upon by Chet and Henrik. I figured I’d document some of them on this blog, starting with this question:

“Should Scrum Masters code?”

Both Henrik and Chet agreed that the Scrum Master role may or may not be a full-time job, depending on circumstances like team size, the maturity (in a Scrum sense) of the team, etc. In fact, a Scrum Master should strive to make her role redundant.

The Scrum Master is needed the most in the beginning, but once the worst impediments are removed and the team gets more comfortable with the process, the role becomes less important. At that point, the Scrum Master’s workload becomes unevenly distributed, concentrated towards the beginning and end of the sprints. When that happens, you have basically two options:

Alternative 1: Make the Scrum Master role for this team a part time assignment.

Alternative 2: Keep the full time Scrum Master but have her or him participate in development activities when there’s time for that.

Of these two, the second one is much preferred, mostly because a “drop-in” Scrum Master will be less effective at discovering and removing impediments. From that perspective, having a cross-functional Scrum Master seems like a good idea.

Categories: software development Tags:

Martin Fowler: Agile Doesn’t Matter That Much

July 30th, 2010 4 comments

Martin Fowler has an interesting post up called Utility vs Strategic Dichotomy. The point he’s making is that there are essentially two kinds of software projects, which he names utility and strategic, and that they need to be treated entirely differently in the way you staff and run them.

Utility projects, as he defines them, are the ones that provide functions that are often necessary and crucial to running the business, but not a differentiating factor in competition with other companies.

A classic example of a utility IT project is payroll, everyone needs it, but it’s something that most people want to ‘just work’

Then he delivers hammer blow number one.

“Another consequence is that only a few projects are strategic. The 80/20 rule applies, except it may be more like 95/5.”

What Martin Fowler is saying is that the vast majority of us are doing work that is of no strategic importance to the business we support. I’m sure this statement cuts a wound in many developers’ and project managers’ self-images, mine included. But I’m with Martin on this one. It’s time our sector got its feet back on the ground and realized that we’re here to support the business. Not the other way around.

Hammer blow number two.

Another way the dichotomy makes its influence felt is the role of agile methods. Most agilists tend to come from a strategic mindset, and the flexibility and rapid time-to-market that characterizes agile is crucial for strategic projects. For utility projects, however, the advantages of agile don’t matter that much.

I have a little harder time swallowing this one. That agile doesn’t “matter that much” when developing utility systems doesn’t ring true to me. Yes, time to customer is not as critical as in strategic projects, but that doesn’t mean that building the right system isn’t important in utility projects as well. And the agile values help reduce cost and risk in any project, be it utility or strategic.


The Boy Scout Rule

July 26th, 2010 4 comments

You may have heard of The Boy Scout Rule. (I hadn’t until I watched a recorded presentation by “Uncle Bob”.)

You should always leave the campground cleaner than you found it.

Applied to programming, it means we should have the ambition of always leaving the code cleaner than we found it. Sounds great, but before we set ourselves to the mission we might benefit from asking a couple of questions.

1. What is clean code?
2. When and what should we clean?

Clean Code

What is clean code? Wiktionary defines it like so:

software code that is formatted correctly and in an organized manner so that another coder can easily read or modify it

That, by necessity, is a rather vague definition. The goal of being readable and modifiable to another programmer, although highly subjective, is pretty clear. But what is a correct format? What does an organized manner look like in code? Let’s see if we can make it a little more explicit.

Code Readability

The easiest way to mess up otherwise good code is to make it inconsistent. To me, it really doesn’t matter where we put brackets, whether we use camelCase or PascalCase for variable names, or spaces or tabs for indentation. It’s when we mix the different styles that we end up with sloppy code.

Another way to make code difficult to take in is to make it compact. Just like any other type of text, code gets more readable if it contains some space.

Although the previous definition mentions nothing about it, good naming of methods, classes, variables, etc., is at the heart of readable code. Code with poorly named abstractions is not clean code.

Organized for Reusability

Formatting code is easy. Designing for readability, on the other hand, is harder. How do we organize the code so that it is easy to read and modify?

One thing we know is that long chunks of code are more difficult to read and harder to reuse. So, one goal should be to have small methods. But at what cost? For instance, should we refactor code like this?

SomeInterface si = (SomeInterface)someObject;

Into this?

SomeInterface si = ToSomeInterface(someObject);

private SomeInterface ToSomeInterface(object someObject)
{
    return (SomeInterface)someObject;
}

What about code like this?

string someString;
if (a >= b)
  someString = "a is bigger or equal to b";
else
  someString = "b is bigger";

Into this?

string someString = a >= b ? "a is bigger or equal to b" : "b is bigger";

In my opinion, the readability gains of the above refactorings are questionable. Better than reducing the number of rows in more or less clever ways is to focus on improving abstractions.

Extract ’til you drop

Abstractions are things with names, like methods or classes. You use them to separate the whats from the hows. They are your best weapons in the fight for readability, reusability and flexibility.

A good way to improve abstractions in existing code is to extract methods from big methods, and classes from bulky classes. You can do this until there is nothing more to extract. As Robert C. Martin puts it, extract until you drop.
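As a minimal illustration (a made-up example, not from Robert C. Martin’s material), here is a bulky function extracted until each piece names a single idea:

```python
# Before: one bulky function that sums, formats and joins all at once.
def report_v1(orders):
    total = sum(o["price"] * o["qty"] for o in orders)
    lines = [f"{o['name']}: {o['price'] * o['qty']}" for o in orders]
    return "\n".join(lines) + f"\nTotal: {total}"

# After: extracted until each abstraction names one idea.
def line_total(order):
    return order["price"] * order["qty"]

def order_line(order):
    return f"{order['name']}: {line_total(order)}"

def grand_total(orders):
    return sum(line_total(o) for o in orders)

def report(orders):
    body = "\n".join(order_line(o) for o in orders)
    return f"{body}\nTotal: {grand_total(orders)}"

orders = [{"name": "apples", "price": 2, "qty": 3},
          {"name": "pears", "price": 5, "qty": 1}]
assert report(orders) == report_v1(orders)  # behavior unchanged
```

The extracted names describe the whats, the small bodies carry the hows, and pieces like line_total become reusable on their own.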

That leads me to my own definition of clean code.

• It is readable.
• It is consistent.
• It consists of fine grained, reusable abstractions with helpful names.

When to clean

Changing code is always associated with risk and “cleaning” is no exception. You could:

• Introduce bugs
• Cause build breaks
• Create merge conflicts

So, there are good reasons not to get carried away. A great enabling factor is how well your code is covered by automatic tests. Higher code coverage enables more brutal cleaning. A team with no automatic testing, though, should be more careful about what kind of cleaning it does.

Also, to avoid annoying your fellow programmers, it’s best to avoid sweeping changes that may cause check-in conflicts for others. Restrict your cleaning to the campground, the part of the code that you’re currently working on. Don’t venture out on forest-wide cleaning missions.

If the team does not share common coding rules, there’s a risk that you end up in “Curly Brace Wars”, or other religious software wars driven by team members’ individual preferences. This leads to inconsistency, annoyance and loss of energy. The only way to avoid it is to agree upon a common set of code rules.

With all that said, I have never imposed “cleaning” restrictions on my teams. They are allowed to do whatever refactorings they find necessary. If they are actively “cleaning” the code it means they care about quality, and that’s the way I like it. In the rare cases where excessive cleaning causes problems, most teams will handle it and adjust their processes accordingly.


The Essence of Project Management

July 20th, 2010 No comments

I have a very simple philosophy when managing software development projects. It keeps me from derailing in the ever-changing environment that is our craft. Regardless of whatever methodology and practices we currently use, I turn to these three questions for guidance.

  1. Is the team as productive as it can be?
  2. Is it doing the right things?
  3. Is it delivering on expectations?

To me those questions represent the essence of project management. Finding the answers, identifying the impediments and removing them are what I consider my most important tasks as a project manager; not to impose a particular methodology.

Productive teams

Good questions to ask while analyzing the productivity of a team might be:

  • Does the team contain the right people?
  • Is it getting rapid answers?
  • Is it getting quick feedback?
  • Is it optimized for long term or short term productivity?

In software development, the only measure of team productivity is its ability to produce working software. So if you want to maximize your team’s productivity you should focus on those capable of creating software. Make sure they can spend as much time as possible producing customer value.

For that reason, I think twice before I ask my developers to perform some compliance activity, like writing a report. It’s almost always better to have someone else do stuff like that. If I need information I can always ask, and if writing a report is necessary, I can collect the information and write the report myself. That way I contribute to keeping the team focused and productive, which is more important than to ease my own work load.

Monitoring the team’s productivity is an important task for the project manager. I do this mainly by being around the team a lot. Taking an interest in each individual’s working situation and task at hand is the best way to pick up problems and impediments. Other important sources of information (although not as efficient as MBWA) are the daily meetings and iteration retrospectives.

If you’re really lucky, with the right material and inspiring goals, the team jells and becomes hyper-productive. That is every project manager’s dream, but few get to experience it.

Efficient teams

But being productive is not enough. What we really want is efficiency, and to be efficient you need to be both productive and produce the right things. Right in this sense means the right features, with the right quality, in the right order.

To eliminate the risk of producing wrong features you need to work closely with the customer. Make the feedback loop as short as possible, preferably by utilizing an onboard customer, or at least by holding regular review meetings.

Another thing you’d really want to do is have the planned features prioritized. For some reason, many customers (here in Sweden at least) are generally unwilling to do this for you:
“I decided what I want to include in the release, didn’t I? All those features are equally important.”

The problem with leaving priorities to the team is that the system will be developed in an arbitrary order, most likely with the easiest features first. While that may have a value in some situations, the downside is that it decreases the project’s chance of success. The reason is that without customer value as the leading beacon, chances are you’ll end up with a lot more bells and whistles than you can afford. And when you find that out, it’ll be too late to rectify.

Dealing with expectations

How do you define project success? Here is one definition:

If the output is in line with what the customer expects, the project is successful.

The good thing about this definition is that it implies another variable to work with besides output, namely customer expectations. If the answer is yes to both questions 1 and 2 above, we know we are maximizing the output. But if that still isn’t good enough, we have only one option left: we must change the customer’s expectations, or fail.

Again, the best way to tune customer expectations is a tight feedback loop. If the customer sees the progress regularly, she will adjust her expectations accordingly, or have a chance to react to reality. Either way, the chance of success increases.

Never lose focus on the end goal.


Two Halves Make One Whole

July 17th, 2010 No comments

I invested an hour watching a recorded talk by Martin Fowler and Neal Ford at some Paris convention. It was a well-spent hour — as always with Martin Fowler — with some interesting angles on communication and feedback, and on why agile works. Especially the part regarding pair programming caught my attention.

According to Martin Fowler and his companion, pair programming is an effective development practice because:

  1. We can only use one part of our brain at one time, and
  2. “The driver” and “the navigator” of a pair programming unit are using different parts of their brain and are thus complementing each other.

I have no idea if this is a proven truth, but it certainly conforms to my own experience with pair programming. The driver (the one with the keyboard) usually ends up in a state of completing-the-task-at-hand focus, while the navigator, a little more detached, makes bigger sweeps and thinks more about the big picture. For me, pair programming has always resulted in better design, fewer bugs and a lot more fun.

If you want to watch for yourself, but don’t want to spend the full one hour, you’ll find the pair programming part 36:20 into the presentation.


Great Bug Reports

May 25th, 2010 No comments

Is your team maintaining a bug report dump? When new issues get reported at a pace the team cannot keep up with, it’s in trouble. At some point the team must yield, and bug reports start to pile up. Unattended bug reports lose their value over time and become a terrible waste.

Shallow Testing

There are many possible reasons for a bug tracking system to turn into a dump of wasted effort. One reason is when the testing team practices shallow testing: the testers file bug reports right after discovering bugs, leaving any further investigation to the development team. This is convenient for the testers but bad for the overall process of finding and fixing bugs.

The shallow testing technique results in many bug reports, most of them of poor quality. But the measure of effective testing is not how many bug reports it produces. What counts is the number of bug reports that lead to bug fixes. To be effective, we need great bug reports.

Great Bug Reports

So, what does a great bug report look like? To answer that, we need to ask another question: what does the developer need to fix a bug? The first step of a successful bug elimination is to reproduce it in a controlled development environment. In order to do that, the developer needs to:

  1. Put the system in a state from which the error can be produced.
  2. Perform the actions that produce the error.
  3. Know how to identify the error.

Most bug reports contain enough information to make steps 2 and 3 clear. Clues toward step 1, on the other hand, are usually missing. As a result, developers end up doing additional testing to pin down the problems.

If we do a better job with our bug reports, we get two really important effects: we help developers spend less time on each bug so they can fix more of them, and we increase the chance that the report will be acted upon. This is why thorough testing on the part of the testing team is so important.

Focus On the Right Things

The basic formula to produce great bug reports is as follows.

  1. A descriptive name.
    The most important details of the bug presented in a sentence.
  2. Steps to reproduce.
    This is the part that distinguishes great bug reports from lesser ones. A well-described sequence of steps that reproduces the bug removes uncertainty and signals thorough research. Developers are therefore more inclined to act upon such a bug report than a fuzzy one.
    It’s also important that the reproduction steps start at a known state, preferably at a clean program start.
  3. Expected results.
  4. Actual results.
  5. Any possibly relevant attachments, for example screenshots and data.
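Put together, a report following the formula might look like this (the bug and the product are made up for illustration):

```text
Name:     Crash when saving an empty document

Steps to reproduce:
  1. Start the application (no document open, default settings).
  2. Choose File > New.
  3. Without typing anything, choose File > Save.

Expected results:  A save dialog appears.
Actual results:    The application closes with no error message.

Attachments: crash.log, screenshot of the menu just before saving.
```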

Depending on the system you’re using, a multitude of additional parameters may be set to categorize the bug further. Provide the extra information if applicable, but it’s not essential, so don’t let it steal focus from the basic formula.

Bug Mining

The foundation of a great bug report is built with great testing. Great testing is hard work and requires a special mindset: the love of bugs and the willingness to spend a lot of time examining each new specimen.

When a great tester discovers a bug, the first thing she does is not to fire up the error reporting tool and start describing it. She leaves that for later. Instead she sets about finding the answers to these two questions:

  1. Can I reproduce this bug?
  2. If so, what is the smallest number of steps I need to do so?

This usually requires several program resets, attacking the bug from different angles. I like to call this process Bug Mining and it’s the tester’s most important task.
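The shrinking part of bug mining can even be described mechanically. This Python sketch (with a hypothetical reproduces predicate standing in for the tester’s by-hand program resets) greedily drops steps that aren’t needed:

```python
def minimize(steps, reproduces):
    """Greedily drop steps that aren't needed to trigger the bug.

    `reproduces` answers: does this sequence of steps still show the bug?
    Each call corresponds to one program reset and replay by the tester.
    """
    i = 0
    while i < len(steps):
        candidate = steps[:i] + steps[i + 1:]
        if reproduces(candidate):
            steps = candidate  # step i was irrelevant; drop it
        else:
            i += 1             # step i is needed; keep it
    return steps

# Toy bug: the crash needs both opening the editor and pasting.
repro = lambda s: {"open editor", "paste"} <= set(s)
print(minimize(["open editor", "type text", "paste", "save"], repro))
# → ['open editor', 'paste']
```

A tester does exactly this, only with the real application in place of the predicate: drop a suspected-irrelevant step, reset, replay, and keep the shorter sequence whenever the bug still shows.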

Why all the hard work?

Why should the tester have to do all this work, you might ask. After all, the developers introduced the bugs in the first place, so they should clean up their own mess, right? Besides, isn’t it better if the testers spend their time testing and finding more bugs?

The problem is this: If the testers prioritize quantity over quality we’ll end up with a very ineffective test-correction-test cycle, and an ever growing mountain of unresolved bugs.

The more time developers spend on each bug, the fewer bugs get corrected. And if fewer bugs get fixed, more time passes between bug discovery and bug correction, making each bug even more expensive to fix. Since the developers are the only ones who can make the bugs go away, you really need to think twice before making them do work someone else can do just as well. This is especially true for bug mining.

Guiding Principles

As a summary, here are some guiding principles to produce great bug reports:

  • Focus your effort on the steps to reproduce the bug. They should be clear, complete and as short as possible.
  • Only one bug per report.
    Handling several bugs as one is problematic for several reasons. For example, what should you do if the bug report contains bugs with different severity? Also, it’s difficult to describe several bugs in the name sentence, so some might be overlooked. A third reason is the communication problem that arises when only some of the reported bugs get corrected.
  • A few really good bug reports are better than plenty of not-so-good ones. Quality beats quantity when it comes to testing.