Archive for the ‘test-driven’ Category

PowerShell TDD: Testing ShouldProcess

July 12th, 2022

The built-in PowerShell feature ShouldProcess is considered best practice for commands that perform destructive operations.

function Remove-Something {
    [CmdletBinding(SupportsShouldProcess)]
    param()
    if ($PSCmdlet.ShouldProcess("Something", "Permanently remove")) {
        # Code that performs the destructive operation
    }
}
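
Because the function declares SupportsShouldProcess, the -WhatIf and -Confirm common parameters work out of the box. A quick illustration (the exact message text depends on your PowerShell version):

Remove-Something -WhatIf    # shows a "What if:" message and skips the destructive code
Remove-Something -Confirm   # prompts for confirmation before running the destructive code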

The problem with ShouldProcess is that it may prompt the user for manual confirmation, which makes automated testing difficult or impractical. The normal approach to automating tests of code that prompts the user is to mock the part that does the prompting. However, since ShouldProcess is a .NET method on an automatic context variable ($PSCmdlet), testing frameworks like Pester aren’t able to replace its implementation with a mock.

Fortunately, there is a way to make ShouldProcess-adhering commands fully testable. It just requires a little extra work: wrapping ShouldProcess in a PowerShell function of its own.

Wrapping ShouldProcess

Here’s a PowerShell function that wraps the ShouldProcess method and makes it possible to use mocks to bypass the prompting.

function Invoke-ShouldProcess {
    # Suppress the PSShouldProcess warning since we're wrapping ShouldProcess to be able to mock it from a Pester test.
    # Info on suppressing PSScriptAnalyzer rules can be found here:
    # https://github.com/PowerShell/PSScriptAnalyzer/blob/master/README.md#suppressing-rules
    [Diagnostics.CodeAnalysis.SuppressMessageAttribute('PSShouldProcess', '')]
    [CmdletBinding()]
    param (
        [Parameter(Mandatory)]
        [System.Management.Automation.PSCmdlet]
        $Context,
        [Parameter(Mandatory)]
        [string]
        $Target,
        [Parameter(Mandatory)]
        [string]
        $Operation,
        [string]
        $Message
    )
    if ($Message) {
        $Context.ShouldProcess($Message, $Target, $Operation)
    } else {
        $Context.ShouldProcess($Target, $Operation)
    }
}

Actually, there are four overloads of the ShouldProcess method, but the code above only handles the two I find most useful.
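
For example, the -Message parameter maps to the three-argument overload, which lets you customize the confirmation text. A small sketch (the strings are made up, and the call must run inside an advanced function so that $PSCmdlet exists):

Invoke-ShouldProcess -Context $PSCmdlet `
    -Target "Something" `
    -Operation "Permanently remove" `
    -Message "This permanently removes Something and cannot be undone."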

The next step is to replace the direct ShouldProcess call in the command with the wrapper.

function Remove-Something {
    [CmdletBinding(SupportsShouldProcess)]
    param()
    if (Invoke-ShouldProcess -Context $PSCmdlet -Target "Something" -Operation "Permanently remove") {
        # Code that performs the destructive operation
    }
}

Note that you need to provide the command’s $PSCmdlet variable as the context on which ShouldProcess should be invoked.

Mocking Invoke-ShouldProcess

Now our command is ready for some Pester testing. Here are a couple of test stubs that show you how to mock the wrapper. Hopefully, you’ll be able to take it from here and start testing your own ShouldProcess-aware commands.

It 'Should perform the operation if the user confirms' {
    Mock Invoke-ShouldProcess { $true }

    Remove-Something
        
    # Code that verifies that the operation was executed
}

It 'Should not perform the operation if the user aborts' {
    Mock Invoke-ShouldProcess { $false }

    Remove-Something

    # Code that verifies that the operation was not executed
}
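
One way to make the verification concrete is to extract the destructive code into a helper function and assert on it with Pester’s Should -Invoke. A minimal sketch in Pester 5 syntax, where Remove-ItemForReal is a hypothetical helper that Remove-Something is assumed to call:

It 'Should perform the operation if the user confirms' {
    # Remove-ItemForReal is a made-up helper holding the destructive code
    Mock Invoke-ShouldProcess { $true }
    Mock Remove-ItemForReal

    Remove-Something

    Should -Invoke Remove-ItemForReal -Exactly -Times 1
}

The abort case is the mirror image: mock Invoke-ShouldProcess to return $false and assert -Times 0 instead.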

Introducing TDDSeams

I have created a PowerShell module named TDDSeams which has wrappers for ShouldProcess and its sister ShouldContinue, as well as a couple of helper methods that implement the best practice behavior described by Kevin Marquette in his excellent post Everything you wanted to know about ShouldProcess.

You can install the TDDSeams module from an elevated PowerShell with the following command.

Install-Module TDDSeams
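
Assuming the module’s wrapper keeps the same shape as the hand-rolled one above (check Get-Command -Module TDDSeams for the exact function names and parameters), usage would look something like this:

# Sketch only: the function name and parameters are assumed, not verified
if (Invoke-TDDShouldProcess -Context $PSCmdlet -Target "Something" -Operation "Permanently remove") {
    # Code that performs the destructive operation
}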

Also, if you are interested in test-driven PowerShell development, check out my previous post PowerShell TDD: Testing CmdletBinding and OutputType.

PowerShell TDD: Testing CmdletBinding and OutputType

July 11th, 2022

A while ago, I decided to add PowerShell to my automation toolbox. I believe the best way to learn a new language is to do test-driven development, and PowerShell has a fantastic module for this, called Pester. The framework is super intuitive, easy to do mocking in, and lets you drive your code design from tests.

However, one thing bugged me. I didn’t seem to be able to write tests that checked whether a function had declared CmdletBinding (i.e. was an advanced function), or whether it had declared OutputType.

# How can I test this?
function MyFunction {
    [CmdletBinding(SupportsShouldProcess, ConfirmImpact='High')]
    [OutputType('String')]
    ...
}

Google didn’t have anything useful on this subject, so I tried my luck on StackOverflow, where mclayton pointed me in the right direction (although, as mclayton puts it: it’s a bit of a mouthful). It turns out that I can use the built-in Abstract Syntax Tree (AST), and specifically the param block attributes, to find out whether CmdletBinding is declared.
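
If you want to poke around yourself, the relevant part of the AST is easy to reach from a PowerShell prompt. A quick sketch, assuming MyFunction above is defined with a param() block (as advanced functions are):

$command = Get-Command -Name MyFunction
$command.ScriptBlock.Ast.Body.ParamBlock.Attributes |
    ForEach-Object { $_.TypeName.FullName }   # lists e.g. CmdletBinding and OutputType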

Testing CmdletBinding

The below function takes a command as input and looks for a CmdletBinding param block attribute.

function Test-CmdletBinding {
    [OutputType([Bool])]
    [CmdletBinding()]
    param (
        # The command to inspect, e.g. the output of Get-Command
        [Parameter(Mandatory)]
        [System.Management.Automation.CommandInfo]
        $Command
    )

    $attribute = $Command.ScriptBlock.Ast.Body.ParamBlock.Attributes |
        Where-Object { $_.TypeName.FullName -eq 'CmdletBinding' }

    $null -ne $attribute
}

You can then use the helper function in a Pester test, like this.

It "Should be an advanced function" {
    $c = Get-Command -Name MyCommand
    Test-CmdletBinding $c | Should -BeTrue
}

Testing CmdletBinding Arguments

That’s great, but what about arguments, like SupportsShouldProcess or ConfirmImpact?

[CmdletBinding(SupportsShouldProcess, ConfirmImpact='High')]

How can I test for those? Well, this is where the mouthful bits come in, I guess, but the good news is that it’s doable. Here’s a helper function that can test those scenarios. It takes a command, an argument name, and an optional argument value, and returns true if the command meets those conditions.

function Test-CmdletBindingArgument {
    [CmdletBinding()]
    [OutputType([Bool])]
    param (
        # The command to inspect, e.g. the output of Get-Command
        [Parameter(Mandatory)]
        [System.Management.Automation.CommandInfo]
        $Command,
        [Parameter(Mandatory)]
        [string]
        $ArgumentName,
        [Parameter()]
        [string]
        $ArgumentValue
    )

    $attribute = $Command.ScriptBlock.Ast.Body.ParamBlock.Attributes |
        Where-Object { $_.TypeName.FullName -eq 'CmdletBinding' }

    if ($attribute) {
        $argument = $attribute.NamedArguments |
            Where-Object { $_.ArgumentName -eq $ArgumentName }

        if ($null -eq $argument) {
            # The attribute does not have the argument, return false
            $false
        } elseif ($argument.ExpressionOmitted) {
            # Implicit $true, e.g. [CmdletBinding(SupportsShouldProcess)]
            $ArgumentValue -eq '' -or $ArgumentValue -eq $true
        } elseif ($argument.Argument.Extent.Text -eq '$true') {
            # Explicit $true, e.g. [CmdletBinding(SupportsShouldProcess=$true)]
            $ArgumentValue -eq '' -or $ArgumentValue -eq $true
        } elseif ($argument.Argument.Extent.Text -eq '$false') {
            # Explicit $false counts as not having the argument declared
            $ArgumentValue -eq $false
        } else {
            # A value argument, e.g. [CmdletBinding(ConfirmImpact='High')]
            $ArgumentValue -eq $argument.Argument.Value
        }
    } else {
        # No such attribute exists on the command, return false
        $false
    }
}

The code handles both implicit and explicit values, e.g.

[CmdletBinding(SomeArgument)]
[CmdletBinding(SomeArgument=$true)]  # same as declaring the argument without a value
[CmdletBinding(SomeArgument=$false)] # same as not declaring the argument at all
[CmdletBinding(SomeArgument='Some Value')]

Here are a couple of examples of Pester tests using the helper function.

It "Should have CmdletBinding with ConfirmImpact set to High" {
    $c = Get-Command -Name MyCommand
    Test-CmdletBindingArgument $c -ArgumentName 'ConfirmImpact' -ArgumentValue 'High' | Should -BeTrue
}

It "Should have SupportsShouldProcess declared" {
    $c = Get-Command -Name MyCommand
    Test-CmdletBindingArgument $c -ArgumentName 'SupportsShouldProcess' | Should -BeTrue
}

Testing OutputType

Testing OutputType requires a slightly different approach. Since the OutputType attribute declares a type, we have to access it through the attribute’s positional arguments (instead of the named arguments we used for CmdletBinding).

Here’s a helper function to verify that a given command has declared a given type as its output type.

function Test-OutputType {
    [OutputType([Bool])]
    [CmdletBinding()]
    param (
        # The command to inspect, e.g. the output of Get-Command
        [Parameter(Mandatory)]
        [System.Management.Automation.CommandInfo]
        $Command,
        [Parameter(Mandatory)]
        [string]
        $TypeName
    )

    $attribute = $Command.ScriptBlock.Ast.Body.ParamBlock.Attributes |
        Where-Object { $_.TypeName.FullName -eq 'OutputType' }

    if ($attribute) {
        $argument = $attribute.PositionalArguments | Where-Object {
            if ($_.StaticType.Name -eq 'String') {
                # Declared as a string, e.g. [OutputType('Bool')]
                $_.Value -eq $TypeName
            } elseif ($_.StaticType.Name -eq 'Type') {
                # Declared as a type literal, e.g. [OutputType([Bool])]
                $_.TypeName.Name -eq $TypeName
            } else {
                $false
            }
        }
        if ($argument) {
            $true
        } else {
            $false
        }
    } else {
        $false
    }
}

Note that a type can be declared either as a string or as a type literal; the helper function handles both cases:

[OutputType([Bool])]
[OutputType('System.Bool')]

And here’s an example of how it can be used within a Pester test.

It "Should have output type Bool" {
    $c = Get-Command -Name MyCommand
    Test-OutputType $c -TypeName 'Bool' | Should -BeTrue
}

Introducing TDDUtils

I have created a PowerShell module called TDDUtils, which contains the above functions (named with a TDD prefix), as well as more general versions that allow you to test attributes other than CmdletBinding and OutputType.

You can install it from an Administrator PowerShell with the command below.

Install-Module TDDUtils
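
Assuming the exported functions simply add the TDD prefix to the names used above (check Get-Command -Module TDDUtils for the actual names), a Pester test could look something like this:

# Hypothetical names based on the TDD prefix convention described above
$c = Get-Command -Name MyCommand
Test-TDDCmdletBinding $c | Should -BeTrue
Test-TDDCmdletBindingArgument $c -ArgumentName 'SupportsShouldProcess' | Should -BeTrue
Test-TDDOutputType $c -TypeName 'Bool' | Should -BeTrue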

I plan to add more useful functions to that module as I go on with my PowerShell TDD journey, and if you want to contribute or just give me some suggestions, feel free to contact me on Twitter.

New Tutorial: Getting Started With UISpec on XCode 4

July 8th, 2011

One of the most popular pages on this website is the tutorial that helps you get started with UISpec; I guess there’s a great demand for acceptance testing frameworks on Objective-C.

Anyway, I have just made the switch from XCode 3 to XCode 4, a huge improvement. A lot has changed in the user interface, so I decided to upgrade the UISpec tutorial, but kept the old one in case someone prefers the old IDE:

Cheers!

Automatic Acceptance Testing for iPhone

February 23rd, 2011

I’ve been playing around with XCode and the iOS SDK lately, particularly with various alternatives for achieving automatic acceptance testing. (Which I believe is the only long-term solution to fight The Regression Testing problem.)

One of the most promising solutions I’ve found is UISpec, modeled after Ruby’s mighty popular RSpec. Even though it still feels a bit rough cut, UISpec provides an easy way to write tests that invoke your application’s user interface; no recording, just plain Objective-C.

Anyway, in case you’re interested, I published a tutorial on how to set UISpec up for your project: Getting Started With UISpec.


The Swamp of Regression Testing

January 17th, 2011

Have you ever read a call for automation of acceptance testing? Did you find the idea really compelling, but then came to the conclusion that it was too steep a mountain for your team to climb, that you weren’t in a position to make it happen? Maybe you gave it a half-hearted try that you were forced to abandon when the team was pressured to deliver “business value”? Then chances are, like me, you heard but didn’t really listen.

You must automate your acceptance testing!

Why? Because otherwise you’re bound to get bogged down by the swamp of regression testing.

Regression Testing in Incremental Development

Suppose we’re building a system using a traditional method: we design it, build it, and then test it, in that order. For the sake of simplicity, assume that the effort of acceptance testing the system is illustrated by the box below.

Figure: A box depicting the specified system; the amount of acceptance testing using a traditional approach.

Now, suppose we’re creating the same system using an agile approach; Instead of building it all in one chunk we develop the system incrementally, for instance in three iterations.

The system is divided into parts (in this example three parts: A, B and C).

Figure: An agile approach. The system is developed in increments; new functionality is added with each iteration.

Potential Releasability

Now, since we’re “agile”, acceptance testing the new functionality after each iteration is not enough. To ensure “potential releasability” of our system increments, we also need to check all previously added functionality, to make sure we didn’t accidentally break anything we’ve already built. Test people call this regression testing.

Can you spot the problem? Incremental development requires more testing. A lot more testing. For our three-iteration example, twice as much testing.

Figure: The theoretical amount of acceptance testing using an agile approach with three iterations is twice that of Waterfall.

Unfortunately, it’s even worse than that. The amount of acceptance testing grows quadratically with the number of increments. Five iterations require three times the amount of testing, nine require five times, and so on.
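
To see where these figures come from, here is a back-of-the-envelope calculation, assuming each increment adds an equal amount of functionality and all of it is re-tested every iteration:

    total testing for n increments = 1 + 2 + ... + n = n(n + 1) / 2
    ratio to testing once at the end = (n(n + 1) / 2) / n = (n + 1) / 2

For n = 3 the ratio is 2, for n = 5 it is 3, and for n = 9 it is 5, matching the numbers above.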

Figure: The amount of acceptance testing increases rapidly with the number of increments.

This is the reason many agile teams lose velocity as they go: either they yield to an increasing burden of testing (if they do what they should), or they spend an increasing amount of time fixing bugs that cost more to fix because they are discovered late (which is the symptom of an inadequately tested system).

A Negative Force

Of course, this heavily simplified model of acceptance testing is full of flaws. For example:

  • The nature of acceptance testing in a Waterfall method is very different from the acceptance testing in agile. (One might say, though, that Waterfall hides behind the false assumption that all will go well, which looks good in plans but becomes ugly in reality.)
  • One can’t compare the initial acceptance testing of newly implemented features with regression testing of the same features. The former is real testing, while the latter is more like checking. (By the way, read this interesting discussion on the difference between testing and checking: Which end of the horse.)

Still, I think it’s safe to say that there is a force working against us, a force that gets greater with time. The burden of regression testing challenges any team doing incremental development.

So, should we drop the whole agile thing and run for the nearest waterfall? Of course not! We still want to build the right system, and finding that out only at the end is not an option.

The system we want is never the system we thought we wanted.

OK, but can’t we just skip the regression testing? In practice, this is what many teams do. But it’s not a good idea. Because around The Swamp of Regression Testing lies The Forest of Poor Quality. We don’t want to go there. It’s an unpleasant and unpredictable place to be.

We really need to face the swamp. Otherwise, how can we be potentially releasable?

The only real solution is to automate testing. And I’m not talking about unit testing here. Unit testing is great but has a different purpose: unit testing verifies that the code does what you designed it to do. Acceptance testing, on the other hand, verifies that our code does what the customer (or product owner; use your favorite word for whoever pays for the stuff you build) asked for. That’s what we need to test to achieve potential releasability.

So, get out there and find a way to automate your acceptance testing. Because if you don’t, you can’t be agile.

Cheers!

Did I say don’t unit-test GUIs?

January 3rd, 2008

Isn’t life funny? Two weeks ago I stated my opinion that unit-testing graphical user interfaces isn’t worth the trouble. Now I find myself doing it, writing unit-tests for GUI components.

What happened? Did I come to my senses or go crazy (pick the expression that fits your point of view) during the Christmas holidays? No, I still think that unit-testing user interfaces is most likely a waste of time, but on some special occasions, like this one, it will pay off.

My task is to make changes to a tool box control in a legacy application. The control is full of application logic and has a strong, complicated relationship with an edit area control. There are two things I need to do: first, pull the application logic out of the tool box and break the tight relationship to the edit area; then, push that application logic deeper into the application core, so that the tool box can be used via the plug-in framework.

I figured I had two options. I could either refactor the tool box and make the changes in small steps, or I could rebuild the complete tool box logic and then change the controls to use the new programming interface.
I found the rebuild alternative too risky. The code base is full of booby traps and I have to be very careful not to introduce bugs. Therefore I decided to go with refactoring.

But refactoring requires a unit-testing harness, which of course this application doesn’t have. Trust me: you don’t want to refactor without extensive unit tests, so here I am, setting up the tests around the involved GUI controls. It’s a lot of work, especially since I don’t have a decent framework for creating mock objects, but it’ll be worth the effort once I start messing around with the code.

As a conclusion I’d like to refine the “Don’t unit-test GUI” statement to read “Don’t unit-test GUI unless you’re refactoring it, and there’s no easier way to make sure your changes don’t break anything.”

Cheers!

Don’t unit-test GUI

December 20th, 2007

I’m currently rereading parts of the book Test-Driven Development: A Practical Guide, by David Astels. It’s a great book in many ways, well worth reading, but I have objections to one particular section in the book.
The author tries to convince me that developing my user interfaces using a test-driven approach is a good thing to do. I disagree.

I love TDD, it’s one of the most powerful development techniques I know, but it’s not without limitations. For one thing, code with unit-tests attached is more difficult to change. This is often an acceptable price to pay since the benefit of producing and having the unit-tests is usually greater.

But the return on investment isn’t always bigger than the price, and sometimes the cost of change exceeds the benefit of protection. That’s why most developers won’t use TDD for experimental code, and that’s why I’m not using it to test my user interfaces.

I prefer to develop GUIs in a RAD environment, visually dragging and dropping components, moving them around, exchanging them for others if better ones are to be found. In that process unit-testing just gets in my way. After all, the GUI is supposed to be a thin layer, without business logic, so there is only so much to test.

One could theoretically test that the form contains certain key components, that they are visible, have a certain position or layout, stuff like that – but I find that kind of testing too specific for my taste.

In my opinion, unit-testing should test functionality, not usability. It shouldn’t dictate whether I decide to show a set of items in a plain list or in a combo-box. What it should do is test that the business logic of my code produces the correct set of items, and leave the graphical worries to the testers.

This brings us to something that is sometimes misunderstood: Unit-testing can never replace conventional testing. Some things are just better left to humans. Like testing user interfaces.

Cheers!

Tools of The Effective Developer: Make It Work – First!

October 29th, 2007

I’ve come across many different types of developers during my nearly two decades in the business. In my experience there are two developer character extremes: those who always seek and settle for the simplest solution, and those who seek the perfect solution, perfect in terms of efficiency, readability or code elegance.

Developers from the first group constantly create mess and agony among fellow developers. The second group contains developers who never produce anything of value, since they care more for the code than they do for the result. The optimal balance is somewhere in between, but regardless of what type of developer you are, you should always start by making it work, meaning implement the simplest solution first.

Why spend time on an implementation that isn’t likely to be the final one, you might ask. Here’s why:

  1. The simple solution helps evolve the unit-testing safety net.
  2. The simple solution provides rapid feedback, and may prevent extensive coding of the wrong feature. It is like a prototype at the code level.
  3. The simple solution is often good enough, and with a working solution ready you are less inclined to proceed and implement a more complex one unless you really have to. That helps you avoid premature optimization and premature design, which tempt you to add features that might only be needed in the future.
  4. With the simple solution in place, most integration and programming interfacing is done. That makes it easier to implement a more complex solution later.
  5. While implementing the simple solution, your knowledge of the system increases. This helps you make good decisions later on.

This may all sound simple enough to you. After all, the habit of Making It Work First comes naturally to many developers. Unfortunately, I’m not one of those developers. I still let more or less insignificant design issues consume an unnecessary amount of time. The thing is, it is hard to find the perfect design on the first try. The perfect design may not even exist, or it may cost too much to be worth the effort.

That is why I struggle to attain the habit of Making It Work First.

Previous posts in the Tools of The Effective Developer series:

  1. Tools of The Effective Developer: Personal Logs
  2. Tools of The Effective Developer: Personal Planning
  3. Tools of The Effective Developer: Programming By Intention
  4. Tools of The Effective Developer: Customer View
  5. Tools of The Effective Developer: Fail Fast!

I’m back!

October 28th, 2007

I’m back from my three week vacation!

I had a great time, but as suspected I wasn’t able to stay away from computers. In the warm evenings, just for fun, I started to implement a ray tracer in the D Programming Language.

I have been looking for a suitable project that would give me a chance to get deep into D, and a ray tracer seems to be the perfect fit. D is supposed to be great at floating point programming and now I have the chance to find out on my own.

To make it a little more interesting I have used a more top-down, breadth-first kind of approach than I normally do. I want to see how that affects a test-driven development technique. As part of the experiment I keep a detailed development log, which I plan to share with you when I reach a certain point. It could be within a week or take several months, depending on work load and inspiration level.

So stay tuned. I’ll be back with ray tracing, or other topics that come across my sphere of interest.

Cheers!

How to automate acceptance tests

October 5th, 2007

In a comment to my previous post, AC wonders how I automate acceptance testing. He considers that to be something done by real testers. Well, he’s absolutely right. I expressed myself a bit sloppily, so let me use this post to explain what I meant to say.

Acceptance testing is done by the customer to make sure she got what she ordered, to make sure you delivered the right system. This process cannot and should not be automated. What you can do, and this was what I meant to say, is automate the acceptance tests and use them during construction.

The customer defines the acceptance tests. These are valuable to the developer since they can be used to validate the system as it develops. The sooner you get hold of these test cases the better, so make sure you press the customer to produce them early. Better yet, help the customer in the process. That way you can help make the test cases automatable.

So, how do you design a test so that it can be automated? First of all, you need to stop thinking in terms of the user interface. Instead, you should describe the test, in plain text, in terms of action, function and result. Look for possible test data, edge cases and exceptional uses, and describe those as well.

The second thing you need to do is build the system in such a way that the tests can be automated. Separating the GUI from the business logic is often all you need to do to achieve that. You then automate the acceptance tests in the same way as you automate integration tests. In fact, they could even become part of your integration testing harness. The only difference is that they are defined by the customer.
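
To make that concrete, here is a minimal sketch of an acceptance test written against the business logic instead of the GUI, in Pester-style PowerShell purely for illustration; New-Order, Add-OrderItem and Get-OrderTotal are made-up functions standing in for your system’s programming interface:

Describe 'Order handling' {
    It 'Calculates the total for an order with two items' {
        # Action: exercise the business logic directly, no user interface involved
        $order = New-Order
        Add-OrderItem -Order $order -Name 'Widget' -UnitPrice 10 -Quantity 2

        # Result: the expected outcome as defined by the customer
        Get-OrderTotal -Order $order | Should -Be 20
    }
}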

I’m not saying this is easy. (Here’s a nice post which discusses when to automate.) I’m not saying it can be done for all of your acceptance tests. What I am saying is: it can be done for far more tests than you might think. For example, we are customizing a GIS system for an internal customer’s needs. The application is user controlled and involves plenty of complex actions, modifying graphical data and properties. Still, we were able to automate most of the acceptance tests.

We spent a lot of time writing code to set up fixtures, initiate actions and check the results. It was worth every second, though. You see, manually running one of our acceptance test cases usually takes several minutes. By having the computer do all the work, that time is reduced significantly, freeing up developer time. We use the automated test cases individually, almost like an extended compiler, to verify features as we implement them. And at night we run all of our acceptance tests to get feedback on how far we’ve come and to spot unexpected problems.

But the real acceptance testing is still going to be done manually, by the customer, at the end of the project. Just like AC pointed out.

Cheers!
