Quit Debugging!

September 17th, 2007

I have a confession to make: I used to be addicted to debugging. Yes, it’s true. When I got hooked – damn you Delphi – I wasn’t able to see the dark side, the demonic side of the debugger. It lured me into the path of quick fixes. Heed my warning: debuggers are bad!

Fortunately I’m one of the lucky few who have been able to recover from this particularly addictive behavior. I’ve been clean – thank you jUnit – for almost 5 years now. And you can do it too, you can let go of the safety zone that these integrated debuggers provide, and break free just like I did.

The first thing to do is to realize that there is a better alternative: test-driven development. To get rid of a bug, the right thing to do is not to fire up your debugger, but to write a unit-test to reveal it. If necessary, keep writing tests and go deeper and deeper into your code. Eventually the tests will tell you what is wrong, and they’ll even point out a solution for you.
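To make the idea concrete, here’s a minimal sketch (the class and the rounding bug are made up for illustration): instead of stepping through the code, write a test that reproduces the bug first, then fix the code until the test goes green.

```java
// Hypothetical example: a bug report says an order total of 3 items
// at 0.10 each comes out wrong. Instead of firing up the debugger,
// we first write a check that reproduces the bug.
import java.math.BigDecimal;

public class OrderTotalTest {
    // Buggy version: naive double arithmetic accumulates rounding error.
    static double totalBuggy(int quantity, double unitPrice) {
        double total = 0.0;
        for (int i = 0; i < quantity; i++) total += unitPrice;
        return total;
    }

    // Fixed version, driven by the failing test: use BigDecimal for money.
    static BigDecimal total(int quantity, String unitPrice) {
        return new BigDecimal(unitPrice).multiply(BigDecimal.valueOf(quantity));
    }

    public static void main(String[] args) {
        // The test that reveals the bug: 3 * 0.10 should be exactly 0.30.
        System.out.println(totalBuggy(3, 0.10) == 0.30);                      // false: rounding error
        System.out.println(total(3, "0.10").equals(new BigDecimal("0.30")));  // true
    }
}
```

The failing first line *is* the bug report, captured forever; once the fix is in, the test stays behind as a tripwire.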

I know that using a debugger may seem like a faster way to find and extinguish a bug, but that is just an illusion. Here are the reasons:

  1. TDD improves the design. Being forced to think about testability tends to divide your code into small, manageable pieces, which makes it a bad breeding ground for bugs.
  2. Tests remain useful for a long time. They become part of your testing harness, which helps protect your code against future infestations. The work spent in a debugging session, by contrast, can never be reused.
  3. Unit-testing saves time – a lot of it! While this isn’t immediately obvious, the long-term effects are huge. Think of it: all those debugging sessions can be rerun automatically at your command, whenever you want, as often as you want, and in a matter of seconds. All you have to do to achieve this is let go of the debugger and write relevant tests.
  4. Unit-testing gives you courage. There’s nothing like a good harness to make you feel invincible. I still remember the first time I felt the real power of unit-testing. I was working on a huge legacy application and had developed a new set of functionality, using TDD for the very first time. Several months later I realized I had to do a major rewrite. The rewrite was risky business and took me a couple of days to complete. When I finished, I ran the unit-tests and they all came out green! I could be confident the program worked just as before the rewrite. And the best part: I drew that conclusion from just five seconds of testing. Boy, I still get the goose bumps.

[PREACHING OFF]

Of course debuggers are useful tools. In certain situations they are even invaluable. For someone who’s new to a piece of software, they provide a great way of getting to know it. The problem is that a debugger makes you lazy, so be sure to put it away as soon as you can identify a testing strategy.

Cheers!

  1. she
    September 17th, 2007 at 12:56 | #1

    “Unit-testing gives you courage.”

    Come on man, don’t write such stuff … It reads a bit like drinking Coca-Cola brings you the cutest girls to bed … 😉

  2. September 17th, 2007 at 13:01 | #2

    “Come on man, don’t write such stuff … It reads a bit like drinking Coca-Cola brings you the cutest girls to bed”

    Your point being? 😉

  3. September 17th, 2007 at 16:07 | #3

    Debuggers are not bad – only the programmers who write them.

  4. Aare
    September 17th, 2007 at 16:49 | #4

    Yeah, debugging is too low-level and too slow! Much more useful is to write some kind of visual representation of your algorithm’s state and the values it outputs. It’s amazing how long you can manage this way without any debugging, unit-testing or even assertions. 🙂

    • September 18th, 2007 at 08:20 | #5

      I agree that a good upfront design is something that improves code quality as well. Don’t take it too far though; it’s really difficult to take everything into account beforehand. I recommend: design some, implement some (with unit-tests first), design some, and so on.

  5. David Frey
    September 17th, 2007 at 18:43 | #6

    My issue with unit tests is that it’s so easy for them to become outdated when the code that they depend on changes.

    If I commit changes to the Foo module then this can break the Bar module’s unit tests because they may depend on the previous behaviour of the Foo module.

    • September 18th, 2007 at 08:24 | #7

      David, I totally agree with Robin Luckey (see comment below) on this one. Be sure to decouple your unit-tests with mock objects if they depend upon functionality provided elsewhere. This is critical if you want to be successful with TDD.

  6. Robin Luckey
    September 17th, 2007 at 22:12 | #8

    @David,

    Your unit tests are either not really unit tests, or you are writing code first and tests second.

    How can a change to Foo break a test for Bar? Bar’s test should only test Bar, and Foo’s tests should only test Foo.

    If Bar is running Foo code during its tests, this should be changed. Bar should use mock objects to guarantee that only Bar code runs during a Bar test.

    And isn’t a unit test serving its purpose exactly when a change breaks it?
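    Robin’s advice can be sketched like this (Foo, Bar, and their methods are hypothetical, and the mock is hand-rolled rather than taken from a mocking library):

```java
// Hypothetical sketch: Bar depends on an interface, so its test can
// substitute a mock and never run any real Foo code.
interface Foo {
    int lookup(String key);
}

// Real implementation (might hit a database, the network, etc.).
class RealFoo implements Foo {
    public int lookup(String key) { return key.length() * 42; /* stand-in logic */ }
}

class Bar {
    private final Foo foo;
    Bar(Foo foo) { this.foo = foo; }
    // Bar's own logic: double whatever Foo reports.
    int doubledLookup(String key) { return 2 * foo.lookup(key); }
}

public class BarTest {
    public static void main(String[] args) {
        // Hand-rolled mock: a canned answer, no Foo logic involved.
        Foo mockFoo = key -> 7;
        Bar bar = new Bar(mockFoo);
        // Only Bar code runs; a change to RealFoo cannot break this test.
        System.out.println(bar.doubledLookup("anything") == 14); // true
    }
}
```

    Because the test constructs Bar with the mock, a behaviour change in RealFoo can never turn Bar’s test red.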

  7. September 17th, 2007 at 23:25 | #9

    Do unit tests find the location of access violations? Haha

    • September 18th, 2007 at 08:26 | #10

      Yes, James, they do. Just be sure to keep your functions/methods short.

  8. Aare
    September 18th, 2007 at 06:17 | #11

    Also, another problem with unit-testing is that it is done by a “stupid robot”, i.e. preprogrammed tests. Basically, all these algorithms do is look at the variables and assert if they don’t match a “valid values” template. Any brain-dead programmer can do that :). What we need instead is to write some kind of “visualization units” that, instead of testing, would output variables to the screen in an easy-to-read form, preferably with previous execution times and iteration history. Variables could be represented with colored pies and circles (an unfilled circle indicating a null pointer, for example). From this two-dimensional table it would be really straightforward to see what changed between executions and why the new changes are not working.

    • September 18th, 2007 at 08:36 | #12

      Aare, no brain-dead programmer can do unit-testing as fast as the computer. And even brain-dead programmers get bored testing the same thing over and over and over and over again 🙂

      I might be missing the point with your idea of visualization-units, but why involve a human being? Why not let the computer analyze?

      • Aare
        September 18th, 2007 at 17:53 | #13

        Because, same as with optimization, premature automation is the root of all evil 🙂

        But seriously, because in the end it is not the computer that analyzes things, but you yourself. And you are not even analyzing them directly, but from a distance, by writing code in a separate source file. It’s kind of like checking the weather with a C program instead of looking out of the window yourself.

        And do we really need that much speed in testing? If changes to the source code are small, they are done incrementally to a single function of a single class. Then only a few test cases are affected by those changes, and manually iterating through them would probably be just as fast as writing the changes. (Assuming we had some kind of visualization-unit-based testing framework.)

        And when the code changes are big, even our brain-dead programmer would get through all the test cases much faster than it would take to rewrite and re-automate all those complex and long-forgotten unit-tests.

        Yeah, maybe I have some kind of automation phobia, but I really prefer testing everything manually. One reason is that programming work in itself is a lot like “testing everything manually”. 🙂

        • September 19th, 2007 at 07:32 | #14

          OK, I can see that you like doing things manually. But I have seen the light, and I’ll never go back down that road 🙂

        • Aare
          September 19th, 2007 at 12:17 | #15

          Actually, the testing method I’m suggesting can also be presented in the opposite light. All I’m trying to do is automate testing as much as possible in cases where fully automated unit-testing is not possible – in so-called system- and application-level testing. For example, if you are using some physics engine, the algorithm’s outputs cannot be verified with plain asserts. Then it would be nice to have a framework for iterating through test cases, so that a human can “see” the algorithm’s inner workings and compare and analyze them between executions.

  9. rickard
    September 18th, 2007 at 07:03 | #16

    The problem with traditional unit testing is the absurd amount of tests it generates. Each test covers only a fraction of all possible situations. As you put it: “If necessary, keep writing tests and go deeper and deeper into your code”. After a while you end up with a lot of test code, often of very low quality, since you copy-paste old tests, changing maybe just a particular input value. Maintaining all this test code is no fun.

    Better, then, is to use a property-based testing framework, like QuickCheck for Haskell or ScalaCheck for Scala. Instead of specifying concrete test cases JUnit-style, you specify _properties_ that describe the behaviour of your units. The framework then automatically generates loads of test cases and checks the properties against them. Now, instead of a dozen concrete tests, you have one general property that describes your unit, which is a lot easier to understand and maintain. Also, since the framework is able to generate many more test cases than you would ever write or even think of, test coverage improves drastically.
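    A QuickCheck-style property is easy to sketch by hand (this toy harness is mine, not ScalaCheck’s API): state one property, and let the computer generate the concrete cases.

```java
import java.util.Arrays;
import java.util.Random;

public class ReversePropertyTest {
    // The unit under test: reverse an int array.
    static int[] reverse(int[] xs) {
        int[] ys = new int[xs.length];
        for (int i = 0; i < xs.length; i++) ys[i] = xs[xs.length - 1 - i];
        return ys;
    }

    public static void main(String[] args) {
        Random rng = new Random(42); // fixed seed: reproducible runs
        // Property: reversing twice gives back the original array.
        // The framework's job: generate many random inputs automatically.
        for (int run = 0; run < 1000; run++) {
            int[] xs = rng.ints(rng.nextInt(20), -100, 100).toArray();
            if (!Arrays.equals(xs, reverse(reverse(xs))))
                throw new AssertionError("Property failed for " + Arrays.toString(xs));
        }
        System.out.println("Property held for 1000 generated cases");
    }
}
```

    One property line replaces a pile of hand-written concrete cases, and a real framework would additionally shrink any failing input to a minimal counterexample.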

  10. September 18th, 2007 at 08:48 | #17

    Rickard: You are right, the biggest problem with unit-testing is the amount of code it produces and the maintenance it requires. But what applies to production code applies to test code as well: keep it DRY. Copying and pasting is not a good strategy, not even for testing.

    With that said, I’ve never heard of a property-based testing framework like the one you describe. It sounds really interesting. Do you know of any such framework for other languages? (I haven’t had the fortune to learn Haskell or Scala yet.)

  11. Michael
    September 18th, 2007 at 20:23 | #18

    Um, testing *is* a form of debugging! A nice debugger makes it pleasant to step through a bit of code in such a way that you easily grasp what is happening over time. If you aren’t using a debugger along with your testing, I would suspect there are some real “jewels” in your code that nevertheless pass all the tests.

    • September 19th, 2007 at 07:46 | #19

      “A nice debugger makes it pleasant”

      There is the danger. Soon you’ll find yourself relying on your debugger, and your unit-tests become second-class citizens (so to speak).

      Regarding the “jewels”, that risk exists for every program – regardless of the testing technique. It’s how you tackle the bug when it surfaces that matters. I just happen to believe that unit-testing reduces the risk of “jewels” in your code.

      Of course I use debuggers every now and then, especially with legacy code, code I haven’t written myself. But I have found that with proper test-driven development, I rarely need the debugger.

  12. September 19th, 2007 at 23:00 | #20

    If you really want safety, check out OCaml and (if you’re feeling daring) Haskell. The type system can guarantee you all kinds of crap which OO people can only dream about.

    Raganwald did a nice post on it:
    http://weblog.raganwald.com/2007/07/can-your-type-checking-system-do-this.html

  13. Ric
    September 16th, 2011 at 05:42 | #21

    Fascinating read… from a test neophyte.
    Learning a lot this evening as I read through the article & comments.
