Archive for March, 2008

Is agile only for elites?

March 28th, 2008 18 comments

I’m back from the ESRI Developer Summit. While suffering from severe jet lag, I’ve spent the last couple of days in slow reflection. The biggest impact the conference had on me came from a keynote given by Alan Cooper.

Early in his talk he put me on the defensive by stating that agile processes are bad for developing quality software. Alan argues that the idea of little or no upfront design is ridiculous and will result in either high development costs or crappy software.
Instead he believes in removing all outstanding uncertainties with a thorough and detailed design, thus producing a “blueprint” for a production team to follow. Additionally, in opposition to the agile manifesto, change is not something he seems to embrace.

Most business executives believe that writing production code is a good thing. They assume that getting to production coding early is better than getting to it later. This is not correct. Writing production code is extremely expensive and extremely permanent. Once you’ve written it, it tends to stay written. Any changes you might make to production code are 1) harmful to the conceptual integrity of the code, and 2) distracting and annoying to the programmers. Annoying your programmers is more self-destructive to a company than is annoying the Board of Directors. The annoyed programmers hold your company’s operations, products, and morale in the palms of their hands.

So he wants us to go back to Waterfall. Doesn’t that give me the right to discard his thoughts without further reflection? No, I don’t think so.

It’s easy to forget that the “traditional” development processes were not created to make our lives as developers miserable. They emerged from common knowledge of that time, and they were formulated to address real problems. We would be foolish to disregard that experience from the past.

Let me be clear about one thing: I don’t agree with Alan. I do believe we can produce high-quality software with agile methods, where the design evolves together with the production code. But I did, after my initial defensive reflex, find his perspective refreshing.

Alan’s talk is not published anywhere, but the general ideas are documented on his company’s website.

Software construction is slow, costly, and unpredictable.
[…]
Unpredictable is by far the nastiest of these three horsemen of the software apocalypse. Unpredictable means 1) you don’t know what you are going to get; and 2) you won’t get what you want. In badness, slow and costly pale in comparison to unpredictable.
[…]
The key, it seems, is vanquishing unpredictability, which means determining in advance what the right product design is, determining the resources necessary to build it, and doing so. As the airline pilots say, “Plan your flight, and fly your plan.”

Alan’s solution to the development problems of today is to divide work into three separate fields of responsibilities, something he calls “The Triad”.

Interaction design is design for humans, design engineering is design for computers, and production engineering is implementation. Recognizing these three separate divisions and organizing the work accordingly is something I call “The Triad.” While it cannot exist without interaction designers, it depends utterly on teasing apart the two kinds of engineering which today, in most organizations, are almost inextricably linked. It will take some heroic efforts to segregate them.

Collaboration with the customer (or users), as the agile methodologies suggest, is out of the question according to Alan. Why let the least qualified make the most important decisions, he reasons. Instead, Alan Cooper advocates the use of interaction designers (HCI experts). Thus, he identifies three key roles: design engineers, production engineers and interaction designers.

Production engineers are good programmers who are personally most highly motivated by seeing their work completed and used for practical purposes by other people. Design engineers are good programmers who are personally most highly motivated by assuring that their solutions are the most elegant and efficient possible.
Interaction designers’ motivations are very similar to those of design engineers, but interaction designers are not programmers. Although most programmers imagine that they are also excellent interaction designers, all you have to do to dissuade them of this mistaken belief is to explain that interaction designers spend much of their time interviewing users.

Alan doesn’t rule out agile methods completely. He thinks they have a place, but only as a part of the design process.

Currently there is a pitched battle raging in the programmer world between conventional engineering methods and Agile methods. What neither side sees is a path of reconciliation; where Agile and conventional methods can effectively coexist. Providentially, the Triad reconciles them very well. The lean, iterative, problem-solving work of the software design engineer is the archetype of Agile programming. The purposeful, methodical construction work of the production engineer is the quintessence of conventional software engineering, particularly the type espoused by disciples of Grady Booch’s Rational Unified Process, or RUP. Both methods are correct, but only when used at the correct time and with the correct medium.

Despite Alan’s thought-provoking keynote, I’m still a believer in agile methods for the whole development process. I think it’s possible to build robust software with little upfront design, a readiness for change, rapid feedback and customer collaboration. The problem I see is that it demands a lot more from us developers. Knowing the language and how to program the platform is no longer enough. We need system and interface design skills, as well as social skills. We also need to master important but difficult techniques like unit testing and code refactoring.

Maybe agile is only for teams of elites?

Categories: software development

D Update Mitigates Comparison Gotcha

March 11th, 2008 No comments

The D programming language tries to make a clear distinction between comparison by equality and comparison by identity. It does so by offering two different operators, one for each purpose.

// Are a and b equal?
if (a == b) { ... }

// Are a and b the same object?
if (a is b) { ... }

As I’ve written about in a previous post, this can be a source of confusion for D newbies like myself. Someone who’s used to the comparison semantics of Java, for instance, is likely to learn to separate equality and identity the hard way. Until now, that is.

This code, where the equality operator is (wrongly) used to compare identities, used to produce a nasty Access violation error at runtime.

SomeObject a = null;
if (a == null) { ... } //<-- Access Violation

Now, with the newly released versions of the D compiler (the stable 1.028 and the experimental 2.012), this error is nicely caught by the compiler, which kindly instructs us to use the identity operator instead.

Error: use 'is' instead of '==' when comparing with null

Unfortunately, it won’t help us discover all cases of wrongly used equality operators. For instance, this piece of code still produces the dreadful runtime AV.

SomeObject a = null;
if (a == a) { ... } // <-- Access Violation

Still, the update is a big improvement and will make life easier for a lot of people learning the language.

Cheers!

Categories: D Programming Language

Off to the land of opportunity

March 10th, 2008 No comments

Next week I’m going to attend the 2008 ESRI Developer Summit in Palm Springs. It will be my first time in the USA and I’m really looking forward to the trip. Who knows, I might even meet one of you guys over there.

Categories: off-topic

OT: Rest in peace Gary

March 6th, 2008 No comments

I’m sad to see that the original author of the role-playing game AD&D, Gary Gygax, has died. I’d like to send a warm, thankful thought to the one whose work meant so much to me and my role-playing friends in our youth.

I would like the world to remember me as the guy who really enjoyed playing games and sharing his knowledge and his fun pastimes with everybody else.

Rest in peace man.

Categories: off-topic, role-playing

Who wants a sloppy workplace?

March 6th, 2008 11 comments

I’m not the kind of person who is easily annoyed, but there is one thing that gets under my skin – every time: inconsistently structured code. I hate arbitrary indentation, spacing and line breaks. It is close to impossible for me to assimilate a piece of sloppy code without first running it through a beautifier.

I ran into such code again today, and for some reason I started to reflect upon my negative reaction. What is it about messy-looking code that makes me dislike it so much? The first thing that came to my mind was this: the source code is where I spend most of my time at work, and who wants a sloppy workplace?

On second thought, that didn’t seem to hold – at least not for me. Here are a couple of pictures of my desk at work.

(Photos: My workplace 1, My workplace 2)

My computer corner at home is even worse. Clearly, I’m not a person who cares about a tidy workplace. So what’s the reason then? Why can’t I stand to look at sloppy code while I’m perfectly OK with turning my desk into a dump? Well, it beats me.

If you have an idea, please let me know.

Cheers!

Categories: habits

Virtual or Non-Virtual by Default, Do We Really Have To Choose?

March 4th, 2008 3 comments

When it comes to the question of whether methods should be virtual by default or not, there are two schools of thought. Anders Hejlsberg, lead architect of C#, describes them in an interview from 2003.

The academic school of thought says, “Everything should be virtual, because I might want to override it someday.” The pragmatic school of thought, which comes from building real applications that run in the real world, says, “We’ve got to be real careful about what we make virtual.”

As I told you in my post from last week, I have left the “pragmatic school of thought” to join the “academic” camp. The main reason was unit testing, which – in my opinion – calls for a more flexible object model than that of C#. When unit testing, I often want to use components in unusual ways, all in the name of dependency breaking, and therefore I like their methods to be virtual.
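
To make that concrete, here is a minimal sketch in Java (where instance methods are overridable unless marked final). The class and method names are my own and purely illustrative, not taken from any real project.

import java.util.Arrays;
import java.util.List;

// A class whose data-loading method would normally talk to an external system.
class ReportGenerator {

  List<String> loadRows() {
    // Imagine an expensive database or web service call here.
    throw new IllegalStateException("external system not available");
  }

  String buildReport() {
    StringBuilder report = new StringBuilder();
    for (String row : loadRows()) {
      report.append(row).append('\n');
    }
    return report.toString();
  }

}

// A test double: the external dependency is broken simply by overriding
// loadRows, which works because the method is overridable ("virtual").
class CannedReportGenerator extends ReportGenerator {

  @Override
  List<String> loadRows() {
    return Arrays.asList("row 1", "row 2"); // canned data, no external call
  }

}

In C#, the same trick only works if loadRows has been declared virtual up front, which is exactly the choice under discussion.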

But it wasn’t – and still isn’t – an easy choice; virtual by default does bring some serious problems to the table. Again, from the interview with Anders:

Every time you say virtual in an API, you are creating a call back hook. As an OS or API framework designer, you’ve got to be real careful about that. You don’t want users overriding and hooking at any arbitrary point in an API, because you cannot necessarily make those promises.

Whenever they ship a new version of the Java class libraries, breakage occurs. Whenever they introduce a new method in a base class, if someone in a derived class had a method of that same name, that method is now an override—except if it has a different return type, it no longer compiles. The problem is that Java, and also C++, does not capture the intent of the programmer with respect to virtual.

C# captures the intent better and avoids versioning problems, and Java offers the flexibility needed for unit-testing. Which is better? The answer seems to be: it depends. But, do we really have to choose? Why can’t we have both? Well, I think we can, and Java has shown the way to achieve it.

Java annotations are a powerful language feature. (The concept was rightfully stolen from C#, where it is called custom attributes.) With annotations one can attach metadata to parts of a program, giving additional information to the compiler or other external agents. In other words, annotations can be used to extend the programming language without changing the language itself.

A good example is the @Override annotation.

class SomeClass extends SomeBaseClass {

  @Override
  void someMethod() { … }

}

From the Java documentation:

[@Override indicates] that a method declaration is intended to override a method declaration in a superclass. If a method is annotated with this annotation type but does not override a superclass method, compilers are required to generate an error message.

The @Override annotation takes care of the problem where a name change to a virtual method silently breaks the behavior of derived classes or, more commonly, where a misspelled name of an intended override isn’t caught at compile time. With @Override you can help the compiler help you to fail fast: it is now possible to express your intention more clearly than in the pre-annotation days.
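
For example, a misspelled method name that would otherwise silently become a brand-new method is rejected outright. A small sketch of my own (the misspelling is deliberate):

class SomeBaseClass {

  void someMethod() { }

}

class SomeClass extends SomeBaseClass {

  @Override
  void someMehtod() { } // compile-time error: does not override any superclass method

}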

Unfortunately, Java doesn’t take this concept all the way. They could have introduced an @virtual annotation as a complement to @Override, reaching the same level of expressiveness as C# without forgoing the flexibility of the Java object model. It would be the perfect middle way, providing the best of both worlds.

class SomeBaseClass {

  @virtual
  void someMethod() { … }

}

The benefit of an annotation (or custom attribute) based solution is that it’s configurable. It would be possible to alter the compiler’s behavior based on context or environment. For instance, one could enforce the use of @virtual and @Override in production code. Additionally, one could relax the controls when necessary, like in test projects or legacy code, to mere warnings or complete silence.

Wouldn’t that be better than the all-or-nothing solutions of today?

Cheers!

Categories: C#, java, programming