Is agile only for elites?
I’m back from the ESRI Developer Summit. While suffering from severe jet lag I’ve spent the last couple of days in slow reflection. The biggest impact the conference had on me was a keynote given by Alan Cooper.
Early in his talk he put me in a defensive mode by stating that agile processes are bad for developing quality software. He argued that the idea of little or no upfront design is ridiculous and will result in either expensive development or crappy software.
Instead he believes in removing all outstanding uncertainties with a thorough and detailed design, thus developing a “blueprint” for a production team to follow. Additionally, in opposition to the agile manifesto, he does not seem to embrace change.
Most business executives believe that writing production code is a good thing. They assume that getting to production coding early is better than getting to it later. This is not correct. Writing production code is extremely expensive and extremely permanent. Once you’ve written it, it tends to stay written. Any changes you might make to production code are 1) harmful to the conceptual integrity of the code, and 2) distracting and annoying to the programmers. Annoying your programmers is more self-destructive to a company than is annoying the Board of Directors. The annoyed programmers hold your company’s operations, products, and morale in the palms of their hands.
So he wants us to go back to Waterfall. Doesn’t that give me the right to discard his thoughts without further reflection? No, I don’t think so.
It’s easy to forget that the “traditional” development processes were not created to make our lives as developers miserable. They emerged from common knowledge of that time, and they were formulated to address real problems. We would be foolish to disregard that experience from the past.
Let me be clear about one thing. I don’t agree with Alan. I do believe we can produce high quality software with agile methods, where design evolves with the production code. But I did, after my initial defensive reflex, find his perspective refreshing.
Alan’s talk is not published anywhere, but the general ideas are documented on his company’s website.
Software construction is slow, costly, and unpredictable.
[…]
Unpredictable is by far the nastiest of these three horsemen of the software apocalypse. Unpredictable means 1) you don’t know what you are going to get; and 2) you won’t get what you want. In badness, slow and costly pale in comparison to unpredictable.
[…]
The key, it seems, is vanquishing unpredictability, which means determining in advance what the right product design is, determining the resources necessary to build it, and doing so. As the airline pilots say, “Plan your flight, and fly your plan.”
Alan’s solution to the development problems of today is to divide work into three separate fields of responsibilities, something he calls “The Triad”.
Interaction design is design for humans, design engineering is design for computers, and production engineering is implementation. Recognizing these three separate divisions and organizing the work accordingly is something I call “The Triad.” While it cannot exist without interaction designers, it depends utterly on teasing apart the two kinds of engineering which today, in most organizations, are almost inextricably linked. It will take some heroic efforts to segregate them.
Collaboration with the customer (or users), as the agile methodologies suggest, is out of the question according to Alan. Why let the least qualified make the most important decisions, he reasons. Instead, Alan Cooper advocates the use of interaction designers (HCI experts). Thus, he identifies three key roles: design engineers, production engineers and interaction designers.
Production engineers are good programmers who are personally most highly motivated by seeing their work completed and used for practical purposes by other people. Design engineers are good programmers who are personally most highly motivated by assuring that their solutions are the most elegant and efficient possible.
Interaction designers’ motivations are very similar to those of design engineers, but interaction designers are not programmers. Although most programmers imagine that they are also excellent interaction designers, all you have to do to dissuade them of this mistaken belief is to explain that interaction designers spend much of their time interviewing users.
Alan doesn’t rule out agile methods completely. He thinks they have a place, but only as a part of the design process.
Currently there is a pitched battle raging in the programmer world between conventional engineering methods and Agile methods. What neither side sees is a path of reconciliation; where Agile and conventional methods can effectively coexist. Providentially, the Triad reconciles them very well. The lean, iterative, problem-solving work of the software design engineer is the archetype of Agile programming. The purposeful, methodical construction work of the production engineer is the quintessence of conventional software engineering, particularly the type espoused by disciples of Grady Booch’s Rational Unified Process, or RUP. Both methods are correct, but only when used at the correct time and with the correct medium.
Despite Alan’s thought-provoking keynote, I’m still a believer in agile methods for the whole development process. I think it’s possible to build robust software with little upfront design, a readiness for change, rapid feedback and customer collaboration. The problem I see is that it demands a lot more from us developers. Knowing the language and how to program the platform is no longer enough. We need system and interface design skills, as well as social skills. We also need to master important but difficult techniques like unit testing and code refactoring.
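Those last two techniques reinforce each other: a unit test pins down the observable behavior, so the code underneath can be reshaped with confidence. A minimal sketch in Python, using the standard unittest module (the pricing function and its rules are entirely hypothetical, just for illustration):

```python
import unittest


def parcel_price(weight_kg: float) -> float:
    """Hypothetical shipping price: a flat fee plus a per-kg rate,
    with a surcharge for heavy parcels."""
    price = 5.0 + 2.0 * weight_kg
    if weight_kg > 20:
        price += 10.0  # heavy-parcel surcharge
    return price


class ParcelPriceTest(unittest.TestCase):
    """These tests capture the behavior we care about; the body of
    parcel_price() can then be refactored freely as long as they stay green."""

    def test_light_parcel(self):
        self.assertEqual(parcel_price(1), 7.0)

    def test_heavy_parcel_gets_surcharge(self):
        self.assertEqual(parcel_price(21), 57.0)


if __name__ == "__main__":
    unittest.main()
```

The point is not the pricing logic but the safety net: with the tests in place, restructuring the function is a low-risk mechanical step rather than a leap of faith.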
Maybe agile is only for teams of elites?
I mostly agree with Alan, in that you can’t iterate yourself to a solution if your starting place isn’t even close. Anytime I’ve sat down and coded without fully understanding what I was trying to achieve, the results quickly got ugly.
Even though I agree with him on splitting up the different types of development, often between graphic designers, interface experts, database experts and algorithmic coders, I still think you need one single person at the top to ‘direct’ it, or it will lack a coherent vision.
And, once you know where you are going, building the system in small tight iteration cycles takes only marginally longer, but reduces significant risk. Waterfall, as we know, is extremely risky, and prone to failure.
I don’t think agile is for a team of elites (what happened to egoless programming?), but the ‘purer’ forms are definitely better optimized for smaller projects. Risking a few months of work on a hunch is quite different than risking hundreds of man-years of work.
Paul.
http://theprogrammersparadox.blogspot.com
I too think that good analysis and upfront design are key to success. I wouldn’t go as far as Alan suggests though, since I think that in many cases the product could be valuable even in an unfinished state. As long as it is stable, I see no problem in releasing increments to the customer. The key is to make it stable, and that’s why unit testing (and automated acceptance testing) is essential to the agile way of producing software.
Also, it shouldn’t come as a surprise that Alan takes this idea to the opposite extreme. One could say that he has bet his company on it, producing “blueprints” for his customers to implement.
Oh yes, and I saw that you, on your blog, elaborated on that “single person at the top”. Interesting read – as always.
Good point. Is software ever in a finished state? For me it’s always been a perpetual work-in-progress. Releases are just a bit cleaner and better tested, that’s all.
I like the idea of blueprints, I just don’t know how to get other developers to follow them correctly. I noticed Alan made a point of saying they would stay around afterwards to help get it implemented. 🙂
Also, thanks for the encouragement, it means a lot when it comes from an established and highly respected blogger like yourself. That director analogy had been bouncing around my head for a while now, but a number of different related threads kept popping up lately. Collaboration is ok, but a single unified vision is often critical to building a ‘great’ product.
Paul.
I think the issue with agile is that the tools don’t exist for it to work right. I know that is a big claim, but look at the term used above to describe one of the key tools: “code refactoring”. You start with a pile of code that is very good at describing what is to be done but very bad at describing why, and you try to have a computer work with it. That, IMHO, will never work well enough.
What is needed is for the agile design methods to be performed at or above the level of design that Alan is talking about. For example, take the UI. Being able to show the customer the UI, get feedback and then in a matter of hours implement the stuff from the interview and show them the results would be gold. To do that the dev team can’t be working with C# code. They need to be describing what the UI is, not what it does.
Being able to look at the design and “agilely” restructure it (not the implementation but the design itself) would be an extraordinary ability.
You enter a very interesting area, one that I myself give a lot of thought to occasionally. Whether it’s the tools or the languages that are failing us, I’m not sure, but the truth is that we’re spending a lot more time figuring out “how” to do things rather than “what” to do. But then again, if it were that easy there would be no need for us developers, right? 🙂
You may be interested in reading a post about how unlikely it is that traditional development can be “fixed”: Just Add Niagra! You may also be interested in checking out this new reddit for Agile: here.
Alan Cooper’s comments and recommendations about software design and construction — which is not actually his area of expertise or even interest, but happens to be mine — sound exactly like my standard comments and recommendations about interaction design — which, interestingly enough, is not mine and is his area. Both of us think the other guy’s work would be so much better this way. Interesting, huh?
I’m not sure what you mean, but it sounds interesting 🙂 Could you please elaborate on this?
Well, I’m not sure what I mean, I was just struck by the symmetry. Several ideas suggest themselves:
1. Maybe this “separate design from implementation” thing is only attractive to people who don’t know what they’re talking about, who view the “other” discipline to which they wish to apply it as only an annoying hindrance (or something like that) 😉
2. OTOH: I think Cooper is advocating for software what he has separately advocated for his own field, interaction design. And I know interaction designers who desperately want permission to work that way, and cite Cooper constantly to that end. But I don’t see that yearning in the software arena, and don’t feel it myself: in software, we see this as a step backwards. Cooper attempts to forestall this objection in his writings and website, but how many actual practitioners have to disagree with the out-freyn pundit before they win by majority rule?
3. In any case, nowhere that I know of does he address the question of how these designs get communicated to the implementers: how we translate between the quick-and-unconstrained design-centered language and the detailed, robust implementation language. In interaction design, this is the difference between “HTML written strictly to produce screen shots” and “templates and JavaScript and Flash and what-not intended to produce similar HTML”: the finished product of the designer *is* the specification for the implementer. But if I choose to design software in something unconstrained like Java or LISP, my concrete design is no help at all to my implementer working in C# or Ada: the turn-over dynamics are quite different, enough to make me wonder about the wisdom of transferring the process.
Hi Jack,
I find a separate design very attractive because I want to be able to leverage my abilities. Given the length of time it often takes to get a sophisticated software product past the 1.0 stage, if I want to build something big and complex my choices are a) take twenty years, or b) loosely rely on other people to build the right pieces.
It’s not that I am controlling for no reason: if my design is predicated on specific architectural features, someone seeing or understanding only a subset of the design may jump to the wrong conclusions. I want to allow some degree of freedom, but the final pieces have to fit together.
I try — each time I build something new — to make it bigger and better than my previous work, but I’ve definitely bumped up against my own ability to complete the whole project by myself. Big teams often produce significant compromises, as everyone gets their say; people are happy, but the ideas get watered down. To get to the next level I need to delegate the work, but not lose control of the output.
Paul.
Paul,
There’s something to what you say, of course, and some kinds of software need that separate design activity. Not all need that much dedicated energy in the design, though; maybe rocket trips to the moon (the historical context for a lot of software engineering discipline) do, for instance. But some surprisingly large and complex things seem to work better by incrementalism. Even before Linus took over, UNIX was much better described as collaborative, iterative design — and at that, substantially built from Multics and other efforts. And of course Linux specifically has been a jaw-dropping exercise in evolutionary unified design and implementation. The big question might be how to recognize which approach is more promising, before you’ve committed too much to the other one!
Hi Jack,
UNIX, I suppose, is a great example since — to me, at least — I like the older versions better. There ‘was’ a philosophy and the result was that the systems were simpler and cleaner and more understandable. Once Linux and GNU became the ‘in’ thing and more and more people contributed, the complexity sky-rocketed. Some of it admittedly because the new versions do a few more things than the older ones, but lots of it is really artificial (accidental). Where I used to ‘know’, I can now only ‘guess’.
Personally, I would love to go back to ground-zero again and write my own operating system, learning from the mistakes of the past. I would (I dream) start with the basic ideas, but extend them in a rational and consistent manner. Clean up all of the messes, right all of the wrongs. But of course, by myself I’d need hundreds if not thousands of years.
It took twenty years to get to our modern version of Linux. A committee would only produce Multics again. Are our only choices between complexity created by committee or complexity created by time? Iteration is nice but it is a slow moving beast, and it must stay close to where it started.
I always feel that modern software is only a small fraction of what it can be; what it should be; and I sense that our own past is what is holding us back (when we aren’t too busy trying to forget it).
Paul.
But pre-Linux UNIX was still collaborative and unplanned, once it escaped Bell Labs. There was a zeitgeist, but Bell/AT&T, BSD, and the many vendors all researched improvements, traded features, and evolved.
The “Big Plan” idea is what killed commercial UNIX, creating the void Linus filled. Well, it was “the war of the Big Plans,” which introduces a new factor a bit OT from this thread, I grant, but it was also true that none of the Big Plan companies was actually managing to deliver the product. It was too big for a single company, not only in the mass of code and cost of labor, but in the ownership of the vision and evolution.
(Disclaimer: I worked for one of those Big Plan Unix companies in the 1980s, on their UNIX; maybe I’m jaundiced. And in my next gig, I worked at another, a smaller company with a bigger plan, and even less delivery and follow through, so maybe I’m jaundiceder. But neither looking around at the time, nor looking back at the history, was any vendor apparent that was actually delivering the goods!)
Hi Jack,
So wouldn’t it be great, if you could take six months to a year and produce a blueprint of your ideal operating system? One that you could give to a team and be confident that it was built the way you specified it?
No more ‘big plans’, no more wars. You don’t have to set it in motion and wait for twenty years. Just experienced people building full and complete products.
Did I mention I wanted a pyramid too?
Paul.
The question is exactly whether it can be done — at all, let alone in any given boundaries like “six months.” Many people have spec’ed out operating systems in times shorter or longer, for reasons academic, commercial, or recreational. If any of them had got it right, there would have been no more … but there were, and always will be. An OS is too big, too useful, and too much fun to ever settle down long enough to be implemented and used.
Hi Jack,
It is a great question, one worthy of a real in-depth answer:
Software Blueprints
Paul.
What is the role of customers and end-users on an agile process team, sir? I’m really having a hard time understanding it.
@phoebe
Good question Phoebe, here is my take:
Agile methodologies go about that somewhat differently. Extreme Programming, for instance, requires an “on-site” customer, a representative within the project, to speed up the feedback loop. This is what allows XP teams to start producing software early and with less formal, less worked-through requirements than more traditional methodologies.
Scrum has a similar setup, with a special role, called Product Owner, responsible for the requirements. Even though it’s not required, you get the best results if the PO is “on board” and works side by side with the team to reach the goal.
End users are used in both methodologies as the ultimate testers, since both XP and Scrum support a “ship early, ship often” attitude.
Regards!