Archive for November, 2007

The Firepower of Teams

November 28th, 2007 4 comments

I just finished reading an interesting post by Ben Collins-Sussman. In an attempt to argue that distributed version control is not a silver bullet, he divides programmers into two groups, which he calls the 80% and the 20%.

The 20% folks are what many would call “alpha” programmers — the leaders, trailblazers, trendsetters, the kind of folks that places like Google and Fog Creek software are obsessed with hiring. These folks were the first ones to install Linux at home in the 90’s; the people who write lisp compilers and learn Haskell on weekends “just for fun”; they actively participate in open source projects; they’re always aware of the latest, coolest new trends in programming and tools.

The 80% folks make up the bulk of the software development industry. They’re not stupid; they’re merely vocational. They went to school, learned just enough Java/C#/C++, then got a job writing internal apps for banks, governments, travel firms, law firms, etc. The world usually never sees their software. They use whatever tools Microsoft hands down to them — usually VS.NET if they’re doing C++, or maybe a GUI IDE like Eclipse or IntelliJ for Java development. They’ve never used Linux, and aren’t very interested in it anyway. Many have never even used version control. If they have, it’s only whatever tool shipped in the Microsoft box (like SourceSafe), or some ancient thing handed down to them. They know exactly enough to get their job done, then go home on the weekend and forget about computers.

Ben took a lot of heat for his oversimplification. People, it seems, don’t like to be categorized, a fact I too discovered when trying to make a distinction between Developers and Programmers. They especially don’t like to hear that they’re mediocre, not special, or worth less, which is the undertone of Ben’s 80% category.

During my military service I learned that statistically, only two soldiers in a group of eight would be effective in a combat situation. The rest would be pinned down, unable to return fire. Thus, one could say that the firepower of a squad comes from a fraction of its soldiers. I think this is often true for development teams as well. A few members account for the majority of the value produced.

The funny thing is that some of the best value-bringers I’ve met in my career do not fit the 20% alpha-geek group defined by Ben. And many of the “elite” programmers, the ones that do fit the description, have been remarkably inefficient. One reason for this could be that bleeding edge isn’t always the best strategy for achieving business value.

So, the question we should ask ourselves is not whether we’re in the 20%, it’s: Do I add to the firepower of my team? What can I do to bring more value?

Optimize Late, Benchmark Early

November 23rd, 2007 3 comments

When should you start optimizing your system? There are different opinions: Some think you should finish the implementation of all important features before starting the process of tuning. Others argue that this is way too late and instead advocate early optimizations, claiming that it’ll reduce the risk of substantial rewrites later on.

I recommend doing optimizations as late as possible. The reason is that project requirements always change – a fact which seems to be a law of nature – and an optimized system is usually harder to change. Leaving the tuning tasks for later also minimizes the risk of wasting time on insignificant optimizations that matter little to the performance of the final system.

But dealing with optimizations later does not mean you can ignore performance completely. Remember, failing fast is an important principle of software development, so you want to discover critical performance issues as early as possible. You should therefore monitor performance from the start, by creating test cases that provide benchmarks as the system develops. Not only will that give you a feel for the overall performance of the system, it will also help you measure the effects of your optimizations – when that time comes.
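In D, for instance, such a benchmark fits naturally into a built-in unittest block, run with the -unittest compiler flag. Here’s a rough sketch – sortRecords is a made-up stand-in for whatever operation you want to keep fast, and the one-second budget is just an example threshold:

```d
import std.date;  // getUTCtime and TicksPerSecond (Phobos)

// sortRecords is a stand-in for the operation under measurement
int[] sortRecords(int[] data) {
    data.sort;  // built-in array sort
    return data;
}

unittest {
    // Worst case input: 100,000 records in reverse order
    int[] data = new int[100_000];
    foreach (i, ref x; data)
        x = cast(int)(data.length - i);

    d_time start = getUTCtime();
    sortRecords(data);
    d_time elapsed = getUTCtime() - start;

    // Fail fast: alert us as soon as sorting blows its time budget
    assert(elapsed < 1 * TicksPerSecond, "sortRecords exceeded its one-second budget");
}
```

Run regularly, a test like this doubles as both an early warning system and a baseline to measure future optimizations against.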

So, Optimize Late but Benchmark Early.


Categories: software development Tags:

Logo Design

November 21st, 2007 2 comments

This is a paid review.

I’m not a graphic designer. In fact I stink at everything visual, so design work like layout, images, icons and logos is better handled by somebody else. Unfortunately this is not always possible, and I’m sometimes forced to produce crappy stuff on my own.

Therefore I got quite excited when asked to review The Logo Creator from Laughingbird Software. According to their website, anyone can create logos that “look like a Photoshop guru spent hours laboring over!” The question was: could I do it too?

I’m happy to say the answer is yes. I found working with The Logo Creator a joy. It’s easy to create your own logotype based on one of the templates. You can change, add or remove any component of the template, or create your own from scratch. This gives you a lot of creative freedom. For instance, I turned this template:

A Logo Creator Template

into this logotype:

A logotype for my website?

Which is one of the candidates for becoming the logotype of this website.

I’m not completely happy with The Logo Creator’s user interface, but since it’s so useful to me I overlook the quirkiness. That, and the price of $29.95 – there are seven available themes – makes this a product I can highly recommend. So, if you need to do some logo design of your own, check out The Logo Creator.

Now, if you’ll excuse me I have some logos to create.

Cheers! 🙂

Categories: review, software, tools Tags:

Tools of The Effective Developer: Whetstones

November 16th, 2007 1 comment

As a programmer you are the ultimate software development tool, and like any tool you need regular care to stay effective. If you don’t invest enough in self-improvement you’ll end up useless, with a blunt sword. I’ve seen many programmers wielding blunt swords, unable to fight their way out of old habits and paradigms. Don’t let that happen to you. Sharpen your sword regularly.

First and foremost you should take good care of your body. Health is the most important asset, and it’s achieved with exercise, nutritious food, and enough sleep – the hallmarks of software developers. 😉

The second most important sharpening activity is constant learning. If we stop learning we become frozen in the slice of time we call life. So try to learn something new every day.

For me, the best way to acquire new knowledge is by reading books. Since I spend so much time in front of the computer screen, a book is a welcome break. I can bring it and read it wherever I want, including the bed and the toilet.

Acquiring knowledge is one thing, making it stick is another. If you want to keep your knowledge for a long time, you should practice. The most effective practice is to teach others what you know. Gather your workmates and hold a lecture, or write about it on your weblog.

If your company allows it, or if you’re lucky enough to have spare time, start a hobby project. Try to implement something based on your new knowledge. Rereading is the worst kind of practice, but something I resort to sometimes.

My most used whetstones are reading, writing and running a million hobby projects. What are yours? Which ones would you like to have? It doesn’t matter how you do it, as long as you keep your sword sharp.

Previous posts in the Tools of The Effective Developer series:

  1. Tools of The Effective Developer: Personal Logs
  2. Tools of The Effective Developer: Personal Planning
  3. Tools of The Effective Developer: Programming By Intention
  4. Tools of The Effective Developer: Customer View
  5. Tools of The Effective Developer: Fail Fast!
  6. Tools of The Effective Developer: Make It Work – First!
Categories: habits, learning, software development, tools Tags:

Why so general?

November 14th, 2007 8 comments

I’ve been thinking a lot lately on the sad state of our business. A majority of software development projects fail in terms of money, time or quality. Why is this so? Why is developing software so expensive and so unpredictable?

Paul W. Homer touches an interesting subject in his post named Repeated Expressions. He points out that use cases, test cases and ultimately the source code are all different ways of specifying the same system. If we could cut down the redundancy, he argues, we would save time.

Others blame the methods we use for developing software. While this is true to some extent, it isn’t the whole picture. The C3 project, for example, the birthplace of Extreme Programming, was eventually canceled. So even the most agile approach may fail.

No, the main problem is most likely the complex nature of today’s software technology. But complexity is usually a sign of poor abstraction, and this fact has occupied my mind of late. The problem I see is that we are using general purpose languages to create domain-specific applications.

In a GPL we must make all kinds of low-level decisions: how to communicate, how to store the data, how to handle security, etc. We make our decisions based on technology, and once we make our choice we’re usually stuck with it. Wouldn’t it be great if we had a language in which we could focus on the business logic instead – if technology were abstracted away? That would not only save us a lot of time, it would enable us to exchange one solution for another by running our code through a different compiler.

Is this an unrealistic approach? I don’t think so. The language would have to be domain specific, and you’d have to trade some creative freedom for the increased productivity. But many applications produced today don’t need that freedom anyway. In fact, many applications would benefit from being restricted and having good choices made for them.

Take websites as an example. Most of them have a similar structure, similar ways of communicating, similar ways of storing data, etc. Creating a DSL that supports the basic features of a web page is certainly possible, although you’d probably need pretty advanced layout and graphical features not to impose too many restrictions on the visual design.
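To make the idea concrete, a specification in such a hypothetical website DSL might look something like this. The syntax is pure invention on my part, just to illustrate the level of abstraction I have in mind:

```
site TravelBooker {
  page BookTrip {
    form Booking {
      field departure   : Date
      field destination : City
      action submit -> store Booking, notify Agent
    }
  }
  // How the data is stored, which protocol is used, and how
  // the pages are rendered is entirely up to the chosen compiler.
}
```

Notice that nothing here mentions HTTP, SQL or HTML – those decisions belong to the compiler, not the specification.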

The real problem would be to implement the different compilers to support the language. Communication protocols, data storage, formats, security, platform independence – all the things that make web development difficult would have to be addressed by the compiler. A worthy task indeed, but – as with all complex things – better dealt with in one place. And think of the possibilities it would bring: one could build a compiler that generates an AMP site, one that generates the combination of ASP.NET and SQL Server, JSF, Ruby on Rails, you name it. All from the same specification.

Wouldn’t that be great?


Categories: software development Tags:

The Extinction of Programmers

November 9th, 2007 31 comments

I read an article named The Future of Software Development by Alex Iskold. He predicts a future where only a few high-quality software engineers will be able to serve the world’s need for computer systems.

With a bit of discipline and a ton of passion, high quality engineers are able to put together systems of great complexity on their own.

The idea is that fewer but more specialized people will be able to do more in less time.

Equipped with a modern programming language, great libraries, and agile methods, a couple of smart guys in the garage can get things done much better and faster than an army of mediocre developers.

I’d like to take this prediction a bit further. Well, quite a bit further actually. I see the extinction of software engineers altogether.

Programming as we know it is a tedious, highly repetitive and error-prone business. It’s the task of telling the computer how to do things, rather than what to do. In other words: we are still dealing with computers at a very low level. To me, programming sounds like a task suitable for computers.

I see a future where we tell the computer, a kind of super-compiler, what we want to achieve. The input is our specification, and the output is a complete and tested system. All we have to do is specify, verify and then push the deploy button.

That would, using my definition, make us Developers rather than Programmers. There’ll be, as Alex pointed out, room for fewer and fewer programmers until they finally face extinction. The last programmer will probably live to see the super-compiler’s AI get clever enough to perform self-debugging and self-improvement. (I didn’t say this was in the near future.)

When I mentally put myself into this version of the future, a part of me protests: Where’s the fun in that? Well, I guess that’s just a sign of wrong focus. That part of me still embraces technology rather than business value.


Categories: software development Tags:

D Gotchas

November 6th, 2007 21 comments

I have been playing around with the D Programming Language lately, and I love it. D combines the low-level control of C with modern productivity features like garbage collection, a built-in unit-testing framework and – the most recent addition – real closures.

But D is still a young language, and as such a little rough around the edges. It jumps out and bites me every now and then, forcing me to change some of my most common coding habits. Here are a couple of gotchas that made me trip more than once.

Premature Conversion

What’s the value of the variable after the assignment below?

real a = 5/2;

In D, the answer is 2. The reason for this unintuitive behavior is D’s arithmetic conversion rules, which take only the operand types into consideration. A division between two integers results in an integer as well. The desired type, the type of the resulting variable, is disregarded.

To get the desired result we need to convert at least one of the operands to a floating-point number. This can be done in two ways. Either literally:

real a = 5.0/2; // a=>2.5

Or with an explicit cast:

real a = cast(real)5/2; // a=>2.5

Note that you must convert the operand, not the result. So this won’t work:

real a = cast(real)(5/2); // a=>2

The Premature Conversion gotcha is a particularly nasty one. It compiles and runs, which means only testing can reveal this bug.
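Since the compiler accepts it silently, a unit test is the only safety net. Here’s a minimal sketch using D’s built-in unittest blocks – half is a made-up helper, purely for illustration:

```d
// Made-up helper containing the gotcha: integer division of the parameters
real half(int numerator, int denominator) {
    return numerator / denominator;  // bug: result truncated before conversion to real
}

unittest {
    // This assertion fails – half(5, 2) returns 2.0, not 2.5 –
    // which is exactly how testing exposes the premature conversion.
    assert(half(5, 2) == 2.5, "half(5, 2) should be 2.5");
}
```

The fix, as above, is to convert one of the operands before the division: return cast(real)numerator / denominator;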

Testing for Identity

I’m a defensive programmer. I like to put assertions whenever I make assumptions in my code. By far, the most common assumption I make, is that an object is assigned. Here’s how I normally do it:

assert(o != null, "o should be assigned!");

In D, this is a big gotcha. The code above works as long as o is not null. If o is unassigned, we’ll get a nasty Access Violation Error. Here’s another example:

SomeObject o = null;
if (o == null) // <= Access Violation
  o = new SomeObject;

The reason is that D supports overloaded operators, in this case the equality operators (== and !=). Unlike Java, D converts the equality operator into a method call without checking for null references. So, internally, the above code gets converted to the following:

SomeObject o = null;
if (o.opEquals(null))
  o = new SomeObject;

Since o is null, the call to opEquals results in an Access Violation. Instead, you should use the is operator to check for identity:

if (o is null) ...


assert(o !is null, ...)

Despite the tripping, I actually like the idea of a separate identity operator. After all, “is a equivalent to b?” is a different question than “are a and b the same object?”. But, as we say in Sweden, It’s difficult to teach old dogs to sit.


Categories: D Programming Language Tags:

My Article On D Published

November 4th, 2007 No comments

An article I wrote for Sweden’s biggest computer magazine, Datormagazin, was published in the November issue. It’s an introduction to D, and I guess it’s the first time D gets mentioned in the Swedish computer press.

Categories: D Programming Language Tags:

D got real closures

November 3rd, 2007 No comments

Thank you Vladimir for bringing this to my attention.

I have reported on this blog that D doesn’t have real closures. My opinion was that it didn’t matter all that much, but many people thought otherwise. Now it seems D got them after all. The latest release of the experimental 2.0 version announces full closure support.
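For those wondering what “real closures” means in practice: a local variable can now outlive the function that created it. A minimal sketch (makeCounter is my own example, written against the experimental 2.0 compiler):

```d
// makeCounter returns a delegate that keeps its own private counter
int delegate() makeCounter() {
    int count = 0;               // with real closures, count is moved to the heap
    return { return ++count; };  // delegate literal capturing count
}

void main() {
    auto next = makeCounter();
    next();  // returns 1
    next();  // returns 2 – count lives on after makeCounter has returned
}
```

Before this release, the delegate would have referenced a stack frame that no longer existed – which is exactly the limitation I wrote about earlier.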

It would be interesting to know if the implementation is based on the solution that Julio César outlined in his blog reaction. Not that it matters.


Categories: D Programming Language Tags: