
Virtual or Non-Virtual by Default, Do We Really Have To Choose?

When it comes to the question of whether methods should be virtual by default or not, there are two schools of thought. Anders Hejlsberg, lead architect of C#, describes them in an interview from 2003.

The academic school of thought says, “Everything should be virtual, because I might want to override it someday.” The pragmatic school of thought, which comes from building real applications that run in the real world, says, “We’ve got to be real careful about what we make virtual.”

As I told you in my post from last week, I have left the “pragmatic school of thought” to join the “academic” camp. The main reason was unit testing, which – in my opinion – calls for a more flexible object model than that of C#. When unit testing, I often want to use components in unusual ways, all in the name of dependency breaking, and therefore I like their methods to be virtual.
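Here is a minimal sketch of what I mean (hypothetical classes): a test subclasses a component and overrides the one method that talks to the outside world. In Java this works out of the box because every method is virtual by default; in C# it only works if the author remembered to mark the method virtual.

class OrderService {

  void placeOrder(String item) {
    // ... business logic ...
    sendConfirmation(item);
  }

  void sendConfirmation(String item) {
    // talks to a real mail server in production
  }

}

class TestableOrderService extends OrderService {

  boolean confirmationSent = false;

  void sendConfirmation(String item) {
    confirmationSent = true; // record the call instead of sending mail
  }

}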

But it wasn’t – and still isn’t – an easy pick; virtual by default brings some serious problems to the table. Again from the interview with Anders:

Every time you say virtual in an API, you are creating a call back hook. As an OS or API framework designer, you’ve got to be real careful about that. You don’t want users overriding and hooking at any arbitrary point in an API, because you cannot necessarily make those promises.

Whenever they ship a new version of the Java class libraries, breakage occurs. Whenever they introduce a new method in a base class, if someone in a derived class had a method of that same name, that method is now an override—except if it has a different return type, it no longer compiles. The problem is that Java, and also C++, does not capture the intent of the programmer with respect to virtual.
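To make the breakage concrete, here is a small sketch (hypothetical names) of the scenario Anders describes:

// Library, version 1: Widget has no refresh() method.
class Widget {

  void draw() { }

}

// Application code, written against version 1:
class MyWidget extends Widget {

  // An unrelated helper that just happens to be called refresh().
  void refresh() { }

}

// Library, version 2, later adds its own refresh() method to Widget.
// Because every Java method is virtual, MyWidget.refresh() now silently
// overrides the new library method, and library code calling refresh()
// ends up in application code that was never written as an override.
// Had the new method had a different return type, MyWidget would instead
// have stopped compiling.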

C# captures the intent better and avoids versioning problems, and Java offers the flexibility needed for unit-testing. Which is better? The answer seems to be: it depends. But, do we really have to choose? Why can’t we have both? Well, I think we can, and Java has shown the way to achieve it.

Java annotations are a powerful language feature. (The concept was rightfully stolen from C#, where it is called custom attributes.) With them one can attach metadata to parts of a program, to give additional information to the compiler or other external agents. In other words, annotations can be used to extend the programming language without changing the language itself.

A good example is the @Override annotation.

class SomeClass extends SomeBaseClass {

  @Override
  void someMethod() { … }

}

From the Java documentation:

[@Override indicates] that a method declaration is intended to override a method declaration in a superclass. If a method is annotated with this annotation type but does not override a superclass method, compilers are required to generate an error message.

The @Override annotation takes care of the problem where a name change to a virtual method silently breaks the behavior of derived classes; or, which is more common, where the misspelled name of an intended override isn’t caught at compile time. With @Override you can help the compiler help you fail fast: you can state your intention more clearly than was possible in the pre-annotation days.
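Here is a minimal sketch (hypothetical names) of that fail-fast effect:

class Base {

  void someMethod() { }

}

class Derived extends Base {

  @Override
  void someMethd() { } // typo: compile-time error, overrides nothing

}

Without the annotation, someMethd() would compile silently as a brand new method and the intended override would simply never be called.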

Unfortunately, Java doesn’t take this concept all the way. They could have introduced an @virtual annotation as a complement to @Override, and thus reached the same level of expressiveness as C# without forgoing the flexibility of the Java object model. It would be the perfect middle way, providing the best of both worlds.

class SomeBaseClass {

  @virtual
  void someMethod() { … }

}
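Such an annotation doesn’t exist in the standard library, but nothing stops us from sketching what its declaration could look like (hypothetical, and capitalized Virtual to follow Java naming conventions):

import java.lang.annotation.*;

// Hypothetical marker annotation, analogous to @Override: it records the
// intent "this method is meant to be overridden". A compiler or external
// checker could then flag overrides of methods that lack the marker.
@Documented
@Retention(RetentionPolicy.SOURCE)
@Target(ElementType.METHOD)
public @interface Virtual {
}

The annotation itself carries no behavior; the value would lie in a compiler or tool that understands it.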

The benefit of an annotation (or custom attribute) based solution is that it’s configurable. It would be possible to alter the compiler’s behavior based on context or environment. For instance, one could enforce the use of @virtual and @Override in production code, and relax the checks where necessary – in test projects or legacy code, say – to mere warnings or complete silence.
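To give a flavor of how configurable such checks could be, here is a hedged sketch of an annotation processor – the javax.annotation.processing API is standard, but the processor itself and its overrideCheck option are hypothetical – that flags overriding methods lacking @Override, with the severity chosen per project:

import java.util.Set;
import javax.annotation.processing.*;
import javax.lang.model.SourceVersion;
import javax.lang.model.element.*;
import javax.lang.model.type.DeclaredType;
import javax.lang.model.type.TypeMirror;
import javax.lang.model.util.ElementFilter;
import javax.lang.model.util.Elements;
import javax.tools.Diagnostic;

// Hypothetical processor: reports methods that override a superclass method
// without saying so with @Override. Severity is configurable per project,
// e.g. javac -AoverrideCheck=error (production), -AoverrideCheck=warning
// (tests, legacy code) or -AoverrideCheck=off.
@SupportedAnnotationTypes("*")
@SupportedOptions("overrideCheck")
public class OverrideCheckProcessor extends AbstractProcessor {

  @Override
  public SourceVersion getSupportedSourceVersion() {
    return SourceVersion.latestSupported();
  }

  @Override
  public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment roundEnv) {
    String mode = processingEnv.getOptions().getOrDefault("overrideCheck", "error");
    if (mode.equals("off")) {
      return false;
    }
    Diagnostic.Kind severity =
        mode.equals("warning") ? Diagnostic.Kind.WARNING : Diagnostic.Kind.ERROR;

    // Top-level classes only, for brevity.
    for (TypeElement type : ElementFilter.typesIn(roundEnv.getRootElements())) {
      for (ExecutableElement method : ElementFilter.methodsIn(type.getEnclosedElements())) {
        if (method.getAnnotation(Override.class) == null && overridesSomething(method, type)) {
          processingEnv.getMessager().printMessage(severity,
              "method overrides a superclass method but is not annotated with @Override", method);
        }
      }
    }
    return false; // let other processors see the annotations too
  }

  // Walks up the superclass chain looking for a method this one overrides.
  private boolean overridesSomething(ExecutableElement method, TypeElement type) {
    Elements elements = processingEnv.getElementUtils();
    TypeMirror superType = type.getSuperclass();
    while (superType instanceof DeclaredType) {
      TypeElement superClass = (TypeElement) ((DeclaredType) superType).asElement();
      for (ExecutableElement candidate : ElementFilter.methodsIn(superClass.getEnclosedElements())) {
        if (elements.overrides(method, candidate, type)) {
          return true;
        }
      }
      superType = superClass.getSuperclass();
    }
    return false;
  }
}

The same processor could just as well check for a hypothetical @virtual marker on the overridden method; the point is that the policy lives in configuration, not in the language.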

Wouldn’t that be better than the all-or-nothing solutions of today?

Cheers!

  1. December 30th, 2008 at 14:04 | #1

    hi..
    i would just like to ask whether i should learn java or c++ or any language.. what should i vote for?
    im 15 years old and i know a little of some languages but cant make up my mind on what to pursue…

    so.. what language should i learn?
    i love open source and cross-platform capabilities.. ^^

    • December 31st, 2008 at 00:16 | #2

      Well Kevin, that question does not have an easy answer so let me put it this way: The language doesn’t matter. Really. You need to master several different languages in order to become a great software developer anyway so the starting point is not all that important.
If you’re thinking in terms of Java or C++, I’d definitely go with Java. The reasons are the absence of pointers and a simpler object model.
      There are several alternatives though. Ruby, D and JavaScript are some of my favourites, but I do most work these days in C#, PHP and Object Pascal.
      Good luck with your choice.

  2. December 30th, 2008 at 14:05 | #3

    by the way, please email me on your response.. thanks
