
The Great Microkernel Debate

Posted on February 19, 2009 | 4 Comments

It's been some years since the great debate between Linus Torvalds (and friends) and Andy Tanenbaum on Usenet. I've only linked to the debate itself via an encyclopedic reference; the links to the debaters more or less point to the individuals as they were when the great debate happened.

If you are a sci-fi fan, you'll realize that 2010 is soon upon us. If we go by Hollywood fiction, we should be sending a spacecraft to rescue a buggy HAL any day now. Yet we're still debating the design of the kernels that underpin our operating systems, while cramming more cores into a single processor and milking threads for all they're worth. Fibrils, did someone say fibrils? Why do programmers cover their noses, as if someone passed gas, whenever fibrils come up? Even Rusty produced the antithread.

What gives?

I love Linux, I love working with Linux, I love developing software that works strictly on Linux (who loves portability kludges? Please raise your hand), and I love getting feedback from people all over the world who use my programs. Linux is a success because it is a tight-knit, easy-to-debug, easy-to-learn kernel. And because GNU programs are so damn portable, Linux is a raging success. Hacking at Linux gives me the best of two worlds: studying very brilliant kernel developers (Linus and everyone else) while studying code that makes portability and standards compliance an art (GNU). And (to quote Arlo Guthrie), friends, GNU is a movement!

You might not realize it, but if the GNU project had finished its kernel (the HURD) in the promised amount of time, Linus would never have started work on Linux. The problem is that microkernels are typically extremely hard to debug. Every file system (and most of the other things that typically reside in 'kernel space') runs as a separate process. This presents interesting issues when all of the components of an operating system (file systems, watchdogs, etc.) need to talk to each other. A great example of this problem can be summed up in a single funny question:

Would you take a laxative and a sleeping pill at the same time?

Yet we have viable microkernels, such as virtual machine monitors, that do limited jobs very well. You might know virtual machine monitors by another name: hypervisors.

Then we have emerging microkernel-based projects such as HelenOS, which was ported to many architectures even while its component model was scarce, and which solved the IPC race problems. Are we looking at an egregious mistake when we develop for x86 first and port later, after the component model is in place? Are we blinded by the success of a single monolithic design (however brilliant)?

I will eagerly watch for and apply updates to the monolithic kernel that I use and love; however, I don't believe the monolithic design is the end-all of kernel architecture, despite the success of Linux.

Xen is now usable; HelenOS is not. In a few years' time, I suspect that both will be very usable and the 'great debate' will continue.

Meanwhile, flames are welcome; that's why I have a comment form.


4 Responses to “The Great Microkernel Debate”

  1. Manu
    February 22nd, 2009 @ 5:28 pm

    Interesting. Could you point to some papers and articles about the problem of debugging microkernels and the IPC race problem?

  2. tinkertim
    February 23rd, 2009 @ 2:05 am

    Yes, I will in a post this coming week.

  3. Manu
    March 23rd, 2009 @ 5:30 pm

    Hey, I’m still waiting for your post. I’m hungry :)

  4. tinkertim
    April 2nd, 2009 @ 7:16 pm

    If you can’t google for the same, I’ll provide some links.
