Posted on May 22, 2008
In short, yes, but the quality and originality of the releases might dwindle. Mark Shuttleworth (funder and founder of Ubuntu) wants to explore the possibility of all major GNU/Linux distributions releasing new versions on the same schedule. Major distributions here means Red Hat, Debian, Ubuntu, etc.
The idea looks very good on paper. Software developers would have a much easier time writing applications that work well on any distribution, since this massive merge of efforts would result in more common ground. The end result would be the ‘big boys’ acting as a metronome to which everything else keeps time.
I’m not so sure this is a good idea, at least for now, for a number of reasons.
The FLOSS development model has changed. We have new tools to manage our source code: Git, Mercurial and Canonical’s own Bazaar are becoming increasingly popular, Subversion is still quite common, and CVS has gone to live in a museum. We like tools that allow us to form small teams within a project and branch off to work on various things independently.
When it’s nearly time for another release, we open what is called a merge window: a period of time for everyone to get what they have completed and made stable into one location. At this point we work out the bugs and get the software packaged for people to enjoy.
The above example illustrates just one program. A mega example of that is the Linux kernel: hundreds of people working in small teams on various subsystems, all combining their efforts in one short period of time for a release.
Now, imagine you are making your own distribution. You want your next release to include the latest and greatest version of every program offered. One by one you obtain the latest versions of everything, package them up so that they install correctly on your distribution, and then resolve your own bugs. Sometimes when one program changes, a few others stop working; as the distribution packager, it’s up to you to fix that and communicate these issues to the authors of those programs.
This process takes time. You might decide to delay your release a few days in order to include some last-minute changes from various projects. Those projects might delay their release in order to include some last-minute changes to various libraries that they use. The authors of those libraries might delay their release in order to resolve some issue with their compiler. It’s a trickle-down effect.
Mark’s plan does address this: everyone could simply agree on the versions of software that will be included in each release. But this can only put releases one or more steps behind the latest stable versions available. It also means most distributions would just re-package Red Hat for the core of their operating system (kernel, drivers, compilers, core utilities, etc.).
I am a firm believer that quality software is the direct result of a programmer scratching a personal itch. For instance, my plans for my own version of GNU/Linux named “Gridnix” will begin coming to life near the end of this year. I would not be making Gridnix if existing distributions did what I wanted. If major distributions synchronize releases, they’d all be pretty much the same.
I’d want to include the latest versions of the programs that go into Gridnix, but I know too well what happens when you try to merge on a deadline. The Linux kernel is able to pull this off because it’s written by some of the best software developers the world has to offer; 95% of the userland programs offered in a GNU/Linux distribution are written by small teams of hobbyists, not Red Hat hackers.
Should I feel pressure to synchronize Gridnix releases with other operating systems that benefit from millions of dollars in funding? Should the public see my efforts as sub-standard simply because a small team of hackers can’t keep up with Ubuntu? No, absolutely not. I might add, Gridnix is based on Ubuntu Server (Hardy LTS) but is radically different in its approach.
Should home hackers feel pressured to produce new versions of their software in time for the next release window so that various distributions can pick it up? Absolutely not; we do what we do because we enjoy programming and helping our neighbor. Should research institutes that are making high-performance computing a reality for average users slow their work to keep jumping kernels every six months just to get their work included? No.
Right now there is no real pressure; everyone keeps their own comfortable release schedule, and this has worked for years. If it’s not broken (or in danger of breaking), why fix it? We can’t risk lowering the quality bar just to better appeal to commercial vendors. Microsoft did this when they lowered the Vista minimum requirements to keep Intel happy; look what happened there.
I really do like the idea of making free software more appealing to the enterprise market. I don’t think that a consolidated/synchronized release schedule is the way to go about doing this initially. If we really want to make them drool, we should slow down and take a good hard look at what companies are doing with GNU/Linux servers.
We need ease of clustering, ease of virtualization, distributed quotas, simple disaster recovery, stronger security and better administrative tools. We need to prove the benefits of disposable infrastructure and quit marrying such things to specific kernels. We need file systems that offer verifiable audit trails, better distributed storage and we need it to happen in a few clicks. This and more is what enterprise users want.
I just don’t see how a synchronized release schedule is going to help until we really re-examine and better cater to the server market. Ubuntu is proof that GNU/Linux makes a viable and functional desktop, I’ve used it since its first release. Why not focus on giving enterprise users what they want instead of what they don’t want on a predictable schedule?
I see Mark’s vision, but I think he missed a very important step.