NO CARRIER

Computers, Science, Technology, Xen Virtualization, Hosting, Photography, The Internet, Geekdom And More

Breaking With Conventional Wisdom – Sometimes

Posted on March 6, 2010

Despite standards, industry-accepted best practices, free templates for planning networks and even common sense, not everyone can follow conventional wisdom when deploying a cloud. In fact, at least in my experience, the only times I have been able to do it ‘correctly from the start’ have been when starting completely from scratch. Most people want a relatively painless upgrade path, not an investment in a whole new infrastructure.

This means business nodes and storage nodes aren’t always going to have the luxury of a private interconnect. In fact, every single server in a farm may be on its own VLAN, which pretty much rules out things like OpenAIS, OpenMPI or anything else that needs multicast to work. Then we take the most uniquely challenging (some even say godforsaken) industry known to man, IaaS, and add that to the mix. It’s very typical for successful hosts to start with just a few cheap servers completely oblivious to one another and simply continue to grow in that direction.

Using tools like Xen, AoE, iSCSI, ZFS, LustreFS, Hypertable and others, you can typically reduce a host’s server footprint by 45 – 55% while raising their available capacity by approximately the same amount. As transcendent memory matures in Xen, that figure will only increase. Additionally, with the cost of InfiniBand dropping, the ideal ‘poor man’s’ SAN rapidly becomes a tenable strategy even for basement-based providers. For someone with a little cash just starting out, these tools begin to look like a brilliant rainbow leading to a pot of gold. The guy with 100 servers on different VLANs in different parts of (or in entirely separate) data centers sees things completely differently: he sees an increasingly unmanageable mess and lots of unforeseen costs in taming it.

This is not the first time I have said it: enterprise distros do not pay enough attention to the industry that gives them the broadest data center penetration. There is a lot of “this is the proper way to do it, so let’s assume everyone can and will do it that way” thinking that clogs the arteries when it comes to adopting modern methods.

I was having a not-untypical chat with Steven Maresca; both of us have been working with (and deploying) Xen since the early days of v2. I was describing the pain involved in keeping track of which guest is on which server when dealing with central storage. Obviously, you don’t want to start a guest on more than one node if it’s not using a cluster file system. The most elegant answer to this is a (more or less) centralized version of Xenstore, with watches running on every dom0 updating the central copy in (close to) real time.
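To make that concrete, here is a minimal sketch of the dom0 side. It assumes only the stock xenstore-watch tool from the Xen utilities; the central host, port and wire format are placeholders I invented for illustration:

    #!/usr/bin/env python
    # Hypothetical sketch: forward Xenstore watch events from one dom0 to a
    # central inventory service over TCP. Relies on the stock xenstore-watch
    # CLI; the host, port and line format below are invented for the example.
    import socket
    import subprocess

    CENTRAL_HOST = "inventory.example.net"  # hypothetical central store
    CENTRAL_PORT = 5150                     # hypothetical port
    NODE = socket.gethostname()

    def main():
        # Watch the whole local domain tree; xenstore-watch prints the path
        # of each node that changes, one per line.
        watch = subprocess.Popen(["xenstore-watch", "/local/domain"],
                                 stdout=subprocess.PIPE)
        conn = socket.create_connection((CENTRAL_HOST, CENTRAL_PORT))
        for line in iter(watch.stdout.readline, b""):
            # Tell the central copy which dom0 saw the change; the receiver
            # can reread the path itself to pick up the new value.
            conn.sendall(NODE.encode() + b" " + line.strip() + b"\n")

    if __name__ == "__main__":
        main()

The receiver just folds those lines into its own tree, which keeps the central copy within a watch-delivery of reality without touching the Xen tools themselves.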

Dare I go the cheapest route and just use multicast? It would be the least invasive approach, at least as far as modifications to the Xen tools are concerned. However, if I do that, I completely shut out over a third of the people who might actually find it useful enough to try and test.
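For contrast, this is roughly the shape the multicast route would take – a sketch only, with an arbitrary site-local group and message format, and it works solely on a multicast-capable segment, which is exactly the assumption I can’t make:

    # Sketch of the multicast route: every dom0 announces to a group and
    # listens for its peers. Group, port and message format are arbitrary.
    import socket
    import struct

    GROUP, PORT = "239.192.0.1", 5151  # arbitrary site-local group/port

    def announce(message):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
        sock.sendto(message.encode(), (GROUP, PORT))

    def listen():
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind(("", PORT))
        # Join the group on all interfaces.
        mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
        while True:
            data, addr = sock.recvfrom(4096)
            print(addr[0], data.decode())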

The point is (yes, I have one, but that’s not the point) that the things included in the first Gridnix release may seem questionable to some people. The code will be solid, but the methodology might seem odd to people who have only worked on enterprise networks where physical access and a complex switching fabric are a part of life. So, while, yes – multicast should be an option in the above example, it probably will NOT be available in the first release. If you can use multicast, you can also use conventional TCP sockets with a simple auth mechanism. It’s just one of a hundred examples I could think of that would have people saying “Dude, what the hell??”. As long as I don’t create races, it’s really irrelevant anyway.
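By ‘simple auth mechanism’ I mean something on the order of the following – again a sketch under my own assumptions, with the shared secret and framing invented for the example:

    # Sketch of plain TCP with a shared-secret HMAC, so a node can prove it
    # is allowed to talk to the central store. Secret and framing invented.
    import hashlib
    import hmac
    import socket

    SECRET = b"not-the-real-secret"  # distributed out of band, hypothetical

    def send_authed(host, port, message):
        body = message.encode()
        digest = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
        with socket.create_connection((host, port)) as conn:
            conn.sendall(digest.encode() + b" " + body + b"\n")

    def verify(line):
        # Receiver side: split off the digest and recompute it over the body.
        digest, _, body = line.rstrip(b"\n").partition(b" ")
        expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest().encode()
        return hmac.compare_digest(digest, expected)

Nothing clever, but it runs anywhere a unicast route exists, which is the whole point.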

I just want to make it clear that there is a method to my madness, as several “DWTH??!!”s have come in since I began sharing code with a few other people.
