Posted on October 31, 2007
It is extremely difficult to track 30,000+ web hosts (in the US alone) and keep a bird's-eye view of what is being adopted in the virtualization market. I spent a few hours today clicking banners and digging up statistics to make sure that my "Who's Who" in the industry is kept up to date.
I spend a full day (8 – 10 hours) every three months doing nothing but visiting the web sites of data centers and web hosting companies just to see how certain technologies are being adopted. I’ve been really curious to see who is adopting 10G fiber for their virtual farms and what they’re doing with it.
Looking at offerings and technical papers on nearly 200 web sites, it seems that we're still stuck on iSCSI over copper when it comes to storage that powers virtual farms. I'm wondering: is 10G Ethernet still too cost-prohibitive for widespread adoption, or are we stuck in an "if it's not broken, don't fix it" rut?
Clusters of any kind should be able to be "spilled" — let me explain the term. If you take 10 drops of water and bring them together, you get a puddle with very little effort. You should be able to "spill" 10 – 30 virtual machines into various configurations without much effort. For that to happen, true storage on demand must become more of a reality. Sure, you could do that on computers that you own, but it's difficult to do on someone else's machines that you lease a portion of.
In my ideal world, every consumable resource on a virtual farm should be sold as a metered service capped by global quotas. In other words, if you pay a host for 10 GB of RAM and 2 TB of disk, you should be able to create as many virtual machines across an entire farm of multi-core machines as you like, up to your limit. You should be able to create your own storage networks as well. CPU usage should be metered and sold similar to bandwidth at the 95th percentile. You pay for what you use, for however long you use it, with a monthly base commitment.
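The 95th-percentile model borrowed from bandwidth billing is simple to sketch: collect periodic usage samples, throw away the top 5%, and bill at the highest remaining value. A minimal illustration (sample data and function name are my own, not any host's actual billing code):

```python
def percentile_95(samples):
    """Bill at the 95th percentile: discard the top 5% of samples
    and charge for the highest value that remains."""
    ordered = sorted(samples)
    index = int(len(ordered) * 0.95) - 1  # position of the 95th-percentile sample
    return ordered[max(index, 0)]

# Hypothetical month of CPU-usage samples: mostly idle, a few short bursts.
usage = [5] * 95 + [90] * 5   # 100 samples: 95 low readings, 5 spikes
print(percentile_95(usage))   # → 5; the bursts fall in the forgiven top 5%
```

The appeal for customers is exactly what the bursts show: short spikes don't inflate the bill, while sustained usage does.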
For that to happen, every user who is able to create virtual machines will need a fraction of at least one or two 10G pipes, permitting them:
- A virtual private SAN (storage area network): 2–4 Gbps
- A virtual private LAN (private interconnect): 1–2 Gbps
This gives the user dedicated internal bandwidth for their storage and interconnect needs, permitting things like many isolated MySQL clusters to operate at peak performance. Public drops (connections to the Internet) can still be gig copper. Proprietary storage solutions exist that offer this (many based on 4G fiber); however, pound for pound, proprietary solutions end up costing more in the long run than going with 10G, even at its current price tag.
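The carve-up above is just a capacity-planning check: do the per-user slices fit inside one 10G pipe? A quick sketch (the slice names and figures are illustrative, taken from the upper end of the ranges above, not from any real product):

```python
PIPE_GBPS = 10.0  # capacity of a single 10G Ethernet pipe

def fits(slices):
    """Return True if the requested bandwidth slices fit within one 10G pipe."""
    return sum(slices.values()) <= PIPE_GBPS

# One user at the top of the suggested ranges: 4G SAN + 2G LAN.
user = {"virtual_san": 4.0, "virtual_lan": 2.0}
print(fits(user))  # → True: 6G committed, 4G of headroom on the pipe
```

Even at the high end, a single user commits only 6G of the pipe, which is why a fraction of one or two 10G links per user is plausible.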
My ideal nodes in this virtual farm would have at least one dual-channel 10G Ethernet adapter, eight gig-E copper NICs (two quad-port cards), plenty of RAM (16 GB minimum), and some small SAS or SCSI drives for local needs. With chunks of a 10G pipe, virtual machines could swap to network storage without taking much of a hit when using dirty paging, making migration very easy.
Wow, I just spent a ton of money, and I haven't even talked about the switches needed to make that happen. Sure, hosts could build it; however, it would be very difficult to make affordable offerings with that kind of configuration.
So, my prediction: as soon as 10G gets cheaper, virtual racks will take over a significant portion of the shared hosting market. Customers want to pay only for what they use while being able to use whatever they need. Unless vendors start replicating real-world networks (including guaranteed internal bandwidth), it remains a goal just out of reach. I don't think we're going to see truly viable disk QoS for some time to come, so the only way to manage it remains at the spigot.