Wednesday, 2 June 2010

VDI or Cloudtop Computing?

There are two interesting and complementary rules in IT:
  1. When a technology takes off, vendors will keep trying to find “the next big thing” to earn more of your business.
  2. Until something takes off, most IT staff doesn’t have time to worry about it.
Virtualization is one of those things that has kept IT shops humming over the last few… several… many years. When it first came out I was in an R&D group at a mid-sized company, and from that perspective I was able to grasp the early adopter use case and run with it. That use case – consolidation of underutilized servers – is still one of the primary drivers for virtualization in the data center, though it is growing less important as other uses come to the fore. Enterprises had a lot of those underutilized servers because budgeting was done by project, and Larry Ellison’s famous “lots of little servers” comment had become IT staff’s very real nightmare.


No sooner had IT departments found a solid way to put virtualized servers to work than vendors started creating/rebranding/vaporizing The Next Big Thing. I played with a virtualized firewall long before any IT person saw a need for one; storage virtualization got a shot in the arm, with vendors pushing it in their talks and touting how it helped server virtualization. Everything was going to be virtualized… and one day it may well be.
And one of the things vendors jumped on that IT just hasn’t really bought into yet is virtualized desktops. Vendors hype it, they pay analyst firms for quotes about the wave of the future, but you don’t see 10 bazillion enterprises running virtual desktops. Does that mean Virtual Desktop Infrastructure (VDI) is a failure? Not. Quite. Yet.
There is an interesting phenomenon where the changing infrastructure of today sometimes points the way to the technologies of tomorrow. Apple’s change to a BSD foundation and the ensuing release of all sorts of devices running the “Apple” OS as iSOMETHING is a good example. That may be what we’re going to see with VDI.


If a large portion of your infrastructure moves to the cloud (and by cloud I mean internal or external, just so you know how I’m using the term), then you will have a lot of apps that can be accessed via web browser. Some entire groups may be able to move to the browser for all but a few categories of work. In the case of external cloud, I don’t think we’re at the point where you could save next year’s business plan out to the cloud and rest assured that it was as secure as it would be in your own data center. There are those who would argue with me, but I’m guessing none of them are security peeps.
That means that for some groups in some organizations there will soon be only limited need for internal systems. If your per-user need for compute power is dropping because it has shifted “into the cloud”, then it starts to make sense to take the next logical step and virtualize those people’s desktops, so that when they ARE running a fat app, it isn’t running on their machine. Why would you do that? Well, I think VMware has done a good job of showing the benefits in terms of security and software license compliance, but I haven’t seen anyone talk about the benefits to managing the flow of hardware. Want a Mac? Cool, we no longer have to maintain software on it, just the AV and VDI clients. I think that’s a bigger benefit than is immediately obvious: adding a desktop OS normally means a whole lot of work to make sure all of your apps run on it, but not if your desktops are virtualized.
And using VDI for remote workers means no more work installing and updating desktops and laptops all over the world. Run updates in the data center, and they’ll get them next time they log in.


Looking into the crystal ball, I see three distinct groups falling out of this: those who can be 100% cloud applications, those who can make use of VDI for the few applications they need on the desktop, and those who still require a fat desktop. Let’s look at how to figure this bit out.
  • Cloudtop – those whose systems have been moved to the cloud, that don’t have sensitive documents to work on (assuming public cloud) and don’t do a ton of traveling.
  • Virtual Desktop – those who have a few apps in the cloud but still need to get at some legacy applications, or those who need to work on sensitive materials with an external cloud provider.
  • Fat Desktop – those who need to do a lot of really intensive drawing work (architects with AutoCAD for example) or those who travel a lot. This is also the default, since everyone is here today.
There’s always something though. If you’re going to implement cloudtop computing or virtual desktop infrastructure, you’ll need a fat pipe out of your data center. Let’s face it, this is mostly traffic over and above what you’re seeing today. For every document saved to the network there is at least one that never leaves the local hard disk, and the same is true for most other computing needs. With VDI, even ALT-TAB carries a network burden. For employees on the same physical network as the virtual servers or internal cloud, this is not a big deal unless your network is already near maxed out. For remote employees – even at another company office – it starts to become a burden on the infrastructure. Products like F5’s WOM or EDGE Gateway can go a long way toward resolving these issues, but you do need to consider them before implementing.
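To get a feel for what that fat pipe needs to carry, a back-of-envelope estimate helps. The sketch below is illustrative only: the per-session bitrate, peak multiplier, and concurrency ratio are assumptions I’ve picked for the example, not figures from any vendor, so you would want to replace them with measurements from a pilot of your own display protocol.

```python
# Back-of-envelope WAN sizing for remote VDI sessions.
# All per-session figures here are illustrative assumptions, not vendor specs.

def wan_bandwidth_mbps(users, avg_kbps_per_session=150,
                       peak_multiplier=4, concurrency=0.8):
    """Estimate average and peak WAN bandwidth (Mbps) for a remote VDI population.

    users                -- number of remote workers with virtual desktops
    avg_kbps_per_session -- assumed steady-state bitrate per active session
    peak_multiplier      -- assumed burst factor (e.g. everyone ALT-TABbing at 9am)
    concurrency          -- assumed fraction of users logged in at once
    """
    active_sessions = users * concurrency
    avg_mbps = active_sessions * avg_kbps_per_session / 1000
    peak_mbps = avg_mbps * peak_multiplier
    return avg_mbps, peak_mbps

avg, peak = wan_bandwidth_mbps(500)
print(f"500 remote users: ~{avg:.0f} Mbps average, ~{peak:.0f} Mbps peak")
```

Even with these modest assumptions, 500 remote users add tens of Mbps of sustained load with bursts several times higher, which is exactly the kind of traffic WAN optimization products are meant to tame.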

Original Article - Cloud Computing Journal
