One part of the cloudonomics debate that often gets overlooked is the effect of over-provisioning. Many people look at the numbers and say they can run a server for less money than they can buy the same capacity in the cloud. And, assuming you optimize the utilization of that server, that may be true for some of us. But that's a very big and risky assumption.
People are optimists - well, at least most of us are. We naturally believe that the application we spend our valuable time creating and perfecting will be widely used. That holds true whether the application is internal- or external-facing.
In IT projects, such optimism can be very expensive because it leads us to purchase many more servers than we actually need. On the other hand, with the typical lead time of many weeks or even months to provision new servers and storage in a traditional IT shop, it's important not to get caught with too little infrastructure. Nothing kills a new system's acceptance faster than poor performance or significant downtime due to overloaded servers. The result is that new systems typically get provisioned with far more infrastructure than they really need. When in doubt, it's better to have too much than too little.
As proof of this, consider that it is typical for an enterprise to have server utilization rates below 15%. That means that, on average, 85% of the money companies spend with IBM, HP, Dell, EMC, NetApp, Cisco and other infrastructure providers is wasted. Most would peg ideal utilization at somewhere around 70% (performance degrades above a certain level), so somewhere between $5 and $6 of every $10 we spend on hardware only enriches the vendors and adds no value to the enterprise.
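One way to read that arithmetic, as a rough sketch: of the 85% of capacity sitting idle, roughly 30% is headroom you would deliberately keep anyway if 70% is the practical ceiling, and the rest is waste. The 15% and 70% figures come from above; treating the headroom as the only legitimate idle capacity is my assumption.

    # Rough arithmetic behind the "$5-6 of every $10" figure (illustrative).
    actual_utilization = 0.15   # typical enterprise server utilization
    ideal_utilization = 0.70    # practical ceiling before performance degrades

    idle_fraction = 1 - actual_utilization           # 0.85 of capacity sits idle
    acceptable_headroom = 1 - ideal_utilization      # ~0.30 you'd keep idle anyway
    wasted_fraction = idle_fraction - acceptable_headroom   # ~0.55

    print(f"Wasted spend per $10 of hardware: ~${wasted_fraction * 10:.2f}")   # ~$5.50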
Even with virtualization we tend to over-provision. It takes a lot of discipline, planning and expense to drive utilization above 50%, and like most things in life, it gets harder the closer we get to the top. And more expensive. The automation tools, processes, monitoring and management of an optimized environment require a substantial investment of money, people and time. And how many companies are even capable of sustaining that investment?
I haven't even touched on the variability of demand. Very few systems have a stable demand curve. For business applications, there are peaks and valleys even during business hours (10-11 AM and 2-3 PM tend to be peaks, while early morning, late afternoon and lunchtime are valleys). If you own your infrastructure, you're paying for it even when you're not using it. How many people are on your systems at 3:00 in the morning?
If a company looks at its actual utilization rate for infrastructure, is it still cheaper to run it in-house? Or does the cloud look more attractive? Consider that cloud servers are on-demand, pay-as-you-go. The same goes for storage.
If you build your shiny new application to scale out - that is, to use a larger number of smaller commodity servers when demand is high - and you enable the auto-scaling features available in some clouds and cloud tools, your applications will always use what they need, and only what they need, at any time. For example, at peak you might need 20 front-end Web servers to handle the load of your application, but perhaps only one in the middle of the night. In this case a cloud infrastructure will be far less costly than in-house servers. See the demand chart below for a typical application accessed from a single geography.
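The scaling decision itself is simple arithmetic. Here is a minimal sketch, not tied to any particular cloud's auto-scaling API; the capacity-per-server and request-rate numbers are assumptions chosen to match the 20-at-peak, 1-at-night example above:

    import math

    # Minimal sketch of an auto-scaling decision (illustrative, not a real cloud API).
    REQUESTS_PER_SERVER = 500   # assumed capacity of one commodity web server
    MIN_SERVERS = 1             # always keep at least one front end running
    MAX_SERVERS = 25            # cap spend and protect downstream systems

    def desired_server_count(current_request_rate: float) -> int:
        """Return how many front-end servers the current load calls for."""
        needed = math.ceil(current_request_rate / REQUESTS_PER_SERVER)
        return max(MIN_SERVERS, min(MAX_SERVERS, needed))

    print(desired_server_count(10_000))   # 20 servers at peak
    print(desired_server_count(120))      # 1 server at 3 AM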
So, back to the point about over-provisioning. If you buy for the peak plus some percentage to ensure availability, most of the time you'll have too much infrastructure on hand. In the above chart, assume that we purchased 25 servers to cover the peak load. In that case, only 29% of the available server hours in a day are used: 174 hours out of 600 available hours (25 servers x 24 hours).
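The arithmetic, spelled out (the 174 used server-hours come from the chart referenced above; the rest is just the calculation):

    # Server-hour utilization when provisioned for the peak.
    provisioned_servers = 25
    hours_per_day = 24
    used_server_hours = 174   # servers actually needed, summed hour by hour

    available_server_hours = provisioned_servers * hours_per_day   # 600
    utilization = used_server_hours / available_server_hours       # 0.29

    print(f"Utilization: {utilization:.0%}")   # 29%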
Now, if you take the simple math a step further, you can see that if your internal cost per hour is $1 (for simplicity), then the cloud cost would need to be $3.45 to approach equivalency ($1 / 0.29). A well-architected application that uses autoscaling in the cloud can run far more cheaply than in a traditional environment.
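In other words, the break-even cloud price is your in-house hourly cost divided by the utilization you actually achieve. A quick sketch with the numbers above:

    # Break-even cloud price per server-hour.
    in_house_cost_per_hour = 1.00   # simplified internal cost per server-hour
    utilization = 0.29              # from the calculation above

    break_even_cloud_price = in_house_cost_per_hour / utilization
    print(f"Cloud is cheaper below ${break_even_cloud_price:.2f}/hour")   # ~$3.45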
Build your applications to scale out, and take advantage of autoscaling in a public cloud, and you'll never have to over-provision again.
Notice: This article was originally posted at http://CloudBzz.com by John Treadway.