Saturday, 31 October 2009

Which Cloud Is Right for You?

When enterprises and data centers evaluate cloud technologies and solutions to move their applications to the cloud computing paradigm, there are many important decisions to consider: Will the applications be portable? Are there significant code and workflow changes involved in moving to the cloud? Are there compromises to consider when it comes to privacy, security, data availability and compliance? What is the smoothest transition from traditional to cloud computing, and how can the proverbial "vendor lock-in" be avoided?

In this session Peter Nickolov, President & CTO at 3Tera, will answer these questions and discuss how application portability and interoperability among public, private and hybrid clouds and different vendors, once achieved, will unleash much wider cloud computing adoption.


Friday, 30 October 2009

Moving to the Cloud: What's Really Required

When we started talking with a wide range of IT managers and companies in early 2008, we quickly encountered a fascinating dichotomy – Cloud Computing is really easy / Cloud Computing is really hard.  What made this so interesting is that the casual users were saying cloud computing was easy and the hard-core users were claiming that it was hard.  Amazon and a number of other cloud providers have made major advancements since this time, but the “it’s easy / it’s hard” split still exists.

Today, if you want to use the cloud and deploy a server, it is really quite easy to “build” a server from the base templates offered by the cloud providers.  There are consoles available to launch servers, including providers' control panels (Amazon, RackSpace, Terremark), plug-ins for Firefox (ElasticFox), and third-party products like RightScale.  Start from a predefined image, add your edits, and poof – you have a server running in the cloud.
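To make the "predefined image plus edits" step concrete, here is a minimal sketch in Python. The AMI ID, key pair and security group names are placeholders, not real resources, and the actual launch call (shown commented out, using the boto library's EC2 interface) requires AWS credentials; the helper itself just composes the launch parameters.

```python
# Hypothetical sketch: composing the parameters to launch a server from a
# predefined image. Names like "ami-12345678" and "my-keypair" are
# placeholders, not real resources.

def launch_params(ami_id, instance_type="m1.small",
                  key_name=None, security_groups=None):
    """Compose the keyword arguments for an EC2 run_instances call."""
    params = {"image_id": ami_id, "instance_type": instance_type}
    if key_name:
        params["key_name"] = key_name
    if security_groups:
        params["security_groups"] = security_groups
    return params

params = launch_params("ami-12345678", key_name="my-keypair",
                       security_groups=["web"])
print(params["image_id"])       # ami-12345678
print(params["instance_type"])  # m1.small

# With credentials configured, the actual launch via boto would look
# roughly like:
#   import boto
#   conn = boto.connect_ec2()
#   reservation = conn.run_instances(**params)
```

The point is how little is required: one image ID and a handful of options, and the provider does the rest.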

It becomes a lot more complicated when you try to integrate an application with multiple servers running in the cloud with your existing data center infrastructure.  When I say infrastructure, I mean all of your existing networking, services (DNS, DHCP, LDAP, Identity), build processes, third party applications; basically, the whole of your IT environment that you depend on to make things work.

When you deploy applications in the cloud, they are running on an infrastructure built and maintained by the cloud provider.  This means that a certain amount of control is transferred to the provider – the underlying control and assignment of resources it requires in order to manage its environment.  You need to understand this new environment, select the appropriate resources, and adapt your application to it.  But moving an application that’s been running in your enterprise infrastructure, with all its associated processes and relationships, to a cloud provider that has its own way of doing things is where using the cloud gets hard.
To highlight some of the difficult areas, we’ll examine a set of issues across a variety of cloud providers out there.  Because there’s a lot of ground to cover, I’ll break up the posts into multiple parts dealing with storage, networking, management, performance, and security. 

We’ll start with storage since it represents the real identity of the server and all that is important to your application and business. Stay tuned.

Thursday, 29 October 2009

What is Cloud Computing?

IT industry analyst firm IDC defines cloud computing as "the IT development, deployment and delivery model enabling real-time delivery of products, services and solutions over the Internet."

In other words, it's using the Internet's processing power and storage space to host and deliver software applications to users, and to store their data.

Cloud computing - which includes Software as a Service (SaaS) - means that your programs and data are held on servers which are owned and managed by a third party. You then connect to them over the Internet.

There are many good business reasons for using cloud computing. You do not have the capital outlay for the server hardware, software licences and data centres. You can pay by subscription, or even on a "pay as you go" basis.

Also, the servers are located in secure data centres and managed to a higher level than is normally possible for most small and medium businesses.

Wednesday, 28 October 2009

Gartner: Brace yourself for cloud computing

ORLANDO, Fla.--Cloud computing isn't going to be vapor much longer, Gartner said Tuesday.
The general idea--shared computing services accessible over the Internet that can expand or contract on demand--topped Gartner's list of the 10 top technologies that information technology personnel need to plan for. It's complicated, poses security risks, and computing technology companies are latching onto the buzzword in droves, but the phenomenon should be taken seriously, said analyst Dave Cearley here at the Gartner Symposium.

Specifically, companies should figure out what cloud services might give them value, how to write applications that run on cloud services, and whether they should build their own private clouds that use Internet-style networking technology within a company's firewall.

Cloud computing takes several forms, from the nuts and bolts of Amazon Web Services to the more finished foundation of Google App Engine to full-on hosted applications. Companies should figure out which, if any, of those approaches are most suited to their challenges, Gartner said.

Gartner analyst Carl Claunch (Credit: Stephen Shankland/CNET)
The advice came as part of a talk on top trends coming in 2010 that companies should incorporate into their strategic planning, if not necessarily their own computer systems. The full list of 10: 1. cloud computing; 2. advanced analytics; 3. client computing; 4. IT for green; 5. reshaping the data center; 6. social computing; 7. security--activity monitoring; 8. flash memory; 9. virtualization for availability; and 10. mobile applications.
Also on the list is virtualization--not just in the broad sense of technology that lets a single computer run multiple operating systems simultaneously, where it's become a fixture in data centers, but as a means to keep computing services up and running despite computer failures, said analyst Carl Claunch.

Virtual machines can be moved from one physical machine to another today. Later, by keeping two machines tightly synchronized, a failure in a primary machine can be eased over rapidly by moving the active service to the backup machine, Claunch said.

"We should start seeing this roll out in the next year or two from vendors," he said.

The Gartner hype cycle takes on the PC. (Credit: Gartner)
For PCs, virtualization is arriving, too.

"Think of applications in bubbles," Cearley said. "They can run on client devices or up on a server," with virtualization providing the encapsulation technology to move the work around. The official corporate computing environment can run side by side with employees' home computing environment.
That, along with cloud computing, enables more freedom for people using PCs.

"We're looking at a time when the specific operating system and device options matter a lot less," Cearley said. "You could use a home PC or a Macintosh with a managed corporate image running on that particular device...We see more companies providing a stipend (for) employee-owned PCs."

Make your data center modular. (Credit: Screenshot by Stephen Shankland/CNET)
Another idea: modular data centers. You don't have to set up your IT gear in storage containers, but do divide it into pods that each have their own computing, power, and cooling, Claunch said. That makes it easier to pay as you go, to adapt to new technologies, and to increase energy efficiency by partitioning hot hardware from cooler hardware.

Green IT is important--and changing in its nature. It's not just a matter of buying efficient computers, but also of using computers to increase the efficiency of other parts of the business, Cearley said. For example, analytics can improve the efficiency of transportation of goods.

Next comes applications for mobile devices. "That has great potential for creating different experience or stickiness for your customers," Cearley said.

And mobile x86 processors from Intel and AMD could make software development easier, too, he added.
Social networking will happen internally and externally. (Credit: Gartner)
Social-networking applications, broadly defined, also should be on company radar screens. The technology can take the form of internal corporate social networks, interactions with customers, and use of public services such as Facebook and Twitter.

Companies need to get a handle on what's going on--and potentially put it to business purposes such as understanding how the corporate brand is perceived.

"Social network analysis will be moving from a somewhat arcane discipline to a much more mainstream component of your social computing strategy," Cearley said.

Tuesday, 27 October 2009

The substance behind cloud computing hype

Software as a service and cloud computing have come together to create a confusing range of vendor offerings that are being billed as the next greatest thing. But as the area evolves it is bringing new opportunities for resellers as well as significant caveats. All of the major vendors are now in some way involved, so it is important to monitor developments closely.

Telecom’s integration arm Gen-i has been involved in cloud computing for years. “Our capabilities span all three types of cloud services,” says head of infrastructure and business applications, Leanne Buer. “These are Infrastructure (IaaS), Platform (PaaS) and Applications (SaaS). We also offer communications-based solutions (CaaS) as hosted services.

"We are able to handle everything from the most simple cloud computing requirements through to clients with extremely complex needs. We are currently working on opportunities in the IaaS and CaaS space, which we can then build on as clients become more comfortable with IT being delivered over the cloud.”

Gen-i is seeing a growing trend towards cloud computing and significant growth in the range of services being offered over the cloud. The technology and the economic landscape have now reached a point where these types of offerings deliver both a cost benefit and an organisational benefit in scalability, speed of deployment and flexibility.

“We have been working in this space for many years and have a solid base of clients that currently use our hosted security (Safecom) as well as productivity SaaS offerings (Managed Mail),” says Buer. “Gen-i will also soon be launching a range of new infrastructure-based services offered through our cloud. We expect the overall uptake in computing, desktop and applications from the cloud to increase pretty sharply over the next 18 months as more clients see the benefits hosted solutions offer.”

Buer notes that it is important to realise that IaaS, SaaS and PaaS solutions may not always be the most effective solution for every business. “Each business should assess SaaS against their current and long-term requirements, which is something we do with all our clients,” she says.

Microsoft’s Online Services offers business email, calendar and tasks, internal websites, voice and video conference calls and indication of people’s availability. “Within the next 12 months Microsoft will have Office Web Applications available as well,” says national technology officer Brett Roberts. “The Microsoft solution will stretch across the desktop, browser and phone. Using the cloud services model is an attractive option for SMBs, as it has the potential to lower IT costs and it will ensure IT costs are more transparent. This purchasing model is certainly right for the current economic climate.”

Locally, Microsoft has been seeing those who already use its Office, Exchange and Sharepoint products getting interested in, and making enquiries about, the SaaS offerings.

“To address data security and service availability, we develop clear service level agreements with partners and customers,” says Roberts. “Our cloud solution also has a ‘reverse gear’ where data can be retrieved and stored alternatively should customers choose to do things differently.”

In addition to its SaaS offering, Microsoft provides the Windows Azure Platform. This is an internet-scale cloud services platform hosted in Microsoft datacentres, which provides an operating system and a set of developer services that can be used individually or together. The Azure platform can be used to build new applications to run from the cloud or enhance existing applications with cloud-based capabilities.

Microsoft’s business model is to work with local partners. “The majority of our revenue comes through the partner or reseller channel,” says Roberts. “In New Zealand we have 3500 partners, and are working closely with them to understand, position and sell Microsoft Online Services and cloud computing solutions.”

NetSuite provides a SaaS business management solution, encapsulating Financials/ERP, CRM and e-commerce. The current economy has accentuated the significant trend towards cloud computing by SMEs, as well as larger businesses, says international products VP, Craig Sullivan.

“Businesses have seen their IT budgets increasingly consumed by maintenance and overhead, with little left over for innovation and gaining competitive advantage. Cloud computing fits the bill because it provides a zero-capex solution coupled with easy, pay-as-you-go subscription services that eliminate costly maintenance and IT spend.”

Security and service availability remain key areas for any service provider to address. This is especially important when a business application is running core ERP, CRM and e-commerce processes. NetSuite addresses each of these areas and, for service availability, provides a 99.5 percent service level commitment.
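As a rough illustration (my arithmetic, not NetSuite's), a 99.5 percent availability commitment still permits a non-trivial amount of downtime:

```python
# Permitted downtime under a given availability commitment.
def allowed_downtime_hours(availability_pct, period_hours):
    """Hours of downtime permitted over a period at a given availability."""
    return period_hours * (1 - availability_pct / 100.0)

# A 30-day month has 720 hours; a 365-day year has 8760.
print(round(allowed_downtime_hours(99.5, 720), 1))   # 3.6 hours per month
print(round(allowed_downtime_hours(99.5, 8760), 1))  # 43.8 hours per year
```

Whether roughly three and a half hours of monthly downtime is acceptable for core ERP processes is exactly the kind of question buyers should put to a provider.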

“We’re seeing robust growth in APAC as a whole,” says Sullivan. “According to Springboard Research, the market for SaaS-based ERP in Asia Pacific (excluding Japan) is estimated to grow from US$35 million in 2008 to US$193 million by 2012.”

There are substantial opportunities for the reseller channel, he says. “With the current economy, resellers of on-premise software have been particularly exposed to the unpredictability and risk of selling on-premise software,” says Sullivan. “The benefit of SaaS for resellers is that it delivers recurring revenue on customer subscriptions, year over year.”

IBM is seeing immense interest in internal clouds and in the SaaS/cloud market generally. “Growth within the past 12 months has been cautious, partly due to a lack of major local players,” says emerging technologies executive, Rob Varker.

“Google, Amazon and IBM have been offering cloud platforms, and there has been some takeup of these international offerings, but there has not been a large scale movement by major customers yet. Impediments to growth have been partly real, and partly perception, focusing upon discomfort with a new thing and concerns over security.”

Nevertheless, IBM’s view is that the market is likely to grow exponentially over the next 12 months, driven by reduced TCO and a favourable ROI. One issue is that companies frequently don’t review the TCO of existing infrastructure, which can grow and become less economic as the company expands.

Within the next four months, IBM will be introducing a virtual server service offering, designed to provide a managed hosting service to the SMB market. The service will scale and grow as required, and will integrate with existing services.

“For resellers, it is important to understand what aspects of client infrastructure are ready for cloud infrastructure,” says Varker. “The key to cloud computing is to standardise, rationalise, simplify and automate. For resellers, it is important to focus upon standardising and rationalising first.”

Revera is a locally owned and operated data centre and computing infrastructure provider. “We have feet in both SaaS and cloud computing camps,” says business development manager, Robin Cockayne.

“In the SaaS area, we provide hosted IT infrastructure, maintaining our position as a pure-play infrastructure provider rather than software reseller. For software resellers we package monthly wholesale rated options, including software licenses, and additional service layers—such as help desk, storage, and disaster recovery. For example, we have spearheaded Microsoft’s local push into hosted CRM, adding necessary automated provisioning to our VDC (Virtual Data Centre) hosting platform to streamline channel delivery. Five years ago, SaaS was spooky. Not any more. It’s now a popular conversation inside IT departments and boardrooms.”

Revera also functions as a pure cloud infrastructure provider, concentrating on local clouds. As businesses accept that utility computing can provide the computing, network and storage resources that they need, deployment of applications into these environments is expected to accelerate.

“Point solutions, like payroll, CRM, and accounting, have spearheaded the SaaS market. Now we can expect to see chunkier applications lift off,” says Cockayne. “However, people should be aware that not all clouds are the same. The current cloud hype fixates on international services delivered over the internet. But a cloud is simply an easy way to connect to the things you need, and not all of that is provided over the international internet. Domestic internet and private clouds are much less risky and more flexible.”

Revera white labels capability for resellers. “One SaaS example is locally owned CRM integrator Complete Solutions, which launched a CRM SaaS offering called CRMNow, bringing to market on-demand Microsoft Dynamics,” says Cockayne. “A feature of Microsoft Dynamics is multi-tenanted architecture, enabling hosting partners to run a single copy of the server application, but support multiple customers simultaneously and securely, easing hosting chores.”

Citrix has had a considerable amount of international success in the cloud computing market, selling solutions to some of the biggest vendors in the industry, including Amazon. “The key to a successful cloud venture is the delivery of content to the user,” says systems engineering manager, Chris Lockery.

“For example, using technologies like application virtualisation to deliver apps over any medium is a great enabler. Citrix covers SaaS and cloud through solutions such as the XenServer hypervisor, XenApp application delivery and XenDesktop desktop virtualisation tool. Also, this month has seen the availability of Citrix’s NetScaler VPX solution – a virtualised load balancing appliance which brings security and flexibility to acceleration solutions in the datacentre.”

Citrix has seen a spike in interest about cloud computing and SaaS from service providers - both in the enterprise and moving into the SME/SMB market. The company has seen special interest from service providers using Citrix products for application and desktop delivery and moving the market toward a hosted infrastructure model.

“Cloud in New Zealand is an emerging field,” says Lockery. “It’s an area that a lot of service providers are looking into and beginning to invest in. There’s a big opportunity for small and medium businesses in New Zealand to benefit from the cost and management savings that IT as a service and cloud computing can bring them.”

Datasouth has traditionally been a provider of network infrastructure and software development services. In more recent times these services have evolved to include the supply of software and services to the mid market and SME business sectors.

“Datasouth’s focus in the cloud computing sector is to provide a complete solution,” says general manager, Craig Gerken. “We are not only hosting a client’s application or network infrastructure, but also supporting this environment at both the server and desktop level - as well as providing development and customisation of their line of business applications, business intelligence solutions and their collaboration and communication platforms.”

From Datasouth’s perspective, the interest in this area comes from organisations that have a requirement to deploy new applications, but lack the capacity or capability to support this with existing on-premise network infrastructure. “A key example that we have seen a number of times is a client wanting to deploy Microsoft Office SharePoint Server, but being restricted by their existing infrastructure platform - and not having budget available to invest in the required platform upgrades.”

Software as a service is still very much in its infancy. “I think the current hype will continue to grow over the next 12 months,” says Gerken. “What will be interesting over this time will be the increase in uptake by organisations moving to this deployment model. What is certain is that resellers that only focus on providing on-premise infrastructure platform support will find their business opportunities will substantially reduce over the coming years.”

Monday, 26 October 2009

Anatomy of a Cloud Consultant

Earlier this week I was asked to participate in a cloud panel with a group of so-called cloud experts. The panel focused on the state of the cloud industry. I have been on many of these cloud panels in the last year and have found that what defines a "cloud expert" remains pretty vague.

So what is a cloud expert/consultant? First, let's go to Wikipedia. According to the site, in the broadest sense, "a consultant is a professional who provides advice in a particular area of expertise. A consultant is usually an expert or a professional in a specific field and has a wide knowledge of the subject matter. A consultant usually works for a consultancy firm or is self-employed, and engages with multiple and changing clients. Thus, clients have access to deeper levels of expertise than would be feasible for them to retain in-house, and may purchase only as much service from the outside consultant as desired."

So a cloud consultant is basically an "expert" in the realm of cloud computing: someone who has a deep and broad level of experience with, and understanding of, the problems introduced by moving to a cloud-based environment. This sounds straightforward enough.

So how do you qualify a cloud expert? This is where things start to get complicated. First of all, unlike other areas of IT, there is no professional certification for "cloud consultants". So choosing a professional cloud consultant or services firm is a matter of doing your due diligence. To help, I've compiled a brief checklist of things you may want to look for when selecting your cloud consultant.

1. Experience - As in any profession, experience solving real-world problems is probably more important than anything else. Has your potential consultant done anything of consequence? What other companies has your consultant worked with, what major obstacles have they solved, and how? On the flip side, if they claim 10 years of experience as a cloud consultant, dig deeper: how does that previous experience relate to what has more recently been referred to as the cloud? Possible answers may include experience in grid or distributed computing, building large multi-location data center architectures, load balancing schemes, web server clustering or other elastic methodologies.

John M Willis is a prime example, with extensive experience in related areas of expertise such as enterprise systems management. Using this related experience, Willis has been able to transfer skills built up over decades into a thriving cloud consulting operation.

I'd also keep in mind that cloud computing isn't something new, but instead the intersection of several existing technologies. Make sure your consultant has the right mix of experience in the areas that are of most concern to you and your business.

2. Code - Often consultants do little more than make recommendations that others must implement. This can be useful, but running code is more useful still. One of the best and easiest ways to find great cloud consultants is to look for those who have taken it upon themselves to create open source cloud-related projects. The Boto project by Mitch Garnaat is a perfect example. Garnaat is a longtime AWS consultant and a doer who is an active member of the AWS community discussion boards; he has proved his worth through his actions in the community and by producing a project that helps thousands around the globe. It also helps that he's been working with AWS since 2006.

3. Community Engagement - As I mentioned previously, community involvement is another great way to gauge experience. Places like the AWS discussion boards, or various other discussion groups, are ideal places to find those hidden gems. They also provide valuable insight into the capabilities of a given consultant in a public setting. Is your consultant a troll who picks fights, or are they a helpful member of the community? A quick Google search and you'll have your answer.

4. Blogs & Whitepapers - Blogs have also become very useful ways to determine a cloud consultant's vision and capabilities. Although they may not shed too much light on actual experience, they do provide a potential channel by which you could find a consultant.

Randy Bias, a well-regarded cloud consultant, provides what he describes as a StrengthsFinder Report to help potential clients in their selection. The report provides a review of the knowledge and skills he has acquired and can give a basic sense of a consultant's abilities. According to Bias, the report provides insight into the natural talents of the consultant and can give true insight into the core reasons behind their successes and why you should select them.

5. Interview - Like any job, interview your consultant. Ask them questions that would gauge their qualifications. Start off by asking them the ultimate trick question, "what is cloud computing?". Good answers avoid the specifics of the technology but instead focus on the opportunities. Bad answers are things like saying "Salesforce" or "Virtualization" or "VMware".

Keep in mind that if you ask 100 people what cloud computing is, you'll probably get 200 answers. So if you are wondering how I would answer the question, here you go; this one is on the house: "Broadly, I see cloud computing as a new method to market, manage and deploy software and/or infrastructure using the web. Or more simply -- web-centric software and infrastructure."

You may also want to refer to specific definitions, using things like the Wikipedia definition or the NIST definition as your benchmark. If your consultant cites NIST or other well-regarded "cloud luminaries", that isn't necessarily a bad thing; just make sure you agree with them. For instance, "according to Larry Ellison" may be good if you're getting a job with an Oracle shop, but not so good for a Google App Engine gig.

6. References - You're only as good as your last job. So make sure to do your homework and ask the right questions. What did the consultant do, what problems did they solve, what technologies and platforms did they use, and why was it a cloud project?

In closing, I do believe that a major obstacle for cloud computing consultants is the lack of accreditation. One possible solution is to create an official professional cloud certification. One model could be similar to the IT Architect Certification Program provided by the Open Group. The Open Group certification program provides a framework for accrediting third parties to establish IT Architect certification programs affiliated to The Open Group. The framework of accreditation and certification is specifically intended to standardize the process and criteria for IT Architect professional certification, and to establish a foundation for the skills and experience required to achieve such a distinction. Basically, the Open Group has created a standard way for you to select someone with the level of knowledge required to perform the job of an IT Architect. Something similar could be applied to the job of a cloud consultant/architect.

Good Hunting.

Friday, 23 October 2009

Irish Cloud Computing Firm Saaspoint Raises $2 million

Saaspoint, the Irish owned cloud computing consulting and applications company, has raised USD $2 million to support the next phase of its development. Enterprise Ireland, the Irish Government agency responsible for the development and promotion of the indigenous business sector, and a number of private investors have taken a small equity stake in the company. Cloud computing is the term applied to Software-as-a-Service (SaaS) applications delivered over the Internet.

“We intend to use the funds raised to build further intellectual property (IP) and invest in our existing and future cloud computing applications,” commented John Appleby, chairman, Saaspoint. “We had significant international interest in this round which augurs well for future fund raising. There is an appetite out there for leaders in the cloud computing space.”

The company, which has revenues in excess of USD $4m, employs 34 people, 15 of whom are long-term specialist contractors. Appleby added that the company is currently recruiting international business development staff.

“The global market for software has changed significantly creating a new environment for software companies characterised by demand for greater flexibility, global delivery and cost-effective solutions. Cloud computing is a core part of this ‘New Software Economy’ and Saaspoint has the right technology and business strategy to respond to the new environment and capitalise on these emerging trends,” commented Jennifer Condon, Manager of the Software Division in Enterprise Ireland. “Enterprise Ireland is delighted to invest in this company and to support it in achieving its export growth potential.”


Thursday, 22 October 2009

Odoo - On Demand (SaaS) Offer from Open ERP

Open ERP is one of the most appreciated open source management software packages, with more than 700 downloads per day. It's available today in 18 languages and has a worldwide network of partners and contributors, with more than 90 partners and 1,000 contributors. The software has arisen from a blend of high code quality, well-judged architecture and use of free technologies. In fact, you may be surprised (if you're an IT person) to find that the whole Open ERP setup is less than 90 MB once you've installed the software.

Open ERP has released its new service offer, Odoo, an on-demand ERP solution with minimal costs for the end user. It is a SaaS (Software as a Service) offer from Open ERP, which gives end users access without any upfront investment or infrastructure cost. It is mainly dedicated to small and/or medium enterprises and budding enterprises with limited IT budgets. With Odoo, you can get ready-to-use and complete enterprise management software in a few clicks. Odoo is based on the latest stable version of Open ERP. It's a self-service and low-cost offer with a unique price that includes:

* Open ERP Hosting with high bandwidth and servers,
* Incremental backups servers,
* Software + Infrastructure as a Service,
* Maintenance with bugfixes and automated migrations,
* Open ERP control centre, etc.

Subscription to Odoo is free: you pay at the end of the month, and only if you are satisfied. With Odoo, you pay only for what you really use, at €0.60 per hour, and the first 60 hours of use per month are free. This solution offers the possibility to get Open ERP in three clicks and use it anywhere, at any time.
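The pricing described above works out to a simple calculation (assuming, as I read the offer, €0.60 per hour beyond the free 60 monthly hours):

```python
# Odoo's pay-as-you-go pricing as described: the first 60 hours per month
# are free, then 0.60 euro per billable hour.
FREE_HOURS = 60
RATE_EUR_PER_HOUR = 0.60

def monthly_cost_eur(hours_used):
    """Cost in euros for a month with the given hours of use."""
    billable = max(0, hours_used - FREE_HOURS)
    return billable * RATE_EUR_PER_HOUR

print(monthly_cost_eur(40))   # 0.0  -- within the free allowance
print(monthly_cost_eur(100))  # 24.0 -- 40 billable hours at 0.60
```

So light evaluation use costs nothing, and even a full-time single user runs to a few tens of euros a month.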

Original Article -

Wednesday, 21 October 2009

Red Hat Says Windows and KVM Are Talking to One Another

Red Hat said Wednesday that its KVM hypervisor, its pet virtualization scheme, can talk to Microsoft's Windows Server and that customers can now deploy jointly supported server virtualization environments that combine Windows Server and Red Hat Enterprise Linux (RHEL). The two companies have done the tests and validated the widgetry.

In February, when Red Hat started mapping out its intentions to major on KVM, the two companies agreed to validate and support each other's virtualization and operating system platforms.
So now RHEL 5.4 and the Kernel-based Virtual Machine (KVM) hypervisor can entertain Windows Server 2003, 2008 and 2008 R2 guests, while Windows Server 2008 Hyper-V, Hyper-V Server 2008, Windows Server 2008 R2 Hyper-V and Hyper-V Server 2008 R2 can host RHEL 5.2, 5.3 and 5.4 guests.

RHEL 5.4, the latest cut of the Red Hat distribution, includes both Xen and for the first time KVM, but Red Hat's heart belongs to the open source KVM, which is supposed to be snazzier because it's part of the Linux kernel. Anyway, Red Hat bought Qumranet, the outfit behind the project, for $107 million cash a year ago and that means it can control KVM's destiny.

Red Hat also said Wednesday that Microsoft products certified on Windows Server and Red Hat products certified on RHEL are also supported in heterogeneous virtualized environments to give customers more deployment choice and flexibility.

Because KVM is part of the Linux kernel, Red Hat could already guarantee that the 3,000 applications certified for RHEL work on top of the KVM hypervisor with no changes necessary.

Original Article -

Tuesday, 20 October 2009

Defining Enterprise Cloud Computing

What is enterprise cloud computing? Simply stated, it's a behind-the-firewall use of commercial, Internet-based cloud technologies specifically focused on one company's or one business environment's computing needs.  Enterprise cloud computing is a controlled, internal place that offers the rapid and flexible provisioning of compute power, storage, software, and security services to meet your mission's demands.
It combines the processes of a best-in-class ITIL organization with the agility of managed, global infrastructure to make your IT faster, better, cheaper, and safer.  Enterprise cloud computing gives your business agility, survivability, sustainability, and security.

I believe commercial solutions, whether it's Google's cloud or Amazon's web services, may be perfect for many companies.  But some corporations and government agencies are not going to be comfortable outsourcing their information and services to the Internet-based cloud.  For agencies like mine, and for many corporations, keeping such precious gems in our own possession is a foregone conclusion.
Hence, enterprise cloud computing is your answer.

As a CIO, you are under enormous pressure.  A recent global study by IBM summarized these pressure points.

First, today's CIO must be able to use technology to drive a company's innovation.  He needs to understand the plans and goals of the agency and introduce IT that will achieve those goals-working with the business.  This is certainly not IT for IT's sake; it is IT for the company's sake.  It is the integration of IT and business for maximum corporate leverage and competitiveness.   To achieve the innovation the company needs, today's CIO must increase the flexibility and efficiency of infrastructure to support business changes and provide a foundation for new IT insertion.

Second, today's CIO must be a savvy value creator.  She must be able to produce or increase profit from existing company property.  This may be connecting the enterprise's data in new ways to give new insights and improve decisions.  Or, it may be improving IT's overall understanding of the business that allows IT to bring forth new technologies to shift the business forward.  The creation of value for today's CIO cannot come with increased expense.  Today's CIO must be thrifty and manage internal costs to free dollars to create innovation and value.  Standardizing and centralizing infrastructure can keep IT costs to a minimum.
Finally, the IBM study noted that today's CIO is a true partner with other executives in the organization.  His job is to listen to other business leaders in the company and work with them collaboratively to improve the competitiveness of the business.  Delivering innovative solutions usually requires cultural change in the business; a strong partnership between the CIO and the business leader is needed to push that cultural change through.  It is not enough, however, for the CIO to focus outside of the IT department.  Today's CIO must also look inward and inspire her own organization to excellence in innovation, service provisioning, applications development, security, and other core infrastructure disciplines while also inspiring her IT employees to improve their business acumen.

I highlight this IBM study as it demonstrates the mounting pressures facing CIOs and IT departments.  There's no new money; yet IT must produce transformations for the business.  OpEx is increasing as a percent of overall IT spend--further eating away at the CIO's ability to free funds for new developments.  I believe Enterprise Cloud Computing is a key strategy that allows the CIO to plot a path forward that reduces operating costs and also delivers a competitive advantage for the business.  Enterprise Cloud Computing gives the CIO IT infrastructure that is faster, better, cheaper, and safer.

Let's talk about these four features.  While I stated them rather simply (faster, better, cheaper, and safer), each one is an imperative from an IT perspective and all CIOs must deliver in each area.  There are many technologies and business processes that support the faster, better, cheaper, and safer goals.  I don't want anyone to leave with the impression that Enterprise Cloud Computing is the only thing that can do it.  For example, implementing sound configuration and change management processes in your IT department on top of any enterprise technology architecture will result in improvements in speed, availability, cost, and security.  Driving your applications organization to centers of excellence--for services, analytics, or information extraction--will deliver improvements.  And demanding strict standards in data and information management will help in all areas.  However, I do believe Enterprise Cloud Computing offers a great return on investment, making it a premier strategy for the CIO to achieve business value while reducing costs and complexity.  How does Enterprise Cloud Computing improve the agility, sustainability, survivability, and security of IT so IT returns value to mission?  Let's go through a few examples.

Every CIO needs the ability to quickly and rapidly provision new infrastructure to support business needs.  Think of the likely scenarios facing your business.
  • If you're FEMA, you need rapid ability to provision IT services for national emergency response.
  • If you're the Department of State, you need to be able to respond to terrorist bombings or earthquakes to provide immediate US citizen services and diplomatic support to host nations.
  • Similarly, if you're H&R Block and gearing up for tax season, you need to grow your services quickly and just in time to meet consumer surge-for those few months of the year but not for the whole year.
  • If you're Burger King and preparing for a mass marketing campaign to promote "Transformers," you want fast capacity for the marketing hype without long-term excess.
  • Or, if you're Toys R Us, you want your web services to safely and rapidly respond to the holiday crunch and then return to off-season levels at an affordable price.
Cloud computing makes all of this possible through inexpensive, commodity-based components that you can daisy-chain together with a few clicks on the keyboard.  If you need temporary storage, or to dedicate storage quickly to a new effort, the storage cloud can do that.  If you need compute power, you can call up a dozen new servers to improve processing time and then decommission those servers when your peak has subsided.  Or, if you need on-demand security services to process sensitive data, Cloud Computing can give you a private enclave with a full suite of security services in a matter of minutes, not months.
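The call-up-a-dozen-servers-then-decommission-them pattern can be sketched in a few lines. Note that `CloudPool` and its methods are invented here purely for illustration; a real deployment would go through a provider's console or API, or a tool like RightScale:

```python
# Illustrative sketch of elastic provisioning: grow a server pool for a
# demand peak, then shrink it back to baseline. CloudPool is a
# hypothetical stand-in for a real provider API.
class CloudPool:
    def __init__(self, baseline=2):
        self.servers = [f"server-{i}" for i in range(baseline)]

    def scale_to(self, count):
        """Provision or decommission servers to reach the target count."""
        while len(self.servers) < count:
            self.servers.append(f"server-{len(self.servers)}")
        del self.servers[count:]

pool = CloudPool(baseline=2)
pool.scale_to(12)          # call up a dozen servers for the peak
print(len(pool.servers))   # 12
pool.scale_to(2)           # peak subsides; return to baseline
print(len(pool.servers))   # 2
```

The point is the shape of the operation, not the code: capacity moves in both directions on demand, so you pay for the peak only while the peak exists.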

Let's talk more about what Enterprise Cloud Computing is and how it can easily fit into your technology strategy. 

As I noted in the beginning, Enterprise Cloud Computing is behind the firewall and contained within a business enterprise (one company, one agency, or one supply chain, for example).  A true cloud environment contains many layers and extends beyond the compute platform.  To gain early advantage in Enterprise Cloud Computing, you need at least a storage cloud and a compute/processing cloud sitting on top of your network cloud.  However, this requires each cloud user to bring database, application, web services, and security services with them.  You'll achieve goodness in this scenario but you may not actually achieve faster, better, cheaper, and safer overall IT.

A robust Enterprise Cloud Computing environment will have storage, compute, database, application, and maybe additional layers.  It will support rapid stand-up of development and test environments to reduce time to market by applications developers.  It will include standard data management approaches and flexible storage schemes to allow broad re-use of corporate data without storing the data in multiple locations.  It should have security as a service that dictates a common, consistent approach to assuring the identity, access, and audit of individuals and systems.  Finally, a robust Enterprise Cloud will be built on a solid foundation of common processes, management, and governance principles to keep the cloud optimized.

If I look back at CIA's technology strategy for the past few years, we were headed to an Enterprise Cloud all along, even if we didn't call it that.

Like all of you, we are moving from point-to-point networks to private MPLS clouds to gain efficiency, redundancy, and peak performance.

Virtualized servers were also a core strategy for CIA.  Limitations on power, space, and cooling in our data centers meant adding more physical servers was not a long-term option.  We needed to reduce our number of physical servers, increase our overall computing power, and do it in a smaller real-estate footprint.  I suspect many of you have faced the same M&E demands.  If you've only virtualized your servers, you've already started down the Enterprise Cloud Computing path and you can claim some early success!  Abstracting the operating system and software from the hardware on which it runs is a foundation upon which cloud computing is built.

In the storage area, we've all used storage area networks and network-attached storage (SAN and NAS).  Combined with storage resource management, this was a precursor for storage clouds.  Connecting your storage to your servers has matured in Enterprise Cloud Computing, whereby you can dynamically connect any storage to any server, thus allowing corporate re-use of data in a multitude of virtual ways through single physical copies.  SAN and NAS were also precursors for the dynamic allocation of storage to support expanding and contracting computationally intense applications.  This sort of "rubberband" theory of storage is a fundamental element of the Enterprise Cloud.

A fourth example of how our technology strategy was already headed to cloud is in our applications.  We were already on a march to all web-based applications, ideally requiring very thin clients on the opposite end.  Building one application to meet many business needs, or mashing applications together for new business value, is our corporate approach.  Not all applications will be built on the software-as-a-service model but we will continue on that path as much as possible.  Underlying our applications is a nearly seven-year-old strategy for a service-oriented architecture that componentizes key services for security, access administration, search, and so forth.  Continued evolution from SOA to Software as a Service increases our cloud success.

And, a fifth example of how our strategy was easily aligned with Enterprise Cloud Computing is management of our enterprise data.  We've been focused on an Enterprise Data Layer for several years.  Sure, we wanted to minimize our storage waste.  But, we really needed confidence that everyone was using the single authoritative source of data and we needed confidence we could resurrect that data in the event of another 9-11.  We wanted applications, systems, and users to be served by the data layer rather than build individual, redundant small, data islands.  Enterprise Cloud Computing allows us to manage our data corporately while giving the perception of individual stovepipes that operate and respond with high mission performance.
I suspect that most medium- and large-organizations have similar technology plans that also dovetail nicely into Enterprise Cloud Computing.  Don't think of the Cloud as a perpendicular technology approach aimed at derailing your current plans.  It isn't.  Instead, think of the Cloud as an energy booster welded onto the side of your existing plans to get you to your goal state faster through commercial, commodity technologies and approaches.

In summary, there are many IT benefits to the company and benefits to the IT department from Enterprise Cloud Computing.  Let's go back to our simple mantra:  faster, better, cheaper, safer.

Perhaps the most compelling feature of Cloud Computing is the ability to deliver services faster.  The storage, compute, and infrastructure layers of Enterprise Cloud Computing allow you to rapidly increase capacity to respond to peaks in demand.  You don't need long lead times to pre-provision service.  You can keep a minimal amount of excess capacity on hand to quickly expand.  And, because you are built on commodity platforms, you can get more capacity without lengthy acquisition cycles.  Have a new app that needs stress testing over the weekend?  Just reassign some processing power to accommodate the load, then return it to the business for Monday morning demands.

Enterprise Cloud Computing also allows IT to be better.  When you have just a few copies of your enterprise data and you can stripe those copies around multiple physical data centers, you've improved your disaster recovery posture.  Your availability numbers can also soar with Enterprise Cloud Computing.  Ready-made redundancy protects from failure and drives your mean-time-between-failure sky high while driving your mean-time-to-repair down to zero.  It's also better from a Green IT perspective.  Enterprise Cloud Computing still takes power, space, and cooling but it allows you to maximize the capacity in any single powered unit.
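The availability claim rests on the standard relationship between mean time between failures and mean time to repair: availability = MTBF / (MTBF + MTTR). A quick sketch (the hour figures below are assumed purely for illustration) shows why driving MTBF up and MTTR down pushes the number toward 100%:

```python
# Standard availability formula: uptime fraction as a function of
# mean time between failures (MTBF) and mean time to repair (MTTR).
def availability(mtbf_hours, mttr_hours):
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Assumed, illustrative numbers: redundancy stretches MTBF and
# shrinks MTTR, so the availability figure soars.
print(round(availability(1_000, 4), 4))     # 0.996  -- legacy server
print(round(availability(50_000, 0.1), 6))  # 0.999998 -- redundant cloud
```

Ready-made redundancy attacks both terms at once, which is why the author can talk about MTBF going "sky high" while MTTR approaches zero.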

This also drives us to a cheaper IT environment.  Legacy environments ran at 30 to 50% utilization rates across suites of physical servers in order to absorb wide-ranging swings in performance demand.  Each physical server was always on and always draining power.  With Enterprise Cloud Computing, you can maximize the load on a physical rack while spreading the compute power across virtual servers, thus allowing you to respond to wide-ranging performance demands without adding physical equipment.  Enterprise Cloud Computing also improves OpEx by forcing rigid adherence to standards, thus reducing change management overhead and the associated engineering review labor.
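The consolidation arithmetic behind "cheaper" is easy to make concrete. A rough sketch, with all figures assumed for illustration:

```python
import math

# Rough consolidation arithmetic: N legacy servers at low average
# utilization vs. virtualized hosts packed to a higher target.
def hosts_needed(legacy_servers, legacy_util, target_util):
    total_work = legacy_servers * legacy_util  # load in "server-equivalents"
    return math.ceil(total_work / target_util)

# 100 legacy servers idling at 30% carry the same load as 38 hosts at 80%.
print(hosts_needed(100, 0.30, 0.80))  # 38
```

Every physical box removed from that fleet stops draining power, space, and cooling around the clock, which is where the OpEx savings come from.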

Enterprise Cloud Computing also helps IT deliver a safer environment.  The standards-base of the cloud reduces the complexity and variety of the infrastructure, allowing you to deploy patches more rapidly across the enterprise.  By keeping the cloud inside your firewalls, you can focus your strongest intrusion detection and prevention sensors on your perimeter; thus gaining significant advantage over the most common attack vector-the Internet.  By virtualizing storage, you protect against a physical intruder that might be intent on taking your server or disk out of the data center for exploitation.  If you take the cloud to the desktop and deliver your office automation through a virtual desktop, you reduce workstation security anomalies and greatly improve edge protection.  Finally, as we move to security services in the cloud, you hit a multi-dimensional safety model that extends beyond people and their accesses.  The multiple dimensions allow you to make security decisions based on who you are, where you are at that moment, what kind of data or app you want, and what kind of device you have.

Assuming I've convinced you Enterprise Cloud Computing is better for IT, let's focus for a minute on the top 10 reasons why your boss will love Enterprise Cloud Computing.

10. She won't get a big IT bill every time she wants a new application.
9. He can threaten to outsource the whole lot of his IT department and, he may actually be able to do so!
8. She can move her people to lower rent locations without moving a mini-data center with them.
7. It's fun to go in and switch off a random machine in the data center to get people's attention-without actually stopping any work!
6. It's "green."  She can turn her server huggers into tree huggers.
5. He never has to hear whining about needing new infrastructure; he can just buy some more capacity every once in a while.
4. She can claim to be a "hypervisor".  They're a lot cooler and more powerful than mortal supervisors.
3. When she fires her maniacal, disgruntled employee, one click on security services will keep all of the company's intellectual property safe from damage or exfiltration.
2.  He can tell his golf buddies he has his own cloud, and he's not just blowing cigar smoke.
1.  Having all of her projects "completed virtually" is way better than having all her projects "virtually complete"!

Does all of this really free the infrastructure?  Is the title of this talk "Enterprise Cloud Computing:  The Infrastructure's Ultimate Revenge" for real?  Can I get IT over its reputation as being too slow, too expensive, and too rigid?  As the CIO or head of infrastructure, if I walk down this path, can I deliver IT that is faster, better, cheaper, and safer?  My unequivocal answer to you is "Yes."  You can use Enterprise Cloud Computing as the infrastructure's ultimate revenge.  You can turn the conversation away from statements such as:
  • "My business unit was ready but IT was behind schedule."
  • "The infrastructure delivered ahead of schedule and I'm waiting for my business unit to set a date for training."
  • "My business unit could have predicted that but the application used bad data."
  • "I'm sorry boss; we had the right data we just made the wrong prediction."
  • "The application was ready to deliver but the infrastructure wasn't there."
  • "The infrastructure's ready to go whenever your app is."
The Infrastructure's Ultimate Revenge is that it won't be your scapegoat any more.  Enterprise Cloud Computing done right will take the wait out of applications delivery.  As soon as new code is ready to drop, the infrastructure's waiting to take it.  If you need a 5,000-person test environment for 30 days, the infrastructure can deliver that in a day and give it to you for the month you need it.

The Infrastructure's Ultimate Revenge is an IT commodity that gives your business a perfect inventory model for data storage and compute power.  You won't waste money on excess capacity and you won't waste time waiting for new capacity.

Finally, the Infrastructure's Ultimate Revenge means IT is out of your critical path, delivering you an environment that is faster, better, cheaper, and safer so you can accomplish your business objectives.  Enterprise Cloud Computing allows you to cut the ball-and-chain of the past or release the albatross from around your neck.  You're no longer stuck with slow, lethargic, antiquated solutions.  Enterprise Cloud Computing allows your company to break away, soar, and be successful.

In summary, the Infrastructure's Ultimate Revenge is more than faster, better, cheaper, and safer IT.  It "frees your Business to Change Strategically."

Monday, 19 October 2009

Unlocking the Cloud with Enterprise Private PaaS

Like the agility benefits of public Platform as a Service (PaaS), but concerned about lock-in, security, integration and compliance? Considering an enterprise private PaaS for running Java apps? In his session at Cloud Computing Conference & Expo, Mohamad Afshar, VP of Product Management at Oracle, will answer these questions as well as discuss best practices for building and operating a private PaaS.
The 4th International Cloud Computing Conference & Expo is co-located with the 7th International Virtualization Conference & Expo and will be taking place November 2-4, 2009 at the Santa Clara Convention Center, Santa Clara, CA.

This session compares and contrasts public and private approaches to PaaS, and discusses the key capabilities and benefits of an enterprise private PaaS based on a shared Java and database platform. It's a must for anyone considering using a public PaaS or building out a private PaaS.
About the Speaker
Mohamad Afshar, PhD, is VP of Product Management at Oracle. He has product management responsibilities for Oracle's middleware portfolio and is part of the team driving Oracle's investments in the SOA on Application Grid - which brings together SOA and data grid technologies to ensure predictable low latency for SOA applications. He has a PhD in Parallel Systems from Cambridge University, where he built a system for processing massive data sets using a MapReduce framework.

4th International Cloud Computing Conference & Expo

At the 4th International Cloud Computing Conference & Expo, November 2-4, 2009, being held in the Santa Clara Convention Center, Santa Clara, CA, more than 1,500 delegates will find out how cloud computing is transforming the way that enterprises everywhere build and deploy applications. Now held three times a year - in New York, Prague, and Santa Clara - the Cloud Computing Conference & Expo series is the fastest-growing Enterprise IT event, devoted to every aspect of delivering massively scalable enterprise IT as a service. The event is co-located with our 7th International Virtualization Conference & Expo.

Original Article -

Friday, 16 October 2009

First Open Source 4G Mobile Cloud Platform

Funambol introduced the world's first open source 4G (fourth-generation) mobile cloud platform for device management and synchronization. The platform enables management of mobile devices, and synchronization of diverse mobile media, over WiMax and LTE with social networks, email systems and PCs. The company's software leverages the prevailing Open Mobile Alliance (OMA) standards for device management (DM) and data synchronization (DS). Funambol is working with some of the largest companies in the industry to deploy innovative 4G mobile cloud sync services over the coming months.

"4G will usher in new mobile services, such as live video, videoconferencing and HD movies," said Fabrizio Capobianco, Funambol CEO. "As a result, 4G's increased bandwidth and sensitivity creates even more need for mobile device management and sync. It must be easy and reliable for users to access and share rich media, anytime, anywhere. There is simply no alternative to an open source approach for optimizing the 4G user experience and performance."

The wireless industry is rapidly advancing from 2G and 3G networks to 4G. A major benefit of 4G is that it provides greater wireless broadband capacity for more users than prior technologies. Mobile operators around the globe are racing to be first to market with WiMax and LTE networks.

At the same time, a major question is how to monetize business opportunities created by 4G. As 4G is a premium high-speed service, WiMax and LTE providers are evaluating numerous scenarios to make it convenient and compelling for users to access rich mobile media. One issue associated with 4G is that because it transmits large amounts of data, it is more susceptible and sensitive to myriad conditions that degrade performance. This places an even greater burden on service providers to successfully perform critical mobile device management and syncing.

Original Article -

Thursday, 15 October 2009

Assessing the maturity of cloud computing services

The number one challenge in cloud computing today is determining what it really is, what categories of services exist within the definition and business model and how ready these options are for enterprise consumption.

Forrester defines cloud computing as a standardised IT capability (services, software, or infrastructure) delivered via Internet technologies in a pay-per-use, self-service way.

While definition is crucial to having a fruitful discussion of cloud, the proper taxonomy and maturity of these options is more important when planning your investment strategy.

To this aim, Forrester has just published our latest Tech Radar that maps the existing cloud service categories (not the individual vendors within each category) along maturity and value impact lines to help you build your strategic roadmap.

The report identified 11 service categories that fall into three classes of cloud services – software you rent (Software-as-a-service, or SaaS), middleware services and platforms that help developers build cloud-based applications, and infrastructure services and platforms that are places to deploy cloud applications.

Note that we did not restrict the second and third categories to Platforms-as-a-service (PaaS) and Infrastructure-as-a-service (IaaS) platforms – because there is value in discrete middleware and infrastructure services in the cloud just as there is in your datacentre.

While Amazon Web Services is clearly an IaaS leader providing a robust set of compute, storage and middleware services, there are also discrete offerings, like Boomi’s cloud integration service that does no more than integration and does not need to be a full platform player to be valuable. In fact there can be significant advantage that comes from this type of focus on just one thing.

Oracle’s Larry Ellison continues to grab headlines for his assertion that the cloud isn’t anything new and to his credit he’s at least half right.

Just because applications are delivered from a cloud infrastructure doesn’t mean any of the aspects of application design or components of a service-oriented architecture don’t apply – in fact they have just as much relevance as they did on-premise.

But how these services are delivered is what is different. There is a clear difference between an on-premise, single instance deployment of TIBCO and the highly scalable, multi-tenant integration service from Boomi.

Tenancy, shared economics, virtualised deployment, and cloud service-to-service integration are game changers for those who are using these services. Not only do they bring potential cost advantage to applications that might otherwise have been deployed on-premise but they create opportunities for new business applications that simply wouldn’t be feasible any other way.

That’s clearly the case for Cryoport, a maker of cryogenic containers for live medical tissue.

Delivering these materials safely was very hands-on and expensive until cloud computing came along. Same with genome research; medical research universities and pharmaceutical companies had to make large investments in HPC labs to crunch the massive volumes of DNA data, until cloud computing economics came along.

But cloud computing isn’t the be-all, end-all that many portray and they can’t suit all uses today and may not in the future either. Thus it behooves you to build your roadmap using guideposts to the maturity and applicability of these emerging options.

We hope this report helps you bound and plan your cloud investments and look forward to your feedback on how we can assist further.

Original Article -

Wednesday, 14 October 2009

Fourth type of cloud computing

It is pretty much agreed in the industry that there are three expressions of cloud computing: software-as-a-service (SaaS), platform-as-a-service (PaaS) and infrastructure-as-a-service (IaaS). Several vendors, hoping to gain some marketing and branding traction, have started to call what they're doing "the fourth type of cloud computing." For the most part, I don't believe what they're offering is a fourth type of cloud computing.

But wait! What is cloud computing?

As with many so-called new approaches to information technology, the concept of cloud computing is built upon a foundation of what has come before. That being said, many suppliers have decided to brand just about anything that is web-based as "cloud computing." It seems to me a more relevant, functional description is needed before we can truly understand what's new and what's been around for a long time; only then can a better definition be accepted.

My colleagues and I worked on a complete definition and published it as “The Cloud Codex.” After watching the market and various types of innovation, we’re in the process of revising that framework or taxonomy.
We believe that, to be truly called cloud computing, an offering must provide several things:
  • The environment must be publicly accessible.
  • APIs must be available, allowing organizations to develop their own management environments or use those offered by others.
  • It must support multiple tenants (subscribers).
  • There must be a Web-based management environment that makes it possible to operate and administer all functions of the subscriber's cloud environment.
  • Very granular accounting/costing information must be available to subscribers, allowing a fine level of control and the creation of chargeback mechanisms.
  • The environment must be both scalable and elastic, allowing subscribers to control their use of cloud resources.
  • The subscriber must be offered a set of self-service, very rapid provisioning tools.
  • The environment must be virtualized and hardware-supplier independent.
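For a quick gut check, the criteria above can be framed as a checklist. This is an illustrative sketch of that kind of test (the criterion strings are paraphrased, and the function is invented for illustration, not an actual Codex tool):

```python
# Checklist sketch: does an offering meet the stated minimum
# requirements to be called cloud computing, or is it merely cloud-like?
CLOUD_CRITERIA = [
    "publicly accessible",
    "management APIs",
    "multi-tenant",
    "web-based management environment",
    "granular accounting/chargeback data",
    "scalable and elastic",
    "self-service rapid provisioning",
    "virtualized, hardware-supplier independent",
]

def classify(offering_features):
    met = [c for c in CLOUD_CRITERIA if c in offering_features]
    if len(met) == len(CLOUD_CRITERIA):
        return "cloud computing"
    return "cloud-like" if met else "fog"

print(classify(set(CLOUD_CRITERIA)))      # cloud computing
print(classify({"multi-tenant"}))         # cloud-like
print(classify(set()))                    # fog
```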

Many of the so-called cloud suppliers are offering products that don't really meet the minimum requirements to be called cloud computing. These suppliers are offering things that might be called cloud-like. Some suppliers clearly are just trying to create a fog.

Different types or expressions of cloud computing

At this point, three different approaches to cloud computing have emerged. Each, of course, is aimed at a different audience, having different needs, and may or may not relate to the others.
  • Software-as-a-service (SaaS) is where an application is made available over the Web. It appears to be a client/server application in which the web browser is the client and the server(s) supporting the application are somewhere on the network.
  • Platform-as-a-service (PaaS) is a service offered by a supplier that is based upon a set of programmatic interfaces, a set of services and an application development environment. At this point, most of these are proprietary, making migration from one service to another problematic.
  • Infrastructure-as-a-service (IaaS) is an offering of computing and storage somewhere out on the Web. The subscriber is offered the ability to host virtual clients or virtual servers, and the supporting storage, in one or more of the suppliers' datacenters.

What’s new?

In the past few weeks, I've seen a number of suppliers try to present their service offerings as the fourth instance of cloud computing.  In most cases, these services appear to be merely advisory or development consulting services.  These services, of course, don't really meet the minimum requirements to be called an expression of cloud computing.

I’ve thought of a number of possibilities for a fourth type of cloud computing. One example would be a service that front-ends a number of cloud computing offerings, allowing automatic deployment across multiple cloud computing suppliers’ environments. This approach would support workload management, workload service-level management, workload failover and the like.

How would you define the fourth type of cloud computing?

Original Article -

Tuesday, 13 October 2009

IDC: Cloud will be 10% of all IT spending by 2013

IDC's updated IT Cloud Services Forecast predicts that public cloud computing, which currently makes up $17.4 billion worth of IT purchases, will grow into a $44 billion market by 2013.

The IDC predictions presumably did not account for the estimated $19 billion, out of the U.S. government's $70 billion IT budget, that Federal CIO Vivek Kundra has vowed to spend on cloud computing. It also does not count spending on private cloud. IDC did not respond to requests for comment about its methods.

"It's increasingly apparent to everybody that this is a real phenomenon and not entirely marketing hype," said Jeff Kaplan, principal analyst at Boston-based consulting firm THINKstrategies. He said the numbers are an important indicator of the potential for cloud services.

Kaplan said that IDC had correctly forecast the economic downturn as a factor in the growth of cloud computing, but noted that there was a flip side as well. Even if buyers are attracted to the cloud pricing and consumption model, they're strapped for cash. The forecast states that actual spending is about six months behind 2008 predictions. "[IT buyers] still don't have money to spend on anything," even if there's a cheap cloud option. Another problem is persistent confusion about what constitutes cloud computing.

"There is a land grab on right now -- the truth is the market hasn't grown as fast as it could have," said Kaplan, because of the hype and overblown claims by vendors trying to cash in on the cloud label. That has left important enterprise buyers suspicious and confused, despite the growth of Amazon and Rackspace's cloud businesses.

Rackspace's last quarterly report roughly matches the IDC breakdown. Rackspace claimed at the end of August that it had 51,000 cloud customers, up from 16,000 last year. Cloud revenue, however, was only $13 million, less than 10% of its net revenue for the year. Amazon does not disclose or differentiate cloud computing figures in its reports, but estimates range as high as $400 million per year in cloud revenue, a small part of the $20 billion retail giant's business.

The five-year forecast shows cloud computing, defined by IDC as Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS), taking up 5% of the total IT market. It is projected to rise to 10% by 2013, with the market growing by 26% each year.
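The quoted growth rate is consistent with the dollar figures: compounding roughly 26% a year for four years takes the market from today's $17.4 billion to about the $44 billion forecast for 2013. A minimal sanity-check sketch, using only the article's own numbers:

```python
# Sanity-check IDC's forecast: $17.4B baseline, ~26% annual growth, 4 years.
base = 17.4        # $B, public cloud IT spending today (2009)
growth = 0.26      # annual market growth rate quoted by IDC
projected_2013 = base * (1 + growth) ** 4
print(round(projected_2013, 1))  # ≈ 43.9, in line with the $44B forecast
```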

The IDC forecast also predicts that SaaS spending will remain the greatest part of cloud spending. It currently makes up 50% of all cloud spending, but by 2013, it will only be 39% of the projected $44 billion. Servers, storage and application/PaaS will be gaining strongly, while infrastructure will remain 20% of overall spending on cloud.

IDC projects cloud will be 27% of all new spending -- new technologies, new companies, and new products instead of maintenance or lifecycle spending -- and that means, as vendors try to position themselves, cloud computing will spend the next five years as the hottest new IT market in town.

"Now, it's up to the industry to get out of its own way and let it happen," Kaplan said.

Original Article -

Monday, 12 October 2009

Five9 Announces Cloud Computing Platform for Call Centers

Five9 has announced a Cloud Computing Platform for Call Centers that enables building native software integrations between Five9 on-demand call center software and other enterprise software applications. Now, call centers across horizontal functions – Sales and Marketing, Customer Service and Support, Finance and Collections – can integrate Five9’s call center software with existing CRM applications, create unified agent desktops or build industry-specific solutions.

The Cloud Computing Platform for Call Centers is complemented by the Five9 Developer Program. By joining the program, software vendors, systems integrators, call center consultants, developers and I.T. professionals can gain access to a customer-grade development “sandbox” environment for the platform that supports full on-demand inbound and outbound call center capabilities, API documentation including code samples, and a dedicated developer support team.

Original Article -

Friday, 9 October 2009

SaaS can help SMBs

SaaS can help SMBs automate without spending huge amounts on software and hardware capital costs. During his presentation at Interop, Lakshmi Narayan Rao, Marketing Director—Global Channels, Jamcracker Inc, shared some real-world examples to highlight the benefits of SaaS for SMBs.

ERP, CRM, BI and the like might sound like alien terms to a small organization. A small organization has real business problems and wants to solve them with the help of IT without getting into the complexities. Rao feels that SMBs often resist automation and computerization owing to the high capex requirements.

“Small and medium businesses (SMBs) are the backbone of the nation's economy, particularly in developing countries. They constitute the bulk of the industrial base and also contribute significantly to their exports as well as to their Gross Domestic Product (GDP) or Gross National Product (GNP). There are 6 million SMBs in India; 50% of industrial output comes from SMBs; 42% of India’s total exports come from SMBs, which also form 80% of the total number of industrial enterprises. Yet PC penetration in SMBs is less than 10%, with less than 1% using automation,” Rao said.

Indian SMBs suffer from problems of suboptimal scales of operations and technological obsolescence. Business automation is the key to solving these problems. Just as companies share a common business center, IT can also share a pool of resources lying outside the organization’s premises by using cloud computing or SaaS. With benefits like pay-as-you-go pricing versus an upfront capital commitment, easy-to-adopt-and-expand infrastructure, and low start-up and operating costs, cloud computing is the best solution for SMBs.

In the case of a fleet management solution for DFC, a mid-size transportation company, the proposed on-premise solution required a multi-location hardware and software project, 3 months of development, 1 month of implementation, and Rs 10 lakhs plus a 35% annual maintenance fee, totaling a 3-year cost outlay of Rs 17 lakhs. By comparison, the same solution offered on a SaaS platform had a development time of 15 days, a setup fee of Rs 1.5 lakhs and a monthly subscription of Rs 5,000 (all inclusive), for a total 3-year cost outlay of Rs 3.3 lakhs.

The company moved from Excel sheets to a dedicated, browser-based, personalized SaaS business application for complete fleet management within two weeks, with zero capex and low opex.
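The two cost outlays quoted in the case study can be reproduced with simple arithmetic. A quick sketch (figures in Rs lakhs as given; the on-premise total assumes the 35% maintenance fee is paid in each of the two years after purchase, which matches the quoted Rs 17 lakh figure):

```python
# 3-year cost comparison for the DFC fleet-management case (Rs lakhs).
# On-premise: Rs 10 lakh project plus 35% annual maintenance (assumed to
# apply in years 2 and 3 so the total matches the article's Rs 17 lakhs).
on_prem_license = 10.0
on_prem_3yr = on_prem_license + 2 * (0.35 * on_prem_license)

# SaaS: Rs 1.5 lakh setup plus Rs 5,000/month (0.05 lakh) for 36 months.
saas_3yr = 1.5 + 36 * 0.05

print(round(on_prem_3yr, 1), round(saas_3yr, 1))  # 17.0 3.3
```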

Concluding his presentation Rao said, “SaaS is the closest that comes to answering the above Wish list and more so in turbulent times.”

Original Article -

OpSource Cloud Public Beta Now Available

OpSource, Inc., a provider of Cloud operations, today announced that OpSource Cloud, a Cloud designed to bring together the availability, flexibility and community of the public Cloud with the security, performance and control the enterprise requires, is now in public Beta and available for online purchase by the hour from

OpSource Cloud is designed to allow IT departments to manage their security as they would within their internal IT infrastructure. Upon sign-up, each customer receives a Virtual Private Network and sets the degree of public Internet connectivity they wish to grant, from totally private to fully available. OpSource Cloud also provides user customizable security with a compliant, high-performance multi-tier Cloud architecture and full user control over firewall settings. In addition, other enterprise quality controls include user level login and passwords, operational permissions and departmental and sub-department reporting, all backed by a 100% SLA. OpSource Cloud is built on the same standard technology already being employed by enterprise IT. This ensures out-of-the-box trust, compatibility and productivity. With virtualization playing such a key role in Cloud computing, OpSource has selected VMware for virtualization and application portability between the Cloud and Enterprise.

For users weaned on the public Cloud experience, OpSource Cloud offers online sign-up, pay by the hour usage, and rich online communities. And immediate availability and flexibility ensure that developers and enterprise IT alike have access to enterprise quality computing capability when they need it, paying for it only when it is being used. This is the promise of Cloud computing and OpSource Cloud delivers it.

Thursday, 8 October 2009

Datacenter energy costs outpacing hardware prices

Last week's EmTech meeting played host to a panel that focused on managing energy use in datacenters, featuring representatives from Lawrence Berkeley Labs and the Uptime Institute, along with people from Google, Intel, and Yahoo. Almost everyone but Yahoo's Scott Noteboom discussed where power efficiencies were improving, and identified areas that still needed work, so their points are best organized by topic, rather than speaker. Noteboom described how Yahoo built an extremely energy-efficient data center in upstate New York, which we'll get back to in a bit.

Nearly all of the speakers recognized that, from a market perspective, building an efficient datacenter is increasingly critical. Jonathan Koomey, who's affiliated with LBL and Stanford, said that power use by US datacenters doubled between 2000 and 2005, despite the fact that the period saw the dot com bust. Uptime's Kenneth Brill told the audience that, currently, the four-year cost of a server's electricity is typically the same as the cost for the server itself, while John Haas of Intel said that 2010 is likely to be the point where the electricity costs of a server over its lifetime will pass the price of the hardware.

The price for power also tends to get magnified at the datacenter level. Haas said that, based on his company's estimates, one watt saved at the server can save 2.84W at the datacenter level, while Google's Chris Malone claimed that cooling dominates the additional costs, running at about twice the cost of everything else combined. When the additional infrastructure costs are considered, Brill said, the minimum capital expenditure for a $1,500 server has now cleared $8,000 when the power and air conditioning infrastructure is considered. "Profits will deteriorate dramatically if datacenter costs don't get contained," he concluded.
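These multipliers are easy to sanity-check. A short sketch using the panel's quoted figures (Intel's 2.84x facility multiplier and Brill's $1,500 server versus $8,000-plus total capex; both are the speakers' numbers, not independent measurements):

```python
# Leverage of server-level power savings at the facility level (Intel figure):
# each watt saved at the server saves 2.84 W overall.
watts_saved_at_server = 100.0
watts_saved_at_facility = 2.84 * watts_saved_at_server

# Infrastructure overhead implied by Brill's capex numbers.
server_price = 1500.0
total_capex = 8000.0
overhead_multiplier = total_capex / server_price
print(round(watts_saved_at_facility), round(overhead_multiplier, 1))  # 284 5.3
```

In other words, power and cooling infrastructure multiplies the effective cost of a server by more than five.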

Fortunately, the panel described potential solutions that exist at nearly every level of the datacenter. More efficient processors are a key driver; the LBL's Koomey said that processing efficiency, measured as computations per kilowatt-hour, has been doubling about every 1.6 years, going back to the vacuum tube era of the 1940s. Plotted on a logarithmic scale, it made for a remarkably linear trend line. Intel's Haas told Ars that it's not only the computational efficiencies that are driving this trend—current Intel processors devote a million gates per socket towards energy management and, when idle, only consume about 30 percent of what they do under heavy loads. As a result, the company estimates that replacing 185 single-core servers from 2005 could be done with 21 modern Xeons, yielding energy savings of nearly 90 percent.
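Both figures in that paragraph check out arithmetically. A minimal sketch, taking Intel's consolidation counts and Koomey's 1.6-year doubling period at face value (the ~89% server reduction is consistent with the quoted "nearly 90 percent" energy savings only if per-server power draw is comparable):

```python
# Intel's consolidation claim: 185 single-core 2005 servers -> 21 modern Xeons.
old_servers, new_servers = 185, 21
server_reduction = 1 - new_servers / old_servers
print(round(server_reduction * 100))  # ~89% fewer servers

# Koomey's trend: computations per kWh doubling every 1.6 years.
# Over, say, 8 years that compounds to a 2**(8/1.6) = 32x efficiency gain.
years = 8
efficiency_gain = 2 ** (years / 1.6)
print(round(efficiency_gain))  # 32
```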

The increasing efficiency of the processor is starting to drive companies to look elsewhere for further gains. So, for example, Malone described how Google has ditched its UPS units, and instead placed a small battery on the motherboard side of the power converters. Charging it via the DC lines allows a UPS functionality with what he called "nearly perfect efficiencies."

But the biggest remaining efficiencies are likely to be in management and facilities. Nearly everyone agreed that the worst possible efficiency comes when hardware sits idle, since even the most efficient hardware draws a significant amount of power when not doing anything. Both Brill and Koomey said that the companies that do cloud computing are far, far better at avoiding this than business or scientific users, and the lessons they've learned really need to be adopted elsewhere. Brill also pointed out that although Intel provides power management capabilities at the processor level, these often have to be activated via software, and a lot of companies don't bother.

At the facilities level, since cooling dominates, controlling its use is the clearest path to greater efficiency. Several of the speakers said that although a lot of the hardware has recommended operating temperatures, these are often based on out-of-date information—the hardware can actually tolerate temperatures that are quite a bit higher. In addition to raising the temperature, Google's Malone said that controlling the airflow and using evaporative cooling can avoid the use of chillers entirely, yielding significant power savings. Haas pointed out that a group he works with, called The Green Grid, has an interactive map that will display how often the outdoor air temperature in a given location is below a set level, indicating that cooling can be avoided entirely.

With all these options available, the biggest problem is generally institutional inertia. As Koomey put it, there are split incentives: the cheaper hardware may not be efficient, which produces what he termed "perverse behavior." Haas pointed out that a company's energy buyers and IT managers may not report to the same executives, leaving them with little incentive to cooperate to lower a different division's costs.

All of that nicely set the stage for Yahoo's Noteboom, who described what happens when a company does organize a data center for energy efficiency. As recently as 2005, Yahoo was entirely dependent on colocation facilities that hadn't been built with efficiencies in mind. As a result, he estimated that 60 percent of the power was wasted, and the facilities went through enormous amounts of water. Since then, Yahoo has built five new facilities, each incorporating new ideas. Although its server footprint has grown by a factor of 12 since 2005, datacenter costs are only one-quarter of what they used to be.

He then described Yahoo's latest facility, being built in upstate New York. The buildings are oriented according to the prevailing winds that come off the Great Lakes, with vents along the walls and a high central cupola that allows waste heat to escape. "The entire building is an air handler," he said, noting that servers are also laid out within it in order to maximize their fans' impact. As a result, the current estimate is that it will only require external cooling for 212 hours in an average year, which will be provided via evaporative cooling. Yahoo estimates that the cost to cool it will only be about a single percent of what they were paying during the colocation days.

The company is looking into the potential for further improvements, like shutting off server fans in favor of larger, more efficient external ones and eliminating UPS systems. They're also exploring cross-facilities load management—sending work to facilities where power and cooling costs are lower.

Overall, the Yahoo presentation gave a good sense of what's possible when a company focuses its attention on datacenter energy use and does a thorough adoption of best practices. The cost figures provided by the rest of the panel suggested that, if current trajectories continue, it will be harder to justify not making this sort of effort.

Wednesday, 7 October 2009

The Cloud Transition: What Does It Mean For You?

We are standing on the threshold of a new transition in information technology and communications; a radical departure from current practice that promises to bring us new levels of efficiency at a vastly reduced cost. Cloud computing is full of potential, bursting with opportunity and within our grasp.

But, remember, that clouds always appear to be within our grasp and bursting clouds promise only one thing: rain!

As with all radical transitions, it takes time for the various pieces to fall into place. Some of them are already in place; some of them have yet to be considered. In this article, we will take a look at both and try to gauge where we are today and what work still remains. In addition, we will try to understand what this means to the various stakeholders involved.

Cloud composition
So what is the cloud, and who are the stakeholders involved? There are many definitions available, but in simple terms, cloud computing involves providing an information technology service that is accessed remotely. This access can be over a public or private infrastructure, but for our purposes, it is probably useful to consider the Internet as a reference delivery infrastructure.
With this in mind, a simple cloud model would include the following stakeholders:
  • The cloud service provider
  • The cloud connectivity provider
  • The Internet
  • The user connectivity provider
  • The user
The cloud service provider is based in a data center (which we assume he controls for simplicity), where he has a number of servers running the cloud service being provided (e.g. a CRM system, a remote mail system, remote file repository, etc.). He is responsible for ensuring that the servers are up and running, are available at all times and that there are enough of them to service all the users who have subscribed to the service.

The cloud connectivity provider delivers Internet access connections to the cloud service provider and ensures that the cloud service provider has enough bandwidth for all of the users who wish to access the cloud service simultaneously. He must also ensure that these connections and the bandwidth requested are always available.

The user accesses the service remotely, typically through a web browser over the Internet. He also needs Internet access, which is provided by a connectivity provider (e.g. an ISP), but only enough to ensure that he can access the service quickly and without too many delays. The connectivity provider ensures that his connection and required bandwidth are always available.

Which leaves us with the Internet. Who is responsible for this? The connectivity providers will typically have control over their parts of the network, but they must rely on other connectivity service providers to bridge the gap between them. The beauty of the Internet is that they do not have to know about all the actors in the chain of delivery. As long as they have a gateway to the Internet and the destination IP address, then the packets can be directed to the right user and vice versa.

The Internet itself is made up of a number of interconnected networks, often telecom service provider networks, who have implemented IP networks and can provide connectivity across the geographical region where they have licenses to provide services.

This brings the Internet and the cloud within the grasp of virtually everyone.

Cloud considerations
For cloud services to work, there are four fundamental requirements that need to be met:
  • There must be an open, standard access mechanism to the remote service that can allow access from anywhere to anyone who is interested in the service
  • This access must have enough bandwidth to ensure quality of experience (i.e. it should feel like the service or application is running on your desktop)
  • This access must be secure so that sensitive data is protected
  • This access must be available at ALL times
Some of these fundamentals are in place and are driving adoption of cloud services. The Internet and IP networking have grown to the point where they provide the perfect access mechanism. The Internet is a global network, accessible from anywhere, as Internet connectivity is now virtually ubiquitous. The bandwidth of the Internet is also not an issue - it is only a question of how much you are willing to pay for your connectivity.
Nevertheless, for users in particular, a modestly priced Internet connection provides all the bandwidth they need to access the cloud services they require.
So far so good!

Cloud service providers are extremely conscious of the fact that availability and security are key requirements and generally ensure that there are redundant servers, failover mechanisms and other solutions to ensure high availability. They also provide trusted security mechanisms to ensure that only the right people get access to sensitive data.

Still on track then!

Tuesday, 6 October 2009

Cloud-Based Email Archiving Provider Expands Partners

With the explosive growth of software-as-a-service (SaaS) solutions, cloud-based services providers, managed service providers (MSPs), value-added resellers (VARs) and global system integrators (GSIs) are all looking for ways to enhance their suite of offerings. Further, many of these organizations have concluded that their on-premise and hosted email offerings are incomplete without an email archiving component. The LiveOffice Partner Program provides these organizations with feature-rich technology and the sales and marketing support needed to become successful resellers of hosted archiving solutions.

Recently recognized by leading industry analyst firm Gartner, Inc. as the largest provider of outsourced email archiving services in North America, based on total number of clients as of year-end 2008, LiveOffice is expanding its partner program to better meet the evolving needs of its partner community. “Cloud-based email archiving is hot, and our enhanced program provides LiveOffice partners with all of the tools they need to hit the ground running,” said Jim O’Hara, vice president of sales and business development for LiveOffice. “As more resellers look to expand their SaaS offerings around email, our solutions provide unique value for them by creating a recurring revenue stream, keeping their margins intact and helping build their SaaS portfolio.” LiveOffice continues to ride the rapidly growing wave of interest around cloud-based email archiving.

The email archiving market, according to Gartner, grew by 25.7 percent in 2008. In addition, Gartner estimates that “the number of customers using [hosted] service providers for email archiving grew 59 percent in 2008 to reach 23,624 organizations, while the growth in the number of [on-premise] product customers grew only 26 percent to reach 29,092.” As more organizations look to migrate to cloud-based email (Microsoft Exchange Online, Google Gmail, etc.), the resulting market potential for hosted archiving is expected to grow significantly.

The LiveOffice Partner Program provides partners with instant access to its turnkey suite of email archiving solutions, while also providing the sales, marketing and technical support required to serve their clients. Working together with LiveOffice, partners are able to provide their clients with hosted email archiving solutions that meet a wide range of mailbox management, legal discovery and compliance requirements.
The LiveOffice Partner Program features:
  • Comprehensive SaaS Training: Partners benefit from a comprehensive training program designed specifically for them and their clients. From extensive technical documentation to online training videos, LiveOffice ensures that its partners have access to as much (or as little) training as they wish. 
  • Dedicated Archiving Specialist: Each LiveOffice partner is assigned a dedicated account manager to help them successfully market and sell cloud-based archiving. Each account manager is an archiving expert and brings years of specialized knowledge to their role. 
  • Easy Integration: LiveOffice provides API-level integration to facilitate simple and rapid incorporation of email archiving services into partners’ existing systems and processes. 
  • Cloud Start Marketing Kit: This kit helps partners jumpstart their pipeline building and marketing efforts with collateral, web copy, email marketing templates, technical documentation and more. 
  • Resource Center: The LiveOffice Resource Center is a one-stop shop for partners and their clients. This comprehensive online portal contains pre-recorded webinars, whitepapers, case studies and online product tours detailing the benefits of cloud-based email archiving solutions.
Original Article -