Friday, 30 July 2010

BBC E-mail: UK troops use iPad app for fire mission training

For the first time, UK troops are using a special app developed for the iPad to learn how to handle a fire mission. That's when artillery is being fired at the enemy from several miles away. In early trials at the Royal School of Artillery in Wiltshire, troops have learned the jargon and procedures more quickly than before, when they were sat listening to lessons from instructors. It's hoped smartphone and tablet technology could be used to speed up training across the army.

Front line Lance Bombardier Jason Markham from 1st Regiment Royal Horse Artillery has already served in Afghanistan and is in training for a second tour. He told Newsbeat how it works. He said: "I'd be at the HQ - the troops on the ground would call me for fire support and they'd give me the target location and a description of the target.
"We'd use all of that information to come up with a solution. This has been designed to let us practise, so that when we get out there into theatre we're a lot slicker. It makes it more fun instead of being sat in a classroom being given information from a presentation. If you're on a course you can take this back to the block and practise with it, even have little competitions with it."

Soldiers back at the CP (command post) have to learn how to communicate with those on the front line, to make sure the right weapons are fired, at the right time and at the right target.

They're using the iPad back at base in Wiltshire to sharpen their skills. Major Rich Gill is an army training officer who has been involved in rolling out the app. He said: "If we can use this sort of technology, we can probably shorten the amount of training and that is pretty key nowadays when people are so committed to operations in Afghanistan.

"There's so much they need to do before they go there and when they come back. If we can use this to reduce the amount of training it'll be fantastic."
But the investment in iPads comes at a time when the armed forces are facing budget cuts. Major Gill insists the move is good value for money as it saves on other costs like printing reference guides and manuals.

He told Newsbeat that 15 other options were considered, and denies that using Apple's gadget is a gimmick: "That was one of the first things we considered and it's not about the gimmicky side of it. You can get all the material that you need on there at the right time and it's really mobile as well. At the time this project came about this was the best bit of hardware on the market."

And he says it's not a case of cutting out mistakes: "From what we've done over in Afghanistan with the training we've been through, we don't really make mistakes. The training back here is the best it can be and if we can improve it then that's exactly what we need to be doing."

The same team is also working on a 3D app for army pilots, who have to learn to recognise different vehicles from the air as part of their training.


Blue Coat Introduces Cloud-Based Analysis System

Blue Coat Systems has launched a new service for its PacketShaper customers

Blue Coat Systems has turned to the cloud to offer an analysis system that it claimed would improve network performance. The Blue Coat Assessment Service uses some of the functionality of its own PacketShaper appliance to provide the analysis.

The system has been designed to allow Blue Coat resellers to deploy a PacketShaper appliance in the network of a customer and then, at the conclusion of a data collection period, typically seven days, use the cloud service to perform the analysis of the network traffic.

According to Nigel Hawthorn, Blue Coat's VP of marketing, EMEA: "PacketShaper has been around for 10 years and we've long suggested using it for network analysis. But it's not been an easy process - systems engineers have to put a device in the customer's network, come back a few days later, gather that information, manipulate it and try to work out what's important to the customer."

He said it was much less complex than the previous system. "The reseller only has to put in the data once. And the system doesn't send all the data to the cloud - just enough metadata to compile the graphs and pie charts."
While the cloud-based service will offer reports on traffic performance, protocols and applications being used, it won't offer details on individual users. "You'd be able to pick up that Spotify, for example, was being used but not who was using it," said Hawthorn. He added that the PacketShaper could be tweaked according to the demands of the organisation's users. "A university could tone down peer-to-peer so it didn't impact other levels or a company could allow VoIP," he said.

The new service will be available to resellers in August.

The Cloud Services Market is Looking Like a Red Ocean


Earlier this week, OpSource announced a partner program. The news came on Monday, the first day developers could download code from OpenStack, a separate initiative that has had considerable attention this week.

Both OpSource and Rackspace provide cloud infrastructure based upon open source. They could easily be part of the same open-cloud network such as OpenStack. But for now OpSource says it will not join the effort led by Rackspace. Our guess is that OpSource competes to some extent with Rackspace and is looking at other alternatives.

Rackspace, OpSource and almost two dozen other companies are now offering a variety of cloud infrastructures. It's an increasingly crowded market that is due for some consolidation. But for the moment, the market is an example of how companies perceive the opportunities the cloud provides.

Rackspace, as we know, provides public cloud infrastructure. OpSource offers cloud infrastructure to the enterprise, service providers and systems integrators.

OpSource calls its program a partner ecosystem. These partners include integrators, developers, ISVs, cloud platform companies and telecom providers.

OpSource expects 50% of its revenue to come from these partners, and points in particular to telecommunications companies. CEO Treb Ryan:
"The feedback we're getting from the telecoms is that a request for cloud is becoming an increasing part of the RFPs they are seeing from customers. Usually it comes as a request for bundled services (such as managed network, internet access, hosting and cloud). One of our European telecoms stated they couldn't bid on $2 million a month worth of contracts because one of the requirements was cloud."
Ryan says customers want to work with one company:
"Most likely it's a combination of not wanting to go to separate vendors for separate services (i.e. a colo company for hosting, OpSource for cloud, a telecom for network, and a managed security company for VPNs); they want to get it all from one vendor, completely integrated. Secondly, I think many customers have a trusted relationship with their telecom for IT infrastructure services already and they trust them more than a third-party company."
OpSource and Rackspace are two well-established companies in the cloud computing space. But the broader market of providers is beginning to morph.

John Treadway of Cloud Bzz put together a comprehensive list. He says the market is looking more like a red ocean:
"I hope you'll pardon my dubious take, but I can't possibly understand how most of these will survive. Sure, some will because they are big and others because they are great leaps forward in technology (though I see only a bit of that now). There are three primary markets for stacks: enterprise private clouds, provider public clouds, and public sector clouds. In five years there will probably be at most 5 or 6 companies that matter in the cloud IaaS stack space, and the rest will have gone away or taken different routes to survive and (hopefully) thrive."
Lots of blood in the water. Who's going to get eaten first?

Thursday, 29 July 2010

7 Non-Obvious SaaS Startup Lessons From HubSpot

1. You are financing your customers. Most SaaS businesses are subscription-based (there's usually no big upfront payment when you sign up a customer).  As a result, sales and marketing costs are front-loaded, but revenue comes in over time.  This can create cash-flow issues.  The higher your sales growth, the larger the gap in cash-flows.  This is why SaaS companies often raise large amounts of capital.

Quick Example: Let's say it costs you about $1,000 to acquire a customer (this covers marketing programs, marketing staff, sales staff, etc.).  If customers pay you $100/month for your product and stay (on average) for 30 months, you make $3,000 per customer over their lifetime.  That's a 3:1 ratio of life-time-value to acquisition cost.  Not bad.  But, here's the problem.  If you sign up 100 customers this month, you will have incurred $100,000 in acquisition costs ($1,000 x 100).  You're going to make $300,000 over the next 30 months on those customers by way of subscriptions.  The problem is that you pay the $100,000 today whereas the $300,000 payback will come over time.  So, from a cash perspective, you're down $100,000.  If you have the cash to support it, not a big deal.  If you don't, it's a VERY BIG DEAL.  Take that same example, and say you grew your new sales by 100% in 6 months (woo hoo!).  Now, you're depleting your cash at $200,000/month.

Basically, in a subscription business, the faster you are growing, the more cash you're going to need.
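The arithmetic in the example above can be captured in a few lines (a quick sketch; the figures are the ones quoted, and Python is used purely for illustration):

```python
# Figures from the example above: $1,000 to acquire a customer who pays
# $100/month and stays 30 months on average.
CAC = 1_000            # customer acquisition cost ($)
MRR = 100              # monthly subscription revenue per customer ($)
LIFETIME_MONTHS = 30   # average customer lifetime
COHORT = 100           # customers signed this month

ltv = MRR * LIFETIME_MONTHS       # $3,000 lifetime value per customer
cash_out_today = CAC * COHORT     # $100,000 spent up front
lifetime_revenue = ltv * COHORT   # $300,000, but spread over 30 months

print(f"LTV:CAC = {ltv // CAC}:1")                       # 3:1
print(f"Cash gap today: ${cash_out_today:,}")            # $100,000
print(f"Revenue over 30 months: ${lifetime_revenue:,}")  # $300,000
```

The gap between `cash_out_today` and the slowly arriving `lifetime_revenue` is exactly the financing problem: double the cohort and the gap doubles too.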

2. Retaining customers is critical. In the old enterprise software days, a common model was to have some sort of upfront license fee — and then some ongoing maintenance revenue (15–20%) which covered things like support and upgrades.  Sure, the recurring revenue was important (because it added up) but much of the mojo was in those big upfront fees.  The holy grail as an enterprise software startup was when you could get these recurring maintenance fees to exceed your operating costs (which meant that, in theory, you didn't have to make a single sale to keep the lights on).  In the SaaS world, everything is usually some sort of recurring revenue.  This, in the long term, is mostly a good thing.  But, in the short term, it means you really need to keep the customers that you sell, or things are going to get really painful, very quickly.  Looking at our example from #1, if you spent $1,000 to acquire a customer, and they quit in 6 months, you lost $400.  Also, in the installed-software world, your customers were somewhat likely to have invested in getting your product up and running and customizing it to their needs.  As such, switching costs were reasonably high.  In SaaS, things are simple by design — and contracts are shorter.  The net result is that it is easier for customers to leave.

Quick math: Figure out your total acquisition cost (let's say it's $1,000) and your monthly subscription revenue (let's again say it's $100).  This means that you need a customer to stay at least 10 months in order to "recover" your acquisition cost — otherwise, you're losing money.
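A minimal sketch of that payback math, which also covers the churn loss from lesson #2 (all figures come from the examples above):

```python
def payback_months(cac, mrr):
    """Months a customer must stay before subscription revenue covers acquisition cost."""
    return cac / mrr

def net_from_customer(cac, mrr, months_retained):
    """Net cash from a customer who churns after `months_retained` months."""
    return mrr * months_retained - cac

print(payback_months(1_000, 100))         # 10.0 months to break even
print(net_from_customer(1_000, 100, 6))   # -400: churn at 6 months loses $400
```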

3. It's Software — But There Are Hard Costs. In the enterprise-installed software business, you shipped disks/CDs/DVDs (or made the software available to download).  There were very few infrastructure costs.  To deliver software as a service, you need to invest in infrastructure — including people to keep things running.  Services like Amazon's EC2 help a lot (in terms of flexible scalability and very low up-front costs), but they don't obviate the need for people to manage the infrastructure.  And people cost money.  Oh, and by the way, Amazon's EC2 is great in terms of low capital expense (i.e. you're not out of pocket lots of money to buy servers and stuff), but it's not free.  By the time you get a couple of production instances, a QA instance, some S3 storage, perhaps some software load-balancing, and maybe 50% of someone's time to manage it all (because any one of those things will degrade/fail), you're talking about real money.  Too many non-technical founders hand-wave the infrastructure costs because they think "hey, we have cloud computing now, we can scale as we need it."  That's true, you can scale as you need it, but there are some real dollars in just getting the basics up and running.

Quick exercise: Talk to other SaaS companies in your peer group (at your stage), that are willing to share data.  Try and figure out what monthly hosting costs you can expect as you grow (and what percentage that is of revenue).
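As a starting point for that exercise, the line items mentioned above can be tallied in a toy cost sheet. Every dollar figure here, including the $50k MRR, is a made-up placeholder, not a real quote; swap in numbers from your own provider and payroll:

```python
# Rough monthly infrastructure bill for a small SaaS. All figures are
# hypothetical placeholders for illustration only.
monthly_costs = {
    "production instances (x2)": 2 * 350,
    "QA instance":               150,
    "S3 storage":                80,
    "software load balancing":   60,
    "50% of an ops engineer":    5_000,
}

total = sum(monthly_costs.values())
print(f"Monthly run rate: ${total:,}")            # $5,990
print(f"As % of $50k MRR: {total / 50_000:.0%}")  # 12%
```

Note that in this sketch the human in the loop dwarfs the hardware line items, which is the paragraph's point.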

4. It Pays To Know Your Funnel. One of the central drivers in the business will be understanding the shape of your marketing/sales funnel.  What channels are driving prospects into your funnel?  What's the conversion rate of a random web visitor to trial?  Trial to purchase?  Purchase to delighted customer?  The better you know your funnel, the better decisions you will make as to where to invest your limited resources.  If you have a "top of the funnel" problem (i.e. your website is only getting 10 visitors a week), then creating the world's best landing page and trying to optimize your conversions is unlikely to move the dial much.  On the other hand, if only 1 in 10,000 people that visit your website ultimately convert to a lead (or user), growing your web traffic to 100,000 visitors is not going to move the dial either.  Understand your funnel, so you can optimize it.  The bottleneck (and opportunity for improvement) is always somewhere.  Find it, and optimize it — until the bottleneck moves somewhere else.  It's a lot like optimizing your software product.  Grab the low-hanging fruit first.

Quick tip: Make sure you have a way to generate the data for your funnel as early in your startup's history as possible.  At a minimum, you need numbers on web visitors, leads/trials generated and customer sign-ups (so you know the percentage conversion at each step).
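A toy funnel model makes the point concrete; the stage names and conversion rates below are illustrative, not benchmarks:

```python
# Multiply stage-to-stage conversion rates to see which fix moves the dial.
def customers_from(visitors, rates):
    """Expected customers from `visitors` entering a funnel with the given stage rates."""
    n = visitors
    for rate in rates.values():
        n *= rate
    return n

funnel = {
    "visitor -> trial":  0.02,
    "trial -> purchase": 0.10,
    "purchase -> happy": 0.80,
}

print(customers_from(100_000, funnel))   # 160.0 customers from 100k visitors

# Doubling the weakest stage doubles the output -- usually far cheaper than
# doubling traffic for the same result.
funnel["trial -> purchase"] = 0.20
print(customers_from(100_000, funnel))   # 320.0
```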

Read The Rest…
Full Credit to:

Vertical Industry Use Of Cloud Computing – Who’s Getting Involved?

Here are some useful charts illustrating the future and direction of cloud computing.
Thanks to our friends at MineCast and Famapr for the research and insights.

Wednesday, 28 July 2010

Build Your Business Case for the Cloud Today

My partner and I consult with a variety of firms on the topic of cloud adoption. It is surprising how few really understand the strategic and ROI benefits of its application.

Total Cost of Ownership (TCO) is an important number to use when evaluating IT, and in particular when comparing existing systems to cloud solutions. Often the thinking is that sticking with what an organization uses now is safer than making a change - and the primary driver in most people's minds is cost.

Utilizing TCO is the only way managers can compare apples to apples. Too often the comparisons are apples to oranges, and this is why more organizations aren't jumping into the cloud faster. They simply aren't doing the proper math to see the savings.
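A minimal sketch of what "doing the proper math" can look like; the figures are hypothetical and stand in for a real cost inventory (hardware, licences, power, admin staff on the one side, subscription fees on the other):

```python
def tco(upfront, annual, years):
    """Total cost of ownership: upfront capital plus recurring annual costs."""
    return upfront + annual * years

# Hypothetical figures: the on-premise `annual` bundles maintenance, power
# and admin staff; the cloud option trades capital expense for subscriptions.
on_prem = tco(upfront=120_000, annual=45_000, years=5)
cloud   = tco(upfront=10_000,  annual=60_000, years=5)

print(on_prem, cloud)   # 345000 310000
print("cloud cheaper" if cloud < on_prem else "on-prem cheaper")
```

The point of the exercise is the like-for-like horizon: compare both options over the same number of years with every cost included, or the comparison is apples to oranges.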

read more

Cloud Computing Delivering on its Promise, but Doubts Still Hold Back Adoption

The Majority of IT Departments now Use Cloud-Based Applications, with Users Reporting Better Security and Lower Costs

Waltham, Mass., July 22, 2010 – Mimecast®, a leading supplier of cloud-based email security, continuity, policy control and archiving, today announced the results of the second annual Mimecast Cloud Adoption Survey, a research report examining attitudes to cloud computing services amongst IT decision-makers in UK and US businesses.  The survey, conducted by independent research firm Loudhouse, reveals that a majority of organizations (51 percent) are now using some form of cloud computing service—and the levels of satisfaction amongst those companies are high across the board.  Conversely, companies not yet using cloud services cite concerns around cost and security.

The survey shows that of those businesses using cloud services, 74 percent say that the cloud has alleviated internal resource pressures, 72 percent report an improved end-user experience, 73 percent have managed to reduce their infrastructure costs and 57 percent say that the cloud has resulted in improved security.  However, not everyone is convinced: 74 percent of IT departments still believe that there is always a trade-off between cost and IT security and 62 percent say that storing data on servers outside of the business is always a risk.

Highlights from the Research

Cloud Services are now the Norm:
- The majority of organizations now use cloud-based services.  The report found that 51 percent of organizations are now using at least one cloud-based application.  Adoption rates amongst US businesses are slightly ahead of those in the UK with 56 percent of respondents using at least one cloud-based application, compared to 50 percent in the UK.  This is a significant rise from the 2009 survey, when just 36 percent of US businesses were using cloud services.

- Two thirds of businesses are considering adopting cloud computing.  Encouragingly for vendors, 66 percent are now considering adopting cloud-based applications in the future.  Again, US businesses are ahead of the UK in their attitudes to the cloud with 70 percent considering cloud services, compared to 60 percent in the UK.

- Email, security and storage are the most popular cloud services.  62 percent of the organizations that use cloud computing are using a cloud-based email application.  Security and storage are the next most popular, used by 52 percent and 50 percent of organizations with at least one cloud-based service respectively.  Email services are most popular with mid-size businesses (250-1,000 employees), with 70 percent of these organizations using the cloud to support this function.  Smaller businesses (under 250 employees) are most likely to use the cloud for security services, with larger enterprises (over 1,000 employees) most likely to make use of cloud storage services.

Cloud Attitudes are Split Between the 'Haves' and 'Have-Nots':
- Existing cloud users are satisfied.  Security is not considered to be an issue by existing cloud users: 57 percent say that moving data to the cloud has resulted in better security, with 58 percent saying it has given them better control of their data.  Among current cloud users, 73 percent say it has reduced the cost of their IT infrastructure and 74 percent say it has alleviated the internal resource pressures upon the department.
- Security fears are still a barrier.  Three quarters (74 percent) of IT departments agreed with the statement 'there is always a trade-off between cost and IT security,' suggesting that many organizations feel cloud solutions are less secure than more expensive on-premise alternatives, simply due to their lesser cost.  Storing data on servers outside of the business was also seen as a significant security risk by 62 percent.
- IT is concerned that adopting cloud will not initially result in cost savings.  58 percent of respondents thought that replacing legacy IT solutions will almost always cost more than the benefits of new IT.
- Cloud concerns stem from a lack of clarity.  One reason for the negative perceptions of cloud services among non-users seems to be a lack of clear communication from the industry itself.  54 percent of respondents said the potential benefits of cloud delivery models are overstated by the IT industry.

Supporting Quotes
"It is great to see that cloud computing has now been embraced by the majority of organizations," commented Peter Bauer, CEO and co-founder of Mimecast.  "The fact that more than 50 percent of businesses are now using cloud-based applications – with two thirds currently considering adopting them – is hugely encouraging for the industry and a clear indication that IT is increasingly willing to innovate in order to get better value for money, increased reliability and greater control  of their data."
"The research shows that there is a clear divide within the IT industry on the issue of cloud computing," added Bauer.  "While those organizations that have embraced cloud services are clearly reaping the rewards, there are still a number who are put off by the 'cloud myths' around data security and the cost of replacing legacy IT.  It is now up to cloud vendors to educate businesses and end users to ensure that these concerns do not overshadow the huge potential cost, security and performance benefits that cloud computing can bring."
About the Research
The Mimecast Cloud Adoption Survey was carried out by Loudhouse Research on behalf of Mimecast in June 2010.  The survey polled over 500 IT professionals from a range of businesses and organizations based in the UK and USA.  The full report can be downloaded here:

About Mimecast
Mimecast delivers SaaS-based enterprise email management including archiving, discovery, continuity, security and policy.  By unifying disparate and fragmented email environments into one holistic solution that is always available from the cloud, Mimecast minimizes risk and reduces cost and complexity, while providing total end-to-end control of email.  Founded in the United Kingdom in 2003, Mimecast serves approximately 3,000 customers worldwide and has offices in Europe, North America, Africa, the Middle East and the Channel Islands.

Tuesday, 27 July 2010

Telco Cloud Services - Cisco's Vision for the Smart Cloud

One useful way to dissect the Cloud market is to quantify what it means to telcos. Not only might they use it internally for their own IT operations, but it's now the most strategic factor affecting their enterprise customer market, i.e. how do they build and sell 'Telco Cloud Services'?

Gartner's Magic Quadrant for this industry identified AT&T as the industry leader because of the strength of its Cloud services vision, especially integration with its existing telco network services.

Cloud Cisco
While in this mode it's also helpful to review vendors like Cisco and their strategy. Telcos pretty much build out their product platforms on equipment from vendors like Cisco and others, and so it's quite likely their Cloud portfolio will grow in a similar way.
The Cisco white paper 'Cloud - What a Business Leader Must Know' (15 page PDF) explains their vision for the role of their technology as the enabler of a "Cloud Network Platform". It's intended as a snapshot introduction for senior executives and succeeds well at this. In particular I thought it reflected the essential innovation theme:
"Cloud accelerates your business by allowing you to transform ideas into marketable products and services with greater speed."
and that for telcos this means "Cloud unlocks several opportunities for higher margin services. Cloud will enable entirely new business models and revenue streams". Absolutely, and I'd suggest a number of perspectives can help build a framework for achieving this:
  1. The Smart Data-centre
  2. Inter-Cloud Services
  3. Cloud UC
1. The Smart Data-centre
Hosting your applications "in the Cloud" typically refers to the modern-day form of web hosting: Cloud providers like Amazon AWS, which cater to the very specialized needs of specific customer segments, such as providing CloudFusion for PHP developers.

In contrast, Cisco will play more in the private 'Enterprise Cloud' scenario, where the same underlying technologies are used to automate a much broader array of internal IT operations. As IDC highlights, this is a much larger $11.8 billion market, versus $718 million for public Clouds, and although the same technologies are involved it is an entirely different buying market with more business-driven needs and, critically for sellers, an entirely different buying process.

Fundamentally this is more about an overall 'data-centre transformation' project that is likely to be led from the CIO level and with the end result of smarter IT operations achieved through better automation of the underlying machinery.

The core technology is one of IT workflow automation, where dynamic orchestration of IT resources from across multiple, distributed data-centres is virtualized into a single service, achieving many business benefits like improved Business Continuity.

Cloud software from vendors such as Enomaly is powerful because it can play a key role in both scenarios, web hosting and enterprise data-centre, and offers the provisioning platform required to achieve this. As described in the Intel 'Cloud Builder' white paper it uses the same design principles as the Internet itself to achieve a self-organizing, high-availability computing environment.
"The Cloud must run like a decentralized organism, without a single person or organization managing it. Like the Internet, it should allow 99% of its day-to-day operations to be co-ordinated without a central authority. Applications deployed on a Cloud managed in this fashion are more adaptive and fault tolerant because single points of failure are reduced or eliminated"
Cisco is offering complementary technologies for the network layer, and so in combination they address the total IT estate that an enterprise owns and operates, and can automate it end-to-end, i.e. Cloud comes to represent the whole of the IT environment, and makes it smarter.

2. Inter-Cloud Services
A key part of this smartness will come from "Inter-Cloud Services".
One essential facet of automating these data-centre processes like procuring disk space is that it will occur across multiple service providers, not just one, and this will require technical standards for facilitating this type of exchange.

Service providers will be able to invest in assets, like physical infrastructure, that are published to a "marketplace" for other service providers to consume, re-brand and deliver to their clients, achieving premium levels of Business Continuity by doing so.

The telecoms industry standards group, the TMF, develops and maintains these types of standards, with Inter-Cloud Services launched recently as part of its overall Cloud program for telcos.
This is a project team of vendors like Comptel and Progress who are working with Cisco to provide software for automating this type of cross-provider provisioning. Their recent PowerPoint presentation on this work explains how it can be leveraged to achieve 'Unified Service Delivery Management', a singular approach to service orchestration across the networks and data-centres of multiple providers.

3. Cloud UC
The presentation focuses on the product area most important to telcos, 'UC' - Unified Communications.

This represents their core telephony products in a modern IP world, referring to desktop VoIP, Instant Messaging, voicemail and so forth, and it highlights the core value of this interoperability.

Microsoft recently entered the market, with the USP that it offers a complete software-centric approach that fits perfectly with this Cloud approach. That is, Microsoft can now offer a single software estate - SQL Server, SharePoint, Exchange et al, and now telephony too - all of which can be better automated and delivered to much higher standards through this Enterprise Cloud approach. This can drive huge cost savings through consolidation efficiencies.

With regard to Inter-Cloud Services, this shows how "the Cloud" will also offer business value through its role as a service and data broker. IBM, Cisco, Lucent et al each offer UC products, each with the same message of business benefit - increased staff productivity through collaboration tools - yet in many cases the different software isn't compatible.

Gateways exist to link one to another, but what is really needed is a general improvement in the data sharing capabilities of the Internet. That way there is a "loosely coupled" peer-to-peer exchange of information like 'Presence', 'Follow Me' messaging and other personal data, but still within a context where corporate policy is applied to ensure security and compliance.

Achieving this interoperation in a standardized way, via Inter-Cloud Services, will be one of the first of many benefits Cloud delivers.

What do you think?
Join in the discussion in our Cloud Ventures Linkedin industry group:

read more

Storm Clouds – Disruptive Technologies Creating the ‘New Normal’

As much as information technology has changed in the last 10 years, the next decade promises even more significant change.

And as cloud technology becomes more prevalent, IT enterprises will be driven to reconsider the status quo around just about everything we know including Physical Infrastructure, Virtualization, Automation, Service Management, and Security.

Cloud technology and virtualization of virtually everything means rethinking the economic models around physical infrastructure, the emergence of a new class of providers as well as a greater degree of standardization around virtualized OS and middleware configurations. There are increasing expectations from consumers of data center services - instant provisioning and de-provisioning, just for openers. And with a new generation of workers that expect anytime, anywhere access to corporate data, securing your data outside of your data center walls has become a greater business imperative albeit far more difficult.


Monday, 26 July 2010

Cloud Computing Confidence Expected to Drive Economic Growth

The flexibility of cloud computing could help organizations recover from the current global economic downturn, according to 68 percent of IT and business decision makers who participated in an annual study commissioned by Savvis, Inc., a provider of cloud infrastructure and hosted IT solutions for enterprises.

Vanson Bourne, an international research firm, surveyed more than 600 IT and business decision makers across the United States, United Kingdom and Singapore, revealing an underlying pressure to do more with less budget (the biggest issue facing organizations, cited by 54 percent of respondents) and demand for lower cost, more flexible IT provisioning.

read more

Small Businesses Embrace Cloud Services to Grow Their Business

Cloud enablement provider Parallels on Tuesday responded to the demands of small businesses by adding seven Cloud services to the Application Packaging Standard (APS) catalog. Parallels estimates that the small business need for Cloud services will grow to nearly $19 billion by 2013, and that service providers must develop a full service offering to meet this considerable opportunity and "Profit from the Cloud". With the recent incorporation of licensing capabilities in the APS 1.2 specification, service providers can now quickly and easily license, manage and offer the applications that small businesses need and demand.

Friday, 23 July 2010

Apple says iPhone and iPad are in use at most Fortune 100 companies

More than 80 percent of Fortune 100 companies are using the iPhone, and about 50 percent of the Fortune 100 are deploying or testing the iPad, Apple revealed Tuesday.

Apple Chief Operating Officer Tim Cook announced those figures during the company's quarterly earnings conference call Tuesday evening. He said that the company is selling iPads and iPhones "as fast as (they) can make them," including sales to the enterprise market.

The iPhone has steadily grown in the enterprise market since it was first introduced in 2007. But Apple's comments on Tuesday would seem to suggest that the iPad has found faster adoption in the enterprise market.

The remaining question is: What are those businesses using the iPad for? Apple's executives did not provide any indication, though numerous companies have publicly embraced the device, giving some idea of where the iPad is being used.

Earlier this month, Wells Fargo revealed it initially bought 15 iPads to demonstrate products at an investor conference. While the company spent two years evaluating the iPhone, it took just two weeks to approve the iPad for use. The company's experience with the iPad has led it to buy "a bunch" more.

In addition, Mercedes-Benz has used the iPad to sell cars, allowing sales people to handle credit applications on the touchscreen tablet device. The company is now considering using iPads at all 350 of its U.S. dealerships.

And SAP has developed its own iPad application, allowing managers to approve shipping of customer orders. The company also has a handful of other custom applications planned for development.  


IT Budgets Pointing to the Cloud, Expansion

At first, one financial services company didn't believe a cloud provider was a possible option because of financial compliance rules and client audit concerns. But it was surprised when the provider, in this case BlueLock LLC in nearby Indianapolis, said it could meet all of its security rules, service levels and disaster recovery needs.

Thursday, 22 July 2010

Expand’s WAN Optimization Enables $250,000 of Annual Savings for Ridley Inc.

Expand Networks' Technology Improves Efficiency through Consolidated IT Initiatives

Roseland, NJ – July 19th 2010 – Expand Networks, the leader in WAN optimization for branch office consolidation and virtualization, has been chosen by leading animal nutrition company Ridley Inc. to assure the success of major IT initiatives, including server consolidation, thin-client computing and bandwidth optimization. Having initially deployed Expand as part of a bandwidth consolidation project, the $200,000 investment paid for itself in just over 6 months. Ridley is now investing a further $180,000 in new Expand technology in order to meet the renewed bandwidth requirements needed to support ongoing business growth and strategic IT initiatives, and is set to recoup year-on-year savings of $250,000 from the deployment.

Ridley Inc. manufactures and markets animal nutrition and health products and is a publicly traded company on the Toronto Stock Exchange. Based on a business ethos of 'continuous improvement', it operates from 42 locations across North America employing over 1200 staff. With most locations being extremely harsh and dusty environments, such as feed mills, Ridley embarked on a thin-computing strategy, removing servers and computers from remote locations and delivering server based computing from its central location in Mankato, Minnesota.

Chad Gillick, IT Manager, Ridley Inc. explains, "We recognize the importance of IT in enabling the business to streamline its processes, gain greater efficiencies, increase productivity and reduce costs. Expand's WAN optimization technology has been a critical enabler in helping us deploy strategic IT initiatives such as server based computing and virtual desktop infrastructure."

By deploying the Expand solution, Gillick has been able to remove expensive desktop and laptop computers at the remote sites and replace them with thin client terminals, without the need for costly bandwidth upgrades.

Having looked at competitive offerings from Riverbed and Cisco, Gillick chose Expand because of its superior capabilities in accelerating Citrix and web-based traffic. Furthermore, because Ridley utilizes a managed WAN service from AT&T, Expand's WAFS capabilities with QoS enabled Gillick to tailor traffic flows and dynamically manage bandwidth requirements on the fly.

Gillick continues, "Not only did we recoup our initial $200,000 investment in just over 6 months based purely on efficiency and productivity savings, we're also gaining significant financial and operational benefits in other areas. For instance, without the Expand solution we would have needed a 45Mbps connection at the central site that would have cost in the region of $26,000 per month. With Expand we were able to reduce this to a 9Mbps link costing $4,500, an annual saving of over $250,000. On top of this, through its enabling of server consolidation and thin client computing, we have managed to reduce our technical refresh costs, which were running at $400,000 annually, down to $220,000. We believe we will be reaping the benefits of the Expand solution for many years to come."
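The savings Gillick quotes can be sanity-checked with simple arithmetic. The figures below are taken directly from the quote; the script is just a back-of-the-envelope check:

```python
# Sanity-check the bandwidth savings figures quoted above.
monthly_45mbps = 26_000  # cost of the avoided 45Mbps link, USD per month
monthly_9mbps = 4_500    # cost of the 9Mbps link actually deployed, USD per month

annual_saving = (monthly_45mbps - monthly_9mbps) * 12
print(annual_saving)  # 258000 -- i.e. "over $250,000" per year

# Technology-refresh savings quoted in the same paragraph:
refresh_saving = 400_000 - 220_000
print(refresh_saving)  # 180000 per year
```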

Adam Davison, corporate VP sales and marketing, Expand Networks, comments, "We continue to see tremendous traction in the market place for our technology. It's especially rewarding when an organization such as Ridley Inc., that clearly places a huge emphasis on its IT infrastructure to deliver business agility and competitive advantage, chooses Expand's solution over competitive offerings. This is further validation of Expand's technology and demonstrates why WAN optimization is becoming a catalyst and enabler for many of today's strategic IT initiatives."

More information on Ridley's WAN optimization project can be found at:


How to Get Rich Quick with Cloud Computing

You know that a technology has hit the mainstream when it appears in PCWorld. Such is the case for cloud computing, a topic PCWorld considers in its recent piece Amazon Web Services Sees Infrastructure as Commodity. 

Despite the rather banal title, this article makes some interesting points about the nature of commoditization and the effect this will have on the pricing of services in the cloud. It's a good article, but I would argue that it misses an important point about the evolution of cloud services. Of the three common models of cloud (SaaS, PaaS, and IaaS), it's the latter, Infrastructure-as-a-Service (IaaS), that most captivates me. I can't help but begin planning my next great start-up built using the virtualization infrastructure of EC2, Terremark, or OpSource.

But despite a deep personal interest in the technology underpinning the service, I think what really captures my imagination with IaaS is that it removes a long-standing barrier to application deployment. If my killer app is done, but my production network and servers aren't ready, I simply can't deploy. All of the momentum that I've built up during the powerful acceleration phase of a startup's early days is sucked away—kind of like a sports car driving into a lake. For too many years, deployment has been that icy cold water that saps the energy out of agile when the application is nearing GA.

read more

Wednesday, 21 July 2010

The future of Desktop Virtualization (VDI)?

If you follow the news about VDI you have probably noticed that the market is quickly heating up and booming with new products, vendors and solutions.

The old remote desktop session through Microsoft's RDP protocol is not enough anymore, and organizations (and users) are now demanding remote access with a local-desktop-like experience.

Vendors like VMware, Citrix, Microsoft, Quest, and Virtual Bridges, amongst others, are in a frenetic race to bite off as much as they can from this market - a market that will reach 50 million users by 2013 and generate $65.7 billion in revenue, according to 2009 Gartner research.

I gathered some of my thoughts on the subject to compose this article. The essential idea is to provide insight into where desktop virtualization is heading, or where, in my opinion, it should be heading.

The Apps
Many organizations have already understood that VDI is all about applications and personalisation. Apps and personalisation are the result of who the user is and what they do. Moving forward I foresee VDI solutions allowing the guest OS (GOS) to be a mere conduit to publish apps. Users and administrators will be able to lift the application layer and change the base GOS. The apps will run on top of any flavour of GOS - Windows, Linux, Chrome OS or Android. What will allow this to happen is application virtualization, or application layering.

GOS Parenting

Because they need to support a wide range of hardware and device drivers, GOSes are heavy and have a large footprint. In the future, lighter GOSes with smaller footprints, and the ability to maintain a single master base image serving multiple instances, will be essential to alleviate CPU, memory and storage workloads. Today's techniques, such as Linked Clones, live outside of the GOS; I see that ultimately as a feature provided by the GOS itself. We will see virtual desktops making function and API calls to the base image for those non-cached functions and/or transactions.


A new technique to manage application deployment called layering has recently been released by Unidesk. This technology allows the GOS and the applications to be broken into layers that can be either assigned by the administrator or installed by the users. The technology is still maturing, but it provides the administrator with the ability to control which application layers users are allowed to have in their desktop environments. Considering that applications are now layers (imagine something like sequential ESX snapshots), they should also be self-contained and GOS-independent. This will allow application portability across GOSes (The Apps) and support for multiple end-point devices, such as Macs, iPhones, Sius, BlackBerrys or your TV.
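A minimal sketch of the layering idea described above: a desktop is modelled as an ordered stack of layers resolved top-down, much like a chain of snapshots, so the base GOS can be swapped out while self-contained app layers survive. All class, layer and file names here are illustrative, not Unidesk's actual design:

```python
# Toy model of desktop layering: each layer maps path -> content, and a
# desktop is an ordered stack of layers resolved top-down.

class Layer:
    def __init__(self, name, files):
        self.name = name
        self.files = files  # {path: content}

class LayeredDesktop:
    def __init__(self, base, app_layers):
        # Base GOS at the bottom, app layers stacked on top.
        self.stack = [base] + list(app_layers)

    def read(self, path):
        # Resolve top-down, like reading through a snapshot chain.
        for layer in reversed(self.stack):
            if path in layer.files:
                return layer.files[path]
        raise FileNotFoundError(path)

    def swap_base(self, new_base):
        # App layers are self-contained, so the base GOS can be replaced.
        self.stack[0] = new_base

base = Layer("windows-base", {"/os/kernel": "nt"})
office = Layer("office-apps", {"/apps/word": "word-binary"})
desktop = LayeredDesktop(base, [office])
print(desktop.read("/apps/word"))  # word-binary
desktop.swap_base(Layer("linux-base", {"/os/kernel": "linux"}))
print(desktop.read("/os/kernel"))  # linux
```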

Public VDI Brokers

Today, VDI brokering is an internal function of the organization. I envisage the existence of public VDI brokers with a standardised set of security frameworks that will act as secure gateways to desktops inside organizations. There will be no requirement for VPNs from an end-point device to the virtual desktop, and all devices will ship with a factory-default application for the public VDI brokers. These public multi-tenant VDI brokers will allow a seamless user experience from multiple end-point devices, anywhere, anytime, and will also be able to broker to a number of VDI infrastructures.
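The brokering flow described above can be sketched roughly as follows. The tenant names, protocols and descriptor fields are all hypothetical:

```python
# Illustrative multi-tenant broker: look up the user's tenant, find that
# tenant's VDI infrastructure, and hand back a connection descriptor.

POOLS = {
    "acme.example":   {"broker": "vdi.acme.example",   "protocol": "PCoIP"},
    "globex.example": {"broker": "vdi.globex.example", "protocol": "RDP"},
}

def broker_session(user, device):
    tenant = user.split("@")[1]
    pool = POOLS.get(tenant)
    if pool is None:
        raise LookupError(f"no VDI infrastructure registered for {tenant}")
    # The public broker acts as the secure gateway, so the end-point device
    # needs no VPN -- just this connection descriptor.
    return {"user": user, "device": device, **pool}

session = broker_session("alice@acme.example", "ipad")
print(session["broker"])  # vdi.acme.example
```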

The Cloud
Desktops hosted off-premise, in the cloud, do make sense to me; however, bandwidth and latency are still problems to be addressed. Using the public VDI brokers mentioned above, users will be able to connect to public non-persistent desktops in the cloud, whilst secure mechanisms will guarantee that user profiles are safely downloaded from the organizational premises.
User profiles and personalization would reside inside private networks while being applicable to any public desktop, with any GOS, in the cloud. This once again brings us the idea of application and profile independence. As of today, most security mechanisms are based around providing access to desktops. Tomorrow I see major security mechanisms inside the desktops to protect the user's data.
The idea here is cloud-enabled public non-persistent desktops!

Thin Desktop (Cached Mode)
Thin Desktops are for portable devices. They are the same desktops, but will incorporate a feature to allow synchronization of critical and selected user data with end-point devices. Bookmarks, applications, folders and files could be marked for offline use with an iPad or mobile phone, for example.
These devices will ship with runtime plug-ins that allow the download of those virtualized applications, and they will be able to execute the synchronized offline content.
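A toy illustration of the cached-mode synchronization described above: only the items the user has marked for offline use are copied down to the device. Paths and data are invented:

```python
# Sketch of "cached mode": only items bookmarked for offline use are
# synchronized from the virtual desktop down to the end-point device.

desktop_content = {
    "/files/report.doc": "q2 numbers",
    "/files/huge-video.mov": "...",
    "/bookmarks/intranet": "http://intranet",
}
offline_bookmarks = {"/files/report.doc", "/bookmarks/intranet"}

def sync_offline(content, bookmarks):
    # Copy only the bookmarked subset into the device's local cache.
    return {path: data for path, data in content.items() if path in bookmarks}

device_cache = sync_offline(desktop_content, offline_bookmarks)
print(sorted(device_cache))  # ['/bookmarks/intranet', '/files/report.doc']
```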

Online Application Stores

Online application stores will provide applications on demand. These applications will have a standard common format compatible with most GOSes or devices, which will have a plug-in pre-installed to support these stores.
The administrator will have the ability to pre-select or assign applications for users. Once executed on a desktop, an application's execution will be linked to the user's profile, and it will be available for automatic download on any new desktop the user connects to.

A first step has been given by Worth checking out!

User Profile Autonomy
The user profile holds information about the user's GOS preferences and application customizations. Moving forward I see these same profiles hosting information about integration with the public application stores, lists of allowed and blocked applications, and so on. User profiles will not be compatible with only a single GOS, but portable across multiple GOSes and end-point devices. I also predict the concept of public or cloud-based profile and user personalization datastores, hosted by companies like VeriSign and RSA in partnership with VMware (RTO) or AppSense.

In a simplistic approach the traditional VDI architecture looks similar to this:

I am idealizing something more in line with this:

The Client Hypervisor
This is a subject for a long discussion, but let me cut the crap and go straight to what I think. Type 1 hypervisors will, without doubt, change the way organizations think about employee-owned computers.

I personally like the idea of the organization-owned asset. At the end of the day, laptops are a work tool, and I never had to buy my own laptop to work for any organization. On the other hand, employees don't want to have their laptops prepared by their company to accept two or more different GOSes.

Unless the client hypervisor comes integrated with the hardware, I don't think there will be much room for adoption. Sooner or later, hardware vendors will ship their hardware with Type 1 hypervisors, but even then I have my doubts about adoption, because that would defeat the purpose of the whole VDI stack.

I consider the ability to port profile and personalization to multiple environments more important than being able to run a local GOS secured by the organization's policies. Remote Desktops will be available to anyone, anytime, anywhere.

Extreme cases, like when there is no internet connectivity, might be a use case for Type 1, but then again, there will be offline and cached modes to allow the user to get to critical data.

Display Protocols
There is a big argument about display protocols going on at the moment. I'll not comment on the performance or quality of each product; however, I'm confident that existing and upcoming technologies will be able to deliver the desktop-like experience. Some performance improvements and feature implementations are still needed, but as time goes by they will all be able to deliver on that. This is probably the area of VDI technology that will evolve faster than any other.

These are some of my thoughts about the future of VDI and I think everything I list here is achievable with existing technologies. What are your thoughts on the future of VDI?


vSphere 4.1 – What has changed for VMware View?

VMware this week released vSphere 4.1, a dot release packed with new features and improvements in key performance areas. How will all these new features and performance improvements affect VMware View VDI infrastructures? Of course everything helps overall, but I decided to list and comment on the few improvements that will provide the biggest benefits for VDI.

Transparent Memory Compression (TMC) – this new feature is particularly important when running several desktop VMs on a single host – does that resemble VDI? The feature is a new overcommit technique that compresses, on the fly, virtual pages that would otherwise be swapped to disk. Each virtual machine has a compression cache where vSphere can store pages that compress to 2KB or smaller. TMC is enabled by default on ESX/ESXi 4.1 hosts, but the administrator can define the compression cache limits or disable TMC completely.

In VDI environments, which are often memory-bound, this results in a performance gain of 15% when there is a fair amount of memory over-commitment, and a gain of 25% in cases of heavy over-commitment.
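The decision TMC makes for each reclaimed page can be pictured with a toy sketch. This is an illustration of the idea only, not VMware's implementation; zlib stands in for whatever compressor the hypervisor uses:

```python
import os
import zlib

PAGE_SIZE = 4096          # bytes
COMPRESSED_LIMIT = 2048   # vSphere keeps pages that compress to 2KB or less

def reclaim_page(page, compression_cache, swap):
    """Mimic the TMC decision: compress the page; if the result fits in 2KB,
    keep it in the per-VM compression cache, otherwise swap it to disk."""
    compressed = zlib.compress(page)
    if len(compressed) <= COMPRESSED_LIMIT:
        compression_cache.append(compressed)  # cheap to decompress later
    else:
        swap.append(page)                     # slow path: disk swap

cache, swap = [], []
reclaim_page(b"A" * PAGE_SIZE, cache, swap)       # highly compressible page
reclaim_page(os.urandom(PAGE_SIZE), cache, swap)  # incompressible page
print(len(cache), len(swap))  # 1 1
```

Decompressing a page from the cache costs CPU but is orders of magnitude faster than reading it back from a swap file, which is where the 15-25% gains come from.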


Storage I/O Control – This feature provides quality-of-service capabilities for storage I/O in the form of I/O shares and limits that are enforced across all virtual machines accessing a datastore, regardless of which host they are running on.

VDI environments are highly IO intensive and disk contention will be quickly perceived by users when interacting with their desktops. The ability to monitor and take automated actions over VMs under disk contention will definitely help to maintain VDI infrastructures running smoothly.
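Share-based I/O allocation of the kind Storage I/O Control enforces can be sketched as follows. The numbers are invented, and this simplified version does not redistribute the capacity freed up by a limit, as a real scheduler would:

```python
# Toy proportional-share allocator: datastore IOPS are divided among VMs in
# proportion to their shares, then capped by any per-VM limit.

def allocate_iops(capacity, vms):
    total_shares = sum(vm["shares"] for vm in vms)
    allocation = {}
    for vm in vms:
        fair = capacity * vm["shares"] / total_shares
        limit = vm.get("limit")
        allocation[vm["name"]] = min(fair, limit) if limit else fair
    return allocation

vms = [
    {"name": "desktop-pool", "shares": 2000},
    {"name": "batch-vm", "shares": 1000, "limit": 800},
]
print(allocate_iops(3000, vms))
```

Under contention, the desktop pool's higher share count keeps it responsive while the batch VM is held to its limit.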

Scalable vMotion – vSphere 4.1 supports up to 8 concurrent virtual machine live migrations. The engine has been significantly reworked to reach a throughput of 8Gbps on a 10GbE link, 3 times the performance scored in version 4.0.

In VDI infrastructures where workloads can drastically change according to users interactions and activities, the ability to offload hosts quicker is crucial to maintain a stable environment for users.
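The concurrency cap is easy to picture as a semaphore with 8 slots. This is a toy model of the admission control, not vMotion's actual engine:

```python
import threading
import time

MAX_CONCURRENT = 8  # vSphere 4.1 allows up to 8 simultaneous live migrations
migration_slots = threading.Semaphore(MAX_CONCURRENT)
in_flight, peak = 0, 0
lock = threading.Lock()

def migrate(vm):
    global in_flight, peak
    with migration_slots:            # wait for one of the 8 slots
        with lock:
            in_flight += 1
            peak = max(peak, in_flight)
        time.sleep(0.01)             # stand-in for the actual memory copy
        with lock:
            in_flight -= 1

# Evacuating a host with 20 desktops: never more than 8 move at once.
threads = [threading.Thread(target=migrate, args=(f"vm{i}",)) for i in range(20)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak <= MAX_CONCURRENT)  # True
```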

Distributed Power Management (DPM) – DPM now has a set of scheduled tasks to help control it, turning it on and off at certain times of day if you'd like. Disabling DPM brings all hosts out of standby, to help guarantee that no host gets stuck in a useless state.

On VMware View infrastructures, this improvement provides the flexibility to power hosts off overnight and automatically bring them back online prior to business hours. That's Green IT!
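The scheduling logic amounts to a simple clock-based toggle. The hours below are illustrative, not a vCenter default:

```python
# Sketch of scheduled DPM: enabled overnight so idle hosts can power down,
# disabled before business hours so every host is out of standby in time.

BUSINESS_START, BUSINESS_END = 7, 19  # DPM disabled from 07:00 to 19:00

def dpm_should_be_enabled(hour):
    return not (BUSINESS_START <= hour < BUSINESS_END)

print(dpm_should_be_enabled(3))   # True  -> idle hosts may be powered down
print(dpm_should_be_enabled(9))   # False -> all hosts brought online
```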

Memory footprint reduction – The hostd footprint and memory consumption have been greatly reduced (down by 40%), speeding up some operations by a factor of 3x. Less memory overhead means more VDI VMs per host, or more memory available to give to users.

vCenter Server 4.1 – vCenter 4.1 also introduces better performance when used in conjunction with VMware View. The creation of new virtual desktops is now 60% faster and their power-on time is 3.4 times faster. The release also raises several scalability maximums:
  • 3,000 virtual machines per cluster (compared to 1,280 in vSphere 4.0)
  • 1,000 hosts per vCenter Server (compared to 300)
  • 15,000 registered VMs per vCenter Server (compared to 4,500)
  • 10,000 concurrently powered-on VMs per vCenter Server (compared to 3,000)
  • 120 concurrent Virtual Infrastructure Clients per vCenter Server (compared to 30)
  • 500 hosts per virtual Datacenter object (compared to 100)
  • 5,000 virtual machines per virtual Datacenter object (compared to 2,500)
I am anxiously waiting for VMware's announcements on View 4.5, and for support for more than 8 hosts per cluster when using View Composer. Unfortunately I have no inside track to know if this will change.
VMware vCenter Server Heartbeat 6.3 – View Composer v1.1 and v2.0 services can now be protected using vCenter Server Heartbeat.
Have I missed anything?


Tuesday, 20 July 2010

Introducing MozyPro Cloud Backup via @constantcontact



7 Exciting Cloud Computing Start Ups To Keep An Eye On!

Nimbula – Its Cloud Operating System technology delivers Amazon EC2-like services behind the firewall. Allowing customers to efficiently manage both on- and off-premises resources, Nimbula Director quickly and cost-effectively transforms under-utilized private data centers into muscular, easily configurable compute capacity, while supporting controlled access to off-premise clouds.

Okta – A startup helping companies manage applications running on cloud infrastructure; it recently announced that it has raised $10 million in a first round of venture funding.
Appirio – A cloud solution provider offering products and professional services that help enterprises accelerate their adoption of cloud technologies. Appirio's innovation and expertise have been recognized by BusinessWeek as one of America's Most Promising Startups and by AlwaysOn as On-Demand Company of the Year.

StorSimple – Uniquely provides the ability to deploy on-premises, next-generation storage to address today's application-related challenges, while letting you leverage public or private cloud storage when you're ready.

Nimsoft – With the Nimsoft Monitoring Solution (NMS), you can monitor and manage all business applications, from the datacenter to the cloud, including SaaS, hosted, and virtualized environments - all with a single product, architecture, and console.

HostedFTP – A cloud-based file sharing service that lets customers drag and drop and send large files from anywhere over the cloud. It provides you with your own Amazon AWS storage account with an FTP server in the cloud.

GreenQloud – The world's first truly green compute cloud, based on clean electricity produced by environmentally friendly geothermal and hydroelectric energy.

    Cloud Is Defined, Right?

    At the end of the odd but intriguing movie Existenz, one of the primary characters looks at the other after killing a bunch of people and says "We're still in the game, right?" With the implication that you the viewer really don't know if they're still in the Virtual Reality game they were playing.

    Sometimes, Cloud feels like that. I can just go "We're still in the cloud, right?" Here we are in 2010, the pundits have been hailing cloud for years, and yet there is still a vast gulf in understanding of what, exactly, the cloud is. Recently I was involved in a Twitter conversation with Mike Fratto (of Network Computing), Andy Ellis (of Akamai), Lori (of F5, as am I), with occasional input from Dustin Amrhein (of IBM), Greg Knieriemen (of Chi Corp and the InfoSmack podcast), Vanessa Alvarez (of Frost and Sullivan), and Tom Petrocelli (formerly of ), where it became painfully clear that it is indeed not at all settled. Not even amongst such an august group of individuals.

    read more

    Monday, 19 July 2010

    Cloud Security Strategies: Where Does IDS Fit In?

    Security practitioners diving into cloud computing must make older security tools like IDS work in this new world. In a CSO podcast last week, Stu Wilson, CTO of IDS provider Endace, sought to explain how this older technology is still relevant in enterprise cloud security strategies.

    CSO also reached out to IT security practitioners through various LinkedIn security forums for an informal, unscientific poll. Here are views from four additional perspectives.

    IDS is like many other security tools: it may be useful as part of a security program, but the deployment details are critical. IDS deployment in the context of cloud computing starts with these questions: what assets are you trying to protect, where are those assets, where are attacks likely to originate, and can you effectively monitor for them with an appropriate signal-to-noise ratio?

    Cloud consumers are still likely to face the threat of intrusion into their own enterprise networks and systems. IDS may be appropriate at the boundaries between those enterprise networks and other networks, including the Internet.

    Cloud providers are also likely to face intrusion threats, and once again IDS may be useful. Here the threat vectors may be from arbitrary Internet hosts or from customers. This makes topographic decisions about IDS deployment more complex. If the cloud provider is using virtualization for hosting PaaS or IaaS, then intrusion monitoring may need to be at the hypervisor level, and I doubt that many IDS appliance vendors have a compelling story for that.

    Both consumers and providers face internal attack threats. How well any IDS can function to detect misuse or abuse by insiders is a good topic for debate, but the common practice is to rely much more on analysis of various types of audit logs to detect such attacks than on intrusion detection. Certainly pattern-based IDS could be used to detect some categories of internal attacks, but it would not be useful for detecting misuse of privileged credentials to extract sensitive data. Anomaly-based detection might be able to detect such internal threats, but once again the number of organizations that use this for internal attack detection is probably insignificant.
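As a rough picture of what the anomaly-based detection mentioned above might look like, consider flagging an account whose daily data-export volume deviates sharply from its own history. This is a toy z-score check, not any vendor's product feature:

```python
import statistics

def is_anomalous(history, today, threshold=3.0):
    """Flag a reading that deviates from the account's own baseline by more
    than `threshold` standard deviations."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return today != mean
    return abs(today - mean) / stdev > threshold

exports_mb = [10, 12, 9, 11, 10, 13, 11]  # normal daily export volume, MB
print(is_anomalous(exports_mb, 11))   # False -- within the baseline
print(is_anomalous(exports_mb, 500))  # True  -- possible bulk exfiltration
```

Pattern-based IDS would miss this case entirely, since a privileged user exporting data generates no malicious signature; only the deviation from the baseline stands out.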

    John Kinsella, founder of Protected Industries

    I work with the cloud as a user and a consultant and, in the interest of full disclosure, I'm working on a secure cloud offering. A few thoughts while wearing those different hats: the old security problems didn't go away when people "moved to the cloud." People just get distracted by all the new problems.

    Continue Reading


    Your Friend is sharing the "Apple and HTC lead charge as smartphone market looks set to grow and grow" article with you.

    A recent survey of 4,028 US consumers by ChangeWave has thrown up a number of illuminating statistics, which you might consider as predictable as they are informative. The chief takeaway is that people are keen on buying smartphones like never before, with 16% of respondents saying that they'll be taking the plunge within the next 90 days, which marks the biggest increase in the survey's history. Secondly, and crucially for vendor loyalists, Apple and HTC seem to be the biggest beneficiaries (or are they the stimulants?) of this interest, with both improving their share by over 50 percent between March and June of this year. RIM and Motorola have taken a tumble in that same timespan, while Palm has sadly failed to register even a single percentage point. We can definitely see the Droid X and BlackBerry 6 remedying things for the big boys, but Palm's route out of ignominy seems a little less straightforward. You'll find a chart of customer satisfaction -- dominated by Apple in imperious fashion -- after the break, and the full breakdown at the source link.

    Via: Electronista
    Source: ChangeWave Research

    Friday, 16 July 2010

    Gartner creates Cloud Council. Good news for both Cloud Computing Vendors and Users

    A good move, as Gartner moves forward to create a Cloud Computing council that will help both vendors and customers.


    Gartner has established two Global IT Councils — one on cloud computing and one on IT maintenance. Each of the Global IT Councils has been tasked with developing a set of basic rights and/or responsibilities for their specific area of technology. Members have discussed the issues candidly and in depth, and offered their real-world observations about problems in these areas, as well as practical, actionable recommendations for resolving them.

    Gartner Global IT Council: Cloud Services

    The Gartner Global IT Council for Cloud Services has defined key rights and responsibilities of service consumers that will help both providers and consumers establish and maintain successful business relationships. This document describes some of the most pressing rights and responsibilities along with the reasons why they are necessary. Additional critical rights and responsibilities will emerge as cloud services mature.

    Get your FREE copy of Gartner Global IT Council for Cloud Services.


    Taking the First Step to Cloud Computing – Standardization

    Standardization Process Gives You the Ability to Highly Optimize Your Infrastructure Moving Forward

    In my daily interactions with clients, I am often asked what the first step toward Cloud Computing should be. Invariably, my answer is 'Standardization.' Standardization matters for far more than just being the first step to the cloud: it also makes the transition much faster and more secure. I want to spend a little time talking about why standardization is so important, then delve into how it will ease your cloud migration.

    Southwest Airlines was one of the few airlines (if not the only one) to retain profitability during the recent financial downturn and the major events that negatively impacted the industry as a whole. In my opinion, the largest reason for Southwest Airlines' success is that they standardized on only one type of aircraft - the Boeing 737. Other airlines, which fly several different aircraft, have to stock parts and train mechanics and pilots for each type of aircraft they operate. This creates a massive overhead of parts, people, certifications, training and much more. Southwest only has to stock parts for one line of aircraft, train mechanics and pilots for one type of aircraft, and worry about one size of aircraft at a terminal. The routine day-to-day problems they encounter will be the same, and there are processes and procedures to deal with them. This is a highly streamlined way to do business, and it is all a result of standardization.

    I use the Southwest Airlines example almost religiously because it shows what a company can gain when it standardizes internally. In IT, there are several areas to standardize, but you will get the most 'bang for your buck' by standardizing the operating system and application stack (including the programming language). Hardware has become a commodity, and most operating systems in production today can install on just about any vendor's hardware (Oracle Sun, HP, Dell, IBM, etc.) so long as the processor architecture (Intel/AMD x86, SPARC, PPC) is supported. It is best to standardize on one (or at most two) hardware platforms so that training and support for your hardware are streamlined and efficient. The same applies to the operating system, as you do not want to have to hire several different people with varying skill sets to accomplish the same tasks. At the application layer, having the same application server and programming language makes deploying and maintaining applications much easier. If you are able to standardize to the point where you can get all of the above from a single vendor, you have the ability to leverage a single support infrastructure across all product lines. You do not have to worry about the hardware vendor pointing at the OS vendor while they in turn point to the application stack vendor.

    As an example, I use Sun Microsystems (now part of Oracle) to illustrate how this all works. I'll start in reverse order with the programming language - Java. Java is a very powerful language in and of itself, but what is more important is that it was designed to be multi-platform from the beginning. The same code can be installed onto any server with a Java Runtime Environment or Java Application Server and it will run without issue. The code is not compiled against the operating system and thus not tied to only one OS. This is also huge in terms of avoiding vendor lock-in, because if you no longer want to use a certain Java Application Server, you can simply choose another and migrate over without issue. There are many Java Application Servers on the market and they all accomplish the same thing (serving up Java applications), but they do differ in the additional features (such as management) offered. Sun, of course, has its own Java Application Server, and it is optimized to run on the Solaris operating system (although it also runs on various Linux distributions). This brings me to the next layer, the operating system. Many businesses deal with the issue of having to maintain different operating systems (such as Linux, UNIX and Microsoft Windows), and it is probably the single largest headache and inefficiency that exists in IT today. As a matter of fact, the primary reason that VMware (and other virtualization vendors) have been so successful is that they were able to put multiple operating systems on the same hardware. This does net you some efficiencies at the hardware layer, but you still have to individually manage the operating systems of the virtual machines.

    So, we've talked about keeping your programming language, application server and operating system standardized, and I have previously stated that hardware is a commodity nowadays, so why not buy the hardware from the same vendor if possible? Fortunately, Sun makes hardware as well. Again, this gives you a single vendor to point to for support issues, and lets you reap the efficiencies gained when the vendor optimizes pieces of the stack to work better with each other than they would spread across different vendors. This is Oracle's new claim to fame, and they will be leading with that marketing pitch going forward. Aside from price being a sticking point (if it happens to be one), there is no reason not to source your entire technology stack from a single vendor.

    Now, how is this the first step into the cloud? The cloud as it exists today is a melding of various open source technologies that provide a platform or framework on which you can deploy your applications. The major cloud computing vendors have already optimized their stacks and offer you several entry points into the cloud. You can buy infrastructure as a service and deploy your own application servers and code on top of the IaaS, or you can buy platform as a service, where the application server is already decided and you just deploy your code on top of the platform. Lastly, you can buy into a SaaS application and hook into an API, if provided, to tailor the software to the needs of your business. No matter which path you take, it will be significantly easier to step into the cloud if you have standardized internally and know exactly which components you need to migrate and how those components interact with the rest of the stack. The standardization process gives you these insights, as well as the ability to highly optimize your infrastructure going forward. The same principles apply whether you are building internal private clouds or hooking into public clouds. There is much more to be gained from standardization, and I will cover that in future articles, but the scope of this article was to articulate how standardization is the first step toward a successful cloud migration.
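    The three entry points above differ mainly in who manages which layer of the stack. A minimal sketch (the layer names are my own choice, not a formal taxonomy):

```python
# In each service model the provider manages more of the stack,
# leaving less for the customer to operate.
STACK = ["hardware", "operating_system", "app_server", "application"]

PROVIDER_MANAGES = {
    "IaaS": {"hardware", "operating_system"},               # you deploy servers + code
    "PaaS": {"hardware", "operating_system", "app_server"}, # you deploy code
    "SaaS": {"hardware", "operating_system", "app_server", "application"},  # you use the API
}

def you_manage(model: str) -> list[str]:
    """Layers left for the customer under a given service model."""
    return [layer for layer in STACK if layer not in PROVIDER_MANAGES[model]]
```

    Knowing exactly which of these layers you have standardized internally tells you which entry point involves the least migration work.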

    Feel free to ask any questions in the comments section or email me via the address in the "Contact Me" section.

    Original Article -

    Join Us:

    Thursday, 15 July 2010

    Dell Expands Cloud Strategy

    Dell intends to use the Windows Azure platform appliance, introduced by Microsoft, as a part of its Dell Services Cloud

    Dell and Microsoft Corp. on Monday announced a strategic partnership in which Dell intends to use the Windows Azure platform appliance, introduced by Microsoft, as a part of its Dell Services Cloud to develop and deliver next-generation cloud services. The Windows Azure platform appliance will allow Dell to deliver private and public cloud services for Dell and its enterprise, public, small and medium-sized business customers. Dell will also work with Microsoft to develop a Dell-powered Windows Azure platform appliance for enterprise organizations to run in their data centers.

    The News

    * Dell Services will begin implementing the limited production release of the Windows Azure platform appliance to host public and private clouds for its customers, leveraging its vertical industry expertise in providing options for the efficient delivery of flexible application hosting and IT operations. Dell Services will also provide advisory services, application migration, and integration and implementation services.
    * Dell will work with Microsoft to develop a Windows Azure platform appliance for large enterprise, public and hosting customers to deploy in their own data centers. The appliance will leverage infrastructure from Dell combined with the Windows Azure platform.

    Cloud Computing Responds to Changing Business Needs

    Dell and Microsoft understand cloud computing delivers significant efficiencies in infrastructure costs and allows IT to be more responsive to business needs. Recognizing that more organizations can benefit from the flexibility and efficiency of the Windows Azure platform, Dell and Microsoft have partnered to deliver an appliance to power a Dell platform-as-a-service (PaaS) Cloud.

    Microsoft announced today at its Worldwide Partner Conference here the limited production release of the Windows Azure platform appliance, a turnkey cloud platform for large service providers and enterprises to run in their own data centers. Customers and initial partners like Dell using the appliance in their data centers will have the scale-out application platform and data center efficiency of Windows Azure and SQL Azure offered by Microsoft today.

    Dell Data Center Solutions (DCS) has been working with Microsoft to build out and power Windows Azure platform since its launch. Dell will leverage the insight it has gained as a primary infrastructure partner for the Windows Azure platform to ensure that the Dell-powered Windows Azure platform appliance is optimized for power and space to save ongoing operating costs, and performance of high-scale cloud services.

    Dell is a top provider of cloud computing infrastructure and its client roster includes 20 of the top 25 most heavily trafficked Internet sites and four of the top global search engines. Dell has been custom-designing infrastructure solutions for the leading global cloud service providers and hyperscale data center operators for the past three years. Through this customer insight, Dell has developed deep expertise about the specialized needs of organizations in hosting, HPC, Web 2.0, gaming, social networking, energy, SaaS, plus public and private cloud builders.

    Cloud Services Reduce Complexity and Increase Efficiency

    With the combined experience of Perot Systems and Dell, Dell Services delivers vertically-focused cloud solutions, unencumbered by traditional labor-intensive business models. Dell Services operates clouds today, delivering managed and support software-as-a-service to more than 10,000 global customers. In addition, Dell has a comprehensive suite of services designed to help customers take advantage of public and private cloud models. The new Dell PaaS powered by the Windows Azure platform appliance will allow Dell to offer customers an expanded suite of services, including cloud-based hosting and transformational services to help organizations move applications to the cloud.


    Why All the Fuss Around Cloud API Standards?

    Lately there have been several discussions around cloud API standards, and I am failing to understand why it is such a big deal. Let's first identify two types of standards:

    1. Syntax Standards
    2. Functional Standards

    For the cloud, a lot of attention is given to syntax standards, i.e. using the same method signatures. For example, some IaaS enablers use the exact same method signatures in their APIs as Amazon EC2. After integrating with several cloud providers, we have found that API (syntax) differences may seem annoying, but they really don't matter much, as it is a trivial effort to deal with them. How hard is it to handle a difference like start.server, start-server or s-server all doing the same thing, i.e. starting a server, on various clouds?
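    To make that concrete, here is a minimal sketch of the thin adapter that absorbs such spelling differences (the provider names and return strings are invented for illustration, not real provider APIs):

```python
# Each provider spells "start a server" differently; a small dispatch
# table hides that behind one canonical operation.
def _start_dot(instance_id: str) -> str:
    return f"start.server({instance_id})"     # "cloud A" spelling

def _start_dash(instance_id: str) -> str:
    return f"start-server {instance_id}"      # "cloud B" spelling

def _start_short(instance_id: str) -> str:
    return f"s-server {instance_id}"          # "cloud C" spelling

ADAPTERS = {
    "cloud_a": _start_dot,
    "cloud_b": _start_dash,
    "cloud_c": _start_short,
}

def start_server(provider: str, instance_id: str) -> str:
    """One canonical call, regardless of the provider's syntax."""
    return ADAPTERS[provider](instance_id)
```

    The adapter is a few lines per provider, which is why syntax differences are a trivial integration cost compared with genuine functional gaps.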

    In terms of functional standards, it is unrealistic to expect that all cloud providers will provide the exact same set of functionality. Usually the way innovation works is that someone pushes the limit and delivers new functionality, and then other players in the industry try to catch up and implement the same functionality (given there is customer demand for it). Once the new functionality is implemented by all vendors, it becomes a standard. The innovation cycle continues as some other vendor steps up to deliver new functionality and the rest follow to catch up. This cycle keeps repeating until the technology becomes obsolete. It would be foolish to expect vendors to stop innovating once they reach some agreed common functionality. After integrating with several cloud (IaaS) APIs, we have identified the following functionality as the expected "standard" for the cloud (IaaS), and all major cloud players are delivering this (plus more). The method signatures may not be identical across providers; however, the functionality offered by various providers is comparable for the following:

    Instance Management: Launch, Terminate, Suspend, Resume, Describe, GetIP, List
    Image Management: ListImages, DescribeImage, CreateImage
    Storage Management: CreateVolume, DeleteVolume, AttachVolume, DetachVolume, ListVolumes, VolumeStatus
    Security: CreateKeyPair, DeleteKeyPair, DescribeKeyPairs
    Backup Management: CreateSnapshot, DeleteSnapshot, DescribeSnapshots
    IP Address Management: AllocateAddress, ReleaseAddress, AssociateAddress, DisassociateAddress
    Firewall Setup: CreateSecurityGroup, DeleteSecurityGroup, DescribeSecurityGroups
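    One way to picture this de-facto functional standard is as an abstract driver that each provider adapter implements. The sketch below covers a sample of the operations listed above; the class, method names and toy implementation are assumptions for illustration, not any provider's real SDK:

```python
from abc import ABC, abstractmethod

class IaaSDriver(ABC):
    """Subset of the de-facto functional standard listed above."""
    @abstractmethod
    def launch(self, image_id: str) -> str: ...             # Instance Management
    @abstractmethod
    def terminate(self, instance_id: str) -> None: ...
    @abstractmethod
    def create_volume(self, size_gb: int) -> str: ...       # Storage Management
    @abstractmethod
    def create_key_pair(self, name: str) -> str: ...        # Security
    @abstractmethod
    def create_snapshot(self, volume_id: str) -> str: ...   # Backup Management
    @abstractmethod
    def create_security_group(self, name: str) -> str: ...  # Firewall Setup

class InMemoryDriver(IaaSDriver):
    """Toy adapter, just to show a provider satisfying the contract."""
    def __init__(self) -> None:
        self.counter = 0
    def _next(self, prefix: str) -> str:
        self.counter += 1
        return f"{prefix}-{self.counter}"
    def launch(self, image_id: str) -> str:
        return self._next("i")
    def terminate(self, instance_id: str) -> None:
        pass
    def create_volume(self, size_gb: int) -> str:
        return self._next("vol")
    def create_key_pair(self, name: str) -> str:
        return self._next("key")
    def create_snapshot(self, volume_id: str) -> str:
        return self._next("snap")
    def create_security_group(self, name: str) -> str:
        return self._next("sg")
```

    Code written against IaaSDriver works with any provider for which an adapter exists, even when the underlying method signatures differ.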

    The above list will change as IaaS providers innovate and deliver more functionality. Right now the biggest gap is that no cloud provider gives SLAs around the network, e.g. guarantees of minimum latency, bandwidth, etc. Some providers offer additional APIs for creating private networks/VLANs, and over time these VLAN-type functions will become more common across providers.

    At Kaavo we took a top-down, application-centric approach because we believe cloud APIs will differ across clouds and will keep evolving, and we need to provide a consistent interface for deploying and managing workloads regardless of the cloud. From the customer's perspective, two things are important:

    1. The ability to deploy, configure and run workloads (custom apps, SaaS, PaaS) securely and automatically on demand, within minutes, on the IaaS layer (regardless of the IaaS provider)
    2. Automation to manage run-time service levels

    Because of our top-down approach, any time a cloud provider adds new functionality or changes an API, we are able to handle it easily. For additional reference, check out how we handle the differences in server attributes across providers, as well as the web service API we provide for deploying and managing workloads in the cloud.
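    As an illustration of the attribute problem (the provider and field names below are invented, not Kaavo's actual schema), a per-provider field map is one simple way to normalize server attributes behind a consistent interface:

```python
# Two providers describe the same server with different attribute
# names; a per-provider map yields one normalized record, so
# supporting a new provider is just a new map entry.
FIELD_MAPS = {
    "provider_a": {"instanceId": "id", "ipAddress": "ip", "instanceType": "size"},
    "provider_b": {"server_id": "id", "public_ip": "ip", "flavor": "size"},
}

def normalize(provider: str, raw: dict) -> dict:
    """Translate a provider-specific server record into canonical keys."""
    return {canon: raw[native] for native, canon in FIELD_MAPS[provider].items()}
```

    Everything above the normalization layer sees only the canonical keys, which is what lets API changes be absorbed in one place.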


    Wednesday, 14 July 2010

    Top 5 CRM Services Running In The Cloud…

    One of the leaders of the pack, this service is widely known as the market leader in CRM. It combines the best of business processes and technology to provide powerful CRM services, offering an array of CRM and business application services that allow customers and subscribers to systematically record and store business data. Its CRM solution harnesses the power and convenience of the web for the marketing, sales and customer care tools needed for marketing and selling an industry-leading service. Its software also provides tools for managing and analyzing all current and historical data and activities.

    NetSuite Business Software: Accounting, CRM, Ecommerce, ERP, Inventory. NetSuite offers a comprehensive CRM solution set with customization tools. What makes it a comprehensive package is that NetSuite is an on-demand service with an all-in-one front and back office solution. One of its unique features is the real-time analytics dashboard, which provides easy-to-view, role-specific business information that is always up to date.

    SugarCRM - The Cloud Is Open CRM. SugarCRM's award-winning applications offer a single system of truth for managing customer interactions across different lines of business. SugarCRM is an open-source software vendor that produces the Sugar Customer Relationship Management (CRM) system.

    Microsoft Dynamics CRM Services is Microsoft's online CRM offering, with different plans for professional, advanced and enterprise-level businesses. The web-based service enhances customer service capabilities, automates workflow and provides analytics that help improve business productivity.


    MTI Closes European Deal for VMware, Cisco & EMC Cloud Coalition (VCE)

    MTI, a provider of consulting services and comprehensive information infrastructure solutions for mid- to large-size organisations, has sold the first Vblock 1 solution in Europe. The customer, Cobweb, is a leading managed services provider in the UK and Europe. Vblock infrastructure packages are integrated and tested IT solutions from the Virtual Computing Environment coalition, which combine virtualisation, networking, storage, security and management technologies from Cisco, EMC and VMware.

    Vblock 1, which MTI also has installed in its own UK Solution Centre, is based on Cisco's Unified Computing System (UCS), EMC's CLARiiON storage system and VMware's vSphere 4 platform. The package helps organisations accelerate their adoption of private cloud environments while eliminating the risks traditionally associated with the deployment of this IT model.

    During the last six months the "Cobweb Cloud" model has seen great traction in the mid-size and enterprise space. To meet this market demand, Cobweb required a scalable and efficient "pay-as-you-grow" virtualisation and compute platform. After many months of testing the Vblock with MTI, it proved to satisfy the needs of the business. Paul Hannam, CEO of Cobweb, states: "The VCE coalition supported by MTI has presented an exciting opportunity for Cobweb, and this new Vblock offering enables us to broaden our reach with the 'Cobweb Cloud', satisfying the needs of enterprise business for private, public and hybrid clouds."

    Cobweb was attracted to the Vblock 1 package as it offers all the benefits associated with cloud computing, such as lower total cost of ownership and faster deployment. The solution can be used as a platform to host a range of existing and new services.

    "Back in November 2009, MTI invited us to perform an in-depth proof of concept on its in-house Vblock 1 system over a period of several months. Since they had always delivered on our briefs in the past, we were keen to put it through its paces," said Paul Hannam, CEO at Cobweb. "We started our testing in early 2010, and not only did we find that Vblock can easily host all of Cobweb's current applications, but it also offered Cobweb services such as Desktop as a Service, as well as management tools and consumption reporting. MTI's expertise and initiative provided reliable assistance to keep Cobweb at the forefront of the managed service provider industry."


    Tuesday, 13 July 2010

    Leading Cloud-Based Services Automation Software in UK and Europe

    NetSuite Inc., a provider of cloud-based enterprise resource planning (ERP) and financial suites, on Thursday unveiled NetSuite OpenAir, new professional services automation (PSA) and services resource planning (SRP) software for the UK and European marketplace. It is specifically designed to help global professional services organisations automate and manage key aspects of their business, from marketing to project management, service delivery, billing, revenue management and driving repeat business from existing clients.

    With this latest version of NetSuite OpenAir, international companies now have at their disposal a powerful, integrated application suite to meet the challenges of global services delivery, including comprehensive support for multiple languages, currencies, taxation rules and employee work guidelines. More than 9,200 users at Lloyd's Register, 3,500 employees at Software AG and 2,700 at Siemens today are relying on NetSuite OpenAir for project management and resource management. For detailed product information, please visit

    NetSuite OpenAir addresses the unique and demanding needs of international mid-size and enterprise-class services organisations that need local, in-country control over key business processes, functions and workflows, as well as real-time global visibility across their entire business. This powerful capability enables services organisations to replace their current hairball of disparate, costly and often poorly integrated applications with an integrated suite that unifies project and resource management, project tracking and accounting, time and expense management, customer relationship management (CRM) and full enterprise resource planning (ERP) functionality.

    NetSuite OpenAir for international companies is a powerful solution that has the potential to do for services businesses what SAP's R/3 software did for the manufacturing industry in the early 1990s with its pioneering work in establishing best practices for ERP.

    "NetSuite OpenAir is a must have for any professional services organization trying to operate and compete on a global scale," said Zach Nelson, CEO of NetSuite. "It's a critical solution designed specifically to help international services businesses meet the challenges of global services delivery while maximizing resource utilization, operational efficiency and project profitability."

    The PSA/SRP Market Opportunity in EMEA

    According to IDC Research, uptake in SRP will grow rapidly, at more than 10 percent annually through 2013. By then, the global SRP market will exceed US$3 billion. NetSuite's established customer base and value-added reseller (VAR) footprint in Europe indicate a strong regional demand for cloud-based professional services solutions.

    Services organisations today contend with fierce competition, global expansion challenges, intense cost reduction pressures and ever growing demands for better services from clients. In order to survive and thrive, especially in today's economic environment, services organisations have to find ways to improve productivity and efficiency, provide better services to clients, gain more agility and stay nimble while reducing costs.

    "Economic and competitive pressures make SRP a must for companies with significant services revenue," said Mike Fauscette, Group Vice President of Software Business Solutions at IDC. "Siloed resources and administrative overhead are profitability killers, and SRP solutions provide the kind of proven, repeatable practices that can streamline a services organisation of virtually any size."

    NetSuite's PSA and SRP solutions provide services organisations with end-to-end, automated management of the entire services lifecycle, allowing them to quickly realise benefits by abandoning the archaic software systems, ad hoc spreadsheets and email lists used to manage critical client projects. For the first time, services companies can take full advantage of the dramatic cost savings, service delivery improvements and productivity benefits of cloud computing.
