Friday 29 January 2010

Logicalis Announces Cooperative Enterprise Cloud Service

First Hybrid On-Site And Hosted Cloud Service Offering IT Workload Mobility

Following the recent formation of its dedicated cloud consulting group, international solutions provider Logicalis today launches its Cooperative Enterprise Cloud Service. Architected on Cisco's Unified Computing System (UCS) and NetApp storage solutions, the service is the first to offer a single reference blueprint for on-site and hosted cloud services, enabling enterprises and public sector organisations to flex and scale their cloud computing strategies while ensuring seamless interoperability.

On-site services will be delivered with Logicalis' Bespoke On-Site Cloud Service (BOCS), a right-sized computing and storage infrastructure utilising Cisco's UCS stateless computing platform to enable virtualised and non-virtualised applications to coexist on the same platform.

Logicalis' hosted Cooperative Enterprise Cloud will offer a range of capacity services and, as a hybrid matched architecture, allow BOCS users to extend on-site infrastructure directly into the hosted platform. Storage will be provided in both services by NetApp.

"The matched architecture of our hosted service is critical in providing enterprise and public sector customers with the confidence of moving key workloads or critical services into the cloud," said Simon Daykin, Chief Architect at Logicalis UK, adding;

"We can now deliver 'workload containers' where all of the necessary resources to deliver an IT service, including its compute, storage, and network elements, can be mobilised between local and hosted clouds. Because both the on-site and hosted clouds are architecturally matched, these containers can be moved back and forward as logical constructs. Customers can continue to use their own hypervisor software and platform of choice, such as VMWare or Microsoft Hyper-V, to virtualise the infrastructure layer without the constraints of rigid public clouds."

The cooperative cloud model developed by Logicalis delivers the key attributes required by both enterprises and government G-Cloud CIOs, offering the economic benefits of elasticity with high levels of security, reliability, flexibility, and onshore location.

Logicalis' Cooperative Enterprise Cloud Service also drives new levels of infrastructure and energy efficiency; the right-sized nature of BOCS allows the local cloud to operate at maximum efficiency and, coupled with the hosted cloud delivering burst, scalability, off-site disaster recovery and test capacity on demand, it ensures the service is highly optimised according to actual business need.

Tom Kelly, managing director of Logicalis UK says, "This is the first enterprise-class cloud computing service that truly addresses the challenges driving ICT change. The Cooperative Enterprise Cloud Service enables CIOs to safely exploit the commercial and operational benefits of cloud-based strategies without necessitating an all-or-nothing approach.

"Moreover, it allows customers to build and right-size on-site private clouds, and then utilise our hosted cloud to scale as required. Our belief is that matched-architecture hybrid clouds are going to be the private and public sector's first choice for their cloud computing strategies."

Available from Q2 2010 as a contracted service, Logicalis' Cooperative Enterprise Cloud Service will form part of the company's managed service portfolio. Logicalis will also be launching its 'Cisco Cloud Showcase Centre' at its Slough headquarters, where enterprises can see and experience the cooperative cloud architecture and full workload container mobility.

Navigating The Cloud Wave

Cloud computing will drive an explosion of innovation in the coming decade

So, you’ve run a tight ship for the past 18 months, battening down the hatches, freezing hires, and going into a virtual operations lockdown waiting for the proverbial “rainy day” to stop and the sun to come out – corporate hibernation.

That was the sensible legacy business approach to technology capital and operations expenditures in the recessionary climates of previous decades, when your competitors followed a similar approach given limited alternatives and only a few corporate giants had the war chest to invest through a downturn.

The good news is that with cloud computing this decade will be different in many ways, from business strategy and driving your offering portfolio into the marketplace, to ongoing operations and support. That includes your company's ability to keep operating right through the next recession at peak efficiency, adjusting for market demand without skipping a beat.

That’s where your competitors will be in 3 to 5 years; perhaps 10 years for very large organizations that are now highly dependent on legacy technology, processes, and partners to deliver their product, solution or service to their customers.

Where will you be? Will you be in lockdown mode, or using a flexible business model that allows you to quickly bring down your costs?

Why should you care? Throughout business history those companies that best adapted and leveraged changes in the technology landscape have survived and thrived.

What should you do now? Begin to think about your product, processes, partnerships, and people through the lens of technology innovations, web 2.0, and cloud computing.

When should you begin? Now would not be too soon. In the early days of the Internet few companies knew what to make of it or what to do with it.  Today, your competitors leverage it to improve their business.  Cloud computing technologies promise to cut costs making companies more efficient and highly competitive.

Where should you start? Research would be a good place to start.  Try out some of the offerings in the marketplace, and see how they might make your organization more flexible and resilient.  Look at your workloads and analyze which ones you can outsource.

Who should you entrust with this work? Large organizations have technology departments and resources to explore the possibilities, but if you're a smaller organization, you may want to seek out the advice of an outside consultant. If that sounds like a bit too much, look for someone in your organization who is a change agent or is passionate about driving out inefficiencies.

Thursday 28 January 2010

What Does Elastic Really Mean?

In terms of cloud computing in application environments, elasticity is perhaps one of the more alluring and potentially beneficial aspects of this new delivery model. I'm sure that to many of those responsible for the operational and administrative aspects of these application environments, the idea that applications and associated infrastructure grows and shrinks based purely on demand, without human intervention mind you, sounds close to a utopia. While I would never dispute that such capability can make life easier and your environments much more responsive and efficient, it's important to define what elasticity means for you before you embark down this path. In this way, you can balance your expectations against any proposed solutions.

For me, the idea of elastic application environments starts with the implication that there is some sort of policy or service level agreement (SLA) that determines when to grow and when to shrink. However, just having the capability to govern your runtime with SLAs isn't enough. The SLAs should be applicable to performance metrics directly related to your applications. For example, it may be nice to be able to make operational decisions in your application environment based on the CPU usage of the physical machines supporting that environment; however, it is much better to make those same decisions based on the average response time for requests sent to your application instances, or perhaps the average time a particular message waits in your application's queue. When you can define SLAs based on these kinds of application performance metrics, you remove a lot of the ambiguity that could otherwise creep into expansion/contraction decisions.
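As a minimal sketch of what that looks like in practice (the thresholds, metric feed, and grow/shrink decision below are my own invented placeholders, not any particular product's API):

    # Hypothetical illustration: an SLA check driven by an application metric
    # (average response time) rather than by host CPU usage.
    SLA_MAX_RESPONSE_MS = 500   # grow when users wait longer than this
    SLA_MIN_RESPONSE_MS = 150   # shrink when comfortably under target

    def desired_instances(avg_response_ms, current_instances):
        """Return a grow/shrink/hold decision from an application-level SLA."""
        if avg_response_ms > SLA_MAX_RESPONSE_MS:
            return current_instances + 1   # expand: the SLA is being violated
        if avg_response_ms < SLA_MIN_RESPONSE_MS and current_instances > 1:
            return current_instances - 1   # contract: capacity is going unused
        return current_instances           # hold steady

    print(desired_instances(620, 3))  # -> 4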

What's obvious is that there's no reason to have SLAs that cannot be enforced. When I think about SLA enforcement there are a couple of things that come to mind. The first is that the party responsible for enforcement should be configurable. In many cases you may want your application environment to grow and shrink based on the system's autonomic enforcement of SLAs, but I doubt this will always be the case. For example, if you are running in a pay-for-use public cloud environment, you may, in an attempt to keep costs under control, want to insert a manual approval process before the system grows. As another example, you may insert manual approval processes for contracting application environments in a production setting where demand fluctuates wildly. In any case, the ability to configure who is responsible for SLA enforcement is useful.

The second thing that comes to mind with respect to SLA enforcement is that you should be able to prioritize such enforcement. The ability to prioritize SLA enforcement means that you can ensure that conditions in some applications warrant a faster response than in other applications. This is just an acknowledgment that not all applications are created equally. Obviously if a user-facing, revenue-generating application starts to violate its SLA you want that condition addressed before you address any SLA slippage in an internal environment.
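Putting prioritization together with the configurable enforcement discussed above, a sketch might look like this (the application names, priorities, and approval gate are invented for illustration):

    import heapq

    def enforce(violations, approve_growth_manually=False):
        """Handle SLA violations most-critical-first.

        `violations` holds (priority, app, action) tuples; a lower priority
        number means a more business-critical application.
        """
        heapq.heapify(violations)
        while violations:
            priority, app, action = heapq.heappop(violations)
            if action == "grow" and approve_growth_manually:
                print(f"[{app}] growth queued for manual approval")  # e.g. pay-per-use cloud
                continue
            print(f"[{app}] enforcing '{action}' (priority {priority})")

    # The revenue-generating storefront is addressed before the internal app.
    enforce([(2, "internal-reports", "grow"), (1, "storefront", "grow")])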

Besides the ability to define and enforce SLAs, there are certainly other capabilities that contribute to the robustness of a truly elastic application environment. One area that warrants attention is the degree to which application health monitoring and maintenance can be automated. For instance, when an application begins to leak memory and response times slow to the point that SLAs are violated, it may be more efficient to address the leak automatically by, say, restarting the application, as opposed to adding more application instances.
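A sketch of that kind of automated maintenance decision, with invented thresholds and instance fields:

    # Hypothetical heuristic: if slowness coincides with near-exhausted memory,
    # treat it as a leak and restart rather than scaling out around the problem.
    def remediate(instance):
        slow = instance["avg_response_ms"] > 500
        leaking = instance["memory_mb"] > 0.9 * instance["memory_limit_mb"]
        if slow and leaking:
            return "restart"     # cheaper than adding instances around a leak
        if slow:
            return "scale_out"   # genuine load: add capacity
        return "ok"

    print(remediate({"avg_response_ms": 800, "memory_mb": 950, "memory_limit_mb": 1024}))
    # -> 'restart'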

These are just a few of what I'm sure are many facets that contribute to elasticity in an application environment. They happen to be the ones that bubble to the top for me, but I have no doubt there are others that may be more important for you. If you have your own ideas for elastic application environments I'd like to hear them.

Wednesday 27 January 2010

iStore Brings Cloud Computing to the Digital Oilfield

iStore on Thursday launched Digital Oilfield Online, a cloud-based hosting service for its PetroTrek software suite. The company unveiled its online cloud computing capability at the Microsoft Global Energy Forum, where it announced the immediate availability of cloud-hosted PetroTrek solutions in addition to its existing on-premises offerings.

iStore, which helps oil and gas companies access, organize and visualize exploration and production (E&P) data, has adopted the Microsoft Windows Azure Services platform for delivering Digital Oilfield solutions in the cloud.

Digital Oilfields, the business-focused decision-support technology in the petroleum industry, require substantial IT infrastructure to support data storage and information delivery. With the launch of Digital Oilfield Online, iStore will tackle the industry’s information management needs by applying the on-demand scalability, elastic computing and new cost structure offered by cloud computing.

The new service creates immediate value by eliminating costs associated with deploying an on-premises Digital Oilfield solution, saving companies as much as $200,000 on hardware and software, in addition to recurring maintenance and operating costs.

Digital Oilfield Online allows oil and gas companies to host their information-centric Digital Oilfield projects securely online. This includes asset management, production monitoring, health-environment-safety management, and joint venture collaboration. It also enables leveraging of on-premises data and external E&P content, such as IHS wells and production, through iStore’s data access and integration technology. The new service supports the Public Petroleum Data Model (PPDM), an open standard widely accepted by the industry for business-oriented data storage. This gives companies a roadmap for moving capital-intensive on-premises data management to the cloud.

“iStore is led by the market and our customers are telling us that agile, elastic IT is very important to their business going forward,” commented Barry Irani, president and CEO for iStore. Irani added, “Cloud computing not only helps our customers eliminate costly hardware and server software licenses, it jump-starts Digital Oilfield projects by cutting out the time-consuming infrastructure procurement process associated with delivering decision support solutions. The industry as a whole will make the leap when they know that their information is as safe or safer outside the corporate firewall – that’s why we consider data security the number one priority for doing business in the cloud.”

To ensure that Digital Oilfield Online meets the rigorous security standards required by the petroleum industry, CSC (NYSE:CSC), a Windows Azure Technology Adoption Program (TAP) partner, is assisting iStore with architectural review, security auditing, and the commercialization of the service.

Original Article - read more

Data Center Virtualization: Cloud Computing Still a Work in Progress

The AFCOM association recently revealed the results of a survey of 436 data center sites that showed the following trends: Cyber terrorism is an increasing concern, mainframe deployment is declining, storage deployment is increasing, and "green" technologies are definitely happening. AFCOM found that there is a shift in data centers away from mainframe computers and toward other types of servers.

That makes total sense, as virtualization is the mantra of the day for companies interested in optimizing their power usage by having several operating systems run within just one server.
Data processing and storage are done within one server, as opposed to a traditional system where the network is distributed in an elaborate design comprising several servers and workstations, each attached to its own separate hardware components.

In a virtualized environment, the need for additional physical resources such as extra servers, PCs, storage, hard drives, processors, and motherboards is largely eliminated.
That way, not only are we saving big time on hardware investment (good for the planet!), we are also avoiding excess maintenance costs. That's a big thumbs up!
The "not-so-thumbs-up" news is that even though 60.9 percent of data centers worldwide officially recognize cyber terrorism as a real threat, only about one-third of respondents included cyber terrorism in their disaster-recovery plans. The survey goes on to note that currently only about one of every four data centers addresses cyber terrorism, and one in five has procedures in place to prevent an attack. That means the remaining four out of five data centers are left dangerously vulnerable to sophisticated malware and viruses.
The risk of cyber attacks becomes more critical as several data centers expect massive expansion due to a dramatic increase in storage demands and aggressive business plans over the next five years. The AFCOM study finds that 22.0 percent will utilize a colocation center to meet their increased space requirements and 13.8 percent will use managed hosting services. Companies will also rely more and more on cloud computing services to meet increasing computing needs that cannot be met entirely on-site, especially if they are cash-strapped.
Cloud computing – cutting costs and energy
Enterprises are increasingly embracing cloud technology as part of their initiative to cut costs and energy. How does it work? Computational power and storage borrowed from a third party decrease the power load on the home front. You pay for what you use and waste fewer resources in the bargain. You eliminate redundant power, redundant servers, spare capacity, fail-over processes, backup servers, etc., and equipment never gets outdated. Services on the host side run on shared infrastructure at high utilization, as not all users will be accessing the service at the same time, which means there is further potential for power savings.
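A back-of-envelope illustration of that utilization argument (all figures below are invented; real numbers vary widely by hardware and workload):

    on_prem_servers = 10
    watts_per_server = 400
    on_prem_utilization = 0.15   # lightly loaded dedicated boxes
    cloud_utilization = 0.70     # shared hosts packed far more densely

    useful_work = on_prem_servers * on_prem_utilization   # in "server units"
    cloud_servers_needed = useful_work / cloud_utilization

    print(f"on-premises draw: {on_prem_servers * watts_per_server} W")
    print(f"cloud draw: {cloud_servers_needed * watts_per_server:.0f} W for the same work")
    # ~4000 W locally vs ~857 W of shared capacity, before cooling savings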
While some argue that cloud computing just shifts the energy consumption from the data centers to the hosts' side, even with all the additional overhead and host-side power usage there is a considerable decrease in net power, as you take other systems offline and pay a whole lot less for cooling.
Highly specialized energy saving environments that are carefully cultivated by big players like Amazon and Microsoft are likely to waste considerably less energy than the clients who seek their cloud services.
Security concerns
But the big elephant in the room when it comes to cloud computing has always been security, as you let go of sensitive client data over an open network. Remote computing increases the risk of breaches. This was recently brought to the forefront with Google's well-documented case, in which a hacker got hold of confidential Twitter documents after breaking into an employee's e-mail account. Certainly Google has been exceptional in embracing green computing and claims to run the most energy-efficient data centers in the world. But are we willing to ease up on the security of confidential data to get our carbon limits under control?

Organizations such as the Cloud Security Alliance, comprising industry leaders, global associations and security experts, have published guidance on secure cloud computing practices, covering 15 security domains ranging from computing architecture to virtualization. Just like any other emerging technology, there are certain matters that require meticulous ironing out. I'm optimistic that we will be able to make cloud work, as we cannot ignore the business benefits it brings us (lots of cash savings!) while keeping our individual and collective carbon limits in check.

Original Article - read more

The Apple tablet device is a computer, not a phone (we think...)

As most of you will already know, Mr Jobs is due to announce a new gadget today, strongly rumoured to be a tablet-based device with all of the connectivity the iPhone delivers, using a 7-10" touch screen.

Strong intel suggests the device will be running iPhone OS version 3.2. If this is the case, then MobileIron's iPhone Device Management platform should support the tablet easily.

If the tablet is real and the device is a grown-up (in size) iPhone, then many Enterprise clients will see it as a natural alternative to bulky laptops and fiddly netbooks. With MobileIron, securing, controlling and managing them will be a breeze.

The event begins @ 17:00 UK time with the big pitch starting @ 18:00. I, for one, want one already. Do you?

If you'd like to know more about MobileIron, take a look at Cloud Distribution's web site - http://www.cloud-distribution.com/mobileiron/

Read more Cloud Distribution News @ http://bit.ly/5NMFEA

Tuesday 26 January 2010

VMware Virtual Desktop Infrastructure and Why Performance Matters

Here is yet another great example of why I just love my job. Last week at our Xiotech National Sales Meeting we heard from a net-new educational customer out in the western US. They recently piloted a VDI project with great success. One of the biggest hurdles they were running into, and I would bet other storage cloud (or VDI-specific) providers are as well, is performance predictability. This predictability is very important. Too often we see customers focus on the capacity side of the house and forget that performance can be extremely important (VDI boot storm, anyone?). Rob Peglar wrote a great blog post called "Performance Still Matters" over at the Xiotech.com blog site. When you are done reading this blog, head over and check it out.

So, VDI cloud architects should make sure that the solution they design today will meet the requirements of the project over the next 12 months, 24 months and beyond.  To make matters worse, they need to consider what happens if the cloud is 20% utilized or if/when it becomes wildly successful and utilization is closer to 90% to 95%.  The last thing you want to do is have to add more spindles ($$$) or turn to expensive SSD ($$$$$$$$$) to solve an issue that should have never happened in the first place.

So, let’s assume you already read my riveting, game-changing piece on “Performance Starved Applications” (PSAs). VDI is ONE OF THOSE PSAs!!  Why is this important?  If you are looking at traditional storage arrays (Clariion, EVA, Compellent Storage Center, Xiotech Mag3D, NetApp FAS), it’s important to know that once you get to about 75% utilization, performance drops like my bank account did last week while I was in Vegas.  Like a freaking hammer!!  That’s just HORRIBLE (utilization and my bank account).  Again you might ask why that’s important?   Well, I have three kids and a wife who went back into college, so funds are not where they should be…..oh wait (ADD moment), I’m sure you meant horrible about performance dropping and not my bank account.  So, what does performance predictability really mean?  How important would it be to know that every time you added an intelligent storage element (Xiotech Emprise 5000 - 3U) with certain DataPacs you could support 225 to 250 simultaneous VDI instances (just as an example), including boot storms?  This would give you an incredible ability to zero in on the costs associated with the storage part of your VDI deployment.  This is especially true when moving from a pilot program into a full production roll-out.  For instance, if you pilot 250 VDI instances but know that you will eventually need support for 1,000, you can start off with one Emprise 5000 and grow to a total of four elements.  Down the road, if you grow beyond 1,000, you fully understand the storage costs associated with that growth, because it is PREDICTABLE.
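In other words, storage sizing becomes simple arithmetic; here is a sketch using the illustrative 250-instances-per-element figure from the example above:

    import math

    INSTANCES_PER_ELEMENT = 250  # illustrative figure from the example above

    def elements_needed(vdi_instances):
        """Storage elements required for a VDI deployment of a given size."""
        return math.ceil(vdi_instances / INSTANCES_PER_ELEMENT)

    print(elements_needed(250))   # pilot: 1 element
    print(elements_needed(1000))  # full roll-out: 4 elements, costed in advance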

What could this mean to your environment?  It means that if you are looking at traditional arrays, be prepared to pay for capacity that you will probably never use without a severe hit to performance.  What could that mean for the average end user?  Their desktop boots slowly, their applications slow down and your helpdesk phone rings off the hook!!  So performance predictability is crucial when designing scalable VDI solutions, and cost management (financial performance predictability) is every bit as critical.
So if you are looking at VDI or even building a VDI Storage Cloud then performance predictability would be a great foundation on which to build those solutions.  The best storage solution to build your application on is the Xiotech Emprise 5000.

Original Article - read more

Cloud Computing Certifications

Though I am not that fond of certifications in general and believe strongly in practical hands-on experience, it looks like within two years we will all need to be aware of the training kits and certifications that exist for cloud computing and virtualization technology. 3Tera (a cloud computing platform and services company) today announced its cloud computing certification and education offerings ...

"If you don't do it, in two years you won't have a business,"
According to these announcements there will be two tracks for these certificates:
  1. The Certified Cloud Operator program is targeted at service providers, enterprises, operations professionals and system integrators who deploy and operate cloud services. It covers installing, configuring and maintaining the computing fabric used for building cloud computing services, with emphasis on hardware requirements, service configuration, hardware failure troubleshooting, provisioning of customers and configuration of virtual private data centers.
  2. The Certified Cloud Architect program is aimed at systems architects, IT operations professionals, application developers and systems engineers who design, integrate, provision, deploy and manage distributed applications. The certification teaches the architectural concepts of 3Tera's AppLogic cloud computing platform, step-by-step deployment procedures, operating and managing applications in the cloud, best practices for security, testing and scaling applications, and how to architect for business continuity.
It's worth mentioning that Rackspace, Amazon (EC2) and Microsoft Windows Azure have training kits and education available, but they don't yet have cloud computing certifications. We should keep an eye on these vendors' offerings in the near future ...

Original Article - read more

Monday 25 January 2010

Cloud Balancing, Reverse Cloud Bursting, and Staying PCI-Compliant

One of the concerns with cloud bursting, specifically when used to address seasonal scaling needs, is that cloud computing environments are not necessarily PCI-friendly. But there may be a solution that allows an application to maintain its PCI compliance and still make use of cloud computing environments for seasonal scaling efficiency.

Cloud bursting, a.k.a. overdraft protection, is a great concept, but in some situations, such as those involving PCI compliance, it can be difficult if not impossible to actually implement. The financial advantages of cloud bursting for organizations requiring additional capacity on only a seasonal basis are well understood, but the regulatory issues that surround such implementations hinder adoption of this method for cost-effective capacity increases that are needed only for short periods of time.

But what if we architected a solution based on cloud bursting that offers the same type of advantages without compromising compliance with regulations and guidelines like PCI-DSS?


REVERSE CLOUD BURSTING and CLOUD BALANCING 

The ability to implement such an architecture would require that the PCI-compliant portions of a web application are separated (somehow, perhaps as SOA services or independently accessible RESTful services) from the rest of the application.
The non-PCI portions of the application are cloned and deployed in a cloud environment. The PCI-related portions stay right where they are. As the PCI-related portions are likely less heavily stressed even by seasonal spikes in demand, it is assumed that the available corporate compute resources will suffice to maintain availability during a spike, mainly because the PCI-compliant resources have all local resources at their disposal. It is also possible, and likely, that the PCI-related portions of the application will not consume all available corporate compute resources, which means there is some capacity available to essentially reverse cloud burst into the corporate resources if necessary.

In a very simple scenario, the global server load balancer basically "reverses" the priority of data centers when answering queries during the time period in which you expect to see spikes. So all application requests are directed to the cloud computing provider's instance first except for queries that require the PCI-compliant portion, which are always directed to the corporate (cloud computing perhaps) instance. This is basically a "cloud balancing" scenario: distributing application requests intelligently between two cloud computing environments.

The variations on this theme can become more complex and factor in many more variables. For example, you could set a threshold of capacity on the corporate data center instance that allows enough corporate compute resources available to handle the highest expected transaction rate and only burst into the cloud if the corporate capacity reaches that level. That's traditional "cloud bursting." You could also reverse the burst by dipping into corporate compute resources based on thresholds designated at the cloud computing provider's instance to minimize the financial impact of utilizing a cloud computing provider as the primary delivery mechanism for the application. That would be "reverse cloud bursting." The key is to ensure that no matter where the compute resources are coming from for the primary application components it does not negatively impact the availability and performance of the PCI-compliant processes executing in the corporate cloud environment.
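A minimal sketch of the routing decision described above; the endpoint names, URL paths, and threshold are invented, and a real global server load balancer would implement this in its DNS/policy engine rather than in application code:

    CORPORATE = "corp.datacenter.example.com"
    CLOUD = "app.cloudprovider.example.com"

    def route_request(path, in_peak_season, corp_utilization):
        """Pick a data center for a request under the cloud-balancing scheme."""
        if path.startswith(("/checkout", "/payment")):
            return CORPORATE   # PCI-compliant portions never leave the corporate cloud
        if in_peak_season:
            return CLOUD       # "reversed" data center priority during spikes
        # traditional cloud bursting: stay local until a capacity threshold
        return CLOUD if corp_utilization > 0.80 else CORPORATE

    print(route_request("/payment/submit", True, 0.50))   # stays corporate
    print(route_request("/catalog/browse", True, 0.50))   # served from the cloud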


THE KEY IS FLEXIBILITY IN ARCHITECTURE

Without the flexibility to deploy individual components of an application (a.k.a. services) into different environments, these scenarios simply don't work. Applications developed on tightly-coupled frameworks and principles will never truly be capable of taking advantage of cloud balancing, bursting, or any architecture that relies upon specific components residing in a specific location because of regulatory issues or other concerns.

This is one of the core principles of SOA – separation of not only interface from implementation, but location-agnosticism. There are many ways to achieve this kind of location-agnosticism, including on-demand generation of WSDL for client consumption that specifies end-point location based on the context of the initial request, and the use of global server load balancing combined with context-aware application delivery. What's vitally important, though, is the flexibility of the underlying application architecture and the ability to separate components in a way that makes it possible to distribute them across multiple locations in the first place.
If that means SOA is the answer, then SOA is the answer. If that means a well-designed set of RESTful components, so be it. Whatever fits into your organizational development and architectural practices is the right answer, as long as the answer includes location-agnosticism and loosely-coupled applications. Once you've got that down, the possibilities for how to leverage external and internal cloud computing environments are limited only by your imagination and, as always, your budget.

Cloud Start-up Pulls in $8m A Round

Nasuni, a Massachusetts start-up that's about to trot out a gateway to cloud storage, has gotten $8 million in Series A funding from North Bridge Venture Partners and Sigma Partners. Nasuni founders CEO Andres Rodriguez, an ex-CTO of the New York Times, and Robert Mason pioneered a cloud storage architecture at Archivas, which Hitachi Data Systems (HDS) acquired in 2007 for $120 million.

The technology is currently at the heart of the HDS cloud initiative. The VCs both got Nasuni board seats, along with former Iron Mountain Digital president John Clancy.

The Enterprise Strategy Group says, "The commercial public cloud simply is not going to happen in a big way until the user has solid, secure control over their data. Nasuni has understood this from day one, and engineered its offerings to address this exact issue. The fact that they were able to do it so fast is a testament to focused execution by a focused team."

Original Article - read more

Friday 22 January 2010

Cloud Distribution Portfolio - Overview

There's nothing quite like a little self-promotion, and as it's a new year and a Friday (weak excuses, I know) I thought I'd update you all on Cloud Distribution's solutions, just in case you were interested...

MobileIron - Smartphone Device Management for the Enterprise


If you are selling to Enterprises who are looking at putting out the fires which Apple's ubiquitous smartphone is causing, then look no further than MobileIron. iPhone device management, security and control has been a HOT topic in the UK since Vodafone and Orange joined O2 as iPhone carriers. The reason? 'Cos they haven't got any management! MobileIron provides cradle-to-grave support for iPhone, BlackBerry (as a complementary addition to BES or as a replacement), Windows Mobile, Symbian, Android and Palm OS (in Q1 2010). Cloud is offering partners and/or end users free proofs of concept, so let me know if you'd like a closer look.

Expand Networks - Virtualised WAN Optimization & Application Acceleration


My previous employer... and a leader in WAN Optimization, I'll have you know. Expand's virtual Accelerators (VACCs) deliver on-demand, cost-effective scalability for customers who would like to see their applications perform more effectively across WAN links. Apps such as Citrix, VDI, Exchange, SharePoint, Oracle, etc. all hog bandwidth when delivered across networks. Expand fixes this problem by caching, compressing and applying QoS to bring the user experience back to LAN speeds.

OpSource - What Amazon's EC2 wants to be when it grows up

OpSource Cloud combines the best of Amazon-like features with the security, performance and control your development projects and production applications require. You need professional features like private networks, dedicated configurable firewalls, configurable servers with burstable CPU, and passwords for multiple users with role-based permissions. And OpSource delivers these grown-up features and more - all standard and all available on a no-commit basis.

Have a great weekend and please get in touch if you'd like more details of any of the above solutions. We've got great support, great margins and (even though I do say so myself) great products, so what are you waiting for?

http://www.cloud-distribution.com

Read more Cloud Distribution News @ http://bit.ly/5NMFEA

Thursday 21 January 2010

Cloud storage may be main focus of Apple's Lala buyout

Apple's acquisition of Lala won't form the basis of any iTunes subscription service, but instead may help Apple quickly build a cloud-storage component into the next version of iTunes. Apple will supposedly leverage Lala's current music uploading technology to give users "anywhere access" to their music library.
Michael Robertson, guest writing at TechCrunch, cites a number of insider sources who say definitively that Apple will not offer a subscription option to the iTunes Store. Instead, it will complement the current model with cloud storage, giving iTunes users the ability "to navigate and play their music, videos and playlists from their personal URL using a browser based iTunes experience." Robertson, formerly the CEO of MP3.com, is currently the head of MP3tunes, which offers a cloud-storage service for music files similar to what he describes as the future of iTunes.

Such a strategy is one we speculated Apple would pursue, and one that sources for the Wall Street Journal also claimed would be wrapped into iTunes in a future update. Obviously, Lala's technology and engineering expertise, combined with a giant data center, could power such a feature. Robertson suggests that doing so could make an end-run around having to negotiate additional streaming licenses from record labels, since each library would be linked to a specific customer.

"Apple will link the tens of millions of previously sold iPods, Touches, AppleTV and iTablets to mobile iTunes giving users seamless playback of their media from a wide range of Apple branded devices," Robertson said. "iTunes shoppers will be able to continue to buy music and movies as they can now, with purchases still being downloaded, but once downloaded they will be automatically loaded to their mobile iTunes area for anywhere access."

Apple responsible for 99.4% of mobile app sales in 2009 (Updated)


The latest report from market research firm Gartner suggests that mobile apps are big business, and that business should only grow in the next few years. According to Gartner's numbers and those reported by Apple, Apple completely owns this market, likely grabbing almost every one of the 4.2 billion dollars spent on mobile apps in 2009. Based on Gartner's estimates and our own analysis, Apple could hold on to at least two-thirds of the market if current sales trends hold for 2010.

Apple first opened the App Store in July 2008, along with the launch of the iPhone 3G and the release of iPhone OS 2.0. Sales were brisk, with 300 million apps sold by December. After the holidays, that number had jumped to 500 million. Earlier this month, Apple announced that sales had topped 3 billion; that means iPhone users downloaded 2.5 billion apps in 2009 alone. Gartner's figures show another 16 million apps that could come from other platforms' recently opened app stores, giving Apple at least 99.4 percent of all mobile apps sold for the year.

Need to manage the iPhone in an Enterprise environment? Take a look at MobileIron - http://www.cloud-distribution.com/iphonedevicemanagement/

Wednesday 20 January 2010

Vodafone Sells 50,000 iPhones in First 24 Hours




Vodafone, the fourth carrier in the UK to get the iPhone, had an impressive first day of sales, unloading more than 50,000 iPhones on Thursday alone, according to The Independent. To put that in perspective, Vodafone sold 30,000 more iPhones in a single day than Google sold Nexus Ones in a full week.

What's perhaps most interesting about the news is that in the UK the iPhone is already sold by Orange, O2 (whose exclusive deal with Apple expired at the end of 2009), and Tesco. In fact, Vodafone actually outsold Orange by a ratio of 5:3 over the first 24-hour period.

The Independent writes:
"There had been fears the group would see customers move to rival operators as a result. However, one company insider said: "When we didn't get the iPhone initially, everyone predicted that customers would leave. These sales figures have proved that wrong."
This came despite Vodafone offering customers the device on a similar tariff to Orange and O2, scotching talk that a bitter price war was set to break out. Both Orange and Vodafone have instead looked to sell the phone off the strength of their networks. Mr Laurence reiterated yesterday that the "exceptional demand" had been driven by the strength of the company's network."
If anything, Vodafone's initial success with the iPhone clearly demonstrates that each additional carrier matters to Apple's bottom line, and may even serve as a bit of foreshadowing for the day that Verizon gets the iPhone in the United States.

IBM Claims Big Cloud Win

IBM claims to have clinched the largest enterprise cloud computing deployment in history. Seems Panasonic is migrating upwards of 300,000 desks from Microsoft Exchange and some unnamed collaboration gear to IBM's LotusLive public cloud services for e-mail, calendaring and contact management. The deal, starting with the first 100,000 employees and covering web conferencing, file sharing, instant messaging and project management, will ultimately include social networking with Panasonic's external partners and suppliers, too.

Original Article - read more

Tuesday 19 January 2010

Microsoft and HP pump $250 million into cloud computing

Microsoft and Hewlett-Packard today announced a three-year $250 million partnership to simplify IT environments through a wide range of converged hardware, software, and professional services solutions. This is a broad agreement with many components, building on the 25-year Microsoft-HP partnership, which works toward new models for application delivery, hardware architecture, and IT operations. The goal is to deliver the "next generation computing platform" by leading the adoption of cloud computing.

The duo plans to deliver a deeply integrated IT stack for business applications that connects IT infrastructure to applications for better performance, reliability, and availability with an overall lower total cost of ownership. The deal will span various company products, including HP Converged Infrastructure optimized for Microsoft Exchange 2010, and HP's Insight Software and Business Technology Optimization software portfolio with Microsoft's System Center suite. Microsoft is also committing to buy HP hardware for its Windows Azure deployments. The partnership also includes investments in HP Technology Services and Microsoft Services to provide design, implementation, and support for the joint solutions. Finally, the two will increase their global investment in the 32,000 HP and Microsoft Frontline channel partners tenfold.

Integrating virtualization and systems management across heterogeneous data center and cloud environments will be key to this deal. Microsoft and HP hope the partnership will result in greater customer confidence to deploy new business capabilities that leverage existing IT investments and also take advantage of cloud resources based on Windows Azure.

Cloud Computing, iSlate & Social Networking

It was last year's undoubted buzzword - but cloud computing is here to stay. Server efficiency, speed to market, instant test platforms, lower capital expenditure and on-demand resources are some of the oft-quoted compelling reasons for switching to cloud computing, and all of them will deliver quantifiable financial benefits. The key challenge for businesses in 2010 will not be to determine whether cloud computing is a viable technology but rather how they can derive benefit within their own organisations - a full outsource or a partial/hybrid solution? The managed hosting industry, on the other hand, will work hard during 2010 to allay the nagging doubts that exist around the Cloud's downtime and security issues. The next big cloud product offering from the hosting providers? Consultancy...

Original Article - read more

Monday 18 January 2010

Is There a Business in Physical to Cloud Conversions?

I have been thinking about whether there is a place for some cloud computing vendors to come on the scene to handle what I call the 'P2C' conversion process: taking a physical machine and converting it to an image that can run on a cloud. If we look at the virtualization market, clearly P2V (physical to virtual) was an enabling technology that helped people migrate existing physical hosts into virtual machines without having to completely rebuild systems from scratch. VMware had a product in the space (and still does), and there were also some popular products provided by third parties like Platespin (who had a nice exit to Novell for ~$200MM).

Do we have the potential for the same story in the cloud?

Well, what’s the same this time around?  You have huge existing deployments of physical machines and virtual machines, some of which IT managers would like to move to the cloud, just as you had IT managers who wanted to consolidate physical hosts by converting them to VMs.

But what’s different?  As I understand it, most of the cloud deployments are Linux based, and you’ve got a series of tools (Puppet, Chef, and the like) that allow administrators to very easily deploy cookie-cutter system templates very quickly.  So, the cost of migrating an existing system may be much higher than simply rebuilding through one of these systems and migrating data.

Maybe small environments are the sweet spot for a P2C product.  They are unlikely to have invested time and effort into deploying a configuration management system like Chef or Puppet, but may still want to move their physical systems into a cloud environment.   There is a consultant I know who was recently asked if he could do exactly this for a customer’s small LAMP infrastructure.  This is just one data point but I have a hard time believing there wouldn’t be other SMBs willing to pay for this kind of service.

Is there anything like this out there today?  Agree or disagree with my thesis?   Is there a business here?


Original Article - read more

Internal Clouds and Marketing the Non-Existent

A cloud which now exists within the data center leads to a rather ironic image of a misty atmosphere that causes more confusion

I was recently asked my opinion on what were the main considerations for Cloud Computing with specific emphasis on Internal Clouds.

Eager to assist, I quickly gave a rundown of issues including SLAs, distinguishing charge rates, security, etc.

Pleased with the response, our conversation then veered off in another direction, but then it struck me: I had just fallen victim to the marketing jargon. Internal Cloud?

What on earth was he asking me and what on earth had I just answered with?

I thought back and reassessed my understanding of the Cloud against what I originally understood it to be, i.e. that the Cloud was beyond the realms of the data center and certainly not internal. Facts told me that Cloud storage services, whether backup or archive, reside outside of the local data center and in the 'clouds', even to the extent of being on another continent.


So how then does the oxymoron of 'internal cloud' exist, so much so that in-depth conversations are taking place between consultants, IT managers and storage engineers at a data center near you? The answer is simple: marketing. Not content with pushing Cloud and SaaS as the future of low-end tiered storage, the term 'internal clouds' is now being marketed and ascribed to new and upcoming products which in essence only offer added support for virtualization or clustering.

The metaphor of an 'internal cloud', i.e. a cloud which now exists within the data center, leads to a rather ironic image of a misty atmosphere that causes even more confusion. Blow those 'internal clouds' away from your data center and what you'll see are flexible solutions for scalability, whether in the guise of global namespaces, clustering, grid storage or virtualization; solutions which were already known about and quite possibly already existed within your data center, but are now coined 'internal clouds'. Hence, once the haziness has disappeared, it's clear to see that the internal cloud we've been marketed never really existed in the first place.

So should I be asked my opinion on internal clouds again, let's just say that this time my answer will require no words but a straightforward raise of the eyebrow and a protruding of the chin.

Original Article - http://cloudcomputing.sys-con.com/node/1248025

Read more Cloud Distribution News @ http://bit.ly/5NMFEA

Friday 15 January 2010

NTT Makes Strategic Investment in Tilera

Tilera Corporation, developer of the highest-performance multicore processors, today announced that NTT Finance Corporation has made an investment in Tilera.

"We are thrilled to have NTT Finance as an investor," said Omid Tahernia, Tilera CEO. "NTT groups' leadership in telecommunications makes them an ideal business development partner as we grow our business in Japan and in a growing range of vertical markets.

"Tilera's revolutionary approach to multicore processors solves the problem of scaling to many cores while maintaining low power and programmability," said Masaaki Nogawa, President of NTT Finance Corporation. "This strategic technology delivers break-through levels of performance to cloud computing, wireless communications, networking and video applications."

Read more Cloud Distribution News @ http://bit.ly/5NMFEA

VMware Buys Zimbra; Next Step Bed, Bath, & Beyond?

I'm arguing with myself, so I'm winning and losing.

The argument?

VMware's Zimbra acquisition: a) a brilliant - and necessary - building block in the drive to dominate the evolving cloud economy? Or b) a distracting activity on par with a crow's attraction to bright, shiny objects?
The question isn't whether or not Zimbra is very cool. (It is.) Or whether it's worth the $100 million VMware shelled out. For me, it's a question of ends and means.

Readers of my blog know that I am a fan of VMware. They pretty much created the market that created the need for what AppZero (and no one else) does: server-side application virtualization. But they also know that I have questioned where VMware and its hulking shadow of a daddy, EMC, are headed.

I first blogged the question when VMware acquired SpringSource. I thought it was interesting that a company espousing and attracting partnerships for progress in this brave new cloud world would jump into competition with those partners.  SpringSource took VMware outside its core infrastructure business and into the development and management business.

Fast-forward.  With the Zimbra acquisition, VMware has popped up the market stack to land smack in the middle of business' most ubiquitous application: email and collaboration.  Zimbra out-Outlooks Outlook.  If you're unfamiliar with it, take a quick look at the demo (www.zimbra.com).



Anyone who has ever used Outlook/all of civilization will immediately be at home with the UI.  That same population will be tickled at the sheer elegance, practicality, and simplicity of the additional functionality.  Zimlets are mashups that let you do things that just make sense, like mouseover the word "tomorrow" in the email you're reading to see your calendar for tomorrow.  Click on it and you're in your calendar to drag, drop, and beyond.

Which brings me to Bed, Bath, & Beyond. (You wondered how I'd get here, didn't you?)  What is VMware doing?  If it is not taking direct aim at Microsoft ... if it is not positioning itself to be the Microsoft of the cloud economy ... if it is not aimed, ready, and equipped to command that dominion ... then acquisitions such as Zimbra and SpringSource are distractions from the core business that make only marginally more sense than would the acquisition of Bed, Bath, & Beyond.

Steve Herrod, VMware's Chief Technology Officer, blogging on the acquisition, brings up the elephant in the cloud, saying: "And there's one thing I'd like to address head-on. VMware vSphere is and will continue to be an outstanding platform for the deployment of Microsoft Exchange. We have heavily optimized our virtualization offerings specifically for the deployment of Microsoft Exchange, and thousands of companies are benefiting from the increased flexibility, availability, and security that comes from running Microsoft Exchange on top of VMware vSphere. We have some great material on these advantages available here.

"So whether it is datacenters, desktops, application development, or core infrastructure applications, our mission will be to attack complexity and simplify IT. You'll see much more from us in this space, so stay tuned!"

That's my plan for sure.

Original Article - http://cloudcomputing.sys-con.com/node/1246168

Read more Cloud Distribution News @ http://bit.ly/5NMFEA

Thursday 14 January 2010

How to Make Enterprise Apps Mobile-Friendly

Desktop applications can be used on a mobile device by running the app on servers in the data center and delivering the UI to any Receiver-equipped smartphone. The UIs of many apps already include window panes or frames that can easily be resized so they look and feel great on most mobile devices, including the iPhone, Android and Windows Phone based devices.

Original Article - Cloud Computing Journal - read more

CA Acquires Its Cloud Computing Partner Oblicore

Oblicore's top-down methodology for service level management (SLM) begins with contracts that use business language and metrics. These contracts integrate easily with technical data sources for continuous measurement of service performance against contract terms and conditions. The resulting transparency and control allow customers to better manage expectations between IT and the business, as well as contracts with external service providers.

The use of service contracts to manage service delivery becomes even more important as enterprises leverage cloud services. Lack of direct management control makes these contracts the primary assurance mechanism. Oblicore will play a significant role in helping enterprises assure the quality of cloud services.

Original Article - Cloud Computing Journal - read more

Wednesday 13 January 2010

Who Do You Trust To Meter The Cloud?

Data transfer to cloud computing environments must be controlled to avoid unwarranted usage levels and unanticipated bills from overuse of cloud services. Local metering of cloud service usage gives internal IT and finance teams local control over cloud computing.

Original Article - Cloud Computing Journal - read more

Dell to Sell Infobird Cloudware

Dell's cut a strategic partnership with Infobird, a Chinese cloud management vendor, to provide Infobird's cloud-based Internet communication services to Dell's SME base in China. The deal calls for Infobird to peddle Qitongbao, a SaaS cloud management platform with a contact center at its core, to Dell's enterprise clients through Dell's direct sales channels.

Original Article - Cloud Computing Journal - read more

Tuesday 12 January 2010

Optimize While You Virtualize to Get to the Cloud at Cloud Expo 2010

By consolidating low-priority applications onto shared resources, virtualization has enabled IT organizations to cut costs, improve efficiency, leverage new IT investment, and improve service. In his session at the 5th Cloud Expo, Rich Corley, Founder and CTO of Akorri, will analyze the steps IT managers need to take while they optimize their virtualization environment, including troubleshooting, optimization, capacity planning and service level management.

Original Article - Cloud Computing Journal - read more

Monday 11 January 2010

Interest in Cloud Computing Up 3,233% Since 2007

Lately it seems that no matter where I go, someone is telling me they've heard about cloud computing; from newspapers to TV, it seems to be everywhere. I'm not talking about techies or the clouderati. I'm talking about your mother, your sister or brother, I'm talking about regular people you meet at dinner parties -- the everyday Joe.

If you are a frequent reader of my blog, you'll know I enjoy looking at trends. A particularly good analytics tool is Google's Insights for Search. The site analyzes a portion of worldwide Google web searches from all Google domains to compute how many searches have been done for the terms you've entered, relative to the total number of searches done on Google over time. The site also allows the underlying characteristics of the data sets to be compared, for example against a broader industry. In our case, I compared Cloud Computing and a few other related terms against the broader "Computers & Electronics" industry to see how much interest there was in cloud computing. (See graph below or the original link.)
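For the curious, here is roughly how such a relative-interest index can be computed; the numbers below are made up for illustration and are not Google's data:

    # Term searches as a share of all searches, rescaled so the peak week = 100.
    term_searches  = [120, 150, 400, 900]          # e.g. "cloud computing" per week
    total_searches = [1.0e6, 1.1e6, 1.2e6, 1.3e6]  # all Google searches per week

    share = [t / total for t, total in zip(term_searches, total_searches)]
    peak = max(share)
    index = [round(100 * s / peak) for s in share]
    print(index)  # -> [17, 20, 48, 100]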

A few of the more interesting points:
1. The overall interest in Computers & Electronics is down about 46%.
2. Interest in Cloud Computing peaked in November, up an astounding 3,233% from its near-zero level in October 2007.
3. Interest in SaaS and Virtualization also remains very strong.



Virtualization, cloud to shape 2010 WAN trends

Predictions about the WAN optimization landscape in 2010 center on virtualization, cloud computing and video traffic

Virtualization, cloud computing and video traffic play a prominent role in predictions about the WAN optimization landscape in 2010. Read on to see what a few more industry watchers have to say.
Frederic Hediard, vice president of product management at Streamcore, emphasized the influence video will have in the coming year:

“The high growth in real-time video traffic on corporate networks will continue to drive the need for solutions that provide visibility, traffic prioritization and bandwidth management. When enterprises make large investments in videoconferencing equipment, they cannot settle for poor performance from these systems,” Hediard says.

Among its 2010 predictions, Riverbed noted the demand for end users to be able to connect to corporate resources no matter where they are working: “As more cloud and virtualization projects come to fruition, users will be further away from their data. More vendors including Riverbed will step up to provide offerings for the cloud that address several key issues including service availability, data and vendor lock-in, security, data transfer bottlenecks and performance unpredictability.”

Riverbed also expects to see growth in enterprise edge boxes that consolidate branch services: "Consolidation and virtualization initiatives help increase flexibility and efficiency in delivering valuable services while reducing costs. The deployment of edge boxes will provide another opportunity for more consolidation, as enterprises will look to consolidate not only servers but also print servers."

Adam Davison, vice president of corporate sales and marketing at Expand Networks, also emphasized virtualization and the cloud in his look-ahead observations:

“As we continue to move toward virtualized infrastructures, as-a-service offerings and cloud-based services, we will see more applications traversing the WAN than ever before. Many of these applications will be content-rich, real-time and bandwidth-intensive as collaboration and Web 2.0 applications become more widely utilized across distributed enterprises and virtual workgroups. This will create increased demand for advanced WAN optimization solutions; however, IT is already realizing that it is no longer just about providing acceleration, but about enabling enhanced levels of traffic visibility and control, and assuring the quality of the user experience across all these complex environments.”

On the topic of cloud computing, Davison notes that the ability to provision a software solution for virtualized WAN optimization from the data center to the branch office and mobile users will be critical:
“Management, visibility and monitoring of the WAN and its traffic will be critical here. For example, QoS in and out of the cloud; mobile clients for remote cloud users; and VDI/SBC support for desktop-as-a-service (DaaS) and for WAN optimization-as-a-service (WOaaS) will all be critical considerations,” Davison says. “Only a wholly virtual WAN optimization offering will enable the cloud infrastructure, both private and public, to be deployed and managed from any location.”

While the idea of WAN optimization-as-a-service has been thrown around as a trend for some time, Davison says Expand is confident that 2010 will see it come into its own. “Our recent survey of IT decision makers certainly supports this sentiment, with 76% saying they would consider adopting a WOaaS strategy if offered at a compelling price point, and 44% citing ongoing cost efficiency and reduction in OPEX as key drivers.”

In addition, Expand expects to see more service providers and telcos integrate WAN optimization products into their own offerings to add value and differentiate themselves from competitors. “With this integration, users may not even be aware they are reaping the benefits of WOaaS,” Davison says. 

Original Article NETWORK WORLD - http://napps.networkworld.com/newsletters/accel/2010/010410netop2.html?page=2

Read more Cloud Distribution News @ http://bit.ly/5NMFEA

Friday 8 January 2010

The Mass Adoption of Cloud Computing Is Coming

I'm really loving a phrase that I read recently about cloud computing. It came from the CIO of Avago Technologies, a San Jose, CA-based semiconductor maker that is gradually migrating its data and apps from its internal servers to the cloud – including recruiting, human resources, e-mail and web security. According to Bob Rudy, CIO at Avago, the migration has saved the company millions of dollars by eliminating hardware needs and software licenses and by improving security, speed and storage. Moving to the cloud has also freed employees from annoying and trivial tasks like managing their e-mail, enabling them to focus more on their core jobs.

Read more Cloud Distribution stories @ our web site http://bit.ly/5NMFEA

Thursday 7 January 2010

Is Cloud Computing Actually Environmentally Friendly?

As we begin 2010 and prepare for the next decade, there is a nagging question that, I have to say, I frequently answer without any concrete proof. A question that seems to be becoming more important than ever. It is simple yet profound in its implications for us as global citizens: is cloud computing actually environmentally friendly?

First, I will admit, I am among the group of cloud advocates who routinely claim that cloud computing is green, and I say this without any proof or evidence to support my statement. I make this claim as part of my broader pitch to use cloud computing; I say it as a sales and marketing guy, not as an advocate. As an advocate, I'd like to have some empirical data to support my position. Believe me, I've searched and searched -- although there are piles of forecasts about the potential market for cloud computing, said to be in the billions, little exists to support the green / eco-friendly argument.

On the face of it, a major incentive to move to cloud computing is that it appears more environmentally friendly than traditional data center operational / deployment models. The general consensus is that reducing the number of hardware components and replacing them with remote cloud computing systems cuts the energy cost of running and cooling hardware and reduces your carbon footprint, while higher data center consolidation / optimization conserves energy. But a major problem remains: where is the proof?

The problem is that there is no uniform way to measure this supposed efficiency. None of the major cloud companies are providing utilization data, so it is not possible to know just how efficient cloud computing actually is -- other than that it sounds and feels more green.

The problem is measuring the hypothetical. What is the hypothetical footprint of a startup that might have chosen to build its own data center rather than use someone else's? Things like transportation, development, construction and management are very difficult to measure and arguably still create vast amounts of CO2, yet are generally not taken into consideration. The power source can also make a dramatic difference to the CO2 footprint -- say, coal versus wind or nuclear.

Then there is the question of consumption. We now have the ability to run our applications on thousands of servers, something that previously wasn't even possible. To put it another way, we can potentially use several years' worth of energy in literally a few hours, where previously this wasn't even an option. So, in direct contrast, hypothetically we're using more resources, not less. On the flip side, if we had bought those thousand servers and left them running (under-utilized), the power usage would be significantly higher. But then again, buying those servers would have been out of reach for most, so it's not a fair comparison. There we are -- back where we started. You may use 80% less energy per unit, but have 1,000% more capacity, which at the end of the day means you're using more, not less, energy.
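Spelling out that closing arithmetic makes the point concrete. A quick sketch using the hypothetical figures above, not measured data:

    # Rebound-effect arithmetic using the hypothetical figures above:
    # 80% less energy per unit of capacity, but 1000% more capacity.
    energy_per_unit = 1.0 - 0.80       # 20% of the baseline energy per unit
    capacity_multiplier = 1.0 + 10.0   # "1000% more" = 11x the capacity

    total_energy = energy_per_unit * capacity_multiplier
    print(f"Relative total energy: {total_energy:.1f}x the baseline")  # 2.2x

On those assumed numbers you end up burning 2.2 times the energy of the baseline, even though each unit is five times more efficient.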

I'm not alone in this thinking. More broadly, the International Organization for Standardization (ISO) considers the label "environmentally friendly" too vague to be meaningful, because there is no single international standard for the concept. There are, however, a few emerging data center energy efficiency initiatives, notably by the EPA in the United States through its Energy Star program. The EPA programs are working to identify ways in which energy efficiency can be measured, documented, and implemented in data centers and the equipment they house, especially servers. This may be the foundation for a future measure of cloud "eco-friendliness", but until cloud computing providers step up and provide the data, it does little to resolve the question.

Let me be clear: I'm not saying cloud computing isn't green. I'm sure that if you compared a traditional data center deployment to a near-exact replication in the cloud, you'd find the cloud to be more efficient. The problem is that there is currently no way to justify this statement without some kind of data to support it.

Introducing WebSphere CloudBurst 1.1

I've written numerous technical entries both here and elsewhere about the WebSphere CloudBurst Appliance. The appliance is a cloud management device that is geared towards those enterprises that for a variety of reasons (security, privacy, performance, customization capability, existing investment, etc.) are looking to benefit from on-premise or private clouds. The initial version of WebSphere CloudBurst, released in June of 2009, introduced the capability for users to create, deploy, and manage WebSphere application environments in a cloud that they retain control over.

While that may seem straightforward enough, this can radically change the way users perceive their application environments. By using this appliance-based approach, users can achieve flexibility and agility because of, instead of in spite of, their approach to the creation, provisioning, and maintenance of application platforms.




Seesmic Acquires Ping.fm - GREAT NEWS !

If anyone has wondered how I manage to get my blogs out to all my social media feeds at once (Facebook, LinkedIn, Plaxo, Bebo, FriendFeed, etc.), Ping.fm is how. Mashable just reported that Ping.fm has been acquired by a leading light in the Twitter client space, Seesmic. This is great news IMHO!
 
Seesmic, maker of popular desktop and mobile Twitter clients, has just acquired Ping.fm — a service that lets users post to 50 social networks with a single status update — for an undisclosed sum.
The acquisition includes both talent and technology, so Ping.fm co-founders Adam Duffy and Sean McCullough are now Seesmic shareholders and key members of the management team. They will begin immediately integrating Ping.fm technology into Seesmic applications.


Sometime in January you can expect updates to Seesmic's BlackBerry, Android, web, Windows and OS X (via Air) apps. Each will add advanced Ping.fm integration, supporting the ability to post to 50 social networks with a single update, special Ping.fm triggers to specify posting to specific social sites, and the option of using Ping.fm's e-mail, SMS and chat functionality.

Ping.fm currently boasts 200,000 updates a day from its 500,000 registered members. More than 100 applications already use the Ping.fm API for cross-posting purposes, and although Seesmic will assume full control of the platform, they're committed to maintaining it and supporting the developer community.
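For the curious, cross-posting through Ping.fm's API amounted to little more than a single HTTP POST. The sketch below is from memory: the v1 endpoint and parameter names are assumptions rather than quoted documentation, and the keys are placeholders:

    # Rough sketch of a Ping.fm cross-post. The endpoint and parameter
    # names are recalled from memory and should be treated as assumptions.
    import requests

    resp = requests.post(
        "http://api.ping.fm/v1/user.post",
        data={
            "api_key": "YOUR_DEVELOPER_KEY",      # placeholder
            "user_app_key": "YOUR_USER_APP_KEY",  # placeholder
            "post_method": "status",              # post as a status update
            "body": "One update, fanned out to every connected network",
        },
    )
    print(resp.status_code, resp.text)  # the API replied with simple XML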

The maneuver no doubt means that Seesmic is now infringing upon TweetDeck's territory and mission with ambitions to be much more than just a Twitter client. Ultimately, Seesmic aims to be your primary gateway to the social web and to serve 1,000,000 updates per day in 2010.

Skype offers living room TV action

Skype have announced a partnership with LG & Panasonic to bring video conferencing to TVs this summer. Read more here - http://www.theregister.co.uk/2010/01/06/skype_tv/

http://bit.ly/5NMFEA

Cloud Reliability Will Be Bigger than Cloud Security for 2010-11

We have all the tools for securing information in a cloud: establishing trust through identity, data privacy through encryption, and content integrity through signatures. We are overly focused on cloud security issues and less on reliability. This is all about to change.
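For illustration, two of those three tools map onto very ordinary code. A minimal sketch using Python's standard hmac module and the third-party cryptography package; nothing here is specific to any cloud provider, and the keys and messages are placeholders:

    # Sketch of two cloud-security primitives named above.
    # Requires the third-party "cryptography" package for encryption.
    import hashlib
    import hmac
    from cryptography.fernet import Fernet

    # Data privacy through encryption
    key = Fernet.generate_key()
    token = Fernet(key).encrypt(b"customer record")
    assert Fernet(key).decrypt(token) == b"customer record"

    # Content integrity through a shared-secret signature
    secret = b"shared-secret"  # placeholder
    sig = hmac.new(secret, token, hashlib.sha256).hexdigest()
    check = hmac.new(secret, token, hashlib.sha256).hexdigest()
    assert hmac.compare_digest(sig, check)

Reliability is the harder problem precisely because there is no equivalent one-liner for it.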

Read more Cloud Distribution stories @ our web site http://bit.ly/5NMFEA

Wednesday 6 January 2010

Data Backup and 'Cloud Based' Services With The Carphone Warehouse

Spare Backup announced the launch of a remote, 'cloud based' customer portal service with the UK's leading independent mobile phone retailer, The Carphone Warehouse. The proposition will ensure that customers have their most precious and valuable electronic property protected and readily available. Spare Backup and The Carphone Warehouse have developed the proposition jointly, and more product announcements will follow in the New Year.

Read more Cloud Distribution stories @ our web site http://bit.ly/5NMFEA

Tuesday 5 January 2010

A Cloudy Future for Networks and Data Centers in 2010


The message from the VC community is clear – "don't waste our seed money on network and server equipment." The message from the US Government CIO was clear – the US Government will consolidate data centers and start moving towards cloud computing. The message from the software and hardware vendors is clear – there is an enormous investment in cloud computing technologies and services.


Read more Cloud Distribution stories @ our web site http://bit.ly/5NMFEA

The Cloud Computing – Application Acceleration Connection

Like peanut butter and jelly, cloud computing and application acceleration are just better together.
Ann Bednarz of Network World waxes predictive regarding 2010 trends in application delivery and WAN optimization. One of the interesting tidbits she offers, from research firm Gartner, is growth in the application acceleration market:
“Second, the research firm is predicting a return to modest growth for the application acceleration market in 2010. Gartner is forecasting a compound annual growth rate of 12.22%, with 2014 revenue of $4.27 billion.”
This, when viewed alongside the predictions that cloud computing – both public and private – will see significant growth in 2010, should be no surprise.
The build-out of “enterprise class” cloud computing services will continue to be a major growth area. IDC stated: “The emergence of enterprise-grade cloud services will be a unifying theme in this area, with a battle unfolding in cloud application platforms -- the most strategic real estate in the cloud for the next 20 years.” Overall growth in the IT industry across the major categories of hardware, software and services is expected to be in the 2 to 4% range.
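Those Gartner figures are easy to sanity-check with a little compound-growth arithmetic. A quick sketch; the assumption that the 12.22% CAGR compounds over the five years from a 2009 base to 2014 is mine:

    # Working backwards from Gartner's forecast: a 12.22% CAGR and
    # $4.27B of 2014 revenue imply roughly this 2009 base market.
    # (Assumes five years of compounding, 2009-2014.)
    cagr = 0.1222
    revenue_2014 = 4.27e9
    years = 5

    revenue_2009 = revenue_2014 / (1 + cagr) ** years
    print(f"Implied 2009 market: ${revenue_2009 / 1e9:.2f}B")  # ~ $2.40B

An implied base of roughly $2.4 billion squares with the "return to modest growth" framing rather than a boom.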
It shouldn't be a surprise, but perhaps the connection isn't as obvious as it first appears. Organizations are global, yes, but many businesses – especially those most likely to take advantage of cloud computing and its cost-reducing benefits, i.e. mid-sized and smaller organizations – often focus on a fairly localized market. Larger organizations have no doubt undergone the exercise in the past of determining where, from a performance standpoint, it is best to deploy second and even tertiary data centers. Public cloud computing, however, changes the impact of location on performance, and it opens up a potentially growing need for application acceleration solutions to combat longer distances and the shifting delivery epicenter of their most critical, customer- and end-user-facing applications.

CLOUD COMPUTING and the IMPACT on the APPLICATION ACCELERATION MARKET

Consider, if you will, an organization that chooses to deploy in "the cloud" an application that will be used on a daily basis by employees. In previous incarnations that application would likely have been deployed in the local data center, accessible to employees over the local LAN. High speed. Low latency. The only real obstacles to astounding application performance would have been the hardware and software limitations imposed by the choice of server hardware, web/application server software, and database. Now move that application to "the cloud" and consider the potential obstacles to application performance that are introduced: higher latency, lower speed, less control. Moving to "the cloud" may well remove the expense associated with higher performing servers, so bottlenecks imposed by the server hardware are gone, but they are replaced by the inevitable degradation of performance that comes with delivery over the Internet: higher latency, because the application is farther away and there are more users "out there" than "in here", and lower speed, because it's rare that an organization has LAN-like speeds even to the Internet.
So what we have is a more cost-effective method of deploying applications that is farther away from its users. The potential – and I'd say inevitability – is there that performance will be impacted, and not in a good way. The solution is to (1) keep the application deployed locally, (2) tell the users to deal with it, or (3) employ application acceleration/optimization solutions to provide consistently acceptable performance to users no matter where they might end up.
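To see why option (3) gets so much attention, consider the round-trip arithmetic for a chatty application. The round-trip count and latencies below are illustrative assumptions, not measurements:

    # Illustrative latency arithmetic: a chatty app making 40 sequential
    # round trips feels very different on a LAN than across the Internet.
    # All figures are assumed purely for illustration.
    round_trips = 40
    scenarios = {"local LAN": 1, "cloud over WAN": 80}  # RTT in ms

    for label, rtt_ms in scenarios.items():
        print(f"{label}: {round_trips * rtt_ms} ms waiting on the network")
    # local LAN: 40 ms; cloud over WAN: 3200 ms -- before any server time

The server may be just as fast in the cloud; the extra three seconds comes entirely from distance.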

There are well-known, proven solutions to the core problem of distance and application performance: caching, compression, more efficient connection management, and so on, all of which fall under the "application acceleration" umbrella. As cloud computing experiences higher adoption rates, it is inevitable that performance will be raised as an issue and will need to be addressed. Hence it makes perfect sense that growing cloud computing adoption will provide growth opportunities for application acceleration solution vendors as well, which will positively impact that market.
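As a toy illustration of one leg of that umbrella, here is a stdlib-only compression sketch; fewer bytes on the wire means fewer packets crossing a high-latency link. The payload is made up, and real gains depend heavily on content:

    # Toy demo of one "application acceleration" technique: compression.
    # The repetitive payload below is made up purely for illustration.
    import gzip

    payload = b"<html>" + b"<div>repetitive markup</div>" * 500 + b"</html>"
    compressed = gzip.compress(payload)

    ratio = len(compressed) / len(payload)
    print(f"{len(payload)} bytes -> {len(compressed)} bytes ({ratio:.1%})")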

That's good news for customers and end-users, as they are typically the folks who are impacted the most by architectural and deployment changes to applications. Usability is impacted by performance and responsiveness, which directly impacts productivity – one of the key metrics upon which many end-user employees are evaluated. A rise in offerings related to application acceleration for cloud computing deployed applications will be beneficial to both providers (differentiation and up-selling of services) and customers (remediating potential impact of moving applications further from users) alike.

As the cloud matures we are likely to see, in general, a positive impact on many application delivery related markets – WAN acceleration, application acceleration, and application security – as providers begin to offer a wider variety of services to differentiate themselves from the competition and make their offerings more attractive to enterprise-class customers, who are used to having many more technological options for dealing with application delivery challenges than most cloud computing providers currently offer.

http://bit.ly/5NMFEA

Monday 4 January 2010

VocalTec Class-5 VoIP Solution Available on Amazon EC2


VocalTec Communications has announced the availability of a free-trial version of its Essentra BAX on the Amazon Elastic Compute Cloud (Amazon EC2). Potential customers will now have the ability to use VocalTec's Class-5 VoIP application server, which provides a comprehensive set of innovative residential and enterprise VoIP services.


Read more Cloud Distribution stories @ our web site http://bit.ly/5NMFEA

OpSource Cloud Adds Support for CentOS 5

http://bit.ly/5N8DT1

Saturday 2 January 2010

Apple and Oracle on Way to Do What IBM and Microsoft Could Not

The more I thought about it, the more it seemed that these two companies are extremely well positioned to actually fulfill what other powerful companies tried to do and failed. Apple and Oracle may be unstoppable in their burgeoning power to dominate the collection of profits across vast and essential markets for decades.

Apple is well on the way to dominating the way that multimedia content is priced and distributed, perhaps unlike any company since Hearst in its 1920s heyday. Apple is not killing the old to usher in the new, as Google is. Apple is rescuing the old media models with a viable online direct payment model. Then it will take all the real dough.

The iPad is a red herring, almost certainly a loss leader, like Apple TV. The real business is brokering a critical mass of music, spoken word, movies, TV, books, magazines, and newspapers. All the digital content that's fit to access. The iPad simply helps convince the producers and consumers to take the iTunes and App Store model into the domain of the formerly printed word. It should work, too.

Oracle is off to becoming the one-stop shop for mission-critical enterprise IT ... as a service. IT can come as an Oracle-provided service, from soup to nuts, applications to silicon. The "service" is that you only need go to Oracle, and that the stuff actually works well. Just leave the driving to Oracle. It should work, too.

This is a mighty attractive pitch right now to a lot of corporations. The in-house suppliers of raw compute infrastructure resources are caught in a huge, decades-in-the-making vice -- needing to cut costs, manage energy, reduce risk and back off complexity. They can't do that under the status quo.

In doing the complete IT package gig, Oracle has signaled the end of the best-of-breed, heterogeneous, and perhaps open-source components era of IT. In the new IT era, services are king; the way you actually serve or acquire them is far less of a concern. Enterprises focus on the business, and the IT comes, well, like electricity.

This is why "cloud" makes no sense to Oracle's CEO Larry Ellison. He'd rather we take out the word "cloud" from cloud computing and replace it with "Oracle." Now that makes sense!

Read more Cloud Distribution News @ http://bit.ly/5NMFEA

Friday 1 January 2010

Rackspace Outage Takes Down Big Portions of the Web

http://feedproxy.google.com/~r/Mashable/~3/rWw7TCrkBBg/


It appears that a networking issue at Rackspace took many websites offline this afternoon (including Mashable for a period of time), starting at around 4:30 p.m. EST (with confirmation from Rackspace about 15 minutes later).
37signals (and its suite of apps), Threadless and the recently launched Vevo were among the major websites knocked offline, although Twitter search indicates that potentially thousands of sites hosted on either Rackspace's cloud or dedicated servers were impacted.

While a slow Friday afternoon before the holidays for those of us in the media business might not be a huge deal, an outage during the busiest online shopping time of the year is a big blow to ecommerce companies that host with Rackspace. We'll wait and see what type of resolution — if any — the company offers to its customers.
Notice any other big websites down this afternoon, or experience problems of your own? Let us know in the comments.







SpinVox sold for £64m (BIG LOSS...)

Troubled startup SpinVox - once a shooting star of the British technology industry - has been bought by an American rival in a deal worth $102m (£64m).

After a difficult year that saw substantial losses and unrest among its investors, it was today confirmed that the company - which converts customers' voicemails into text messages that they can read more easily - has been acquired by US technology firm Nuance.

In a statement Nuance, which makes the popular voice recognition program Dragon NaturallySpeaking, said it was buying SpinVox to help expand its reach into new countries.

"Around the world, the voice-to-text market has experienced tremendous growth over the last year," said Nuance vice president John Pollard. "With SpinVox's robust infrastructure, language support and operational experience, we will broaden the reach and capabilities of our platform."

The deal marks a heavy loss on the investments made in the Buckinghamshire-based company, which had raised more than $230m (£145m) in recent years to fund its ambitious expansion plans - and once valued itself at more than $500m.

While it boasted a legion of fans, the company had struggled to pay for major expansions around the world, while simultaneously fighting a series of claims that its automated voice-to-text technology actually relied heavily on call centre staff.

Over the summer, it rejected a BBC report suggesting that humans – not computers – transcribed large portions of customers' messages, and held a demonstration of its system for journalists.

The increased scrutiny exposed a series of fissures inside the company, however. The management team, led by chief executive Christina Domecq, came in for criticism, and in August the recently appointed director Patricia Russo – the former chief executive of telecoms giant Alcatel-Lucent – stepped down.

With losses mounting, the company raised more funding in August – largely to service its debts – and began paying staff with stock, rather than cash, as a way to save money. But in September one of its backers, Invesco, wrote down its outlay by 90% and confirmed that SpinVox was up for sale.

Rumours of the Nuance deal were reported earlier this month, around the same time that the company was given more time to repay a £30m loan that had placed extra pressure on its finances. However, early suggestions were that the company was closing in on a $150m price tag - significantly more than the $102.5m deal that was eventually struck.

Investors in the company – who include Goldman Sachs, Carphone Warehouse chief Charles Dunstone and Peter Wood, the founder of insurance group Direct Line – will receive a total of £42m in cash from the acquisition, with the rest of the money coming in the form of Nuance stock.

Shares in the Massachusetts technology company – which had climbed by more than 50% over the past year – were down around 1%, at $15.97, on the news.

http://bit.ly/5NMFEA