Transitioning to Cloud Computing
The drive toward cloud computing continues to be a dominant infrastructure theme for organizations looking to reduce costs, increase storage and improve mobility. What many fail to realize is that the trend is also forcing IT managers to rethink fundamental security questions, as a barrage of new attacks and exploits assaults the cloud every day. Compelling for almost any business model, cloud computing delivers a scalable, accessible and high-performing computing infrastructure at an appealing price, and operating in the cloud allows new and emerging technologies to converge. Appealing to both provider and consumer, it enables new application deployment and recovery options, as well as new application business models. However, cloud computing may not be the panacea that the press and many organizations make it out to be. We must have trust and confidence in the platform on which we deploy our applications and data; we must be able to maintain control of the information that drives our business; and ultimately, we must be able to prove that trust to our auditors. The solution, not yet fully defined, could be called "auditability."
Friday, 12 February 2010
Ground Rules For Mobilizing Users
Last week, I discussed rolling user-owned mobile devices into your mobility plans. While letting users pay for their own wireless access may be a boon to the company's bottom line, it can also snowball into a nightmare for administrators left supporting a slew of new devices. A little advance work and planning can help thwart the dark side of mobility: unending calls to the help desk, security threats and general chaos. First and foremost, you'll need to set some ground rules.
At the top of the list should be defining the limits of IT support. All too often, end users assume that it is IT's role to make every piece of technology they own work. Don't believe me? Just try to find an IT professional who hasn't been asked to fix a co-worker's home computer. It has to be made very clear to users that IT makes no guarantees that connecting a mobile device to corporate email will work, but that it will do what it can to help them succeed. Building up an archive of support documents that lays out the exact process for connecting a given mobile device will go a long way toward getting all but the most technically challenged users connected and keeping them that way. If your IT staff does not have access to these mobile devices and your budget doesn't allow bringing a few in house, ask your users: there are no doubt a few folks within your organization who would be willing to trade a little time documenting and taking screenshots for early mobile access.
The second rule applies to the devices themselves, and how many are allowed to connect. Microsoft Exchange allows a virtually unlimited number of mobile devices to connect to a user's mailbox via Exchange ActiveSync. While this may thrill gadget-happy users who want their entire collection of iPhones, iPads and Android devices linked to their mail, every additional device compounds the risk of one of them being lost or stolen.
Furthermore, administrators need to make it clear that "jailbreaking" or "rooting" devices, a process that opens them up to third-party applications and networks, is expressly forbidden. While it may be attractive for users to "free up their devices", it opens up brand new security threats; rogue applications have already made their way onto these liberated devices. Another side effect of jailbreaking is the typical lag between official releases and their jailbroken counterparts, which leaves users waiting to apply security fixes that official releases have already shipped.
If, despite your best efforts, the chaos of mobility gets to be too much, there are a number of products out there that can provide a level of visibility and control. Vendors like MobileIron, Zenprise, iPass and Fiberlink are building tools, appliances and services that bring visibility, control and a number of self-service options to mobile devices. As the paradigms of mobility continue to evolve, look for companies like these, as well as additional offerings from traditional enterprise vendors and the wireless carriers, to make the dark side of mobility a little brighter.
Original Article - http://www.networkcomputing.com/wireless/the-dark-side-of-mobilizing-users.php
MobileIron - http://www.cloud-distribution.com/mobileiron/
Read more Cloud Distribution News @ http://bit.ly/5NMFEA
Google's Smartphone Management Drops Another Big Barrier To "Apps" Adoption
Google has removed what for many CIOs and IT professionals has been one of the last remaining hurdles to adopting Google Apps for business documents, spreadsheets, presentations and, probably most importantly, email. Yesterday, the biggest of the cloud-based challengers to Microsoft and IBM Lotus announced that it now has a range of BlackBerry-esque mobile device management features (including the ability to remotely wipe a smartphone) for iPhones as well as Windows Mobile and Nokia-based devices.
Thursday, 11 February 2010
Oracle Shoots Down Sun Cloud
Amazon's EC2 and S3 have nothing to fear from Sun. Oracle says it is going to blow the Amazon-aping Sun Open Cloud away: it does not want to pursue the on-demand service. Announced last March, a month before Oracle agreed to buy Sun, and promised for the summer, the service was to be a worldwide public cloud using OpenSolaris, Linux, Windows, Sun Grid Engine, ZFS, MySQL and Java running on Sparc and x86 blades, complete with open APIs.
Wednesday, 10 February 2010
Symbian switches to open source
Symbian phone operating system goes open source
The group behind the world's most popular smartphone operating system - Symbian - is giving away "billions of dollars" worth of code for free.
It means that any organisation or individual can now use and modify the platform's underlying source code "for any purpose".
Symbian has shipped in more than 330m mobile phones, the foundation says.
It believes the move will attract new developers to work on the system and help speed up the pace of improvements.
"This is the largest open source migration effort ever," Lee Williams of the Symbian Foundation told BBC News.
"It will increase rate of evolution and increase the rate of innovation of the platform."
Ian Fogg, principal analyst at Forrester Research, said the move was about Symbian "transitioning from one business model to another" as well as trying to gain "momentum and mindshare" for software that had been overshadowed by the release of Apple's iPhone and Google's Android operating system.
Evolutionary barrier
Finnish mobile phone giant Nokia bought the software in 2008 and helped establish the non-profit Symbian Foundation to oversee its development and transition to open source.
The foundation includes Nokia, AT&T, LG, Motorola, NTT Docomo, Samsung, Sony Ericsson, STMicroelectronics, Texas Instruments and Vodafone.
The group has now released what it calls the Symbian platform as open source code. This platform unites different elements of the Symbian operating system as well as components - in particular, user interfaces - developed by individual members.
Until now, Symbian's source code was only open to members of the organisation.
It can be downloaded from the foundation's website from 1400 GMT.
Mr Williams said that one of the motivations for the move was to speed up the rate at which the 10-year-old platform evolved.
"When we chatted to companies who develop third party applications, we found people would spend up to nine months just trying to navigate the intellectual property," he said.
"That was really hindering the rate of progress."
Opening up the platform would also improve security, he added.
'Mind share'
Symbian development is currently dominated by Nokia, but the foundation hoped to reduce the firm's input to "no more than 50%" by the middle of 2011, said Mr Williams.
"We will see a dramatic shift in terms of who is contributing to the platform."
However, said Mr Williams, the foundation would monitor phones using the platform to ensure that they met minimum standards.
Despite being the world's most popular smartphone operating system, Symbian has been losing the publicity battle, with Google's Android operating system and Apple's iPhone dominating recent headlines.
"Symbian desperately needs to regain mindshare at the moment," said Mr Fogg.
"It's useful for them to say Symbian is now open - Google has done very well out of that."
He also said that the software "may not be as open and free as an outsider might think".
"Almost all of the open source operating systems on mobile phones - Nokia's Maemo, Google's Android - typically have proprietary software in them."
For example, Android incorporates Google's e-mail system Gmail.
But Mr Williams denied the move to open source was a marketing move.
"The ideas we are executing ideas came 12-18 months before Android and before the launch of the original iPhone,".
Tuesday, 9 February 2010
The Network Beneath the Clouds
For those of us familiar with the network equipment industry, the suggestion that computing costs will fall to the cost of power should at least raise some eyebrows within cultures that have, for the most part, been driven by manual labor and specialization. Lew Tucker's response on the Lew's Law blog helped clarify the point I was trying to make about the rise of IT automation and the network's pivotal role: "While I'm sure this is more obvious than brilliant, the cost of computing will continue to fall, bounded only by the cost of the power to produce the computation. We're simply turning electricity into computation and communication, using electrons to move bits around. What cloud computing does is use automation, scaling, multi-tenancy, and a competitive marketplace to bring us closer to this lower bound for the cost of computing." To make that bound concrete: a server drawing 300 W for an hour consumes 0.3 kWh, about three US cents of electricity at $0.10 per kWh; everything paid above that floor is labor, licensing and margin.
While the network powered the automation of business practices, it has managed to insulate itself from the competitive forces it unleashed between competing supply chains and operating models.
I think that the automation of the network is at least partially in response to:
1) the threat of consumerization;
2) the growing significance of new initiatives like IPv6, virtualization, DNSSEC and private cloud; and
3) rising network complexity (more endpoints, higher rates of change, ongoing labor specialization).
Now add to that lineup of manual opex pain a recent Gartner prediction that 1 in 5 businesses will dump their IT assets by 2012 (Jon Brodkin, Network World):
"The analyst firm predicts that 20 percent of businesses will own no IT assets by 2012, a shift that will have a major impact on IT careers."
Anyone still clinging to their manual DDI (DNS, DHCP, and IP address management) scripts and spreadsheets after reading this sobering yet provocative prediction will likely join the legions of "middlemen" who had to be retrained as network-enabled supply chains replaced similar business processes. That transformation has set the stage for the coming conflict between the real-time enterprise and the increasingly inflexible network.
The recent ESG 2010 Outlook and the Gartner MarketScope for DDI appliances are harbingers of a shift in how networks are managed, and a host of vendors and professionals who embrace automation stand to benefit.
A reduction in the operating expense of networks could vastly expand the market for network solutions that are easier to manage, more powerful and connect ever larger populations of systems and endpoints. Enterprises will be forced to automate or outsource to those who do.
This new network will be all about availability, flexibility and economy and will set the stage for a new resurgence in network spending and the rise of network software.
Monday, 8 February 2010
Elastra Named Cool Cloud Computing Vendor
Elastra Corporation announced that Everything Channel's CRN has named its Enterprise Cloud Server (ECS) as one of the "100 Coolest Cloud Computing Products" of the year, and part of the top "20 Coolest Cloud Infrastructure Vendors." The Top 100 Cloud Computing products include 20 storage vendors, 20 security vendors, 20 productivity vendors, 20 infrastructure vendors and 20 platform vendors.
Elastra ECS brings an unprecedented level of automation to IT organizations seeking to leverage cloud computing to address the fundamental challenges they face today. ECS increases IT agility by cutting lead times through automatic generation of deployment plans and provisioning of systems. It also helps reduce IT costs by minimizing human intervention in systems management, while allowing IT organizations to maintain operational control by integrating with their existing tools and technologies.
"It's an honor to be recognized by the editors of CRN. The demand for IT efficiency is on the rise and it's important to create products to meet those requirements," said Peter Chiu, Director of Product Management, Elastra. "ECS seamlessly fits into existing data centers and addresses IT's perennial challenge of getting more done with fewer resources, delivering faster results and reducing IT operational costs."
The "100 Coolest Cloud Computing Products" list was based on nominations from Solution Providers rating technology, channel influence, effectiveness and visibility along with business and sales impact. The final selections were made by a panel of Everything Channel Editors.
Considering a move to hosted email services? READ THIS FIRST
Cloud-based email services are mature, flexible and cost-effective. So why do so many small businesses try to recreate what they've been using (and paying through the nose for) for years?
In my last start-up, I wanted the business to have a robust, flexible and cost-effective email solution, but I did not want to invest in servers, licensing and in-house support for a Microsoft Exchange deployment. As it happened, we shared an office with a Microsoft Hosted Exchange provider, so the obvious route was to use their service, which cost circa £12 ($19) per seat per month. The solution worked, was familiar to the staff, and provided web-based access (via Outlook Web Access) as well as calendaring and SharePoint for document management. For the first 18 months, I was happy, the staff were happy and the guys who supported the network were happy.
However, as the business grew, the cost of the solution began to outweigh the benefits and improvements we were seeing from it. The truth was (and I am sure still is) that it was very costly and time consuming for the Microsoft provider to deploy new features, releases and additional services. The end result was that, 18 months on, we still had exactly what we had on day one: old technology built for servers on the LAN, not suited to cloud-based deployment, development and scale.
More recently, I set about building Cloud Distribution, my new business, which is focused (as you may have guessed) on cloud computing solutions. I took a look around the market and, as expected, the same Microsoft-based providers were there en masse, with little or no differentiation other than price, which still came in at around £12 ($19) per seat per month. That's £180 ($288) a month for the 15 mailboxes we needed from day one. Now, in start-up land every pound counts, and I simply was not prepared to splash out on old technology, no matter how familiar it was to my staff. So I looked around and found (not that easily, I have to tell you) Google Apps Standard Edition. What an eye-opener!
Up to 50 seats. A 7 GB mailbox for every user. Web access via Gmail, and desktop access via any IMAP or POP client such as Outlook or Thunderbird. Full group calendaring. Google Docs document management (recently extended to a full storage solution where any type of document can be uploaded and accessed). iPhone push email, calendar and contact sync, and a whole host of other great small-business features, for NOTHING. Yes, that's right: nada, zilch, zero per seat per month, for up to 50 seats. Where's the catch?
Believe me, from a small business perspective, there simply isn't one. Google Apps rocks. Period.
Now, I can hear the masses saying "yes, but it does not do this, that and the other", but the truth is I can live without the bits it does not do. That saves £180 ($288) a month and still buys all the bits it does do that Microsoft's solutions never will, like delivering new features on a weekly basis, smart integration with Salesforce.com (our cloud-based CRM platform), and lots of other elements that work for any business that wants to do business, not support email systems.
Why am I selling Google Apps? It's certainly not to make money for Cloud Distribution; as I said, it's free, and any IT-savvy person who knows how to configure a domain via their ISP can get it up and running in a day at most (a sketch of the DNS side appears after the list below). I am outlining this proposition because it is typical of the way cloud solutions can change the dynamics of a traditionally LAN-based application or service. So think about the other services a small business could use cloud IT for:
• Clean Email (already widely used)
• Firewalling (currently at the router or behind it)
• Wi-Fi (Controller in the Cloud not on the LAN)
• VPN (Push it up to the ISP)
• Unified Threat Management (again, push it up to the ISP)
• Telephony (Hosted IP Centrex and SIP Trunking)
• Device Management
• Security Assessments
All of these can be moved up to the cloud and delivered, with many additional benefits, at a fraction of the cost of LAN-based services. So look for the Google Apps of your next application or service requirement, save your business money, make your life easier, and help your users do more with less.
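For the record, the "configure a domain" step mentioned above mostly amounts to pointing the domain's MX records at Google's mail servers at your registrar or ISP. The sketch below shows the records in BIND zone-file style with a placeholder domain; the host names and priorities are the ones Google published for Apps at the time, but treat this as an illustration and take the authoritative list from the Apps control panel.

```
; example.com stands in for your own domain
example.com.  IN  MX   1  ASPMX.L.GOOGLE.COM.
example.com.  IN  MX   5  ALT1.ASPMX.L.GOOGLE.COM.
example.com.  IN  MX   5  ALT2.ASPMX.L.GOOGLE.COM.
example.com.  IN  MX  10  ASPMX2.GOOGLEMAIL.COM.
example.com.  IN  MX  10  ASPMX3.GOOGLEMAIL.COM.
```

Once the records propagate, mail for the domain flows to Gmail, and everything else in the list above is switched on from the web console.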
Join Us: http://bit.ly/joincloud
Friday, 5 February 2010
SlideShare Launches Channels for Businesses and Brands
Cloud presentation sharing site SlideShare today adds a new Channels service to its professional content sharing arsenal, allowing businesses and brands to create custom microsites within the community. With Channels, companies can create a branded channel for sharing professional content including presentations, whitepapers and webinars, or sponsor a topical content channel curated by SlideShare staff.
Combined with the LeadShare and AdShare programs launched in late 2009, businesses can develop integrated social media campaigns, from custom brand experience to lead generation to targeted promotion of professional content to the large community of business leaders and decision makers who make up SlideShare's more than 25 million unique visitors per month.
The screenshots in the announcement showcase, on the one hand, a highly customizable branded channel experience for Microsoft Office and, on the other, the curated channel option. In the latter, the focus is on curating great content from around SlideShare on a chosen topic (virtualization, in the example). Interested users can follow a particular channel to be notified of new updates.
Clouds Are Like Onions
Which of course are like ogres: they're big, chaotic, and have lots of layers of virtualization.
In discussions involving cloud it is often the case that someone will remind you that “virtualization” is not required to build a cloud.
But that’s only partially true, as some layers of virtualization are, in fact, required to build out a cloud computing environment. It’s only “operating system” virtualization that is not required.
The problem is that, unlike the term “cloud”, “virtualization” has come to be associated with a single, specific kind of virtualization; it is almost exclusively used to refer to operating system virtualization, a la Microsoft, VMware, and Citrix. But many kinds of virtualization have existed far longer than operating system virtualization, and many of them are used extensively in data centers both traditional and cloud-based.
Like ogres, the chaotic nature of a dynamic data center built on these types of virtualization can be difficult to manage.
Layer upon layer of virtualization within the data center, like the many layers of an onion, are enough to make you cry at the thought of how to control that volatility without sacrificing the flexibility and scalability introduced by the technologies. You can’t get rid of them, however, as some of these types of virtualization are absolutely necessary to the successful implementation of cloud computing. All of them complicate management and make more difficult the task of understanding how data gets from point A to point B within a cloud computing environment.
EIGHT KINDS OF VIRTUALIZATION
Yes, that’s right, eight kinds of virtualization exist though we tend to focus on just the one, operating system virtualization. Some may or may not be leveraged in a cloud computing environment, but at least four of them are almost always found in all data center environments.
- Operating System Virtualization is what we tend to think of when we simply say “virtualization.” This is the virtualization of compute resources, the slicing and dicing of a single physical machine into multiple “virtual” machines typically used today to deploy several different applications (or clones of a single application) on the same physical hardware.
- Network Virtualization is likely one kind of virtualization many don't even consider virtualization, but it is, and it even has standards that help ensure consistency across implementations. The VLAN (Virtual LAN) has existed since the early days of networking and is used in cloud computing environments to isolate customer data. VLANs essentially create a virtual network overlay atop an existing physical network, slicing and dicing the physical connections into multiple virtual (and hopefully smaller) networks that can be configured to provide security and network-layer functions, like quality of service and rate shaping, peculiar to the applications and users directed over the VLAN. VLAN tagging, used to identify traffic as “belonging” to a specific virtual network, is defined by IEEE 802.1Q (a short code sketch of reading the tag appears after this list).
Also a form of network virtualization is trunking, or link aggregation, as defined by IEEE 802.3ad. Trunking aggregates multiple physical ports on a switching device and makes them appear as one logical (virtual) link, providing additional bandwidth to high-volume networks as well as load balancing traffic across the physical interconnects in order to maintain consistent network performance. Interestingly enough, VLANs are almost always used when trunking is used in a network.
And of course there is NAT (Network Address Translation), which is also a form of network virtualization. Because of the dearth of IP addresses, most users internal to an organization are directed through a pool of one or more public IP addresses (routable, i.e. reachable from across the Internet) to access resources external to the organization. The virtualization here again makes many IP addresses (internal, non-routable, private) appear to be one or a small number of IP addresses (public, routable, external). This process is also used on inbound connections, making one or a small number of external, public IP addresses appear to represent multiple internal, private IP addresses.
- Application Server Virtualization occurs when a load balancer, application delivery controller, or other proxy-based application network device “virtualizes” one or more instances of an application. Virtualizing an application server makes multiple servers appear to be one ginormous server to clients, and it acts in a manner very similar to trunking in that this form of virtualization is about aggregation. When applied to application servers, this virtualization focuses on the aggregation of compute resources.
This form of virtualization is almost always necessary in a data center, whether traditional or cloud-based. Application server virtualization is the foundation on which failover (reliability) and scalability are based, and one would be hard-pressed to find a modern data center in which this form of virtualization – whether provided by software or hardware – is not already implemented.
- Storage Virtualization is another form of aggregation-based virtualization. Storage virtualization aggregates multiple sources of storage, such as NAS (network attached storage) devices and NFS/CIFS shares hosted on various servers around the data center, and “normalizes” them into a single, consistent interface, such that users are isolated from the actual implementation and see only the “virtual” namespaces presented by the storage virtualization device.
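As a concrete illustration of the network layer above, here is a minimal C# sketch (my own, not from the original article) that reads the 802.1Q VLAN ID out of a raw Ethernet frame. It assumes the byte array starts at the destination MAC address and that at most one tag is present; the header layout (EtherType 0x8100 followed by a 16-bit tag control field) is fixed by the standard.

```csharp
using System;

class VlanTagReader
{
    // An 802.1Q tag sits after the two 6-byte MAC addresses: EtherType
    // 0x8100 followed by a 16-bit TCI (3-bit priority, 1-bit DEI,
    // 12-bit VLAN ID).
    static int? ReadVlanId(byte[] frame)
    {
        if (frame.Length < 18) return null;        // too short to carry a tag
        ushort etherType = (ushort)((frame[12] << 8) | frame[13]);
        if (etherType != 0x8100) return null;      // untagged frame
        ushort tci = (ushort)((frame[14] << 8) | frame[15]);
        return tci & 0x0FFF;                       // low 12 bits are the VLAN ID
    }

    static void Main()
    {
        var frame = new byte[64];
        frame[12] = 0x81; frame[13] = 0x00;        // 802.1Q EtherType
        frame[14] = 0x20; frame[15] = 0x2A;        // priority 1, VLAN 42
        Console.WriteLine(ReadVlanId(frame));      // prints 42
    }
}
```

Everything a switch does with VLANs ultimately comes down to reading and rewriting those two tag bytes, which is part of why this overlay is cheap enough to be ubiquitous.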
Read more Cloud Distribution News @ http://bit.ly/5NMFEA
Thursday, 4 February 2010
Microsoft Brings Cloud Interoperability Down to Earth
An interoperable cloud could help companies cut costs and governments connect constituents, say Microsoft executives. Governments and businesses alike are looking at cloud services as a way to consolidate IT infrastructure, scale their IT systems for the future, and enable innovative services and activities that were not possible before. To help organizations realize the benefits of cloud services, technology vendors are investing in the hard work of identifying and solving the challenges presented by operating in mixed IT environments, and are collaborating to ensure that their products work well together. In fact, although the industry is still in the early stages of collaborating on cloud interoperability issues, there has already been considerable progress. But what does 'cloud interoperability' mean, and how is it benefiting people today?
Cloud interoperability is specifically about one cloud solution, such as Windows Azure, being able to work with other platforms and other applications, not just other clouds. Customers also want the flexibility to run applications either locally or in the cloud, or on a combination of the two. Microsoft is collaborating with others in the industry and working hard to ensure that the promise of cloud interoperability becomes a reality.
Leading Microsoft’s interoperability efforts are general managers Craig Shank and Jean Paoli. Shank spearheads the company’s interoperability work on global standards and public policy, while Paoli collaborates with Microsoft’s product teams as they map product strategies to customers’ needs.
Shank says one of the main attractions of the cloud is the degree of flexibility and control it gives customers: ‘There’s a tremendous level of creative energy around cloud services right now — and the industry is exploring new ideas and scenarios together all the time. Our goal is to preserve that flexibility through an open approach to cloud interoperability.’
Adds Paoli, ‘This means continuing to create software that’s more open from the ground up, building products that support the existing standards, helping customers use Microsoft cloud services together with open source technologies such as PHP and Java, and ensuring that our existing products work with the cloud.’
Shank and Paoli firmly believe that welcoming competition and choice will make Microsoft more successful in the future. ‘This may seem surprising,’ notes Paoli, ‘but it creates more opportunities for its customers, partners and developers.’
With all the excitement around the cloud, Shank says it’s easy to lose sight of the payoff. ‘To be clear, cloud computing has enormous potential to stimulate economic growth and enable governments to reduce costs and expand services to citizens.’
The public sector provides a great example of the real-world benefits of cloud interoperability, and Microsoft is already delivering results in this area through solutions such as the Eye on Earth project. Working with the European Environment Agency, Microsoft is helping the agency simplify the collection and processing of environmental information for use by government officials and the general public. Using a combination of Windows Azure, Microsoft SQL Azure and pre-existing Linux technologies, Eye on Earth pulls data from 22,000 water monitoring points and 1,000 stations that monitor air quality. It then helps synthesize this information and makes it available for people to access in real time in 24 languages.
This level of openness and interoperability doesn’t happen by accident. ‘The technical work of interoperability is challenging, and it requires a commitment to our customers’ needs as well as a concerted effort on multiple fronts and a measured, pragmatic approach in how technology is developed,’ Paoli says. Microsoft’s efforts in this area include designing its cloud services to be interoperable. The Windows Azure platform, for example, supports a variety of standards and protocols. Developers can write applications to Windows Azure using PHP, Java, Ruby or the Microsoft .NET Framework.
Many of these product developments are the result of diverse feedback channels that Microsoft has developed with its partners, customers and other vendors.
For example, in 2006 Microsoft created the Interoperability Executive Customer (IEC) Council, a group of 35 chief technology officers and chief information officers from organizations around the world. They meet twice a year in Redmond to discuss their interoperability issues and provide feedback to Microsoft executives such as Microsoft Server and Tools President Bob Muglia.
In addition, last week Microsoft published a progress report, sharing for the first time operational details and results achieved by the Council across six work streams, or priority areas. And the Council recently commissioned the creation of a seventh work stream for cloud interoperability, aimed at developing various standards related to the cloud, working through business scenarios and priorities such as data portability, and establishing privacy, security, and service policies around cloud computing.
Microsoft also participates in the Open Cloud Standards Incubator, a working group formed by the Distributed Management Task Force (DMTF), a consortium through which more than 200 technology vendors and customers develop new standards for systems management. AMD, Cisco, HP, IBM, Microsoft, Red Hat and VMware are among a handful of IT vendors that lead the Open Cloud Standards Incubator, creating technical specifications and conducting research to expedite adoption of new cloud interoperability standards.
Developers also play a critical role. Microsoft is part of Simple Cloud, an effort it co-founded with Zend Technologies, IBM and Rackspace, designed to help developers write basic cloud applications that work on all of the major cloud platforms.
Microsoft is also engaging in the collaborative work of building technical ‘bridges’ between Microsoft and non-Microsoft technologies, such as the recently released Windows Azure Software Development Kits (SDKs) for PHP and Java and tools for Eclipse version 1.0, the new Windows Azure platform AppFabric SDKs for Java, PHP and Ruby, the SQL CRUD Application Wizard for PHP, and the Bing 404 Web Page Error Toolkit for PHP. Each is an example of the Microsoft Interoperability team’s yearlong work with partners to bring core scenarios to life.
Though the industry is still in the early stages of collaborating on cloud interoperability issues, great progress has already been made. The average user may not realize it, but this progress has had a significant positive impact on the way in which we work and live today.
Cloud interoperability requires a broad perspective and creative, collaborative problem-solving. Looking ahead, Microsoft will continue to support an open dialogue among the different stakeholders in the industry and community to define cloud principles and incorporate all points of view to ensure that in this time of change, there is a world of choice.
Most Not Interested in Cloud Data Storage
Another dose of reality for the cloud computing industry! A new survey by Forrester says that just 3% of companies use cloud storage; worse, the vast majority of firms don't plan to put data in the cloud. This is the latest in a string of poor showings for the cloud, and I have a theory about it. But first, read on: Forrester interviewed more than 1,200 IT decision makers at enterprises and small and mid-size businesses in North America and Europe, asking whether they had plans to adopt cloud storage services such as those offered by Amazon S3, EMC Atmos, Nirvanix, The Planet, or AT&T.
· 43% said they’re not interested in cloud storage;
· An equal proportion were interested but have no plans to adopt;
· 3% plan to implement a cloud storage platform in the next year;
· 5% plan on it a year from now or later;
· And, while 3% have already switched to cloud storage, only 1% are expanding an existing implementation.
To me, this reflects issues and concerns that just won’t go away on the part of IT folks and end users, chief among them the need for assurances of guaranteed service levels and security. Forrester agrees, according to a story about the survey in SF Gate:
Forrester analyst Andrew Reichman writes in the report that “there is long-term potential for storage-as-a-service, but Forrester sees issues with guaranteed service levels, security, chain of custody, shared tenancy, and long-term pricing as significant barriers that still need to be addressed before it takes off in any meaningful way.”
One interesting finding of the survey is that companies are more interested in the cloud for back-up storage rather than general purpose storage. Why is that?
“First, it’s a complete service offering, not just CPU or storage capacity,” Reichman writes. “You get the backup software intelligence and storage capacity in a fully managed service. Second, it’s solving a very specific pain point – the pain of bringing a costly and error-prone, but very necessary, IT function under control. This is in contrast to storage-as-a-service offerings where the user has to figure out how to put it all together.”
I’ll put my theory about this finding out there, too. I think that companies probably feel they’re taking less of a chance backing up data on the cloud – data they own and also have access to on internal servers. Perhaps this doesn’t require such a leap of faith as it does to migrate all your data to the cloud…with a fear that you can’t recoup any information that gets lost.
This is why companies that do use cloud data storage also take safeguards to ensure their data remains safe and accessible – such as cloud monitoring services.
Read more Cloud Distribution News @ http://bit.ly/5NMFEA
Wednesday, 3 February 2010
TONIGHT 6PM GMT
I am attending the OpSource webinar - Making Channels Work to Grow Your SaaS / Cloud Business TONIGHT @ 6pm http://bit.ly/94yJXc
Read more Cloud Distribution News @ http://bit.ly/5NMFEA
Genetic Cloud: Cloud Computing Is For Everyone Every Day
Cloud computing provides almost unlimited resources, they are always available, and you pay only for the time you use. This article continues the series about new opportunities that are becoming available to everyone.
Do you play billiards? I do. I'm not a professional player, so I enjoy pocketing a ball by accident: the ball rebounds off the cushions, rolls toward the pocket, almost stops spinning, and … falls in. That kind of casual luck!
But I’m not sure the luck is here today. The coming manager doesn’t look like he simplifies my current work day.
“A task for you….”
Leaving out the technical details, commercial secrets, and the rest, the task can be presented as the well-known travelling salesman problem with 50 points. Yes, the number of possible routes is on the order of 50! (about 10^64). On top of that, the distance matrix is not static. Even if I find a suitable library quickly, it will take considerable time to fit the task's peculiarities into it.
“When should it be ready?” I ask guardedly.
“I believe in you,” my manager says, patting me on the shoulder. “This evening.”
“And what if it's ready before evening?” I try to joke.
“If it's ready earlier, you could even play billiards for the rest of the day,” my manager answers in the same manner.
OK, at least I'm motivated now.
There are a lot of methods for solving such a problem.
It would be great if the task could be solved with minimal effort on my part, even if that takes a little more of other resources, e.g. computing time. Sure! Indeed, it can be solved that way! A genetic algorithm is the simplest method I can implement, but then I would just have to wait for the result. A waste of time! Unless I use cloud resources: that will save my time, and I will be able to play billiards while waiting.
Stage 1. Genetic Algorithm.
Genetic algorithms are very simple. They have only three steps (excluding initialization): crossover, mutation, and selection. The chromosome is a ring path starting from the first defined point and directed toward the second defined point (so that the same route is not explored twice in opposite directions during the search).
The implementation is as follows (a code sketch appears after the list):
- The initialization is easy, because it is random.
- The selection is even easier: only the total distance of each chromosome needs to be calculated, and the best (shortest) chromosomes are kept for the further search.
- The mutation is not difficult either: two random cells of a chromosome are swapped.
- The crossover is a little more complex. Two random cells are chosen, defining two arcs of the ring in both chromosomes. One arc is taken from each chromosome, picked so that the arcs overlap as little as possible; duplicated cells are then removed at random, and missing cells are inserted at random, according to their placement in the parent chromosomes.
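To make those steps concrete, here is a minimal C# sketch of the mutation and selection operators on the ring-path chromosome, with random initialization and a toy distance matrix standing in for the real, changing one. This is my reconstruction for illustration, not the article's actual code, and the crossover described above is omitted for brevity.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class TspGa
{
    static readonly Random Rng = new Random();

    // Total length of a ring path over the distance matrix.
    static double RouteLength(int[] path, double[,] dist)
    {
        double total = 0;
        for (int i = 0; i < path.Length; i++)
            total += dist[path[i], path[(i + 1) % path.Length]];
        return total;
    }

    // Mutation: two random cells of the chromosome are swapped.
    static void Mutate(int[] path)
    {
        int i = Rng.Next(path.Length), j = Rng.Next(path.Length);
        int tmp = path[i]; path[i] = path[j]; path[j] = tmp;
    }

    // Selection: keep only the shortest routes for the next generation.
    static List<int[]> Select(List<int[]> population, double[,] dist, int keep)
        => population.OrderBy(p => RouteLength(p, dist)).Take(keep).ToList();

    static void Main()
    {
        const int cities = 50, popSize = 100;
        var dist = new double[cities, cities];                 // toy distances
        for (int a = 0; a < cities; a++)
            for (int b = 0; b < cities; b++)
                dist[a, b] = Math.Abs(a - b);

        // Initialization: random permutations of the city indices.
        var pop = Enumerable.Range(0, popSize)
            .Select(_ => Enumerable.Range(0, cities).OrderBy(c => Rng.Next()).ToArray())
            .ToList();

        for (int gen = 0; gen < 1000; gen++)
        {
            foreach (var parent in pop.ToList())               // snapshot, then grow
            {
                var child = (int[])parent.Clone();
                Mutate(child);
                pop.Add(child);
            }
            pop = Select(pop, dist, popSize);                  // survival of the shortest
        }
        Console.WriteLine($"Best route length: {RouteLength(pop[0], dist)}");
    }
}
```

Running several copies of this with different seeds, on different machines, is exactly the trick the cloud makes cheap.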
So the algorithm is ready. Is the search not the most efficient? Perhaps, but I am very efficient: the code is ready quickly, and even if it takes some additional time to calculate, the whole task is finished sooner. The genetic algorithm is still a randomized method: it guarantees only a near-optimal solution and depends on its parameters. I can increase the population size to improve the probability of a better result, but that will increase the calculation time. Or I can run several instances simultaneously and pick the best result.
Stage 2. Cloud.
As I write in .NET, I prefer the rest of my working tools to be in Visual Studio as well, so my cloud is also in the Studio. I use a free tool, EC2Studio; it is the most convenient way for me to operate Amazon EC2 from inside Visual Studio. Amazon EC2 provides computing resources whenever I need them: I choose the necessary AMI configuration and start an instance, and when I am finished with it, I terminate it. Payment is only for the time used.
The deployment goes like this (a scripted sketch follows the list):
- Launch an instance.
- Generate a key pair, if one does not exist yet.
- Find the necessary AMI (I use the following images for the comparative tests: ami-6a6f8d03 (Win2008, large, EBS), ami-a2698bcb (Win2008, small, EBS), ami-dd20c3b4 (Win2003, large), and ami-df20c3b6 (Win2003, small)).
- Start it.
- Wait until the instance is in the running state.
- Run a test on the instance.
- Get the Administrator password for the instance.
- Start a console session to the instance.
- Copy the application there.
- Run it there.
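For anyone who would rather script those steps than click through them, the launch itself can be driven from code. The sketch below is a hypothetical illustration using today's AWS SDK for .NET rather than the 2010-era tooling; the key pair name is a placeholder, the AMI ID is the (long-retired) one from the list above, and error handling is omitted.

```csharp
using System;
using System.Threading.Tasks;
using Amazon;
using Amazon.EC2;
using Amazon.EC2.Model;

class LaunchWorker
{
    static async Task Main()
    {
        // Credentials come from the environment or a configured profile.
        var ec2 = new AmazonEC2Client(RegionEndpoint.USEast1);

        var request = new RunInstancesRequest
        {
            ImageId = "ami-df20c3b6",            // Win2003 small image from the list; substitute a current AMI
            MinCount = 1,
            MaxCount = 1,
            InstanceType = InstanceType.T2Micro, // stand-in for the old "small" size
            KeyName = "my-key-pair"              // placeholder key pair name
        };

        var response = await ec2.RunInstancesAsync(request);
        foreach (var instance in response.Reservation.Instances)
            Console.WriteLine($"Launched {instance.InstanceId}; poll until it reports 'running'.");
    }
}
```

Copying the application to the instance and starting it still happens over RDP or a remoting tool, as in the manual list above.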
The test run used 4 cloud instances and 1 locally run copy (on Vista).
The best solution here is 181.85 (found on the slowest machine; but then, the genetic algorithm is randomized).
Different software and hardware configurations are available on EC2. My tests use small and large images (small images are 32-bit, large images are 64-bit). The comparative result below is the average time per cycle (selection, mutation, and crossover in the defined proportions):
| Configuration | Average time per cycle |
| Small Windows 2003 32-bit Amazon instance | 1098 msec |
| Large Windows 2003 64-bit Amazon instance | 594 msec |
| Large Windows 2008 64-bit Amazon instance | 503 msec |
| Small Windows 2008 32-bit Amazon instance | 1139 msec |
| My laptop (Vista) | 1659 msec |
Taking into account the roughly fourfold price difference between a small and a large instance on EC2, a small instance works out about twice as cheap for this program (when execution time is not critical).
Later the same day.
By e-mails:
“Mr. Manager, the near-optimal route that was found consists of the following sequence of points…”
“Good! But where are you?”
“I’m having a good billiard playing, as you said in the morning.”
“I understand. Have beautiful doublets! And big thanks for the result!”
Now I think almost any optimization task can be solved with two components: a cloud and a genetic algorithm.
Every problem leads to a good solution. Every cloud has a silver lining.
Read more Cloud Distribution News @ http://bit.ly/5NMFEA