One of the concerns with cloud bursting, particularly when it is used to address seasonal scaling needs, is that cloud computing environments are not necessarily PCI-friendly. But there may be a solution that allows an application to maintain its PCI compliance and still make use of cloud computing environments for seasonal scaling efficiency.
Cloud bursting, a.k.a. overdraft protection, is a great concept, but in some situations, such as those involving PCI compliance, it can be difficult if not impossible to actually implement. The financial advantages of cloud bursting for organizations that require additional capacity only on a seasonal basis are well understood, but the regulatory issues surrounding such implementations hinder its adoption as a cost-effective way to increase capacity for short periods of time.
But what if we architected a solution based on cloud bursting that offers the same advantages without compromising compliance with regulations and guidelines like PCI DSS?
REVERSE CLOUD BURSTING AND CLOUD BALANCING
The ability to implement such an architecture would require that the PCI-compliant portions of a web application be separated (somehow, perhaps as SOA services or independently accessible RESTful services) from the rest of the application.
The non-PCI-related portions of the application are cloned and deployed in a cloud environment; the PCI-related portions stay right where they are. Because the PCI-related portions are likely less heavily stressed, even by seasonal spikes in demand, it is assumed that the available corporate compute resources will suffice to maintain availability during a spike, mainly because the PCI-compliant components have all local resources at their disposal. It is also possible – and likely – that the PCI-related portions of the application will not consume all available corporate compute resources, which means there is some capacity available to essentially reverse cloud burst into the corporate resources if necessary.
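To make that separation concrete, here is a minimal sketch, in Python purely for illustration, of the kind of routing map a proxy or application delivery tier might consult to keep PCI-scoped components on-premise while the cloned, non-PCI components are served from the cloud. All hostnames, paths, and names here are hypothetical assumptions, not a reference implementation.

# Keep PCI-scoped services in the corporate data center; serve the
# cloned non-PCI services from the cloud. Hostnames/paths are hypothetical.
CORPORATE = "https://dc.example.com"     # PCI-compliant corporate environment
CLOUD = "https://cloud.example.com"      # cloned non-PCI environment

ROUTES = {
    "/checkout/payment": CORPORATE,  # PCI-scoped: cardholder data stays put
    "/catalog": CLOUD,               # non-PCI: safe to clone into the cloud
    "/search": CLOUD,
    "/account/profile": CLOUD,
}

def upstream_for(path: str) -> str:
    """Pick the environment that should serve a given request path."""
    for prefix, origin in ROUTES.items():
        if path.startswith(prefix):
            return origin
    return CLOUD  # default: non-PCI traffic goes to the cloud clone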
In a very simple scenario, the global server load balancer basically "reverses" the priority of the data centers when answering queries during the time period in which you expect to see spikes. All application requests are directed to the cloud computing provider's instance first, except for queries that require the PCI-compliant portion, which are always directed to the corporate (perhaps private cloud) instance. This is basically a "cloud balancing" scenario: distributing application requests intelligently between two cloud computing environments.
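Here is an equally minimal sketch of that priority reversal expressed as GSLB-style resolution logic; the hostnames, addresses, and spike window are assumptions for illustration only.

from datetime import date

SPIKE_WINDOW = (date(2010, 11, 15), date(2011, 1, 15))  # e.g., holiday season
CORPORATE_IP = "203.0.113.10"   # corporate (perhaps private cloud) instance
CLOUD_IP = "198.51.100.20"      # cloud provider's instance

def resolve(hostname: str, today: date) -> list[str]:
    """Return addresses in preference order for a GSLB query."""
    if hostname == "payments.example.com":      # PCI-scoped service
        return [CORPORATE_IP]                   # always answered on-premise
    in_spike = SPIKE_WINDOW[0] <= today <= SPIKE_WINDOW[1]
    # During the spike window, the usual data center priority is reversed.
    return [CLOUD_IP, CORPORATE_IP] if in_spike else [CORPORATE_IP, CLOUD_IP]

Outside the window, resolution reverts to corporate-first, so the cloud clone carries traffic only when the seasonal spike justifies the expense.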
The variations on this theme can become more complex and factor in many more variables. For example, you could set a capacity threshold on the corporate data center instance that leaves enough corporate compute resources available to handle the highest expected transaction rate, and only burst into the cloud if corporate capacity reaches that level. That's traditional "cloud bursting." You could also reverse the burst by dipping into corporate compute resources based on thresholds designated at the cloud computing provider's instance, to minimize the financial impact of utilizing a cloud computing provider as the primary delivery mechanism for the application. That would be "reverse cloud bursting." The key is to ensure that wherever the compute resources for the primary application components come from, they do not negatively impact the availability and performance of the PCI-compliant processes executing in the corporate cloud environment.
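Stated as policy, the two variations differ only in which environment is primary and which threshold triggers the shift. A minimal sketch, assuming utilization figures come from your monitoring systems and using a hypothetical threshold:

def classic_burst_target(corporate_util: float, threshold: float = 0.8) -> str:
    """Traditional cloud bursting: corporate is primary; burst into the
    cloud only when corporate utilization reaches the designated threshold."""
    return "cloud" if corporate_util >= threshold else "corporate"

def reverse_burst_target(cloud_util: float, threshold: float = 0.8) -> str:
    """Reverse cloud bursting: the cloud instance is primary; dip into
    spare corporate capacity when the cloud instance nears its threshold."""
    return "corporate" if cloud_util >= threshold else "cloud"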
THE KEY IS FLEXIBILITY IN ARCHITECTURE
Without the flexibility to deploy individual components of an application (a.k.a. services) into different environments, these scenarios simply don't work. Applications built on tightly coupled frameworks and principles will never truly be capable of taking advantage of cloud balancing, cloud bursting, or any architecture that relies upon specific components residing in a specific location because of regulatory issues or other concerns.
This is one of the core principles of SOA – not only the separation of interface from implementation, but location agnosticism. There are many ways to achieve this kind of location agnosticism, including on-demand generation of WSDL for client consumption that specifies endpoint location based on the context of the initial request, and the use of global server load balancing combined with context-aware application delivery. What's vitally important, though, is the flexibility of the underlying application architecture and the ability to separate components in a way that makes it possible to distribute them across multiple locations in the first place.
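As an illustration of the first technique, here is a minimal sketch of on-demand descriptor generation, in which the endpoint written into the WSDL depends on the context of the initial request. The template, selection rule, and endpoints are assumptions for illustration, not any particular product's behavior.

WSDL_TEMPLATE = """<wsdl:service name="OrderService">
  <wsdl:port name="OrderPort" binding="tns:OrderBinding">
    <soap:address location="{endpoint}"/>
  </wsdl:port>
</wsdl:service>"""

def wsdl_for(requires_pci: bool, in_spike_window: bool) -> str:
    """Generate a service descriptor whose endpoint reflects where the
    service should be consumed from right now."""
    if requires_pci:
        endpoint = "https://dc.example.com/services/order"      # corporate
    elif in_spike_window:
        endpoint = "https://cloud.example.com/services/order"   # cloud clone
    else:
        endpoint = "https://dc.example.com/services/order"
    return WSDL_TEMPLATE.format(endpoint=endpoint)

The same decision could just as easily live in the GSLB tier; what matters is that the client is handed a location at request time rather than having one compiled in.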
If that means SOA is the answer, then SOA is the answer. If that means a well-designed set of RESTful components, so be it. Whatever fits your organizational development and architectural practices is the right answer, as long as the answer includes location agnosticism and loosely coupled applications. Once you've got that down, the possibilities for how to leverage external and internal cloud computing environments are limited only by your imagination and, as always, your budget.