Depending on the professional circles in which you travel, cloud computing can mean different things to different people. Generally, though, most people (knowingly or not) agree on the NIST definitions, which describe essentially three distinct variants (I've excluded their Community Cloud definition):
Private cloud. The cloud infrastructure is provisioned for exclusive use by a single organization comprising multiple consumers (e.g., business units). It may be owned, managed, and operated by the organization, a third party, or some combination of them, and it may exist on or off premises.
Public cloud. The cloud infrastructure is provisioned for open use by the general public. It may be owned, managed, and operated by a business, academic, or government organization, or some combination of them. It exists on the premises of the cloud provider.
Hybrid cloud. The cloud infrastructure is a composition of two or more distinct cloud infrastructures (private, community, or public) that remain unique entities, but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load balancing between clouds).
I've mentioned these definitions not to offer new information, but to establish a common baseline, because in recent engagements I have noticed that these seemingly mature definitions (NIST updated them over a year ago) are still not fully understood. Well, perhaps that's not 100% accurate. What I'm seeing and hearing is that while these definitions may be understood (if not fully), their applications are not, and as a consequence the specific business benefits are not being taken advantage of. At the root of it all, I think, is a misunderstanding of virtualization vs. cloud computing.
Historical data center virtualization projects have reduced both variable and fixed costs (by shrinking the cost of physical core infrastructure); however, the virtualized computing model (NOT the cloud model) is still based upon long-term fixed costs. These costs don't decrease if the business model changes. Cloud computing obviously leverages virtualization technologies, but it can also expand and contract variable costs, which naturally builds flexibility into an operating budget, and that is one reason banks and financial institutions are moving to the cloud. Specifically, these verticals are looking for agility from IT: the ability to quickly scale up or scale down (read: offer bigger and better client-facing services) and then allocate costs appropriately.
Cloud computing offers the business (i.e., the CIO, CTO and CFO) the lever they need, and a Hybrid Cloud deployment may be the solution of choice for the banking industry, given its many financial IT regulations. Migrating all services to a fully public cloud offering is not likely to happen, but a hybrid deployment may be just what the IT doctor ordered. With the soon-to-be-released Windows Server 2012 (as a Cloud OS) and the Azure offering, coupled with the already solid, mature and cross-platform System Center solution suite, Microsoft has quietly built the beginning of what could be one of the most complete (and potentially vendor-agnostic) solutions on the market that does not require an IT department to build its own tools. From a Windows Server 2012 perspective alone, it meets all of the top-level requirements:
- Scalable and Elastic: Hyper-V 3.0 lets an Enterprise scale from one to thousands of virtual machines as workload or business demands dictate. It supports up to 320 logical processors and 4 TB of RAM per server. The new OS can run large virtual machines with up to 64 virtual processors and 1 TB of memory per VM, and can scale to 8,000 VMs per cluster.
- Shared Resources: Windows Server 2012 is architected for multi-tenancy, which is critical to ensure the workloads of a given group, organizational unit or customer don't impact others. Combined with System Center 2012 SP1 and software-defined networking, an Enterprise can dynamically provision isolated virtual networks running on the same physical fabric.
- Always-On: Live Migration provides VM mobility, facilitating the movement of virtual machines from one physical server to another, locally or over a wide area network. This cluster-aware feature is designed to provide continuous availability during patches, upgrades or failures.
- Automation and Self-service: Users in departments can self-provision compute and storage resources. Windows Server 2012 enables automation with over 2,400 new PowerShell cmdlets, designed to eliminate manual tasks and allow IT to manage large numbers of servers. Combined with System Center 2012, Windows Server 2012 offers automation via user-defined policies.
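To give a feel for that last point, here is a rough sketch of what provisioning and mobility look like with the Hyper-V PowerShell module on a Windows Server 2012 host. The VM name, disk path, sizes and destination host are illustrative placeholders, not prescribed values:

```powershell
# Provision a new VM with 2 GB of startup memory and a fresh 60 GB virtual disk
New-VM -Name "FinSvc01" -MemoryStartupBytes 2GB `
       -NewVHDPath "D:\VHDs\FinSvc01.vhdx" -NewVHDSizeBytes 60GB

# Assign 4 virtual processors, then start the VM
Set-VM -Name "FinSvc01" -ProcessorCount 4
Start-VM -Name "FinSvc01"

# Live-migrate the running VM to another node without taking it offline
Move-VM -Name "FinSvc01" -DestinationHost "HyperVNode02"
```

Scripts like this are exactly what makes self-service possible: wrap them in user-defined policies via System Center, and departments can request resources without filing a ticket for each VM.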
So if we put this all together: industry verticals that want to leverage the possibilities of a cloud-based architecture, but cannot holistically leverage public clouds, can now much more easily begin building an environment on Windows Server 2012 that will allow them to build a legitimate private cloud architecture, while being able to burst, expand, grow or migrate acceptable workloads into a public cloud and take advantage of the benefits of that environment. All while maintaining a common infrastructure and management platform, and all while being able to manage disparate virtualization environments. Not a bad set of options when you as an organization are looking to recapture costs from within IT, but need IT to be the linchpin of your organization's innovation engine.