Archive

Archive for May, 2011

#Microsoft formalizes #cloud computing enterprise #licensing + #SPLA Updates

May 31, 2011 Comments off

Below is taken from SearchCloudComputing – read more

Nearly two years after enterprise customers were given sanctioned Windows Server virtual machines to run in Amazon Web Services, Rackspace Cloud and other services, Microsoft has adjusted its enterprise volume licensing to allow for “license mobility.”

We’re trying to do this in a way that’s very straightforward to let people know where they are.

….When did Microsoft ever do anything that was straightforward with regard to licensing? ;o)


DC & Virtualisation Titbits…NAS vs SAN, FCoE vs iSCSI? Don't Believe the Hype #Cisco #EMC #NetApp

May 27, 2011 Comments off

Some useful titbits here taken from: http://viewer.media.bitpipe.com/1127846808_124/1301682190_579/Evolution-EU-1_final.pdf

Read the full articles in the PDF

Titbit #1

NAS versus SAN for data center virtualisation storage

There are two major approaches to network storage: network attached storage (NAS) and storage area network (SAN). They vary in both network architecture and how each presents itself to the network client. NAS devices leverage the existing IP network and deliver file-layer access.

NAS appliances are optimised for sharing files across the network because they are nearly identical to a file server.

SAN technologies, including Fibre Channel (FC) and iSCSI, deliver block-layer access, forgoing the file system
abstractions and appearing to the client as essentially an unformatted hard disk.

FC operates on a dedicated network, requiring its own FC switch and host bus adapters in each server.

An emerging standard, Fibre Channel over Ethernet (FCoE), collapses the storage and IP network onto a single converged switch, but still requires a specialised converged networking adapter in each server.

SAN solutions have a performance advantage over NAS devices, but at the cost of some contention issues.
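
The file-versus-block distinction above can be shown in miniature. A NAS client asks for a named file and lets the storage side handle layout and metadata; a SAN client reads raw bytes at an offset and must impose its own structure. A hypothetical Python sketch (an ordinary local file stands in for both a NAS export and a SAN LUN):

```python
import os
import tempfile

# Stand-in for shared storage: one local file plays the role of
# a NAS export (file-layer access) and a SAN LUN (block-layer access).
path = os.path.join(tempfile.mkdtemp(), "demo.bin")

# NAS-style (file layer): the client names a file; the storage side's
# filesystem handles layout, metadata and sharing.
with open(path, "w") as f:
    f.write("hello from the NAS side")

# SAN-style (block layer): the client sees "an unformatted hard disk"
# and addresses raw byte/block offsets itself -- no filenames, no
# filesystem semantics unless the client supplies them.
BLOCK_SIZE = 512
with open(path, "rb") as disk:
    disk.seek(0)                    # block 0, offset 0
    block0 = disk.read(BLOCK_SIZE)  # raw bytes, structure unknown

print(block0[:5])  # b'hello' -- meaningful only because we know the layout
```

This is why a SAN client typically formats the LUN with its own filesystem before use, while a NAS share is usable for file sharing as delivered.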

Titbit #2

FCoE vs iSCSI? How about the one that’s ready?

Vendors can push Fibre Channel over Ethernet (FCoE) all they want, but the technology is simply not ready for deployment, argues Stephen Foskett, Gestalt IT community organiser. But iSCSI is another story. “I am not a big fan of FCoE yet. The data centre bridging (DCB) extensions are coming … but we don’t yet have an end-to-end FCoE solution. We don’t have the DCB components standardised yet,” Foskett said.

What does Foskett think it will take to make FCoE work? “It’ll take a complete end-to-end network. I understand the incremental approach is probably now what most people are going to do. It’s not like they’re going to forklift everything and get a new storage array and get a new greenfield system, but right now you can’t do that,” Foskett said.

iSCSI, on the other hand, works over 10 Gigabit Ethernet today and lends itself to a total solution. So why aren’t vendors selling it? “iSCSI doesn’t give vendors a unique point of entry. They can’t say we’ve got iSCSI, so that makes us exceptional. But with FCoE they can say, ‘We are the masters of Fibre Channel’ or ‘We are the masters of Ethernet, so you can trust us.’ iSCSI works too well for anybody to have a competitive advantage,” Foskett said.

Before embarking on an FCoE implementation, ask:

•Will the storage team or the networking team own the infrastructure? If co-managed, who has the deciding vote?
•Which department will pay for it? How will chargeback be calculated and future growth determined?
•Will the teams be integrated? Typically, the networking team is responsible for IP switches, while the storage team is responsible for Fibre Channel.
•Who will own day-to-day operational issues? If a decision needs to be made regarding whether more bandwidth is given to local area network (LAN) or storage area network (SAN) traffic, who makes the call? Will companies have to create a single, integrated connectivity group?

Titbit #3

Choosing a convergence technology….FCoE or iSCSI? Does it Matter?

FCoE gets all the data centre network convergence hype, but many industry veterans say iSCSI is another viable option. As an IP-based storage networking protocol, iSCSI can run natively over an Ethernet network. Most enterprises that use iSCSI today run the storage protocol over their own separate networks because convergence wasn’t an option on Gigabit Ethernet. But with 10 GbE switches becoming more affordable, iSCSI-based convergence is becoming more of a reality.

“Certainly iSCSI is the easier transition [compared to FCoE],” said storage blogger and IT consultant Stephen Foskett. “With iSCSI you don’t have to have data center bridging, new NICs, new cables or new switches.”

Ultimately the existing infrastructure and the storage demands of an enterprise will govern the choice of a network convergence path. “There are very few times where I will steer a customer down an FCoE route if they don’t already have a Fibre Channel investment,” said Onisick. “If they have a need for very high performance and very low throughput block data, FCoE is a great way to do it. If they can sustain a little more latency, iSCSI is fantastic. And if they have no need for block data, then NAS [network-attached storage] and NFS [network file system] is a fantastic option.”

For Ramsey, iSCSI was never a viable option because of Wellmont’s high-performance requirements. “We played around with iSCSI, but that was still going to run over TCP, and you’re still going to contend with buffering, flow control, windowing or packet drops and queuing, so we stayed away from it. What FCoE brings to the table: it doesn’t run over Layer 3. It’s an encapsulation of your Fibre Channel packet inside a native Layer 2 frame, and all we’re doing is transporting that between the server and up to the Nexus 2232 and the Nexus 5020.”
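
Ramsey’s Layer 2 point can be made concrete. An FCoE frame is a plain Ethernet frame (EtherType 0x8906, per the T11 FC-BB-5 standard) carrying the encapsulated Fibre Channel frame directly, with no IP or TCP layer at all, whereas iSCSI rides on TCP/IP. A simplified Python sketch of the two framings (header layout abbreviated; real frames carry additional FCoE and FC header fields, and the IP/TCP bytes here are placeholder stubs):

```python
import struct

ETHERTYPE_FCOE = 0x8906  # FCoE, per FC-BB-5
ETHERTYPE_IPV4 = 0x0800

def ethernet_header(dst_mac: bytes, src_mac: bytes, ethertype: int) -> bytes:
    """14-byte Ethernet II header: dst MAC, src MAC, EtherType."""
    return struct.pack("!6s6sH", dst_mac, src_mac, ethertype)

dst, src = b"\xaa" * 6, b"\xbb" * 6

# FCoE: the Fibre Channel frame sits directly inside a Layer 2 frame.
fc_payload = b"<encapsulated Fibre Channel frame>"
fcoe_frame = ethernet_header(dst, src, ETHERTYPE_FCOE) + fc_payload

# iSCSI: SCSI commands ride inside TCP inside IP inside Ethernet --
# routable at Layer 3, but subject to TCP buffering and flow control,
# which is exactly what Ramsey wanted to avoid.
ip_tcp_stub = b"<IP header><TCP header>"  # placeholder, not real headers
iscsi_frame = ethernet_header(dst, src, ETHERTYPE_IPV4) + ip_tcp_stub + b"<iSCSI PDU>"

# No IP/TCP layer in the FCoE frame: it cannot be routed, and it needs
# lossless (DCB) Ethernet underneath instead of TCP retransmission.
assert b"IP header" not in fcoe_frame
```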

Network Convergence Beyond the Rack

…The most bang for the buck right now is to simplify the rack environment…

Although Cisco and other vendors will begin delivery of end-to-end FCoE switching capabilities this year, with technologies like Shortest Path Bridging and Transparent Interconnection of Lots of Links (TRILL), Ramsey doesn’t see moving beyond rack-level network convergence within the next five years.
“What you’re talking about is multi-hop FCoE, and Cisco is still working on fleshing that out. The most bang for the buck right now is to simplify the rack environment. If you want to go all FCoE, all your EMC stuff is going to have to be retrofitted with FCoE 10 Gigabit. And at that point you could probably get rid of your Fibre Channel. Maybe in five years we’ll look at that, but that’s not really going to buy us anything right now. We’re just not pushing into the type of bandwidth where we would need dedicated 10 Gigabit to the storage. We don’t need that much data. Where FCoE helps us is simplification inside the rack, making it faster, cheaper and smaller.”

Cloud is also not ready to look past the rack until he gets a better handle on management of converged networks. “[Brocade] just announced a lot of this stuff, and we want to test out the management system. Once we prove that out, we’ll be looking to go further [with convergence]. We are trying to figure out the total cost of ownership.”….

Read the full articles in the PDF: http://viewer.media.bitpipe.com/1127846808_124/1301682190_579/Evolution-EU-1_final.pdf

Cloudy with a Chance of….The great #Cloud bait and switch: Agility instead of Cost

May 27, 2011 Comments off

Read the full SearchCloudComputing.com article here: The great cloud bait and switch: Agility instead of cost Credit to Carl Brooks

Cloudy with a Chance of ….

The magic bottom-line benefits aren’t the rallying cry anymore….

The good ol’ days of cloud

Remember when “the cloud” was all about dumping those pesky capital investment dollars for slimmer, trimmer opex costs? It was the banner under which rode hosts of cloud providers, startups and independent software vendors. IT shops were supposed to drop everything and flock to the idea of never owning infrastructure again, or turning their crummy old data centers into sparkly new systems without rebuilding everything from scratch….

…As enterprise cloud adoption steadily mounts, for both public cloud services and the private cloud paradigm, it’s becoming clear that vendors are trying to change their tune to better suit their ambitions. The new byword is “agility”; the sell-siders would fain have you forget about that earlier golden calf, “operating expenses (opex) versus capital expenses (capex)”….

At Interop last week, there was a two-day Enterprise Cloud Summit that trotted out every cloud computing trope and moldy old straw man from the last three years…except that one. Based on the vendor panels, which these days at least have company names you’ve heard of instead of flavor-of-the-week startups, the magic bottom-line benefits aren’t the rallying cry anymore. And Microsoft, at Interop and again this week at TechEd, put cloud in practically every tag line but accompanied it with “agility, agility, agility.”

“Use Azure to be more agile, use Hyper-V to be more agile,” and so on.

Agility, Agility, Agility

….like all marketing, there’s some truth to the fact that Agility — which we’ll describe as the ability to deliver, change and improve IT services faster than otherwise — is as important to enterprises as the operational efficiency to be gained from running a cloud computing model.

By and large, users I talk to aren’t terribly confused: Cloud computing gives them the ability to do more, with the same amount of budget, than they could before. That’s why it’s popular.

But that’s pure poison to the sales and marketing departments at the likes of IBM, Microsoft, and so on. They’ll end up selling you the same amount of stuff and you’ll do 5 times what you used to, or worse, you’ll buy less stuff…

Why aren’t all enterprises building private #clouds? #searchcloudcomputing.com

May 27, 2011 Comments off

Why aren’t all enterprises building private clouds?

…In part one of this look at private cloud, we examined what it takes to build an in-house cloud service. In part two, we’ll find out why all enterprises aren’t hopping onboard the private cloud bandwagon.

Given the hype and, possibly, executive pressure to implement a private cloud infrastructure, many companies have struggled to justify the massive organizational change and expense. While many IT managers believe that their shops aren’t yet ready, their objections may fall on deaf ears among executives who care only about the bottom line. Still, if you are an IT decision maker, skepticism about cloud computing is not only warranted; it’s essential. The size of your company and other key factors — such as the degree of virtualization in your IT shop — should dictate whether your company is ready for the move….

…According to the Microsoft white paper “The Economics of the Cloud,” for small and medium-sized organizations with fewer than 100 servers, private clouds are prohibitively expensive compared with public cloud services. The only way for these small organizations or departments to share the benefits of at-scale cloud computing is by moving to a public cloud model, the white paper argues. For large organizations with an installed base of approximately 1,000 servers, however, private clouds are feasible. But they still come at a significant cost of about 10 times the price of a public cloud for the same unit of service given the combined effect of scale, demand diversification and multitenancy….

Categories: Virtualisation

#Cisco Data Center #Virtualisation: Enhanced Secure Multi-Tenancy Design Guide #VMWare #IAAS #NetAPP #UCS

May 26, 2011 Comments off

Goal of This Document:
Cisco®, VMware®, and NetApp® have jointly designed a best-in-breed Enhanced Secure Multi-Tenancy (ESMT) Architecture and have validated this design in a lab environment. This document describes the design of and the rationale behind the Enhanced Secure Multi-Tenancy Architecture. The design surfaces many issues that must be addressed prior to deployment, as no two environments are alike. This document also discusses the problems that this architecture solves and the four pillars of an Enhanced Secure Multi-Tenancy environment.

Audience :
The target audience for this document includes, but is not limited to, sales engineers, field consultants, professional services, IT managers, partner engineering, and customers who wish to deploy an Enhanced Secure Multi-Tenancy (ESMT) environment consisting of best-of-breed products from Cisco, NetApp, and VMware.

Objectives:
This document is intended to articulate the design considerations and validation efforts required to design, deploy, and back up an Enhanced Secure Multi-Tenancy virtual IT-as-a-service.

Foundational Components:
Each implementation of an Enhanced Secure Multi-Tenancy (ESMT) environment will most likely be different due to the dynamic nature and flexibility provided within the architecture. For this reason this document should be viewed as a reference for designing a customized solution based on specific tenant requirements. The following outlines the foundational components that are required to configure a base Enhanced Secure Multi-Tenancy environment. Add additional features to these foundational components to build a customized environment designed for specific tenant requirements. It is important to note that this document not only outlines the design considerations around these foundational components, but also includes considerations for the additional features that can be leveraged in customizing an Enhanced Secure Multi-Tenancy environment.

The Enhanced Secure Multi-Tenancy foundational components include:

•Cisco Nexus® data center switches
•Cisco Unified Computing System
•Cisco Nexus 1000V Distributed Virtual Switch
•NetApp Data ONTAP®
•VMware vSphere™
•VMware vCenter™ Server
•VMware vShield™

Read the full Reference Architecture Design Guide here

Download PDF from my public SkyDrive Here

Airing Private #Cloud’s Dirty Laundry… An interesting Laundry Analogy

May 25, 2011 Comments off

An interesting take on Private Cloud through a Laundry Analogy…

Read: Airing Private Cloud’s Dirty Laundry

It’s 10:13pm on a Friday night and as the highlight of my day begrudgingly reveals itself, I discover in preparation for the inevitable appearance of tomorrow, that I am once again out of clean underwear.

There are many potential remedies for this situation…

Define the #Cloud Blog Post: Is Private Cloud a Unicorn?

May 25, 2011 Comments off

Below is taken from Joe Onisick’s Blog Post – Is Private Cloud a Unicorn?

With all of the discussion, adoption, and expansion of cloud offerings there is a constant debate that continues to rear its head: Public vs. Private or more bluntly ‘Is there even such thing as a private cloud?’ You typically have two sides of this debate coming from two different camps:

Public Cloud Proponents: There is no such thing as private cloud, and/or you won’t gain the economies of scale and benefits of a cloud model when building it privately.

Private Cloud Proponents: Building a cloud IT delivery model in-house provides greater resource control, accountability, security and can leverage existing infrastructure investment.

Before we begin let’s start with the basics, The National Institute of Standards and Technology (NIST) definition of cloud:

Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that
can be rapidly provisioned and released with minimal management effort or service provider interaction. This cloud model promotes availability and is composed of five essential characteristics, three service models, and four deployment models.

Essential Characteristics:

On-demand self-service: A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with each service’s provider.

Broad network access: Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and PDAs).

Resource pooling: The provider’s computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. There is a sense of location independence in that the customer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter). Examples of resources include storage, processing, memory, network bandwidth, and virtual machines.

Rapid elasticity: Capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out, and rapidly released to quickly scale in. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time.

Measured Service: Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service.

Service Models:

Cloud Software as a Service (SaaS): The capability provided to the consumer is to use the provider’s applications running on a cloud infrastructure. The applications are accessible from various client devices through a thin client interface such as a web browser (e.g., web-based email). The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.

Cloud Platform as a Service (PaaS): The capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, or storage, but has control over the deployed applications and possibly application hosting environment configurations.

Cloud Infrastructure as a Service (IaaS): The capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, deployed applications, and possibly limited control of select networking components (e.g., host firewalls).

Deployment Models:

Private cloud: The cloud infrastructure is operated solely for an organization. It may be managed by the organization or a third party and may exist on premise or off premise.

Community cloud: The cloud infrastructure is shared by several organizations and supports a
specific community that has shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be managed by the organizations or a third party and may exist on premise or off premise.

Public cloud: The cloud infrastructure is made available to the general public or a large industry group and is owned by an organization selling cloud services.

Hybrid cloud: The cloud infrastructure is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load balancing between clouds).

http://csrc.nist.gov/publications/drafts/800-145/Draft-SP-800-145_cloud-definition.pdf

Obviously NIST believes there is a place for private cloud, as do several others, so where does the issue arise?

The argument against private cloud:

Public cloud proponents believe in another defining characteristic of cloud computing: utility pricing. They believe that the ‘pay for only what you use’ component of public cloud should be required for all clouds, which would negate the concept of private cloud, where the infrastructure is paid for up front and has a cost whether or not it’s used. The driver for this is cloud’s benefit of moving CapEx (capital expenditure) to OpEx (operating expenditure). Because you aren’t buying infrastructure, you have no upfront costs and pay as you go for use. This has obvious advantages, and this type of utility model makes sense (think power grid in big-picture terms: you have metered use).
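
The CapEx-versus-OpEx argument above is, at bottom, a break-even calculation. A hedged sketch in Python with invented figures (the prices and lifetimes below are illustrative assumptions, not vendor numbers):

```python
# Hypothetical figures, for illustration only.
SERVER_CAPEX = 5000.0        # upfront cost of one owned server (USD)
SERVER_LIFE_YEARS = 3        # amortisation period
CLOUD_RATE_PER_HOUR = 0.30   # pay-as-you-go rate for a comparable VM

HOURS_PER_YEAR = 365 * 24    # 8760

def owned_cost(years: float) -> float:
    """CapEx model: the amortised cost accrues whether or not the server is used."""
    return SERVER_CAPEX * (years / SERVER_LIFE_YEARS)

def cloud_cost(hours_used: float) -> float:
    """Utility (OpEx) model: pay only for metered use."""
    return CLOUD_RATE_PER_HOUR * hours_used

# A server that is busy only 10% of the time for one year:
utilisation = 0.10
print(owned_cost(1))                             # ~1666.67 amortised, used or not
print(cloud_cost(HOURS_PER_YEAR * utilisation))  # 262.80, only for hours consumed
```

Under these assumed numbers the utility model wins at low utilisation and the owned infrastructure wins as utilisation approaches 100%, which is exactly the shape of the argument both camps are making.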

So public cloud it is?

Not so fast! There are several key concerns for public cloud that may drive the decision to utilize a private cloud:

Data Security – Will my data be secure/can I entrust it to another entity? The best example of this would be the Department of Defense (DoD) and intelligence community. That level of sensitive data cannot be entrusted to a private 3rd party.

Performance – Will my business applications have the same level of performance existing in a public offsite cloud?

Up-time – On average a properly designed enterprise data center provides 99.99% (4×9’s) uptime or above, whereas a public cloud is typically guaranteed for 3 to 4×9’s. This means relying on a single public cloud infrastructure will most likely provide less availability for enterprise customers. To put that in perspective, 3×9’s is 8.76 hours of downtime per year, whereas 4×9’s is only 52.56 minutes. An enterprise data center operating at 5×9’s experiences only 5.26 minutes of downtime per year.
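
The downtime figures quoted above fall out of simple arithmetic on the availability percentage, e.g.:

```python
# Downtime per year implied by an availability figure ("the nines").
HOURS_PER_YEAR = 365 * 24  # 8760

def downtime_hours(availability_pct: float) -> float:
    """Hours of downtime per year at the given availability percentage."""
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

print(downtime_hours(99.9))         # ~8.76 hours   (3 nines)
print(downtime_hours(99.99) * 60)   # ~52.56 minutes (4 nines)
print(downtime_hours(99.999) * 60)  # ~5.26 minutes  (5 nines)
```

Each extra nine cuts the downtime budget by a factor of ten, which is why the gap between an enterprise data center SLA and a typical public cloud SLA matters more than the decimal places suggest.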

Exit/Migration strategy – In the event it were necessary, how would the applications and data be moved back in-house or to another cloud?

These factors must be considered when making a decision to utilize a public cloud. For most organizations they’re typically not roadblocks, but speed bumps that must be navigated carefully.

So which is it?

That question will be answered differently for every organization. It’s based on what you want to do and how you want to do it. Chris Hoff uses laundry to explain this: http://www.rationalsurvivability.com/blog/?p=2384

Additionally, cost will be a major factor; Wikibon has an excellent post arguing that private cloud is more cost effective for organizations over $1 billion: http://wikibon.org/wiki/v/Private_Cloud_is_more_Cost_Effective_than_Public_Cloud_for_Organizations_over_$1B.

Finally, in many cases a hybrid model may work best, either as a permanent solution or as a migration path.

Summary:

Private cloud is no unicorn; it is here to stay. For some it will be a stepping stone to a fully public IT model, and for others it will be the solution. Organizations like the federal government have the data security needs to require a private cloud and the size/scale to gain the benefits of that model. Other large organizations may find that private makes more monetary sense. Availability, security, compliance, etc. may drive other companies to look at a private cloud model.

Cloud is about cost, but it’s more importantly about accelerating the business. When IT can respond immediately to new demands, the business can execute more quickly. Both public and private models provide this benefit; each organization will have to decide for itself which model fits its demands.