SPLA Reseller Roadshow Training #SPLA #Hosters #ServiceProviders #Cloud

Register Here: http://bit.ly/14Y3qct

Introduction:
The Microsoft Services Provider License Agreement (SPLA) is designed for organizations that want to offer hosted software and services to end customers, such as Web hosting, hosted applications, messaging, collaboration, and platform infrastructure. SPLA partners can deliver customized services with a flexible cost structure: no start-up costs, no monthly sales minimums, and no required term of commitment.

Microsoft now has over 22,000 service providers enrolled in the SPLA Program. They have driven double-digit growth year over year for the past three years, and in FY13 the global SPLA business surpassed the $1bn revenue mark.

Course Description:
This level 300 SPLA Reseller readiness training is specifically targeted at SPLA Resellers who want to have a detailed understanding of SPLA licensing. This training session consists of a series of short presentations followed by case work. Complex scenarios are covered such as Windows Server virtualization and SQL Server licensing.

Date & Time
4th September 2013
09:00 GMT – 16:00 GMT

Logistics
Microsoft Ireland (Sales, Marketing and Services Group)
Building 3, Training room 5.42
Carmanhall Road
Sandyford Industrial Estate
Dublin 18

HP #FlexFabric Reference Architecture – Applying HP Converged Infrastructure to data center networks #HPConverge

HP FlexFabric Reference Architecture – Applying HP Converged Infrastructure to data center networks

The technical white paper can be downloaded here: http://bit.ly/ZsXvlK

Courtesy of HP Converged Infrastructure

 

HP ConvergedNetwork WhitePaper

HP ConvergedNetwork WhitePaper TOC

Overview of the Guide

This guide is intended for technology decision-makers, solution architects, and other experts tasked with improving data center networking. It can serve as a baseline for network planning and design projects.

It is said, “You cannot chart your course without first knowing whence you came.” This also applies to data center architecture. However, many technical guides take the opposite approach. They attempt to sway the reader towards specific technical directions based on the merits of a current technology or standard. That approach often loses the reader because it does not provide a context for why the new technical approach was developed in the first place.

This document will frequently reference technology trends in the data center that have been, and are being, driven by virtualization and standards. It will also introduce issues that confront data center architects in this fast-paced, results-driven, and security-minded industry.

Technical documents often promote a vendor’s products or vision. This document takes a slightly different approach. Rather than put HP’s vision for converged network infrastructure first, this guide instead presents the building blocks for that vision. It does this by first identifying the most important IT trend today—virtualization of resources at all levels. It then moves forward by introducing HP-supported technologies that enhance virtualized computer networks. Finally, it provides FlexFabric Reference Architecture examples for different types of virtualized server deployments using a layered approach.

The FlexFabric Reference Architecture guide is less a discussion of specific HP equipment and more an overall focus on two things: virtualization, and the HP Networking technologies that support virtualization. It provides another level of detail to complement the HP Converged Infrastructure Reference Architecture Solution Block Design Guide: http://h20195.www2.hp.com/v2/GetPDF.aspx/4AA2-6453ENW.pdf

HP believes simplification is the overriding key to success in networks supporting virtualization. This document provides guidance on simplifying networks for virtualized deployments without sacrificing performance or deployment flexibility.

The major concept areas that will be covered are:

  • Virtual server networking
  • Securing the virtual edge
  • Managing the virtual edge
  • Converged network infrastructure

This approach allows data center architects and IT teams to develop new and more flexible data center models and methodologies. By doing so, IT can meet new demands head-on, rather than forcing businesses to adapt to technology limitations.

…Converged network infrastructure: unifying data and storage networks

Convergence is a technical term historically used to express the combining of voice and data onto the same network fabric. Now expressed as a converged network infrastructure, it encompasses the sharing of network resources between data and storage networks. This trend constitutes a move towards a unification of data and storage networks.

Network technologies like Fibre Channel, used to connect storage resources to computers, differ substantially from the network technologies used to connect computer networks. Although high in performance, these network types create two dissimilar data center networks (LAN/WAN and storage), which increases the number of cables and the management overhead.

Technologies such as blade servers have addressed this challenge by drastically reducing the number of interconnections. Blade servers have simplified the network by reducing cables and Ethernet ports by over 75 percent. Converged network infrastructure can reduce data center complexity by an additional 50 percent, using technologies like Fibre Channel over Ethernet (FCoE), and more efficient technologies like data center bridging (DCB), also known as converged enhanced Ethernet (CEE).
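As a rough back-of-the-envelope illustration of how those reductions compound, here is a quick sketch in Python; the 75 percent and 50 percent figures come from the paragraph above, while the starting cable count is an arbitrary assumption.

```python
# Back-of-the-envelope illustration of compounding cable reduction.
# The ~75% (blade servers) and ~50% (converged fabric) figures come from the
# text above; the starting count of 96 cables per rack is an arbitrary example.

def remaining(cables: int, reduction_pct: float) -> int:
    """Return the cable count left after a percentage reduction."""
    return round(cables * (1 - reduction_pct / 100))

rack_cables = 96                                  # hypothetical rack-and-stack baseline
after_blades = remaining(rack_cables, 75)         # blade enclosures: ~75% fewer cables/ports
after_convergence = remaining(after_blades, 50)   # FCoE/DCB convergence: a further ~50% reduction

print(f"Rack servers:     {rack_cables} cables")
print(f"Blade enclosures: {after_blades} cables")
print(f"Converged fabric: {after_convergence} cables")
```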

ConvergedInfra

All of these emerging network technologies have an effect on how data centers are being planned for the future, but it is also important to understand how these technologies evolved.

The remainder of this section focuses on identifying what is currently used in data center network deployments, as well as identifying HP’s vision of converged network infrastructure….

HP #3PAR – VMware Environments #HPStorageGuy #HPStorage #HPConverge

Courtesy of Calvin Zito HPStorageGuy Blog

Read the full blog article here: http://bit.ly/Vl9Ase Plus this article: 7 reasons why HP 3PAR is the best storage for VMware

The traditional RAID era and how the storage world has changed

…The spindle count in RAID groups was calculated by storage architects based on host IOPS workload requirements. There was no real concept of throwing all spindles into one big “pool” and then carving and provisioning storage from that pool.

The architecture is similar to this example image depicting the traditional era; each RAID group was more or less dedicated to one particular workload.

Traditional RAID

Things have changed since then, and thus the concept of shared pools of storage was born; this was to drive initiatives like cloud computing, deduplication (if your array supported it natively), and storage tiering, amongst other things. With this shared pool of resources, workloads were “spread out” across the storage resources, generating a bigger pool of grunt to draw from.

HP 3PAR does this in the form of wide striping, breaking storage down into “chunklets”.

Chunklets
The term chunklets may sound like some sort of breakfast cereal, and although they are not of the food variety, the concept still holds plenty of nutritional value for your storage requirements. Here’s how they work:

  • An HP 3PAR array is populated with one or more disk types; these can be either Fibre Channel, SATA, or SSD. In order to provision storage from these drives to a host, there needs to be a Common Provisioning Group (CPG) created; this serves as a template for creating LUNs. Typically, the CPG needs to be of the same disk type and the same RAID characteristics.
  • From there, LUNs can be created and provisioned to the host. When an ESXi host starts storing virtual machine data to the LUN – whether that is virtual disk data or metadata – each physical drive is broken down into 256 MB chunklets that the LUNs can use to store the data.
    One point to note is that there are also chunklets reserved for distributed sparing.
    As an example, a single 600 GB drive gives you 2,400 chunklets at your disposal for virtual machine use (600 GB × 1024 MB ÷ 256 MB), as the quick sketch below shows. When you add more shelves of drives, the picture gets bigger, as does the performance.
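A minimal sketch of that chunklet arithmetic (the 256 MB chunklet size comes from the text above; the drive sizes are just example values):

```python
# Chunklet arithmetic from the example above: each drive is carved into 256 MB chunklets.
CHUNKLET_MB = 256

def chunklets_per_drive(drive_gb: int) -> int:
    """Number of 256 MB chunklets a single drive yields (ignoring sparing overhead)."""
    return (drive_gb * 1024) // CHUNKLET_MB

for size_gb in (300, 600, 900):  # example drive sizes
    print(f"{size_gb} GB drive -> {chunklets_per_drive(size_gb)} chunklets")
# 600 GB drive -> 2400 chunklets, matching the worked example above.
```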

Wide Striping
From the physical disks right through to the LUNs that are provisioned to the ESXi host, the result is that chunklets are created across all of the spindle types in the array, as defined in the CPG. This system-wide allocation supercharges performance for virtual workloads.
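Conceptually, wide striping means a LUN’s chunklets are drawn round-robin from every eligible drive in the CPG rather than from a single dedicated RAID group. The following is a purely illustrative model of that idea, not 3PAR’s actual allocator; the drive names and LUN size are made up.

```python
# Purely illustrative model of wide striping: allocate a LUN's chunklets
# round-robin across every drive in the CPG, so all spindles share the workload.
CHUNKLET_MB = 256

def wide_stripe(lun_gb: int, drives: list[str]) -> dict[str, int]:
    """Return how many of the LUN's chunklets land on each drive (round-robin)."""
    needed = (lun_gb * 1024) // CHUNKLET_MB
    placement = {d: 0 for d in drives}
    for i in range(needed):
        placement[drives[i % len(drives)]] += 1
    return placement

cpg_drives = [f"FC-{n:02d}" for n in range(16)]    # hypothetical 16-drive FC CPG
print(wide_stripe(lun_gb=100, drives=cpg_drives))  # ~25 chunklets on every drive
```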

wide striping

chunklets

Multi-RAID? Sure!

One of the harder questions for a storage architect to answer is “What type of RAID shall I use for this virtual environment?”. It is typically answered with the usual “It depends” response. Different workloads call for different strategies, as different RAID types have different write penalties and performance considerations.

There is a consensus in the industry around the following rules of thumb (these are only rules of thumb and are not best practices in any form):

  • RAID 1/0 – Write-intensive random workloads usually suit this best.
  • RAID 5 – Arguably one of the best all-rounders, offering a good balance of performance and redundancy. Modest random workloads are a good fit.
  • RAID 6 – HP 3PAR offers double parity protection in the form of RAID-MP, offering a higher redundancy (double failure) than RAID 5 but at the cost of usable storage and performance because of the added write penalty.

…Regardless of which RAID type is used, making a write I/O takes time. The quicker the write completes, the better the latency and throughput, and the smaller the observed write penalty.
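The back-end cost of a front-end write is where the write penalty comes from, and it differs per RAID type. The sketch below uses the standard rule-of-thumb penalty factors (2 for RAID 1/0, 4 for RAID 5, 6 for RAID 6); these are generic textbook figures, not 3PAR-specific measurements, and the workload numbers are arbitrary examples.

```python
# Rule-of-thumb back-end I/Os generated per front-end write for common RAID levels.
# Generic textbook figures, not 3PAR-measured values.
WRITE_PENALTY = {
    "RAID 1/0": 2,   # write to both mirrors
    "RAID 5":   4,   # read data + read parity, write data + write parity
    "RAID 6":   6,   # as RAID 5, but with two parity blocks to update
}

def backend_iops(frontend_iops: int, write_ratio: float, raid: str) -> float:
    """Back-end IOPS needed to serve a given front-end workload on a RAID type."""
    reads = frontend_iops * (1 - write_ratio)
    writes = frontend_iops * write_ratio
    return reads + writes * WRITE_PENALTY[raid]

for raid in WRITE_PENALTY:
    # Example: 5,000 front-end IOPS at a 30% write mix.
    print(f"{raid}: {backend_iops(5000, 0.30, raid):.0f} back-end IOPS")
```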

Dynamic Optimisation (DO) and Adaptive Optimisation (AO)

The end result is that your data gets automagically spread across all disks and all disk types in the 3PAR, with hot regions on fast disks and cold data on slow disks. The whole performance capability of the entire array is made available to all of your data automatically; this is how virtual workloads should be stored!

Optimisation

In Closing.

Here’s the key takeaway to remember: the main contributor to an array’s performance is how many disks of each disk type are installed in the array. The more drives you have in the CPG, the more throughput and overall IOPS are available to all of your VMFS datastores and, subsequently, your virtual machine workloads….
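To see why drive count and type dominate, here is a rough estimate of the raw IOPS a CPG can serve. The per-drive figures are generic industry rules of thumb rather than HP-published numbers, and the example CPG sizes are invented for illustration.

```python
# Rough aggregate IOPS available from a CPG, scaling with drive count and type.
# Per-drive figures are generic industry rules of thumb, not HP-published numbers.
RULE_OF_THUMB_IOPS = {
    "15k FC":   180,
    "7.2k NL":   75,
    "SSD":     2500,
}

def cpg_raw_iops(drive_counts: dict[str, int]) -> int:
    """Sum the rule-of-thumb IOPS of every drive backing the CPG."""
    return sum(RULE_OF_THUMB_IOPS[t] * n for t, n in drive_counts.items())

small_cpg = {"15k FC": 32}
big_cpg   = {"15k FC": 96, "SSD": 8}
print(cpg_raw_iops(small_cpg))   # ~5,760 raw IOPS
print(cpg_raw_iops(big_cpg))     # ~37,280 raw IOPS - more spindles, more grunt
```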

Next Gen #3PAR What you need to know #HPConverge #HPStorage

Came across this good blog Courtesy of techopsguys.com

Below are a few extracts and full article here: http://bit.ly/YyAT9G

Some extracts:

  • Renamed Products:

    There were some basic name changes for the 3PAR product lines:

    • The HP 3PAR InServ is now the HP 3PAR StorServ
    • The HP 3PAR V800 is now the HP 3PAR 10800
    • The HP 3PAR V400 is now the HP 3PAR 10400
  • The 3PAR 7000-Series mid range done right:

    • The 3PAR 7000-series leverages all of the same tier one technology that is in the high end platform and puts it in a very affordable package
    • The 7200 & 7400 represent roughly a 55-65% discount over the previous F-class mid-range 3PAR solution
    • The 7000 series comes in two flavors – a two-node 7200, and a two- or four-node 7400.
    • Note that it is not possible to upgrade a 7200 in place to a 7400, so if you want a 4-node-capable system you have to choose the 7400 up front (you can, of course, purchase a two-node 7400 and add the other two nodes later).
  • Dual vs Quad Controller: The controller configurations are different between the two, and the 7400 has extra cluster cross-connects to unify the cluster across enclosures. The 7400 is the first 3PAR system that does not leverage a passive backplane for all inter-node communications.

    A unique and key selling point of a 4-node 3PAR system is persistent cache, which keeps the cache in write-back mode during planned or unplanned controller maintenance.

  • Basic array specifications
    3PAR Array Specifications

    (Note: All current 3PAR arrays have dedicated gigabit network ports on each controller for IP-based replication)

  • Dual vs Quad controller:

    In a nutshell, vs the F-class mid-range systems, the new 7000…

    • Doubles the data cache per controller to 12GB compared to the F200 (almost triple if you compare the 7400 to the F200/F400)
    • Doubles the control cache per controller to 8GB. The control cache is dedicated memory for the operating system, completely isolated from the data cache.
    • Brings PCI-Express support to the 3PAR mid range allowing for 8Gbps Fibre Channel and 10Gbps iSCSI
    • Brings the mid range up to spec with the latest 4th generation ASIC, and latest Intel processor technology.
    • Nearly triples the raw capacity
    • Moves from an entirely Fibre Channel-based system to a SAS back end with a Fibre Channel front end
    • Moves from exclusively 3.5″ drives to primarily 2.5″ drives with a couple of 3.5″ drive options
    • Brings FCoE support to the 3PAR mid range (in 2013) for the four customers who use FCoE.
    • Cuts the size of the controllers by more than half
    • Obviously dramatically increases the I/O and throughput of the system with the new ASIC with PCIe, faster CPU cores, more CPU cores (in the 7400), and the extra cache.
  • Persistent Ports

This is a really cool feature as well – it gives the ability to provide redundant connectivity to multiple controllers on a 3PAR array without having to have host-based multipathing software. How is this possible? Basically, it is NPIV for the array. Peer controllers can assume the World Wide Names of the ports on their partner controller. If a controller goes down, its peer assumes the identities of that controller’s ports, instantaneously providing connectivity for hosts that were (not directly) connected to the ports on the downed controller. This eliminates pauses for MPIO software to detect faults and fail over, and generally makes life a better place.

HP claims that some other tier 1 vendors can provide this functionality for software changes, but that today they do not provide it for hardware changes. 3PAR provides this technology for both hardware and software changes – on all of their currently shipping systems!
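Conceptually, the failover behaves like NPIV-style identity takeover: the surviving controller presents its peer’s port WWNs in addition to its own, so host paths never disappear. The toy model below is purely illustrative of that idea and is not HP’s implementation; the node names and WWN placeholders are made up.

```python
# Toy model of persistent-ports behaviour: when a controller goes down,
# its peer presents the failed controller's port WWNs as well as its own,
# so host paths stay up without waiting on MPIO failover.
# Purely conceptual - not HP's actual implementation.

controllers = {
    "node0": {"wwns": ["50:00:..:00", "50:00:..:01"], "up": True, "peer": "node1"},
    "node1": {"wwns": ["50:00:..:02", "50:00:..:03"], "up": True, "peer": "node0"},
}

def presented_wwns(node: str) -> list[str]:
    """WWNs a controller presents: its own, plus its peer's if the peer is down."""
    ctl = controllers[node]
    if not ctl["up"]:
        return []
    wwns = list(ctl["wwns"])
    peer = controllers[ctl["peer"]]
    if not peer["up"]:
        wwns += peer["wwns"]          # NPIV-style takeover of the peer's identities
    return wwns

controllers["node0"]["up"] = False    # simulate node0 going down for maintenance
print(presented_wwns("node1"))        # node1 now answers on all four WWNs
```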

  • Virtualized Service Processor

All 3PAR systems have come with a dedicated server known as the Service Processor, which acts as a proxy of sorts between the array and 3PAR support. It is used for alerting as well as remote administration. The hardware configuration of this server was quite inflexible, and it made it needlessly complex to deploy in some scenarios (mainly due to having only a single network port).

The Service Processor was also rated to consume a mind-boggling 300W of power (it may have been a legacy typo, but that’s the number given in the specs).

The Service processor can now be deployed as a virtual machine!

  • Thick Conversion

I’m sure many customers have wanted this over the years as well. The new software will allow you to convert a thin volume to a thick (fat) volume. The main purpose of this, of course, is to save on licensing for thin provisioning when you have a volume that is fully provisioned (and the likelihood of space reclamation on that volume is low as well). I know I could have used this years ago… I always shook my fist at 3PAR when they made it easy to convert to thin, but really impossible to convert back to thick (without service disruption, anyway).

  • Easy setup with Smart Start

Leveraging technology from the EVA line of arrays, HP has radically simplified the installation process of a 7000-series array, so much so that the customer can now perform the installation on their own without professional services. This is huge for this market segment. The up-front professional services to install a mid-range F200 storage system had a list price of $10,000 (as of last year, anyway).

HP ASE Converged Infrastructure Architect Official Exam Certification Guide #Matrix HPCloudSystem #MOE #HPExpertOne

Description:
This HP ExpertOne book will help you prepare for the Architecting the HP Matrix Operating Environment (HP0-D20) exam. You will learn key technologies and the HP Converged Infrastructure solutions as you prepare for the HP ASE – Converged Infrastructure Architect V1 certification. Acquiring this certification validates your ability to transform data centers by bringing together server, storage, and networking functions into an integrated and managed pool of IT resources. This guide will continue to serve as a useful reference for analyzing customer requirements and recommending the correct HP Converged Infrastructure solution.

https://h30590.www3.hp.com/product/HP+ASE+Converged+Infrastructure+Architect+Official+Exam+Certification+Guide+Exam+HP0-D20-Hardcover-8383

Feature Comparison – #VMware vDS & #Cisco #Nexus 1000V switches

Access the Solution Overview here:
Virtual Networking Features of the VMware vNetwork
Distributed Switch and Cisco Nexus 1000V Switches

Cisco Nexus 1000V Series Switches and VMware vSphere 4: Accelerate Data Center Virtualization

ALTERNATIVES FOR VIRTUAL NETWORKING

With VMware vNetwork, VMware is introducing a number of alternatives for virtual networking in vSphere 4. Table 1 of the linked solution overview summarizes and compares the features of these alternatives.

VMware vNetwork Standard Switch
The VMware vNetwork Standard Switch (vSS) is the base level virtual networking alternative. It extends the familiar appearance, configuration, and capabilities of the standard virtual switch (vSwitch) in VMware ESX 3.5 to ESX 4.0 and vSphere 4.

VMware vNetwork Distributed Switch
The VMware vNetwork Distributed Switch (vDS) is new with vSphere 4. The VMware vDS extends the feature set of the VMware Standard Switch, while simplifying network provisioning, monitoring, and management through an abstracted, single distributed switch representation of multiple VMware ESX and ESXi Servers in a VMware data center.

Cisco Nexus 1000V Series Switches
Cisco Nexus™ 1000V Series Switches are the result of a Cisco and VMware collaboration building on the VMware vNetwork third-party vSwitch API of VMware vDS and the industry-leading switching technology of the Cisco Nexus Family of switches. Featuring the Cisco NX-OS Software data center operating system, the Cisco Nexus 1000v Series extends the virtual networking feature set to a level consistent with physical Cisco switches and brings advanced data center networking, security, and operating capabilities to the vSphere environment. It provides end-to-end physical and virtual network provisioning, monitoring, and administration with virtual machine–level granularity using common and existing network tools and interfaces. The Cisco Nexus 1000V Series transparently integrates with VMware vCenter Server to provide a consistent virtual machine provisioning workflow while offering features well suited for data center–class applications, VMware View, and other mission-critical virtual machine deployments.

VMware vDistributed Switch (vDS)
Cisco Nexus 1000V

#Cisco #Nexus 5000 Useful Resources

Some useful resources on this topic found whilst working on a Hosting Infrastructure Solution.

Cisco Nexus 5000 Series Switch Home Page – http://www.cisco.com/en/US/products/ps9670/index.html

Cisco Switch Guide Matrix –
http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9402/6701_catswitch_guide_v7_r5.pdf

Cisco Nexus Family – http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9670/brochure_c02-466008_ps9670_Products_Brochure.html

At a Glance – http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9670/at_a_glance_c45-462427.pdf

Fibre Channel over Ethernet FCoE –
http://www.cisco.com/en/US/solutions/collateral/ns340/ns517/ns224/ns945/ns1060/at_a_glance_c45-578384.pdf

White Papers – http://www.cisco.com/en/US/products/ps9670/prod_white_papers_list.html

#Cisco Data Center #Virtualisation: Enhanced Secure Multi-Tenancy Design Guide #VMWare #IAAS #NetAPP #UCS

Goal of This Document:
Cisco®, VMware®, and NetApp® have jointly designed a best-in-breed Enhanced Secure Multi-Tenancy (ESMT) Architecture and have validated this design in a lab environment. This document describes the design of, and the rationale behind, the Enhanced Secure Multi-Tenancy Architecture. The design covers many of the issues that must be addressed prior to deployment, as no two environments are alike. This document also discusses the problems that this architecture solves and the four pillars of an Enhanced Secure Multi-Tenancy environment.

Audience:
The target audience for this document includes, but is not limited to, sales engineers, field consultants, professional services, IT managers, partner engineering, and customers who wish to deploy an Enhanced Secure Multi-Tenancy (ESMT) environment consisting of best-of-breed products from Cisco, NetApp, and VMware.

Objectives:
This document is intended to articulate the design considerations and validation efforts required to design, deploy, and back up an Enhanced Secure Multi-Tenancy virtual IT-as-a-service.

Foundational Components:
Each implementation of an Enhanced Secure Multi-Tenancy (ESMT) environment will most likely be different due to the dynamic nature and flexibility provided within the architecture. For this reason this document should be viewed as a reference for designing a customized solution based on specific tenant requirements. The following outlines the foundational components that are required to configure a base Enhanced Secure Multi-Tenancy environment. Add additional features to these foundational components to build a customized environment designed for specific tenant requirements. It is important to note that this document not only outlines the design considerations around these foundational components, but also includes considerations for the additional features that can be leveraged in customizing an Enhanced Secure Multi-Tenancy environment.

The Enhanced Secure Multi-Tenancy foundational components include:

• Cisco Nexus® data center switches
• Cisco Unified Computing System
• Cisco Nexus 1000V Distributed Virtual Switch
• NetApp Data ONTAP®
• VMware vSphere™
• VMware vCenter™ Server
• VMware vShield™

Read the full Reference Architecture Design Guide here

Download PDF from my public SkyDrive Here

ANS group free morning seminar London 19th May – Infrastructure 3.0 Next Generation Data Centre – #UCS Unified Fabric #Cisco #VMware #NetApp

http://www.ansgroup.co.uk/events/infrastructure-3-0-at-the-brewery

Infrastructure 3.0 at The Brewery
Location: James Watt Room – The Brewery, Chiswell Street London EC1Y 4SD

As a Cisco Gold Partner, VMware Premier and NetApp Star Partner, ANS Group are at the forefront of delivering and scaling innovative Data Centre Solutions. Our leading experts will show you how to create a Unified Data Centre, ensuring all components are designed and implemented so that the environment acts as one.

Unified Data Centres require core technologies to work in synergy in order to reduce overall operating costs. Deploying point technologies in the data centre without consideration of adjacent systems does not create a flexible, fluid architecture moving forward. By designing the systems to work in tandem, the maximum technology benefits can be achieved from end to end, ensuring that the maximum possible savings can be made in every area. This seminar will focus specifically on Unified Computing, Unified Fabric and Application Delivery.

Join us for our free morning seminar to develop your understanding of the Next Generation Data Centre.

Ethernet versus FC – Great ‘Surfer vs Banker’ Analogy #Cisco #NetApp #DCB

Read Part 1: http://www.networkworld.com/community/blog/ethernet-adapts-data-center-applications-%E2%80%93-pa

Read Part 2: http://www.networkworld.com/community/blog/ethernet-adapts-data-center-applications-%E2%80%93-p-0

Many networks need to marry their Fibre Channel SAN protocols to Ethernet. But Ethernet is an easy-going protocol (let’s call it West Coast) and Fibre Channel is a structured protocol (East Coast). “Data Center Bridging” will be to these two what the middle of the country is to the coasts – the means by which the two connect….

If we compare and contrast Ethernet and Fibre Channel (West Coast and East Coast protocols, respectively), we see that Ethernet is the laid-back West Coast surfer that will try to deliver your frames on time and in order, but if it can’t, you get a “Sorry dude, couldn’t make it happen” response. You’ll be OK, though, because TCP will retransmit; and if it was UDP, it was probably real-time traffic and you hopefully didn’t notice the clipping.

Fibre Channel, on the other hand, is a very structured and regimented East Coast protocol that won’t tolerate delays and drops. Significant efforts are made to ensure this on-time delivery, including a hop-by-hop buffering system and classes of service that can guarantee in-order delivery. If Fibre Channel frames hit the deck, bad things happen. Most applications and operating systems don’t like it when their storage is pulled out from under them while the network converges – recent personal experience was a great reinforcement of this principle. Wonder why your SAN admins get nervous when you mention FCoE? The laissez-faire approach of Ethernet is the reason.

So how do we solve this challenge and merge the East Coast rigidity of Fibre Channel onto the laid-back West Coast Ethernet? Data Center Bridging is the answer. Data Center Bridging (DCB) is a collection of enhancements to Ethernet that make it capable of providing lossless transport for protocols like FCoE….

Ethernet inherently doesn’t provide the ability to multi-path because STP is blocking our redundant links to mitigate loops in the network. So if you are implementing Fibre Channel over Ethernet and have promised your SAN team that the network won’t lose their Fibre Channel frames, the next hurdle will be multi-pathing. (See previous post that discussed the ways Fibre Channel and Ethernet don’t get along, and why Data Center Bridging is the answer.)

How do we cross that chasm? There are two approaches that relate to how you plan to implement FCoE in your network… single-hop, and…multi-hop.

Read more of the full articles at above Link

Credit: http://blog.ioshints.info/2010/10/ethernet-versus-fc-surfer-versus-banker.html

#FCoE #SAN multi-hop technology primer

Read the full SearchStorage Article @ …FCoE SAN multi-hop technology primer.

What you will learn in this tip: Fibre Channel over Ethernet (FCoE) storage-area network (SAN) technology is becoming more popular in data storage environments, but there are performance issues, primarily the lack of multi-hop switching support, that need to be addressed and that could potentially stunt the growth of the technology. Find out what vendors and users are doing to improve FCoE SAN performance.

FCoE SAN is gaining broad support from storage and network vendors, and customer adoption is also rising. Because it’s a new protocol and relies on many new features, FCoE remains somewhat limited in terms of interoperability and flexibility. One often-criticized element is the lack of multi-hop switching support in FCoE SANs, but what exactly does this mean?

A quick Fibre Channel primer

Fibre Channel (FC) initiators contain a number of Node Ports (“N_Port”) that connect to the Fabric Ports (“F_Port”) on switches. FC switches talk to each other using Expansion Ports (“E_Port”) before finally communicating with the N_Port on the storage array. This allows them to route traffic through the SAN to avoid data loss and congestion. FCoE SANs adopt a virtual version of this configuration, with a “VN_Port” talking to a “VF_Port,” and (if they support it) the network switches using “VE_Ports” to exchange data over an inter-switch link (ISL).

One major difference between a Fibre Channel fabric and Ethernet network is intelligence: The fabric itself actively participates in access and routing decisions. Although it’s distributed, the FC fabric has some intelligence and thus FC switches are more involved in the network than basic Ethernet switches. In particular, each switch participates in making decisions about where to send data traffic, so each stream of initiator-to-target traffic gets its own route through the SAN rather than sharing a single route as in an Ethernet LAN with spanning tree path management.
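The physical-to-virtual port naming maps one-to-one and is easy to lose track of, so here is a small reference sketch of the relationships described above (descriptions paraphrased from the paragraph; nothing vendor-specific is assumed).

```python
# Quick reference for the FC port roles described above and their FCoE equivalents.
FC_PORT_ROLES = {
    "N_Port": "node port on an initiator or storage array",
    "F_Port": "fabric port on a switch, facing an N_Port",
    "E_Port": "expansion port linking switch to switch (ISL)",
}

# FCoE virtualises each role with a 'V' prefix: N_Port -> VN_Port, and so on.
FCOE_EQUIVALENT = {fc: f"V{fc}" for fc in FC_PORT_ROLES}

for fc_port, role in FC_PORT_ROLES.items():
    print(f"{fc_port:7s} ({role}) -> {FCOE_EQUIVALENT[fc_port]}")
```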

#Cisco expands Fibre Channel over Ethernet support; adds multihop #FCoE

Originally published on SearchStorage here: http://bit.ly/g8eVMc

Cisco Systems Inc. took steps to fill in the gaps in Fibre Channel over Ethernet (FCoE) support today as part of its data center portfolio expansion.

Cisco’s launch of servers, switches and management tools included a handful of FCoE enhancements. It added FCoE support for the MDS 9500 storage switch and the Nexus 7000 data center director switch platform, as well as multihop FCoE support in its NX-OS operating system. Cisco is also moving to a common management tool — Data Center Network Manager — for storage-area network (SAN) and local-area network (LAN) devices.

Director-class, multihop support for Fibre Channel, FCoE, iSCSI and network-attached storage (NAS) gives Cisco the ability to make seven hops between Unified Computing System (UCS), Nexus and MDS devices, allowing customers to scale Fibre Channel over Ethernet networks without requiring the emerging Transparent Interconnection of Lots of Links (TRILL) standard. Previously, Cisco only supported FCoE on its Nexus 5000 top-of-rack switch.

 “It’s not so much FCoE everywhere, as FCoE anywhere,” said Rob Nusbaum, Cisco’s product line manager for the MDS platform. “Each node running FCoE is no longer dependent on TRILL or Fabric Path. You can aggregate FCoE traffic in the SAN core and to other devices in the network.”

These enhancements come as people in the data storage industry continue to debate when FCoE will become prevalent in the data center, and whether it will even be the key protocol for convergence between Fibre Channel and Ethernet. The move to Fibre Channel over Ethernet has come more slowly than Cisco anticipated when it first revealed its new data center strategy around Nexus switches and its Unified Computing System platform three years ago.

Analysts agree that Cisco’s new additions could help ease the transition to Fibre Channel over Ethernet for enterprises looking to go that way.

“There have been restrictions until now,” said Wikibon senior analyst Stuart Miniman. “One of them was you couldn’t do multihop. This announcement from Cisco removes most of those restrictions. For Fibre Channel customers who want to take a slower path toward convergence, it gives them a path to do that.”

Rick Villars, vice president of storage systems and executive strategies at IDC, agreed that multihop capability makes Fibre Channel over Ethernet a better fit for FC shops.

“If you were going to do Fibre Channel and FCoE before, you were limited to having dedicated storage to the UCS platform,” Villars said. “Multihop capability lets you broaden the base to customers who have Fibre Channel storage. It’s also important as part of a disaster recovery strategy because it involves the full backup and recovery effort.”

Common management for storage and network teams

Data Center Network Manager is the first step toward common management of LANs and SANs for Cisco shops. It combines Cisco Network Manager and Cisco Fabric Manager for data storage management into one platform that integrates with VMware vCenter for provisioning and troubleshooting. There are still two versions: Data Center Network Manager for SAN and Data Center Network Manager for LAN.

“This is a good step,” Wikibon’s Miniman said of Data Center Network Manager. “You don’t want to just allow the LAN and SAN guys to do the same things they always did. Having a single pane of glass can blur the lines between the LAN and SAN without scaring off the network and storage guys.”

‘Still torn’ on Fibre Channel over Ethernet

Is this enough to push significant adoption of Fibre Channel over Ethernet? Villars and Miniman said they expect 10 GbE to play a major role in a converged network, but FCoE’s place in that convergence remains unclear.

“We’re still torn on FCoE,” IDC’s Villars said. “We think it’s a matter of time on 10 Gig Ethernet, but whether it’s FCoE, iSCSI or a file protocol, that’s still up in the air. We see companies that are loyal to Fibre Channel going to FCoE, but others who are just as loyal to Fibre Channel want to look at file storage for their converged infrastructure. We’re still in the early stages where people are looking at a lot of options; it’s not just about the network protocol, it’s also about management for the converged environment.”

Wikibon’s Miniman said the next major piece of the FCoE puzzle will come when Intel releases its “Sandy Bridge” server architecture later this year that will be optimized for 10 GbE and drive more LAN on motherboards (LOMs) that support FCoE. Miniman said 10 GbE may drag Fibre Channel over Ethernet into the data center because when all the pieces are in place, administrators will feel pressure to use them.

“FCoE is not the driver for convergence,” Miniman said, “10 Gig Ethernet adoption is the driver. When you have all the pieces embedded, there will be pressure from management to say ‘Why pay for separate HBAs and Fibre Channel switches when Cisco tells me I have all these pieces already?'”

Cisco’s FC switch rival Brocade has taken a more conservative approach to FCoE, although it acquired network switch vendor Foundry Networks in 2009 to give it an Ethernet platform. Brocade supports FCoE, but remains more devoted to developing its pure Fibre Channel products than Fibre Channel over Ethernet.

“Brocade has not been showing up in FCoE deployments from what I’ve heard,” Miniman said. “Cisco has a broader portfolio of FCoE products and Brocade has the second largest, but Brocade does not seem as committed to FCoE. Brocade has an Ethernet business it’s looking to grow and a Fibre Channel business to maintain, but it’s not committed to merging them in a single platform.”

This article was originally published on SearchStorage.com.