Archive

Archive for the ‘Cisco’ Category

#vmware forum 2012 presentations #IaaS #PrivateCloud

June 6, 2012 Comments off

Event Presentations here:

http://www.vmwareforum2012.com/London/presentations

Integrating #Lync Server 2010 and #Cisco Unified Communications Manager #CUCM #SIP

July 21, 2011 Comments off

Courtesy of NextHop Blog

http://blogs.technet.com/b/nexthop/archive/2011/07/17/integrating-lync-server-2010-and-cisco-unified-communications-manager.aspx

The white paper, Integrating Microsoft Lync Server 2010 and Cisco Unified Communications Manager, is now available in the Download Center. The white paper walks through the step-by-step configuration tasks needed to set up Direct SIP connectivity between Cisco Unified Communications Manager (CUCM) and Lync Server. These steps include configuration of the media bypass feature, which optimizes media flow by allowing Lync endpoints to establish a media connection directly with a gateway or private branch exchange (PBX) without going through the Lync Server Mediation Server.
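As a quick sanity check before (or after) working through the white paper, you can probe whether the trunk peer is reachable and answering SIP by sending it a SIP OPTIONS request. The Python sketch below is illustrative only and is not taken from the white paper: the hostnames and TCP port 5060 (the common CUCM SIP trunk default) are assumptions, so substitute the addresses and port actually configured on your trunk.

import socket

# Hypothetical values - replace with the CUCM address and the SIP port
# configured on the trunk (5060 is the common CUCM SIP trunk default).
CUCM_HOST = "cucm.example.com"
CUCM_PORT = 5060
LOCAL_ID = "lync-med.example.com"

# Minimal SIP OPTIONS request; a "200 OK" reply suggests the peer is
# reachable and responding to SIP on this transport.
request = (
    f"OPTIONS sip:{CUCM_HOST} SIP/2.0\r\n"
    f"Via: SIP/2.0/TCP {LOCAL_ID}:5060;branch=z9hG4bK-check-1\r\n"
    "Max-Forwards: 70\r\n"
    f"From: <sip:{LOCAL_ID}>;tag=check-1\r\n"
    f"To: <sip:{CUCM_HOST}>\r\n"
    "Call-ID: connectivity-check-1\r\n"
    "CSeq: 1 OPTIONS\r\n"
    f"Contact: <sip:{LOCAL_ID}:5060;transport=tcp>\r\n"
    "Content-Length: 0\r\n\r\n"
)

with socket.create_connection((CUCM_HOST, CUCM_PORT), timeout=5) as sock:
    sock.sendall(request.encode("ascii"))
    print(sock.recv(4096).decode("ascii", errors="replace"))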

Feature Comparison – #VMware vDS & #Cisco #Nexus 1000V switches

July 20, 2011 Comments off

Access the Solution Overview here:
Virtual Networking Features of the VMware vNetwork Distributed Switch and Cisco Nexus 1000V Switches

Cisco Nexus 1000V Series Switches and VMware vSphere 4: Accelerate Data Center Virtualization

ALTERNATIVES FOR VIRTUAL NETWORKING

With VMware vNetwork, VMware is introducing a number of alternatives for virtual networking in vSphere 4. Table 1 summarizes and compares the features of these alternatives.

VMware vNetwork Standard Switch
The VMware vNetwork Standard Switch (vSS) is the base level virtual networking alternative. It extends the familiar appearance, configuration, and capabilities of the standard virtual switch (vSwitch) in VMware ESX 3.5 to ESX 4.0 and vSphere 4.

VMware vNetwork Distributed Switch
The VMware vNetwork Distributed Switch (vDS) is new with vSphere 4. The VMware vDS extends the feature set of the VMware Standard Switch, while simplifying network provisioning, monitoring, and management through an abstracted, single distributed switch representation of multiple VMware ESX and ESXi Servers in a VMware data center.

Cisco Nexus 1000V Series Switches
Cisco Nexus™ 1000V Series Switches are the result of a Cisco and VMware collaboration building on the VMware vNetwork third-party vSwitch API of VMware vDS and the industry-leading switching technology of the Cisco Nexus Family of switches. Featuring the Cisco NX-OS Software data center operating system, the Cisco Nexus 1000V Series extends the virtual networking feature set to a level consistent with physical Cisco switches and brings advanced data center networking, security, and operating capabilities to the vSphere environment. It provides end-to-end physical and virtual network provisioning, monitoring, and administration with virtual machine–level granularity using common and existing network tools and interfaces. The Cisco Nexus 1000V Series transparently integrates with VMware vCenter Server to provide a consistent virtual machine provisioning workflow while offering features well suited for data center–class applications, VMware View, and other mission-critical virtual machine deployments.

VMware vDistributed Switch (vDS)

Cisco Nexus 1000V

#Cisco #Nexus 5000 Useful Resources

July 14, 2011 Comments off

DC & Virtualisation Titbits… NAS vs SAN, FCoE vs iSCSI? Don’t Believe the Hype #Cisco #EMC #NetApp

May 27, 2011 Comments off

Some useful titbits here taken from: http://viewer.media.bitpipe.com/1127846808_124/1301682190_579/Evolution-EU-1_final.pdf

Read the full articles in the PDF

Titbit #1

NAS versus SAN for data center virtualisation storage

There are two major approaches to network storage: network attached storage (NAS) and storage area network (SAN). They vary both in network architecture and in how each presents itself to the network client. NAS devices leverage the existing IP network and deliver file-layer access.

NAS appliances are optimised for sharing files across the network because they are nearly identical to a file server.

SAN technologies, including Fibre Channel (FC) and iSCSI, deliver block-layer access, forgoing the file system abstractions and appearing to the client as essentially an unformatted hard disk.

FC operates on a dedicated network, requiring its own FC switch and host bus adapters in each server.

An emerging standard, Fibre Channel over Ethernet (FCoE), collapses the storage and IP network onto a single converged switch, but still requires a specialised converged networking adapter in each server.

SAN solutions have an advantage over NAS devices in terms of performance, but at the cost of some contention issues.
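To make the file-layer vs block-layer distinction concrete, here is a minimal sketch of the two access models from the client's point of view. The mount point and device path are hypothetical: a NAS share typically appears as an NFS or SMB mount, while a SAN LUN appears as a raw, unformatted block device.

import os

# File-layer (NAS-style) access: the client sees a file system path,
# typically an NFS or SMB mount exported by the NAS appliance.
# "/mnt/nas_share" is a hypothetical mount point.
with open("/mnt/nas_share/report.txt", "rb") as f:
    data = f.read(4096)

# Block-layer (SAN-style) access: the client sees an unformatted disk
# (an FC or iSCSI LUN) and reads raw sectors; any file system on top is
# created and interpreted by the client itself.
# "/dev/sdb" is a hypothetical LUN presented by the SAN.
fd = os.open("/dev/sdb", os.O_RDONLY)
try:
    first_sector = os.read(fd, 512)   # raw 512-byte sector, no file semantics
finally:
    os.close(fd)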

Titbit #2

FCoE vs iSCSI? How about the one that’s ready?

Vendors can push Fibre Channel over Ethernet (FCoE) all they want, but the technology is simply not ready for deployment, argues Stephen Foskett, Gestalt IT community organiser. But iSCSI is another story. “I am not a big fan of FCoE yet. The data centre bridging (DCB) extensions are coming … but we don’t yet have an end-to-end FCoE solution. We don’t have the DCB components standardised yet,” Foskett said.

What does Foskett think it will take to make FCoE work? “It’ll take a complete end-to-end network. I understand the incremental approach is probably what most people are going to do now. It’s not like they’re going to forklift everything and get a new storage array and get a new greenfield system, but right now you can’t do that,” Foskett said.

iSCSI, on the other hand, works over 10 Gigabit Ethernet today and lends itself to a total solution. So why aren’t vendors selling it? “iSCSI doesn’t give vendors a unique point of entry. They can’t say we’ve got iSCSI, so that makes us exceptional. But with FCoE they can say, ‘We are the masters of Fibre Channel’ or ‘We are the masters of Ethernet, so you can trust us.’ iSCSI works too well for anybody to have a competitive advantage,” Foskett said.

Before embarking on an FCoE implementation, ask:

• Will the storage team or the networking team own the infrastructure? If co-managed, who has the deciding vote?

• Which department will pay for it? How will chargeback be calculated and future growth determined?

• Will the teams be integrated? Typically, the networking team is responsible for IP switches, while the storage team is responsible for Fibre Channel.

• Who will own day-to-day operational issues? If a decision needs to be made regarding whether more bandwidth is given to local area network (LAN) or storage area network (SAN) traffic, who makes the call? Will companies have to create a single, integrated connectivity group?

Titbit #3

Choosing a convergence technology… FCoE or iSCSI? Does it matter?

FCoE gets all the data centre network convergence hype, but many industry veterans say iSCSI is another viable option. As an IP-based storage networking protocol, iSCSI can run natively over an Ethernet network. Most enterprises that use iSCSI today run the storage protocol over their own separate networks because convergence wasn’t an option on Gigabit Ethernet. But with 10 GbE switches becoming more affordable, iSCSI-based convergence is becoming more of a reality.

“Certainly iSCSI is the easier transition [compared to FCoE],” said storage blogger and IT consultant Stephen Foskett. “With iSCSI you don’t have to have data center bridging, new NICs, new cables or new switches.”

Ultimately the existing infrastructure and the storage demands of an enterprise will govern the choice of a network convergence path. “There are very few times where I will steer a customer down an FCoE route if they don’t already have a Fibre Channel investment,” said Onisick. “If they have a need for very high performance and very low throughput block data, FCoE is a great way to do it. If they can sustain a little more latency, iSCSI is fantastic. And if they have no need for block data, then NAS [network-attached storage] and NFS [network file system] is a fantastic option.”

For Ramsey, iSCSI was never a viable option because of Wellmont’s high-performance requirements. “We played around with iSCSI, but that was still going to run over TCP, and you’re still going to contend with buffering, flow control, windowing or packet drops and queuing, so we stayed away from it. What FCoE brings to the table: it doesn’t run over Layer 3. It’s an encapsulation of your Fibre Channel packet inside a native Layer 2 frame, and all we’re doing is transporting that between the server and up to the Nexus 2232 and the Nexus 5020.”
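The Layer 2 vs Layer 3 distinction Ramsey describes can be sketched in a few lines of Python. FCoE places a Fibre Channel frame directly inside an Ethernet frame identified by EtherType 0x8906, with no IP or TCP headers (hence the need for lossless, DCB-capable Ethernet), whereas iSCSI is simply SCSI carried over an ordinary TCP/IP session, by default to TCP port 3260 on the target. The frame layout below is deliberately simplified (the MAC addresses and payload are placeholders) and the target hostname is an assumption.

import socket
import struct

# FCoE: an FC frame encapsulated directly in a Layer 2 Ethernet frame,
# identified by EtherType 0x8906. No IP or TCP headers, so frame loss must
# be prevented by the network itself (DCB / lossless Ethernet).
dst_mac = bytes.fromhex("0efc00010203")   # hypothetical FCoE forwarder MAC
src_mac = bytes.fromhex("020000000001")   # hypothetical server CNA MAC
FCOE_ETHERTYPE = 0x8906
payload = b"\x00" * 60                    # placeholder for FCoE header + FC frame
fcoe_frame = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE) + payload
print(f"FCoE Ethernet frame: {len(fcoe_frame)} bytes, EtherType 0x{FCOE_ETHERTYPE:04x}")

# iSCSI: SCSI carried over a routable TCP/IP connection, by default TCP
# port 3260, so it runs across ordinary switches and NICs and relies on
# TCP for retransmission and flow control.
with socket.create_connection(("iscsi-target.example.com", 3260), timeout=5):
    print("TCP session to iSCSI target on port 3260 established")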

Network Convergence Beyond the Rack

…The most bang for the buck right now is to simplify the rack environment…

Although Cisco and other vendors will begin delivery of end-to-end FCoE switching capabilities this year, with technologies like Shortest Path Bridging and Transparent Interconnection of Lots of Links (TRILL), Ramsey doesn’t see moving beyond rack-level network convergence within the next five years.
“What you’re talking about is multi-hop FCoE, and Cisco is still working on fleshing that out. The most bang for the buck right now is to simplify the rack environment. If you want to go all FCoE, all your EMC stuff is going to have to be retrofitted with FCoE 10 Gigabit. And at that point you could probably get rid of your Fibre Channel. Maybe in five years we’ll look at that, but that’s not really going to buy us anything right now. We’re just not pushing into the type of bandwidth where we would need dedicated 10 Gigabit to the storage. We don’t need that much data. Where FCoE helps us is simplification inside the rack, making it faster, cheaper and smaller.”

Cloud is also not ready to look past the rack until he gets a better handle on management of converged networks.

“[Brocade] just announced a lot of this stuff, and we want to test out the management system. Once we prove that out, we’ll be looking to go further [with convergence]. We are trying to figure out the total cost of ownership.” …

Read the full articles in the PDF: http://viewer.media.bitpipe.com/1127846808_124/1301682190_579/Evolution-EU-1_final.pdf

#Cisco Data Center #Virtualisation: Enhanced Secure Multi-Tenancy Design Guide #VMWare #IAAS #NetAPP #UCS

May 26, 2011 Comments off

Goal of This Document:
Cisco®, VMware®, and NetApp® have jointly designed a best-in-breed Enhanced Secure Multi-Tenancy (ESMT) Architecture and have validated this design in a lab environment. This document describes the design of, and the rationale behind, the Enhanced Secure Multi-Tenancy Architecture. The design addresses the many issues that must be considered prior to deployment, as no two environments are alike. This document also discusses the problems that this architecture solves and the four pillars of an Enhanced Secure Multi-Tenancy environment.

Audience :
The target audience for this document includes, but is not limited to, sales engineers, field consultants, professional services, IT managers, partner engineering, and customers who wish to deploy an Enhanced Secure Multi-Tenancy (ESMT) environment consisting of best-of-breed products from Cisco, NetApp, and VMware.

Objectives:
This document is intended to articulate the design considerations and validation efforts required to design, deploy, and back up an Enhanced Secure Multi-Tenancy virtual IT-as-a-service.

Foundational Components:
Each implementation of an Enhanced Secure Multi-Tenancy (ESMT) environment will most likely be different due to the dynamic nature and flexibility provided within the architecture. For this reason this document should be viewed as a reference for designing a customized solution based on specific tenant requirements. The following outlines the foundational components that are required to configure a base Enhanced Secure Multi-Tenancy environment. Add additional features to these foundational components to build a customized environment designed for specific tenant requirements. It is important to note that this document not only outlines the design considerations around these foundational components, but also includes considerations for the additional features that can be leveraged in customizing an Enhanced Secure Multi-Tenancy environment.

The Enhanced Secure Multi-Tenancy foundational components include:

• Cisco Nexus® data center switches
• Cisco Unified Computing System
• Cisco Nexus 1000V Distributed Virtual Switch
• NetApp Data ONTAP®
• VMware vSphere™
• VMware vCenter™ Server
• VMware vShield™

Read the full Reference Architecture Design Guide here

Download PDF from my public SkyDrive Here

ANS group free morning seminar London 19th May – Infrastructure 3.0 Next Generation Data Centre – #UCS Unified Fabric #Cisco #VMware #NetApp

May 16, 2011 Comments off

http://www.ansgroup.co.uk/events/infrastructure-3-0-at-the-brewery

Infrastructure 3.0 at The Brewery
Location: James Watt Room – The Brewery, Chiswell Street London EC1Y 4SD

As a Cisco Gold Partner, VMware Premier Partner and NetApp Star Partner, ANS Group are at the forefront of delivering and scaling innovative Data Centre Solutions. Our leading experts will show you how to create a Unified Data Centre, ensuring all components are designed and implemented so that the environment acts as one.

Unified Data Centres require core technologies to work in synergy in order to reduce overall operating costs. Deploying point technologies in the data centre without consideration of adjacent systems does not create a flexible, fluid architecture moving forward. By designing the systems to work in tandem, the maximum technology benefits can be achieved from end to end, ensuring that the maximum possible savings can be made in every area. This seminar will focus specifically on Unified Computing, Unified Fabric and Application Delivery.

Join us for our free morning seminar to develop your understanding of the Next Generation Data Centre.