#NetApp Releases Storage Best Practices for #VMware #vSphere 5 #FCoE #iSCSI #NAS #SAN


•An introduction to storage concepts in vSphere 5
•Updated storage maximums, supported options, and NetApp integration tables
•Support for the VSC with the vCenter Server Appliance (vCSA)
•Host Profiles
•Storage DRS, affinity rules and maintenance mode
•Storage I/O Control (SIOC)

LUN configuration best practices to boost virtual machine performance #VMware


Advanced virtual machine (VM) storage options can improve performance, but their benefits will go only so far if your physical logical unit numbers (LUNs) are not configured with best practices in mind.
Only when a LUN configuration meets the needs of your VM workloads can you significantly improve virtual machine performance. When it comes to LUN configuration, hardware choices, I/O optimization and VM placement are all important considerations…

DC & Virtualisation Titbits… NAS vs SAN, FCoE vs iSCSI? Don’t Believe the Hype #Cisco #EMC #NetApp

Some useful titbits here taken from: http://viewer.media.bitpipe.com/1127846808_124/1301682190_579/Evolution-EU-1_final.pdf

Read the full articles in the PDF

Titbit #1

NAS versus SAN for data center virtualisation storage

There are two major approaches to network storage: network attached storage (NAS) and storage area network (SAN). They vary in both network architecture and how each presents itself to the network client. NAS devices leverage the existing IP network and deliver file-layer access.

NAS appliances are optimised for sharing files across the network and are, in essence, dedicated file servers.

SAN technologies, including Fibre Channel (FC) and iSCSI, deliver block-layer access, forgoing file system abstractions and appearing to the client as essentially an unformatted hard disk.

FC operates on a dedicated network, requiring its own FC switch and host bus adapters in each server.

An emerging standard, Fibre Channel over Ethernet (FCoE), collapses the storage and IP network onto a single converged switch, but still requires a specialised converged networking adapter in each server.

SAN solutions have an advantage over NAS devices in terms of performance, but at the cost of some contention issues.
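The file-versus-block distinction above can be sketched in a few lines of Python. This is purely illustrative, not a real storage client: a NAS share is addressed by file name, while a SAN LUN is addressed by raw block offset and leaves the file system to the client.

```python
# Illustrative sketch only: contrast file-layer access (NAS)
# with block-layer access (SAN).

class NasShare:
    """A NAS appliance exposes a file system: clients address data by name."""
    def __init__(self):
        self._files = {}

    def write_file(self, name: str, data: bytes) -> None:
        self._files[name] = data

    def read_file(self, name: str) -> bytes:
        return self._files[name]


class SanLun:
    """A SAN LUN is an unformatted disk: clients address raw block offsets
    and must bring their own file system on top."""
    def __init__(self, num_blocks: int, block_size: int = 512):
        self.block_size = block_size
        self._blocks = bytearray(num_blocks * block_size)

    def write_block(self, lba: int, data: bytes) -> None:
        assert len(data) == self.block_size
        off = lba * self.block_size
        self._blocks[off:off + self.block_size] = data

    def read_block(self, lba: int) -> bytes:
        off = lba * self.block_size
        return bytes(self._blocks[off:off + self.block_size])


nas = NasShare()
nas.write_file("report.doc", b"quarterly numbers")

lun = SanLun(num_blocks=8)
lun.write_block(3, b"\x01" * 512)
```

The asymmetry is the point: the NAS client never thinks about offsets, and the SAN client never sees a file name.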

Titbit #2

FCoE vs iSCSI? How about the one that’s ready?

Vendors can push Fibre Channel over Ethernet (FCoE) all they want, but the technology is simply not ready for deployment, argues Stephen Foskett, Gestalt IT community organiser. But iSCSI is another story. “I am not a big fan of FCoE yet. The data centre bridging (DCB) extensions are coming … but we don’t yet have an end-to-end FCoE solution. We don’t have the DCB components standardised yet,” Foskett said. What does Foskett think it will take to make FCoE work? “It’ll take a complete end-to-end network. I understand the incremental approach is probably now what most people are going to do. It’s not like they’re going to forklift everything and get a new storage array and get a new greenfield system, but right now you can’t do that,” Foskett said. iSCSI, on the other hand, works over 10 Gigabit Ethernet today and lends itself to a total solution. So why aren’t vendors selling it? “iSCSI doesn’t give vendors a unique point of entry. They can’t say we’ve got iSCSI, so that makes us exceptional. But with FCoE they can say, ‘We are the masters of Fibre Channel’ or ‘We are the masters of Ethernet, so you can trust us.’ iSCSI works too well for anybody to have a competitive advantage,” Foskett said.

Before embarking on an FCoE implementation, ask:

•Will the storage team or the networking team own the infrastructure? If co-managed, who has the deciding vote?
•Which department will pay for it? How will chargeback be calculated and future growth determined?
•Will the teams be integrated? Typically, the networking team is responsible for IP switches, while the storage team is responsible for Fibre Channel.
•Who will own day-to-day operational issues? If a decision needs to be made regarding whether more bandwidth is given to local area network (LAN) or storage area network (SAN) traffic, who makes the call? Will companies have to create a single, integrated connectivity group?

Titbit #3

Choosing a convergence technology… FCoE or iSCSI? Does it Matter?

FCoE gets all the data centre network convergence hype, but many industry veterans say iSCSI is another viable option. As an IP-based storage networking protocol, iSCSI can run natively over an Ethernet network. Most enterprises that use iSCSI today run the storage protocol over their own separate networks because convergence wasn’t an option on Gigabit Ethernet. But with 10 GbE switches becoming more affordable, iSCSI-based convergence is becoming more of a reality.
“Certainly iSCSI is the easier transition [compared to FCoE],” said storage blogger and IT consultant Stephen Foskett. “With iSCSI you don’t have to have data center bridging, new NICs, new cables or new switches.”
Ultimately the existing infrastructure and the storage demands of an enterprise will govern the choice of a network convergence path. “There are very few times where I will steer a customer down an FCoE route if they don’t already have a Fibre Channel investment,” said Onisick. “If they have a need for very high performance and very low latency block data, FCoE is a great way to do it. If they can sustain a little more latency, iSCSI is fantastic. And if they have no need for block data, then NAS [network-attached storage] and NFS [network file system] is a fantastic option.”
For Ramsey, iSCSI was never a viable option because of Wellmont’s high-performance requirements. “We played around with iSCSI, but that was still going to run over TCP, and you’re still going to contend with buffering, flow control, windowing or packet drops and queuing, so we stayed away from it. What FCoE brings to the table is that it doesn’t run over Layer 3. It’s an encapsulation of your Fibre Channel packet inside a native Layer 2 frame, and all we’re doing is transporting that between the server and up to the Nexus 2232 and the Nexus 5020.”
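Ramsey’s point about Layer 2 encapsulation can be illustrated with a hedged sketch: an FCoE frame is simply the Fibre Channel frame carried inside an Ethernet frame with the FCoE Ethertype (0x8906), with no IP or TCP headers anywhere, which is exactly why it cannot be routed over Layer 3. The real encapsulation also adds a version field, SOF/EOF delimiters and padding, which this toy version omits.

```python
import struct

FCOE_ETHERTYPE = 0x8906  # IEEE-assigned Ethertype for FCoE

def encapsulate_fc_frame(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Simplified sketch of FCoE encapsulation: the Fibre Channel frame is
    carried whole inside a Layer 2 Ethernet frame. Note there is no IP or
    TCP header anywhere in the result. (Real FCoE adds a version field,
    SOF/EOF delimiters and padding, omitted here for clarity.)"""
    ether_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    return ether_header + fc_frame

# Hypothetical MAC addresses and payload, for illustration only.
frame = encapsulate_fc_frame(b"\xaa" * 6, b"\xbb" * 6, b"FC-PAYLOAD")
```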

Network Convergence Beyond the Rack

…The most bang for the buck right now is to simplify the rack environment…

Although Cisco and other vendors will begin delivering end-to-end FCoE switching capabilities this year with technologies like Shortest Path Bridging and Transparent Interconnection of Lots of Links (TRILL), Ramsey doesn’t see moving beyond rack-level network convergence within the next five years.
“What you’re talking about is multi-hop FCoE, and Cisco is still working on fleshing that out. The most bang for the buck right now is to simplify the rack environment. If you want to go all FCoE, all your EMC stuff is going to have to be retrofitted with FCoE 10 Gigabit. And at that point you could probably get rid of your Fibre Channel. Maybe in five years we’ll look at that, but that’s not really going to buy us anything right now. We’re just not pushing into the type of bandwidth where we would need dedicated 10 Gigabit to the storage. We don’t need that much data.
Where FCoE helps us is simplification inside the rack, making it faster, cheaper and smaller.”
Cloud is also not ready to look past the rack until he gets a better handle on management of converged networks.
“[Brocade] just announced a lot of this stuff, and we want to test out the management system. Once we prove that out, we’ll be looking to go further [with convergence]. We are trying to figure out the total cost of…

Read the full articles in the PDF: http://viewer.media.bitpipe.com/1127846808_124/1301682190_579/Evolution-EU-1_final.pdf

#NetApp Back to Basics: Deduplication

What is Deduplication?
…Deduplication is an important storage efficiency tool that can be used alone or in conjunction with other storage efficiency solutions.

This month, Tech OnTap is pleased to bring you the second installment of Back to Basics, a series of articles that discuss the fundamentals of popular NetApp technologies to help you understand and get started using them.

In 2007, NetApp introduced deduplication technology that significantly decreases storage capacity requirements. NetApp deduplication improves efficiency by locating identical blocks of data and replacing them with references to a single shared block after performing a byte-level verification check. This technique reduces storage capacity requirements by eliminating redundant blocks of data that reside in the same volume or LUN.

NetApp deduplication is an integral part of the NetApp Data ONTAP® operating environment and the WAFL® file system, which manages all data on NetApp storage systems. Deduplication works “behind the scenes,” regardless of what applications you run or how you access data, and its overhead is low.
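The mechanism described above, fingerprint blocks and then verify byte-for-byte before replacing duplicates with references, can be sketched in a toy form. NetApp’s in-WAFL implementation is of course far more sophisticated; this is only an illustration of the idea:

```python
import hashlib

def deduplicate(blocks: list) -> tuple:
    """Toy sketch of block-level deduplication in the spirit described
    above: fingerprint each block, and when a fingerprint matches an
    already-stored block, do a byte-level comparison before replacing
    the new copy with a reference to the shared block."""
    store = {}   # fingerprint -> the single stored copy of the block
    refs = []    # per-logical-block references into the store
    for block in blocks:
        fp = hashlib.sha256(block).hexdigest()
        # Byte-level verification guards against fingerprint collisions.
        if fp in store and store[fp] == block:
            refs.append(fp)          # duplicate: keep only a reference
        else:
            store[fp] = block        # first occurrence: store the data
            refs.append(fp)
    return store, refs

# Three logical 4 KB blocks, two of them identical:
store, refs = deduplicate([b"A" * 4096, b"B" * 4096, b"A" * 4096])
```

Three logical blocks come in, but only two unique blocks are stored; the space saved grows with the amount of duplication in the dataset, which is why the answer to “how much space can you save?” always depends on the data.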

A common question is, “How much space can you save?” We’ll come back to this question in more detail later, but, in general, it depends on the dataset and the amount of duplication it contains.

Use Cases: NetApp has been measuring the benefits of deduplication in real-world environments since deduplication was introduced. The most popular use cases are VMware® and VDI, home directory data, and file services. Microsoft SharePoint® and Exchange 2010 are also rapidly gaining traction.

Read More…

ANS group free morning seminar London 19th May – Infrastructure 3.0 Next Generation Data Centre – #UCS Unified Fabric #Cisco #VMware #NetApp


Infrastructure 3.0 at The Brewery
Location: James Watt Room – The Brewery, Chiswell Street London EC1Y 4SD

As a Cisco Gold Partner, VMware Premier and NetApp Star Partner, ANS Group are at the forefront of delivering and scaling innovative Data Centre Solutions. Our leading experts will show you how to create a Unified Data Centre, ensuring all components are designed and implemented so that the environment acts as one.

Unified Data Centres require core technologies to work in synergy in order to reduce overall operating costs. Deploying point technologies in the data centre without consideration of adjacent systems does not create a flexible, fluid architecture moving forward. By designing the systems to work in tandem, the maximum technology benefits can be achieved from end to end, ensuring that the maximum possible savings can be made in every area. This seminar will focus specifically on Unified Computing, Unified Fabric and Application Delivery.

Join us for our free morning seminar to develop your understanding of the Next Generation Data Centre.

Ethernet versus FC – Great ‘Surfer vs Banker’ Analogy #Cisco #NetApp #DCB

Read Part 1: http://www.networkworld.com/community/blog/ethernet-adapts-data-center-applications-%E2%80%93-pa

Read Part 2: http://www.networkworld.com/community/blog/ethernet-adapts-data-center-applications-%E2%80%93-p-0

Many networks need to marry their Fibre Channel SAN protocols to Ethernet. But Ethernet is an easy-going protocol (let’s call it West Coast) and Fibre Channel is a structured protocol (East Coast). “Data Center Bridging” will be to these two what the middle of the country is to the Coasts: the means by which the two connect…

If we compare and contrast Ethernet and Fibre Channel (East Coast/West Coast derived protocols), we see that Ethernet is the laid-back West Coast surfer that will try to deliver your frames on time and in order, but if it can’t, you get a “Sorry dude, couldn’t make it happen” response. You’ll be OK, though, because TCP will retransmit, or, for UDP, it was probably real-time traffic and you hopefully didn’t notice the clipping.

Fibre Channel on the other hand is a very structured and regimented East Coast protocol that won’t tolerate delays and drops. Significant efforts are made to ensure this on-time delivery, including a hop-by-hop buffering system and classes of service that can guarantee in-order delivery. If Fibre Channel frames hit the deck, bad things happen. Most applications and operating systems don’t like it when their storage is pulled out from under them while the network converges – recent personal experience was a great reinforcement of this principle. Wonder why your SAN admins get nervous when you mention FCoE? The laissez-faire approach of Ethernet is the reason.

So how do we solve this challenge and merge the East Coast rigidity of Fibre Channel onto the West Coast laid back Ethernet – Data Center Bridging is the answer. Data Center Bridging (DCB) is a collection of enhancements to Ethernet that make it capable of providing lossless transport for protocols like FCoE….
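The “lossless” behaviour DCB adds can be illustrated with a toy model of priority flow control: the receiver signals PAUSE before its buffer can overflow, so frames queue at the sender instead of hitting the deck. The thresholds below are arbitrary illustrative values, not anything from the DCB standards:

```python
class LosslessLink:
    """Toy model of DCB-style priority flow control: the receiver issues
    PAUSE before its buffer overflows, so frames wait at the sender
    instead of being dropped. Plain Ethernet would simply drop frames
    once the buffer filled."""
    def __init__(self, buffer_size: int, pause_threshold: int):
        self.buffer = []
        self.buffer_size = buffer_size
        self.pause_threshold = pause_threshold
        self.paused = False
        self.dropped = 0

    def send(self, frame) -> bool:
        if self.paused:
            return False              # sender must hold the frame and retry
        self.buffer.append(frame)
        if len(self.buffer) >= self.pause_threshold:
            self.paused = True        # PAUSE sent before overflow is possible
        return True

    def drain(self, n: int) -> None:
        del self.buffer[:n]
        if len(self.buffer) < self.pause_threshold:
            self.paused = False       # resume the sender

link = LosslessLink(buffer_size=10, pause_threshold=8)
accepted = sum(link.send(i) for i in range(20))
```

Because PAUSE fires below the buffer limit, the drop counter stays at zero no matter how fast the sender pushes; that back-pressure, rather than drop-and-retransmit, is what makes the fabric safe for FC frames.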

Ethernet inherently doesn’t provide the ability to multi-path because STP is blocking our redundant links to mitigate loops in the network. So if you are implementing Fibre Channel over Ethernet and have promised your SAN team that the network won’t lose their Fibre Channel frames, the next hurdle will be multi-pathing. (See previous post that discussed the ways Fibre Channel and Ethernet don’t get along, and why Data Center Bridging is the answer.)

How do we cross that chasm? There are two approaches that relate to how you plan to implement FCoE in your network… single-hop, and…multi-hop.

Read more of the full articles at above Link

Credit: http://blog.ioshints.info/2010/10/ethernet-versus-fc-surfer-versus-banker.html

#FCoE #SAN multi-hop technology primer

Read the full SearchStorage article: FCoE SAN multi-hop technology primer.

What you will learn in this tip: Fibre Channel over Ethernet (FCoE) storage-area network (SAN) technology is becoming more popular in data storage environments, but performance issues, primarily the lack of multi-hop switching support, need to be addressed and could potentially stunt the growth of the technology. Find out what vendors and users are doing to improve FCoE SAN performance.

FCoE SAN is gaining broad support from storage and network vendors, and customer adoption is also rising. Because it’s a new protocol and relies on many new features, FCoE remains somewhat limited in terms of interoperability and flexibility. One often-criticized element is the lack of multi-hop switching support in FCoE SANs, but what exactly does this mean?

A quick Fibre Channel primer

Fibre Channel (FC) initiators contain a number of Node Ports (“N_Port”) that connect to the Fabric Ports (“F_Port”) on switches. FC switches talk to each other using Expansion Ports (“E_Port”) before finally communicating with the N_Port on the storage array. This allows them to route traffic through the SAN to avoid data loss and congestion. FCoE SANs adopt a virtual version of this configuration, with a “VN_Port” talking to a “VF_Port,” and (if they support it) the network switches using “VE_Ports” to exchange data over an inter-switch link (ISL).

One major difference between a Fibre Channel fabric and Ethernet network is intelligence: The fabric itself actively participates in access and routing decisions. Although it’s distributed, the FC fabric has some intelligence and thus FC switches are more involved in the network than basic Ethernet switches. In particular, each switch participates in making decisions about where to send data traffic, so each stream of initiator-to-target traffic gets its own route through the SAN rather than sharing a single route as in an Ethernet LAN with spanning tree path management.
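The per-stream routing contrast can be sketched as follows. This is a deliberately simplified model: real fabrics run FSPF with exchange- or flow-based load balancing, while classic Ethernet runs spanning tree, and the round-robin assignment here only stands in for that behaviour:

```python
def assign_routes(streams, paths, fabric_style: bool):
    """Toy contrast of path selection. A Fibre Channel fabric can place
    each initiator-to-target stream on its own route across equal-cost
    ISLs; a spanning-tree Ethernet network blocks redundant links, so
    every stream shares the single surviving path."""
    if fabric_style:
        # Spread streams across all available paths (round-robin here;
        # real fabrics use FSPF with per-exchange or per-flow balancing).
        return {s: paths[i % len(paths)] for i, s in enumerate(streams)}
    # Spanning tree: redundant paths are blocked; one path carries all.
    return {s: paths[0] for s in streams}

# Hypothetical ISL names and traffic streams, for illustration only.
paths = ["ISL-A", "ISL-B"]
streams = ["host1->array", "host2->array", "host3->array", "host4->array"]
fabric = assign_routes(streams, paths, fabric_style=True)
stp = assign_routes(streams, paths, fabric_style=False)
```

In the fabric case both inter-switch links carry traffic; in the spanning-tree case one link sits idle while the other carries everything, which is the multi-pathing hurdle the article goes on to describe.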

NetApp Storage & Best Practices – VMware / Hyper-V

Some useful resources…

NetApp Site: NetApp and VMware vSphere Storage Best Practices PDF

NetApp Site: Best Practices for running VMware vSphere on NAS PDF

NetApp Site: Comparison of Storage Protocol Performance in VMware vSphere 4

NetApp Site: VMware vSphere 4 Performance with Extreme IO Workloads

NetApp Site: VMware vSphere 4 Performance Best Practices

NetApp Site: VMware vSphere 4 Exchange Storage Protocols Performance – NFS, iSCSI, and Fibre Channel

NetApp Site: VMware vSphere 4 SharePoint Performance

NetApp Blog: NetApp and Hyper-V

NetApp Blog: NetApp and VMware