Archive

Archive for the ‘EMC’ Category

#vmware forum 2012 presentations #IaaS #PrivateCloud

June 6, 2012 Comments off

Event Presentations here:

http://www.vmwareforum2012.com/London/presentations

Determining an Optimal Design #EMC @scott_lowe

September 19, 2011 Comments off

LUN configuration best practices to boost virtual machine performance #VMware

September 19, 2011 Comments off

LUN configuration best practices to boost virtual machine performance.

Advanced virtual machine (VM) storage options can improve performance, but their benefits will go only so far if your physical logical unit numbers (LUNs) are not configured with best practices in mind.
Only when a LUN configuration meets the needs of your VM workloads can you significantly improve virtual machine performance. When it comes to LUN configuration, hardware choices, I/O optimization and VM placement are all important considerations…

DC & Virtualisation Titbits… NAS vs SAN, FCoE vs iSCSI? Don’t Believe the Hype #Cisco #EMC #NetApp

May 27, 2011 Comments off

Some useful titbits here taken from: http://viewer.media.bitpipe.com/1127846808_124/1301682190_579/Evolution-EU-1_final.pdf

Read the full articles in the PDF

Titbit #1

NAS versus SAN for data center virtualisation storage

There are two major approaches to network storage: network attached storage (NAS) and storage area network (SAN). They vary in both network architecture and how each presents itself to the network client. NAS devices leverage the existing IP network and deliver file-layer access.

NAS appliances are optimised for sharing files across the network because they are nearly identical to a file server.

SAN technologies, including Fibre Channel (FC) and iSCSI, deliver block-layer access, forgoing the file system abstractions and appearing to the client as essentially an unformatted hard disk.
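
To make the file-layer vs block-layer distinction concrete, here is a minimal sketch in Python; the mount point and device name are assumptions for illustration, not anything from the article. A NAS export is consumed as files on a network mount, while a SAN LUN shows up as a raw, unformatted block device the client must lay its own filesystem on.

    # File-layer access (NAS): the filer owns the filesystem; the client just
    # reads and writes files on a network mount. /mnt/nas is an assumed path.
    with open("/mnt/nas/vm/web01.vmdk", "rb") as f:
        header = f.read(512)            # ordinary file semantics

    # Block-layer access (SAN via FC, FCoE or iSCSI): the array presents an
    # unformatted LUN; the client sees a raw block device (/dev/sdb is an
    # assumed name) and must put its own filesystem, e.g. VMFS, on top of it.
    with open("/dev/sdb", "rb") as d:
        lba0 = d.read(512)              # read the first 512-byte sector directly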

FC operates on a dedicated network, requiring its own FC switch and host bus adapters in each server.

An emerging standard, Fibre Channel over Ethernet (FCoE), collapses the storage and IP network onto a single converged switch, but still requires a specialised converged networking adapter in each server.

SAN solutions have an advantage over NAS devices in terms of performance, but at the cost of some contention issues.

Titbit #2

FCoE vs iSCSI? How about the one that’s ready?

Vendors can push Fibre Channel over Ethernet (FCoE) all they want, but the technology is simply not ready for deployment, argues Stephen Foskett, Gestalt IT community organiser. But iSCSI is another story. “I am not a big fan of FCoE yet. The data centre bridging (DCB) extensions are coming … but we don’t yet have an end-to-end FCoE solution. We don’t have the DCB components standardised yet,” Foskett said.

What does Foskett think it will take to make FCoE work? “It’ll take a complete end-to-end network. I understand the incremental approach is probably now what most people are going to do. It’s not like they’re going to forklift everything and get a new storage array and get a new greenfield system, but right now you can’t do that,” Foskett said.

iSCSI, on the other hand, works over 10 Gigabit Ethernet today and lends itself to a total solution. So why aren’t vendors selling it? “iSCSI doesn’t give vendors a unique point of entry. They can’t say we’ve got iSCSI, so that makes us exceptional. But with FCoE they can say, ‘We are the masters of Fibre Channel’ or ‘We are the masters of Ethernet, so you can trust us.’ iSCSI works too well for anybody to have a competitive advantage,” Foskett said.

Before embarking on an FCoE implementation, ask:

Will the storage team or the networking team own the infrastructure? If co-managed, who has the deciding vote?

Which department will pay for it? How will chargeback be calculated and future growth determined?

Will the teams be integrated? Typically, the networking team is responsible for IP switches, while the storage team is responsible for Fibre Channel.

Who will own day-to-day operational issues? If a decision needs to be made regarding whether more bandwidth is given to local area network (LAN) or storage area network (SAN) traffic, who makes the call? Will companies have to create a single, integrated connectivity group?

Titbit #3

Choosing a convergence technology… FCoE or iSCSI? Does it matter?

FCoE gets all the data centre network convergence hype, but many industry veterans say iSCSI is another viable option. As an IP-based storage networking protocol, iSCSI can run natively over an Ethernet network. Most enterprises that use iSCSI today run the storage protocol over their own separate networks because convergence wasn’t an option on Gigabit Ethernet. But with 10 GbE switches becoming more affordable, iSCSI-based convergence is becoming more of a reality.

“Certainly iSCSI is the easier transition [compared to FCoE],” said storage blogger and IT consultant Stephen Foskett. “With iSCSI you don’t have to have data center bridging, new NICs, new cables or new switches.”

Ultimately the existing infrastructure and the storage demands of an enterprise will govern the choice of a network convergence path. “There are very few times where I will steer a customer down an FCoE route if they don’t already have a Fibre Channel investment,” said Onisick. “If they have a need for very high performance and very low throughput block data, FCoE is a great way to do it. If they can sustain a little more latency, iSCSI is fantastic. And if they have no need for block data, then NAS [network-attached storage] and NFS [network file system] is a fantastic option.”
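
Foskett’s point that iSCSI rides on ordinary TCP/IP is easy to illustrate. The sketch below (Python, using a documentation-only IP address rather than anything from the article) simply opens a plain TCP connection to the well-known iSCSI port, 3260, which is all the network plumbing an initiator needs before logging in to a target.

    import socket

    # Minimal sketch: reaching an iSCSI target portal needs nothing beyond an
    # ordinary TCP/IP network and the well-known iSCSI port, 3260.
    # 192.0.2.10 is a documentation-only address, not a real array; a real
    # initiator (e.g. open-iscsi) would follow the TCP connect with a Login PDU.
    try:
        with socket.create_connection(("192.0.2.10", 3260), timeout=5) as s:
            print("Plain TCP session to an iSCSI target portal:", s.getpeername())
    except OSError as exc:
        print("No target at the placeholder address:", exc)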
For Ramsey, iSCSI was never a viable option because of Wellmont’s high-performance requirements. “We played around with iSCSI, but that was still going to run over TCP, and you’re still going to contend with buffering, flow control, windowing or packet drops and queuing, so we stayed away from it. What FCoE brings to the table: it doesn’t run over Layer 3. It’s an encapsulation of your Fibre Channel packet inside a native Layer 2 frame, and all we’re doing is transporting that between the server and up to the Nexus 2232 and the Nexus 5020.”
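
Ramsey’s description of FCoE as a Fibre Channel frame carried inside a native Layer 2 frame can be sketched roughly as below (Python; the MAC addresses and FC payload are placeholders and the FCoE-specific header fields are omitted): the Fibre Channel frame sits directly behind an Ethernet header carrying the FCoE EtherType, with no IP or TCP layer in the path.

    import struct

    FCOE_ETHERTYPE = 0x8906                      # IEEE-assigned EtherType for FCoE

    dst_mac = bytes.fromhex("0efc00000001")      # placeholder destination MAC
    src_mac = bytes.fromhex("0efc00000002")      # placeholder CNA MAC
    fc_frame = b"\x00" * 36                      # stand-in for an encapsulated FC frame

    # Layer 2 only: Ethernet header, then the (simplified) FCoE payload.
    ethernet_frame = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE) + fc_frame
    print(len(ethernet_frame), "bytes, EtherType 0x%04x" % FCOE_ETHERTYPE)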

Network Convergence Beyond the Rack

…The most bang for the buck right now is to simplify the rack environment…

Although Cisco and other vendors will begin delivery of end-to-end FCoE switching capabilities this year, with technologies like Shortest Path Bridging and Transparent Interconnection of Lots of Links (TRILL), Ramsey doesn’t see a move beyond rack-level network convergence within the next five years.
“What you’re talking about is multi-hop FCoE, and Cisco is still working on fleshing that out. The most bang for the buck right now is to simplify the rack environment. If you want to go all FCoE, all your EMC stuff is going to have to be retrofitted with FCoE 10 Gigabit. And at that point you could probably get rid of your Fibre Channel. Maybe in five years we’ll look at that, but that’s not really going to buy us anything right now. We’re just not pushing into the type of bandwidth where we would need dedicated 10 Gigabit to the storage. We don’t need that much data. Where FCoE helps us is simplification inside the rack, making it faster, cheaper and smaller.”
Cloud is also not ready to look past the rack until he gets a better handle on management of converged networks.
“[Brocade] just announced a lot of this stuff, and we want to test out the management system. Once we prove that out, we’ll be looking to go further [with convergence]. We are trying to figure out the total cost of ownership.”…

Read the full articles in the PDF: http://viewer.media.bitpipe.com/1127846808_124/1301682190_579/Evolution-EU-1_final.pdf

#EMC Whitepaper: EMC #Clariion Virtual Provisioning – FLARE 30

April 6, 2011 Comments off
Categories: EMC, Storage & Backup

#EMC Whitepaper: Private #Cloud Practitioner’s Guide

April 5, 2011 Comments off

http://www.emc.com/collateral/software/white-papers/h7298-it-journey-private-cloud-wp.pdf

Not a new whitepaper, but it’s interesting to view it almost a year on, given the change in landscape.

Categories: Cloud Services - IaaS, EMC