Archive

Archive for the ‘HP – EVA/ 3 Par/ Left Hand / MSA’ Category

HP #3PAR & Virtualization #HPStorageGuy #HPStorage #HPConverge

March 28, 2013 Comments off

Courtesy of Paul Haverfield, via Calvin Zito's HPStorageGuy blog

 

HP #3PAR – VMware Environments #HPStorageGuy #HPStorage #HPConverge

March 28, 2013 Comments off

Courtesy of Calvin Zito's HPStorageGuy blog

Read the full blog article here: http://bit.ly/Vl9Ase. Plus this article: 7 reasons why HP 3PAR is the best storage for VMware

The traditional RAID era and how the storage world has changed

…The spindle count in RAID groups was calculated by storage architects based on host IOPS workload requirements. There was no real concept of throwing all spindles into one big “pool” and then carving and provisioning storage from that pool.

The architecture was similar to the example image below, which depicts the traditional era: each RAID group was more or less dedicated to one particular workload.

Traditional RAID

Things have changed since then: the concept of shared pools of storage was born to drive initiatives such as cloud computing, deduplication (if your array supported it natively), and storage tiering, amongst other things. With a shared pool of resources, workloads are “spread out” across all of the storage resources, giving every workload a much bigger pool of grunt to draw from.

HP 3PAR does this in the form of wide striping, breaking storage down into “chunklets”.

Chunklets
The term chunklets may sound like some sort of breakfast cereal, but although not of the food variety, the concept still holds plenty of nutritional value for your storage requirements. Here’s how they work:

  • An HP 3PAR array is populated with one or more disk types; these can be Fibre Channel, SATA, or SSD. In order to provision storage from these drives to a host, a Common Provisioning Group (CPG) needs to be created; this serves as a template for creating LUNs. Typically, a CPG uses a single disk type and a single set of RAID characteristics.
  • From there, LUNs can be created and provisioned to the host. When ESXi hosts start storing virtual machine data on the LUN – whether virtual disk data or metadata – each physical drive is broken down into 256 MB chunklets that the LUNs use to store that data.
    One point to note is that there are also chunklets set aside for distributed sparing.
    As an example, a single 600 GB drive gives you 2,400 chunklets at your disposal for virtual machine use (600 GB × 1024 MB ÷ 256 MB) – see the sketch after this list. When you add more shelves of drives, the picture gets bigger, as does the performance.
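
To make the arithmetic concrete, here is a minimal sketch of the chunklet maths described above. The 256 MB chunklet size comes from the article; the drive sizes and counts are hypothetical examples, not a sizing recommendation.

```python
# Minimal sketch of the chunklet arithmetic: each drive is carved into
# 256 MB chunklets, and more drives simply mean a bigger pool of them.

CHUNKLET_MB = 256

def chunklets_per_drive(drive_gb: int) -> int:
    """Number of 256 MB chunklets a single drive yields."""
    return (drive_gb * 1024) // CHUNKLET_MB

def chunklets_in_shelf(drive_gb: int, drive_count: int) -> int:
    """Total chunklets available across a shelf of identical drives."""
    return chunklets_per_drive(drive_gb) * drive_count

if __name__ == "__main__":
    # The 600 GB example from the article: 600 * 1024 / 256 = 2400 chunklets.
    print(chunklets_per_drive(600))      # 2400
    # A hypothetical shelf of 24 such drives.
    print(chunklets_in_shelf(600, 24))   # 57600
```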

Wide Striping
From the physical disks right through to the LUNs that are provisioned to the ESXi host, the result is that chunklets are created across all of the spindles of the disk type defined in the CPG. This system-wide allocation supercharges performance for virtual workloads.

wide striping

chunklets
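
Here is a toy sketch (not 3PAR code) of what wide striping means in practice: a LUN’s chunklets are allocated round-robin across every drive eligible under the CPG, so every spindle ends up contributing I/O to the LUN. The drive names, drive count, and LUN size below are made up for illustration.

```python
# Toy illustration of wide striping: chunklets for a LUN are spread
# round-robin across all drives in the CPG rather than a small RAID group.

from collections import Counter
from itertools import cycle

CHUNKLET_MB = 256

def stripe_lun(lun_gb: int, drives: list[str]) -> Counter:
    """Return how many of the LUN's chunklets land on each drive."""
    needed = (lun_gb * 1024) // CHUNKLET_MB
    placement = Counter()
    for _, drive in zip(range(needed), cycle(drives)):
        placement[drive] += 1
    return placement

if __name__ == "__main__":
    cpg_drives = [f"FC-{i:02d}" for i in range(16)]   # 16 hypothetical FC drives
    print(stripe_lun(100, cpg_drives))
    # A 100 GB LUN needs 400 chunklets, so each of the 16 drives holds ~25,
    # which is why every spindle contributes IOPS to the volume.
```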

Multi-RAID? Sure!

One hard question for a storage architect to answer is “What type of RAID shall I use for this virtual environment?”. It is typically answered with the usual “It depends” response: different workloads call for different strategies, as different RAID types carry different write penalties and performance considerations.

There is a consensus in the industry around the following rules of thumb (these are only rules of thumb and are not best practices in any form):

  • RAID 1/0 – Write-intensive random workloads usually suit this best.
  • RAID 5 – Arguably one of the best all-rounders, offering a good balance of performance and redundancy. Modest random workloads are a good fit.
  • RAID 6 – HP 3PAR offers double parity protection in the form of RAID-MP, giving higher redundancy than RAID 5 (it survives a double drive failure) but at the cost of usable capacity and performance because of the added write penalty.

…Regardless of which RAID type is used, making a write I/O takes time. The quicker the write completes, the better the latency and throughput, and the smaller the observed write penalty.
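
As a back-of-the-envelope illustration of those penalties, here is a minimal sketch using the common rule-of-thumb values (RAID 1/0 ≈ 2 back-end I/Os per write, RAID 5 ≈ 4, RAID 6 ≈ 6). These are generic industry rules of thumb, not 3PAR-specific numbers, and real arrays (write caching, full-stripe writes) can do better; the workload and per-disk IOPS figures are hypothetical.

```python
# Rough effect of the RAID write penalty on back-end IOPS and spindle count.
import math

WRITE_PENALTY = {"RAID10": 2, "RAID5": 4, "RAID6": 6}   # rule-of-thumb values

def backend_iops(front_iops: int, write_pct: float, raid: str) -> float:
    """Back-end IOPS needed to sustain a front-end workload on a given RAID type."""
    reads = front_iops * (1 - write_pct)
    writes = front_iops * write_pct
    return reads + writes * WRITE_PENALTY[raid]

def spindles_needed(front_iops: int, write_pct: float, raid: str,
                    iops_per_disk: int = 180) -> int:
    """Rough spindle count, assuming ~180 IOPS per 15k FC disk (an assumption)."""
    return math.ceil(backend_iops(front_iops, write_pct, raid) / iops_per_disk)

if __name__ == "__main__":
    # A hypothetical 5,000 IOPS front-end workload with 30% writes:
    for raid in WRITE_PENALTY:
        print(raid, backend_iops(5000, 0.3, raid), spindles_needed(5000, 0.3, raid))
    # RAID10 6500.0 37, RAID5 9500.0 53, RAID6 12500.0 70
```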

Dynamic Optimisation (DO) and Adaptive Optimisation (AO)

The end result is that your data gets automagically spread across all disks and all disk types in the 3PAR, with hot regions on fast disks and cold data on slow disks. The whole performance capability of the entire array is made available to all of your data automatically; this is how virtual workloads should be stored!

Optimisation
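
The following is a toy sketch of the idea behind Adaptive Optimisation: sample how busy each region of data is and place the hottest regions on the fastest tier. The tier names, thresholds, and region list are invented for illustration; the real AO feature operates on 3PAR regions according to its own policies.

```python
# Toy tiering decision: hot regions to SSD, cold regions to nearline, the
# rest to FC. Thresholds here are arbitrary example values.

def place_regions(region_iops: dict[str, int],
                  hot_threshold: int = 500,
                  cold_threshold: int = 50) -> dict[str, str]:
    """Map each data region to a tier based on its observed IOPS."""
    placement = {}
    for region, iops in region_iops.items():
        if iops >= hot_threshold:
            placement[region] = "SSD"
        elif iops <= cold_threshold:
            placement[region] = "NL (SATA)"
        else:
            placement[region] = "FC"
    return placement

if __name__ == "__main__":
    sample = {"region-01": 1200, "region-02": 300, "region-03": 10}
    print(place_regions(sample))
    # {'region-01': 'SSD', 'region-02': 'FC', 'region-03': 'NL (SATA)'}
```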

In Closing.

Here’s the key takeaway to remember: an array’s performance is largely determined by how many disks of each disk type are installed in the array. The more drives you have in the CPG, the more throughput and overall IOPS are available to all of your VMFS datastores and, subsequently, your virtual machine workloads.

Next Gen #3PAR What you need to know #HPConverge #HPStorage

March 28, 2013 Comments off

Came across this good blog, courtesy of techopsguys.com.

The full article is here: http://bit.ly/YyAT9G

Some extracts:

  • Renamed Products:

    There were some basic name changes for the 3PAR product lines:

    • The HP 3PAR InServ is now the HP 3PAR StorServ
    • The HP 3PAR V800 is now the HP 3PAR 10800
    • The HP 3PAR V400 is now the HP 3PAR 10400
  • The 3PAR 7000-series – mid range done right:

    • The 3PAR 7000-series leverages all of the same tier one technology that is in the high end platform and puts it in a very affordable package
    • The 7200 & 7400 represent roughly a 55–65% discount over the previous F-class mid range 3PAR solution
    • The 7000 series comes in two flavors – a two node 7200, and a two or four node 7400.
    • Note that it is not possible to upgrade a 7200 in place to a 7400, so if you want a 4-node-capable system you still have to choose the 7400 up front (you can, of course, purchase a two-node 7400 and add the other two nodes later).
  • Dual vs Quad Controller

    The controller configurations are different between the two, and the 7400 has extra cluster cross-connects to unify the cluster across enclosures. The 7400 is the first 3PAR system that does not leverage a passive backplane for all inter-node communications.

    A unique and key selling point of a 4-node 3PAR system is persistent cache, which keeps the cache in write-back mode during planned or unplanned controller maintenance.

  • Basic array specifications
    3PAR Array Specifications (see the table image in the original article)

    (Note: All current 3PAR arrays have dedicated gigabit network ports on each controller for IP-based replication)

  • Dual vs Quad controller:

    In a nutshell, compared with the F-class mid range systems, the new 7000…

    • Doubles the data cache per controller to 12 GB compared to the F200 (almost triple if you compare the 7400 to the F200/F400)
    • Doubles the control cache per controller to 8 GB. The control cache is dedicated memory for the operating system, completely isolated from the data cache.
    • Brings PCI-Express support to the 3PAR mid range allowing for 8Gbps Fibre Channel and 10Gbps iSCSI
    • Brings the mid range up to spec with the latest 4th generation ASIC, and latest Intel processor technology.
    • Nearly triples the raw capacity
    • Moves from an entirely Fibre Channel based system to a SAS back end with a Fibre Channel front end
    • Moves from exclusively 3.5″ drives to primarily 2.5″ drives with a couple of 3.5″ drive options
    • Brings FCoE support to the 3PAR mid range (in 2013) for the four customers who use FCoE.
    • Cuts the size of the controllers by more than half
    • Obviously dramatically increases the I/O and throughput of the system with the new ASIC with PCIe, faster CPU cores, more CPU cores (in the 7400), and the extra cache.
  • Persistent Ports

This is a really cool feature as well – it gives the ability to provide redundant connectivity to multiple controllers on a 3PAR array without needing host-based multipathing software to handle the switch. How is this possible? Basically, it is NPIV for the array. Peer controllers can assume the World Wide Names of the ports on their partner controller. If a controller goes down, its peer assumes the identities of that controller’s ports, instantaneously providing connectivity for hosts that were (not directly) connected to the ports on the downed controller. This eliminates the pause while MPIO software detects the fault and fails over, and generally makes life a better place.
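
To illustrate the mechanism, here is a toy model (not 3PAR code) of a peer controller assuming its partner’s port identities. The controller names and WWN values are invented for illustration.

```python
# Toy model of Persistent Ports: each controller owns port WWNs, and when a
# controller goes down its peer presents those WWNs too (NPIV-style), so
# hosts keep seeing the "same" target port.

class Controller:
    def __init__(self, name: str, port_wwns: list[str]):
        self.name = name
        self.native_wwns = list(port_wwns)      # WWNs this controller owns
        self.presented_wwns = list(port_wwns)   # WWNs it currently presents
        self.online = True

def fail_over(failed: Controller, peer: Controller) -> None:
    """Peer assumes the failed controller's port identities."""
    failed.online = False
    failed.presented_wwns = []
    peer.presented_wwns.extend(failed.native_wwns)

if __name__ == "__main__":
    node0 = Controller("node0", ["50:00:00:00:00:00:00:01"])
    node1 = Controller("node1", ["50:00:00:00:00:00:00:02"])
    fail_over(node0, node1)
    print(node1.presented_wwns)
    # node1 now presents both WWNs, so hosts zoned to node0's port stay
    # connected without waiting on host MPIO to detect the failure.
```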

HP claims that some other tier 1 vendors can provide this functionality for software changes, but they do not, today, provide it for hardware changes. 3PAR provides this technology for both hardware and software changes – on all of its currently shipping systems!

  • Virtualized Service Processor

All 3PAR systems have come with a dedicated server known as the Service Processor, which acts as a proxy of sorts between the array and 3PAR support. It is used for alerting as well as remote administration. The hardware configuration of this server was quite inflexible, and it made the Service Processor needlessly complex to deploy in some scenarios (mainly due to it having only a single network port).

The Service Processor was also rated to consume a mind-boggling 300 W of power (it may have been a legacy typo, but that’s the number given in the specs).

The Service processor can now be deployed as a virtual machine!

  • Thick Conversion

I’m sure many customers have wanted this over the years as well. The new software will allow you to convert a thin volume to a thick (fat) volume. The main purpose of this, of course, is to save on thin provisioning licensing when you have a volume that is fully provisioned (and the likelihood of space reclamation on that volume is low as well). I know I could have used this years ago. I always shook my fist at 3PAR when they made it easy to convert to thin, but practically impossible to convert back to thick (without service disruption, anyway).

  • Easy setup with Smart Start

Leveraging technology from the EVA line of arrays, HP has radically simplified the installation process of a 7000-series array, so much so that the customer can now perform the installation on their own without professional services. This is huge for this market segment. The up front professional services to install a mid range F200 storage system had a list price of $10,000 (as of last year anyway).

HP CloudSystem and Partners: Which cloud management tool to use? #Matrix #HPCloudSystem #MOE

January 16, 2013 Comments off

HP ASE Converged Infrastructure Architect Official Exam Certification Guide #Matrix HPCloudSystem #MOE #HPExpertOne

January 16, 2013 Comments off

Description:
This HP ExpertOne book will help you prepare for the Architecting the HP Matrix Operating Environment (HP0-D20) exam. You will learn key technologies and the HP Converged Infrastructure solutions as you prepare for the HP ASE – Converged Infrastructure Architect V1 certification. Acquiring this certification validates your ability to transform data centers by bringing together server, storage, and networking functions into an integrated and managed pool of IT resources. This guide will also continue to serve as a useful reference when you analyze customer requirements and recommend the correct HP Converged Infrastructure solution.

https://h30590.www3.hp.com/product/HP+ASE+Converged+Infrastructure+Architect+Official+Exam+Certification+Guide+Exam+HP0-D20-Hardcover-8383

Virtual Connect #FlexFabric Cookbook #CloudSystem Matrix #HP #IaaS #VMware

May 27, 2012 Comments off

http://h20000.www2.hp.com/bc/docs/support/SupportManual/c02616817/c02616817.pdf

The purpose of this Virtual Connect Cookbook is to provide users of Virtual Connect with a better understanding of the concepts and steps required when integrating HP BladeSystem and Virtual Connect Flex-10 or FlexFabric components into an existing network.
The scenarios in this Cookbook vary from simplistic to more complex while covering a range of typical building blocks to use when designing Virtual Connect Flex-10 or FlexFabric solutions.

LUN configuration best practices to boost virtual machine performance #VMware

September 19, 2011 Comments off


Advanced virtual machine (VM) storage options can improve performance, but their benefits will go only so far if your physical logical unit numbers (LUNs) are not configured with best practices in mind.
Only when a LUN configuration meets the needs of your VM workloads can you significantly improve virtual machine performance. When it comes to LUN configuration, hardware choices, I/O optimization and VM placement are all important considerations……
