HP #3PAR – VMware Environments #HPStorageGuy #HPStorage #HPConverge

March 28, 2013

Courtesy of Calvin Zito HPStorageGuy Blog

Read the full blog article here: http://bit.ly/Vl9Ase. Plus this article: 7 reasons why HP 3PAR is the best storage for VMware

The traditional RAID era and how the storage world has changed

…The spindle count in RAID groups was calculated by storage architects based on host IOPS workload requirements. There was no real concept of throwing all spindles into one big “pool” and then carving and provisioning storage from that pool.

The architecture is similar to the example image below depicting the traditional era; each RAID group was more or less dedicated to one particular workload.

Traditional RAID

Things have changed since then, and the concept of shared pools, or shared storage, was born; this was to drive initiatives like cloud computing, deduplication (if your array supported it natively), and storage tiering, amongst other things. With a shared pool of resources, workloads are “spread out” across the storage resources, generating a bigger pool of grunt to draw from.

HP 3PAR does this in the form of wide striping, breaking storage down into “chunklets”.

Chunklets
The term chunklets may sound like some sort of breakfast cereal, but while they’re not of the food variety, the concept definitely still holds some nutritional value for your storage requirements. Here’s how they work:

  • An HP 3PAR array is populated with one or more disk types; these can be Fibre Channel, SATA, or SSD. To provision storage from these drives to a host, a Common Provisioning Group (CPG) must be created; this serves as a template for creating LUNs. Typically, a CPG specifies a single disk type and a single set of RAID characteristics.
  • From there, LUNs can be created and provisioned to the host. When an ESXi host starts storing virtual machine data – whether it’s virtual disk data or metadata – each physical drive is broken down into 256 MB chunklets that the LUNs use to store the data.
    One point to note is that there are also chunklets reserved for distributed sparing.
    As an example, a single 600 GB drive gives you 2400 chunklets at your disposal for virtual machine use (600 GB × 1024 MB/GB ÷ 256 MB). When you add more shelves of drives, the picture gets bigger, as does the performance.
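The chunklet arithmetic above can be sketched in a few lines. This is a minimal illustration of the division described in the text, using the 256 MB chunklet size the article quotes; the function name is purely illustrative:

```python
# Chunklet size quoted in the article; some 3PAR models may differ.
CHUNKLET_MB = 256

def chunklets_per_drive(drive_gb: int) -> int:
    """Number of 256 MB chunklets a drive of the given capacity yields."""
    return (drive_gb * 1024) // CHUNKLET_MB

# A single 600 GB drive: 600 * 1024 / 256
print(chunklets_per_drive(600))  # 2400
```
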

Wide Striping
From the physical disks right through to the LUNs provisioned to the ESXi host, the result is that chunklets are created across all of the spindles of the disk type defined in the CPG. This system-wide allocation supercharges performance for virtual workloads.
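To make the idea concrete, here is a toy model of wide striping, a sketch only and not 3PAR’s actual allocator: a LUN’s chunklets are dealt out round-robin across every drive in the pool, so each spindle contributes to each LUN. The `wide_stripe` function and drive IDs are hypothetical:

```python
from itertools import cycle

def wide_stripe(lun_gb: int, drive_ids: list[str], chunklet_mb: int = 256) -> dict[str, int]:
    """Toy model: deal a LUN's chunklets round-robin across all drives,
    so every spindle in the pool backs a slice of every LUN."""
    n_chunklets = (lun_gb * 1024) // chunklet_mb
    allocation = {d: 0 for d in drive_ids}
    for _, drive in zip(range(n_chunklets), cycle(drive_ids)):
        allocation[drive] += 1
    return allocation

# A 100 GB LUN over 8 drives: 400 chunklets, 50 per drive.
print(wide_stripe(100, [f"pd{i}" for i in range(8)]))
```

Every drive ends up holding an equal share of the LUN, which is why adding shelves raises the performance ceiling for all existing LUNs.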

Wide striping

Chunklets

Multi-RAID? Sure!

One hard question for a storage architect to answer is “what type of RAID shall I use for this virtual environment?”. This question is typically answered with the usual “it depends” response: different workloads call for different strategies, as different RAID types carry different write penalties and performance considerations.

There is a consensus in the industry around the following rules of thumb (these are only rules of thumb and are not best practices in any form):

  • RAID 1/0 – Usually suits highly write-intensive random workloads.
  • RAID 5 – Arguably one of the best all-rounders, offering a good balance of performance and redundancy. Modest random workloads are a good fit.
  • RAID 6 – HP 3PAR offers double parity protection in the form of RAID MP, providing higher redundancy (tolerating a double drive failure) than RAID 5, but at the cost of usable capacity and performance because of the added write penalty.

…Regardless of which RAID type is used, every write I/O takes time. The quicker the write completes, the better the latency and throughput, and the lower the observed write penalty.
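The rules of thumb above can be quantified with the standard write-penalty arithmetic. This is a hedged sketch using the commonly cited penalty factors (2 for RAID 1/0, 4 for RAID 5, 6 for RAID 6), which are generic industry rules of thumb rather than HP 3PAR specifics; the function name is illustrative:

```python
# Common rule-of-thumb write penalties per host write, not vendor-specific figures.
RAID_WRITE_PENALTY = {"RAID 1/0": 2, "RAID 5": 4, "RAID 6": 6}

def backend_iops(host_iops: int, write_pct: float, raid: str) -> float:
    """Backend I/Os generated = reads + writes * RAID write penalty."""
    reads = host_iops * (1 - write_pct)
    writes = host_iops * write_pct
    return reads + writes * RAID_WRITE_PENALTY[raid]

# 5000 host IOPS at a 30% write mix:
for raid in RAID_WRITE_PENALTY:
    print(raid, backend_iops(5000, 0.30, raid))
# RAID 1/0 → 6500, RAID 5 → 9500, RAID 6 → 12500 backend IOPS
```

The same host workload costs nearly twice as many backend I/Os on RAID 6 as on RAID 1/0, which is exactly the trade-off the bullet list describes.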

Dynamic Optimisation (DO) and Adaptive Optimisation (AO)

The end result is that your data gets automagically spread across all disks and all disk types in the 3PAR, with hot regions on fast disks and cold data on slow disks. The whole performance capability of the entire array is made available to all of your data automatically; this is how virtual workloads should be stored!

Optimisation

In Closing

Here’s the key takeaway to remember… An array’s performance is determined primarily by how many disks of each type are installed: the more drives you have in the CPG, the more throughput and overall IOPS are available to all of your VMFS datastores and, subsequently, your virtual machine workloads.
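A rough back-of-the-envelope model of that takeaway: aggregate IOPS scales with the number of drives of each type. The per-spindle figures below are generic rule-of-thumb values, not HP specifications, and the inventory is invented for illustration:

```python
# Hypothetical per-spindle IOPS figures (typical rules of thumb, not HP specs).
IOPS_PER_DRIVE = {"FC 15k": 180, "SATA 7.2k": 80, "SSD": 5000}

def array_iops(inventory: dict[str, int]) -> int:
    """Rough aggregate backend IOPS: per-drive figure times drive count, summed."""
    return sum(IOPS_PER_DRIVE[t] * n for t, n in inventory.items())

# An invented inventory: 48 FC drives, 24 SATA, 8 SSD.
print(array_iops({"FC 15k": 48, "SATA 7.2k": 24, "SSD": 8}))  # 50560
```

Because wide striping puts every drive behind every LUN, adding drives to the CPG raises this ceiling for all datastores at once rather than for one RAID group.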
