Archive

Archive for the ‘Dynamic Data Center’ Category

HP Networking Product Portfolio Guide #hp_networking #HPConverge #HPFlexNetwork

March 30, 2013 Comments off

Download the HP FlexNetwork Portfolio Guide here: http://bit.ly/16oebkL

HP FlexNetwork Portfolio Guide

HP Networking Product Portfolio Poster #hp_networking #HPConverge #HPFlexNetwork

March 30, 2013 Comments off

Download the Poster as PDF here: http://bit.ly/10j9tRq

HP Networking Product Portfolio Poster

HP FlexNetwork Portfolio Guide 2

HP Converged Infrastructure Reference Architecture Design Guide – Accelerating IT with HP Converged Infrastructure #HPConverge

March 29, 2013 Comments off

The technical white paper can be downloaded here: http://bit.ly/YJ00o0

Courtesy of HP Converged Infrastructure


HP #FlexFabric Reference Architecture – Applying HP Converged Infrastructure to data center networks #HPConverge

March 29, 2013 Comments off

HP FlexFabric Reference Architecture – Applying HP Converged Infrastructure to data center networks

The technical white paper can be downloaded here: http://bit.ly/ZsXvlK

Courtesy of HP Converged Infrastructure

 

HP ConvergedNetwork WhitePaper

HP ConvergedNetwork WhitePaper TOC

Overview of the Guide

This guide is intended for technology decision-makers, solution architects, and other experts tasked with improving data center networking. It can serve as a baseline for network planning and design projects.

It is said, “You cannot chart your course without first knowing whence you came.” This also applies to data center architecture. However, many technical guides take the opposite approach. They attempt to sway the reader towards specific technical directions based on the merits of a current technology or standard. That approach often loses the reader because it does not provide a context for why the new technical approach was developed in the first place.

This document will frequently reference technology trends in the data center that have been, and continue to be, driven by virtualization and standards. It will also introduce issues that confront data center architects in this fast-paced, results-driven, and security-minded industry.

Technical documents often promote a vendor’s products or vision. This document takes a slightly different approach. Rather than put HP’s vision for converged network infrastructure first, this guide instead presents the building blocks for that vision. It does this by first identifying the most important IT trend today—virtualization of resources at all levels. It then moves forward by introducing HP-supported technologies that enhance virtualized computer networks. Finally, it provides FlexFabric Reference Architecture examples for different types of virtualized server deployments using a layered approach.

The FlexFabric Reference Architecture guide is less a discussion of specific HP equipment and more an overall focus on two things—virtualization and the HP Networking technologies that support virtualization. It provides another level of detail to complement the HP Converged Infrastructure Reference Architecture Solution Block Design Guide: http://h20195.www2.hp.com/v2/GetPDF.aspx/4AA2-6453ENW.pdf

HP believes simplification is the overriding key to success in networks supporting virtualization. This document provides guidance on simplifying networks for virtualized deployments without sacrificing performance or deployment flexibility.

The major concept areas that will be covered are:

  • Virtual server networking
  • Securing the virtual edge
  • Managing the virtual edge
  • Converged network infrastructure

This approach allows data center architects and IT teams to develop new and more flexible data center models and methodologies. By doing so, IT can meet new demands head-on, rather than forcing businesses to adapt to technology limitations.

…Converged network infrastructure: unifying data and storage networks

Convergence is a technical term historically used to express the combining of voice and data onto the same network fabric. Now expressed as a converged network infrastructure, it encompasses the sharing of network resources between data and storage networks. This trend constitutes a move towards a unification of data and storage networks.

Network technologies like Fibre Channel, used to connect storage resources to computers, differ substantially from the network technologies used to connect computer networks. Although high in performance, these network types create two dissimilar data center networks (LAN/WAN and storage), which increases the number of cables and the management burden.

Technologies such as blade servers have addressed this challenge by drastically reducing the number of interconnections. Blade servers have simplified the network by reducing cables and Ethernet ports by over 75 percent. Converged network infrastructure can reduce data center complexity by an additional 50 percent, using technologies like Fibre Channel over Ethernet (FCoE), and more efficient technologies like data center bridging (DCB), also known as converged enhanced Ethernet (CEE).

ConvergedInfra

All of these emerging network technologies have an effect on how data centers are being planned for the future, but it is also important to understand how these technologies evolved.

The remainder of this section focuses on identifying what is currently used in data center network deployments, as well as identifying HP’s vision of converged network infrastructure….

HP #3PAR – VMware Environments #HPStorageGuy #HPStorage #HPConverge

March 28, 2013 Comments off

Courtesy of Calvin Zito HPStorageGuy Blog

Read the full blog article here: http://bit.ly/Vl9Ase. Plus this article: 7 reasons why HP 3PAR is the best storage for VMware

The traditional RAID era and how the storage world has changed

…The spindle count in RAID groups was calculated by storage architects based on host IOPS workload requirements. There was no real concept of throwing all spindles into one big “pool” and then carving and provisioning storage from that pool.

The architecture is similar to this example image depicting the traditional era; each RAID group was more or less dedicated to one particular workload.

Traditional RAID

Things have changed since then, and the concept of shared pools of storage was born; this was to drive initiatives like cloud computing, deduplication (if your array supported it natively), and storage tiering, amongst other things. By having this shared pool of resources, workloads are “spread out” across the storage resources, generating a bigger pool of grunt to draw from.

HP 3PAR does this in the form of wide striping, breaking storage down into “chunklets”.

Chunklets
The term chunklets may sound like some sort of breakfast cereal, but although they are not of the food variety, the concept still holds some nutritional value for your storage requirements. Here’s how they work:

  • An HP 3PAR array is populated with one or more disk types; these can be Fibre Channel, SATA, or SSD. In order to provision storage from these drives to a host, a Common Provisioning Group (CPG) needs to be created; this serves as a template for creating LUNs. Typically, a CPG uses a single disk type and a single set of RAID characteristics.
  • From there, LUNs can be created and provisioned to the host. When an ESXi host starts storing virtual machine data to the LUN – whether virtual disk data or metadata – each physical drive is broken down into 256 MB chunklets that the LUNs can use to store the data.
    One point to note is that there are also chunklets set aside for distributed sparing.
    As an example, a single 600 GB drive gives you 2400 chunklets at your disposal for virtual machine use (600 GB × 1024 MB ÷ 256 MB). When you add more shelves of drives, the picture gets bigger, as does the performance (see the sketch after this list).
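
To make the chunklet arithmetic concrete, here is a minimal Python sketch (my own illustration, not HP tooling) that reproduces the 600 GB example above and extends it to a hypothetical shelf of drives. It deliberately ignores the chunklets reserved for distributed sparing.

    # Toy chunklet arithmetic (illustration only, not HP code).
    CHUNKLET_MB = 256  # chunklet size referenced in the article

    def chunklets_per_drive(drive_gb: int) -> int:
        """Chunklets a single drive yields, ignoring sparing overhead."""
        return (drive_gb * 1024) // CHUNKLET_MB

    def chunklets_in_shelf(drive_gb: int, drive_count: int) -> int:
        """Total chunklets gained by adding a shelf of identical drives."""
        return chunklets_per_drive(drive_gb) * drive_count

    if __name__ == "__main__":
        print(chunklets_per_drive(600))     # 2400, as in the example above
        print(chunklets_in_shelf(600, 24))  # hypothetical 24-drive shelf -> 57600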

Wide Striping
From physical disks right through to the LUNs that are provisioned to the ESXi host, the result is that the chunklets are created across all of the spindle types in the array as defined in the CPG. This system-wide allocation supercharges performance for virtual workloads.

wide striping

chunklets

Multi-RAID? Sure!

One hard question for a storage architect to answer is “What type of RAID shall I use for this virtual environment?”. This question is typically answered with the usual “it depends” response. Different workloads call for different strategies, as different RAID types have different RAID penalty/performance considerations.

There is a consensus in the industry to consider the following rules of thumb (these are only rules of thumb and are not best practices in any form):

  • RAID 1/0 – Higher write-intensive random workloads usually suit this.
  • RAID 5 – Arguably one of the best all-rounders, offering a good balance of performance and redundancy. Modest random workloads are a good fit.
  • RAID 6 – HP 3PAR offers double parity protection in the form of RAID-MP, offering a higher redundancy (double failure) than RAID 5 but at the cost of usable storage and performance because of the added write penalty.

…Regardless of which RAID type is used, making a write I/O takes time. The quicker the write is made, the better the latency and throughput, and the less write penalty is observed.
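
As a rough illustration of that write penalty, the Python sketch below (a back-of-the-envelope model, not an HP sizing tool) estimates the front-end IOPS a pool of drives can sustain for a given read/write mix. The per-drive IOPS figure and the penalty values are common rules of thumb, not 3PAR-specific numbers.

    # Back-of-the-envelope write-penalty model (not an HP sizing tool).
    WRITE_PENALTY = {"RAID10": 2, "RAID5": 4, "RAID6": 6}  # common rules of thumb

    def effective_iops(drive_count: int, iops_per_drive: int,
                       raid: str, write_ratio: float) -> float:
        """Approximate front-end IOPS a pool can sustain for a given read/write mix."""
        raw = drive_count * iops_per_drive
        read_ratio = 1.0 - write_ratio
        return raw / (read_ratio + write_ratio * WRITE_PENALTY[raid])

    if __name__ == "__main__":
        # 64 x 15k drives at an assumed ~180 IOPS each, with 30% writes
        for raid in ("RAID10", "RAID5", "RAID6"):
            print(raid, round(effective_iops(64, 180, raid, 0.30)))

The same spindle count delivers very different front-end numbers as the write ratio and RAID type change, which is exactly why the honest answer is “it depends”.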

Dynamic Optimisation (DO) and Adaptive Optimisation (AO)

The end result is that your data gets automagically spread across all disks and all disk types in the 3PAR, with hot regions on fast disks and cold data on slow disks. The whole performance capability of the entire array is made available to all of your data automatically; this is how virtual workloads should be stored!
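
To show what “hot regions on fast disks” means in practice, here is a toy Python sketch (my own illustration, not HP’s AO algorithm): it simply ranks regions by recent I/O and fills the fastest tier first. The region names and tier capacities are made up.

    # Toy tiering sketch (my illustration, not HP's AO algorithm): rank regions
    # by recent I/O and fill the fastest tier first, spilling to slower tiers.
    from typing import Dict, List, Tuple

    def place_regions(region_io: Dict[str, int],
                      tiers: List[Tuple[str, int]]) -> Dict[str, str]:
        """Map region name -> tier name. `tiers` is ordered fastest first,
        each with a capacity expressed as a number of regions."""
        placement = {}
        hottest_first = sorted(region_io, key=region_io.get, reverse=True)
        tier_iter = iter(tiers)
        tier_name, slots = next(tier_iter)
        for region in hottest_first:
            while slots == 0:
                tier_name, slots = next(tier_iter)
            placement[region] = tier_name
            slots -= 1
        return placement

    if __name__ == "__main__":
        io_counts = {"db-log": 9000, "db-data": 4000, "archive": 50, "iso-store": 10}
        print(place_regions(io_counts, [("SSD", 1), ("FC", 2), ("SATA", 10)]))
        # db-log lands on SSD, db-data and archive on FC, iso-store on SATA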

Optimisation

In Closing.

Here’s the key takeaway to remember… An array’s performance is determined largely by how many disks of each disk type are installed in the array; the more drives you have in the CPG, the more throughput and overall IOPS are available to all of your VMFS datastores and, subsequently, your virtual machine workloads…

Next Gen #3PAR What you need to know #HPConverge #HPStorage

March 28, 2013 Comments off

Came across this good blog, courtesy of techopsguys.com

Below are a few extracts; the full article is here: http://bit.ly/YyAT9G

Some extracts:

  • Renamed Products:

    There were some basic name changes for the 3PAR product lines:

    • The HP 3PAR InServ is now the HP 3PAR StorServ
    • The HP 3PAR V800 is now the HP 3PAR 10800
    • The HP 3PAR V400 is now the HP 3PAR 10400
  • The 3PAR 7000-Series mid range done right:

    • The 3PAR 7000-series leverages all of the same tier one technology that is in the high end platform and puts it in a very affordable package
    • The 7200 & 7400 represent roughly a 55-65% discount over the previous F-class mid range 3PAR solution
    • The 7000 series comes in two flavors – a two-node 7200, and a two- or four-node 7400.
    • Note that it is not possible to upgrade a 7200 in place to a 7400, so if you want a 4-node-capable system you still have to choose the 7400 up front (you can, of course, purchase a two-node 7400 and add the other two nodes later).
  • Dual vs Quad Controller: The controller configurations are different between the two, and the 7400 has extra cluster cross-connects to unify the cluster across enclosures. The 7400 is the first 3PAR system that does not leverage a passive backplane for all inter-node communications.

    A unique and key selling point of having a 4-node 3PAR system is persistent cache, which keeps the cache in write-back mode during planned or unplanned controller maintenance.

  • Basic array specifications
    3PAR Array Specifications

    (Note: All current 3PAR arrays have dedicated gigabit network ports on each controller for IP-based replication)

  • Dual vs Quad controller:

    In a nutshell, vs the F-class mid range systems, the new 7000…

    • Doubles the data cache per controller to 12GB compared to the F200 (almost triple if you compare the 7400 to the F200/F400)
    • Doubles the control cache per controller to 8GB. The control cache is dedicated memory for the operating system, completely isolated from the data cache.
    • Brings PCI-Express support to the 3PAR mid range allowing for 8Gbps Fibre Channel and 10Gbps iSCSI
    • Brings the mid range up to spec with the latest 4th generation ASIC, and latest Intel processor technology.
    • Nearly triples the raw capacity
    • Moves from an entirely Fibre Channel-based system to a SAS back end with a Fibre Channel front end
    • Moves from exclusively 3.5″ drives to primarily 2.5″ drives, with a couple of 3.5″ drive options
    • Brings FCoE support to the 3PAR mid range (in 2013) for the four customers who use FCoE.
    • Cuts the size of the controllers by more than half
    • Obviously dramatically increases the I/O and throughput of the system with the new ASIC with PCIe, faster CPU cores, more CPU cores (in the 7400), and the extra cache.
  • Persistent Ports

This is a really cool feature as well – it gives the ability to provide redundant connectivity to multiple controllers on a 3PAR array without having to have host-based multipathing software. How is this possible? Basically, it is NPIV for the array. Peer controllers can assume the World Wide Names for the ports on their partner controller. If a controller goes down, its peer assumes the identities of that controller’s ports, instantaneously providing connectivity for hosts that were (not directly) connected to the ports on the downed controller. This eliminates pauses for MPIO software to detect faults and fail over, and generally makes life a better place.

HP claims that some other tier 1 vendors can provide this functionality for software changes, but they do not, today, provide it for hardware changes. 3PAR provides this technology for both hardware and software changes – on all of their currently shipping systems!
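
Conceptually, persistent ports behave like the toy Python model below (my own sketch, not 3PAR firmware): each controller owns a set of port WWNs, and when a controller is down its peer presents those WWNs itself, NPIV-style, so host paths never disappear. The WWNs and port names are hypothetical.

    # Conceptual persistent-ports model (my sketch, not 3PAR firmware).
    from typing import Dict, Optional

    port_wwns: Dict[str, Dict[str, str]] = {
        "ctrl0": {"0:1:1": "20:01:00:02:ac:00:00:01", "0:1:2": "20:01:00:02:ac:00:00:02"},
        "ctrl1": {"1:1:1": "21:01:00:02:ac:00:00:01", "1:1:2": "21:01:00:02:ac:00:00:02"},
    }

    def active_wwns(failed: Optional[str] = None) -> Dict[str, str]:
        """Return WWN -> controller currently presenting it."""
        presented = {}
        for ctrl, ports in port_wwns.items():
            owner = ctrl
            if ctrl == failed:
                # the surviving peer assumes the failed controller's identities
                owner = next(c for c in port_wwns if c != failed)
            for wwn in ports.values():
                presented[wwn] = owner
        return presented

    if __name__ == "__main__":
        print(active_wwns())         # every WWN on its home controller
        print(active_wwns("ctrl0"))  # ctrl0's WWNs now presented by ctrl1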

  • Virtualized Service Processor

All 3PAR systems have come with a dedicated server known as the Service Processor, which acts as a proxy of sorts between the array and 3PAR support. It is used for alerting as well as remote administration. The hardware configuration of this server was quite inflexible, and it made it needlessly complex to deploy in some scenarios (mainly due to having only a single network port).

The Service Processor was also rated to consume a mind-boggling 300W of power (it may have been a legacy typo, but that’s the number that was given in the specs).

The Service processor can now be deployed as a virtual machine!

  • Thick Conversion

I’m sure many customers have wanted this over the years as well. The new software will allow you to convert a thin volume to a thick (fat) volume. The main purpose of this, of course, is to save on licensing for thin provisioning when you have a volume that is fully provisioned (and where the likelihood of space reclamation on that volume is low as well). I know I could have used this years ago… I always shook my fist at 3PAR when they made it easy to convert to thin, but really impossible to convert back to thick (without service disruption, anyway).

  • Easy setup with Smart Start

Leveraging technology from the EVA line of arrays, HP has radically simplified the installation process of a 7000-series array, so much so that the customer can now perform the installation on their own without professional services. This is huge for this market segment. The up front professional services to install a mid range F200 storage system had a list price of $10,000 (as of last year anyway).

vMotion over Distance and Stretched VLAN across L3 WAN – Cisco OTV is your Answer @stretchcloud

March 24, 2013 1 comment

Not a new article, but this topic came up in debate for a current project I am working on, and it illustrates the point nicely of what to do when you don’t have L2 across sites and must use routed networks.

Below courtesy @stretchcloud. Read the article here: http://stretch-cloud.info/2012/07/vmotion-over-distance-and-stretched-vlan-cisco-otv-is-your-answer/

…I will talk about how you stretch a VLAN to a different DC.

Yes, I am talking about Cisco OTV. Cisco’s OTV provides a mechanism to transport native Layer-2 Ethernet frames to a remote site. With a standard Layer-3 WAN, there is no way to bridge layer-2 VLANs, and as a result, communication between two sites must be routed. Because of the routing aspect, it is not possible to define the same VLAN in two locations and have them both be actively transmitting data simultaneously.

Because OTV can operate over any WAN that can forward IP traffic, it can be used with a multitude of different underlying technologies. It provides mechanisms to control broadcasts at the edge of each site, just as with a standard Layer-3 WAN, but also gives you the ability to allow certain broadcasts to cross between the islands. OTV only needs to be deployed at certain edge devices, and is only configured at those points, making it simple to implement and manage. It also supports many features to optimize bandwidth utilization and provide resiliency and scalability.

Let’s look at how OTV operates. Here we have two sites separated by a standard Layer-3 WAN connection. OTV is deployed across the WAN by configuring it on an edge switch at both sites. Each end of the OTV “tunnel” is assigned an IP address. Both OTV switches maintain a MAC-to-next-hop IP table so that they know where to forward frames in a multi-site configuration.

When a host at one site sends a frame to a host at the other site, it can determine the MAC address of the other host, since it is on the same VLAN/network. The host sends the Ethernet frame, which is accepted by the OTV switch and then encapsulated in an IP packet, sent across the WAN, and subsequently decapsulated by the remote OTV switch. From here, the Ethernet frame is delivered to the destination as if it had been sent locally.
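
The forwarding step just described can be modelled with a small Python sketch (an illustration of the concept, not Cisco’s implementation): each OTV edge keeps a MAC-to-remote-edge-IP table and encapsulates frames destined for remote MACs in IP toward that edge. The addresses below are examples.

    # Toy OTV forwarding model (an illustration of the concept, not Cisco code).
    from typing import Dict, Optional

    class OtvEdge:
        def __init__(self, name: str, local_ip: str):
            self.name = name
            self.local_ip = local_ip
            self.mac_to_edge_ip: Dict[str, str] = {}  # learned remote MACs

        def learn(self, mac: str, remote_edge_ip: str) -> None:
            self.mac_to_edge_ip[mac] = remote_edge_ip

        def forward(self, frame: Dict[str, str]) -> Optional[Dict[str, object]]:
            """Encapsulate in IP if the destination MAC is remote, else deliver locally (None)."""
            remote_ip = self.mac_to_edge_ip.get(frame["dst_mac"])
            if remote_ip is None:
                return None
            return {"src_ip": self.local_ip, "dst_ip": remote_ip, "payload": frame}

    if __name__ == "__main__":
        site_a = OtvEdge("site-a", "192.0.2.1")
        site_a.learn("00:50:56:aa:bb:cc", "198.51.100.1")  # VM living at site B
        frame = {"src_mac": "00:50:56:11:22:33", "dst_mac": "00:50:56:aa:bb:cc", "data": "..."}
        print(site_a.forward(frame))  # frame encapsulated toward site B's edge IP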

Concept in Practice: Workload Relocation Across Sites

When a planned event will impact a significant number of resources, services must be moved to an alternate location. Unfortunately, because the networks are disjointed, there is no way to seamlessly migrate virtual servers from one location to another without changing IP addresses. As a result, Site Recovery Manager is used to provide an offline migration to the second site and to update DNS records to reflect the new IP addresses for the affected servers. Once the event is complete, another offline migration is performed to restore services to the primary site.

Concept in Practice: vMotion over Distance with OTV & Stretched VLANs

By using OTV in this situation, instead of having to use SRM to emulate a disaster situation, vMotion can be used to migrate the VMs from one site to the other. While this migration is still an offline event, it does provide a much simpler solution to implement and manage by allowing the VM to maintain its network identity in either location.

In addition to addressing the initial challenge, OTV provides additional benefits. By having two functional sites with the same network attributes, it is possible to split workloads for services, providing fault tolerance and redundancy. So, if the primary site does have a planned event, failing resources over to the second site may not even be necessary. It also allows the infrastructure to scale by having added ESXi servers operational at the second location to distribute the load.
