Technology Brief
Blade Server I/O and Workloads of the Future
Comparing Cisco UCS and HP BladeSystem
November 2014
Where IT perceptions are reality
New Generation of Blade Servers and Workloads
HP and Cisco are the two most popular blade server brands on the planet. A big reason why is that the networks embedded in the HP BladeSystem and Cisco UCS products are the most powerful and flexible networks for virtualized workloads.
On August 28th, HP announced new HP ProLiant Gen9 servers, including several enhancements to the HP BladeSystem I/O design. Shortly afterwards, on September 4th, Cisco announced long-awaited enhancements to UCS.
The UCS enhancements centered on the UCS Mini blade system, which is targeted at SMBs and the edge of the enterprise. There were no significant changes to the 5108 chassis used for larger systems, which, after 5 years, is getting long in the tooth. With only 1.2Tb/s of mid-plane bandwidth, the 5108 is limited in its ability to support more than 8 servers and single links greater than 10Gb.
The new HP BladeSystem c7000 Platinum chassis offers 7.16Tb/s of mid-plane bandwidth, with new support for 20GbE downlinks as well as 40GbE uplinks. The HP ProLiant Gen9 BladeSystem also takes converged networks to the next level with hardware offload of important new networking protocols supporting tunneling of L2 traffic over L3 networks and scale-out file storage traffic.
The new HP and Cisco blade systems are hitting the market just as hyperscale-driven applications and data center
architectures are reaching the enterprise. Our conclusion? There’s a new generation of blade servers and workloads, but
the same HP advantage.
This Report Compares 3 Facets of Cisco UCS and HP BladeSystem I/O
To set the stage for comparing the capabilities that will matter most in the future, this Technology Brief reviews the trend towards a new mix of applications and server workloads in Webscale private clouds. It then compares the three I/O capabilities which will differentiate blade servers in Webscale environments:
1. Performance
2. Consolidation
3. Flexibility
Inflection Point
Intel Xeon E5-2600 v3
In 2014, the server industry reached a major inflection point with the introduction of a new generation of Intel server processors: v3 of the Xeon E5-2600 family. At this inflection point, x86 server product lines are being refreshed, and new technologies are being introduced which complement the capabilities of the Xeon E5-2600.
[Diagram: the shift from Virtualized Servers (hierarchical networks, LAN/SAN convergence with FCoE, 10GbE, virtual networks) to Webscale Servers (20GbE and 40GbE; converged cloud, RDMA, FC and Ethernet connectivity).]
Complementary Technologies are what
Differentiate Blade Server Offerings
Given that HP and Cisco blade systems will feature the same Xeon E5-2600 processor, it's the complementary technologies which will differentiate the systems. The factors which are expected to separate leaders from followers are 20GbE connectivity to servers, 40GbE uplinks from blade server chassis to network, switchless connectivity to storage, and convergence of Ethernet, FCoE, native Fibre Channel, RDMA, and cloud tunneling protocols on the same port. Servers with the best implementations of these technologies will be better suited to handle traditional workloads, plus a new class of Webscale workloads.
Workload Mix of the Future
Share Everything Applications + Share Nothing Applications
Enterprise IT organizations, which for the most part have become private cloud builders, are blending traditional Enterprise and Hyperscale IT into a Webscale model. Traditional IT encompasses support for workloads such as SQL databases and ERP applications, with “share-everything” infrastructure featuring many VMs sharing physical servers, and many servers sharing networked storage.
Webscale IT must support traditional workloads as well as a new generation of workloads such as NoSQL databases and predictive analytics. Many of the new applications are designed to run in “share-nothing” distributed computing environments featuring scale-out server and storage clusters.
Workload Mix of the Future
Private cloud builders are also trending towards cloud platforms like OpenStack and vCloud. Cloud operating systems incorporate a software defined data center architecture which allows a single cloud operating system to manage servers, storage and networking systems in different data centers. As a result, new cloud tunneling protocols, such as VXLAN and NVGRE, are being deployed as a software defined data center foundation, along with a new generation of NICs which can offload the tunnel protocol processing.
Traditional IT + Hyperscale IT = Webscale IT
Environment for Workloads of the Future
Webscale Private Cloud
The defining characteristic of a Webscale Private Cloud is data center infrastructure which efficiently supports two distinctly different application environments: a shared infrastructure environment and a distributed infrastructure environment. A Webscale Private Cloud also includes an overlapping environment with software defined (virtualized) servers, networking and storage.
Converged Networks Make it Possible
A key capability of blade servers in a Webscale Private Cloud is a higher level of network convergence. In the next generation of 2.0 Converged Networks, the RDMA network protocol for scale-out clusters, and hardware offload of tunneling protocol processing for carrying L2 traffic over L3 networks, are integrated as standard features in Webscale CNAs and/or switches.
Webscale Private Cloud Environment
Shared environments include servers heavily loaded with virtual machines, and networked storage shared by many servers. Distributed environments support database and application workloads spread across many servers, and scale-out storage. Cloud operating platforms such as vCloud and OpenStack are introducing management tools for a software defined data center, including software defined networks.
Anatomy of Blade Server I/O
Application Performance Depends on a Healthy Network
Every blade server has an entire network embedded to carry east-west traffic between servers, and north-south traffic to top-of-rack, end-of-row, and core switches upstream. The I/O performance of applications running on blade servers can differ significantly depending on the capabilities of their embedded networks.
The Blade Servers
The Products

                        Cisco UCS            HP BladeSystem
                        in 5108 Chassis      in c7000 Chassis
Chassis Size            6U                   10U
Max. Blade Servers      8                    16
Mid-plane Bandwidth     1.2Tb/s              7.168Tb/s
Server Downlinks        10Gb                 20Gb
Chassis Uplinks         10Gb                 10/40Gb
Interconnect Options    Ethernet/FCoE        Ethernet/FCoE, Fibre Channel, SAS, InfiniBand
I/O Slots               2                    8

Cisco UCS and HP BladeSystem
In the following pages we will compare the performance, network convergence, flexibility and software defined networking of the Cisco UCS in a 5108 chassis, and the HP BladeSystem in a c7000 Platinum chassis.
Comparing I/O Performance
Why it Matters
Meeting application performance service levels is directly related to the I/O performance of a blade server system. In addition, the new generation of servers with Xeon E5-2600 processors, hosting a generation of demanding new applications, needs higher bandwidth and lower latency I/O than ever before. And in Webscale private cloud environments, performance is needed more cost-effectively than ever before, bringing CPU efficiency to the forefront of important performance metrics.
I/O Performance Metrics
In the following pages, we will examine the capabilities of Cisco UCS and HP BladeSystem against the
following I/O performance metrics:
· Bandwidth
· Useable Bandwidth
· Latency
· CPU Efficiency
I/O Bandwidth
80GbE is Specmanship
There are discussions in the blogosphere about how UCS achieves 80Gb of bandwidth per blade. Based on the Cisco UCS B200 M4 Blade Server Spec Sheet, that scenario refers to the configuration of a Cisco B200 M4 blade with a VIC1340 adapter and an added mezzanine card (port expander) that allows four 10Gb links to each IO Module (2208 FEX), for a total of 80Gb of bandwidth (2 x 4 x 10Gb).
40GbE is Expensive
From the point of view of pure technology, 40GbE is a perfect solution for delivering the performance needed in a single server link, and for eliminating the need for teaming. But the cost per port for 40GbE network adapters is typically more than 3x the cost per port of 10GbE adapters. In another case of specmanship, Cisco is promoting the availability of a 40Gb port on the new 6324 Fabric Interconnect (FI) for the UCS Mini. However, as of the writing of this report, the 40Gb port, called a Scalability Port, is not a native 40GbE port and can only be broken out into four 1GbE or 10GbE SFP+ (4x1G or 4x10G) connections. In addition, this 40GbE port requires an expensive software license to activate.
20GbE is Juuust Right
A choice that has only recently been made available to server architects is 20GbE. Each 20GbE port offers bandwidth equivalent to twenty 1GbE ports or two 10GbE ports. 20GbE is juuust right because a single 20GbE port is enough bandwidth for all but the most I/O intensive supercomputing applications, and is available for a fraction of the price of 40GbE technology. According to the Cisco UCS B200 M4 Blade Server Spec Sheet, all Cisco UCS 5108 midplane, FEX and FI network connectivity ports are currently 10GbE, including the 40Gb scalability port on the 6324 FI which must be split into multiple 10GbE ports.
The HP BladeSystem provides 20GbE links between blade server adapters and the chassis interconnects, as well as inter-switch links. With HP Flex-20
technology, Ethernet network adapters deliver twice the bandwidth of 10Gb adapters, while reducing the management overhead associated with multiple
10Gb adapters.
With 20Gb downlinks, HP Virtual Connect FlexFabric-20/40 F8 Modules offer more than twice the throughput of other 10Gb extenders and fabric
interconnects. In addition, ports on the HP Virtual Connect FlexFabric-20/40 F8 Modules can be dynamically configured to support Ethernet, Fibre
Channel, or FCoE.
Almost no Oversubscription with HP BladeSystem
Oversubscription occurs when the I/O capacity of the adapter ports connected to chassis switch ports exceeds the capacity of the switch ports. The oversubscription ratio is the sum of the capacity of the adapter ports divided by the capacity of the switch ports. Below you can see that if you actually configured 80Gb of bandwidth per UCS blade as mentioned above, you would be building a blade server network with 4:1 oversubscription. In contrast, a comparably configured HP BladeSystem would result in 1.1:1 oversubscription, nearly a 4x improvement over Cisco.
Oversubscription
HP BladeSystem: Oversubscription = 1.1:1
· Adapter side: 2 ports x 20Gb from the FLOM + 2 ports x 20Gb from the mezzanine card, x 16 servers = 1,280Gb
· Mid-plane: 16 ports x 20Gb to 4 Virtual Connect modules = 1,280Gb
· Switch side: 4 Virtual Connect modules, each with 4 x 40Gb ports + 8 x 10Gb ports + 2 x 20Gb ISL ports = 1,120Gb

Cisco UCS: Oversubscription = 4:1
· Adapter side: 4 ports x 10Gb from VICs + 4 ports x 10Gb from expansion cards (80Gb), x 8 servers = 640Gb
· Mid-plane: 8 ports x 10Gb x 2 IO Modules = 160Gb
· Switch side: 8 ports x 10Gb x 2 IO Modules = 160Gb

The arithmetic behind these ratios is worked through in the sketch below.
Oversubscription occurs when the I/O capacity of the adapter ports connected to chassis switch ports exceeds the capacity of the switch
ports. The oversubscription ratio is the sum of the capacity of the adapter ports divided by the capacity of the switch port. Below you can
see that if you actually configured 80Gb of bandwidth per UCS blade as mentioned above, you would be building a blade server network
with 4:1 oversubscription. In contrast, a comparable configured HP BladeSystem would result in 1.1:1 oversubscription — almost a 100%
improvement in oversubscription when compared to Cisco.
What Oversubscription Means
[Chart: aggregate bandwidth vs. number of 80Gb blade servers, showing Cisco hitting its 160Gb/s chassis bandwidth limit at 2 servers and HP hitting its 1.12Tb/s limit at 15.]
Blade Server I/O Hits the Wall
If you configured 80Gb of bandwidth per blade on both a Cisco UCS and an HP BladeSystem, the Cisco 5108 chassis switches are oversubscribed with the second server. In contrast, fifteen HP blade servers can be configured before reaching the bandwidth limit of the HP c7000 Platinum chassis switches.
Two fully configured UCS blade servers hit the limits of the 5108 fabric extenders (FEX). It takes fifteen fully configured HP Gen9 servers to hit the bandwidth limit of the HP FlexFabric modules. The sketch below reproduces this arithmetic.
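A quick check of the "hit the wall" claim, assuming 80Gb configured per blade against each chassis' switch capacity (an illustrative calculation built on the report's figures):

    # Blades at 80Gb each before aggregate adapter I/O exceeds switch capacity.
    def blades_within_limit(switch_gb, gb_per_blade=80):
        return switch_gb // gb_per_blade

    print(blades_within_limit(160))   # 2: the second UCS blade saturates the FEX
    print(blades_within_limit(1120))  # 14: 14 x 80Gb reaches 1,120Gb exactly;
                                      # the chart shows the wall at the 15th blade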
RDMA over Ethernet (RoCE)
InfiniBand networks were invented to overcome the need to plow through the Ethernet protocol stack to complete an I/O
transaction. InfiniBand boosts performance by eliminating layers of the stack for Remote Direct Memory Access (RDMA). The
Ethernet industry responded by developing an enhanced version of Ethernet called Converged Ethernet (CE), featuring Priority
Flow Control which is necessary to support RDMA over Converged Ethernet (RoCE). Blade systems with switches supporting
CE, and with NICs supporting RDMA, can deliver I/O with lower latency and less CPU usage than previous generations of CNAs.
HP ProLiant Gen9 blade servers incorporate 20Gb FlexibleLOM NICs which are RDMA NICs. Cisco has introduced RDMA LOM
and Mezz NICs called the VIC1340 and VIC1380, respectively.
[Diagram: I/O without RDMA vs. I/O with RDMA, which bypasses layers of the Ethernet protocol stack.]
RoCE Blade Environment
Networked Storage Killer Apps for RoCE
A killer app for RoCE is SMB 3.0 file servers, where users accessing shared storage experience the response time of local storage. File servers turbo-charged with RoCE are commercially available via two Windows Server 2012 features called SMB Multichannel and SMB Direct. With SMB Multichannel, SMB 3.0 automatically detects the RDMA capability and creates multiple RDMA connections for a single session. This allows SMB to use the high throughput, low latency and low CPU utilization offered by SMB Direct.
HP FlexFabric 20Gb adapters (RDMA NICs) are certified by Microsoft for use in the killer app described above. As of 11/14/14, the VIC 1340 is not certified by Microsoft for SMB Direct.
RoCE Blade Environment
In this diagram, a single HP BladeSystem with the HP 6125XLG Ethernet Blade Switches required to support RoCE is a high performance environment for 3 app clusters and 1 file server cluster. Hyper-V automatically senses the presence of RDMA NICs, then uses multi-channel communications to evacuate VMs in seconds, and uses direct memory access for higher I/O to shared storage inside the blade server.
IOPS Performance Benefits of RoCE
Sequential Read Performance (IOPS)
The HP FlexFabric 20Gb 2-port 650FLB Adapter (Emulex OCe14102) with RoCE, used with Windows Storage Server and SMB Direct, provided 82% more IOPS than previous generation adapters without RoCE.
Efficiency Benefits of RoCE
Server Power Efficiency (IOPS per Watt)
The HP FlexFabric 20Gb 2-port 650FLB Adapter (Emulex OCe14102) with RoCE, used with Windows Storage Server and SMB Direct, delivered 80% higher server power efficiency than adapters not using RoCE.
Response Time Benefits of RoCE
Read I/O Response Time (Seconds)
The HP FlexFabric 20Gb 2-port 650FLB Adapter with RoCE (Emulex OCe14102), used with Windows Storage Server and SMB Direct, reduced I/O response time by 70% compared to NICs without SMB Direct capabilities.
The Cost Benefits of RoCE Offload
Hardware Offload
A key to achieving efficient use of processing power is adapter offload of networking protocols so that
application server CPU cycles are not wasted on network protocol processing. Using a software
initiator instead of hardware offload requires that every TCP/IP, FCoE, and iSCSI packet be sent over
the PCI bus to the NIC. A constant PCI bus busy state can interfere with traffic to other devices on the
PCI bus.
The lack of offload can have a big impact on CPU utilization. For example, a single adapter running an
iSCSI software initiator can utilize 30% of the server CPU for iSCSI protocol processing. Add more
adapters and VMs, and more CPU is needed for network protocol processing.
The lack of offload is expensive. The cost of 30% CPU utilization for a $20,000 server is $6,000 — a
cost that can be easily avoided by simply deploying a network adapter with iSCSI offload.
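As a back-of-the-envelope illustration of that claim (the 30% figure and the server price are the report's; the arithmetic is the only thing the snippet adds):

    # The report's example: a software iSCSI initiator consuming ~30% of server CPU.
    def protocol_processing_cost(server_cost, cpu_fraction=0.30):
        """Dollar value of server capacity lost to network protocol processing."""
        return server_cost * cpu_fraction

    print(protocol_processing_cost(20_000))  # 6000.0 -> the $6,000 in the text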
Cisco UCS 1300 Series VIC adapters support TCP, FCoE, NVGRE, VXLAN and RoCE offload. HP FlexFabric adapters add iSCSI offload to that list. It is worth noting that at the time this report was written, HP 20Gb adapter VXLAN offload is certified by VMware, while as of 11/14/14 the Cisco VIC 1340/1380 VXLAN offload does not appear on the VMware Compatibility Guide.
The Lack of Offload Can Be Expensive
[Chart: total cost per server, broken into cost of server plus cost of network protocol processing.]

Server cost    Cost of network protocol processing
$10,000        $3,000
$15,000        $4,500
$20,000        $6,000
$25,000        $9,000
There are a variety of different network protocols supported by adapters, and many are used simultaneously. The more
protocol processing that is done in the adapter, the more of your server investment can be applied to applications
instead of network protocol processing.
Comparing I/O Consolidation
Why it Matters
IT consolidation is hugely important because it represents less hardware and simplified
management. The utilization of storage media leaped when storage was configured in a
SAN and could be shared by many servers. The utilization of physical servers
dramatically increased when multiple virtual servers could be hosted on a single physical
server. Similarly, network utilization increases when more network protocols can run on a
single cable, adapter or switch.
Consolidation Metrics
There are two metrics for I/O consolidation: the convergence of network protocols, and
the consolidation of cables into higher bandwidth links.
· Network Convergence
· Cable Consolidation
Wanted: One Blade Server Network for LAN,
SAN, Cluster and SDN Traffic
A new best practice for data center managers is to converge traditional shared computing infrastructure
with their growing infrastructure for distributed apps. This is made possible by a new generation of
network adapters and switches with support for the RDMA, VXLAN and NVGRE protocols. Support for
these protocols enables blade servers to converge LAN, SAN, Cluster and SDN traffic on a single
network. It also allows data center managers to use software defined data center tools.
The HP 20Gb FlexibleLOM adapters support stateless hardware offload of TCP, iSCSI and FCoE protocols for LAN/SAN convergence, as well as hardware offload of RDMA, VXLAN and NVGRE for efficient support of cluster and tunnel traffic. The Cisco VIC1340 supports all of the same protocols, with hardware offload for all of the above except iSCSI. A sketch for checking such offload features from a host follows.
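For readers who want to verify what their adapters actually offload, here is a hedged sketch that parses `ethtool -k` output on a Linux host; feature names vary by driver, and "tx-udp_tnl-segmentation" is the flag commonly associated with VXLAN-style tunnel segmentation offload (an assumption about your NIC, not a statement about the HP or Cisco drivers):

    import subprocess

    def offload_features(iface="eth0"):
        # Parse `ethtool -k <iface>` into a {feature: "on"/"off"} map.
        out = subprocess.run(["ethtool", "-k", iface],
                             capture_output=True, text=True, check=True).stdout
        features = {}
        for line in out.splitlines()[1:]:  # skip the "Features for ..." banner
            name, _, state = line.partition(":")
            if state.split():
                features[name.strip()] = state.split()[0]
        return features

    print(offload_features().get("tx-udp_tnl-segmentation", "n/a"))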
Network Convergence Road Map
Convergence 1.0 (LAN + SAN): IP, iSCSI and FCoE over Converged Ethernet (CE).
Convergence 2.0 (LAN + SAN + Clusters + SDN): IP, iSCSI, FCoE, RoCE, VXLAN and NVGRE on the same port.
At the Xeon E5-2600 inflection point, specialized adapters will no longer be needed to support RDMA. The new class of adapters will also support new tunneling protocols which are essential components of software defined data centers.
A Perfect Fit for Webscale Private Clouds
Network Convergence 2.0
The added support for RDMA over Converged Ethernet, NVGRE and VXLAN allows one adapter port on a blade server to support four network environments: LAN, SAN, cluster and SDN. Hardware offload allows the blade server to use precious CPU resources for applications, instead of for network protocol processing.
Cable Consolidation
A Single 40Gb Link Eliminates Cables for 40 x 1Gb Links or 4 x 10Gb Links
Until recently, 40GbE was used mostly for inter-switch connectivity and in the core of the network. The availability of 40GbE ports on servers sitting on the edge of the network has presented the opportunity for IT pros to consolidate dozens of 1GbE links and handfuls of 10GbE links with a single cable. This is an area where the HP BladeSystem stands out.
The Cisco UCS architecture makes extensive use of teaming of 10Gb ports to build uplinks with higher bandwidth. That means lots of cables. Even the 40Gb port on the UCS Mini must be split into four cables. In contrast, the Virtual Connect modules on the HP BladeSystem include four 40GbE ports, which in the apples-to-apples comparison below reduced the number of cables needed from 24 to 2.
Configuring Redundant 40Gb Uplinks for 16
Blade Servers
This diagram shows an apples-to-apples comparison of 16 blade servers configured with redundant connections between servers and switches, and redundant uplinks. Many more cables are needed in the Cisco UCS configuration because the switches are external, and because of the lack of 40Gb ports. Note the UCS Mini has a 40Gb port, but it can only be used in a 4 x 10GbE configuration.
[Diagram: Cisco UCS uses 24 cables (4 x 10Gb uplinks); HP uses 2 cables (1 x 40Gb uplinks). A hedged reconstruction of the cable count follows.]
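The diagram does not itemize the 24 cables, so the breakdown below is a hypothetical reconstruction consistent with the text: 8 x 10Gb links from each of the 2 IO Modules to the external Fabric Interconnects, plus 2 x 40Gb of uplink bandwidth delivered as 10Gb cables:

    # Hypothetical cable count; the 16 + 8 split is an assumption, not the
    # report's itemization.
    ucs_chassis_to_fi = 8 * 2      # 8 x 10Gb links per IO Module, 2 IO Modules
    ucs_uplinks = (2 * 40) // 10   # 2 x 40Gb of uplink bandwidth as 10Gb cables
    print(ucs_chassis_to_fi + ucs_uplinks)  # 24

    hp_uplinks = 2                 # redundant 1 x 40Gb uplinks; switches are internal
    print(hp_uplinks)              # 2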
Comparing I/O Flexibility
Why it Matters
A new era of agility awaits IT organizations who implement cloud operating systems designed to
manage multiple software defined data centers. Years required for a generation of hardware change
will be replaced by months required to deploy a software update. A foundation for this capability is
overlay networks with tunneling of L2 traffic across data centers using L3 networks. Support for
tunneling protocols is embedded in a new class of network adapters making it easy for private cloud
builders to integrate their servers into a cloud platform. Conversely, IT organizations want to continue
using native Fibre Channel SANs and want the flexibility to choose “if” and “when” they converge LANs
and SANs on Ethernet.
I/O Flexibility Metrics
There are two capabilities which are expected to affect I/O flexibility in Webscale private clouds.
· More efficient delivery of tunnel traffic with hardware offload of tunnel protocol processing
· Support for native Fibre Channel
Tunneling Unlocks the Cloud
Live Migrations Are a Killer App for VXLAN and NVGRE
One of the most valuable functions of server virtualization is live migration. This function frees system administrators from the time-consuming and complex process of moving workloads to optimize performance or mitigate a hardware failure. However, moving VMs on different networks requires extensive network reconfiguration. IT organizations using data center infrastructure dispersed in public, private or hybrid clouds simply can't configure all servers and VMs on one local network, and need a tunneling mechanism to extend live migrations.
Virtual Extensible LAN (VXLAN) and Network Virtualization using Generic Routing Encapsulation (NVGRE) are protocols for deploying overlay (virtual) networks on top of Layer 3 networks. VXLAN and NVGRE are used to isolate apps and tenants in a cloud and migrate virtual machines across long distances.
While VXLAN and NVGRE allow live migrations across racks and data centers, RoCE accelerates the migrations themselves. In a Microsoft TechEd demo, migrating Windows Server 2012 to a like system took just under 1 minute 26 seconds. Windows Server 2012 R2 performed the same migration in just over 32 seconds. Using RoCE during the live migration process combined with SMB Direct, it took just under 11 seconds, without utilizing added CPU resources.
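To make the tunneling mechanism concrete, here is a minimal sketch of VXLAN framing per RFC 7348: the original L2 frame is wrapped in an 8-byte VXLAN header carrying a 24-bit virtual network identifier (VNI), then carried over UDP port 4789. The byte layout is the standard's; the helper itself is illustrative, and offload-capable NICs do this work in hardware:

    import struct

    def vxlan_header(vni):
        # 8-byte VXLAN header: I flag set, 24-bit VNI, reserved bytes zero.
        assert 0 <= vni < 2**24
        return struct.pack("!II", 0x08 << 24, vni << 8)

    inner_frame = b"\x00" * 64          # placeholder Ethernet frame
    packet = vxlan_header(5001) + inner_frame
    print(packet[:8].hex())             # 0800000000138900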
Live Migrations Across the Cloud
[Diagram: VMs migrating between data centers through overlay network tunnels.]
Efficient use of the cloud requires protocols allowing the creation of virtual networks, and allowing Layer 2 network services to traverse Layer 3 networks without network reconfiguration.
Storage Networks
Support for Native Fibre Channel Needed for I/O Flexibility
Based on IT Brand Pulse surveys, 40% of IT organizations are not converging with FCoE. For the 40% of IT professionals who have been too busy to look at FCoE, or who say they have no plans to converge their LANs and SANs, parallel Ethernet and Fibre Channel infrastructure will be deployed.
The modular design of blade servers makes them inherently flexible. But not all blade server platforms are equal when it comes to hosting multiple heterogeneous virtualized workloads and delivering I/O flexibility.
The Cisco UCS blade servers support Ethernet/FCoE connectivity.
The flexible HP BladeSystem supports Ethernet/FCoE, SAS, InfiniBand and Fibre Channel connectivity.
Wanted: Ethernet & Fibre Channel Networks
In 2014, the prevalent data center network architecture remains a parallel network architecture, including a mix of specialized NIC, iSCSI, and Fibre Channel host adapters, as well as Ethernet and Fibre Channel switched fabrics.
Cisco UCS blade servers support only Ethernet connectivity. Adoption of FCoE technology is required to access installed Fibre Channel resources.
Advantage HP
Blade Server Systems                             Cisco UCS in 5108 Chassis                           HP BladeSystem in c7000 Chassis
Chassis Size                                     6U                                                  10U
Max. Blade Servers                               8                                                   16
Mid-plane Bandwidth                              1.2Tb/s                                             7.16Tb/s
Max. Embedded Switches                           2                                                   8
Support for native 20Gb Ethernet                 No                                                  Yes
Support for native 40Gb Ethernet                 No                                                  Yes
  (not including 40Gb port used in 4 x 10 mode)
Support for native Fibre Channel                 No                                                  Yes
Support for native InfiniBand                    No                                                  Yes
Oversubscription                                 4:1                                                 1.1:1
Hardware offload:
  Fibre Channel over Ethernet (FCoE)             Yes                                                 Yes
  iSCSI                                          No                                                  Yes
  TCP offload engine (TOE)                       Yes                                                 Yes
  RoCE offload engine (ROE)                      Yes                                                 Yes
  VXLAN offload engine (VOE)                     Yes (not yet qualified by VMware)                   Yes
  NVGRE offload engine (NOE)                     Yes (not yet qualified by Microsoft for SMB Direct) Yes
Designed for Workloads of the Future
The ProLiant Gen9 Blade Server is designed for I/O flexibility with a choice of FlexFabric converged networking or parallel Ethernet and Fibre Channel networks. The ProLiant Gen9 Blade Server is also fully compliant with Windows Server 2012 Virtual Fibre Channel, an innovation that will play an important role in the virtualization of Tier-1 workloads with Microsoft Hyper-V.

718203-B21 HP LPe1605 16Gb Fibre Channel HBA
• Native Fibre Channel server adapter
• Over 12 million ports shipped on this stack
• The HP Virtual Connect FlexFabric 20/40 F8 module supports "FlatSAN" direct connectivity to native Fibre Channel 3PAR storage at a lower cost than using Fibre Channel switches

HP FlexFabric 20Gb 2-port 650FLB Adapter
• Ethernet LAN on Motherboard (LOM) or Mezz adapter
• Dual 10/20GbE ports
• Supports LAN, NAS, iSCSI and FCoE connectivity
• FlexFabric ready
• Supports RoCE for scale-out cluster connectivity
• Supports NVGRE and VXLAN for migrating VMs across the cloud
• The HP Virtual Connect FlexFabric 20/40 F8 module supports LAN, SAN, NAS, iSCSI and FCoE connectivity
Summary
Infrastructure of the past is functionally defined and purpose-built. Servers are servers, networking is networking
and storage is storage. These purpose-built devices are deployed with little ability to change the function as
needs change. In the future, infrastructure needs to be more transformative, taking the shape of business
demands.
Potential power and flexibility are locked inside the aging Cisco UCS 5108 chassis, which severely limits the use of new high-bandwidth networks and of any network other than Ethernet/FCoE.
The new HP BladeSystem answers the call with:
• A new level of convergence which will allow for resources to be allocated at a very granular level, improving
efficiencies and ensuring optimal performance as workload demands change.
• Interfaces to the software-defined data center. HP ProLiant Gen9 blade servers possess the capability to
respond to intelligent orchestration of infrastructure resources in real-time, as applications and user needs
change.
• A cloud-ready architecture that is scale-out, agile, and always on.
• Workload-optimized for traditional share-everything applications and new share-nothing applications.
Resources
Related Links
OCe14000 Test Report
HP FlexFabric Adapters Provided by Emulex
HP BladeSystem
HP Virtual Connect Technology
HP BladeSystem and Cisco UCS Comparison
Cisco Fabric Extender
Cisco UCS Virtual Interface Card 1340
Cisco UCS 6324 Fabric Interconnect Data Sheet
Cisco UCS Ethernet Switching Modes
IT Brand Pulse
About the Author
Joe Kimpler is a senior analyst responsible for IT Brand Pulse Labs. Joe’s team manages the delivery of technical
services including hands-on testing, product reviews, total cost of ownership studies and product launch collateral.
He has over 30 years of experience in information technology and has held senior engineering and marketing
positions at Fujitsu, Rockwell Semiconductors, Quantum and QLogic. Joe holds an engineering degree from the
University of Illinois and an MBA in marketing.
💋High Class Call Girls Noida 💯Call Us 🔝 9899900591 🔝💃Independent Noida Escort...
 
ℂall Girls Kolkata 😍 Call 0000000 Vip Escorts Service Kolkata
ℂall Girls Kolkata 😍 Call 0000000 Vip Escorts Service Kolkataℂall Girls Kolkata 😍 Call 0000000 Vip Escorts Service Kolkata
ℂall Girls Kolkata 😍 Call 0000000 Vip Escorts Service Kolkata
 
Ahmedabad Call Girls Service 🔥 7014168258 🔥 High Profile Call Girls Ahmedabad
Ahmedabad Call Girls Service 🔥 7014168258 🔥 High Profile Call Girls AhmedabadAhmedabad Call Girls Service 🔥 7014168258 🔥 High Profile Call Girls Ahmedabad
Ahmedabad Call Girls Service 🔥 7014168258 🔥 High Profile Call Girls Ahmedabad
 
Coimbatore Call Girls 💯Call Us 🔝 7374876321 🔝 💃 Independent Female Escort Ser...
Coimbatore Call Girls 💯Call Us 🔝 7374876321 🔝 💃 Independent Female Escort Ser...Coimbatore Call Girls 💯Call Us 🔝 7374876321 🔝 💃 Independent Female Escort Ser...
Coimbatore Call Girls 💯Call Us 🔝 7374876321 🔝 💃 Independent Female Escort Ser...
 
Call Girls In Jalandhar👯‍♀️ 7339748667 🔥 Safe Housewife Call Girl Service Hot...
Call Girls In Jalandhar👯‍♀️ 7339748667 🔥 Safe Housewife Call Girl Service Hot...Call Girls In Jalandhar👯‍♀️ 7339748667 🔥 Safe Housewife Call Girl Service Hot...
Call Girls In Jalandhar👯‍♀️ 7339748667 🔥 Safe Housewife Call Girl Service Hot...
 
📛 Independent Call Girls In Pune 👉 7014168258 😂👈 Render Sexy Fulfillment Esco...
📛 Independent Call Girls In Pune 👉 7014168258 😂👈 Render Sexy Fulfillment Esco...📛 Independent Call Girls In Pune 👉 7014168258 😂👈 Render Sexy Fulfillment Esco...
📛 Independent Call Girls In Pune 👉 7014168258 😂👈 Render Sexy Fulfillment Esco...
 
❣Foreigners Call Girls Surat 💯Call Us 🔝 7014168258 🔝💃Independent Surat Escort...
❣Foreigners Call Girls Surat 💯Call Us 🔝 7014168258 🔝💃Independent Surat Escort...❣Foreigners Call Girls Surat 💯Call Us 🔝 7014168258 🔝💃Independent Surat Escort...
❣Foreigners Call Girls Surat 💯Call Us 🔝 7014168258 🔝💃Independent Surat Escort...
 
Noida Extension Call Girls Delhi 🔥 9999965857 ❄- Pick Your Dream Call Girls w...
Noida Extension Call Girls Delhi 🔥 9999965857 ❄- Pick Your Dream Call Girls w...Noida Extension Call Girls Delhi 🔥 9999965857 ❄- Pick Your Dream Call Girls w...
Noida Extension Call Girls Delhi 🔥 9999965857 ❄- Pick Your Dream Call Girls w...
 
Delhi Call Girls RK Puram 👉 9711199171 👈 unlimited short high profile full tr...
Delhi Call Girls RK Puram 👉 9711199171 👈 unlimited short high profile full tr...Delhi Call Girls RK Puram 👉 9711199171 👈 unlimited short high profile full tr...
Delhi Call Girls RK Puram 👉 9711199171 👈 unlimited short high profile full tr...
 
Vasant Kunj Call Girls Delhi 🔥 9711199012 ❄- Pick Your Dream Call Girls with ...
Vasant Kunj Call Girls Delhi 🔥 9711199012 ❄- Pick Your Dream Call Girls with ...Vasant Kunj Call Girls Delhi 🔥 9711199012 ❄- Pick Your Dream Call Girls with ...
Vasant Kunj Call Girls Delhi 🔥 9711199012 ❄- Pick Your Dream Call Girls with ...
 
Marathi Call Girls Bangalore 9024918724 Just CALL ME Book Beautiful Girls any...
Marathi Call Girls Bangalore 9024918724 Just CALL ME Book Beautiful Girls any...Marathi Call Girls Bangalore 9024918724 Just CALL ME Book Beautiful Girls any...
Marathi Call Girls Bangalore 9024918724 Just CALL ME Book Beautiful Girls any...
 
Call Girls Malegaon 💯Call Us 🔝 7426014248 🔝 Independent Malegaon Escorts Serv...
Call Girls Malegaon 💯Call Us 🔝 7426014248 🔝 Independent Malegaon Escorts Serv...Call Girls Malegaon 💯Call Us 🔝 7426014248 🔝 Independent Malegaon Escorts Serv...
Call Girls Malegaon 💯Call Us 🔝 7426014248 🔝 Independent Malegaon Escorts Serv...
 
Mira Bhayandar Call Girls ☑ +91-9967824496 ☑ Available Hot Girls Aunty Book Now
Mira Bhayandar Call Girls ☑ +91-9967824496 ☑ Available Hot Girls Aunty Book NowMira Bhayandar Call Girls ☑ +91-9967824496 ☑ Available Hot Girls Aunty Book Now
Mira Bhayandar Call Girls ☑ +91-9967824496 ☑ Available Hot Girls Aunty Book Now
 
exammeessccglpyq2023exammeessccglpyq2023
exammeessccglpyq2023exammeessccglpyq2023exammeessccglpyq2023exammeessccglpyq2023
exammeessccglpyq2023exammeessccglpyq2023
 
We’re Underestimating the Damage Extreme Weather Does to Rooftop Solar Panels
We’re Underestimating the Damage Extreme Weather Does to Rooftop Solar PanelsWe’re Underestimating the Damage Extreme Weather Does to Rooftop Solar Panels
We’re Underestimating the Damage Extreme Weather Does to Rooftop Solar Panels
 
🔥18+ Young Call Girls Lucknow 💯Call Us 🔝 8630512678 🔝💃Independent Lucknow Esc...
🔥18+ Young Call Girls Lucknow 💯Call Us 🔝 8630512678 🔝💃Independent Lucknow Esc...🔥18+ Young Call Girls Lucknow 💯Call Us 🔝 8630512678 🔝💃Independent Lucknow Esc...
🔥18+ Young Call Girls Lucknow 💯Call Us 🔝 8630512678 🔝💃Independent Lucknow Esc...
 

Blade Server I/O and Workloads of the Future (slides)

  • 1. Technology Brief Blade Server I/O and Workloads of the Future Comparing Cisco UCS and HP BladeSystem November, 2014 Where IT perceptions are reality
  • 2. New Generation of Blade Servers and Workloads 2 HP and Cisco are the two most popular blade server brands on the planet. A big reason why is the networks embedded in the HP BladeSystem and Cisco UCS products are the most powerful and flexible networks for virtualized workloads. On August 28th, HP announced new HP ProLiant Gen9 servers, including several enhancements to their HP BladeSystem I/O design. Shortly afterwards, on September 4th, Cisco announced long-awaited enhancements to UCS. The UCS enhancements centered around the UCS Mini blade system which is targeted at SMBs and the edge of the enterprise. There were no significant changes to the 5108 chassis used for larger systems, which after 5 years, is getting long in the tooth. With only 1.2Tb/s of mid-plane bandwidth, the 5108 is limited in its ability to support more than 8 servers and single links greater than 10Gb. The new HP BladeSystem c7000 Platinum chassis offers 7TB/s of mid-plane bandwidth, with new support for 20GbE downlinks as well as 40GbE uplinks. The HP ProLiant Gen9 BladeSystem also takes converged networks to the next level with hardware offload of important new networking protocols supporting tunneling of L2 traffic over L3 networks, and scale- out file storage traffic. The new HP and Cisco blade systems are hitting the market just as hyperscale-driven applications and data center architectures are reaching the enterprise. Our conclusion? There’s a new generation of blade servers and workloads, but the same HP advantage.
  • 3. This Report Compares 3 Facets of Cisco UCS and HP BladeSystem I/O 3 To set the stage for comparing the capabilities that will matter most in the future, this Technology Brief reviews the trend towards a new mix of applications and server workloads in Webscale private clouds. 2 1 3 I/O Capabilities Which Will Differentiate Blade Servers in Webscale Environments Performance Consolidation Flexibility
  • 4. Inflection Point 4 Intel Xeon E5-2600 v3 In 2014, the server industry reached a major inflection point with the introduction of a new generation of Intel server processors launched v3 of the Xeon E5-2600 family. At this inflection point, x86 server product lines are being refreshed, and new technologies are being introduced which complement the capabilities of the Xeon E5-2600. Hierarchical Networks LAN/SAN Convergence with FCoE 10GbE 20GbE and 40GbE Virtual Networks Converged cloud , RDMA , FC and Ethernet Connectivity Virtualized Servers Webscale Servers
  • 5. Complementary Technologies are what Differentiate Blade Server Offerings 5 Given that HP and Cisco blade systems will feature the same Xeon E5-2600 processor, it’s the complementary technologies which will differentiate the systems. The factors which are expected to separate leaders from followers, is 20GbE connectivity to servers, 40GbE uplinks from blade server chassis to network, switchless connectivity to storage, and convergence of Ethernet, FCoE, native Fibre Channel, RDMA, and cloud tunneling protocols on the same port. Servers with the best implementations of these technologies will be better suited to handle traditional workloads, plus a new class of Webscale workloads.
  • 6. Workload Mix of the Future 6 Share Everything Applications + Share Nothing Applications Enterprise IT organizations, who for the most part have become private cloud builders, are blending traditional Enterprise and Hyperscale IT into a Webscale model. Traditional IT encompasses support for workloads such as SQL databases and ERP applications, with “share-everything” infrastructure featuring many VMs sharing physical servers, and many servers sharing networked storage. Webscale IT must support traditional workloads as well as a new generation of workloads such as NoSQL databases and predictive analytics. Many of the new applications are designed to run in “share-nothing” distributed computing environments featuring scale-out server and storage clusters.
  • 7. Workload Mix of the Future 7 Private cloud builders are also trending towards cloud platforms like OpenStack and vCloud. Cloud operating systems incorporate a software defined data center architecture which allows a single cloud operating system to manage servers, storage and networking systems in different data centers. As a result, new cloud tunneling protocols, such as VXLAN and NVGRE, are being deployed as the foundation of the software defined data center, along with a new generation of NICs which can offload tunnel protocol processing. Traditional IT + Hyperscale IT = Webscale IT
  • 8. Environment for Workloads of the Future 8 Webscale Private Cloud The defining characteristic of a Webscale Private Cloud is data center infrastructure which efficiently supports two distinctly different application environments: a shared infrastructure environment and a distributed infrastructure environment. A Webscale Private Cloud also includes an overlapping environment with software defined (virtualized) servers, networking and storage. Converged Networks Make it Possible A key capability of blade servers in a Webscale Private Cloud is a higher level of network convergence. In the 2.0 generation of converged networks, the RDMA network protocol for scale-out clusters, and hardware offload of tunneling protocol processing for carrying L2 traffic over L3 networks, are integrated as standard features in Webscale CNAs and/or switches. Webscale Private Cloud Environment Shared environments include servers heavily loaded with virtual machines, and networked storage shared by many servers. Distributed environments support database and application workloads spread across many servers, and scale-out storage. Cloud operating platforms such as vCloud and OpenStack are introducing management tools for a software defined data center, including software defined networks.
  • 9. Anatomy of Blade Server I/O 9 Application Performance Depends on a Healthy Network Every blade server has an entire network embedded to carry east-west traffic between servers, and north-south traffic to top-of-rack, end-of-row, and core switches upstream. The I/O performance of applications running on blade servers can differ significantly depending on the capabilities of their embedded networks.
  • 10. The Blade Servers 10 Cisco UCS and HP BladeSystem In the following pages we will compare the performance, network convergence, flexibility and software defined networking of the Cisco UCS in a 5108 chassis, and the HP BladeSystem in a c7000 Platinum chassis. The products:

    Blade Server Systems     Cisco UCS in 5108 Chassis    HP BladeSystem in c7000 Chassis
    Chassis Size             6U                           10U
    Max. Blade Servers       8                            16
    Mid-plane Bandwidth      1.2Tb/s                      7.168Tb/s
    Server Downlinks         10Gb                         20Gb
    Chassis Uplinks          10Gb                         10/40Gb
    Interconnect Options     Ethernet/FCoE                Ethernet/FCoE, Fibre Channel, SAS, InfiniBand
    I/O Slots                2                            8
  • 11. Comparing I/O Performance 11 Why it Matters Meeting application performance service levels is directly related to the I/O performance of a blade server system. In addition, the new generation of servers with Xeon E5-2600 processors, hosting a generation of demanding new applications, needs higher bandwidth and lower latency I/O than ever before. And in Webscale private cloud environments, performance must be delivered more cost-effectively than ever before, bringing CPU efficiency to the forefront of important performance metrics. I/O Performance Metrics In the following pages, we will examine the capabilities of Cisco UCS and HP BladeSystem against the following I/O performance metrics:
    · Bandwidth
    · Usable Bandwidth
    · Latency
    · CPU Efficiency
  • 12. I/O Bandwidth 12 80GbE is Specmanship There are discussions in the blogosphere about how UCS achieves 80Gb of bandwidth per blade. According to the Cisco UCS B200 M4 Blade Server Spec Sheet, that scenario refers to the configuration of a Cisco B200 M4 blade with a VIC1340 adapter and an added mezzanine card (port expander) that allows four 10Gb links to each IO Module (2208 FEX) for a total of 80Gb of bandwidth (2 x 4 x 10Gb). 40GbE is Expensive From the point of view of pure technology, 40GbE is a perfect solution for delivering the performance needed in a single server link, and eliminating the need for teaming. But the cost per port for 40GbE network adapters is typically more than 3x the cost per port of 10GbE adapters. In another case of specmanship, Cisco is promoting the availability of a 40Gb port on the new 6324 Fabric Interconnect (FI) for the UCS Mini. However, as of the writing of this report, the 40Gb port, called a Scalability Port, is not a native 40GbE port and can only be used as a breakout to four 1GbE or 10GbE SFP+ (4x1G or 4x10G) connections. In addition, this 40GbE port requires an expensive software license to activate. 20GbE is Juuust Right A choice that has only recently become available to server architects is 20GbE. Each 20GbE port offers bandwidth equivalent to twenty 1GbE ports or two 10GbE ports. 20GbE is just right because a single 20GbE port provides enough bandwidth for all but the most I/O-intensive supercomputing applications, and is available for a fraction of the price of 40GbE technology. According to the Cisco UCS B200 M4 Blade Server Spec Sheet, all Cisco UCS 5108 mid-plane, FEX and FI network connectivity ports are currently 10GbE, including the 40Gb Scalability Port on the 6324 FI which must be split into multiple 10GbE ports. The HP BladeSystem provides 20GbE links between blade server adapters and the chassis interconnects, as well as inter-switch links. With HP Flex-20 technology, Ethernet network adapters deliver twice the bandwidth of 10Gb adapters, while reducing the management overhead associated with multiple 10Gb adapters. With 20Gb downlinks, HP Virtual Connect FlexFabric-20/40 F8 Modules offer more than twice the throughput of other 10Gb extenders and fabric interconnects. In addition, ports on the HP Virtual Connect FlexFabric-20/40 F8 Modules can be dynamically configured to support Ethernet, Fibre Channel, or FCoE.
  • 13. Almost no Oversubscription with HP BladeSystem 13 Oversubscription occurs when the I/O capacity of the adapter ports connected to chassis switch ports exceeds the capacity of the switch ports. The oversubscription ratio is the sum of the capacity of the adapter ports divided by the capacity of the switch ports. Below you can see that if you actually configured 80Gb of bandwidth per UCS blade as mentioned above, you would be building a blade server network with 4:1 oversubscription. In contrast, a comparably configured HP BladeSystem would run at 1.1:1 oversubscription, very nearly wire speed.
  • 14. Oversubscription 14 Cisco UCS (oversubscription = 4:1):
    · Server adapters: 4 ports x 10Gb from VICs + 4 ports x 10Gb from expansion cards (80Gb) x 8 servers = 640Gb
    · Mid-plane to IO Modules: 8 ports x 10Gb x 2 IO Modules = 160Gb
    · IO Module uplinks: 8 ports x 10Gb x 2 IO Modules = 160Gb
  HP BladeSystem (oversubscription = 1.1:1):
    · Server adapters: 2 ports x 20Gb from FLOM + 2 ports x 20Gb from Mezz. card x 16 servers = 1,280Gb
    · Mid-plane to Virtual Connect modules: 16 ports x 20Gb x 4 modules = 1,280Gb
    · Virtual Connect uplinks: 4 modules, each with 4 x 40Gb ports + 8 x 10Gb ports + 2 x 20Gb ISL ports = 1,120Gb
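  The ratios above follow directly from the port counts. A minimal sketch in Python, using the configurations listed on this slide (the helper function is ours, not from the brief):

    # Oversubscription = total adapter-side capacity / switch-side (uplink) capacity.
    def oversubscription(adapter_gb, switch_gb):
        return adapter_gb / switch_gb

    # Cisco UCS 5108: 8 blades x 2 FEX x 4 links x 10Gb = 640Gb of adapter capacity,
    # against 2 IO Modules x 8 ports x 10Gb = 160Gb of switch capacity.
    ucs = oversubscription(8 * 2 * 4 * 10, 2 * 8 * 10)                   # 4.0  -> 4:1

    # HP c7000: 16 blades x 4 ports x 20Gb = 1,280Gb of adapter capacity, against
    # 4 Virtual Connect modules x (4 x 40Gb + 8 x 10Gb + 2 x 20Gb) = 1,120Gb.
    hp = oversubscription(16 * 4 * 20, 4 * (4 * 40 + 8 * 10 + 2 * 20))   # ~1.14 -> 1.1:1

    print(f"Cisco UCS {ucs:.1f}:1, HP BladeSystem {hp:.2f}:1")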
  • 15. What Oversubscription Means 15 [Chart: number of blade servers configurable before hitting the limit of chassis bandwidth; Cisco = 160Gb/s chassis bandwidth, HP = 1.12Tb/s chassis bandwidth] Blade Server I/O Hits The Wall If you configured 80Gb of bandwidth per blade on both a Cisco UCS and an HP BladeSystem, the Cisco 5108 chassis switches are oversubscribed by the second server; two fully configured UCS blade servers hit the limits of the 5108 fabric extenders (FEX). In contrast, fifteen fully configured HP ProLiant Gen9 blade servers can be deployed before reaching the bandwidth limit of the HP FlexFabric Modules in the c7000 Platinum chassis.
  • 16. RDMA over Ethernet (RoCE) 16 InfiniBand networks were invented to overcome the need to plow through the Ethernet protocol stack to complete an I/O transaction. InfiniBand boosts performance by eliminating layers of the stack for Remote Direct Memory Access (RDMA). The Ethernet industry responded by developing an enhanced version of Ethernet called Converged Ethernet (CE), featuring the Priority Flow Control which is necessary to support RDMA over Converged Ethernet (RoCE). Blade systems with switches supporting CE, and with NICs supporting RDMA, can deliver I/O with lower latency and less CPU usage than previous generations of CNAs. HP ProLiant Gen9 blade servers incorporate 20Gb FlexibleLOM NICs which are RDMA NICs. Cisco has introduced RDMA LOM and Mezz NICs, the VIC1340 and VIC1380, respectively. [Diagram: I/O without RDMA vs. I/O with RDMA]
  • 17. RoCE Blade Environment 17 Networked Storage: a Killer App for RoCE A killer app for RoCE is SMB 3.0 file serving, where users accessing shared storage experience the response time of local storage. File servers turbo-charged with RoCE are commercially available via two Windows Server 2012 features called SMB Multichannel and SMB Direct. With SMB Multichannel, SMB 3.0 automatically detects the RDMA capability and creates multiple RDMA connections for a single session. This allows SMB to use the high throughput, low latency and low CPU utilization offered by SMB Direct. HP FlexFabric 20Gb adapters (RDMA NICs) are certified by Microsoft for use in this killer app. As of 11/14/14 the Cisco VIC 1340 is not certified by Microsoft for SMB Direct.
  • 18. RoCE Blade Environment 18 In this diagram a single HP BladeSystem, with the HP 6125XLG Ethernet Blade Switches required to support RoCE, is a high-performance environment for three app clusters and one file server cluster. Hyper-V automatically senses the presence of RDMA NICs, then uses multi-channel communications to evacuate VMs in seconds, and uses direct memory access for higher I/O to shared storage inside the blade system.
  • 19. IOPS Performance Benefits of RoCE 19 Sequential Read Performance (IOPS) The HP FlexFabric 20Gb 2-port 650FLB Adapter (Emulex OCe14102) with RoCE, used with Windows Storage Server and SMB Direct, provided 82% more IOPS than previous-generation adapters without RoCE.
  • 20. Efficiency Benefits of RoCE 20 Server Power Efficiency (IOPS per Watt) The HP FlexFabric 20Gb 2-port 650FLB Adapter (Emulex OCe14102) with RoCE, used with Windows Storage Server and SMB Direct, delivered 80% higher server power efficiency than adapters not using RoCE.
  • 21. Response Time Benefits of RoCE 21 Read I/O Response Time (Seconds) The HP FlexFabric 20Gb 2-port 650FLB Adapter with RoCE (Emulex OCe14102), used with Windows Storage Server and SMB Direct, reduced I/O response time by 70% compared to NICs without SMB Direct capabilities.
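  The three percentages on slides 19-21 are simple before/after ratios of RoCE and non-RoCE measurements. A minimal sketch in Python; the raw numbers below are hypothetical, chosen only so the computed deltas land near the brief's figures:

    # Hypothetical measurements for one adapter with and without RoCE enabled.
    baseline = {"iops": 500_000, "watts": 400, "resp_s": 0.010}  # without RoCE
    roce     = {"iops": 910_000, "watts": 404, "resp_s": 0.003}  # with RoCE

    iops_gain   = roce["iops"] / baseline["iops"] - 1            # ~0.82 -> 82% more IOPS
    eff_gain    = ((roce["iops"] / roce["watts"]) /
                   (baseline["iops"] / baseline["watts"])) - 1   # ~0.80 -> 80% higher IOPS/Watt
    latency_cut = 1 - roce["resp_s"] / baseline["resp_s"]        # 0.70  -> 70% lower response time

    print(f"{iops_gain:.0%} more IOPS, {eff_gain:.0%} higher IOPS/W, "
          f"{latency_cut:.0%} lower response time")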
  • 22. The Cost Benefits of RoCE Offload 22 Hardware Offload A key to achieving efficient use of processing power is adapter offload of networking protocols, so that application server CPU cycles are not wasted on network protocol processing. Using a software initiator instead of hardware offload requires that every TCP/IP, FCoE, and iSCSI packet be sent over the PCI bus to the NIC. A constant PCI bus busy state can interfere with traffic to other devices on the PCI bus. The lack of offload can have a big impact on CPU utilization. For example, a single adapter running an iSCSI software initiator can consume 30% of the server CPU for iSCSI protocol processing. Add more adapters and VMs, and more CPU is needed for network protocol processing. The lack of offload is expensive: the cost of 30% CPU utilization on a $20,000 server is $6,000, a cost that can be easily avoided by simply deploying a network adapter with iSCSI offload. Cisco UCS 1300 Series VIC adapters support TCP, FCoE, NVGRE, VXLAN and RoCE offload. HP FlexFabric adapters add iSCSI offload to that list. It is worth noting that at the time this report was written, HP 20Gb adapter VXLAN offload is certified by VMware, while as of 11/14/14 the Cisco VIC 1340/1380 VXLAN offload does not appear on the VMware Compatibility Guide.
  • 23. The Lack of Offload Can Be Expensive 23 [Chart: cost of network protocol processing vs. cost of server. $10K server: $3,000; $15K server: $4,500; $20K server: $6,000; $25K server: $9,000] There are a variety of different network protocols supported by adapters, and many are used simultaneously. The more protocol processing that is done in the adapter, the more of your server investment can be applied to applications instead of network protocol processing.
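  The arithmetic behind the chart is worth making explicit. A minimal sketch in Python (the function name is ours; the 30% figure is the iSCSI software initiator example from slide 22, and the chart evidently applies a higher fraction to the $25K server):

    # Dollar value of server capacity consumed by software protocol processing.
    def protocol_processing_cost(server_price, cpu_fraction=0.30):
        return server_price * cpu_fraction

    # The brief's example: a 30% CPU hit on a $20,000 server.
    print(protocol_processing_cost(20_000))          # 6000.0, avoidable with iSCSI offload
    print(protocol_processing_cost(25_000, 0.36))    # 9000.0, the chart's $25K data point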
  • 24. Comparing I/O Consolidation 24 Why it Matters IT consolidation is hugely important because it means less hardware and simplified management. The utilization of storage media leaped when storage was configured in a SAN and could be shared by many servers. The utilization of physical servers dramatically increased when multiple virtual servers could be hosted on a single physical server. Similarly, network utilization increases when more network protocols can run on a single cable, adapter or switch. Consolidation Metrics There are two metrics for I/O consolidation: the convergence of network protocols, and the consolidation of cables into higher bandwidth links.
    · Network Convergence
    · Cable Consolidation
  • 25. Wanted: One Blade Server Network for LAN, SAN, Cluster and SDN Traffic 25 A new best practice for data center managers is to converge traditional shared computing infrastructure with their growing infrastructure for distributed apps. This is made possible by a new generation of network adapters and switches with support for the RDMA, VXLAN and NVGRE protocols. Support for these protocols enables blade servers to converge LAN, SAN, cluster and SDN traffic on a single network. It also allows data center managers to use software defined data center tools. The HP 20Gb FlexibleLOM adapters support stateless hardware offload of the TCP, iSCSI and FCoE protocols for LAN/SAN convergence, as well as hardware offload of RDMA, VXLAN and NVGRE for efficient support of cluster and tunnel traffic. The Cisco VIC1340 supports all of the same protocols, with hardware offload for all of the above except iSCSI.
  • 26. Network Convergence Road Map 26 [Diagram: Convergence 1.0 (LAN+SAN): IP, iSCSI, FCoE on Converged Ethernet. Convergence 2.0 (LAN+SAN+Clusters+SDN): IP, iSCSI, FCoE, RoCE, VXLAN, NVGRE] At the Xeon E5-2600 inflection point, specialized adapters will no longer be needed to support RDMA. The new class of adapters will also support new tunneling protocols which are essential components of software defined data centers.
  • 27. A Perfect Fit for Webscale Private Clouds 27 Network Convergence 2.0 The added support for RDMA over Converged Ethernet, NVGRE and VXLAN allows one adapter port on a blade server to support four network environments. Hardware offload allows the blade server to use precious CPU resources for applications, instead of for network protocol processing. [Diagram: shared, distributed and SDN environments converged on one adapter port]
  • 28. Cable Consolidation 28 A Single 40Gb Link Eliminates Cables for 40 x 1Gb Links or 4 x 10Gb Links Until recently, 40GbE was used mostly for inter-switch connectivity and in the core of the network. The availability of 40GbE ports on servers sitting at the edge of the network has presented the opportunity for IT pros to consolidate dozens of 1GbE links and handfuls of 10GbE links into a single cable. This is an area where the HP BladeSystem stands out. The Cisco UCS architecture makes extensive use of teaming of 10Gb ports to build uplinks with higher bandwidth. That means lots of cables. Even the 40Gb port on the UCS Mini must be split into four cables. In contrast, the Virtual Connect modules on the HP BladeSystem include four 40GbE ports, which in the apples-to-apples comparison below reduce the number of cables needed from 24 to 2.
  • 29. Configuring Redundant 40Gb Uplinks for 16 Blade Servers 29 This diagram shows an apples-to-apples comparison of 16 blade servers configured with redundant connections between servers and switches, and redundant uplinks. Many more cables are needed in the Cisco UCS configuration because the switches are external, and because of the lack of 40Gb ports. Note that the UCS Mini has a 40Gb port, but it can only be used in a 4 x 10GbE breakout configuration. [Diagram: Cisco UCS, 24 cables at 4 x 10Gb vs. HP, 2 cables at 1 x 40Gb]
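  Cable math of this kind generalizes easily. A minimal sketch in Python (illustrative only; the 24-versus-2 count on this slide also includes the extra links required by the external UCS Fabric Interconnects):

    # Cables needed to deliver a target uplink bandwidth at a given port speed,
    # doubled when the uplink must be redundant.
    def cables_needed(target_gb, port_speed_gb, redundant=True):
        per_path = -(-target_gb // port_speed_gb)   # ceiling division
        return per_path * (2 if redundant else 1)

    print(cables_needed(40, 10))   # 8: a redundant 40Gb uplink built from 10Gb ports
    print(cables_needed(40, 40))   # 2: the same uplink with native 40GbE ports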
  • 30. Comparing I/O Flexibility 30 Why it Matters A new era of agility awaits IT organizations who implement cloud operating systems designed to manage multiple software defined data centers. The years required for a generation of hardware change will be replaced by the months required to deploy a software update. A foundation for this capability is overlay networks which tunnel L2 traffic across data centers using L3 networks. Support for tunneling protocols is embedded in a new class of network adapters, making it easy for private cloud builders to integrate their servers into a cloud platform. Conversely, IT organizations want to continue using native Fibre Channel SANs, and want the flexibility to choose “if” and “when” they converge LANs and SANs on Ethernet. I/O Flexibility Metrics There are two capabilities which are expected to affect I/O flexibility in Webscale private clouds:
    · More efficient delivery of tunnel traffic with hardware offload of tunnel protocol processing
    · Support for native Fibre Channel
  • 31. Tunneling Unlocks the Cloud 31 Live Migration: a Killer App for VXLAN and NVGRE One of the most valuable functions of server virtualization is live migration. This function frees system administrators from the time-consuming and complex process of moving workloads to optimize performance or mitigate a hardware failure. However, moving VMs between different networks requires extensive network reconfiguration. IT organizations using data center infrastructure dispersed in public, private or hybrid clouds simply can’t configure all servers and VMs on one local network, and need a tunneling mechanism to extend live migrations. Virtual Extensible LAN (VXLAN) and Network Virtualization using Generic Routing Encapsulation (NVGRE) are protocols for deploying overlay (virtual) networks on top of Layer 3 networks. VXLAN and NVGRE are used to isolate apps and tenants in a cloud, and to migrate virtual machines across long distances. While VXLAN and NVGRE allow live migrations across racks and data centers, RoCE accelerates those migrations. In a Microsoft TechEd demo, migrating a VM on Windows Server 2012 to a like system took just under 1 minute 26 seconds. Windows Server 2012 R2 performed the same migration in just over 32 seconds. Using RoCE combined with SMB Direct during the live migration process, it took just under 11 seconds, without utilizing added CPU resources.
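  The mechanics of VXLAN are simple enough to show directly: the VM's original Ethernet frame is prefixed with an 8-byte VXLAN header carrying a 24-bit segment ID, then carried as the payload of a UDP/IP packet (destination port 4789), which is what lets L2 traffic cross L3 boundaries. A minimal sketch in Python following the header layout in RFC 7348 (the placeholder frame bytes are ours):

    import struct

    VXLAN_UDP_PORT = 4789   # IANA-assigned destination port for VXLAN
    I_FLAG = 0x08           # header flag indicating a valid VNI

    def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
        # VXLAN header: flags (1 byte), reserved (3 bytes), VNI (3 bytes), reserved (1 byte).
        # The 24-bit VNI isolates tenants the way a VLAN ID would, but scales to 16M segments.
        header = struct.pack("!B3xI", I_FLAG, vni << 8)   # VNI occupies the upper 24 bits
        return header + inner_frame

    inner = bytes(60)                              # placeholder L2 frame
    packet = vxlan_encapsulate(inner, vni=5001)    # would ride inside UDP port 4789
    assert len(packet) == len(inner) + 8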
  • 32. Live Migrations Across the Cloud 32 [Diagram: VMs migrating between data centers through overlay network tunnels] Efficient use of the cloud requires protocols allowing the creation of virtual networks, and allowing Layer 2 network services to traverse Layer 3 networks without network reconfiguration.
  • 33. Storage Networks 33 Support for Native Fibre Channel is Needed for I/O Flexibility Based on IT Brand Pulse surveys, 40% of IT organizations are not converging with FCoE. For the 40% of IT professionals who have been too busy to look at FCoE, or who say they have no plans to converge their LANs and SANs, parallel Ethernet and Fibre Channel infrastructure will be deployed. The modular design of blade servers makes them inherently flexible. But not all blade server platforms are equal when it comes to hosting multiple heterogeneous virtualized workloads and delivering I/O flexibility. Cisco UCS blade servers support Ethernet/FCoE connectivity only. The more flexible HP BladeSystem supports Ethernet/FCoE, SAS, InfiniBand and Fibre Channel connectivity.
  • 34. Wanted: Ethernet & Fibre Channel Networks 34 In 2014, the prevalent data center network architecture remains a parallel network architecture, including a mix of specialized NIC, iSCSI, and Fibre Channel host adapters, as well as Ethernet and Fibre Channel switched fabrics. Cisco UCS blade servers support only Ethernet connectivity. Adoption of FCoE technology is required to access installed Fibre Channel resources.
  • 35. Advantage HP 35

    Blade Server Systems                              Cisco UCS in 5108 Chassis               HP BladeSystem in c7000 Chassis
    Chassis Size                                      6U                                      10U
    Max. Blade Servers                                8                                       16
    Mid-plane Bandwidth                               1.2Tb/s                                 7.16Tb/s
    Max. Embedded Switches                            2                                       8
    Support for native 20Gb Ethernet                  No                                      Yes
    Support for native 40Gb Ethernet
      (not including 40Gb port used in 4 x 10 mode)   No                                      Yes
    Support for native Fibre Channel                  No                                      Yes
    Support for native InfiniBand                     No                                      Yes
    Oversubscription                                  4:1                                     1.1:1
    Hardware offload:
      Fibre Channel over Ethernet (FCoE)              Yes                                     Yes
      iSCSI                                           No                                      Yes
      TCP offload engine (TOE)                        Yes                                     Yes
      RoCE offload engine (ROE)                       Yes                                     Yes
      VXLAN offload engine (VOE)                      Yes (not yet qualified by VMware)       Yes
      NVGRE offload engine (NOE)                      Yes (not yet qualified by Microsoft
                                                      for SMB Direct)                         Yes
  • 36. Designed for Workloads of the Future 36 The ProLiant Gen9 Blade Server is designed for I/O flexibility with a choice of FlexFabric converged networking or parallel Ethernet and Fibre Channel networks. The ProLiant Gen9 Blade Server is also fully compliant with Windows Server 2012 Virtual Fibre Channel, an innovation that will play an important role in the virtualization of Tier-1 workloads with Microsoft Hyper-V.
  HP Virtual Connect FlexFabric 20/40 F8 module:
    · Supports LAN, SAN, NAS, iSCSI and FCoE connectivity
    · Supports “FlatSAN” direct connectivity to native Fibre Channel 3PAR storage at a lower cost than using Fibre Channel switches
  HP LPe1605 16Gb Fibre Channel HBA (718203-B21):
    · Native Fibre Channel server adapter
    · Over 12 million ports shipped on this stack
  HP FlexFabric 20Gb 2-port 650FLB Adapter:
    · Ethernet LAN on Motherboard (LOM) or Mezz adapter
    · Dual 10/20GbE ports
    · Supports LAN, NAS, iSCSI and FCoE connectivity; FlexFabric ready
    · Supports RoCE for scale-out cluster connectivity
    · Supports NVGRE and VXLAN for migrating VMs across the cloud
  • 37. Summary 37 Infrastructure of the past is functionally defined and purpose-built. Servers are servers, networking is networking and storage is storage. These purpose-built devices are deployed with little ability to change function as needs change. In the future, infrastructure needs to be more transformative, taking the shape of business demands. Potential power and flexibility are locked inside the aging Cisco UCS 5108 chassis, which severely limits the use of new high-bandwidth networks and of any network other than Ethernet/FCoE. The new HP BladeSystem answers the call with: • A new level of convergence which allows resources to be allocated at a very granular level, improving efficiency and ensuring optimal performance as workload demands change. • Interfaces to the software-defined data center. HP ProLiant Gen9 blade servers can respond to intelligent orchestration of infrastructure resources in real time, as applications and user needs change. • A cloud-ready architecture that is scale-out, agile, and always on. • Workload optimization for traditional share-everything applications and new share-nothing applications.
  • 38. Resources 38 Related Links:
    · OCe14000 Test Report
    · HP FlexFabric Adapters Provided by Emulex
    · HP BladeSystem
    · HP Virtual Connect Technology
    · HP BladeSystem and Cisco UCS Comparison
    · Cisco Fabric Extender
    · Cisco UCS Virtual Interface Card 1340
    · Cisco UCS 6324 Fabric Interconnect Data Sheet
    · Cisco UCS Ethernet Switching Modes
    · IT Brand Pulse
  About the Author Joe Kimpler is a senior analyst responsible for IT Brand Pulse Labs. Joe’s team manages the delivery of technical services including hands-on testing, product reviews, total cost of ownership studies and product launch collateral. He has over 30 years of experience in information technology and has held senior engineering and marketing positions at Fujitsu, Rockwell Semiconductors, Quantum and QLogic. Joe holds an engineering degree from the University of Illinois and an MBA in marketing.