Cisco Nexus 5000, Nexus 2000 Fabric Extender

2007 Cisco Systems, Inc. All rights reserved. Cisco Confidential. BRKCAM-3004.

Cisco Nexus 5000, Nexus 2000 Fabric Extender
Evolution of the Data Center Access Architecture
Eddie Tan, Technical Marketing Engineer, Server Access & Virtualization BU

Data Center Access Evolution
- (Figure: Data Center access evolving from virtual access to DC virtual access.)

Nexus 5000 Server Access Switch
- Maximum port densities: up to 52 x 10GE line rate, or up to 16 x 1/2/4G FC + 40 x 10GE (FC ports are SFP)
- More than 1 Tbps switching capacity
- High-density 10GE and 1GE (with FEX)
- Front-to-back cooling
- Data center capabilities: multi-protocol forwarding (CE, FC, FCoE, DCE); first platform to support FCoE; VN-Link
- High availability: hot-swap fans (N+1 redundant), hot-swap PSUs (N+N grid redundant), NX-OS modular operating system
- Two models: Nexus 5010 and Nexus 5020

Nexus 2000 Fabric Extender: 1GE (1000Base-T) Connectivity
- 48 x 1GE (1000Base-T) interfaces
- 4 x 10GE interfaces
- Beacon and status LEDs
- Redundant, hot-swappable power supplies
- Hot-swappable fan tray

Nexus 2000 Fabric Extender: Virtual Chassis
- The Nexus 2000 Fabric Extender (FEX) acts as a remote linecard for the Nexus 5000, retaining all centralized management and configuration on the Nexus 5000 and transforming it into a virtualized chassis
- Nexus 5000 + Nexus 2000 Fabric Extender = virtualized chassis

Nexus 2000 Fabric Extender: Design Advantages
- Optimization of the Layer 1 and Layer 2 data center design
- Larger Layer 2 domains with fewer STP scaling challenges
- One logical access switch (simplified topology)
- 1GE-to-10GE migration
- Optimized cabling for current and future connectivity
- Support for ToR, MoR, EoR, or any combination
- The N2K provides flexible 1GE and 10GE (1HCY10) server ports on a DCE-enabled access switch

Nexus 5000 Sales Promotions for Q3 FY09
- 5010 lab bundles (Ethernet and storage)
- Nexus 5020 and Nexus 5010 NFR bundles
- CAP awards for case studies: Tencent case study from China; more case studies from China are needed
- 2000 lab bundles will be updated soon

Nexus 2000 Highlights
- Early orderability was in late Q2 FY09; general orderability is now ON!
- Customer wins: LDS, 5th Signal, Trans Pacific, Illumina, eBay, Komatsu, Yosemite Community College, Nextel Communications, The Salem Hospital
- Lots of customers are designing the DC access layer with FEX!
- If you need Nexus 2000 seed units for immediate customer opportunities, please work with Latha Vishnubhotla (lvishnu), Product Manager, SAVBU
- One FEX seed unit will be shipped by mid-February 2009 for customer testing in China
- More collateral on Nexus 2000 at: http:/bock- (presentation, Fabric Extender management model)

Nexus 5000 + Nexus 2000: 5th Signal Case Study (Internal ONLY)
- Requirement for a 1GE ToR deployment (N2K)
- Full line-rate 10GE for end-of-row or middle-of-row designs
- Savings on management cost with the N2K as a "module" of the N5K
- NetApp FCoE storage as part of Unified I/O for future-proofing

Juniper EX4200
- Fixed-configuration switch based on a Marvell chipset
- Store-and-forward switching mode; 1 RU
- Layer 3 switching capability
- Two models: 24 or 48 x 10/100/1000Base-T ports
- Up to ten EX4200s can be interconnected to form up to a 480-port virtual chassis with one management point
- Optional 4-port 1GE or 2-port 10GE module with XFP optics
- Switching fabric capacity: 136 Gbps
- System power: 190 W (930 W with PoE)

Competing Against the Juniper EX4200
- Emphasize that the DC is becoming a solution-driven entity rather than a box-to-box comparison
- Explain the value of the end-to-end solution Cisco brings to the DC with the Nexus family of products (N1K, 2K, 5K, and 7K)
- The EX4200 has no support for VM-optimized services (VN-Link) or Unified I/O
- Spanning tree is not required between the 5K and the 2K
- Auto-provisioning, with the Nexus 2K being a stateless device
- No EoR functionality
- Do NOT compare feature by feature: the Juniper EX4200 is not a data center product!

Nexus 5000 Projected
Software Roadmap (2008-2010)
- Avalon, 4.0(1a)N1 (target Q4CY08): Nexus 5010; 8-port 1/2/4G FC module; 7 m CX1 SFP+; SFP+ LR support; SFP Fibre Channel LW; 1GE support on the first 8/16 ports (SX/LX/GLC-T); UDLD; configuration sessions; IPv6 device management; CLI migration for FCoE and FIP compliance; support for direct-attach FCoE targets; support for direct-attach FC targets; CiscoWorks LMS/RME; VFrame DC 1.2.5; DCNM support; support for 110 V on the N5020; NPV traffic engineering
- Bondi, 4.0(1a)N2 (target Q1CY09): Nexus 2000; Bridge Assurance; port-based CoS assignment
- Cronulla (release status: CC in progress; target Q3CY09): USR optics; vPC support on the N5K; vPC support on the N2K; EtherChannel hash distribution show command; N2K connectivity through N5K expansion modules; support for 576 host PortChannels*; support for 16 port channels; support for 16 ports in an EtherChannel; PVLAN isolated trunks; PVLAN promiscuous uplink trunks; native 802.1Q VLAN tag; ACL-based QoS classification; MQC-based CoS marking; FCoE security enhancements; standards-compliant T11 FIP (direct and indirect); support for CEE-based DCBX; FCoE CLI enhancements; support for SMI-S at MDS parity; NX-OS parity features
- Dee Why (release status: not committed; target Q1CY10): FEX-10G; 2/4/8G FC module; 8G FC optics; ISSU for the N5K; ISSU for the N2K; F_Port trunking; F_Port channeling; ACLs for SNMP communities; N1KV control-plane support; network interface virtualization; port profiles; increased VLANs; increased FEX support; end-host virtualization; port security
- Roadmap subject to change

Nexus 5000 and 2000: New Data Center Architecture Requirements

Data Center Architecture: Evolution of the Hierarchical Design
- (Figure: hierarchical design with access, aggregation, services, and core layers; Layer 2 below the aggregation, Layer 3 above.)
- The Data Center architecture is based on a hierarchical design model
- The aggregation block contains the access and aggregation layers
- The core provides the Layer 3 boundary to the rest of the network
- Dedicated service switches provide application load balancing, firewalling, etc.
- The architecture is based on a design optimized for control-plane stability and scalability
- We need to understand how the design must evolve to accommodate server, application, and facilities requirements

Data Center Architecture: Evolving Requirements for Layer 2 Connectivity
- (Figure: two VMware ESX hosts running Nexus 1000V, with VMs #1-#8 moving between them.)
- Optimizing server workload high availability and management requires ubiquitous network and SAN connectivity
- The L2 MAC, IP address, and SAN addressing all move when virtual machines move
- Server and application requirements are driving Layer 2 scalability requirements
- Server link redundancy (NIC teaming)
- Security services (firewalls, load balancers, IPS) have L2 adjacency requirements
- Server HA mechanisms require subnets to span all nodes in the HA cluster
- Server virtualization (e.g., VMware) is driving the need for "every VLAN everywhere" designs

Data Center Architecture: Changing Traffic Loads
- (Figure: two VMware ESX hosts running Nexus 1000V with increasing VM density.)
- Virtual machine environments change the traffic loads in the network
- By increasing the density of applications on a single physical server, both the server itself and the network become more heavily utilized
- Oversubscription calculations may need to be re-evaluated
- Network capacity both grows in volume (more total bytes of storage and application data moved) and becomes more dense (more data and storage traffic from each physical server)

Nexus 5000 and 2000: Evolution of the Data Center Access

Data Center Access Architecture: Virtualized Access Switch
- The Nexus 5010/5020 plus Nexus 2148T virtualized access switch provides a number of design options to address evolving Data Center requirements
- The Fabric Extender provides flexibility in the design of physical topologies
- Aids in building larger Layer 2 designs safely
- Support for the latest spanning-tree enhancements
- Single virtual access switch
(simplifies the Layer 2 design)
- Support for 16-way 10GE EtherChannel combined with vPC provides increased network capacity
- Nexus 2148T Fabric Extender: 48 GE ports, with 4 x 10GE fabric links per Fabric Extender (CX-1 Cu)

Data Center Access Architecture: N5K/N2K Logical Topology
- Nexus 5000/2000 virtualized access switch pods: a Cisco Nexus 2148T Fabric Extender (N2K) plus Nexus 5000 (N5K) pod represents the networking access layer
- Nexus 7000 at the distribution layer
- Each virtualized access switch pod is configured to support up to 576 1GE server ports at FCS

Data Center Access Architecture: Supported N2K per N5K
- Available 10GE ports for N2K connections: 52 10GE ports per Nexus 5020, 26 10GE ports per Nexus 5010
- Recommendation: keep at least two 10GE ports connected to the peer N5K for future vPC deployment
- At FCS, ports on expansion modules can't be used for FEX connections; this will be supported with the Cronulla release (Q3CY09)
- Software scalability: at FCS the software supports 12 FEX per N5K, i.e. 576 1GE ports; future software releases will scale beyond 12 FEX per N5K
- The hardware supports 12 physical PortChannels at FCS and 16 with the Cronulla release
- This doesn't consume the N5K hardware PortChannel count, because there is only one connection to each N5K

Data Center Access Architecture: Optimizing Layer 1 and Layer 2 Designs
- 1GE-attached servers: maintain the existing Cat5e server wiring infrastructure with an EoR topology (Nexus 5000/2000 EoR)
- The Cisco Nexus 2148T Fabric Extender and Nexus 5000 provide a flexible access solution
- De-coupling of the Layer 1 and Layer 2 topologies
- Optimization of both the Layer 1 (cabling) and Layer 2 (spanning tree) designs
- Simultaneous support for EoR, MoR, and ToR

Data Center Access Architecture: Physical Pod (End of Row) Topology
- Nexus 5000/2000 virtualized access switches: 2 x Nexus 5000 + 12 x Nexus 2148T
- 576 server ports in each Nexus 5000/2148T virtualized access switch
- Only 16 rack units are required to support 576 x 1GE + 104 x 10GE ports (fabric ports are allocated out of the 104 x 10GE ports)
- The architecture accommodates end-of-row centralized cabling where required
- Structured Cat5e cabling extends from the server racks to centralized network racks
- Fabric links: CX-1 Twinax Cu

Data Center Access Architecture: Physical Pod (Top of Rack) Topology
- Virtualized access switch remote I/O modules (line cards): 24 x Nexus 2148T
- The virtualized access switch architecture supports a ToR cabling plant where required
- A single spanning-tree device (one node and one set of uplinks to the aggregation)
- 10GE MM OM3 fiber and/or CX-1 Twinax fabric links
- Nexus 5000 centralized switching fabric
- Local in-rack server-to-network access port cabling

Data Center Access Architecture: N5K/N2K Advantages, Flexible Cabling
- Combination of EoR and ToR cabling (Nexus 5000/2000 mixed ToR and EoR)
- The Cisco Nexus Fabric Extender (FEX) and Nexus 5000 provide a flexible access solution
- Migration to ToR for 10GE servers, or selective 1GE server racks if required (a mix of ToR and EoR)
- Mixed cabling environment (optimized as required)
- Flexible support for future requirements

Nexus 5000 and 2000: High Availability Design

Fabric Extender: Fabric Modes
- The Fabric Extender associates (pins) a server-side (1GE) port with an uplink (10GE) port
- Server ports are either individually pinned to specific uplinks (static pinning) or all pinned to a single logical port channel
- The behavior on a FEX uplink failure depends on the configuration:
- Static pinning: server ports pinned to the failed uplink are brought down with it (the server interface goes down)
- Port channel: server traffic is shifted to the remaining uplinks based on the port-channel hash (the server interface stays active)

Nexus 2148 Fabric Extender: Configuring the Fabric Extender
Configuration is a two-step process. Step 1: define the Fabric Extender (100-199) and the number of fabric uplinks to be used by that FEX (valid range: 1-4):

    switch# configure terminal
    switch(config)# fex 100
    switch(config-fex)# pinning max-links 4

Step 2, on the Nexus 5000:

    switch# configure terminal
    switch(config)# interface ethernet 1/1
    switch(config-if)# switchport mode fex-fabric
    switch(config-if)# fex associate 100
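With the FEX defined via `pinning max-links`, the static-pinning behavior described under Fabric Modes can be modeled in a few lines. This is a minimal sketch, not Cisco code: the contiguous grouping of host ports per uplink is an assumption made for illustration rather than the documented 2148T port-to-uplink mapping, and `pin_ports` / `ports_down_on_failure` are hypothetical helper names.

```python
# Illustrative model of FEX static pinning (assumption: with
# "pinning max-links 4", the 48 host ports split into contiguous
# groups of 12, one group per fabric uplink).

def pin_ports(num_host_ports: int, max_links: int) -> dict:
    """Return {uplink_number: [host_port, ...]} under static pinning."""
    group = num_host_ports // max_links
    return {
        uplink + 1: list(range(uplink * group + 1, (uplink + 1) * group + 1))
        for uplink in range(max_links)
    }

def ports_down_on_failure(pinning: dict, failed_uplink: int) -> list:
    """Static pinning: host ports pinned to a failed uplink are brought down."""
    return pinning.get(failed_uplink, [])

pinning = pin_ports(48, 4)
print(pinning[1])                         # host ports pinned to uplink 1: 1-12
print(ports_down_on_failure(pinning, 2))  # ports brought down if uplink 2 fails: 13-24
```

In port-channel mode the same failure would leave every host port up: all 48 ports are pinned to the one logical uplink, and traffic simply rehashes across the surviving member links.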
The second command block configures Nexus 5000 ports as fabric ports and associates them with the desired FEX.

Nexus 2148 Fabric Extender
- Fabric Extender ports are Nexus 5000 ports

Nexus 2148 Fabric Extender: Detecting Link Failure (No STP in the Fabric)
- The failure of one of the links between the Nexus 5000 and the FEX is detected via Layer 1 and/or Layer 2 mechanisms
- Layer 1: IEEE 802.3ae link negotiation defines the Remote Fault Indicator and Link Fault Signaling mechanisms; bit D13 in the Fast Link Pulse (FLP) can be set to indicate a physical fault to the remote side
- Layer 2: Satellite Discovery Protocol (SDP) packets are sent every one (1) second on all links connecting the FEX to the Nexus 5000; in the event of three (3) lost SDP frames on a link, that link is disabled
- SDP messages provide a hello/dead link-failure detection mechanism
- Traffic recovery is sub-second in most real-world cases

Fabric Extender: Detecting Link Failure (No STP in the Fabric)

    TM-5010-1# show platform software satmgr info fport ethernet 1/1
    Interface : Eth1/1 - 0x1d000000 Up Remote chassis: 100
    satellite: 0xc02cb1ec0d00, SDP state Active, Rx:504384, Tx:518720
    Fabric mode. satellite Bound. Fport state: Active
    fabric slot:131, SDP module id:0xc02cb1ec0d00, rlink: 0x1f000080
    parent:0x16000000 num mem: 0 num mem up: 0

    TM-5010-1# show spanning-tree interface ethernet 1/1
    ERROR: No spanning tree information available for Ethernet1/1

- SDP is active, and there is no spanning-tree state for the port
- The fabric links are internal to the switch itself, so no spanning tree is required for link redundancy
- SDP provides UDLD- and LACP-like functionality
- The hardware prevents a packet from looping between fabric links

Fabric Extender: Uplink Failure with Static Pinning
- (Figure: VMware ESX host with VM port groups for VMs #1-#4, service console, and VMotion.)
- On failure of a specific Fabric Extender uplink, the associated pinned server ports are brought down; this emulates a server port failure
- Traffic restoration depends on server-side NIC teaming recovery mechanisms
- As an example, on link loss an ESX vSwitch will issue a RARP to trigger MAC learning for downstream recovery and fail over all upstream traffic (MAC or port-group load balancing)

Fabric Extender: Uplink Failure with a Port Channel
- When the fabric links are configured as a port channel, the failure of a Fabric Extender-to-Nexus 5000 uplink does not trigger a change on the server ports
- The logical uplink that each server port is pinned to (the port channel) remains up
- Upstream and downstream traffic are both redistributed on a per-flow basis across the remaining links in the bundle: the Nexus 5000 rebalances flows downstream using the port-channel hash, and the FEX rebalances flows upstream using the same hash
- Current support is for 12 Ethernet port channels on the Nexus 5000 (16 supported in Q3CY09)

Fabric Extender: Port Channel Load Sharing
- The Nexus 5000 and the Fabric Extender use the same port-channel hashing algorithm
- Each input field to the hash is divided by one of two CRC-8 polynomials
- 256 hash buckets minimize distribution bias; the worst-case imbalance is 6%
- The Nexus 5000 supports 6-tuple input to the hash (L2 + L3 + L4); the Fabric Extender supports 5-tuple input (L2 + L3 + VLAN)
- The same hash is used on BOTH ends of the FEX uplinks, configured only once on the remote line-card interfaces on the Nexus 5000

Fabric Extender: Port Channel Load Sharing (continued)
- (Figure: bucket-to-link assignments for 8 buckets versus 256 buckets across three links.)
- Prior generations of EtherChannel load sharing used eight hash buckets, which could lead to non-optimal load sharing with an odd number of links
- The N5K and N2K hash to 256 buckets, which provides better load sharing in normal operation and avoids imbalance of flows in link-failure cases

Nexus 5000 and 2000: Virtual Port Channel on Nexus 5000 (Q3CY09)

Data Center Access Architecture: Virtualized Access Switch System-Level Redundancy
Component MTBF (based on the Telcordia SR-332 "Black Box" methodology):
- 5010: 87,180 hours
- 5020: 58,457 hours
- 2148T: 160,799 hours

System-level redundancy:
- Redundant switching fabrics (2 x N5K), Q3CY09
- Redundant control plane (2 x N5K), Q3CY09
- Redundant fabric links (4 x 10GE)

Component-level redundancy:
- Redundant and hot-swap power supplies
- Redundant and hot-swap fans
- Full ISSU (1HCY10)

Data Center Architecture: vPC MultiChassis EtherChannel
- vPC is a port-channeling concept extending link aggregation to two separate physical switches
- Allows the creation of resilient L2 topologies based on link aggregation
- Eliminates the need for STP in the access-distribution layer
- Provides increased bandwidth; all links are actively forwarding
- MCEC is available in two implementations: VSS on the Cat6k, and vPC as a standalone feature in NX-OS
- vPC maint
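Going back to the port-channel load-sharing slides: the advantage of 256 hash buckets over the older 8-bucket scheme with an odd number of links comes down to simple remainder arithmetic. The round-robin bucket-to-link assignment below is a simplification for illustration; on the real hardware the bucket is selected by the CRC-8 hash of the frame fields.

```python
# Sketch: distribute N hash buckets across the active links of a
# port channel and compare the per-link load for 8 vs 256 buckets.

def bucket_distribution(num_buckets: int, num_links: int) -> list:
    """Count how many hash buckets land on each link (round-robin)."""
    counts = [0] * num_links
    for bucket in range(num_buckets):
        counts[bucket % num_links] += 1
    return counts

# Three active links (e.g. after one of four fabric links fails):
print(bucket_distribution(8, 3))    # [3, 3, 2]: worst link carries 37.5% of buckets
print(bucket_distribution(256, 3))  # [86, 85, 85]: worst link carries about 33.6%
```

With 8 buckets and 3 links, one link carries 3/8 of all flows instead of the ideal 1/3; with 256 buckets the deviation from ideal shrinks to a fraction of a percent, which is why the load stays close to balanced even in link-failure cases.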