Chinese text: 5,030 characters
Translation of English-language material

LAN Switch Architecture

This chapter introduces many of the concepts behind LAN switching common to all switch vendors. The chapter begins by looking at how data are received by a switch, continues with the mechanisms used to switch data as efficiently as possible, and concludes with forwarding data toward their destinations. These concepts are not specific to Cisco and are valid when examining the capabilities of any LAN switch.

1. Receiving Data—Switching Modes

The first step in LAN switching is receiving the frame or packet, depending on the capabilities of the switch, from the transmitting device or host. Switches that make forwarding decisions only at Layer 2 of the OSI model refer to data as frames, while switches that make forwarding decisions at Layer 3 and above refer to data as packets. This chapter's examination of switching begins from a Layer 2 point of view. Depending on the model, varying amounts of each frame are stored and examined before being switched.

Three types of switching modes have been supported on Catalyst switches:

- Cut through
- Fragment free
- Store and forward

These three switching modes differ in how much of the frame is received and examined by the switch before a forwarding decision is made. The next sections describe each mode in detail.

1.1 Cut-Through Mode

Switches operating in cut-through mode receive and examine only the first 6 bytes of a frame. These first 6 bytes represent the destination MAC address of the frame, which is sufficient information to make a forwarding decision. Although cut-through switching offers the lowest latency when transmitting frames, it is susceptible to transmitting fragments created by Ethernet collisions, runts (frames of less than 64 bytes), or damaged frames.

1.2 Fragment-Free Mode

Switches operating in fragment-free mode receive and examine the first 64 bytes of the frame. Fragment free is referred to as "fast forward" mode in some Cisco Catalyst documentation. Why examine 64 bytes? In a properly designed Ethernet network, collision fragments must be detected in the first 64 bytes.

1.3 Store-and-Forward Mode

Switches operating in store-and-forward mode receive and examine the entire frame, resulting in the most error-free type of switching.

As switches utilizing faster processors and application-specific integrated circuits (ASICs) were introduced, supporting cut-through and fragment-free switching was no longer necessary. As a result, all new Cisco Catalyst switches utilize store-and-forward switching.

Figure 2-1 compares the switching modes.

Figure 2-1. Switching Modes
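The trade-off among the three modes comes down to how many bytes must arrive before a forwarding decision is possible. The following is a minimal illustrative sketch, not anything from Catalyst software; the function and dictionary names are invented for this example:

```python
# Illustrative sketch: how many bytes each switching mode examines
# before a forwarding decision can be made. Cut-through needs only the
# 6-byte destination MAC; fragment-free waits out the 64-byte collision
# window; store-and-forward needs the entire frame.
BYTES_BEFORE_DECISION = {
    "cut-through": 6,           # destination MAC address only
    "fragment-free": 64,        # minimum legal Ethernet frame size
    "store-and-forward": None,  # entire frame, whatever its length
}

def can_forward(mode: str, bytes_received: int, frame_length: int) -> bool:
    """Return True once enough of the frame has arrived for this mode."""
    threshold = BYTES_BEFORE_DECISION[mode]
    if threshold is None:       # store-and-forward
        return bytes_received >= frame_length
    return bytes_received >= min(threshold, frame_length)

# A maximum-size 1518-byte frame: cut-through decides after 6 bytes,
# fragment-free after 64, store-and-forward only after all 1518.
assert can_forward("cut-through", 6, 1518)
assert not can_forward("fragment-free", 6, 1518)
assert can_forward("fragment-free", 64, 1518)
assert not can_forward("store-and-forward", 64, 1518)
assert can_forward("store-and-forward", 1518, 1518)
```

The sketch also makes the error trade-off visible: a mode that decides before byte 64 can propagate collision fragments that a later check would have caught.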
2. Switching Data

Regardless of how many bytes of each frame are examined by the switch, the frame must eventually be switched from the input, or ingress, port to one or more output, or egress, ports. A switch fabric is a general term for the communication channels used by the switch to transport frames, carry forwarding decision information, and relay management information throughout the switch. A comparison could be made between the switch fabric in a Catalyst switch and the transmission in an automobile: the transmission transfers power from the engine to the wheels, while the switch fabric transfers frames from an ingress port to one or more egress ports.

Although a variety of techniques have been used to implement switching fabrics on Cisco Catalyst platforms, two major switch fabric architectures are common:

- Shared bus
- Crossbar

2.1 Shared Bus Switching

In a shared bus architecture, all line modules in the switch share one data path. A central arbiter determines how and when to grant requests for access to the bus from each line card. Depending on the configuration of the switch, the arbiter can use various methods of achieving fairness. A shared bus architecture is much like multiple lines at an airport ticket counter, with only one ticketing agent processing customers at any given time.

Figure 2-2 illustrates round-robin servicing of frames as they enter a switch. Round-robin is the simplest method of servicing frames in the order in which they are received. Current Catalyst switching platforms such as the Catalyst 6500 support a variety of quality of service (QoS) features to provide priority service to specified traffic flows.

Figure 2-2. Round-Robin Service Order

The following list and Figure 2-3 illustrate the basic concept of moving frames from the receiving port, or ingress, to the transmitting port(s), or egress, using a shared bus architecture:
1. Frame received from Host1—The ingress port on the switch receives the entire frame from Host1 and stores it in a receive buffer. The port checks the frame's Frame Check Sequence (FCS) for errors. If the frame is defective (runt, fragment, invalid CRC, or giant), the port discards the frame and increments the appropriate counter.

2. Access to the data bus requested—A header containing the information necessary to make a forwarding decision is added to the frame. The line card then requests access, or permission, to transmit the frame onto the data bus.

3. Frame transmitted onto the data bus—After the central arbiter grants access, the frame is transmitted onto the data bus.

4. Frame received by all ports—In a shared bus architecture, every frame transmitted is received by all ports simultaneously. In addition, the frame is received by the hardware necessary to make a forwarding decision.

5. Switch determines which port(s) should transmit the frame—The information added to the frame in step 2 is used to determine which ports should transmit the frame. In some cases, such as a frame with an unknown destination MAC address or a broadcast frame, the switch transmits the frame out all ports except the one on which the frame was received.

6. Port(s) instructed to transmit; remaining ports discard the frame—Based on the decision in step 5, a certain port or ports are told to transmit the frame, while the rest are told to discard, or flush, the frame.

7. Egress port transmits the frame to Host2—In this example, it is assumed that the location of Host2 is known to the switch, and only the port connecting to Host2 transmits the frame.

One advantage of a shared bus architecture is that every port except the ingress port receives a copy of the frame automatically, easily enabling multicast and broadcast traffic without the need to replicate the frames for each port. This example is greatly simplified and will be discussed in detail for Catalyst platforms that utilize a shared bus architecture in Chapter 3, "Catalyst Switching Architecture."

Figure 2-3. Frame Flow in a Shared Bus

2.2 Crossbar Switching

In the shared bus architecture example, the speed of the shared data bus determines much of the overall traffic-handling capacity of the switch. Because the bus is shared, line cards must wait their turns to communicate, and this limits overall bandwidth.

A solution to the limitations imposed by the shared bus architecture is the implementation of a crossbar switch fabric, as shown in Figure 2-4. The term crossbar means different things on different switch platforms, but it essentially indicates multiple data channels or paths between line cards that can be used simultaneously.

In the case of the Cisco Catalyst 5500 series, one of the first crossbar architectures advertised by Cisco, three individual 1.2-Gbps data buses are implemented. Newer Catalyst 5500 series line cards have the necessary connector pins to connect to all three buses simultaneously, taking advantage of 3.6 Gbps of aggregate bandwidth. Legacy line cards from the Catalyst 5000 are still compatible with the Catalyst 5500 series, connecting to only one of the three data buses. Access to all three buses is required of Gigabit Ethernet line cards on the Catalyst 5500 platform.

A crossbar fabric on the Catalyst 6500 series is enabled with the Switch Fabric Module (SFM) and Switch Fabric Module 2 (SFM2). The SFM provides 128 Gbps of bandwidth (256 Gbps full duplex) to line cards via 16 individual 8-Gbps connections to the crossbar switch fabric. The SFM2 was introduced to support the Catalyst 6513 13-slot chassis and includes architecture optimizations over the SFM.

Figure 2-4. Crossbar Switch Fabric
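The shared-bus walk-through in section 2.1 can be condensed into a small simulation: every port "sees" each frame, and the forwarding decision tells all but the chosen port(s) to flush it. This is an illustrative sketch only; the class and method names are invented, and real Catalyst arbitration and header formats are far more involved:

```python
# Toy model of shared-bus forwarding (steps 1-7 above). The forwarding
# decision returns the ports instructed to transmit; every other port
# implicitly flushes its copy of the frame.
class SharedBusSwitch:
    def __init__(self, num_ports: int):
        self.num_ports = num_ports
        self.mac_table = {}          # MAC address -> port number

    def learn(self, mac: str, port: int) -> None:
        """Record which port a source MAC address was seen on."""
        self.mac_table[mac] = port

    def forward(self, ingress: int, src_mac: str, dst_mac: str) -> list[int]:
        """Return the list of ports instructed to transmit the frame."""
        self.learn(src_mac, ingress)
        # Known unicast: only the destination's port transmits (step 7).
        if dst_mac in self.mac_table and dst_mac != "ff:ff:ff:ff:ff:ff":
            egress = self.mac_table[dst_mac]
            return [egress] if egress != ingress else []
        # Unknown unicast or broadcast: flood out every port except
        # the one the frame arrived on (step 5).
        return [p for p in range(self.num_ports) if p != ingress]

sw = SharedBusSwitch(num_ports=4)
# Host2's MAC is unknown at first, so the frame floods ports 1, 2, 3.
assert sw.forward(0, "aa:aa:aa:aa:aa:aa", "bb:bb:bb:bb:bb:bb") == [1, 2, 3]
# Once Host2 (on port 2) has been learned, only port 2 transmits.
sw.learn("bb:bb:bb:bb:bb:bb", 2)
assert sw.forward(0, "aa:aa:aa:aa:aa:aa", "bb:bb:bb:bb:bb:bb") == [2]
```

Because every port receives the frame anyway in a shared-bus design, the flood case costs nothing extra, which is the multicast/broadcast advantage the text notes.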
3. Buffering Data

Frames must wait their turn for the central arbiter before being transmitted in shared bus architectures. Frames can also be delayed when congestion occurs in a crossbar switch fabric. As a result, frames must be buffered until transmitted. Without an effective buffering scheme, frames are more likely to be dropped anytime traffic oversubscription or congestion occurs.

Buffers are used when more traffic is forwarded to a port than it can transmit. Reasons for this include the following:

- A speed mismatch between ingress and egress ports
- Multiple input ports feeding a single output port
- Half-duplex collisions on an output port
- A combination of all the above

To prevent frames from being dropped, two common types of memory management are used with Catalyst switches:

- Port buffered memory
- Shared memory

3.1 Port Buffered Memory

Switches utilizing port buffered memory, such as the Catalyst 5000, provide each Ethernet port with a certain amount of high-speed memory to buffer frames until they are transmitted. A disadvantage of port buffered memory is that frames are dropped when a port runs out of buffers. One method of maximizing the benefit of buffers is the use of flexible buffer sizes. Catalyst 5000 Ethernet line card port buffer memory is flexible and can create frame buffers for any frame size, making the most of the available buffer memory. On Catalyst 5000 Ethernet cards using the SAINT ASIC, each port has 192 KB of buffer memory, 24 KB of which is used for receive, or input, buffers and 168 KB for transmit, or output, buffers.

Using the 168 KB of transmit buffers, each port can create as many as 2500 64-byte buffers. With most of the buffers in use as an output queue, the Catalyst 5000 family has eliminated head-of-line blocking issues. (You learn more about head-of-line blocking later in this chapter in the section "Congestion and Head-of-Line Blocking.") In normal operations, the input queue is never used for more than one frame, because the switching bus runs at a high speed.

Figure 2-5 illustrates port buffered memory.
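The flexible-buffer idea can be sketched with a simple byte-budget model. This is an assumption-laden illustration, not the SAINT ASIC's actual behavior: the class and method names are invented, and the real hardware also spends memory on per-buffer bookkeeping, which is presumably why the chapter cites roughly 2500 buffers rather than the raw 168 KB / 64 B = 2688:

```python
# Illustrative per-port transmit buffer pool with flexible buffer
# sizes: buffers are carved to fit each frame, so small frames do not
# waste fixed-size slots, and frames are dropped once the pool is full.
class PortTxBuffer:
    def __init__(self, capacity_bytes: int = 168 * 1024):
        self.capacity = capacity_bytes
        self.used = 0
        self.queue = []              # byte lengths of queued frames
        self.drops = 0

    def enqueue(self, frame_len: int) -> bool:
        """Buffer a frame if room remains; otherwise drop it."""
        if self.used + frame_len > self.capacity:
            self.drops += 1
            return False
        self.used += frame_len
        self.queue.append(frame_len)
        return True

    def transmit_one(self) -> None:
        """Transmit and free the frame at the head of the queue."""
        if self.queue:
            self.used -= self.queue.pop(0)

port = PortTxBuffer()
# Offer 5000 minimum-size frames: the pool fills after 2688 of them
# in this simplified model (no per-buffer overhead), and the rest drop.
accepted = sum(port.enqueue(64) for _ in range(5000))
assert accepted == 2688
assert port.drops == 5000 - accepted
```

The same pool would hold only about 113 maximum-size 1518-byte frames, which is the point of flexible sizing: capacity adapts to the traffic actually queued.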
Figure 2-5. Port Buffered Memory

3.2 Shared Memory

Some of the earliest Cisco switches use a shared memory design for port buffering. Switches using a shared memory architecture provide all ports access to that memory at the same time, in the form of shared frame or packet buffers. All ingress frames are stored in a shared memory "pool" until the egress ports are ready to transmit. The switch dynamically allocates the shared memory in the form of buffers, accommodating ports with high amounts of ingress traffic without allocating unnecessary buffers.

The Catalyst 1200 series switch is an early example of a shared memory switch. The Catalyst 1200 supports both Ethernet and FDDI and has 4 MB of shared packet dynamic random-access memory (DRAM). Packets are handled first in, first out (FIFO).

More recent examples of switches using shared memory architectures are the Catalyst 4000 and 4500 series switches. The Catalyst 4000 with a Supervisor I utilizes 8 MB of static RAM (SRAM) as dynamic frame buffers. All frames are switched using a central processor or ASIC and are stored in packet buffers until switched. The Catalyst 4000 Supervisor I can create approximately 4000 shared packet buffers. The Catalyst 4500 Supervisor IV, for example, utilizes 16 MB of SRAM for packet buffers.

Figure 2-6. Shared Memory Architecture

4. Oversubscribing the Switch Fabric
Switch manufacturers use the term non-blocking to indicate that some or all of the switched ports have connections to the switch fabric equal to their line speed. For example, an 8-port Gigabit Ethernet module would require 8 Gbps of bandwidth into the switch fabric for the ports to be considered non-blocking. All but the highest-end switching platforms and configurations have the potential of oversubscribing access to the switching fabric.

Depending on the application, oversubscribing ports may or may not be an issue. For example, a 10/100/1000 48-port Gigabit Ethernet module with all ports running at 1 Gbps would require 48 Gbps of bandwidth into the switch fabric. If many or all ports were connected to high-speed file servers capable of generating consistent streams of traffic, this one line module could outstrip the bandwidth of the entire switching fabric. If the module is connected entirely to end-user workstations with lower bandwidth demands, however, oversubscription may never become apparent.

Cisco offers both non-blocking and blocking configurations on various platforms, depending on bandwidth requirements. Check the specifications of each platform and the available line cards to determine the aggregate bandwidth of the connection into the switch fabric.

5. Congestion and Head-of-Line Blocking

Head-of-line blocking occurs whenever traffic waiting to be transmitted prevents, or blocks, traffic destined elsewhere from being transmitted. Head-of-line blocking occurs most often when multiple high-speed data sources are sending to the same destination. In the earlier shared bus example, the central arbiter used the round-robin approach to moving traffic from one line card to another. Ports on each line card request access to transmit via a local arbiter, and each line card's local arbiter is in turn serviced by the central arbiter.

In Figure 2-7, a congestion scenario is created using a traffic generator. Port 1 on the traffic generator is connected to Port 1 on the switch, generating traffic at a 50 percent rate, destined for both Ports 3 and 4. Port 2 on the traffic generator is connected to Port 2 on the switch, generating traffic at a 100 percent rate, destined only for Port 4. This situation creates congestion for traffic destined to be forwarded by Port 4 on the switch, because traffic equal to 150 percent of the forwarding capacity of Port 4 is arriving.

Figure 2-7. Head-of-Line Blocking

Head-of-line blocking can also be experienced with crossbar switch fabrics, because many, if not all, line cards have high-speed connections into the switch fabric. Multiple line cards may attempt to create a connection to a line card that is already busy and must wait for the receiving line card to become free before transmitting. In this case, data destined for a different line card that is not busy is
blocked by the frames at the head of the line.

Catalyst switches use a number of techniques to prevent head-of-line blocking; one important example is the use of per-port buffering. Each port maintains a small ingress buffer and a larger egress buffer. Larger output buffers (64 KB to 512 KB shared) allow frames to be queued for transmission during periods of congestion. During normal operations, only a small input queue is necessary, because the switching bus services frames at a very high speed.

6. Forwarding Data

Regardless of the type of switch fabric, a decision about which ports should forward a frame and which should flush, or discard, it must occur. This decision can be made using only the information found at Layer 2 (source/destination MAC address) or on other factors, such as Layer 3 (IP address) and Layer 4 (port). Each switching platform supports various types of ASICs responsible for making the intelligent switching decisions. Each Catalyst switch creates a header or label for each packet, and forwarding decisions are made based on that header or label.

7. Summary

Although a wide variety of approaches exist to optimize the switching of data, many of the core concepts are closely related. The Cisco Catalyst line of switches focuses on the use of shared bus switching, crossbar switching, and combinations of the two, depending on the platform, to achieve very high-speed switching solutions. High-speed switching ASICs use shared and per-port buffers to reduce congestion and prevent head-of-line blocking.
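As a closing illustration, the congestion arithmetic behind the Figure 2-7 scenario in section 5 is easy to verify directly. The flow table below is a sketch built from the numbers in that scenario (it assumes the 50 percent stream reaches Port 4 in full, which is what the chapter's 150 percent figure implies); none of the names come from any Catalyst tool:

```python
# Offered load per egress port in the Figure 2-7 scenario: sum the
# rates of every flow targeting each output port, as a fraction of
# line rate. Anything above 1.0 must be buffered or dropped.
flows = [
    # (ingress port, rate as fraction of line rate, egress ports)
    (1, 0.50, [3, 4]),   # generator Port 1 -> switch Ports 3 and 4
    (2, 1.00, [4]),      # generator Port 2 -> switch Port 4 only
]

def offered_load(egress: int) -> float:
    """Total traffic destined for one egress port, in units of line rate."""
    return sum(rate for _, rate, dests in flows if egress in dests)

assert offered_load(3) == 0.5    # Port 3 is comfortably within capacity
assert offered_load(4) == 1.5    # Port 4 sees 150% of what it can send
```

Any port whose offered load exceeds 1.0 is oversubscribed, and without per-port output buffering its backlog is exactly what stalls unrelated traffic at the head of the line.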