LAN Switches

A typical network consists of:

  • nodes, or computers

  • a medium for connection, either wired or wireless

  • special network equipment, such as routers or hubs

In the case of the Internet, these pieces work together to allow your computer to send information to another computer. The other computer can be on the other side of the world!

Switches are a fundamental part of most networks. They let several users send information over a network at the same time without slowing each other down. Just as routers allow different networks to communicate with each other, switches allow different nodes of a network to communicate directly with each other. A node is a network connection point, typically a computer. Switches allow the nodes to communicate in a smooth, efficient manner.

Illustration of a Cisco Catalyst switch.

There are many different types of switches and networks. Switches that provide a separate connection for each node in a company's internal network are called LAN switches. Essentially, a LAN switch creates a series of instant networks that contain only the two devices that communicate with each other at that particular moment. This document focuses on Ethernet networks that use LAN switches. It describes what a LAN switch is and how transparent bridging works, and it also explains VLANs, trunking, and spanning trees.

In the most basic type of network found today, nodes simply connect to one another through hubs. As a network grows, there are some potential problems with this configuration:

  • Scalability—In a hub network, there is a limit to the amount of bandwidth that users can share. Significant growth is difficult to accommodate without a sacrifice in performance. Applications today need more bandwidth than ever before. Quite often, the entire network must undergo a periodic redesign to accommodate growth.

  • Latency—Latency is the amount of time that a packet takes to get to the destination. Each node in a hub-based network has to wait for an opportunity to transmit in order to avoid collisions. The latency can increase significantly as you add more nodes. Or, if a user transmits a large file across the network, all the other nodes must wait for an opportunity to send packets. You have probably experienced this problem before at work. You try to access a server or the Internet, and suddenly everything slows down to a crawl.

  • Network Failure—In a typical network, one device on a hub can cause problems for other devices that attach to the hub. Incorrect speed settings or excessive broadcasts cause the problems. An example of an incorrect speed setting is 100 Mbps on a 10 Mbps hub. You can configure switches to limit broadcast levels.

  • Collisions—Ethernet uses a process called carrier sense multiple access with collision detection (CSMA/CD) to communicate across the network. Under CSMA/CD, a node does not send out a packet unless the network is clear of traffic. If two nodes send out packets at the same time, a collision occurs and the packets are lost. Then, both nodes wait for a random amount of time and retransmit the packets. Any part of the network where packets from two or more nodes can interfere with each other is a collision domain. A network with a large number of nodes on the same segment often has a lot of collisions and, therefore, a large collision domain. The short simulation after this list illustrates the collision-and-backoff behavior.
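
As a rough illustration of that collision-and-backoff behavior, the following Python sketch (hypothetical node names and slot counts, not any real implementation) simulates two nodes that share one collision domain. When both nodes transmit in the same time slot, the frames collide and each node waits a random number of slots before it tries again.

    import random

    # Minimal CSMA/CD-style simulation: two nodes share one collision domain.
    # If both transmit in the same slot, a collision occurs and each node
    # backs off for a random number of slots before it retries.
    def transmit_with_backoff(num_frames=3, seed=1):
        random.seed(seed)
        next_attempt = {"node_a": 0, "node_b": 0}   # next slot each node may try
        sent = {"node_a": 0, "node_b": 0}
        slot = 0
        while slot < 50 and min(sent.values()) < num_frames:
            ready = [n for n, t in next_attempt.items()
                     if t <= slot and sent[n] < num_frames]
            if len(ready) > 1:
                # Both nodes sensed an idle medium and sent at once: collision.
                for n in ready:
                    next_attempt[n] = slot + random.randint(1, 4)
                print(f"slot {slot}: collision, both nodes back off")
            elif len(ready) == 1:
                n = ready[0]
                sent[n] += 1
                next_attempt[n] = slot + 1
                print(f"slot {slot}: {n} sends frame {sent[n]}")
            slot += 1

    transmit_with_backoff()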

Hubs provide an easy way to scale up and shorten the distance that the packets must travel to get from one node to another. But hubs do not break up the actual network into discrete segments. Switches handle this job.

Imagine that each vehicle is a packet of data that waits for an opportunity to continue the trip.

Think of a hub as a four-way intersection where all vehicles have to stop. If more than one car reaches the intersection at one time, the cars must wait for a turn to proceed. But a switch is like a cloverleaf intersection. Each car can take an exit ramp to get to its destination without the need to stop and wait for other traffic to pass. Now imagine this scenario with a dozen or even a hundred roads that intersect at a single point. The wait and the potential for a collision increase significantly if every car has to check all the other roads before it proceeds. But imagine that you can take an exit ramp from any one of those roads to the road of your choice. This ability is what a switch provides for network traffic.

There is a vital difference between a hub and a switch: all the nodes that connect to a hub share the bandwidth, but a device that connects to a switch port gets the full bandwidth to itself. For example, consider 10 nodes that communicate through a hub on a 10 Mbps network. Each node can only get a portion of the 10 Mbps if other nodes on the hub want to communicate as well. But, with a switch, each node can possibly communicate at the full 10 Mbps. Consider the road analogy. If all the traffic comes to a common intersection, the traffic must share that intersection. But a cloverleaf allows all the traffic to continue at full speed from one road to the next.
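
The bandwidth difference is easy to quantify. The short Python sketch below uses the same illustrative numbers (10 nodes, 10 Mbps) to compare the worst-case per-node share on a hub with the dedicated bandwidth of a switch port.

    # Illustrative comparison only: worst-case per-node throughput on a shared
    # 10 Mbps hub versus a dedicated 10 Mbps switch port per node.
    LINK_MBPS = 10
    NODES = 10

    hub_share_mbps = LINK_MBPS / NODES      # all nodes contend for one shared segment
    switch_share_mbps = LINK_MBPS           # each switch port is its own segment

    print(f"Hub: roughly {hub_share_mbps} Mbps per node when all {NODES} nodes transmit")
    print(f"Switch: up to {switch_share_mbps} Mbps per node, simultaneously")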

In a fully switched network, switches replace all the hubs of an Ethernet network with a dedicated segment for every node. These segments connect to a switch, which supports multiple dedicated segments. Sometimes the number of segments reaches the hundreds. Since the only devices on each segment are the switch and the node, the switch picks up every transmission before the transmission reaches another node. The switch then forwards the frame over the appropriate segment. Since any segment contains only a single node, the frame only reaches the intended recipient. This arrangement allows many conversations to occur simultaneously on a network that uses a switch.

An example of a network that uses a switch.

Switching allows a network to maintain full-duplex Ethernet. Before switching existed, Ethernet was half duplex. Half duplex means that only one device on the network can transmit at any given time. In a fully switched network, nodes only communicate with the switch and never directly with each other. In the road analogy, half duplex is similar to the problem of a single lane, when road construction closes one lane of a two-lane road. Traffic attempts to use the same lane in both directions. Traffic that comes one way must wait until traffic from the other direction stops in order to avoid collision.

Fully switched networks employ either twisted pair or fiber-optic cable setups. Both twisted pair and fiber-optic cable systems use separate conductors to send and receive data. In this type of environment, Ethernet nodes can forgo the collision detection process and transmit at will; these nodes are the only devices with the potential to access the medium. In other words, the network dedicates a separate lane to traffic that flows in each direction. This dedication allows nodes to transmit to the switch at the same time that the switch transmits to the nodes. Thus, the environment is collision-free. Transmission in both directions also can effectively double the apparent speed of the network when two nodes exchange information. For example, if the speed of the network is 10 Mbps, each node can transmit at 10 Mbps at the same time.

A mixed network with two switches and three hubs.

Most networks are not fully switched because replacement of all the hubs with switches is costly. Instead, a combination of switches and hubs creates an efficient yet cost-effective network. For example, a company can have hubs that connect the computers in each department and a switch that connects all the department-level hubs together.

A switch has the potential to radically change the way that nodes communicate with each other. But what makes a switch different from a router? Switches usually work at Layer 2 (the Data Link layer) of the Open System Interconnection (OSI) reference model and use MAC addresses. Routers work at Layer 3 (the Network layer) with Layer 3 addresses such as IP, Internetwork Packet Exchange (IPX), or AppleTalk, depending on which Layer 3 protocols are in use. The algorithm that switches use to decide how to forward packets is different from the algorithms that routers use. One difference in the algorithms is how each device handles broadcasts.

On any network, the concept of a broadcast packet is vital to the operability of the network. Whenever a device needs to send out information but does not know to whom to send it, the device sends out a broadcast. For example, every time a new computer or other device comes onto the network, it sends out a broadcast packet to announce its entry. The other nodes, such as a domain server, can then add the device to the browser list, which works like an address directory, and communicate directly with that device. A device can use broadcasts to make an announcement to the rest of the network at any time.

The OSI reference model consists of seven layers that build from the wire (Physical) to the software (Application).

A hub or a switch passes along any broadcast packets that it receives to all the other segments in the broadcast domain, but a router does not pass along broadcast packets. Think about the four-way intersection again. In the analogy, all the traffic passes through the intersection, regardless of the direction of travel. Now, imagine that this intersection is at an international border. In order to pass through the intersection, you must provide the border guard with the specific address to which you are going. If you do not have a specific destination, the guard does not let you pass. A router works in a similar way. If a data packet does not have the specific address of another device, the router does not let the data packet pass. This restriction keeps networks separate from each other, which is good. But, when you want to talk between different parts of the same network, the restriction is not good. Switches can overcome this restriction.
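
The contrast can be sketched in a few lines of Python. The function names and data structures here are hypothetical, purely to illustrate the behavior: a switch floods a broadcast frame out every port except the one it arrived on, while a router simply refuses to forward it to another network.

    BROADCAST_MAC = "ff:ff:ff:ff:ff:ff"

    def switch_handle(frame, in_port, ports, mac_table):
        dst = frame["dst_mac"]
        if dst == BROADCAST_MAC or dst not in mac_table:
            # Broadcast (or unknown destination): flood to every other port.
            return [p for p in ports if p != in_port]
        return [mac_table[dst]]            # known destination: one port only

    def router_handle(frame):
        if frame["dst_mac"] == BROADCAST_MAC:
            return []                      # routers do not forward Layer 2 broadcasts
        return ["route by destination Layer 3 address"]

    announcement = {"dst_mac": BROADCAST_MAC, "payload": b"new device on the network"}
    print(switch_handle(announcement, in_port=1, ports=[1, 2, 3, 4], mac_table={}))
    print(router_handle(announcement))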

LAN switches rely on packet switching. The switch establishes a connection between two segments and keeps the connection just long enough to send the current packet. In a LAN with an Ethernet basis, an Ethernet frame contains a normal packet as the payload of the frame, plus a special header that includes the MAC address information for the source and destination of the packet. Incoming frames are saved to a temporary memory area called a buffer. The switch then reads the MAC address in the frame header and compares the address to a list of addresses in the switch lookup table.
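
A minimal sketch of that lookup step, assuming a simple Python dictionary as the lookup table (real switches keep this table in dedicated hardware), reads the destination and source MAC addresses from the first 12 bytes of the frame header and looks the destination up to find the output port.

    # Read the destination and source MACs from the first 12 bytes of an
    # Ethernet frame header and consult a lookup table that maps MAC -> port.
    def parse_macs(frame_bytes):
        dst = frame_bytes[0:6].hex(":")
        src = frame_bytes[6:12].hex(":")
        return dst, src

    def lookup_port(dst_mac, mac_table):
        return mac_table.get(dst_mac)      # None means the address is unknown

    # Example header: destination MAC, source MAC, then EtherType 0x0800 (IP).
    frame = bytes.fromhex("aabbccddeeff" "112233445566" "0800") + b"payload..."
    mac_table = {"aa:bb:cc:dd:ee:ff": 3}

    dst, src = parse_macs(frame)
    print(f"destination {dst} from {src} -> port {lookup_port(dst, mac_table)}")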

Switches use one of three methods to forward traffic:

  • Cut-through

  • Store and forward

  • Fragment-free

Cut-through switches read the MAC address as soon as the switch detects a packet. After the switch stores the six bytes that make up the address information, it immediately begins to send the packet to the destination node, even as the rest of the packet is still arriving at the switch.

A switch that uses store and forward saves the entire packet to the buffer and checks the packet for Cyclic Redundancy Check (CRC) errors or other problems. If the packet has an error, the switch discards the packet. Otherwise, the switch looks up the MAC address and sends the packet on to the destination node. Many switches combine the two methods: they use cut-through until a certain error level is reached and then change over to store and forward. Very few switches are strictly cut-through because cut-through provides no error checking.

A less common method is fragment-free. Fragment-free works like cut-through, but stores the first 64 bytes of the packet before sending the packet on. The reason for this is that most errors and all collisions occur during the initial 64 bytes of a packet.
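
The three methods differ mainly in how much of the frame the switch must receive before it can act, and in what it verifies first. The Python sketch below is an illustrative comparison under those assumptions (it uses zlib's CRC-32 as a stand-in for the Ethernet frame check sequence), not a model of any particular switch.

    import zlib

    def cut_through_can_forward(received):
        # Forwarding can start once the 6-byte destination MAC has arrived.
        return len(received) >= 6

    def fragment_free_can_forward(received):
        # Wait for the first 64 bytes, where most errors and all collisions appear.
        return len(received) >= 64

    def store_and_forward_ok(full_frame, expected_crc):
        # Buffer the whole frame, then verify its CRC before forwarding.
        return zlib.crc32(full_frame) == expected_crc

    frame = bytes(range(256)) * 6              # stand-in for a complete frame
    crc = zlib.crc32(frame)

    print("cut-through ready after 6 bytes:   ", cut_through_can_forward(frame[:6]))
    print("fragment-free ready after 64 bytes:", fragment_free_can_forward(frame[:64]))
    print("store-and-forward CRC check passes:", store_and_forward_ok(frame, crc))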

LAN switches vary in physical design. Currently, there are three popular configurations in use:

  • Shared-memory—The switch stores all incoming packets in a common memory buffer that all the switch ports (input/output connections) share. Then, the switch sends the packets out the correct port for the destination node.

  • Matrix—This type of switch has an internal grid with which the input ports and the output ports cross each other. When the switch detects a packet on an input port, the switch compares the MAC address to the lookup table to find the appropriate output port. The switch then makes a connection on the grid where these two ports intersect.

  • Bus-architecture—Instead of a grid, an internal transmission path (a common bus) is shared by all the ports with use of time-division multiple access (TDMA). A switch with this configuration dedicates a memory buffer to each port, and an application-specific integrated circuit (ASIC) controls access to the internal bus.

Most Ethernet LAN switches use transparent bridging to create the address lookup tables. Transparent bridging technology allows a switch to learn everything that the switch needs to know about the location of nodes on the network, without any action from the network administrator. Transparent bridging has five parts, which the sketch after this list walks through:

  • Learning

  • Flooding

  • Filtering

  • Forwarding

  • Aging
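
A minimal learning-bridge sketch in Python, with hypothetical class and field names, shows how the five parts fit together: the switch learns source addresses, floods frames with unknown destinations, filters frames whose destination is on the same segment, forwards known destinations out a single port, and ages out stale table entries.

    import time

    class LearningBridge:
        def __init__(self, ports, aging_seconds=300):
            self.ports = ports
            self.aging_seconds = aging_seconds
            self.table = {}                          # MAC -> (port, last_seen)

        def receive(self, frame, in_port, now=None):
            now = time.time() if now is None else now
            self._age_out(now)                       # Aging: drop stale entries
            # Learning: remember which port the source MAC arrived on.
            self.table[frame["src_mac"]] = (in_port, now)
            dst = frame["dst_mac"]
            if dst not in self.table:
                # Flooding: unknown destination goes out every other port.
                return [p for p in self.ports if p != in_port]
            out_port, _ = self.table[dst]
            if out_port == in_port:
                # Filtering: destination is on the same segment; do not forward.
                return []
            # Forwarding: send only out the port where the destination lives.
            return [out_port]

        def _age_out(self, now):
            self.table = {mac: (port, seen)
                          for mac, (port, seen) in self.table.items()
                          if now - seen < self.aging_seconds}

    bridge = LearningBridge(ports=[1, 2, 3])
    print(bridge.receive({"src_mac": "aa", "dst_mac": "bb"}, in_port=1))  # flood: [2, 3]
    print(bridge.receive({"src_mac": "bb", "dst_mac": "aa"}, in_port=2))  # forward: [1]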

 
