Switches, Routers, Bridges and LANs/Introduction
- 1 The need for a layered approach
- 2 The OSI model
- 3 Switching at different layers
The need for a layered approach
We have very high expectations of computer networks: we expect to be able to communicate with anywhere in the world with minimal delay and with high reliability. But the hardware that provides this communication sometimes fails, or needs to be upgraded. Nor is the internet a static system: new networks are connected every day, and some old networks stop working temporarily or permanently. The systems used for computer networking need to detect and work around errors, and to do so automatically, without waiting for human intervention. Errors can occur at any level, from a tiny amount of electrical interference or a cosmic ray that might alter a single bit of information on a wire, to an entire trans-Atlantic network trunk cable being cut by a ship's anchor.
Writing a single protocol that can deal with all these eventualities is achievable in theory, but in practice the need to deal with all the possible sources of error would quickly make it unmanageably complex. In addition, there's a political problem with deploying one single monolithic protocol: everyone would have to use the same protocol, and everyone would have to adopt it at the same time.
The solution that developed is a common one in software engineering: the use of abstraction. Networking is implemented in terms of a series of layers, each of which only solves a small part of the whole problem of networked communications. However, each layer can rely on the services provided by the layers below it, so that the entire stack working together can solve a problem that no single layer solves. In addition, the layered model means that different parts of the network can solve the lower-layer problem in different ways, provided that they all implement the same interface that the higher-level layers rely on.
The OSI model
An early attempt to standardize the layers of networking was made in 1978 by the Open Systems Interconnection (OSI) project of the International Organization for Standardization (ISO). The project described 7 network layers and specified a suite of protocols to operate at each of these layers. Though the specified protocols didn't catch on and were superseded by TCP/IP, the concept of the 7 layers stuck, and is still used to this day to describe networking protocols. There is no strict need for a protocol to comply with the OSI 7-layer model, and indeed many protocols blur the boundaries or collapse the functions of several layers into one protocol, but the layer numbers remain useful for informal communication.
Layers in the OSI model
Layer 1: the physical layer
The lowest layer of the stack deals with the physical details of sending signals from one place to another. Typically this information is carried encoded in electrical signals or laser light, but in principle any means of communication could be used. For example, if you had a way of encoding binary data into sound transmitted from a speaker to a microphone (with a second speaker and microphone to send data back the other way) then you could use the rest of the standard protocols on top of this physical layer without having to change them.
The physical layer needn't provide completely reliable transmission: the upper layers are responsible for detecting errors and resending data if necessary. The physical layer merely provides some way of propagating data from one node to another.
Among other things, the specification of a physical layer protocol will need to specify the voltage of an electric signal or frequency and power of a laser, size and shape of connectors, modulation of the signal, and the way that multiple nodes share the same link.
Layer 2: The Data Link Layer
The data link layer is responsible for providing the means to transfer data from one node to another, detecting and, where possible, correcting errors introduced at the physical layer. Layer 2 introduces the concept of unique addresses that identify the nodes that are communicating. Unlike layer 3 addresses, data link layer addresses use a flat structure: the structure of an address doesn't yield any information about the relative location of nodes or the route that traffic should take between them.
The most familiar layer 2 protocol is Ethernet, although the Ethernet standard also specifies details of the physical layer.
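As a concrete illustration, the layer 2 wrapper at the start of an Ethernet frame carries the destination and source addresses in its first 12 bytes, followed by a 2-byte EtherType. A minimal sketch in Python (the frame bytes below are made up for illustration):

```python
import struct

def parse_ethernet_header(frame: bytes):
    """Split out the 14-byte Ethernet II header: destination MAC,
    source MAC and EtherType; the payload follows immediately after."""
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])

    def fmt(mac: bytes) -> str:
        return ":".join(f"{b:02x}" for b in mac)

    return fmt(dst), fmt(src), ethertype

# A made-up frame: broadcast destination, EtherType 0x0800 (IPv4).
frame = bytes.fromhex("ffffffffffff" "001122334455" "0800") + b"payload"
dst, src, ethertype = parse_ethernet_header(frame)
print(dst)             # ff:ff:ff:ff:ff:ff
print(src)             # 00:11:22:33:44:55
print(hex(ethertype))  # 0x800
```

Note that the addresses really are flat: nothing in "00:11:22:33:44:55" tells a switch where that node sits in the network.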
Layer 3: The Network Layer
The network layer builds on the lower layers to support routing data across interconnected networks (rather than within a single network). Addressing at the network layer takes advantage of a hierarchical structure so that it's possible to summarise the route to thousands or millions of hosts as a single piece of information. Typically, nodes that are on the same network as each other will share a common prefix on their layer 3 address. Layer 3 communication doesn't contain any concept of a continuing connection: each packet of data sent between a pair of communicating hosts is treated separately, with no knowledge of the packets that went before it. Layer 3 protocols may be able to correct errors in a packet that have been introduced at the physical or data link layer, but do not guarantee that no packet will be lost.
The dominant layer 3 protocol is IP.
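The shared-prefix idea can be demonstrated with Python's standard ipaddress module. The addresses below are from the documentation-example ranges, not real hosts: two hosts inside 192.0.2.0/24 can be summarised by that one prefix, rather than by hundreds of individual host routes.

```python
import ipaddress

# Hosts on the same network share a common layer 3 prefix; a router
# can advertise the single prefix instead of one route per host.
network = ipaddress.ip_network("192.0.2.0/24")
host_a = ipaddress.ip_address("192.0.2.10")
host_b = ipaddress.ip_address("192.0.2.200")
elsewhere = ipaddress.ip_address("198.51.100.7")

print(host_a in network)     # True
print(host_b in network)     # True
print(elsewhere in network)  # False
```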
Layer 4: The Transport Layer
The transport layer allows communicating hosts to establish an ongoing connection between them. Layer 4 protocols may detect missing packets and compensate by retransmitting them, but not all protocols do so: most obviously, TCP does provide reliable transmission but UDP doesn't. Providing reliable transmission incurs an overhead, and some data is obsolete once it has been even slightly delayed (e.g. data for a live phone call or video conference) so there are cases where unreliable transmission is desirable.
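The difference in guarantees shows up in the programming interface: a UDP datagram is simply handed to the network, with no connection setup and no retransmission machinery. A minimal loopback sketch in Python (over loopback the datagram will almost certainly arrive, but UDP itself makes no such promise):

```python
import socket

# Two UDP sockets on the loopback interface: one bound as a receiver,
# one used to send a single datagram. No connection is established.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))   # let the OS pick a free port
receiver.settimeout(2)
port = receiver.getsockname()[1]

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", ("127.0.0.1", port))

data, addr = receiver.recvfrom(1024)
print(data)  # b'hello' -- delivered here, but UDP didn't guarantee it
sender.close()
receiver.close()
```

A TCP version of the same exchange would first perform a connection handshake, and the stack would retransmit the data if it were lost.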
Layer 5: The Session Layer
The OSI model allowed for a fifth layer that provides the mechanism for creating, maintaining and destroying a semi-permanent session between end-user applications. For example, it might make it possible to checkpoint and restore communication sessions, or bring several streams from different sources into sync. In practice, although there are protocols that provide features of this type, layer 5 is rarely referred to as a general concept.
Layer 6: The Presentation Layer
Layer 6 is the layer at which data structures that have meaning to the application are mapped into a stream of bytes, the details of which need not concern the lower layers. In theory, this relieves the application layer of having to worry about the differences between one computer platform and another, e.g. a computer that uses ASCII to encode its text files communicating with one that uses EBCDIC. In practice, protocols rarely bother to differentiate this layer from the highest layer (the application layer), treating the two combined as one layer.
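Python happens to ship a codec for EBCDIC code page 500, so the mapping of the same text onto two different byte streams is easy to demonstrate:

```python
# The same five characters produce different bytes under ASCII and
# EBCDIC (code page 500); a presentation layer would hide this
# difference from the application.
text = "HELLO"
ascii_bytes = text.encode("ascii")
ebcdic_bytes = text.encode("cp500")

print(ascii_bytes.hex())   # 48454c4c4f
print(ebcdic_bytes.hex())  # c8c5d3d3d6

# Decoding with the matching codec recovers the original text.
assert ebcdic_bytes.decode("cp500") == text
```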
Layer 7: The Application Layer
The application layer covers the protocols that describe application-specific details of communication. FTP, HTTP and SMTP are all application-layer protocols.
Switching at different layers
The distinction between the aspects of communication that are constrained by each protocol layer may at first seem unimportant or arbitrary. This is true in the case of the most trivial network, consisting of just two computers with a single cable connecting them. However, as the number of nodes on the network increases, it gets more and more useful to clearly distinguish the responsibilities of each layer.
Two computers can share a single physical cable between them, but if we want to add a third computer to this micro-network, how do we connect it? Do we attempt to share the same physical cable by cutting it and splicing in a branch? This might work in theory, but would be inflexible in practice: apart from the time taken to cut and splice the cable, it would be very hard to add a node without disrupting service for the existing users. A more maintainable solution is to plug each computer into a common hub. The hub has network sockets that our cables plug into, and the circuitry within the hub ensures that, as soon as a cable is plugged in, it can send and receive signals over any other connected cable.
The hub is a purely physical layer device. It doesn't know the meaning of any of the signals it transmits, nor does it make any decisions about which signals should go where or whether data is corrupt. Every signal on every cable is copied to all the other cables.
An alternative to using a physical-layer hub is to use a layer 2 switch to connect the hosts. Unlike a hub, a switch attempts to process the data it receives so as to understand something about the packets being transferred. A switch will only parse the layer 2 content of a packet, treating all the higher layer data as a blob of data that can be transferred without understanding it. The advantage of parsing the layer 2 wrapper is that this contains the source and destination addresses of the packet. If the switch knows which direction to send the packet (based on the destination address and its knowledge of the network) it can send the packet to only one link on the network, which saves bandwidth. If the switch doesn't know where to send the packet, it sends it to all interfaces other than the one on which it received the packet: this is called flooding.
Using switches in place of hubs saves bandwidth (by only sending packets on links that need to receive them), the only disadvantage being that switches are more complex devices and may cost more or require additional configuration. In practice, simple switches don't require any configuration and have long since become as cheap as hubs (or even cheaper, now that there is little demand for physical layer hubs).
This discussion has glossed over the detail of how exactly the switch knows where to send a particular packet. When a switch is first connected to the network, it doesn't know anything about the network. Without any knowledge, it has to flood every packet it receives onto every port, behaving in effectively the same way as a hub would. However, the switch can learn from each packet it sees, and this means it can make better decisions. For example, if a switch receives a packet with source address A and destination B on interface 2, it can conclude that source address A is reachable via interface 2. Next time it receives a packet with destination address A (whether or not it comes from address B) it can forward it straight to interface 2, saving bandwidth on the other links.
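The learning behaviour described above can be sketched as a small simulation. The port numbers and addresses here are invented; a real switch would use MAC addresses and implement the table in hardware.

```python
class LearningSwitch:
    """Toy model of layer 2 forwarding: learn each source address,
    forward to a known port, flood when the destination is unknown."""

    def __init__(self, ports):
        self.ports = set(ports)
        self.mac_table = {}          # address -> port it was seen on

    def handle(self, in_port, src, dst):
        self.mac_table[src] = in_port        # learn from the source
        if dst in self.mac_table:
            return {self.mac_table[dst]}     # forward to one port
        return self.ports - {in_port}        # unknown: flood

sw = LearningSwitch(ports=[1, 2, 3, 4])
print(sw.handle(2, src="A", dst="B"))  # {1, 3, 4}: B unknown, so flood
print(sw.handle(3, src="B", dst="A"))  # {2}: A was learned on port 2
print(sw.handle(1, src="C", dst="A"))  # {2}: still known, no flooding
```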
Inside every switch there is therefore a table mapping layer 2 addresses to port numbers. The size of this table is limited both by the available memory and by the need to look up addresses very quickly, so that packets can be forwarded on without too much delay. The specialist hardware that enables this fast lookup is expensive, so in practice, if you want your switch to have more layer 2 address capacity, you have to be prepared to pay extra for it. Home network switches might be able to store a few thousand addresses, while high-end data center switches can cope with tens or hundreds of thousands.
No matter how much you spend on your switch, it's never going to be able to hold an entry in its lookup table for every one of the billions of hosts on the internet. The only sensible way to deal with this is to break the network down into sections, and store information about a whole section at a time. This is how routing (which takes place at layer 3) differs from switching (which takes place at layer 2).
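A toy sketch of that idea: each entry in a routing table summarises a whole section of the network as a prefix, and a lookup picks the most specific matching prefix (longest-prefix match). The prefixes and next-hop names below are invented for illustration.

```python
import ipaddress

# Three routes: a default, a broad internal prefix, and one local LAN.
routes = {
    ipaddress.ip_network("0.0.0.0/0"): "default uplink",
    ipaddress.ip_network("10.0.0.0/8"): "internal core",
    ipaddress.ip_network("10.1.2.0/24"): "local LAN",
}

def lookup(addr):
    """Return the next hop for the most specific matching prefix."""
    addr = ipaddress.ip_address(addr)
    matches = [net for net in routes if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return routes[best]

print(lookup("10.1.2.99"))  # local LAN
print(lookup("10.9.9.9"))   # internal core
print(lookup("8.8.8.8"))    # default uplink
```

Three entries here stand in for millions of hosts, which is exactly the summarisation that a flat layer 2 table cannot provide.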