Local Area Network design/Ethernet evolutions


With the success of Ethernet, new issues arose:

  • need for higher speed: DIX Ethernet II supported a transmission speed equal to 10 Mbit/s, while FDDI, used in backbones, supported a much higher speed (100 Mbit/s), but it would have been too expensive to wire buildings with optical fiber;
  • need to interconnect multiple networks: networks of different technologies (e.g. Ethernet, FDDI, token ring) were difficult to interconnect because they had different MTUs → having the same technology everywhere would have solved this problem.

Fast Ethernet

Fast Ethernet, standardized as IEEE 802.3u (1995), raises the transmission speed to 100 Mbit/s and accordingly makes the maximum collision diameter 10 times shorter (~200-300 m), keeping the same frame format and the same CSMA/CD algorithm.
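
The ~10× figure follows from the slot time, i.e. the transmission time of a minimum-sized 64-byte frame, which bounds the round-trip propagation delay that CSMA/CD can tolerate. As a quick arithmetic check in Python (simple computations, not values quoted from the standard):

    # Slot time = transmission time of a minimum-sized (64-byte) frame.
    # CSMA/CD requires the round-trip propagation delay to stay below it,
    # so a 10x shorter slot time implies a roughly 10x smaller collision diameter.
    MIN_FRAME_BITS = 64 * 8
    print(MIN_FRAME_BITS / 10e6)    # 5.12e-05 s (51.2 µs) at 10 Mbit/s
    print(MIN_FRAME_BITS / 100e6)   # 5.12e-06 s (5.12 µs) at 100 Mbit/s (Fast Ethernet)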

Physical layer

The Fast Ethernet physical layer is altogether different from the 10-Mbit/s Ethernet physical layer: it partially derives from existing standards in the FDDI world, so much so that Fast Ethernet and FDDI are compatible at the physical layer, and it definitively abandons the coaxial cable:

  • 100BASE-T4: twisted copper pair using 4 pairs;
  • 100BASE-TX: twisted copper pair using 2 pairs;
  • 100BASE-FX: optical fiber (only in backbone).

Adoption

When Fast Ethernet was introduced, its adoption rate was quite low because of:

  • distance limit: network size was limited → Fast Ethernet was not appropriate for the backbone;
  • bottlenecks in the backbone: a backbone built with 100-Mbit/s FDDI technology had the same speed as the Fast Ethernet access networks → it was unlikely to be able to drain all the traffic coming from them.

Fast Ethernet started to be adopted more widely with:

  • the introduction of bridges: they break up the collision domain, overcoming the distance limit;
  • the introduction of Gigabit Ethernet in the backbone: it avoids backbone bottlenecks.

Gigabit Ethernet

Gigabit Ethernet, standardized as IEEE 802.3z (1998), raises the transmission speed to 1 Gbit/s and introduces two features, 'Carrier Extension' and 'Frame Bursting', to keep the CSMA/CD protocol working.

Carrier Extension

Increasing the transmission speed tenfold would make the maximum collision diameter 10 times shorter again, putting it down to a few tens of meters, too little for practical cabling → to keep the maximum collision diameter unchanged, the minimum frame size should be increased to 512 bytes[1].
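
The numbers can be checked with a minimal sketch, assuming a signal propagation speed of about 2×10^8 m/s and ignoring repeater and PHY delays (real limits are therefore somewhat smaller than these figures):

    # Maximum collision diameter: the round-trip propagation time must not
    # exceed the slot time (the transmission time of a minimum-sized frame).
    # The propagation speed and the absence of repeater delays are assumptions.
    PROPAGATION_SPEED = 2e8  # m/s

    def max_collision_diameter(bit_rate_bps, min_frame_bytes):
        slot_time = min_frame_bytes * 8 / bit_rate_bps   # seconds
        return PROPAGATION_SPEED * slot_time / 2         # metres (one way)

    print(max_collision_diameter(1e9, 64))    # ~51 m: a few tens of metres only
    print(max_collision_diameter(1e9, 512))   # ~410 m: restored by a 512-byte slot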

Stretching the minimum frame however would cause an incompatibility issue: in the interconnection of a Fast Ethernet network and a Gigabit Ethernet network by a bridge, minimum-sized frames coming from the Fast Ethernet network could not enter the Gigabit Ethernet network → instead of stretching the frame, the slot time, that is the minimum transmission time unit, was stretched: a Carrier Extension made up of dummy padding bits (up to 448 bytes) is appended to all frames shorter than 512 bytes:

Gigabit Ethernet packet format (532 to 1538 bytes):

  preamble | SFD    | Ethernet II DIX/IEEE 802.3 frame | Carrier Extension | IFG
  7 bytes  | 1 byte | 64 to 1518 bytes                 | 0 to 448 bytes    | 12 bytes

  (the frame and the Carrier Extension together span 512 to 1518 bytes)
Disadvantages
  • Carrier Extension occupies the channel with useless bits.
    For example with 64-byte-long frames the useful throughput is very low: only 64 of the 532 bytes occupying the channel belong to the frame, roughly 12% (see the sketch after this list);
  • in newer pure switched networks the full-duplex mode is enabled → CSMA/CD is disabled → Carrier Extension is useless.
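
To make the waste concrete, here is a small illustrative computation using the overheads from the packet format table above, counting the whole frame (headers included) as useful bytes:

    # Channel efficiency of half-duplex Gigabit Ethernet with Carrier Extension.
    # Per-packet overhead: 7-byte preamble + 1-byte SFD + 12-byte IFG = 20 bytes;
    # frames shorter than 512 bytes are padded up to 512 bytes by Carrier Extension.
    OVERHEAD = 7 + 1 + 12
    SLOT = 512

    def efficiency(frame_bytes):
        on_wire = max(frame_bytes, SLOT) + OVERHEAD
        return frame_bytes / on_wire

    for size in (64, 512, 1518):
        print(size, f"{efficiency(size):.0%}")   # 64 -> 12%, 512 -> 96%, 1518 -> 99%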

Frame Bursting

The maximum frame size of 1518 bytes is obsolete by now: in 10-Mbit/s Ethernet a maximum-sized frame occupied the channel for 1.2 ms, a reasonable time to guarantee statistical multiplexing, while in Gigabit Ethernet it occupies the channel for just 12 µs → collisions are a lot less frequent → the maximum frame size could be increased to reduce the header overhead in relation to useful data and improve efficiency.
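
Both occupancy figures follow directly from the maximum frame size; a quick check:

    # Time a maximum-sized (1518-byte) frame keeps the channel busy.
    MAX_FRAME_BITS = 1518 * 8
    print(MAX_FRAME_BITS / 10e6)   # ~1.2e-03 s (1.2 ms) in 10-Mbit/s Ethernet
    print(MAX_FRAME_BITS / 1e9)    # ~1.2e-05 s (12 µs)  in Gigabit Ethernet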

Stretching the maximum frame however would cause an incompatibility issue: in the interconnection of a Fast Ethernet network and a Gigabit Ethernet network by a bridge, maximum-sized frames coming from the Gigabit Ethernet network could not enter the Fast Ethernet network → Frame Bursting consists of concatenating several standard-sized frames one after the other, without releasing the channel:

Gigabit Ethernet packet format with Frame Bursting:

  frame 1[2] + Carrier Extension | FILL | frame 2[2] | FILL | ... | FILL | last frame[2] | IFG

  (burst limit: 8192 bytes; the frame in transmission when this limit is reached is the last one of the burst)
  • only the first frame may be extended by Carrier Extension, to make sure that the collision window is filled; for the following frames Carrier Extension is useless because, if a collision occurred, it would already have been detected on the first frame;
  • the IFG between one frame and the next is replaced by a 'Filling Extension' (FILL), which keeps the channel busy and announces that another frame will follow;
  • the transmitting station keeps a byte counter: when it reaches byte number 8192, the frame currently in transmission must be the last one → up to 8191 bytes + 1 frame can be sent with Frame Bursting (a sketch of this logic follows the list).
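
The transmit-side rules above can be summarised in a short, purely illustrative Python sketch (the example frame lengths and the transmit callback are hypothetical stand-ins, not part of the standard):

    # Illustrative Frame Bursting transmit logic:
    # - only the first frame of the burst is padded by Carrier Extension;
    # - between frames the IFG is replaced by FILL bytes to keep the carrier busy;
    # - a byte counter enforces the 8192-byte burst limit: the frame being
    #   transmitted when the counter reaches 8192 is the last one of the burst.
    SLOT = 512
    BURST_LIMIT = 8192

    def send_burst(frame_lengths, transmit):
        sent = 0
        for i, length in enumerate(frame_lengths):
            if i == 0:
                length = max(length, SLOT)   # Carrier Extension on the first frame only
            else:
                transmit("FILL")             # Filling Extension instead of IFG
            transmit(f"frame ({length} bytes)")
            sent += length
            if sent >= BURST_LIMIT:
                break                        # no further frame may start after the limit

    send_burst([64, 1518, 1518, 1518, 1518, 1518, 1518], print)
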
Advantages
  • the chances of collision are reduced: once the first frame is transmitted without collisions, all the other stations detect that the channel is busy thanks to CSMA;
  • the frames following the first one do not require Carrier Extension → the useful throughput increases, especially with small frames, thanks to the savings on Carrier Extension.
Disadvantages
  • Frame Bursting does not address the primary goal of reducing the header overhead: it was decided to keep all headers (including preamble, SFD and IFG) in every frame to make the processing hardware simpler;
  • typically a station using Frame Bursting has a lot of data to send → big frames do not require Carrier Extension → there is no saving on Carrier Extension;
  • in newer pure switched networks the full-duplex mode is enabled → CSMA/CD is disabled → Frame Bursting has no advantages and therefore is useless.

Physical layer

Gigabit Ethernet can work over the following physical transmission media:

  • twisted copper pair:
    • Shielded (STP): the 1000BASE-CX standard uses 2 pairs (25 m);
    • Unshielded (UTP): the 1000BASE-T standard uses 4 pairs (100 m);
  • optical fiber: the 1000BASE-SX and 1000BASE-LX standards use 2 fibers, and can be:
    • Multi-Mode Fiber (MMF): cheaper and with shorter reach (275-550 m);
    • Single-Mode Fiber (SMF): its maximum length is 5 km.
GBIC

Gigabit Ethernet introduces for the first time gigabit interface converters (GBIC), a common solution for being able to update the physical layer without having to update the rest of the equipment: the Gigabit Ethernet board does not integrate the physical layer, but includes just the logical part (from the data-link layer upward), and the user can plug the desired GBIC, which implements the physical layer, into the dedicated board slots.

10 Gigabit Ethernet

10 Gigabit Ethernet, standardized as IEEE 802.3ae (2002), raises the transmission speed to 10 Gbit/s and finally abandons the half-duplex mode, removing all the issues deriving from CSMA/CD.

It is not yet used in access networks; it is mainly used:

  • in backbones: it works over optical fiber (26 m to 40 km), because twisted copper pairs are no longer sufficient due to signal attenuation;
  • in data centers: besides optical fibers, very short cables are also used to connect servers to top-of-the-rack (TOR) switches:[3]
    • Twinax: twinaxial copper cables, used first because transmission units for twisted copper pairs consumed too much power;
    • 10GBASE-T: shielded twisted copper pairs, which however have a very high latency;
  • in Metropolitan Area Networks (MAN) and Wide Area Networks (WAN): 10 Gigabit Ethernet can be transported over already existing MAN and WAN infrastructures, although at a transmission speed decreased to 9.6 Gbit/s.

40 Gigabit Ethernet and 100 Gigabit Ethernet

40 Gigabit Ethernet and 100 Gigabit Ethernet, both standardized as IEEE 802.3ba (2010), raise the transmission speed respectively to 40 Gbit/s and 100 Gbit/s: for the first time the speed evolution is not a single 10× step, since it was decided to also define a standard at an intermediate speed because of the still high costs of 100 Gigabit Ethernet. In addition, 40 Gigabit Ethernet can be transported over the already existing DWDM infrastructure.

These speeds are used only in backbones because they are not yet suitable for hosts, nor even for servers: they are very close to the internal speeds of processing units (bus, memory, etc.) → the bottleneck is no longer the network.

References

  1. In theory the minimum frame should have been stretched 10 times as well, to 640 bytes, but the standard decided otherwise.
  2. a b c preamble + SFD + Ethernet II DIX/IEEE 802.3 frame
  3. Please refer to section FCoE.