- 1 101.1 Determine and Configure Hardware Settings
- 1.1 Introduction
- 1.2 BIOS
- 1.3 Enabling/Disabling Integrated Devices
- 1.4 IRQ, IO Addresses and DMA Addresses
- 1.5 Interrupt Requests
- 1.6 Input/Output Addresses
- 1.7 DMA Addresses
- 1.8 Configuring IRQs, IO Ports/Addresses and DMA
- 1.9 Linux Device Management Overview
- 1.10 Mass Storage Devices
- 1.11 Hot plug & Cold plug Devices
- 1.12 Device Drivers
101.1 Determine and Configure Hardware Settings
Candidates should be able to configure fundamental system hardware by making the correct settings in the system BIOS in x86-based systems.
Key Knowledge Areas
- Enable and disable integrated peripherals.
- Configure systems with or without external peripherals such as keyboards.
- Differentiate between the various types of mass storage devices.
- Set the correct hardware ID for different devices, especially the boot device.
- Know the differences between coldplug and hotplug devices.
- Determine hardware resources for devices.
- Tools and utilities to list various hardware information (e.g. lsusb, lspci, etc.)
- Tools and utilities to manipulate USB devices
- Conceptual understanding of sysfs, udev, hald, dbus
A computer system consists of a central CPU, primary storage or memory, secondary or permanent storage such as a hard disk, and various input/output devices. In order to operate, the CPU needs to be able to load instructions and data from memory, possibly having to fetch them from secondary storage first, execute the instructions and then store the results back in memory. It communicates with the various input/output or peripheral devices via a data path known as a bus. A computer system may have more than one bus which it uses to communicate with different components.
This simplified model of the modern computer is known as the Von Neumann architecture, after John von Neumann, a Hungarian-born mathematician who developed the basic architecture of modern computers in the 1940s.
With this basic conceptual model as a basis we can begin to understand how the CPU determines and configures the numerous peripherals and devices that make up a computer system, and how the operating system manages and coordinates activities for sharing system resources and access to its connected peripherals and devices.
These resources and peripherals include mass storage devices (secondary storage) and input/output devices such as monitors, keyboards and network cards. Some peripherals are integrated into the motherboard of PCs, such as parallel and COM ports, and even VGA or network controllers in more modern ones, while others are external devices such as USB sticks or Bluetooth dongles. All these devices need a way to communicate with the CPU to provide it with information, to request services from it or to receive instructions from it.
When configuring a computer system you need to know how to find the current settings of these peripherals, what possible values their settings can take and how to change them in the BIOS or operating system if necessary. Configuring PC peripherals to work with an operating system is done in two places: first in the system firmware or BIOS (Basic Input/Output System), and second in the operating system itself.
The purpose of the BIOS (Basic Input/Output System) is to perform a Power On Self Test (POST), identify and initialize system devices and peripherals and start the process to load the operating system by loading the boot loader from the boot device. The BIOS also provides an abstraction layer to the operating system for accessing system devices. The intention of this layer is to insulate the operating system from having to deal with the wide variety of devices available today but it is ignored by most modern operating systems, including Linux, which access the devices with their own device drivers.
Most BIOS manufacturers provide a console-based user interface that allows you to configure the low-level system settings for devices. The BIOS configuration interface, and how it is accessed, varies from manufacturer to manufacturer but is usually accessed by pressing a key or key combination, such as Delete or Insert, during system boot.
It is in the BIOS configuration console that you can:
- enable or disable devices,
- allocate resources such as IRQ and IO addresses,
- select the boot order of devices, and
- change settings that affect operating modes of devices such as disk drives and network cards
Besides the main BIOS, many peripherals also have their own BIOS firmware and configuration consoles. Some network cards and most SCSI host adapter cards come with their own BIOS which can be used to set their configuration. Like the main system BIOS, these firmwares also perform some low-level checking and initialization of their devices.
Enabling/Disabling Integrated Devices
Sometimes it may be necessary to disable a device which is preventing the operating system from installing or working properly. This is a rare occurrence, and can usually be remedied by passing the correct parameters to the Linux kernel at boot time or changing a parameter setting in the BIOS. The settings which give the most problems usually relate to hard disk access modes, power management and interrupt controller settings. Often the cause of these problems is bugs in the firmware itself.
Most BIOS consoles allow one to disable integrated peripherals such as COM ports, video or network cards. Even if these devices are not causing any installation or boot up issues you may wish to disable them if they are unused to free up resources that would otherwise be unavailable for other purposes.
Sometimes it is necessary to disable system checks for peripherals such as a keyboard or a mouse which, although necessary for the proper operation of desktop machines, are often not present for servers. Machines without keyboards, mouse or monitors are referred to as “headless” systems. Some BIOSes will refuse to boot if these devices are not present unless these system checks are disabled. (You may wonder how headless machines are accessed at all if they don't have a keyboard, mouse or monitor. The answer is via the network with utilities such as SSH which is covered later in this book.) Of course to access the BIOS itself it is necessary to have peripherals such as a keyboard and monitor present!
IRQ, IO Addresses and DMA Addresses
To understand how peripheral devices communicate with the CPU you need to know about interrupt requests (IRQs), Input/Output (IO) addresses and direct memory access (DMA) addresses. The CPU is responsible for processing all instructions and events that occur in the system. To communicate with a device it needs to know when a device has an event for it to handle and it needs to be able to pass information to and from the device.
IRQs are the mechanism by which peripherals tell the CPU to suspend its current activity and handle an event such as a key press or a disk read. IO addresses are regions of memory mapped to devices where the CPU can write information to a device and read from it as well. This is a somewhat simplified explanation of how peripheral devices communicate with the CPU, but it suits our purposes for understanding IRQs and IO addresses.
When an event occurs on a device, such as a mouse movement or data arriving from a USB connected drive, the device signals to the CPU that it has data which needs to be handled by generating an interrupt on the bus. Before the advent of Plug and Play technology, which requires both a hardware (PCI bus) and software component to work, Intel PCs were limited to 16 possible IRQ settings, and many of these IRQ lines were preset for devices, so that only a few were freely available. The table below lists the standard assignments: some IRQs cannot be reassigned, some can be reassigned provided certain hardware does not exist in your system, and the rest are free for you to assign as you please.
| IRQ | Assignment | IRQ | Assignment | IRQ | Assignment | IRQ | Assignment |
|-----|------------|-----|------------|-----|------------|-----|------------|
| 0 | System timer | 4 | COM1 | 8 | Real Time Clock | 12 | PS2 Mouse |
| 1 | Keyboard | 5 | LPT2 / Sound Card | 9 | Available | 13 | Floating Point Proc |
| 2 | Handles IRQ 8–15 | 6 | Floppy Controller | 10 | Available | 14 | Primary IDE |
| 3 | COM2 | 7 | Parallel Port | 11 | Available | 15 | Secondary IDE |
- IRQ 0 system timer (cannot be changed);
- IRQ 1 keyboard controller (cannot be changed);
- IRQ 2 cascaded signals from IRQs 8–15; devices configured to use IRQ 2 will actually use IRQ 9;
- IRQ 3 serial port controller for COM2 (shared with COM4, if present);
- IRQ 4 serial port controller for COM1 (shared with COM3, if present);
- IRQ 5 LPT port 2 or sound card;
- IRQ 6 floppy disk controller;
- IRQ 7 LPT port 1 or sound card (8-bit Sound Blaster and compatibles);
- IRQ 8 real-time clock;
- IRQ 9 open interrupt/available, or SCSI host adapter;
- IRQ 10 open interrupt/available, or SCSI or NIC;
- IRQ 11 open interrupt/available, or SCSI or NIC;
- IRQ 12 mouse on PS/2 connector;
- IRQ 13 math coprocessor / integrated floating point unit or interprocessor interrupt;
- IRQ 14 primary ATA channel (disk drive or CDROM);
- IRQ 15 secondary ATA channel.
As the number and types of peripheral devices grew, the limited number of IRQ lines became a problem. Even if there was a free IRQ, some devices were hardwired to use a specific IRQ that may already have been in use. Some devices allowed for manual configuration via jumper settings and could have their IRQs, as well as IO addresses, reassigned in the BIOS.
To overcome this limitation, “plug and play” technology was introduced, which allows devices to share interrupt lines, thus expanding the number of devices that can be accommodated. In addition, “plug and play” did away with the need for manual configuration of devices. With Intel's Advanced Programmable Interrupt Controller (APIC) architecture there are now usually 24 interrupt lines available, which can accommodate up to 255 devices.
In order to see the allocation of IRQs to devices on a Linux system, you can examine the contents of the /proc/interrupts file.
From this point onward it becomes necessary to have access to a Linux PC. Although some theory is involved, we shall be interacting with Linux more and more. I advise that you attempt the commands as you come across them, testing your understanding as you go.
Figure 101.1.1: Sample result of /proc/interrupts
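As a quick check, /proc/interrupts can be read like any other file; the grep filter below is only an illustration, and the device names will differ from system to system:

```shell
# Show the full IRQ allocation table
cat /proc/interrupts

# Example filter: show only timer-related lines
grep -i timer /proc/interrupts
```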
What is the /proc file system?
The proc file system is a virtual or pseudo file system. The kernel sets up the /proc file system to export information about the running kernel, user processes and the hardware devices that have been configured. It is a virtual file system in that it exists in memory only and does not represent any true physical files. Treating everything as a file is a cornerstone of the UNIX design philosophy and many such pseudo file systems exist; others include the /sys and /dev directories.
When the CPU and peripheral devices need to communicate and/or pass data between each other they make use of IO addresses. Essentially they communicate by reading and writing data to the reserved IO addresses of the device. There are two complementary methods for performing IO between the CPU and a device: memory mapped IO (MMIO) and port IO (PIO) addressing.
In memory mapped IO, regions of memory are reserved for communication between the CPU and a particular device. It is important that these memory regions are not used by any other processes. In port mapped IO addressing, the CPU has a separate set of instructions for performing IO with devices, which have their own separate address space.
The allocation of IO ports in Linux can be revealed by examining the contents of the /proc/ioports file.
Figure 101.1.2: Sample of /proc/ioports
For the allocation of IO memory you can look at /proc/iomem; an example of the contents of this file is given below.
Figure 101.1.3: Sample of /proc/iomem
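Both files are plain text and can be inspected directly; the grep filter shown here is only an example and will match different entries (or nothing) depending on your hardware:

```shell
# List registered IO port regions
cat /proc/ioports

# Example filter: show IO memory regions that mention PCI
grep -i pci /proc/iomem
```

Note that when these files are read as an unprivileged user, recent kernels may show the address ranges as zeros; run the commands as root to see the real values.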
Prior to “plug and play” technology you would have to configure IO devices addresses in the BIOS and operating system but with the introduction of “plug and play” the allocation is now done automatically by the operating system and the bus.
DMA stands for direct memory access and is an optimization that allows devices to read and write directly to memory without having to go through the CPU. Traditionally, when the CPU requests data to be read from a device into memory, the CPU needs to be involved in the process of transferring the data. Under this model, called Programmed IO (PIO), significant CPU time is spent simply copying data between the device and memory. With DMA, devices can write directly to memory, bypassing the CPU, which greatly enhances system performance. To examine the allocation of DMA addresses on a Linux box you can examine the contents of /proc/dma.
Figure 101.1.4: Sample of /proc/dma output
DMA is automatically configured by the operating system for most devices. A notable case where DMA may not be automatically configured is the system's parallel ATA (PATA) disks. For PATA devices (see the section below) DMA access can be enabled, assuming your device is /dev/hda (SATA drives, which are popular today, do not use DMA in the conventional sense), by running the command:
hdparm -d1 /dev/hda
To see the current setting on your hard disk you can run the command:
hdparm -d /dev/hda
Configuring IRQs, IO Ports/Addresses and DMA
Configuration for device IRQ, IO addresses, DMA channels and memory regions happens automatically with today's plug and play PCI bus. Resources are allocated via the system BIOS, via the Linux kernel and possibly the device driver with resource conflicts being automatically resolved in most cases.
Linux provides various utilities to query devices to find out the resources that they use. A lot of these utilities make use of the information exported by the kernel in the /proc file system.
Two commands used to query PCI devices are the lspci and setpci commands. The lspci utility can provide verbose information on devices using the PCI bus, depending on which parameters you use.
“lspci -vv” provides detailed information on PCI devices. The output below is typical of the “lspci” command run with no parameters.
Figure 101.1.5: Sample lspci output
All PCI devices are identified by a unique ID. This ID is made up of a unique vendor ID and device ID, with potentially a sub-system vendor ID and sub-system device ID as well. When “lspci” is run the PCI ID is looked up in the system's ID database and translated into a human readable format, showing the vendor name and device. The PCI ID database is found at /usr/share/misc/pci.ids on Ubuntu; its location varies on other distributions.

When a PCI ID is not found in the database it is displayed as two numbers separated by a colon. In such cases it is a good bet that the correct device drivers for this peripheral are not loaded either. Searching the Internet for the PCI ID may reveal the manufacturer, device ID and chipset information with which it could be possible to identify the correct driver for the device; alternatively, no Linux driver may have been written for the device yet.

setpci is a utility for configuring and querying PCI devices. It is an advanced utility that is beyond the scope of this manual but you should be aware of its existence.
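To see the numeric PCI IDs alongside the translated vendor and device names, and which kernel driver is bound to each device, the following lspci invocations can be used (the output will of course vary from machine to machine):

```shell
# Show devices with their numeric vendor:device IDs in brackets
lspci -nn

# Show which kernel driver and module handle each device
lspci -k
```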
How to reassign IRQ Allocation in a PnP BIOS/OS
Even though IRQ conflicts are now largely a thing of the past, since interrupts can be shared, it can still be a bad idea for devices with a high frequency of interrupts to share the same IRQ. This is particularly true if the devices are hard disks or sound devices.
In this case it is recommended to try moving the PCI cards to different slots so that they are assigned different IRQs.
Linux Device Management Overview
Before going on to discuss the various types of mass storage devices that you will typically encounter when running Linux, it is beneficial to have an overview of how Linux manages devices. Many of the concepts introduced here will be expanded on later in this manual, but a high level overview will provide you with a map, so to speak, of how it all fits together.
A device driver is a kernel space program that allows user space applications to interact with the underlying hardware. Kernel space refers to privileged code which has full access to hardware and runs in ring 0, a hardware enforced privileged execution mode. User space applications run in a less privileged mode, ring 3, and cannot access hardware directly. User space applications can only interact with hardware by making system calls to the kernel to perform actions on their behalf.
The Linux kernel exports information about the devices from the device drivers to a pseudo filesystem mounted under /sys. This file system tells user space what devices are available. It is populated at system boot but when a hot plug device is inserted or removed the /sys file system is updated and the kernel fires events to let user space know that there has been a change.
One of the principles of UNIX design is to use the metaphor of a file wherever possible, providing a standard conceptual model for interacting with various UNIX components, whether it is a real file, a printer, a disk or a keyboard, using standard input/output system calls. The file interface for device drivers is exported under the /dev directory.
One of these user space applications is the kernel device manager, udev, which creates a device node under the /dev directory to allow user space applications access to the device. The name of the device node or file is determined by the device driver's naming convention or by user defined rules in /etc/udev/rules.d/. It is through the entries under /dev that user space applications can interact with the device driver.
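The udevadm utility can be used to inspect how udev sees a given device node; /dev/sda here is just an example device and may not exist on your machine:

```shell
# Show the udev database entry and attributes for a device node
udevadm info --query=all --name=/dev/sda
```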
The udev daemon is also responsible for informing other user space applications of changes. The applications that are most important here are the HAL (hardware abstraction layer) daemon and D-Bus (desktop bus). These applications are mainly used by desktop environments to carry out tasks when an event occurs, such as opening the file browser when a USB drive is inserted or an image application when a camera is connected.
While udev creates the relevant entries under the /dev file system, if anything useful needs to happen when the event occurs HAL and D-Bus are needed. (Note: HAL is now deprecated, as it is being merged into udev.) HAL is a single daemon responsible for discovering, enumerating and mediating access to most of the hardware on the host computer for the desktop applications to which it provides a hardware abstraction layer. Applications register with the D-Bus daemon to receive notifications of events and can also post event notifications that other applications may be interested in. D-Bus is used, for example, to launch media players when an audio CD is inserted and to notify other applications of the currently playing song.
From a practical point of view, the service which impacts you the most will be udev as it creates device nodes under the /dev directory. For LPI it is important to be able to identify which devices are available.
Mass Storage Devices
The secondary storage or mass storage devices, as they are known today, come in different types which are determined by their physical interface. An interface is the means by which the device physically attaches to the computer. Over time numerous interfaces for connecting mass storage devices have been developed, but the four main types of disks encountered today are:
- PATA – Parallel Advanced Technology Attachment, also known as IDE
- SATA – Serial Advanced Technology Attachment, the latest standard replacing PATA especially on desktops and laptops,
- SCSI – Small Computer System Interface disks are used in servers and other high end machines. SCSI provides high speed access as well as the ability to connect a large number of devices
- SAS – Serial Attached SCSI, a newer SCSI standard aimed at servers
Besides the interface, the type of mass storage is also determined by the device itself. For example, you can have SATA optical drives (CD-ROMs/DVD-ROMs) as well as SATA hard disks, and SCSI disks as well as SCSI tape drives.
PATA is an obsolete standard but you might still encounter it on older machines. Parallel refers to the manner in which data is transferred from the device to the CPU and memory. There are several variations on the PATA standard such as IDE and EIDE but since the introduction of SATA they are collectively referred to as PATA devices.
Motherboards typically come with two PATA connectors, and PATA cables support up to two devices per cable in a master/slave setup. Which device is master or slave depends on the location of the device on the cable and jumper settings on the hard disks themselves. Since ATA drives have been around for a long time they are well supported in Linux, and most BIOSes usually have no problem automatically identifying and configuring these devices.
Linux identifies ATA devices under the /dev file system with the naming convention /dev/hd[a-z]. The last letter of the device name is determined by whether the disk is master or slave on the primary or secondary connector.
Gaps can also appear in the device node naming. For example, it is often the case that the disk is identified as /dev/hda, being the master drive on the primary connector, with the CDROM drive being identified as /dev/hdc as the master on the secondary connector. This is quite a common setup for desktop machines, as it is better to have your two most frequently accessed mass storage devices on separate cables, for improved performance, rather than having them share a cable where access contention may arise.
Partitions on PATA disks are identified by a number following the letter. For example, the first primary partition on the /dev/hda drive is identified by /dev/hda1, the second primary partition by /dev/hda2, etc. For more information on disk partition numbering please refer to section 102.1.
What is important about understanding device node naming conventions at this point is being able to identify the type of hard disk device and its device name.
Serial ATA (SATA) drives have largely replaced PATA drives on desktops and laptops. SATA is a serial standard but offers higher throughput than the older PATA interface. SATA drives are not configured in a master/slave setup and each has its own dedicated controller or channel. The cables for SATA drives are considerably thinner than those for PATA devices, saving space and cost. SATA controllers use the Advanced Host Controller Interface (AHCI), which allows for hotplugging and hotswapping of SATA disks.
As with PATA devices most BIOSes automatically detect SATA drives and the Linux kernel usually has no problem identifying and loading the correct drivers for SATA drives. A peculiarity of SATA under Linux is that SATA disks make use of the SCSI disk sub system and hence the naming convention of these devices follows that of SCSI devices.
SATA drive naming uses the SCSI sub-system naming conventions, with devices being labeled /dev/sd[a-z]. The final character of the device node name is determined by the order in which the Linux kernel discovers these devices. Partitions on SATA drives are numbered from 1 upwards.
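A convenient way to see which disk devices and partitions the kernel has discovered, along with their device names, is the lsblk command from util-linux (the output naturally varies by machine):

```shell
# List block devices with their names, types and sizes
lsblk -o NAME,TYPE,SIZE
```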
SCSI, like PATA and SATA, defines a physical interface as well as a protocol and commands for transferring data between computers and peripheral devices. SCSI is most commonly used for hard disks and tape drives, but it can connect a wide range of other devices, such as scanners and CD drives. Usually SCSI devices connect to a host adapter that has its own BIOS. SCSI storage devices are faster and more robust than SATA or PATA devices but are also more expensive, hence they are used mostly in servers or high end workstations.
There are two types of SCSI interfaces: an 8-bit interface with a bus that supports 8 devices (this includes the controller, so there is only space for 7 devices), and a 16-bit (Wide) interface that supports 16 devices including the controller, so there can be only 15 block devices.
SCSI devices are uniquely identified using a set of 3 numbers called the SCSI ID:
a. the SCSI channel
b. the device ID number
c. the logical unit number LUN
The SCSI Channel
Each SCSI adapter supports one or more data channels on which to attach SCSI devices (disks, CD-ROMs, etc.). These channels are numbered from 0 onwards.
Device ID number
Each device is assigned a unique ID number that can be set using jumpers on the disk. The IDs range from 0 to 7 for 8-bit controllers and from 0 to 15 for 16-bit controllers.
The Logical Unit Number (LUN) is used to differentiate between devices within a SCSI target number. This is used, for example, to indicate a particular partition within a disk drive or a particular tape drive within a multi-drive tape robot. It is not seen so often these days as host adapters are now less costly and can accommodate more targets per bus.
All detected devices are listed in the /proc/scsi/scsi file. The example below is from the SCSI-2.4-HOWTO:
Host: scsi0 Channel: 00 Id: 02 Lun: 00
  Vendor: PIONEER  Model: DVD-ROM DVD-303  Rev: 1.10
  Type:   CD-ROM                           ANSI SCSI revision: 02
Host: scsi1 Channel: 00 Id: 00 Lun: 00
  Vendor: IBM      Model: DNES-309170W     Rev: SA30
  Type:   Direct-Access                    ANSI SCSI revision: 03
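On your own system the same information can be listed directly; note that the file only exists when devices are attached through the SCSI layer (which, as discussed below, includes SATA and USB drives):

```shell
# List all devices known to the SCSI subsystem
cat /proc/scsi/scsi
```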
Since SATA drives use the same SCSI sub-system as real SCSI drives, the naming convention for SCSI devices is the same as that set out above for SATA drives. It is not only SCSI drives that make use of the SCSI sub-system but USB drives as well, hence you will find USB drives following the same naming convention.
The scsi_info tool uses the information in /proc/scsi/scsi to print out the SCSI ID and the model of a specified device. From the file above, scsi_info would produce the following output:
# scsi_info /dev/sda
The system will boot from the device with SCSI ID 0 by default. This can be changed in the SCSI BIOS which can be configured at boot time. If the PC has a mixture of SCSI and SATA/PATA disks, then the boot order must be selected in the system's BIOS first.
Serial Attached SCSI (SAS) is the latest interface on the block and is an upgrade to the SCSI protocol and interface, much as SATA is to PATA. In general SCSI devices are faster and more reliable than SATA/PATA drives. The performance gap between the two technologies continues to close, but they are generally targeted at two different markets, namely the consumer market for SATA and the enterprise for SCSI. The enterprise has a higher requirement for reliability and speed than the consumer market, and the prices of the different drives reflect this.
Identifying the correct device ID for BOOT device
The purpose of listing the different types of mass storage devices and their naming conventions is to allow you to easily identify the device ID for your disks. This is important for being able to identify which of your drives is the boot device and which disk partitions contain the root and boot directories.
There is however a problem with the Linux naming convention, and not just for disk drives. The problem is that device names may change between system reboots when hardware configuration changes are made. Since the naming convention has a component which depends on the order in which a device is discovered by the kernel, adding, moving or removing devices may result in changes to device names. This was not such a problem a few years ago, as changing hard disks was not a regular occurrence; but with the advent of USB, and the fact that it uses the SCSI sub-system for its device naming, the problem has become more severe.

In order to identify a device uniquely, irrespective of when it is discovered or where it is located, some identifier independent of the device name is needed. For disks this is done by writing a universally unique identifier (UUID) to the disk metadata and using the UUID in configuration files rather than the device node. (It is not only hard disks that need to be uniquely identifiable. Other devices need to be as well, but may use different means of doing so; network cards use their MAC address, for example.) The blkid utility, which replaces vol_id, can be used to query the settings on a device or search for a device with a specific UUID.
Running blkid against a device, for example, produces output like the following: /dev/sda1: UUID="c7d63e4b-2d9f-450a-8052-7b8929ec8a6b" TYPE="ext4" showing that the first partition on the device sda has an ext4 filesystem and a UUID of c7d63e4b-2d9f-450a-8052-7b8929ec8a6b.
blkid -U 75426429-cc4b-4bfc-beb9-305e1f7f8bc9
searches for the device with the specified UUID and returns the matching device node, such as /dev/sdb1. Alternatively, you could look under /dev/disk/by-uuid to see which IDs map to which devices as seen by the Linux kernel.
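The by-uuid mapping is simply a directory of symbolic links maintained by udev, so it can be inspected directly (the directory only exists when the system has block devices carrying filesystems):

```shell
# Each symlink is named after a filesystem UUID and points at its device node
ls -l /dev/disk/by-uuid
```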
Hot plug & Cold plug Devices
Hot plug and cold plug devices refer to the ability of a device to be inserted or removed while the computer is running (hot plug) or whether the machine has to be powered down before the device can be inserted or removed (cold plug).
Typical hot plug devices are USB data sticks, mice or keyboards and some SATA drives, while cold plug devices include video cards, network cards and CPUs, although in high end server machines even these components can be hot pluggable. The benefit of hot plug devices is the ability to swap out faulty, or add additional, components without having to take the machine offline, a requirement for many production servers that need to maintain maximum uptime.
A lot of what you need to know has been explained under “Linux Device Management Overview”. When a hotplug device is inserted or removed the kernel updates the /sys virtual filesystem and udev receives an event notification. Udev will create the device node under /dev and fire notifications to HAL, which in turn will notify D-Bus of the changes.
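These hotplug events can be observed live with udevadm's monitor mode; run it in one terminal, plug in a USB stick, and watch the kernel uevents and udev events scroll past (press Ctrl-C to stop):

```shell
# Print kernel uevents and udev events as they happen
udevadm monitor
```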
One of the most popular types of hotplug interfaces today is the Universal Serial Bus (USB) which is a communication architecture designed to connect devices to a PC. These devices are divided into five classes:
- Display Devices
- Communication Devices
- Audio Devices
- Mass Storage Devices
- Human Interface Devices (HID)
The devices are plugged into a USB port which is driven by a USB controller. Support for USB controllers has been present in the Linux kernel since version 2.2.7 (see the Linux USB sub-system HOWTO).
There are 3 types of USB host controllers:

| Controller | Driver module |
|------------|---------------|
| UHCI (USB v 1.1) | usb-uhci.o |
| OHCI (USB v 1.1) | usb-ohci.o |
| EHCI (USB v 2.0) | ehci-hcd.o |
Once a USB device is plugged into a PC we can list the devices with lsusb:
Bus 001 Device 002: ID 04a9:1055 Canon, Inc.
Figure 101.1.6: Sample lsusb output
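lsusb can also display the devices as a tree, showing which bus and port each device hangs off and which kernel driver is handling it (output varies by machine):

```shell
# Show the USB device hierarchy with drivers and speeds
lsusb -t
```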
In our discussion of how Linux manages devices we mentioned that the kernel makes use of device drivers to export information to the /sys filesystem. In Linux most device drivers are provided as part of the kernel source code and can either be compiled as modules, which can be loaded and unloaded into the kernel dynamically, or be compiled directly into the kernel. The modular system makes it possible for 3rd parties to write device drivers and provide them as modules that can be loaded and unloaded without having to recompile the kernel.
Most drivers are compiled as modules as this keeps the kernel small and allows for far more drivers to be available than would be practical if they were all compiled into the kernel. The commands you need to know to manage device drivers under Linux are:
- lsmod – lists currently loaded modules,
- modinfo – queries a module for dependency, author and parameter information,
- insmod – installs a module by providing a path to the driver,
- modprobe – installs a module via the module name and handles dependency resolution,
- rmmod – removes a module from memory.
In order to see what kernel modules are loaded on your system you can run the command lsmod. The first column of the output is the module name, the second is the size of the module and the third shows which other modules depend on it.
Figure 101.1.7: Sample lsmod output
If you require additional information about a module, such as its dependencies and configuration parameters, you can use the modinfo command. Below is an example of output from the command “modinfo psmouse” (the mouse driver).
Figure 101.1.8: Sample modinfo output
To load a driver for a device you make use of the insmod or modprobe command. The insmod command takes the path to a kernel module as its parameter and will attempt to load the module into memory. The command below will attempt to load the xpad driver into memory.
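As a sketch, and assuming the xpad module lives in the standard location for the running kernel (the exact path varies by distribution and kernel version), the command might look like this; it must be run as root:

```shell
# Load the xpad (gamepad) driver by its full path; run as root
insmod /lib/modules/$(uname -r)/kernel/drivers/input/joystick/xpad.ko
```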
The insmod command is inconvenient because it does not load any driver dependencies and you need to specify the full path to the driver module. The modprobe command will automatically load any dependencies which are not resident and allows one to use the module name rather than the module's file name.
To remove a module one can use the rmmod command or modprobe -r. This will remove a module as long as no other loaded modules depend on it.
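Using modprobe, the same hypothetical xpad driver can be loaded and unloaded by name alone, with dependencies handled automatically (again run as root):

```shell
# Load the module and any dependencies by name; run as root
modprobe xpad

# Remove it again, provided nothing depends on it
modprobe -r xpad
```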
Used terms, files and utilities: