A Comparison of the Linux and Windows Device Driver Architectures
Melekam Tsegaye, Rhodes University, South Africa
Richard Foss, Rhodes University, South Africa
Abstract: In this paper the device driver architectures currently used by two of the most popular operating systems, Linux and Microsoft’s Windows, are examined. Driver components required when implementing device drivers for each operating system are presented and compared. The process of implementing a driver, for each operating system, that performs I/O to a kernel buffer is also presented. The paper concludes by examining the device driver development environments and facilities provided to developers by each operating system.
1. Introduction
Modern operating system kernels consist of a number of components such as a memory manager, process scheduler, hardware abstraction layer (HAL) and security manager. For a detailed look at the Windows kernel refer to [Russinovich, 98], and for the Linux kernel [Rusling, 99], [Beck et al, 98]. The kernel can be viewed as a black box that should know how to interact with the many different types of hardware devices that exist, and the many more devices that do not yet exist. Creating a kernel that has inbuilt functionality for interacting with all known hardware devices may be possible, but is not practical; it would needlessly consume too many system resources.
1.1. Kernel Modularity
A kernel is not expected to know how to interact with new types of devices that do not yet exist at the time of its creation. Instead, modern operating system kernels allow their functionality to be extended by the addition of device driver modules at runtime. A module implements functionality that allows the kernel to interact with a particular new device. Each module implements a routine that the kernel calls at module load time and a routine that is called at module removal time. Modules also implement routines that provide I/O functionality for transferring data to and from a device, as well as a routine for issuing device I/O control instructions to a device. The above applies to both the Linux and Windows driver architectures.
1.2. Organisation of this paper
The material in this paper is divided into the following sections:
• General driver architecture of the two operating systems (section 2)
• Driver architecture components of each operating system (section 3)
• Implementation of a driver that performs I/O to a kernel buffer (section 4)
• Driver development environments and facilities offered by the two operating systems to developers (section 5)
1.3. Related Work
The Windows device driver architecture is documented in the documentation that accompanies the Windows Driver Development Kit [Microsoft DDK, 02]. Further, the works produced by Walter Oney [Oney, 99] and Chris Cant [Cant, 99] present a detailed account of the Windows driver architecture. The Linux device driver architecture is well documented in the freely available publication authored by Rubini et al [Rubini et al, 01].

2. Device Driver Architectures
A device driver enables the operation of a piece of hardware by exposing a programming interface that allows a device to be controlled externally by applications and other parts of an operating system. This section presents the driver architectures currently in use by two of the most commonly used operating systems, Microsoft's Windows and Linux, and the origins of these architectures.
2.1. Origin of the Linux Driver Architecture
Linux is a clone of the UNIX operating system, first created by Linus Torvalds [Linus FAQ, 02], [LinuxHQ, 02]. It follows that the Linux operating system utilises a similar architecture to UNIX systems. UNIX operating systems view devices as file system nodes. Devices appear as special file nodes in a directory designated by convention to contain device file system node entries [Deitel, 90]. The aim of representing devices as file system nodes is so that applications can access devices in a device independent manner [Massie, 86], [Flynn et al, 97]. Applications can still perform device dependent operations with a device I/O control operation. Devices are identified by major and minor numbers. A major number serves as an index into an array of drivers and a minor number is used to group similar physical devices [Deitel, 90]. Two types of UNIX devices exist, char and block. Char device drivers manage devices that are accessed sequentially with no buffering, while block device drivers manage devices where random access is possible and data is accessed in blocks. Buffering is also utilised in block device drivers. A block device must be mounted as a file system node for it to be accessible [Beck et al, 98].
Linux retains much of the UNIX architecture, the difference being that char device nodes corresponding to block devices have to be created in UNIX systems, whereas in Linux the Virtual File System (VFS) interface blurs the distinction between char and block devices [Beck et al, 98]. Linux also introduces a third type of device called a network device. Network device drivers are accessed in a different way to char and block drivers: a set of APIs distinct from the file system I/O APIs is used, e.g. the socket API, which is used for accessing network devices.
2.2. Origin of the Windows Driver Architecture
In 1980, Microsoft licensed the UNIX operating system from Bell Labs, later releasing it as the XENIX operating system. With the first IBM PC, MS DOS version 1 was released in 1981. MS DOS version 1 had a driver architecture similar to that of UNIX systems, based on XENIX [Deitel, 90]. The difference to UNIX systems was that the operating system came with built-in drivers for common devices. Device entries did not appear as file system nodes. Instead, reserved names were assigned to devices, e.g. CON was the keyboard or screen, PRN the printer and AUX the serial ports. Applications could open these devices, obtain a handle to the associated drivers as they would with file system nodes, and perform I/O to them. The operating system, transparently to applications, translated reserved device names to the devices that its drivers managed. MS DOS version 2 introduced the concept of loadable drivers. Since Microsoft had made the interface to its driver architecture open, third-party device manufacturers were encouraged to produce new devices [Davis, 83]. Drivers for these new devices could then be supplied by the hardware manufacturers and loaded into or unloaded from the kernel manually at runtime.
Later on, Windows 3.1 was released by Microsoft. It had support for many more devices and utilised an architecture based on MS DOS. With its later operating systems, Windows 95, 98 and NT, Microsoft introduced the Windows Driver Model (WDM). The WDM came about because Microsoft wanted to make device drivers source code compatible with all of its new operating systems [Microsoft WDM, 02]. Thus, the advantage of making drivers WDM compliant is that, once created, a driver need only be recompiled before it is usable on any of Microsoft's later operating systems.
2.3. The Windows Driver Architecture
There are two types of Windows drivers, legacy and Plug and Play (PnP) drivers. The focus here is only on PnP drivers, as all drivers should be PnP drivers where possible. PnP drivers are user friendly, since very little effort is required from users to install them. Another benefit of making drivers PnP is that they get loaded by the operating system only when needed, so they do not use up system resources needlessly. Legacy drivers were implemented for Microsoft's earlier operating systems and their architecture is outdated. The Windows Driver Model (WDM) is a standard model specified by Microsoft [Microsoft DDK, 02]. WDM drivers are usable on all of Microsoft's recent operating systems (Windows 95 and later).

2.3.1. The WDM driver architecture
There are three classes of WDM drivers: filter, functional and bus drivers [Oney, 01]. They form the stack illustrated in figure 2.3. In addition, WDM drivers must be PnP aware, support power management and Windows Management Instrumentation. Figure 2.3 shows how data and messages are exchanged between the various driver layers. A standard structure called an I/O Request Packet (IRP) is used for communication. Whenever a request is made from an application to a driver, the I/O manager builds an IRP and passes it down to the driver, which processes it, and when done, ‘completes’ the IRP [Cant, 99]. Not every IRP filters down to a bus driver. Some IRPs get handled by the layers above and are returned to the I/O manager from there. Hardware access to a device is done through a hardware abstraction layer (HAL).

[Figure 2.3 diagram: applications in user space issue requests through the Win32 API; in kernel space the PnP, I/O and Power managers exchange IRPs with a stack of upper filter, functional, lower filter and bus drivers, which reach the hardware bus through the HAL.]
Figure 2.3 The WDM Driver Architecture
2.4. The Linux Driver Architecture
Drivers in Linux are represented as modules, which are pieces of code that extend the functionality of the Linux kernel [Rubini et al, 01]. Modules can be layered as shown in figure 2.4. Communication between modules is achieved using function calls. At load time a module exports all functions it wants to make public to a symbol table that the Linux kernel maintains. These functions are then visible to all modules. Access to devices is done through a hardware abstraction layer (HAL) whose implementation depends on the hardware platform that the kernel is compiled for, e.g. x86 or SPARC.
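As a brief illustration of this export mechanism, the following sketch shows how a module places one of its routines in the kernel symbol table so that other loaded modules can call it. The routine name my_bus_read and its signature are hypothetical, chosen only for this example:

    #include <linux/module.h>

    /* A routine this module wishes to make visible to other modules.
       The name and signature are hypothetical. */
    int my_bus_read(void *buf, int count)
    {
        /* ... transfer 'count' bytes into 'buf' from the device ... */
        return count;
    }

    /* Place the routine in the kernel's public symbol table so that
       other loaded modules can call it directly. */
    EXPORT_SYMBOL(my_bus_read);

Another module that declares the routine, e.g. with extern int my_bus_read(void *buf, int count);, can then call it directly once both modules are loaded.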

[Figure 2.4 diagram: applications in user space enter the kernel through the system call interface; stacked modules (Module X, Module Y, Module Z) communicate using function calls with custom data and reach the hardware bus through the HAL.]
Figure 2.4 The Linux Driver Architecture
2.5. The Linux and Windows Driver Architectures Compared
As can be seen in figures 2.3 and 2.4, a number of similarities exist between the two operating systems. On both systems, drivers are modular components that extend the functionality of the kernel. Communication between driver layers in Windows is through the use of I/O Request Packets (IRPs) supplied as arguments to standard system and driver defined functions, whereas in Linux function calls with parameters customized to a particular driver are used. Windows has separate kernel components that manage PnP, I/O and Power. These components send messages to drivers using IRPs at appropriate times.
In Linux, there is no clear distinction between layered modules, i.e. modules are not categorised as bus, functional or filter drivers. There is no clearly defined PnP or Power manager in the kernel that sends standardised messages to modules at appropriate times. The kernel may have modules loaded that implement Power Management or PnP functionality, but the interface of these modules to drivers is not clearly specified. This functionality is likely to be incorporated in later Linux kernels as the Linux kernel is always in development. Once data is passed to a driver that is part of a stack of modules by the kernel, the data may be shared with other drivers in the stack through an interface specific to that set of drivers.
In both environments, hardware access through a HAL interface is implemented for the specific platform the kernel is compiled for, i.e. x86, SPARC etc. A common feature of both architectures is that drivers are modules that can be loaded into a kernel at runtime. Each module contains an entry point that the kernel knows to start code execution from. A module will also contain routines that the kernel knows to call when an I/O operation is requested to a device managed by that module. This enables the kernel to provide a device independent interface to the application layer. A more in-depth comparison of driver components from the two architectures is presented later in Section 3.3.
3. Driver Components
The process of creating a device driver requires knowledge of how the associated hardware device is expected to operate. For example, just about every device will allow its clients to read data from and write data to it. In this section driver components that must be implemented by all drivers are presented, as well as a comparison of the two operating systems’ driver components. The implementation of a driver that performs I/O to a kernel buffer is also presented. The section concludes with a look at the driver development environments and facilities offered by each operating system.

3.1. Windows Driver Components
Drivers in Windows consist of various routines. Some are required, others optional. This section presents the routines that every driver must implement. A device driver in Windows is represented by a structure called a DriverObject. It is necessary to represent a driver with a structure such as a driver object because the kernel implements various routines that can be performed for every driver. These routines, discussed in the following sections, operate on a driver object.
3.1.1. Driver Initialisation
Every device driver in Windows contains a routine called DriverEntry. As its name suggests, this routine is the first driver routine executed when a driver is loaded, and it is where initialisation of the driver's driver object is performed. Microsoft's DDK [Microsoft DDK, 02] states that a driver object represents a currently loaded kernel driver, whereas a device object represents a physical, logical or virtual device. A single loaded kernel driver (represented by a driver object) can manage multiple devices (represented by device objects). During initialisation, fields in the driver object that specify the driver's unload routine, add device routine and dispatch routines are set. The unload routine is called when the driver is about to be unloaded so that it can perform cleanup operations, e.g. freeing up memory allocated off the kernel heap. AddDevice is a routine that is called after the DriverEntry routine if the driver being loaded is a PnP driver, while the dispatch routines implement the driver's I/O operations.
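A minimal DriverEntry sketch is shown below. The routine names SampleUnload, SampleAddDevice and SampleDispatchRead are hypothetical; the sketch is intended only to show which driver object fields are typically initialised, not to be a complete driver:

    #include <wdm.h>

    VOID     SampleUnload(PDRIVER_OBJECT DriverObject);
    NTSTATUS SampleAddDevice(PDRIVER_OBJECT DriverObject, PDEVICE_OBJECT Pdo);
    NTSTATUS SampleDispatchRead(PDEVICE_OBJECT DeviceObject, PIRP Irp);

    NTSTATUS DriverEntry(PDRIVER_OBJECT DriverObject, PUNICODE_STRING RegistryPath)
    {
        UNREFERENCED_PARAMETER(RegistryPath);

        /* Called when the driver is about to be unloaded (cleanup). */
        DriverObject->DriverUnload = SampleUnload;

        /* Called by the PnP manager for each device the driver controls. */
        DriverObject->DriverExtension->AddDevice = SampleAddDevice;

        /* Dispatch routines implementing the driver's I/O operations. */
        DriverObject->MajorFunction[IRP_MJ_READ] = SampleDispatchRead;

        return STATUS_SUCCESS;
    }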
3.1.2. The AddDevice Routine
PnP drivers implement a routine called AddDevice. In this routine a device object is created, at which time space for storing global data for a device is allocated. Device resource allocation and initialisation is also performed here. Device objects are referred to by different names depending on where they were created. If a device object is created by the driver currently being loaded to manage its device, it is called a Functional Device Object (FDO). If it is a device object from a lower driver in a stack of drivers, it is called a Physical Device Object (PDO). If it is a device object from an upper driver in a stack of drivers, it is called a Filter Device Object (FiDO).
3.1.2.1. Creating a device object
A device object corresponding to a device is created using the I/O Manager routine called IoCreateDevice inside the add device routine. The most important requirements for IoCreateDevice are a name for the device object and device type. The name allows applications and other kernel drivers to gain a handle to the driver, in order to perform I/O operations. The device type specifies the type of device the driver is used for, for example a storage device.
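A sketch of such a call, made from a hypothetical AddDevice routine, might look as follows. The device name, the SAMPLE_EXTENSION structure and the device type FILE_DEVICE_UNKNOWN are assumptions made for illustration; the remaining steps of the routine are sketched in sections 3.1.2.3 and 3.1.2.5:

    /* Per-device data kept in the device extension (see section 3.1.2.2). */
    typedef struct _SAMPLE_EXTENSION {
        PDEVICE_OBJECT LowerDevice;   /* set when the FDO is attached to the stack */
    } SAMPLE_EXTENSION, *PSAMPLE_EXTENSION;

    NTSTATUS SampleAddDevice(PDRIVER_OBJECT DriverObject, PDEVICE_OBJECT Pdo)
    {
        PDEVICE_OBJECT fdo;
        UNICODE_STRING name;
        NTSTATUS status;

        RtlInitUnicodeString(&name, L"\\Device\\SampleDevice");

        /* Create the functional device object; the second argument is the
           size of the device extension used for per-device data. */
        status = IoCreateDevice(DriverObject, sizeof(SAMPLE_EXTENSION), &name,
                                FILE_DEVICE_UNKNOWN, 0, FALSE, &fdo);
        if (!NT_SUCCESS(status))
            return status;

        /* ... attach to the device stack and register interfaces here ... */
        return STATUS_SUCCESS;
    }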
3.1.2.2. Global Driver Data
When a device object is created, it is possible to associate with it a block of memory, called the DeviceExtension in Windows, where custom driver data can be stored. This is an important facility, as it eliminates the need to use global data structures in driver code, which can be difficult to manage. For example, if a local variable with the same name as a global variable is mistakenly declared in a routine, the resulting bug can be difficult to track down. The device extension also makes it easier to manage data specific to each device object when more than one device object exists in a single driver, as is the case when a bus driver manages child physical device objects for devices present on its bus.
3.1.2.3. Device naming
A device can be named at device object creation time. This name can be used to obtain a handle to the driver, which is then used for performing I/O. Microsoft recommends not naming functional device objects created in filter and functional drivers. As pointed out by Oney [Oney, 99], if a non-disk device object is named, any client can open it and perform I/O, because the default access control status Windows gives to non-disk device objects is unrestricted. Another problem is that the name specified does not have to follow any naming protocol, so the name may not be well chosen. For example, two driver writers may come up with the same name for their device objects, which would cause a clash.
Windows supports a second device object naming scheme using device interfaces. Device interfaces are constructed with 128-bit globally unique identifiers (GUIDs) [Open Group, 97]. A GUID can be generated using a utility provided by the Microsoft DDK. Once generated, a GUID can be publicised. A driver registers the GUID for a device interface in its add device routine through a call to the I/O manager routine IoRegisterDeviceInterface. Once registered, the driver must enable the device interface through a call to the I/O manager routine IoSetDeviceInterfaceState. The registration process adds an interface data entry to the Windows registry, which can be accessed by applications.
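A sketch of this registration, as it might appear in the add device routine of section 3.1.2.1, is shown below. GUID_SAMPLE_INTERFACE stands in for a GUID generated with the DDK utility and Pdo is the physical device object passed to AddDevice:

    UNICODE_STRING symLink;
    NTSTATUS status;

    /* Register an instance of the device interface class against the
       physical device object handed to AddDevice. */
    status = IoRegisterDeviceInterface(Pdo, &GUID_SAMPLE_INTERFACE, NULL, &symLink);
    if (NT_SUCCESS(status))
        status = IoSetDeviceInterfaceState(&symLink, TRUE);   /* enable the interface */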
3.1.2.4. Driver Access from an Application
An application that wants to perform I/O operations with a device driver must obtain a handle to a device driver through the CreateFile Win32 API call. It requires a path to a device such as \\device\devicex. Named devices will have their names appear in the name space called \\device, thus the previous path is for a device named devicex. CreateFile also requires access mode flags such as read, write and file sharing flags for the device.
Accesses to unnamed devices that have registered a device interface are performed differently as shown in figure 3.1.2.4. This requires obtaining a handle to a device information structure using the driver’s GUID, and calling the SetupDiGetClassDevs Win32 API routine. This is only possible if the driver registered a device interface, through which applications can access the device (called a device interface class).
Each time a driver calls the I/O manager routine IoRegisterDeviceInterface, a new instance of the device interface class is created. Once a device information handle is obtained by an application, multiple calls to the Win32 API routine SetupDiEnumDeviceInterfaces will return device interface data for each instance of the device interface class. Lastly, a device path for each of the driver instances can be retrieved from the interface data obtained from the previous call with another Win32 API routine, SetupDiGetDeviceInterfaceDetail. CreateFile can then be called with the device path for the device instance of interest, to obtain a handle for performing I/O (a user-mode sketch follows figure 3.1.2.4).

[Figure 3.1.2.4 diagram: a device GUID is passed to SetupDiGetClassDevs to obtain a device interface handle; SetupDiEnumDeviceInterfaces yields interface data; SetupDiGetDeviceInterfaceDetail yields a device path in the \\Device name space (e.g. devicea, deviceb, devicex); CreateFile then returns a handle for I/O.]
Figure 3.1.2.4 Obtaining a handle an application can use for I/O from a device GUID.
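The user-mode side of this sequence might look like the following sketch, which opens the first registered instance of a caller-supplied interface GUID. Error handling is abbreviated, and the program is assumed to link against setupapi.lib:

    #include <windows.h>
    #include <setupapi.h>
    #include <stdlib.h>

    HANDLE OpenFirstInterface(const GUID *guid)
    {
        HDEVINFO info;
        SP_DEVICE_INTERFACE_DATA ifData;
        PSP_DEVICE_INTERFACE_DETAIL_DATA detail;
        DWORD needed = 0;
        HANDLE device = INVALID_HANDLE_VALUE;

        /* Handle to the set of present devices that registered this interface class. */
        info = SetupDiGetClassDevs(guid, NULL, NULL,
                                   DIGCF_PRESENT | DIGCF_DEVICEINTERFACE);
        if (info == INVALID_HANDLE_VALUE)
            return device;

        ifData.cbSize = sizeof(ifData);
        if (SetupDiEnumDeviceInterfaces(info, NULL, guid, 0, &ifData)) {
            /* The first call reports how much room the detail (device path) needs. */
            SetupDiGetDeviceInterfaceDetail(info, &ifData, NULL, 0, &needed, NULL);
            detail = (PSP_DEVICE_INTERFACE_DETAIL_DATA)malloc(needed);
            detail->cbSize = sizeof(SP_DEVICE_INTERFACE_DETAIL_DATA);
            if (SetupDiGetDeviceInterfaceDetail(info, &ifData, detail,
                                                needed, NULL, NULL)) {
                /* The device path can now be handed to CreateFile for I/O. */
                device = CreateFile(detail->DevicePath,
                                    GENERIC_READ | GENERIC_WRITE, 0, NULL,
                                    OPEN_EXISTING, 0, NULL);
            }
            free(detail);
        }
        SetupDiDestroyDeviceInfoList(info);
        return device;
    }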
3.1.2.5. Device Object Stacking
When the add device routine is called by the PnP manager, one of the parameters passed to it is a device object (PDO) for a driver below the current one. Device object stacking is performed in the add device routine so that IRPs directed at drivers lower in the stack can be received by the driver being loaded. Device object stacking is achieved by a call to the I/O Manager routine IoAttachDeviceToDeviceStack, as shown in figure 3.1.2.5 (a code sketch follows the figure). A physical device object (PDO), which is lower in the stack than the new device object, is required when calling IoAttachDeviceToDeviceStack. The routine attaches the specified device object to the top of the driver stack and returns the device object that is one below the new one, e.g. in the example shown in figure 3.1.2.5 this would be lower device object X. The lower physical device object (PDO) can be any number of layers below the new device object, but IoAttachDeviceToDeviceStack returns the device object one below the current one.

[Figure 3.1.2.5 diagram: a new device object (FDO) is passed to IoAttachDeviceToDeviceStack, which places it on top of the existing stack of lower device objects X, Y and Z.]
Figure 3.1.2.5 Attaching a device object to the top of a device object stack.
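Continuing the hypothetical AddDevice sketch from section 3.1.2.1, the attachment itself reduces to a single call whose return value is saved so that IRPs can later be forwarded down the stack:

    PSAMPLE_EXTENSION ext = (PSAMPLE_EXTENSION)fdo->DeviceExtension;

    /* Attach the new FDO to the top of the stack. The routine returns the
       device object immediately below it, to which unhandled IRPs will be
       forwarded. */
    ext->LowerDevice = IoAttachDeviceToDeviceStack(fdo, Pdo);
    if (ext->LowerDevice == NULL) {
        IoDeleteDevice(fdo);
        return STATUS_DEVICE_REMOVED;
    }
    fdo->Flags &= ~DO_DEVICE_INITIALIZING;   /* the FDO is now ready to receive IRPs */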
3.1.2.6. User to Kernel and Kernel to User Data Transfer Modes in Windows
The mode used to transfer data between kernel space and user space is specified in the flags field of a device object. There are three modes: buffered I/O, direct I/O, and I/O that uses neither of these methods, termed "neither I/O". Figure 3.1.2.6 illustrates the three modes. In buffered I/O mode the operating system allocates a kernel buffer that can handle a request. In the case of a write operation, the operating system validates the supplied user space buffer, copies data from the user space buffer to the newly allocated kernel buffer and passes the kernel buffer to the driver. In the case of reads, the operating system validates the user space buffer and copies data from the newly allocated kernel buffer to the user space buffer when the request completes. The kernel buffer is accessible to drivers as the AssociatedIrp.SystemBuffer field of an IRP. Drivers read from or write to this buffer to communicate with applications when buffered I/O is in use.
Direct I/O is the second I/O method that can be used for data exchanges between applications and a driver. An application-supplied buffer is locked into memory by the operating system, so that it will not be swapped out, and a memory descriptor list (MDL) for the locked memory is passed to a driver. An MDL is an opaque structure. Its implementation details are not visible to drivers. The driver then performs DMA to the user space buffer through the MDL. The MDL is accessible to drivers through the MdlAddress field of an IRP. The advantage of using direct I/O is that it is faster than buffered I/O since no copying of data to and from user and kernel space is necessary and I/O is performed directly into a user space buffer.

[Figure 3.1.2.6 diagram, three panels: (1) buffered I/O, where the kernel copies data between the user space buffer and a kernel space buffer that the device driver performs I/O to; (2) direct I/O with MDLs, where the kernel creates an MDL describing the user space buffer and the device driver performs DMA through it; (3) neither I/O, where the device driver performs I/O to the user space buffer directly using the buffer's virtual address.]
Figure 3.1.2.6 The three ways in which data from kernel to user and user to kernel space is exchanged.
The third method for I/O uses neither buffering nor MDLs. Instead the operating system passes the virtual address of the user space buffer to the driver. The driver is then responsible for checking the validity of the buffer before making use of it. In addition, the user space buffer is only accessible if the current thread context is that of the requesting application, otherwise a page fault will occur, since the virtual address is valid only while that application's process is active.
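Inside a driver's read or write dispatch routine (see section 3.1.3), the three modes are reached through different IRP fields. A brief sketch, where DeviceObject and Irp are the dispatch routine's parameters and the flags were set when the device object was created:

    PVOID buffer = NULL;

    if (DeviceObject->Flags & DO_BUFFERED_IO) {
        /* Buffered I/O: the I/O manager supplies an intermediate kernel buffer. */
        buffer = Irp->AssociatedIrp.SystemBuffer;
    } else if (DeviceObject->Flags & DO_DIRECT_IO) {
        /* Direct I/O: map the locked-down user pages described by the MDL. */
        buffer = MmGetSystemAddressForMdlSafe(Irp->MdlAddress, NormalPagePriority);
    } else {
        /* Neither I/O: a raw user-space virtual address, valid only in the
           context of the requesting process and requiring validation. */
        buffer = Irp->UserBuffer;
    }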
3.1.3. Dispatch Routines
Dispatch routines are routines that handle incoming I/O requests packaged as IRPs (I/O request packets). When an IRP arrives (e.g. when an application initiates I/O), an appropriate dispatch routine is selected from the array of routines specified in the MajorFunction field of a driver object, as shown in figure 3.1.3. These dispatch routines are initialised in the driver's entry routine. Every IRP is associated, when created, with an I/O stack location structure used for storing the IRP's parameters. This structure contains a field that specifies the dispatch routine the IRP is meant for, together with the relevant parameters for that dispatch routine. From this field, the I/O manager determines which dispatch routine to send an IRP to.

[Figure 3.1.3 diagram: the I/O manager reads the MajorFunction field in an IRP's I/O stack location and calls the matching entry in the driver object's MajorFunction field, which is an array of dispatch routines.]

Figure 3.1.3 Dispatching IRPs to dispatch routines.

Thus IRPs are routed to an appropriate driver-supplied routine so that they can be handled there. Required dispatch routine IDs are shown in table 3.1.3; a minimal example handler follows the table. They are indexes into the array of routines specified by the MajorFunction field of a driver object. The dispatch routines have custom driver-supplied names and are implemented by the driver. They all accept an IRP and the device object to which the IRP is being sent.

IRP_MJ_PNP                        Handles PnP messages
IRP_MJ_CREATE                     Handles the opening of a device to gain a handle
IRP_MJ_CLEANUP                    Handles the closing of the device handle gained above
IRP_MJ_CLOSE                      Same as cleanup, called after cleanup
IRP_MJ_READ                       Handles a read request to a device
IRP_MJ_WRITE                      Handles a write request to a device
IRP_MJ_DEVICE_CONTROL             Handles an I/O control request to a device
IRP_MJ_INTERNAL_DEVICE_CONTROL    Handles driver-specific I/O control requests
IRP_MJ_SYSTEM_CONTROL             Handles WMI requests
IRP_MJ_POWER                      Handles power management messages
Table 3.1.3 Required Windows driver dispatch routines
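As an illustration of the shape these routines take, here is a minimal, hypothetical IRP_MJ_READ handler for a device that uses buffered I/O; it simply returns zeroed bytes and completes the IRP:

    NTSTATUS SampleDispatchRead(PDEVICE_OBJECT DeviceObject, PIRP Irp)
    {
        PIO_STACK_LOCATION stack = IoGetCurrentIrpStackLocation(Irp);
        ULONG length = stack->Parameters.Read.Length;    /* bytes requested */

        UNREFERENCED_PARAMETER(DeviceObject);

        /* With buffered I/O the system buffer carries the data back to the
           caller; this sketch just fills it with zeroes. */
        RtlZeroMemory(Irp->AssociatedIrp.SystemBuffer, length);

        Irp->IoStatus.Status = STATUS_SUCCESS;
        Irp->IoStatus.Information = length;              /* bytes transferred */
        IoCompleteRequest(Irp, IO_NO_INCREMENT);
        return STATUS_SUCCESS;
    }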

3.1.4. Windows Driver Installation
Windows uses installation information contained in a text file called an INF file to install drivers. The creator of a driver is responsible for providing an INF file for the driver. A GUI application that is provided with the Windows DDK called GenInf allows the generation of an INF file for a driver. It requires a company name and a Windows Device class under which the driver will be installed. Windows has various pre-defined device classes for installed drivers. The Windows device manager applet, accessible through the system control panel applet, shows all installed drivers categorised using these device classes. Examples of existing classes are the 1394 and PCMCIA device classes. A custom device class can be added by adding a ClassInstall32 section in the INF file.
The hardware ID for a PnP-aware device must also be specified in the INF file since it will be used by the system to identify the device when the device is inserted into the system. A hardware ID is an identification string used by the PnP manager to identify devices that are inserted into the system. Microsoft publishes PnP hardware IDs for the various devices that are usable with the Windows operating system. This hardware ID is stored on the hardware device and read off the device by the system when that device is inserted into the system. Once an INF file for a new device is successfully installed into the system, the driver for that device (which has a specific hardware ID) will be loaded each time the device is inserted into the system and unloaded when the device is removed from the system.

3.1.5. Obtaining Driver Usage Information in Windows
The device manager found in the control panel system applet provides driver information for users. It lists all currently loaded drivers and information on the providers of each driver and their resource usage. It also displays drivers that failed to load and their error codes.
3.2. Linux Driver Architecture Components
Device drivers in Linux are similar to those in Windows in that they too are made up of various routines that perform I/O and device control operations. There is no driver object visible to a driver; instead, drivers are managed internally by the kernel.
3.2.1. Driver Initialisation
Every driver in Linux contains a register driver routine and a deregister driver routine. The register driver routine is the counterpart to the Windows driver entry routine. Driver writers use the module_init and module_exit kernel defined macros to specify custom routines that will be designated as the register and deregister routines.
3.2.1.1. Driver Registration and Deregistration
The routine designated by the module_init macro as the registration routine is the first routine executed when a driver is loaded. The driver is registered here by using a kernel character device registration routine called register_chrdev. The important requirements for this routine are a name for the driver, a driver major number (discussed later in section 3.2.2) and a set of routines for performing file operations. Other driver-specific initialisation should also take place in this routine. The deregistration routine is executed when the driver is being unloaded. Its function is to perform cleanup operations before a driver is unloaded. A call to the kernel routine unregister_chrdev with a device name and major number is necessary when deregistering a driver that was previously registered with a register_chrdev call.
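A minimal registration sketch in the style of the 2.4-era kernel interface described here follows. The driver name "sample" and the file operations structure sample_fops (see section 3.2.3) are assumptions made for illustration:

    #include <linux/module.h>
    #include <linux/fs.h>
    #include <linux/init.h>

    static int sample_major;                    /* filled in at registration time */
    static struct file_operations sample_fops;  /* the driver's file operations (section 3.2.3) */

    static int __init sample_init(void)
    {
        /* A major number of 0 requests dynamic assignment; on success the
           routine returns the major number actually allocated. */
        sample_major = register_chrdev(0, "sample", &sample_fops);
        if (sample_major < 0)
            return sample_major;
        return 0;
    }

    static void __exit sample_exit(void)
    {
        /* Undo the registration performed in sample_init. */
        unregister_chrdev(sample_major, "sample");
    }

    module_init(sample_init);                   /* designate the registration routine */
    module_exit(sample_exit);                   /* designate the deregistration routine */
    MODULE_LICENSE("GPL");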
3.2.2. Device Naming
In Linux, devices are named using numbers in the range 0 to 255, called device major numbers. This implies that there can be a maximum of 256 usable devices, i.e. devices that an application can gain a handle to, but each driver registered under a major number can manage up to 256 devices of its own, numbered using device minor numbers in the range 0 to 255. It is therefore possible for applications to gain access to up to 65,536 (256 x 256) devices. Major numbers are assigned to well known devices, for example major number 171 is assigned to IEEE1394 devices. The file Documentation/devices.txt in the Linux kernel source tree contains all major number assignments and a contact address for the device number registration authority. Currently, major numbers 240-254 are available for experimental use. A driver can specify a major number of 0 to request automatic assignment of a major number for a device, if one is available. The use of major number 0 for this purpose does not cause problems, as it is reserved for the null device and no new driver should register itself as the null device driver.
3.2.2.1. Driver Access from an Application
Drivers are accessed by applications through file system entries (nodes). By convention, the device node directory on a system is /dev. Applications that want to perform I/O with a driver use the open system call to obtain a handle to a particular driver. The open system call requires a device node name such as /dev/tty and access flags. After obtaining a handle, the application can use it in calls to other system I/O calls such as read, write and ioctl.
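For example, a small user-space program might access such a driver as follows; the node /dev/sample and the commented-out ioctl request code are hypothetical:

    #include <fcntl.h>
    #include <unistd.h>
    #include <stdio.h>

    int main(void)
    {
        char buf[64];
        ssize_t n;

        int fd = open("/dev/sample", O_RDWR);   /* obtain a handle to the driver */
        if (fd < 0) {
            perror("open /dev/sample");
            return 1;
        }

        n = read(fd, buf, sizeof(buf));         /* read from the device */
        if (n > 0)
            write(fd, buf, n);                  /* write the data back */

        /* ioctl(fd, SAMPLE_RESET);                a device-specific control request */

        close(fd);
        return 0;
    }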
3.2.3. File operations
In Windows, dispatch routines are set up in the driver entry routine of a driver. In Linux, the equivalent routines are known as file operations and are represented by a structure called file_operations. A typical driver would implement the file operations listed in table 3.2.3; a sketch of such a structure follows the table.

Open        Handles the opening of the device to gain a handle
Release     Handles the closing of the device handle gained above
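A sketch of such a structure, with hypothetical routines matching the register_chrdev example in section 3.2.1.1, might look like this; the read and write handlers would normally use copy_to_user and copy_from_user to move data:

    #include <linux/module.h>
    #include <linux/fs.h>

    static int sample_open(struct inode *inode, struct file *filp)
    {
        return 0;                                /* nothing to set up in this sketch */
    }

    static int sample_release(struct inode *inode, struct file *filp)
    {
        return 0;                                /* nothing to clean up either */
    }

    static ssize_t sample_read(struct file *filp, char *buf,
                               size_t count, loff_t *off)
    {
        /* copy data to the user buffer here, e.g. with copy_to_user() */
        return 0;
    }

    static ssize_t sample_write(struct file *filp, const char *buf,
                                size_t count, loff_t *off)
    {
        /* copy data from the user buffer here, e.g. with copy_from_user() */
        return count;
    }

    static struct file_operations sample_fops = {
        .owner   = THIS_MODULE,
        .open    = sample_open,
        .release = sample_release,
        .read    = sample_read,
        .write   = sample_write,
    };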