Linux Kernel Interrupt Controller

Linux Kernel Interrupt Controller API

Interrupt handling is an important part of the Linux kernel. Much of the kernel’s functionality, especially on embedded systems, involves interrupt handling.

This article describes the most important concepts behind the Linux kernel’s interrupt handling mechanisms, including the relevant code and data structures. Sample code from the Linux kernel is also provided.

The kernel IRQ subsystem is built around the following structures:

  • struct irq_desc;
  • struct irq_data;
  • struct irqaction;
  • struct irq_chip;
  • struct irq_domain;
  • struct irq_domain_ops;

Interrupt Descriptor: struct irq_desc
Each interrupt source available to the system is allocated a single struct irq_desc structure. This structure stores important information about the interrupt controller, the handler and more:
struct irq_desc {
        struct irq_data         irq_data;
        irq_flow_handler_t      handle_irq;
        struct irqaction        *action;
#ifdef CONFIG_PROC_FS
        struct proc_dir_entry   *dir;
#endif
        unsigned int            nr_actions;
        int                     parent_irq;
        struct module           *owner;
        const char              *name;
};
@irq_data: per irq and chip data passed down to chip functions
@handle_irq: highlevel irq-events handler
@action: the irq action chain
@dir: /proc/irq/ procfs entry
@nr_actions: number of installed actions on this descriptor
@name: flow handler name for /proc/interrupts output

When Linux boots, the start_kernel() function calls early_irq_init(), which probes for the number of preallocated IRQs via arch_probe_nr_irqs(), allocates an irq_desc structure for each IRQ via alloc_desc(), and initializes the various fields to their default values.
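
For reference, here is an abridged sketch of early_irq_init() as found in kernel/irq/irqdesc.c when CONFIG_SPARSE_IRQ is enabled; the exact helpers and fields vary across kernel versions:

    int __init early_irq_init(void)
    {
            int i, initcnt, node = first_online_node;
            struct irq_desc *desc;

            init_irq_default_affinity();

            /* Let the arch update nr_irqs and return the number of
               preallocated irqs. */
            initcnt = arch_probe_nr_irqs();

            for (i = 0; i < initcnt; i++) {
                    /* allocate an irq_desc and set its fields to defaults */
                    desc = alloc_desc(i, node, NULL);
                    set_bit(i, allocated_irqs);
                    irq_insert_desc(i, desc);
            }
            return arch_early_irq_init();
    }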


Linux Kernel Interrupt Descriptor

IRQ number and Interrupt Descriptor

An IRQ is an interrupt request from a device. Currently an IRQ can come in over a pin or over a packet (a message-signaled interrupt). Several devices may be connected to the same pin, thus sharing an IRQ. An IRQ number is a kernel identifier used to talk about a hardware interrupt source: an enumeration of the possible interrupt sources on a machine. What is enumerated is typically the number of input pins on all of the interrupt controllers in the system, and the IRQ number serves as an index into the global irq_desc array.

In the Linux kernel, each peripheral IRQ is described by an interrupt descriptor (struct irq_desc) that holds the key information related to that IRQ. When an interrupt occurs, the architecture-specific handle_arch_irq handler runs first. It identifies the corresponding hardware interrupt number, translates that hw interrupt number into a logical Linux IRQ number through an IRQ domain, and from the IRQ number retrieves the corresponding interrupt descriptor. Finally, the descriptor’s highlevel irq-event handler is invoked to handle the interrupt.

The highlevel irq-event handler mainly performs two operations:

  • call into the interrupt descriptor’s underlying irq chip driver to perform interrupt flow control (mask, unmask, ack and other callback functions);
  • invoke the device/peripheral handler of every irqaction chained on the descriptor (a minimal driver-side sketch follows this list).
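
From a device driver’s point of view, the irqaction chain is populated via request_irq(). A minimal sketch with made-up names (my_handler, "my-device"); note that the dev_id cookie must be non-NULL for shared lines:

    #include <linux/interrupt.h>

    /* Peripheral handler: runs as one irqaction on the descriptor's chain. */
    static irqreturn_t my_handler(int irq, void *dev_id)
    {
            /* handle the device-specific event; dev_id is the cookie
               passed to request_irq() below */
            return IRQ_HANDLED;
    }

    static int my_setup_irq(unsigned int irq, void *dev)
    {
            /* Adds an irqaction to the IRQ's descriptor; IRQF_SHARED
               allows several devices to share the same line. */
            return request_irq(irq, my_handler, IRQF_SHARED, "my-device", dev);
    }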

Linux Kernel IRQ Domain

IRQ Domain – Interrupt number mapping library

The current design of the Linux kernel uses a single large number space where each separate IRQ source is assigned a different number. This is simple when there is only one interrupt controller, but in systems with multiple interrupt controllers the kernel must ensure that each one gets assigned non-overlapping allocations of Linux IRQ numbers.

In the past, a system had just one interrupt controller, so the controller’s actual HW interrupt line number could directly serve as the IRQ number. For example, earlier embedded SoC interrupt controllers mostly had a single interrupt status register of perhaps 64 bits (possibly more), where each bit was an IRQ number and could be mapped directly. In such a controller, the GPIO interrupts often occupied only a single bit of the interrupt status register, so all GPIO interrupts shared one IRQ number and the GPIO block had to demultiplex them itself. Effectively, the system then has at least two interrupt controllers: one traditional interrupt controller and one GPIO-controller-type interrupt controller.

With the increasing complexity of systems, the number of interrupt controllers registered as unique irqchips shows a rising tendency: for example, sub-drivers of different kinds such as GPIO controllers avoid re-implementing callback mechanisms identical to the IRQ core system by modelling their interrupt handlers as irqchips, i.e. in effect cascading interrupt controllers. For this reason we need a mechanism to separate controller-local interrupt numbers, called hardware irqs, from Linux IRQ numbers.

In the Linux kernel, the following two IDs are used to identify an interrupt from a peripheral:

  • Linux IRQ number: a kernel identifier used to talk about a hardware interrupt source; an enumeration of the possible interrupt sources on a machine. Typically what is enumerated is the number of input pins on all of the interrupt controllers in the system. The IRQ number is a virtual interrupt ID and is hardware independent.
  • HW interrupt ID: the identifier of a peripheral interrupt at the actual hardware interrupt controller. The interrupt controller uses the HW interrupt ID to identify peripheral interrupts.

The irq_domain provides a mapping between hwirq and IRQ numbers. It also implements translation from Device Tree interrupt specifiers to hwirq numbers.
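
As an illustration, a driver normally never handles hwirq numbers itself; it lets the OF layer translate a device tree interrupt specifier through the matching irq_domain. A minimal sketch (the node pointer and index are hypothetical):

    #include <linux/of_irq.h>

    /* Translate the first "interrupts" entry of a DT node into a Linux
       IRQ number; the owning irq_domain's xlate/map callbacks run under
       the hood. irq_of_parse_and_map() returns 0 on failure. */
    static int example_get_irq(struct device_node *np)
    {
            unsigned int virq = irq_of_parse_and_map(np, 0);

            return virq ? virq : -EINVAL;
    }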

Types of Mapping

  • linear mapping: The linear mapping maintains a fixed size table indexed by the hwirq number. When a hwirq is mapped, an irq_desc is allocated for the hwirq, and the IRQ number is stored in the table. Its interface API is as follows:

    static inline struct irq_domain *irq_domain_add_linear(struct device_node *of_node,
                                                           unsigned int size,
                                                           const struct irq_domain_ops *ops,
                                                           void *host_data)

    irq_domain_add_linear() – Allocate and register a linear revmap irq_domain.
    @of_node: pointer to interrupt controller’s device tree node.
    @size: Number of interrupts in the domain.
    @ops: map/unmap domain callbacks
    @host_data: Controller private data pointer
  • Radix Tree map: The irq_domain maintains a radix tree map from hwirq numbers to Linux IRQs. When a hwirq is mapped, an irq_desc is allocated and the hwirq is used as the lookup key for the radix tree. Its interface API is as follows:

    static inline struct irq_domain *irq_domain_add_tree(struct device_node *of_node,
                                                         const struct irq_domain_ops *ops,
                                                         void *host_data)
  • No map: The No Map mapping is to be used when the hwirq number is programmable in the hardware, i.e. the HW interrupt ID is configured through a register rather than fixed by physical wiring. For this type of mapping, the interface API is as follows:

    static inline struct irq_domain *irq_domain_add_nomap(struct device_node *of_node,
                                                          unsigned int max_irq,
                                                          const struct irq_domain_ops *ops,
                                                          void *host_data)
  • Legacy map: The Legacy mapping is a special case for drivers that already have a range of irq_descs allocated for the hwirqs. It is used when the driver cannot be immediately converted to use the linear mapping. For example, many embedded system board support files use a set of #defines for IRQ numbers that are passed to struct device registrations; in that case the Linux IRQ numbers cannot be dynamically assigned, so the legacy mapping should be used. The legacy map assumes that a contiguous range of IRQ numbers has already been allocated for the controller and that the IRQ number can be calculated by adding a fixed offset to the hwirq number, and vice versa. Its interface API is as follows:

    static inline struct irq_domain *irq_domain_add_legacy(struct device_node *of_node,
                                                           unsigned int size,
                                                           unsigned int first_irq,
                                                           irq_hw_number_t first_hwirq,
                                                           const struct irq_domain_ops *ops,
                                                           void *host_data)

    irq_domain_add_legacy() – Allocate and register a legacy revmap irq_domain.
    @of_node: pointer to interrupt controller’s device tree node.
    @size: total number of irqs in legacy mapping
    @first_irq: first number of irq block assigned to the domain
    @first_hwirq: first hwirq number to use for the translation.
    @ops: map/unmap domain callbacks
    @host_data: Controller private data pointer
  • Hierarchy map: On some architectures, multiple interrupt controllers may be involved in delivering an interrupt from the device to the target CPU. To support such a hardware topology and make the software architecture match the hardware architecture, an irq_domain data structure is built for each interrupt controller, and those irq_domains are organized into a hierarchy. When building the hierarchy, the irq_domain closer to the device is the child and the irq_domain closer to the CPU is the parent. For the hierarchy map, the interface API is as follows:

    static inline struct irq_domain *irq_domain_add_hierarchy(struct irq_domain *parent,
                                                              unsigned int flags,
                                                              unsigned int size,
                                                              struct device_node *node,
                                                              const struct irq_domain_ops *ops,
                                                              void *host_data)

    irq_domain_add_hierarchy() – Add an irqdomain into the hierarchy
    @parent: Parent irq domain to associate with the new domain
    @flags: Irq domain flags associated to the domain
    @size: Size of the domain
    @node: Optional device-tree node of the interrupt controller
    @ops: Pointer to the interrupt domain callbacks
    @host_data: Controller private data pointer

Create a mapping for the IRQ domain

An interrupt controller driver creates and registers an irq_domain by calling one of the irq_domain_add_*() functions. The function will return a pointer to the irq_domain on success.

In most cases, the irq_domain begins empty, without any mappings between hwirq and IRQ numbers. Mappings are added by calling irq_create_mapping(), which takes the irq_domain and a hwirq number as arguments. If a mapping for the hwirq does not already exist, it allocates a new irq_desc, calls irq_domain_associate() to associate the descriptor with the hwirq, and invokes the .map() callback so the driver can perform any required hardware setup.

In the case of a legacy mapping, the driver already has a range of irq_descs allocated for the hwirqs, and a contiguous range of IRQ numbers is assumed to have been allocated for the controller. Therefore irq_domain_add_legacy() calls irq_domain_associate_many(), which in turn calls irq_domain_associate() to associate each hwirq with its Linux IRQ number and invoke the .map() callback so the driver can perform any required hardware setup.
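
Putting this together, a minimal controller driver might register a linear domain and create a mapping as in the hypothetical sketch below; my_irq_map, MY_NR_IRQS and the use of dummy_irq_chip are assumptions for illustration only:

    #include <linux/irq.h>
    #include <linux/irqdomain.h>
    #include <linux/of.h>

    #define MY_NR_IRQS 32   /* assumed number of controller inputs */

    static int my_irq_map(struct irq_domain *d, unsigned int virq,
                          irq_hw_number_t hw)
    {
            /* dummy_irq_chip stands in for the driver's real irq_chip */
            irq_set_chip_and_handler(virq, &dummy_irq_chip, handle_simple_irq);
            return 0;
    }

    static const struct irq_domain_ops my_domain_ops = {
            .map   = my_irq_map,
            .xlate = irq_domain_xlate_onecell,
    };

    static struct irq_domain *my_domain;

    static int my_intc_init(struct device_node *np)
    {
            /* Register an initially empty linear domain. */
            my_domain = irq_domain_add_linear(np, MY_NR_IRQS,
                                              &my_domain_ops, NULL);
            if (!my_domain)
                    return -ENOMEM;

            /* Map hwirq 3: allocates an irq_desc, associates it with the
               hwirq and runs the .map() callback above. Returns the
               Linux IRQ number. */
            return irq_create_mapping(my_domain, 3);
    }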

Data Structure Descriptor

  • irq domain: in the kernel, an irq domain is represented by struct irq_domain:

    struct irq_domain {
            struct list_head link;
            const char *name;
            const struct irq_domain_ops *ops;
            void *host_data;
            unsigned int flags;
            struct device_node *of_node;
            struct irq_domain_chip_generic *gc;
    #ifdef CONFIG_IRQ_DOMAIN_HIERARCHY
            struct irq_domain *parent;
    #endif
            irq_hw_number_t hwirq_max;
            unsigned int revmap_direct_max_irq;
            unsigned int revmap_size;
            struct radix_tree_root revmap_tree;
            unsigned int linear_revmap[];
    };
    struct irq_domain – Hardware interrupt number translation object
    @link: Element in global irq_domain list.
    @name: Name of interrupt domain
    @ops: pointer to irq_domain methods
    @host_data: private data pointer for use by owner. Not touched by irq_domain core code
    @flags: host per irq_domain flags
    @of_node: Pointer to device tree nodes, used to decode device tree interrupt specifiers
    @gc: Pointer to a list of generic chips
    @parent: Pointer to parent irq_domain to support hierarchy irq_domains
    @revmap_direct_max_irq: largest hwirq that can be set for controllers for direct mapping
    @revmap_size: Size of the linear map table @linear_revmap[]
    @revmap_tree: Radix map tree for hwirqs that don’t fit in the linear map
    @linear_revmap: Linear table of hwirq->virq reverse mappings

In the Linux kernel, all irq domains are linked into a global list whose head is defined as follows:
static LIST_HEAD(irq_domain_list);
Through irq_domain_list, the HW interrupt ID to IRQ number mappings of the entire system can be reached. The host_data field holds private data for the underlying interrupt controller.

  • irq domain callback interfaces: in the kernel, the irq domain callbacks are abstracted by struct irq_domain_ops, defined as follows:

    struct irq_domain_ops {
            int (*match)(struct irq_domain *d, struct device_node *node);
            int (*map)(struct irq_domain *d, unsigned int virq, irq_hw_number_t hw);
            void (*unmap)(struct irq_domain *d, unsigned int virq);
            int (*xlate)(struct irq_domain *d, struct device_node *node,
                         const u32 *intspec, unsigned int intsize,
                         unsigned long *out_hwirq, unsigned int *out_type);
    };
    struct irq_domain_ops – Methods for irq_domain objects
    @match: Match an interrupt controller device node to a host, returns 1 on a match
    @map: Create or update a mapping between a virtual irq number and a hwirq number.
    @unmap: Dispose of such a mapping
    @xlate: Decode hwirq and linux irq from device tree node and interrupt specifier

The map callback is implemented by the device driver and performs the following operations on the interrupt descriptor (struct irq_desc) corresponding to the IRQ number, as sketched below:

  • set the descriptor’s irq chip
  • set the descriptor’s highlevel irq-event handler
  • set the descriptor’s irq chip data
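
A minimal sketch of such a .map() callback, assuming a hypothetical my_irq_chip and level-triggered inputs (a real driver picks the flow handler that matches its hardware):

    static struct irq_chip my_irq_chip = {
            .name = "my-intc",
            /* .irq_mask/.irq_unmask/.irq_ack would drive the hardware */
    };

    static int my_irq_map(struct irq_domain *d, unsigned int virq,
                          irq_hw_number_t hw)
    {
            /* set the irq chip and the highlevel irq-event handler */
            irq_set_chip_and_handler(virq, &my_irq_chip, handle_level_irq);
            /* set the irq chip data used by the chip callbacks */
            irq_set_chip_data(virq, d->host_data);
            return 0;
    }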

U-Boot – Universal Bootloader for Embedded Linux System

Introduction to U-Boot

U-Boot (Universal Bootloader) is an open source, primary boot loader used in embedded devices. It is available for a number of different computer architectures, including 68k, ARM, AVR32, Blackfin, MicroBlaze, MIPS, Nios, PPC and x86.

Following are the key features of U-Boot:

  • Monolithic code image
  • Runs the processor in a physical or single flat address space
  • Enables clocking, sets up some of the pin mux settings
  • Reads in Kernel image (uImage)
  • Jumps to load address pointed to in uImage header
  • Passes Kernel Command Line to Kernel
  • Debugging capabilities (just mentioning, not used during boot process)

Embedded U-Boot process

  • Power on
  • U-Boot gets control
  • U-Boot initializes minimum necessary hardware
  • U-Boot runs environment variable “bootcmd” on boot
  • bootcmd tells U-Boot to boot Linux from flash with initramfs and dtb
  • U-Boot loads Linux, initramfs, and dtb from flash into RAM
  • bootcmd boots Linux, passing initramfs and dtb (a sample environment is sketched after this list)
  • Kernel finds hardware via dtb
  • Kernel extracts initramfs into tmpfs
  • Kernel runs init
  • Init gets userspace up and running
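
As a concrete illustration of the bootcmd step, a typical NAND-based environment might look like the sketch below; the load addresses, partition names and console device are board-specific assumptions, not universal values:

    setenv bootargs console=ttyS0,115200 root=/dev/ram0
    setenv bootcmd 'nand read 0x82000000 kernel; nand read 0x88000000 dtb; nand read 0x8a000000 initramfs; bootm 0x82000000 0x8a000000 0x88000000'
    saveenv

On the next boot, U-Boot runs bootcmd automatically: the three nand read commands copy the kernel uImage, dtb and initramfs from their flash partitions into RAM, and bootm starts the kernel, passing it the initramfs and dtb addresses.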

Bootstrapping the U-Boot

The term bootstrapping usually refers to the process of loading the basic software into the memory of a computer after power-on or a general reset, especially the operating system, which then takes care of loading other software as needed. A bootloader is required to load an operating system: it starts before any other software, initializes the processor, and makes the CPU ready to execute software such as an operating system.

[Figure: Bootstrapping U-Boot on ARM System – 1]
[Figure: Bootstrapping U-Boot on ARM System – 2]

U-Boot Binary Relocation and Memory Remap

The process of assigning load addresses to the various parts of a program and adjusting the code and data segments to reflect those addresses is called relocation. In other words, the binary is copied from a source address to a new destination address, and it must still execute correctly from there.
Relocating in memory usually requires adjusting the link-time address offsets and the global offset table throughout the binary. If no address adjustment is required, the binary is copied to a destination address exactly the same as its original link address; this requires memory remap.
[Figure: U-Boot relocation and memory remap]
Two relocations happen in U-Boot:

  • One is relocation with remap, which does not need address adjustment.
  • The other is general relocation, which needs address adjustment based on dynamically calculated offsets relative to the top of the DRAM.

Why does U-Boot relocate itself into RAM?

  • Memory access is faster from RAM than from ROM; this matters for systems without an instruction cache.
  • Execution from RAM allows flash reprogramming, and allows software breakpoints with “trap” instructions.
  • Variables cannot be modified while they are stored in ROM.
  • The bootloader must copy itself into RAM to be able to update variables correctly.
  • Function calls rely on the stack, which has to live in RAM.

Bootloader for an Embedded Linux System

Introduction: What is boot-loader?

The bootloader is a piece of code responsible for basic hardware initialization (CPU, RAM, GPIO, etc.) and for loading an application binary, usually an operating system kernel, from flash storage, from the network, or from another type of non-volatile storage. It then hands execution over to that binary.
Besides these basic functions, most bootloaders provide a shell with various commands implementing different operations such as loading data from storage or the network, memory inspection, hardware diagnostics and testing, firmware upgrade and fail-safe functions.

Bootloaders on x86 System

The x86 processors are typically bundled on a board with a non-volatile memory containing a program, the BIOS. This program is executed by the CPU after reset and is responsible for basic hardware initialization and for loading a small piece of code from non-volatile storage: usually the first 512 bytes of a storage device. That code is typically a first-stage bootloader, which loads the full bootloader itself. The full bootloader can then offer all of its features; it typically understands filesystem formats so that the kernel can be loaded directly from a normal filesystem.
[Figure: Boot stages on x86]

GRUB, the Grand Unified Bootloader, is the most powerful of these. It can read many filesystem formats to load the kernel image and its configuration, provides a powerful shell with various commands, can load kernel images over the network, and more.
For more info visit http://www.gnu.org/software/grub/

Booting on Embedded CPUs

Case 1:
When the system is powered on, the CPU starts executing code at a fixed address. There is no other booting mechanism provided by the CPU, so the hardware design must ensure that a NOR flash chip is wired to be accessible at the address at which the CPU starts executing instructions. The first-stage bootloader must be programmed at this address in the NOR flash. NOR is mandatory because it allows random access, which NAND does not. This kind of booting is not very common anymore (it is impractical and requires NOR flash).
Case 2:
The CPU has integrated boot code in ROM (BootROM on AT91 CPUs, ROM code on OMAP, etc.). The exact details are CPU-dependent, but this boot code is able to load a first-stage bootloader from a storage device (MMC, NAND, SPI flash, UART, etc.) into internal SRAM (DRAM is not initialized yet). The first-stage bootloader is limited in size by hardware constraints (SRAM size); it must initialize DRAM and other hardware devices and load a second-stage bootloader into RAM.

Why are there Boot Stages?

At power-on reset (POR), the internal ROM code in the processor knows nothing about the system it is in. The processor therefore uses predefined methods to locate boot code that can be accessed with a minimal standard configuration of external interfaces. The internal RAM is limited in size, so only a portion of the boot process can be read into it, and subsequent stages are enabled from this partial boot out of internal RAM. The biggest reason for the staging is that some system configuration, such as the DDR memory type and its settings, can only be defined during the application design process.

Generic bootloaders for embedded CPUs

We will focus on the generic part, the main bootloader, offering the most important features. There are several open-source generic bootloaders. Here are the most popular ones:
1. U-Boot, the universal bootloader by Denx. The most used on ARM, also used on PPC, MIPS, x86, m68k, NIOS, etc. The de-facto standard nowadays.
Visit the link for more info http://www.denx.de/wiki/U-Boot
2. Barebox, a new architecture-neutral bootloader, written as a successor of U-Boot. Better design, better code, active development, but doesn’t yet have as much hardware support as U-Boot.
Visit the link for more info http://www.barebox.org
3. There are also many other open-source or proprietary bootloaders, often architecture-specific: RedBoot, Yaboot, PMON, etc.

Components of the ARM Linux Boot Process (4 Stages)

1. RBL – the ROM Boot Loader, contained in ROM, with minimal capability to initialize the processor and read the SPL from off-chip storage into internal RAM.
2. SPL – the Secondary Program Loader, a minimal piece of code specific to the target board, capable of setting up the processor to read in the next stage, U-Boot.
3. U-Boot – enables most of the processor functionality specific to the target board and end application, configures the board for booting Linux, and loads the kernel image from persistent storage.
4. Kernel image – the final stage of the boot process: kernel initialization, MMU enablement, device initialization, the user init process and finally user-level applications.
[Figure: Four-stage bootloader for ARM Linux]

Booting on ARM TI OMAP3

ROM Code: tries to find a valid bootstrap image from various storage sources and loads it into SRAM or RAM (RAM can be initialized by the ROM code through a configuration header). Size is limited to 64 KB. No user interaction is possible.
X-Loader or U-Boot: runs from SRAM. Initializes the DRAM and the NAND or MMC controller, then loads the secondary bootloader into RAM and starts it. No user interaction is possible.
U-Boot: runs from RAM. Initializes some other hardware devices (network, USB, etc.), loads the kernel image from storage or network into RAM and starts it. Provides a shell with commands.
Linux Kernel: runs from RAM. Takes over the system completely (the bootloaders no longer exist).
[Figure: Boot stages on TI OMAP3]
