EMBEDDED SYSTEM.


2 OPERATING SYSTEM TYPES
The Linux OS is monolithic. Generally, operating systems come in three flavors: real-time executive, monolithic, and microkernel. The basic reasoning behind this classification is how the OS makes use of the hardware for memory protection.

3 Real-Time Executive
Traditional real-time executives are meant for MMU-less processors. On these operating systems, the entire address space is flat or linear, with no memory protection between the kernel and applications. These operating systems have a small memory footprint because the OS and the applications are bundled into a single image. As the name suggests, they are real-time in nature because there is no overhead of system calls, message passing, or copying of data.

4 Real-Time Executive
The figure shows the architecture of the real-time executive, where the core kernel, kernel subsystems, and applications share the same address space. However, because the OS provides no protection, all software running on the system must be foolproof. Adding new software becomes an unpleasant task because it needs to be tested thoroughly lest it bring down the entire system. It is also very difficult to add applications or kernel modules dynamically, because the system has to be brought down.

5 Monolithic Kernels
Monolithic kernels distinguish between user space and kernel space. When software runs in user space, it normally can neither access the system hardware nor execute privileged instructions. Using special entry points (provided by the hardware), an application can enter kernel mode from user space. User space programs operate on virtual addresses so that they cannot corrupt another application's or the kernel's memory. However, the kernel components share the same address space, so a badly written driver or module can still cause the system to crash.

6 Monolithic Kernels
The figure shows the architecture of monolithic kernels, where the kernel and kernel submodules share the same address space and the applications each have their own private address spaces. Monolithic kernels can support a large application software base. Any fault in an application will cause only that application to misbehave, without causing a system crash. Applications can also be added to a live system without bringing the system down. Most UNIX OSs are monolithic.

7 Microkernel
These kernels were the subject of much research, especially in the late 1980s, and were considered the most superior with respect to OS design principles. However, translating the theory into practice ran into too many bottlenecks; very few of these kernels have been successful in the marketplace. The microkernel is a small OS that provides only the very basic services (scheduling, interrupt handling, message passing); the rest of the kernel (file system, device drivers, networking stack) runs as applications. In terms of MMU usage, the real-time executives form one extreme, with no use of the MMU, whereas the microkernels are at the other end, providing kernel subsystems with individual address spaces. The key to the microkernel is to come up with well-defined APIs for communication with the OS as well as robust message-passing schemes.

8 MICROKERNEL ARCHITECTURE
The figure shows a microkernel architecture where kernel subsystems such as the network stack and file systems have private address spaces, just like applications. Microkernels require robust message-passing schemes: only if the message passing is done properly are real-time behavior and modularity ensured. Microkernels have been vigorously debated, especially against monolithic kernels. One widely known debate was between the creator of Linux, Linus Torvalds, and Andrew Tanenbaum, the creator of the Minix OS (a microkernel). The debate may not be of much interest to the reader who wants to get right down into embedded Linux.

9 Linux Kernel Architecture
Although the Linux kernel has seen major releases, the basic architecture of the Linux kernel has remained more or less unchanged. The Linux kernel can be split into the following subsystems:
- The hardware abstraction layer
- Memory manager
- Scheduler
- File system
- IO subsystem
- Networking subsystem
- IPC

10 Hardware Abstraction Layer (HAL)
The hardware abstraction layer (HAL) virtualizes the platform hardware so that the different drivers can be ported easily to any hardware. The HAL is equivalent to the BSP provided on most RTOSs, except that the BSP on commercial RTOSs normally has standard APIs that allow easy porting. However, on recent kernel versions the idea of coming up with standard APIs for hooking in board-specific software is catching on. Two prominent architectures, ARM and PowerPC, have a well-described notion of data structures and APIs that make porting to a new board easier. The following are some embedded processors (other than x86) supported in the Linux 2.6 kernel:
- MIPS
- PowerPC
- ARM
- M68K
- CRIS
- V850
- SuperH

11 HARDWARE ABSTRACTION LAYER
The HAL has support for the following hardware components:
- Processor, cache, and MMU
- Setting up the memory map
- Exception and interrupt handling support
- DMA
- Timers
- System console
- Bus management
- Power management

12 Memory Manager
The memory manager on Linux is responsible for controlling access to the hardware memory resources. It provides dynamic memory to kernel subsystems such as drivers, file systems, and the networking stack, and it implements the software necessary to provide virtual memory to user applications. Each process in the Linux system operates in its own separate address space, called the virtual address space. By using virtual addresses, a process can corrupt neither another process's nor the operating system's memory. Any pointer corruption within a process is localized to that process without bringing down the system; this is very important for system reliability.

13 Memory Manager
The Linux kernel divides the total available memory into pages. The typical size of a page is 4 KB. Though all the pages are accessible by the kernel, only some of them are used by the kernel; the rest are used by applications. Note that the pages used by the kernel are not part of the paging process; only the application pages get pulled into main memory on demand. This simplifies the kernel design. When an application needs to execute, the entire application need not be loaded into memory; only the pages in use flip between memory and storage.

14 Memory Manager
The presence of separate user and kernel memory is the most radical change a developer can expect when moving from a proprietary RTOS. On an RTOS, all the applications form a part of the same image containing the OS; thus when this image is loaded, the applications get copied to memory too. On Linux, however, the OS and applications are compiled and built separately; each application needs its own storage instance, often referred to as the program.

15 Scheduler
The Linux scheduler provides the multitasking capabilities; it has been evolving over the kernel releases with the aim of providing a deterministic scheduling policy. The scheduler understands the following execution instances:
- Kernel thread: These are processes that do not have a user context. They execute in the kernel space for as long as they live.
- User process: Each user process has its own address space, thanks to virtual memory. Processes enter kernel mode when an interrupt or exception occurs or a system call is executed. Note that when a process enters kernel mode, it uses a totally different stack, referred to as the kernel stack; each process has its own kernel stack.
- User thread: Threads are separate execution entities that are mapped to a single user process. The user space threads share common text, data, and heap spaces, but have separate stacks. Other resources such as open files and signal handlers are also shared across the threads.

16 As Linux started becoming popular, the demand for supporting real-time applications increased. As a result, the Linux scheduler saw constant improvements to make its scheduling policy more deterministic. The following are some of the important milestones in the Linux kernel evolution with respect to real-time features. Early kernels supported round-robin and FIFO-based scheduling along with the classic time-sharing scheduler of Linux. They also had the facility to disable paging for selected regions of an application's memory; this is referred to as memory locking (useful because demand paging makes the system nondeterministic). The 2.0 kernel provided a new function, nanosleep(), that allowed a process to sleep or delay for a very short time. Prior to this, the minimum time was around 10 msec; with nanosleep() a process can sleep from a few microseconds to milliseconds.

17 The 2.2 kernel had support for POSIX real-time signals. The 2.4 kernel series saw lots of improvements with respect to real-time scheduling. Most important were the MontaVista patch for kernel preemption and Andrew Morton's low-latency patch; these were ultimately pulled into the 2.6 kernel. The 2.6 kernel has a totally new scheduler, referred to as the O(1) scheduler, that brings determinism into the scheduling policy. More real-time features, such as POSIX timers, were also added to the 2.6 kernel.

18 File System
On Linux, the various file systems are managed by a layer called the VFS, or the Virtual File System. The virtual file system provides a consistent view of the data stored on the various devices on the system. It does this by presenting the user view of file systems via standard system calls while allowing the kernel developer to implement logical file systems on any physical device. Thus it abstracts the details of the physical device and the logical file system and allows users to access files in a consistent way. Any Linux device, whether it's an embedded system or a server, needs at least one file system. This is unlike the real-time executives, which need not have any file system at all. The Linux necessity for file systems stems from two facts.

19
- The applications have separate program images, and hence they need storage space in a file system.
- All low-level devices too are accessed as files.
It is necessary for every Linux system to have a master file system, the root file system. This gets mounted at system start-up; later, many more file systems can be mounted using this file system. If the system cannot mount the root file system from the specified device, it will panic and not proceed with system start-up. Along with disk-based file systems, Linux supports specialized file systems that are flash- and ROM-based for embedded systems.

20
Also there is support for NFS on Linux, which allows a file system on a host to be mounted on the embedded system. Linux supports memory-based file systems, which are again useful on embedded systems. There is also support for logical or pseudo file systems; these can be used for getting system information as well as for debugging. The following are some of the commonly used embedded file systems:
- EXT2: A classical Linux file system that has a broad user base
- CRAMFS: A compressed read-only file system
- ROMFS: A read-only file system
- RAMFS: A read-write, memory-based file system
- JFFS2: A journaling file system built specifically for storage on flash
- PROCFS: A pseudo file system used for getting system information
- DEVFS: A pseudo file system for maintaining the device files

21 IO Subsystem
The IO subsystem on Linux provides a simple and uniform interface to onboard devices. Three kinds of devices are supported by the IO subsystem:
- Character devices, for supporting sequential devices
- Block devices, for supporting randomly accessible devices. Block devices are essential for implementing file systems.
- Network devices, which support a variety of link layer devices

22 Networking Subsystems
One of the major strengths of Linux has been its robust support for various networking protocols. The table lists the major features along with the kernel versions in which they are supported.

23 IPC
The interprocess communication mechanisms on Linux include signals (for asynchronous communication), pipes, and sockets, as well as the System V IPC mechanisms such as shared memory, message queues, and semaphores. The 2.6 kernel additionally supports POSIX-style message queues.

24 User Space
The user space on Linux is based on the following concepts:
- Program: This is the image of an application. It resides on a file system. When an application needs to be run, the image is loaded into memory and run. Note that because of virtual memory, the entire process image is not loaded into memory; only the required memory pages are loaded.
- Virtual memory: This allows each process to have its own address space. Virtual memory enables advanced features such as shared libraries. Each process has its own memory map in the virtual address space; this is unique to the process and is totally independent of the kernel memory map.
- System calls: These are entry points into the kernel so that the kernel can execute services on behalf of the application.

25
Let's take a small example in order to understand how an application runs on Linux. Assume the following piece of code needs to run as an application on a MIPS-based target.

#include <stdio.h>
#include <unistd.h>

char str[] = "hello world";

void myfunc()
{
    printf(str);
}

int main()
{
    myfunc();
    sleep(10);
    return 0;
}

26
The steps involved are:
1. Compiling and making an executable program: On an embedded system, the programs are not built on the target but require a host system with cross-development tools. More about this is discussed in Section 2.5; for now assume that you have the host and the tools to build the application, which we name hello_world.
2. Getting the executable program onto a file system on the target board: Chapter 8 discusses the process of building a root file system and downloading applications onto the target. Hence assume that this step is readily available to you; by some magic you are able to download hello_world onto /bin of your root file system.
3. Running the program by executing it on the shell: A shell is a command language interpreter; it can be used to execute files. Without going into the details of how the shell works, assume that when you type the command /bin/hello_world, your program runs and you see the string on your console (which is normally the serial port).

27
For a MIPS-based target the following commands are used to generate the executable.

# mips_fp_le-gcc hello_world.c -o hello_world
# ls -l hello_world
-rwxrwxr-x 1 raghav raghav Jul 20 13:02 hello_world

Four steps are involved: generating preprocessed output, then generating assembly language output, then generating object output, and finally linking. The output file hello_world is a MIPS executable in a format called ELF (Executable and Linkable Format). Executable files come in two forms: binary formats and script files. The executable binary formats that are most popular on embedded systems are COFF, ELF, and the flat format. The flat format is used on MMU-less uClinux systems and is discussed in Chapter 10. COFF was the earlier default format and was replaced by the more powerful and flexible ELF format. The ELF format

28
consists of a header followed by many sections, including the text and the data. You can use the nm command to find the list of symbols in an executable, as shown in Listing 2.1. As you can see, the functions main and myfunc as well as the global data str have been assigned addresses, but the printf function is undefined (indicated by the "U"). This means that printf is not a part of the hello_world image. Then where is this function defined, and how is its address resolved? This function is part of a library, libc (the C library). Libc contains a set of commonly used functions. For example, the printf function is used in almost all applications; thus, instead of having it reside in every application image, the library becomes a common placeholder for it. If the library is used as a shared library, then not only does it optimize storage space, it optimizes memory too by making sure that only one copy of the text resides in memory. An application can link against multiple libraries, either shared or static; this can be specified at the time of linking. The list of dependencies can be found by using the following command (the shared library dependencies here are the runtime dynamic linker ld.so and the C library).

29
# mips_fp_le-ldd hello_world
libc.so.6
ld-linux.so.2

So in effect, at the time of creating the executable, not all relocation and symbol resolution has happened. All functions and global data variables that are not part of shared libraries have been assigned addresses and their addresses resolved, so that the caller knows their runtime addresses. However, the runtime addresses of the shared libraries are not yet known, and hence their resolution (for example, from the myfunc function that calls printf) is pending. This all happens at runtime, when the program is actually run from the shell. Note that there is an alternative to using shared libraries, and that is to statically link all the references. For example, the above code can be linked to a static C library, libc.a (which is an archive of a set of object files), as shown below.

31
As we see, along with the main program hello_world, a range of addresses is allocated to libc and the dynamic linker ld.so. The memory map of the application is created at runtime, and then the symbol resolution (in our case, of printf) is done. This happens through a series of steps. The ELF loader, which is built as a part of the kernel, scans the executable and finds out that the process has a shared library dependency; hence it calls the dynamic linker ld.so. The ld.so, which is itself implemented as a shared library, is a bootstrap library; it loads itself and the rest of the shared libraries (libc.so) into memory, thus freezing the memory map of the application, and does the rest of the symbol resolution. This leaves us with one last question: how does printf actually work? As we discussed above, any services to be done by the kernel require that an application make a system call. printf too does a system call after doing all its internal work. Because the actual implementation of system calls is very hardware-dependent, the C library hides all this by providing wrappers that invoke the actual system call. The list of all system calls made by the application can be obtained using an application called strace; for example, running strace on the application yields the following output, a part of which is shown below.

# strace hello_world
...
write(1, "hello world", 11) = 11

Now that we have a basic idea of the kernel and user space, let us proceed to the Linux system start-up procedure.

32 Linux Start-Up Sequence
Now that we have a high-level understanding of the Linux architecture, understanding the start-up sequence will show how the various kernel subsystems are started and how Linux hands control to user space. The Linux start-up sequence describes the series of steps that happen from the moment a Linux system is booted until the user is presented with a log-in prompt on the console. Why do you need to understand the start-up sequence at this stage? Understanding the start-up sequence is essential to mark milestones in the development cycle. Also, once start-up is understood, the basic pieces necessary for building a Linux system, such as the boot loader and the root file system, will be understood. On embedded systems the start-up time often has to be as small as possible; understanding the details will help the user tweak the system for a faster start-up. Please refer to Appendix A for more details on fast boot-up. The Linux start-up sequence can be split into three phases.

33 Linux Start-Up Sequence
- Boot loader phase: Typically this stage does the hardware initialization and testing, loads the kernel image, and transfers control to the Linux kernel.
- Kernel initialization phase: This stage does the platform-specific initialization, brings up the kernel subsystems, turns on multitasking, mounts the root file system, and jumps to user space.
- User-space initialization phase: Typically this phase brings up the services, does the network initialization, and then issues a log-in prompt.

34 Boot Loader Phase
Boot loaders are discussed in detail in Chapter 3. This section skims over the sequence of steps executed by the boot loader.
Hardware Initialization
This typically includes:
1. Configuring the CPU speed
2. Memory initialization, such as setting up the registers, clearing the memory, and determining the size of the onboard memory
3. Turning on the caches
4. Setting up the serial port for the boot console
5. Running the hardware diagnostics, or POST (Power-On Self-Test)

35 Downloading the Kernel Image and Initial Ram Disk
The boot loader needs to locate the kernel image, which may be on the system flash or on the network. In either case, the image needs to be loaded into memory. If the image is compressed (which is often the case), it needs to be decompressed. Also, if an initial ram disk is present, the boot loader needs to load its image into memory. Note that the memory address to which the kernel image is downloaded is decided by the boot loader by reading the ELF header of the kernel image. If the kernel image is a raw binary dump, additional information needs to be passed to the boot loader regarding the placement of the kernel sections and the starting address.

36 Setting Up Arguments
Argument passing is a very powerful option supported by the Linux kernel. Linux provides a generic way to pass arguments to the kernel across all platforms; Chapter 3 explains this in detail. Typically the boot loader has to set up a memory area for argument passing, initialize it with the required data structures (which can be identified by the Linux kernel), and then fill them with the required values.

37 Jumping to the Kernel Entry Point
The kernel entry point is decided by the linker script when building the kernel (the linker script is typically present in the architecture-specific directory). Once the boot loader jumps to the kernel entry point, its job is done and it is no longer needed. (There are exceptions to this; some platforms offer a boot PROM service that can be used by the OS for doing platform-specific operations.) If the boot loader is no longer needed and it executes from memory, that memory can be reclaimed by the kernel. This should be taken into account when deciding the memory map for the system.

39 CPU/Platform-Specific Initialization
If you are porting Linux to your platform, this section is very important as it marks the important milestones in BSP porting. The platform-specific initialization consists of the following steps.
1. Setting up the environment for the first C routine: The kernel entry point is an assembly language routine; the name of this entry point varies (stext on ARM, kernel_entry on MIPS, etc.). Look at the linker script to find the entry point for your platform. This function normally resides in the arch/<name>/kernel/head.S file. It does the following.
a. On machines that do not have the MMU turned on, it turns on the MMU. Most boot loaders do not work with the MMU enabled, so the virtual address equals the physical address; the kernel, however, is compiled with virtual addresses. This stub needs to turn on the MMU so that the kernel can start using virtual addresses normally. This is not required on platforms such as MIPS where the MMU is turned on at power-on.
b. Do cache initialization. This is again platform-dependent.
c. Set up the BSS by zeroing it out (normally you cannot rely on the boot loader to do this).
d. Set up the stack so that the first C routine can be invoked. The first C routine is the start_kernel() function in init/main.c. This function is a jumbo function that does a lot of things and finally terminates in the idle task (the first task in the system, having a process id of 0). It invokes the rest of the platform initialization functions, which are discussed below.

40
2. The setup_arch() function: This function does the platform- and CPU-specific initialization so that the rest of the initialization can be invoked safely. Again this is highly platform-specific; only the common functionalities are explained.
a. Recognizing the processor: Because a CPU architecture can come in various flavors, this function recognizes the processor (for example, if you have selected the ARM architecture, this finds out the ARM flavor) using hardware or information that may be passed at the time of building. Any processor-specific fixups can be done in this code.
b. Recognizing the board: Because the kernel supports a variety of boards, this step recognizes the board and does the board-specific fixups.
c. Analyzing the command-line parameters passed to the kernel.
d. Identifying the ram disk if it has been set up by the boot loader, so that the kernel can later mount it as the root file system. Normally the boot loader passes the starting address of the ram disk area in memory and its size.
e. Calling the bootmem functions: Bootmem is a misnomer; it refers to the initial memory that the kernel can reserve for various purposes before the paging code grabs all the memory. For example, you can reserve a portion of contiguous memory for use by your device for DMA by calling the bootmem allocator.
f. Calling the paging initialization function, which takes the rest of the memory for setting up pages for the system.

42
3. Initialization of exceptions — the trap_init() function: This function sets up the kernel-specified exception handlers. Prior to this, if an exception happens, the outcome is platform-specific. (For example, on some platforms the boot loader-specified exception handlers get invoked.)
4. Initialization of interrupt handling — the init_IRQ() function: This function initializes the interrupt controller and the interrupt descriptors (data structures that are used by the BSP to route interrupts; more on this in the next chapter). Note that interrupts are not enabled at this point; it is the responsibility of the individual drivers owning the interrupt lines to enable them during their initialization, which happens later. (For example, the timer initialization makes sure that the timer interrupt line is enabled.)

43
5. Initialization of timers — the time_init() function: This function initializes the timer tick hardware so that the system starts producing the periodic tick, which is the system heartbeat.
6. Initialization of the console — the console_init() function: This function initializes the serial device as a console. Once the console is up, all the start-up messages appear on the screen. To print a message from the kernel, the printk() function has to be used. (printk() is a very powerful function, as it can be called from anywhere, even from interrupt handlers.)

44
7. Calculating the delay loops for the platform — the calibrate_delay() function: This function is used to implement microsecond delays within the kernel using the udelay() function. The udelay() function spins for the number of microseconds specified as its argument. For udelay() to work, the kernel needs to know the number of clock cycles per microsecond; calibrating that number of delay loops is exactly what this function does. This makes sure that the delay loops work uniformly across all platforms. Note that this calibration depends on the timer interrupt.

45 Subsystem Initialization
This includes:
- Scheduler initialization
- Memory manager initialization
- VFS initialization
Note that most of the subsystem initialization is done in the start_kernel() function. At the end of this function, the kernel creates another process, the init process, to do the rest of the initialization (driver initialization, initcalls, mounting the root file system, and jumping to user space), and the current process becomes the idle process with a process id of 0.

46 Driver Initialization
The driver initialization is done after the process and memory management are up. It gets done in the context of the init process.
Mounting the Root File System
Recall that the root file system is the master file system, from which other file systems can be mounted. Its mounting marks an important milestone in the booting stage, as the kernel can then start its transition to user space. The block device holding the root file system can be hard-coded into the kernel (while building the kernel) or passed as a command line argument from the boot loader using the "root=" tag.

47
There are three kinds of root file systems that are normally used on embedded systems:
- The initial ram disk
- A network-based file system using NFS
- A flash-based file system

48
Note that the NFS-based root file system is mainly used for debugging builds; the other two are used for production builds. The ram disk simulates a block device using the system memory; hence it can be used to mount file systems provided a file system image is copied onto it. The ram disk can be used as a root file system; this usage of the ram disk is known as initrd (short for initial ram disk). Initrd is a very powerful concept and has wide uses, especially in the initial parts of embedded Linux development when you do not have a flash driver ready but your applications are ready for testing (often this is the case when you have a driver team and a separate application team working in parallel). So how do you proceed without a flash-based root file system? You can use a network-based file system provided your network driver is ready; if not, the best alternative is the initrd. Creating an initial ram disk is explained in more detail in Chapter 8. This section explains how the kernel

49
mounts an initrd as the root file system. If you want the kernel to load an initrd, you should configure the kernel during the build process with the CONFIG_BLK_DEV_INITRD option. As previously explained, the initrd image is loaded along with the kernel image, and the kernel needs to be passed the starting address and ending address of the initrd using command line arguments. Once these are known, the kernel will mount the root file system loaded on the initrd. The file systems normally used are the romfs and ext2 file systems. There is more magic to initrd: initrd is a use-and-throw root file system, and it can be used to mount another root file system. Why is this necessary?

50
Assume that your root file system is mounted on a storage device whose driver is a kernel module. The module needs to be present on a file system. This presents a chicken-and-egg problem: the module needs to be on a file system, which in turn requires that the module be loaded first. To circumvent this, the initrd can be used. The driver can be made available as a module in the initrd; once the initrd is mounted, the driver module can be loaded and hence the storage device can be accessed. Then the file system on that storage device can be mounted as the actual root file system, and finally the initrd can be discarded. The Linux kernel provides a way for this use-and-throw facility: it detects a file linuxrc in the root of the initrd and executes it. If this binary returns, the kernel assumes that the initrd is no longer necessary and switches to the actual root file system (the file linuxrc can be used to load the driver modules). NFS and flash-based file systems are explained in more detail in Chapter 4. If the root file system cannot be mounted, the kernel will stall execution and enter panic mode after logging the complaint on the console:

Unable to mount root fs on device

51 Doing Initcall and Freeing Initial MemoryIf you open the linker script for any architecture, it will have an init section. The start of this section is marked using __init_begin and the end is marked using __init_end. The idea of this section is that it contains text and data that can be thrown away after they are used once during the system start-up. Driver initialization functions are an example of the use-and-throw function. Once a driver that is statically linked to the kernel does its registration and initialization, that function will not be invoked again and hence it can be thrown away. The idea behind putting all such functions together is that the entire memory occupied by all such functions can be freed as a big chunk and hence will be available for the memory manager as free pages. Considering that memory is a scarce resource on the embedded systems, the reader is advised to use this concept effectively. A use-and-throw function or variable is declared using the __init directive. Once all the driver and subsystem initialization is done, the start-up code frees all the memory. This is done just before moving to user space.Linux also provides a way of grouping functions that should be called at system start-up time. This can be done by declaring the function with the __initcall directive. These functions are automatically called during kernel start-up, so you need not insert them into system start-up code.

52 Moving to User Space

The kernel, executing in the context of the init process, jumps to user space by overlaying itself (using execve) with the executable image of a special program also referred to as init. This executable normally resides in the root file system in the directory /sbin. Note that the user can specify the init program using a command-line argument to the kernel. However, if the kernel is unable to load either the user-specified init program or the default one, it enters the panic state after logging the complaint:

No init found. Try passing init= option to the kernel.

53 User Space Initialization

User space initialization is distribution dependent. The responsibility of the kernel ends with the transition to the init process; what the init process does and how it starts the services depend on the distribution. We now study the generic model on Linux (which assumes that the init process is /sbin/init); this model is quite similar to the initialization sequence of a UNIX variant, System V UNIX.

54 The /sbin/init Process and /etc/inittab

The init process is a very special process to the kernel; it has the following capabilities.

It can never be killed. Linux offers a signal called SIGKILL that can terminate execution of any process, but it cannot kill the init process.

When a process starts another process, the latter becomes the child of the former. This parent-child relationship is important: in case the parent dies before the child, init adopts the orphaned processes.

The kernel informs init of special events using signals. For example, pressing Ctrl-Alt-Del on the system keyboard makes the kernel send a signal to the init process, which typically does a system shutdown.

55 The init process can be configured on any system using the inittab file, which typically resides in the /etc directory. init reads the inittab file and performs the actions listed there in a sequential manner. init also decides the system state known as the run level. A run level is a number that is passed as an argument to init; in case none is passed, the default run level is picked up by init from the inittab file. The following run levels are used.

0 – Halt the system
1 – Single-user mode (used for administrative purposes)
2 – Multi-user mode with restricted networking capabilities
3 – Full multi-user mode
4 – Unused
5 – Graphics mode (X11™)
6 – Reboot state

56 The inittab file has a special format. It generally has the following details. (Please refer to the man page of inittab on your system for more information.)

The default run level.

The actions to be taken when init is moved to a run level. Typically a script /etc/rc.d/rc is invoked with the run level as the argument.

The process that needs to be executed during system start-up. This is typically the file /etc/rc.d/rc.sysinit.

init can respawn a process if it is so configured in the inittab file. This feature is used for respawning the log-in process after a user has logged out from his previous log-in.

Actions to trap special events such as Ctrl-Alt-Del or power failure.
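An illustrative inittab covering the details above might look like the following. Each entry has the form id:runlevels:action:process; the entries shown here are typical examples, not taken from any particular distribution:

```
# Default run level
id:3:initdefault:

# System initialization script, run once at boot
si::sysinit:/etc/rc.d/rc.sysinit

# Start the services for run level 3
l3:3:wait:/etc/rc.d/rc 3

# Respawn the log-in process on tty1 after logout
1:2345:respawn:/sbin/mingetty tty1

# Trap Ctrl-Alt-Del
ca::ctrlaltdel:/sbin/shutdown -t3 -r now
```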

57 The rc.sysinit File

This file does the system initialization before the services are started. Typically this file does the following on an embedded system.

Mount special file systems such as proc and ramfs.
Create directories and links if necessary.
Set the hostname for the system.
Set up the networking configuration on the system.

Starting Services

As mentioned above, the script /etc/rc.d/rc is responsible for starting the services. A service is defined as a facility to control a system process; using services, a process can be stopped, restarted, and its status queried. The services are normally organized into directories based on the run levels; depending on what run level is chosen, the services are stopped or started. After performing the above steps, init starts a log-in program on a TTY or runs a window manager on the graphics display (depending on the run level).

58 GNU Cross-Platform Toolchain

One of the initial steps in the embedded Linux movement is setting up the toolchains for building the kernel and the applications. The toolchain that is used on embedded systems is known as a cross-platform toolchain. What exactly does cross-platform mean? Normally an x86 compiler is used to generate code for the x86 platform. However, this may not be the case in embedded systems: the target on which the application and kernel need to run may not have enough memory and disk space to house the build tools, and in most cases the target does not have a native compiler. In such cases cross-compilation is the solution. Cross-compilation generally happens on the desktop (usually an x86-based one) by using a compiler that runs on Linux/x86 (the HOST) and generates code that is executable on the embedded (TARGET) platform. This process of compiling on a HOST to generate code for the TARGET system is called cross-compilation, and the compiler used for the purpose is called a cross-compiler.

59 Any compiler requires a lot of support libraries (such as libc) and binaries (such as assemblers and linkers). A similar set of tools is required for cross-compilation too. This whole set of tools, binaries, and libraries is collectively called the cross-platform toolchain. The most reliable open source compiler toolkit available across various platforms is the GNU compiler; together with its accessory tools it is called the GNU toolchain. These compilers are backed by a host of developers across the Internet and tested by millions of people across the globe on various platforms. A cross-platform toolchain has the components listed below.

Binutils: A set of programs necessary for compiling, linking, assembling, and other debugging operations.

GNU C compiler: The basic C compiler used for generating object code (for both the kernel and applications).

GNU C library: This library implements the system call APIs such as open, read, and so on, as well as other support functions. All applications that are developed need to be linked against this base library.

60 Apart from GCC and Glibc, binutils are also an important part of a toolchain. Some of the utilities that constitute binutils are the following.

addr2line: Translates program addresses into file names and line numbers. Given an address and an executable, it uses the debugging information in the executable to figure out which file name and line number are associated with that address.

ar: The GNU ar program creates, modifies, and extracts from archives. An archive is a single file holding a collection of other files in a structure that makes it possible to retrieve the original individual files (called members of the archive).

as: GNU as is a family of assemblers. If you use (or have used) the GNU assembler on one architecture, you should find a fairly similar environment when you use it on another architecture. Each version has much in common with the others, including object file formats, most assembler directives (often called pseudo-ops), and assembler syntax.

c++filt: The C++ compiler encodes (mangles) overloaded function names into unique low-level names so that the linker can keep them from clashing; the c++filt program does the inverse mapping, decoding the low-level names back into user-level names.

61 gasp: The GNU assembler macro preprocessor.

ld: The GNU linker ld combines a number of object and archive files, relocates their data, and ties up symbol references. Often the last step in building a new compiled program is a call to ld.

nm: GNU nm lists the symbols from object files.

objcopy: The GNU objcopy utility copies the contents of an object file to another. objcopy uses the GNU BFD library to read and write the object files, and it can write the destination object file in a format different from that of the source object file. The exact behavior of objcopy is controlled by command-line options.

objdump: The GNU objdump utility displays information about one or more object files. The options control what particular information to display, such as the symbol table, the GOT, and the like.

62 ranlib: ranlib generates an index to the contents of an archive and stores it in the archive. The index lists each symbol defined by a member of the archive that is a relocatable object file.

readelf: Interprets headers of ELF files.

size: The GNU size utility lists the section sizes and the total size for each of the object files in its argument list. By default, one line of output is generated for each object file or each module in an archive.

strings: GNU strings prints the printable character sequences that are at least 4 characters long (by default) and are followed by an unprintable character. By default, it only prints the strings from the initialized and loaded sections of object files; for other types of files, it prints the strings from the whole file.

strip: GNU strip discards all symbols from the target object file(s). The list of object files may include archives; at least one object file must be given. strip modifies the files named in its argument rather than writing modified copies under different names.