Process control block

Process Control Block (PCB, also called Task Controlling Block,[1] Entry of the Process Table,[2] Task Struct, or Switchframe) is a data structure in the operating system kernel containing the information needed to manage the scheduling of a particular process. The PCB is "the manifestation of a process in an operating system."[3]

PCBs play a central role in process management: they are accessed and/or modified by most OS utilities, including those involved with scheduling, memory and I/O resource access, and performance monitoring. The set of PCBs can be said to define the current state of the operating system. Data structuring for processes is often done in terms of PCBs; for example, pointers to other PCBs inside a PCB allow the creation of queues of processes in various scheduling states ("ready", "blocked", etc.).

In modern multitasking systems, the PCB stores many different items of data, all needed for correct and efficient process management.[1] Though the details of these structures are system-dependent, some common parts can be identified and classified into three main categories (a minimal structure sketch follows the list below):

Process identification data

Process state data

Process control data
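
To make the three categories concrete, here is a minimal C sketch of a PCB. The field names, types, and groupings are purely illustrative assumptions, not the layout used by any particular kernel.

/* Illustrative PCB layout grouping the three categories of data.
 * All names and field choices are hypothetical, not from a specific OS. */
#include <stdint.h>

enum proc_state { PROC_READY, PROC_RUNNING, PROC_BLOCKED, PROC_SUSPENDED };

struct cpu_context {                 /* process state data: saved on a context switch */
    uint64_t gpr[16];                /* general-purpose registers */
    uint64_t pc;                     /* program counter */
    uint64_t psw;                    /* processor status word / flags */
    uint64_t sp, fp;                 /* stack and frame pointers */
};

struct pcb {
    /* process identification data */
    int pid;                         /* unique process identifier */
    int ppid;                        /* parent process identifier */
    int uid, gid;                    /* owning user and group */

    /* process state data */
    struct cpu_context ctx;

    /* process control data */
    enum proc_state state;           /* scheduling state */
    int priority;                    /* scheduling priority */
    uint64_t cpu_time_used;          /* accounting: time spent on the CPU */
    int waiting_event;               /* event identifier if blocked/suspended */
    struct pcb *next;                /* link used to chain PCBs into scheduler queues */
};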

The approach commonly followed to represent this information is to create and update status tables for each relevant entity, like memory, I/O devices, files and processes.

Memory tables, for example, may contain information about the allocation of main and secondary (virtual) memory for each process, authorization attributes for accessing memory areas shared among different processes, etc. I/O tables may have entries stating the availability of a device or its assignment to a process, the status of I/O operations being executed, the location of memory buffers used for them, etc.

File tables provide information about the location and status of files. Finally, process tables store the data the OS needs to manage processes. At least part of the process control data structure is always maintained in main memory, though its exact location and configuration varies with the OS and the memory management technique it uses.

Process identification data always include a unique identifier for the process (almost invariably an integer number) and, in a multiuser-multitasking system, data like the identifier of the parent process, the user identifier, the user group identifier, etc. The process ID is particularly relevant, since it is often used to cross-reference the OS tables defined above, e.g. to identify which process is using which I/O devices or memory areas.
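
On POSIX systems, a process can read its own identification data through standard calls; the values reported come from the kernel's per-process records. A small example:

/* Printing the identification data a POSIX system keeps for the calling process. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    printf("pid  = %ld\n", (long)getpid());    /* unique process identifier */
    printf("ppid = %ld\n", (long)getppid());   /* parent process identifier */
    printf("uid  = %ld\n", (long)getuid());    /* owning user */
    printf("gid  = %ld\n", (long)getgid());    /* owning group */
    return 0;
}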

Process state data are those pieces of information that define the status of a process when it is suspended, allowing the OS to restart it later and still execute correctly. This always includes the content of the CPU general-purpose registers, the CPU process status word, the stack and frame pointers, etc.
During a context switch, the running process is stopped and another process is given a chance to run. The kernel must stop the execution of the running process, copy out the values in the hardware registers to its PCB, and update the hardware registers with the values from the PCB of the new process.
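
As a rough illustration of that save/restore step, here is a conceptual sketch, not any real kernel's switch code. It reuses the hypothetical struct pcb from the earlier sketch; save_registers() and load_registers() stand in for the architecture-specific assembly that actually copies hardware registers.

/* Conceptual context switch: save the outgoing process's registers into its
 * PCB and load the incoming process's registers from its PCB. */
void save_registers(struct cpu_context *ctx);        /* stand-in for arch-specific assembly */
void load_registers(const struct cpu_context *ctx);  /* stand-in for arch-specific assembly */

void context_switch(struct pcb *prev, struct pcb *next)
{
    save_registers(&prev->ctx);      /* copy hardware registers out to prev's PCB */
    prev->state = PROC_READY;        /* prev is runnable again, just not running */

    next->state = PROC_RUNNING;
    load_registers(&next->ctx);      /* reload hardware registers from next's PCB;
                                        execution resumes in next's saved context */
}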

Process control information is used by the OS to manage the process itself. This includes:

The process scheduling state: the state of the process ("ready", "suspended", etc.) and other scheduling information, such as its priority value and the amount of time elapsed since the process gained control of the CPU or since it was suspended. For a suspended process, identification data for the event it is waiting for must also be recorded (a queue-manipulation sketch follows this list).

Process structuring information: the IDs of the process's children, or the IDs of other processes related to the current one in some functional way, which may be represented as a queue, a ring, or another data structure.

Interprocess communication information: various flags, signals and messages associated with the communication among independent processes may be stored in the PCB.

Process privileges: allowed/disallowed access to system resources.
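
Tying the scheduling state above to the PCB queues mentioned earlier, here is a minimal sketch of how a scheduler might move PCBs through a FIFO ready queue. It reuses the hypothetical struct pcb fields from the earlier sketch; real schedulers use more elaborate structures such as per-priority queues or trees.

/* Illustrative FIFO ready queue built from the PCB's `next` link. */
struct pcb_queue {
    struct pcb *head, *tail;
};

/* Mark a process ready and append its PCB to the ready queue. */
void make_ready(struct pcb_queue *ready, struct pcb *p)
{
    p->state = PROC_READY;
    p->next = NULL;
    if (ready->tail)
        ready->tail->next = p;
    else
        ready->head = p;
    ready->tail = p;
}

/* Pick the next process to run (FIFO order here). */
struct pcb *pick_next(struct pcb_queue *ready)
{
    struct pcb *p = ready->head;
    if (p) {
        ready->head = p->next;
        if (!ready->head)
            ready->tail = NULL;
    }
    return p;
}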

Since the PCB contains critical information for the process, it must be kept in an area of memory protected from normal user access. In some operating systems the PCB is placed at the beginning of the kernel stack of the process, since that is a convenient protected location.[4]
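
One way such a placement can be exploited is sketched below, in the style of older Linux kernels (pre-4.9 x86), where a small per-process structure at the base of the kernel stack was located by masking the stack pointer. THREAD_SIZE, struct thread_info, and the function shown are illustrative assumptions, not any specific kernel's definitions.

/* Hypothetical sketch: finding the per-process structure stored at the base
 * of the kernel stack by rounding a stack address down to the stack base. */
#include <stdint.h>

#define THREAD_SIZE (2 * 4096)          /* assumed kernel stack size: two 4 KiB pages */

struct pcb;                              /* the process control block type (opaque here) */

struct thread_info {
    struct pcb *task;                    /* back-pointer to the owning process's PCB */
    /* architecture-specific flags would follow */
};

static inline struct thread_info *current_thread_info(uintptr_t stack_ptr)
{
    /* Any address on the current kernel stack, masked down to the stack base,
     * yields the thread_info (and thus the PCB pointer) stored there. */
    return (struct thread_info *)(stack_ptr & ~((uintptr_t)THREAD_SIZE - 1));
}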

1. Central processing unit

The computer industry has used the term central processing unit at least since the early 1960s. The form, design, and implementation of CPUs have changed over the course of their history; most modern CPUs are microprocessors, meaning they are contained on a single integrated circuit (IC) chip. An IC that contains a CPU may also contain memory, peripheral interfaces, and other components of a computer. Some computers employ a multi-core processor, which is a single chip containing two or more CPUs called cores; in that context, one can speak of such single chips as sockets. Array processors or vector processors have multiple processors that operate in parallel, and there also exists the concept of virtual CPUs, which are an abstraction of dynamically aggregated computational resources. Early computers such as the ENIAC had to be rewired to perform different tasks. Since the term CPU is generally defined as a device for software execution, the earliest devices that could rightly be called CPUs came with the advent of the stored-program computer. The idea of a stored-program computer was already present in the design of J. Presper Eckert and John William Mauchly's ENIAC, but was initially omitted so that the machine could be finished sooner. On June 30, 1945, before ENIAC was made, mathematician John von Neumann distributed the paper entitled First Draft of a Report on the EDVAC; it outlined a stored-program computer that would eventually be completed in August 1949. EDVAC was designed to perform a number of instructions of various types. Significantly, the programs written for EDVAC were to be stored in high-speed computer memory rather than specified by the wiring of the computer. This overcame a severe limitation of ENIAC: the considerable time and effort required to reconfigure the machine for a new task. With von Neumann's design, the program that EDVAC ran could be changed simply by changing the contents of the memory. Early CPUs were custom designs used as part of a larger and sometimes distinctive computer; however, this method of designing custom CPUs for a particular application has largely given way to the development of multi-purpose processors produced in large quantities. This standardization began in the era of discrete transistor mainframes and minicomputers and has accelerated with the popularization of the integrated circuit. The IC has allowed increasingly complex CPUs to be designed and manufactured to tolerances on the order of nanometers, and both the miniaturization and standardization of CPUs have increased the presence of digital devices in modern life far beyond the limited application of dedicated computing machines. Modern microprocessors appear in electronic devices ranging from automobiles to cellphones. The so-called Harvard architecture of the Harvard Mark I, which was completed before EDVAC, also utilized a stored-program design, using punched paper tape rather than electronic memory. Relays and vacuum tubes were used as switching elements; a useful computer requires thousands or tens of thousands of switching devices. The overall speed of a system is dependent on the speed of the switches. Tube computers like EDVAC tended to average eight hours between failures, whereas relay computers like the Harvard Mark I failed very rarely. In the end, tube-based CPUs became dominant because the significant speed advantages they afforded generally outweighed the reliability problems. Most of these early synchronous CPUs ran at low clock rates compared to modern microelectronic designs; clock signal frequencies ranging from 100 kHz to 4 MHz were very common at this time. The design complexity of CPUs increased as various technologies facilitated building smaller and more reliable electronic devices.

2. Booting

In computing, booting is the initialization of a computerized system. The system can be a computer or a computer appliance. The booting process can be hard, e.g. after electrical power to the CPU is switched from off to on, or soft, when power-on self-tests can be avoided. On some systems a soft boot may optionally clear RAM to zero. Both hard and soft booting can be initiated by hardware, such as a button press, or by software command. Booting is complete when the normal, operative, runtime environment is attained. A boot loader is loaded into memory from persistent storage, such as a hard disk drive or, in some older computers, from a medium such as punched cards or punched tape; within the hard reboot process, it runs after completion of the self-tests, then loads and runs the software. The boot loader then loads and executes the processes that finalize the boot. The process of hibernating or sleeping does not involve booting. Minimal embedded systems do not require a noticeable boot sequence to begin functioning. All computing systems are state machines, and a reboot may be the method to return to a designated zero-state from an unintended, locked state. In addition to loading an operating system or stand-alone utility, the boot process can also load a storage dump program for diagnosing problems in an operating system. Boot is short for bootstrap or bootstrap load and derives from the phrase to pull oneself up by one's bootstraps: since most software is loaded by other software already running on the computer, some mechanism is needed to load the initial software. Early computers used a variety of ad-hoc methods to get a small program into memory to solve this problem. The invention of read-only memory (ROM) of various types solved this paradox by allowing computers to be shipped with a start-up program that could not be erased, and growth in the capacity of ROM has allowed ever more elaborate start-up procedures to be implemented. There are many different methods available to load a short initial program into a computer; these methods range from simple physical input to removable media that can hold more complex programs. Early computers in the 1940s and 1950s were one-of-a-kind engineering efforts that could take weeks to program, and program loading was one of the problems that had to be solved. An early computer, ENIAC, had no program stored in memory; bootstrapping did not apply to ENIAC, whose hardware configuration was ready for solving problems as soon as power was applied. In one early design, the program was stored as a bit image on a continuously running magnetic drum; core memory was probably cleared manually via the maintenance console, and startup from when power was fully up was very fast, only a few seconds. In its general design, the DIP compared roughly with a DEC PDP-8; thus, it was not the kind of single-button-pressure bootstrap that came later, nor a read-only memory in strict terms, since the magnetic drum involved could be written to. The first programmable computers for commercial sale, such as the UNIVAC I, typically included instructions that performed a complete input or output operation. The left 18-bit half-word was then executed as an instruction, which usually read additional words into memory; the loaded boot program was then executed, which, in turn, loaded a larger program from that medium into memory without further help from the human operator.

3. Operating system

An operating system is system software that manages computer hardware and software resources and provides common services for computer programs. All computer programs, excluding firmware, require an operating system to function. Operating systems are found on many devices that contain a computer, from cellular phones to supercomputers. The dominant desktop operating system is Microsoft Windows, with a market share of around 83.3%; macOS by Apple Inc. is in second place, and the varieties of Linux are in third position. Linux distributions are dominant in the server and supercomputing sectors. Other specialized classes of operating systems, such as embedded and real-time systems, exist for many applications. A single-tasking system can run one program at a time, while a multi-tasking system allows more than one program to run concurrently. Multi-tasking may be characterized as preemptive or co-operative. In preemptive multitasking, the operating system slices the CPU time and dedicates a slot to each of the programs; Unix-like operating systems, e.g. Solaris and Linux, support preemptive multitasking. Cooperative multitasking is achieved by relying on each process to provide time to the other processes in a defined manner. 16-bit versions of Microsoft Windows used cooperative multi-tasking; 32-bit versions of both Windows NT and Win9x used preemptive multi-tasking. Single-user operating systems have no facilities to distinguish users, but may allow multiple programs to run in tandem. A distributed operating system manages a group of distinct computers and makes them appear to be a single computer. The development of networked computers that could be linked and made to communicate with each other gave rise to distributed computing; distributed computations are carried out on more than one machine. When computers in a group work in cooperation, they form a distributed system. Related techniques are used in virtualization and cloud computing management, and are common in large server warehouses. Embedded operating systems are designed to be used in embedded computer systems. They are designed to operate on small machines like PDAs with less autonomy, and they are able to operate with a limited number of resources; they are very compact and extremely efficient by design. Windows CE and Minix 3 are some examples of embedded operating systems. A real-time operating system is an operating system that guarantees to process events or data by a specific moment in time. A real-time operating system may be single- or multi-tasking, but when multitasking it uses specialized scheduling algorithms so that deterministic behavior is achieved. Early computers were built to perform a series of single tasks, like a calculator. Basic operating system features were developed in the 1950s, such as resident monitor functions that could run different programs in succession to speed up processing.

4. Usage share of operating systems

The usage share of operating systems is the percentage share of each operating system among computing devices. There are three big personal computing platforms: Android and Windows each claim over 1.4 billion users, and Apple's iOS and macOS combined have over 1 billion users. Historically, mainframes were dominant, then Macintosh desktops, and then, for a 25-year period from the early 1990s to 2016, Windows desktops. From late 2016 the mobile era took over, and desktop market share was down to 45% in January 2017. Different categories of devices use a wide variety of operating systems. Windows gained majority share on desktops in the 1990s. On smartphones, Android is dominant by any metric; its installed base is 1.8 billion. Android is the highest-ranked OS in most countries of the world, and by late 2016 it had the highest usage of any operating system; smartphones alone, where Android is dominant, account for a majority of use, which explains that result to a large degree. Android has over half the share across platforms in the two biggest continents, Africa and Asia. For brief periods, countries on other continents, such as the United States, have lost desktop-majority share. Since 2013, devices running Android have been selling in greater numbers than Windows, iOS, and macOS devices combined, making Android the most popular operating system on smartphones. Most desktop and laptop computers use Microsoft Windows, while virtually all supercomputers use Linux. In the server category there is more diversity, with Linux and Windows Server the most popular, and many fewer mainframes. Data about operating system share is difficult to obtain, since in most categories there are few primary sources or agreed methodologies for its collection. Gartner publishes device shipments by operating system; note that shipments do not mean sales to consumers, so using those numbers as a popularity guide could be misleading. For 2015, Gartner reported that worldwide PC shipments declined for the fourth consecutive year; Gartner includes Macs in its PC shipment numbers, and Macs individually had a slight increase in sales in 2015. On 28 May 2015, Google announced that there were 1.4 billion Android users and 1 billion Google Play users active in May 2015. On 27 January 2016, Paul Thurrott summarized the operating system market, granting that some of those Apple devices were probably sold into the marketplace years ago.

5. Round-robin scheduling

Round-robin is one of the algorithms employed by process and network schedulers in computing. As the term is generally used, time slices are assigned to each process in equal portions and in circular order. Round-robin scheduling is simple, easy to implement, and starvation-free; it can also be applied to other scheduling problems, such as data packet scheduling in computer networks. It is an operating system concept; the name of the algorithm comes from the round-robin principle known from other fields, where each person takes an equal share of something in turn. To schedule processes fairly, a round-robin scheduler generally employs time-sharing, giving each job a time slot or quantum (its allowance of CPU time) and interrupting the job if it is not completed by then; the job is resumed the next time a time slot is assigned to that process. If the process terminates or changes its state to waiting during its attributed time quantum, the scheduler moves on to the next process in the ready queue. In the absence of time-sharing, or if the quanta were large relative to the sizes of the jobs, a process that produced large jobs would be favoured over other processes. The round-robin algorithm is a pre-emptive algorithm, as the scheduler forces the process out of the CPU once the time quota expires. For example, with a 100 ms quantum and a job (job1) that needs 250 ms in total to complete, the scheduler suspends job1 after 100 ms and gives the other jobs their time on the CPU. Once the other jobs have had their share, job1 gets another allocation of CPU time, and this process continues until the job finishes and needs no more time on the CPU: job1 receives a first allocation of 100 ms, a second allocation of 100 ms, and a third allocation of 100 ms of which it uses only 50 ms before self-terminating, for a total of 250 ms of CPU time. In best-effort packet switching and other statistical multiplexing, round-robin scheduling can be used as an alternative to first-come first-served queuing. A multiplexer, switch, or router that provides round-robin scheduling has a queue for every data flow. The algorithm lets every active data flow that has data packets in the queue take turns in transferring packets on a shared channel in a periodically repeated order. The scheduling is work-conserving, meaning that if one flow is out of packets, the next data flow takes its place; hence, the scheduling tries to prevent link resources from going unused. Round-robin scheduling results in max-min fairness if the data packets are equally sized, since the data flow that has waited the longest is given scheduling priority. It may not be fair if the sizes of the data packets vary widely from one job to another: a user that produces large packets would be favored over other users, and in that case fair queuing would be preferable. However, if link adaptation is used, it will take a much longer time to transmit a certain amount of data to expensive users than to others, since the channel conditions differ.
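
The job1 example above can be reproduced with a tiny simulation. This sketch assumes a fixed 100 ms quantum and jobs whose total CPU demand is known up front; the job lengths are illustrative.

/* Tiny round-robin simulation: each pass gives every unfinished job up to one
 * quantum, and the clock advances only by the CPU time actually consumed. */
#include <stdio.h>

int main(void)
{
    int remaining[] = {250, 150, 100};   /* ms of CPU time each job still needs */
    const int quantum = 100;             /* time slice */
    int n = 3, done = 0, t = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] <= 0)
                continue;                 /* job already finished */
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            t += slice;
            remaining[i] -= slice;
            if (remaining[i] == 0) {
                printf("job%d finishes at t = %d ms\n", i + 1, t);
                done++;
            }
        }
    }
    return 0;
}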

6. Segmentation fault

A segmentation fault occurs when a program attempts to access a memory location that it is not allowed to access; on standard x86 computers this is a form of general protection fault. The OS kernel will, in response, usually perform some corrective action, generally passing the fault on to the offending process. Segmentation faults are a common class of error in programs written in languages like C that provide low-level memory access. They arise primarily due to errors in the use of pointers for virtual memory addressing. Newer programming languages may employ mechanisms designed to avoid segmentation faults and improve memory safety; for example, the Rust programming language employs an ownership-based model to ensure memory safety. The term segmentation has various uses in computing; in the context of a segmentation fault it refers to a program's address space: attempting to read outside of the program's address space, or to write to a read-only segment of the address space, results in a segmentation fault, hence the name. On systems using only paging, an invalid page fault generally leads to a segmentation fault. At the hardware level, the fault is initially raised by the memory management unit on illegal access, as part of its memory protection feature. If the problem is not an invalid logical address but instead an invalid physical address, a bus error is raised instead. At the operating system level, this fault is caught and a signal is passed on to the offending process. Different operating systems have different signal names to indicate that such a fault has occurred: on Unix-like operating systems, a signal called SIGSEGV is sent to the offending process; on Microsoft Windows, the offending process receives a STATUS_ACCESS_VIOLATION exception. The proximate cause is a memory access violation, while the underlying cause is generally a software bug of some sort. In C code, segmentation faults most often occur because of errors in pointer use. The default action for a segmentation fault or bus error is abnormal termination of the process that triggered it; a core file may be generated to aid debugging, and other platform-dependent actions may also be performed. For example, Linux systems using the grsecurity patch may log SIGSEGV signals in order to monitor for possible intrusion attempts using buffer overflows. Writing to read-only memory raises a segmentation fault; at the level of code errors, this occurs when the program writes to part of its own code segment or to the read-only portion of the data segment, as these are loaded by the OS into read-only memory. An example of ANSI C code that will typically cause a segmentation fault on platforms with memory protection is given below: it attempts to modify a string literal, which is undefined behavior according to the ANSI C standard, because when the program is loaded, the operating system places the literal with other strings and constant data in a read-only segment of memory.
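
A representative version of the string-literal example referred to above (the exact code from the original article is not reproduced here, so this is a reconstruction of the same idea):

/* Attempts to modify a string literal. The literal is typically placed in a
 * read-only segment, so the write raises a segmentation fault (SIGSEGV) on
 * most protected-memory platforms; strictly, the behavior is undefined. */
int main(void)
{
    char *s = "hello world";
    *s = 'H';                 /* write to read-only memory */
    return 0;
}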

7. Time-sharing

In computing, time-sharing is the sharing of a computing resource among many users by means of multiprogramming and multi-tasking at the same time. Its introduction in the 1960s and emergence as the prominent model of computing in the 1970s represented a major shift in the history of computing. The earliest computers were expensive devices, and very slow in comparison to later models. Programs might take hours, or even weeks, to run. As computers grew in speed, run times dropped, and soon the time taken to start up the next program became a concern. Batch processing methodologies evolved to decrease these dead periods by queuing up programs so that as soon as one program completed, the next would start. To support a batch processing operation, a number of comparatively inexpensive card punch or paper tape writers were used by programmers to write their programs offline. When typing was complete, the programs were submitted to the operations team; important programs were started quickly, while how long before less important programs were started was unpredictable. When the program run was completed, the output was returned to the programmer. The complete process might take days, during which time the programmer might never see the computer. The alternative of allowing the user to operate the computer directly was generally far too expensive to consider, because users might have long periods of entering code while the computer remained idle. This situation limited interactive development to those organizations that could afford to waste computing cycles: large universities, for the most part. Programmers at the universities decried the behaviors that batch processing imposed, and they experimented with new ways to interact directly with the computer, a field today known as human–computer interaction. Time-sharing was developed out of the realization that while any single user would make inefficient use of a computer, a large group of users together would not, since the pauses of one user could be filled with the activity of the others. Given an optimal group size, the overall process could be very efficient. Similarly, small slices of time spent waiting for disk, tape, or network input could be granted to other users. In a paper published in December 1958, W. F. Bauer wrote that "the computers would handle a number of problems concurrently". Implementing a system able to take advantage of this was initially difficult. Batch processing was effectively a methodological development on top of the earliest systems; since computers still ran single programs for single users at any time, developing a system that supported multiple users at the same time was a completely different concept. The state of each user and their programs would have to be kept in the machine, and this would take up computer cycles, which on the slow machines of the era was a concern. However, computers rapidly improved in speed and, especially, in the size of the memory in which users' states were retained, making the overhead of time-sharing steadily less significant. The first project to implement a time-sharing system was initiated by John McCarthy at MIT in 1959, initially planned on a modified IBM 704.

8. File system

In computing, a file system or filesystem is used to control how data is stored and retrieved. Without a file system, information placed in a storage medium would be one large body of data, with no way to tell where one piece of information stops and the next begins. By separating the data into pieces and giving each piece a name, the information is easily isolated and identified. Taking its name from the way paper-based information systems are named, the structure and logic rules used to manage the groups of information and their names is called a file system. There are many different kinds of file systems; each one has a different structure and logic, and different properties of speed, flexibility, security, size, and more. Some file systems have been designed to be used for specific applications; for example, the ISO 9660 file system is designed specifically for optical discs. File systems can be used on different types of storage devices that use different kinds of media. The most common storage device in use today is a hard disk drive. Other kinds of media that are used include flash memory and magnetic tapes; in some cases, such as with tmpfs, the computer's main memory is used to create a temporary file system for short-term use. Some file systems are used on local storage devices; others provide file access via a network protocol. Some file systems are virtual, meaning that the files are computed on request or are merely a mapping into a different file system used as a backing store. The file system manages access to both the content of files and the metadata about those files. It is responsible for arranging storage space; reliability and efficiency are also important design considerations. Before the advent of computers, the term file system was used to describe a method of storing and retrieving paper documents. By 1961 the term was being applied to computerized filing alongside the original meaning; by 1964 it was in general use. A file system consists of two or three layers; sometimes the layers are explicitly separated, and sometimes the functions are combined. The logical file system is responsible for interaction with the user application: it provides the application program interface for file operations (OPEN, CLOSE, READ, etc.) and passes the requested operation to the layer below it for processing. The logical file system manages open-file-table entries and per-process file descriptors; this layer provides file access, directory operations, and security and protection. The second, optional, layer is the virtual file system; this interface allows support for multiple concurrent instances of physical file systems. The third layer is the physical file system.
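
As an illustration of the logical layer's interface described above, here is a minimal POSIX-style open/read/close sequence; the path used is chosen only for illustration, and the calls shown are the generic POSIX ones rather than any particular file system's internals.

/* Reading a file through the OS's file-system interface: the logical layer
 * resolves the name, maintains the open-file-table entry and the per-process
 * file descriptor, and passes the request down to the lower layers. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[256];
    int fd = open("/etc/hostname", O_RDONLY);   /* illustrative path */
    if (fd < 0) {
        perror("open");
        return 1;
    }
    ssize_t n = read(fd, buf, sizeof buf - 1);  /* read up to 255 bytes */
    if (n >= 0) {
        buf[n] = '\0';
        printf("%s", buf);
    }
    close(fd);                                   /* releases the descriptor */
    return 0;
}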

9. Exokernel

Exokernel is an operating system kernel developed by the MIT Parallel and Distributed Operating Systems group, and also a class of similar operating systems. Operating systems generally present hardware resources to applications through high-level abstractions such as file systems. The idea behind exokernels is to force as few abstractions as possible on application developers, enabling them to make as many decisions as possible about hardware abstractions. Implemented abstractions are called library operating systems; they may request specific memory addresses, disk blocks, etc. The kernel only ensures that the requested resource is free and that the application is allowed to access it. This low-level hardware access allows the programmer to implement custom abstractions; it also allows programmers to choose what level of abstraction they want, high or low. Traditionally, kernel designers have sought to make individual hardware resources invisible to programs by requiring the programs to interact with the hardware via some abstraction model. These models include file systems for storage, virtual address spaces for memory, and schedulers for task management. These abstractions of the underlying hardware make it easier to write programs in general. One option is to remove the kernel completely and program directly to the hardware; the program can then link to a support library that implements the abstractions it needs. MIT developed two exokernel-based operating systems, using two kernels: Aegis, a proof of concept with limited support for storage, and XOK, which applied the exokernel concept more thoroughly. The MIT exokernel manages hardware resources as follows.

Processor: the kernel represents the processor resources as a timeline from which programs can allocate intervals of time; a program can yield the rest of its time slice to another designated program. The kernel notifies programs of processor events, such as interrupts and hardware exceptions. If a program takes a long time to handle an event, the kernel will penalize it on subsequent time slice allocations; in extreme cases the kernel can abort the program.

Memory: the kernel allocates physical memory pages to programs and controls the translation lookaside buffer. A program can share a page with another program by sending it a capability to access that page; the kernel ensures that programs access only pages for which they have a capability.

Disk storage: the kernel identifies disk blocks to the application program by their physical block address, allowing the application to optimize data placement. When the program initializes its use of the disk, it provides the kernel with a function that the kernel can use to determine which blocks the program controls; the kernel uses this callback to verify the program's claims when it allocates a new block.

Networking: the kernel implements a programmable packet filter, which executes programs in a byte-code language designed for easy security-checking by the kernel.

The available library operating systems for the exokernel include the custom ExOS system; in addition to these, the exokernel team created the Cheetah web server, which uses the kernel directly. The exokernel concept has been around since at least 1994. A concept operating system using an exokernel-like design is Nemesis, written by the University of Cambridge, the University of Glasgow, Citrix Systems, and the Swedish Institute of Computer Science. MIT has also built several exokernel-based systems, including ExOS. In modern computing, the MINIX 3 kernel implements some of the ideas of the exokernel, but with the constraint that programs are subject to reincarnation (automatic restart of failed components) for the goal of reliability.

10. Computer multitasking

In computing, multitasking is the concept of performing multiple tasks (also known as processes) over a certain period of time by executing them concurrently. As a result, a computer executes segments of multiple tasks in an interleaved manner, while the tasks share common processing resources such as central processing units and main memory. Multitasking does not necessarily mean that multiple tasks are executing at exactly the same time: even on multiprocessor or multicore computers, which have multiple CPUs/cores so that more than one task can be executed at once, multitasking allows many more tasks to be run than there are CPUs. In the case of a computer with a single CPU, only one task is said to be running at any point in time; multitasking solves this problem by scheduling which task may be the one running at any given time, and when another waiting task gets a turn. The act of reassigning a CPU from one task to another one is called a context switch. Multiprogramming systems are designed to maximize CPU usage. In time-sharing systems, the running task is required to relinquish the CPU, either voluntarily or because of an external event such as a hardware interrupt; time-sharing systems are designed to allow several programs to execute apparently simultaneously. In real-time systems, some waiting tasks are guaranteed to be given the CPU when an external event occurs; real-time systems are designed to control devices such as industrial robots, which require timely processing. The term multitasking has become an international term, as the same word is used in many other languages, such as German, Italian, Dutch, and Danish. In the early days of computing, CPU time was expensive and peripherals were very slow. When the computer ran a program that needed access to a peripheral, the central processing unit would have to stop executing program instructions while the peripheral processed the data. The first computer using a multiprogramming system was the British Leo III, owned by J. Lyons. During batch processing, several different programs were loaded in the computer memory; when the first program reached an instruction waiting for a peripheral, the context of this program was stored away and another program in memory was given a chance to run. The process continued until all programs finished running. Multiprogramming doesn't give any guarantee that a program will run in a timely manner; indeed, the very first program may very well run for hours without needing access to a peripheral. As there were no users waiting at an interactive terminal, this was no problem: users handed in a deck of punched cards to an operator. Multiprogramming greatly reduced wait times when multiple batches were being processed. The expression "time-sharing", by contrast, usually designated computers shared by interactive users at terminals, such as IBM's TSO and VM/CMS.

11. Disk partitioning

Disk partitioning or disk slicing is the creation of one or more regions on a hard disk or other secondary storage, so that an operating system can manage information in each region separately. Partitioning is typically the first step of preparing a newly manufactured disk. The disk stores the information about the partitions' locations and sizes in an area known as the partition table, which the operating system reads before any other part of the disk. Each partition then appears in the operating system as a distinct logical disk that uses part of the actual disk. Partitioning a drive divides its total storage into separate pieces; once a partition is created, it can be formatted so that it can be used on a computer. Creating more than one partition has the following advantages:

Separation of the operating system from user files. This allows image backups to be made of only the operating system.

Having a separate area for operating system virtual memory swapping/paging.

Keeping frequently used programs and data near each other.

Having cache and log files separate from other files; these can change size dynamically and rapidly, potentially making a file system full.

Use of multi-boot setups, which allow users to have more than one operating system on a single computer.

Protecting or isolating files, to make it easier to recover a corrupted file system or operating system installation; if one partition is corrupted, other file systems may not be affected.

Raising overall computer performance on systems where smaller file systems are more efficient.

Short stroking, which aims to minimize performance-eating head repositioning delays by reducing the number of tracks used per HDD.

The basic idea of short stroking is to make one partition of approximately 20-25% of the total size of the drive. This partition is expected to occupy the outer tracks of the HDD; if capacity is limited with short stroking, the minimum throughput stays much closer to the maximum. This technique, however, is not inherently related to creating multiple partitions. For example, a 1 TB disk may have an access time of 12 ms at 200 IOPS with an average throughput of 100 MB/s; when it is partitioned to 100 GB, access time may be decreased to 6 ms at 300 IOPS with a throughput of 200 MB/s. Partitioning for significantly less than the size available, when the extra disk space is not needed, can also reduce the time for diagnostic tools such as checkdisk to run or for full image backups to run. Partitioning does, however, prevent disk optimizers from moving all frequently accessed files closer to each other across the whole disk, although files can still be moved closer to each other within each partition. This issue does not apply to solid-state drives, as access times on those are neither affected by nor dependent upon relative sector positions. Partitioning may also prevent use of the whole disk capacity, because it may break free capacity apart.
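
To make the partition table mentioned above concrete, here is a sketch of the classic MBR partition-entry layout (the traditional 16-byte entries stored in the first sector of the disk). This is a simplified view under stated assumptions: CHS fields are kept as raw bytes, and GPT and extended partitions are ignored.

/* Classic MBR partition-table entry: four 16-byte entries start at byte 446
 * of sector 0, followed by the 0x55AA boot signature. */
#include <stdint.h>

#pragma pack(push, 1)
struct mbr_partition_entry {
    uint8_t  boot_flag;      /* 0x80 = bootable, 0x00 = not bootable */
    uint8_t  chs_first[3];   /* legacy CHS address of the first sector */
    uint8_t  type;           /* partition type code, e.g. 0x83 = Linux */
    uint8_t  chs_last[3];    /* legacy CHS address of the last sector */
    uint32_t lba_start;      /* first sector, as a logical block address */
    uint32_t num_sectors;    /* partition length in sectors */
};
#pragma pack(pop)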

12. Process (computing)

In computing, a process is an instance of a computer program that is being executed. It contains the program code and its current activity. Depending on the operating system, a process may be made up of multiple threads of execution that execute instructions concurrently. A computer program is a passive collection of instructions, while a process is the actual execution of those instructions. Several processes may be associated with the same program; for example, opening several instances of the same program often results in more than one process being executed. Multitasking is a method to allow processes to share processors. Each CPU executes a single task at a time; however, multitasking allows each processor to switch between tasks that are being executed without having to wait for each task to finish. Depending on the operating system implementation, switches could be performed when tasks perform input/output operations, when a task voluntarily yields the CPU, or on hardware interrupts. A common form of multitasking is time-sharing, a method to allow fast response for interactive user applications. In time-sharing systems, context switches are performed rapidly, which makes it seem as if multiple processes are being executed simultaneously on the same processor; this seemingly simultaneous execution of multiple processes is called concurrency. In general, a computer system process consists of the following resources:

Memory, which includes the executable code, process-specific data, and a call stack.

Operating system descriptors of resources that are allocated to the process, such as file descriptors or handles.

Security attributes, such as the process owner and the process's set of permissions.

Processor state, such as the content of registers and physical memory addressing; the state is typically stored in computer registers when the process is executing, and in memory otherwise.

The operating system holds most of this information about active processes in data structures called process control blocks. Any subset of the resources, typically at least the processor state, may be associated with each of the process's threads in operating systems that support threads. The operating system keeps its processes separate and allocates the resources they need, so that they are less likely to interfere with each other and cause system failures. The operating system may also provide mechanisms for inter-process communication to enable processes to interact in safe and predictable ways.
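
Since several processes can be associated with the same program, a small POSIX example may help: after fork(), parent and child run the same program text, but the kernel tracks each of them with its own process control block (its own PID, registers, and descriptors).

/* After fork(), two separate processes exist, each with its own PCB. */
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t child = fork();
    if (child == 0) {
        printf("child:  pid=%ld ppid=%ld\n", (long)getpid(), (long)getppid());
    } else if (child > 0) {
        printf("parent: pid=%ld child=%ld\n", (long)getpid(), (long)child);
        wait(NULL);                      /* reap the child */
    } else {
        perror("fork");
        return 1;
    }
    return 0;
}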


Modern desktop operating systems are capable of handling large numbers of different processes at the same time. This screenshot shows Linux Mint simultaneously running the Xfce desktop environment, Firefox, a calculator program, the built-in calendar, Vim, GIMP, and VLC media player.