2
6.1 Introduction
Things to do:
– Laundry
– Study for test
– Cook and eat dinner
– Call Mom for her birthday
How would you order these tasks?

3
6.2 Programs and Processes
What is an operating system?
What are resources?
How do we create programs?

4
6.2 Programs and Processes
What is the memory footprint of a user program? What is the overall view of memory? Why?
[Figure: memory footprint of a user program, from low memory to high memory: area used by the OS, program code, program global data, program heap, program stack, area used by the OS. The overall view of memory holds OS data structures, OS routines, and the footprints of Program 1, Program 2, …, Program n.]

5
6.2 Programs and Processes
What resources are required to run "Hello, World!"? What is a scheduler?
[Figure: a scheduler picks a winner from among Process 1, Process 2, …, Process n to run on the processor.]
Program properties:
– Expected running time
– Expected memory usage
– Expected I/O requirements
Process/system properties:
– Available system memory
– Arrival time of a program
– Instantaneous memory requirements

6
6.2 Programs and Processes
Program:
– On disk
– Static
– No state (no PC, no register usage)
– Fixed size
Process:
– In memory (and disk)
– Dynamic: changing state (PC, registers)
– May grow or shrink
– Fundamental unit of scheduling
One program may yield many processes.

7
6.2 Programs and Processes
– Job: usually a unit of scheduling; in this chapter, synonymous with process.
– Process: a program in execution; a unit of scheduling; in this chapter, synonymous with job.
– Thread: a unit of scheduling and/or execution, contained within a process; not used in the scheduling algorithms described in this chapter.
– Task: a unit of work; a unit of scheduling; not used in the scheduling algorithms described in this chapter, except in describing the scheduling algorithm of Linux.

9
– Long-term scheduler (batch-oriented OS): controls the job mix in memory to balance the use of system resources (CPU, memory, I/O).
– Loader (in every OS): loads a user program from disk into memory.
– Medium-term scheduler (every modern time-shared, interactive OS): balances the mix of processes in memory to avoid thrashing.
– Short-term scheduler (every modern time-shared, interactive OS): schedules the memory-resident processes on the CPU.
– Dispatcher (in every OS): populates the CPU registers with the state of the process selected for running by the short-term scheduler.

12
Schedulers come in two basic flavors:
– Preemptive
– Non-preemptive
Basic scheduler steps:
1. Grab the attention of the processor.
2. Save the state of the currently running process.
3. Select a new process to run.
4. Dispatch the newly selected process to run on the processor.
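The four basic steps can be sketched in code. This is a minimal, hypothetical model (process and CPU state represented as dictionaries, FCFS selection assumed), not an actual OS implementation:

```python
# Sketch of one scheduling decision: save the current context,
# select a new process, and dispatch it onto the (simulated) CPU.

def schedule(cpu, ready_queue, current):
    """Run one scheduling decision; returns the newly dispatched process."""
    # Step 2: save the state of the currently running process.
    if current is not None:
        current["saved_pc"] = cpu["pc"]
        current["saved_regs"] = dict(cpu["regs"])
        ready_queue.append(current)
    # Step 3: select a new process (simple FCFS pick assumed here).
    nxt = ready_queue.pop(0)
    # Step 4: dispatch -- populate the CPU with the selected process's state.
    cpu["pc"] = nxt.get("saved_pc", nxt["entry"])
    cpu["regs"] = dict(nxt.get("saved_regs", {}))
    return nxt

cpu = {"pc": 100, "regs": {"r0": 7}}
p1 = {"name": "P1", "entry": 100}
p2 = {"name": "P2", "entry": 200}
running = schedule(cpu, [p2], p1)
print(running["name"], cpu["pc"])  # P2 200
```

Step 1 (grabbing the processor) is what distinguishes preemptive from non-preemptive schedulers: it happens via an interrupt in the former and via a voluntary system call in the latter.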

13
6.4 Scheduling Basics
What information is important to know about a process?

16
6.4 Scheduling Basics
– CPU burst: continuous CPU activity by a process before requiring an I/O operation.
– I/O burst: activity initiated by the CPU on an I/O device.
– PCB: process control block that holds the state of a process (i.e., a program in execution).
– Ready queue: queue of PCBs representing the set of memory-resident processes that are ready to run on the CPU.
– I/O queue: queue of PCBs representing the set of memory-resident processes that are waiting for some I/O operation either to be initiated or completed.
– Non-preemptive algorithm: allows the currently scheduled process on the CPU to voluntarily relinquish the processor (either by terminating or by making an I/O system call).
– Preemptive algorithm: forcibly takes the processor away from the currently scheduled process in response to an external event (e.g., an I/O completion interrupt or a timer interrupt).
– Thrashing: a phenomenon wherein the dynamic memory usage of the processes currently in the ready queue exceeds the total memory capacity of the system.

17
6.5 Performance Metrics
System-centric:
– CPU utilization: percentage of time the processor is busy.
– Throughput: number of jobs executed per unit time.
– Average turnaround time: average elapsed time between a job entering and leaving the system.
– Average waiting time: average amount of time each job spends waiting while in the system.
User-centric:
– Response time: time until the system responds to the user.

23
6.6 Non-preemptive Scheduling Algorithms
Non-preemptive means that once a process is running, it continues to run until it relinquishes control of the CPU: it terminates, voluntarily yields the CPU to some other process (waits), or requests some service from the operating system.

24
6.6.1 First-Come First-Served (FCFS)
– Intrinsic property: arrival time
– May exhibit the convoy effect
– No starvation
– High variability of average waiting time
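The convoy effect and the high variability of average waiting time can be demonstrated with a short simulation. Here is a sketch using hypothetical burst times (all jobs assumed to arrive at t = 0, run in list order):

```python
def fcfs_avg_wait(bursts):
    """Average waiting time when jobs (all arriving at t=0) run in list order."""
    wait, elapsed = 0, 0
    for burst in bursts:
        wait += elapsed        # this job waits for everything scheduled before it
        elapsed += burst
    return wait / len(bursts)

# Convoy effect: one long job arriving first delays all the short jobs.
print(fcfs_avg_wait([24, 3, 3]))  # 17.0
print(fcfs_avg_wait([3, 3, 24]))  # 3.0
```

The same three jobs yield very different average waiting times depending purely on arrival order, which FCFS cannot control.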

25
6.6.2 Shortest Job First (SJF)
– Uses anticipated burst time
– No convoy effect
– Provably optimal for best average waiting time
– May suffer from starvation, which may be addressed with aging rules
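The optimality claim can be illustrated numerically: running the shortest jobs first minimizes the total time everyone waits. A sketch with hypothetical burst times (all arriving at t = 0):

```python
def avg_wait(bursts):
    """Average waiting time when jobs (all arriving at t=0) run in list order."""
    wait, elapsed = 0, 0
    for burst in bursts:
        wait += elapsed
        elapsed += burst
    return wait / len(bursts)

bursts = [6, 8, 7, 3]
print(avg_wait(bursts))          # arrival order: 10.25
print(avg_wait(sorted(bursts)))  # SJF order [3, 6, 7, 8]: 7.0
```

No other ordering of these four jobs beats 7.0; any swap that moves a longer job ahead of a shorter one increases the sum of waits.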

26
6.6.3 Priority
– Each process is assigned a priority
– May have an additional policy, such as FCFS for all jobs with the same priority
– Attractive for environments where different users will pay more for preferential treatment
– SJF is a special case with priority = 1/burst time
– FCFS is a special case with priority = arrival time
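A priority scheduler with FCFS tie-breaking reduces to a stable sort on priority. A minimal sketch with hypothetical jobs (lower number = higher priority is assumed here; conventions vary):

```python
# Hypothetical jobs: (name, priority, burst); lower number = higher priority.
jobs = [("A", 3, 10), ("B", 1, 1), ("C", 2, 2), ("D", 1, 5)]

# Python's sort is stable, so among equal priorities the original
# (arrival) order is preserved -- i.e., FCFS tie-breaking: B before D.
order = [name for name, prio, burst in sorted(jobs, key=lambda j: j[1])]
print(order)  # ['B', 'D', 'C', 'A']
```

Substituting `key=lambda j: j[2]` (sort by burst) recovers SJF, and `key` on arrival time recovers FCFS, matching the special cases noted above.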

27
6.7 Preemptive Scheduling Algorithms
Two simultaneous implications:
– The scheduler is able to assume control of the processor at any time, unbeknownst to the currently running process.
– The scheduler is able to save the state of the currently running process for proper resumption from the point of preemption.
Any of the non-preemptive algorithms can be made preemptive.

28
6.7.1 Round Robin Scheduler
– Appropriate for time-sharing environments
– Need to determine the time quantum q: the amount of time a process gets before being context-switched out (also called the timeslice); context-switching time becomes important
– FCFS is a special case with q = ∞
– If n processes are running under round robin, they have the illusion of exclusive use of a processor running at 1/n times the actual processor speed
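The effect of the quantum can be seen in a small simulation. A sketch with hypothetical burst times, ignoring context-switch overhead (all jobs arriving at t = 0):

```python
from collections import deque

def round_robin(bursts, q):
    """Per-process completion times under round robin with quantum q."""
    queue = deque(enumerate(bursts))      # (process id, remaining burst)
    t, done = 0, {}
    while queue:
        i, remaining = queue.popleft()
        run = min(q, remaining)           # run for a quantum or until finished
        t += run
        if remaining > run:
            queue.append((i, remaining - run))   # back of the ready queue
        else:
            done[i] = t
    return [done[i] for i in range(len(bursts))]

print(round_robin([24, 3, 3], q=4))    # [30, 7, 10]
print(round_robin([24, 3, 3], q=100))  # large q degenerates to FCFS: [24, 27, 30]
```

Note the trade-off: a small q gives short jobs fast completion (7 and 10 instead of 27 and 30) at the cost of more context switches, which this sketch does not charge for.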

29
6.7.1.1 Details of the Round Robin Algorithm
What do we mean by context?
How does the dispatcher get run?
How does the dispatcher switch contexts?

31
6.8 Combining Priority and Preemption
– Modern general-purpose operating systems such as Windows NT/XP/Vista and Unix/Linux use multi-level feedback queues
– The system consists of a number of different queues, each with a different expected quantum time
– Each individual queue uses FCFS, except the base queue, which uses round robin
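The queue structure can be sketched as follows. This is a simplified, hypothetical model (three levels, doubling quanta, demote-on-full-quantum policy assumed), not the policy of any particular OS:

```python
from collections import deque

# Level 0 is the highest priority; deeper levels get longer quanta.
levels = [deque(), deque(), deque()]
quanta = [1, 2, 4]     # assumed expected quantum per level

def pick_next():
    """Serve the highest non-empty level first."""
    for depth, q in enumerate(levels):
        if q:
            return depth, q.popleft()
    return None

def demote(depth, task):
    """A task that consumed its full quantum drops one level."""
    levels[min(depth + 1, len(levels) - 1)].append(task)

levels[0].extend(["T1", "T2"])
depth, task = pick_next()      # T1 from level 0
demote(depth, task)            # T1 used its quantum: now at level 1
print(task, pick_next())       # T1 (0, 'T2')
```

The feedback is the demotion: CPU-bound tasks sink to levels with long quanta (fewer context switches), while I/O-bound interactive tasks stay near the top and are served first.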

36
6.11 Summary and a Look Ahead
– FCFS: intrinsically non-preemptive; could accommodate preemption at the time of I/O completion events. Scheduling criterion: arrival time (intrinsic property). Pros: fair; no starvation. Cons: high variance in response time; convoy effect.
– SJF: intrinsically non-preemptive; could accommodate preemption at the time of new job arrival and/or I/O completion events. Scheduling criterion: expected execution time of jobs (intrinsic property). Pros: preference for short jobs; provably optimal for response time; low variance in response times. Cons: potential for starvation; bias against long-running computations.
– Priority: could be either non-preemptive or preemptive. Scheduling criterion: priority assigned to jobs (extrinsic property). Pros: highly flexible; since priority is not an intrinsic property, its assignment to jobs can be chosen commensurate with the needs of the scheduling environment. Cons: potential for starvation.
– SRTF: similar to SJF but uses preemption. Scheduling criterion: expected remaining execution time of jobs. Pros and cons: similar to SJF.
– Round Robin: preemptive, allowing an equal share of the processor for all jobs. Scheduling criterion: time quantum. Pros: equal opportunity for all jobs. Cons: overhead of context switching among jobs.

37
6.12 Linux Scheduler – A Case Study
The scheduler is designed to match both the personal computing and server domains.
Goals:
– High efficiency: spend as little time as possible in the scheduler; an important goal for the server environment
– Support for interactivity: important for the interactive workload of the desktop environment
– Avoid starvation: ensure that computational workloads do not suffer as a result of interactive workloads
– Support for soft real-time scheduling: meet the demands of interactive applications with real-time constraints

40
6.12 Linux Scheduler – Algorithm
1. Pick the first task with the highest priority from the active array and run it.
2. If the task blocks (due to I/O), put it aside and pick the next highest one to run.
3. If the time quantum runs out for the currently scheduled task (does not apply to FCFS tasks), place it in the expired array.
4. If a task completes its I/O, place it in the active array at the right priority level, adjusting its remaining time quantum.
5. If there are no more tasks to schedule in the active array, simply flip the active and expired array pointers and continue with the scheduling algorithm (i.e., the expired array becomes the active array and vice versa).
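The active/expired flip is the part that keeps scheduling O(1): no queue is ever rebuilt, only two pointers are swapped. A heavily simplified sketch (one priority level per array shown; the real scheduler keeps an array of run queues, one per priority, plus a bitmap to find the highest non-empty one):

```python
# Hypothetical one-level model of the active/expired arrays.
active, expired = [["T1", "T2"]], [[]]

def next_task():
    global active, expired
    if not any(active):                 # active array drained:
        active, expired = expired, active   # flip the two pointers
    for level in active:                # highest non-empty priority level
        if level:
            return level.pop(0)
    return None                         # nothing runnable anywhere

def quantum_expired(task):
    expired[0].append(task)             # quantum ran out: park in expired array

t = next_task()          # 'T1' runs first
quantum_expired(t)       # its quantum expires
next_task()              # 'T2' runs
print(next_task())       # active array empty -> flip -> 'T1' runs again
```

Because exhausted tasks wait in the expired array until the flip, every runnable task gets a turn before anyone gets a second one, which is how the design avoids starvation.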