This is an introduction post for “Graphics and Internet” for WBUT 2014, covering C as a systems programming language and the underlying source-code concepts for drawing basic graphics in C. Earlier, I discussed data structures in C for WBUT University in detail, covering the practical aspects of the year 2014. This post, in contrast, deals with the graphical side of C programming, keeping the practical labs in mind. I am glad I could post something as far as the syllabus is concerned (which is really outdated, as discussed in the last post), and I hope it helps the students who come across this after 2014 has passed.


There are other posts as well, related to Web Development. The posts belonging to web development can already be accessed from here. That section is for everything related to ‘code’ and working with code in different languages; HTML can be found there too. I would appreciate it if readers left feedback. Roger out.

The Preface

Hi, this is about the practical lab C section for WBUT winters (3rd semester of the university-prescribed syllabus): the data-structures questions solved in one place, as a ready reference for students who might seek help, and also a word on why colleges should now be moving out of the stone age. All the programs are compiled under the Turbo C++ compiler and should not be expected to compile into an object under the GNU C compiler or any Windows-variant compiler (such as the Visual C compiler). Since the syllabus itself is very outdated and people have been discussing it in various forums, I came up with a compilation of the questions along with the code, which should help readers understand the core concepts, with ready-to-run code meant only for the Turbo C++ compiler. This is C code, not C++ code. The source code for each program has been practically tested on the Windows 8.1 platform, using DOSBox as an emulator to run Turbo C.

The Recommendations

It’s recommended to move on from the syllabus and pay less attention to it if anyone is really serious about C and C++ programming. Python, Ruby, Delphi, Lua and Perl are modern, spectacular languages to choose from, and Java is really not as friendly as you might think when you have to do a certain task: Java isn’t flexible, in the sense that it requires a long source listing for a simple problem. See “Python to other languages – A comparison“. Get to know more at Stack Overflow. To reference the original source, here is the justification, in a nutshell, for why someone would prefer to code in Python rather than an older language:

Speed of development. I can crank out working code before the Java folks have stuff that will compile.

Flexibility. I can refactor and rework a module in an afternoon. The Java folks have to redesign things on paper to be sure they can get some of the bigger refactorings to recompile. In Java, the changes aren’t as small and focused. I should probably blog about some of the nastier refactorings that would be a huge pain — even using the refactoring tools in Eclipse or NetBeans. The most recent example was a major change to the top-most superclass of a class hierarchy that would have invalidated a mountain of Java code. But the method was used in exactly two places, allowing a trivial change to the superclass and those two places. 100’s of unit tests passed, so I’m confident in the change without the compiler overhead.

Simplicity. Python doesn’t need a sophisticated IDE and super-complex WS libraries. Python code can be written in Komodo Edit. A WS interface between your Java front-end and PostgreSQL can be knocked together in Werkzeug in a day. You can plug your transactions into this without much overhead.

The quoted post primarily covers Python against Java, but there are also reasons why someone could prefer Python over C, which is why it is referenced here. This is not at all meant to compare C and Python on the basis of what they deliver, but rather to compare them, on average, from an end-user-opinion perspective. There are other considerations in modern computing as well, which are resolved below.

Why Would Python Be Slower to Run, yet Faster than C in Deployment?

Python is a higher-level language than C, which means it abstracts the details of the computer from you – memory management, pointers, etc. – and allows you to write programs in a way that is closer to how humans think. It is true that C code usually runs 10 to 100 times faster than Python code if you measure only the execution time. However, if you also include the development time, Python often beats C. For many projects the development time is far more critical than the run-time performance. Longer development time converts directly into extra costs, fewer features and a slower time to market.

Why Use C, Then, Rather than Switching to Any Other Programming Language?

“If there ever were a quote that described programming with C, it would be this. To many programmers, this makes C scary and evil. It is the Devil, Satan, the trickster Loki come to destroy your productivity with his seductive talk of pointers and direct access to the machine. Then, once this computational Lucifer has you hooked, he destroys your world with the evil “segfault” and laughs as he reveals the trickery in your bargain with him.

But, C is not to blame for this state of affairs. No my friends, your computer and the Operating System controlling it are the real tricksters. They conspire to hide their true inner workings from you so that you can never really know what is going on. The C programming language’s only failing is giving you access to what is really there, and telling you the cold hard raw truth. C gives you the red pill. C pulls the curtain back to show you the wizard. C is truth. Why use C then if it’s so dangerous? Because C gives you power over the false reality of abstraction and liberates you from stupidity.”

The Coverage

Now that we have settled on C with Turbo C as the compiler, I’ll go ahead and write the source code of the practicals here, and try to keep it as updated as I can. It is then your responsibility to post back with comments on what else should be added, so this can help others in the process.

Notes

In the source code, all instances of \n have been replaced with n; take care of that. Also, the include headers have been left out deliberately, to keep the code away from copy/paste practices. In my opinion this will lead to better coding practice and a great start with self-research.

The Question Set

The questions I have dug up so far are these:

Doubly Linked List with insertion and deletion.

Priority Queue with addition and deletion of elements.

Dequeue or Double Ended Queue using Linked List.

Dequeue or Double Ended Queue using Circular Array.

Priority Queue Implementation using Arrays.

Circular Queue using Linked List with Circular Queue Concept Notes.

Circular Queue Implementation in C Data-Structures using Arrays.

Stack for PUSH and POP operations in C both via recursive and regular methods.

Linear Search Implementation in C using Arrays.

Binary Search Implementation in C using Arrays.

Insertion Sort Implementation in C using Arrays.

Merge Sort Implementation in C using Arrays.

Quick Sort Implementation in C using Arrays.

Heap Sort Implementation in C using Arrays.

Source Code Solutions

1.) Doubly Linked List with Insertion, Deletion and Reverse Display

Here’s the source code (to avoid copy/paste gimmicks, I have stripped off all the includes. Feel free to add them at your own expense) for Doubly Linked List:

3.) Dequeue or Double Ended Queue using Linked List

What is Dequeue?

The word dequeue is a short form of double-ended queue. In a dequeue, insertion as well as deletion can be carried out at either the rear end or the front end. The following diagram illustrates a version of the concept:

4.) Dequeue or Double Ended Queue using Circular Array

The source code available here is different from the previous one, which used a linked list. This time the code uses the circular-array concept and builds a dequeue, or double-ended queue, around a circular array. The source code is below, as per the references.

5.) Priority Queue using Array

I noticed I hadn’t posted a priority-queue implementation using arrays. That is why the source code is here, as a ready reference for the readers of the blog. This is the original version of the source code, tested under the terms and conditions posted before, and it should run via a Turbo C compiler. Any other compiler just won’t work, because the syllabus itself is outdated – and people (and universities) must realize this fast.

6.) Circular Queue using Linked List in C Data-structures with Notes

In a standard queue data structure, a re-buffering problem occurs on each dequeue operation. This is solved by joining the front and rear ends of the queue, making it a circular queue.
A circular queue is a linear data structure. It follows the FIFO principle.

In circular queue the last node is connected back to the first node to make a circle.

The circular linked list follows the First In, First Out principle.

Elements are added at the rear end and deleted at the front end of the queue.

Both the front and the rear pointers initially point to the beginning of the array.

It is also called a “ring buffer”.

Items can be inserted into and deleted from the queue in O(1) time.

Circular Queue can be created in three ways. They are:

· Using single linked list

· Using double linked list

· Using arrays

Following is the source code for implementing a circular queue in C using a linked list:

7.) Circular Queue Implementation in C using Arrays

Earlier in the post, we talked about implementing a circular queue using a linked list. This section covers the same implementation, but using arrays. The source code below can be used to demonstrate the concepts:

Round Robin Scheduling

The previous post discussed the two types of CPU scheduling, preemptive and non-preemptive, and focused on the FCFS, SJF and Priority Scheduling techniques. We discussed how SJF and Priority Scheduling can be modified to attain preemptive scheduling, and why FCFS cannot be used as preemptive scheduling. This post goes further with another preemptive type of CPU scheduling, known as Round Robin CPU Scheduling. As we already know, preemptive scheduling means forcibly stopping a job and keeping it in the waiting/blocked state for a period of time while another job executes; Round Robin is another instance of preemptive scheduling, wherein it is possible to stop a job, pick up another job from the READY QUEUE and start executing it. But there is a difference in the way Round Robin Scheduling is implemented.

In SJF (modified into SRT, Shortest Remaining Time First Scheduling) and Priority Scheduling (modified with LONG TERM and SHORT TERM schedulers), preemptive CPU scheduling was attained using time (CPU BURST time) and priority respectively. This is not the case with Round Robin Scheduling: it depends neither on the CPU BURST time of the job nor on its priority – all jobs are treated equally. In Round Robin Scheduling, CPU time is divided into a number of quanta. Assume the quantum duration is 2 ms (milliseconds). Whenever a job is allocated to the CPU for execution, it executes for a maximum of 2 ms. If the CPU BURST time required to complete the job is more than 2 ms, then after 2 ms the job is forcibly stopped and pushed back to the READY QUEUE, and a new job from the READY QUEUE is allocated to the CPU – again for at most 2 ms, since that is the quantum defined for this Round Robin schedule. If a job’s execution requirement is less than 2 ms, the job terminates normally, not forcibly; after such a termination the job might go to the I/O WAITING STATE/BLOCKED STATE for the CPU to handle I/O operations, or might go to the HALTED STATE/TERMINATED STATE. The CPU cannot stay idle for the remaining time when a job finishes before the quantum; the efficient thing Round Robin does is to take a new job and start executing it from that very point in time. That is, if the quantum is 2 ms and the job finished in 1 ms, a new job from the READY QUEUE is taken by the CPU scheduler and executed, its quantum starting from the saved point (2 − 1 = 1 ms early). For example, say there are 4 jobs in the READY QUEUE to be executed by the CPU:

J1 = 4 ms
J2 = 2 ms
J3 = 3 ms
J4 = 6 ms

Assume the CPU quantum is 2 ms; that is, 1 quantum = 2 ms. In Round Robin CPU Scheduling, jobs are taken on an FCFS basis, but each job gets a limited quantum per execution cycle – in this case 2 ms (1 quantum). The timeline in which these jobs are served is as follows:

Since the Round Robin CPU scheduler takes jobs on an FCFS (First Come, First Served) basis, no priority rule and no minimum-CPU-BURST-first rule is applied: allocation is purely first come, first served, and each execution cycle of each job spends at most 1 quantum of time, here 2 ms. After 1 quantum spent on a particular job, if CPU BURST time remains, the job goes back to the READY QUEUE and the next job is taken. If the next job requires exactly 1 quantum of processing, it is then either pushed to the I/O WAITING STATE to cover an I/O operation, or reaches the HALTED STATE/TERMINATED STATE, which is normal job termination. If a job finishes before 1 quantum, the CPU scheduler, without wasting any time, pushes the next job from the READY QUEUE into the ACTIVE STATE and executes it for up to the same quantum assigned to all jobs. When jobs are pushed to the READY QUEUE, they are always pushed to the back, since the jobs already waiting should be taken by the CPU first. The CPU thus takes jobs from the head of the READY QUEUE, which again means it follows the rules of FCFS. It should also be noted that jobs can be pushed to the READY QUEUE from the I/O WAIT STATE, the NEW STATE, or the ACTIVE STATE (in case of forcible stopping). Here, we need to look at the process state diagram to verify that jobs can reach the READY QUEUE in these three ways:

The process state diagram suggests that the READY QUEUE (the READY STATE in the diagram) indeed gets jobs from all three directions: from the active state when a job is forcibly stopped by the CPU, from the new state when newly arrived jobs are scheduled to move from the new state to the ready state, and from the wait/blocked state when jobs finish awaiting I/O operations. Hence, after the first execution cycle, the timeline for the above example would look like this:

After this timeline, all the jobs will be completed, assuming no new jobs arrived in the READY QUEUE from the NEW STATE. Hence all jobs are treated equally in Round Robin CPU Scheduling. There must be a timer in the system to remind the CPU that 1 quantum is over for a job, which brings the topic to a new curve. The timer we are talking about should be programmable, since in some installations an administrator might want a small quantum, while in other installations the same administrator might want a larger one. So this timer should be programmable, interrupting the processor to remind the CPU that 1 quantum has passed and the job must be stopped.

Timer (Programmable)

Whenever the CPU scheduler decides that a new job has to be allocated to the CPU, the timer is immediately reset. At the end of the time quantum, the timer generates an interrupt, upon which the CPU jumps to a system program whose responsibility is to stop the currently executing job, get a new job from the READY QUEUE, hand it to the CPU for execution, and restart the timer. That is what happens when the required CPU BURST time is more than the quantum set in the timer. If the CPU BURST time is less than the quantum, the process is not still executing when the timer would fire: execution completes before the timer interrupt, yet the overall cycle is not complete. So the last statement of every CPU BURST must be a system call – either a system call to perform an I/O operation, or a system call indicating that the process is complete. Following that system call, the CPU scheduler again fetches a new process (job) from the READY QUEUE, gives it to the CPU for execution, and at the same time resets the timer so that the next quantum starts from that point.

Variations of Round Robin Scheduling

There can be different variations of Round Robin Scheduling. So far we have made no special considerations, but jobs differ in how they use their CPU BURST duration: some spend it mostly on I/O operations (I/O Bound jobs) and some mostly on CPU time (CPU Bound jobs). The variations depend on these CPU BURST time requirements; the main algorithm used remains Round Robin Scheduling.

To implement these concepts, we need to know about the MULTI-LEVEL QUEUE.

MULTI-LEVEL QUEUE

There can be multiple READY QUEUEs, as the following demonstrates:

The diagram suggests that multiple READY QUEUEs can be taken into consideration depending on the jobs, since jobs vary: either I/O operations or CPU processing dominates their CPU BURST. This way, if efficiency is to be maintained such that I/O operations get the highest priority, the single READY QUEUE would be divided into (for example) 3 READY QUEUEs:

Q1

Q2

Q3

Jobs are placed as per the priority of execution (the Round Robin implementation itself does not use priority; we are only prioritizing jobs across the READY QUEUEs, not within the scheduling algorithm).

Q1 – the highest-priority queue, holding the I/O Bound jobs; its jobs are taken first.

Q2 – only after Q1 is empty are the jobs in Q2 taken. That is, all I/O Bound jobs have to be completed first.

Q3 – only after both Q1 and Q2 are empty are the pending low-priority jobs, that is the CPU Bound jobs, taken.

There are disadvantages here. Jobs can be dynamic in nature, which means a job that is I/O Bound can change into one requiring CPU time (it converts itself into a CPU Bound job). With the current MULTI-LEVEL QUEUE implementation, such a job stays in Q1 and is executed in first preference despite having transformed into a CPU Bound job. This is unwanted, and there has to be some technique to resolve it. This is where the MULTI-LEVEL FEEDBACK QUEUE is used.

MULTILEVEL FEEDBACK QUEUE

In a multilevel feedback queue, there are again multiple queues. Assume there are 3 queues, as shown below in the diagram, but with a provision that jobs can be moved from one queue to another.

Whenever a job is pushed into the READY QUEUE, it is always placed in queue number 1, which is Q1. The time quanta for the queues are:

Q1 = 2 ms

Q2 = 5 ms

Q3 = 10 ms

When a job enters for the first time, let’s assume its nature is not known – whether it is an I/O Bound job or a CPU Bound job (what its CPU BURST duration will be). The job is put in queue number 1 (Q1), where jobs are allocated to the CPU using Round Robin Scheduling and executed for at most 2 ms. If one finds that the job’s CPU BURST finished before 2 ms, the job is kept in Q1 and not moved to Q2: the first time it executed, it showed that its CPU BURST requirement is less than 2 ms, so it is predicted that the next CPU BURST will also be less than 2 ms. If instead the timer fires because the job needs more than 2 ms, the job is pushed to the next queue, Q2, which has a larger quantum. Q2 again follows the Round Robin CPU Scheduling technique for allocating jobs to the CPU, this time with a quantum of 5 ms. In Q2, if one finds that the job’s CPU time requirement is less than 5 ms, it remains in Q2 itself; otherwise it is pushed to the next queue, Q3, with a quantum of 10 ms. All of these queues follow the Round Robin CPU Scheduling algorithm for allocating jobs to the CPU for execution.

So, conclusively: the first time a job is pushed to Q1, if its CPU BURST duration is greater than the quantum threshold of that particular queue, it is pushed to the next queue; otherwise it is retained where it is. The READY QUEUE has effectively been divided into levels according to time quantum, and a job’s level depends on the time duration it needs for a complete execution. In a similar fashion, jobs can also be pushed back upwards: should a job come to require more I/O Bound operation, it is moved from Q3 to Q2 and then from Q2 to Q1, as per the requirement and the (possibly dynamic) nature of the job. This means that while a job remains I/O Bound it stays in Q1; if at some point it changes its nature from I/O Bound to CPU Bound, it is pushed downward (lower preference). Similarly, low-priority (CPU Bound) jobs can be pushed upwards, their priority increasing as the job’s nature changes from CPU Bound to I/O Bound. Hence jobs are kept at the queue level matching their nature. These are the variations of the Round Robin technique.

The diagram above clearly shows how jobs are now flexibly aligned according to the time duration required and the dynamic nature of the job. If a job changes from CPU Bound to I/O Bound, it moves upwards; if an I/O Bound job changes to CPU Bound, it moves downwards, as per the requirement. I/O Bound jobs have the highest priority, since they require less CPU BURST time than CPU Bound jobs, which require more CPU BURST time and hence have a lower priority.

A feed-forward is also used. There is a concept called switch time, the extra time required by the CPU to move a job from one queue to another; a job can be switched directly to the appropriate queue without having to move through the intermediate queue. This means that if a job has to be transferred from Q3 to Q1, it is directly possible, but it needs switch time and some extra CPU processing. This is demonstrated in the diagram below:

We covered preemptive and non-preemptive CPU scheduling in the last post and have concluded the topic here with Round Robin CPU Scheduling. The posts covering process management are:

FCFS (First Come First Served) – Nonpreemptive

SJF (Shortest Job First) – Nonpreemptive

Priority Scheduling – Nonpreemptive

SRT (Shortest Remaining Time First) – Preemptive

Round Robin Scheduling – Preemptive

Multilevel Queue – Preemptive

But all of the CPU scheduling we have discussed is based on a single CPU. Modern operating systems should not remain satisfied with a single-processor CPU: distributed computing is used in modern operating systems, be it processors networked among many systems or a single setup having multiple processor cores. We have discussed a single resource shared among multiple processes; what we will see next is a setup wherein there are multiple resources to be shared among multiple processes.

DISTRIBUTED COMPUTING

There are models which could go under distributed computing:

WORKSTATION MODEL

PROCESSOR POOL MODEL

In the Workstation Model, every user has one full-fledged computer with its own memory, hard disk, etc., shared on (for example) a LAN network; without the network, the computer remains functional. Every user has processing power.

In the Processor Pool Model, one high-end server has multiple processors, and the users have terminals (for example, graphics terminals) with no processing capability of their own. Users do not have processing capabilities; the processing is centralized.

Naturally, the approach taken for CPU sharing must be different for these two models. There must hence be two different kinds of allocation techniques:

NON-MIGRATORY

MIGRATORY

NON-MIGRATORY allocation is, to some extent, static in nature. In NON-MIGRATORY CPU allocation, once a process is created, it is decided on which of the processors the process is to be executed; once decided, this is fixed, and the process executes on that processor until termination.

MIGRATORY allocation is dynamic: a process can be migrated from one processor to another depending upon the requirement. A process may hence be initiated on one processor and terminate on another.

The next post will cover these aspects of CPU sharing among processes, since this post was dedicated to Round Robin Scheduling and the Multilevel Queue. Single-CPU management and process management have thus been covered in these five posts, which can be used as a ready reference:

The previous post discussed CPU scheduling and gave an introduction to process management; this post takes that further into the introduction to process management required for WBUT 3rd-semester BCA candidates. In particular, this post is dedicated to the process state diagram and covers that entire aspect.

To start off with the details: a program, when it needs to be executed, goes through a process. This process undergoes several state changes over the entire operation, until the termination of the program; upon successful termination, the program delivers useful results to the user. The state changes the process progresses through are mentioned below in steps:

The process enters a state called NEW STATE.

The process then enters the READY STATE.

The process then goes to an ACTIVE STATE/RUNNING STATE (Execution of the program starts here).

The process ends with the HALT STATE/TERMINATED STATE (after all the program’s BURSTs are over; the program is either terminated forcibly or terminates normally).

The following PROCESS STATE DIAGRAM would show the entire operation:

Note that there is an intermediate state, known as the WAITING STATE/BLOCKED STATE. A process goes through this state when the CPU is busy interacting with the I/O devices during I/O operations (this is called an I/O BURST). During an I/O BURST, CPU time would otherwise be wasted; to avoid this, a pending job is brought up from the queue by the CPU scheduler and executed, first entering the READY STATE, and after that operation finishes, the original job is picked up and execution resumes midway. This way the CPU saves a significant amount of time and maintains efficiency. The following describes the process timeline:

CPU SCHEDULING

According to the process timeline, it can be observed that program execution starts with a CPU BURST and terminates with a CPU BURST as well. During the process progression, the CPU has to interact with I/O devices, and pending jobs are completed during this time: the original job is held in the WAITING STATE/BLOCKED STATE, and upon completion of the pending process, the original job is taken up again. In the previous post we discussed two types of CPU scheduling:

FCFS (First Come, First Served) CPU Scheduling.

SJF (Shortest Job First) CPU Scheduling.

SJF CPU Scheduling saves time and is an efficient way to schedule jobs; this is done by the CPU scheduler. Contrary to SJF, FCFS CPU Scheduling cannot save time, and no prediction of the next CPU BURST can be made (we need to determine, or calculate, the amount of time the next CPU BURST is going to take in order to maintain efficiency). FCFS CPU Scheduling has a strict rule: take a job, process and execute it, and only after the termination of that job can the next job be taken by the CPU. Hence no real scheduling for time efficiency is done with FCFS; the job which comes first is executed first. Prediction of the next CPU BURST instead applies to the SJF scheduling algorithm: since SJF takes time efficiency into consideration, the CPU must predict the amount of time it will have to spend on the NEXT CPU BURST.

CPU BURST: the amount of time spent by the CPU executing and processing a program.

I/O BURST: the amount of time spent by the CPU, on behalf of a program, interacting with I/O devices for I/O operations.

Now, since a calculated prediction of the NEXT CPU BURST can only be used with SJF CPU Scheduling, the following is considered the standard process to estimate the amount of time the NEXT CPU BURST, yet to occur, will take for a process:

Now, there is yet another special case of the SJF algorithm which can be used for efficiency purposes: PRIORITY SCHEDULING. In priority scheduling, priority levels can be set; the job with the highest priority is executed first, and low-priority jobs are executed only after the high-priority jobs have been executed. It can be said that priority can be considered the reciprocal of the predicted NEXT CPU BURST.

NON-PREEMPTIVE CPU SCHEDULING

Now, all of these scheduling algorithms, that is:

FCFS

SJF

Priority

are known as NON-PREEMPTIVE SCHEDULING. They are non-preemptive because once a job is allocated, the CPU cannot be taken away from it until its entire CPU BURST is complete. Only at the end of the current CPU BURST can another job be assigned to the CPU by the CPU scheduler.

PREEMPTIVE CPU SCHEDULING

PREEMPTIVE CPU SCHEDULING is the situation wherein a job is in the middle of its CPU BURST but another job can be allocated (the current job is stopped before its natural completion). Of the three scheduling algorithms we have seen so far, FCFS cannot be made preemptive, because FCFS follows the strict rule that jobs submitted first must be completed (executed) first, and other pending jobs may execute only after the normal completion of the first. The remaining two, the SJF and Priority Scheduling algorithms, can be modified into PREEMPTIVE CPU SCHEDULING.

Let’s take an example of jobs which are in the READY Queue with SJF CPU Scheduling modified to suit PREEMPTIVE CPU SCHEDULING that is SHORTEST REMAINING TIME FIRST CPU SCHEDULING (SRT):

J1 – 15
J2 – 9
J3 – 3
J4 – 5

These jobs are in the ready queue with their CPU BURST times. As per SJF CPU SCHEDULING, the job with the shortest execution time has to be picked, which in this case is J3, since J3 requires 3 units of time and the others require longer. Let's say execution has started and job J3 has spent 1 unit of time on the CPU:

J3

|———–|
1 UNIT

Job J3 still has 2 UNITS of time left to complete, but another job, J5, arrives requiring a CPU BURST of only 1 unit. So the READY Queue now holds:

Now, the CPU Scheduler checks the ready queue to find which job has the minimum time requirement, as per the policy of the SJF CPU SCHEDULING algorithm. It finds that the newly arrived job requires only 1 time unit, so it preempts job J3, which was in the middle of its execution, and allocates the CPU to job J5:

J3          J5         (J3: 2 UNITS LEFT)

|———–|———–|______________|
  1 UNIT     1 UNIT

So job J5 is executed, having been scheduled in the middle of job J3's execution. After job J5 completes, the CPU Scheduler again checks the READY QUEUE, where the CPU BURST requirements are now:

Assuming no new jobs have arrived in the READY QUEUE with a lower time requirement than what is left of job J3, the CPU Scheduler assigns the CPU to J3, which requires 2 more time units:

J3 J5 J3

|———–|———–|———————-|
1 UNIT 1 UNIT 2 UNIT

After the completion of the job J3, the CPU Scheduler yet again has to check the READY QUEUE:

J1 – 15
J2 – 9
J3 – 0
J4 – 5
J5 – 0

Now the modified SJF algorithm treats job J4 as the minimum and starts executing it, assuming no newer, shorter jobs have arrived in the READY QUEUE. This is how the SJF CPU Scheduling algorithm, with this modification, known as SRT (SHORTEST REMAINING TIME FIRST) CPU Scheduling, keeps CPU processing efficient.

There must also be a way to modify the Priority Scheduling algorithm to obtain an efficient result. This is done with the same logic, but keeping 'priority' in mind: the higher-priority job must be executed first. However, a pure priority scheme creates a conflict. If high-priority jobs keep arriving in the READY QUEUE, the lower-priority jobs will starve for CPU time and might never get executed. Since this situation is unwanted, a concept called 'aging' is used: while a job waits in the READY QUEUE, the CPU Scheduler increments its priority at regular intervals of time. The effective priority of a job is therefore no longer decided solely by the user or the administrator, but also by how long the job has been waiting for CPU time. Eventually a pending job reaches the maximum priority level, and when it does, the CPU starts executing that job, moving the current job to the WAIT/BLOCK status.

The time taken by the CPU Scheduler itself (which also has to be executed by the CPU, like any program) should be negligible compared to the CPU BURST times of the jobs. Now for a brief recap of what we have discussed: in the process state diagram, a process could migrate from the READY state to the ACTIVE state, from the ACTIVE state to the WAITING state, and from the WAITING state back to the READY state, until the job completes. With new concepts such as PREEMPTIVE CPU Scheduling, there is one more route for this migration of state: from the ACTIVE state directly back to the READY state. Hence below is the diagram which completes the Process State Diagram:

The ‘blue’ path is the new path made available by PREEMPTIVE CPU Scheduling. With respect to the process state diagram: jobs which require a large amount of CPU BURST time are called ‘CPU Bound Jobs‘, whereas jobs which require more I/O time and less CPU time are called ‘I/O Bound Jobs‘. Hence the two types of jobs are:

CPU Bound Jobs

I/O Bound Jobs

Now assume that all the jobs in the READY Queue are CPU Bound Jobs. This means I/O operations barely matter for any of them: very few time units are spent on I/O while much time is spent on CPU processing, so the CPU is kept very busy while the I/O devices remain idle. The opposite situation is also possible: all the jobs in the READY QUEUE are I/O Bound Jobs, so the time spent by the CPU is negligible while the I/O devices are very busy. Neither situation is desirable, because efficiency lies in keeping all of a system's components busy; in these cases either the CPU or the I/O devices sit idle, wasting time and resources, simply because the jobs were not scheduled with proper management in the first place. To avoid this situation, or rather to schedule the jobs efficiently, there are schedulers.

Technically, all of this means there must be some kind of management at the READY QUEUE to avoid situations where resource time is wasted. To address the problem, there are schedulers. There are two kinds: one takes a job from the NEW state to the READY state (moving the job into main memory, which is dubbed the READY QUEUE; the state diagram describes it as the READY state), and another takes a job from the READY state to the ACTIVE state. The scheduler that moves jobs from the READY state (READY QUEUE) to the ACTIVE state must act within the very short time span of a CPU BURST, and it must handle two routes (READY to ACTIVE, and, because of PREEMPTIVE CPU Scheduling, ACTIVE back to READY, as the blue path in the diagram above shows), so it must decide very quickly. This scheduler is known as the SHORT TERM SCHEDULER. The other scheduler, which takes a job from the NEW state to the READY state (READY QUEUE), is called the LONG TERM SCHEDULER. Therefore the two kinds of schedulers are:

LONG TERM SCHEDULER

SHORT TERM SCHEDULER

Since we discussed that the READY QUEUE should hold neither only CPU Bound Jobs nor only I/O Bound Jobs, the LONG TERM SCHEDULER is responsible for deciding which mix of jobs is sent to the READY state (READY QUEUE). The LONG TERM SCHEDULER runs infrequently and has plenty of time to decide, whereas the SHORT TERM SCHEDULER has very little time; this is why we can afford a complex LONG TERM SCHEDULER but the SHORT TERM SCHEDULER must be fast.

In the last post I introduced process management, but I forgot to cover the first module. This post covers that module and also answers the questions related to the Operating Systems section – Introduction and System Structure. The title might suggest an in-depth treatment, and yes, the post is about the introductory part, devoted to answering analytically deduced questions which were asked in the years before 2014. Since no one else in WBUT BCA has done a good job at sharing, I will go ahead with my take at this.

Following are the frequently asked questions which might just as well appear in the 2014 WBUT BCA exam. This post may be beneficial not only to those currently taking the examinations but also to other students, researchers, or examiners developing what they have already documented on hard copy. I believe sharing with the world should be the utmost priority.

The questions related with ‘analysis’ (prediction of possibilities of their coming up in year 2014) are as follows:

1.) Differentiate between Logical and Physical Address Space.
2.) What is Operating System? State the importance of Operating System.
3.) Discuss the relationship of Operating System to basic computer hardware. Explain the hierarchy of the Operating System.
4.) Write a Short Note on Device Management and Virtual Machine.

Now there are these objective questions along with answers (a one liner) which I think might help with the Objective based questions:

Which is not a layer of the operating system? ‘Critical Section’ is not a layer of the operating system; the others, like ‘Kernel’, ‘Shell’, and ‘Application Program’, are layers of the operating system.

The Operating System is responsible for ‘controlling peripheral devices such as monitors, printers, disk drives, etc. It also helps detect errors in user programs. It provides an interface which allows users to choose programs to run and to manipulate files. Pretty much everything‘. Anything else would be a wrong answer; almost every time this was asked objectively, the answer belonged to one of these.

When an interrupt occurs, ‘execution of the interrupted process is resumed after processing the interrupt‘; anything else would be a wrong attempt at answering the question.

What is a Shell? – Shell is a command interpreter, anything else is wrong.

Multiprogramming Systems ‘execute more jobs in the same time period‘ (not each job faster).

Multiprogramming is ‘more than one program executing on a machine‘.

In System mode, the machine is executing operating system instructions. So it is in system mode that the operating system executes OS instructions. Other modes are Normal, Safe, and User, but none of those would be correct if asked objectively about executing system instructions.

That being done, we now have a very basic touch on the objective part. The questions given above follow the subjective analysis, and based on this analysis I will provide the answers in this post. The subjective questions predicted for 2014 have already been detailed earlier in this post, so I will go straight to answering them in one go.

1.) Differentiate between logical v/s physical address space.

Answer: First off, let’s look at what an address space really is. By definition, an address space is the amount of memory allocated for all possible addresses of a computational entity, for example a file, a device, a server, or a networked computer. An address space may refer to a range of addresses which are available to the processor or to a process. This range of addresses might be logical or physical.

Now, to answer the second part and differentiate between logical and physical address space, we need to know what logical and physical address spaces are. Logical addresses are the addresses generated by the CPU. From the perspective of the running program, an item appears to be at the address which is logically assigned by the CPU. The user program never sees physical addresses; it always refers to the logical addresses generated by the CPU. In other words, the logical address space is the set of logical addresses generated by a program. Logical addresses need to be mapped to physical addresses before they are used, and this mapping is handled by a hardware device called the Memory Management Unit (MMU). As for the physical address space: a physical address, or real address, is the address seen by the memory unit, and it allows the data bus to access a particular memory cell in main memory. Logical addresses generated by the CPU while executing a program are mapped into physical addresses using the MMU.

The difference between Logical Address Space and Physical Address Space: a logical address is the address generated by the CPU (from the perspective of a running program), whereas a physical address (or real address) is the address seen by the memory unit, which allows the data bus to access a particular memory cell in main memory. All logical addresses need to be mapped into physical addresses by the MMU before they can be used. Physical and logical addresses are the same when using compile-time and load-time address binding, but they differ when using execution-time address binding.

2.) What is an Operating System? State the Importance of the Operating System.

Answer: The Operating System is the low-level software, a collection of programs and utilities, which supports basic functions such as scheduling tasks and controlling peripherals. It sits between the user and the hardware, letting the user control the machine through an interface and produce or generate output. The operating system manages the I/O operations, handles interrupts, manages the file system and storage space, and additionally provides utilities which can be handy for a user to automate tasks.

Importance of the Operating System: The importance of an Operating System is that it provides the user with the power to create programs and accounts, execute a program, access files in a controlled way, access additional systems, detect errors, and access I/O devices in an automated way of working. Users depend on their Operating Systems to automate tasks which are repetitive in nature and to detect, handle, and contain errors without requiring the user to take care of the low-level details. It interacts with the hardware and allows one piece of hardware to instruct another to progressively execute a job/task and produce useful results as output.

3.) Discuss the Relationship of operating system to basic hardware. Explain the hierarchy structure of the operating system.

Answer: The basic computer hardware consists of the monitor, the CPU, the keyboard, memory, and other I/O and secondary devices. The operating system manages all of these resources, which is why it is also termed a ‘Resource Manager’. The efficiency with which an operating system handles all these resources is remarkable, and it also handles the scheduling, i.e. deciding which job, depending on its priority, must be executed first by the hardware involved and which jobs are to be queued.

The structure of the Operating System is organized the following way:

This resembles the following image, which lays out the functionality associated with each layer:

The application programs are what the users of the operating system interact with directly. The system program layer consists of the compiler, assembler, linker, library routines, etc. The kernel directly interacts with the hardware and provides services such as hardware drivers; it comprises the I/O drivers, CPU scheduler, pager, swapper, etc. Altogether, the structure of the Operating System manages the hardware resources in a timed and efficient manner.

4.) Write a Short Note on Device Management and Virtual Machine.

Answer: Device Management: The operating system has an important role in handling devices and managing them. The devices can be managed by the operating system via three distinct ways:

a.) Dedicated
b.) Shared
c.) Virtual

Dedicated devices include tape drives, plotters, etc.; shared devices include printers, hard drives, etc.; and virtual devices include virtual printers (spooling), etc. The status of channels, control units, and devices must be checked by the device management routines embedded in the operating system. Some devices are capable of doing an I/O operation without any support from the channel or the control units; however, most devices require the control unit and the support of channels.

Virtual Machine: A Virtual Machine is a virtualized operating system running within the operating system. A virtual machine works the same way an operating system installed on a physical hard disk would, but relies on a control program called the VMM. VMM stands for Virtual Machine Monitor, and it is responsible for linking the virtual machine, often called the ‘guest operating system‘, to the underlying primary hardware. The hardware partition for the virtual operating system is also virtual and depends on the VMM. The advantages of virtualization include: protection and isolation between guest systems, the ability to run multiple operating systems on one physical machine, and a safe environment for operating system development and testing.

This post was all about the introductory part of the operating system syllabus for BCA WBUT, 2014. I will come up with a continuation of the posts related to process management, since I have been working on them. Related answers on process management in Operating Systems can be found later in this blog. I will first detail them and then come up with the analysis (prediction) for the coming 2014 Winter exams. Stay chilled this winter and have a great start to the week ahead. Taking a leave!