This course will introduce you to modern operating systems. We will focus on UNIX-based operating systems, though we will also learn about alternative operating systems, including Windows. The course will begin
with an overview of the structure of modern operating systems. Over the course of the subsequent units, we will discuss the history of modern computers, analyze in detail each of the major components of an operating system (such as processes and threads),
and explore more advanced topics in the field, including memory management and file input/output. The class will conclude with a discussion of various system-related security issues.

First, read the course syllabus. Then, enroll in the course by clicking "Enroll me in this course". Click Unit 1 to read its introduction and learning outcomes. You will then see the learning materials and instructions on how to use them.

We will begin this course with a high-level introduction to Operating Systems (OS). The Operating System acts as a platform of information exchange between your computer's hardware and the applications running on it. Most people are familiar with the Windows Operating System family (2000, XP, Vista, etc.) or Apple's suite of Operating Systems (Leopard, Snow Leopard, etc.), but for the purposes of this course, we will focus on UNIX: the family of operating systems, including open-source variants, deployed all over the world in both personal and commercial systems. First, we will start with a discussion of some of the earliest Operating Systems, including those considered precursors to the Operating Systems that we are familiar with today. Then, we will review the general OS structure and give a basic functional overview. We will conclude this module with a discussion of the modern Operating Systems and devices that we are familiar with.

We will discuss two central building blocks of modern operating systems: Processes and Threads. Processes (instances of a running computer program) and threads (specific tasks running within a program) are integral to understanding how an OS executes a program and how information is communicated between each of the computer's architectural layers. We will start with an overview of each concept, including definitions, uses, and types. We will then discuss the commonalities and differences between processes and threads. We will conclude this unit with a discussion of Context Switches and the important role they play in CPU scheduling, which will be discussed in more depth in Unit 4.

Because a number of different entities need to access the same data, it is important to learn how to maintain a consistent view of data across the OS. This is why we need a good synchronization management system. We will begin this section with an overview of why synchronization is so important to an Operating System and the problems that can arise if synchronization is not handled properly. The discussion will continue with an overview of Race Conditions (system flaws in which the output of a given process depends problematically on the timing or ordering of other events). Finally, we will cover Semaphores as a way of preventing Race Conditions, along with more advanced alternatives to Semaphores, such as Monitors and Messages.

Central Processing Unit (CPU) scheduling deals with having more processes and threads than processors to handle them: the OS must determine which jobs the CPU is going to handle, and in what order. A good understanding of how a CPU scheduling algorithm works is essential to understanding how an Operating System works; a good algorithm will optimally allocate resources, allowing efficient execution of all running programs. A poor algorithm, however, could result in any number of issues, from processes being "starved out" to inefficient execution, resulting in poor computer performance. In this unit, we will first discuss the CPU scheduling problem statement and the goals of a good scheduling algorithm. Then, we will move on to learning about types of CPU scheduling, such as preemptive and non-preemptive. Finally, we will conclude the module with a discussion of some of the more common algorithms found in UNIX-based Operating Systems.

Deadlock is a paralyzing process state that results from improper CPU scheduling, process management, or synchronization management. In a deadlocked state, processes are blocked indefinitely while competing for system resources or waiting to communicate with one another. Although deadlock cannot be avoided 100% of the time, it is important to know how to make the deadlocked state less likely and how to recover from it once it has occurred. We will build upon the previous two units, on CPU Scheduling and on Processes and Threads, when discussing Deadlock. First, we will establish a working definition of deadlock and the conditions under which it presents itself. Then, we will talk about how to prevent and avoid deadlock. Finally, we will learn about deadlock detection, as well as methods for recovering from a deadlocked state.

Memory is the oil that keeps the computer running smoothly. It is present in various forms throughout the entire computer system. As a software developer, you will need a solid understanding of the role memory plays so that you can use memory efficiently in your programs and understand what is going on "under the hood" should a problem arise. We will discuss the role of memory in an Operating System, first with an overview of the memory hierarchy and how memory and the OS interact with each other. Next, we will move on to discussing how memory is allocated for different purposes. Finally, we will discuss the two main topics regarding memory access: segmentation and paging.

File systems play an important role in the operating system. From the user's perspective, the file system is a simple filing cabinet. Behind the scenes, however, there is much complexity. We will present a general overview of file systems, take a look at file allocation methods, and examine disk allocation algorithms.

Security is an important part of operating systems. Computer systems face many threats today. This unit will begin with a brief overview of security issues, look at types of malware, and discuss several security techniques, such as access controls, intrusion detection, and malware defense.

Computer networking has become an increasingly important field. When we discuss networking, we are not just referring to connecting computers together in one location but also to broader connections through the Internet.

Please take a few minutes to give us feedback about this course. We appreciate your feedback, whether you completed the whole course or even just a few resources. Your feedback will help us make our courses better, and we use your feedback each time we make updates to our courses.

If you come across any urgent problems, email contact@saylor.org or post in our discussion forum.

Take this exam if you want to earn a free Course Completion Certificate.

To receive a free Course Completion Certificate, you will need to earn a grade of 70% or higher on this final exam. Your grade for the exam will be calculated as soon as you complete it. If you do not pass the exam on your first try, you can take it again as many times as you want, with a 7-day waiting period between each attempt.

Once you pass this final exam, you will be awarded a free Course Completion Certificate.

Take this exam if you want to earn a Proctor-Verified Course Completion Certificate.

This optional final exam requires a proctor and a proctoring fee of $25. To receive a proctor-verified certificate, you will need to earn a grade of 70% or higher on this final exam. Your grade for the exam will be calculated as soon as you complete it. If you do not pass the exam on your first try, you can take it again a maximum of 3 times, with a 14-day waiting period between each attempt.

Once you pass this final exam, you will be awarded a Proctor-Verified Course Completion Certificate.