At the end of each week, students should be able to do the following:
## Week 1
### Terms
processor, main memory, I/O modules, system bus, instruction fetch, interrupt, hit ratio, locality of reference, cache, kernel, job, monitor, multitasking, time sharing, virtual address, virtual memory, trap, system call, signal
### Outcomes
* (✓) Identify the four main structural elements of a computer.
* (✓) Draw a diagram showing the instruction execution cycle.
* (✓) Draw the memory hierarchy for a computer.
* (✓) Explain the concept of direct memory access.
* (✓) Explain how an operating system serves as a user interface, resource manager, and supports change.
* (✓) Explain how operating systems changed from serial processing through batch scheduling to modern structures.
* (✓) Draw a diagram showing the structure of a modern UNIX system.
* (✓) Explain the concept of a loadable module in Linux.
* (✓) Draw a picture showing the relationship between Linux kernel components.
* (✓) Construct source code which performs a system call (a sketch follows this list).
* (✓) Explain the concept of a trap.
* (✓) List some examples of system calls.
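A minimal sketch for the system-call outcome above (the message text is illustrative): both `write()` and `getpid()` are thin libc wrappers that trap from user mode into the kernel.

```c
/* syscall_demo.c - minimal sketch: invoke system calls through their libc wrappers. */
#include <stdio.h>
#include <unistd.h>     /* write(), getpid() */

int main(void)
{
    /* write() traps into the kernel to send bytes to file descriptor 1 (stdout). */
    const char msg[] = "hello from user space\n";
    write(STDOUT_FILENO, msg, sizeof(msg) - 1);

    /* getpid() is another system call: it asks the kernel for this process's id. */
    printf("my pid is %d\n", (int)getpid());
    return 0;
}
```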
## Week 2
### Terms
process, program code, data, process control block, dispatcher, spawning, process table, user mode, system mode, kernel mode, trap, fork
### Outcomes
* (✓) Explain the contents of the text section, data section, heap, and stack of a program
* (✓) Draw a graphical representation of a process in memory
* (✓) Explain the concept of process state
* (✓) Draw a state transition diagram for process states
* (✓) List the contents of a process control block
* (✓) Explain what the process scheduler is responsible for doing within the operating system.
* (✓) Be able to obtain information about the processes running under Linux.
* (✓) Explain the relationship between process ids, groups, and the general process hierarchy in Unix
* (✓) List the modes of operation for an operating system.
* (✓) Explain how to create a process.
* (✓) Explain the concept of the parent-child relationship between processes.
* (✓) Explain the purpose for the UNIX fork, wait, and exec system calls.
* Explain what happens when the exit system call is invoked.
* (✓) Construct programs using the fork, wait, and exec UNIX system calls (a fork/exec/wait sketch follows this list).
* (✓) List two methods for sharing information between processes
* (✓) Draw a graphical representation of processes communicating using shared memory and message passing.
* (✓) Explain the flow necessary to use a shared memory partition
* (✓) Construct a rudimentary program which uses POSIX pipes to communicate between processes (a pipe sketch also follows this list).
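A minimal sketch of the fork/exec/wait pattern referenced above; the program the child runs (`/bin/ls`) is an arbitrary choice.

```c
/* fork_exec_wait.c - minimal sketch of the fork/exec/wait pattern. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                 /* create a child process */

    if (pid < 0) {
        perror("fork");
        exit(EXIT_FAILURE);
    } else if (pid == 0) {
        /* Child: replace this process image with /bin/ls. */
        execl("/bin/ls", "ls", "-l", (char *)NULL);
        perror("execl");                /* only reached if exec fails */
        _exit(EXIT_FAILURE);
    } else {
        /* Parent: wait for the child to terminate and report its status. */
        int status;
        waitpid(pid, &status, 0);
        if (WIFEXITED(status))
            printf("child %d exited with status %d\n", (int)pid, WEXITSTATUS(status));
    }
    return 0;
}
```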
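A sketch of communication over a POSIX pipe, as referenced above; the message text is illustrative.

```c
/* pipe_demo.c - sketch: parent sends a message to its child over a POSIX pipe. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd[2];                          /* fd[0] = read end, fd[1] = write end */
    if (pipe(fd) == -1) {
        perror("pipe");
        exit(EXIT_FAILURE);
    }

    pid_t pid = fork();
    if (pid == 0) {                     /* child: read from the pipe */
        close(fd[1]);                   /* close the unused write end */
        char buf[64];
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("child received: %s\n", buf);
        }
        close(fd[0]);
        _exit(EXIT_SUCCESS);
    }

    /* parent: write into the pipe, then wait for the child */
    close(fd[0]);                       /* close the unused read end */
    const char msg[] = "hello through the pipe";
    write(fd[1], msg, strlen(msg));
    close(fd[1]);
    wait(NULL);
    return 0;
}
```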
## Week 3
### Terms
pipe, file descriptor, read, write, big-endian, little-endian, socket, IP address, stub, marshalling, client, server
### Outcomes
* (✓) List two methods for sharing information between processes
* (✓) Draw a graphical representation of processes communicating using shared memory and message passing.
* Explain the flow necessary to use a shared memory partition
* (✓) Construct a rudimentary program which uses POSIX pipes to communicate between processes.
* (✓) Define a socket
* (✓) Explain the purpose for sockets in a system.
* Explain the difference between "big-endian" and "little-endian".
* (✓) Construct a simple program which communicates using sockets (a sketch follows this list).
* (✓) List the advantages of using a socket over a pipe.
* Define the acronym RPC
* Explain how an RPC executes, specifically in regards to stubs and the concept of marshalling.
* Construct a simple application in Linux using RPC.
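A sketch of the socket outcome above: a TCP client that connects, sends a message, and reads a reply. The address 127.0.0.1 and port 5000 are placeholders for whatever server the exercise provides; note `htons()` converting the port number to network (big-endian) byte order.

```c
/* tcp_client.c - minimal sketch of a TCP client; the server address and port are placeholders. */
#include <arpa/inet.h>      /* htons(), inet_pton() */
#include <netinet/in.h>     /* struct sockaddr_in */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);       /* TCP socket */
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in server = {0};
    server.sin_family = AF_INET;
    server.sin_port   = htons(5000);                /* host to network (big-endian) byte order */
    inet_pton(AF_INET, "127.0.0.1", &server.sin_addr);

    if (connect(fd, (struct sockaddr *)&server, sizeof(server)) < 0) {
        perror("connect");
        close(fd);
        return 1;
    }

    const char msg[] = "hello over a socket\n";
    send(fd, msg, strlen(msg), 0);                  /* send a request */

    char buf[128];
    ssize_t n = recv(fd, buf, sizeof(buf) - 1, 0);  /* read the reply, if any */
    if (n > 0) {
        buf[n] = '\0';
        printf("server replied: %s", buf);
    }
    close(fd);
    return 0;
}
```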
## Week 4
### Terms
lightweight process, user level threads, kernel threads, atomic operation, critical section, deadlock, livelock, mutual exclusion, race condition, starvation
### Outcomes
* (✓) Explain the concept of a thread
* (✓) Draw a representation of a single threaded process and a multi-threaded process.
* (✓) Compare and contrast the advantages and disadvantages of threads versus processes.
* (✓) Explain how a multi-threaded program can be useful in a multi-core environment.
* (✓) Explain the difference between kernel threads and user threads
* Explain the difference between many to one, one to one, and many to many models of thread behaviour.
* (✓) Using C, construct a simple multithreaded POSIX-compliant application (a pthread sketch follows this list).
* (✓) Explain the difference between asynchronous cancellation and deferred cancellation.
* (✓) Explain the concept of a race condition and justify the need for protection
* (✓) Explain how a race condition may corrupt data
* (✓) Explain why a simple lock variable will fail to synchronize a system properly.
* Demonstrate Dekker's algorithm.
* (✓) Demonstrate Peterson's solution to race conditions through an example (a sketch follows this list).
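A sketch of the multithreaded outcome above using POSIX threads; the number of threads and the work they do are arbitrary.

```c
/* threads_demo.c - sketch: create and join POSIX threads.
 * Build with: gcc -pthread threads_demo.c -o threads_demo
 */
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4

/* Each thread runs this function; the argument carries the thread's index. */
static void *worker(void *arg)
{
    long id = (long)arg;
    printf("thread %ld running\n", id);
    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREADS];

    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&tid[i], NULL, worker, (void *)i);

    for (int i = 0; i < NTHREADS; i++)
        pthread_join(tid[i], NULL);     /* wait for each thread to finish */

    return 0;
}
```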
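A sketch of Peterson's solution for two threads. The shared `flag` and `turn` variables are accessed with C11 sequentially consistent atomics, since the textbook version with plain shared variables is not safe on modern reordering hardware; the iteration count is arbitrary.

```c
/* peterson_demo.c - sketch of Peterson's solution for two threads.
 * Build with: gcc -pthread peterson_demo.c -o peterson_demo
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_bool flag[2];     /* flag[i] is true while thread i wants to enter */
static atomic_int  turn;        /* which thread yields if both want to enter */
static long counter;            /* shared data protected by the algorithm */

static void *worker(void *arg)
{
    int me = (int)(long)arg;
    int other = 1 - me;

    for (int i = 0; i < 100000; i++) {
        /* entry section */
        atomic_store(&flag[me], true);
        atomic_store(&turn, other);
        while (atomic_load(&flag[other]) && atomic_load(&turn) == other)
            ;                             /* busy wait */

        counter++;                        /* critical section */

        atomic_store(&flag[me], false);   /* exit section */
    }
    return NULL;
}

int main(void)
{
    pthread_t t0, t1;
    pthread_create(&t0, NULL, worker, (void *)0L);
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("counter = %ld (expected 200000)\n", counter);
    return 0;
}
```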
## Week 5
### Terms
busy wait, semaphore, mutex, spinlock, starvation, condition variable, blocking send, blocking receive, nonblocking send, nonblocking receive, direct addressing, indirect addressing, mailbox, deadlock, resource allocation graph, resource, reusable resource, consumable resource
### Outcomes
* (✓) Compare and contrast semaphores and mutexes.
* (✓) Explain the difference between a binary semaphore and a counting semaphore.
* (✓) Construct code which uses a mutex for synchronization (a sketch covering both a mutex and a semaphore follows this list).
* (✓) Construct code which uses a semaphore for synchronization.
* (✓) List the limitations of semaphores and mutexes when synchronizing processes and threads.
* (✓) Define the concept of a monitor.
* (✓) Explain the advantages and disadvantages of a monitor versus other constructs.
* (✓) Explain the advantage of using message passing for synchronization versus other systems.
* (✓) Explain the concept of a monitor.
* (✓) Construct a resource allocation graph.
* (✓) List the conditions necessary for a deadlock to occur.
* (✓) Explain the dining philosophers problem and how it results in a potential deadlock.
* (✓) Construct a resource allocation graph from a given problem.
* (✓) Analyze a resource allocation graph to determine if a deadlock is present.
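A sketch covering the mutex and semaphore outcomes above, assuming unnamed POSIX semaphores are available (as on Linux): the mutex protects a shared counter, while the counting semaphore bounds how many workers run at once; the thread count and semaphore value are arbitrary.

```c
/* sync_demo.c - sketch: a mutex protecting shared data and a counting semaphore
 * limiting concurrency. Build with: gcc -pthread sync_demo.c -o sync_demo
 */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define NTHREADS 4

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static sem_t slots;             /* counting semaphore: number of free "slots" */
static long counter;            /* shared data protected by the mutex */

static void *worker(void *arg)
{
    (void)arg;

    sem_wait(&slots);                   /* acquire a slot (blocks when the count is 0) */
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);      /* enter the critical section */
        counter++;
        pthread_mutex_unlock(&lock);    /* leave the critical section */
    }
    sem_post(&slots);                   /* release the slot */
    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREADS];

    sem_init(&slots, 0, 2);             /* at most two workers active at a time */
    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&tid[i], NULL, worker, NULL);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(tid[i], NULL);

    printf("counter = %ld (expected %ld)\n", counter, (long)NTHREADS * 100000);
    sem_destroy(&slots);
    return 0;
}
```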
## Week 6
### Terms
long-term scheduling, medium-term scheduling, short-term scheduling, I/O scheduling
### Outcomes
* (✓) Explain the key aspect of multiprogramming
* (✓) Construct a queuing diagram for scheduling.
## Week 7
### Terms
turnaround time, response time, deadline, throughput, processor utilization, fairness, resource balancing, preemptive, non-preemptive, convoy effect, quantum, nice
### Outcomes
* (✓) Explain the CPU and I/O burst cycle used for scheduling.
* (✓) Recognize the distribution of CPU activities on a system.
* (✓) Explain the relationship between an I/O-bound program and a CPU-bound program in terms of CPU bursts.
* (✓) List the five reasons why the scheduler may be invoked.
* (✓) Compare and contrast preemptive and non-preemptive scheduling.
* (✓) Explain the purpose for the dispatcher and scheduler within the operating system.
* (✓) Define dispatch latency
* (✓) Define CPU utilization, throughput, turnaround time, waiting time, and response time in terms of their impact on scheduling.
* (✓) Explain the operation of a FIFO scheduler
* (✓) Calculate the average waiting time for a given set of processes scheduled using FCFS scheduling (a worked example follows this list).
* (✓) Construct a Gantt chart for a given set of processes.
* (✓) Calculate the throughput for a given system.
* (✓) Explain the convoy effect of FCFS scheduling.
* (✓) List the advantages and disadvantages of FCFS scheduling.
* (✓) Be able to calculate the average waiting time for a set of processes.
* (✓) Be able to calculate the throughput for a set of processes.
* (✓) Be able to calculate the turnaround time for a set of processes.
* (✓) Explain the algorithm for SJF Scheduling
* (✓) Explain how exponential averaging can be used to estimate the length of the next CPU burst.
* Calculate the exponential average based on a series of CPU bursts and an initial estimate (a worked example follows this list).
* (✓) List the advantages and disadvantages of SJF scheduling
* (✓) Explain priority scheduling.
* (✓) Using priority scheduling, draw a schedule for a set of jobs
* (✓) Define starvation in terms of processor scheduling
* (✓) Demonstrate how aging a process's priority can solve the problem of starvation.
* (✓) Explain the impact of the quantum on round robin scheduling.
* (✓) Explain the operation of the Linux 2.6 O(1) scheduler.
* (✓) Explain the concept of the Linux 2.6.23 [completely fair scheduler](https://web.archive.org/web/20180618020434/https://www.ibm.com/developerworks/linux/library/l-completely-fair-scheduler/).
* Explain the operation of the traditional UNIX scheduler.
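A worked FCFS example for the outcomes above (the process set is hypothetical): suppose P1, P2, and P3 arrive at time 0 in that order with CPU bursts of 24 ms, 3 ms, and 3 ms. The Gantt chart is

```
| P1 (0-24) | P2 (24-27) | P3 (27-30) |
```

The waiting times are 0, 24, and 27 ms, so the average waiting time is (0 + 24 + 27)/3 = 17 ms; the turnaround times are 24, 27, and 30 ms (average 27 ms); the throughput is 3 processes in 30 ms, or 0.1 processes per ms. If the short jobs ran first (P2, P3, P1), the average waiting time would drop to (0 + 3 + 6)/3 = 3 ms, which illustrates the convoy effect.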
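For the exponential-averaging outcome, the estimate of the next CPU burst is tau(n+1) = alpha * t(n) + (1 - alpha) * tau(n), where t(n) is the most recent measured burst and tau(n) is the previous estimate. With hypothetical values alpha = 0.5, an initial estimate tau(0) = 10 ms, and measured bursts of 6 ms and 4 ms: tau(1) = 0.5 * 6 + 0.5 * 10 = 8 ms and tau(2) = 0.5 * 4 + 0.5 * 8 = 6 ms, so the estimate tracks the recent, shorter bursts.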
## Week 9 and following
### Outcomes
* (✓) Define a logical address space for a process
* (✓) Explain the operation of a page table.
* (✓) Compare and contrast real memory and virtual memory.
* (✓) Explain what happens when memory is swapped (paged)
* (✓) Define backing store.
* (✓) Define victim frame.
* (✓) Explain the concept of page replacement.
* (✓) Given an address, determine the offset and page number for a logical address (a worked example follows this list).
* (✓) Given an address and page table state, convert a logical address into a physical address.
* (✓) Understand the impact of a TLB on paging.
* (✓) Justify the reasoning for page sizes to be powers of 2.
* (✓) Explain the concept of copy on write.
* (✓) Explain how memory is handled when an initial fork operation occurs.
* (✓) Given the hit ratio, calculate the effective memory access time for a TLB-based system (a worked example follows this list).
* (✓) Calculate the swap time for a given system.
* (✓) List the steps necessary to handle a page fault.
* Explain the concept of thrashing.
* Explain how page size and the number of allocated frames each affect the page fault rate.
* Explain the purpose for the dirty bit within a virtual memory system.
* Determine the number of hits and misses for a given address trace.
* (✓) Explain how hardware determines if an address is valid for a memory reference
* (✓) Interpret a Linux map file to determine where in memory a given variable is stored.
* (✓) Interpret a map file to understand the protection given to a segment of memory.
* (✓) Explain how a map file shows segmentation in the development of a program.
* Compare and contrast dynamic and static linking of modules
* Compare and contrast compile time, load time, and execution time binding of programs
* Explain in Linux how to statically link libraries into a program.
* Identify in a map file the differences between statically and dynamically linked code and programs.
* (✓) Define the concepts of transfer rate, seek time, and rotational latency.
* Explain the concept of a disk partition.
* (✓) List the benefits of solid-state devices versus traditional magnetic hard drives.
* Explain how file system software is structured and the layers present in such a system.
* List the basic elements of a file directory.
* Compare and contrast absolute and relative path names.
* Interpret the access rights given to a file under Linux.
* Compare and contrast single level directories, two level directories, tree structured directories, acyclic-graph directories, and general graph structures.
* Explain the relationship between contiguous allocation, linked allocation, and indexed allocation within a file system
* Explain how free space is managed on a disk.
* Explain how BSD uses inodes and files to store data.
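A worked example for the address-translation outcomes above (the page size, address, and frame number are hypothetical): with 4 KB pages (2^12 bytes), the low 12 bits of a logical address are the offset and the remaining high bits are the page number. For the logical address 0x2ABC, the offset is 0xABC and the page number is 0x2ABC >> 12 = 2. If the page table maps page 2 to frame 7, the physical address is 7 * 4096 + 0xABC = 0x7ABC. Because the page size is a power of 2, this split is a simple shift and mask rather than a division, which is why page sizes are chosen as powers of 2.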
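A worked example for the TLB effective-access-time outcome (the timings and hit ratio are hypothetical): if a TLB lookup takes 20 ns, a memory access takes 100 ns, and the hit ratio is 80%, then a TLB hit costs 20 + 100 = 120 ns and a miss costs 20 + 100 + 100 = 220 ns (one extra access to read the page table entry). The effective access time is 0.80 * 120 + 0.20 * 220 = 96 + 44 = 140 ns.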
Acknowledgement: Outcomes originally by Dr. Schilling