# CS3841 Outcomes
## Week 1

### Terms

processor, main memory, I/O modules, system bus, instruction fetch, interrupt, hit ratio, locality of reference, cache, kernel, job, monitor, multitasking, time sharing, virtual address, virtual memory, trap, system call, signal

### Outcomes

* (✓) Identify the four main structural elements of a computer.
* (✓) Draw a diagram showing the instruction execution cycle.
* (✓) Draw the memory hierarchy for a computer.
* (✓) Explain the concept of direct memory access.
* (✓) Explain how an operating system serves as a user interface, acts as a resource manager, and supports change.
* (✓) Explain how operating systems changed from serial processing through batch scheduling to modern structures.
* (✓) Draw a diagram showing the structure of a modern UNIX system.
* (✓) Explain the concept of a loadable module in Linux.
* (✓) Draw a picture showing the relationship between Linux kernel components.
* (✓) Construct source code which performs a system call (sketch below).
* (✓) Explain the concept of a trap.
* (✓) List some examples of system calls.

## Week 2

### Terms

process, program code, data, process control block, dispatcher, spawning, process table, user mode, system mode, kernel mode, trap, fork

### Outcomes

* (✓) Explain the contents of the text section, data section, heap, and stack of a program.
* (✓) Draw a graphical representation of a process in memory.
* (✓) Explain the concept of process state.
* (✓) Draw a state transition diagram for process states.
* (✓) List the contents of a process control block.
* (✓) Explain what the process scheduler is responsible for doing within the operating system.
* (✓) Be able to obtain information about the processes which are running under Linux.
* (✓) Explain the relationship between process IDs, process groups, and the general process hierarchy in UNIX.
* (✓) List the modes of operation for an operating system.
* (✓) Explain how to create a process.
* (✓) Explain the concept of the parent-child relationship between processes.
* (✓) Explain the purpose of the UNIX fork, wait, and exec system calls.
* Explain what happens when exit is called.
* (✓) Construct programs using the fork, wait, and exec UNIX system calls (sketch below).
* (✓) List two methods for sharing information between processes.
* (✓) Draw a graphical representation of processes communicating using shared memory and message passing systems.
* (✓) Explain the flow necessary to use a shared memory partition.
* (✓) Construct a rudimentary program which uses POSIX pipes to communicate between processes.

## Week 3

### Terms

pipe, file descriptor, read, write, big endian, little endian, socket, IP address, stub, marshalling, client, server

### Outcomes

* (✓) List two methods for sharing information between processes.
* (✓) Draw a graphical representation of processes communicating using shared memory and message passing systems.
* <strike>Explain the flow necessary to use a shared memory partition</strike>
* (✓) Construct a rudimentary program which uses POSIX pipes to communicate between processes (sketch below).
* (✓) Define a socket.
* (✓) Explain the purpose for sockets in a system.
* Explain the difference between "big-endian" and "little-endian".
* (✓) Construct a simple program which communicates using sockets (sketch below).
* (✓) List the advantages of using a socket over a pipe.
* <strike>Define the acronym RPC</strike>
* <strike>Explain how an RPC executes, specifically in regards to stubs and the concept of marshalling.</strike>
* <strike>Construct a simple application in Linux using RPC.</strike>
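For the Week 1 outcome on performing a system call, a minimal sketch in C. It uses the `write(2)` and `getpid(2)` wrappers, which trap from user mode into the kernel; any other POSIX system call would illustrate the same idea.

```c
/* Minimal sketch: invoking system calls from C.
 * write(2) and getpid(2) trap into the kernel; printf() would
 * eventually call write() too, but here the call is explicit. */
#include <unistd.h>
#include <stdio.h>

int main(void)
{
    const char msg[] = "hello from a system call\n";

    /* write() is a thin wrapper that traps into the kernel */
    write(STDOUT_FILENO, msg, sizeof(msg) - 1);

    /* getpid() is another system call: it asks the kernel for our PID */
    printf("my pid is %d\n", (int)getpid());
    return 0;
}
```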
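For the Week 2 outcome on fork, wait, and exec, a minimal sketch: the parent forks a child, the child replaces its image with `/bin/ls` (an arbitrary program chosen for the example), and the parent waits for it to finish.

```c
/* Minimal sketch of fork / exec / wait with minimal error handling. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();              /* duplicate this process */

    if (pid < 0) {
        perror("fork");
        exit(EXIT_FAILURE);
    } else if (pid == 0) {
        /* child: replace this image with /bin/ls */
        execl("/bin/ls", "ls", "-l", (char *)NULL);
        perror("execl");             /* only reached if exec fails */
        _exit(EXIT_FAILURE);
    } else {
        int status;
        waitpid(pid, &status, 0);    /* parent: wait for the child */
        printf("child %d exited with status %d\n",
               (int)pid, WEXITSTATUS(status));
    }
    return 0;
}
```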
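For the POSIX pipe outcome (Weeks 2 and 3), a minimal sketch in which a parent writes one message through a pipe to its child.

```c
/* Minimal sketch: a pipe between a parent and its child.
 * The parent writes a message; the child reads it and prints it. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fds[2];                      /* fds[0] = read end, fds[1] = write end */
    char buf[64];

    if (pipe(fds) == -1) {
        perror("pipe");
        return 1;
    }

    if (fork() == 0) {               /* child: read from the pipe */
        close(fds[1]);               /* close the unused write end */
        ssize_t n = read(fds[0], buf, sizeof(buf) - 1);
        buf[n > 0 ? n : 0] = '\0';
        printf("child read: %s\n", buf);
        close(fds[0]);
        return 0;
    }

    close(fds[0]);                   /* parent: write into the pipe */
    const char msg[] = "hello through a pipe";
    write(fds[1], msg, strlen(msg));
    close(fds[1]);
    wait(NULL);
    return 0;
}
```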
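For the Week 3 socket outcome, a minimal sketch of a TCP client. The address and port (127.0.0.1:5000) are placeholder assumptions and a matching server must already be listening there; the `htons()`/`htonl()` calls convert values to big-endian network byte order, tying in the endianness terms.

```c
/* Minimal sketch: a TCP client that connects to an assumed local
 * server on port 5000 and sends one line of text. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(5000);                   /* host -> network (big-endian) */
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK); /* 127.0.0.1 */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");           /* fails unless a server is listening */
        close(fd);
        return 1;
    }

    const char msg[] = "hello over a socket\n";
    write(fd, msg, strlen(msg));     /* a connected socket behaves like a file descriptor */
    close(fd);
    return 0;
}
```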
## Week 4

### Terms

lightweight process, user-level threads, kernel threads, atomic operation, critical section, deadlock, livelock, mutual exclusion, race condition, starvation

### Outcomes

* (✓) Explain the concept of a thread.
* (✓) Draw a representation of a single-threaded process and a multi-threaded process.
* (✓) Compare and contrast the advantages and disadvantages of threads versus processes.
* (✓) Explain how a multi-threaded program can be useful in a multi-core environment.
* (✓) Explain the difference between kernel threads and user threads.
* Explain the difference between the many-to-one, one-to-one, and many-to-many models of thread behaviour.
* (✓) Using C, construct a simple multithreaded POSIX-compliant application (sketch below).
* (✓) Explain the difference between asynchronous cancellation and deferred cancellation.
* (✓) Explain the concept of a race condition and justify the need for protection.
* (✓) Explain how a race condition may corrupt data.
* (✓) Explain why a simple lock variable will fail to synchronize a system properly.
* Demonstrate Dekker's algorithm.
* (✓) Demonstrate through an example Peterson's solution to race conditions (sketch below).

## Week 5

### Terms

busy wait, semaphore, mutex, spinlock, starvation, condition variable, blocking send, blocking receive, nonblocking send, nonblocking receive, direct addressing, indirect addressing, mailbox, deadlock, resource allocation graph, resource, reusable resource, consumable resource

### Outcomes

* (✓) Compare and contrast semaphores and mutexes.
* (✓) Explain the difference between a binary semaphore and a counting semaphore.
* (✓) Construct code which uses a mutex for synchronization (sketch below).
* (✓) Construct code which uses a semaphore for synchronization (sketch below).
* (✓) List the limitations of semaphores and mutexes when synchronizing processes and threads.
* (✓) Define the concept of a monitor.
* (✓) Explain the advantages and disadvantages of a monitor versus other constructs.
* (✓) Explain the advantage of using message passing for synchronization versus other systems.
* (✓) Explain the concept of a monitor.
* (✓) Construct a resource allocation graph.
* (✓) List the conditions necessary for a deadlock to occur.
* (✓) Explain the dining philosophers problem and how it results in a potential deadlock.
* (✓) Construct a resource allocation graph from a given problem.
* (✓) Analyze a resource allocation graph to determine if a deadlock is present.

## Week 6

### Terms

long term scheduling, medium term scheduling, short term scheduling, I/O scheduling

### Outcomes

* (✓) Explain the key aspect of multiprogramming.
* (✓) Construct a queuing diagram for scheduling.
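For the Week 4 outcome on constructing a multithreaded POSIX application, a minimal pthreads sketch that starts two worker threads and joins them (compile with `-pthread`).

```c
/* Minimal sketch: create two POSIX threads and wait for both. */
#include <stdio.h>
#include <pthread.h>

static void *worker(void *arg)
{
    long id = (long)arg;
    printf("thread %ld running\n", id);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;

    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_create(&t2, NULL, worker, (void *)2L);

    pthread_join(t1, NULL);          /* wait for both workers to finish */
    pthread_join(t2, NULL);
    return 0;
}
```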
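For the Week 4 outcome on Peterson's solution, a sketch of the two-thread entry and exit protocol. On modern hardware the shared variables would also need memory barriers or C11 atomics; the sketch shows only the classic algorithm.

```c
/* Minimal sketch of Peterson's solution for two threads (ids 0 and 1). */
#include <stdbool.h>

static volatile bool flag[2] = { false, false };
static volatile int turn = 0;

void enter_region(int self)          /* self is 0 or 1 */
{
    int other = 1 - self;
    flag[self] = true;               /* announce interest */
    turn = other;                    /* give the other thread priority */
    while (flag[other] && turn == other)
        ;                            /* busy wait until it is safe to enter */
}

void leave_region(int self)
{
    flag[self] = false;              /* leave the critical section */
}
```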
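For the Week 5 mutex and semaphore outcomes, a minimal sketch: one pair of threads protects a counter with a pthread mutex, and another pair protects a second counter with a binary POSIX semaphore (compile with `-pthread`).

```c
/* Minimal sketch: mutex-protected and semaphore-protected counters. */
#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

static long count_m = 0;                 /* protected by the mutex */
static long count_s = 0;                 /* protected by the semaphore */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static sem_t sem;

static void *add_with_mutex(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);       /* enter critical section */
        count_m++;
        pthread_mutex_unlock(&lock);     /* leave critical section */
    }
    return NULL;
}

static void *add_with_semaphore(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        sem_wait(&sem);                  /* P(): decrement or block */
        count_s++;
        sem_post(&sem);                  /* V(): increment, wake a waiter */
    }
    return NULL;
}

int main(void)
{
    pthread_t t[4];

    sem_init(&sem, 0, 1);                /* binary semaphore with initial value 1 */
    pthread_create(&t[0], NULL, add_with_mutex, NULL);
    pthread_create(&t[1], NULL, add_with_mutex, NULL);
    pthread_create(&t[2], NULL, add_with_semaphore, NULL);
    pthread_create(&t[3], NULL, add_with_semaphore, NULL);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);

    /* each counter should be exactly 200000 when synchronization works */
    printf("mutex counter = %ld, semaphore counter = %ld\n", count_m, count_s);
    sem_destroy(&sem);
    return 0;
}
```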
## Week 7

### Terms

turnaround time, response time, deadline, throughput, processor utilization, fairness, resource balancing, preemptive, non-preemptive, convoy effect, quantum, nice

### Outcomes

* (✓) Explain the CPU and I/O burst cycle used for scheduling.
* (✓) Recognize the distribution of CPU activities on a system.
* (✓) Explain the relationship between an I/O-bound program and a CPU-bound program in terms of CPU bursts.
* (✓) List the five reasons why the scheduler may be invoked.
* (✓) Compare and contrast preemptive and non-preemptive scheduling.
* (✓) Explain the purpose of the dispatcher and scheduler within the operating system.
* (✓) Define dispatch latency.
* (✓) Define CPU utilization, throughput, turnaround time, waiting time, and response time in terms of their impact on scheduling.
* (✓) Explain the operation of a FIFO scheduler.
* (✓) Calculate the average waiting time for a given set of processes scheduled using FCFS scheduling (sketch below).
* (✓) Construct a Gantt chart for a given set of processes.
* (✓) Calculate the throughput for a given system.
* (✓) Explain the convoy effect of FCFS scheduling.
* (✓) List the advantages and disadvantages of FCFS scheduling.
* (✓) Be able to calculate the average waiting time for a set of processes.
* (✓) Be able to calculate the throughput for a set of processes.
* (✓) Be able to calculate the turnaround time for a set of processes.
* (✓) Explain the algorithm for SJF scheduling.
* (✓) Explain why exponential averaging can be used to estimate the length of the next CPU burst.
* Calculate the exponential average based on a series of CPU bursts and an initial estimate (sketch below).
* (✓) List the advantages and disadvantages of SJF scheduling.
* (✓) Explain priority scheduling.
* (✓) Using priority scheduling, draw a schedule for a set of jobs.
* (✓) Define starvation in terms of processor scheduling.
* (✓) Demonstrate how aging can solve the problem of starvation.
* (✓) Explain the impact of the quantum on round-robin scheduling.
* (✓) Explain the operation of the Linux 2.6 O(1) scheduler.
* (✓) Explain the concept of the Linux 2.6.23 [completely fair scheduler](https://web.archive.org/web/20180618020434/https://www.ibm.com/developerworks/linux/library/l-completely-fair-scheduler/).
* Explain the operation of the traditional UNIX scheduler.
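For the Week 7 FCFS outcomes (average waiting time, turnaround time, throughput, Gantt chart), a small worked sketch; the burst times of 24, 3, and 3 ms are example values and all processes are assumed to arrive at time 0.

```c
/* Minimal sketch: FCFS waiting time, turnaround time, and throughput
 * for processes that all arrive at time 0, in arrival order. */
#include <stdio.h>

int main(void)
{
    int burst[] = { 24, 3, 3 };          /* example CPU bursts in ms */
    int n = sizeof(burst) / sizeof(burst[0]);
    int time = 0;
    double total_wait = 0.0, total_turnaround = 0.0;

    for (int i = 0; i < n; i++) {
        int wait = time;                 /* FCFS: wait for everything ahead of us */
        int turnaround = wait + burst[i];
        printf("P%d: waiting %d ms, turnaround %d ms\n", i + 1, wait, turnaround);
        total_wait += wait;
        total_turnaround += turnaround;
        time += burst[i];                /* the Gantt chart advances by this burst */
    }

    printf("average waiting time    = %.2f ms\n", total_wait / n);
    printf("average turnaround time = %.2f ms\n", total_turnaround / n);
    printf("throughput              = %.3f processes/ms\n", (double)n / time);
    return 0;
}
```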
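For the Week 7 exponential averaging outcome, a sketch of the update rule tau(n+1) = alpha * t(n) + (1 - alpha) * tau(n); the alpha of 0.5, the initial estimate, and the burst history are made-up example numbers.

```c
/* Minimal sketch: predicting the next CPU burst with exponential averaging. */
#include <stdio.h>

int main(void)
{
    double alpha = 0.5;                  /* weight given to the most recent burst */
    double tau = 10.0;                   /* initial estimate tau(0) */
    double bursts[] = { 6.0, 4.0, 6.0, 4.0 };   /* observed CPU bursts t(0)..t(3) */
    int n = sizeof(bursts) / sizeof(bursts[0]);

    for (int i = 0; i < n; i++) {
        printf("predicted tau(%d) = %.2f, actual t(%d) = %.1f\n", i, tau, i, bursts[i]);
        tau = alpha * bursts[i] + (1.0 - alpha) * tau;   /* update the estimate */
    }
    printf("next prediction tau(%d) = %.2f\n", n, tau);
    return 0;
}
```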
## Week 9 and following

### Outcomes

* (✓) Define a logical address space for a process.
* (✓) Explain the operation of a page table.
* (✓) Compare and contrast real memory and virtual memory.
* (✓) Explain what happens when memory is swapped (paged).
* (✓) Define backing store.
* (✓) Define victim frame.
* (✓) Explain the concept of page replacement.
* (✓) Given an address, determine the offset and page number for a logical address (sketch below).
* (✓) Given an address and page table state, convert a logical address into a physical address.
* (✓) Understand the impact of a TLB on paging.
* (✓) Justify the reasoning for page sizes to be powers of 2.
* (✓) Explain the concept of copy-on-write.
* (✓) Explain how memory is handled when an initial fork operation occurs.
* (✓) Given the hit ratio, calculate the effective memory access time for a TLB-based system (sketch below).
* (✓) Calculate the swap time for a given system.
* (✓) List the steps necessary to handle a page fault.
* Explain the concept of thrashing.
* Explain the relationship between page size and fault rate, and between the number of allocated frames and fault rate.
* Explain the purpose of the dirty bit within a virtual memory system.
* Determine the number of hits and misses for a given address trace.
* (✓) Explain how hardware determines if an address is valid for a memory reference.
* (✓) Interpret a Linux map file to determine where in memory a given variable is stored.
* (✓) Interpret a map file to understand the protection given to a segment of memory.
* (✓) Explain how a map file shows segmentation in the development of a program.
* Compare and contrast dynamic and static linking of modules.
* Compare and contrast compile-time, load-time, and execution-time binding of programs.
* Explain how to statically link libraries into a program under Linux.
* Identify in a map file the differences between statically and dynamically linked code and programs.
* (✓) Define the concepts of transfer rate, seek time, and rotational latency.
* Explain the concept of a disk partition.
* (✓) List the benefits of solid state devices versus traditional magnetic hard drives.
* Explain how file system software is structured and the layers present in such a system.
* List the basic elements of a file directory.
* Compare and contrast absolute and relative path names.
* Interpret the access rights given to a file under Linux.
* Compare and contrast single-level directories, two-level directories, tree-structured directories, acyclic-graph directories, and general graph structures.
* Explain the relationship between contiguous allocation, linked allocation, and indexed allocation within a file system.
* Explain how free space is managed on a disk.
* Explain how BSD uses inodes and files to store data.

Acknowledgement: Outcomes originally by Dr. Schilling
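For the Week 9 outcomes on page number/offset extraction and TLB effective access time, a small worked sketch; the 4 KiB page size, the logical address, the 95% hit ratio, and the access times are example assumptions.

```c
/* Minimal sketch: split a logical address into page number and offset,
 * then compute the effective access time (EAT) for an assumed TLB. */
#include <stdio.h>

#define PAGE_SIZE 4096u                  /* 4 KiB => 12 offset bits */

int main(void)
{
    unsigned int logical = 0x0001234Au;  /* example logical address */

    unsigned int page   = logical / PAGE_SIZE;   /* equivalently logical >> 12 */
    unsigned int offset = logical % PAGE_SIZE;   /* equivalently logical & 0xFFF */
    printf("logical 0x%08X -> page %u, offset 0x%03X\n", logical, page, offset);

    /* EAT = hit_ratio * (tlb + mem) + (1 - hit_ratio) * (tlb + 2 * mem)
     * i.e. a TLB miss costs an extra memory access to walk the page table */
    double hit_ratio = 0.95;             /* assumed TLB hit ratio */
    double tlb = 2.0, mem = 100.0;       /* assumed access times in ns */
    double eat = hit_ratio * (tlb + mem) + (1.0 - hit_ratio) * (tlb + 2.0 * mem);
    printf("effective access time = %.1f ns\n", eat);
    return 0;
}
```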