Saturday, June 29, 2024

CST334 Week 2 Report

Operating Systems Week 2 Summary

This week we learned a lot about processes and how the operating system manages them. The operating system virtualizes one or a few CPUs so that many processes can run concurrently, using a combination of low-level machinery and higher-level scheduling policies. On the low-level side, we have techniques such as the context switch, which the OS performs when it stops the currently running process and starts another. We also saw various system calls, such as fork(), exec(), and wait(). To protect the system from misbehaving processes, the OS uses limited direct execution: a process runs directly on the CPU, but only under restrictions imposed by the operating system. While a process typically runs in user mode, the operating system itself runs in kernel mode, which gives it unrestricted access to machine resources.

This low-level machinery must be combined with higher-level policies, or disciplines, while simultaneously optimizing for performance metrics such as turnaround time and response time. Several approaches can be considered for scheduling processes, from shortest job first to round robin. A more refined approach is the multi-level feedback queue, or MLFQ. In this scheme, there are a number of distinct queues, each with a different priority level. Jobs at the highest priority level run first, in round-robin fashion with the other jobs sharing that level: each runs for a predetermined time slice, or scheduling quantum, alternating with its peers. Once a job's time allotment at a level is spent, its priority is automatically decremented. This way, more interactive processes stay at higher priority, while longer, more CPU-intensive processes sink to the lower priority levels.
