
CS250 Challenge Project

Submitted by Lean Louiel A. Peria

Item 1
The native scheduling algorithm of Minix is implemented in kernel space as a multi-level round robin policy: there
are sixteen separate queues, each representing a priority level, and the scheduler picks the highest non-empty
queue and then surrenders the CPU to the first process in that queue. There is a userspace SCHED server that
implements scheduling policy and interfaces with the kernel-level scheduler through syscalls. The PM sends
scheduling requests to SCHED whenever it performs operations on processes.

In Minix 3.3.0, all Minix source files are under /usr/src/minix, but we omit the path prefix /usr/src
from all file paths for brevity.

First, the ready queues are linked lists, declared in minix/kernel/cpulocals.h with the relevant lines shown
below. NR_SCHED_QUEUES is defined in minix/include/minix/config.h as 16, and run_q_head is an array of 16
linked list head pointers, one for each priority level's ready queue.

/* Line 36 */
#define DECLARE_CPULOCAL(type, name) type name

/* Line 72-73 */
DECLARE_CPULOCAL(struct proc *, run_q_head[NR_SCHED_QUEUES]); /* ptrs to ready list headers */
DECLARE_CPULOCAL(struct proc *, run_q_tail[NR_SCHED_QUEUES]); /* ptrs to ready list tails */

The minix/kernel/proc.c file contains process and message handling; together with the assembly file
minix/kernel/mpx.s, it forms the lowest layer of Minix. Important procedures in proc.c include:

void switch_to_user(void)                     /* performs the context switch to userspace */

void enqueue(register struct proc *rp)
static void enqueue_head(struct proc *rp)
void dequeue(struct proc *rp)
static struct proc * pick_proc(void)
static void notify_scheduler(struct proc *p)  /* sends NO_QUANTUM msg to scheduler of p */
void proc_no_time(struct proc * p)            /* calls notify_scheduler if p is preemptible */
We focus on enqueue first. Lines 1549-1557 show that each ready queue is a simple linked list, and the
process being enqueued is added to the tail by default.
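For reference, the tail-append logic reads roughly as follows; this is a paraphrased sketch of the 3.3.0 source, not a verbatim quote:

/* Sketch of the tail portion of enqueue() in kernel/proc.c (Lines 1549-1557). */
int q = rp->p_priority;                     /* scheduling queue to use */
struct proc **rdy_head = get_cpu_var(rp->p_cpu, run_q_head);
struct proc **rdy_tail = get_cpu_var(rp->p_cpu, run_q_tail);

if (!rdy_head[q]) {                         /* add to empty queue */
        rdy_head[q] = rdy_tail[q] = rp;     /* create a new queue */
        rp->p_nextready = NULL;             /* mark new end */
} else {                                    /* add to tail of queue */
        rdy_tail[q]->p_nextready = rp;      /* chain tail of queue */
        rdy_tail[q] = rp;                   /* set new queue tail */
        rp->p_nextready = NULL;             /* mark new end */
}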
Next we focus on pick_proc() below. At Line 1731 the for loop starts checking the highest priority queue (q=0) first.
Lines 1731-1735 show that the basic low-level kernel scheduling policy is simply to pick the first process in line in
the highest non-empty ready queue.
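A paraphrased sketch of pick_proc(), with asserts and tracing elided:

static struct proc *pick_proc(void)
{
        register struct proc *rp;           /* process to run */
        struct proc **rdy_head = get_cpulocal_var(run_q_head);
        int q;

        /* Check each of the scheduling queues for ready processes;
         * a lower queue number means a higher priority. */
        for (q = 0; q < NR_SCHED_QUEUES; q++) {      /* Line 1731 */
                if (!(rp = rdy_head[q]))
                        continue;           /* queue empty, try the next */
                if (priv(rp)->s_flags & BILLABLE)
                        get_cpulocal_var(bill_ptr) = rp; /* bill for system time */
                return rp;                  /* first in line wins */
        }
        return NULL;                        /* nothing ready at all */
}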

Another important procedure is notify_scheduler, sketched below. It is called by proc_no_time, which is in turn
called by switch_to_user when the current process to be context switched into has run out of its quantum.
notify_scheduler prepares an IPC message and loads it with accounting information for the process it received
as an argument. This IPC message is then sent to the scheduler of the process, pointed to by p->p_scheduler (Line
1820). This is a good time to point out that in Minix all processes are assigned a scheduler. So far, all that we have
described (the naive multi-level round robin policy) is the default scheduling policy in Minix. Processes spawned by
PM are assigned SCHED as their scheduler, but in many situations p->p_scheduler is NULL, and this is where the
default policy is needed.
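A paraphrased sketch of notify_scheduler(); the accounting field macros below are approximate renderings of the 3.3.0 names, not verbatim:

static void notify_scheduler(struct proc *p)
{
        message m_no_quantum;
        int err;

        /* Take p off the run queues; it stays there until its scheduler
         * answers with a sys_schedule call. */
        RTS_SET(p, RTS_NO_QUANTUM);

        m_no_quantum.m_source = p->p_endpoint;
        m_no_quantum.m_type   = SCHEDULING_NO_QUANTUM;
        /* Accounting snapshot (Lines 1809-1813); macro names approximate. */
        m_no_quantum.SCHEDULING_ACNT_QUEUE     = cpu_time_2_ms(p->p_accounting.time_in_queue);
        m_no_quantum.SCHEDULING_ACNT_DEQS      = p->p_accounting.dequeues;
        m_no_quantum.SCHEDULING_ACNT_IPC_SYNC  = p->p_accounting.ipc_sync;
        m_no_quantum.SCHEDULING_ACNT_IPC_ASYNC = p->p_accounting.ipc_async;
        m_no_quantum.SCHEDULING_ACNT_PREEMPT   = p->p_accounting.preempted;

        reset_proc_accounting(p);

        /* Line 1820: deliver to whichever process schedules p. */
        if ((err = mini_send(p, p->p_scheduler->p_endpoint,
                        &m_no_quantum, FROM_KERNEL)))
                panic("notify_scheduler: mini_send returned %d\n", err);
}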

The accounting information in Lines 1809-1813 is potentially useful for more informed scheduling policies. The
native scheduler, however, does not make use of it. We return to this point later in the design of the aged
priority scheduler.

Now we proceed to the userspace SCHED server. Below is a paraphrased sketch of the core while loop of SCHED,
contained in minix/servers/sched/main.c (dispatch details and error handling abbreviated):
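while (TRUE) {
        int call_nr, result;
        endpoint_t who_e;

        /* Block until the next request arrives as an IPC message. */
        if (sef_receive(ANY, &m_in) != OK)
                panic("SCHED sef_receive error");
        who_e   = m_in.m_source;
        call_nr = m_in.m_type;

        /* Dispatch to the policy routines in schedule.c. */
        switch (call_nr) {
        case SCHEDULING_INHERIT:
        case SCHEDULING_START:
                result = do_start_scheduling(&m_in);
                break;
        case SCHEDULING_STOP:
                result = do_stop_scheduling(&m_in);
                break;
        case SCHEDULING_SET_NICE:
                result = do_nice(&m_in);
                break;
        case SCHEDULING_NO_QUANTUM:
                result = do_noquantum(&m_in);
                break;
        default:
                result = ENOSYS;
        }

        /* Build and send the reply, unless none is expected. */
        if (result != SUSPEND) {
                m_in.m_type = result;
                reply(who_e, &m_in);
        }
}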
The operation of SCHED is to wait for syscalls in the form of IPC messages, either from the PM or from the kernel
below, and then process these requests accordingly. The procedures that service those requests are implemented in
minix/servers/sched/schedule.c and we will take a look at some of them. It can be said that the entire policy
of SCHED is implemented in schedule.c.

do_noquantum processes the NO_QUANTUM message received from the notify_scheduler procedure in the
kernel, as discussed earlier. It checks the validity of the process endpoint in the message (Line 95), fetches the
corresponding schedproc struct in SCHED's process table for the process that has run out of quantum,
lowers that process's priority by one level, down to the lowest allowable (Lines 102-103), then schedules the
process again by calling schedule_process.
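The body is short enough to sketch in full (paraphrased from schedule.c):

int do_noquantum(message *m_ptr)
{
        register struct schedproc *rmp;
        int rv, proc_nr_n;

        /* Line 95: reject messages from an endpoint SCHED does not know. */
        if (sched_isokendpt(m_ptr->m_source, &proc_nr_n) != OK) {
                printf("SCHED: WARNING: got an invalid endpoint in OOQ msg %u.\n",
                        m_ptr->m_source);
                return EBADEPT;
        }

        /* Lines 102-103: lower the priority one level, but never below
         * the lowest user queue. A bigger number means lower priority. */
        rmp = &schedproc[proc_nr_n];
        if (rmp->priority < MIN_USER_Q)
                rmp->priority += 1;

        /* Push the new priority into the kernel via sys_schedule. */
        if ((rv = schedule_process_local(rmp)) != OK)
                return rv;
        return OK;
}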

do_start_scheduling initializes a schedproc struct for the target process within SCHED's process table. For
system processes, time_slice and priority values are set explicitly (Lines 194-200), while for userspace
processes they are inherited from the parent (Lines 202-212), as sketched below. do_start_scheduling then issues
a sys_schedctl syscall to the kernel (Line 221), which notifies the kernel that SCHED is taking over the scheduling
of the process.
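The relevant branch might look like the following sketch; the SCHEDULING_* message macros are assumptions about the exact 3.3.0 names:

/* Paraphrased from do_start_scheduling() in schedule.c. */
switch (m_ptr->m_type) {
case SCHEDULING_START:
        /* System processes: explicit values (Lines 194-200). */
        rmp->priority   = rmp->max_priority;
        rmp->time_slice = m_ptr->SCHEDULING_QUANTUM;
        break;
case SCHEDULING_INHERIT:
        /* Userspace processes: inherit from the parent's slot (Lines 202-212). */
        if ((rv = sched_isokendpt(m_ptr->SCHEDULING_PARENT,
                        &parent_nr_n)) != OK)
                return rv;
        rmp->priority   = schedproc[parent_nr_n].priority;
        rmp->time_slice = schedproc[parent_nr_n].time_slice;
        break;
default:
        return EINVAL;
}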
The second entrypoint of SCHED into the kernel is the sys_schedule syscall, which simply modifies the
priority, time quantum and assigned CPU of the target process, then enqueues it. Sketched below is
schedule_process, which is called by do_noquantum, do_start_scheduling and do_nice. It selects which
schedproc fields to apply according to the argument flags (Lines 307-320), then issues a
sys_schedule syscall for the target process.
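A paraphrased sketch of schedule_process():

static int schedule_process(struct schedproc *rmp, unsigned flags)
{
        int err, new_prio, new_quantum, new_cpu;

        pick_cpu(rmp);

        /* Lines 307-320: only the fields selected by the flags are
         * pushed to the kernel; -1 means "leave unchanged". */
        new_prio    = (flags & SCHEDULE_CHANGE_PRIO)    ? rmp->priority   : -1;
        new_quantum = (flags & SCHEDULE_CHANGE_QUANTUM) ? rmp->time_slice : -1;
        new_cpu     = (flags & SCHEDULE_CHANGE_CPU)     ? rmp->cpu        : -1;

        /* The sys_schedule kernel call updates the target's priority,
         * quantum and CPU, then enqueues it. */
        if ((err = sys_schedule(rmp->endpoint, new_prio, new_quantum,
                        new_cpu)) != OK) {
                printf("SCHED: An error occurred when trying to schedule %d: %d\n",
                        rmp->endpoint, err);
        }
        return err;
}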

One last important detail is balance_queues, which is called every 5 seconds to rebalance the queues. Upon
initializing SCHED, a timer is set to call balance_queues (Line 340).
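The stock routine is roughly as follows (paraphrased; the timer plumbing follows schedule.c):

static void balance_queues(struct timer *tp)
{
        struct schedproc *rmp;
        int proc_nr;

        /* Lines 357-360: walk SCHED's whole process table and bump every
         * in-use process one level toward its maximum priority. */
        for (proc_nr = 0, rmp = schedproc; proc_nr < NR_PROCS; proc_nr++, rmp++) {
                if (rmp->flags & IN_USE) {
                        if (rmp->priority > rmp->max_priority) {
                                rmp->priority -= 1;   /* smaller number = higher priority */
                                schedule_process_local(rmp);
                        }
                }
        }

        /* Re-arm the timer so this runs again after the balance timeout. */
        set_timer(&sched_timer, balance_timeout, balance_queues, 0);
}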

The operation is very simple: balance_queues walks the entire process table of SCHED (Line 357) looking for
processes that are in use or running (Line 358), then raises each one's priority by one level, up to the maximum
allowable priority for that process (Lines 359-360). This, combined with the automatic lowering of priority in
do_noquantum, constitutes the core scheduling policy of SCHED: all processes decrease in priority immediately
after finishing their time quantum, and all running processes are raised in priority at fixed intervals. The result
is that CPU-heavy processes quickly drop to the lowest priority queue (q=15), as the rate at which they hit
NO_QUANTUM tends to be faster than the rate at which balance_queues is called. Meanwhile, processes that are not
CPU-heavy will slowly rise in priority.
Note that this policy works on top of the multi-level round robin scheduling of the low-level kernel scheduler.

Item 2
The most reliable method to see how the scheduler works is to watch in real time which process it picks to surrender
the CPU to. In theory this can be done by putting a print statement just after the kernel chooses a process inside
pick_proc(), so that whenever the kernel chooses a new process the decision is logged. However, this method did not
work no matter what workaround I attempted. My compromise was to instead put a print statement (Line 322)
inside schedule_process() within the SCHED server, which logs every time SCHED issues a sys_schedule
syscall, or in other words every time SCHED queues a process into one of the ready queues of the kernel scheduler.
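The diagnostic itself can be a single line; the wording below is illustrative rather than the verbatim statement used:

/* In schedule_process(), just before the sys_schedule call (Line 322). */
printf("SCHED: scheduling process %d (prio %d, quantum %d)\n",
        rmp->endpoint, new_prio, new_quantum);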

The print statements are analyzed this way: every "Scheduling process X" notification in the terminal indicates that
either a new process is entering the ready queues and being scheduled, or a running process has run out of its
quantum and is being rescheduled. If process A is the currently running process and "Scheduling process A" is printed,
we can immediately conclude that process A was running right up until that print statement. This is already enough
information to deduce how the scheduler works.

However, it is also important to provide good test programs for verifying the operation of the scheduler. Here
I provide two programs. The first is static.c, shown below. It has the simple operation of taking two arguments,
an identifier string and a nice value. It calls setpriority() with the nice value, then enters an infinite loop that
adds 1 to an integer c, printing periodically every time c wraps back to 0 after exhausting its 32 bits.

#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>
#include <sys/resource.h>   /* for setpriority() and PRIO_PROCESS */

int main(int argc, char *argv[]){
    printf("Hello. I am child %s with pid %d and nice=%s.\n",
            argv[1], (int)getpid(), argv[2]);
    setpriority(PRIO_PROCESS, getpid(), atoi(argv[2]));

    int c = 0;
    int counter = 1;
    while (1){
        if (c == 0){    /* c wraps back to 0 after overflowing 32 bits */
            printf("Child %s performing loop %i.\n", argv[1], counter);
            counter += 1;
        }
        c += 1;
    }
}

The value of static.c is that it is an infinitely running CPU-heavy program with an adjustable priority. It is
launched by generator.c, shown below. This program accepts an arbitrary number of nice values as arguments, then
forks off one static process for each nice value. Each forked process is uniquely identified, and its printouts on
the screen are easily identifiable thanks to static.c taking an identifier string argument. A running generator.c
process is intended to be killed off after analysis is finished.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char *argv[]){
    int n = argc - 1;               /* one static child per nice value */
    printf("I received %i nice values.\n", n);
    pid_t pid;
    char *nullenv[] = {NULL};
    printf("Parent PID: %d\n", (int)getpid());
    for (int i = 1; i <= n; i++){
        if ((pid = fork()) == 0){
            printf("Child process pid: %d, ppid: %d\n",
                    (int)getpid(), (int)getppid());
            char childnum[10];
            sprintf(childnum, "%i", i);     /* identifier string for static */
            char *args[] = {"./static", childnum, argv[i], NULL};

            execve("./static", args, nullenv);
            perror("execve");       /* reached only if execve fails */
            exit(1);
        }
    }
}

It is easy to use generator.c to verify the operation of a scheduler. You can choose any combination of nice
values, then watch as static processes are forked off with those nice values. Every time a process is
scheduled, schedule_process() will print a statement. While generator.c is running, you can monitor the
scheduler operation by analyzing the patterns of printouts from SCHED as well as the printouts from the static
processes. It is recommended to open a pseudoterminal and use top to monitor the running processes. Take note
that it is necessary to use the kernel process table dump (F1) to check the process number of each process, because
the PID is visible only to the PM and SCHED has no access to it.

Here is a runthrough of an example analysis: with the unmodified Minix 3.3.0 kernel, which uses a multi-level round
robin scheduling policy, run ./generator 20 20 20. This forks three identical processes, all with the lowest
possible priority, 15. Since they are all within the same priority queue, they will keep alternating in round robin
fashion. Observe how SCHED prints out rotating process numbers accordingly.

Item 3
Recall that notify_scheduler() in minix/kernel/proc.c sends a NO_QUANTUM message that contains
accounting information: statistics describing the internals of the kernel. SCHED is a naive scheduler in that it
uses none of this information, simply lowering the priority of any process that has run out of quantum and
disregarding everything else.

An important value being passed by notify_scheduler() is the total time the process spent in the ready queue.
We can use this to keep track of the total time a process has spent in the ready queue after having run on the CPU
multiple times. First we extend the schedproc struct, which is a slot in the SCHED process table, by editing
minix/servers/sched/schedproc.h. The edits are Lines 36-37, sketched below (sojourn_time is a stand-in name
for the new counter; lowered is the flag used later):
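/* Added inside struct schedproc (Lines 36-37). */
unsigned sojourn_time;  /* total ms spent waiting in the ready queues (stand-in name) */
unsigned lowered;       /* set once do_noquantum has dropped the priority */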

The rest of the modifications are in minix/servers/sched/schedule.c, as it contains the core of the
scheduling policy of SCHED. First, all new processes must have initial values for the newly added fields in the
schedproc struct. This is done on Lines 167-168 of do_start_scheduling(), sketched below with the same
stand-in field names:
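/* In do_start_scheduling(), Lines 167-168: new processes start with a
 * clean slate. */
rmp->sojourn_time = 0;
rmp->lowered = 0;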
Next, do_noquantum() has to add to the sojourn time every time it receives a NO_QUANTUM message. This is
done in Line 102, shown in the sketch below. As discussed earlier, the accounting information embedded in the IPC
message sent by the kernel is utilized by our modified scheduler instead of going to waste as in the native scheduler.
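Together with the one-shot lowering described in the next paragraph, the modified do_noquantum() might read as follows; the stand-in field names and the accounting message macro are assumptions carried over from above:

int do_noquantum(message *m_ptr)
{
        register struct schedproc *rmp;
        int rv, proc_nr_n;

        if (sched_isokendpt(m_ptr->m_source, &proc_nr_n) != OK) {
                printf("SCHED: WARNING: got an invalid endpoint in OOQ msg %u.\n",
                        m_ptr->m_source);
                return EBADEPT;
        }
        rmp = &schedproc[proc_nr_n];

        /* Line 102: accumulate the kernel-reported ready-queue time;
         * the message field macro is an approximation. */
        rmp->sojourn_time += m_ptr->SCHEDULING_ACNT_QUEUE;

        /* Lines 104-106: lower the priority only once; balance_queues
         * reverts the flag later. */
        if (rmp->lowered == 0 && rmp->priority < MIN_USER_Q) {
                rmp->priority += 1;
                rmp->lowered = 1;
        }

        if ((rv = schedule_process_local(rmp)) != OK)
                return rv;
        return OK;
}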

In addition, the naive policy of lowering the priority of any process that has run out of quantum is modified in Lines
104-106. Here we only do it once, while rmp->lowered == 0, and then refrain from doing it again until
the value is reverted. The result is that processes only get lowered in priority once after they run out of quantum,
preventing CPU-heavy processes from always dropping to the lowest priority. This of course has to be matched in
balance_queues: see below that a matching modification is placed in Lines 366-368. Processes with rmp->lowered
== 1 are raised in priority, but only once, because lowered is then reverted. These two modifications mean that
processes can only drop in priority once because of do_noquantum before this drop is eventually
corrected by balance_queues.

Now comes the aging priority part, which we call Senior Citizen Rights. We define a macro AGING_THRESHOLD as
1000 (Line 358), the threshold number of milliseconds of sojourn time in the ready queue; processes
that go beyond this threshold are raised in priority by one level. This is done in Lines 370-373 and duplicated in
Lines 375-378 for the case in which rmp->lowered == 1.
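A sketch of the reworked balance_queues(), under the same assumed names and with the line placement following the description above:

#define AGING_THRESHOLD 1000    /* Line 358: ms of sojourn time per boost */

static void balance_queues(struct timer *tp)
{
        struct schedproc *rmp;
        int proc_nr;

        for (proc_nr = 0, rmp = schedproc; proc_nr < NR_PROCS; proc_nr++, rmp++) {
                if (!(rmp->flags & IN_USE))
                        continue;

                if (rmp->lowered == 1) {
                        /* Lines 366-368: undo the one-shot drop applied
                         * by do_noquantum, then clear the flag. */
                        if (rmp->priority > rmp->max_priority)
                                rmp->priority -= 1;
                        rmp->lowered = 0;
                        /* Lines 375-378: aging check, duplicated for this
                         * branch; never rise above USER_Q. */
                        if (rmp->sojourn_time > AGING_THRESHOLD &&
                                        rmp->priority > USER_Q) {
                                rmp->priority -= 1;
                                rmp->sojourn_time = 0;
                        }
                } else if (rmp->sojourn_time > AGING_THRESHOLD &&
                                rmp->priority > USER_Q) {
                        /* Lines 370-373: Senior Citizen Rights aging. */
                        rmp->priority -= 1;
                        rmp->sojourn_time = 0;
                }
                schedule_process_local(rmp);
        }

        /* Re-arm the timer set up at SCHED initialization (Line 340). */
        set_timer(&sched_timer, balance_timeout, balance_queues, 0);
}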

Our strategy is thus very simple: we negate the native policy of automatically lowering the priority of a process
every time it runs out of quantum. This stops CPU-heavy processes from always sinking to the lowest priority level.
Then we reimplement balance_queues so that every time it checks a process and sees that the process's total sojourn
time has exceeded AGING_THRESHOLD, it raises that process's priority. AGING_THRESHOLD is thus the time
period in milliseconds for a process to gain one priority level while waiting in the ready queue. With a value of
1000, CPU-heavy processes gain one level every second. Another important detail is that in Line 375 we ensure that
processes can only reach a maximum priority of USER_Q through this aging mechanism. This prevents
CPU-heavy processes from eventually surpassing system processes in priority, which would leave the system
unresponsive.

This is the last modification in SCHED necessary for the Senior Citizen Rights policy to work correctly. To see it in
action, use the same methodology as in Item 2 and run ./generator 20 20 20. Notice that all three forked static
processes rise in priority from 15 up to 8.
