
OPERATING SYSTEMS LAB

A New Scheduler in xv6


Sumith (140050081)
Shubham Goel (140050086)

Design of the scheduler

We propose a modified weighted round-robin scheduler as our design for the required
priority scheduling. We store the priority in the `prio` variable of struct proc, and we add
a `pending` variable to the struct which stores the number of slices the process is still to
be run for. In every round (an iteration over all processes in ptable.proc), we increment
each process's pending slices by its priority (which we constrain to be positive). This
ensures that all processes are (eventually) allotted CPU time in proportion to their
priorities. If a process blocks while it still has `pending` slices left to run, those slices
remain stored in the `pending` variable, so when the process unblocks at a later time it
gets the slices it missed out on because of blocking.
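
As a minimal sketch, the change amounts to two extra fields in struct proc (the names prio
and pending are the ones used throughout this report; the comments are illustrative), with
the per-round credit of prio slices shown in the scheduler sketch further below:

// proc.h: fields added to struct proc; the existing xv6 fields are unchanged
int prio;      // scheduling priority, constrained to be positive
int pending;   // time slices this process is still owed from past rounds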

On timer interrupt

If there are pending slices left, we would in any case run the same process again after
switching into the scheduler, so we optimize away the overhead of the two context
switches. A call to `yield()` switches into the scheduler only if there are no pending
slices left, or if the process has been running for too long (`yield()` context switches
out on every 100th call).
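
For reference, the timer-interrupt path itself needs no new code: in recent x86 xv6
revisions, trap() already calls yield() on every clock tick (roughly as below), and the
decision of whether to really switch out now lives inside yield().

// trap.c (stock xv6): force the process to give up the CPU on a clock tick
if(myproc() && myproc()->state == RUNNING &&
   tf->trapno == T_IRQ0+IRQ_TIMER)
  yield();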

On blocking calls

Our `pending` variable stores the number of slices remaining to be run; hence, when the
process is scheduled later, it gets the slices it missed out on because of blocking,
compensating for the lost CPU time.

Runtime Complexity

O(1) (amortized), the same as the default xv6 scheduler; we only update the required
variables at the right locations.

Implementation of the scheduler

setprio(prio) and getprio()

We implement these as standard syscalls which update or read from struct proc. We use
ptable.lock inside these functions to protect the shared process table.
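
A minimal sketch of the two syscalls is given below; the names setprio and getprio are
from this report, while the bodies (using argint() to fetch the argument and myproc() to
reach the caller's struct proc, as in recent xv6 revisions) are only illustrative.

int
sys_setprio(void)
{
  int prio;

  if(argint(0, &prio) < 0 || prio <= 0)   // priorities must stay positive
    return -1;
  acquire(&ptable.lock);                  // protect the shared process table
  myproc()->prio = prio;
  release(&ptable.lock);
  return 0;
}

int
sys_getprio(void)
{
  int prio;

  acquire(&ptable.lock);
  prio = myproc()->prio;
  release(&ptable.lock);
  return prio;
}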

On timer interrupt

On each timer interrupt the process calls the yield() function. If there are slices pending
to be run, we do not switch to the scheduler and give the next slice to the same process;
this reduces the overhead of entering the scheduler again. If the pending count reaches 0,
all the slices for the current round have been run and the process now switches into the
scheduler. We also switch to the scheduler if the process has been running for too many
consecutive slices, which ensures that other processes do not starve.
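
Sketched below is what the modified yield() could look like. The per-process counter
run_count (counting consecutive slices on the CPU) and the exact placement of the pending
decrement are illustrative assumptions; the 100-slice cap is the one described above.

void
yield(void)
{
  struct proc *p = myproc();

  acquire(&ptable.lock);
  p->pending--;                         // one more slice consumed
  p->run_count++;
  if(p->pending > 0 && p->run_count % 100 != 0){
    release(&ptable.lock);              // keep running the same process,
    return;                             // skipping two context switches
  }
  p->run_count = 0;
  p->state = RUNNABLE;
  sched();                              // switch into the scheduler
  release(&ptable.lock);
}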

On blocking calls

Our `pending` variable might not have reached 0 and hence still stores the slices yet to
be run. The scheduler switches in another process. In the interim, for all further rounds,
if the process is sleeping we still add the number of slices it deserves in that round to
its `pending` variable, so that when the process unblocks it can claim its share. One
subtle point: when no process can be scheduled in the current round, there is no point in
incrementing `pending` for all the processes, so we skip the increment in that case. This
keeps the pending slices from growing too large.
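
A sketch of one pass of the modified scheduler loop is shown below. The structure follows
the stock xv6 scheduler(); the ran flag and the exact placement of the credit pass are
illustrative. The point is that pending is topped up for SLEEPING processes as well, but
only on rounds in which at least one process could actually be scheduled.

void
scheduler(void)
{
  struct proc *p;
  struct cpu *c = mycpu();
  int ran;                              // did this round schedule anyone?

  for(;;){
    sti();                              // enable interrupts on this CPU
    acquire(&ptable.lock);

    ran = 0;
    for(p = ptable.proc; p < &ptable.proc[NPROC]; p++){
      if(p->state != RUNNABLE)
        continue;
      ran = 1;
      c->proc = p;
      switchuvm(p);
      p->state = RUNNING;
      swtch(&(c->scheduler), p->context);
      switchkvm();
      c->proc = 0;                      // process is done running for now
    }

    if(ran){
      // Credit the next round's slices: sleepers keep accumulating,
      // while idle rounds add nothing, so pending cannot grow unboundedly.
      for(p = ptable.proc; p < &ptable.proc[NPROC]; p++)
        if(p->state == RUNNABLE || p->state == SLEEPING)
          p->pending += p->prio;
    }
    release(&ptable.lock);
  }
}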

Initialization

We set prio to 1 and pending to 0 for the userinit process. Also, in fork, we set the
priority of the child to be the same as that of the parent, with its pending equal to 0.
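
Only the added assignments are shown in the sketch below; np and curproc follow the
variable names used in recent xv6 revisions of fork().

// in userinit(): the first user process starts at the default priority
p->prio = 1;
p->pending = 0;

// in fork(): the child inherits the parent's priority but starts with
// no accumulated slices of its own
np->prio = curproc->prio;
np->pending = 0;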

Corner cases

What is the default priority of a process before its priority is set?

- We have set the default priority to 1.

What will happen when a process you want to schedule blocks before its quantum
finishes?

- The pending slices are kept track of in the pending variable and given to the process
when it unblocks.

Is it safe when run over multiple CPU cores?

- Yes. Most importantly, the scheduler code accounts for the fact that there can be
RUNNING processes (on other CPUs) while it is scheduling. We also acquire ptable.lock
for all updates to the process table.

We also prevent starvation by not letting a single process run for too long: yield()
forces a context switch after the 100th consecutive cycle.

Test-cases and observations

Instructions to run test-cases

Commands:
$ make
$ make CPUS=1 qemu

In the QEMU emulator, run:

$ testmyscheduler  

Explanations

1) 2 CPU-bound processes, P1 with priority 1 and P2 with priority 2, doing the same
amount of work. We noticed that the time taken from start to end for the two processes
(measured via uptime) is in the ratio 4:3. This is expected and can be calculated
theoretically: for priorities a and b (b > a), the ratio of times is 2 * b / (a + b),
since while both run P2 gets a fraction b/(a+b) of the CPU and finishes after time
W(a+b)/b for work W, after which P1 runs alone and finishes at time 2W.
Different from default xv6 due to priority scheduling.

2) 3 CPU-bound processes with priorities 2, 4 and 8, doing the same amount of work. We
noticed that the time taken from start to end for the three processes (measured via
uptime) is in the ratio 12:10:7. This also is expected and can be calculated
theoretically. Different from default xv6 due to priority scheduling.

3) 1 CPU-bound and 1 partially IO-bound process with the same priority. In the original
xv6 implementation, the CPU-bound process would have finished much before the partially
IO-bound process. However, we ensure that the partially IO-bound process also gets its
fair share of CPU time, and hence observe that both processes end at almost the same
time. Different from default xv6 due to priority scheduling.

We have modelled the IO-bound processes using the sleep function, since a sleeping
process behaves just like an IO-bound process making blocking system calls.

4) 2 IO-bound processes with priorities 1 and 2. Both end at the same time and take the
same time to run. This is because priorities only change the CPU time allocated to
different processes; since the two processes are completely IO-bound, a larger CPU share
does not affect the time they take to run, and priorities therefore have no effect here.
Consistency of implementation.
