
Scheduling Algorithms for Multi-Tasking in Real-Time Environments

Francesco Bullo
GE 393, Intro to Mechatronics, Spring 2003
January 29, 2003
The purpose of this handout is to present some analysis and design tools for the problem of scheduling multiple tasks on a single processor. In other words, this handout reviews the tools needed to answer the following key question: how do we schedule, on a single processor, multiple tasks with different request frequencies and run-times? This brief presentation follows the lines of the treatment in the successful research article [1].

Introduction

Consider a mechatronic system in which a computer is in charge of the control and monitoring of an industrial process. Assume the processor is in charge of multiple tasks, and that it performs these tasks in an asynchronous manner. In other words, the processor is shared among multiple time-critical functions. For simplicity, we assume that the processor can switch between these tasks in negligible time, and that there are no memory constraints. We assume that the tasks are executed in response to events in the sensors and actuators connected to the computer. A task cannot be executed before the corresponding event occurs, and each task must be completed before some fixed time has elapsed following the request for it. The objective is to design a software system that can meet the deadlines associated with all the assigned tasks. We achieve this via a careful scheduling of these functions and a systematic approach to software design.

1.1 Tasks and their characterization

As usual, to obtain analytical results we need to make certain assumptions to define the problem.

(A1) The requests for all tasks are periodic, with a constant interval between requests; we call this interval the request period.

(A2) Each task must be completed before the next request for it occurs.

(A3) The request for a certain task does not depend on the status of other requests.

(A4) The run-time for each task is constant.

According to these assumptions, any task is fully characterized by two numbers. We use τ1, ..., τm to denote m periodic tasks, with T1, ..., Tm their request periods and C1, ..., Cm their run-times. Some remarks:

1. Assumption (A1) is realistic but restrictive. (The article [1] also discusses non-periodic tasks such as initialization routines.)

2. Assumption (A2) eliminates queuing problems.

3. Assumption (A3) does not exclude the situation in which an occurrence of a task τj must follow a certain fixed number, say N, of occurrences of a task τi. This can be modeled by choosing request periods Tj, Ti such that Tj = N·Ti and by requiring the first request for τj to be synchronized with the N-th request for τi.

4. Assumption (A4) is realistic.

1.2 Scheduling algorithms

A scheduling algorithm is a set of rules that determine the task to be executed at a particular moment. We focus on algorithms that are pre-emptive and priority-driven. This means that whenever an event takes place requiring a task that has a priority higher than the task currently being performed by the processor, the running task is interrupted and the newly requested task is started. Given this pre-emptive, priority-driven logic, a scheduling algorithm is completely characterized by the method by which priorities are assigned to tasks. If priorities are assigned to tasks once and for all, then the algorithm is said to be static or fixed-priority. If the priorities assigned to tasks may change from time to time, then the algorithm is said to be dynamic. Finally, mixed scheduling algorithms are also possible, in which certain tasks have fixed priorities and the priorities of the others are assigned dynamically.

A fixed priority scheduling algorithm

Because of assumption (A2), the deadline for a request is the time of the next request for the same task. If a request is not fulfilled by the time t at which its deadline expires, we say that an overflow occurs at time t. A scheduling algorithm is feasible if no overflow takes place. The response time of a task request is the time span between the request and the end of the response to that request. A critical instant for a task is an instant (together with a state of all other requests) at which a request for that task will have the largest possible response time. (What is the scenario that leads to the worst-case response time?)

Theorem 2.1. A critical instant for any task occurs whenever the task is requested simultaneously with requests for all higher priority tasks.

Proof. Suppose task τm is requested at time t. If no higher priority task is requested inside the time period [t, t + Cm], the response time for τm will be Cm. Assume instead that a higher priority task, say τi, is requested at a time t′ inside the period [t, t + Cm] (possibly multiple times). Then the response time for τm will increase at least to Cm + Ci, possibly more depending on how many times τm is preempted by τi before being completed. Accordingly, the worst delay in the response time happens when t′ is equal to t. To complete the proof, repeat the argument for all tasks τi with priority higher than τm.

We can use this theorem to verify whether or not a given scheduling algorithm (i.e., priority assignment) is feasible. If the requests for all tasks at their critical instants are fulfilled before the respective deadlines, then the algorithm is feasible.

Example 2.2. Consider a set of two tasks τ1 and τ2 with request periods T1 = 2, T2 = 5, and run-times C1 = 1, C2 = 1. If τ1 has higher priority, then the algorithm is feasible, and furthermore C2 can be increased to 2. If τ2 has higher priority, then neither C1 nor C2 can be increased.

Let us expand on the example. Consider two tasks τ1, τ2 with T1 < T2. Assume τ1 has higher priority. If overflow is avoided, then

⌊T2/T1⌋ C1 + C2 ≤ T2,    (1)

where ⌊x⌋ denotes the largest integer smaller than or equal to x. If we instead let τ2 have higher priority, and if overflow is avoided, then

C1 + C2 ≤ T1.    (2)

One can show that inequality (2) implies inequality (1), but not vice versa. Proceed as follows. Assume inequality (2) and consider the following chain of inequalities:

⌊T2/T1⌋ C1 + C2 ≤ ⌊T2/T1⌋ C1 + ⌊T2/T1⌋ C2 = ⌊T2/T1⌋ (C1 + C2) ≤ ⌊T2/T1⌋ T1 ≤ T2,

where the first inequality uses ⌊T2/T1⌋ ≥ 1, the second uses (2), and the last holds by the definition of the floor. One can instead show that the opposite is not true: inequality (1) does not imply inequality (2). Hence, it is better to assign higher priority to τ1 than to τ2. According to this reasoning, we define the rate-monotonic priority assignment as the scheme in which higher priorities are assigned to tasks with higher request rates.

Theorem 2.3. If a feasible priority assignment exists for some task set, then the rate-monotonic priority assignment is feasible for that task set.

Achievable Processor Utilization


Let us determine a least upper bound on processor utilization in fixed priority systems. To do this, we need to define the processor utilization factor as the fraction of processor time spent executing the task set. Since the fraction of processor time for task τi equals Ci/Ti, the utilization factor for the task set {τ1, ..., τm} is

U = (C1/T1) + ... + (Cm/Tm).    (3)

Next, we investigate how large the utilization factor can be. A task set fully utilizes the processor if the priority assignment is feasible and if any increase in the run-time of any of the tasks leads to overflow. For a given fixed priority scheduling algorithm, the least upper bound of the utilization factor is the minimum of U over all task sets that fully utilize the processor. (This, unfortunately, is not 1.)

Theorem 2.4. For a set of m tasks with fixed priority assignment, the least upper bound on the processor utilization factor is U = m(2^(1/m) − 1).

For m = 2, the least upper bound is U = 2(2^(1/2) − 1) ≈ 0.83, and for m → +∞ it tends to U = ln(2) ≈ 0.69. Let us prove the case m = 2.

Theorem 2.5. For a set of 2 tasks with fixed priority assignment, the least upper bound on the processor utilization factor is U = 2(2^(1/2) − 1).

Proof. As usual, the tasks τ1, τ2 have request periods T1, T2 and run-times C1, C2. Assuming T2 > T1, the rate-monotonic priority assignment leads to τ1 having higher priority than τ2. In a critical time-zone for τ2 (i.e., the time period between a critical instant and the end of the response to the corresponding request), there are ⌈T2/T1⌉ requests for τ1, where ⌈x⌉ denotes the smallest integer larger than or equal to x. Let us now choose C2 to fully utilize the processor time inside the critical zone. Two cases occur:

Case 1. The run-time C1 is short enough that all requests for τ1 inside the critical zone are completed before the second request for τ2. That is,

C1 ≤ T2 − T1 ⌊T2/T1⌋.

Thus, the largest possible value for C2 is

C2 = T2 − C1 ⌈T2/T1⌉,

and the corresponding U = C1/T1 + C2/T2 equals

U = 1 + C1 (1/T1 − (1/T2) ⌈T2/T1⌉).

In this case, U is monotonically decreasing with C1 .

Case 2. The execution of the ⌈T2/T1⌉-th request for τ1 overlaps with the following request for τ2. That is,

C1 ≥ T2 − T1 ⌊T2/T1⌋.

Thus, the largest possible value for C2 is

C2 = −C1 ⌊T2/T1⌋ + T1 ⌊T2/T1⌋ = (T1 − C1) ⌊T2/T1⌋,

and the corresponding utilization factor is

U = (T1/T2) ⌊T2/T1⌋ + C1 (1/T1 − (1/T2) ⌊T2/T1⌋).

In this case, U is monotonically increasing with C1.

The minimum of U therefore occurs at the boundary between these two cases, that is, for C1 = T2 − T1 ⌊T2/T1⌋. The corresponding utilization factor is

U = 1 − (T1/T2) (⌈T2/T1⌉ − (T2/T1)) ((T2/T1) − ⌊T2/T1⌋).

The minimum of this function can be shown to occur at ⌊T2/T1⌋ = 1 and (T2/T1) − ⌊T2/T1⌋ = 2^(1/2) − 1. (Hint: write U = 1 − f(1 − f)/(I + f), where the integer I = ⌊T2/T1⌋ and the fraction f = (T2/T1) − I; minimizing with respect to f and I leads to I = 1 and f = 2^(1/2) − 1.) The minimum value is U = 2(2^(1/2) − 1). This concludes the proof.

Deadline Driven Scheduling Algorithm


The idea is to dynamically assign priorities on the basis of the deadlines of the current requests. A task is assigned the highest priority if the deadline of its current request is the nearest. We refer to this scheduling algorithm as the deadline driven scheduling algorithm.

Theorem 2.6. When the deadline driven scheduling algorithm is used to schedule a task set on a processor, there is no processor idle time prior to an overflow.

The proof is very interesting and slick. Please read the reference.

Theorem 2.7. For a given set of m tasks, the deadline driven scheduling algorithm is feasible if and only if (C1/T1) + ... + (Cm/Tm) ≤ 1.

References

[1] C. L. Liu and J. W. Layland, "Scheduling algorithms for multiprogramming in a hard real-time environment," Journal of the Association for Computing Machinery, vol. 20, no. 1, pp. 46-61, Jan. 1973.
