
RTOS-I

Unit V

Contents

Architecture of the kernel


Tasks and Task states
Task scheduler
Scheduling algorithms
Rate monotonic analysis
Interrupt Service Routine
Semaphores
Mutexes

Running
Running means that the
microprocessor is executing the
instructions that make up this task.
Unless yours is a multiprocessor
system, there is only one
microprocessor, and hence only one
task that is in the running state at
any given time.

Ready
Ready means that some other task
is in the running state but that this
task has things that it could do if the
microprocessor becomes available.
Any number of tasks can be in this
state.

Blocked
Blocked means that this task
hasn't got anything to do right now,
even if the microprocessor becomes
available. Tasks get into this state
because they are waiting for some
external event. Any number of tasks
can be in this state as well.

Rate monotonic analysis

Priority assignment can be static or dynamic; rate monotonic analysis (RMA) uses static priority assignment.
It may not be accurate, but it is a good starting point.
It makes the following assumptions:
1. The highest-priority task runs first (priority-based pre-emptive kernel).
2. All the tasks run at regular intervals (i.e., tasks are periodic).
3. Tasks do not synchronize with each other (i.e., no shared data).

RMA
In RMA, the priority is proportional to the frequency of execution.
If the i-th task has an execution period Ti and an execution time Ei, then Ei/Ti gives the fraction of CPU time required to execute the i-th task.
In RMA, the scheduling test indicates how much CPU time is actually utilized by the tasks.

RMA Scheduling Test

If n is the total number of tasks, the scheduling test is:
Σ(Ei/Ti) <= U(n) = n(2^(1/n) - 1)
where the sum runs over all n tasks and U(n) is called the utilization factor.
Note: as the number of tasks tends to infinity, U(n) approaches ln 2 ≈ 0.69, so ensure that CPU utilization stays below about 70%.
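The scheduling test can be checked numerically. The sketch below uses a hypothetical task set; the (Ei, Ti) values are invented for illustration:

```python
import math

def rma_schedulable(tasks):
    """RMA scheduling test: sum(Ei/Ti) <= n(2^(1/n) - 1).

    tasks: list of (Ei, Ti) pairs (execution time, period).
    Returns (utilization, bound, passes_test).
    """
    n = len(tasks)
    utilization = sum(e / t for e, t in tasks)
    bound = n * (2 ** (1 / n) - 1)   # U(n), the utilization factor
    return utilization, bound, utilization <= bound

# Three hypothetical periodic tasks (times in ms).
u, bound, ok = rma_schedulable([(1, 4), (1, 5), (2, 10)])
# u = 0.65 and bound = 3*(2^(1/3) - 1) ≈ 0.780, so this set passes the test.
# As n grows, the bound approaches ln 2 ≈ 0.693 (the "70%" rule of thumb).
```

Passing the test guarantees schedulability under the stated assumptions; failing it does not necessarily mean the set is unschedulable, since the bound is sufficient but not necessary.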

Task management function calls

Create a task
Delete a task
Suspend a task
Resume a task
Change priority of a task
Query a task

ISR
Interrupts cause the microprocessor in the
embedded system to suspend doing whatever
it is doing and to execute some different code
instead, code that will respond to whatever
event caused the interrupt.
The interrupt service routine is also called an interrupt handler or, sometimes, an interrupt routine.
ISR acts like a sub-routine that is called
whenever the hardware asserts the interrupt
request signal.

Continued..
Disabling interrupts does not affect non-maskable interrupts, so non-maskable interrupts cannot be used with shared data that is protected by disabling interrupts.
Because of this, non-maskable interrupts are most commonly used for events that are completely beyond the normal range of ordinary processing, for example, to allow the system to react to a power failure.

Continued..
What happens if two interrupts occur at the same time?
Can an interrupt request signal interrupt another interrupt routine?

Continued..
Interrupt latency time: the maximum time interrupts are disabled + the time to start executing the first instruction in the ISR.
Interrupt response time: the time between the receipt of the interrupt signal and the start of the code that handles the interrupt.
Interrupt recovery time: the time required for the CPU to return to the interrupted code or to the highest-priority task.

Interrupt latency, response time and recovery time

[Timeline figure: a task is running when the interrupt request arrives; the interrupt latency ends when the CPU begins saving its context; the interrupt response time spans from the interrupt request to the start of the ISR execution; after the ISR finishes, the interrupt recovery time covers restoring the CPU context before the task runs again.]

In preemptive kernel
Response time = interrupt latency +
time to save CPU registers context.
Recovery time = time to check whether
highest priority task is ready + time to
restore CPU context of the highest
priority task + time to execute the
return instruction
In a non-preemptive kernel, Recovery time = time to restore the CPU context + time to execute the return instruction.
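The formulas above can be applied directly to some hypothetical timing figures (all numbers below are invented for illustration, in microseconds):

```python
# Hypothetical timings in microseconds (invented for illustration).
max_interrupts_disabled = 10       # longest stretch with interrupts disabled
time_to_first_isr_instruction = 2  # vectoring to the ISR
save_context = 3                   # saving CPU registers
check_ready = 1                    # checking whether a higher-priority task is ready
restore_context = 3                # restoring CPU registers
return_instruction = 1             # executing the return-from-interrupt

interrupt_latency = max_interrupts_disabled + time_to_first_isr_instruction

# Preemptive kernel:
response_time = interrupt_latency + save_context
recovery_time = check_ready + restore_context + return_instruction

# Non-preemptive kernel: no scheduling check on the way out.
recovery_time_nonpreemptive = restore_context + return_instruction
```

With these figures, latency is 12 µs, response time 15 µs, and recovery time 5 µs (preemptive) versus 4 µs (non-preemptive); the only structural difference is the scheduling check in the preemptive kernel's recovery path.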

The Shared-Data Problem

The shared-data problem arises when an interrupt service routine and the task code share data, or when two tasks share data.
To solve the problem, we need to make the code that accesses the shared data atomic.
A part of a program is said to be atomic if it cannot be interrupted.
A more useful definition of atomic here is not that the code cannot be interrupted at all, but that it cannot be interrupted by anything that might mess up the shared data.
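A minimal sketch of making a shared-data update atomic, using Python threads and a lock as stand-ins for tasks and a kernel lock (a real RTOS would use C and one of the mechanisms discussed later):

```python
import threading

counter = 0                      # shared data
counter_lock = threading.Lock()  # protects the read-modify-write sequence

def bump(n):
    """Increment the shared counter n times, one atomic update at a time."""
    global counter
    for _ in range(n):
        with counter_lock:       # nothing can mess up this read-modify-write
            counter += 1

# Two "tasks" updating the same shared data concurrently.
t1 = threading.Thread(target=bump, args=(100_000,))
t2 = threading.Thread(target=bump, args=(100_000,))
t1.start(); t2.start()
t1.join(); t2.join()
# counter ends at exactly 200_000 because every increment is protected.
```

Without the lock, the two threads' read-modify-write sequences could interleave and updates would be lost, which is exactly the shared-data problem described above.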

Semaphores
A semaphore is a hardware or software tag variable whose value indicates the status of a common resource.
Its purpose is to lock the resource being used.
A process that needs the resource checks the semaphore to determine the resource's status before deciding how to proceed.
In a multitasking OS, activities are synchronized using semaphore techniques.

Semaphores and Shared Data

A semaphore is a key that your code acquires in order to continue execution.
If the semaphore is already in use, the requesting task is suspended until the semaphore is released by its current owner.
In other words, the requesting task says: "Give me the key. If someone else is using it, I am willing to wait for it."
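The key analogy can be sketched with Python's threading.Semaphore standing in for the RTOS's take/release calls (the display list and task names are invented for illustration):

```python
import threading

display = []                           # shared resource (stand-in for a display)
display_key = threading.Semaphore(1)   # binary semaphore: the "key"

def task(name):
    display_key.acquire()              # "Give me the key; I'll wait if I must."
    try:
        display.append(name)           # use the shared resource exclusively
    finally:
        display_key.release()          # hand the key back

threads = [threading.Thread(target=task, args=(f"task{i}",)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Each task writes to the display only while holding the key, so the writes never interleave; releasing in a finally block ensures the key is returned even if using the resource raises an error.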

Railroad semaphore
[Figure: a railroad semaphore signal, the device after which the programming construct is named.]

Continued..
A semaphore is a kernel object that is used for both resource synchronization and task synchronization.
Resource synchronization: mutual exclusion.
Task synchronization: serialization.

Resource synchronization
[Figure: Task 1 and Task 2 both write to a shared display; a display semaphore serializes their access to it.]

Task synchronization
[Figure: a voice signal is digitized by an ADC; one task writes the samples to memory and another task reads them back out through a DAC; a semaphore keeps the writer and the reader in step.]

Semaphores and Shared Data

By switching the CPU from task to task, an RTOS changes the flow of execution.
Therefore, like interrupts, task switching can also cause a new class of shared-data problems.
The semaphore is one of the tools an RTOS provides to solve such problems.
Semaphores can be used to:
Control access to a shared resource.
Signal the occurrence of an event.
Allow two tasks to synchronize their activities.

Types of semaphores
Binary semaphore: tasks can call two RTOS functions, TakeSemaphore and ReleaseSemaphore; the semaphore can be taken only once before it must be released.
Counting semaphore: can be taken multiple times.

Counting semaphore

[Figure: a counting semaphore guards a pool of buffers (Buffer 1 … Buffer 10); tasks Task 1 … Task n each take the semaphore to claim a buffer and release it when the buffer is returned.]
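The buffer-pool figure can be sketched with a counting semaphore initialized to the number of buffers (Python's threading.Semaphore stands in for the RTOS calls; the pool itself is a hypothetical list of buffer indices):

```python
import threading

NUM_BUFFERS = 10
pool = list(range(NUM_BUFFERS))                        # hypothetical buffer pool
pool_lock = threading.Lock()                           # protects the list itself
buffers_available = threading.Semaphore(NUM_BUFFERS)   # counting semaphore

def take_buffer():
    """Block until a buffer is free, then claim one (WAIT/PEND)."""
    buffers_available.acquire()        # count decremented; blocks at zero
    with pool_lock:
        return pool.pop()

def release_buffer(buf):
    """Return a buffer to the pool (SIGNAL/POST)."""
    with pool_lock:
        pool.append(buf)
    buffers_available.release()        # count incremented; wakes a waiter

held = [take_buffer() for _ in range(NUM_BUFFERS)]      # drain the pool
assert not buffers_available.acquire(blocking=False)    # an 11th take would block
for buf in held:
    release_buffer(buf)
```

The semaphore's count always equals the number of free buffers, which is exactly what makes a counting semaphore the natural fit for a pool of identical resources.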

Semaphores and Shared Data

Generally, only three operations can be performed on a semaphore:
INITIALIZE (also called CREATE),
WAIT (also called PEND), and
SIGNAL (also called POST).
The initial value of the semaphore must be provided when the semaphore is initialized.
The waiting list of tasks is always initially empty.

Semaphore management
function calls

Create a semaphore
Delete a semaphore
Acquire a semaphore
Release a semaphore
Query a semaphore

Multiple Semaphores
What is the advantage of having multiple semaphores instead of a single one that protects everything?

It avoids higher-priority tasks having to wait for lower-priority tasks even though they don't share the same data.
With multiple semaphores, different semaphores can correspond to different shared resources.

Multiple Semaphores (cont.)

How does the RTOS know which semaphore protects which data?

It doesn't. If you are using multiple semaphores, it is up to you to remember which semaphore corresponds to which data.

Semaphores as a Signaling Device
Semaphores can also be used as a simple means of communication between tasks, or between an interrupt routine and its associated task.
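A sketch of semaphore-based signaling, with a Python thread standing in for the task and an ordinary function standing in for the interrupt routine (initializing the semaphore's count to 0 makes the first WAIT block until the SIGNAL arrives):

```python
import threading

data_ready = threading.Semaphore(0)   # initial count 0: first WAIT blocks
handled = []

def task():
    data_ready.acquire()              # WAIT: suspends until signaled
    handled.append("data processed")  # runs only after the "ISR" signals

def interrupt_routine():
    """Stand-in for an ISR that signals its associated task."""
    data_ready.release()              # SIGNAL: wakes the waiting task

t = threading.Thread(target=task)
t.start()
interrupt_routine()
t.join()
```

Note the difference from mutual exclusion: here the semaphore is never "owned"; one party only waits and the other only signals, which is the serialization use described earlier.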

Semaphore Problems
Potential problems:
Forgetting to take the semaphore.
Forgetting to release the semaphore.
Taking the wrong semaphore.
Holding a semaphore for too long.
Priority inversion.
Some RTOSs resolve the priority inversion problem with priority inheritance: if a low-priority task (say, Task C) holds a semaphore that a high-priority task (Task A) is waiting for, the kernel temporarily boosts Task C's priority to that of Task A until Task C releases the semaphore.

Semaphore Variants
Counting semaphores: semaphores that can be taken multiple times.
Resource semaphores: useful for the shared-data problem.
Mutex semaphores: automatically deal with the priority inversion problem.
If several tasks are waiting for a semaphore when it is released, RTOSs differ in which task gets to run next:
the task that has been waiting longest, or
the highest-priority task that is waiting for the semaphore.

Deadlock
A deadlock, also called a deadly
embrace, is a situation in which two
tasks are each unknowingly waiting for
resources held by the other.
Assume task T1 has exclusive access to
resource R1 and task T2 has exclusive
access to resource R2. If T1 needs
exclusive access to R2 and T2 needs
exclusive access to R1, neither task can
continue. They are deadlocked.

Deadlock (cont)
The simplest way to avoid a deadlock is
for the tasks to
Acquire all resources before proceeding,
Acquire the resources in the same order, and
Release the resources in the reverse order.
Most RTOSs allow you to specify a timeout when acquiring a semaphore. This feature allows a deadlock to be broken.
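A sketch of breaking a potential deadlock with a timeout, using Python locks as stand-ins for the two resources (the back-off-and-retry policy here is one possible recovery choice, not something the notes prescribe):

```python
import threading

r1 = threading.Lock()   # resource R1
r2 = threading.Lock()   # resource R2

def acquire_both(first, second, timeout=0.1):
    """Acquire two resources, backing off on timeout so a deadlock is broken."""
    while True:
        first.acquire()
        if second.acquire(timeout=timeout):   # bounded wait, not forever
            return                            # got both resources
        first.release()                       # back off, letting the peer proceed

acquire_both(r1, r2)
# ... use R1 and R2 exclusively ...
r2.release()          # release in the reverse order of acquisition
r1.release()
```

If two tasks each held one lock and waited indefinitely for the other, neither could continue; the timeout turns that indefinite wait into a failed attempt, after which releasing the held resource lets the other task finish.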

Mutexes
Mutex stands for mutual exclusion.
A mutex is used for resource synchronization and task synchronization.
Mutual exclusion can be achieved through the following mechanisms:
1. Disabling the interrupts
2. Disabling the scheduler
3. Test-and-set operations
4. Semaphores

Mutex management
function calls

Create a mutex
Delete a mutex
Acquire a mutex
Release a mutex
Query a mutex
Wait on mutex

Special features of mutexes
(differences between semaphores and mutexes)

A mutex is a special binary semaphore: it can be either in the locked state or the unlocked state.
A task acquires (locks) a mutex and, after using the resource, releases (unlocks) it.
It is more powerful than a semaphore because of the special features listed below:

Continued..
1. A mutex has an owner.
(The task that acquires the mutex is its owner. Only the owner can release the mutex, not any other task. A binary semaphore, by contrast, can be released by any task, not just the task that originally acquired it.)
2. The owner can acquire a mutex multiple times while it is locked. If the owner locks it n times, the owner has to unlock it n times.
3. A task owning a mutex cannot be deleted.
4. The mutex supports the priority inheritance protocol to avoid the priority inversion problem.
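Features 1 and 2 can be illustrated with Python's threading.RLock, which behaves like an owner-counted recursive mutex (a stand-in for an RTOS mutex, not one):

```python
import threading

mutex = threading.RLock()   # recursive, owner-aware mutex stand-in

mutex.acquire()
mutex.acquire()             # feature 2: the owner may lock it again
mutex.release()
mutex.release()             # ...but must unlock it the same number of times

errors = []

def other_task():
    try:
        mutex.release()     # feature 1: a non-owner may not release it
    except RuntimeError as e:
        errors.append(type(e).__name__)

mutex.acquire()             # main thread owns the mutex again
t = threading.Thread(target=other_task)
t.start()
t.join()
mutex.release()
```

A plain threading.Semaphore would happily accept the release from the other thread, which is exactly the ownership distinction the slide draws between binary semaphores and mutexes.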

Ways to Protect Shared Data

So far we have seen two methods: disabling interrupts and using semaphores.
The third way is disabling task switches.
Disabling task switches: to avoid shared-data corruption caused by an inappropriate task switch, you can disable task switches while you are reading or writing the shared data.

Ways to Protect Shared Data

Comparison of the three methods of protecting shared data:
1. Disabling interrupts
Advantages:
It is the only method that works if your data is shared between your task code and your interrupt routines.
It is fast.
Disadvantages:
It affects the response times of all the interrupt routines and of all other tasks in the system.

Ways to Protect Shared Data

2. Taking semaphores
Advantages:
It affects only those tasks that need to take the same semaphore.
Disadvantages:
It takes up microprocessor time.
It will not work for interrupt routines.

3. Disabling task switches
Somewhere in between the other two: it has no effect on interrupt routines, but it stops response for all other tasks cold.

Summary
A task is a process or thread of execution running under the operating system.
The scheduler is the heart of the kernel.
Rate monotonic analysis is a good starting point when designing a real-time system.
Shared-data problems can be solved with semaphores and mutexes.
