
Development process of an embedded system

Software Design Overview

Requirements Analysis

Architecture

Design

Implementation

Module Testing

Design Validation Testing

Development Process
The development process consists of cycles of editing, testing, and debugging. Once the processor and hardware are chosen they remain fixed; the application software then has to be perfected through a number of runs and tests.

Software Tools

Software Development Kit (SDK)

Source-code Engineering Software

RTOS

Integrated Development Environment

Prototyper

Editor

Interpreter

Compiler

Assembler

Cross Assembler

Testing and debugging tools

Locator

Typical tool Features

Comprehension,

Navigation and browsing,

Editing,

Debugging,

Configuring (disabling and enabling specific C++ features)

Compiling

Monitors, enables and disables the implementation of virtual functions.

Finds the full effect of any code change on the source code.

Searches and lists the dependencies and hierarchy of included header files.

What is software quality?

Software quality is defined as:

Building the software described in the system Requirements and Specifications, i.e. building the right software

Conformance to explicitly documented development standards, i.e. building the software the right way

Conformance to implicit characteristics that are expected of all professionally developed software, i.e. building software that meets the expectations of a reasonable person

Managing Software Quality


1. Define what quality means for large software systems
2. Measure quality of a complete or partial system
3. Devise actions to improve quality of the software

Process improvements

Process performance improvements => product productivity improvements

Product improvements

4. Monitor quality during development

Software Quality Assurance - a team devoted to encouraging and enforcing quality standards

Some quality attributes and metrics


Performance

Reliability
Correctness
Maintainability
Security
Interoperability
Usability
Extensibility
Reusability

Operating System
An operating system is the interface between the hardware and the user of the computer. In other words, an operating system is a software program that carries out user commands using the hardware of the computer. The operating system plays a key role in offering services such as memory management, process management, file management, resource allocation, etc. Examples of operating systems are Windows, Linux, UNIX, Apple's Leopard, Novell NetWare, Solaris, etc. All of these come under general purpose operating systems (GPOS).
Suppose a person is driving a car on a highway at 70 miles per hour, and the car meets with an accident. Fortunately, the airbag deploys at the right time and saves the driver's life. The airbag is a feature that can save a life someday. But what would have happened if the airbag had deployed a few seconds late? A life would have been lost. Everything depends on the airbag opening at exactly the right moment.
So what makes the airbag deploy at the right time? An RTOS. RTOS stands for real-time operating system. Be it cell phones, air conditioners, digital homes, or cars, most of these systems use an RTOS. Some of the most widely used RTOSs are LynxOS, OSE, QNX, RTLinux, VxWorks, and Windows CE.
A real-time operating system (RTOS) is also an operating system, and it also works as an interface between the hardware of the system and the user. As the name suggests, there is a deadline associated with tasks, and an RTOS adheres to this deadline, since missing a deadline can cause effects ranging from undesired to terrible. An RTOS also performs functions such as file management, process management, memory management, etc.
If an RTOS performs the same functions as a general purpose operating system (GPOS), then what is the difference?

Difference Between General Purpose Operating System (GPOS) and Real Time Operating System (RTOS):
Real-time operating systems are deterministic, time dependent, and mainly used in hard real-time systems, whereas general purpose operating systems are non-deterministic, time independent, and mainly used for soft real-time systems.
Main differences between an RTOS and a GPOS:
Determinism: An RTOS is a deterministic operating system that takes the same time to produce output for different inputs. In a GPOS we can predict the output but not the time at which it will appear; in a GPOS there is mostly randomness involved.
An RTOS is mainly used in defence systems, which must be deterministic (missile launching operations, communication systems, etc.). A GPOS is used for normal applications such as audio/video systems, personal computers, etc.
Hardware architecture: A GPOS runs on heavyweight hardware, for example PCs, servers, and mainframes. An RTOS is a lightweight operating system that runs on small hardware such as mobile phones and other embedded devices.
Usually the processes in an RTOS are fixed, and an RTOS is normally used in small applications with only a limited number of tasks. For example, a chocolate vending machine displays the different chocolates and their prices; when the user makes a selection, the machine must check the amount the user has inserted and deliver the chocolate, and if the amount is insufficient it should display a warning message. In a GPOS we can add further processes to do more tasks; for example, on a normal Windows system we add many extra applications such as IDEs, games, media players, etc.
In terms of hardware, an RTOS can run on a very small configuration (a few kilobytes of RAM, a microcontroller as CPU, etc.), whereas a GPOS needs at least 64 MB of RAM and a high-end microcontroller or microprocessor as CPU.
Kernel: An RTOS uses a pre-emptive kernel, whereas a normal GPOS uses a non-pre-emptive kernel. A pre-emptive kernel runs in kernel mode and processes threads according to their priority. When a high-priority process arrives at the CPU, a pre-emptive kernel immediately context switches away from whatever is currently running and executes the high-priority process. In a non-pre-emptive kernel, when a high-priority process arrives, the process the CPU is currently executing may voluntarily yield so the high-priority process can run, or else the high-priority process must wait until the existing process completes.

Scheduling: Scheduling is the process of arranging, controlling, and optimizing work and workloads. A scheduling algorithm is needed in a multitasking operating system that has to manage a number of processes at a time. RTOS scheduling is priority based: the highest-priority tasks are always scheduled first, and lower-priority processes are paused until the high-priority processes have finished. If a high-priority process arrives for scheduling, the existing process is context switched out and the high-priority process is scheduled first.
In a GPOS, scheduling is not strictly priority based; a high-priority process arriving for processing may have to wait until the existing process finishes.
Throughput (the number of processes that complete their processing per unit time) is high for a GPOS, while for an RTOS throughput is comparatively low. An RTOS uses rate-monotonic, earliest-deadline-first, and pre-emptive scheduling algorithms, whereas a GPOS uses completely fair scheduling, round robin, etc.
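As an illustration of priority-based scheduling, here is a minimal sketch of rate-monotonic priority assignment, where the task with the shortest period receives the highest priority. The task structure and periods are made up for illustration; this is not taken from any particular RTOS.

```c
#include <stddef.h>

/* Hypothetical task descriptor: a period in milliseconds and an
 * assigned priority (higher number = higher priority). */
struct task {
    unsigned period_ms;
    unsigned priority;
};

/* Rate-monotonic assignment: a task outranks every task with a
 * strictly longer period. O(n^2) for clarity, not efficiency. */
void assign_rm_priorities(struct task *tasks, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        unsigned prio = 0;
        for (size_t j = 0; j < n; j++)
            if (tasks[j].period_ms > tasks[i].period_ms)
                prio++;
        tasks[i].priority = prio;
    }
}
```

With periods of 100 ms, 20 ms, and 50 ms, the 20 ms task ends up with the highest priority, matching the rate-monotonic rule stated above.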
Interrupt Latency: The interrupt latency of an RTOS is minimal and tightly bounded, whereas in a GPOS the interrupt latency grows when many processes must be handled at a time.
Multitasking and Task Scheduling
An RTOS manages tasks and transparently interleaves task execution, creating the appearance of many programs executing simultaneously and independently. An RTOS is also more economical for embedded devices than a GPOS.

Types of Real Time Operating Systems:

A system is a group of peripherals connected to each other to process input data and give output. A system that is time dependent, i.e. that must process the input data and give output within a given time, is called a real-time system. Real-time systems are divided into three types:

Hard Real Time Systems.

Firm Real Time Systems.

Soft Real Time Systems.

Hard Real Time Systems:


A hard real-time system is purely deterministic and time constrained. For example, if the user expects the output for a given input in 10 seconds, then the system should process the input data and give the output exactly by the 10th second; here 10 seconds is the deadline for processing the given data. It should not give the output by the 11th second or the 9th second; it should give the output exactly by the 10th second. In a hard real-time system, meeting the deadline is critical: if the deadline is not met, the system has failed. Another example is a defence system: if a country launches a missile, the missile should reach its destination at exactly 4:00. If the missile is launched on time but reaches the ground at 4:05 because of the system's performance, those 5 minutes of difference could shift the point of impact from one place to another, or even to another country. Here the system must meet the deadline.

Firm Real Time Systems:


These RTOSs are also required to adhere to deadlines, because missing a deadline may not cause a catastrophic effect but can cause undesired effects, such as a large reduction in the quality of a product, which is highly undesirable.
Soft Real Time System:
In a soft real-time system, meeting the deadline is not compulsory for every task every time, but each process should still be processed and produce a result. Even a soft real-time system cannot miss the deadline every time; depending on priority, a task may meet or miss its deadline. If the system misses its deadlines all the time, its performance degrades to the point where users cannot use it. The best examples of soft real-time systems are personal computers, audio and video systems, online databases, etc.

Important Terms

Memory Management: In simple words, how memory is allocated to every program that is to be run and processed in memory (RAM or ROM). Schemes such as demand paging, virtual memory, and segmentation fall under this management.
Segmentation: A memory management scheme in which physical memory is divided into logical segments according to the length of the program. Segmentation avoids unused memory, makes sharing easy, and provides protection for programs. However, main memory sometimes cannot allocate space for a segment because of its variable length, especially for large segments.
Paging: In this scheme, physical memory is divided into fixed-size pages. Paging provides the functions of segmentation while also solving its disadvantages. Virtual memory is a memory management scheme in which part of a secondary storage device is used as physical memory when a program does not have enough physical memory to run.
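With fixed-size pages, translating a logical address into a page number and an offset is simple arithmetic. A small sketch, assuming a hypothetical 4 KB page size:

```c
#include <stdint.h>

#define PAGE_SIZE 4096u  /* assumed page size: 4 KB */

/* Page number: which fixed-size page the address falls in. */
uint32_t page_number(uint32_t logical_addr)
{
    return logical_addr / PAGE_SIZE;
}

/* Offset: position of the address within its page. */
uint32_t page_offset(uint32_t logical_addr)
{
    return logical_addr % PAGE_SIZE;
}
```

Because the page size is a power of two, the division and modulo compile down to a shift and a mask; address 8200, for instance, lies in page 2 at offset 8.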
Process Management: A thread contains a set of instructions that can execute independently of other programs. A collection of threads is called a process; we can also say a process consists of the sequential execution of a program together with the state kept by the operating system. Every operating system works by having the processor execute a series of processes and return the results to main memory. Operating systems contain two types of process:
System processes: These are responsible for the working of the operating system itself.
Application processes: These are invoked when a particular application is started, and they execute with the help of the system processes.
The operating system should process each process given by the user and give the results back, and the OS also processes according to priority. The scheduling algorithm takes care of process execution, and inter-process communication (IPC) mechanisms (semaphores, message queues, shared memory, pipes, and FIFOs) take care of resource sharing among processes.
File Management: How files are placed in memory, which file may be used by which user, file permissions (read, write, and execute), and the arrangement of files in secondary and primary memory using a file system: all of these functions are done by file management.
Device Management: The management of devices such as tape drives, hard drives, optical drives, processors, and memory devices is done by the operating system.

Features of RTOS
An RTOS must be designed to strike a balance between supporting a rich feature set for the development and deployment of real-time applications and not compromising on deadlines and predictability.
The following points describe the features of an RTOS:

Context switching latency should be short. This means that the time taken to save the context of the current task and switch over to another task should be short.

Interrupt latency: The time taken between executing the last instruction of an
interrupted task and executing the first instruction of interrupt handler should be
predictable and short. This is also known as interrupt latency.

Interrupt Dispatch latency: Similarly, the time taken between executing the last
instruction of the interrupt handler and executing the next task should also be short
and predictable. This is also known as interrupt dispatch latency.

Reliable and time-bound inter-process communication mechanisms should be in place for processes to communicate with each other in a timely manner.

An RTOS should have support for multitasking and task pre-emption. Pre-emption
means to switch from a currently executing task to a high priority task ready and
waiting to be executed.

A real-time operating system must support kernel pre-emption, wherein a process in the kernel can be pre-empted by some other process.

Main Functionality of RTOS-Kernels


Task management:

Execution of quasi-parallel tasks on a processor using processes or threads (lightweight processes), maintaining process states and process queuing, and allowing for pre-emptive tasks (fast context switching) and quick interrupt handling

CPU scheduling (guaranteeing deadlines, minimizing process waiting times, fairness in granting resources such as computing power)

Process synchronization (critical sections, semaphores, monitors, mutual exclusion)

Inter-process communication (buffering)

Support of a real-time clock as an internal time reference

Task synchronization:

In classical operating systems, synchronization and mutual exclusion are performed via semaphores and monitors.

In a real-time OS, special semaphores and a deep integration into scheduling are necessary (priority inheritance protocols).

Further responsibilities:

Initialization of internal data structures (tables, queues, task description blocks, semaphores, etc.)

Task States

Run:

A task enters this state as it starts executing on the processor

Ready:

The state of tasks that are ready to execute but cannot, because the processor is assigned to another task.

Wait:

A task enters this state when it executes a synchronization primitive to wait for an event, e.g. a wait primitive on a semaphore. In this case, the task is inserted in a queue associated with the semaphore. The task at the head of the queue is resumed when the semaphore is unlocked by a signal primitive.

Idle:

A periodic job enters this state when it completes its execution and has to wait for the
beginning of the next period.
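The four states above can be sketched as an enumeration together with a check of the transitions the descriptions imply. The state and function names here are illustrative, not taken from any particular RTOS.

```c
/* Task states as described above. */
enum task_state { STATE_RUN, STATE_READY, STATE_WAIT, STATE_IDLE };

/* Returns 1 when the transition matches the descriptions:
 * ready->run   (the scheduler dispatches the task),
 * run->ready   (the task is pre-empted),
 * run->wait    (the task blocks on a semaphore),
 * run->idle    (a periodic job completes its execution),
 * wait->ready  (a signal primitive unlocks the semaphore),
 * idle->ready  (the next period begins). */
int transition_ok(enum task_state from, enum task_state to)
{
    switch (from) {
    case STATE_READY: return to == STATE_RUN;
    case STATE_RUN:   return to == STATE_READY || to == STATE_WAIT
                          || to == STATE_IDLE;
    case STATE_WAIT:  return to == STATE_READY;
    case STATE_IDLE:  return to == STATE_READY;
    }
    return 0;
}
```

Note that a waiting or idle task never moves directly to the run state; it first becomes ready and must then be dispatched by the scheduler.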

Multitasking:
The kernel is the core component within an operating system. Operating systems such as Linux employ kernels that allow users access to the computer seemingly simultaneously. Multiple users can execute multiple programs apparently concurrently.
Each executing program is a task (or thread) under control of the operating system.
If an operating system can execute multiple tasks in this manner it is said to be
multitasking.
The use of a multitasking operating system can simplify the design of what would
otherwise be a complex software application:

The multitasking and inter-task communications features of the operating system allow the complex application to be partitioned into a set of smaller and more manageable tasks.

The partitioning can result in easier software testing, work breakdown within
teams, and code reuse.

Complex timing and sequencing details can be removed from the application
code and become the responsibility of the operating system.

Threads

A thread is the smallest sequence of programmed instructions that can be managed independently by a scheduler; i.e., a thread is a basic unit of CPU utilization.

Multiple threads can exist within the same process and share resources such as
memory, while different processes do not share these resources:

Typically shared by threads: memory.

Typically owned by threads: registers, stack.

Thread advantages and characteristics:

Faster to switch between threads: switching between user-level threads requires no major intervention by the operating system.

Typically, an application will have a separate thread for each distinct activity.

A Thread Control Block (TCB) stores the information needed to manage and schedule a thread.
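A TCB might look like the following minimal sketch. The field names are illustrative, and a real kernel stores considerably more (full saved register context, timing data, and so on).

```c
#include <stdint.h>

enum thread_state { T_READY, T_RUNNING, T_BLOCKED };

/* Minimal Thread Control Block: just enough for a scheduler to
 * manage and later resume the thread. */
struct tcb {
    uint32_t          id;         /* thread identifier        */
    enum thread_state state;      /* current scheduling state */
    uint8_t           priority;   /* scheduling priority      */
    void             *stack_ptr;  /* saved stack pointer      */
    void             *saved_regs; /* saved register context   */
    struct tcb       *next;       /* link in a ready queue    */
};

/* Initialize a TCB for a new thread; in a real kernel the stack
 * and initial register frame would be set up here as well. */
void tcb_init(struct tcb *t, uint32_t id, uint8_t prio)
{
    t->id = id;
    t->state = T_READY;
    t->priority = prio;
    t->stack_ptr = 0;
    t->saved_regs = 0;
    t->next = 0;
}
```

The per-thread stack pointer and register fields reflect the point made above: registers and stack are owned by each thread, while memory is shared across the process.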

Multithreading

In computer architecture, multithreading is the ability of a central processing unit (CPU), or a single core in a multi-core processor, to execute multiple processes or threads concurrently, appropriately supported by the operating system.

Multiprocessing systems include multiple complete processing units, while multithreading aims to increase the utilization of a single core by using thread-level as well as instruction-level parallelism. As the two techniques are complementary, they are sometimes combined in systems with multiple multithreading CPUs and in CPUs with multiple multithreading cores.

Scheduling:

The scheduler is the part of the kernel responsible for deciding which task should be
executing at any particular time. The kernel can suspend and later resume a task many
times during the task lifetime.

The scheduling policy is the algorithm used by the scheduler to decide which task to execute at any point in time. The policy of a (non real-time) multi-user system will most likely allow each task a "fair" proportion of processor time. The policy used in real-time / embedded systems is described later.

In addition to being suspended involuntarily by the kernel, a task can choose to suspend itself. It will do this if it either wants to delay (sleep) for a fixed period, or wait (block) for a resource to become available (e.g. a serial port) or an event to occur (e.g. a key press). A blocked or sleeping task is not able to execute and will not be allocated any processing time.

Referring to the numbers in the diagram above:

At (1) task 1 is executing.

At (2) the kernel suspends (swaps out) task 1 ...

... and at (3) resumes task 2.

While task 2 is executing (4), it locks a processor peripheral for its own
exclusive access.

At (5) the kernel suspends task 2 ...

... and at (6) resumes task 3.

Task 3 tries to access the same processor peripheral; finding it locked, task 3 cannot continue, so it suspends itself at (7).

At (8) the kernel resumes task 1.

The next time task 2 is executing (9) it finishes with the processor peripheral
and unlocks it.

The next time task 3 is executing (10) it finds it can now access the processor
peripheral and this time executes until suspended by the kernel.
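The locking behaviour in the walkthrough above can be sketched as a tiny model. The task identifiers and function names are illustrative, not from any particular kernel; a real RTOS would suspend the failing task rather than return an error code.

```c
#define NO_OWNER 0

/* A peripheral guarded by a simple ownership lock. */
struct peripheral {
    int owner;   /* task id holding the lock, or NO_OWNER */
};

/* Try to lock: succeeds only when the peripheral is free. A task
 * that fails here would suspend itself, as task 3 does at (7). */
int peripheral_trylock(struct peripheral *p, int task_id)
{
    if (p->owner != NO_OWNER)
        return -1;           /* locked by another task */
    p->owner = task_id;
    return 0;
}

/* Unlock: only the owner may release, as task 2 does at (9). */
int peripheral_unlock(struct peripheral *p, int task_id)
{
    if (p->owner != task_id)
        return -1;
    p->owner = NO_OWNER;
    return 0;
}
```

Replaying the sequence: task 2 locks at (4), task 3's attempt fails at (7), task 2 unlocks at (9), and task 3 then succeeds at (10).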

Intertask Communication and Synchronization

Different tasks in an embedded system typically must share the same hardware and software
resources or may rely on each other in order to function correctly. For these reasons,
embedded OSs provide different mechanisms that allow for tasks in a multitasking system to
intercommunicate and synchronize their behaviour so as to coordinate their functions, avoid
problems, and allow tasks to run simultaneously in harmony.
Embedded OSs with multiple intercommunicating processes commonly implement interprocess communication (IPC) and synchronization algorithms based upon one or some combination of memory sharing, message passing, and signalling mechanisms.
With the shared data model shown in the figure below, processes communicate via access to shared areas of memory, in which variables modified by one process are accessible to all processes.

Figure: Memory sharing.


While accessing shared data as a means to communicate is a simple approach, the major issue
of race conditions can arise. A race condition occurs when a process that is accessing shared
variables is pre-empted before completing a modification access, thus affecting the integrity
of shared variables. To counter this issue, portions of processes that access shared data, called
critical sections, can be earmarked for mutual exclusion (or Mutex for short). Mutex
mechanisms allow shared memory to be locked up by the process accessing it, giving that
process exclusive access to shared data. Various mutual exclusion mechanisms can be
implemented not only for coordinating access to shared memory, but for coordinating access
to other shared system resources as well. Mutual exclusion techniques for synchronizing
tasks that wish to concurrently access shared data can include.
Processor-assisted locks, for tasks accessing shared data that are scheduled such
that no other tasks can pre-empt them; the only other mechanisms that could
force a context switch are interrupts. Disabling interrupts while executing code in
the critical section avoids a race condition if the interrupt handlers
access the same data.
Semaphores, which can be used to lock access to shared memory (mutual
exclusion) and also can be used to coordinate running processes with outside
events (synchronization).
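As a sketch of mutual exclusion over shared data, here is a POSIX-threads example (the thread and iteration counts are arbitrary) in which a mutex guards the critical section so that no increments are lost to pre-emption:

```c
#include <pthread.h>

#define N_THREADS     4
#define N_INCREMENTS  100000

static long counter;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Each thread increments the shared counter inside the critical
 * section. Without the mutex, pre-emption mid-update could lose
 * increments: the race condition described above. */
static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < N_INCREMENTS; i++) {
        pthread_mutex_lock(&lock);
        counter++;               /* critical section */
        pthread_mutex_unlock(&lock);
    }
    return 0;
}

long run_demo(void)
{
    pthread_t tid[N_THREADS];
    counter = 0;
    for (int i = 0; i < N_THREADS; i++)
        pthread_create(&tid[i], 0, worker, 0);
    for (int i = 0; i < N_THREADS; i++)
        pthread_join(tid[i], 0);
    return counter;   /* deterministic thanks to the mutex */
}
```

With the mutex in place the final count is exactly N_THREADS * N_INCREMENTS every run; removing the lock/unlock pair makes the result unpredictable, which is precisely the integrity problem mutual exclusion solves.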

Intertask communication via message passing is an algorithm in which messages (made up of data bits) are sent via message queues between processes. The OS defines the protocols for process addressing and authentication to ensure that messages are delivered to processes reliably, as well as the number of messages that can go into a queue and the message sizes. As shown in the figure below, under this scheme OS tasks send messages to a message queue, or receive messages from a queue, to communicate.

Figure: Message queues
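The message-queue scheme can be sketched as a fixed-capacity ring buffer. The capacity and message size below are arbitrary choices, and a real RTOS queue would add blocking, timeouts, and task wake-ups on top of this:

```c
#include <string.h>

#define QUEUE_CAP 8   /* max messages in the queue (assumed) */
#define MSG_SIZE  32  /* max message size in bytes (assumed) */

struct msg_queue {
    char   buf[QUEUE_CAP][MSG_SIZE];
    size_t head, tail, count;
};

/* Send: copy the message into the queue. Fails when full, which
 * is where a real RTOS would block the sender or time out. */
int mq_send(struct msg_queue *q, const char *msg)
{
    if (q->count == QUEUE_CAP)
        return -1;                        /* queue full */
    strncpy(q->buf[q->tail], msg, MSG_SIZE - 1);
    q->buf[q->tail][MSG_SIZE - 1] = '\0';
    q->tail = (q->tail + 1) % QUEUE_CAP;
    q->count++;
    return 0;
}

/* Receive: copy the oldest message out. Fails when empty. */
int mq_receive(struct msg_queue *q, char *out)
{
    if (q->count == 0)
        return -1;                        /* queue empty */
    strcpy(out, q->buf[q->head]);
    q->head = (q->head + 1) % QUEUE_CAP;
    q->count--;
    return 0;
}
```

Messages are delivered in first-in, first-out order, and the bounded capacity and message size correspond to the limits the OS defines for a queue, as described above.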
