Critical Software [Software Testing & Tools]
M S Prasad
This lecture note is based on open literature and information available on the web. It is
suitable for graduate/postgraduate students of avionics.

Testing Safety-Critical Software Systems and Tools

Introduction

Safety-critical standards
On the other hand, programming features which improve reliability and
are less likely to lead to errors are:
Strong typing.
Runtime constraint checking.
Parameter checking.
And, in general, block-structured programming languages
which enforce modular programming.
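The runtime constraint checking and parameter checking mentioned above can be sketched in a few lines. The function and its limits below are hypothetical, chosen only to illustrate the idea of rejecting a bad value at the interface rather than letting it propagate:

```python
def set_engine_thrust(level):
    """Set thrust as a fraction of maximum. Illustrative only."""
    # Parameter checking: reject wrong types and out-of-range
    # inputs at the boundary of the procedure.
    if not isinstance(level, (int, float)):
        raise TypeError("level must be numeric")
    # Runtime constraint checking: enforce the valid range.
    if not 0.0 <= level <= 1.0:
        raise ValueError(f"thrust level {level} outside [0.0, 1.0]")
    return float(level)

# The fault is caught where it enters, not deep inside a control loop.
try:
    set_engine_thrust(1.5)
except ValueError as exc:
    print("rejected:", exc)
```

Languages such as Ada build much of this checking into the type system itself (range-constrained subtypes), which is one reason they are favoured for safety-critical work.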
However, using formal methods for large-scale systems is fairly complicated
and time-consuming, and suffers from serious problems such as humans
introducing errors into the specifications and proofs.
It is also almost impossible to formally prove everything used to develop the
system, such as the compiler or the operating system in which the system will
ultimately operate.
For small systems, where formal specifications and proofs are easier to deal
with, the approach can be very successful. The technique used to overcome
the problems with large-scale systems is to separate the critical functionality
of the system from the other, non-critical parts. This way of using components
with different safety integrity levels works well provided it is proved that the
non-critical components cannot affect the high-integrity ones or the whole
system.
Note:
"The program used to control the NASA space shuttle is a significant example
of software whose development has been based on specifications and formal
methods. As of March 1997 the program was some 420,000 lines long. The
specifications for all parts of the program filled some 40,000 pages. To
implement a change in the navigation software involving less than 2% of the
code, some 2,500 pages of specifications were produced before a single line of
code was changed. Their approach has been outstandingly successful. The
developers found 85% of all coding errors before formal testing began, and
99.9% before delivery of the program to NASA. Only one defect has been
discovered in each of the last three versions. In the last 11 versions of the
program, the total defect count is only 17, an average of fewer than 0.004
defects per 1,000 lines of code.”
The second approach is based on assuming that errors exist; the main aim
is then to design prevention and recovery mechanisms in order to avoid
hazards or risks caused by the system. These mechanisms range from small
parts of the code, such as error controls inside procedures and functions,
through the software as a whole, up to the whole system. These prevention
and recovery mechanisms are sometimes based on redundancy,
which means replicating the critical parts of the system. For instance, it is
common to use redundancy techniques in aircraft, where some parts
of the control system may be triplicated. An example of redundancy focusing
only on the software part of a critical system is the N-version programming
technique, also known as multi-version programming. In this approach,
separate groups develop independent versions from the same system
specification. The outputs are then compared to check that the
different versions match. However, this is not infallible, as errors could have
been introduced in the development of the specification itself, and different
versions may also contain the same errors.
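The voting step at the heart of N-version programming can be sketched as follows. The three "versions" here are hypothetical stand-ins for independently developed implementations, with one seeded fault for demonstration:

```python
from collections import Counter

def majority_vote(outputs):
    """Return the majority output among N independent versions.

    Raises if no strict majority exists, i.e. the versions
    disagree too much to trust any single answer.
    """
    value, count = Counter(outputs).most_common(1)[0]
    if count <= len(outputs) // 2:
        raise RuntimeError("no majority among versions: " + repr(outputs))
    return value

# Three hypothetical versions of the same specified computation.
def version_a(x): return x * x
def version_b(x): return x ** 2
def version_c(x): return x * x + 1   # seeded fault for demonstration

x = 7
print(majority_vote([version_a(x), version_b(x), version_c(x)]))  # 49
```

Note how the sketch also exposes the technique's limits quoted above: if two versions share the same specification-induced error, the voter happily out-votes the correct one.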
Some well-known techniques used to generate test cases for these kinds of
systems are white-box and black-box testing and reviews. However, they are
taken to a further level of detail than with typical systems.
Complex static analysis techniques, with control and data flow analysis, as
well as checking that the source code is consistent with a formal mathematical
specification, are also used. Tools such as SPARK Examiner are available for
that [12]. Dynamic analysis testing and dynamic coverage analysis are also
performed using known techniques such as equivalence partitioning,
boundary value analysis and structural testing.
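Boundary value analysis can be illustrated with a small sketch. The requirement and the valid range below are invented for the example; the point is that test cases are placed just below, at, and just above each partition boundary, where off-by-one faults cluster:

```python
def boundary_values(lo, hi):
    """Classic boundary value analysis for an integer range [lo, hi]:
    exercise just below, at, and just above each boundary."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

# Hypothetical requirement: an altitude setting is valid in [0, 50000] ft.
def altitude_valid(ft):
    return 0 <= ft <= 50000

for ft in boundary_values(0, 50000):
    print(ft, altitude_valid(ft))
```

Equivalence partitioning supplies the complementary step: one representative value from each partition (e.g. one clearly valid and one clearly invalid altitude), so the two techniques are usually applied together.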
Any tools used to verify and test safety-critical software systems have to be
developed with the same formal rigour as the systems they will test.
Considering now some of the specific techniques from safety engineering used
to test and verify safety-critical software systems, we can name a range of
them: probabilistic risk assessment (PRA), a method which combines failure
modes and effects analysis (FMEA) with fault tree analysis (FTA); failure
modes, effects and criticality analysis (FMECA), an extension of the former
FMEA; hazard and operability analysis (HAZOP); hazard and risk analysis; and
finally tools such as cause and effect diagrams (also known as fishbone
diagrams).
The main idea behind all these techniques is the same. The first step in PRA is
to perform a preliminary hazard analysis to determine which hazards can
affect system safety. Then, the severity of each possible adverse consequence
is assessed: it can be classified as catastrophic, hazardous, major, minor or
not safety related. The probability of occurrence of each possible consequence
is estimated next; it can be classified as probable, remote, extremely
remote or extremely improbable. Assessment of risks is made by combining
the severity of consequences with the probability of occurrence (in a
matrix). For this evaluation, different risk criteria are used, such as risk-cost
trade-offs, the risk benefit of technological options, etc. Risks that fall into
the unacceptable category (e.g. high severity and high probability) must be
mitigated by some means, such as safeguards, redundancy, and prevention
and recovery mechanisms, to reduce the level of safety risk. Probabilistic risk
assessment also uses tools such as cause and effect diagrams.
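The severity/probability matrix described above can be sketched as a toy classifier. The category names match the text, but the acceptance threshold here is invented purely for illustration; in practice the matrix cells and their acceptability come from the applicable standard and the programme's risk criteria:

```python
# Ordered from least to most severe / likely, as in the text above.
SEVERITY = ["not safety related", "minor", "major", "hazardous", "catastrophic"]
PROBABILITY = ["extremely improbable", "extremely remote", "remote", "probable"]

def risk_acceptable(severity, probability):
    """Toy acceptance rule: the combined rank in the matrix must
    stay below an (arbitrary, illustrative) threshold."""
    score = SEVERITY.index(severity) + PROBABILITY.index(probability)
    return score <= 4

print(risk_acceptable("catastrophic", "probable"))       # False: must mitigate
print(risk_acceptable("minor", "extremely improbable"))  # True
```

Mitigation (safeguards, redundancy, prevention and recovery mechanisms) works by moving a hazard's cell in this matrix: reducing either its severity or its probability of occurrence until the combination becomes acceptable.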
References
[3] Bill St. Clair, Software Securing Critical Systems, VME and Critical Systems, 2006.
[4] IPL Information Processing Ltd, An Introduction to Safety Critical Systems, Testing Papers.
[5] NASA LaRC Formal Methods Program: What is Formal Methods?
[6] Frederick P. Brooks, Jr., No Silver Bullet: Essence and Accidents of Software Engineering, 1986.
LDRA
Our coding standards compliance tools let you combine standards and define appropriate
rule subsets, select individual rules, and add your own. Within the tool, you can easily
check for coding standards compliance to any single standard, or to a combination of
standards or subsets. You can also check compliance of a single code base against
multiple standards, to compare how the code fulfils each one and see what would be
required to adapt the code to conform to each.
LDRA’s coverage analysis tools address the most stringent requirements in the safety-
and security-critical markets.
We are the only company that offers coverage analysis at both the source code and
object code levels to help you meet the most stringent coverage requirements. LDRA
tools automatically generate test cases, execute those test cases, and visually report
levels of coverage analysis, such as statement, branch/decision, procedure/function call,
MC/DC, dynamic data flow, and more.
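Of the coverage criteria listed above, MC/DC (modified condition/decision coverage) is the most demanding and the least obvious; it requires each condition in a decision to be shown to independently affect the decision's outcome. A minimal sketch, for an invented decision with three conditions:

```python
def decision(a, b, c):
    # A hypothetical guard condition with three atomic conditions.
    return a and (b or c)

# A minimal MC/DC set: for each condition there is a pair of cases
# that flips only that condition and flips the outcome with it.
mcdc_cases = [
    (True,  True,  False),  # outcome True  (baseline)
    (False, True,  False),  # flip a -> outcome flips: a independent
    (True,  False, False),  # flip b vs baseline -> outcome flips
    (True,  False, True),   # flip c vs previous -> outcome flips
]
for a, b, c in mcdc_cases:
    print(a, b, c, "->", decision(a, b, c))
```

Note that three conditions need only four test cases (in general n+1), far fewer than the eight required for exhaustive multiple-condition coverage; this is why MC/DC is the standard compromise at the highest assurance levels (e.g. DO-178C Level A).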
Our tools support C, C++, Java, Ada and Assemblers, running on a broad range of target
platforms—from powerful 64-bit microprocessors to highly constrained 8- or 16-bit
microcontrollers. LDRA tools can automatically generate test cases that provide 50-80% of
coverage. And our intuitive test case building environment lets developers quickly
augment those test cases to increase their coverage if necessary.
LDRA is a standalone tool that meets code coverage requirements as per DoD standards.
LDRA lets you perform unit and integration tests on the host and target hardware
With LDRA, you can quickly and easily generate and execute tests at the unit and
integration levels, both on the host (standalone or with target simulation) as well as on
the target hardware. We provide test generation (test harness, test vectors, code stubs)
and result-capture support for a wide range of host and target platforms. Our optimised
instrumentation technology lets you pull test information even off highly constrained 8-
and 16-bit microcontrollers, and through high-performance 32- and 64-bit processors.
With that range of support, your team has a common unit testing and integration testing
environment for multiple projects with different target platforms.
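The harness-and-stub approach described above can be sketched generically. Nothing here is an LDRA API; the names are hypothetical, and the point is only the structure: the unit under test depends on hardware, and the host-side test replaces that dependency with a stub driven by test vectors:

```python
def read_sensor():
    # On the host there is no target hardware behind this call.
    raise RuntimeError("real hardware not available on the host")

def overtemp_alarm(read=read_sensor, limit=85.0):
    """Unit under test: alarm when the temperature exceeds the limit."""
    return read() > limit

def stub(value):
    """Code stub standing in for the hardware read during unit test."""
    return lambda: value

# Test vectors executed on the host against the stub.
assert overtemp_alarm(read=stub(90.0)) is True
assert overtemp_alarm(read=stub(80.0)) is False
print("unit tests passed on host with stubbed sensor")
```

The same tests can later be re-run on the target hardware with the real `read_sensor` wired in, which is exactly the host/target portability the vendor text describes.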