Data flow diagrams (DFDs) are part of SSADM (Structured Systems Analysis and Design
Method) and are used for the analysis and design of information systems. Data
flow diagrams illustrate how data is processed by a system in terms of inputs and outputs.
Data Flow Diagram Notations:
Process: A process transforms incoming data flow into outgoing data flow.
Data Store: Data stores are repositories of data in the system. They are sometimes also
referred to as files.
Data Flow: Data flows are pipelines through which packets of information flow. Label the
arrows with the name of the data that moves through them.
External Entity: External entities are objects outside the system, with which the system
communicates. External entities are sources and destinations of the system's inputs and
outputs.
Levels of DFDs:-
Data flow diagrams can be expressed as a series of levels. We begin by making a list of
business activities to determine the DFD elements (external entities, data flows, processes,
and data stores). Next, a context diagram is constructed that shows only a single process
(representing the entire system) and its associated external entities. The Level 1 diagram
comes next, revealing the system's general processes and data stores. After the Level 1
diagram is drawn, child diagrams (Level 2 diagrams) are drawn for each process it
illustrates, and so on.
Context Diagrams:
A context diagram is a top level (also known as Level 0) data flow diagram. It only contains
one process node (process 0) that generalizes the function of the entire system in relationship
to external entities.
Figure: General Form of a Level 0 DFD.
Level 1 DFD:
The first-level DFD shows the main processes within the system. Each of these processes can
be broken down into further processes until the level of pseudocode is reached. A Level 1
DFD expands the functions (bubbles) of the context diagram; it is one level more detailed
than the context diagram. The steps to develop a Level 1 DFD are:
• Given the abstract data flows in the context diagram, explicitly define the in-flows and
out-flows of your software in detail.
• For each type of in-flow, create a bubble (a process) to handle the in-flow.
• For each type of out-flow, use a bubble created in the previous step or create a new
bubble to handle it.
• Define the internal processes and data flows necessary to handle all functionalities of
the software, such that all in-flows (sources) and out-flows (sinks) are connected.
Figure: Example of a Level 1 DFD Showing the Data Flow and Data Store Associated
with a Sub Process “Digital Sound Wizard.”
When producing a first-level DFD, the relationship of the system with its environment must
be preserved. In other words, the data flows in and out of the system in the Level 1 DFD
must be exactly the same as those in the Level 0 diagram.
Level 2 DFD:
DFD Level 2 is a further decomposition of DFD Level 1. The steps to develop a Level 2 DFD
are essentially the same as those for Level 1, with some extra requirements:
• Each bubble should be clearly defined and also simple enough to implement and test in
later stages.
• The data flows should be consistent with the Level 1 DFD and the context diagram.
Draw data flow diagrams in several nested layers. A single process node on a high level
diagram can be expanded to show a more detailed data flow diagram. Draw the context
diagram first, followed by various layers of data flow diagrams.
Figure: The nesting of data flow layers.
Advantages of DFDs:-
• The DFD method is an element of structured analysis and is widely used. Use of
DFDs promotes quick and relatively easy development of project code.
• DFDs are easy to learn with their few and simple-to-understand symbols.
• The structure used for designing DFDs is simple, employing English nouns or noun-
adjective-verb constructs.
Disadvantages of DFDs:-
• DFDs for large systems can become cumbersome, difficult to translate and read, and
time consuming to construct.
• Data flow can become confusing to programmers, yet DFDs are of little use without the
prerequisite detail.
• Different DFD models employ different symbols (circles and rectangles, for example, for
entities).
Experiment no.-2
Abstract:-
In today's world, Information Technology has become an inseparable part of every business
organization. It not only makes business operations easier but also provides a significant
rise in productivity. A supermarket needs to keep records of its inventory and sales. Along
with these, it needs to keep records of employees and generate their pay slips, not to
forget the generation of bills for customers. All this, if done manually, consumes a lot of
time, space, and stationery. A better option is to computerize these operations using
suitable software. This case study is an effort to study the way in which such automation
of a supermarket can be done. It explains the whole procedure of computerization step by
step using a desktop application. It looks into the effective working of the supermarket
and so ensures that the desired result of profitability is achieved.
The software developed under the project 'Computerization of Supermarket' will be user-
friendly software that allows the owner to perform the following automated functions:
1. Maintain records of suppliers
2. Maintain records of supplies
3. Keep track of available stock
4. Automatically track re-order levels
5. Generate bills
6. Maintain the employee database
7. Generate pay slips
8. Answer the manager's queries to the database
9. Generate details about budget and profits
The automation of all such operations would result in more efficient working of the business,
better record keeping, and higher productivity. It would make running a supermarket an
altogether different experience for the owner. It will also save customers' time, giving
them a better shopping experience.
The software under consideration in this case study will be referred to as ‘Supermarket
Automation Software’ (SAS). By utilizing the detailed reports generated by SAS, the
business can be better analyzed to get prepared for the future and hence give the customers
what they really want from the store, leading to bigger profits, which is the ultimate name of
the game in retailing.
Disadvantages of the existing system:-
The present system uses manual methods; its disadvantages are as listed below:
Feasibility study:-
Purpose:
A feasibility study is a compressed, capsule version of the analysis phase of the system
development life cycle aimed at determining quickly and at a reasonable cost if the problem
can be solved and if it is worth solving. A feasibility study can also be viewed as an in-depth
problem definition.
A well-conducted feasibility study provides a sense of the likelihood of success and of the
expected cost of solving the problem, and gives management a basis for making resource
allocation decisions. In many organizations, the feasibility study reports for all pending
projects are submitted to a steering committee where some are rejected and others accepted
and prioritized.
Because the feasibility study occurs near the beginning of the system development life cycle,
the discovery process often uncovers unexpected problems or parameters that can
significantly change the expected system scope. It is useful to discover such issues before
significant funds have been expended. However, such surprises make it difficult to plan,
schedule, and budget for the feasibility study itself, and close management control is needed
to ensure that its cost does not balloon out of control.
It is important to remember that the feasibility study is preliminary. The point is to determine
if the resources should be allocated to solve the problem, not to actually solve the problem.
Conducting a feasibility study is time consuming and costly. For essential or obvious
projects, it sometimes makes sense to skip the feasibility study.
The feasibility study begins with the problem description prepared early in the problem
definition phase of the system development life cycle. The feasibility study is, in essence, a
preliminary version of the analysis phase of the system development life cycle. The
information collected during the feasibility study is used during project planning to prepare
schedules, budgets, and other project management documents using different tools.
Prototypes and simulation models are sometimes used to demonstrate technical feasibility.
Economic feasibility is typically demonstrated using cost/benefit analysis.
Cost of feasibility study:
The point of the feasibility study is to determine, at a reasonable cost, if the problem is worth
solving. Thus the cost of the feasibility study should represent a small fraction of the
estimated cost of developing the system, perhaps five or ten percent of that estimate.
Types of feasibility:
• Technical feasibility—
Proof that the problem can be solved using existing technology. Typically, the analyst proves
technical feasibility by citing existing solutions to comparable problems. Prototypes, physical
models, and analytical techniques are also effective.
For the SAS being studied here, the requisite technology is easily available in the market and
can be acquired for software development.
• Economic feasibility— Proof that the likely benefits outweigh the cost of solving the
problem, generally demonstrated through a cost/benefit analysis.
In the case of SAS, the increase in profit will certainly outweigh the investment, so the
project is clearly economically feasible.
• Operational feasibility— Proof that the problem can be solved in the user’s environment.
Perhaps a union agreement or a government regulation constrains the analyst. There might be
ethical considerations. Maybe the boss suffers from computer phobia. Such intangible factors
can cause a system to fail just as surely as technology or economics. Some analysts call this
criterion political feasibility.
In the case of the supermarket, computers can be bought along with the required software,
and employees who are not technically skilled up to the required level can easily be trained
to use them.
• Organizational feasibility— Proof that the proposed system is consistent with the
organization’s strategic objectives. If not, funds might be better spent on some other project.
SAS provides an efficient way to achieve the organization's objectives. Hence, this project is
organizationally feasible.
The steps in a typical feasibility study are summarized in Figure that follows.
Fig. The steps in a typical feasibility study.
Starting with the initial problem description, the system’s scope and objectives are more
precisely defined. The existing system is studied, and a high-level logical model of the
proposed system is developed using one or more of the analysis tools. The problem is then
redefined in the light of new knowledge, and these first four steps are repeated until an
acceptable level of understanding emerges.
Given an acceptable understanding of the problem, several possible alternative solutions are
identified and evaluated for technical, economic, operational, and organizational feasibility.
The responsible analyst then decides if the project should be continued or dropped, roughs
out a development plan (including a schedule, a cost estimate, likely resource needs, and a
cost/benefit analysis), writes a feasibility study report, and presents the results to management
and to the user.
Requirement Analysis:-
Functional Requirements:
The set of functionalities that are supported by the system are documented below –
register sales
Whenever any item is sold from the stock of the supermarket, this function prompts the
clerk to provide the product_id of each item. The data regarding the item type and quantity
are then registered automatically. At the end of a sales transaction, it prints the bill
containing the serial number of the sales transaction, the name of each item, its code
number, quantity, unit price, and item price. The bill also indicates the total amount payable.
The input is the automatically registered data about each item along with its quantity;
processing occurs as each sold item is registered.
generate bill
The input is an automatically generated 'generate bill' command. The output is the
transaction bill containing the serial number of the sales transaction, the name of each
item, its code number, quantity, unit price, and item price. The bill also mentions the total
amount payable.
update inventory
In order to support inventory management, this function decreases the inventory whenever an
item is sold. Again, when there is a new supply arrival, an employee can update the inventory
level using this function. The input is a new supply (on arrival) or registered sold items.
Processing occurs whenever a new supply arrives or items are sold, which updates the
inventory.
check inventory
Upon invoking this function, the manager can issue a query to see the inventory details. In
response, the system shows the inventory details.
print sales-statistics
Upon invoking this function, it generates printed sales statistics for every item the
supermarket deals in, for any particular day or period.
update price
The manager can change the price of an item by invoking this function. The input is the
change-price command along with the newly assigned price. Processing updates the price of
the corresponding item in the inventory.
generate payslip
SAS keeps track of employees' information and performance. It also generates pay slips for
the employees. It can provide reports on employees based on particular requirements of the
management.
Non-functional Requirements:
Bill format
The bill should contain the serial number of the sales transaction, the name of each item, its
code number, quantity, unit price, and item price. The bill should indicate the total amount
payable.
Sales statistics format
The sales statistics report should indicate the quantity of each item sold, the price realized,
and the profit.
Payslip format
It should contain individual employee names, ids and the pay details.
Technical Requirements
• Smart Draw 6
• MS SQL Server
• 2 GB RAM
• Laser printers
Structured Analysis:-
Data Flow Diagram: The context level diagram of the Supermarket Automation Software is
shown below. It is a top level (also known as Level 0) data flow diagram. It only contains one
process node (process 0) named SAS that generalizes the function of the entire automated
system in relationship to external entities.
Fig: Context diagram (Level 0 DFD) for SAS. The single process SAS exchanges data flows
(sales, sold items, new supply, bill, query, display, generate-sales-statistics command,
change price, pay slip) with the external entities Salesman, Employee, and Manager, and
with the employee database.
Fig: First-level diagram for SAS. Recoverable processes include generate sales statistics
(0.4), print sales statistics (0.5), and salary calculation with pay-slip generation (0.6),
together with update price and the sales information and inventory details data flows.
Experiment no.-4
Fig: Second-level diagram for SAS. Process 0.1 (register sales) decomposes into 0.1.1 read
product details, 0.1.2 register sold items, and 0.1.3 generate bill (producing the transaction
bill); process 0.2 (update inventory) decomposes into 0.2.1 read inventory records and 0.2.2
order new items (using supplier information for new supply); process 0.6 (salary
calculation) decomposes into 0.6.1 read employee records (including performance details),
0.6.2 calculate salary (using financial details), and 0.6.3 generate pay slip. The processes
draw on the inventory and employee databases.
Experiment no.-5
ERDs:
Entity Relationship Diagrams (ERDs) illustrate the logical structure of databases. Diagrams
created using this modeling process are called entity-relationship diagrams, ER diagrams,
or ERDs for short.
Figure: An ER diagram.
Entity
An entity is an object or concept about which you want to store information.
Weak Entity
A weak entity is an entity that cannot be uniquely identified by its own attributes alone and
must be defined through a relationship with another entity.
Attribute
Attributes are the properties or characteristics of an entity.
Key attribute
A key attribute is the unique, distinguishing characteristic of the entity. For example, an
employee's social security number might be the employee's key attribute.
Multivalued attribute
A multivalued attribute can have more than one value. For example, an employee entity can
have multiple skill values.
Derived attribute
A derived attribute is based on another attribute. For example, an employee's monthly salary
is based on the employee's annual salary.
Relationships
Relationships illustrate how two entities share information in the database structure.
Cardinality
Cardinality specifies how many instances of an entity relate to one instance of another entity.
Ordinality is closely linked to cardinality. While cardinality specifies the occurrences of a
relationship, ordinality describes the relationship as either mandatory or optional. In other
words, cardinality specifies the maximum number of relationships and ordinality specifies the
absolute minimum number of relationships.
Recursive relationship
In some cases, entities can be self-linked. For example, employees can supervise other
employees.
Entity Relationship Diagrams for case study:
Fig: ER diagram for the case study (suppliers, products, and customers). SUPPLIERS
(Sup_id, composite Name {Fname, Lname}, Phone, composite Address {City, State, Pin},
Bal, Del_type, Next_del_date) is related to PRODUCTS (P_id, Name, Qty, Price_per_unit,
Reorder_level, Reorder_date) through the 1:N relationship supplies; PRODUCTS is related
to CUSTOMERS (C_id, composite Name {Fname, Lname}, Ph, composite Address {Hno,
City, State, Pin}, Balance_pay, Amount_paid, M_date) through the M:N relationship
sold_to.
Fig: ER diagram for the case study (employees and salary). EMPLOYEES (E_id, composite
Name {Fname, Lname}, D_o_j, Acc_no, Ph, composite Address {H_no, City, State, Pin}) is
related to SALARY (Basic, TA, DA, Incentives, Total) through the 1:1 relationship earn.
Experiment no.-6
Objective :- To familiarize with the concept of mapping cardinalities and relationships in
ER diagram.
H/W Requirement :-
•Processor – Any suitable Processor e.g. Celeron
•Main Memory - 128 MB RAM
•Hard Disk – minimum 20 GB IDE Hard Disk
•Removable Drives
–1.44 MB Floppy Disk Drive
–52X IDE CD-ROM Drive
•PS/2 HCL Keyboard and Mouse
Method:-
Mapping Cardinalities: These express the number of entities to which another entity can
be associated via a relationship. For binary relationship sets between entity sets A and B,
the mapping cardinality must be one of:
• many-to-one
• one-to-many
• one-to-one
• many-to-many
3. One-to-One (1:1): A one-to-one relationship between two entities indicates that each
occurrence of one entity in the relationship is associated with a single occurrence in the
related entity. There is a one-to-one mapping between the two, such that knowing the
value of one entity gives you the value of the second. For example, in this relationship an
Employee uses a maximum of one Workstation:
4. Many-to-Many (M:M): A many-to-many relationship between two entities indicates that
either entity participating in the relationship may occur one or several times. The
example indicates that there may be more than one Employee associated with each Project,
and that each Employee may be associated with more than one Project at a time. That is,
projects may share employees.
The appropriate mapping cardinality for a particular relationship set depends on the real
world being modeled.
Relationships:
A relationship is an association that exists between two entities. For example, Instructor
teaches Class or Student attends Class. Most relationships can also be stated inversely. For
example, Class is taught by Instructor.
Relationships Between Entities:
There can be a simple relationship between two entities. For example, Student attends a
Class:
Some relationships involve only one entity. For example, Employee reports to Employee:
There can be a number of different relationships between the same two entities. For example:
• Employee is assigned to a Project,
• Employee bills to a Project.
One entity can participate in a number of different relationships involving different entities.
For example:
Characteristics of Relationships:
A relationship may be depicted in a variety of ways to improve the accuracy of the
representation of the real world. The major aspects of a relationship are:
Naming the Relationship: Place a name for the relationship on the line representing the
relationship on the E-R diagram. Use a simple but meaningful action verb (e.g., buys, places,
takes) to name the relationship. Assign relationship names that are significant to the business
or that are commonly understood in everyday language.
Bi-directional Relationships: Whenever possible, use the active form of the verb to name
the relationship. Note that all relationships are bi-directional. In one direction, the active
form of the verb applies. In the opposite direction, the passive form applies.
For example, the relationship Employee operates Machine is named using the active verb
operates:
However, the relationship Machine is operated by Employee also applies. This is the passive
form of the verb.
By convention, the passive form of the relationship name is not included on the E-R diagram.
This helps avoid clutter on the diagram.
Relationship Dependency:
The dependency of a relationship describes the minimum participation of each entity in the
relationship. A relationship may be:
• mandatory,
• optional,
• contingent.
Mandatory Relationship:
A mandatory relationship indicates that for every occurrence of entity A there must exist an
occurrence of entity B, and vice versa.
Optional Relationship:
An optional relationship between two entities indicates that it is not necessary for every entity
occurrence to participate in the relationship. In other words, for both entities the minimum
number of instances in which each participates, in each instance of the relationship is zero
(0).
As an example, consider the relationship Man is married to Woman. Both entities may be
depicted in an Entity-Relationship Model because they are of interest to the organization.
However, not every man, or woman, is necessarily married. In this optional relationship, a
man or woman who is not married simply does not participate in an occurrence of the
relationship.
The optional relationship is useful for depicting changes over time where relationships may
exist one day but not the next. For example, consider the relationship "Employee attends
Training Seminar." There is a period of time when an Employee is not attending a Training
Seminar or a Training Seminar may not be held.
Contingent Relationship:
A contingent relationship represents an association which is mandatory for one of the
involved entities, but optional for the other. In other words, for one of the entities the
minimum number of instances that it participates in each instance of the relationship is one
(1), the mandatory association, and for the other entity the minimum number of instances that
it participates in each instance of the relationship is zero (0), the optional association.
Contingent relationships may exist due to business rules, such as Project is staffed by
Consultant.
In this case, a Project may or may not be staffed by a Consultant. However, if a Consultant is
registered in the system, a business rule may state that a Consultant must be associated with a
Project.
The mapping cardinalities as referred from the ER diagram of case study described in
previous experiment are as follows:
• supplies, from SUPPLIERS to PRODUCTS: 1:M
• sold_to, from PRODUCTS to CUSTOMERS: M:N
• earn, from EMPLOYEES to SALARY: 1:1
• reports, a recursive relationship on EMPLOYEES
Experiment no.-7
Aim:- Techniques used in Black Box testing.
Theory:-
Black box testing is also known as functional testing. It is a software testing technique
whereby the internal structure of the item being tested is not known to the tester. For
example, in a black box test of a software design, the tester only knows the inputs and what
the expected outcomes should be, not how the program arrives at those outputs.
The tester doesn’t ever examine the programming code and doesn’t need any further
knowledge of the program other than its specifications. Black box testing is not a type of
testing; it instead is a testing strategy, which doesn’t need any knowledge of internal design
or code etc. As the name “black box” suggests, no knowledge of internal logic or code
structure is required. The base of the black box testing strategy lies in the selection of
appropriate data as per functionality and testing it against the functional specifications in
order to check for the normal and abnormal behavior of the system. To implement the black
box strategy, the tester needs to be thorough with the requirement specifications of the
system and, as a user, should know how the system should behave in response to particular
actions. The types of testing under this strategy are based entirely on testing the
requirements and functionality of the work product or software application. Black box testing
is sometimes also known as “Opaque Testing”, “Functional or Behavioral Testing” and also
as “Closed Box Testing”.
There are essentially two main approaches to designing black box test cases:
• Equivalence class partitioning
• Boundary value analysis
Equivalence class partitioning:
In this approach, the domain of input values is divided into a set of equivalence classes so
that the program behaves in a similar way for every input value belonging to the same
class. General guidelines for defining the classes:
1. If the input data values to a system can be specified by a range of values, then one valid
and two invalid equivalence classes should be defined.
2. If the input can assume values from a set of discrete members of some domain, then one
equivalence class for valid input values and another equivalence class for invalid input
values should be defined.
Example: For software that computes the square root of an input integer which can assume
values in the range from 1 to 5000, there are three equivalence classes: the set of negative
integers, the set of integers in the range 1 to 5000, and the set of integers larger than 5000.
Therefore, the test cases must include representatives from each of the three equivalence
classes, and a possible test set is {-5, 500, 6000}.
Boundary value analysis:
Boundary value analysis selects test cases at the boundaries of the equivalence classes, since
programmers often fail to handle boundary values correctly.
Example: For a function that computes the square root of integer values in the range from 1
to 5000, the test cases must include the boundary values {0, 1, 5000, 5001}.
Experiment no.-8
Aim:- Techniques used in White Box testing.
Theory:-
White box test cases require thorough knowledge of the internal structure of the software.
Therefore, it is also known as structural testing. The different approaches to white box testing
are explained below:
• Statement coverage:
The statement coverage methodology aims to design test cases so as to force the execution of
every statement in a program at least once. The principal idea behind this methodology is
that unless a statement is executed, we have no way of determining whether an error exists
in it. In other words, the criterion is based on the observation that an error in one part of a
program cannot be discovered if the part of the program containing the error is never
executed. However, executing a statement for just one input and observing that the program
behaves properly does not guarantee that it will work correctly for all input values.
int compute_gcd(int x, int y)
{
    while (x != y) {
        if (x > y)
            x = x - y;
        else
            y = y - x;
    }
    return x;
}
By choosing the test set {(x=3, y=3), (x=4, y=3), (x=3, y=4)}, we exercise the program such
that all statements are executed at least once.
• Branch coverage:
In the branch coverage based testing methodology, test cases are designed such that each
branch condition is made to evaluate to both true and false in turn. Branch testing
guarantees statement coverage and is thus a stronger testing criterion than statement
coverage.
For example:
int compute_gcd(int x, int y)
{
    while (x != y) {
        if (x > y)
            x = x - y;
        else
            y = y - x;
    }
    return x;
}
By choosing the test cases {(x=3, y=3), (x=3, y=2), (x=4, y=3), (x=3, y=4)}, all the branches
of the program can be executed at least once.
• Condition coverage:
In this white box testing approach, test cases are designed such that each component of a
composite conditional expression is given both true and false values. E.g., for the
conditional expression (c1 and c2 or c3), the components c1, c2, and c3 are each exercised
at least once, i.e., each is given both true and false values. Condition testing is a stronger
testing method than branch testing, and branch testing is stronger than statement coverage
testing. However, for a Boolean expression of n variables, condition coverage requires 2^n
test cases. Therefore, a condition coverage based testing technique is practical only if n is
small.
• Path Coverage:
The path coverage based testing strategy requires us to design test cases such that all linearly
independent paths in the program are executed at least once. A linearly independent path is
defined in terms of a control flow graph (CFG), which describes the sequence in which the
different instructions of a program get executed. In other words, a CFG describes how the
flow of control passes through the program. To draw the CFG of a program, we first
number all the statements of the program.
E.g., numbering the statements of compute_gcd:
1. while (x != y) {
2.   if (x > y)
3.     x = x - y;
4.   else y = y - x;
5. }
6. return x;
The numbered statements serve as the nodes of the CFG, and an edge from one node to
another exists if the execution of the statement represented by the first node can result in
the transfer of control to the second node.
A path through the program is a sequence of nodes and edges from the starting node to a
terminal node of the program's CFG. An independent path is any path through the program
that introduces at least one new node that is not included in any other linearly independent
path.
Experiment no.-9
Aim:- Representing sequence in a structured chart.
H/W Requirement :-
•Processor – Any suitable Processor e.g. Celeron
•Main Memory - 128 MB RAM
•Hard Disk – minimum 20 GB IDE Hard Disk
•Removable Drives
–1.44 MB Floppy Disk Drive
–52X IDE CD-ROM Drive
•PS/2 HCL Keyboard and Mouse
Method:-
Description of a Module:
Logically, a module is one problem-related task that the program performs, such as Create
Invoice or Validate Customer Request. Physically, a module is implemented as a sequence of
programming instructions bounded by an entry point and an exit point.
Common Modules:
There are two categories of common modules: system common modules and application
common modules.
Some common modules (e.g., security, navigation, audit trails, and help) do not fall clearly
into either category. These modules may be common to more than one application but are
not system-level modules.
System common modules should be defined as part of the preliminary design. Application
common modules should be defined as early as possible but many may not be identified until
detailed design.
Result: This experiment introduces the concept of structured charts.
Experiment no.-10
H/W Requirement :-
•Processor – Any suitable Processor e.g. Celeron
•Main Memory - 128 MB RAM
•Hard Disk – minimum 20 GB IDE Hard Disk
•Removable Drives
–1.44 MB Floppy Disk Drive
–52X IDE CD-ROM Drive
•PS/2 HCL Keyboard and Mouse
Method:-
A rectangle is used to represent a module on a Structure Chart. The module name is written
inside the rectangle. Other than the module name, the Structure Chart gives no information
about the internals of the module.
Existing Module:
Existing modules may be shown on a Structure Chart. An existing module is represented by
double vertical lines.
Unfactoring Symbol:
An unfactoring symbol is a construct on a Structure Chart that indicates the module will not
be a module on its own but will be lines of code in the parent module. An unfactoring
symbol is represented with a flat rectangle on top of the module that will not be a module
when the program is developed.
An unfactoring symbol reduces factoring without having to redraw the Structure Chart. Use
an unfactoring symbol when a module that is too small to exist on its own has been included
on the Structure Chart. The module may exist because factoring was taken too far or it may
be shown to make the chart easier to understand. (Factoring is the separation of a process
contained as code in one module into a new module of its own).
Result: This experiment introduces the concept of creating modules in structured charts.