
CHAPTER 1

INTRODUCTION

1. INTRODUCTION

1.1. PROBLEM STATEMENT

The purpose of this project is to build an online system to manage and control electrical
equipment using camera sensing, and to take care of the security of any premises.

1.2. INTENDED AUDIENCE AND READING SUGGESTIONS

This project is a prototype electricity management system intended for any office or
organizational premises that deals with a large number of people. It has been implemented under
the guidance of college professors. The project is useful both for security purposes and for
diminishing the wastage of electricity.

1.3. PROJECT SCOPE

The purpose of online electricity management using camera sensing is to reduce the
unnecessary wastage of electricity when a space is unoccupied, and to create a convenient,
easy-to-use application for organizations that deal with large numbers of people visiting and
working on their premises. A sensing camera will support different organizational spaces,
including classrooms, offices and hotel rooms, with an administrator to monitor and control it.
Above all, we hope to provide a comfortable user experience along with the best pricing available.
1.4. PLATFORM SPECIFICATION

1.4.1. HARDWARE:

• Camera
• Arduino
• Electric Equipment

1.4.2. SOFTWARE:

• Arduino IDE
• MySQL database
• Python

1.4.3. IMPLEMENTATION LANGUAGE:

• Python
• Java

CHAPTER 2

SYSTEM ANALYSIS

2. SYSTEM ANALYSIS

2.1. IDENTIFICATION OF NEED:

Unnecessary wastage of electricity, as well as the security of working and living environments,
is a major concern worldwide. Regarding the wastage of electricity, it is very common for
people to forget to switch off lights and fans after leaving a place, which gradually becomes a
cause of electricity wastage. Moreover, residential and organizational security systems are
becoming more and more important due to the increasing crime and theft around us. There is a
need for an automated system that can manage both electricity and security.

2.2. PRELIMINARY INVESTIGATION:

Investigation has revealed that current camera systems are good but relatively expensive; in
addition, these camera systems sometimes fail to detect people during occlusion. Furthermore,
automated systems that manage and control electrical equipment by turning it off when a room
is empty are still not prevalent.

CHAPTER 3

FEASIBILITY STUDY

3. FEASIBILITY STUDY
The feasibility study is a major factor which contributes to the analysis and development of the
system. The decision of the system analyst whether to design a particular system or not depends
on its feasibility study.

Requirement analysis is supported by different feasibility studies. A feasibility study is
undertaken whenever there is a possibility of improving an existing system or designing a new
one. A feasibility study helps to meet user requirements, to determine the potential of the
existing system and improve it, to develop a technically and economically feasible system, to
decide what should be embedded in the system, and to develop a cost-effective system that
makes better use of the available resources.

The project concept is feasible because of the following:

3.1 Technical Feasibility

3.2 Economical Feasibility

3.3 Operational Feasibility

3.1. TECHNICAL FEASIBILITY:

It is a measure of how practical a solution is and whether the technology is already available
within the organization. If the technology is not available to the firm, technical feasibility also
considers whether it can be acquired. Technical feasibility centres on the existing system and
the extent to which its support can be extended to the proposed system. This project is technically
feasible, and to a great extent the existing systems support the proposed system.

3.2. ECONOMICAL FEASIBILITY:

It is a measure of the cost-effectiveness of a project or solution. It measures whether a solution
will pay for itself or how profitable it will be; this is often called a cost-benefit analysis.

3.3. OPERATIONAL FEASIBILITY:


It is a measure of how well the system will work in the organization, and also of how people
feel about the system/project. In this project the users feel that the system is very user friendly.
The project is worthwhile, and the solutions to the problem will work successfully.

CHAPTER 4

LITERATURE SURVEY

4. LITERATURE SURVEY

This part gives an overview of the research carried out over the last ten years to solve the
complicated problem of counting people. Thanks to the fast evolution of computing, it is now
possible to count people using computer vision, even though the process is extremely costly in
terms of computing operations and resources.

Computer-vision-based people counting offers an alternative to other methods. The first and
most common problem of all computer-vision systems is to separate people from the background
scene (determining the foreground and the background). Many methods have been proposed to
solve this problem. Several people-counting systems use multiple cameras (most of the time two)
to help with this process.

Terada et al. created a system that determines people's direction of movement and counts them
as they cross a virtual line [TER99]. The advantage of this method is that it avoids the problem
of occlusion when groups of people pass through the camera's field of view. To determine the
direction of movement, a space-time image is used. Like Terada et al., Beymer and Konolige also
use stereo vision for people tracking [BEY99]. Their system uses continuous tracking and
detection to handle occlusion between people. Template-based tracking can drop the detection of
people as they become occluded, eliminating false positives in tracking. Using multiple cameras
improves the resolution of the occlusion problem, but it requires a good calibration of the two
cameras (when 3D reconstruction is used).
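The virtual-line counting idea can be sketched in a few lines of Python. This is an illustrative sketch, not the implementation of [TER99]: a track is assumed to be the sequence of a person's centroid y-coordinates over successive frames, and the line position is arbitrary.

```python
# Illustrative sketch of counting virtual-line crossings from one tracked
# person's centroid positions (y-coordinates over successive frames).

def count_crossings(track, line_y):
    """Return (entries, exits) for one track against a horizontal line."""
    entries = exits = 0
    for prev, curr in zip(track, track[1:]):
        if prev < line_y <= curr:       # moved downward across the line
            entries += 1
        elif prev >= line_y > curr:     # moved upward across the line
            exits += 1
    return entries, exits
```

A track moving from y = 10 to y = 80 across a line at y = 50 registers one entry; the direction is recovered from which side of the line was crossed first, which is the essence of the space-time approach.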

4.1. WORK DONE BY OTHER:

Hashimoto et al. solve the problem of people counting using a specialized imaging system of
their own design (using IR-sensitive ceramics, mechanical chopping parts and IR-transparent
lenses) [HAS97]. The system uses background subtraction to create “thermal” images (placing
more or less importance on different regions of the image; regions of interest) that are analysed
in a second stage. They developed an array-based system that could count persons at a rate of
95%, so their system is extremely accurate, but only under certain conditions. To work well, the
system requires a distance of at least 10 cm between passing people in order to distinguish them
and count them as two separate persons. The system also shows some problems in counting when
there are large movements of arms and legs. This system would therefore not be appropriate in a
commercial centre because of the high-density traffic when people enter or exit. In fact, most of
the time people come to supermarkets with their families, forming close groups of people, which
is the most difficult case for a people-counting system to resolve.

Tesei et al. use image segmentation and memory to track people and handle occlusions [TES96].
To highlight regions of interest (blobs), the system uses background subtraction: a reference
frame (a background image computed previously) is subtracted from the current frame, and the
result is thresholded (this algorithm is detailed further in the analysis section). Using features
such as blob area, height and width, bounding-box area, perimeter and mean grey level, the blobs
are tracked from frame to frame. By memorizing all these features over time, the algorithm can
resolve the merging and separating of blobs that occurs during occlusion. When blobs merge
during an occlusion, a new blob is created with new features, but the idea of this algorithm is
that the new blob stores the features of the blobs that formed it, so that when the blobs separate
again, the algorithm can assign them their original labels. This system does not solve all the
problems, but it is a good idea and does not require a lot of computing operations.
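The background subtraction step described above can be sketched as follows. This is a minimal pure-Python illustration assuming grayscale frames stored as nested lists and an arbitrary threshold of 30 grey levels; a real system would use NumPy or OpenCV for speed.

```python
# Pure-Python sketch of background subtraction with thresholding on
# grayscale frames stored as nested lists (row-major, values 0-255).

def foreground_mask(frame, background, threshold=30):
    """Mark a pixel as foreground (1) when it differs from the reference
    background by more than `threshold` grey levels, else background (0)."""
    return [
        [1 if abs(p - b) > threshold else 0 for p, b in zip(f_row, b_row)]
        for f_row, b_row in zip(frame, background)
    ]
```

The resulting binary mask is what the blob-extraction and feature-tracking stages then operate on.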

Shio and Sklansky try to improve the background segmentation algorithm (detecting people
under occlusion) by simulating human vision, more particularly the effect of perceptual grouping
[SHI91]. First, the algorithm calculates an estimate of the motion from consecutive frames
(frame differencing is detailed further in the analysis section) and uses this data to help the
background subtraction algorithm segment people from the background and to determine the
boundary between nearby persons (when occlusions occur, boundaries are drawn to separate
people using the frame-differencing information). This segmentation uses a probabilistic object
model holding information such as width, height and direction of motion, and a merging/splitting
step like the one seen before. It was found that using an object model is a good improvement for
the segmentation and a possible way to resolve the occlusion problem, but perceptual grouping
is totally ineffective in some situations, for example a group of people moving in the same
direction at almost equal speeds. Another method of separating people from a background image
is used by Schofield et al. [SCH95]. The whole background segmentation is done by simulating a
neural network, and a dynamically adjusted spacing algorithm is used to solve occlusions. But
because of the reduced speed of the neural network, the algorithm only deals with counting
people in a single image. This paper is just one approach to people counting using neural
networks; tracking people is not considered.

As a simple and effective approach, Sexton et al. use a simplified segmentation algorithm
[SEX95]. They tested their system in a Parisian railway station and obtained error rates ranging
from 1% to 20%. Their system uses background subtraction to isolate people from the
background. The background image (reference frame) is constantly updated to improve the
segmentation and reduce the effect of lighting or environmental changes. The tracking is done
simply by matching the blobs produced by the background subtraction process with the closest
centroids: tracking is performed frame to frame, and a blob in the current frame receives the
same label as the blob in the previous frame with the closest centroid. To avoid the majority of
occlusions, an overhead video camera is used. Segen and Pingali concentrate on image processing
after segmentation [SEG96]. A standard background algorithm is used to determine the different
regions of interest. Then, in each of those areas, the algorithm identifies and tracks features
between frames. The path of each feature is stored and represents the motion of a person
throughout the process. Using those paths, the algorithm can easily determine how many people
crossed a virtual line and the direction of each crossing. This system does not deal with occlusion
problems, and its performance can degrade if there are many persons in the camera's field of
view: the path data becomes large, which complicates the calculation of intersections between
the line and all the paths. Haritaoglu and Flickner adopt another method to solve the problem of
real-time people tracking [HAR01]. To segment silhouettes from the background, they use a
background subtraction based on the colour and intensity of pixel values. This information helps
to classify all the pixels in the image into three classes: foreground, background and shadow. The
pixels classified as foreground then form different regions, and these foreground groups are
segmented into individual people using two different motion constraints, temporal and global.
To track these individuals, the algorithm uses an appearance model based on colour and edge
densities. Gary Conrad and Richard Johnsonbaugh simplify the entire people-counting process
by using an overhead camera (which greatly reduces the problem of occlusions) [CON94]. To
avoid the problem of lighting changes, they use consecutive-frame differencing instead of
background subtraction. To limit computation, their algorithm reduces the working space to a
small window of the full scene, perpendicular to the traffic flow. At any given time, the
algorithm can determine the number of people in the window and their direction of travel by
using the centre of mass in each small image of the window. With a quick and simple
algorithm, they obtained very good results, achieving a 95.6% accuracy rate over 7491 people.
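Sexton et al.'s closest-centroid matching can be sketched as below. The labelling scheme, distance threshold and data layout are illustrative assumptions for this document, not the authors' code.

```python
import math

# Illustrative sketch of frame-to-frame blob labelling by closest centroid,
# in the spirit of [SEX95]. Labels, thresholds and data layout are assumed.

def match_centroids(previous, current, max_dist=50.0):
    """previous: {label: (x, y)} centroids from the last frame;
    current: [(x, y), ...] centroids detected in this frame.
    Returns {label: (x, y)}; unmatched blobs get fresh labels."""
    assignments = {}
    next_label = max(previous, default=-1) + 1
    used = set()
    for cx, cy in current:
        best, best_d = None, max_dist
        for label, (px, py) in previous.items():
            d = math.hypot(cx - px, cy - py)
            if d < best_d and label not in used:
                best, best_d = label, d
        if best is None:                 # no previous blob close enough
            best, next_label = next_label, next_label + 1
        used.add(best)
        assignments[best] = (cx, cy)
    return assignments
```

Because matching is purely frame-to-frame, this inherits the limitation noted above: a missed detection or occlusion breaks the label chain unless extra memory (as in Tesei et al.) is added.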

LITERATURE SURVEY AS TABLE:

TABLE 4.1

4.2. BENEFITS:

There are different benefits of this project, such as:

• Reduces unnecessary wastage of electricity
• Automated system
• Better people counting in crowds

4.3. PROPOSED SOLUTION:

• There is still no existing system to monitor and control electricity in organizations or other
premises. Our project aims at providing an automated camera system which monitors and
controls electricity.
• In our project, we have focused on the people-counting system and used it to control
the electric supply. For instance, if there is no one in the room, the people count from the
camera comes out to be zero, which triggers the camera to send a message to switch off the
electric supply in that room.
• Apart from this feature, we have also worked on making the camera function and work at
night. A time duration will be set in the camera for night, during which alerts and
notifications will be sent to the owner or admin on detecting any unwanted activity.

4.4. TECHNOLOGY USED:

Front End:

• HTML
• CSS
• Python

Back End:

• Arduino
• Python
• MySQL database
• Java

CHAPTER 5

TECHNICAL PART

5. TECHNICAL PART
5.1. FUNCTIONAL REQUIREMENTS:

Functional requirements describe how a product must behave and what its features and functions
are. They are the product features or functions that developers must implement to enable users
to accomplish their tasks, so it is important to make them clear both for the development team
and for the stakeholders. Generally, functional requirements describe system behaviour under
specific conditions.

Functional Requirements:

1. When someone enters a room, the system should properly count the persons and generate
an ID for each person.
2. The user should be able to see the live footage of the camera on a website from anywhere,
simply by authenticating himself.
3. If someone enters the room while its electric lights are off, then after counting that person
the system should turn the lights on.
4. If there is no one in the room and the lights are on, then the lights should be turned off.
5. If someone enters the room during a particular period of time, say at night, then a
notification should be sent to the administrator.
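The light-control and notification rules above can be sketched as a small decision function. This is an illustrative sketch only: the command strings and the `is_night` flag are assumptions, not part of the specification.

```python
# Sketch of the light-control and notification rules from functional
# requirements 3-5; command names are illustrative assumptions.

def decide(count, lights_on, is_night):
    """Return (light_command, notify_admin) for the current room state."""
    if count > 0 and not lights_on:
        command = "LIGHTS_ON"        # req. 3: someone entered, lights were off
    elif count == 0 and lights_on:
        command = "LIGHTS_OFF"       # req. 4: room empty, lights still on
    else:
        command = None               # no change needed
    notify_admin = is_night and count > 0   # req. 5: night-time entry alert
    return command, notify_admin
```

For example, `decide(1, False, False)` yields `("LIGHTS_ON", False)`, while `decide(2, True, True)` changes nothing but flags the administrator.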

5.2. NON-FUNCTIONAL REQUIREMENTS:

A non-functional requirement is any requirement which specifies how the system performs a
certain function. In other words, a non-functional requirement describes how a system should
behave and what limits there are on its functionality.

It describes the user's needs when using the functionality. The user perceives the system as an
electronic tool that helps to automate what would otherwise be done manually and, from this
point of view, is concerned with how well the system operates.

Non-Functional Requirements:

a. ACCESS SECURITY

The website can only be accessed by persons who have the proper authorization
to access it.

b. ACCESSIBILITY

The website can be accessed from anywhere an internet connection is present,
and from any device, such as PCs, mobile phones and tablets.

c. AVAILABILITY

The camera will stay on 24 hours a day so that it can count every person entering
the room and switch the lights on or off.

d. EFFICIENCY

It should properly count the people in the room by counting every entering person
separately.

5.3. DIFFERENT MODULES:

MODULE 1:-

In the first module of the project, a proper head-counting algorithm should be implemented.
The algorithm should count the persons present in the room, in such a way that even if two
persons are standing close to each other it counts them separately and assigns each of them a
separate ID.

The algorithm should also be able to detect whether a person is entering or exiting the room.

MODULE 2:-

In this module, the Arduino hardware should be coded according to the requirements: if
there is no person present in the room, the lights should turn off, and if someone enters
the room, the lights should turn on.

In this module, the code to control the Arduino from Python should also be implemented,
so that serial output from the Python code can be sent to the Arduino to control it.
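A minimal sketch of the Python side is shown below. It assumes the pyserial package and an Arduino sketch that interprets the single bytes b"1" (lights on) and b"0" (lights off); the port name and the one-byte protocol are illustrative assumptions, not the module's final design.

```python
# Sketch of sending light commands from the Python counting code to the
# Arduino over serial. The one-byte protocol here is an assumed convention.

def command_for_count(count):
    """Encode the people count as the byte command the Arduino expects."""
    return b"1" if count > 0 else b"0"

def send_command(port, count):
    """Write the command for `count` to an open serial-port-like object."""
    port.write(command_for_count(count))

# Typical wiring with pyserial (not run here; the port name is an assumption):
#   import serial
#   with serial.Serial("/dev/ttyUSB0", 9600, timeout=1) as port:
#       send_command(port, people_count)
```

Keeping the encoding in its own function makes the decision logic testable without the hardware attached.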

MODULE 3:-

In this module, an alert system should be implemented. An alert siren should be sounded
if someone enters the room within a particular time period, say at night.
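The night-period check that arms the siren can be sketched as below. The 16:00-07:00 window follows the admin specification given later in this document; because it wraps past midnight, the comparison needs two cases.

```python
from datetime import time

# Sketch of the alert-window check. The 16:00-07:00 window (4 pm to 7 am)
# wraps past midnight, so the comparison has two cases.

NIGHT_START = time(16, 0)
NIGHT_END = time(7, 0)

def in_night_window(now, start=NIGHT_START, end=NIGHT_END):
    """True if `now` falls inside a window that may wrap past midnight."""
    if start <= end:                       # same-day window, e.g. 09:00-17:00
        return start <= now <= end
    return now >= start or now <= end      # wrapping window, e.g. 16:00-07:00

def should_sound_siren(person_detected, now):
    return person_detected and in_night_window(now)
```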

MODULE 4:-

In this module, a website will be made where a user can log in and watch the live video
from the room, along with the count of the people that have entered the room until then.

In this module, an alert notification will also be sent to the user if someone enters the
room within a particular period of time, say at night.

CHAPTER 6

SOFTWARE ENGINEERING
APPROACH

6. SOFTWARE ENGINEERING APPROACH

6.1. SOFTWARE ENGINEERING PARADIGM APPLIED:

• We have used the incremental model as the software engineering approach for our project.

6.1.1. DESCRIPTION:

The incremental process model is also known as the successive version model.

First, a simple working system implementing only a few basic features is built and then
delivered to the customer. Thereafter, many successive iterations/versions are
implemented and delivered to the customer until the desired system is realized.

Fig. 6.1

A, B and C are modules of the software product that are incrementally developed and delivered.

Life cycle activities –

The requirements of the software are first broken down into several modules that can be
incrementally constructed and delivered. At any time, the plan is made only for the next
increment, not as any kind of long-term plan; therefore, it is easier to modify a version
as per the needs of the customer. The development team first undertakes to develop the
core features of the system (those that do not need services from other features).
Once the core features are fully developed, they are refined to increase their level of
capability by adding new functions in successive versions. Each incremental version is
usually developed using an iterative waterfall model of development.

As each successive version of the software is constructed and delivered, the feedback of
the customer is taken and incorporated in the next version. Each version of the software
has more features than the previous ones.

Fig. 6.2

After requirements gathering and specification, the requirements are split into several
versions. Starting with version 1, in each successive increment the next version is
constructed and deployed at the customer site, until the last version (version n) is
deployed at the client site.

Types of Incremental model –

1. Staged Delivery Model – Only one part of the project is constructed at a time.

Fig. 6.3

2. Parallel Development Model – Different subsystems are developed at the same
time. This can decrease the calendar time needed for the development, i.e. TTM
(Time to Market), if enough resources are available.

When to use this –

1. When there is a funding schedule, risk, program complexity, or a need for early
realization of benefits.
2. When requirements are known up-front.
3. When projects have lengthy development schedules.
4. For projects involving new technology.

6.1.2. ADVANTAGES
• Error reduction (core modules are used by the customer from the beginning of
the phase and are then tested thoroughly).
• Uses divide and conquer for the breakdown of tasks.
• Lowers initial delivery cost.
• Incremental resource deployment.

6.1.3 DISADVANTAGES:
• Requires good planning and design.
• Total cost is not lower.
• Well-defined module interfaces are required.

6.2. REQUIREMENT ANALYSIS:

6.2.1 SOFTWARE REQUIREMENT SPECIFICATION:

A software requirement specification (SRS) is a technical specification of requirements for the
software product. The SRS represents an overview of the product and its features, and
summarizes the processing environments for development, operation and maintenance of the
product. The goal of the requirement specification phase is to produce the software specification
document, also called the requirement document.

Requirement Specification

This requirement specification must capture the system properties. Conceptually, every SRS
should have the following components:

• Functionality
• Performance
• Design constraints imposed on an implementation
• External interfaces

6.2.1.1 GLOSSARY:

PYTHON:

In technical terms, Python is an object-oriented, high-level programming language
with integrated dynamic semantics primarily for web and app development. It is
extremely attractive in the field of Rapid Application Development because it offers
dynamic typing and dynamic binding options.

Python is relatively simple, so it is easy to learn, and it uses a clean syntax that focuses on
readability. Developers can read and translate Python code much more easily than code in
other languages. In turn, this reduces the cost of program maintenance and development,
because it allows teams to work collaboratively without significant language and experience
barriers.

Additionally, Python supports the use of modules and packages, which means that
programs can be designed in a modular style and code can be reused across a variety
of projects. Once you've developed a module or package you need, it can be scaled for
use in other projects, and it's easy to import or export these modules.

One of the most promising benefits of Python is that both the standard library and the
interpreter are available free of charge, in both binary and source form. There is no
exclusivity either, as Python and all the necessary tools are available on all major
platforms. Therefore, it is an enticing option for developers who don't want to worry
about paying high development costs.

If this description of Python goes over your head, don't worry; you'll understand it soon
enough. What you need to take away from this section is that Python is a programming
language used to develop software on the web and in app form, including mobile. It's
relatively easy to learn, and the necessary tools are available to all free of charge.

That makes Python accessible to almost anyone. If you have the time to learn, you can
create some amazing things with the language.

CHOOSING PYTHON IN AN IoT-BASED PROJECT

It started as a scripting language to glue together real code, but it’s increasingly used
as the main language for many developers. When small devices have enough memory
and computational power, the developers are free to choose the language that makes
their life easier and that is more and more often turning out to be Python.

Kinman Covey, a microcontroller developer, says that Python is both easy to learn and
supported by a large, helpful community. The syntax is clean and simple, attracting a
greater range of programmers. The language is often the first choice for social scientists
and biologists, for instance. When they need a smart device in the lab, they're happy
to use a language they know, Python.

“Python is the language of choice for one of the most popular microcontrollers on the
market, the Raspberry Pi,” said Covey. Much of the training literature is written in
Python, and many schools use the platform to teach computer programming. If the
project is relatively simple and there are no great computational demands, it’s possible
to build effective tools from the same boards and libraries that are used in elementary
schools.

There are also versions designed to be even smaller. The MicroPython board and
software package is a small microcontroller optimized to run Python on a small board
that’s only a few square inches.

6.2.1.2 SUPPLEMENTARY SPECIFICATION:

Admin specification:

• Admin can view footage
• Admin has a user id and password to log in
• Admin also has control of the lights
• Admin gets an alert notification if activity is found between 4 pm and 7 am

6.2.1.3 USE CASE MODEL:

In software and systems engineering, a use case is a list of actions or event steps,
typically defining the interactions between a role (known in the Unified Modelling
Language as an actor) and a system, to achieve a goal. The actor can be a human or
other external system.

Use case analysis is an important and valuable requirement analysis technique that
has been widely used in modern software engineering since its formal introduction
by Ivar Jacobson in 1992. Use case driven development is a key characteristic of
many process models and frameworks such as ICONIX, the Unified Process (UP),
the IBM Rational Unified Process (RUP), and the Oracle Unified Method (OUM).

With its inherently iterative, incremental and evolutionary nature, the use case also fits
well with agile development. Use cases are not only texts, but also diagrams, if needed.
The purpose of a use case diagram is to capture the dynamic aspect of a system, but this
definition is too generic, because the other four diagrams (activity, sequence,
collaboration and state chart) have the same purpose. So we will look into the specific
purposes which distinguish it from the other four diagrams. Use case diagrams are used
to gather the requirements of a system, including internal and external influences. These
requirements are mostly design requirements. So when a system is analysed to gather its
functionality, use cases are prepared and actors are identified. When this initial task is
complete, use case diagrams are modelled to present the outside view.

So, in brief, the purposes of use case diagrams are as follows:

• Used to gather the requirements of a system.
• Used to get an outside view of a system.
• Identify the external and internal factors influencing the system.
• Show the interactions among the requirements and the actors.

Fig. 6.4

Conceptual Level class diagram:

In software engineering, a class diagram in the Unified Modelling Language
(UML) is a type of static structure diagram that describes the structure of a system
by showing the system's classes, their attributes, operations (or methods), and the
relationships among objects. The class diagram is the main building block of object-
oriented modelling. It is used both for general conceptual modelling of the
systematics of the application, and for detailed modelling translating the models
into programming code. Class diagrams can also be used for data modelling. The
classes in a class diagram represent both the main elements, interactions in the
application, and the classes to be programmed.

In the diagram, classes are represented with boxes that contain three compartments:

• The top compartment contains the name of the class. It is printed in bold
and centred, and the first letter is capitalized.
• The middle compartment contains the attributes of the class. They are
left-aligned and the first letter is lowercase.
• The bottom compartment contains the operations the class can execute.
They are also left-aligned and the first letter is lowercase.

In the design of a system, a number of classes are identified and grouped together
in a class diagram that helps to determine the static relations between them. With
detailed modelling, the classes of the conceptual design are often split into a number
of subclasses.

Conceptual Level Sequence diagram:

A Sequence diagram is an interaction diagram that shows how processes operate
with one another and in what order. It is a construct of a Message Sequence Chart.
A sequence diagram shows object interactions arranged in time sequence. It depicts
the objects and classes involved in the scenario and the sequence of messages
exchanged between the objects needed to carry out the functionality of the scenario.
Sequence diagrams are typically associated with use case realizations in the Logical
View of the system under development. Sequence diagrams are sometimes called
event diagrams or event scenarios. A sequence diagram shows, as parallel vertical
lines (lifelines), different processes or objects that live simultaneously, and, as
horizontal arrows, the messages exchanged between them, in the order in which
they occur. This allows the specification of simple runtime scenarios in a graphical
manner.

Fig. 6.5

6.2.2 Conceptual Level Activity diagram:

An activity diagram is a UML diagram that models the dynamic aspects of a system. It is a
simplification of the UML state chart diagram for modeling control flows in computational and
organizational processes. It allows you to represent a functional decomposition of a system
behavior. An activity diagram provides a complete specification of a behavior and not, like the
interaction diagrams, a single possible scenario.

The activity diagram gives a simplified representation of a process, showing control flows (called
transitions) between actions performed in the system (called activities). These flows represent the
internal behavior of a model element (use case, package, classifier or operation) from a start point
to several potential end points.

Fig. 6.6

Fig. 6.7

6.3. PLANNING MANAGERIAL ISSUES

6.3.1 PLANNING SCOPE

The first activity in software project planning is the determination of software scope.
Software scope describes the data and control to be processed, function, performance,
constraints, interfaces, and reliability. Functions described in the statement of scope are
evaluated and in some cases refined to provide more detail prior to the beginning of
estimation. Because both cost and schedule estimates are functionally oriented, some
degree of decomposition is often useful. Performance considerations encompass
processing and response time requirements. Constraints identify limits placed on the
software by external hardware, available memory, or other existing systems.

6.3.2 PROJECT RESOURCES

The second software planning task is estimation of the resources required to accomplish
the software development effort.
The development environment (hardware and software tools) sits at the foundation
of the resources pyramid and provides the infrastructure to support the development
effort. At a higher level, we encounter reusable software components: software
building blocks that can dramatically reduce development costs and accelerate delivery.
At the top of the pyramid is the primary resource: people.
Each resource is specified with four characteristics: a description of the resource, a
statement of availability, the time when the resource will be required, and the duration
for which the resource will be applied. The last two characteristics can be viewed as a
time window. The availability of the resource for a specified window must be established
at the earliest practical time.
• Human Resources
The planner begins by evaluating scope and selecting the skills required to complete
development. Both organizational positions (e.g., manager, senior software engineer)
and specialties (e.g., telecommunications, database, and client/server) are specified. For
relatively small projects (one person-year or less), a single individual may perform all
software engineering tasks, consulting with specialists as required. The number of
people required for a software project can be determined only after an estimate of
development effort (e.g., person-months) is made.
• Reusable Software Resources
Component-based software engineering (CBSE) emphasizes reusability, that is, the
creation and reuse of software building blocks. Such building blocks, often called
components, must be cataloged for easy reference, standardized for easy application,
and validated for easy integration.
• Off-the-shelf components: Existing software that can be acquired from a third
party or that has been developed internally for a past project.

 Full-experience components: Existing specifications, designs, code, or test data
developed for past projects that are similar to the software to be built for the
current project. Members of the current software team have had full experience
in the application area represented by these components. Therefore,
modifications required for full-experience components will be relatively
low-risk.
 Partial-experience components: Existing specifications, designs, code, or test
data developed for past projects that are related to the software to be built for
the current project but will require substantial modification. Members of the
current software team have only limited experience in the application area
represented by these components. Therefore, modifications required for
partial-experience components have a fair degree of risk.
 New components: Software components that must be built by the software team
specifically for the needs of the current project.
 Environmental Resources
The environment that supports the software project, often called the software
engineering environment (SEE), incorporates hardware and software. Hardware
provides a platform that supports the tools (software) required to produce the work
products that are an outcome of good software engineering practice. Because most
software organizations have multiple constituencies that require access to the SEE, a
project planner must prescribe the time window required for hardware and software
and verify that these resources will be available.
When a computer-based system (incorporating specialized hardware and software) is
to be engineered, the software team may require access to hardware elements being
developed by other engineering teams. For example, software for a numerical control
(NC) used on a class of machine tools may require a specific machine tool (e.g., an NC
lathe) as part of the validation test step; a software project for advanced page-layout
may need a digital-typesetting system at some point during development. Each
hardware element must be specified by the software project planner.

6.3.3 TEAM ORGANIZATION:

The following options are available for applying human resources to a project that will
require n people working for k years:
1. n individuals are assigned to m different functional tasks; relatively little combined
work occurs; coordination is the responsibility of a software manager who may
have six other projects to be concerned with.
2. n individuals are assigned to m different functional tasks (m < n) so that informal
"teams" are established; an ad hoc team leader may be appointed; coordination
among teams is the responsibility of a software manager.
3. n individuals are organized into t teams; each team is assigned one or more
functional tasks; each team has a specific structure that is defined for all teams
working on a project; coordination is controlled by both the team and a software
project manager.
Although it is possible to voice arguments for and against each of these approaches, a
growing body of evidence indicates that a formal team organization (option 3) is most
productive.
The “best” team structure depends on the management style of your organization, the
number of people who will populate the team and their skill levels, and the overall
problem difficulty.
How should a software team be organized?
Democratic decentralized (DD): This software engineering team has no permanent
leader. Rather, "task coordinators are appointed for short durations and then replaced
by others who may coordinate different tasks." Decisions on problems and approach
are made by group consensus. Communication among team members is horizontal.
Controlled decentralized (CD): This software engineering team has a defined leader
who coordinates specific tasks and secondary leaders that have responsibility for
subtasks. Problem solving remains a group activity, but implementation of solutions
is partitioned among subgroups by the team leader. Communication among
subgroups and individuals is horizontal. Vertical communication along the control
hierarchy also occurs.

Controlled centralized (CC): Top-level problem solving and internal team coordination
are managed by a team leader. Communication between the leader and team members
is vertical.
Seven project factors that should be considered when planning the structure of software
engineering teams:
• The difficulty of the problem to be solved.
• The size of the resultant program(s) in lines of code or function points.
• The time that the team will stay together (team lifetime).
• The degree to which the problem can be modularized.
• The required quality and reliability of the system to be built.
• The rigidity of the delivery date.
• The degree of sociability (communication) required for the project.
Because a centralized structure completes tasks faster, it is the most adept at handling
simple problems. Decentralized teams generate more and better solutions than
individuals. Therefore such teams have a greater probability of success when working
on difficult problems. Since the CD team is centralized for problem solving, either a
CD or CC team structure can be successfully applied to simple problems. A DD
structure is best for difficult problems.
Because the performance of a team is inversely proportional to the amount of
communication that must be conducted, very large projects are best addressed by teams
with a CC or CD structure when subgrouping can be easily accommodated. It has been
found that DD team structures result in high morale and job satisfaction and are
therefore good for teams that will be together for a long time. The DD team structure
is best applied to problems with relatively low modularity, because of the higher
volume of communication needed. When high modularity is possible (and people can
do their own thing), the CC or CD structure will work well.
It’s often better to have a few small, well-focused teams than a single large team.
CC and CD teams have been found to produce fewer defects than DD teams, but these
data have much to do with the specific quality assurance activities that are applied by
the team. Decentralized teams generally require more time to complete a project than
a centralized structure and at the same time are best when high sociability is required.
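These rules of thumb can be condensed into a rough selection heuristic. The function below is only an illustrative sketch of the guidance above; real structure choices also weigh management style and team skill levels:

```python
def suggest_team_structure(difficult, large, high_modularity, high_sociability):
    """Rough heuristic distilled from the guidance above (illustrative only)."""
    if large and high_modularity:
        return "CC or CD"   # large, modular projects: centralized control works well
    if difficult or high_sociability:
        return "DD"         # hard problems and high sociability favor decentralization
    return "CC or CD"       # simple problems suit centralized structures

# A difficult, highly social project with low modularity:
print(suggest_team_structure(difficult=True, large=False,
                             high_modularity=False, high_sociability=True))  # DD
```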

6.3.4 PROJECT SCHEDULING

Software project scheduling is an activity that distributes estimated effort across the
planned project duration by allocating the effort to specific software engineering tasks.
It is important to note, however, that the schedule evolves over time. During early
stages of project planning, a macroscopic schedule is developed. This type of schedule
identifies all major software engineering activities and the product functions to which
they are applied. As the project gets under way, each entry on the macroscopic schedule
is refined into a detailed schedule. Here, specific software tasks (required to accomplish
an activity) are identified and scheduled.
Scheduling for software engineering projects can be viewed from two rather different
perspectives. In the first, an end-date for release of a computer-based system has
already (and irrevocably) been established. The software organization is constrained
to distribute effort within the prescribed time frame. The second view of software
scheduling assumes that rough chronological bounds have been discussed but that the
end-date is set by the software engineering organization. Effort is distributed to make
best use of resources and an end-date is defined after careful analysis of the software.
Unfortunately, the first situation is encountered far more frequently than the second.

Basic principles for software project scheduling:


 Compartmentalization: The project must be compartmentalized into a number
of manageable activities and tasks. To accomplish compartmentalization, both
the product and the process are decomposed.
 Interdependency: The interdependency of each compartmentalized activity or
task must be determined. Some tasks must occur in sequence while others can
occur in parallel. Some activities cannot commence until the work product
produced by another is available. Other activities can occur independently.
 Time allocation: Each task to be scheduled must be allocated some number of
work units (e.g., person-days of effort). In addition, each task must be assigned
a start date and a completion date that are a function of the interdependencies
and whether work will be conducted on a full-time or part-time basis.

 Effort validation: Every project has a defined number of staff members. As time
allocation occurs, the project manager must ensure that no more than the
allocated number of people have been scheduled at any given time.
 Defined responsibilities: Every task that is scheduled should be assigned to a
specific team member.
 Defined outcomes: Every task that is scheduled should have a defined outcome.
For software projects, the outcome is normally a work product (e.g., the design
of a module) or a part of a work product. Work products are often combined in
deliverables.
 Defined milestones: Every task or group of tasks should be associated with a
project milestone. A milestone is accomplished when one or more work
products have been reviewed for quality and approved.
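A minimal sketch of two of these principles, assuming a hypothetical set of tasks: interdependency is honored by ordering tasks after their prerequisites, and effort validation checks the total allocated effort against the staff available:

```python
# Each task: (effort in person-days, list of prerequisite tasks).
# These tasks and figures are hypothetical.
tasks = {
    "design": (10, []),
    "code":   (15, ["design"]),
    "test":   (8,  ["code"]),
    "docs":   (5,  ["design"]),   # can proceed in parallel with "code"
}

def schedule_order(tasks):
    """Order tasks so every prerequisite comes first (interdependency).
    Assumes the dependency graph has no cycles."""
    done, order = set(), []
    while len(order) < len(tasks):
        for name, (_, deps) in tasks.items():
            if name not in done and all(d in done for d in deps):
                done.add(name)
                order.append(name)
    return order

def validate_effort(tasks, staff, days):
    """Effort validation: total allocated effort must fit the staff available."""
    total = sum(effort for effort, _ in tasks.values())
    return total <= staff * days

print(schedule_order(tasks))
print(validate_effort(tasks, staff=2, days=20))  # 38 person-days <= 40 -> True
```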

6.3.5 ESTIMATION

 The accuracy of a software project estimate is predicated on:


o The degree to which the planner has properly estimated the size (e.g., KLOC)
of the product to be built
o The ability to translate the size estimate into human effort, calendar time, and
money
o The degree to which the project plan reflects the abilities of the software team
o The stability of both the product requirements and the environment that
supports the software engineering effort

PROJECT ESTIMATION OPTIONS

 Options for achieving reliable cost and effort estimates


o Delay estimation until late in the project (we should be able to achieve 100%
accurate estimates after the project is complete)
o Base estimates on similar projects that have already been completed
o Use relatively simple decomposition techniques to generate project cost and
effort estimates

o Use one or more empirical estimation models for software cost and effort
estimation.
 Option #1 is not practical, but results in good numbers.
 Option #2 can work reasonably well, but it also relies on other project influences being
roughly equivalent
 Options #3 and #4 can be done in tandem to cross check each other.

PROJECT ESTIMATION APPROACHES

 Decomposition techniques

o These take a "divide and conquer" approach


o Cost and effort estimation are performed in a stepwise fashion by breaking
down a project into major functions and related software engineering
activities

 Empirical estimation models


o Offer a potentially valuable estimation approach if the historical data used to
seed the estimate is good
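One widely cited empirical model is basic COCOMO, which for "organic" (small, familiar) projects estimates effort and duration from size alone. The sketch below uses the standard organic-mode coefficients; any real use would calibrate the model against local historical data:

```python
def cocomo_basic_organic(kloc):
    """Basic COCOMO, organic mode.
    Returns (effort in person-months, duration in months)."""
    effort = 2.4 * kloc ** 1.05
    duration = 2.5 * effort ** 0.38
    return effort, duration

# A 10 KLOC organic project:
effort, duration = cocomo_basic_organic(10.0)
print(round(effort, 1), round(duration, 1))  # roughly 26.9 person-months, 8.7 months
```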

6.3.6 RISK ANALYSIS

Risk analysis and management are a series of steps that help a software team to understand
and manage uncertainty. Many problems can plague a software project. There is general
agreement that risk always involves two characteristics

Uncertainty—the risk may or may not happen; that is, there are no 100%
probable risks.
Loss—if the risk becomes a reality, unwanted consequences or losses will
occur.
When risks are analyzed, it is important to quantify the level of uncertainty and
the degree of loss associated with each risk. To accomplish this, different
categories of risks are considered.

 Project risks: If project risks become real, it is likely that the project
schedule will slip and that costs will increase. Project risks identify
potential budgetary, schedule, personnel (staffing and organization),
resource, customer, and requirements problems and their impact on a
software project.
 Technical risks: These threaten the quality and timeliness of the software to
be produced. If a technical risk becomes a reality, implementation may
become difficult or impossible. Technical risks identify potential
design, implementation, interface, verification, and maintenance
problems. Technical risks occur because the problem is harder to solve
than we thought it would be.
 Business risks: These threaten the viability of the software to be built.
Business risks often jeopardize the project or the product.
 Known risks are those that can be uncovered after careful evaluation of
the project plan, the business and technical environment in which the
project is being developed, and other reliable information sources (e.g.,
unrealistic delivery date, lack of documented requirements or software
scope, poor development environment).
 Predictable risks are extrapolated from past project experience (e.g.,
staff turnover, poor communication with the customer, dilution of staff
effort as ongoing maintenance requests are serviced).
 Unpredictable risks are the joker in the deck. They can and do occur,
but they are extremely difficult to identify in advance.
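Once a probability and a degree of loss are attached to each risk, risk exposure (probability times cost of loss) gives a simple way to rank them. The risks and figures below are hypothetical:

```python
def risk_exposure(probability, cost_of_loss):
    """Risk exposure RE = P x C: probability of the risk occurring times
    the cost (loss) if it does occur."""
    return probability * cost_of_loss

# Hypothetical risks: (name, probability, loss if it occurs).
risks = [
    ("staff turnover",            0.30, 20000.0),
    ("unrealistic delivery date", 0.50, 35000.0),
    ("camera hardware delay",     0.20, 8000.0),
]

# Rank risks by exposure so the highest-exposure ones get attention first.
ranked = sorted(risks, key=lambda r: risk_exposure(r[1], r[2]), reverse=True)
print([name for name, _, _ in ranked])
```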

There are a few well-known types of risk analysis that can be used [21]. In
software engineering, risk analysis is used to identify the high-risk elements of
a project. It provides ways of documenting the impact of risk mitigation
strategies. Risk analysis has also been shown to be important in the software
design phase to evaluate criticality of the system, where risks are analyzed
and necessary countermeasures are introduced. The purpose of risk analysis
is to understand risk better and to verify and correct attributes. A successful
analysis includes essential elements such as problem definition, problem
formulation, and data collection.

Risk Tree Analysis and Assessment Method:

In the risk tree analysis method, software risks are classified first, and risks are identified
in each group. Afterwards, the primary or basic risk events, intermediate events, top event,
and the necessary sub-trees are found. All of this requires that managers have complete
knowledge of the projects. Then the risk tree can be constructed. A likelihood and an
impact must be assigned to each event and failure, and the probabilities are calculated,
starting from the primary events up to the top event. The events are then ordered by
probability: the events with the highest probabilities are the most important, so it is
necessary to attend to them most closely. Managers should use solutions to prevent risks
from occurring or to reduce undesirable incidents.
The presented classifications and risk tree structures can be applied with software tools.
Fault Tree Creation and Analysis Program, Fault Tree Tool, or Relax Fault Tree can be used
for this analysis. These tools have facilities that help users create tree symbols and
construct the risk tree structures.
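Propagating probabilities from primary events up to the top event can be sketched for the two common gate types, assuming the primary events are independent:

```python
def or_gate(probabilities):
    """The event above an OR gate occurs if ANY input event occurs
    (independent events): P = 1 - product of (1 - p_i)."""
    p_none = 1.0
    for p in probabilities:
        p_none *= (1.0 - p)
    return 1.0 - p_none

def and_gate(probabilities):
    """The event above an AND gate occurs only if ALL input events occur:
    P = product of p_i."""
    result = 1.0
    for p in probabilities:
        result *= p
    return result

# Two primary events with probabilities 0.1 and 0.2:
print(round(or_gate([0.1, 0.2]), 2))   # 0.28
print(round(and_gate([0.1, 0.2]), 3))  # 0.02
```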

6.3.7 SECURITY PLAN

A security plan includes these steps:

1. Identify the assets you want to protect and the value of these assets.
2. Identify the risks to each asset.
3. Determine the category of the cause of the risk (natural disaster risk, intentional risk,
or unintentional risk).
4. Identify the methods, tools, or techniques the threats use.
After assessing your risk, the next step is proactive planning. Proactive planning
involves developing security policies and controls and implementing tools and
techniques to aid in security.

The various types of policies that could be included are:
 Password policies
o Administrative Responsibilities
o User Responsibilities
 E-mail policies
 Internet policies
 Backup and restore policies
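As an example of the password policies above, a minimal server-side check might enforce length and character classes. The specific rules here are illustrative assumptions, not requirements stated in this document:

```python
import re

def meets_password_policy(password, min_length=8):
    """Illustrative password policy: minimum length, plus at least one
    uppercase letter, one lowercase letter, and one digit."""
    return (len(password) >= min_length
            and re.search(r"[A-Z]", password) is not None
            and re.search(r"[a-z]", password) is not None
            and re.search(r"[0-9]", password) is not None)

print(meets_password_policy("Secret99"))   # True
print(meets_password_policy("password"))   # False: no uppercase, no digit
```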

6.4 DESIGN

6.4.1. DESIGN CONCEPT

The purpose of the design phase is to plan a solution to the problem specified by the
requirements document. This phase is the first step in moving from the problem domain
to the solution domain. In other words, starting with what is needed, design takes us
towards how to satisfy the needs. The design of a system is the most critical factor
affecting the quality of the software and has a major impact on testing and maintenance.
The output of this phase is the design document.

6.4.2. DESIGN TECHNIQUE

System Design
System design provides the understanding and procedural details necessary for
implementing the system recommended in the system study. Emphasis is on translating
the performance requirements into design specifications. The design phase is a transition
from a user-oriented document (system proposal) to a document oriented to the
programmers or database personnel.
System Design goes through two phases of development:

 Logical design
 Physical Design

A data flow diagram shows the logical flow of the system. For a system it describes the
input (source), output (destination), database (data stores) and procedures (data flows), all
in a format that meets the user's requirements. When analysts prepare the logical system
design, they specify the user needs at a level of detail that virtually determines the
information flow into and out of the system and the required data resources. The logical
design also specifies input forms and screen layouts.
The activities following logical design are the procedures followed in the physical design,
e.g., producing programs, software, files and a working system.
The logical design of an information system is analogous to an engineering blueprint of
an automobile. It shows the major features and how they are related to one another. The
detailed specification for the new system was drawn up on the basis of the user's
requirement data. The outputs, inputs and databases are designed in this phase. Output
design is one of the most important features of the information system. When the output
is not of good quality the user will be averse to using the newly designed system and may
not use the system. There are many types of output, all of which can be either highly
useful or can be critical to the users, depending on the manner and degree to which they
are used. Outputs from a computer system are required primarily to communicate the
results of processing to users. They are also used to provide a permanent hard copy of
these results for later consultation. The various types of outputs required can be listed as
below:

 External outputs, whose destination is outside the organization.
 Internal outputs, whose destination is within the organization.
 Operational outputs, whose use is purely within the computer department, e.g.,
program listings etc.
 Interactive outputs, which involve the user communicating directly with the
computer; for these it is particularly important to consider human factors when
designing computer outputs.
End users must find outputs easy to use and useful to their jobs; without quality output,
users may find the entire system unnecessary and avoid using it. The term "Output" in any
information system may apply to either printed or displayed information. During the
design of the output for this system, it was taken into consideration whether the
information is to be presented in the form of a query, a report, or documents, etc.
Other important factors that were taken into consideration are:

 The end user, who will use the output.
 The actual usage of the planned information.
 The information necessary for presentation, and when, how often, and in what
format the output is needed. While designing output for this system, the above
aspects of output design were taken into consideration.
Detailed Design

During detailed design the internal logic of each of the modules specified in the system
design is decided. In system design the focus is on identifying the modules, whereas during
detailed design the focus is on designing the logic for each of the modules. In other words,
in system design the attention is on what components are needed, while in detailed design
it is on how the components can be implemented in software. During this phase further
details of the data structures and algorithmic design of each module are usually specified
in a high-level design description language, which is independent of the target language in
which the software will eventually be implemented. Thus a design methodology is a
systematic approach to creating a design by application of a set of techniques and
guidelines.

6.4.3. MODELING

6.4.3.1. ER MODEL

The ER model defines the conceptual view of a database. It works around real-world
entities and the associations among them. At view level, the ER model is considered a
good option for designing databases.

Entity
An entity can be a real-world object, either animate or inanimate, that can be easily
identified. For example, in a school database, students, teachers, classes, and courses
offered can be considered as entities. All these entities have some attributes or properties
that give them their identity.

An entity set is a collection of similar types of entities. An entity set may contain entities
with attributes sharing similar values. For example, a Students set may contain all the
students of a school; likewise a Teachers set may contain all the teachers of a school from
all faculties. Entity sets need not be disjoint.

Attributes
Entities are represented by means of their properties, called attributes. All attributes have
values. For example, a student entity may have name, class, and age as attributes.

There exists a domain or range of values that can be assigned to attributes. For example, a
student's name cannot be a numeric value. It has to be alphabetic. A student's age cannot
be negative, etc.

Types of Attributes
 Simple attribute − Simple attributes are atomic values, which cannot be divided
further. For example, a student's phone number is an atomic value of 10 digits.

 Composite attribute − Composite attributes are made of more than one simple
attribute. For example, a student's complete name may have first_name and
last_name.

 Derived attribute − Derived attributes are the attributes that do not exist in the
physical database, but their values are derived from other attributes present in the
database. For example, average_salary in a department should not be saved directly
in the database; instead it can be derived. For another example, age can be derived
from date_of_birth.

 Single-value attribute − Single-value attributes contain a single value. For
example, Social_Security_Number.

 Multi-value attribute − Multi-value attributes may contain more than one value.
For example, a person can have more than one phone number, email_address, etc.

These attribute types can be combined, for example −

 simple single-valued attributes

 simple multi-valued attributes

 composite single-valued attributes

 composite multi-valued attributes
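The derived-attribute example above (age from date_of_birth) can be computed at query time rather than stored. A minimal sketch:

```python
from datetime import date

def derive_age(date_of_birth, today):
    """Derived attribute: age is computed from date_of_birth rather than
    being stored in the database."""
    age = today.year - date_of_birth.year
    # Subtract one if this year's birthday has not happened yet.
    if (today.month, today.day) < (date_of_birth.month, date_of_birth.day):
        age -= 1
    return age

print(derive_age(date(2000, 6, 15), date(2024, 3, 1)))  # 23 (birthday pending)
print(derive_age(date(2000, 6, 15), date(2024, 7, 1)))  # 24 (birthday passed)
```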

Entity-Set and Keys


A key is an attribute or a collection of attributes that uniquely identifies an entity within
an entity set.

For example, the roll_number of a student makes him/her identifiable among students.

 Super Key − A set of attributes (one or more) that collectively identifies an entity
in an entity set.

 Candidate Key − A minimal super key is called a candidate key. An entity set may
have more than one candidate key.

 Primary Key − A primary key is one of the candidate keys chosen by the database
designer to uniquely identify the entity set.
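These key definitions can be checked mechanically against sample data: a set of attributes is a super key if its value-tuples are unique across the entity set. The Students rows below are hypothetical:

```python
def is_super_key(rows, attributes):
    """A set of attributes is a super key if no two rows share the same
    values for all of those attributes."""
    seen = set()
    for row in rows:
        key = tuple(row[a] for a in attributes)
        if key in seen:
            return False
        seen.add(key)
    return True

# Hypothetical Students entity set:
students = [
    {"roll_number": 1, "name": "Asha", "class": "CS"},
    {"roll_number": 2, "name": "Ravi", "class": "CS"},
    {"roll_number": 3, "name": "Asha", "class": "EE"},
]

print(is_super_key(students, ["roll_number"]))  # True: unique per student
print(is_super_key(students, ["name"]))         # False: "Asha" repeats
```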

Relationship
The association among entities is called a relationship. For example, an
employee works_at a department, a student enrolls in a course. Here, Works_at and
Enrolls are called relationships.

Relationship Set
A set of relationships of similar type is called a relationship set. Like entities, a relationship
too can have attributes. These attributes are called descriptive attributes.

Degree of Relationship
The number of participating entities in a relationship defines the degree of the relationship.

 Binary = degree 2

 Ternary = degree 3

 n-ary = degree n

Mapping Cardinalities
Cardinality defines the number of entities in one entity set which can be associated with
a number of entities of the other set via a relationship set.

 One-to-one − One entity from entity set A can be associated with at most one entity
of entity set B and vice versa.

Fig. 6.8

 One-to-many − One entity from entity set A can be associated with more than one
entity of entity set B; however, an entity from entity set B can be associated with
at most one entity of entity set A.

Fig. 6.9

 Many-to-one − More than one entity from entity set A can be associated with at
most one entity of entity set B; however, an entity from entity set B can be
associated with more than one entity from entity set A.

Fig. 6.10

 Many-to-many − One entity from A can be associated with more than one entity
from B and vice versa.

Fig. 6.11

6.4.3.2 DATA FLOW DIAGRAM MODEL:

A data flow diagram shows the way information flows through a process or system. It
includes data inputs and outputs, data stores, and the various subprocesses the data moves
through. DFDs are built using standardized symbols and notation to describe various
entities and their relationships.

Data flow diagrams visually represent systems and processes that would be hard to describe
in a chunk of text. You can use these diagrams to map out an existing system and make it
better or to plan out a new system for implementation. Visualizing each element makes it
easy to identify inefficiencies and produce the best possible system.


Physical and logical data flow diagrams

Before actually creating your data flow diagram, you’ll need to determine whether a
physical or logical DFD best suits your needs. If you’re new to data flow diagrams, don’t
worry—the distinction is pretty straightforward.

Logical data flow diagrams focus on what happens in a particular information flow: what
information is being transmitted, what entities are receiving that info, what general
processes occur, etc. The processes described in a logical DFD are business activities—a
logical DFD doesn't delve into the technical aspects of a process or system. Non-technical
employees should be able to understand these diagrams.

Physical data flow diagrams focus on how things happen in an information flow. These
diagrams specify the software, hardware, files, and people involved in an information flow.
A detailed physical data flow diagram can facilitate the development of the code needed to
implement a data system.

Both physical and logical data flow diagrams can describe the same information flow. In
coordination they provide more detail than either diagram would independently. As you
decide which to use, keep in mind that you may need both.


Data flow diagram levels

Data flow diagrams are also categorized by level. Starting with the most basic, level 0,
DFDs get increasingly complex as the level increases. As you build your own data flow
diagram, you will need to decide which level your diagram will be.

6.4.3.2.1 Level 0 DFD

Level 0 DFDs, also known as context diagrams, are the most basic data flow diagrams.
They provide a broad view that is easily digestible but offers little detail. Level 0 data
flow diagrams show a single process node and its connections to external entities.

6.4.3.2.2 Level 1 DFD

Level 1 DFDs are still a general overview, but they go into more detail than a context
diagram. In a level 1 data flow diagram, the single process node from the context diagram
is broken down into subprocesses. As these processes are added, the diagram will need
additional data flows and data stores to link them together.

Fig. 6.12

6.4.3.3 ACTIVITY DIAGRAM

An activity diagram is a UML diagram that models the dynamic aspects of a system. It is
a simplification of the UML state chart diagram for modeling control flows in
computational and organizational processes. It allows you to represent a functional
decomposition of system behavior. An activity diagram provides a complete specification
of a behavior and not, like the interaction diagrams, a single possible scenario.

The activity diagram gives a simplified representation of a process, showing control flows
(called transitions) between actions performed in the system (called activities). These flows
represent the internal behavior of a model element (use case, package, classifier or
operation) from a start point to several potential end points.

Fig. 6.13

Fig. 6.14

6.4.3.4 Software Architecture

The architecture of a system describes its major components, their relationships
(structures), and how they interact with each other. Software architecture and design
include several contributory factors such as business strategy, quality attributes, human
dynamics, design, and IT environment.

Fig. 6.15

We can segregate Software Architecture and Design into two distinct phases: Software
Architecture and Software Design. In architecture, nonfunctional decisions are cast and
separated from the functional requirements. In design, functional requirements are
accomplished.

Software Architecture
Architecture serves as a blueprint for a system. It provides an abstraction to manage the
system complexity and establish a communication and coordination mechanism among
components.

 It defines a structured solution to meet all the technical and operational
requirements, while optimizing the common quality attributes like performance
and security.

 Further, it involves a set of significant decisions about the organization related to
software development, and each of these decisions can have a considerable impact
on quality, maintainability, performance, and the overall success of the final
product. These decisions comprise −

o Selection of structural elements and their interfaces by which the system is
composed.

o Behavior as specified in collaborations among those elements.

o Composition of these structural and behavioral elements into larger
subsystems.

o Architectural decisions aligned with business objectives.

o Architectural styles that guide the organization.

Software Design
Software design provides a design plan that describes the elements of a system, how they
fit together, and how they work together to fulfill the requirements of the system. The
objectives of having a design plan are as follows −

 To negotiate system requirements, and to set expectations with customers,


marketing, and management personnel.

 Act as a blueprint during the development process.

 Guide the implementation tasks, including detailed design, coding, integration, and
testing.

It comes before the detailed design, coding, integration, and testing and after the domain
analysis, requirements analysis, and risk analysis.

Fig. 6.16

Goals of Architecture
The primary goal of the architecture is to identify requirements that affect the structure of
the application. A well-designed architecture reduces the business risks associated with
building a technical solution and builds a bridge between business and technical
requirements.

Some of the other goals are as follows −

 Expose the structure of the system, but hide its implementation details.

 Realize all the use-cases and scenarios.

 Try to address the requirements of various stakeholders.

 Handle both functional and quality requirements.

 Reduce the cost of ownership and improve the organization’s market position.

 Improve quality and functionality offered by the system.

 Improve external confidence in either the organization or system.

Limitations
Software architecture is still an emerging discipline within software engineering. It has
the following limitations −

 Lack of tools and standardized ways to represent architecture.

 Lack of analysis methods to predict whether the architecture will result in an implementation that meets the requirements.

 Lack of awareness of the importance of architectural design to software development.

 Lack of understanding of the role of the software architect and poor communication among stakeholders.

 Lack of understanding of the design process, design experience, and evaluation of design.

Role of Software Architect


A software architect provides the design solution for the entire application, which the technical team then creates. A software architect should have expertise in the following areas −

Design Expertise
 Expert in software design, including diverse methods and approaches such as
object-oriented design, event-driven design, etc.

 Lead the development team and coordinate the development efforts for the
integrity of the design.

 Should be able to review design proposals and make tradeoffs among them.

Domain Expertise
 Expert on the system being developed and plan for software evolution.

 Assist in the requirement investigation process, assuring completeness and consistency.

 Coordinate the definition of domain model for the system being developed.

Technology Expertise
 Expert on the available technologies that help in the implementation of the system.

 Coordinate the selection of programming language, framework, platforms,
databases, etc.

Methodological Expertise
 Expert on software development methodologies that may be adopted during SDLC
(Software Development Life Cycle).

 Choose the appropriate approaches for development that help the entire team.

Hidden Role of Software Architect


 Facilitates the technical work among team members and reinforces the trust relationship in the team.

 Information specialist who shares knowledge and has vast experience.

 Protect the team members from external forces that would distract them and bring
less value to the project.

Deliverables of the Architect


 A clear, complete, consistent, and achievable set of functional goals

 A functional description of the system, with at least two layers of decomposition

 A concept for the system

 A design in the form of the system, with at least two layers of decomposition

 A notion of the timing, operator attributes, and the implementation and operation
plans

 A document or process which ensures that functional decomposition is followed and the form of interfaces is controlled

Quality Attributes
Quality is a measure of excellence or the state of being free from deficiencies or defects.
Quality attributes are the system properties that are separate from the functionality of the
system.

Implementing quality attributes makes it easier to differentiate a good system from a bad
one. Attributes are overall factors that affect runtime behavior, system design, and user
experience.

They can be classified as −

Static Quality Attributes


These reflect the structure of a system and its organization, and are directly related to architecture, design, and source code. They are invisible to the end user but affect the development and maintenance cost, e.g. modularity, testability, maintainability, etc.

Dynamic Quality Attributes


Reflect the behavior of the system during its execution. They are directly related to
system’s architecture, design, source code, configuration, deployment parameters,
environment, and platform.

They are visible to the end-user and exist at runtime, e.g. throughput, robustness,
scalability, etc.

Quality Scenarios
Quality scenarios specify how to prevent a fault from becoming a failure. They can be
divided into six parts based on their attribute specifications −

 Source − An internal or external entity, such as people, hardware, software, or physical infrastructure, that generates the stimulus.

 Stimulus − A condition that needs to be considered when it arrives at the system.

 Environment − The conditions under which the stimulus occurs.

 Artifact − The whole system or some part of it, such as processors, communication channels, persistent storage, or processes.

 Response − An activity undertaken after the arrival of the stimulus, such as detecting faults, recovering from a fault, or disabling the event source.

 Response measure − The responses that occur should be measured so that the requirements can be tested.
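The six parts above map naturally onto a small data structure. The sketch below is illustrative only: the class name, fields, and the example availability scenario for the camera system are assumptions, not part of this report.

```python
from dataclasses import dataclass

# Hypothetical sketch: the six parts of a quality scenario as one record.
@dataclass
class QualityScenario:
    source: str            # entity that generates the stimulus
    stimulus: str          # condition arriving at the system
    environment: str       # conditions under which the stimulus occurs
    artifact: str          # the system, or the part of it, being stimulated
    response: str          # activity undertaken after the stimulus arrives
    response_measure: str  # how the response is measured so it can be tested

# Example (assumed): an availability scenario for the camera-sensing system.
camera_failure = QualityScenario(
    source="hardware",
    stimulus="camera feed stops",
    environment="normal operation",
    artifact="head-counting module",
    response="log the fault and alert the administrator",
    response_measure="alert raised within 30 seconds",
)

print(camera_failure.response_measure)  # alert raised within 30 seconds
```

Writing scenarios in this form makes each of the six attribute specifications explicit and testable.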

6.5 IMPLEMENTATION PHASE:

6.5.1 Language used characteristics

PYTHON

Python is a high-level, interpreted, interactive and object-oriented scripting language. Python is designed to be highly readable. It frequently uses English keywords where other languages use punctuation, and it has fewer syntactical constructions than other languages.

 Python is Interpreted − Python is processed at runtime by the interpreter. You do not need to compile your program before executing it. This is similar to PERL and PHP.

 Python is Interactive − You can actually sit at a Python prompt and interact with the
interpreter directly to write your programs.

 Python is Object-Oriented − Python supports an object-oriented style or technique of programming that encapsulates code within objects.

 Python is a Beginner's Language − Python is a great language for beginner-level programmers and supports the development of a wide range of applications, from simple text processing to WWW browsers to games.

Python's features include −

 Easy-to-learn − Python has few keywords, simple structure, and a clearly defined
syntax. This allows the student to pick up the language quickly.

 Easy-to-read − Python code is more clearly defined and visible to the eyes.

 Easy-to-maintain − Python's source code is fairly easy-to-maintain.

 A broad standard library − The bulk of Python's library is very portable and cross-platform compatible on UNIX, Windows, and Macintosh.

 Interactive Mode − Python has support for an interactive mode which allows
interactive testing and debugging of snippets of code.

 Portable − Python can run on a wide variety of hardware platforms and has the same
interface on all platforms.

6.5.2. CODING:

6.5.2.1 CODE EFFICIENCY:

Code efficiency is an important aspect of the smooth working of code. There are various factors which bear on code efficiency. Basically, we need to answer three questions to estimate code efficiency:

 Have functions been optimized for speed?


 Have repeatedly used blocks of code been formed into subroutines?
 Are there memory leaks or overflow errors?

To optimize for speed, we have not imported complete libraries; rather, we have imported the individual classes we need to fulfill our requirements. For instance, we haven't used the math library for simple arithmetic calculations. Also, we have used low-resolution icons and .png images so that our project can run smoothly.

There were many occasions where we needed to use the same piece of code repeatedly (like the JSONParser.class file). Instead of writing the same code again and again, we reused its object wherever required.

To prevent overflow errors and memory leaks, we have restricted the user to entering data of a specific size. In the database, the size of each datatype is predefined. This does not let memory leaks and overflow errors occur.
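As a sketch of the two practices above, selective imports and reusing one shared object, the snippet below uses the standard library's `json.JSONDecoder` to stand in for the report's `JSONParser` class; the record format and function names are assumptions made for illustration.

```python
# Selective import: bring in only the name we need, not the whole package.
from collections import Counter

import json

# Created once and shared by all callers, instead of re-created per call
# (the same idea as reusing one JSONParser object in the report).
_decoder = json.JSONDecoder()

def parse_record(raw):
    """Decode one JSON record using the shared decoder instance."""
    return _decoder.decode(raw)

def count_rooms(records):
    """Count how many readings were taken in each room."""
    return Counter(parse_record(r)["room"] for r in records)

readings = ['{"room": "A", "count": 3}', '{"room": "A", "count": 0}',
            '{"room": "B", "count": 5}']
print(count_rooms(readings))  # Counter({'A': 2, 'B': 1})
```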

6.5.2.2 Optimization of code

Code optimization is one of the most important aspects of efficiency measurement. Optimization of code is defined as how efficiently the code can run with the fewest possible resources. Here are some of the optimization practices that we have followed in our project:

 Avoid constant expressions in loops
 Avoid duplication of code
 Do not declare members accessed by an inner class private
 Avoid the synchronized modifier on methods
 Avoid empty if statements
 Avoid unnecessary if statements
 Avoid unnecessary parentheses
 Avoid unnecessarily implementing the Cloneable interface
 Remove unnecessary if-then-else statements
 Avoid instantiation of classes with only static members
 Close JDBC connections
 Avoid boolean arrays
 Avoid string concatenation in loops
 Place try-catch out of loops
 Avoid empty try blocks
 Avoid empty loops
 Avoid unnecessary substring calls
 Avoid unnecessary exception throwing
 Use PreparedStatement instead of Statement
 Avoid extending java.lang.Object
 Avoid empty catch blocks
 Avoid synchronized methods in loops
 Avoid synchronized blocks in loops
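Several of these practices carry over directly to Python. The sketch below illustrates three of them: avoiding string concatenation in a loop, keeping the try block outside the loop, and using parameterized statements (Python's analogue of PreparedStatement) with an explicitly closed connection. The table schema and data are made up for the example.

```python
import sqlite3

def make_report(rooms):
    """Build the report with one join instead of `s += ...` in a loop."""
    lines = [f"{name}: {count}" for name, count in rooms]
    return "\n".join(lines)

def store_counts(rooms):
    """Insert readings with a parameterized statement, then always
    close the connection (the JDBC advice, in sqlite3 form)."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE counts (room TEXT, headcount INTEGER)")
    try:  # the try block wraps the whole batch, not each iteration
        conn.executemany("INSERT INTO counts VALUES (?, ?)", rooms)
        conn.commit()
    finally:
        conn.close()

rooms = [("classroom", 12), ("office", 0)]
print(make_report(rooms))
store_counts(rooms)
```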

6.6 TESTING:

Testing is the major quality control that can be used during software development. Its basic function is to detect errors in the software. During requirements analysis and design, the output is a document that is usually textual and non-executable. After the coding phase, a computer program is available that can be executed for testing purposes. This implies that testing not only has to uncover errors introduced during coding, but also errors introduced during previous phases. Thus the goal of testing is to uncover requirements, design, and coding errors in the program.

An elaborate set of test data is prepared and the system is tested using that test data. Errors are noted and corrections are made during testing; the corrections are also noted for future use. The users are trained to operate the developed system. Both hardware and software safeguards are put in place so that the developed system runs successfully in the future. System testing is the stage of implementation aimed at ensuring that the system works accurately before live operation commences. Testing is vital to the success of any system. System testing makes a logical assumption that if all the parts of the system are correct, the goal will be successfully achieved.

6.6.1 TESTING OBJECTIVES

 Testing is a process of executing a program with the intent of finding an error


 A good test case is one that has a high probability of finding an undiscovered error
 A successful test is one that uncovers an as-yet undiscovered error

6.6.1.1 Testing Principles

 All tests should be traceable to customer requirements


 Tests should be planned long before testing begins
 Testing should begin “in the small” and progress toward testing “in the large”
 Exhaustive testing is not completely possible
 To be most effective, testing should be conducted by an independent third party

6.6.2 TESTING METHODS

6.6.2.1 Software Testing Strategies

A strategy for software testing integrates software test-case design methods into a well-planned series of steps that result in the successful construction of software. Just as important, a software testing strategy provides a road map. Testing is a set of activities that can be planned in advance and conducted systematically.

Various strategies are given below:


 Unit Testing
 Integration Testing
 Validation Testing
 User Acceptance Testing
 System Testing

Unit Testing

Unit testing focuses verification efforts on the smallest unit of software design, the module. This is also known as "module testing". In this testing step, each module was found to be working satisfactorily with regard to the expected output from the module, and the suggested changes were incorporated into the system. Each module of the system has been tested in this way.
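A unit test in this project might look like the sketch below. The `decide_lights` function is a hypothetical stand-in for the light-switching module, not the project's actual code; the test checks its expected output for an empty and an occupied room.

```python
import unittest

def decide_lights(headcount):
    """Hypothetical unit under test: lights follow the headcount."""
    return "ON" if headcount > 0 else "OFF"

class TestDecideLights(unittest.TestCase):
    def test_empty_room_turns_lights_off(self):
        self.assertEqual(decide_lights(0), "OFF")

    def test_occupied_room_turns_lights_on(self):
        self.assertEqual(decide_lights(3), "ON")

# Run the tests programmatically (unittest.main() would call sys.exit).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestDecideLights)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```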

Integration Testing

After the package was integrated, the user test version of the software was released. This testing consisted of testing with live data and various stress tests, and the results were noted down. Corrections were then made based on the users' feedback. Integration testing is a systematic technique for constructing the program structure while at the same time conducting tests to uncover errors associated with the interface. The objective is to take unit-tested modules and build a program structure. All the modules are combined and tested as a whole. Here, correction is difficult because the vast expanse of the entire program complicates the isolation of causes. Thus, in the integration testing step, all the errors uncovered are corrected before the next steps.

Validation Testing

At the culmination of integration testing, the software is completely assembled as a package; interfacing errors have been uncovered and corrected, and a final series of software tests − validation testing − may begin.

User Acceptance Testing

User acceptance of a system is the key factor for the success of any system. The system under consideration was tested for user acceptance by constantly keeping in touch with prospective system users at the time of development and making changes wherever required.

This is done in regard to the following points:

 Input Screen Design


 On-line Messages to guide the user
 Format of reports and other outputs

After performing all the above tests, the system was found to be running successfully according to the user requirements, i.e., the constraints.

System Testing

Software is only one element of a larger computer-based system.

Ultimately, software is incorporated with other system elements and a series of system
integration and validation tests are conducted. The various types of system testing are:

 Recovery Testing: Many computer-based systems must recover from faults and resume processing within a pre-specified time.
 Security Testing: Security testing attempts to verify that protection
mechanisms built into a system will in fact protect it from improper penetration.
 Stress Testing: Stress tests are designed to confront programs with abnormal
situations.
 Performance Testing: Performance testing is designed to test run-time
performance of software within the context of an integrated system.

Black Box Testing

Black box testing is carried out to check the functionality of the various modules.
Although they are designed to uncover errors, black-box tests are used to demonstrate
that software functions are operational; that input is properly accepted and output is
correctly produced; and that the integrity of external information is maintained. A
black-box test examines some fundamental aspect of the system with little regard for
the internal logical structure of the software.

White Box Testing

White-box testing of software is predicated on close examination of procedural detail: logical paths through the software are tested by providing test cases that exercise specific sets of conditions and loops. White-box testing, sometimes called glass-box testing, is a test-case design method that uses the control structure of the procedural design to derive test cases. Using white-box testing methods, the following test cases can be derived.

 Guarantee that all independent paths within a module have been exercised at least once.
 Exercise all logical decisions on their true and false sides.
 Execute all loops at their boundaries and within their operational bounds.
 Exercise internal data structures to assure their validity.

The errors that can be encountered while conducting white-box testing are logic errors, incorrect assumptions, and typographical errors.
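The sketch below shows how such test cases can be derived from control structure rather than from the specification alone. The function is a hypothetical example; the inputs are chosen to execute the loop at its boundaries and the decision on both its true and false sides.

```python
# Hypothetical unit: average of the nonzero headcount samples.
def average_occupancy(samples):
    total, n = 0, 0
    for s in samples:      # loop: exercised with zero, one, and many items
        if s > 0:          # decision: exercised on both true and false sides
            total += s
            n += 1
    return total / n if n else 0.0

# White-box test cases derived from the structure above:
assert average_occupancy([]) == 0.0         # loop body never executes
assert average_occupancy([0, 0]) == 0.0     # decision always false
assert average_occupancy([4]) == 4.0        # loop runs once, decision true
assert average_occupancy([2, 0, 4]) == 3.0  # both branches exercised
```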

TEST CASES:

Test Case ID | Test Case Scenario | Test Step | Test Data | Expected Result
1 | Camera is properly counting people or not | Switch on the camera | A room where no one is inside | Shows zero headcount
2 | Camera is properly counting people or not | Switch on the camera | A room where someone is inside | Shows correct headcount
3 | The system turns off the light in case of no one | Switch on the camera | A room where no one is inside | Turns off lights
4 | The system turns on the light in case of someone | Switch on the camera | A room where someone is inside | Turns on lights
5 | The camera blows a siren on any unwanted activity | 1. Set the time for surveillance 2. Switch on the camera | A room where someone is inside | Siren blows
6 | The camera blows a siren on any unwanted activity | 1. Set the time for surveillance 2. Switch on the camera | A room where no one is inside | Siren does not blow
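Test cases 5 and 6 imply a siren rule along the lines of the sketch below; the function name, signature, and surveillance window are assumptions made for illustration, not the project's actual implementation.

```python
from datetime import time

def should_sound_siren(now, start, end, motion_detected):
    """Siren sounds only if motion occurs inside the surveillance window."""
    in_window = start <= now <= end
    return in_window and motion_detected

# Surveillance window set for the night shift (test step 1, assumed values).
start, end = time(22, 0), time(23, 59)
print(should_sound_siren(time(23, 0), start, end, True))   # True: siren blows
print(should_sound_siren(time(23, 0), start, end, False))  # False: no motion
print(should_sound_siren(time(9, 0), start, end, True))    # False: outside window
```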

CHAPTER 7

Conclusion & Discussion

7. CONCLUSION:

We are facing the problem of electricity monitoring: unnecessary wastage of electricity is growing day by day, so to control this problem we have tried our best to prepare an automated system which can do headcount surveillance and monitor electricity as well. We have successfully implemented the project, but a lot of work remains to be done to meet the future scope and to create an advanced version of it.

7.1.1 LIMITATIONS OF PROJECT:

 The camera angle should be such that it covers the whole room.
 The camera quality should be good enough for proper detection.

7.1.2 DIFFICULTIES ENCOUNTERED:

 Unrealistic schedule − if too much work is crammed into too little time, problems are inevitable.
 Inadequate testing − no one will know whether or not the program is any good until customers complain or the system fails.
 Feature creep − requests to pile on new features after development is underway; extremely common.
 Miscommunication − if developers do not know what is needed, or customers have wrong expectations, problems are assured.

7.1.3 FUTURE ENHANCEMENT SUGGESTIONS:

Implementation of our project has definitely added a lot of new features to the existing camera system, like head counting, notification alerts, and switching off the power. The project does have scope for some enhancements. The picture quality of the recorded video could be made better by using an advanced camera system, and many new algorithms could be implemented to improve the accuracy of people counting. Industry and government mandates are regulating technologies, leading to accepted standards across industries and allowing for interoperability among devices. Additionally, the cost and size of devices continue to decrease.

CHAPTER 8

Bibliography & References

8. BIBLIOGRAPHY AND REFERENCES:

8.1.1 Reference Books:

1. A Whirlwind Tour of Python Author: Jake VanderPlas

2. 20 Python Libraries You Aren't Using Author: Caleb Hattingh

3. Python in Education Author: Nicholas Tollervey.

4. Getting started with Internet of Things

8.1.2 Websites:

1. https://www.geeksforgeeks.org/python-programming-language/

2. https://www.tutorialspoint.com/python/

3. https://www.tutorialspoint.com/arduino/

4. https://www.anaconda.com/distribution/

5. https://anaconda.org/

