
International Journal Of Engineering And Computer Science ISSN:2319-7242

Volume 2 Issue 3 March 2013 Page No. 623-632

K.Ramesh, IJECS Volume 2 Issue 3 March 2013 Page No. 623-632 Page 623

Literature Survey on Algorithmic and Non-Algorithmic Models for Software Development Effort Estimation

K. Ramesh (1), P. Karunanidhi (2)

(1) Assistant Professor, V.R.S College of Engineering and Technology
Email: rameshvrscet@gmail.com

(2) Professor, V.R.S College of Engineering and Technology
Email: Karunanidhi_p1971@yahoo.co.in

Abstract: The effort invested in a software project is probably one of the most important and most analyzed variables of recent years in the process of project management. Soft computing is a consortium of methodologies centering on fuzzy logic, artificial neural networks, and evolutionary computation. It is important to mention here that these methodologies are complementary and synergistic rather than competitive. They provide, in one form or another, flexible information processing capability for handling real-life ambiguous situations. These methodologies are currently used for reliable and accurate estimation of software development effort. The aim of this study is to analyze the algorithmic and non-algorithmic models among existing models and to provide an in-depth review of the software and project estimation techniques existing in industry and the literature, based on different test datasets, along with their advantages and disadvantages.

Keywords: Expert Judgement, Function Point Analysis, Analogy, Effort Estimation, Fuzzy Logic, Genetic Programming, Particle Swarm Optimization, MMRE, Neural Networks.

Introduction

Software development effort estimation is one of the most important activities in software project management. A number of models have been proposed to construct a relationship between software size and effort; however, there are many problems. This is because project data available in the initial stages of a project are often incomplete, inconsistent, uncertain, and unclear [20]. Effort estimates may be used as input to project plans, iteration plans, budgets, investment analyses, and pricing processes, so it becomes very important to get accurate estimates. Software effort prediction models fall into two main categories: algorithmic and non-algorithmic. The most popular algorithmic estimation models include Boehm's COCOMO [8], Putnam's SLIM [14], and Albrecht's Function Points [5]. These models require as inputs accurate estimates of certain attributes, such as lines of code (LOC) and complexity, which are difficult to obtain during the early stage of a software development project. The models also have difficulty modelling the inherent complex relationships between the contributing factors, are unable to handle categorical data, and lack reasoning capabilities [6]. The limitations of algorithmic models led to the exploration of non-algorithmic techniques, which are soft computing based. These include artificial neural networks, evolutionary computation, fuzzy logic models, case-based reasoning, combinational models, and so on. This paper focuses on the outcomes of applying non-algorithmic models to software effort estimation in order to predict the best method of estimation.

Literature Review:

It has been surveyed that nearly one-third of projects overrun their budget and are delivered late, and that two-thirds of all major projects substantially overrun their original


estimates. The accurate prediction of software development costs is a critical issue for making good management decisions and for accurately determining how much effort and time a project requires, both for project managers and for system analysts and developers. Without a reasonably accurate cost estimation capability, project managers cannot determine how much time and manpower the project should take, which means the software portion of the project is out of control from its beginning; system analysts cannot make realistic hardware-software trade-off analyses during the system design phase; and software project personnel cannot tell managers and customers that their proposed budget and schedule are unrealistic. This may lead to optimistic over-promising on software development, with inevitable overruns and performance compromises as a consequence. In practice, huge overruns resulting from inaccurate estimates are believed to occur frequently.

The overall process of developing a cost estimate for

software is not different from the process for estimating

any other element of cost. There are, however, aspects

of the process that are peculiar to software estimating.

Some of the unique aspects of software estimating are

driven by the nature of software as a product. Other

problems are created by the nature of the estimating

methodologies. Software cost estimation is a continuing

activity which starts at the proposal stage and continues

through the lifetime of a project. Continual cost estimation ensures that spending stays in line with the budget.

It is very difficult to estimate the cost of software

development. Many of the problems that plague the

development effort itself are responsible for the

difficulty encountered in estimating that effort. One of

the first steps in any estimate is to understand and

define the system to be estimated. Software, however, is

intangible, invisible, and intractable. It is inherently

more difficult to understand and estimate a product or

process that cannot be seen and touched. Software

grows and changes as it is written. When hardware

design has been inadequate, or when hardware fails to

perform as expected, the "solution" is often attempted

through changes to the software. This change may occur

late in the development process, and sometimes results

in unanticipated software growth.

After 20 years of research, there are many software cost

estimation methods available including algorithmic

methods, estimating by analogy, expert judgment

method, price to win method, top-down method, and

bottom-up method. No one method is necessarily better

or worse than the other, in fact, their strengths and

weaknesses are often complementary to each other. To

understand their strengths and weaknesses is very

important when you want to estimate your projects.

Algorithmic methods

The algorithmic method is designed to provide some

mathematical equations to perform software estimation.

These mathematical equations are based on research

and historical data and use inputs such as Source Lines

of Code (SLOC), number of functions to perform, and

other cost drivers such as language, design

methodology, skill-levels, risk assessments, etc. The

algorithmic methods have been studied extensively, and many models have been developed, such as the COCOMO models [4], the Putnam model [5], and function-point-based models [10].

3.1 Expert Judgment Method

Expert judgment techniques involve consulting a software cost estimation expert, or a group of experts, to use their experience and understanding of the proposed project to arrive at an estimate of its cost. Generally speaking, a group consensus technique such as the Delphi technique is the best approach. Its strengths and weaknesses are complementary to the strengths and weaknesses of the algorithmic method. To provide a sufficiently broad communication bandwidth for the experts to exchange the volume of information necessary to calibrate their estimates with those of the other experts, the wideband Delphi technique was introduced over the standard Delphi technique. The estimating steps of this method are:

1. The coordinator presents each expert with a specification and an estimation form.

2. The coordinator calls a group meeting in which the experts discuss estimation issues with the coordinator and each other.

3. Experts fill out forms anonymously.

4. The coordinator prepares and distributes a summary of the estimates on an iteration form.

5. The coordinator calls a group meeting, focusing in particular on having the experts discuss points where their estimates vary widely.

6. Experts fill out forms, again anonymously, and steps 4 to 6 are iterated for as many rounds as appropriate.

The wideband Delphi technique has subsequently been used in a number of studies and cost estimation activities. It has been highly successful in combining the free-discussion advantages of the group meeting technique with the anonymous-estimation advantage of the standard Delphi technique. The advantages of this method are:

The experts can factor in differences between past

project experience and requirements of the proposed

project.

The experts can factor in project impacts caused by new

technologies, architectures, applications and languages

involved in the future project and can also factor in

exceptional personnel characteristics and interactions,

etc.

The disadvantages include:

This method cannot be quantified.

It is hard to document the factors used by the experts or

experts-group.

Experts may be biased, optimistic, or pessimistic, even though such biases are reduced by the group consensus. The expert judgment method usefully complements other cost estimating methods such as the algorithmic method.

3.2 Estimating by Analogy

Estimating by analogy means comparing the proposed

project to previously completed similar projects whose development information is known. Actual

data from the completed projects are extrapolated to

estimate the proposed project. This method can be used

either at system-level or at the component-level.

Estimating by analogy is relatively straightforward.

Actually in some respects, it is a systematic form of

expert judgment since experts often search for

analogous situations so as to inform their opinion. The

steps using estimating by analogy are:

Characterizing the proposed project.

Selecting the most similar completed projects whose

characteristics have been stored in the historical data

base.

Deriving the estimate for the proposed project from the

most similar completed projects by analogy.

The main advantages of this method are:

The estimation is based on actual project characteristic

data.

The estimator's past experience and knowledge can be used, even though they are not easy to quantify.

The differences between the completed and the

proposed project can be identified and impacts

estimated.

However, there are also some problems with this method. Using it, we have to determine how best to describe projects. The choice of variables must be restricted to information that is available at the point the prediction is required. Possibilities include the

type of application domain, the number of inputs, the

number of distinct entities referenced, the number of

screens and so forth.

Even once we have characterized the project, we have

to determine the similarity and how much confidence we can place in the analogies. Too few analogies might lead to maverick projects being used; too many might lead to dilution of the effect of the closest analogies. Martin Shepperd et al. introduced the method of finding

the analogies by measuring Euclidean distance in n-

dimensional space where each dimension corresponds

to a variable. Values are standardized so that each

dimension contributes equal weight to the process of

finding analogies. Generally speaking, two analogies

are the most effective.

Finally, we have to derive an estimate for the new

project by using known effort values from the

analogous projects. Possibilities include means and

weighted means which will give more influence to the

closer analogies.
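As an illustrative sketch (not from the survey itself), the analogy procedure described above can be written in Python: features are standardized so each dimension contributes equal weight, the closest completed projects are found by Euclidean distance, and the estimate is a distance-weighted mean. The toy dataset, the two features used, and k = 2 analogies are assumptions for the example:

```python
import math

def standardize(projects):
    """Scale each feature to [0, 1] so every dimension contributes equal weight."""
    n_features = len(projects[0])
    lo = [min(p[i] for p in projects) for i in range(n_features)]
    hi = [max(p[i] for p in projects) for i in range(n_features)]
    return [[(p[i] - lo[i]) / (hi[i] - lo[i]) if hi[i] > lo[i] else 0.0
             for i in range(n_features)] for p in projects]

def estimate_by_analogy(history, efforts, new_project, k=2):
    """Estimate effort as the distance-weighted mean of the k closest analogies."""
    scaled = standardize(history + [new_project])
    *hist_scaled, target = scaled
    # Euclidean distance in n-dimensional feature space to each past project.
    dists = [(math.dist(target, h), e) for h, e in zip(hist_scaled, efforts)]
    dists.sort(key=lambda t: t[0])
    nearest = dists[:k]
    # Weighted mean gives more influence to the closer analogies.
    weights = [1.0 / (d + 1e-9) for d, _ in nearest]
    return sum(w * e for w, (_, e) in zip(weights, nearest)) / sum(weights)
```

For example, a hypothetical new project described by two features (size, number of inputs) would be estimated from the two most similar historical projects.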

It has been shown that estimating by analogy is a superior technique to estimation via algorithmic models in at least some circumstances. It is a more intuitive method, so it is easier to understand the reasoning behind a particular prediction.

3.3 Top-Down and Bottom-Up Methods

3.3.1 Top-Down Estimating Method

Top-down estimating method is also called Macro

Model. Using top-down estimating method, an overall

cost estimation for the project is derived from the global

properties of the software project, and then the project is

partitioned into various low-level components. The

leading method using this approach is Putnam model.

This method is more applicable to early cost estimation

when only global properties are known. In the early

phase of software development, it is very useful because no detailed information is available.

The advantages of this method are:

It focuses on system-level activities such as integration,

documentation, configuration management, etc., many

of which may be ignored in other estimating methods

and it will not miss the cost of system-level functions.

It requires minimal project detail, and it is usually

faster, easier to implement.

The disadvantages are:

It often does not identify difficult low-level problems

that are likely to escalate costs and sometimes tends to

overlook low-level components.

It provides no detailed basis for justifying decisions or

estimates.

Because it provides a global view of the software project, it usually embodies some effective features, such as the cost-time trade-off capability that exists in the Putnam model.

3.3.2 Bottom-up Estimating Method

Using the bottom-up estimating method, the cost of each software component is estimated, and the results are then combined to arrive at an estimated cost of the overall project. It

aims at constructing the estimate of a system from the

knowledge accumulated about the small software

components and their interactions. The leading method

using this approach is COCOMO's detailed model.

The advantages:

It permits the software group to handle an estimate in an

almost traditional fashion and to handle estimate

components for which the group has a feel.

It is more stable because the estimation errors in the

various components have a chance to balance out.

The disadvantages:

It may overlook many of the system-level costs

(integration, configuration management, quality

assurance, etc.) associated with software development.

It may be inaccurate because the necessary information may not be available in the early phase.

It tends to be more time-consuming.

It may not be feasible when both time and personnel are

limited.

3.4 COCOMO Models

One very widely used algorithmic software cost model

is the Constructive Cost Model (COCOMO). The basic

COCOMO model [4] has a very simple form:


MAN-MONTHS = K1 × (Thousands of Delivered Source Instructions)^K2

where K1 and K2 are two parameters dependent on the application and development environment.
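As a minimal sketch, the following Python function evaluates the basic COCOMO equation above. The (K1, K2) pairs shown are Boehm's published coefficients for the three basic COCOMO development modes, which this survey does not list explicitly:

```python
# Basic COCOMO coefficients (K1, K2) per Boehm's published development modes.
MODES = {
    "organic":       (2.4, 1.05),
    "semi-detached": (3.0, 1.12),
    "embedded":      (3.6, 1.20),
}

def basic_cocomo_effort(kdsi, mode="organic"):
    """MAN-MONTHS = K1 * (KDSI) ** K2 for the chosen development mode."""
    k1, k2 = MODES[mode]
    return k1 * kdsi ** k2
```

For a 10-KDSI organic-mode project this gives roughly 27 person-months; the embedded mode, with its larger exponent, grows much faster with size.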

Estimates from the basic COCOMO model can be made

more accurate by taking into account other factors

concerning the required characteristics of the software

to be developed, the qualification and experience of the

development team, and the software development

environment. Many of these factors affect the person

months required by an order of magnitude or more.

COCOMO assumes that the system and software

requirements have already been defined, and that these

requirements are stable. This is often not the case.

COCOMO model is a regression model. It is based on

the analysis of 63 selected projects. The primary input is

KDSI. The problems are:

In the early phase of the system life cycle, the size is estimated with great uncertainty, so an accurate cost estimate cannot be arrived at.

The cost estimation equation is derived from the

analysis of 63 selected projects, so it often has problems outside that particular environment. For this reason, recalibration is necessary.

3.5 Putnam model

Another popular software cost model is the Putnam

model. The form of this model is:

Technical constant: C = Size × B^(1/3) × T^(4/3)

Total person-months: B = (1/T^4) × (Size/C)^3

where:
T is the required development time in years,
Size is estimated in LOC,
C is a parameter dependent on the development environment, determined on the basis of historical data from past projects. As a rating: C = 2,000 (poor), C = 8,000 (good), C = 12,000 (excellent).

The Putnam model is very sensitive to the development

time: decreasing the development time can greatly

increase the person-months needed for development.
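A minimal sketch of the person-months equation above, using exactly the symbols of this section (Size in LOC, technology constant C, development time T in years):

```python
def putnam_person_months(size_loc, c, t_years):
    """Total person-months B = (1 / T**4) * (Size / C)**3, as given above.

    c: technology constant from historical data
       (2,000 poor, 8,000 good, 12,000 excellent).
    t_years: required development time in years.
    """
    return (size_loc / c) ** 3 / t_years ** 4
```

The 1/T^4 term makes the sensitivity explicit: halving the development time multiplies the person-months by 16.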

3.6 Function Point Analysis Based Methods

The Function Point Analysis is another method of

quantifying the size and complexity of a software

system in terms of the functions that the systems

delivers to the user. A number of proprietary models for

cost estimation have adopted a function point type of

approach, such as ESTIMACS and SPQR/20[10].

The function point measurement method was developed

by Allan Albrecht at IBM and published in 1979. He

believes function points offer several significant

advantages over SLOC counts of size measurement.

There are two steps in counting function points:

Counting the user functions. The raw function counts are arrived at by considering a linear combination of five basic software components: external inputs, external outputs, external inquiries, logical internal files, and external interfaces, each at one of three complexity levels: simple, average, or complex. The sum of these numbers, weighted according to the complexity level, is the number of function counts (FC).

Adjusting for environmental processing complexity. The final function point count is arrived at by multiplying FC by an adjustment factor that is determined by considering 14 aspects of processing complexity. This adjustment factor allows the FC to be modified by at most +35% or −35%.
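The two counting steps can be sketched as follows. The component weights are the standard Albrecht values (an addition here; the survey does not tabulate them), and the 0.65 + 0.01 × Σ(ratings) adjustment factor reproduces the ±35% range described above, since 14 ratings of 0–5 give a factor between 0.65 and 1.35:

```python
# Standard Albrecht weights for the five components at three complexity levels.
WEIGHTS = {
    "external_input":     {"simple": 3, "average": 4,  "complex": 6},
    "external_output":    {"simple": 4, "average": 5,  "complex": 7},
    "external_inquiry":   {"simple": 3, "average": 4,  "complex": 6},
    "internal_file":      {"simple": 7, "average": 10, "complex": 15},
    "external_interface": {"simple": 5, "average": 7,  "complex": 10},
}

def function_points(counts, complexity_ratings):
    """counts: {(component, level): n}; complexity_ratings: 14 ints in 0..5."""
    # Step 1: raw function counts (FC), weighted by complexity level.
    fc = sum(WEIGHTS[comp][level] * n for (comp, level), n in counts.items())
    # Step 2: value adjustment factor, 0.65 + 0.01 * sum -> 0.65..1.35 (i.e. ±35%).
    vaf = 0.65 + 0.01 * sum(complexity_ratings)
    return fc * vaf
```

The component counts and ratings in any concrete use would come from the requirements or design specification.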

The collection of function point data has two primary

motivations. One is the desire by managers to monitor

levels of productivity. Another use of it is in the

estimation of software development cost.

There are some cost estimation methods which are

based on a function point type of measurement, such as

ESTIMACS and SPQR/20. SPQR/20 is based on a

modified function point method. Whereas traditional

function point analysis is based on evaluating 14

factors, SPQR/20 separates complexity into three

categories: complexity of algorithms, complexity of

code, and complexity of data structures. ESTIMACS is


a proprietary system designed to give a development cost

estimate at the conception stage of a project and it

contains a module which estimates function point as a

primary input for estimating cost.

The advantages of function point analysis based model

are:

Function points can be estimated from requirements

specifications or design specifications, thus making it

possible to estimate development cost in the early

phases of development.

Function points are independent of the language, tools,

or methodologies used for implementation.

Non-technical users have a better understanding of what

function points are measuring since function points are

based on the system user's external view of the system.

4. Non-Algorithmic Methods

4.1 Neural Networks

Neural networks are nets of processing elements

that are able to learn the mapping existent between

input and output data. The neuron computes a

weighted sum of its inputs and generates an output

if the sum exceeds a certain threshold. This output

then becomes an excitatory (positive) or inhibitory

(negative) input to other neurons in the network.

The process continues until one or more outputs are

generated [18]. That work also reports the use of neural networks for predicting software reliability, including experiments with both feed-forward and Jordan networks with a cascade correlation learning algorithm.

The Neural Network is initialized with random

weights and gradually learns the relationships

implicit in a training data set by adjusting its

weights when presented with these data. The network

generates effort by propagating the initial inputs

through subsequent layers of processing elements to

the final output layer. Each neuron in the network

computes a non-linear function of its inputs and

passes the resultant value along its output [3]. The favoured activation function is the sigmoid function, given as:

f(x) = 1 / (1 + e^(−x))
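A single neuron with this activation can be sketched directly from the description above (a weighted sum of inputs, then the sigmoid squashing function):

```python
import math

def sigmoid(x):
    """f(x) = 1 / (1 + e**-x): squashes a weighted sum into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def neuron_output(inputs, weights, bias):
    """A neuron computes a weighted sum of its inputs, then applies f."""
    return sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)
```

Large positive sums drive the output toward 1 (excitatory) and large negative sums toward 0 (inhibitory), matching the threshold behaviour described above.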

Among the several available training algorithms the

error back propagation is the most used by software

metrics researchers. The drawback of this method lies in

the fact that the analyst cannot manipulate the net once the learning phase has finished [10]. Limitations in several aspects prevent neural networks from being widely adopted in effort estimation. A neural network is a black-box approach, so it is difficult to understand what is going on internally; hence, justifying the prediction rationale is tough. Neural networks are known for their ability to tackle classification problems, whereas what effort estimation needs is generalization capability. At the same time, there are few guidelines for the construction of neural network topologies [3].

One of the methods is the use of Wavelet Neural

Network (WNN) to forecast the software

development effort. The effectiveness of the WNN

variants is compared with other techniques such as

multiple linear regressions in terms of the error

measure which is mean magnitude relative error

(MMRE) obtained on Canadian financial (CF) dataset

and IBM data processing services (IBMDPS) dataset

[13]. Based on the experiments conducted, it is

observed that the WNN outperformed all the other

techniques. Another method is proposed to use radial

basis neural network for effort estimation [20]. A

case study based on the COCOMO81 database

compares the proposed neural network model with the

Intermediate COCOMO. The results are analyzed using

different criteria, and it is observed that the radial basis neural network provided better results.

4.2 Particle Swarm Optimization

Particle swarm optimization (PSO) is a

computational method that optimizes a problem

by iteratively trying to improve a candidate solution

with regard to a given measure of quality. Such


methods are commonly known as Meta Heuristics as

they make few or no assumptions about the problem

being optimized and can search very large spaces of

candidate solutions. PSO shares many similarities

with evolutionary computation techniques such as

Genetic Algorithms (GA). The system is initialized

with a population of random solutions and

searches for optima by updating generations.

However, unlike GA, PSO has no evolution

operators such as crossover and mutation. In PSO, the

potential solutions, called particles, fly through the

problem space by following the current optimum

particles. One method has been proposed to use

Particle Swarm Optimization (PSO) for tuning the

parameters of the Constructive Cost Model (COCOMO) for better effort estimation [5]. The

performance of the developed models using PSO

was tested on NASA software project data presented

in [12]. A comparison between the PSO-tuned

COCOMO, FL, Bailey-Basili and Doty models was

provided. The proposed models provided good

estimation capability compared to traditional model

structures. An algorithm [19] is developed named

Particle Swarm Optimization Algorithm (PSOA) to fine

tune the fuzzy estimate for the development of software

projects.

The basic concept of PSO lies in accelerating each particle towards its Pbest and Gbest locations, with a random weighted acceleration at each time step. The modifications of the particles' positions can be mathematically modeled by the following equations:

V_i^(k+1) = w × V_i^k + c1 × rand1() × (Pbest_i − S_i^k) + c2 × rand2() × (Gbest − S_i^k)

S_i^(k+1) = S_i^k + V_i^(k+1)

where:
S_i^k is the current search point,
S_i^(k+1) is the modified search point,
V_i^k is the current velocity,
V_i^(k+1) is the modified velocity,
V_pbest is the velocity based on Pbest,
V_gbest is the velocity based on Gbest,
w is the weighting (inertia) function,
c_j are the weighting factors,
rand() are uniformly distributed random numbers between 0 and 1.
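The update equations above translate directly into a short PSO loop. The one-dimensional objective, the bounds, and the parameter values (w, c1, c2, swarm size) in this sketch are illustrative assumptions, not values from the survey:

```python
import random

def pso_minimize(f, lo, hi, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimize f on [lo, hi] using the velocity/position updates above (1-D)."""
    pos = [random.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]                       # each particle's best-known position
    pbest_val = [f(p) for p in pos]
    gbest = min(pos, key=f)              # swarm's best-known position
    for _ in range(iters):
        for i in range(n_particles):
            # V_i^(k+1) = w*V_i^k + c1*rand()*(Pbest_i - S_i^k) + c2*rand()*(Gbest - S_i^k)
            vel[i] = (w * vel[i]
                      + c1 * random.random() * (pbest[i] - pos[i])
                      + c2 * random.random() * (gbest - pos[i]))
            pos[i] += vel[i]             # S_i^(k+1) = S_i^k + V_i^(k+1)
            if f(pos[i]) < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i], f(pos[i])
                if f(pos[i]) < f(gbest):
                    gbest = pos[i]
    return gbest
```

In a parameter-tuning setting such as the PSO-tuned COCOMO mentioned above, f would be the estimation error of the model as a function of its parameters.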

4.3 Genetic Programming

Genetic programming is one of

the evolutionary methods for effort

estimation. Evolutionary computation

techniques are characterized by the fact that

the solution is achieved by means of a cycle of

generations of candidate solutions that are pruned by the criterion of 'survival of the fittest' [24]. When a GA is

used for the resolution of real-world problems, a

population comprised of a random set of individuals

is generated. The population is evaluated during the

evolution process. For each individual a rating is

given, reflecting the degree of adaptation of the

individual to the environment. A percentage of the

most adapted individuals is kept, while the

others are discarded. The individuals kept in the

selection process can suffer modifications in their

basic characteristics through a mechanism of

reproduction. This mechanism is applied on the current

population aiming to explore the search space and to

find better solutions for the problem by means of

crossover and mutation operators generating new

individuals for the next generation. This process,

called reproduction, is repeated until a satisfactory

solution is found [6]. A comparison is suggested by [9]

based on the well-known Desharnais data set of 81

software projects derived from a Canadian software

house. It shows that Genetic Programming can offer

some significant improvements in accuracy and has

the potential to be a valid additional tool for

software effort estimation. Genetic Programming is a

nonparametric method since it does not make any


assumption about the distribution of the data, and

derives the equations according only to fitted values. An

effort-based model is proposed by [4] for estimating the parameters of the COCOMO model using a genetic algorithm. The

algorithm considers methodology linearly related to

effort.

A method [1] has been proposed for feature

selection and parameters optimization for machine

learning regression for software effort estimation.

Simulations are carried out using benchmark data

sets of software projects, namely, Desharnais [9],

NASA [19], COCOMO [8]. The results are

compared to those obtained by methods using neural

networks, support vector machines, multiple additive

regression trees. In all data sets, the simulations

have shown that the proposed GA-based method was

able to improve the performance of the machine

learning methods.

A genetic algorithm requires a genetic representation of the solution domain and a fitness function over it. How the genetic algorithm works can be clearly understood from the pseudocode [6] as follows:

Step 1 - Initialize: Give initial random values to the genes in the population.

Step 2 - Evaluation: Evaluate this gene population. Each gene in the present population is tested and its fitness as a solution to the problem is calculated. If any gene has solved the problem, or provides a good enough fit (depending on the application and its requirements), terminate the programme and go to SOLVED.

Step 3 - Next population: Generate new genes by crossover from pairs of the highest-fitness (highest-scoring) genes of the last population. Randomly mutate or modify the values of a small fraction of these new genes.

Step 4 - Go to Evaluation.

Step 5 - Solved: Finished.
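The pseudocode above can be sketched as a runnable Python function. The bit-string gene encoding, the population and elitism sizes, and the "one-max" toy fitness used below are illustrative assumptions, not details from the survey:

```python
import random

def genetic_search(fitness, gene_len, pop_size=30, generations=60,
                   keep=10, mutation_rate=0.05):
    # Step 1 - Initialize: random bit-string genes.
    pop = [[random.randint(0, 1) for _ in range(gene_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Step 2 - Evaluation: rank the present population by fitness.
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == gene_len:   # good-enough fit: go to SOLVED
            break
        # Keep a fraction of the most adapted individuals; discard the rest.
        parents = pop[:keep]
        # Step 3 - Next population: crossover pairs of high-fitness genes,
        # randomly mutating a small fraction of the new genes' values.
        children = []
        while len(children) < pop_size - keep:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, gene_len)
            child = [g ^ 1 if random.random() < mutation_rate else g
                     for g in a[:cut] + b[cut:]]
            children.append(child)
        pop = parents + children          # Step 4 - go to Evaluation
    return max(pop, key=fitness)          # Step 5 - Solved
```

With a "one-max" fitness (count of 1-bits), the search quickly converges toward the all-ones gene; in effort estimation the gene would instead encode model parameters and fitness would be the estimation accuracy.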

4.4 Fuzzy Logic

Fuzzy logic is a valuable tool, which can be used

to solve highly complex problems where a

mathematical model is too difficult or impossible to

create. It is also used to reduce the complexity of

existing solutions as well as increase the accessibility

of control theory [21]. The development of software has

always been characterized by parameters that possess

certain level of fuzziness. Studies have shown that fuzzy logic models have a place in software effort

estimation [16]. The application of fuzzy logic is

able to overcome some of the problems which are

inherent in existing effort estimation techniques [7].

Fuzzy logic is not only useful for effort prediction,

but is essential in order to improve the

quality of current estimating models [22].

Fuzzy logic enables linguistic representation of the

input and output of a model to tolerate imprecision

[17]. It is particularly suitable for effort estimation

as many software attributes are measured on nominal

or ordinal scale type which is a particular case of

linguistic values [2]. A method is proposed as a

Fuzzy Neural Network (FNN) approach for embedding

artificial neural network into fuzzy inference

processes in order to derive the software effort

estimates [23]. Artificial neural network is utilized to

determine the significant fuzzy rules in fuzzy inference

processes. The results showed that applying FNN

for software effort estimates resulted in slightly

smaller mean magnitude of relative error (MMRE)

and probability of a project having a relative error

of less than or equal to 0.25 (Pred (0.25)) as

compared with the results obtained by just using

artificial neural network and the original model.
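The two evaluation measures used throughout this survey, MMRE and Pred(0.25), can be stated precisely as short functions:

```python
def mmre(actuals, predictions):
    """Mean Magnitude of Relative Error: mean of |actual - predicted| / actual."""
    return sum(abs(a - p) / a for a, p in zip(actuals, predictions)) / len(actuals)

def pred(actuals, predictions, level=0.25):
    """Pred(l): fraction of projects whose relative error is <= l."""
    hits = sum(1 for a, p in zip(actuals, predictions) if abs(a - p) / a <= level)
    return hits / len(actuals)
```

A lower MMRE and a higher Pred(0.25) indicate a better estimation model.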

Another proposal [15] is the use of subset selection

algorithm based on fuzzy logic for analogy software

effort estimation models. Validation using two

established datasets (ISBSG, Desharnais) shows that

using fuzzy features subset selection algorithm in

analogy-based software effort estimation contributes significant results. Another proposal based on the same logic is by [7]: fuzzy logic also improves the interpretability of the model, allowing the user to

view, evaluate, criticize and adapt the model. Another


model is proposed for optimization of effort for

a specific application, based on fuzzy logic sizing rather than a single number: KLOC is taken as a triangular fuzzy number [11]. An empirical study is done

not only on the 10 NASA projects but also by comparing the results to existing models. The comparative study shows better results, so the proposed methodology is general enough to be

applied to other models based on function point

methods and to other areas of quantitative software

engineering. Fuzzy logic is a logic that is represented

by fuzzy expressions which satisfy the following [4]:

The truth values 0 and 1, and variables x_i ∈ [0, 1] (i = 1, 2, …, n), are fuzzy expressions.

If f is a fuzzy expression, ¬f (not f) is also a fuzzy expression.

If f and g are fuzzy expressions, f ∧ g and f ∨ g are also fuzzy expressions.

As with fuzzy expressions, a fuzzy proposition can have its truth value in the interval [0, 1]:

f: [0, 1]^n → [0, 1]
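Under the standard Zadeh interpretation (an assumption; the survey does not fix the operators), ¬, ∧, and ∨ are complement, minimum, and maximum, and a triangular fuzzy number, such as the KLOC estimate mentioned earlier, has a simple membership function:

```python
def f_not(a):        # ¬f: complement
    return 1.0 - a

def f_and(a, b):     # f ∧ g: minimum of the two truth values
    return min(a, b)

def f_or(a, b):      # f ∨ g: maximum of the two truth values
    return max(a, b)

def triangular(x, lo, mid, hi):
    """Membership of x in a triangular fuzzy number (lo, mid, hi),
    e.g. a KLOC estimate expressed as (low, most likely, high)."""
    if x <= lo or x >= hi:
        return 0.0
    if x <= mid:
        return (x - lo) / (mid - lo)
    return (hi - x) / (hi - mid)
```

For instance, a size estimate of "about 30 KLOC, between 20 and 50" peaks at membership 1.0 at 30 and falls linearly to 0 at the endpoints.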

5. CONCLUSION

Researchers have developed different models for

estimation but there is no estimation method which

can give the best estimates in all situations, and each technique can be suitable for a particular kind of project. There are many software cost estimation

methods available including algorithmic methods,

estimating by analogy, expert judgment method, top-

down method, and bottom-up method. No one method

is necessarily better or worse than the other, in fact,

their strengths and weaknesses are often complimentary

to each other. In an absolute sense, none of the

models perform particularly well at estimating

software development effort, particularly along the

MMRE dimension. But in a relative sense NN approach

is competitive with traditional models. Again as a

comparative analysis, genetic programming can be

used to fit complex functions and can be easily

interpreted. Genetic Programming can find a more

advanced mathematical function between KLOC and

effort. Particle Swarm Optimization alone gives

almost same results as basic models. So the

research is on the way to combine different

techniques for calculating the best estimate.
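One simple way to combine techniques, in the spirit of the direction sketched above, is a weighted average of the estimates produced by several models, weighting each model inversely to its historical MMRE; the model names and figures below are hypothetical:

```python
# Illustrative combination of effort estimators: a weighted average in
# which each model's weight is the inverse of its historical MMRE.
# All names and figures are hypothetical.

def combine(estimates, mmres):
    """Weight each model's estimate by 1/MMRE and normalize."""
    weights = [1.0 / m for m in mmres]
    total = sum(weights)
    return sum(w * e for w, e in zip(weights, estimates)) / total

# Hypothetical estimates (person-months) from three techniques, e.g.
# COCOMO, a neural network, and a fuzzy model, with their past MMREs.
estimates = [120.0, 150.0, 135.0]
mmres = [0.40, 0.20, 0.25]
print(round(combine(estimates, mmres), 1))
```

The historically most accurate model dominates the combination, while the others still temper its estimate.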

6. REFERENCES

[1] Adriano L. I. Oliveira, Petronio L. Braga, Ricardo M. F. Lima, Márcio L. Cornélio, (2010), "GA-based feature selection and parameters optimization for machine learning regression applied to software effort estimation", Information and Software Technology 52, pp. 1155-1166.

[2] Agustin Gutierrez T., Cornelio Yanez M. and Jerome Leboeuf Pasquier, (2005), "Software Development Effort Estimation Using Fuzzy Logic: A Case Study", Proceedings of the Sixth Mexican International Conference on Computer Science (ENC'05).

[3] A. Idri, T. M. Khoshgoftaar, A. Abran, (2002), "Can neural networks be easily interpreted in software cost estimation?", IEEE Trans. Software Engineering, Vol. 2, pp. 1162-1167.

[4] Alaa Sheta, (2006), "Estimation of the COCOMO Model Parameters Using Genetic Algorithms for NASA Software Projects", Journal of Computer Science 2 (2): 118-123.

[5] Alaa Sheta, David Rine and Aladdin Ayesh, (2008), "Development of Software Effort and Schedule Estimation Models Using Soft Computing Techniques", IEEE Congress on Evolutionary Computation, pp. 1283-1289.

[6] A. P. Engelbrecht, (2006), Fundamentals of Computational Swarm Intelligence, John Wiley & Sons, New Jersey.

[7] A. R. Gray, S. G. MacDonell, (1997), "Applications of Fuzzy Logic to Software Metric Models for Development Effort Estimation", Fuzzy Information Processing Society (NAFIPS '97), Annual Meeting of the North American, September 21-24, pp. 394-399.

[8] B. W. Boehm, (1981), Software Engineering Economics, Englewood Cliffs, NJ: Prentice-Hall.

[9] C. J. Burgess, M. Lefley, (2005), "Can Genetic Programming improve Software Effort Estimation? A Comparative Evaluation", Machine Learning Applications in Software Engineering: Series on Software Engineering and Knowledge Engineering, pp. 95-105.

[10] Finnie, G. R., G. E. Wittig and J.-M. Desharnais, (1997), "A Comparison of Software Effort Estimation Techniques Using Function Points with Neural Networks, Case-Based Reasoning and Regression Models", Journal of Systems and Software, Vol. 39, pp. 281-289.

[11] Harish Mittal, Pradeep Bhatia, (2007), "Optimization Criterion for Effort Estimation using Fuzzy Technique", CLEI Electronic Journal, Vol. 10, Num. 1, Pap. 2.

[12] J. W. Bailey and V. R. Basili, (1981), "A meta model for software development resource expenditure", in Proceedings of the International Conference on Software Engineering, pp. 107-115.

[13] K. Vinay Kumar, V. Ravi, Mahil Carr and N. Raj Kiran, (2008), "Software development cost estimation using wavelet neural networks", Journal of Systems and Software, Volume 81, pp. 1853-1867.

[14] L. H. Putnam, (1978), "A general empirical solution to the macro software sizing and estimating problem", IEEE Transactions on Software Engineering, SE-4(4), pp. 345-361.

[15] Mohammad Azzeh, Daniel Neagu and Peter Cowling, (2008), "Improving analogy software effort estimation using fuzzy feature subset selection algorithm", Proceedings of the 4th International Workshop on Predictor Models in Software Engineering, pp. 71-78.

[16] Moon Ting Su, Teck Chaw Ling, Keat Keong Phang, Chee Sun Liew, Peck Yen Man, (2007), "Enhanced Software Development Effort and Cost Estimation Using Fuzzy Logic Model", Malaysian Journal of Computer Science, Vol. 20(2), pp. 199-207.

[17] N. E. Fenton, S. L. Pfleeger, (1997), Software Metrics: A Rigorous and Practical Approach, 2nd Edition, PWS Publishing Company, Thomson Publishing, Boston.

[18] N. Karunanitthi, D. Whitley, and Y. K. Malaiya, (1992), "Using Neural Networks in Reliability Prediction", IEEE Software, Vol. 9, No. 4, pp. 53-59.

[19] Prasad Reddy, (2010), "Particle Swarm Optimization in the fine-tuning of Fuzzy Software Cost Estimation Models", International Journal of Software Engineering (IJSE), Volume (1): Issue (1), pp. 12-23.

[20] Prasad Reddy P.V.G.D, Sudha K.R, Rama Sree P and Ramesh S.N.S.V.S.C, (2010), "Software Effort Estimation using Radial Basis and Generalized Regression Neural Networks", Journal of Computing, Volume 2, Issue 5, pp. 87-92.

[21] Razaz, M. and King, J., (2004), Introduction to Fuzzy Logic, Information Systems - Signal and Image Processing Group. http://www.sys.uea.ac.uk/king/restricted/boards/

[22] S. Kumar, B. A. Krishna, and P. S. Satsangi, (1994), "Fuzzy systems and neural networks in software engineering project management", Journal of Applied Intelligence, No. 4, pp. 31-52.

[23] Sun-Jen Huang and Nan-Hsing Chiu, (2007), "Applying fuzzy neural network to estimate software development effort", Journal of Applied Intelligence, Vol. 30, Issue 2, pp. 73-83.

[24] Urkola Leire, Dolado J. Javier, Fernandez Luis and Otero M. Carmen, (2002), "Software Effort Estimation: the Elusive Goal in Project
