
Article
Air Condition’s PID Controller Fine-Tuning Using
Artificial Neural Networks and Genetic Algorithms
Maryam Malekabadi, Majid Haghparast * and Fatemeh Nasiri
Department of Computer Engineering, Yadegar-e-Imam Khomeini (RAH) Shahre Rey Branch,
Islamic Azad University, Tehran, Iran; malekabadi@iausr.ac.ir (M.M.); nasiri@iausr.ac.ir (F.N.)
* Correspondence: haghparast@iausr.ac.ir or mhaghparast@gmail.com; Tel.: +98-912-646-9737

Received: 9 February 2018; Accepted: 11 May 2018; Published: 21 May 2018

Abstract: In this paper, a Proportional–Integral–Derivative (PID) controller is fine-tuned through the use of artificial neural networks and evolutionary algorithms. In particular, the PID coefficients are adjusted online using a multi-layer perceptron. We used a feed-forward multi-layer perceptron with one hidden layer; the activation functions were sigmoid functions, and the weights of the network were optimized using a genetic algorithm, which is one type of evolutionary algorithm. The data for validation were derived from the desired results of the system. The proposed methodology was evaluated against other well-known techniques of PID parameter tuning.

Keywords: genetic algorithms; optimization; artificial neural network

1. Introduction
In general, optimization is a set of methods and techniques used to find the minimum and maximum values of mathematical functions, including linear and nonlinear functions. Essentially, optimization methods are divided into two families: evolutionary methods and gradient-based methods. Nowadays, due to advances in science and technology, many methods have been developed, so the traditional gradient-based optimization methods have lost much of their dominance. Among evolutionary optimization methods, we use a genetic algorithm optimization method that can be applied to the training of neural networks and neuro-fuzzy systems. The first simulation efforts were conducted by Warren McCulloch and Walter Pitts using a logical model of neuronal function, which formed the basic building block of most of today's artificial neural networks [1–4].
The performance of this model is based on inputs and outputs: if the weighted sum of the inputs is greater than a threshold value, the neuron is stimulated. The result of this model was the implementation of simple functions such as AND and OR [5]. The adaptive linear neuron model is another system, and was the first neural network applied to real problems by Widrow and Hoff (Stanford University) in 1960 [6]. In 1969, Minsky and Papert wrote a book showing that the single-layer perceptron was unable to solve certain classes of problems, and research in this field stalled for several years [5]. In 1974, Werbos formulated learning with backward error propagation, enabling multi-layer networks to be trained with stronger learning rules [7].
There are many studies on the tuning of PID controller parameters. For example, fuzzy logic has been used for parameter tuning of PID controllers [8,9], and the IMC method has been used for PID tuning [10]. The PID controller has also been used in studies involving air conditioning systems. The CARLA algorithm has been used to tune a PID controller for an air temperature controller [11]; this is an online algorithm with a good performance level in practical applications. Nowadays, artificial neural network technology has changed considerably. The main contribution of this work is to create an online neural network for control; since the network is online, it is normal to expect better results.

Computers 2018, 7, 32; doi:10.3390/computers7020032 www.mdpi.com/journal/computers



In Section 2, we describe the system model and related work. In Section 3, we present the proposed methodology. In Section 4, we present the simulation results, and in Section 5 we provide a general conclusion. A complete list of references is also provided.

2. Related Work
The Earth system continually displays many changes in the earth's climate. International agreements such as the Kyoto Protocol and the Copenhagen Accord emphasize the reduction of carbon dioxide emissions as one of the main ways to deal with the risk of climate change [12]. The recommendations made by the Chinese Government's Electric and Mechanical Services Organization (25.5 °C), EU expansion and Innovation of China (26 °C), and the Ministry of Knowledge Economy of the Republic of Korea (26–28 °C) relate to the thermal environment and its optimization [13]. It is also possible to refer to the suggestion of the Carrier 1 software (25–26.2 °C) in relation to the internal temperature [14]. The result of this research is logical and presents an acceptable increase in the average temperature of the air in summer. On the other hand, one of the most important human environmental requirements is the provision of thermal comfort conditions. The most common definition of thermal comfort is given by ASHRAE Standard 55, which defines thermal comfort as “that condition of mind that expresses satisfaction with the thermal environment” [15].
Furthermore, researchers have concluded that humans spend between 80 and 90 percent of their lives in an indoor environment [16]. Hence, indoor air quality, like thermal comfort, has a direct impact on the health, productivity, and morale of humans [17]. An increase in the average indoor air temperature in summer can reduce the energy consumption of air conditioning systems
and consequently is a solution against undesirable climate changes [18]. However, this action may
disturb the residents’ thermal comfort conditions, especially in the summer. Therefore, optimizing
energy consumption in air conditioning systems is a useful action to maintain thermal comfort
conditions [19]. Among the most important studies in this area is the research that has been undertaken
by Tian et al. [20].
By assessing the mean local air age, Tian et al. evaluated the mean vote indicator of the occupants relative to the thermal conditions of the environment and the percentage of dissatisfied individuals. The “Layer Air Conditioning System” is able to meet the requirements of an increased internal air temperature by providing thermal comfort conditions only in the inhalation area of individuals [13]. Morovvat [21] comprehensively examined the “Layer Air Conditioning System” in conjunction with a hydronic radiant cooler in terms of thermal comfort, air quality, and energy consumption, and as part of his findings, he confirmed the conclusions of Tian et al. [20]. The combination of genetic algorithms with neural networks has been utilized and has provided prominent results in many real-life applications [22–24]. A genetic algorithm has been used to solve the on-line optimization problem of multiple parameters [25]. Maerefat and Morovvat [26] introduced the combination of the “Layer Air Conditioning System” with ceiling radiant cooling as a new way to provide thermal comfort. Lane et al. used a numerical simulation to compare the thermal comfort conditions and indoor air quality of displacement and mixing air conditioning systems in a room [27–29].
The radial basis function network offers a viable alternative to the two-layer neural network in many
applications of signal processing [30].
Lane et al. compared the annual energy consumption of layer, displacement, and mixing air conditioning systems and concluded that, for a case study, the energy savings of the layer air conditioning system were 25% and 44% higher than those of the displacement and mixing air conditioning systems, respectively. The proposed dynamic model is given as Equation (1):

y(k) = a(k)·y(k − 1)/(1 + y²(k − 1)) + u(k − 1) + 0.2u(k − 2)    (1)
In Equation (1), a(k) is a parameter that changes with time and is given as Equation (2):

a(k) = 1.4(1 − 0.9e^(−0.3k))    (2)
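To make the plant model concrete, the following minimal Python sketch steps Equations (1) and (2) forward in time. The unit-step input sequence, the zero initial condition, and the 50-step horizon are arbitrary illustration choices, not values taken from the paper.

```python
import math

def a(k):
    # Time-varying parameter of Equation (2)
    return 1.4 * (1.0 - 0.9 * math.exp(-0.3 * k))

def simulate_plant(u):
    """Iterate the dynamic model of Equation (1) for an input sequence u."""
    y_prev = 0.0          # y(k-1), assumed zero at start
    outputs = []
    for k in range(len(u)):
        u1 = u[k - 1] if k >= 1 else 0.0   # u(k-1)
        u2 = u[k - 2] if k >= 2 else 0.0   # u(k-2)
        y_k = a(k) * y_prev / (1.0 + y_prev ** 2) + u1 + 0.2 * u2
        outputs.append(y_k)
        y_prev = y_k
    return outputs

if __name__ == "__main__":
    step_input = [1.0] * 50               # illustrative unit-step test signal
    print(simulate_plant(step_input)[:5])
```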

3. The Proposed Methodology

Our work focuses on closed loop control systems. In particular, the proposed approach updates an A/C's PID parameters in real time using an optimized ANN. In this section, the following points should be considered:
1. The input data to the network is designed to train the neural network on the error, an error derived from the input and output.
2. The designed network has four inputs and three outputs.
3. Simulations were executed using MATLAB software and the neural network toolbox.
4. The performance criterion of the neural network system is the mean squared system error, displayed as Equation (3):

F = (1/N) ∑_{i=1}^{N} (e_i)² = (1/N) ∑_{i=1}^{N} (t_i − a_i)²    (3)
In the above equation, N is the number of tested samples, (t_i) represents the neural network outputs, and (a_i) represents the actual and available outputs. Three activation functions available in the MATLAB software, namely the sigmoid tangent, the sigmoid log function, and the linear (purelin) function, are displayed in Figure 1.

Figure 1. (a) Sigmoid tangent; (b) sigmoid log function; and (c) purelin function charts.
3.1. Control Systems

After expressing the structure of the network and the genetic algorithm, our intention in this section is to use a powerful optimization method to train the neural network and extract the PID controller parameters. First, we explain the structure of the PID controller.

3.1.1. Expression of the Performance of Classical PID Controllers

Control systems are available in both open loop control and closed loop control forms. Both methods of control depend on the system architecture and the control method used for that architecture. Control systems with feedback can be divided into two types. The first of these is single-input and single-output systems, and the second is multi-input and multi-output control systems, often referred to as multi-variable control systems.
Open-Loop Control Systems

An open loop control system is designed to produce an optimal output value without receiving feedback. In this case, the reference signal is given to the actuator and the actuator controls the process. Figure 2 shows the structure of such a control system.

Figure 2. Open-loop control systems.
Closed Loop Control Systems

In closed loop control systems, the difference between the actual output and the desired output is sent to the controller; this difference is often amplified and is known as the error signal. The structure of such a control system is shown in Figure 3.

Figure 3. Closed loop control system.
Multi-Input, Multi-Output Control Systems

By increasing the complexity of control systems as well as increasing the number of control variables, it becomes necessary to use the structure of a multi-variable control system, as shown in Figure 4.

Figure 4. Multi-input and multi-output control systems.
Proportional–Integral–Derivative Control (PID)

The PID control logic is widely used in controlling industrial processes. Due to the simplicity and flexibility of these controllers, control engineers use them extensively in industrial processes and systems. A PID controller has proportional, integral, and derivative terms, and its transfer function can be represented as Equation (4):

K(s) = K_p + K_i/s + K_d·s    (4)

where K_p represents the proportional gain, K_i represents the integral gain, and K_d represents the derivative gain. By adjusting the controller gains, the controller can perform the desired control, as shown in Figure 5.

Figure 5. Closed loop control system with PID controller.

3.2. Neural Network

The structure of the artificial neural network model is inspired by biological neurons. Among the characteristics of biological neurons, we can mention nonlinearity, simplicity of the computational units, and learning capability. In an artificial neuron, each of the input values is influenced by a weight, similar to the synaptic connection in a biological neuron. The processor elements consist of two parts: the first part adds the weighted inputs, and the second part is a non-linear filter called the activation function of the neuron. This function compresses the output values of an artificial neuron between asymptotic values. We use a multi-layer perceptron with one hidden layer; the inputs of the network are derived from the error of the system, the outputs of the network are the controller coefficients, and the performance loss function is the error from the desired value at the next step.

This compression allows the output of the processor elements to remain within a reasonable range. Among the features of artificial neural networks, we can note learning capability, information dispersion, generalizability, parallel processing, and resistance. Applications of this type of network include classification, identification and pattern recognition, signal processing, time series prediction, modeling and control, optimization, financial issues, insurance, security, the stock market, and entertainment. Neural networks have the ability to change online; however, in the genetic algorithm, which is essentially an online algorithm, this is not possible. It is normal that if the controller is online, it will provide better answers.

In the design of neural networks, and especially time-delay neural networks, the designer deals with the problem of selecting the right network for the design. In general, if a network with the least complexity and the minimum number of parameters has the greatest accuracy in identifying the input patterns, it is called an appropriate network.
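A minimal sketch of the forward pass of such a perceptron is shown below. It assumes the four inputs and three outputs described in Section 3, one logsig hidden layer whose size is left as a parameter (five neurons, the value selected later in Section 4, is used as the default), and plain Python lists for the weights; it illustrates the structure only and is not the authors' implementation.

```python
import math
import random

def logsig(x):
    return 1.0 / (1.0 + math.exp(-x))

class MLP:
    """Feed-forward perceptron: n_in inputs -> one sigmoid hidden layer -> n_out outputs."""

    def __init__(self, n_in=4, n_hidden=5, n_out=3, seed=0):
        rng = random.Random(seed)
        # Each row holds the input weights plus a trailing bias term
        self.w1 = [[rng.uniform(-1, 1) for _ in range(n_in + 1)] for _ in range(n_hidden)]
        self.w2 = [[rng.uniform(-1, 1) for _ in range(n_hidden + 1)] for _ in range(n_out)]

    def forward(self, x):
        # Hidden layer with logsig activation (bias is the last weight of each row)
        h = [logsig(sum(w * xi for w, xi in zip(row, x)) + row[-1]) for row in self.w1]
        # Output layer, interpreted here as the controller coefficients Kp, Ki, Kd
        return [logsig(sum(w * hi for w, hi in zip(row, h)) + row[-1]) for row in self.w2]

if __name__ == "__main__":
    net = MLP()
    kp, ki, kd = net.forward([0.10, 0.05, -0.02, 0.01])   # hypothetical error-derived inputs
    print(kp, ki, kd)
```

In the proposed scheme, these weights are not set by back propagation but are searched by the genetic algorithm described in Section 3.3.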
A variety of network training methods exist, namely supervised learning, unsupervised learning, and reinforcement learning; we introduce a number of ways of learning in the following:

(A) Perceptron training

For example, the perceptron learning algorithm includes:

1. Assigning random values to the weights
2. Applying the perceptron to each training example
3. Ensuring that all training examples have been evaluated correctly [Yes: the end of the algorithm, No: go back to step 2]

(B) Back propagation algorithm

Perceptrons are found in artificial neural network designs that are characterized by a multi-layered feed forward structure. The neural synaptic coefficients are configured to estimate the ideal processing through the method of back propagation of the error.

(C) Back-propagation algorithm

To obtain the weights of a multi-layer network, the back propagation method uses the gradient descent method to minimize the squared error between the outputs of the network and the objective function.

The proposed control structure of this designed method is presented as a block diagram in Figure 2 and its control equation is Equation (5):

u(k) = u(k − 1) + k_p [e_y(k) − e_y(k − 1)] + k_i e_y(k) + k_d [e_y(k) − 2e_y(k − 1) + e_y(k − 2)]    (5)

where u is the controller output in the PID controller designed with the neural network. The overall shape of the closed loop structure with the controller and the system transfer function is shown in Figures 6–8.

Figure 6. The structure of the simulinked closed loop with the neural network and the PID controller.
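Equation (5) can be transcribed directly into a small routine, as sketched below; here the gains kp, ki, and kd are passed in as arguments, whereas in the proposed scheme they would be produced by the neural network at every sample.

```python
class IncrementalPID:
    """Velocity-form PID of Equation (5)."""

    def __init__(self):
        self.u_prev = 0.0     # u(k-1)
        self.e_prev1 = 0.0    # e_y(k-1)
        self.e_prev2 = 0.0    # e_y(k-2)

    def step(self, e, kp, ki, kd):
        # u(k) = u(k-1) + kp*(e(k)-e(k-1)) + ki*e(k) + kd*(e(k)-2e(k-1)+e(k-2))
        u = (self.u_prev
             + kp * (e - self.e_prev1)
             + ki * e
             + kd * (e - 2.0 * self.e_prev1 + self.e_prev2))
        self.u_prev = u
        self.e_prev2, self.e_prev1 = self.e_prev1, e
        return u

if __name__ == "__main__":
    pid = IncrementalPID()
    for e in [1.0, 0.8, 0.5, 0.3]:                       # example error sequence
        print(pid.step(e, kp=0.6, ki=0.1, kd=0.05))      # gains here are arbitrary
```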
Figure 7. MLP (Multilayer Perceptron) neural network structure with three layers.

Figure 8. Plant dynamic structure.
3.3. Genetic Algorithms

We introduce the genetic algorithm optimization method and its basic concepts in this section. The genetic algorithm is an evolutionary approach to solving an optimization problem. In the neural network design, we used a genetic algorithm to optimize the weights of the neural network. In our case study, the network is an on-line network that is trained as time passes. The cost function of the GA is based on the system's error from the desired value, and the algorithm tries to update the weights of the neural network in a manner that drives this error to its minimum.

3.3.1. Genetic Algorithm Optimization Method

The genetic algorithm is based on a biological process and solves optimization problems with and without constraints. The genetic algorithm works on a population, and its operation includes the factors of population generation and optimization. In this algorithm, an initial population is generated, and then the individuals that have the lowest values are selected randomly from the first generated population. The genetic algorithm is based on the reproduction process of living creatures: it proceeds through the generation of chromosomes and the chromosomal changes made in the reproduction process, and, with the production and variation of the chromosomes over the defined interval, it tries to find an optimal value.

In this method, the first values of the variables are randomly generated as the initial population. Then, by using mutation and intersection (crossover), parents are taken from the initial population and new individuals are produced as a new population. The new population, together with the initial population, is placed in the cost function, and the values are extracted and arranged according to the minimum and maximum values. Finally, after a proper number of algorithmic repetitions, the optimal value of the function is extracted. In this section, we deal with the definitions and common terminology of the genetic algorithm as follows:

• Individual: In genetics, each individual is referred to as a member of the population who can
participate in the reproduction process and create a new member population that will create an
increased population of these individuals.
• Population and production: In genetics, the population is a collection of all individuals that are
present in the population, are capable of generating a generation and creating a new one and,
consequently, producing a new population.
• Parents: This refers to individuals that intend to participate in the reproduction process and
produce a new individual.

3.3.2. The Mathematical Structure of the Genetic Algorithm


Genetic algorithms can be used to solve optimization problems using the following steps:
Step 1: At this stage, an initial population has been created randomly in proportion to the number
of variables that optimize its function. After creating the initial population, the function value is
calculated for each individual in the first population.
Step 2: In this step, the intersection (crossover) operation is run to create new chromosomes from the current population of the preceding step. First, the intersection rate Pc is chosen (the selection of this rate is optional) to perform the intersection; then appropriate chromosomes are selected in proportion to the selected intersection rate, and the intersection operation is run according to Equation (6):

V1new = V1 + (1 − µ)·V2,  V2new = V2 + (1 − µ)·V1    (6)

In the above equations, V2new and V1new are new chromosomes that have been created by the
intersection operation, V2 and V1 are the parent chromosomes, and µ is a random number.
Step 3: In this step, the mutation action is run. First, a random number between zero and one is created for each gene in the population (number of genes = population size × number of variables); then the mutation rate Pm is set (the selection of this value is optional). The mutation action is then run according to the following equation, where Vknew is a new chromosome produced by the mutation action and Xknew is a new gene altered by the mutation action. Equation (7) is:

Vknew = [X1, . . . , Xknew, . . . , Xn],  Xknew = Xk + ∆(t, XkU − Xk)  OR  Xknew = Xk − ∆(t, Xk − XkL),  ∆(t, y) = y·r·(1 − t/T)^b    (7)

In Equation (7), XkU and XkL are the upper and lower borders of Xk, respectively; r is a random number between zero and one; ∆(t, y) tends towards zero as t increases; T is the maximum number of generations in the algorithm; and t is the current generation number.
Step 4: In this step, the chromosomes of the initial population and the chromosomes of the new population are arranged in ascending order according to their function values. In proportion to the size of the population, they enter the next production cycle. Finally, the minimum of the function is selected and introduced as the minimum.
Step 5: The algorithm is repetition-based, so it must be repeated up to the maximum desired number of repetitions so that the function moves to the optimal point (maximum or minimum) or the stopping conditions of the algorithm are met.
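A minimal sketch of the crossover and mutation operators of Equations (6) and (7) is given below, assuming real-coded chromosomes stored as Python lists. Equation (6) is implemented exactly as printed, with a comment noting the more common arithmetic-crossover variant in case a leading µ factor was lost in typesetting; the exponent b defaults to 1, since its value is not stated in the text.

```python
import random

def crossover(v1, v2, mu=None):
    """Intersection (crossover) of Equation (6) as printed.

    The common arithmetic crossover uses mu*V1 + (1 - mu)*V2 instead; swap in if preferred.
    """
    if mu is None:
        mu = random.random()
    child1 = [a + (1.0 - mu) * b for a, b in zip(v1, v2)]
    child2 = [b + (1.0 - mu) * a for a, b in zip(v1, v2)]
    return child1, child2

def delta(t, y, T, b=1.0):
    # Delta(t, y) = y * r * (1 - t/T)^b from Equation (7); shrinks towards zero as t approaches T
    r = random.random()
    return y * r * (1.0 - t / T) ** b

def mutate(v, k, lower, upper, t, T):
    """Non-uniform mutation of gene k according to Equation (7)."""
    new = list(v)
    if random.random() < 0.5:
        new[k] = v[k] + delta(t, upper[k] - v[k], T)
    else:
        new[k] = v[k] - delta(t, v[k] - lower[k], T)
    return new

if __name__ == "__main__":
    random.seed(1)
    p1, p2 = [0.2, -0.5, 1.0], [0.1, 0.4, -0.3]
    c1, c2 = crossover(p1, p2)
    print(c1, c2, mutate(c1, k=1, lower=[-2.0] * 3, upper=[2.0] * 3, t=10, T=100))
```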

4. Experimental Result

4.1. Dataset Description


In general, the studies and the results obtained with different numbers of hidden layers for the neural network are summarized in the table below. It is worth mentioning that the chosen training algorithm was a genetic algorithm. Furthermore, the hidden layers in all cases were single layers. The size of the database was initially 10 samples. As the network was online, the data changed over time. The criterion for selecting the training and testing data was the lowest error. The number of hidden layers refers to the layers between the input and output layers. The 10 samples are the initial samples for training; new samples are then generated by the system as time passes and are used in training. The genetic algorithm selected the best weights, which avoided unwanted conditions such as overfitting.
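To make the online training procedure concrete, the following sketch shows one plausible outer loop in which a genetic algorithm searches the network weights against the current window of samples. The population size, number of generations, mutation scale, and the toy cost function are illustrative assumptions, not values reported in the paper.

```python
import random

def evolve_weights(cost, n_weights, pop_size=20, generations=30, seed=0):
    """Minimise cost(weights) with a simple real-coded genetic algorithm (illustrative only)."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(n_weights)] for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=cost)                     # lower cost is better
        parents = ranked[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            mu = rng.random()
            child = [x + (1.0 - mu) * y for x, y in zip(a, b)]   # Equation (6) style crossover
            k = rng.randrange(n_weights)
            child[k] += rng.gauss(0.0, 0.1)                      # small mutation, assumed scale
            children.append(child)
        pop = parents + children
    return min(pop, key=cost)

if __name__ == "__main__":
    # Toy cost standing in for the squared tracking error of the controlled system
    target = [0.3, -0.2, 0.5]
    best = evolve_weights(lambda w: sum((wi - ti) ** 2 for wi, ti in zip(w, target)), n_weights=3)
    print(best)
```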

4.2. Optimal Networks Topology


By examining Table 1, it is clear that a single hidden layer led to better results. For this reason, and to continue the design of the neural network, the number of hidden layers was set to one. In the next step, selecting the best training algorithm for designing the neural network is considered.

Table 1. Consideration of the results of the neural network with a different number of hidden layers.

Neural Network Performance Criterion    Number of Hidden Layers
21.56    No hidden layer
1.34     One hidden layer
14.38    Two hidden layers
7.66     Three hidden layers
5.21     Four hidden layers

According to the different training algorithms available in MATLAB software, neural network training was considered with two hidden layers. The activation function used in the hidden layers of all tested networks is the tansig activation function, and the activation function of the output layer is considered to be linear.
According to the results presented in Table 2, the best training method for the existing data was the
training method with the genetic algorithm. Therefore, the training algorithm was finally considered as
the genetic algorithm. In the next step, the choice of the desired neural network parameters, the types
of activation functions in the hidden layers as well as the output layer of the designed neural network
were considered.
According to the results shown in Table 3, the lowest error rate and best performance occurred when the activation function of the hidden layer was of the logsig type and the activation function of the output layer was also of the logsig type. The final stage of selecting the neural network parameters was the selection of the size of the hidden layer. Using the results from the previous section, the hidden layers used logsig activation functions, and the output activation function was also the logsig function. Now, by searching the results for different sizes of hidden layers, we reviewed and selected the best available option. In this step, we simulated the system for hidden layers with sizes from 1 to 10, and we obtained the results shown in Table 4.
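The size selection described above is effectively a small grid search; the sketch below shows that loop, with train_and_score standing in for training the network with the genetic algorithm and returning its performance criterion (it is a hypothetical placeholder, not a function from the paper). The sample scores mirror Table 4.

```python
def sweep_hidden_sizes(train_and_score, sizes=range(1, 11)):
    """Evaluate each candidate hidden layer size and return (best_size, best_score)."""
    results = {n: train_and_score(n) for n in sizes}
    best = min(results, key=results.get)
    return best, results[best]

if __name__ == "__main__":
    # Placeholder scores mimicking Table 4; in practice train_and_score would run the GA training.
    table4 = {1: 0.932, 2: 1.215, 3: 1.368, 4: 1.116, 5: 0.874,
              6: 2.788, 7: 1.546, 8: 1.012, 9: 0.994, 10: 1.016}
    print(sweep_hidden_sizes(table4.get))   # expected: (5, 0.874)
```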
According to the results obtained in Table 4, the best solution for the designed neural network
occurred when the hidden layer size was 5. It is notable that the proposed method for choosing the
best mode for the neural network was merely a local choice, and that it was possible to obtain better
results by changing and selecting the parameters out of the range investigated.

Table 2. Consideration of the results of testing different training algorithms on the desired
neural network.

Training Algorithm    Neural Network Performance Criterion


Trainlm 1.384
Trainbfg 2.214
Trainrp 2.562
Trainscg 4.651
Traincgb 3.563
Traincgf 2.201
Traincgp 1.493
Trainoss 3.452
Traingdx 2.856
Traingd 1.132
Traingdm 1.081
Traingda 2.342
Genetic algorithm 0.596

Table 3. The results of the consideration of different activator functions in the proposed design of the
neural network.

Neural Network Performance Criterion    Output Layer Activation    Hidden Layer Activation
6.371    purelin    purelin
6.526    logsig     purelin
4.411    tansig     purelin
2.321    purelin    logsig
0.931    logsig     logsig
2.504    tansig     logsig
0.956    purelin    tansig
1.732    logsig     tansig
3.043    tansig     tansig

Table 4. The neural network performance results with different hidden layer size.

Hidden Layer Size    Neural Network Performance Criterion
1     0.932
2     1.215
3     1.368
4     1.116
5     0.874
6     2.788
7     1.546
8     1.012
9     0.994
10    1.016

4.3. Optimal PID Outputs


According to the discussion in the previous section, the final designed neural network is as
follows. The number of hidden layers of the designed network was considered to be two layers. The
hidden layer consisted of a logsig activator function and the output layer of the activator function was
also the logsig function. Additionally, the training algorithm was the genetic algorithm. The size of the
layers by using the values obtained in the table was 5, therefore there were 5 neurons for the hidden
layer. The output waveforms along with the output of the PID controller are shown in Figures 9–12.

Figure 9. Output of the controlled system with the PID controller designed with the neural network.

Figure 10. PID controller output was designed with the neural network and genetic training.

Figure 11. Minimization chart of the objective function (sum of squared errors) by the genetic algorithm.

Figure 12. Convergence of the PID controller parameters designed by the neural network.

As we can see in Figure 11, the objective function was the sum of the squares of the error. This objective function had some defects that consequently caused the over mutation in the transient response of the system. To solve this problem, the objective function was chosen as the sum of the squares of the error plus the weighted over mutation function, as given in Equation (8).
Fitness function = ∑_{i=1}^{n} error(i)² + W ∗ M_p    (8)
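A sketch of Equation (8) is given below. It assumes that Mp denotes the peak overshoot of the step response above the reference, which is how the transient-response discussion reads, and that the weight W is a user-chosen constant; neither is defined explicitly in the text.

```python
def fitness(errors, response, reference=1.0, weight=10.0):
    """Equation (8): sum of squared errors plus a weighted penalty term W * Mp.

    Mp is taken here to be the peak of the response above the reference value
    (an assumed interpretation); weight plays the role of W and is arbitrary.
    """
    sse = sum(e ** 2 for e in errors)
    mp = max(0.0, max(response) - reference)
    return sse + weight * mp

if __name__ == "__main__":
    y = [0.0, 0.6, 1.15, 1.05, 1.0]          # example step response with 15% overshoot
    e = [1.0 - yi for yi in y]
    print(fitness(e, y))
```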

The output waveform of the closed-loop system with the genetic algorithm training in comparison with the BP algorithm is shown in Figures 13 and 14.

Figure 13. The output waveform of the controlled system with controllers designed by the neural networks BPNN (Back Propagation Neural Network) and GANN (Genetic Algorithm coupled with Neural Network).

Figure 14. Neural network controller outputs designed by BPNN and GANN.

5. Conclusions

In this paper, we discussed the design of a PID controller with a neural network and the use of genetic algorithms for an air conditioning system. First, the modeling of the air conditioning system was carried out; at this stage, based on the available information, the system equations were obtained. In the second stage, a discussion about the PID controller and how to determine its coefficients was completed. After that, we examined the neural networks and how to determine the PID controller coefficients with this method. Different conditions for the neural network were considered, and under these conditions we examined the designed network. Finally, we determined the best network, which has the lowest operating system error criterion.

Out of all of the training algorithms, the genetic algorithm was selected to most effectively train
the neural network according to the obtained results. The objective function was minimized by
the genetic algorithm, and the sum of the squares of errors was considered to be defective. These
defects caused an over mutation in the transient response system. To solve this problem, the objective
function was defined as the sum of the squares of error plus the over mutation weighting function.
By defining the appropriate cost function, we tried to obtain values that were acceptable with a high
ratio. Finally, by simulating a final controlled system, we examined the system response, control signal,
and compared the results of the trained neural network with the back propagation algorithm. By
comparing the results, it could be seen that the proposed method had advantages over other methods
such as the back-propagation algorithm.

Author Contributions: The authors contributed equally to the reported research and writing of the paper. All
authors have read and approved the final manuscript.
Conflicts of Interest: The authors declare no conflict of interest.

References
1. Winter, R.; Widrow, B. MADALINE RULE II: A training algorithm for neural networks (PDF). In Proceedings
of the IEEE International Conference on Neural Networks, San Diego, CA, USA, 24–27 July 1988; pp. 401–408.
2. Waibel, A.; Hanazawa, T.; Hinton, G.; Shikano, K.; Lang, K.J. Phoneme Recognition Using Time-Delay Neural
Networks. IEEE Trans. Acoust. Speech Signal Process. 1989, 37, 328–339. [CrossRef]
3. Krauth, W.; Mezard, M. Learning algorithms with optimal stability in neural networks. J. Phys. A Math. Gen.
2008, 20, L745–L752. [CrossRef]
4. Guimarães, J.G.; Nóbrega, L.M.; da Costa, J.C. Design of a Hamming neural network based on single-electron
tunneling devices. Microelectron. J. 2006, 37, 510–518. [CrossRef]
5. Mui, K.W.; Wong, L.T. Evaluation of neutral criterion of indoor air quality for air-conditioned offices in
subtropical climates. Build. Serv. Eng. Res. Technol. 2007, 28, 23–33. [CrossRef]
6. ASHRAE. Design for Acceptable Indoor Air Quality; ANSI/ASHRAE Standard 62-2007; American Society of
Heating, Refrigerating and Air-Conditioning Engineers, Inc.: Atlanta, GA, USA, 2007.
7. Levermore, G.J. Building Energy Management Systems: An Application to Heating and Control; E & FN Spon:
London, UK; New York, NY, USA, 1992.
8. Rahmati, V.; Ghorbani, A. A Novel Low Complexity Fast Response Time Fuzzy PID Controller for Antenna
Adjusting Using Two Direct Current Motors. Indian J. Sci. Technol. 2018, 11. [CrossRef]
9. Dounis, A.I.; Bruant, M.; Santamouris, M.J.; Guarracino, G.; Michel, P. Comparison of conventional and
fuzzy control of indoor air quality in buildings. J. Intell. Fuzzy Syst. 1996, 4, 131–140.
10. Tan, W. Unified tuning of PID load frequency controller for power systems via IMC. IEEE Trans. Power Syst.
2010, 25, 341–350. [CrossRef]
11. Howell, M.N.; Best, M.C. On-line PID tuning for engine idle-speed control using continuous action
reinforcement learning automata. Control Eng. Pract. 2000, 8, 147–154. [CrossRef]
12. European Union. The New Directive on the Energy Performance of Buildings, 3rd ed.; European Union: Brussels,
Belgium, 2002.
13. Tian, L.; Lin, Z.; Liu, J.; Yao, T.; Wang, Q. The impact of temperature on mean local air age and thermal
comfort in a stratum ventilated office. Build. Environ. 2011, 46, 501–510. [CrossRef]
14. Hui, P.S.; Wong, L.T.; Mui, K.W. Feasibility study of an express assessment protocol for the indoor air quality
of air conditioned offices. Indoor Built Environ. 2006, 15, 373–378. [CrossRef]
15. The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE). Thermal
Environment Conditions for Human Occupancy; ASHRAE Standard 55: Atlanta, GA, USA, 2004.
16. Wyon, D.P. The effects of indoor air quality on performance and productivity. Indoor Air 2004, 14, 92–101.
[CrossRef] [PubMed]
17. Fanger, P.O. What is IAQ. Indoor Air 2006, 16, 328–334. [CrossRef] [PubMed]
18. Lundgren, K.; Kjellstrom, T. Sustainability Challenges from Climate Change and Air Conditioning Use in
Urban Areas. Sustainability 2013, 5, 3116–3128. [CrossRef]
19. Maerefat, M.; Omidvar, A. Thermal Comfort; Kelid Amoozesh: Tehran, Iran, 2008; pp. 1–10.
Computers 2018, 7, 32 14 of 14

20. Tian, L.; Lin, Z.; Liu, J.; Wang, Q. Experimental investigation of thermal and ventilation performances of
stratum ventilation. Build. Environ. 2011, 46, 1309–1320. [CrossRef]
21. Morovat, N. Analysis of Thermal Comfort, Air Quality and Energy Consumption in a Hybrid System of the
Hydronic Radiant Cooling and Stratum Ventilation. Master’s Thesis, Department of Mechanical Engineering,
Tarbiat Modares University, Tehran, Iran, 2013.
22. Protopapadakis, E.; Schauer, M.; Pierri, E.; Doulamis, A.D.; Stavroulakis, G.E.; Böhrnsen, J.; Langer, S. A
genetically optimized neural classifier applied to numerical pile integrity tests considering concrete piles.
Comput. Struct. 2016, 162, 68–79. [CrossRef]
23. Oreski, S.; Oreski, G. Genetic algorithm-based heuristic for feature selection in credit risk assessment.
Expert Syst. Appl. 2014, 41, 2052–2064. [CrossRef]
24. Ghamisi, P.; Benediktsson, J.A. Feature selection based on hybridization of genetic algorithm and particle
swarm optimization. IEEE Geosci. Remote Sens. Lett. 2015, 12, 309–313. [CrossRef]
25. Wang, S.; Jin, X. Model-based optimal control of VAV air-conditioning system using genetic algorithms.
Build. Environ. 2000, 35, 471–487. [CrossRef]
26. Maerefat, M.; Morovat, N. Analysis of thermal comfort in space equipped with stratum ventilation and
radiant cooling ceiling. Modares Mech. Eng. 2013, 13, 41–54.
27. Yu, B.F.; Hu, Z.B.; Liu, M. Review of research on air conditioning systems and indoor air quality control for
human health. Int. J. Refrig. 2009, 32, 3–20. [CrossRef]
28. Yang, S.; Wu, S.; Yan, Y.Y. Control strategies for indoor environment quality and energy efficiency—A review.
Int. J. Low-Carbon Technol. 2015, 10, 305–312.
29. Mui, K.W.; Wong, L.T.; Hui, P.S.; Law, K.Y. Epistemic evaluation of policy influence on workplace indoor air
quality of Hong Kong in 1996–2005. Build. Serv. Eng. Res. Technol. 2008, 29, 157–164. [CrossRef]
30. Chen, S.; Cowan, C.F.N.; Grant, P.M. Orthogonal Least Squares Learning Algorithm for Radial Basis Function
Networks. IEEE Trans. Neural Netw. 2010, 2, 302–309. [CrossRef] [PubMed]

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).
