To cite this article: A. Şencan, S. Gök & E. Dikmen (2011) Prediction of Liquid and Vapor Enthalpies of Ammonia-water Mixture, Energy Sources, Part A: Recovery, Utilization, and Environmental Effects, 33:15, 1463-1473, DOI: 10.1080/15567030903397891
To link to this article: http://dx.doi.org/10.1080/15567030903397891
1. Introduction
The ammonia-water mixture can be used as a working fluid in absorption chillers. In absorption chillers operating with an ammonia-water solution, water is the absorbent and ammonia is the refrigerant. Since the 1970s, such systems have been under consideration for residential and commercial heating and cooling (Herold et al., 1996; ASHRAE, 1997; Darwish et al., 2008).
Thermodynamic properties of fluid couples are among the most important parameters affecting the performance of absorption systems. Engineering calculation and simulation of absorption systems require simple and efficient mathematical formulations for determining the thermodynamic properties of fluid couples. Only limited data on the vapor and liquid enthalpies of the ammonia-water mixture are available in the literature (Yamankaradeniz et al., 2002). In this study, artificial neural networks (ANNs) were used to determine the liquid and vapor enthalpies of this mixture. With the new formulations obtained from an ANN, the vapor and liquid enthalpies of the ammonia-water mixture can be estimated easily. The proposed method offers more flexibility and therefore considerably simplifies the thermodynamic simulation of absorption systems.
2. ANNs
An ANN is an information-processing paradigm inspired by the way biological nervous systems, such as the brain, process information. The key element of this paradigm is the novel structure of the information-processing system: a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. ANNs, like people, learn by example. An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process. Learning in biological systems involves adjustments to the synaptic connections between the neurons (Haykin, 1999; Fu, 1994; Tsoukalas and Uhrig, 1997; Lin and Lee, 1996). The neural network process is described in Figure 1. Neural networks have been used to estimate thermodynamic properties and to analyze energy systems (Kalogirou, 2000a, b; Chouai et al., 2002; Pacheco-Vega et al., 2001; Bechtler et al., 2001; Sözen et al., 2004a, b; Sözen and Akçayol, 2004; Lazzús, 2009; Eslamloueyan and Khademi, 2009).
An ANN with a back-propagation algorithm learns by changing the connection weights, and these changes are stored as knowledge. Statistical measures such as the root-mean-squared error (RMS), the coefficient of multiple determination (R²), and the coefficient of variation (cov) may be used to compare predicted and actual values. Their formulations are given in Bechtler et al. (2001).
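As a quick illustration (not the authors' code), these measures can be sketched in Python/NumPy. The cov is written here as a fraction of the mean actual value, one common convention; the exact formulations used in the paper are those of Bechtler et al. (2001):

```python
import numpy as np

def rms(actual, predicted):
    # Root-mean-squared error between actual and predicted values
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return float(np.sqrt(np.mean((actual - predicted) ** 2)))

def r_squared(actual, predicted):
    # Coefficient of multiple determination: 1 - SSE/SST
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    sse = np.sum((actual - predicted) ** 2)
    sst = np.sum((actual - actual.mean()) ** 2)
    return float(1.0 - sse / sst)

def cov(actual, predicted):
    # Coefficient of variation: RMS error relative to the mean actual value
    return rms(actual, predicted) / float(np.mean(actual))

# Illustrative values taken from the first rows of Table 6
a = [155.4, 87.3, 136.6]
p = [155.32, 86.95, 136.99]
print(rms(a, p), r_squared(a, p), cov(a, p))
```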
Several algorithms are available for training a neural network. The most popular of them is the back-propagation algorithm, which has different variants. Standard back-propagation is a gradient-descent algorithm. It is very difficult to know in advance which training algorithm will be the fastest for a given problem, and the best one is usually chosen by trial and error. In this study, Levenberg-Marquardt (LM) back-propagation training in a feed-forward, single-hidden-layer network was applied repeatedly until satisfactory training was achieved. Trainlm is a network training function that updates weight and bias values according to Levenberg-Marquardt optimization. Inputs and outputs are normalized. The tan-sigmoid activation function has been used for both the hidden layer and the output layer. The function used is given by:
F(z) = 2/(1 + e^(-2z)) - 1,  (1)
where z is the weighted sum of the inputs. The computer program was implemented in the MATLAB environment using the Neural Network Toolbox. In the training, a variable number of neurons (from 3 to 12) was used in the hidden layer to define the output accurately. The available dataset for the liquid and vapor enthalpies of the NH3-water mixture included 1,048 data patterns, collected from Yamankaradeniz et al. (2002). Of these, 838 data patterns were used for training the network, and the remaining 210 patterns were randomly selected and used as a test dataset. Figure 2 shows the architecture of the ANN used for predicting the liquid and vapor enthalpies of the NH3-water mixture. In this figure, the temperature, pressure, and liquid and vapor concentrations are the input data, and the liquid enthalpy of the mixture is the actual output. The 4-9-2 configuration appeared to be the optimal topology for liquid enthalpy, and the 4-10-2 configuration appeared to be the optimal topology for vapor enthalpy.
Training results based on the 4-9-2 configuration for liquid enthalpy are shown in Figure 3. Training results based on the 4-10-2 configuration for vapor enthalpy are shown in Figure 4.
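The tan-sigmoid activation of Eq. (1), which MATLAB's Neural Network Toolbox calls tansig, can be sketched as follows; note that 2/(1 + e^(-2z)) - 1 is algebraically identical to tanh(z):

```python
import numpy as np

def tansig(z):
    # Tan-sigmoid activation of Eq. (1): maps any real z into (-1, 1)
    return 2.0 / (1.0 + np.exp(-2.0 * z)) - 1.0

z = np.linspace(-3.0, 3.0, 13)
# Algebraically, 2/(1 + e^(-2z)) - 1 equals tanh(z)
print(np.allclose(tansig(z), np.tanh(z)))  # prints True
```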
Figure 2. ANN topology used for liquid enthalpy and vapor enthalpy prediction.
In order to achieve the optimal result, different numbers of hidden neurons were used. Statistical values (RMS, R², and cov) are given in Tables 1 and 2 for liquid and vapor enthalpy, for the LM algorithm with 3-12 neurons in the hidden layer.
The data presented in Table 1 show that, for the liquid enthalpy of the NH3-water mixture, the LM algorithm with nine neurons in the hidden layer (LM-9) gives the optimal topology. Likewise, the data presented in Table 2 show that, for the vapor enthalpy of the NH3-water mixture, the LM algorithm with ten neurons in the hidden layer (LM-10) gives the optimal topology.
Table 1
Statistical values of liquid enthalpy for NH3-water mixture

Algorithm neurons    RMS       cov      R²
LM-3                 40.218    0.580    0.968
LM-4                 37.108    0.535    0.972
LM-5                 36.804    0.531    0.973
LM-6                 35.517    0.512    0.975
LM-7                 35.538    0.513    0.975
LM-8                 35.480    0.512    0.975
LM-9                 35.314    0.509    0.975
LM-10                37.020    0.534    0.972
LM-11                35.901    0.518    0.974
LM-12                35.890    0.518    0.974
The regression curve of the output variable (liquid enthalpy) for the test data set is
shown in Figure 5. The correlation coefficient obtained in this case is 0.975, which is
very satisfactory.
Figure 6 shows the regression curve of the output variable (vapor enthalpy) for the
test data set. The correlation coefficient obtained in this case is 0.887, which is very
satisfactory.
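The correlation coefficient quoted above is presumably the Pearson correlation between actual and predicted outputs over the test set; such a figure can be reproduced as follows (the short arrays here are illustrative values from Table 6, not the paper's full test set):

```python
import numpy as np

# Illustrative actual vs. ANN-predicted liquid enthalpies (first rows of Table 6)
actual = np.array([155.4, 87.3, 136.6, 125.0, 138.4])
predicted = np.array([155.32, 86.95, 136.99, 124.70, 138.32])

# Pearson correlation between actual and predicted values
r = np.corrcoef(actual, predicted)[0, 1]
print(round(r, 4))
```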
Table 2
Statistical values of vapor enthalpy for NH3-water mixture

Algorithm neurons    RMS       cov      R²
LM-3                 88.846    0.061    0.870
LM-4                 86.130    0.059    0.878
LM-5                 84.435    0.058    0.883
LM-6                 83.958    0.057    0.884
LM-7                 83.795    0.057    0.884
LM-8                 84.421    0.058    0.883
LM-9                 84.057    0.058    0.884
LM-10                82.935    0.057    0.887
LM-11                89.085    0.061    0.870
LM-12                83.033    0.057    0.887
Figure 5. Comparison of actual and ANN-predicted values of NH3-water mixture liquid enthalpy for the test data set.
Ei = Σ_{n=1..4} In wni + bn,  (2)

Fi = 2/(1 + e^(-2Ei)) - 1.  (3)
Figure 6. Comparison of actual and ANN-predicted values of NH3-water mixture vapor enthalpy for the test data set.

In the above equations, Ei is the sum of the products of the input parameters (In) with their weights (wni) at location n, and the last constant value (bn) represents the bias term. The subscript i represents the number of the hidden neuron. The four input parameters are:

I1 = Pressure (P),  (4)
I2 = Temperature (T),  (5)
I3 = Liquid concentration (Xf),  (6)
I4 = Vapor concentration (Xv).  (7)
In the ANN, nine hidden neurons are used for liquid enthalpy; thus, nine pairs of
equations, i.e., E1 to E9 and F1 to F9 are required, which represent the summation and
activation functions of each neuron of the hidden layer, respectively. The coefficients of
Eq. (2) are given in Table 3.
In the ANN, ten hidden neurons are used for vapor enthalpy; thus, ten pairs of
equations, i.e., E1 to E10 and F1 to F10 are required, which represent the summation and
activation functions of each neuron of the hidden layer, respectively. The coefficients of
Eq. (2) are given in Table 4.
Additionally, the actual input data of the various parameters need to be normalized. For this purpose, the actual value of each parameter is divided by the coefficient shown in Table 5.
Finally, the liquid enthalpy (hf) of the NH3-water mixture, depending on the temperature, pressure, and concentration values, can be computed from:

E10 = F1(44.4563) + F2(0.095365) + F3(-0.76719) + F4(-0.0055292) + F5(-0.64566) + F6(0.04226) + F7(-40.0133) + F8(0.23058) + F9(-0.75846) + 5.4231,  (8)

hf = [2/(1 + e^(-2E10)) - 1] × 991.  (9)
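The full evaluation chain of Eqs. (2), (3), (8), and (9) can be sketched as below. This is a sketch of the procedure only: in a real calculation, the weight matrix W, the biases b, the output-layer coefficients, and the normalization divisors must be taken from Tables 3 and 5; the small arrays in the usage example are hypothetical placeholders, not the paper's fitted weights.

```python
import numpy as np

def tansig(z):
    # Tan-sigmoid activation, Eqs. (1)/(3)
    return 2.0 / (1.0 + np.exp(-2.0 * z)) - 1.0

def liquid_enthalpy(P, T, Xf, Xv, W, b, w_out, b_out, norm_in, scale_out):
    """Evaluate the ANN correlation for hf.

    W        : (hidden, 4) hidden-layer weights wni (Table 3)
    b        : (hidden,)   hidden-layer biases bn (Table 3)
    w_out    : (hidden,)   output-layer weights of Eq. (8)
    b_out    : scalar      output-layer bias of Eq. (8)
    norm_in  : (4,)        input normalization divisors (Table 5)
    scale_out: scalar      output scale, 991 for hf (Table 5)
    """
    I = np.array([P, T, Xf, Xv]) / norm_in   # normalize the inputs
    E = W @ I + b                            # Eq. (2): one Ei per hidden neuron
    F = tansig(E)                            # Eq. (3)
    E_out = w_out @ F + b_out                # Eq. (8)
    return tansig(E_out) * scale_out         # Eq. (9): de-normalized output

# Hypothetical 2-neuron illustration (NOT the fitted weights of Table 3):
W = np.array([[0.1, 0.2, -0.3, 0.4],
              [-0.5, 0.6, 0.7, -0.8]])
b = np.array([0.1, -0.2])
hf = liquid_enthalpy(600.0, 84.2, 28.0, 92.44,
                     W, b, np.array([1.0, -1.0]), 0.05,
                     np.array([3000.0, 232.0, 102.0, 102.0]), 991.0)
print(hf)
```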
Table 3
Weight coefficients and bias values used for the determination of liquid enthalpy

Neuron position (wni)   I1 (P)    I2 (T)    I3 (Xf)   I4 (Xv)   bn
1                       0.113     0.206     1.573     0.424     2.432
2                       10.791    8.848     3.333     20.526    0.361
3                       0.423     0.015     1.125     0.523     0.526
4                       49.229    4.814     2.330     16.648    26.587
5                       0.016     0.017     2.606     0.476     0.231
6                       16.804    0.028     3.527     14.241    1.401
7                       16.519    0.920     0.577     0.035     2.690
8                       4.510     0.467     0.370     0.912     1.121
9                       16.825    2.030     15.251    5.760     24.384

Note: In weights, n represents the input number and i represents the hidden neuron number.
Table 4
Weight coefficients and bias values used for the determination of vapor enthalpy

Neuron position (wni)   I1 (P)    I2 (T)    I3 (Xf)   I4 (Xv)   bn
1                       3.831     2.016     2.918     7.828     15.779
2                       5.275     0.88      16.090    17.312    1.196
3                       3.446     1.270     26.761    24.422    7.158
4                       0.105     0.803     1.703     2.580     1.084
5                       18.061    19.350    14.800    14.410    29.720
6                       0.093     0.034     0.366     0.225     0.973
7                       5.229     0.592     0.209     0.568     1.743
8                       4.239     8.328     26.906    25.099    5.319
9                       0.252     6.855     3.450     15.297    1.594
10                      0.117     0.844     1.667     2.656     1.198

Note: In weights, n represents the input number and i represents the hidden neuron number.
The coefficient shown in Eq. (9) is used to convert the normalized output to the actual output (hf) of the NH3-water mixture.
Similarly, the vapor enthalpy (hv) of the NH3-water mixture, depending on the temperature, pressure, and concentration values, can be computed from:

E11 = F1(2.3056) + F2(0.76346) + F3(0.095678) + F4(0.99316) + F5(-10.1107) + F6(-5.7407) + F7(-8.3) + F8(-0.67651) + F9(15.2952) + F10(1.1989) + 5.0796,  (10)

hv = [2/(1 + e^(-2E11)) - 1] × 2802.  (11)
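Given the ten hidden-layer activations F1 to F10 (computed from Eqs. (2)-(3) with the Table 4 weights), Eqs. (10) and (11) reduce to a weighted sum followed by the same tan-sigmoid and a rescale by 2,802. A minimal sketch, in which the F values supplied in the usage line are hypothetical:

```python
import numpy as np

# Output-layer weights and bias of Eq. (10)
w_out = np.array([2.3056, 0.76346, 0.095678, 0.99316, -10.1107,
                  -5.7407, -8.3, -0.67651, 15.2952, 1.1989])
b_out = 5.0796

def vapor_enthalpy(F):
    # F: array of ten hidden-neuron activations F1..F10, each in (-1, 1)
    E11 = w_out @ F + b_out                                    # Eq. (10)
    return (2.0 / (1.0 + np.exp(-2.0 * E11)) - 1.0) * 2802.0   # Eq. (11)

F = np.full(10, 0.1)  # hypothetical activations, not values from the paper
print(vapor_enthalpy(F))
```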
Table 5
Normalization coefficients for the input and output parameters

                                 Coefficient
Input parameter
Pressure (P)                     3,000
Temperature (T)                  232
Liquid concentration (Xf)        102
Vapor concentration (Xv)         102
Output parameter
Liquid enthalpy (hf)             991
Vapor enthalpy (hv)              2,802
Table 6
Comparison between actual liquid enthalpy and liquid enthalpy obtained with equations derived from ANN for NH3-water mixture

P, kPa   T, °C    Xf, %   Xv, %    Actual hf, kJ/kg   Predicted hf, kJ/kg   Error    Percentage difference, %
60       55.4     10      74.9     155.4              155.32                0.08     0.050
140      56       20      90.47    87.3               86.95                 0.35     0.406
480      78.2     28      93.77    136.6              136.99                0.39     0.284
520      76.8     30      94.4     125                124.70                0.30     0.241
560      79.4     30      94.14    138.4              138.32                0.08     0.057
600      84.2     28      92.44    177                178.33                1.33     0.752
640      97.5     24      88.41    247.2              247.22                0.02     0.01
680      99.7     24      88.05    261.1              259.89                1.21     0.463
720      80.7     34      95.33    131                130.97                0.03     0.024
800      92.7     30      92.59    202.7              202.26                0.44     0.216
840      172.4    0       0        729.3              729.68                0.38     0.051
920      107.1    26      88.42    284.9              286.18                1.28     0.449
1,000    89.2     36      95.08    164.7              165.43                0.73     0.443
1,200    70       50      98.36    66.3               66.41                 0.11     0.161
1,400    68.3     55      98.74    68.7               68.60                 0.10     0.148
1,600    112.7    34      92.04    281.8              281.58                0.22     0.076
1,800    47.5     94      99.95    182.8              183.556               0.756    0.413
2,000    59.4     80      99.79    142                143.180               1.180    0.831
2,400    58       96      99.99    250.2              251.224               1.024    0.409
The coefficient shown in Eq. (11) is used to convert the normalized output to the actual output (hv) of the NH3-water mixture.
In Table 6, a comparison is presented between the actual liquid enthalpy and the liquid enthalpy predicted with the equations derived from the ANN for the NH3-water mixture. In Table 7, the same comparison is presented for the vapor enthalpy. As can be seen, the error in both cases is very small.
5. Conclusions
A new methodology for forecasting NH3-water mixture enthalpies is presented. This methodology, based on an ANN, is successfully applied to determine NH3-water mixture enthalpies. In order to calculate these enthalpies, mathematical formulations were derived from the ANN model, based on the summation and activation functions used in the model and the weights of the neurons. This approach is valid for estimating liquid and vapor enthalpies of the NH3-water mixture at any temperature, pressure, and concentration. The formulation can especially help manufacturers and engineers with the thermodynamic simulation of absorption systems.
Table 7
Comparison between actual vapor enthalpy and vapor enthalpy obtained with equations derived from ANN for NH3-water mixture

P, kPa   T, °C    Xf, %   Xv, %    Actual hv, kJ/kg   Predicted hv, kJ/kg   Error    Percentage difference, %
140      56       20      90.47    1,527.6            1,531.10              3.50     0.229
320      17.7     55      99.87    1,318.2            1,326.80              8.60     0.652
480      78.2     28      93.77    1,535.9            1,544.20              8.30     0.540
520      153.3    0       0        2,747.4            2,752.90              5.50     0.200
560      79.4     30      94.14    1,523.5            1,532.00              8.50     0.557
600      84.2     28      92.44    1,558.2            1,572.50              14.30    0.917
680      99.7     24      88.05    1,639.8            1,646.30              6.50     0.396
720      106.1    22      85.23    1,686.3            1,693.90              7.60     0.450
760      82.7     34      95.13    1,508.8            1,516.40              7.60     0.503
800      92.7     30      92.59    1,563.3            1,576.60              13.30    0.850
880      20.8     100     100      1,280.9            1,292.00              11.10    0.866
920      111.7    24      86.16    1,683              1,685.30              2.30     0.136
1,000    89.2     36      95.08    1,520.1            1,520.00              0.10     0.006
1,100    85.2     40      96.28    1,491.6            1,491.50              0.10     0.006
1,400    120.8    28      88.1     1,669.2            1,666.60              2.60     0.155
1,600    50.3     80      99.82    1,324.5            1,325                 0.5      0.037
1,800    55       80      99.81    1,328              1,325.7               2.3      0.173
2,000    59.4     80      99.79    1,331.6            1,328.8               2.8      0.210
2,400    58       96      99.99    1,300.3            1,306.7               6.4      0.492
References
ASHRAE. 1997. ASHRAE Handbook: Fundamentals. Atlanta: American Society of Heating, Refrigerating and Air-Conditioning Engineers, Inc.
Bechtler, H., Browne, M. W., Bansal, P. K., and Kecman, V. 2001. New approach to dynamic modelling of vapour-compression liquid chillers: Artificial neural networks. Appl. Therm. Eng. 21:941-953.
Chouai, A., Laugier, S., and Richon, D. 2002. Modeling of thermodynamic properties using neural networks: Application to refrigerants. Fluid Phase Equilib. 199:53-62.
Darwish, N. A., Al-Hashimi, S. H., and Al-Mansoori, A. S. 2008. Performance analysis and evaluation of a commercial absorption-refrigeration water-ammonia (ARWA) system. Int. J. Ref. 31:1214-1223.
Eslamloueyan, R., and Khademi, M. H. 2009. Estimation of thermal conductivity of pure gases by using artificial neural networks. Intl. J. Therm. Sci. 48:1094-1101.
Fu, L. M. 1994. Neural Networks in Computer Intelligence. Singapore: McGraw-Hill International Editions.
Haykin, S. S. 1999. Neural Networks: A Comprehensive Foundation. Upper Saddle River, NJ: Prentice Hall.
Herold, K. E., Radermacher, R., and Klein, S. A. 1996. Absorption Chillers and Heat Pumps. New York: CRC Press.