
INTERNATIONAL JOURNAL OF CHEMICAL REACTOR ENGINEERING

Volume 7, 2009, Article A16

A Neural Network Approach for Identification and Modeling of Delayed Coking Plant

Gholamreza Zahedi, Razi University, Iran, grzahedi@yahoo.com
Zohre Karami, Razi University, Iran, zohre 519@yahoo.com
Ali Lohi, Ryerson University, Canada, alohi@ryerson.ca

ISSN 1542-6580. Copyright 2009 The Berkeley Electronic Press. All rights reserved.

A Neural Network Approach for Identification and Modeling of Delayed Coking Plant
Gholamreza Zahedi, Ali Lohi, and Zohre Karami

Abstract
In this study, an artificial neural network (ANN) model of a delayed coking unit (DCU) is proposed. Data from various DCUs have been collected. Feed API gravity and Conradson carbon residue (CCR) weight percent are the network inputs; coke, light gas, gasoline, gas oil, and C5+ weight percents are the network outputs. Seventy percent of the data were used for training the ANN. Among the multilayer perceptron (MLP) architectures examined, a network with 31 hidden neurons was found to be the best MLP predictor. A radial basis function (RBF) network was also implemented for identification of the plant, and an RBF network with a spread of 20 was found to be the best estimator of the DCU. The best RBF and best MLP networks were then compared on the remaining 30 percent of unseen data. The RBF network showed the better generalization capability and was therefore used for DCU modeling. KEYWORDS: delayed coking unit, artificial neural network, refinery modeling

Zahedi et al.: Neural Modeling of Delayed Coking Plant

1. Introduction

The delayed coking unit (DCU) was introduced as a refinery process in the early 1930s. In refineries, coke used to form during cracking processes and deposit in the heaters (Maples (1993)). Later, refineries learned to remove coke ahead of other treatment in the so-called delayed coking process. Fig. 1 illustrates a typical DCU.

Fig. 1- Schematic of a typical DCU.


A typical DCU consists of one heater and two coke drums. The feed, preheated to 350 °C by heat exchange with the hot product flows and with the flue gas in the convection section, is fed to the bottom of the fractionator, where it cools the high-temperature reaction vapors. A portion of the heavier hydrocarbons in the reaction vapors condenses for recycling, while the lighter hydrocarbons in the feedstock vaporize. The ratio of the condensed heavier hydrocarbons to the fresh feedstock is defined as the recycle ratio of the coking unit. The preheated feedstock from the bottom of the column, together with the condensed heavier hydrocarbons from the reaction vapors, is pumped into the radiation section of the coking heater and quickly heated to slightly below 500 °C. After being partially vaporized in the heater tubes and passed through a four-way switch valve, the feedstock is introduced into one of the two coke drums. High-pressure water is injected into the heater tubes to minimize coke deposition

Published by The Berkeley Electronic Press, 2009


and to delay the coking reactions in the tubes. The superheated reaction vapors from the top of the coke drums are fed into the base of the fractionating column and separated into products according to their boiling points, such as wet gas, naphtha, light gas oil (diesel), and heavy gas oil. Part of the diesel is further cooled and used as the absorption oil in the gasoline stabilization unit; the rich absorption oil returns to the fractionating column after being preheated by the diesel. The coke from the condensation reactions is cooled and removed after the coke drums are filled to a safe margin from the top. The heater in a DCU should be carefully designed to prevent coke formation inside it. A typical DCU has two coke drums designed to operate on a 48-hour cycle: one drum is on standby while the other operates for 24 hours, and during the next 24 hours coke is removed from the standby drum. The current article proposes a neural-network-based method for DCU modeling. The ANN concept and its training are presented in the following sections; finally, generalization results of various ANN architectures are shown. Based on our literature survey, no work has been done on ANN modeling of the DCU.

2. Artificial Neural Network Concept

Neural networks are models inspired by the structure and functions of biological neurons. A network is composed of units or nodes, which represent the neuron bodies. The units are interconnected by links that act like the axons and dendrites of their biological counterparts. A particular type of interconnected neural net is shown in Fig. 2: an input layer, a central or hidden layer, and an output layer. Each connecting line in a network has an associated weight. Two important abilities of ANNs are supplying fast answers to a problem and generalizing their answers, providing acceptable results for unseen samples.
To do so, they must learn about the problem under study; this learning is commonly named the training process. One well-known NN topology for learning is the multilayer perceptron (MLP), used for classification and estimation problems. An example of a layered network is shown in Fig. 2. In this topology there are L inputs, m hidden units, and n output units. The input of the jth hidden unit is obtained as a linear combination of the L inputs:

x_j = Σ_{i=0}^{L} v_{ij} a_i        (1)

where v_{ij} is the weight going from input i to hidden unit j (the i = 0 term carries the bias). Using an activation function f, the output of hidden neuron j is obtained as:

http://www.bepress.com/ijcre/vol7/A16


b_j = f( Σ_{i=0}^{L} v_{ij} a_i )        (2)

Sigmoid activation functions are widely used in ANN modeling; hyperbolic tangent and other functions can also be applied. For more details on activation functions, see the book by Bulsari (1995). ANN training is an optimization process in which an error function is minimized by adjusting the ANN weights. When an input training pattern is introduced to the ANN, the ANN calculates an output, which is compared with the real output provided by the user. The difference is measured by the mean square error (MSE), E_j, given by equation (3):

E_j = (1/n) Σ_{i=1}^{n} (C_i − C_i^r)^2        (3)

where C_i^r is the ith real target and C_i is the corresponding network output for the jth input pattern. Training is thus a forward path from the input layer to the output layer to calculate the output and obtain the error, followed by a backward path to update the weights. The procedure continues until E_j is minimized. During training, the training error decreases as the ANN weights are adjusted according to the predicted errors; training should stop when the testing error reaches its lowest point along the training process.
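The forward pass of Eqs. (1)-(2) and the error measure of Eq. (3) can be sketched in a few lines of NumPy. This is an illustrative reimplementation, not the authors' MATLAB code; the weight shapes and the bias handling (the i = 0 term) are assumptions made for the sketch.

```python
import numpy as np

def mlp_forward(I, V, W, f=np.tanh):
    """One forward pass through the L-m-n MLP of Fig. 2.

    I : (L,) input vector; V : (L+1, m) input-to-hidden weights
    (row 0 holds the bias, matching the i = 0 term of Eqs. 1-2);
    W : (m+1, n) hidden-to-output weights; f : activation function.
    """
    a = np.concatenate(([1.0], I))        # prepend bias input a_0 = 1
    x = a @ V                             # Eq. (1): x_j = sum_i v_ij * a_i
    b = np.concatenate(([1.0], f(x)))     # Eq. (2): b_j = f(x_j), plus bias
    return b @ W                          # linear output layer: C_k = sum_j w_jk * b_j

def mse(C, C_real):
    """Eq. (3): mean square error between network output and targets."""
    return np.mean((np.asarray(C) - np.asarray(C_real)) ** 2)

# tiny demonstration with random weights (L = 2 inputs, m = 3 hidden, n = 1 output)
rng = np.random.default_rng(0)
V, W = rng.normal(size=(3, 3)), rng.normal(size=(4, 1))
C = mlp_forward(np.array([0.5, -1.2]), V, W)
print(mse(C, [0.0]))
```

During training, a gradient-based routine would adjust V and W to drive this MSE down, exactly the forward/backward loop described above.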
Fig. 2- Structure of an MLP neural network (inputs I_1..I_L, hidden units b_1..b_m, outputs C_1..C_n, with input-to-hidden weights V and hidden-to-output weights W).


Besides the MLP, another class of networks, the radial basis function (RBF) network, has become known in recent years. Like most feed-forward networks, RBF networks have three layers: an input layer, a hidden layer with Gaussian activation functions, and an output layer. The role of the input layer is to distribute the inputs to the hidden-layer nodes; the weights on the links between the input layer and the hidden layer are set to unity and remain constant during training. The hidden layer performs a fixed nonlinear transformation which maps the input space into a new space, similarly to the MLP, and the output layer implements a linear combination on this new space. The network is solved by initially clustering the monitored process data and calculating the predictive error between the experimental and network outputs; the process continues until all examples in the training set have been used for testing exactly once. In the last section of this paper, the performance of both MLP and RBF networks in predicting DCU performance is investigated. Some studies have been done on neural network modeling of refinery units (Zahedi (2006), Zahedi et al. (2005), Bollas et al. (2003), Michalopoulos et al. (2001), Al-Enezi et al. (2000)); to the best of our knowledge, however, there has been no attempt at ANN modeling of the DCU. The model responds very fast, which makes it interesting for optimization and online control of the plant. The generality of the trained networks, which are robust to different feedstocks, is another feature of the current study.
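The RBF construction described above, a fixed Gaussian hidden layer with unit input weights and a linear output layer, can be sketched from scratch. This is a minimal illustration, not the toolbox routine the paper used; placing one centre on every training point and solving the output weights by least squares is one common variant assumed here, and the data are hypothetical.

```python
import numpy as np

def rbf_design(X, centers, spread):
    """Gaussian hidden-layer outputs. Each hidden node responds to the
    squared distance from its centre; the spread sets how wide each
    Gaussian bump is (the parameter the paper later tunes to 20)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * spread ** 2))

def rbf_fit(X, Y, spread):
    """Fix the hidden layer (centres = training points, unit input
    weights), then solve the linear output layer by least squares."""
    H = rbf_design(X, X, spread)
    W, *_ = np.linalg.lstsq(H, Y, rcond=None)
    return lambda Xq: rbf_design(Xq, X, spread) @ W

# hypothetical 2-input (API, CCR) -> 1-output (coke wt%) toy fit
X = np.array([[10.0, 5.0], [15.0, 12.0], [20.0, 20.0]])
Y = np.array([[8.0], [18.0], [30.0]])
model = rbf_fit(X, Y, spread=20.0)
print(model(X))   # reproduces the training targets
```

Only the output-layer weights are learned, which is why RBF training reduces to a single linear solve once the centres and spread are fixed.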
3. Data set

Feed API gravity and CCR weight percent are the ANN input data. The network outputs are coke, gas, gasoline, gas oil, and C5+ weight percents. Training data were collected from Maples (1993). Before implementing the ANN routines, all data were pre-screened and unreasonable data were omitted. 70% of the collected data were selected for ANN training; the remaining 30% were used to test the generalization ability of the trained networks. Table 1 presents the network parameters and their ranges.
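The 70/30 split described above might be coded as below. The data matrix is randomly generated stand-in data, not the Maples (1993) yields, and the column layout (two inputs followed by five outputs) is an assumption for illustration.

```python
import numpy as np

# hypothetical pre-screened data matrix: columns are API and CCR (inputs)
# followed by the five product weight-percent outputs
data = np.random.default_rng(1).uniform(size=(50, 7))

rng = np.random.default_rng(2)
idx = rng.permutation(len(data))            # shuffle before splitting
n_train = int(0.7 * len(data))              # 70% of samples for training
train, test = data[idx[:n_train]], data[idx[n_train:]]
X_train, Y_train = train[:, :2], train[:, 2:]
X_test,  Y_test  = test[:, :2],  test[:, 2:]
print(len(train), len(test))                # 35 15
```

Shuffling before splitting keeps both subsets representative of the full API/CCR range in Table 1.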


Table 1 - DCU input-output parameter types and ranges.

Variable                      Range
API                           1.4-21.5
CCR (weight percent)          2.84-25.5
COKE (weight percent)         4.36-39.6
GAS (weight percent)          4.04-16.7
GASOLINE (weight percent)     10.35-22.28
GASOIL (weight percent)       28.3-69.02
C5+ (weight percent)          43.7-88.1
4. Results of ANN modeling of DCU

In this section, various MLP architectures with different numbers of hidden neurons were first trained to estimate the DCU products. Among these architectures, an MLP with 31 hidden neurons trained with the Levenberg-Marquardt algorithm (Matlab (2006)) was found to be the best network. Figures 3-7 illustrate the performance of this network in estimating the 30% of unseen data; the network estimation for each DCU product shows good overlap with the plant data. To evaluate the order of the error accurately, the relative errors between the network estimations and the plant data are depicted in Figures 8-12. The order of the relative error is between 10^-3 and 10^-5, which is an excellent estimation.
Fig.3- Comparison between best MLP network estimation and plant data for coke weight percent.


Fig.4- Comparison between best MLP network estimation and plant data for gas yield.

Fig.5- Comparison between best MLP network estimation and plant data for gasoline weight percent.


Fig.6- Comparison between best MLP network estimation and plant data for gas oil yield.
Fig.7- Comparison between best MLP network estimation and plant data for C5+ weight percent.


Fig.8- Relative percent of error for coke weight percent based on best MLP predictions.
Fig.9- Relative percent of error for gas weight percent based on best MLP predictions.


Fig.10- Relative percent of error for gasoline weight percent based on best MLP predictions.
Fig.11- Relative percent of error for gas oil weight percent based on best MLP predictions.


Fig.12- Relative percent of error for C5+ weight percent based on best MLP predictions.
In the second phase of the study, an RBF network was considered for identification of the DCU. An RBF network with a spread of 20 was observed to be the best predictor of the plant. The ability of the obtained RBF network to predict the 30% of unseen data is demonstrated in Figures 13-17. Excellent agreement between the plant data and the network predictions was observed, with relative errors between 10^-9 and 10^-10 (Figures 18-22). Based on these results, the RBF network is several orders of magnitude more accurate than the MLP network.
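The choice of spread can likewise be framed as a hold-out scan: fit an RBF for each candidate spread and keep the one with the lowest test error. The data, candidate values, and exact-interpolation RBF variant below are illustrative assumptions, not the paper's actual procedure.

```python
import numpy as np

def gaussian_design(Xq, centers, spread):
    """Gaussian hidden-layer responses of query points to the centres."""
    d2 = ((Xq[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * spread ** 2))

def fit_predict(Xtr, Ytr, Xte, spread):
    """Exact-interpolation RBF: centres at the training points, output
    weights from a single least-squares solve."""
    W, *_ = np.linalg.lstsq(gaussian_design(Xtr, Xtr, spread), Ytr, rcond=None)
    return gaussian_design(Xte, Xtr, spread) @ W

rng = np.random.default_rng(4)
Xtr = rng.uniform(0, 25, size=(20, 2))        # stand-in (API, CCR) points
Ytr = (Xtr ** 2).sum(1, keepdims=True) / 50   # synthetic smooth target
Xte = rng.uniform(0, 25, size=(8, 2))
Yte = (Xte ** 2).sum(1, keepdims=True) / 50

# hold-out scan over candidate spreads, analogous to how 20 was chosen
errs = {s: np.mean((fit_predict(Xtr, Ytr, Xte, s) - Yte) ** 2)
        for s in (1, 5, 10, 20, 40)}
print(min(errs, key=errs.get))
```

Too small a spread makes each Gaussian too local to generalize between training points; too large a spread makes the design matrix nearly singular, so a hold-out scan balances the two.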


Fig.13- Comparison between best RBF network estimation and plant data for coke weight percent.
Fig.14- Comparison between best RBF network estimation and plant data for gas weight percent.


Fig.15- Comparison between best RBF network estimation and plant data for gasoline weight percent.
Fig.16- Comparison between best RBF network estimation and plant data for gas oil weight percent.


Fig.17- Comparison between best RBF network estimation and plant data for C5+ weight percent.
Fig.18- Relative percent of error for coke weight percent based on best RBF network.


Fig.19- Relative percent of error for gas weight percent based on best RBF network.
Fig.20- Relative percent of error for gasoline weight percent based on best RBF network.


Fig.21- Relative percent of error for gas oil weight percent based on best RBF network.
Fig.22- Relative percent of error for C5+ weight percent based on best RBF network.


For a better comparison of the RBF and MLP networks, the MSE between the best MLP predictions and the plant data, and between the best RBF predictions and the plant data, were calculated (Table 2). Based on the tabulated results, the MSE of the RBF network is roughly ten orders of magnitude smaller than that of the MLP network, making it the better model for DCU analysis. The model responds in a fraction of a second on a Pentium computer, which makes it interesting for optimization and online control strategies. The model is general, responding regardless of feed variation, and skips process details.

Table 2. Comparison of MSE for the best MLP and best RBF predictors.

         MLP error         RBF error
MSE      1.1196 × 10^-11   7.8256 × 10^-22

5. Conclusions and remarks

The study demonstrates the ANN's capability in modeling a DCU. Two types of ANN, the best MLP and the best RBF networks, were found by examining various training algorithms. The estimation ability of the RBF network was found to be orders of magnitude greater than that of the best MLP predictor. The paper presents a new method for modeling the DCU with high precision and without process details, a precision that traditional software such as PRO/II, HYSYS, and ASPEN cannot provide. Quick response time is another advantage of the current model, which can be used in online control, plant scheduling, and optimization.
NOMENCLATURE

a       output of neuron, input layer
b       output of neuron, hidden layer
C_i     output of neuron i, output layer
C_i^r   corresponding target of network output i
E       error
f       neuron activation function
I       input data of network
n       number of outputs in ANN
N_c     number of components
v       weight, hidden layer of network
w_ij    weight, output layer of network
x       input variable of activation function
y       mole fraction of specified component


Greek Letters

v_ij    weight on the hidden layer of an MLP

Subscripts

i       component number

References

Al-Enezi, G., and A. Elkamel, "Predicting the effect of feedstock on product yields and properties of the FCC process," Petroleum Science and Technology, 18(3&4), pp. 407-428, 2000.

Bollas, G.M., S. Papadokonstadakis, J. Michalopoulos, G. Arampatzis, A.A. Lappas, I.A. Vasalos, and A. Lygeros, "Using hybrid neural networks in scaling up an FCC model from a pilot plant to an industrial unit," Chemical Engineering and Processing, 42, pp. 697-713, 2003.

Bulsari, A.B., Neural Networks for Chemical Engineers, Elsevier Science Press, Amsterdam, 1995.

Maples, R.E., Petroleum Refinery Process Economics, PennWell Publishing Company, Tulsa, Oklahoma, 1993.

Matlab neural network toolbox, www.mathwork.com, 2006.

Michalopoulos, J., S. Papadokenstadakis, G. Arampatzis, and A. Lygeros, "Modeling of an industrial fluid catalytic cracking unit using neural networks," Trans. IChemE, 2001.

Zahedi, G., "Identification of typical refinery hydrocracker unit with artificial neural network," Universiti Teknologi MARA, Pulau Pinang, Malaysia, Dec. 2006.

Zahedi, G., H. Fgaier, A. Jahanmiri, and G. Al-Enezi, "Identification and evaluation of hydrotreater plant," Pet. Sci. and Tech., 24, pp. 1447-1456, 2005.

