
Jurnal Teknik Elektro, No. 2, September 2007: 82 - 87

Parameter Estimation using Least Square Method for MIMO Takagi-Sugeno Neuro-Fuzzy in Time Series Forecasting

Institut für Automatisierungstechnik, Universität Bremen

Email: indi@peter.petra.ac.id, natarajan@uni-bremen.de

ABSTRACT

This paper describes the LSE method for improving a Takagi-Sugeno neuro-fuzzy model for a multi-input, multi-output system using a set of data (the Mackey-Glass chaotic time series). The performance of the generated model is verified using a certain set of validation/test data. The LSE method is used to compute the consequent parameters of the Takagi-Sugeno neuro-fuzzy model, while the mean and variance of the Gaussian membership functions are initially set to certain values and are then updated using the backpropagation algorithm. The simulation using Matlab shows that the developed neuro-fuzzy model is capable of forecasting the future values of the chaotic time series and adaptively reduces the amount of error during its training and validation.

Keywords: forecasting, time series, Gaussian membership function, neuro-fuzzy, least squares.

INTRODUCTION

When considering fuzzy logic for analyzing and solving engineering problems that require computational intelligence, one practically chooses between the Mamdani model and the Takagi-Sugeno model. Of course, other models could be used as well, but the Takagi-Sugeno model is preferred for data-based (or numerically data-driven) fuzzy modeling [1, 2, 3] as well as for forecasting time series data, where in many cases the system can be seen as a set of locally linear models. Another advantage is that the inference formula of the Takagi-Sugeno model is only a two-step procedure based on a weighted-average defuzzifier, whereas the Mamdani type of fuzzy model basically consists of four steps [4, 5].

In this paper, we implement neuro-fuzzy modeling, which combines the ability to reason on a linguistic level and to handle uncertain or imprecise data with the learning and adaptation abilities of an intelligent system. Here we choose a Takagi-Sugeno type fuzzy model along with differentiable operators and membership functions, and a weighted-average defuzzifier for defuzzification of the output data. The corresponding output inference can then be represented in a multilayer feedforward network structure as described in [4, 5], which is identical to the architecture of ANFIS [6]. Since we use a Takagi-Sugeno fuzzy model with Gaussian membership functions (GMFs), we have three types of free parameters: the mean and variance of each GMF, and the rule consequent parameters (we usually name them theta, θ). Optimizing these parameters so that the speed of convergence is accelerated has become a special interest in the research field of neuro-fuzzy techniques.

This paper concentrates on the optimization of θ by estimation using the least squares method (LSE) for a MIMO system. We arrange our presentation as follows. First, we explain the importance of a MIMO Takagi-Sugeno neuro-fuzzy implementation for time series forecasting and the treatment procedure of the data; in this case we use the Mackey-Glass chaotic time series. Next, we explain how we implement the least squares method for optimizing the free parameters of the Takagi-Sugeno rule consequents. All important equations, or equations derived directly from their definitions, will be numbered, while derivation steps will not. Finally, the results from the Matlab simulation will be discussed.

Note: Discussion of this paper is expected before December 1st, 2007, and will be published in the Jurnal Teknik Elektro, volume 8, number 1, March 2008.

MIMO TAKAGI-SUGENO NEURO-FUZZY FOR FORECASTING CHAOTIC TIME SERIES

Here we consider short-term forecasting as the implementation case of the MIMO Takagi-Sugeno neuro-fuzzy model for a chaotic time series. Chaotic time series are generated from deterministic nonlinear systems that are sufficiently complicated that they appear to be random. The chaotic time series we use here is the Mackey-Glass time series, which is available in Matlab. This series is generated by solving the Mackey-Glass time-delay differential equation [7]:

    dx/dt = 0.2·x(t−τ) / (1 + x¹⁰(t−τ)) − 0.1·x(t)                    (1)

where x(0) = 1.2, τ = 17, and x(t) = 0 for t < 0.
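Equation (1) can be integrated numerically. The paper uses the series shipped with Matlab; the following Python sketch is our own illustration (not the authors' code), using a simple Euler scheme with step dt:

```python
import numpy as np

def mackey_glass(n=1200, tau=17, dt=1.0, x0=1.2):
    """Euler integration of dx/dt = 0.2*x(t-tau)/(1+x(t-tau)^10) - 0.1*x(t),
    with x(0) = x0 and x(t) = 0 for t < 0, as in Eq. (1)."""
    delay = int(round(tau / dt))
    x = np.zeros(n + delay)
    x[delay] = x0                      # index `delay` corresponds to t = 0
    for k in range(delay, n + delay - 1):
        x_tau = x[k - delay]           # delayed state x(t - tau)
        x[k + 1] = x[k] + dt * (0.2 * x_tau / (1.0 + x_tau**10) - 0.1 * x[k])
    return x[delay:]                   # samples x(0), x(dt), ..., x((n-1)*dt)

series = mackey_glass()
```

A smaller dt (Matlab's reference data uses a finer Runge-Kutta integration) gives a more accurate trajectory; the Euler version is only meant to make Eq. (1) concrete.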

Parameter Estimation using Least Square Method for MIMO Takagi-Sugeno Neuro-Fuzzy in Time Series Forecasting

[Indar Sugiarto, et al]

Since the Mackey-Glass data already exist in Matlab, data preparation for forecasting consists only of structuring the data and determining the training and validation data sets. According to the Matlab documentation, the Mackey-Glass data set consists of 1200 data points. For our experiment, we neglected the first 100 data points due to their harmonic nature, then took the next 200 points for training, and finally used the next 800 points for validation. So we have a time series of the form X = {X1, X2, X3, ..., Xn}, taken at regular intervals of time t = {1, 2, 3, ..., n}.

For this forecasting problem, a set of known values of the time series up to a point in time, say t, is usually used to predict the future value of the time series at some later point, say (t+P). We then create a mapping from S sample data points, sampled every s units in time, to a predicted future value of the time series, which results in an S-dimensional input vector of the form:

    XI(t) = [X(t−(S−1)s), X(t−(S−2)s), ..., X(t)]

Palit [3] shows that for predicting the Mackey-Glass time series, the values S = 4 and s = P = 6 are good choices. The output data of the fuzzy predictor correspond to the trajectory prediction, which for the MIMO case can be written as:

    XO(t) = [X(t+P), X(t+2P), X(t+3P), ...]
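The mapping above can be implemented directly. A sketch (function name and layout are our assumptions) that builds the input rows XI(t) and output rows XO(t) with S = 4 samples spaced s = 6 apart, P = 6, and 3 outputs as in the MIMO case:

```python
import numpy as np

def build_xio(x, S=4, s=6, P=6, n_out=3):
    """Build input rows XI(t) = [x(t-(S-1)s), ..., x(t)] and
    output rows XO(t) = [x(t+P), x(t+2P), ..., x(t+n_out*P)]."""
    XI, XO = [], []
    t = (S - 1) * s                          # first t with a full input window
    while t + n_out * P < len(x):
        XI.append([x[t - (S - 1 - i) * s] for i in range(S)])
        XO.append([x[t + (k + 1) * P] for k in range(n_out)])
        t += 1
    return np.array(XI), np.array(XO)
```

Stacking XI and XO side by side row-wise yields the XIO matrix used below.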

For convenience, we combine the XI and XO matrices into one matrix, named XIO, which stands for the Input-Output matrix. For our MIMO case, we use 4 inputs and 3 outputs, so that the XIO matrix can be written in the form:

          | X1      X2      X3      X4      X5      X6      X7  |
          | X4      X5      X6      X7      X8      X9      X10 |
    XIO = | ...     ...     ...     ...     ...     ...     ... |
          | X(n-6)  X(n-5)  X(n-4)  X(n-3)  X(n-2)  X(n-1)  Xn  |

where the first four columns form the XI matrix and the last three columns form the XO matrix.

The neuro-fuzzy structure used here is the same as the one described in [4, 5], which is similar to the MANFIS (Multiple-output ANFIS) model [1, 6]. For convenience, we redraw the neuro-fuzzy model proposed by [5] in comparison with the MANFIS model [1], both for a two-input, two-output system.

Figure 1. Structure of the Takagi-Sugeno neuro-fuzzy network as proposed by Palit (a) and Jang (b). [figure not reproduced]

Referring to Figure 1(a), the fuzzy logic system considered here for constructing neuro-fuzzy structures is based on the Takagi-Sugeno fuzzy model with Gaussian membership functions. It uses product inference rules and a weighted-average defuzzifier, defined as:

    f_j(x_p) = Σ(l=1..M) y_j^l · z^l / Σ(l=1..M) z^l                              (2)

where l = 1, 2, 3, ..., M reflects the number of rules (Gaussian membership functions) and i = 1, 2, 3, ..., n reflects the number of inputs.

The output consequent of the lth rule for each jth output is:

    y_j^l = θ_0j^l + Σ(i=1..n) θ_ij^l · x_i                                       (3)

Then we generate the degree of fulfillment z^l through the fuzzification procedure as:

    z^l = Π(i=1..n) G^l(x_i),  where  G^l(x_i) = exp(−(x_i − c_i^l)² / (σ_i^l)²)  (4)

Here G^l is a Gaussian membership function with the corresponding mean and variance parameters c_i^l and σ_i^l. We assume that c_i^l ∈ I_i, σ_i^l > 0, and y_j^l ∈ O_j, where I_i and O_j are the input and output universes of discourse, respectively. The corresponding lth rule of the above fuzzy logic system can then be written as:

    R^1: If x_1 is G_1^1 and x_2 is G_2^1 and ... x_n is G_n^1,
         then y_j^1 = θ_0j^1 + θ_1j^1·x_1 + θ_2j^1·x_2 + ... + θ_nj^1·x_n
    ...
    R^M: If x_1 is G_1^M and x_2 is G_2^M and ... x_n is G_n^M,
         then y_j^M = θ_0j^M + θ_1j^M·x_1 + θ_2j^M·x_2 + ... + θ_nj^M·x_n

so that the Takagi-Sugeno inference becomes:

    f_j = Σ(l=1..M) y_j^l·z^l / Σ(l=1..M) z^l
        = (z^1·y_j^1 + z^2·y_j^2 + ... + z^M·y_j^M) / (z^1 + z^2 + ... + z^M)

Here the θ are the rule consequent parameters and the f_j are the m outputs of the network. If all parameters of the neuro-fuzzy network are properly selected, the system can correctly approximate any nonlinear system based on the given XIO matrix data.

For training this neuro-fuzzy network, which is equivalent to a multi-input multi-output (MIMO) feedforward network, we use the backpropagation algorithm (BPA). Generally, the BPA, which is based on the steepest descent gradient rule, uses a low learning rate (η) for network training in order to avoid oscillations in the final phase of the training. That is why the BPA needs a large number of epochs (recursive steps) to reach the smallest SSE (sum of squared errors). The steepest descent gradient rule used for neuro-fuzzy network training is based on the recursive expressions:

    θ_0j^l(k+1) = θ_0j^l(k) − η·(∂S/∂θ_0j^l)
    c_i^l(k+1)  = c_i^l(k)  − η·(∂S/∂c_i^l)                                       (5)
    σ_i^l(k+1)  = σ_i^l(k)  − η·(∂S/∂σ_i^l)

In the sense of the BPA, θ, c, and σ are the parameters that will be updated during the training phase. Here we treat θ_0 separately from the θ_i, since it does not contain any part of the input vector and hence is calculated independently. The SSE for each epoch reflects the training performance function. We use the notation S for the squared error, which can be described as:

    S_j = 0.5·Σ(p=1..N) (e_j^p)² = 0.5·E_j^T·E_j                                  (6)

    SSE = Σ(j=1..m) S_j

where e_j is the difference between the real sampled data from the XIO matrix and its prediction, i.e., e_j = (d_j − f_j).

To obtain the update terms in (5), we first derive ∂S/∂θ_0^l. With S = 0.5·E^T·E = 0.5·[e_1² + e_2² + ... + e_n²]:

    ∂S/∂θ_0^l = e_1·(∂e_1/∂θ_0^l) + e_2·(∂e_2/∂θ_0^l) + ... + e_n·(∂e_n/∂θ_0^l)
              = (f_1 − d_1)·(∂f_1/∂θ_0^l) + ... + (f_n − d_n)·(∂f_n/∂θ_0^l)

since e_j = (d_j − f_j). From f_j = Σ(l=1..M) y_j^l·z^l / Σ(l=1..M) z^l, the derivative ∂f_j/∂θ_0^l follows as:

    ∂f_j/∂θ_0^l = [ z^1·(∂y_j^1/∂θ_0^l) + ... + z^M·(∂y_j^M/∂θ_0^l) ] / b         (7)

where b = Σ(l=1..M) z^l and ∂y_j^m/∂θ_0^l = 0 if l ≠ m, so that:

    ∂f_j/∂θ_0^l = z^l / b                                                         (8)

Combining these results yields:

    ∂S/∂θ_0^l = (f_j − d_j)·(z^l / b_j)                                           (9)

In the same manner, the derivatives for the remaining adjustable parameters can be obtained as:

    ∂S/∂θ_ij^l = (f_j − d_j)·(z^l/b)·x_i
    ∂S/∂c_i^l  = A·{2·(z^l/b)·(x_i − c_i^l) / (σ_i^l)²}
    ∂S/∂σ_i^l  = A·{2·(z^l/b)·(x_i − c_i^l)² / (σ_i^l)³}

where

    A = Σ(j=1..m) (y_j^l − f_j)·(f_j − d_j)
      = Σ(j=1..m) (θ_0j^l + Σ(i=1..n) θ_ij^l·x_i − f_j)·(f_j − d_j)               (10)

OPTIMIZING THE FREE PARAMETERS OF THE TAKAGI-SUGENO RULE CONSEQUENTS

The consequent parameters of the lth rule can be written as the vector:

    θ^l = [θ_0^l, θ_1^l, θ_2^l, ..., θ_n^l]                                       (11)

The degree of fulfillment z^l is computed for the n-input system using the product operator (as in the previous section):

    z^l = Π(i=1..n) G^l(x_i)                                                      (12)

and the normalized degree of fulfillment of the lth rule has the form β^l = z^l / Σ(l=1..M) z^l, so that the output for the ith training sample becomes:

    f_i = β_i^1·(θ_0^1 + θ_1^1·x_1,i + ... + θ_n^1·x_n,i) + ...
        + β_i^M·(θ_0^M + θ_1^M·x_1,i + ... + θ_n^M·x_n,i)

We can apply least squares estimation that maps the XI matrix onto the output matrix f_j. First, however, we append a 1 alongside the n inputs in XI, which takes care of θ_0^l; we call the resulting extended input matrix XIe. The matrix form of the corresponding Takagi-Sugeno inference can then be written as:

    | f_1 |   | β_1^1·XIe_1   β_1^2·XIe_1   ...   β_1^M·XIe_1 |   | θ^1 |
    | f_2 | = | β_2^1·XIe_2   β_2^2·XIe_2   ...   β_2^M·XIe_2 | · | θ^2 |
    | ... |   |     ...           ...       ...       ...     |   | ... |
    | f_N |   | β_N^1·XIe_N   β_N^2·XIe_N   ...   β_N^M·XIe_N |   | θ^M |

or, compactly, [f] = [XIe][θ]. The normal equations

    [XIe]^T·[XIe]·[θ] = [XIe]^T·[f]

then yield:

    [θ] = ([XIe]^T·[XIe])^(−1)·[XIe]^T·[f]                                        (13)

SIMULATION USING MATLAB

All of the equations above are arranged in matrix form, and we use Matlab to simulate them so that we can observe the model's performance. For the first experiment, we generate the matrices c (mean) and σ (variance) of the Gaussian membership functions randomly, and the matrix θ is also generated randomly. [Figure: simulation result with randomly generated free parameters of the Takagi-Sugeno neuro-fuzzy network; not reproduced]
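For reference, the forward pass of Eqs. (2)-(4) that these simulations rely on can be sketched in a few lines. The paper's experiments used Matlab; the following Python/NumPy version is our illustrative assumption (array shapes and names are ours, not the authors'):

```python
import numpy as np

def ts_forward(X, c, sigma, theta):
    """Takagi-Sugeno forward pass for a MIMO system.
    X:     (N, n)      input samples
    c:     (M, n)      Gaussian means per rule, Eq. (4)
    sigma: (M, n)      Gaussian spreads per rule (sigma > 0)
    theta: (M, n+1, m) consequents [theta_0, theta_1..theta_n], Eq. (3)
    Returns f (N, m), beta (N, M), XIe (N, n+1)."""
    N = X.shape[0]
    # degree of fulfillment z^l = prod_i exp(-(x_i - c_i^l)^2 / (sigma_i^l)^2), Eq. (4)
    z = np.exp(-(((X[:, None, :] - c[None, :, :]) ** 2)
                 / sigma[None, :, :] ** 2).sum(axis=2))
    beta = z / z.sum(axis=1, keepdims=True)        # normalized fulfillment beta^l
    XIe = np.hstack([np.ones((N, 1)), X])          # prepend 1 to absorb theta_0
    y = np.einsum('ne,mef->nmf', XIe, theta)       # rule consequents y_j^l, Eq. (3)
    f = (beta[:, :, None] * y).sum(axis=1)         # weighted-average defuzzifier, Eq. (2)
    return f, beta, XIe
```

As a sanity check, with a single rule (M = 1) the normalized fulfillment is 1 and f collapses to the plain linear model XIe·θ¹.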

Repeating this experiment with different random initializations gives different results. We recorded the best simulation, which yields SSE = 0.14347. [Figure: best simulation with initially random parameters during the validation process; not reproduced]

The next experiment uses LSE to optimize the rule consequent parameters θ. First, we modify the equation for θ with a small adaptation step δ to avoid singularity problems during the matrix calculation, so that it becomes:

    [θ] = ([XIe]^T·[XIe] + δ·I)^(−1)·[XIe]^T·[f]                                  (14)

[Figure: the optimized Gaussian membership functions after being updated over 200 iterations (recall that we use the first 200 rows of the XIO matrix for training); not reproduced]

The implementation of the LSE method for the rule consequent parameters reduces the SSE to 0.0206, which is much smaller than the previous SSE; the performance of the neuro-fuzzy system is thus improved by a factor of up to 700%. Simulation during the validation process shows a clear improvement over the previous result. [Figure: validation result using the LSE method; not reproduced]
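The regularized solve of Eq. (14) amounts to one damped normal-equation step over the rule-weighted extended inputs. A minimal sketch, reusing β and XIe from the forward pass (the δ default and all names are our assumptions, not the authors' Matlab code):

```python
import numpy as np

def lse_consequents(beta, XIe, F, delta=1e-6):
    """Estimate rule consequent parameters theta by regularized LSE,
    [theta] = ([XIe]^T [XIe] + delta*I)^(-1) [XIe]^T [f]   (Eq. 14),
    where row i of the regressor is [beta_i^1*XIe_i, ..., beta_i^M*XIe_i].
    beta: (N, M), XIe: (N, n+1), F: (N, m) targets from the XO matrix."""
    N, M = beta.shape
    e = XIe.shape[1]
    # regressor matrix: each rule's extended input scaled by its beta weight
    A = (beta[:, :, None] * XIe[:, None, :]).reshape(N, M * e)
    theta_flat = np.linalg.solve(A.T @ A + delta * np.eye(M * e), A.T @ F)
    return theta_flat.reshape(M, e, F.shape[1])   # (M, n+1, m)
```

With exact (noise-free) targets and a tiny δ, the solve recovers the generating consequents, which is a convenient correctness check.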

DISCUSSION AND CONCLUSION

We can conclude two important remarks here:

a. From the simulation, we can see that the LSE method can enhance the performance of the Takagi-Sugeno neuro-fuzzy model for time series forecasting, although it requires more computational power. Even though the GMFs are not equally distributed along the universe of discourse, the system gives a satisfactory prediction of the Mackey-Glass data.

b. The value of η for updating the parameters should be chosen properly, since the result is sensitive to it. Too high a value of this parameter makes the program tend to oscillate. The computation also involves a huge amount of data, so one must find an optimization procedure to simplify the program or to accelerate the calculation process.

The results of this paper also confirm that the same performance can only be achieved with a high number of rules, which in turn will require much effort to reduce redundancies and conflicting rules.

REFERENCES

[1] Fuzzy Modelling, Proceedings of the International Conference on Neural Networks, 1995.

[2] Min-You Chen, D.A. Linkens, Rule-base Self-generation and Simplification for Data-driven Fuzzy Models, Fuzzy Sets and Systems, 2003.

[3] Hossein Salehfar, Nagy Bengiamin, Jun Huang, A Systematic Approach to Linguistic Fuzzy Modeling Based on Input-Output Data, Proceedings of the 2000 Winter Simulation Conference, 2000.

[4] Time Series using Neuro-Fuzzy Approach, Proceedings of IEEE-IJCNN, 1999.

[5] Palit, A.K., Popovic, D., Computational Intelligence in Time Series Forecasting: Theory and Engineering Applications, Springer-Verlag London, 2005.

[6] J.-S. Roger Jang, ANFIS: Adaptive-Network-based Fuzzy Inference Systems, IEEE Transactions on Systems, Man and Cybernetics, 1993.

[7] MATLAB, The Language of Technical Computing, User's Guide and Programming, The MathWorks Inc., 1998.

[8] Felix Pasila, Indar Sugiarto, Mamdani Model for Automatic Rule Generation of a MISO System, Proceedings of Soft Computing, Intelligent Systems and Information Technology (SIIT2005), 2005.
