
EE699 (06790)

ADAPTIVE NEUROFUZZY CONTROL


Instructor: Dr. YuMing Zhang
223 CRMS Building
Phone: 257-6262 Ext. 223
245-4518 (Home)
Email: ymzhang@engr.uky.edu
Adaptive control and neurofuzzy control are two advanced methods for time-varying and nonlinear processes. This course will begin with adaptive control of linear systems. Nonlinear
systems and related control issues will then be briefly reviewed. Neural networks and fuzzy
models will be described as general structures for approximating nonlinear functions and
dynamic processes. Based on a comparison of the two methods, the neurofuzzy model will be
proposed as a promising technology for the control and adaptive control of nonlinear processes.
This course will emphasize basic concepts, design procedures, and practical examples. The
assignments include two design projects: adaptive control of a linear system, and neurofuzzy
modeling and adaptive control of a nonlinear system. A presentation on a selected
subject is required.
TR 03:30 PM-04:45 PM

Funkhouser 313

ADAPTIVE NEUROFUZZY CONTROL
Introduction (1-2; actually 3, including 2 for the example)
Adaptive Control of Linear Systems (3-5)
Identification of Linear Models (2-3)
Project 1
Control of Nonlinear Systems (1-2)
Neural and Fuzzy Control (1-2)
Neural and Fuzzy Modeling (4-6)
Project 2: Modeling
Adaptive Neurofuzzy Control Design (7-9) & Project 2: Control
Design Examples (2)
Presentation (2)
Final Examination (1)
Projects:
1. Adaptive control of a linear system
2. Neurofuzzy modeling and control of a non-linear system

CHAPTER I: INTRODUCTION
Primary References:
Y. M. Zhang and R. Kovacevic, "Neurofuzzy model-based control of weld fusion zone
geometry," IEEE Transactions on Fuzzy Systems, 6(3): 389-401, 1998.
R. Kovacevic and Y. M. Zhang, "Neurofuzzy model-based weld fusion state estimation," IEEE
Control Systems, 17(2): 30-42, 1997.
1. Linear Systems

Classical Control, Linear Control (LQG, Optimal Control)

Model Mismatch between the process and the nominal model.


Reasons:
- Substantial range of physical conditions, modeling error (actual model is fixed, but different from the nominal model)
  Robust control or adaptive control
- Time-varying model: varying physical conditions
  Robust control or adaptive control

Adaptive Control: identify the real parameters of the model to minimize the mismatch
Robust Control: allow for the mismatch

2. Non-linear Systems

Lack of unified models, a variety of models and design methods

Unified model structure for non-linear systems: neural network models and fuzzy models

Comparison
Modeling: Disadvantage: large number of parameters
Advantages: adequate accuracy, simplicity
Control: Disadvantages: performance evaluation
Advantage: unified methods
Neural network and fuzzy methods
Modeling:
Neural networks: large number of parameters, but automated algorithm
Fuzzy models: moderate number of parameters, lack of automated algorithm
Control design:
Neural networks: large number of parameters
Fuzzy models: moderate number of parameters, time consuming

Neurofuzzy Control

Compared with fuzzy logic: automated identification algorithm, easier design

Compared with neural networks: fewer parameters, faster adaptation
3. Adaptive Non-Linear Control
Acceptable convergence speed (number of parameters), general model
4. Example: Neurofuzzy Control of Arc Welding Process

CHAPTER 2: ADAPTIVE CONTROL


Primary Reference:
D. W. Clarke, "Self-tuning control," in The Control Handbook, W. S. Levine, Ed. IEEE
Press, 1996.
1. Introduction

Most control theory assumes (1) a time-invariant, known (nominal) model, and (2) no difference between the nominal and the actual model.
Problems: initial model uncertainties (a difference between the nominal and actual model); the actual model varies during the process.
Examples:
Solutions
Robust fixed controller
Adaptive controller (self-tuning controller)
For unknown but constant dynamics, identify the model during initial period (auto-tuning or
self-tuning).
For time-varying system, identify and update the model all the time (adaptive control).
Structure of self-tuning control system

2. Simple Methods

Industrial processes: $G(s) = \frac{K}{1+sT}e^{-sT_d}$

Step input: $u(t) = U$ $(t \ge 0)$, $u(t) = 0$ $(t < 0)$

Step response: $y(t) = KU(1 - e^{-(t-T_d)/T})$ $(t \ge T_d)$

Parameters: $T_d$, $K$, $T$

Identify parameters from the step response

[Figure: step response, amplitude vs. time (sec.); $T_d = 0$, $K = 5$, $T = 5$ seconds]

[Figure: step response, amplitude vs. time (sec.); $T_d = 0$, $K = 5$, $T = 1$ second]

Identification via integration: the first-order model $T\frac{dy}{dt} + y = Ku(t)$ gives, integrating from $t_1$ to $t_2$,

$\int_{t_1}^{t_2}\left(T\frac{dy}{dt} + y\right)dt = \int_{t_1}^{t_2} Ku(t)\,dt$

$T[y(t_2) - y(t_1)] + \int_{t_1}^{t_2} y(t)\,dt = K\int_{t_1}^{t_2} u(t)\,dt$

With $t_1 = jh$, $t_2 = kh$ ($h$: sampling period), denote

$a_1 = y(t_2) - y(t_1)$
$a_2 = \int_{t_1}^{t_2} y(t)\,dt \approx h\sum_{i=j}^{k-1} y(ih)$
$b_1 = \int_{t_1}^{t_2} u(t)\,dt \approx h\sum_{i=j}^{k-1} u(ih)$

Then

$a_1 T + a_2 = K b_1$, i.e. $a_1 T - b_1 K = -a_2$

Over several intervals,

$a_1(l)\,T - b_1(l)\,K = -a_2(l) \qquad (l = 1, 2, \ldots)$

which can be solved for $T$ and $K$ (e.g., by least squares).
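A minimal sketch of this area-based identification, assuming noise-free sampled data; the function name, segment choices, and the simulated example are illustrative, not from the notes:

```python
import numpy as np

def identify_first_order(y, u, h, segments):
    """Estimate T and K of T*dy/dt + y = K*u from sampled data.

    Each segment (j, k) contributes one equation
        a1(l)*T - b1(l)*K = -a2(l)
    with a1 = y(kh) - y(jh), a2 ~ h*sum y(ih), b1 ~ h*sum u(ih).
    The stacked equations are solved by least squares.
    """
    A, rhs = [], []
    for (j, k) in segments:
        a1 = y[k] - y[j]
        a2 = h * np.sum(y[j:k])        # rectangular approximation of the integral
        b1 = h * np.sum(u[j:k])
        A.append([a1, -b1])
        rhs.append(-a2)
    (T, K), *_ = np.linalg.lstsq(np.array(A), np.array(rhs), rcond=None)
    return T, K

# Example: step response of K = 5, T = 5 sampled at h = 0.1
h = 0.1
t = np.arange(0, 30, h)
u = np.ones_like(t)
y = 5 * (1 - np.exp(-t / 5))
print(identify_first_order(y, u, h, [(0, 50), (50, 150), (150, 299)]))
```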

Control of a plant with unknown gain

- Plant: $y(t+1) = Ku(t)$
- Set point: $w(t)$
- Control problem: at instant $t$, for the known $y(t), y(t-1), \ldots, u(t-1), u(t-2), \ldots$, determine $u(t)$ such that $y(t+1)$ approaches $w(t+1)$.
- Controller:

$y(t+1) - y(t) = Ku(t) - Ku(t-1)$
$\Delta y(t+1) = K\Delta u(t)$
$e(t) = w(t) - y(t)$
(where $\Delta y(t+1) = y(t+1) - y(t)$, $\Delta u(t) = u(t) - u(t-1)$)

Desired: $\Delta y(t+1) = e(t)$

$K\Delta u(t) = e(t)$, so $u(t) = u(t-1) + e(t)/K$

Control algorithm: $u(t) = u(t-1) + e(t)/\hat K$

- On-line identification:
At instant $t-1$: the estimate of the gain is $\hat K(t-1)$
Predicted output: $\hat y(t|t-1) = \hat K(t-1)u(t-1)$
At instant $t$: $y(t)$ becomes available
The prediction error is generated: $\epsilon(t) = y(t) - \hat y(t|t-1)$

In order to eliminate the prediction error, choose $\hat K(t)$ so that $\hat K(t)u(t-1) = y(t)$:

$u(t-1)(\hat K(t) - \hat K(t-1)) = \epsilon(t)$
$\hat K(t) - \hat K(t-1) = \epsilon(t)/u(t-1)$

On-line estimator: $\hat K(t) = \hat K(t-1) + \epsilon(t)/u(t-1)$
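A sketch of the complete self-tuning loop combining this estimator with the incremental control law; the plant gain value and the initial estimate are assumptions for illustration:

```python
K_true = 2.0            # unknown plant gain (illustrative value)
K_hat = 0.5             # initial estimate
u_prev, w = 0.0, 1.0    # previous control, set-point

for t in range(20):
    y = K_true * u_prev                # plant: y(t) = K u(t-1)
    eps = y - K_hat * u_prev           # prediction error eps(t)
    if abs(u_prev) > 1e-8:             # avoid division by zero
        K_hat += eps / u_prev          # on-line estimator
    e = w - y                          # tracking error e(t) = w(t) - y(t)
    u_prev = u_prev + e / K_hat        # control: u(t) = u(t-1) + e(t)/K_hat
    print(t, round(y, 4), round(K_hat, 4))
```

Once $\hat K$ hits the true gain, the prediction error vanishes and the output settles at the set-point in one step.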

3. Plant Model

Model Structure, Parameterization, and Parameter Set

First-order system: $M(\theta) = \frac{b_0}{a_1 s + a_0}$, $\theta = \{a_0, a_1, b_0\}$

Second-order system: $M(\theta) = \frac{b_1 s + b_0}{a_2 s^2 + a_1 s + a_0}$, $\theta = \{a_0, a_1, a_2, b_0, b_1\}$

Uniqueness of Parameterization and Parameter Set

First-order system: $M(\theta) = \frac{b_0}{s + a_0}$, $\theta = \{a_0, b_0\}$, or $M(\theta) = K\frac{1}{a_1 s + 1}$, $\theta = \{K, a_1\}$

Second-order system: $M(\theta) = \frac{b_1 s + b_0}{s^2 + a_1 s + a_0}$, $\theta = \{a_0, a_1, b_0, b_1\}$, or $M(\theta) = K\frac{b_1 s + 1}{a_2 s^2 + a_1 s + 1}$, $\theta = \{K, a_1, a_2, b_1\}$

Selection of Model Structure


Criteria: - Sufficiency
- Uniqueness
- Simplicity, Realization, Robustness

Linear System: a general model structure

Continuous time: $y(t) = \frac{B(s)}{A(s)}u(t - T_d) + d(t)$

Dead time $T_d$: mass transport, approximation of complex dynamics

Disturbance $d(t)$: measurement noise, unmodeled dynamics, nonlinear effects, disturbance (load)
On-line identification:
Faster: faster tracking of the changed dynamics, less robust to noise (more easily affected by noise)
Slower: slower tracking of the changed dynamics, more robust to noise
Pulse Response

Discrete time: $y(t) = \sum_{i=1}^{\infty} h_i u(t-i)$ (for an open-loop stable system)

Why not $y(t) = \sum_{i=0}^{\infty} h_i u(t-i)$?

How to handle a dead time?

Truncation: $y(t) = \sum_{i=1}^{n} h_i u(t-i)$

Advantage: simplicity in algorithm design and computation


Disadvantage: large number of parameters

Example: $y(t) = ay(t-1) + bu(t-1)$ gives $y(t) = \sum_{i=1}^{\infty} ba^{i-1}u(t-i)$.

Truncating once the coefficients fall below 1% of the first one: $a = 0.5$: $i = 7$; $a = 0.9$: $i = 44$.
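A quick check of these truncation lengths, assuming the 1% criterion means $a^i \le 0.01$:

```python
import math

# Number of pulse-response terms needed before the coefficients
# b*a**(i-1) drop below 1% of the first one, i.e. a**i <= 0.01.
for a in (0.5, 0.9):
    n = math.ceil(math.log(0.01) / math.log(a))
    print(f"a = {a}: keep about {n} terms")   # 7 for a=0.5, 44 for a=0.9
```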

DARMA (deterministic autoregressive and moving average) difference equation:

$y(t) = \sum_{j=1}^{na} a_j y(t-j) + \sum_{j=1}^{nb} b_j u(t-j)$

Backward-shift operator $z^{-1}$:
$z^{-1}y(j) = y(j-1)$, $z^{-n}y(j) = y(j-n)$
$z^{-1}u(j) = u(j-1)$, $z^{-n}u(j) = u(j-n)$

$A(z^{-1})y(t) = B(z^{-1})u(t)$, i.e. $y(t) = \frac{B(z^{-1})}{A(z^{-1})}u(t)$
Disturbance Modeling: zero-mean disturbance

Additive disturbance: $y(t) = \frac{B(z^{-1})}{A(z^{-1})}u(t) + d(t)$

Modeling of disturbance: $d(t) = \frac{C_1(z^{-1})}{D_1(z^{-1})}e(t)$ (stationary random sequence)

Uncorrelated random sequence $e(t)$ (white noise):

$\frac{1}{t_2 - t_1 + 1}\sum_{t=t_1}^{t_2} e(t) \to 0$

$\frac{1}{t_2 - t_1 + 1}\sum_{t=t_1}^{t_2} e^2(t) \to \sigma_e^2$

$\frac{1}{t_2 - t_1 + 1}\sum_{t=t_1}^{t_2} e(t)e(t+j) \to 0 \quad (j \ne 0$, $j$: positive integer$)$
Random Sequence: partially predictable


Uncorrelated Random Sequence: unpredictable
CARMA (controlled autoregressive and moving average) difference equation:

$A(z^{-1})y(t) = B(z^{-1})u(t) + C(z^{-1})e(t)$

$y(t) = \sum_{j=1}^{na} a_j y(t-j) + \sum_{j=1}^{nb} b_j u(t-j) + \sum_{j=0}^{nc} c_j e(t-j) \qquad (c_0 = 1)$

Disturbance Modeling: non-zero-mean disturbance

$d(t) = \frac{C_1(z^{-1})}{D_1(z^{-1})}e(t) + c$ ($c$: unknown constant or slowly changing)

$\Delta d(t) = \frac{(1 - z^{-1})C_1(z^{-1})}{D_1(z^{-1})}e(t)$ (difference operator $\Delta = 1 - z^{-1}$)

CARIMA model: $A(z^{-1})\Delta y(t) = B(z^{-1})\Delta u(t) + C(z^{-1})e(t)$
4A. Least Squares Method
Model:
$y(t) = a_1 y(t-1) + a_2 y(t-2) + \cdots + a_{na}y(t-na) + b_1 u(t-1) + b_2 u(t-2) + \cdots + b_{nb}u(t-nb) + e(t)$

$y(t) = \sum_{j=1}^{na} a_j y(t-j) + \sum_{j=1}^{nb} b_j u(t-j) + e(t) = x^T(t)\theta + e(t)$

$x(t) = (y(t-1), y(t-2), \ldots, y(t-na), u(t-1), u(t-2), \ldots, u(t-nb))^T$
$\theta = (a_1, a_2, \ldots, a_{na}, b_1, b_2, \ldots, b_{nb})^T$

$e(t) = y(t) - x^T(t)\theta$
$\hat e(t) = y(t) - x^T(t)\hat\theta$


For $t = t_0 + 1$ $(t_0 \ge \max(na, nb))$, $t_0 + 2$, $t_0 + 3, \ldots, t_0 + N$:

$e(t_0+1) = y(t_0+1) - x^T(t_0+1)\theta$
$e(t_0+2) = y(t_0+2) - x^T(t_0+2)\theta$
$\ldots$
$e(t_0+N) = y(t_0+N) - x^T(t_0+N)\theta$

$E = Y - X\theta$

$E = (e(t_0+1), e(t_0+2), \ldots, e(t_0+N))^T$
$Y = (y(t_0+1), y(t_0+2), \ldots, y(t_0+N))^T$
$X = \begin{pmatrix} x^T(t_0+1) \\ x^T(t_0+2) \\ \vdots \\ x^T(t_0+N) \end{pmatrix}$

Cost function: $J = \sum_{j=1}^{N} e^2(t_0+j) = E^T E = (Y - X\theta)^T(Y - X\theta)$

Criterion for determining the optimal estimate $\theta^*$: $\min_{\theta \in R^{na+nb}} J$

$\frac{dJ}{d\theta} = -2X^T Y + 2X^T X\theta$

$X^T Y = X^T X\theta^*$

$\theta^* = (X^T X)^{-1}X^T Y$
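A minimal sketch of this batch estimator; the simulated first-order system and noise level are assumptions for illustration:

```python
import numpy as np

def batch_ls(y, u, na, nb):
    """Batch least-squares estimate theta* = (X^T X)^{-1} X^T Y for
    y(t) = sum_j a_j y(t-j) + sum_j b_j u(t-j) + e(t)."""
    t0 = max(na, nb)
    X = [np.r_[y[t-na:t][::-1], u[t-nb:t][::-1]] for t in range(t0, len(y))]
    X, Y = np.array(X), y[t0:]
    theta = np.linalg.solve(X.T @ X, X.T @ Y)   # avoids forming the inverse
    return theta

# Simulated data from y(t) = 0.8 y(t-1) + 0.5 u(t-1) + e(t)
rng = np.random.default_rng(0)
u = rng.standard_normal(500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.8 * y[t-1] + 0.5 * u[t-1] + 0.01 * rng.standard_normal()
print(batch_ls(y, u, na=1, nb=1))   # approximately [0.8, 0.5]
```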


4. Recursive Prediction Error Estimators


Recursive Estimators: why

Principle:
$\hat y(t|t-1) = x^T(t)\hat\theta(t-1)$
$y(t) = x^T(t)\theta + e(t)$

Prediction error: $\epsilon(t) = y(t) - \hat y(t|t-1) = x^T(t)(\theta - \hat\theta(t-1)) + e(t)$

$e(t)$: zero mean and unpredictable, so attribute the prediction error to the parameter estimate:

$\epsilon(t) = x^T(t)(\hat\theta(t) - \hat\theta(t-1))$
$x(t)\epsilon(t) = x(t)x^T(t)(\hat\theta(t) - \hat\theta(t-1))$
$(x(t)x^T(t))^{-1}x(t)\epsilon(t) = \hat\theta(t) - \hat\theta(t-1)$

(This is for illustration; in fact $\det(x(t)x^T(t)) = 0$.)

$\hat\theta(t) = \hat\theta(t-1) + (x(t)x^T(t))^{-1}x(t)\epsilon(t)$

Recursive estimator: $\hat\theta(t) = \hat\theta(t-1) + a(t)M(t)x(t)\epsilon(t)$

$a(t)$ large: fast estimation, high noise sensitivity; $a(t)$ small: slow estimation, robust to noise

A Recursive Estimator

- Cost function: $J(t) = (\theta(t) - \hat\theta(0))^T S(0)(\theta(t) - \hat\theta(0)) + \sum_{i=1}^{t}(y(i) - x^T(i)\theta(t))^2$

Function of the first term: penalizes departure from the initial guess; its role diminishes relative to the data term as time grows.

- Recursive form:
$S(t) = S(t-1) + x(t)x^T(t)$
$\hat\theta(t) = \hat\theta(t-1) + S^{-1}(t)x(t)\epsilon(t)$

Initials: $S(0)$, $\hat\theta(0)$
Effects of $S(0)$, $\hat\theta(0)$: the initial guess $\hat\theta(0)$ is weighted by $S(0)$, and its influence fades as $S(t)$ grows beyond $S(0)$.

Gain vector: $k(t) = \frac{P(t-1)x(t)}{1 + x^T(t)P(t-1)x(t)}$

Parameter update: $\hat\theta(t) = \hat\theta(t-1) + k(t)\epsilon(t)$
Covariance update: $P(t) = [I - k(t)x^T(t)]P(t-1)$
Initials: $P(0) = \alpha^2 I$ ($\alpha$ large) and $\hat\theta(0)$


Forgetting Factor

Why? Filter effect: discount old data so the estimator can track changing dynamics.

Continuous time: $\int_0^t e^{-\beta(t-\tau)}(y(\tau) - x^T(\tau)\theta(t))^2\,d\tau$ in place of $\int_0^t (y(\tau) - x^T(\tau)\theta(t))^2\,d\tau$

Discrete time: $\sum_{i=1}^{t}\lambda^{t-i}(y(i) - x^T(i)\theta(t))^2$ in place of $\sum_{i=1}^{t}(y(i) - x^T(i)\theta(t))^2$


Recursive Equations:

Gain vector: $k(t) = \frac{P(t-1)x(t)}{\lambda + x^T(t)P(t-1)x(t)}$

Parameter update: $\hat\theta(t) = \hat\theta(t-1) + k(t)\epsilon(t)$
Covariance update: $P(t) = [I - k(t)x^T(t)]P(t-1)/\lambda \qquad (0 < \lambda \le 1)$
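A sketch of one RLS update with forgetting, directly transcribing the three equations above; the forgetting-factor value and initials are illustrative:

```python
import numpy as np

def rls_update(theta, P, x, y, lam=0.98):
    """One step of recursive least squares with forgetting factor lam.

    k(t)     = P(t-1)x(t) / (lam + x(t)^T P(t-1) x(t))
    theta(t) = theta(t-1) + k(t) * (y(t) - x(t)^T theta(t-1))
    P(t)     = (I - k(t) x(t)^T) P(t-1) / lam
    """
    eps = y - x @ theta                       # prediction error
    k = P @ x / (lam + x @ P @ x)             # gain vector
    theta = theta + k * eps
    P = (np.eye(len(x)) - np.outer(k, x)) @ P / lam
    return theta, P

# Initials: theta(0) = 0, P(0) = alpha^2 I with alpha large
theta, P = np.zeros(2), 1e4 * np.eye(2)
```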


5. Predictive Models

Consider $y(t) = 0.9y(t-1) + e(t)$:

$y(t) = \frac{1}{1 - 0.9z^{-1}}e(t) = \left(1 + \sum_{j=1}^{\infty}0.9^j z^{-j}\right)e(t) = e(t) + 0.9e(t-1) + 0.9^2 e(t-2) + 0.9^3 e(t-3) + \cdots$

- $k$-step-ahead prediction:
Model: $y(t+k) = e(t+k) + 0.9e(t+k-1) + 0.9^2 e(t+k-2) + \cdots$
Prediction (future $e$'s are unpredictable, so keep only the known terms):
$\hat y(t+k|t) = 0.9^k e(t) + 0.9^{k+1}e(t-1) + 0.9^{k+2}e(t-2) + \cdots$
$= 0.9^k\{e(t) + 0.9e(t-1) + 0.9^2 e(t-2) + \cdots\} = 0.9^k\frac{e(t)}{1 - 0.9z^{-1}} = 0.9^k y(t)$

Prediction error: $\tilde y(t+k) = y(t+k) - \hat y(t+k|t) = \sum_{j=0}^{k-1} 0.9^j e(t+k-j)$

Variance of prediction error: $\sum_{j=0}^{k-1} 0.9^{2j}\sigma^2 = \frac{1 - 0.9^{2k}}{1 - 0.9^2}\sigma^2$

Variance of $y$: $\frac{1}{1 - 0.9^2}\sigma^2$

Variance of prediction error / variance of $y$ = $1 - 0.9^{2k}$

MA model: $y(t) = N(z^{-1})e(t)$

$N(z^{-1}) = 1 + n_1 z^{-1} + \cdots + n_{k-1}z^{-(k-1)} + n_k z^{-k} + n_{k+1}z^{-(k+1)} + \cdots$
$= \{1 + n_1 z^{-1} + \cdots + n_{k-1}z^{-(k-1)}\} + z^{-k}\{n_k + n_{k+1}z^{-1} + \cdots + n_{k+j}z^{-j} + \cdots\}$
$= N_k^*(z^{-1}) + z^{-k}N_k(z^{-1})$

Model: $y(t+k) = N(z^{-1})e(t+k) = N_k^*(z^{-1})e(t+k) + N_k(z^{-1})e(t)$

$k$-step-ahead prediction: $\hat y(t+k|t) = N_k(z^{-1})e(t) = \frac{N_k(z^{-1})}{N(z^{-1})}y(t)$

ARMA model: $y(t) = \frac{C(z^{-1})}{A(z^{-1})}e(t)$

$\frac{C(z^{-1})}{A(z^{-1})} = E(z^{-1}) + z^{-k}\frac{F(z^{-1})}{A(z^{-1})}$

$E(z^{-1}) = N_k^*(z^{-1}) = e_0 + e_1 z^{-1} + \cdots + e_{k-1}z^{-(k-1)}$
$F(z^{-1}) = f_0 + f_1 z^{-1} + \cdots$

Diophantine identity: $C(z^{-1}) = A(z^{-1})E(z^{-1}) + z^{-k}F(z^{-1})$

Degrees: $C$: $nc$; $A$: $na$; $E$: $ne = k-1$; $F$: $nf = \max(nc, na + k - 1) - k$

$k$-step-ahead prediction:

$\hat y(t+k|t) = \frac{C(z^{-1})}{A(z^{-1})}e(t+k) - E(z^{-1})e(t+k) = \frac{F(z^{-1})}{A(z^{-1})}e(t) = \frac{F(z^{-1})}{C(z^{-1})}y(t)$

Prediction error: $\tilde y(t+k|t) = y(t+k) - \hat y(t+k|t) = E(z^{-1})e(t+k)$

Example: $A(z^{-1}) = 1 - 0.9z^{-1}$, $C(z^{-1}) = 1 + 0.7z^{-1}$, $k = 2$

$na = 1$, $nc = 1$, $ne = 1$, $nf = 0$

Diophantine identity:
$1 + 0.7z^{-1} = (1 - 0.9z^{-1})(e_0 + e_1 z^{-1}) + z^{-2}f_0$
$1 + 0.7z^{-1} = e_0 + (e_1 - 0.9e_0)z^{-1} + (f_0 - 0.9e_1)z^{-2}$

Solution: $e_0 = 1$, $e_1 = 0.9e_0 + 0.7 = 1.6$, $f_0 = 0.9e_1 = 1.44$

Two-step-ahead prediction: $\hat y(t+2|t) = \frac{F(z^{-1})}{C(z^{-1})}y(t) = \frac{1.44}{1 + 0.7z^{-1}}y(t)$
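The Diophantine identity can also be solved numerically by long division of $C$ by $A$; a minimal sketch (the function name is an assumption, and only monic $A$, $C$ are handled):

```python
import numpy as np

def diophantine(a, c, k):
    """Solve C(z^-1) = A(z^-1) E(z^-1) + z^-k F(z^-1) by long division.
    a, c: coefficient lists in powers of z^-1, with a[0] = c[0] = 1."""
    n = k + max(len(a), len(c))               # working length
    rem = np.zeros(n); rem[:len(c)] = c       # remainder, starts as C
    A = np.zeros(n); A[:len(a)] = a
    e = np.zeros(k)
    for i in range(k):                        # k steps of division by A
        e[i] = rem[i]
        rem[i:] -= e[i] * A[:n - i]
    f = rem[k:]                               # F = z^k * remainder
    return e, np.trim_zeros(f, 'b')

# Example from the notes: A = 1 - 0.9 z^-1, C = 1 + 0.7 z^-1, k = 2
print(diophantine([1, -0.9], [1, 0.7], 2))    # E = [1, 1.6], F = [1.44]
```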


6. Minimum-Variance (MV) Control


Model: $A(z^{-1})y(t) = B(z^{-1})u(t-k) + C(z^{-1})e(t)$
Set-point: $y^0 = 0$
Prediction equation:

$y(t+k) = \frac{B(z^{-1})}{A(z^{-1})}u(t) + \frac{C(z^{-1})}{A(z^{-1})}e(t+k)$

Diophantine identity: $EA = C - z^{-k}F$

$EAy(t+k) = Cy(t+k) - z^{-k}Fy(t+k) = EBu(t) + ECe(t+k)$

$y(t+k) = \frac{F}{C}y(t) + \frac{EB}{C}u(t) + Ee(t+k)$

Prediction: $\hat y(t+k|t) = \frac{F}{C}y(t) + \frac{EB}{C}u(t)$

Prediction error: $\tilde y(t+k) = y(t+k) - \hat y(t+k|t) = E(z^{-1})e(t+k)$

MV control (set the prediction to the set-point, here zero): $u(t) = -\frac{F(z^{-1})}{E(z^{-1})B(z^{-1})}y(t)$

Potential problem: nonminimum-phase system

Example: $(1 - 0.9z^{-1})y(t) = 0.5u(t-2) + (1 + 0.7z^{-1})e(t)$

$k = 2$, $E = 1 + 1.6z^{-1}$, $F = 1.44$, $B = 0.5$

MV controller: $u(t) = -\frac{1.44/0.5}{1 + 1.6z^{-1}}y(t)$

$u(t) = -1.6u(t-1) - 2.88y(t)$
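A closed-loop simulation sketch of this example; the noise variance and simulation length are assumptions. Under MV control the closed-loop output reduces to $y(t) = E(z^{-1})e(t)$, so the output variance should approach $(1 + 1.6^2)\sigma^2$:

```python
import numpy as np

# Plant: (1 - 0.9 z^-1) y(t) = 0.5 u(t-2) + (1 + 0.7 z^-1) e(t)
# MV controller: u(t) = -1.6 u(t-1) - 2.88 y(t)
rng = np.random.default_rng(1)
N = 2000
y, u, e = np.zeros(N), np.zeros(N), 0.1 * rng.standard_normal(N)
for t in range(2, N):
    y[t] = 0.9 * y[t-1] + 0.5 * u[t-2] + e[t] + 0.7 * e[t-1]
    u[t] = -1.6 * u[t-1] - 2.88 * y[t]

# Achieved variance vs. the theoretical minimum var(E(z^-1) e) = (1 + 1.6^2) * 0.01
print(np.var(y[100:]), (1 + 1.6**2) * 0.01)
```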


7. Minimum-Variance Self-Tuning
Direct Adaptive Control: identify the control model directly
Indirect Adaptive Control: identify the process model, then design the controller

Indirect adaptive control requires closed-loop identification.

Direct adaptive MV:

MV: $u(t) = -\frac{F(z^{-1})}{E(z^{-1})B(z^{-1})}y(t)$, i.e. $F(z^{-1})y(t) + G(z^{-1})u(t) = 0$

where $G(z^{-1}) = E(z^{-1})B(z^{-1}) = g_0 + g_1 z^{-1} + \cdots + g_{(k-1)+nb}z^{-(k-1+nb)}$
$F(z^{-1}) = f_0 + f_1 z^{-1} + \cdots + f_{nf}z^{-nf}$ $(nf = \max(nc, na + k - 1) - k)$

Adaptive MV: $\hat F(z^{-1})y(t) + \hat G(z^{-1})u(t) = 0$

Direct estimation of $F$ and $G$:

$\hat y(t+k|t) = \frac{F}{C}y(t) + \frac{G}{C}u(t)$

$C(z^{-1})\hat y(t+k|t) = F(z^{-1})y(t) + G(z^{-1})u(t)$
$C(z^{-1})\hat y(t|t-k) = F(z^{-1})y(t-k) + G(z^{-1})u(t-k)$

$\hat y(t|t-k) = F(z^{-1})y(t-k) + G(z^{-1})u(t-k) - \sum_j c_j\,\hat y(t-j|t-k-j)$

With $C(z^{-1}) = 1$: $\hat y(t|t-k) = F(z^{-1})y(t-k) + G(z^{-1})u(t-k)$, and

$y(t) = F(z^{-1})y(t-k) + G(z^{-1})u(t-k) + E(z^{-1})e(t) = x^T(t)\theta + \epsilon(t)$, estimated by LS


8. Pole-Placement (PP) Self-Tuning


9. Long-Range Predictive Control

Problems of MV:
(1) Nonminimum phase: $u(t) = -\frac{F(z^{-1})}{E(z^{-1})B(z^{-1})}y(t)$ becomes unstable when $B(z^{-1})$ has zeros outside the unit circle
(2) Nominal delay < actual delay

Cause: control of output at a single instant


Long-Range Predictive Control
Simultaneous control of y( k j )s
Principle:
Future output = Free response + Forced response
Free response: function of known data
Forced response: function of control actions to be determined.
$A(z^{-1})\Delta y(t) = B(z^{-1})\Delta u(t)$

$y(t) = y(t-1) - \sum_{j=1}^{na} a_j\Delta y(t-j) + \sum_{j=1}^{nb} b_j\Delta u(t-j)$

Free response ($\Delta u(t+j) = 0$, $j = 0, 1, 2, \ldots$):

$y(t+1) = y(t) - \sum_{j=1}^{na} a_j\Delta y(t+1-j) + b_1\Delta u(t) + \sum_{j=2}^{nb} b_j\Delta u(t+1-j)$

$p(t+1) = y(t) - \sum_{j=1}^{na} a_j\Delta y(t+1-j) + \sum_{j=2}^{nb} b_j\Delta u(t+1-j)$

$p(t+2) = p(t+1) - a_1\Delta p(t+1) - \sum_{j=2}^{na} a_j\Delta y(t+2-j) + \sum_{j=3}^{nb} b_j\Delta u(t+2-j)$

$\ldots$

$p(t+i) = p(t+i-1) - a_1\Delta p(t+i-1) - a_2\Delta p(t+i-2) - \cdots \qquad (i > nb)$

Prediction:
$y(t+1) = s_1\Delta u(t) + p(t+1)$
$y(t+2) = s_1\Delta u(t+1) + s_2\Delta u(t) + p(t+2)$
$\ldots$
$y(t+i) = s_1\Delta u(t+i-1) + s_2\Delta u(t+i-2) + \cdots + s_i\Delta u(t) + p(t+i)$

Simultaneous control of $y(t+1), y(t+2), \ldots, y(t+N)$


$Y = (y(t+1), y(t+2), \ldots, y(t+N))^T$
$\Delta U = (\Delta u(t), \Delta u(t+1), \ldots, \Delta u(t+N-1))^T$
$P = (p(t+1), p(t+2), \ldots, p(t+N))^T$
$W = (w(t+1), w(t+2), \ldots, w(t+N))^T$
$E = (e(t+1), e(t+2), \ldots, e(t+N))^T = (w(t+1) - y(t+1), w(t+2) - y(t+2), \ldots, w(t+N) - y(t+N))^T$

$G$: lower-triangular matrix of the step-response coefficients $s_i$

$Y = G\Delta U + P$

$\min_{\Delta U \in R^N} E^T E$

$\Delta U = (G^T G)^{-1}G^T(W - P)$

Problems: excessive control action; dead-time systems

Solution: a smaller number of free control actions (control horizon)

$NU \le N$: only $\Delta u(t), \Delta u(t+1), \ldots, \Delta u(t+NU-1)$ are free
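A sketch of the control computation with a control horizon; the matrix construction follows the prediction equations above, while the function name and receding-horizon comment are assumptions:

```python
import numpy as np

def gpc_increments(s, p, w, NU):
    """Solve min ||W - (G dU + P)||^2 over the first NU control increments.

    s: step-response coefficients s_1..s_N, p: free response p(t+1)..p(t+N),
    w: set-point sequence, NU: control horizon (remaining increments zero).
    """
    N = len(s)
    G = np.zeros((N, NU))
    for i in range(N):
        for j in range(min(i + 1, NU)):
            G[i, j] = s[i - j]                 # lower-triangular Toeplitz
    dU, *_ = np.linalg.lstsq(G, np.asarray(w) - np.asarray(p), rcond=None)
    return dU                                  # apply only dU[0], then repeat
```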


ADAPTIVE CONTROL SYSTEM DESIGN


EE 699 Project I
Consider the following process
$(1 + \alpha_1 z^{-1})(1 + \alpha_2 z^{-1})y(t) = b_3 u(t-3) + b_4 u(t-4) + e(t)$

The parameters of the process are time-varying:

$\alpha_1 = 0.3 + 0.0002t$
$\alpha_2 = 0.9 - 0.0002t$
$b_3 = 1 - 0.0005t$ $(t \le 1000)$
$b_4 = 0.5 + 0.0002t$

$e \sim N(0, 0.1^2)$

Design an adaptive system to control $y(t)$ $(0 \le t \le 1000)$ for set-point $y^0 = 1$.
Report Requirements:
(1) Method selection
(2) System design
(3) Program
(4) Simulation results
(5) Results analysis
(6) Conclusions

Report Due: Nov. 22, 1998


CHAPTER 3 FUZZY LOGIC SYSTEMS


Primary Reference: J. M. Mendel, "Fuzzy Logic Systems for Engineering: A Tutorial,"
Proceedings of the IEEE, 83(3): 345-377, 1995.
I. INTRODUCTION
A. Problem Knowledge
Objective knowledge (mathematical models)
Subjective knowledge: linguistic information, difficult to quantify using traditional mathematics
Importance of subjective knowledge: idea development, high-level decision making, and overall design
Coordination of the two forms of knowledge:
Model-based approach: objective information: mathematical models; subjective information: linguistic statements, rules, FL-based quantification
Model-free approach: numerical data, rules + linguistic information.
B. Purpose of the Chapter
Basic parts for the synthesis of an FLS
FLS: numbers-to-numbers mapping: fuzzifier, defuzzifier
(inputs: numbers, output: numbers, mechanism: fuzzy logic)
C. What is a Fuzzy Logic System
Input-output characteristic: nonlinear mapping of an input vector into a scalar output
Mechanism: linguistic-statement-based IF-THEN inference or its mathematical variants
D. Potential of FLS's
E. Rationale for FL in Engineering
Lotfi Zadeh, 1965: imprecisely defined "classes" play an important role in human thinking (fuzzy logic)
Lotfi Zadeh, 1973: Principle of Incompatibility (engineering application)
F. Fuzzy Concepts in Engineering: examples

G. Fuzzy Logic System: A High-Level Introduction


Crisp-inputs-to-crisp-outputs mapping: y = f(x)
Four components: fuzzifier, rules, inference engine, defuzzifier
Rules (collection of IF-THEN statements): provided by experts or extracted from numerical data
Understanding of (1) linguistic variables ~ numerical values
(2) quantification of linguistic variables: terms
(3) logical connections: "or", "and"
(4) implications: "IF A THEN B"
(5) combination of rules
Fuzzifier: crisp numbers to fuzzy sets that will be used to activate rules
Inference engine: maps fuzzy sets into fuzzy sets based on the rules
Defuzzifier: fuzzy sets to crisp output


II. SHORT PRIMER ON FUZZY SETS


A. Crisp Sets
Crisp set A in a universe of discourse U:
Defined by: listing all of its members, or specifying a condition by which $x \in A$
Notation: $A = \{x \mid x$ meets some condition$\}$

Membership function $\mu_A(x)$: $\mu_A(x) = 1$ if $x \in A$; $\mu_A(x) = 0$ if $x \notin A$

Equivalence: set $A$ and membership function $\mu_A(x)$
Example 1: Cars: color, domestic/foreign, cylinders
B. Fuzzy Sets
Membership function $\mu_F(x) \in [0, 1]$: a measurement of the degree of similarity
Example 1 (contd.): domestic/foreign
An element can reside in more than one fuzzy set with different degrees of similarity (membership)
Representation of a fuzzy set:
- $F = \{(x, \mu_F(x)) \mid x \in U\}$ (pairs of element and membership value)
- $F = \int_U \mu_F(x)/x$ (continuous discourse U), or $F = \sum_U \mu_F(x)/x$ (discrete)
Example 2: F = integers close to 10
F = 0.1/7 + 0.5/8 + 0.8/9 + 1/10 + 0.8/11 + 0.5/12 + 0.1/13
(Elements with zero $\mu_F(x)$; subjectiveness of $\mu_F(x)$; symmetry)
C. Linguistic Variables
Linguistic variables: variables whose values are not given by numbers but by words or sentences
u: name of a (linguistic) variable
x: numerical value of a (linguistic) variable, $x \in U$ (often interchangeable with u when u is a single letter)
Set of terms T(u): linguistic values of a (linguistic) variable
Specification of terms: fuzzy sets (names of the terms and membership functions)

Example 3: Pressure
- Name of the variable: pressure
- Terms: T(pressure) = {weak, low, okay, strong, high}
- Universe of discourse U = [100 psi, 2300 psi]
- Weak: below 200 psi; low: close to 700 psi; okay: close to 1050 psi; strong: close to 1500 psi; high: above 2200 psi
Linguistic descriptions determine the membership functions
D. Membership Functions
$\mu_F(x)$: examples
Number of membership functions (terms): resolution vs. computational complexity
Overlap (a glass can be partially full and partially empty at the same time)
E. Some Terminology
The support of a fuzzy set
Crossover point
Fuzzy singleton: a fuzzy set whose support is a single point with unity membership value
F. Set Theoretic Operations
F1. Crisp Sets
A and B: subsets of U

Union of A and B, $A \cup B$: $\mu_{A\cup B}(x) = 1$ if $x \in A$ or $x \in B$; $0$ if $x \notin A$ and $x \notin B$

Intersection of A and B, $A \cap B$: $\mu_{A\cap B}(x) = 1$ if $x \in A$ and $x \in B$; $0$ if $x \notin A$ or $x \notin B$

Complement of A, $\bar A$: $\mu_{\bar A}(x) = 1$ if $x \notin A$; $0$ if $x \in A$

$A \cup B$: $\mu_{A\cup B}(x) = \max[\mu_A(x), \mu_B(x)]$
$A \cap B$: $\mu_{A\cap B}(x) = \min[\mu_A(x), \mu_B(x)]$
$\mu_{\bar A}(x) = 1 - \mu_A(x)$

Union and intersection: commutative, associative, and distributive
De Morgan's laws: $\overline{A \cup B} = \bar A \cap \bar B$, $\overline{A \cap B} = \bar A \cup \bar B$
The two fundamental (Aristotelian) laws of crisp set theory:
- Law of Contradiction: $A \cap \bar A = \emptyset$
- Law of Excluded Middle: $A \cup \bar A = U$
F2. Fuzzy Sets
Fuzzy set A: $\mu_A(x)$; fuzzy set B: $\mu_B(x)$
Operations on fuzzy sets:

$\mu_{A\cup B}(x) = \max[\mu_A(x), \mu_B(x)]$
$\mu_{A\cap B}(x) = \min[\mu_A(x), \mu_B(x)]$
$\mu_{\bar A}(x) = 1 - \mu_A(x)$

Law of Contradiction? $A \cap \bar A = \emptyset$?
Law of Excluded Middle? $A \cup \bar A = U$?
Multiple definitions:
Fuzzy union: maximum, or algebraic sum $\mu_{A\cup B}(x) = \mu_A(x) + \mu_B(x) - \mu_A(x)\mu_B(x)$
Fuzzy intersection: minimum, or algebraic product $\mu_{A\cap B}(x) = \mu_A(x)\mu_B(x)$

Fuzzy union: t-conorm (s-norm)
Fuzzy intersection: t-norm
Examples:
t-conorm:
Bounded sum: $\mu_{A\cup B} = \min(1, \mu_A + \mu_B)$
Drastic sum: $\mu_{A\cup B} = \mu_A$ if $\mu_B = 0$; $\mu_B$ if $\mu_A = 0$; $1$ if $\mu_A > 0$ and $\mu_B > 0$
t-norm:
Bounded product: $\mu_{A\cap B} = \max(0, \mu_A + \mu_B - 1)$
Drastic product: $\mu_{A\cap B} = \mu_A$ if $\mu_B = 1$; $\mu_B$ if $\mu_A = 1$; $0$ if $\mu_A < 1$ and $\mu_B < 1$

Generalization of De Morgan's laws:
$s[\mu_A(x), \mu_B(x)] = c\{t[c(\mu_A(x)), c(\mu_B(x))]\}$
$t[\mu_A(x), \mu_B(x)] = c\{s[c(\mu_A(x)), c(\mu_B(x))]\}$
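A minimal sketch of these operations on sampled membership functions; the two Gaussian fuzzy sets are illustrative:

```python
import numpy as np

# Pointwise fuzzy set operations on sampled membership functions.
x = np.linspace(0, 10, 101)
mu_A = np.exp(-0.5 * ((x - 4) / 1.0) ** 2)      # "close to 4"
mu_B = np.exp(-0.5 * ((x - 6) / 1.0) ** 2)      # "close to 6"

union_max   = np.maximum(mu_A, mu_B)            # t-conorm: maximum
union_asum  = mu_A + mu_B - mu_A * mu_B         # t-conorm: algebraic sum
union_bsum  = np.minimum(1.0, mu_A + mu_B)      # t-conorm: bounded sum
inter_min   = np.minimum(mu_A, mu_B)            # t-norm: minimum
inter_prod  = mu_A * mu_B                       # t-norm: algebraic product
inter_bprod = np.maximum(0.0, mu_A + mu_B - 1)  # t-norm: bounded product
complement  = 1.0 - mu_A

# The crisp laws fail: A u ~A is not everywhere 1, A n ~A not everywhere 0.
print(np.min(np.maximum(mu_A, complement)), np.max(np.minimum(mu_A, complement)))
```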



III. SHORT PRIMER ON FUZZY LOGIC
A. Crisp Logic
Rules: a form of propositions
Proposition: an ordinary statement involving terms which have been defined
Example: IF the damping ratio is low, THEN the system's impulse response oscillates a long time before it dies.
Proposition: true or false
Logical reasoning: the process of combining given propositions into other propositions, ...
Combinations:
- Conjunction $p \land q$ (simultaneous truth)
- Disjunction $p \lor q$ (truth of either or both)
- Implication $p \to q$ (IF-THEN rule): antecedent, consequent
- Negation $\lnot p$
- Equivalence $p \leftrightarrow q$ (both true or both false)
Truth table
The fundamental axioms of traditional propositional logic:
- Every proposition is either true or false
- The expressions given by defined terms are propositions
- The truth tables for conjunction, disjunction, implication, negation, and equivalence
Tautology: a proposition formed by combining other propositions (p, q, r, ...) which is true regardless of the truth or falsehood of p, q, r, ...
Examples: $(p \to q) \leftrightarrow \lnot[p \land (\lnot q)]$, $(p \to q) \leftrightarrow (\lnot p) \lor q$

Membership function for $p \to q$:
$\mu_{p\to q}(x, y) = 1 - \min[\mu_p(x), 1 - \mu_q(y)]$
$\mu_{p\to q}(x, y) = \max[1 - \mu_p(x), \mu_q(y)]$
$\mu_{p\to q}(x, y) = 1 - \mu_p(x)(1 - \mu_q(y))$
$\mu_{p\to q}(x, y) = \min[1, 1 - \mu_p(x) + \mu_q(y)]$

Inference rules:
Modus Ponens: Premise 1: "x is A"; Premise 2: "IF x is A THEN y is B"
Consequence: "y is B" $(A \to B)$
$(p \land (p \to q)) \to q$
Modus Tollens: Premise 1: "y is not B"; Premise 2: "IF x is A THEN y is B"
Consequence: "x is not A"
$(\lnot q \land (p \to q)) \to \lnot p$
B. Fuzzy Logic
Membership function of the IF-THEN statement "IF u is A, THEN v is B" $(u \in U, v \in V)$:
$\mu_{A\to B}(x, y)$: truth degree of the implication relation between x and y
B1. Crisp Logic to Fuzzy Logic?
From crisp logic:

$\mu_{A\to B}(x, y) = 1 - \min[\mu_A(x), 1 - \mu_B(y)]$
$\mu_{A\to B}(x, y) = \max[1 - \mu_A(x), \mu_B(y)]$
$\mu_{A\to B}(x, y) = 1 - \mu_A(x)(1 - \mu_B(y))$

Do they make sense in fuzzy logic?

Generalized Modus Ponens: Premise 1: "u is A*"; Premise 2: "IF u is A THEN v is B"
Consequence: "v is B*"
Example: "IF a man is short, THEN he will not make a very good professional basketball player"
A: short man; B: not a very good player
"This man is under 5 feet tall": A*: man under 5 feet tall
"He will make a poor professional basketball player": B*: poor player
Crisp logic: $(A \land (A \to B)) \to B$ (composition of relations)

$\mu_{B^*}(y) = \sup_{x \in A^*}[\mu_{A^*}(x) \star \mu_{A\to B}(x, y)]$

Examine $\mu_{B^*}(y)$ using $\mu_{A\to B}(x, y)$ borrowed from crisp logic and the singleton fuzzifier $\mu_{A^*}(x') = 1$, $\mu_{A^*}(x) = 0$ for $x \ne x'$:

$\mu_{B^*}(y) = \sup_{x \in A^*}[\mu_{A^*}(x) \star \mu_{A\to B}(x, y)] = \mu_{A^*}(x') \star \mu_{A\to B}(x', y) = \mu_{A\to B}(x', y)$
$= 1 - \min[\mu_A(x'), 1 - \mu_B(y)]$

If $\mu_A(x') = 0$ ($x'$ outside the support of A): $\mu_{B^*}(y) = 1 - 0 = 1$ for every y, an uninformative consequence.


B2. Engineering Implications of Fuzzy Logic

Minimum implication: $\mu_{A\to B}(x, y) = \min[\mu_A(x), \mu_B(y)]$
Product implication: $\mu_{A\to B}(x, y) = \mu_A(x)\mu_B(y)$
Disagreement with propositional logic

IV. FUZZINESS AND OTHER MODELS


V. FUZZY LOGIC SYSTEMS
A. Rules
$R^{(l)}$: IF $u_1$ is $F_1^l$ and $u_2$ is $F_2^l$ and $\ldots$ and $u_p$ is $F_p^l$, THEN $v$ is $G^l$ $\qquad l = 1, 2, \ldots, M$

$F_i^l$'s: fuzzy sets in $U_i \subset R$
$G^l$: fuzzy set in $V \subset R$
$u = \mathrm{col}(u_1, u_2, \ldots, u_p) \in U_1 \times U_2 \times \cdots \times U_p$

Multiple Antecedents
Example 18: Ball on beam
Objective: drive the ball to the origin and maintain it there
Control variable: $u = \frac{d^2\theta}{dt^2}$
Nonlinear system; states: $r$, $\frac{dr}{dt}$, $\theta$, $\frac{d\theta}{dt}$

Rules:
$R^{(1)}$: IF $r$ is positive and $\frac{dr}{dt}$ is near zero and $\theta$ is positive and $\frac{d\theta}{dt}$ is near zero, THEN $u$ is negative
$R^{(2)}$: IF $r$ is negative and $\frac{dr}{dt}$ is near zero and $\theta$ is negative and $\frac{d\theta}{dt}$ is near zero, THEN $u$ is positive
$R^{(3)}$: IF $r$ is positive and $\frac{dr}{dt}$ is near zero and $\theta$ is negative and $\frac{d\theta}{dt}$ is near zero, THEN $u$ is positive big
$R^{(4)}$: IF $r$ is negative and $\frac{dr}{dt}$ is near zero and $\theta$ is positive and $\frac{d\theta}{dt}$ is near zero, THEN $u$ is negative big
Example 19: Truck Backing-Up Problem
Objective: $x = 10$, $\phi = 90°$ $(x \in [0, 20]$, $\phi \in [-90°, 270°])$
Control variable: $\theta \in [-40°, 40°]$

Rules: relational matrix (fuzzy associative memory)

Membership functions:
Example 20: A nonlinear dynamical system
Rough knowledge (qualitative information):
Nonlinearity f(*): depends on y(k) and y(k-1)
f(*) is close to zero when y(k) is close to zero or -4
f(*) is close to zero when y(k-1) is close to zero

Rules:
Example 21: Time series x(k), k = 1, 2, ...
Problem: from x(k-n+1), x(k-n+2), ..., x(k), (predict) x(k+1)
Given: x(1), x(2), ..., x(D)
D-n training pairs:
$x^{(1)}$: [x(1), x(2), ..., x(n); x(n+1)]
$x^{(2)}$: [x(2), x(3), ..., x(n+1); x(n+2)]
...
$x^{(D-n)}$: [x(D-n), x(D-n+1), ..., x(D-1); x(D)]

n antecedents in each rule: $u_1, u_2, \ldots, u_n$
D-n rules
Extract rules from numerical data (the second method is sketched in the code after the conflict-resolution step below):
First method: the data establish the fuzzy sets (identify or optimize the parameters in the membership functions for these fuzzy sets) in the antecedents and the consequents (first)
Second method: prespecify fuzzy sets in the antecedents and the consequents and then associate the data with these fuzzy sets
Second method:
Establish domain intervals for all input and output variables: $[X^-, X^+]$
Divide each domain interval into a prespecified number of overlapping regions
Label and assign a membership function to each region
Generate fuzzy rules from the data: consider data pair $x^{(j)}$
- Determine the degrees (membership values) of each element of $x^{(j)}$ in all possible fuzzy sets
- Select the fuzzy set corresponding to the maximum degree for each element
- Obtain a rule from the combination of the selected fuzzy sets for the data pair $x^{(j)}$
This yields D-n rules.

Conflicting rules: same antecedents, different consequents

Solution: select the rule with the maximum degree in the group

$D(R^{(j)}) = \mu_{X_1}(x_1^{(j)})\,\mu_{X_2}(x_2^{(j)})\cdots\mu_{X_n}(x_n^{(j)})\,\mu_Y(y^{(j)})$
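A sketch of this rule-generation and conflict-resolution procedure in the style of the second method; the Gaussian membership functions, centers, and data pairs are assumptions for illustration:

```python
import numpy as np

def gaussian(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2)

def extract_rule(sample, centers, sigma=0.5):
    """One-pair rule generation: pick the fuzzy set with maximum
    membership for each variable; the rule degree D(R) is the product
    of the winning memberships (last element of sample = consequent)."""
    labels, degree = [], 1.0
    for value, cs in zip(sample, centers):
        mships = gaussian(value, np.asarray(cs), sigma)
        best = int(np.argmax(mships))          # fuzzy set with maximum degree
        labels.append(best)
        degree *= mships[best]
    return tuple(labels), degree

# Conflicting rules (same antecedent labels, different consequents) are
# resolved by keeping the rule with the largest degree.
rules = {}
for sample in [(0.9, 2.1, 1.2), (1.1, 1.9, 1.4)]:   # illustrative data pairs
    labels, degree = extract_rule(sample, centers=[[0, 1, 2]] * 3)
    ant, cons = labels[:-1], labels[-1]
    if ant not in rules or degree > rules[ant][1]:
        rules[ant] = (cons, degree)
print(rules)
```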
Nonobvious Rules:


B. Fuzzy Inference Engine


Uses fuzzy logic principles to combine fuzzy IF-THEN rules from the fuzzy rule base into a mapping from fuzzy input sets to fuzzy output sets.

$R^{(l)}$: IF $u_1$ is $F_1^l$ and $u_2$ is $F_2^l$ and $\ldots$ and $u_p$ is $F_p^l$, THEN $v$ is $G^l$

(Input sets: defined on $U = U_1 \times U_2 \times \cdots \times U_p$; output set: defined on $V$)

$F_1^l \times F_2^l \times \cdots \times F_p^l = A$, $G^l = B$, so $R^{(l)}: A \to B$

$\mu_{R^{(l)}}(x, y) = \mu_{A\to B}(x, y) = \mu_{F_1^l}(x_1) \star \mu_{F_2^l}(x_2) \star \cdots \star \mu_{F_p^l}(x_p) \star \mu_{G^l}(y)$

Input to $R^{(l)}$: fuzzy set $A_x$, the output of the fuzzifier

$\mu_{A_x}(x) = \mu_{X_1}(x_1) \star \mu_{X_2}(x_2) \star \cdots \star \mu_{X_p}(x_p)$
$X_k$'s: fuzzy sets describing the inputs

$R^{(l)}$ determines a fuzzy set $B^l = A_x \circ R^{(l)}$:

$\mu_{B^l}(y) = \mu_{A_x \circ R^{(l)}}(y) = \sup_{x \in U}[\mu_{A_x}(x) \star \mu_{A\to B}(x, y)]$

Combining rules:

Final fuzzy set: $B = A_x \circ [R^{(1)}, R^{(2)}, \ldots, R^{(M)}] = \bigcup_{l=1}^{M} A_x \circ R^{(l)} = \bigcup_{l=1}^{M} B^l$
Using the t-conorm: $B = B^1 \cup B^2 \cup \cdots \cup B^M$
Additive combiner: weights
Example 22: Truck backing up
$\phi(t_i) = 140°$, $x(t_i) = 6$
$\phi(t_i)$: B1, B2; $x(t_i)$: S1, S2

C. Fuzzification
Maps a crisp point $x = \mathrm{col}(x_1, x_2, \ldots, x_p) \in U$ into a fuzzy set $A^*$ defined on U

Singleton fuzzifier: $\mu_{A^*}(x) = 1$ for $x = x'$; $0$ for $x \ne x'$

$\mu_{B^l}(y) = \sup_{x \in U}[\mu_{A_x}(x) \star \mu_{A\to B}(x, y)] = \mu_{A\to B}(x', y)$


Nonsingleton fuzzifier: $\mu_{A^*}(x') = 1$ at $x = x'$, and $\mu_{A^*}(x)$ decreases as $\|x - x'\|$ increases

$\mu_{B^l}(y) = \sup_{x \in U}[\mu_{A_x}(x) \star \mu_{A\to B}(x, y)]$
$= \sup_{x \in U}\,\mu_{X_1}(x_1) \star \cdots \star \mu_{X_p}(x_p) \star \mu_{F_1^l}(x_1) \star \cdots \star \mu_{F_p^l}(x_p) \star \mu_{G^l}(y)$

Example 23: t-norm: product; membership functions: Gaussian

k-th input fuzzy set: $\mu_{X_k}(x_k) = \exp\{-\frac{1}{2}[(x_k - m_{X_k})/\sigma_{X_k}]^2\}$
k-th antecedent fuzzy set: $\mu_{F_k^l}(x_k) = \exp\{-\frac{1}{2}[(x_k - m_{F_k^l})/\sigma_{F_k^l}]^2\}$

$Q_k^l(x_k) = \mu_{X_k}(x_k)\,\mu_{F_k^l}(x_k)$

maximized at $x_{k,\max} = \frac{\sigma_{X_k}^2 m_{F_k^l} + \sigma_{F_k^l}^2 m_{X_k}}{\sigma_{X_k}^2 + \sigma_{F_k^l}^2}$

With $m_{X_k} = x_k'$: $x_{k,\max} = \frac{\sigma_{X_k}^2 m_{F_k^l} + \sigma_{F_k^l}^2 x_k'}{\sigma_{X_k}^2 + \sigma_{F_k^l}^2}$

Fuzzifier: prefilter

$\mu_{B^l}(y) = \mu_{G^l}(y)\prod_{k=1}^{p} Q_k^l(x_{k,\max})$

$\sigma_{X_k}^2 = 0$ (zero uncertainty of the input): $x_{k,\max} = x_k'$

D. Defuzzifier
1) Maximum Defuzzifier
2) Mean of Maximum Defuzzifier
3) Centroid Defuzzifier
4) Height Defuzzifier
5) Modified Defuzzifier
E. Possibilities


F. Formulas for Specific FLS's: Fuzzy Basis Functions


Geometric Interpretation
$y = f(x)$: for specific choices of fuzzifier, membership functions, composition, inference, and defuzzifier
Example 24: singleton fuzzifier, height defuzzification, max-product composition, product inference:

$y = f_s(x) = \frac{\sum_{l=1}^{M} \bar y^l \prod_{i=1}^{p} \mu_{F_i^l}(x_i)}{\sum_{l=1}^{M} \prod_{i=1}^{p} \mu_{F_i^l}(x_i)}$

where $\bar y^l$ is the point at which $\mu_{G^l}(\bar y^l) = \max_y \mu_{G^l}(y) = 1$

max-min composition, minimum inference:

$y = f_s(x) = \frac{\sum_{l=1}^{M} \bar y^l \min_{i=1,\ldots,p}\{\mu_{F_i^l}(x_i)\}}{\sum_{l=1}^{M} \min_{i=1,\ldots,p}\{\mu_{F_i^l}(x_i)\}}$

Example 25: nonsingleton fuzzifier, height defuzzification, max-product composition, product inference, Gaussian membership functions for $\mu_{X_k}(x_k)$, $\mu_{F_k^l}(x_k)$, and $\mu_{G^l}(y)$ $(\max_y \mu_{G^l}(y) = 1)$:

$\mu_{B^l}(\bar y^l) = \prod_{k=1}^{p} Q_k^l(x_{k,\max})$

$y = f_{ns}(x) = \frac{\sum_{l=1}^{M} \bar y^l \prod_{k=1}^{p} Q_k^l(x_{k,\max})}{\sum_{l=1}^{M} \prod_{k=1}^{p} Q_k^l(x_{k,\max})}$

Fuzzy basis functions

$y = f(x) = \sum_{l=1}^{M} \bar y^l \phi_l(x)$

FBF $\phi_l(x)$ $(l = 1, \ldots, M)$:

$\phi_l(x) = \frac{\prod_{k=1}^{p} \mu_{F_k^l}(x_k)}{\sum_{j=1}^{M} \prod_{k=1}^{p} \mu_{F_k^j}(x_k)}$ (singleton)

$\phi_l(x) = \frac{\prod_{k=1}^{p} Q_k^l(x_{k,\max})}{\sum_{j=1}^{M} \prod_{k=1}^{p} Q_k^j(x_{k,\max})}$ (nonsingleton, Gaussian)

FBFs depend on the fuzzifier, membership functions, composition, inference, defuzzifier, and the number of rules.
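A minimal sketch of the singleton FBF expansion with Gaussian antecedents; the two-rule parameter values are assumptions for illustration:

```python
import numpy as np

def fls_output(x, centers, sigmas, y_bar):
    """Singleton fuzzifier, product inference, height defuzzifier:
    f(x) = sum_l y_bar[l] * phi_l(x), with Gaussian antecedent MFs.

    centers, sigmas: (M, p) arrays of membership-function parameters,
    y_bar: (M,) consequent centers. All values here are illustrative.
    """
    x = np.asarray(x)
    # firing strength of each rule: product over the p antecedents
    w = np.prod(np.exp(-0.5 * ((x - centers) / sigmas) ** 2), axis=1)
    phi = w / np.sum(w)              # fuzzy basis functions
    return float(phi @ y_bar)

# Two rules over two inputs
centers = np.array([[1.0, 1.0], [3.0, 3.0]])
sigmas = np.ones((2, 2))
y_bar = np.array([0.0, 1.0])
print(fls_output([2.0, 2.0], centers, sigmas, y_bar))   # 0.5 by symmetry
```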
Combining rules from numerical data and expert linguistic knowledge ($M = M_N + M_L$):

$y = f(x) = \sum_{j=1}^{M} \bar y^j \phi_j(x) = \sum_{i=1}^{M_N} \bar y_N^i \phi_{N,i}(x) + \sum_{k=1}^{M_L} \bar y_L^k \phi_{L,k}(x)$

FBFs from the numerical data:

$\phi_{N,i}(x) = \prod_{s=1}^{p} \mu_{F_s^i}(x_s) \Big/ \sum_{j=1}^{M} \prod_{s=1}^{p} \mu_{F_s^j}(x_s) \qquad i = 1, \ldots, M_N$

and from the linguistic rules:

$\phi_{L,k}(x) = \prod_{s=1}^{p} \mu_{F_s^k}(x_s) \Big/ \sum_{j=1}^{M} \prod_{s=1}^{p} \mu_{F_s^j}(x_s) \qquad k = 1, \ldots, M_L$

VI. DESIGNING FUZZY LOGIC SYSTEMS


Linguistic rules
Numerical data
Tune the parameters in the FLS
Training data:

$x^{(1)}: y^{(1)}$
$x^{(2)}: y^{(2)}$
$\ldots$
$x^{(N)}: y^{(N)}$

Parameter set $\theta$: $y = f(\theta, x)$
Minimize the amplitude of $y^{(i)} - f(\theta, x^{(i)})$
Nonlinear optimization of the cost function:

$\min_\theta J(\theta) = \sum_{i=1}^{N} [y^{(i)} - f(\theta, x^{(i)})]^2$

CHAPTER 4 Neuro-Fuzzy Modeling and Control


Primary Reference: J.-S. R. Jang and C.-T. Sun, "Neuro-fuzzy modeling and control,"
Proceedings of the IEEE, 83(3): 378-406, 1995.
I. INTRODUCTION
II. FUZZY SETS, FUZZY RULES, FUZZY REASONING, AND FUZZY MODELS
III. ADAPTIVE NETWORKS

H. Architecture
Feedforward adaptive network & recurrent adaptive network
Fixed nodes & adaptive nodes
Layered representation & topological-ordering representation (no links from node i to node j for $i \ge j$)
Example 3: An adaptive network with a single linear node

$x_3 = f_3(x_1, x_2; a_1, a_2, a_3) = a_1 x_1 + a_2 x_2 + a_3$

Example 4: A building block for the perceptron or the back-propagation neural network

$x_3 = f_3(x_1, x_2; a_1, a_2, a_3) = a_1 x_1 + a_2 x_2 + a_3$
$x_4 = f_4(x_3) = 1$ if $x_3 \ge 0$; $0$ if $x_3 < 0$

Linear classifier
Building block of the classical perceptron
Step function: discontinuous gradient
Sigmoid function: continuous gradient

$x_4 = f_4(x_3) = \frac{1}{1 + e^{-x_3}}$

Composition of $f_3$ and $f_4$: building block for the back-propagation neural networks
Example 5: A back-propagation neural network

$x_7 = \frac{1}{1 + \exp[-(w_{4,7}x_4 + w_{5,7}x_5 + w_{6,7}x_6 - t_7)]}$


I. Back-Propagation Learning Rule


Recursively obtain the gradient vector: derivatives of the error with respect to the parameters
Back-propagation learning rule: the gradient vector is calculated in the direction opposite to the flow of the output of each node
Layer l (l = 0, 1, ..., L); l = 0: input layer
Node i (i = 1, 2, ..., N(l))
Output of node i in layer l: $x_{l,i}$
Function of node i in layer l: $f_{l,i}$
No jumping links:
$x_{l,i} = f_{l,i}(x_{l-1,1}, \ldots, x_{l-1,N(l-1)}, \alpha, \beta, \gamma, \ldots)$
Measurements of the outputs of the network: $d_1, d_2, \ldots, d_{N(L)}$
Calculated outputs of the network: $x_{L,1}, x_{L,2}, \ldots, x_{L,N(L)}$
Entries of the training data set (sample size): P
Using entry p (p = 1, ..., P) generates the error

$E_p = \sum_{k=1}^{N(L)} (d_k - x_{L,k})^2$

Cost function for training: $E = \sum_{p=1}^{P} E_p$

Ordered derivative $\epsilon_{l,i} = \frac{\partial^+ E_p}{\partial x_{l,i}}$: the derivative of $E_p$ with respect to $x_{l,i}$, taking both direct and indirect paths into consideration.

Example 6: ordinary partial derivative $\frac{\partial E_p}{\partial x_{l,i}}$ vs. the ordered derivative $\frac{\partial^+ E_p}{\partial x_{l,i}}$:

$y = f(x)$, $z = g(x, y)$

Ordinary: $\frac{\partial z}{\partial x} = \frac{\partial g(x, y)}{\partial x}$

Ordered: $\frac{\partial^+ z}{\partial x} = \frac{\partial g(x, f(x))}{\partial x} = \frac{\partial g(x, y)}{\partial x} + \frac{\partial g(x, y)}{\partial y}\bigg|_{y=f(x)}\frac{\partial f(x)}{\partial x}$


Example: $y = f(x) = 2x$, $z = g(x, y) = 5x + 2y$. Ordinary: $\frac{\partial z}{\partial x} = 5$. Ordered: $\frac{\partial^+ z}{\partial x} = 5 + 2 \cdot 2 = 9$.

Back-propagation equations:

Output layer: $\epsilon_{L,i} = \frac{\partial^+ E_p}{\partial x_{L,i}} = \frac{\partial E_p}{\partial x_{L,i}}$

Inner layers: $\epsilon_{l,i} = \frac{\partial^+ E_p}{\partial x_{l,i}} = \sum_{m=1}^{N(l+1)} \frac{\partial^+ E_p}{\partial x_{l+1,m}}\,\frac{\partial f_{l+1,m}}{\partial x_{l,i}} \qquad (0 \le l \le L-1)$

For a parameter $\alpha$: $\frac{\partial^+ E_p}{\partial \alpha} = \sum_{x_{l,i} \in S} \frac{\partial^+ E_p}{\partial x_{l,i}}\,\frac{\partial f_{l,i}}{\partial \alpha}$ ($S$: the set of nodes containing $\alpha$ as a parameter)

The derivative of the overall error measure $E = \sum_{p=1}^{P} E_p$ will be

$\frac{\partial^+ E}{\partial \alpha} = \sum_{p=1}^{P} \frac{\partial^+ E_p}{\partial \alpha}$

Update formula: $\Delta\alpha = -\eta\,\frac{\partial^+ E}{\partial \alpha}$

$\eta$: learning rate; can be determined by $\eta = \frac{\kappa}{\sqrt{\sum_\alpha (\partial^+ E/\partial \alpha)^2}}$

$\kappa$: step size (changes the speed of the convergence)

Off-line learning & on-line learning
Recurrent network: transform into an equivalent feedforward network by using the unfolding-in-time technique
J. Hybrid Learning Rule: Combining BP and LSE
Off-line learning


On-line learning
Different ways of combining GD and LSE
K. Neural Networks as Special Cases of Adaptive Networks
D1. Back-Propagation Neural Networks (BPNN's)
Node function: composition of a weighted sum and a nonlinear function (activation function or transfer function)
Activation function: differentiable sigmoidal or hyper-tangent type function which approximates the step function
Four types of activation functions:

Step function: $f(x) = 1$ if $x \ge 0$; $0$ if $x < 0$

Sigmoidal function: $f(x) = \frac{1}{1 + e^{-x}}$

Hyper-tangent function: $f(x) = \frac{1 - e^{-x}}{1 + e^{-x}}$

Identity function: $f(x) = x$

Example: three-input node
Inputs: $x_1, x_2, x_3$; output of the node: $x_4$

Weighted sum: $\bar x_4 = \sum_{i=1}^{3} w_{i,4}x_i - t_4$

Sigmoid function: $x_4 = \frac{1}{1 + e^{-\bar x_4}}$

$w_{i,4}$: weights; $t_4$: threshold
Example: two-layer BPNN with 3 inputs and 2 outputs
D2. The Radial Basis Function Networks (RBFN's)

Radial basis function approximation: local receptive fields
Example: An RBFN with five receptive-field units

Activation level of the i-th receptive field:

$w_i = R_i(x) = R_i(\|x - c_i\|/\sigma_i) \qquad i = 1, 2, \ldots, H$

Gaussian function: $R_i(x) = \exp\left(-\frac{\|x - c_i\|^2}{2\sigma_i^2}\right)$

or logistic function: $R_i(x) = \frac{1}{1 + \exp\{\|x - c_i\|^2/\sigma_i^2\}}$

Maximized at the center $x = c_i$

Final output:

$f(x) = \sum_{i=1}^{H} f_i w_i = \sum_{i=1}^{H} f_i R_i(x)$

or (normalized)

$f(x) = \frac{\sum_{i=1}^{H} f_i w_i}{\sum_{i=1}^{H} w_i} = \frac{\sum_{i=1}^{H} f_i R_i(x)}{\sum_{i=1}^{H} R_i(x)}$

Parameters: $c_i, \sigma_i, f_i$; nonlinear: $c_i, \sigma_i$; linear: $f_i$

Identification:
$c_i$'s: clustering techniques
$\sigma_i$'s: heuristics
then $f_i$ (or $f_i = a_i^T x + b_i$): least squares method
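A minimal sketch of the normalized RBFN output; the centers, widths, and consequent values below are illustrative assumptions:

```python
import numpy as np

def rbfn(x, c, sigma, f):
    """Normalized RBFN output f(x) = sum_i f_i R_i(x) / sum_i R_i(x)
    with Gaussian receptive fields R_i(x) = exp(-||x - c_i||^2 / (2 s_i^2))."""
    x = np.asarray(x)
    r = np.exp(-np.sum((x - c) ** 2, axis=1) / (2 * sigma ** 2))
    return float(r @ f / np.sum(r))

# Five receptive-field units on a line (illustrative parameters)
c = np.linspace(0, 4, 5).reshape(-1, 1)
sigma = np.full(5, 0.7)
f = np.array([0.0, 0.5, 1.0, 0.5, 0.0])
print(rbfn([2.2], c, sigma, f))
```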
IV. ANFIS: ADAPTIVE NEURO-FUZZY INFERENCE SYSTEMS

A. ANFIS Architecture
Example: A two-input (x and y), one-output (z) ANFIS
Rule 1: IF x is $A_1$ and y is $B_1$, then $f_1 = p_1 x + q_1 y + r_1$
Rule 2: IF x is $A_2$ and y is $B_2$, then $f_2 = p_2 x + q_2 y + r_2$
ANFIS architecture
Layer 1: adaptive nodes
$O_{1,i} = \mu_{A_i}(x)$, $i = 1, 2$; $O_{1,i} = \mu_{B_{i-2}}(y)$, $i = 3, 4$

$\mu_{A_i}(x)$ and $\mu_{B_i}(y)$: any appropriate parameterized membership functions, e.g. the generalized bell

$\mu_{A_i}(x) = \frac{1}{1 + [((x - c_i)/a_i)^2]^{b_i}}$

$\{a_i, b_i, c_i\}$: premise parameters

Layer 2: fixed nodes with multiplication
$O_{2,i} = w_i = \mu_{A_i}(x)\,\mu_{B_i}(y)$, $i = 1, 2$ (firing strength of a rule)
Layer 3: fixed nodes with normalization
$O_{3,i} = \bar w_i = \frac{w_i}{w_1 + w_2}$, $i = 1, 2$ (normalized firing strength)
Layer 4: adaptive nodes
$O_{4,i} = \bar w_i f_i = \bar w_i(p_i x + q_i y + r_i)$
$\{p_i, q_i, r_i\}$: consequent parameters
Layer 5: a fixed node with summation
$O_{5,1}$ = overall output = $\sum_i \bar w_i f_i$
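A sketch of the forward pass through these five layers for the two-rule example; all parameter values are assumptions for illustration:

```python
import numpy as np

def bell(x, a, b, c):
    """Generalized bell membership function 1 / (1 + ((x-c)/a)^2)^b."""
    return 1.0 / (1.0 + (((x - c) / a) ** 2) ** b)

def anfis_forward(x, y, premise, consequent):
    """Forward pass of the two-rule ANFIS above (parameters illustrative).

    premise: ((aA1,bA1,cA1), (aA2,bA2,cA2), (aB1,bB1,cB1), (aB2,bB2,cB2))
    consequent: ((p1,q1,r1), (p2,q2,r2))
    """
    A1, A2, B1, B2 = (bell(v, *p) for v, p in
                      zip((x, x, y, y), premise))          # layer 1
    w = np.array([A1 * B1, A2 * B2])                       # layer 2: firing strengths
    w_bar = w / w.sum()                                    # layer 3: normalization
    f = np.array([p * x + q * y + r for (p, q, r) in consequent])  # layer 4
    return float(w_bar @ f)                                # layer 5: summation

premise = ((1, 2, 0), (1, 2, 2), (1, 2, 0), (1, 2, 2))
consequent = ((1.0, 1.0, 0.0), (0.5, -0.5, 1.0))
print(anfis_forward(0.5, 1.0, premise, consequent))
```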

Example: A two-input first-order Sugeno fuzzy model with nine rules
B. Hybrid Learning Algorithm
When the premise parameters are fixed:

$f = \frac{w_1}{w_1 + w_2}f_1 + \frac{w_2}{w_1 + w_2}f_2 = \bar w_1 f_1 + \bar w_2 f_2$
$= (\bar w_1 x)p_1 + (\bar w_1 y)q_1 + (\bar w_1)r_1 + (\bar w_2 x)p_2 + (\bar w_2 y)q_2 + (\bar w_2)r_2$

a linear function of the consequent parameters
Hybrid learning scheme
C. Application to Chaotic Time Series Prediction

Example: Mackey-Glass differential delay equation

$\dot x(t) = \frac{0.2x(t-\tau)}{1 + x^{10}(t-\tau)} - 0.1x(t)$

Prediction problem: from $x(t-(D-1)\Delta), \ldots, x(t-\Delta), x(t)$ predict $x(t+P)$
$D = 4$, $P = 6$, $\Delta = 6$, 1000 data pairs:
$x(t-18), x(t-12), x(t-6), x(t) \to x(t+6)$

500 pairs for training, 500 for verification

Input partition: 2 (membership functions per input); rules: 16; number of parameters: 104 (premise: 24, consequent: 80)

Prediction results: no significant difference in prediction error between training and validating data

Reasons for excellence

V. NEURO-FUZZY CONTROL
Dynamic model: $\dot x = f(x(t), u(t))$
Desired trajectory: $x_d(t)$
Control law: $u(t) = g(x(t))$
Discrete system:
Dynamic model: $x(k+1) = f(x(k), u(k))$
Desired trajectory: $x_d(k)$
Control law: $u(k) = g(x(k))$

A. Mimicking Another Working Controller
Skilled human operators
Nonlinear approximation ability
Refining the membership functions
B. Inverse Control
Minimizing the control error
C. Specialized Learning
Minimizing the output error: needs the model of the process
D. Back-Propagation Through Time and Real-Time Recurrent Learning
Principle
Computation and implementation: off-line/on-line
E. Feedback Linearization and Sliding Control
F. Gain Scheduling
Sugeno fuzzy controller:
If pole is short, then $f_1 = k_{11}\theta + k_{12}\dot\theta + k_{13}z + k_{14}\dot z$
If pole is medium, then $f_2 = k_{21}\theta + k_{22}\dot\theta + k_{23}z + k_{24}\dot z$
If pole is long, then $f_3 = k_{31}\theta + k_{32}\dot\theta + k_{33}z + k_{34}\dot z$
Operating points, linear controllers, fuzzy control rules
G. Analytic Design

Project 2: Neuro-Fuzzy Nonlinear Control System Design
1. Given Process
The process is described by the following fuzzy model.
A. Rules:
Rule 1: IF $u(k-1)$ is VERY SMALL, then $y_1(k) = a_1 u(k-1) + b_1 u(k-2)$
Rule 2: IF $u(k-1)$ is SMALL, then $y_2(k) = a_2 u(k-1) + b_2 u(k-2)$
Rule 3: IF $u(k-1)$ is MEDIUM, then $y_3(k) = a_3 u(k-1) + b_3 u(k-2)$
Rule 4: IF $u(k-1)$ is LARGE, then $y_4(k) = a_4 u(k-1) + b_4 u(k-2)$
Rule 5: IF $u(k-1)$ is VERY LARGE, then $y_5(k) = a_5 u(k-1) + b_5 u(k-2)$
where $u$ is the input, $y_i$ $(i = 1, 2, \ldots, 5)$ is the output from Rule i, and $a_1 = 0.5$, $a_2 = 0.4$,
$a_3 = 0.3$, $a_4 = 0.2$, $a_5 = 0.1$, $b_1 = 1$, $b_2 = 0.9$, $b_3 = 0.8$, $b_4 = 0.7$, and $b_5 = 0.6$ are the
consequent parameters.
B. Membership functions:

$\mu_{VerySmall}(u) = \exp\left(-\frac{(u-0)^2}{0.5^2}\right)$
$\mu_{Small}(u) = \exp\left(-\frac{(u-1)^2}{0.5^2}\right)$
$\mu_{Medium}(u) = \exp\left(-\frac{(u-2)^2}{0.5^2}\right)$
$\mu_{Large}(u) = \exp\left(-\frac{(u-3)^2}{0.5^2}\right)$
$\mu_{VeryLarge}(u) = \exp\left(-\frac{(u-4)^2}{0.5^2}\right)$

C. System output:

$y = \sum_{i=1}^{5} \bar w_i y_i = \sum_{i=1}^{5} \frac{w_i}{\sum_{j=1}^{5} w_j}\,y_i$

where $w_i$ $(i = 1, 2, 3, 4, 5)$ is the firing strength of Rule i.


2. The desired trajectory of the system output is:

$y^0(k) = 0.5$ $(0 \le k < 400)$
$y^0(k) = 1$ $(400 \le k < 700)$
$y^0(k) = 1.5$ $(700 \le k \le 1000)$

3. Assume that the consequent parameters $a_j$'s and $b_j$'s are unknown. Design an adaptive
control system for the given system to achieve the desired trajectory of the output under the
constraint $0 \le u \le 4$. (An off-line identification procedure may be used to obtain the initial
values of the premise parameters.)
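A sketch of simulating the given process model (with the consequent parameters known, as they are when generating data; in the project itself they are treated as unknown and estimated):

```python
import numpy as np

A = np.array([0.5, 0.4, 0.3, 0.2, 0.1])      # consequent a_i (known here,
B = np.array([1.0, 0.9, 0.8, 0.7, 0.6])      # treated as unknown in the project)
C = np.array([0.0, 1.0, 2.0, 3.0, 4.0])      # membership-function centers

def process(u1, u2):
    """y(k) for inputs u(k-1) = u1, u(k-2) = u2 under the given Sugeno model."""
    w = np.exp(-((u1 - C) ** 2) / 0.5 ** 2)  # firing strengths from mu_i(u(k-1))
    y_rules = A * u1 + B * u2                # rule outputs y_i(k)
    return float(w @ y_rules / w.sum())      # normalized weighted sum

print(process(1.0, 2.0))
```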
Report Requirements:
(1) Method selection
(2) System design
(3) Program
(4) Simulation results
(5) Results analysis
(6) Conclusions
Due: 12/14/98


EE 699 Final Examination


Fall 1998
Name:
Grade:
1. What are the major elements of a fuzzy logic system? What are their functions? (20%)
2. Describe an approach which can be used to extract fuzzy rules from numerical data. (20%)
3. The system is described by a Takagi-Sugeno fuzzy model with the following rules and
membership functions.

Rules:
Rule 1: IF $u(k-1)$ is VERY SMALL, then $y_1(k) = a_1 u(k-1) + b_1 u(k-2)$
Rule 2: IF $u(k-1)$ is SMALL, then $y_2(k) = a_2 u(k-1) + b_2 u(k-2)$
Rule 3: IF $u(k-1)$ is MEDIUM, then $y_3(k) = a_3 u(k-1) + b_3 u(k-2)$
Rule 4: IF $u(k-1)$ is LARGE, then $y_4(k) = a_4 u(k-1) + b_4 u(k-2)$
Rule 5: IF $u(k-1)$ is VERY LARGE, then $y_5(k) = a_5 u(k-1) + b_5 u(k-2)$
where $u$ is the input and $y_i$ $(i = 1, 2, \ldots, 5)$ is the output from Rule i.

Membership functions:
$\mu_{VerySmall}(u) = \exp\left(-\frac{(u-0)^2}{0.5^2}\right)$
$\mu_{Small}(u) = \exp\left(-\frac{(u-1)^2}{0.5^2}\right)$
$\mu_{Medium}(u) = \exp\left(-\frac{(u-2)^2}{0.5^2}\right)$
$\mu_{Large}(u) = \exp\left(-\frac{(u-3)^2}{0.5^2}\right)$
$\mu_{VeryLarge}(u) = \exp\left(-\frac{(u-4)^2}{0.5^2}\right)$

If the output of the system is given by $y = \sum_{i=1}^{5} w_i y_i$, explain the role of $w_i$ $(i = 1, 2, 3, 4, 5)$ and give a way to determine $w_i$ $(i = 1, 2, 3, 4, 5)$.
(20%)
4. The system is

$y_k = ay_{k-1} + bu_{k-1} + \epsilon_k$

where
$a$ and $b$: parameters of the system,
$y_k$ and $u_k$: output and input at instant k, and
$\epsilon_k$: the system's noise at instant k, with $E(\epsilon_k) = 0$, $E(\epsilon_k^2) = \sigma^2$, and $E(\epsilon_k\epsilon_j) = 0$ $(k \ne j)$.

Given data pairs $\{u_k, y_k\}$'s $(k = 1, 2, \ldots, N)$, determine the least squares estimates of the
parameters $a$ and $b$. (20%)
5. The system is

$y_k = a_1 y_{k-1} + a_2 y_{k-2} + b_3 u_{k-3} + b_4 u_{k-4} + \epsilon_k$

where $y_k$: output at instant k,
$u_k$: input at instant k,
$\epsilon_k$: noise at instant k, with $E(\epsilon_k) = 0$, $E(\epsilon_k^2) = \sigma^2$, and $E(\epsilon_k\epsilon_j) = 0$ $(k \ne j)$,
$a_1, a_2, b_3, b_4$: parameters of the system.

At instant t, $\{u_k, y_k\}$'s $(k = 1, 2, \ldots, t-1)$ and $y_t$ are known. Give an equation which predicts
$y_{t+10}$. (20%)

