
OPTIMAL CONTROL OF DISCRETE-TIME SYSTEMS

LAB NO.7

GROUP MEMBERS:

SYED AHSAN RAZA SHERAZI (150570)

MUHAMMAD SALMAN (150600)

ABDULLAH AHMAD (150693)

Bachelor of Mechatronics Engineering

(2019)

LAB SUPERVISOR

Engr. Haroon Khan

AIR UNIVERSITY, ISLAMABAD


LAB 7
OPTIMAL CONTROL OF DISCRETE-TIME SYSTEMS

Introduction:
The optimal control of nonlinear systems has been a key focus of the
control field for the past several decades. Traditional optimal control
approaches are mostly based on linearization or on numerical computation.
In practice, however, closed-loop optimal feedback control is what most
researchers desire. Therefore, several near-optimal control schemes are
developed for different nonlinear discrete-time systems by introducing
different iterative ADP (adaptive dynamic programming) algorithms. First,
an infinite-horizon optimal state-feedback controller is developed for a
class of discrete-time systems based on DHP (dual heuristic programming).
Then, exploiting the particular advantages of the GDHP (globalized dual
heuristic programming) algorithm, a new optimal control scheme is developed
with a discounted cost functional. Moreover, based on the GHJB (generalized
Hamilton-Jacobi-Bellman) algorithm, an infinite-horizon optimal
state-feedback stabilizing controller is designed. Further, most existing
controllers are implemented over an infinite time horizon, yet many
real-world systems must be controlled effectively within a finite time
horizon. We therefore also consider a finite-horizon optimal controller
with an ε-error bound, in which the number of optimal control steps can be
determined exactly.

Flow chart:
Problem Formulation → Problem Statement → Problem Solution

Example: Optimal control for a scalar linear system

In this lab we are concerned only with software simulations to test the optimal
control design, so the steps of the control design are not described here. The
system is the scalar plant x(k+1) = a*x(k) + b*u(k) with fixed final state
x(N) = rN, and the equations obtained for it are as follows:

(The expressions below are written out as implemented in the simulation code.)

Optimal control (minimum-energy input driving x(0) = x0 to x(N) = rN):

    u*(k) = b * a^(N-1-k) * (rN - a^N * x0) / delta,   k = 0, 1, ..., N-1,
    where delta = b^2 * (1 - a^(2N)) / (1 - a^2)

Optimal state trajectory:

    x*(k+1) = a * x*(k) + b * u*(k),   x*(0) = x0

Optimal performance index:

    J* = (rN - a^N * x0)^2 / (2 * delta)

Code:
% Simulation of Optimal Control for Scalar Systems
% Fixed Final State Case: drive x(0) = x0 to x(N) = rN
clc; clear;

a  = 0.99;   % plant parameter: x(k+1) = a*x(k) + b*u(k)
b  = 0.1;    % input gain
N  = 100;    % number of control steps
rN = 10;     % desired final state x(N)
x0 = 0;      % initial state

% delta = b^2 * sum_{i=0}^{N-1} a^(2i), summed in closed form
delta = b^2 * (1 - a^(2*N)) / (1 - a^2);

% Closed-form minimum-energy control: u(k) = b*a^(N-1-k)*(rN - a^N*x0)/delta
x = zeros(1, N+1);  u = zeros(1, N+1);
x(1) = x0;
u(1) = b * a^(N-1) * (rN - a^N * x0) / delta;
for k = 1:N
    x(k+1) = a*x(k) + b*u(k);   % update the plant state
    u(k+1) = u(k) / a;          % update the optimal control input
end

% Comparison case: a random input sequence
x2 = zeros(1, N+1);  u2 = rand(1, N+1);
x2(1) = x0;
for k = 1:N
    x2(k+1) = a*x2(k) + b*u2(k);
    if k == N
        x2(k+1) = x(k+1);       % pin the final state to the goal for comparison
    end
end

% Optimal performance index (closed form)
J0 = (rN - a^N * x0)^2 / (2*delta);

k = 1:N+1;
plot(k, x);  hold on;
plot(k, x2);

% Guess plot: input sequence starting from a guessed ug(1) = 4
xg = zeros(1, N);  ug = zeros(1, N);
xg(1) = x0;  ug(1) = 4;
for k = 1:N-1
    xg(k+1) = a*xg(k) + b*ug(k);
    ug(k+1) = ug(k) / a;
end
plot(1:N, xg);
legend('optimal', 'random', 'guess');
xlabel('k');  ylabel('x(k)');

% Performance indices: sum over the N applied inputs only
PI  = 0.5 * sum(u(1:N).^2)      % equals J0 for the optimal sequence
PI2 = 0.5 * sum(u2(1:N).^2)
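As an independent cross-check of the closed-form expressions (a sketch in Python, not part of the original MATLAB script, using the same parameter values), one can verify that the optimal input sequence drives the state exactly to the goal rN and that its control energy equals the closed-form performance index:

```python
# Cross-check of the closed-form minimum-energy control
# (Python port of the MATLAB simulation above, same parameters).
a, b, N, rN, x0 = 0.99, 0.1, 100, 10.0, 0.0

# delta = b^2 * sum_{i=0}^{N-1} a^(2i), summed in closed form
delta = b**2 * (1 - a**(2 * N)) / (1 - a**2)

# Optimal inputs u(k) = b * a^(N-1-k) * (rN - a^N * x0) / delta
u = [b * a**(N - 1 - k) * (rN - a**N * x0) / delta for k in range(N)]

# Simulate the plant x(k+1) = a*x(k) + b*u(k)
x = [x0]
for k in range(N):
    x.append(a * x[-1] + b * u[k])

J = 0.5 * sum(uk**2 for uk in u)            # simulated control energy
J0 = (rN - a**N * x0)**2 / (2 * delta)      # closed-form optimal index

print(abs(x[-1] - rN) < 1e-9)   # final state reaches the goal -> True
print(abs(J - J0) < 1e-9)       # energies agree -> True
```

Both checks pass because delta is exactly b^2 times the geometric sum of a^(2i), which is what makes the terminal constraint and the closed-form index consistent.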
Output:

Conclusion:

Comparing the two approaches gives a clear picture of the difference shown in
the graph: both the randomly generated input sequence and the closed-form
optimal control drive the state to the goal point, but the optimal control does
so with the minimum performance index (control energy). Since this lab is
concerned only with software simulation, the simulated trajectories and
performance indices are sufficient to test the optimal control design.
