
Answers to P-Set # 08, 18.385j/2.036j
MIT (Fall 2014)
Rodolfo R. Rosales (MIT, Math. Dept., room 2-337, Cambridge, MA 02139)
November 5, 2014

Contents

1 Problems 08.07.05/06/07 - Strogatz (Another driven overdamped system)
  1.1 Problems 08.07.05/06/07 statement
  1.2 Problems 08.07.05/06/07 answer
      1.2.1 Appendix

2 Problem 141003 - Newton's Method in the complex plane
  2.1 Problem 141003 statement
  2.2 Problem 141003 answer

3 Coupled oscillators #01 (derive phase equations)
  3.1 Statement: Coupled oscillators #01
  3.2 Answer: Coupled oscillators #01

4 Notes: coupled oscillators, phase locking, etc.
  4.1 On phases and frequencies
  4.2 Phase locking and oscillator death

List of Figures

1.1 Problems 8.7.05/06/07. Phase portrait for a periodically driven, overdamped pendulum
1.2 Problems 8.7.05/06/07. Poincaré map for a periodically driven, overdamped pendulum
2.1 Problem 141003. Convergence zones for the Newton map iterates
2.2 Problem 141003. Convergence zones for the Newton map iterates

1 Problems 08.07.05/06/07 - Strogatz (Another driven overdamped system)

1.1 Statement for problems 08.07.05/06/07

A. By considering an appropriate Poincaré map, prove that the system

dθ/dt + sin θ = sin t    (1.1)

has at least two periodic solutions. Can you say anything about their stability?

B. Give a mechanical interpretation for equation (1.1).

C. Plot a computer generated phase portrait for the system in (1.2). Check that the answer agrees with the results in part A.

Hint. Regard the system as a vector field on a cylinder:

dt/dt = 1   and   dθ/dt = sin t − sin θ.    (1.2)

Sketch the nullclines and thereby infer the shape of certain key trajectories that can be used to bound the periodic solutions. For instance, sketch the trajectory through P = (t, θ) = (π/2)(1, 1).

1.2 Answer for problem 08.07.05/06/07

Part A. We begin by noticing that we only need to worry about the behavior of the system on a square of size 2π, that is: for t0 ≤ t ≤ t0 + 2π, and θ0 ≤ θ ≤ θ0 + 2π, for any constants t0 and θ0, which we are free to choose to our convenience. The whole phase space is then obtained by periodically repeating copies of this period box.

The nullclines (where dθ/dt = 0) are given by

θ = 2nπ + t   and   θ = (2n + 1)π − t,    (1.3)

where n is an arbitrary integer. Consider now the period box [1] given by t0 = θ0 = π/2; see figure 1.1, where the nullclines and the period box are drawn. Then the period box is divided, by the nullclines, into four sectors where dθ/dt alternates sign.
[Figure 1.1: Problems 8.7.05/06/07. Periodically driven, overdamped pendulum (equation (1.1)), dθ/dt + sin θ = sin t. The nullclines and the period box are shown, in addition to some special solutions. The curve marked by a 2 is the stable periodic solution; the unstable periodic solution is the curve marked by a 5. The role of the solutions labeled by 1, 3, and 4 is explained in the text.]
Name the sectors (starting from the top, and moving clockwise) as S1, S2, S3, and S4. Then: sign(dθ/dt) = (−1)^n in sector Sn. Below we consider three special solutions, useful in bracketing the periodic solutions, and we use the nullclines and the four sectors Sn to ascertain some aspects of their behavior.
[1] Why this? Trial and error, checking which choice makes the arguments that follow more elegant.

Remark 1.2.1 We use the notation: lower case θ whenever θ is used as a variable, and upper case Θ to denote solutions to (1.1), θ = Θ(t).

a. Solution θ = Θ1(t), defined by Θ1(0.5π) = 2.5π, and labeled by a 1 in figure 1.1.

This solution, for t slightly greater than 0.5π, is in sector S1, where it is decreasing. Furthermore, this solution exits S1 across the nullcline separating S1 from S2 at a value of θ lower than 2.5π and higher than 1.5π. Once in S2, θ is increasing, but it is not able to reach the value θ = 2.5π before t = 2.5π, because the point (t, θ) = (2.5π, 2.5π) is on the nullcline separating S1 from S2. We conclude that:

Θ1(2.5π) < Θ1(0.5π) = 2.5π.    (1.4)

b. Solution θ = Θ3(t), defined by Θ3(1.5π) = 1.5π, and labeled by a 3 in figure 1.1.

By arguments similar to those used in item a, it is clear that:

Θ3(2.5π) > 1.5π > Θ3(0.5π).    (1.5)

c. Solution θ = Θ4(t), defined by Θ4(2.5π) = 0.5π, and labeled by a 4 in figure 1.1.

By arguments similar to those used in item a, it is clear that:

0.5π = Θ4(2.5π) < Θ4(0.5π).    (1.6)

d. Finally, we note that:

0.5π ≤ Θ4(t) < Θ3(t) < Θ1(t) ≤ 2.5π,    (1.7)

for all 0.5π ≤ t ≤ 2.5π. In particular: all three solutions stay within the period box.
Consider now the Poincaré Map P, defined as follows:

P(θL) = θR,    (1.8)

where θR = Θ(2.5π) for the solution of (1.1), θ = Θ(t), defined by Θ(0.5π) = θL. Since adding 2π to any solution of (1.1) gives another solution, it is clear that P satisfies

P(θ + 2π) = P(θ) + 2π.    (1.9)

Thus Q(θ) = P(θ) − θ is periodic, of period 2π. Furthermore, the issue of existence (and stability) of periodic solutions for (1.1) can be stated in terms of P or Q as follows:

e. Equation (1.1) has a periodic solution ⟺ P has a fixed point (i.e., Q has a zero).

Proof. Let θ = Θ(t) be a periodic solution. Then θ* = Θ(0.5π) is a fixed point, from equation (1.8). Conversely, if θ* is a fixed point, the solution defined by Θ(0.5π) = θ* is periodic.

f. Let θ* be a fixed point of P. Then the corresponding periodic solution is stable if dP/dθ < 1, and unstable if dP/dθ > 1. Equivalently: dQ/dθ < 0 or dQ/dθ > 0. [2]

Note that P is a nondecreasing function, as follows from the fact that solutions cannot cross in the (t, θ) phase plane. Therefore: P cannot have a negative derivative.

Proof. P(θ* + δ) ≈ θ* + (dP/dθ) δ. Thus the distance from the periodic solution to a nearby one will either decrease or increase over one period, depending on the size of (dP/dθ).

[2] Here all the derivatives are evaluated at θ = θ*.

Note that this proof (as stated) is not rigorous, but it is very easy to make it so. I will do this now, just for the fun of it, not because I think it is important to do so: For every δ, we can find a θ̃ between θ* and θ* + δ such that P(θ* + δ) = θ* + (dP/dθ)(θ̃) δ. By taking δ small enough, we can also guarantee that (dP/dθ)(θ̃) < 1 (or (dP/dθ)(θ̃) > 1). This shows stability, or instability, without any need to neglect higher order (nonlinear) terms, as the ≈ sign in the first line of the proof implies. [3]

From (1.4) it follows that Q(Θ1(0.5π)) < 0, while (1.5) implies that Q(Θ3(0.5π)) > 0. Thus, somewhere in Θ3(0.5π) < θ < Θ1(0.5π), Q must have a zero. This shows that:

A periodic solution Θ2 of (1.1), such that Θ3(0.5π) < Θ2(0.5π) < Θ1(0.5π), exists.    (1.10)

This solution is the one labeled by a 2 in figure 1.1. A very similar argument shows that:

A periodic solution Θ5 of (1.1), such that Θ4(0.5π) < Θ5(0.5π) < Θ3(0.5π), exists.    (1.11)

This solution is the one labeled by a 5 in figure 1.1.


Remark 1.2.2 Note that:

(A) Because Q(Θ3(0.5π)) > 0 > Q(Θ1(0.5π)), we expect that: at the zero of Q between these two values, Q will have a negative derivative (making Θ2 stable). However, it is possible (while unlikely) that Q may have a vanishing derivative at the zero, so we must check this possibility.

(B) Similarly, because Q is periodic, it must have another zero, where it crosses from negative values to positive values (and this will, generically, correspond to an unstable periodic solution). Thus, once we had (1.10), we could have obtained (1.11) from this argument.

(C) Finally, there is the possibility of other zeros Q may have, which would mean other periodic solutions exist.

Below we check A-C, and show that:

Q has exactly two zeros, one where the derivative is negative and another one where it is positive. Thus, there are just two periodic solutions: one stable (which is a global attractor for the flow) and the other unstable.
We could do this simply by computing the Poincaré map, and checking that it has the desired properties; in fact, I did just that: figure 1.2 shows a plot of the Poincaré map. However, just for the fun of it, I show below how one can get these results from purely analytical methods. The process illustrates a few techniques (actually: one) that are, sometimes, useful.

The first thing to do is find a formula for the derivative of the Poincaré map (1.8). This is, actually, the most important lesson from the calculation below: it shows how one can compute derivatives of Poincaré maps (or any objects whose definition involves the solution of a differential equation) in terms of the solution of a linear differential equation.

[3] In all of this we use that P is differentiable. In fact, from standard theorems for ODE, it follows that P is analytic! See: Coddington, E. A., and Levinson, N. (1955), Theory of Ordinary Differential Equations, McGraw Hill, New York.

[Figure 1.2: Problems 8.7.05/06/07. Periodically driven, overdamped pendulum (equation (1.1)). The Poincaré map P(θ) for dθ/dt + sin θ = sin t, as defined by equation (1.8). The dotted line is the identity; the periodic orbits occur at the intersection of the two curves.]
Ideas such as these are used in bifurcation numerical codes (and others) where the values of certain derivatives are needed in the calculations. The example here is very simple, so simple that we can even do some analysis with the equation that results from implementing the idea. This analysis is, perhaps, cute, but it is not the main point. Let

θ = Θ(θ, t)    (1.12)

be defined as the solution of (1.1) that takes the value θ at t = 0.5π. Thus Θ(θ, 2.5π) = P(θ). Define ψ = ∂Θ/∂θ. Then

dψ/dt + (cos Θ) ψ = 0,    (1.13)

with initial condition ψ(θ, t0) = 1, where t0 = 0.5π. Furthermore: ψ(θ, t0 + 2π) = dP/dθ. Thus, solving (1.13), we obtain

dP/dθ = exp( −∫_{t0}^{t0+2π} cos Θ dt ).    (1.14)
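Before proceeding, here is a minimal numerical sketch of this idea (in Python with NumPy/SciPy; these tools, and all names below, are my own choices for illustration, not part of the original solution): integrating (1.1) together with the variational equation (1.13) over 0.5π ≤ t ≤ 2.5π yields both P(θ) and dP/dθ, which is how a plot like figure 1.2 can be produced.

# Sketch: compute the Poincare map P(theta) and its derivative dP/dtheta for
# d(theta)/dt + sin(theta) = sin(t), by integrating (1.1) and (1.13) together.
import numpy as np
from scipy.integrate import solve_ivp

t0 = 0.5 * np.pi

def rhs(t, u):
    theta, psi = u
    return [np.sin(t) - np.sin(theta),   # equation (1.1)
            -np.cos(theta) * psi]        # variational equation (1.13)

def poincare(theta0):
    """Return (P(theta0), dP/dtheta at theta0)."""
    sol = solve_ivp(rhs, [t0, t0 + 2 * np.pi], [theta0, 1.0],
                    rtol=1e-10, atol=1e-12)
    return sol.y[0, -1], sol.y[1, -1]

# The zeros of Q(theta) = P(theta) - theta are the periodic solutions; the
# value of dP/dtheta there (smaller or larger than 1) gives their stability.
thetas = np.linspace(t0, t0 + 2 * np.pi, 200)
P, dP = np.array([poincare(th) for th in thetas]).T

Root-finding on Q(θ) = P(θ) − θ then locates the two periodic solutions, and checking dP/dθ against 1 at those roots reproduces the stability result obtained analytically below.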
Let us now consider the interval 0.5π ≤ θ0 < 2.5π, which we divide into four subintervals in items g-j below. We claim that:

g. If Θ3(t0) ≤ θ0 ≤ 1.5π, then the solution of (1.1) satisfying Θ(t0) = θ0 cannot be periodic. Why? Because this solution must satisfy Θ ≥ Θ3, and Θ3(t0 + 2π) > 1.5π. It follows that: Q(θ) has no zeros for Θ3(t0) ≤ θ ≤ 1.5π.

h. An argument similar to that in item g yields: Q(θ) has no zeros for 0.5π ≤ θ ≤ Θ4(t0).

i. Consider now a solution of (1.1) satisfying 1.5π < Θ(t0) = θ0 < 2.5π. It is easy to see that, for all t, 1.5π < Θ(t) < 2.5π. Thus cos Θ > 0 for all t. Hence, using (1.14), we conclude that: dP/dθ < 1 (thus dQ/dθ < 0) for 1.5π < θ < 2.5π.

j. Similarly: dP/dθ > 1 (thus dQ/dθ > 0) for Θ4(t0) < θ < Θ3(t0).

It should be clear that items g-j imply the result advertised earlier in remark 1.2.2, since we already know that Q has a zero in 1.5π < θ < 2.5π, and another one in Θ4(t0) < θ < Θ3(t0).
Part B. Consider a pendulum whose arm is rigid and is attached to an axle, to which an oscillatory sinusoidal torque is applied. Assume that one end of the axle ends in a flat disk, which is immersed in some viscous fluid; this will provide a drag force proportional to the angular velocity of the axle, at least as long as the rotation speed is not too large. Assume also that most of the dissipation is due to this flat disk (i.e., neglect all other sources of drag in the system). Then the (approximate) equation describing the system is:

M L d²θ/dt² + ν dθ/dt + g M sin θ = (T/L) sin Ωt,

where M is the mass at the end of the pendulum arm, L is the effective length of the pendulum arm, ν is the torsional drag coefficient, g is the acceleration of gravity, and T and Ω are the amplitude and angular frequency of the applied torque. Using ν/(g M) as the unit of time, the equation above takes the nondimensional form

ε d²θ/dt² + dθ/dt + sin θ = a sin ωt,

where ε = g M² L / ν², a = T / (g L M), and ω = Ω ν / (g M).

Thus, if 0 < ε ≪ 1, T = g M L, and Ω = g M / ν, the behavior of the system will be approximately described by (1.1).
Part C. See figure 1.1.
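For reference, a minimal script that generates a phase portrait like figure 1.1 (a sketch in Python/Matplotlib; the tool choice and all parameter values are illustrative assumptions, not the original code): it integrates (1.2) from a spread of initial conditions and overlays the nullclines (1.3).

# Sketch: phase portrait for Part C, d(theta)/dt = sin(t) - sin(theta), drawn
# over the period box 0.5*pi <= t, theta <= 2.5*pi, with the nullclines (1.3).
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp

t0, t1 = 0.5 * np.pi, 2.5 * np.pi
f = lambda t, th: np.sin(t) - np.sin(th)
tt = np.linspace(t0, t1, 400)

plt.figure()
for th0 in np.linspace(t0, t1, 17):                  # trajectories
    sol = solve_ivp(f, [t0, t1], [th0], dense_output=True, rtol=1e-8)
    plt.plot(tt, sol.sol(tt)[0], 'b', lw=0.5)

for n in range(-1, 3):                               # nullclines (1.3)
    plt.plot(tt, 2 * n * np.pi + tt, 'k--', lw=0.8)
    plt.plot(tt, (2 * n + 1) * np.pi - tt, 'k--', lw=0.8)

plt.xlim(t0, t1); plt.ylim(t0, t1)
plt.xlabel('t'); plt.ylabel('theta')
plt.show()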
1.2.1 Appendix

It turns out that the solutions that correspond to the two points where the Poincaré map P (see figure 1.2) has a derivative that equals one (i.e.: the maximum and minimum of Q, as defined below (1.9)) can be identified easily. We do this here, exploiting some symmetries of equation (1.1), which we repeat here:

dθ/dt + sin θ = sin t.    (1.15)

Ap1. If θ = Θ(t) is a solution of the equation, then so is π − Θ(π − t).

Ap2. If θ = Θ(t) is a solution of the equation, then so is 2π + Θ(t).

Ap3. If θ = Θ(t) is a solution of the equation, then so is Θ(2π + t).

It follows that the special solutions introduced earlier (see figure 1.1) have the following symmetries:

Θ1(t) = 3π − Θ4(3π − t),    (1.16)
Θ2(t) = 3π − Θ5(3π − t),    (1.17)
Θ3(t) = 3π − Θ3(3π − t),    (1.18)
Θ4(t) = 3π − Θ1(3π − t),    (1.19)
Θ5(t) = 3π − Θ2(3π − t),    (1.20)
Θ6(t) = π − Θ6(3π − t),    (1.21)

where Θ6 is defined by Θ6(1.5π) = 0.5π, and it is not displayed in figure 1.1. Here (1.16), (1.18), (1.19), and (1.21) follow because both sides of the equalities are solutions and they have the same value at some time (i.e.: at either t = 0.5π, t = 1.5π, or t = 2.5π; see the definitions of these functions in items a-c earlier). Therefore, from the uniqueness theorem, both sides are equal. On the other hand, (1.17) and (1.20) follow because both sides are periodic, both sides are solutions, and there are only two periodic solutions per period in θ.
Now we note that (1.18) and (1.21) imply that (for j = 3 or j = 6):

cos Θj(t) = −cos Θj(3π − t)   ⟹   ∫_{0.5π}^{2.5π} cos Θj dt = 0.    (1.22)

Thus, using (1.14), we see that the derivative of the Poincaré map is one at θ = Θj(0.5π).
One further interesting consequence of the symmetries in items Ap1-Ap3 is:

The Poincaré Map satisfies P(3π − P(θ)) = 3π − θ.    (1.23)

To show this, consider an arbitrary solution θ = Θp(t). Then θ = Θq(t) = 3π − Θp(3π − t) is also a solution. Let θL = Θp(0.5π) and θR = Θp(2.5π). Then, by definition, P(θL) = θR. On the other hand, Θq(0.5π) = 3π − θR and Θq(2.5π) = 3π − θL. Thus P(3π − θR) = 3π − θL. Since the solution Θp considered was arbitrary, (1.23) follows.
Remark 1.2.3 Taking the derivatives of both sides in (1.23), we obtain

(dP/dθ)(3π − P(θ)) · (dP/dθ)(θ) = 1.

That is, the derivatives of P at θ and at 3π − P(θ) are inverses of each other; this is easy to see in figure 1.2. This also shows (again) that the derivative of P cannot vanish.
Remark 1.2.4 It would be nice if the functional equation (1.23) somehow determined P, but it does not, as we show now. Let Φ = 3π − P. Then (1.23) is equivalent to

Φ(Φ(θ)) = θ.    (1.24)

Now, let F = F(x) be any even function of its argument, with |dF/dx| ≤ C < 1, where C is a constant. Define Φ implicitly by

θ + Φ(θ) = F(θ − Φ(θ)).    (1.25)

Then Φ satisfies (1.24).
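To check this (filling in the verification): let ψ = Φ(θ). Then θ + ψ = F(θ − ψ) = F(ψ − θ), because F is even. But this is exactly the equation in (1.25) that defines Φ(ψ); hence, by the uniqueness argued below, Φ(ψ) = θ, i.e., Φ(Φ(θ)) = θ.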
This shows that (1.24), thus (1.23), has a tremendous number of solutions.

Why do we need the condition |dF/dx| ≤ C < 1? Well, so that (1.25) actually defines Φ; for let G(θ, Φ) = θ + Φ − F(θ − Φ). Then, for any fixed θ:

∂G/∂Φ = 1 + (dF/dx)(θ − Φ) ≥ 1 − C > 0.

Thus G = 0 has a unique solution Φ for each θ.

2 Problem 141003 - Newton's Method in the complex plane

2.1 Statement for problem 141003

Suppose that you want to solve an equation, g(x) = 0. Then you can use Newton's method, which is as follows: Assume that you have a reasonable guess, x0, for the value of a root. Then the sequence xn+1 = f(xn), n ≥ 0, where

f(x) = x − g(x)/g′(x),    (2.1)

converges (very fast) to the root.

Remark 2.1.1 (The idea). Assume an approximate solution, g(xa) ≈ 0. Write xb = xa + δx to improve it, where δx is small. Then 0 = g(xa + δx) ≈ g(xa) + g′(xa) δx ⟹ δx ≈ −g(xa)/g′(xa), and (2.1) follows.

Of course, if x0 is not close to a root, the method may not converge. Even if it converges, it may converge to a root that is far away from x0, not necessarily the closest root. In this problem we investigate the behavior of Newton's method in the complex plane, for arbitrary starting points.

Consider iterations of the map in the complex plane generated by Newton's method for the roots of z³ − 1 = 0. That is

zn+1 = f(zn) = (2/3) zn + 1/(3 zn²),   n ≥ 0,    (2.2)

where 0 < |z0| < ∞ is arbitrary. Note that

ζ1 = 1,   ζ2 = e^{i 2π/3} = (1/2)(−1 + i√3),   and   ζ3 = e^{i 4π/3} = (1/2)(−1 − i√3),    (2.3)

are the roots of z³ = 1.
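(For completeness: (2.2) follows from (2.1) with g(z) = z³ − 1, since f(z) = z − (z³ − 1)/(3 z²) = (2 z³ + 1)/(3 z²) = (2/3) z + 1/(3 z²).)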


Your tasks: Write a computer program to calculate the orbits {zn}, n ≥ 0. Then, for every initial point z0, [4] draw a colored dot at the position of z0, where the colors are picked as follows:

zn → ζ1: cyan.   zn → ζ2: magenta.   zn → ζ3: yellow.   No convergence: black.    (2.4)

What do you see? Do blow ups of the limit regions between zones.

Hint. Deciding that the sequence converges is easy: once zn gets close enough to one of the roots, then the very design of Newton's method guarantees convergence. Thus, given a z0, compute zN for some large N, and check if |zN − ζj| < ε for one of the roots ζj and some small tolerance ε, which does not have to be very small; in fact, ε = 0.25 is good enough. You can get pretty good pictures with N = 50 iterations on a 150 × 150 grid. A larger N is needed when refining near the boundary between zones.

[4] Numerically this means: choose a sufficiently fine grid in a rectangle, and pick every point in the grid. For example, select the square −2 < x < 2 and −2 < y < 2, where z0 = x + i y.

Hint. If you use MatLab, do not plot points. Instead, plot regions, where the color of each pixel is decided by z0; use the command image(x, y, C) to plot. Why? Because using points leaves a lot of unpainted space in the figure, and gives much larger file sizes.
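A minimal vectorized sketch of this computation (written here in Python with NumPy/Matplotlib rather than MATLAB; the tool choice follows the hints above but is otherwise my own illustration, not the original code):

# Sketch: color each starting point z0 by the root of z^3 = 1 that the Newton
# iterates (2.2) converge to, using the tolerance and grid sizes from the hint.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap

N, eps, npix = 50, 0.25, 150
roots = np.array([1.0,
                  (-1 + 1j * np.sqrt(3)) / 2,      # zeta_2
                  (-1 - 1j * np.sqrt(3)) / 2])     # zeta_3

x = np.linspace(-2, 2, npix)
y = np.linspace(-2, 2, npix)
z = x[None, :] + 1j * y[:, None]                   # grid of starting points z0

for _ in range(N):                                 # Newton map (2.2)
    z = 2 * z / 3 + 1 / (3 * z**2)

dist = np.abs(z[..., None] - roots[None, None, :])
# 0, 1, 2 = converged to zeta_1, zeta_2, zeta_3; 3 = no convergence
color = np.where(dist.min(axis=-1) < eps, dist.argmin(axis=-1), 3)

plt.imshow(color, origin='lower', extent=[-2, 2, -2, 2], vmin=0, vmax=3,
           cmap=ListedColormap(['cyan', 'magenta', 'yellow', 'black']))
plt.xlabel('x'); plt.ylabel('y')
plt.show()

Refining near a basin boundary is just a matter of changing the linspace ranges and increasing N, as in figures 2.1 and 2.2.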

2.2 Answer for problem 141003

Figures 2.1 and 2.2 show the results of our calculations. Note the fractal nature of the boundary between the basins of attraction for each root: as we zoom in, the object appears as a smaller (but distorted) copy of itself. Non-trivial self-similarity [5] is the hallmark of a fractal.
[Figure 2.1: (Problem 141003). Convergence zones for the z³ = 1 Newton map iterates. Color scheme in (2.4), with a 500 × 500 pixel grid. Left: N = 100 iterations, for −2 < x, y < 2. The crosses are the roots ζj. Right: N = 200 iterations, for 0.2 < x < 0.6 and 0.4 < y < 0.8.]
[Figure 2.2: (Problem 141003). See figure 2.1. Further blow ups, with N = 300 iterations. Left: region 0.35 < x < 0.45 and 0.4 < y < 0.5. Right: region 0.39 < x < 0.42 and 0.44 < y < 0.47.]
Sets like this (boundaries between convergence regions of complex analytic iterations) are called Julia sets. The attracting basins are Fatou sets. The sets are named after Gaston Julia and Pierre Fatou, two mathematicians who pioneered the study of complex dynamics; e.g., see: G. Julia, Mémoire sur l'itération des fonctions rationnelles, Journal de Mathématiques Pures et Appliquées, vol. 8, pp. 47-245, 1918; and P. Fatou, Sur les substitutions rationnelles, Comptes Rendus de l'Académie des Sciences de Paris, vol. 164, pp. 806-808, and vol. 165, pp. 992-995, 1917.

The orbits in the Julia set are chaotic. These orbits are, generally, not periodic (but recurrent), and small differences in zn grow exponentially with n (sensitive dependence on initial conditions). However, computing these orbits is extremely hard, as perturbations out of the Julia set make the resulting orbit convergent.

[5] A line in the plane is also self-similar, but it has trivial structure.

3 Coupled oscillators #01 (derive phase equations)

3.1 Statement: Coupled oscillators #01

In this problem we present an example of the process described in §4.1, and consider the coupling of two oscillators, each with a stable, and strongly attracting, limit cycle. The oscillators are very simple, with trivial equations in polar coordinates. This simplifies the analysis enormously, but the principles illustrated here are valid for the coupling of more generic oscillators.

Consider the following equations for two coupled oscillators

ẋj = −ωj yj + λj (Rj² − xj² − yj²) xj + Fj(x1, y1, x2, y2),    (3.1)
ẏj =  ωj xj + λj (Rj² − xj² − yj²) yj + Gj(x1, y1, x2, y2),    (3.2)

where j = 1 or j = 2, and
(a) ωj > 0, Rj > 0, and λj > 0 are constants, with λj ≫ 1,
(b) Fj and Gj are some functions; these are the coupling terms.

Using the fact that λj ≫ 1, write reduced equations for the two phases φ1 and φ2, defined by xj = rj cos φj and yj = rj sin φj, where rj = √(xj² + yj²). In particular, consider the following cases:

1. What form do the reduced equations take when the Fj and Gj are only functions of the variables μ = x1 x2 + y1 y2 and ν = y1 x2 − x1 y2?

2. What form do the reduced equations take when G1 = G2 = 0, F1 = −α (R1/R2) x2, and F2 = −β (R2/R1) x1, where α and β are constants?

Hint. Write the equations in polar coordinates. [6] Then consider what happens in a neighborhood of the limit cycles for the two oscillators when de-coupled, i.e.: rj not too far from Rj. In this context, argue [7] that the dependence on the radial variables can be made trivial.
[6] Recall that rj ṙj = xj ẋj + yj ẏj and rj² φ̇j = xj ẏj − yj ẋj.

[7] Use arguments similar to the one introduced to describe relaxation oscillations, e.g.: for the van der Pol equation. Another example occurs when justifying that inertial terms can be neglected in the limit of a large viscosity.

3.2 Answer: Coupled oscillators #01

In polar coordinates, xj = rj cos φj and yj = rj sin φj, the equations take the form

ṙj = λj rj (Rj² − rj²) + Fj cos φj + Gj sin φj   and   φ̇j = ωj + (1/rj)(Gj cos φj − Fj sin φj).    (3.3)
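(To fill in the step: using the identities from the hint's footnote [6], substitution of (3.1-3.2) gives rj ṙj = λj rj² (Rj² − rj²) + xj Fj + yj Gj and rj² φ̇j = ωj rj² + xj Gj − yj Fj; the ωj terms cancel in the first combination, and the λj terms cancel in the second. Dividing by rj and rj², respectively, and writing xj = rj cos φj, yj = rj sin φj, yields (3.3).)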

Because λj ≫ 1, for rj not too far from Rj, the term λj rj (Rj² − rj²) dominates the radial time evolution and drives rj towards Rj. It is only when rj − Rj = O(λj⁻¹) that the other terms in the radial equation become significant. Hence the radial time evolution leads to rj ≈ Rj, after which the evolution of the phases, to leading order, reduces to

φ̇1 = ω1 + K1(φ1, φ2)   and   φ̇2 = ω2 + K2(φ1, φ2),    (3.4)

where

Kj = (1/Rj) [ Gj(R1 cos φ1, R1 sin φ1, R2 cos φ2, R2 sin φ2) cos φj − Fj(R1 cos φ1, R1 sin φ1, R2 cos φ2, R2 sin φ2) sin φj ].
In particular, for the cases in items 1 and 2, we have:

1a. Kj = Kj(φ1 − φ2), since μ = R1 R2 cos(φ1 − φ2) and ν = R1 R2 sin(φ1 − φ2).

2a. The equations are: φ̇1 = ω1 + α sin φ1 cos φ2 and φ̇2 = ω2 + β sin φ2 cos φ1.

Note that the equations in (3.3) are singular at rj = 0 if Gj and Fj do not vanish there, i.e.: if (3.1-3.2) does not have a critical point at the origin. This does not affect the arguments above.
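As a quick sanity check of this reduction (a sketch in Python/SciPy; the parameter values are illustrative assumptions only), one can integrate the full system (3.1-3.2) for case 2 with a large λj, extract the phase from φj = atan2(yj, xj), and compare it with the reduced equations of item 2a:

# Sketch: compare the full system (3.1)-(3.2), case 2, with the reduced phase
# equations of item 2a; the difference should be small when lam is large.
import numpy as np
from scipy.integrate import solve_ivp

w1, w2, R1, R2, lam, alpha, beta = 1.0, 1.3, 1.0, 2.0, 50.0, 0.2, 0.3

def full(t, u):
    x1, y1, x2, y2 = u
    F1 = -alpha * (R1 / R2) * x2              # coupling terms of item 2
    F2 = -beta * (R2 / R1) * x1               # (G1 = G2 = 0)
    return [-w1 * y1 + lam * (R1**2 - x1**2 - y1**2) * x1 + F1,
             w1 * x1 + lam * (R1**2 - x1**2 - y1**2) * y1,
            -w2 * y2 + lam * (R2**2 - x2**2 - y2**2) * x2 + F2,
             w2 * x2 + lam * (R2**2 - x2**2 - y2**2) * y2]

def reduced(t, p):
    f1, f2 = p
    return [w1 + alpha * np.sin(f1) * np.cos(f2),    # item 2a
            w2 + beta * np.sin(f2) * np.cos(f1)]

T = 30.0
ts = np.linspace(0.0, T, 3001)
u = solve_ivp(full, [0, T], [R1, 0.0, R2, 0.0], t_eval=ts,
              method="LSODA", rtol=1e-8, atol=1e-10)
p = solve_ivp(reduced, [0, T], [0.0, 0.0], t_eval=ts, rtol=1e-10)

phi1 = np.unwrap(np.arctan2(u.y[1], u.y[0]))     # phase of oscillator 1
print(np.max(np.abs(phi1 - p.y[0])))             # small for lam >> 1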

4 Notes: coupled oscillators, phase locking, etc.

These are notes with facts useful for the problems. They are not a problem.

4.1 On phases and frequencies

Consider a system made by two coupled oscillators, where each of the oscillators (when not coupled) has a stable attracting limit cycle. Let the limit cycle solutions for the two oscillators be given by x1 = F1(ω1 t) and x2 = F2(ω2 t), where x1 and x2 are the vectors of variables for each of the two systems, the Fj are periodic functions of period 2π, and the ωj are constants (related to the limit cycle periods by ωj = 2π/Tj). In the un-coupled system, the two limit cycle orbits make up a stable attracting invariant torus for the evolution. Assume now that either the coupling is weak, or that the two limit cycles are strongly stable. Then the stable attracting invariant torus survives for the coupled system. [8] The solutions (on this torus) can be (approximately) represented by

x1 ≈ F1(φ1)   and   x2 ≈ F2(φ2),    (4.1)

[8] With a (slightly) changed shape and position.


where φ1 = φ1(t) and φ2 = φ2(t) satisfy some equations, of the general form

φ̇1 = ω1 + K1(φ1, φ2)   and   φ̇2 = ω2 + K2(φ1, φ2).    (4.2)

Here K1 and K2 are the projections of the coupling terms along the oscillator limit cycles. For example, take K1(φ1, φ2) = sin φ1 cos φ2 and K2(φ1, φ2) = sin φ2 cos φ1. Another example is the one in §8.6 of Strogatz's book (Nonlinear Dynamics and Chaos), where a model system with

K1(φ1, φ2) = κ1 sin(φ2 − φ1)   and   K2(φ1, φ2) = κ2 sin(φ1 − φ2)

is introduced, with constants κ1, κ2 > 0. Note that:

1. In (4.2), K1 and K2 must be 2π-periodic functions of φ1 and φ2.

2. The phase space for (4.2) is the invariant torus T, on which φ1 and φ2 are the angles. We can also think of T as a 2π × 2π square with its opposite sides identified. On T a solution is periodic if and only if φ1(t + T) = φ1(t) + 2πn and φ2(t + T) = φ2(t) + 2πm, where T > 0 is the period, and both n and m are integers.

3. In the Coupled oscillators #01 problem an example of the process leading to (4.2) is presented.

4. The φj's are the oscillator phases. One can also define oscillator frequencies, even when the φj's do not have the form φj = ωj t, with ωj constant. The idea is that, near any time t0, we can write φj = φj(t0) + φ̇j(t0)(t − t0) + ..., identifying φ̇j(t0) as the local frequency. Hence, we define the oscillator frequencies to be the φ̇j. These frequencies are, of course, generally not constants.

5. The notion of phases can survive even if the limit cycles cease to exist (i.e.: oscillator death). For example: if the equations for φ1 and φ2 have an attracting critical point. We will see examples where this happens in the problems, e.g.: Bifurcations in the torus #01.

4.2 Phase locking and oscillator death

The coupling of two oscillators, each with a stable attracting limit cycle, can produce many behaviors. Two of particular interest are:

1. Often, if the frequencies are close enough, the system phase locks. This means that a stable periodic solution arises, in which both oscillators run at some composite frequency, with their phase difference kept constant. The composite frequency need not be constant; in fact, it may periodically oscillate about a constant average value. (A worked example, for the model quoted in §4.1, follows this list.)

2. However, the coupling may also suppress the oscillations, with the resulting system having a stable steady state. This even if none of the component oscillators has a stable steady state. This is oscillator death. It can happen not only for coupled pairs of oscillators, but also for chains of oscillators with coupling to the nearest neighbors.
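As a concrete illustration of item 1 (standard algebra applied to the §8.6 model quoted in §4.1; this worked example is not part of the original notes): the phase difference ψ = φ1 − φ2 obeys

ψ̇ = ω1 − ω2 − (κ1 + κ2) sin ψ,

so a phase-locked state (a fixed point ψ* with sin ψ* = (ω1 − ω2)/(κ1 + κ2)) exists exactly when |ω1 − ω2| ≤ κ1 + κ2. The locked state with cos ψ* > 0 is the stable one, and on it both oscillators run at the compromise frequency φ̇1 = φ̇2 = (κ2 ω1 + κ1 ω2)/(κ1 + κ2); for this particular model the composite frequency happens to be constant.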
On the other hand, we note that it is also possible to produce an oscillating system, with a stable oscillation,
by coupling non-oscillating systems (e.g., the coupling of excitable systems can do this).

THE END.
