So last time we looked at the dynamics, stability, and solution of a linear system x dot equals A x, in continuous time and in discrete time.
The last thing I want to tell you about pure linear systems, without inputs or sensors, is how you can get these systems from nonlinear dynamical systems, x dot equals f of x.
Okay, so if it's not a linear system but a system of nonlinear differential equations, x dot equals some nonlinear function f of x, how do we get an appropriate
linear system that we can then use for control?
Okay, and we're going to demonstrate this on the pendulum system.
So I'm just going to write down the steps, and then we're going to work it out for the pendulum.
So the basic steps are, one, find some fixed points.
Find fixed points; and I'm going to say that a fixed point is a point, let's call it x bar, such that f of x bar equals 0.
Okay, so in all of the possible state space of x in R^n, hopefully there are some points that are fixed,
so that d/dt of x at that point is 0, because f of x at that point is 0.
So those points are fixed points: if I was perfectly at that point, it wouldn't move.
Okay, and so there's tons of these around, so for example in the inverted pendulum case where I've got this pendulum that's swinging up and down.
If I had it perfectly vertical, that would be a fixed point.
If I could get it exactly right and there were no disturbances, then the forces would be completely balanced and it wouldn't move.
And also in the pendulum down case, that's also a fixed point where the system doesn't move.
If you have the system of the sun and the Earth,
There's some equal gravity point somewhere in between them,
closer to Earth, where if you were exactly on that point, the Sun and the Earth's gravity would be pulling equally.
Now, again, this is an unstable fixed point, because if I move, you know, one millimeter
closer to the Sun, or one millimeter closer to the Earth, I would start accelerating towards the Sun or the Earth, okay?
And so, finding these fixed points is the first step.
The second step is to linearize.
And I'm going to tell you what this means,
but basically what this means is I'm going to compute this matrix of partial derivatives, df/dx,
and I'm going to evaluate it at x bar,
okay, and whenever I teach this in class, I always have, you know, some students who are not super familiar with these nonlinear systems, and
they don't know what df/dx is.
So I just want to walk you through what it is.
So let's say I had x1, x2 dot equals f1, f2.
So I want to be super explicit here about what this, this Jacobian is.
Okay, so this is a matrix of partial derivatives.
I'm going to write it out; sorry, let me find a pen that doesn't squeal.
This is going to be a matrix of partial derivatives, and let's actually pick an example.
Let's say f1 is x1 times x2, and f2 is x1 squared plus x2 squared.
in this 2 by 2 case where I have two variables and two nonlinear functions,
then df/dx is just a matrix of partial derivatives.
So the first row is partial f1 partial x1, partial f1 partial x2, and then the second row is partial f2 partial x1, partial f2 partial x2, okay?
And even if you've been doing this for a long time,
it's sometimes hard to remember where the f2s and the x1s and the x2s go,
I have a simple way of remembering this,
which is since I have df over dx, I might want to multiply this by like a delta x.
And delta x is going to have components delta x1 and delta x2.
So they better multiply delta x1 and delta x2, okay?
So that tells me that the columns have the same, you know, x1 or x2 and the rows have the same f1 or f2.
That's just how I remember it.
I remember having trouble remembering this in the past.
And for this simple example, it's easy to compute.
So let's just walk through it.
So this is a matrix and the first term is just the partial derivative of f1 with respect to x1.
So what's the partial of this with respect to x1?
What's left when I take the derivative? An x2.
It's a little dangerous here because in class, people correct me when I'm wrong.
Okay, what's partial of f1 with respect to x2?
Well, I have an x1 left, okay, so there's an x1 here.
And what's the partial of f2 with respect to x1?
Well, it's just 2x1, and the partial of x2 squared with respect to x2 is just 2x2.
Okay, so that's my matrix.
This is called the Jacobian of my dynamics of my right-hand side f with respect to the state x.
This is the Jacobian of this system.
And it's a matrix where each term is a partial of the dynamics with respect to one of the variables.
Now if I evaluated it at my fixed point x bar, I would just literally plug in the components x1 and x2 of my fixed point.
I plug in the numbers and this becomes an honest-to-goodness matrix.
So I'm just going to write this in shorthand as partial fi, partial xj.
And this is a shorthand to let you know that this is just a matrix of partial derivatives, okay, evaluated at x bar.
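Since we just worked out this Jacobian by hand, here's a quick sketch of the same example in Python with SymPy (my choice of tool; it's not part of the lecture), just to check the matrix of partials:

```python
import sympy as sp

# The worked example from above: f1 = x1*x2, f2 = x1**2 + x2**2
x1, x2 = sp.symbols('x1 x2')
f = sp.Matrix([x1 * x2, x1**2 + x2**2])

# Jacobian: rows follow f1, f2; columns follow x1, x2
J = f.jacobian([x1, x2])
print(J)  # Matrix([[x2, x1], [2*x1, 2*x2]])
```

This matches the hand computation: x2 and x1 in the first row, 2x1 and 2x2 in the second.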
Okay, so we went down a rabbit hole, but we have a dynamical system, x dot equals f of x.
It's nonlinear, but it has some fixed points like the pendulum.
So I want to find this x dot equals a x.
I identify my fixed points.
I compute the Jacobian of the dynamics, d f d x, and I plug in that x bar.
And what we're going to find, this is kind of cool.
And I'm just going to erase this for a minute.
What we're going to find is that the dynamics, if I zoom in really close to this fixed point of this nonlinear system, look linear.
Okay, so that's really neat.
And I want to draw a picture to convince you of that.
So this really rests on the Taylor series, of f about x bar.
Let's say I have some dynamics, and I'm going to write this in x1 and x2.
So there's some vector field, and maybe there's some fixed point.
I'm going to draw this like it's a saddle, so there's a stable direction and an unstable direction.
Okay, this is x bar, x bar.
And what I'm going to show is that if I zoom into a really small window,
this little kind of delta x zoom-in around this x bar,
then this system looks like a linear system, and I can write down the linear
system, and the A matrix is going to be this Jacobian evaluated at x bar, okay?
And how do I want to convince you of this?
Well, the first thing I want to do: I don't want to be carrying around a bunch of delta x's.
So let's just change our coordinates so that the origin is at x bar.
So let's just act like x bar is at the origin.
So I'm going to redefine x so it's like x minus x bar.
So x bar is at the origin.
So now this is at the origin.
Okay, and if you don't feel comfortable doing that, then you can actually work with these little delta x's and do everything in those terms, but it's just a change of coordinates.
Okay, and so then what we're going to do is we're going to take this expression here and we're going to say well x dot
near the origin or near x bar,
what I want to do is I want to expand this out in powers of x,
or if you like, in terms of this kind of delta x from my fixed point, which is what I'm calling this variable here.
So this equals f at the fixed point, plus the Jacobian df/dx evaluated at x bar, times x minus my fixed point, plus d squared f dx squared times x minus x bar squared, dot dot dot; you don't have to know what that second-order term is, because it's going to go to zero.
Strictly, the left-hand side is the derivative of x minus x bar, but x bar is a constant, so its derivative is zero; so let's just say that this is x dot near my fixed point, and it equals all of this stuff.
And what we're going to do is we're going to say, well, okay, what is f of x bar? It's zero, because x bar is a fixed point.
And what are all of these higher-order terms?
Well, they're going to be really, really small, because I'm going to choose x to be super close to x bar.
I'm zooming in so that this little delta x, my x relative to x bar is super small.
So all of these higher order terms are going to be really, really small.
And this is going to be basically the dynamics.
And so when I linearize my dynamical system,
when I find a fixed point and I zoom into that fixed point,
I get x dot equals this Jacobian, df/dx evaluated at x bar, times x; and let's call that Jacobian the A matrix of my dynamical system.
So if I'm in here and I have some little position that I'm going to call delta x relative to x bar,
then the dynamics in this little window are going to look like linear dynamics times delta x,
because all of the higher order terms go to zero.
And so what you'll find if you actually compute this Jacobian and plug in x bar is that this is a matrix.
And so this is an honest-to-goodness, like delta x dot equals a times delta x.
And because I'm lazy, I'm usually just going to write this as x dot equals a x.
I'm going to get the linear system I want.
So I want to make sure everyone understands this; I think I went a little fast there.
I zoom in super close to that fixed point.
And what I find is that when I expand the dynamics,
so now my state of interest is some little delta x near the fixed point,
I find that the dynamics of that little delta x near the fixed point are basically linear.
They're given by this Jacobian of f evaluated at the fixed point.
This is just a matrix of numbers at this point.
So I get delta x dot equals a times delta x.
And that is exactly what I want.
Based on its eigenvalues and eigenvectors, it is either stable or unstable.
In this case, it would be unstable.
And I can analyze and control this.
And so that's what we do.
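To make the "zoom in and it looks linear" claim concrete, here's a small numerical sketch in Python. The toy system below is my own example, not from the lecture: it has a saddle at the origin, and the mismatch between f of delta x and A times delta x shrinks as we zoom in, because the neglected terms are quadratic in delta x.

```python
import numpy as np

# A toy nonlinear system (my own example, not from the lecture):
#   x1_dot = x1 + x2**2,   x2_dot = -x2 + x1**2
# Fixed point: x_bar = (0, 0); Jacobian there: A = [[1, 0], [0, -1]] (a saddle).
def f(x):
    return np.array([x[0] + x[1]**2, -x[1] + x[0]**2])

A = np.array([[1.0, 0.0], [0.0, -1.0]])

# Zoom in: as delta_x shrinks, the relative error of f(dx) vs A @ dx shrinks too
for eps in (1e-1, 1e-2, 1e-3):
    dx = eps * np.array([1.0, 1.0])
    err = np.linalg.norm(f(dx) - A @ dx) / np.linalg.norm(dx)
    print(f"|dx| ~ {eps:.0e}  relative error = {err:.1e}")
```

For this example the relative error shrinks linearly with the zoom level, which is exactly the higher-order terms dying off.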
That's what we're going to do next is we're going to take the pendulum, which is an honest to goodness nonlinear system.
But when we find a fixed point,
let's say it's in the pendulum up position, when I find a fixed point and I zoom in really, really close.
Remember the small angle approximation?
If I use that small angle approximation and I'm really close to this fixed point,
then the dynamics look linear if I stay close to there.
And the beauty of control is that if I add control plus BU and my control works,
if I have effective control, then it actually keeps my system close to that fixed point.
It stabilizes that system,
and it keeps the system close to the fixed point where the model is good, where my linear model actually is valid.
So good control can keep your system close to that linearization region that's valid, which is really neat, okay?
There are some cases where this breaks down. Okay, remember, this is a boot camp on control.
It's going to be fast, a little bit loose.
The linearization does not always characterize the dynamics of a nonlinear system around a fixed point.
So there's something called the Hartman-Grobman theorem,
which basically says that if the fixed point is hyperbolic,
meaning that all of the eigenvalues of the linearization around the fixed point have a non-zero real part, then this linear system really does describe that neighborhood well, okay?
So whether the real parts are all negative, all positive, or some negative and some positive:
if all of the eigenvalues of this linearization have a real part that's not zero,
then this linearization works if we zoom in close enough.
If I have eigenvalues that are purely imaginary, plus or minus i for example, so just, you know, sines and cosines,
the linearization could have nice periodic behavior.
And the nonlinear terms in this dynamics that we neglected,
that we truncated essentially, could break that behavior so that the linear dynamics do not faithfully capture the nonlinear dynamics.
So, as long as your system kind of looks like a saddle, or like a spiral with some decay or growth,
as long as every eigenvalue has a real part that's either negative or positive, or some mixture, then this holds if I zoom in close enough to the fixed point.
I just want you to be aware of the Hartman-Grobman theorem, and that if I have something that is of neutral stability, that has just plus or minus i eigenvalues,
so, for example, the undamped pendulum,
if I had certain nonlinear terms in there, it could completely break the linear picture, okay?
So that's enough of the math.
I think the best thing we can do now is actually work this all the way out on a simple system like the pendulum.
And the pendulum's nice because we'll actually try to control it.
So once we have these linearizations, we're going to see if it's controllable.
And then once we convince ourselves that it is, we're actually going to develop a controller and show that you can stabilize this inverted pendulum.
And this is a really cool example.
You can actually build these things, or you can search YouTube for "inverted pendulum",
and you're going to see all these great videos of people who have actually built hardware
based on this math to stabilize an inverted pendulum.
That's actually what the SpaceX and Blue Origin rockets do.
Once they go up, they want to bring the rocket back down and keep it upright.
It's essentially a complicated version of an inverted pendulum stability problem.
Let's work this out on an example.
We have the pendulum, and I like drawing the pendulum, so essentially I'm going to have a pendulum mounted.
This thing might have some mass.
Okay, and again this is not a physics class.
I'm not going to berate you with all the different ways of computing this.
I like the Lagrangian or Hamiltonian approaches because it's really nice for these constrained
problems that have a natural variable theta rather than like x and y.
But I'm just going to write down the equation.
And so the equation is pretty simple.
In this theta variable, it is theta double dot equals minus g over L sine of theta, okay?
So really simple, but nonlinear, okay?
So notice that this is not A times X or A times theta.
This is a nonlinear function of the state theta.
You can also convince yourself that this thing has honest to goodness fixed points at theta equals zero,
or pi, the up position, or two pi, the down position, three pi, the up position, and so on and so forth forever.
But it has two basic physical solutions: theta equals zero and theta equals pi.
Okay, those are the fixed points of this.
and we're going to find the fixed points,
we're going to compute the linearization, we're going to look at the stability, and so on and so forth.
And I also want to add a damping term: minus delta theta dot, okay?
So it's really important to consider that the system almost certainly has some friction, okay?
I'm adding friction here so that this thing doesn't just swing forever; any real oscillation will die out.
Okay, and one of the reasons I really really
really like working this out on physical systems is because we have a tremendous amount of physical intuition about what should happen, okay?
I can tell you everything in the world about math and go through all of the math about how to linearize a nonlinear system around a fixed
In your gut, you know what the pendulum does.
If I take the pendulum about its down equilibrium and I kick it a little, what's going to happen?
Well, it's going to oscillate and eventually it's going to come to rest.
So I know that the eigenvalues, you know,
have a negative real part because it's stable, and they have a plus or minus i term because it has sines and cosines.
If I lift this thing up to the pi position, again, you know what the system's gonna do.
If I kick it ever so slightly,
it's going to completely move away from that position, kind of exponentially moving away, at least for small angles, at least for short times.
So I like this because we basically already know the answer.
We're just going to confirm that the system works.
So I like to write this down as a state space system x dot equals f of x.
Let's say in this case I'm going to say x equals x1, x2,
where the two components are theta and theta dot, okay, because I need theta dot to make this a first-order system.
And so if I write this down as...
ddt of, let's say, x1, x2, remember that's theta and theta dot, that equals, okay, what is x1 dot?
Well, it's just theta dot, so it's x2.
So, and can I just drop the g over l?
Can we just act like g over l is 1?
Let's act like g over l is 1.
We're going to forget that there's really gravity and length, and we're just going to say that g over L is 1.
Let's get rid of the mass.
Let's make it super simple.
I don't like keeping track of all this stuff.
We're just going to make it minus 1 times sine of theta.
So down here I have minus sine of x1, remember theta is x1, minus delta times x2.
So let's just convince ourselves that x2 dot is all of this stuff:
minus sine of x1, minus delta x2.
Okay, so I'm just working through this; it's pretty standard:
taking a system and writing it in the standard x1, x2 form, where the first derivative of x1, x2 equals a nonlinear function.
So now it looks just like this.
We have the formula, okay?
We know the prescription of how to approach these systems.
I don't want you to think about it like it's just plug and chug,
but this is a way of representing the basic physics of the system.
So when x1 dot and x2 dot equal zero, the system stays still.
Okay, so for the fixed point, so step one, fixed point.
Okay, we have x bar equals... well, we have a couple of fixed points.
First of all, x2 always has to be zero
for this first row to be zero.
So x2 has to be zero for this to be a fixed point.
And that's kind of convenient, because that means that this damping term gets killed in the fixed point equation.
So again, when does sine of x1 equal zero? When x1 is zero, or when x1 is pi.
So, x1 can either equal 0 or pi and x2 has to equal 0.
These are the two fixed points of my system.
Now, of course, I also have 2 pi, 3 pi, 4 pi, and so on.
I've got all of those, but these are the two physical solutions corresponding to the pendulum.
And I actually want to draw that.
So this is pendulum down and this one is pendulum up.
Important to remember physically these fixed points have meaning.
So now we have two fixed points.
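As a sanity check, we can verify numerically that f really is zero at both fixed points. This is a sketch in Python (the lecture's tool is MATLAB), with g over L set to 1 and a small delta, as above:

```python
import numpy as np

delta = 0.1  # small damping, as in the lecture

# Pendulum in state-space form (g/L = 1): x = [theta, theta_dot]
def f(x):
    return np.array([x[1], -np.sin(x[0]) - delta * x[1]])

# Check both physical fixed points: f(x_bar) should be (numerically) zero.
down = np.array([0.0, 0.0])    # pendulum down
up = np.array([np.pi, 0.0])    # pendulum up
print(f(down), f(up))          # both are zero up to floating-point roundoff
```

Note that at the up position, np.sin(np.pi) is about 1e-16 rather than exactly zero; that's just floating-point roundoff.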
Okay, now if I had a live class, everyone would be screaming at me: you linearize about x bar.
Okay, so that's what we do now.
We're going to take this system, this x dot equals f of x system.
We're going to compute the Jacobian and we're going to pop in these fixed points and see what the system looks like.
Okay, so what's the best way to do this?
Let's do it right down here, okay?
So, df just these partial derivatives.
Let's do df dx equals a matrix of partials.
Okay, so partial f1 with respect to x1.
What's the derivative of x2 with respect to x1? Zero.
What's the partial of it with respect to x2?
One; there's a one left after you take the derivative with respect to x2.
Okay, this one's more fun.
What's the derivative of this with respect to x1?
Well, I get minus cosine x1.
Okay, so that's kind of cool.
And what do I get when I take the derivative of this with respect to x2? Minus delta.
And we're going to assume that delta is kind of a small,
positive number, okay, and we'll work through in our heads what the eigenvalues are.
Okay, and I'm always a little nervous writing this down, because I never know where the minus
signs go if I don't actually work out the physics,
but we're going to find out pretty quickly based on the eigenvalues of this if we did it right,
okay, because we kind of know what the eigenvalues of these two should look like.
Okay, so now what do we do? We have df/dx, but it's still symbolic; it still has x1s in here.
So the only special thing we have to do now is plug this fixed point into the Jacobian,
and that gives us the A matrix for our system linearized about the down position, okay?
And if we do the same thing with this fixed point, it gives us an a matrix for the system linearized about the up position.
Let's write down the system linearized about the down position and up position.
So in the down position, the A matrix is A equals... all right, what is this matrix if I plug in x1 equals zero?
Okay, what's cos of zero? It's one, so minus cos of zero is minus one.
So I get 0, 1, minus 1, minus delta.
Okay, that's the A matrix for my down, my pendulum, down.
Now, let's try the A matrix for my pendulum up solution, okay?
So here, and let's say A down and A up, just so I don't get confused.
So in my A up position, now I'm going to plug in a pi here.
Well, cos of pi is negative 1, so negative negative 1 is positive 1.
I get 0, 1, positive 1, minus delta.
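If you'd rather not take these derivatives by hand, a symbolic tool will produce the same Jacobian and both A matrices. Here's a sketch in Python with SymPy (my tooling choice, not the lecture's):

```python
import sympy as sp

# Pendulum state-space dynamics with g/L = 1: x = [x1, x2] = [theta, theta_dot]
x1, x2, d = sp.symbols('x1 x2 delta')
f = sp.Matrix([x2, -sp.sin(x1) - d * x2])

J = f.jacobian([x1, x2])    # [[0, 1], [-cos(x1), -delta]]
A_down = J.subs(x1, 0)      # down position: [[0, 1], [-1, -delta]]
A_up = J.subs(x1, sp.pi)    # up position:   [[0, 1], [ 1, -delta]]
print(A_down)
print(A_up)
```

The only difference between the two, as in the lecture, is the sign of the lower-left entry.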
Now, we could fire up MATLAB and compute the eigenvalues of these.
We could totally do that.
I'm going to do it in my head.
Then, ignoring the small damping, the eigenvalues of this matrix are going to satisfy lambda squared plus 1 equals 0.
So lambda squared equals minus 1, and lambda is plus or minus i.
Okay, so this is what we expected.
Our eigenvalues of this are plus or minus i, maybe with some damping.
If I add damping, they'll have a negative real part.
And this is a stable solution, so if I kick it a little, it eventually oscillates and dies out in the down case.
If I say that there's no friction and delta equals 0, then the eigenvalues of this are going to be plus and minus 1.
And remember, if I have any eigenvalue with a positive real part, that plus-one eigenvalue grows like e to the t.
So it grows exponentially, at least in the linear regime.
And this is unstable in the pendulum up case.
OK, so the pendulum up case is unstable.
Now, I think I'll actually fire up MATLAB and show you how to compute these eigenvalues, now that we've gotten to this level.
So let me just clear this off quickly so that we can see the MATLAB window.
And then we will confirm that the eigenvalues of these are what we expect them to be, okay?
But it's a really simple procedure, right?
So if you have some system of interest,
some physical system or some symbolic system,
then essentially what you can do, is you can write down the physics, the x dot equals f of x.
You find the fixed points,
you linearize about those fixed points and you get an honest-to-goodness system,
where if you look at the eigenvalues, it tells you what the system is going to do.
Okay, so let me just fire up my MATLAB and let me clear the screen.
Okay, everyone can see this.
Now, let's pick a small delta.
We've got to actually put in numbers.
I can't do this symbolically.
So say that there's some small friction.
Let's make it delta equals like 0.1.
Now let's say my A matrix, my A down matrix, is equal to 0, 1; negative 1, negative delta.
So 0, 1, negative 1, negative d.
And I have this A matrix here.
This is my linearization.
This is the A matrix of my linearization about the pendulum down position.
So now I hope everything goes right.
And I get the eigenvalues, they make sense.
So I'm going to do eig of A down.
And lo and behold, what I get is basically what I expected, which is that this has a small negative real part.
So it is ever so slightly damped and stable.
And it's got this plus and minus i term here.
So basically I get sine and cosine oscillations of theta and theta dot.
But eventually it's got this decaying exponential envelope given by that small negative real part.
This means I didn't mess up the sign here or here because this is consistent with what we know about the pendulum down equilibrium.
So now let's do the same thing for the pendulum up equilibrium.
So A up is equal to 0, 1; 1, minus delta.
So the only thing that changed was the sign down here.
And now I'm going to look at eig of A up.
And now what I have is I have one stable eigenvalue, negative 1.05, and one unstable eigenvalue, 0.95.
Remember, this is continuous time, so having a positive real part eigenvalue is unstable.
So this system is kind of like a saddle, it is a saddle point.
It's a stable direction and an unstable direction.
And so if I kick this even a little bit, it will go unstable,
at least zoomed in around that up position,
and eventually the other nonlinear terms are going to kick in and it's going to do some nonlinear stuff.
But locally, we can tell what the stability is in the pendulum up and pendulum down.
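The same eigenvalue computation the MATLAB demo does can be sketched in Python with NumPy, in case that's your tool of choice:

```python
import numpy as np

delta = 0.1  # small damping, as in the demo
A_down = np.array([[0.0, 1.0], [-1.0, -delta]])
A_up = np.array([[0.0, 1.0], [1.0, -delta]])

# Pendulum down: complex pair with a small negative real part (damped oscillation)
print(np.linalg.eigvals(A_down))   # approximately -0.05 +/- 1.0i

# Pendulum up: one positive and one negative real eigenvalue (a saddle)
print(np.linalg.eigvals(A_up))     # approximately +0.95 and -1.05
```

Down gives the lightly damped sine-and-cosine pair; up gives the saddle with one unstable direction, matching the physical intuition.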
And what we're going to do in the next lecture, for next week,
is add a plus B u, and we're going to drive this system towards stability.
We're going to stabilize the up position using sensor-based feedback control.
So it's been a long path to get to the point where we're ready to do feedback control, but now we are.
We know what eigenvalues mean.
We know how to get a linear system even for a complicated nonlinear system that we're actually interested in.
And now we're going to add plus B u,
and we're going to show how you can drive the system to stability and make it kind of robust
by using feedback control.