1.1 Teaching Old Motors New Tricks - Part 1: Introduction to Motor Control, PI Controllers, PID Controllers and Intro to Field Oriented Control

Well, good morning. And welcome back for day two of the Industrial Control seminar. My name is Dave Wilson. And I'll be covering the motor control part of the seminar today.
Motor control is my favorite technical topic. I've been doing it now for about 34, 35 years. And one of the reasons I like it is because with a lot of other engineering disciplines there's kind of a finite limit in terms of what you can do. Like if you become a Bluetooth expert, well once you've learned the whole Bluetooth interface standard, well, that's about as far as you can go.
With motor control, there are no limits. It's the biggest sandbox in engineering-- I like to refer to it as. When you think of all the engineering disciplines that have to come together to make a motor spin-- of course, you have power electronics; you have analog, digital, software, DSP, digital filters; you've got dynamics, mechanics, thermodynamics, heat transfer, all these things wrapped together to make that little motor shaft go round and round and round.
And for that reason, I think motor control engineers can be the most challenged in terms of the things that they have to master in order to make the motor spin. But I also think it's fair to say that motor control engineers are perhaps some of the most frustrated engineers that we have. The reason is because you're working on your motor control project-- and how many of you can relate to this?
So your boss comes in and says, well, where are you on the schedule? And you say, well, I've got to go do my stability analysis and do this, that, and the other thing. He says, wait a minute, wait a minute, I'm not asking you to solve world hunger. I just want you to make the motor shaft go round and round and round. That's what, two weeks? Can you do that?
And of course, you understand that to make that little shaft go round and round, all these things have to come together to make that happen. It's just not obvious to the uneducated observer all the things that need to go in to make that happen. So we're going to talk a little bit about that today-- hopefully, give you enough knowledge about motor control that you can at least go out and make some motor spin and have some fun.
Now, obviously, with a topic this big, there's no way I can comprehensively cover it in one day. So I'm going to have to just focus on one piece of it. And that is the control side of it. Most of the stuff I'm going to talk about today is going to be related to algorithm development, field-oriented control, sensorless field-oriented control, things like that, because obviously working for the C2000 group, that's what we are very, very much involved with.
So I decided, what would be an appropriate title for this session? Teaching old motors new tricks. The reason I picked that one is because when you look at motor topologies, most of them have been around since the 1800s. There hasn't been a lot of new development. Perhaps, the stepper motor is the latest change in topology. But by and large, motors have not changed that much over the last say 50 to 100 years.
What has changed, and continues to change at a very, very rapid rate, is how we control those motors. And that, of course, has been enabled by power electronics, reductions in form factor, more sophisticated control algorithms, which, of course, have been enabled by faster and faster processors designed to do these types of calculations.
So I'm going to start right out with going through control systems, kind of building on what Richard talked about yesterday. And as I go through this, hopefully, you'll see what we're doing here. With our three-day Industrial Control seminar, we're starting at kind of a high conceptual level. Richard covered the control theory aspect of it yesterday.
I'm going to bring it down a notch and talk about an application of that control theory, which, of course, is motor control. And then tomorrow, Ken is going to get down to where the rubber meets the road. And that's where you're banging bits and registers to make things happen inside of a processor.
So I want to start off kind of where Richard was yesterday and talk about control theory. But I have to correct some of the problems or the misstatements that Richard made yesterday, just off the bat. And I'll need your help to do this.
Could you tell me what the letter is that follows g in the alphabet? H, it's not-- "haech"? [LAUGHTER] It's H. Hm? OK. Good, very good, you got that correct.
Number two, what is that little thing called, it's like an embedded computer, that can run code, it can run instructions? It's called a micro? Processor, not "pro-sessor". OK, good. So we've already corrected two problems from yesterday's seminar. Yes, Richard. OK, you're welcome.
[LAUGHTER]
I've been doing this for quite a while now. And at least, I got you to stop saying zed. And now, he says z, because the zed transform just-- it just didn't cut it. You got to say Z transform. Anyway, there's probably a few other ones that I could mention, but I'm going to let Richard go by relatively unscathed.
"Pay-tent", I forget-- that was another good one. Yes, "pay-tent", I love that. We still got to work on him for that one. Oh, I'm sure. By the way, of the three of us, I'm the only one that really talks without an accent. Because you'll notice right away that Ken is from "new-joisy", the "gaw-den" state.
Let me-- let's try to reel this back in again then. I like to start this session with a true or false question, just to kind of get you involved in the discussion. And by the way, I like very interactive discussions. Don't feel like you have to wait until a question or answer session. If something doesn't make sense or if you just want to make a comment, say, Dave, that's not true, just raise your hand and just say it. Let's be very interactive here.
And in order to facilitate the mood for doing that, I want to start off with a true or false question related to control systems. Now, think about this. I don't want you to answer right away, but just kind of mull it over in your mind. And then we're going to poll the audience and see what the correct answer is.
Control systems need feedback in order to function properly. True or false? So think about that a little bit. And now, let's solicit some feedback from you guys. Is that a true or false statement? It's false. Who said that? All right, why do you say that's false? Hm?
You can control something and run an open loop.
OK, now is that really a control system then?
You're controlling the output. You just don't know what that output--
OK, you're controlling the output, but would you say that is truly a control system? Oh, you don't know?
I'm just a software engineer.
[LAUGHTER]
How many are software engineers in here? Let's see your hands. How many are hardware engineers? We got a good even split. I tend to be a hardware engineer. So as soon as I hear somebody say they're a software engineer, my eyes just glaze over, and I can't relate to them.
OK, but no, that's a thoughtful answer. I appreciate that. What about some other opinions here. True or false? Control systems need feedback in order to function properly. Surely we've got other opinions in here.
When you talk about control systems, you're talking about loop. You're talking about something that's-- You're talking about three basic elements, right? A reference, a feedback, and an output.
OK. So you'd say, if it doesn't have feedback, it's not a control system.
Well, yeah.
OK, I think you just did. [LAUGHTER] All right, somebody in the back there.
Would it also depend on what properly is?
Very well could. What do you think properly means? Or what-- some suggestions.
In order to turn on and off, for example, or control the motor, whether it's properly working is up to what the performance requirements are. If I don't care that it turns or whatever, I'm just controlling the power to it--
Hm, OK.
It's just not feedback.
There's a light switch on the wall over there. I can control whether the light comes on and off. And it does it properly. Is that a control system? Somebody from the front here.
Oh, I guess my thought would be is that I can see, for example, like I've built myself personally hobby CNC machines. And you use stepper motors. And I would say, yes, in a closed system. However, they are open loop. Otherwise, you might miss a step. I have no feedback to know that I've actually missed a step.
Interesting.
Depending on in the open design or the motor-- the motor torque to be that. So yes, I can control them. I can return to a certain location. I can do all that, so I have complete control over them, but I have no feedback. I can't guarantee that it won't do it.
OK. OK, but you would say that that's still a control-- you're controlling it properly, and it is a control system. OK. Other opinions on this. I like to do this, because I like to get you thinking about, what are we talking about when we talk about control systems and control theory.
I would say false.
You would say false.
--that a control could be already contained within the design of the system with the feedback.
OK, I think that's kind of echoing what some of the other people said as well. Well-- yeah, go ahead, Brett. You had a comment.
Yeah, I'd say false. I would think that you need to be observing something, whether you do a feedforward kind of analysis or a feedback or do some sort of sensor to say, I'm at, for instance, a point in my linear motion or not, that's kind of a feedback kind of mechanism, I guess, but not necessarily have to just be feedback.
You introduced a new word in this whole discussion-- feedforward. I mean, Richard talked a little bit about that yesterday. I want to key off of that, because that's going to lead me to where I want to go in this discussion. I like doing this, because it gets a lot of audience interaction going.
In fact, I'll tell you a funny story. When I was in Detroit and we did this seminar, it was pretty much divided for some reason on both sides of the room. And next thing I know, they're arguing with each other. I'm completely out of the loop at that point. I'm out of control.
I'm trying to get them to say, come on, let's go to the next slide. In fact, during break, they continued it out into the hallway. I thought they were going to start a fight out in the hallway about what this actually is.
The point is, the way I've worded the question, I'm kind of setting you up. It's very ambiguous. What does properly mean? What are we talking about when we talk about a control system?
And these are things that, when I took control theory-- this is back in the '70s-- I opened up my control theory book and right away I see a system. I think it was an example of a missile tracking system with some feedback. And then you turn the page. And there it is, G over 1 plus GH. And then you go to the next chapter, and it's about eigenvectors and eigenvalues.
I went through the whole course, never even heard the word feedforward. So you can imagine my surprise when I get out in the industry and people are doing control systems without any feedback. Like, that can't work, can it? Or can it? Let's look at this.
Let's say that I have a process. We're going to call it G1. Now, this is unadjustable, but it is perfectly predictable. Now, if I want to design a controller such that what I get will equal what I want, do I have all the information that I need to know if I know the transfer function of this?
Let's say that this is an amplifier with a gain of 10.00000. It doesn't matter what day of week it is. It doesn't matter what the temperature is. That's what it is.
Can I design a controller at that point to guarantee that what I get will equal what I want. The answer is yes. And Richard talked about this yesterday. All I need to do is take 1 over x of that-- in this case it would be 0.1-- and then whatever I have coming in here gets multiplied by 0.1 times 10. Well, what do you know, it's exactly the same thing that I wanted on my output.
And this is an example of feedforward. Now, feedforward works really good for systems that don't have any outside perturbations or things that affect the system. What happens if you start getting into a problem like this, where you have a disturbance that's coming in and affecting your output? Now, can I say that I have enough information to guarantee that what I want will equal what I get? No, because I don't have enough knowledge about that disturbance.
Now, you could argue that if you did have enough knowledge about the disturbance itself you could still design a feedforward system without feedback. I think it was-- trying to remember which French philosopher it was. I think it was Laplace that said, if you give me the initial velocity of every particle in the universe, I will predict the future. Well, that's an example of feedforward thinking, that I know everything about everything. I know all the perturbations that could affect my system, everything else.
Technically, you could argue you could do that. Now, that might be a pretty big equation in that case. But he's arguing the point that, why do we need feedback. But if you don't know enough about the disturbance, then we have to implement a new structure, taking a totally different approach, where we take the output, we feed it back and compare it to what we want, and we generate an error signal, which is our difference.
So we have two competing philosophies-- feedforward and feedback. And there are examples when we want to use one or the other, or in many cases, in tandem. You might want to use them together to get some really nice performance out of your system.
Now, when I do this, here we have the familiar G over 1 plus GH, where H is my feedback gain, but in this case, I've just chosen to represent it as 1 or unity. So G over 1 plus GH multiplied by what I want is going to equal what I get. So let's take G, which by the way is the combination of both of these gains, and let's set it to 1.
What can you tell me about what I will get compared to what I want if G is equal to 1? 1/2, right? Is that a good control system? If you want 1/2, that's a good control system, right. Maybe.
What can you tell me about the control system and what I get compared to what I want if G is equal to 100? Solve it now. 0.99 or something like that. So we're even closer.
What if G is equal to 1,000? Or a million? Well, we just keep getting closer and closer to what we want. So from this simple little exercise, we can conclude that high gain is always a good thing, right?
Some people are out there kind of laughing like, wait a minute, where am I going with this. No, high gain isn't always a good thing. Why is that? It's because Mother Nature plays two very cruel tricks on us.
Number one, we realize that every gain is really a function of frequency. If you go out high enough in any natural system, any system in nature and look at its frequency response, it will always have one more pole than it has a zero. It's a simple way of saying that everything eventually rolls off with frequency.
And it doesn't matter if it's the parasitic capacitance between me and the back wall, at some point, the frequency response is going to roll off. So we can't really have high gain for all frequencies. That's the first nasty trick.
But the second one is even more insidious than that. Every real pole that we encounter introduces 90 degrees of phase shift in our signal. Now, why does that cause problems?
Well, let's say, for example, that you're running along with your frequency response at 100 dB or whatever, you hit your first pole in your system. So now, you're coming down at 20 dB per decade. How much phase shift do you have? 90.
Now, you hit your second pole. They're coming down at 40 dB per decade with a phase shift of-- 180. You keep coming down. And finally, you get to 0 dB, which is the unity gain point. Now, right at that point, you have unity gain and 180 degrees of phase shift. Let's see what happens.
Here's my signal getting injected into my system. It comes out the other side of my system the same amplitude, because it's unity gain, but with 180 degrees of phase shift, which means it's inverted, right? That signal runs back around here, hits the minus input of this comparator, it's inverted again. So now, I'm back with exactly what I started with. And it just keeps on going. It can sustain itself. Congratulations, you've built an oscillator.
Now, in most cases, that's not what you're trying to do. So we have to find ways of stabilizing our system. And Richard talked about this yesterday of how you have to look at your poles and zeroes of your system to ensure that you don't get oscillations, because in most cases, that's an undesirable response.
So we can conclude that high gain and long phase shifts don't go well together. High gain and high phase delay are just not compatible. Let me give you a real world example of that.
Now, to illustrate this, I'm going to have Richard taking a shower. He's already getting worried back there. Richard, have you ever seen this slide yet? Have you ever taken a shower? [LAUGHTER]
So Richard steps into the shower. What does Richard want? Well, he would like to have some warm water. So what does he get when he first turns the spigot on? He gets cold water. He compares that to what he wants and says, that ain't what I wanted. And he responds to it with a gain.
Now, if you know Richard's personality, he's a very high gain individual. So what he does is he says, I'm going to turn that knob all the way up to hot until it saturates the system. He can't turn it on anymore.
The problem is, there's a delay in your system. It takes a while for that water to get all the way up through the pipe. So by the time it does get up to where it's coming out, what does he have? He doesn't have warm water, he has hot water.
So once again, he says, well, that's not what I want. He says, I'm going to respond again with high gain. He turns it all the way back to ice cold. Takes a while for that water to get up. And then he has ice cold water again.
Now, after about an hour of doing this, Richard finally realizes, I've got so much phase delay in my system that I cannot respond with that high of a gain. So he starts reducing his gain in his system. In other words, if it's not exactly what he wants, respond with very gentle gain, very low gain, until finally this system stabilizes.
So you can see, even in scenarios in real life, you're dealing with this kind of stuff all the time. Your brain just automatically figures out how to stabilize the system. Well, in control systems, you have to do that either with the math of the system, looking at the poles and the zeroes of the system, or some other way. But again, the point I want to make, phase delay in your system and high gain are incompatible.
That's why, for example, you very rarely see FIR filters used in control loops. FIR filters have very, very long phase delay. In most cases, when we design a control loop, we focus on IIR filters or something else with manageable phase delay, so that we don't create a very, very long time delay from the input to the output. So I guess the moral of the story is, don't take showers with Richard. That's what I've kind of concluded at least.
So going back to feedforward, how do we design a feedforward compensator? And the answer is, going back to our familiar statement that we made today and yesterday, we take whatever the transfer function is of our plant and we do 1 over x of that. And the thought "pro-sess", "pro-sess"-- gosh, you've got me saying it now-- the thought process goes something like this. In order to achieve a desired output, we need to find out what stimulation signal is required at the plant input, right here, to get that output. If we know the transfer function of the forward direction, I can find the transfer function looking backwards-- in other words, 1 over x, or 1 over G of s-- and that becomes the transfer function of my feedforward filter.
So in this case right here, I have a simple integrator with a gain of 2. So if I know that this is the signal that I want to get at the output, and it's this x squared function, I need to supply a linear ramp, so to speak, so that comes in here and gets integrated to create the desired effect on the output. So that's how we do it.
And it works with all kinds of different transfer functions. Richard did point out yesterday, there's some cases where you can create some poles that you weren't expecting when you do that. But by and large, that's the way that we achieve this.
Let me give an example of how I've used feedforward to solve a problem that's very common in motor control systems. And that is with PWM modulation, what do you do if you don't have a regulated bus? And this is very common in industrial applications. Because when you're trying to build a, let's say, a 100-horsepower motor drive, you just can't afford to regulate the bus.
So what you do is you basically run it through a front end rectifier stage. In this case, I've chosen just a single phase. But you can also have three phase versions of this.
And then you put it into DC bus, which has a huge bus capacitor on it, maybe up to 10,000 microfarads or something like that. And what happens is when you are drawing current on your output, you can actually see this. You can see that the bus will actually have this ripple effect as a result of the fact that it, well, simply, it's unregulated.
Now, that ripple will affect the output voltage of your system. So here, we have a situation. I have a desired voltage that I would like to put on one of the phases of that motor. Here is what the plant looks like. Now, what I've done is I've gone through and just described what the modulation process looks like.
So first of all, I take whatever my average bus voltage is. I divide by that to create a duty cycle. So in other words, let's say that my average bus voltage is 100 volts. If I want to create a 90-volt output, well, it's 90 divided by 100. That'd tell me I need a 0.9 duty cycle.
I then feed that into a pulse width modulator, which we'll assume right now has a one-to-one transfer function, to create the actual pulse widths with the desired duty cycle. Now, this would be uncompensated. But what happens is, in order to recreate-- or I should say, to create the output voltage, that gets multiplied by my bus voltage. So if I have 0 to 1 right here and it's a 100-volt bus, that means this signal is going to be 0 to 100, with a 90% duty cycle.
The problem is that bus level, as a function of time, is rippling. So what you will see is there's going to be this low frequency ripple riding on your PWM envelope. And that means your bus voltage, or your phase voltage, is going to be affected by that.
So how could we solve this problem? Well, one solution might be to take this average output voltage right here, run it through some kind of integrator or RC filter, run it back in here, and compare it to our desired voltage, and say, well, that's not what we wanted, and then compensate it that way. The problem is, now you've got poles in your transfer function in your loop. And you can now, since you've introduced phase delay, you can get into stability issues.
In this case, a much better solution is to use feedforward. And it goes something like this. If my transfer function is the V bus divided by-- or V bus as a function of time divided by the average V bus, do 1 over x of that. So this is now my feedforward compensator-- V bus average divided by V bus of t.
In order to design this network-- I know what V average should be, because that's what my system design requirements are-- what I don't know is what the real time bus ripple is. I can measure that directly, run that into my equation here in real time, solve this expression, and then run it into this. And what I get on the output now are compensated PWMs.
I actually did this. Here was an example of the motor current taken with a bus that had severe ripple distortion on it, with no compensation. And you can actually see that that does, in fact, affect the current waveform.
But then when you use feedforward compensation, it really cleans up the waveform very nicely. If you look at your PWM signals, they look terrible. They've got all these little cusps and spikes on them and everything else. But when it's combined with the actual bus voltage, it straightens everything out, and you end up with exactly the voltage waveforms that you wanted.
So this was a case where the process is well-defined. There's no disturbance associated with this, practically speaking. It was a great candidate for feedforward. And this is, in fact, the way that most people do it in industry, if they want to solve the problem of bus ripple. They don't worry about feedback and closing the loop or anything like that. They just implement a feedforward controller, as shown here.
So which is best? We have feedforward versus feedback. Look at both of these expressions right here, both of these diagrams. Here's the feedback philosophy, where we have some system dynamics.
In order to get the actual trajectory to equal the desired trajectory, we try to address that issue by jacking up our gain as much as we can. And of course, while we're doing that, we're flirting with stability issues, because the higher you push the gain against whatever phase delays are in your system, the more you're going to start seeing oscillations or instabilities. So that's the way we try to get the actual trajectory to equal the desired trajectory.
Now, let's look at the feedforward approach. The way we get the actual trajectory to equal the desired trajectory here is we let the feedforward system do the heavy lifting for us. And we may have a little bit of feedback in here, just to handle drifts in the system over time and things like that, which we can compensate with a very low frequency feedback loop that you don't have to worry about stability issues with.
Now, what's interesting about both of these approaches, these diagrams are exactly the same diagram. Wire for wire, connection for connection, these are exactly the same diagrams. They're just twisted around differently to reflect different design philosophies.
When would you use feedback typically and when would you use feedforward? If you've got a situation where you have lots of perturbations, external disturbances on your system, that's when feedback works the best, because feedforward just can't handle that. But if you're doing something where you're trying to follow a desired trajectory, for example, and you don't have a lot of external influences on your system that are affecting the output, well then you might want to consider using a feedforward network.
And we're going to see a little bit later, in some of our field-oriented systems-- in fact, we're going to design some systems together as some lab exercises, which actually use both, because you can take advantage of the benefits of feedback in conjunction with the benefits of feedforward to make some systems that really, really respond well.
Well, I want to review the PI controller. Yesterday, Richard talked about this in the context of the PID controller. In most cases for motor control, we don't use the D term, especially if we're using cascaded design techniques. And I'll tell you what that is in just a minute.
But in most cases, what we have is we have a situation where we have an integrator and a steady state gain in the same topology right here. We have a commanded value, a measured value. We generate the error signal. The error signal is split up into two paths, straight gain. And then this is integrated with an integrator gain. And then the outputs are combined.
Now, this is the parallel PI control topology. This is the one that I think most people are familiar with. Just because it's intuitive, you can actually see what's going on here. And if you look at the transfer function of this particular system, it looks something like this.
So if I adjust the KP term, what it does is it adjusts this level right here of the Bode plot. And if I leave my I term the same, what it's also going to do is adjust the controller's zero, because obviously, the higher this gets, it's going to intersect at a different frequency. So as this term goes up, the controller's zero is going down.
Now, if I adjust the I term and leave the P term alone, it's going to be adjusted like this. And that's going to cause the controller's zero to go up. So what we see is this frequency right here is actually a function of both the KP and the KI terms. Well, this zero is actually very important to us, because when you're designing a PI current controller, for example, we would like to be able to place that zero so that we get pole-zero cancellation in our closed-loop frequency response. So adjusting one of these terms independently can all of a sudden start shifting our zero and throw off our frequency response.
That's why a lot of people prefer to use what's called the series PI control topology. Has anybody seen this one or used this one before? A few people have. Now, in this case, the error signal comes in. The first thing that happens, it gets multiplied by a steady gain, which is called Ka. And then that gets fed forward to the output. But it also goes through the integrator. And then they're recombined like that.
Now, if you look at the Bode plot, if you look at the frequency response, exactly the same frequency response. We haven't changed that. And you can actually relate the series form of the PI controller to the parallel form, as shown here. So I can derive one from the other.
So why would I want to use a series PI controller versus a parallel one? The answer to that question is because now I directly control the zero. I have independent control of where that zero is. And that's going to be a benefit to me as we design current loop controllers.
So in this case right here, Ka, it's going to adjust the amplitude, not of just this portion of the graph, but the entire graph. That's the nice thing about it too, because this is now in series with my total gain in my system. So when I bring that up, everything else in the loop that's part of that closed loop gain will also go up. It's not just adjusting one part of it.
And then Kb-- Kb is the zero. It's exactly equal to the zero. So I can set that independently wherever I want that to be.
So when I put that in a cascaded control structure, as shown here, when I talk about a cascaded control structure, I'm talking about where you have one loop embedded inside of another loop and maybe even embedded in a third loop. Here's an example where I've got a motor and I want to control the current. If you look at the transfer function of the motor windings, you'll see that you don't really need a D term, that a PI controller is perfectly adequate for doing that.
And the reason is because you've only got one real pole in your motor winding. It's an R/L time constant, essentially. So you're going to have 90 degrees of phase shift. You don't need any phase lead, like the D term gave us.
When Richard did that yesterday and he showed when you increase the D term, it gave you phase lead, we don't need phase lead. So there's really no need to have a D term. The PI controller can control it just fine.
Now, this is the commanded current. This is the measured current. If I want to design a speed loop, how would I design that? Using this structure right here, can I alter it in such a way that I can also control the motor speed? And we'll see that the answer is yes.
We embed this torque loop inside of the speed loop as shown right here. So now, we have something on the motor-- some feedback that's giving us velocity information. We compare the measured velocity with our commanded velocity to generate an error signal. And once again, we just need a PI controller to do this. And the output of the PI controller for the speed loop is what determines what the commanded input will be for the current loop.
And when you think about that intuitively, it kind of makes sense. Because if my speed is too low, what do I want to do? I want to generate more torque, so that the motor speeds up. Or if the speed is too high, well, the other way, I want to generate less torque or maybe negative torque to make the motor slow down.
And you can go through and do the stability analysis of this. And that's another reason people like cascaded loops, because the way that you do it, you start with the inner loop. You do the stability analysis for that, get that all set up the way you want it in terms of its open loop parameters.
And then for every loop that you cascade around that, you now use the closed loop transfer function of this treated as just one closed loop expression. And that now becomes part of the open loop expression for your velocity loop. And then once you get that stable and working the way you like it, you can just keep moving on out into the position loop.
So this is just an example of how we can actually cascade different control structures to achieve different outcomes or different effects.
What's the requirement for the bandwidth relationship between different loops?
Question was, what is the bandwidth requirement relationship between the different parts of the loop here, which is a great question. In this case right here, I need to have my current loop operating at a much higher frequency than either the velocity loop or the position loop. So your bandwidth is going to be set highest right here. And the reason for that is, at least with a motor control system, the things which affect this current loop are related to the poles of this plant right here. Typically, your R/L pole is going to be much higher in frequency than the mechanical poles in your system. So this has to be a high enough bandwidth to deal with that.
And your velocity loop, the things that affect your velocity loop are things like inertia, torque on your motor. That's also part of the transfer function. But specifically, the inertia of your system, the mechanical poles of your system, are much, much lower. So you can usually design these to be lower frequency and then the position loop.
Usually what you do in a digital control system is you run the position loop and the velocity loop at the same sampling frequency, at a lower bandwidth, while this one runs at a higher bandwidth. So for the systems that I've designed, for example, you may have this running at a 10-kilohertz sampling frequency, while the velocity loop and the position loop run at a 1-kilohertz sample frequency.
And again, it depends mostly upon the mechanical poles and the inertia of your system. But in most cases, that pole is going to be way, way lower than the electrical time constant or the electrical pole of your motor. Does that answer your question?
Well, let's take a look at how we design the current loop controller using the series PI structure. In this case, I've represented my motor winding as shown here: a series L and a series R. And then I have a dependent voltage source. This is my back EMF voltage-- and we're going to talk about this later today-- which is equal to the speed of my motor times some constant, which is referred to as the back EMF constant of my machine. So obviously, as the speed goes faster, this voltage source right here is going to get higher and higher.
I'm sampling the current in this loop either through Hall effect sensors or shunt resistors, any number of different ways to do that. And I feed it back around here to be my measured current. So if you go through the analysis of this system in terms of trying to optimize it, how do you get the most stable response without a bunch of peaking in your control loop?
Here's an interesting thing that you'll discover. Let's start with Kb first. If you set Kb-- remember, Kb, that's the zero right? Kb is the zero in radians per second. If you set that equal to R/L-- in other words, 1 over my time constant of my motor load or my motor winding-- guess what happens. Obviously, there's a pole zero cancellation, and you end up with essentially a first order system.
Now, if you're interested in all of the nauseating math that goes behind that, I've actually written a series on my blog site. It's at ti.com/motorblog. And I've got a 10-part series on how to make your PI controller behave. And in there, I show all the math of what happens.
And if you do this, if you make this substitution right here, you will cancel out the pole there and end up with a first order equation, which is nice, because first order equations, I mean, they just have a simple roll off. There's no peaking or anything else. Question?
What's the URL of that site?
It's www.ti.com/motorblog, all one word.
If you don't make that substitution, you end up with a second order system. And of course, when you have a second order system-- as Richard talked about yesterday-- you now have the opportunity for complex poles. And that's exactly what you end up with.
You end up with those poles getting close to, in some cases, the j omega axis. And you get this very peaky kind of resonant effect that you, in most cases, don't want. Again, another reason why people like to use the series form of the PI controller versus the parallel form, at least for controlling currents on motors is because of this effect right here.
What is Ka? Well, it turns out Ka sets the bandwidth of my system. Ka divided by L-- that is my bandwidth. So if I know the bandwidth I want and I know L, then I can solve for Ka. And then you've designed your PI controller.
As Richard talked about yesterday, from a generic point of view, PI controllers have been done iteratively and somewhat empirically. But for this particular application, when you're talking about a motor winding with your current controller, there really is no ambiguity to it. If you don't have this expression right here, you're going to get a response that is second order and will have peaks in it in its response. This is, in fact, the way that we tune the PI current controllers at TI for all of our field-oriented systems.
What about the velocity loop? Well, this is where it gets really interesting. And that's what took up the bulk of my 10-part series on tuning PI loops, because now we have a system which is a lot more unpredictable, has poles that are much lower in frequency. And in many cases, the way we synthesize the velocity signal, if you've ever looked at the different velocity synthesis techniques, you're probably going to have to filter that somehow anyway, so you've got another low pass filter right here in your feedback loop. And all these things come together to make a real nice hairy equation with lots of degrees of freedom.
Well, again, if you go through and read the series that I wrote, assuming that what you're really interested in is adjusting the damping factor of your velocity loop-- just controlling the damping response of that-- once again, you can boil it down into a very confined set of constraints, which you can deal with. And you end up with these expressions right here. And I'm still using a series PI controller for the velocity loop. But to avoid confusing its terms with the PI controller for the current, I chose to use Kc and Kd in this case.
So here is what the expression is now. Kc should equal this expression right here. Tau, what is tau? Tau is just my filter pole time constant for my velocity feedback signal. And then Kd is equal to this expression. And in both cases, we have this other little term right here, called--
What is that? That's not sigma. It's delta? Thank you. I never learned the Greek alphabet. So I'm still struggling with that. To me, it's just a symbol.
So we have delta here then. What is delta? In this case, that is the damping factor. So what you can do is adjust your damping factor. That's the only thing really you have to tune in your system. If you know what your low pass filter pole is right here, if you know what your system inertia is right here, if you know the number of poles in your motor and also your rotor flux, you can plug all those in, and this is the only knob then that you have to turn to adjust the damping on your velocity to be whatever you want it to be.
There was a comment-- oh, the other comment I was going to make here is, a lot of times I get feedback on my blog, people saying what is the rotor flux? How do I find what the rotor flux is? Actually, it's easier than you think.
Assuming that you're using SI units, metric SI units, it turns out that the rotor flux in webers is exactly equal to your back EMF constant, which is in volts per radian per second. So if you can find that, then you can go ahead and plug it in right here and use that same value for the flux.
So again, we make some assumptions-- that we're trying to minimize the amount of peaking, because in most cases, peaking is not a good thing. We plug all those in. And we can end up with this expression right here.
For a permanent magnet-- let me see right here-- there's another term right here called K. We have to plug these values in for K. If this is a permanent magnet motor, this is the expression; for an induction motor, this is the expression.
I'm not going to bore you with all the equations, because I figure if you're interested in that, you can read it yourself on the blog site. But I just wanted to make you aware that tuning your PI loop for a motor control system can be deterministic. We can't make that statement in general for PI control loops. But when we're dealing with motor control systems, we can.
How will these expressions change if you're using a parallel form of PI controller loop?
Then you would have to go through and do that substitution, where Ka is equal to KP and Kb is equal to KI divided by KP, I think it was, and solve that again. And what you find is, as you're changing the K's now for your parallel structure, you're accidentally moving the zero around. And that's usually not an effect that you want. You'd like to keep that zero fixed, because it's working with some poles in your system to provide an optimal effect.
So that's why, with the parallel form, an empirical approach of just trying different values for KP and KI is pretty much what you have to do, because changing them is always going to be moving the zero in your PI controller around, which is not good. But it can be done. Does that answer your question? Any other questions about this?
This just shows an example-- it's kind of hard to see that, I know, with the red color there. But you can see different values of damping factor that I've chosen right here. The yellow line-- that is the frequency of my velocity filter pole. And you can see, as I get damping factors which approach unity-- in fact, if I have a damping factor of 1, that will turn into an oscillator, because it'll land right here on top of the velocity filter pole, and it'll just ring.
So damping factors usually in the area of, I would say, 3 to 10-- that typically gives the kind of response you're looking for. Yeah, it's really hard to pick that up, I know, with the colors. There we go. Now, you can see them a little bit better. And this right here also shows the time step response for different values of the damping factor.
So again, values between like 3 and 10 typically give a fairly good response. If you start getting out into damping factors much greater than that, you're getting into a very, very overdamped kind of response. So again, it's a way for you to tune your velocity loop with just one knob, which in most cases is what people prefer versus empirically just trying to put in different values of KP and KI.
Well, as we talked about yesterday, Richard mentioned the fact that we've got an integrator in a PI controller, obviously. And you run into problems where you can end up with what's called a windup effect. And it gets the name from the fact that if you take a watch spring and you wind it up really tight and then you let it go, what's going to happen is it's going to unwind.
But then once it gets back to the rest position, it's got like some inertia associated with it. And it's going to keep winding up in the other direction. And then it'll kind of go back and forth like that.
That effect is called windup. And the problem is actually created whenever you have very high frequency transients in your system. And I'll give you an example here.
This is my input command right here. Let's say I have a step function input command to this PI control loop. This is what I want the output to do. But because of the poles in my transfer functions for my plant, it can't do that. It can only respond this fast.
Well, look at what's happening here. The difference between this graph and this graph, in this area right here, that's all the error that I'm essentially integrating up. That's actually getting put on the integrator output.
When I finally do get to the commanded position, now my integrator has all this excess-- if you're looking at it in terms of voltage, it has all this excess voltage on it. You can tell I'm a hardware guy, because I'm used to designing old PI control loops with actual integrators in them. And what we have to do now is we have to dump that excess voltage.
But the only way we can do that is to create negative error now. So we end up overshooting, until finally, all this error right here gets dumped back. And we're back into the response that we wanted.
Well, if you don't have some kind of protection against your windup, you end up with systems that are very, very underdamped. In fact, it can actually overshoot wildly, and then undershoot, overshoot, all because of the windup in my system. So how do we deal with windup? How do we affect that?
There's different ways to do it. One is what's called integrator switching. And I've actually used that before. And the thinking behind that goes, why do I want the integrator in my control system to begin with? It's because I'm trying to get rid of steady state error.
Well, if I'm just starting my move profile, am I worried about steady state error? Probably not. It's only when I start getting close to the desired trajectory or the desired position or the target that I'm really worried about the integrator. So instead of letting it iterate through the whole trajectory of the move, only switch it on at the last part when my error signal gets below a certain value, and then use that as kind of like the landing lights to bring it in all the way.
That can work. The problem with that is, depending on when you do the integrator switching, you can actually create discontinuities in your current waveform. So if you're dealing with a situation that can have lots of different torque variations on it, from zero torque to maybe full torque, integrator switching is going to create discontinuities in your control waveform. So not a lot of people use that.
A much preferred technique is integrator clamping. And in this case right here, we provide some upper and lower limit on the integrator in terms of what it can actually-- what the value can be. And there's all kinds of different ways to set this.
I read recently an article that I found online. Apparently, somebody had done their master's thesis or something on all the different ways to manage integrator windup. I think there was like 15 different techniques that he had in his paper, probably even more.
But the most common ones involve some type of clamping, as shown here, where you set some upper limit or some lower limit. And in this case right here, what we're doing is we're saying-- well, not in this particular example. But one of the techniques that people use is just to take whatever your saturation limits are for your system and use those as your limits for your integration.
So if this is a current controller, for example, the output of my current controller is voltage. And if I'm running with a 24-volt power supply, for example, well, then I would have a hard limit where my system saturates at 24 volts. So why have the integrator continue to integrate up into hundreds of volts when, in fact, I've already clamped my system? Let's just go ahead and put those limits on the integrator. And that works really well for a lot of applications.
A better technique that I have found is what's called dynamic clamping. And that is shown here, where we actually set the limits of the integrator based upon what my P term signals are. So what I'm going to do is to look at the P out, which is right here, and use that as part of my equation to determine how to clamp it.
Now, the thinking behind this goes something like this. If I've got a big error in my system-- in other words, the difference between my commanded value and my measured value is significant-- so significant, in fact, that the P term alone is capable of driving my system into saturation, then why do I want the integrator doing anything? To drive it harder into saturation? That doesn't make any sense. So by looking at the P output term, you can say, well, if the P gain is high enough that that signal has already put the system into saturation, figure that into the equation right here, so that we dynamically limit the integrator based upon what the P output term is.
And I've got some examples. I don't know if I have them in this slide set. But if you go up on my blog series, on that 10-part series, you'll see some examples comparing no integrator clamping with static integrator clamping versus dynamic integrator clamping. And you can see that it does make, in fact, a very dramatic difference in the amount of overshoot that you get out of a PI controller.
One question I get asked a lot is, OK, you're talking about integrators and PI control loops. I know how to do a P term in software, how do I do an integrator? Well, an integrator is actually very, very easy to implement. Again, we're not showing any clamping or anything here. But in its simplest form, all we're doing is creating an adder.
And what we have here is some kind of summation block and a delay block. And the equation is, the new output value is equal to the old output value plus the new input value times T. And what is T? T is our sampling period. So by using this expression right here, or if you're actually implementing it in C code, it is simply this one expression, this one line right here, and you can make an integrator. So it's actually very, very easy to implement.
So let's now expand our PI controller into a PID structure, as Richard was talking about yesterday. Now, when might you want to use a PID controller in a motor control system? Well, a very common example would be if you have a position control loop and you don't want to do the cascaded structure for whatever reason. Well, actually, a good reason would probably be because it's easier, it's quicker to calculate. Because when you have a cascaded structure, you have to go through each one of the blocks, and the calculations take a little bit longer.
But if you don't have a cascaded structure and you want to do it all with just one mathematical expression, then you're going to need something to give you some phase lead. Because in the example of a position loop, you have too many integrators in your transfer function, and you're going to end up with an oscillator if you don't do something to pull that phase back into line. So that would be just one example.
In this case, as you can see, we take the error signal and split it up into three different paths, now including a derivative function, followed by a derivative gain. And that also gets added into the output. Why would you not want to use a D term? Especially with digital control systems, synthesizing a derivative function can be very, very noisy.
So if you've got something on this signal right here, which has got a little bit of noise riding on it, well, what is a differentiator? All it's going to do is amplify that noise. And you end up then creating a system which might have a lot of noise on it. So there's advantages and disadvantages.
I think the cascaded approach is now really what most people use. But you still see people who like to use the D term.
How do you implement a differentiator? Very, very similar to the structure of an integrator, except that now we have the 1/T on the output here being multiplied by that. The output-- the new output is equal to the present input minus the previous input. So now, we've got the delay term working on the input side times 1/T. And if you translate that into C code, it looks something like this, where we now have a two-step process right here.
Now, let me, again, try to paint the need for a D term in an example that I think you guys could understand or relate with. I usually like to try to draw all these things back into the real world, because a lot of times, we're doing this stuff in our heads all the time and not even thinking about it. So here's another example.
Let's say you're driving down the road, and you encounter a sign that looks like this-- bridge out eight feet ahead. Now, for you drivers in Texas, let me translate that for you. That means stop here, in case you didn't know, because if you don't, then some bad things are going to happen. And you're going to end up looking like this.
So you now are the controller of your control loop. You look something like this. You have your commanded position. What is the commanded position here? Well, it's wherever the sign is located on the street. You want to stop right at that spot, not overshoot, not even by eight feet, or bad things could happen.
So there is your commanded position. You are the controller. You have two ways to affect the plant, which is the car. You have a gas pedal and a brake pedal. In a real motor control servo system, this would be like positive voltage and negative voltage, or positive current and negative current. Essentially, I want to apply positive torque or negative torque.
Now, here is my car. And I can actually measure or look at the position of the car, compare that to where I am with respect to the sign, and then adjust the gas pedal and the brake pedal accordingly. Now, with the information that you have available to you here, is that a stable system? Do you think that we have all the information that we need to make the system stable?
Well, the answer is no, we don't. This is not a stable system. And I'll show you why. According to the way that this system is designed with the feedback components that we've chosen, the only way that I could start putting on the brake pedal is if the error between my commanded position and my actual position starts to go negative.
So that means this actually has to exceed this value. And then at that point, I apply my brake, at which point you're already too late, especially if you're going like 50 miles an hour. You can't stop the car in eight feet.
So what do I need to do? Well, in that case, I need to incorporate velocity information. Now, what is velocity with respect to position? It's the derivative of it. This is the same reason we put derivatives in servo control systems. It's because we can take the position, take the derivative of that to get velocity.
And really when you think about it, when you're driving down the road, you're doing that all the time. Your brain is incorporating velocity information into your control loop to make it stable. If you were-- let's say you somehow had a system where you couldn't see how fast you were going. All you could see is what your position is relative to the actual position that you wanted to stop at.
Well, you wouldn't know when to put on the brakes. You put on the brakes at a different spot, or a different position, if you're going 10 miles an hour versus 100 miles an hour. That's why we need velocity information in order to make position servo systems stable-- again, assuming we're not using a cascade structure. So that is then created by this derivative term right here. And that can be used to stabilize your system.
Now, I've actually designed a digital control system using the PID approach. And I was actually able to characterize its response based upon different values of my D gain. Because the D gain-- the gain on the D term-- is what determines the sensitivity of your system to speed. And that also then determines the stability of your system.
So let's say-- again, using our car analogy-- this is where you are right now. You're at a red light. Then all of a sudden, the red light turns green, and you start moving. But then up ahead, you see another red light. So you know that that position is the position you want to stop at.
Depending on what kind of sensitivity you have to your velocity, you can get different profiles for how you stop. Let's take an example here. This one right here, notice that the D gain is turned way up. In other words, this system is very sensitive to how fast it's going.
That would be like the little old man or the little old lady who's very scared or very sensitive about how fast they're going. And as a result of that, they ride with one foot on the brake all the time. You know, you've seen those people, right? Well, as a result of that extreme sensitivity to their speed, they're not going to overshoot. But it's going to take them forever to get to the point where they stop at the right spot.
Now, let's take the case of-- I'll pick on Ken. He's not here, is he? Good, because Ken is from New Jersey. And I've seen how they drive in New Jersey. It's about as bad as Boston. They don't care how fast they're going. Their D term is dialed way down.
So what is going to happen in that case, Ken is probably going to overshoot the place where he's supposed to stop, go out into the intersection, and say, oops, I'm out in the intersection. I guess I'll put it in reverse and go back to where I was supposed to be. Or if you know Ken, he'll probably just say, ah, heck with it, and just keep on going.
Now, here's how we drive in Wisconsin. We have our D term sensitivity adjusted just right. We're obviously sensitive to how fast we're going, but not too sensitive, so that we will put on the brakes at the right time so that we coast to a nice stop right at the spot we're supposed to be at.
So you see my point. With position servo systems, your sensitivity to how fast you're going is your D gain. That's what sets the D term. And you can get different responses based upon how you adjust that D term.
But again, remember, unless you've got an analog velocity sensor, like a tachometer or something, creating a low noise velocity signal in a position control system can be very, very difficult to do, because in many cases, you're trying to do it with a discrete position encoder. In other words, it's just a bunch of square waves. And you're trying to get nice, smooth velocity signals off of that.
And that can be very, very challenging and one of the reasons, also, why you might want to consider using a sensorless approach to your system. The problem with most sensorless techniques is they don't work in position servos, because there, you're trying to go to some position and just hold it at 0 speed. And that's a very big challenge for most sensorless control algorithms.
So if we go back and look at my system, let's look at it from the Bode plot point of view. Here, I'm coming down with a position control system. And by the way, most position servo systems have this same response.
I'm coming down at 40 dB per decade. That means I've already encountered two poles, and I already have a phase shift of minus 180 degrees. That is the point where I'm going to crash if I'm not careful, because 180 degrees of phase shift combined with unity gain equals an oscillator. I don't want that.
So if I don't do anything to break my fall, I'm coming down at 40 dB per decade. I like to think of this as kind of like this is Earth or this is the ground. This is an airplane coming down. I'm coming down too steep, I'm going to crash. And that's exactly what the phase margin right here shows.
If you look at the point where it crosses 0 dB right there, well, actually, I do have a little bit of phase margin. It's like 2 or 3 degrees. Technically, that means it's not going to burst into oscillation. But it also means your response is going to be so severely underdamped as to make the system unusable.
Instead, what I'm going to do is, somewhere right about here, I'm going to kick in my zero from my differentiator. That gives me phase lead, which essentially breaks my fall from 40 dB per decade to 20 dB per decade. I can now come in for a nice landing. And from a phase margin point of view, look at all the phase lead that I get out of having the D term. And that is what contributes to the stability of your system, as Richard was talking about yesterday.
So let's kind of put a ribbon around this whole discussion. And that is, if we look at our control system, we have some kind of controller with a transfer function, forward transfer function, the plant, which is our motor, or some mechanical system in most cases that we're trying to control, with some kind of feedback gain associated as shown here. And this is now our complete control loop.
We typically don't see a lot of analog control systems being used anymore for a long list of reasons as shown here-- low noise immunity, changes over temperature, hardly any flexibility, on and on and on it goes. That's why a lot of people are moving now to digital control systems, where we have some kind of microprocessor, microcontroller, which is doing these calculations for us in a sampled system environment. In other words, it's now discrete time versus continuous time.
And as Richard talked about yesterday, the problem is, all those nice analog controllers that we use with analog control systems don't work with a digital control system. We have to find sampled-system equivalents of them. So we have to turn all of our transfer functions into equivalent Z-domain expressions. That is the first challenge.
The second challenge is, of course, we're introducing more phase delays, because we've got these sample and holds in our system. The A-to-D obviously has a sample and hold, which can create phase lag. We also have a PWM module, which is a sample and hold.
Did you know that, that the PWM module is a sample and hold module? Think about that. You calculate a PWM value. You drop it into the PWM module.
And then you return from your interrupt or whatever. What happens to the PWM module? It keeps generating that same pulse width, until you come back and change it some other time. In other words, it's held the value of the pulse width.
Well, that's another phase delay in your system. And all these different phase delays can add up and cause problems, not to mention the fact that you only have a window into what's happening in your system at these discrete points in time. We really don't have any guarantee of what's happening in between here.
So there's lots of challenges with designing a digital control system, which brings me to the next blog series that I wrote. It's called The Ten Commandments of Digital Control. And this was actually one of the earlier blogs I wrote, which is up on that same blog site.
And what this is, is just kind of a potpourri of tips and tricks that you can use when designing a digital control system-- talking about sampling frequency, how you choose your sampling frequency compared to the bandwidth of your system. It goes into a little bit about observers-- what is an observer, how you use an observer, things like that. So if you're interested in that kind of stuff and you really want to know the truth about digital control systems, the things that your mom never told you, then you can go to this website and get more information about that.
So since Richard talked a lot about control yesterday, I don't want to spend too much more time talking about it. But what I did want to do was to connect the dots-- in other words, from the theory that was presented yesterday to real systems that actually implement this stuff in a motor control environment, in a motor control application. And hopefully, you got a little taste of that as we bring it down a little bit closer to reality.
Before I go into the next section, which I think is the most exciting section of the presentation, are there any questions on anything we've talked about yet? Are you still awake? Good, because if you're not, you need to wake up for this next section.
In fact, no, I'm not quite ready for break yet. But pretty soon, we'll have a break. But I want you to now to wake up, put your thinking caps on, because this next section, if you miss this, you're going to miss something so important to motor control.
Does anybody have any idea what I'm talking about-- the most important control technique for motors today?
InstaSPIN?
[LAUGHTER]
Actually, yeah, InstaSPIN is a form of field-oriented control, exactly. How many have heard of field-oriented control before? Quite a few of you. How many of you have had the privilege to use field-oriented control? 1, 2, OK, a few of you. For the rest of you, you're in for a real treat.
This is going to be something that I think-- and I've actually referred to it this way before-- is the most exciting development in motor control since the AC induction motor was invented in the late 1800s. And the reason is that this is a unified control topology that works with just about every type of motor. In fact, I could even extrapolate to show that this works with DC motors as well. It's the same principle.
What we're trying to do, as the name implies, is orient our fields. And if you orient your fields properly on a motor, all kinds of wonderful things happen. Now, we typically see this used today with AC motors, because AC motors up to-- well, actually, this technique was first conceived in the late '60s, 1968, by a researcher at Siemens named Felix Blaschke.
Up to that point, if you wanted to control an AC machine, you had to do it from what we call the stationary frame. In other words, you had to actually generate the sine waves yourself and figure out, OK, how do those sine waves relate to the flux of the motor. And it was just a very, very complicated technique.
As Don Novotny, who was a professor at the University of Wisconsin Madison, put it, he said, once we saw the field-oriented control equations, we all just nodded our heads and said, well, of course, this is the way you'd want to control an AC motor. Whether it's a permanent magnet AC motor, whether it's an induction motor, whether it's an interior permanent magnet motor, it doesn't matter. All these different motors are candidates for field-oriented control. And I think you'll see why when we get through this.
Field-oriented control is an example where we use the cascaded control topology. Field-oriented control in and of itself is a torque control algorithm. So what we're talking about here is just the current controller and synchronizing the current controller to the motor flux in such a way that we can get optimum control. If you want a speed loop, if you want a position loop, then you cascade those around the field-oriented loop, just like you would as we showed earlier.
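The cascaded topology just described can be sketched in a few lines. This is my own illustration, not code from the seminar; the gains and loop rates are made-up placeholders. The outer speed loop produces a torque-producing current command, which the inner (field-oriented) current loop then tracks:

```python
class PI:
    """Minimal PI regulator (no anti-windup or limits; illustration only)."""
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def update(self, reference, measured):
        error = reference - measured
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral

# Cascaded loops: the outer speed loop commands iq (torque-producing
# current); the inner current loop turns that into a voltage for the PWM.
speed_loop = PI(kp=0.05, ki=2.0, dt=0.001)      # outer loop, slower (1 kHz)
current_loop = PI(kp=1.0, ki=500.0, dt=0.0001)  # inner loop, faster (10 kHz)

iq_ref = speed_loop.update(reference=100.0, measured=95.0)  # speed in rad/s
v_q = current_loop.update(reference=iq_ref, measured=0.0)   # voltage command
```

A position loop, if you wanted one, would wrap around the speed loop the same way.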
So fasten your seat belts. And I guess, before we get into this, let me back up. Do we know whether the WebEx thing is working yet or not?
Try to log in again.
Because it would be nice to get this one on there if I could get it. Let me see. Let me just try to join the internal. There we go. It seems like it's back up now. What was the thing I was going to search for again?
Just search for [INAUDIBLE].
Oh, that's right. Give me just a second here, and we'll get this thing running. Push this out of the way. There we go. So at least for now, it's working.
So do you have something coming through back there? Good, all right, excellent.
So let's talk about field-oriented control starting out specifically with an AC motor. How do we do it with an AC motor? Well, here we have a situation, where I have this motor. It's a three-phase motor.
And you can see that the three phases are distributed in all these different slots right here inside the stator. I'm referring to the stator as the part of the motor that is stationary, so it's called the stator. This is the rotor, which is the part that's rotating. Pretty easy to figure that out.
The three phases of my windings are all distributed equally throughout the machine, throughout the circumference of the machine in these stator slots as shown right here. And what I'm going to do is I'm going to put currents in these windings-- it's a three-phase machine, so obviously three phase currents in the windings-- in such a way as to create this rotating magnetic field as shown by the colored band that is turning there.
Now, if I put the rotor inside of here, you can see that the rotor, in this case, has permanent magnets on it. It's going to try to follow that rotating magnetic field of the stator. And in this case right here, it's following it perfectly. Now, looking at the way that that rotor is tracking the magnetic field of the stator, can you tell me anything about what the motor torque is at this point in time? It's zero, exactly, because it's following the field perfectly.
What if I put my thumb against this thing and started loading it down? What would you expect to happen to the angular relationship between the stator magnetic field and the rotor magnets? Start what? It's going to start lagging a little bit, right? And that's exactly the effect that you have.
If we look at the curve-- and this was something that I actually plotted from a simulation of the Toyota Prius motor-- you can see that if the angular difference between the rotor flux and the stator magnetic field is truly zero, then yes, we have zero torque. But as I apply more and more load torque to my rotor, you can see that that's going to translate into further and further phase lag, where the rotor is lagging the stator flux, until I finally get to a point of 90 degrees.
At that point, I'm generating the most torque that I possibly can for that given current on that motor. And if I try to apply more torque than that, I'd go over the fallout curve right here. And what's going to happen is the motor is going to lose control, it's going to lose synchronization, and it's just going to flop around on your bench like a fish out of water.
So the goal is, obviously, to not let it get over here, because this part of the region is unstable. From here to here is stable. But we'd also like to be able to flirt with right at the top of that curve, because that's where we get maximum torque per amp. And that's what field-oriented control is going to allow us to do and to allow us to do it in a stable way, so that we don't have to worry about coming over the edge of that curve right there.
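That curve is just the sine of the load angle. A quick sketch of the relationship, with the maximum torque as a made-up rating (not a Prius number):

```python
import math

def sync_torque(t_max, load_angle_deg):
    """Torque vs. load angle for a (non-salient) PM synchronous machine:
    T = T_max * sin(delta). 0 to 90 degrees is the stable side of the
    curve; past 90 degrees the motor falls out of synchronization."""
    return t_max * math.sin(math.radians(load_angle_deg))

# Sweep the load angle at a fixed current level:
for delta in (0, 30, 60, 90, 120):
    print(f"{delta:3d} deg -> {sync_torque(10.0, delta):5.2f} N*m")
```

Torque peaks at exactly 90 degrees, which is why field-oriented control pins the stator current vector there.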
So the way that we do that is we, first of all, need to know on our rotor, what is the angle of the rotor flux. And there's different ways that we can get this. And basically, it's a big discussion in the area of field-oriented control, is how do you determine the angle of the rotor flux. But once you know the angle of the rotor flux, well, then you know exactly where you need to position your magnetic vector on the stator. We said we want that to be 90 degrees with respect to that, because that's what gives us maximum torque per amp.
So here in this example, let's go ahead and run this. And in real time, as this motor is spinning, in order to generate a current vector or essentially a magnetic flux vector on the stator to be 90 degrees with respect to the rotor flux, I'm going to apply three-phase currents to each one of the windings. And I've synchronized this diagram to this diagram in such a way so that you can actually see when the-- for A, B, and C, when the currents have to be positive peaking or negative peaking.
Now, let's say that I want the motor to go faster. What do I do? Do I change the angle of my stator flux with respect to the rotor flux? No, I always want to leave that at 90 degrees. That's the magic maximum torque per amp value.
Instead, I increase the amplitude of my current waveform, leaving these waveforms still perfectly synchronized to the position of the rotor flux. So in other words, I've just made my current vector bigger. And I can continue that ad nauseam. Now, I'm at an even higher torque, which means, in most cases, that your waveforms are going to be faster, because the motor has now sped up, unless, of course, it's under some heavy load.
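The "make the vector bigger, keep the 90 degrees" idea can be sketched like this. This is my own illustration; the angle convention and function names are assumptions, not from the slides:

```python
import math

def phase_currents(i_mag, rotor_flux_angle_rad):
    """Three phase currents that place the stator current vector 90 deg
    ahead of the rotor flux. To command more torque we only raise i_mag;
    the 90 degree quadrature offset never changes."""
    theta = rotor_flux_angle_rad + math.pi / 2  # fixed quadrature lead
    ia = i_mag * math.cos(theta)
    ib = i_mag * math.cos(theta - 2.0 * math.pi / 3.0)
    ic = i_mag * math.cos(theta + 2.0 * math.pi / 3.0)
    return ia, ib, ic  # balanced set: ia + ib + ic = 0
```

Doubling `i_mag` doubles every phase current while leaving the waveforms synchronized to the rotor flux, exactly as described above.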
If I want to calculate what the torque is, this is what the torque will be. First of all, I need to know how many rotor poles my motor has. In this case, it only has two-- a north and a south. You actually have some rotors that have multiple pole pairs-- alternating north and south-- around the rotor.
So however many poles I have, I divide that by 2. This is just a scaling constant right here, 3/2. And this right here defines what the torque is. And this is the rotor flux, which we're going to assume for now is constant, that we can't do anything to affect that.
What is this term right here? This is the vector portion of the current waveform, which is 90 degrees with respect to the rotor flux. That's why it's called Q, because Q stands for quadrature. So if it's quadrature to the rotor flux, that means it's at 90 degrees with respect to the rotor flux.
Now, that doesn't mean that my current amplitude or my current vector has to be at 90 degrees for this expression to work. It could be at 80 degrees. Instead of 90 degrees, I could have 80 degrees. What this expression says is that only the portion of the current vector that resolves onto the axis 90 degrees from the rotor flux generates torque. The other portion of that current, the portion that is directly aligned with the rotor flux, does not produce any torque. Does that make sense?
So when you look at all this right here, this is a constant. I can't change the number of poles that my motor has. For now, we're going to assume that my rotor flux is constant. The only thing that's adjustable is the amount of the current which is exactly quadrature to the rotor flux. And that's what I adjusted. And that's how you typically do it in a field-oriented system-- adjusting that component of the current waveform.
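The torque expression walked through above, as a sketch. The machine numbers in the example are made up for illustration:

```python
import math

def foc_torque(pole_count, rotor_flux_wb, i_q):
    """T = (3/2) * (P/2) * lambda_r * i_q, the expression on the slide:
    pole pairs, times the 3/2 scaling constant, times rotor flux
    (assumed constant here), times quadrature current."""
    return 1.5 * (pole_count / 2) * rotor_flux_wb * i_q

def i_q_from_vector(i_mag, angle_from_flux_deg):
    """Only the component of the current vector that resolves onto the
    quadrature axis (90 deg from the rotor flux) produces torque."""
    return i_mag * math.sin(math.radians(angle_from_flux_deg))

# 10 A at 90 deg vs. 10 A at 80 deg, on an 8-pole machine with 0.1 Wb
# of rotor flux (hypothetical values):
print(foc_torque(8, 0.1, i_q_from_vector(10.0, 90)))  # about 6.0 N*m
print(foc_torque(8, 0.1, i_q_from_vector(10.0, 80)))  # slightly less
```

Note how the 80-degree case wastes the flux-aligned portion of the current: same amps in, less torque out.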
So to do field-oriented control, let's try to see it from a high level point of view. And if I'm doing it on a digital processor, I get an interrupt. I go out and I measure the rotor flux angle using sensored or sensorless techniques, but I need to know what the angle of that rotor flux is. Then I regulate the current vector to be at 90 degrees with respect to the rotor flux by adjusting these three phase currents in such a way that it creates a vector which is 90 degrees to it. And then I exit my interrupt service routine.
And this is something that may be done, in most cases, at something like 10,000 times a second. So that means, every 100 microseconds, I'm going out and checking what the new angle of the rotor flux is, recalculating my three currents to give me a current vector which is exactly quadrature to that. And then I exit. Yes.
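The interrupt sequence just described can be sketched as follows. Both callbacks are hypothetical placeholders, standing in for the flux-angle estimator and the current-regulator/PWM stage:

```python
import math

def foc_interrupt(read_flux_angle, command_current_vector, i_mag):
    """One pass of the ~10 kHz interrupt: read the rotor flux angle
    (sensored or sensorless), command a current vector held at exactly
    90 degrees from it, then return. The PWM module holds the outputs
    until the next interrupt, 100 microseconds later."""
    theta = read_flux_angle()
    command_current_vector(i_mag, theta + math.pi / 2)
    # return from interrupt service routine
```

Because the angle is re-read on every pass, the current vector stays quadrature to the rotor flux no matter how the rotor speed changes in between, which is also the answer to the torque-impulse question that follows.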
If your field is set up at 90 degrees out, what happens then, though, if I apply a huge torque on that rotor? Won't that cause it to go into an unstable--
That's a good question. Well, let's think about that. Your motor is whipping along. All of a sudden, you hit it with a torque impulse. Well, what's going to happen?
Assuming that all you're doing is controlling the torque or the current-- you're not controlling speed, you're just putting current waveforms out there-- your motor is going to start slowing down. And it will slow down at a rate that is commensurate with the inertia of your system and everything else. But it's going to start slowing down.
As the motor is slowing down, the angle of your rotor flux is going to start-- that angular movement or angular velocity is also going to slow down. The way that this algorithm works is, we don't care what that angle is. Whatever it is, instantaneously, we're taking like a still picture of that rotor for that particular interrupt. Whatever it is, wherever it is, we need to calculate the currents to be at 90 degrees with respect to that.
I've actually seen systems where it's a field-oriented control system. And all of a sudden, they hit it with a mechanical stop, like a brake. Just immediately, bing, the motor just stops. And you can go feel the torque. It's still exactly the same, because it's always synchronized to that rotor angle, whatever it turns out to be. Does that make sense?
Yeah, it makes sense.
Any other questions about that? This is a good section to ask questions on if you don't get it, because you'll regret it later if you don't. Yes.
Well, that's basically ways to a variable frequency drive on the motor windings.
Oh, did it do it again? Well, this time, my computer just shut down. Let's take a break. Well, before we do that, I want to answer your question. So say that again.
Just want to make sure that I understood the concept of changing the phase of the current. So what you're basically going to have to do is change the frequency of the waveform? All right.
Right. Your frequency of your current waveform has to be synchronized to the angular frequency of the motor's spinning. That's exactly right. Exactly right. Any other questions on that?
OK, 10 o'clock, let's take a break. I'll get my machine rebooted. And then we'll take it from there. We'll try to be done maybe in about 15 minutes then and come back and do it again.

Details

Date: March 13, 2015
While motor topologies have remained relatively unchanged over the past century, control techniques by comparison have experienced explosive growth. This has been driven in large part by technology advancements in the semiconductor industry. This seminar focuses specifically on advancements in the control of motors, with an emphasis on field-oriented principles with brushless AC motors. This video is part 1 of a 5-part series and covers the introduction to motor control, PI controllers, PID controllers, and an intro to Field Oriented Control.