Control of SMPS - A refresher
Introduction to the control of SMPS
Hello and welcome to part one of this three-part talk on the control of switch mode power supplies. My name is Colin Gillmor, and I'm an applications engineer with Texas Instruments based in Cork in Ireland. The aim of this presentation is to review analog control theory in an intuitive manner, without using too much mathematics. It will also highlight the benefits and drawbacks of some of the most popular control methods.
In part one, I want to cover some basic concepts and then move on to talk a bit about transfer functions in general, and finally look at control systems, introducing voltage mode and peak current mode control. Part two and part three will cover the other items on this agenda. First, I want to review some of the basic concepts used in control systems. Obviously, the young man on the bicycle is having some control issues. Nevertheless, he does seem to be having fun-- I think. I've included pointers to various references in blue. The full list of references is given at the end of each part of this presentation.
There are three basic ideas behind all control theory-- do we have some method to control the system, can we see what the system is doing, and can the system do what we want it to do? Controllability describes the ability of an external input to move a system from an initial state to any other final state within a finite time interval-- the handlebars, for example. Observability is the ability to know the internal state of the system. For a switch mode power supply, we would measure the output voltage and the inductor current and use these to determine the appropriate action.
Sometimes, a variable may not be directly observable-- the inductor current during T OFF, for example. In this case, we can use estimates of the values, as we do with downslope simulation in some controllers. For the bicycle analogy, does the rider have his eyes open, and is he looking in the right direction? The concepts of controllability and observability are very similar and are related mathematically.
Reachability-- a state is reachable if there is an input that moves the state of the system from an initial state to a final state in some finite time interval. For example, a state of VOUT NOM is reachable from an initial state of 0 volts at startup. The training wheels would seem to limit the lean angle to a modest 30 degrees or so. The system can't reach a lean angle of 45 degrees-- or at least, not if the rider wants to recover.
The next concept is that of a linear time-invariant system. This means that the system obeys the standard requirements for linearity-- that is, f(a + b) = f(a) + f(b), and f(k·a) = k·f(a). In the context of switch mode power supplies, linearity means that the system response is not a function of load. Of course, some non-linear behaviors are deliberately added to the system-- for example, over-voltage protection, over-current protection, light load modes, and so on.
Nonlinearity also appears if some element in the control loop runs out of control range-- for example, an optocoupler transistor reaches its VCE sat limit. Part of the design task is to understand the operation of these nonlinear events and to ensure that they are not detrimental to the operation of the system as a whole. By their nature, switch mode power supplies are not time-invariant. The circuit changes as the switches in the power stage are turned on and off at a high rate.
The solution to this problem is to linearize or average the power stage. The only drawback of this approach is that the linearization is valid only up to about 20% of the switching frequency, but in practice, this is not a severe constraint. Time-invariance means that the system does not change over time. Obvious exceptions are turn on and turn off. Less obvious effects are due to CTR reduction over the lifetime of an optocoupler.
I've included a fairly typical transfer function for a Buck converter here. One thing of interest is that the expression for G(s) does not include the switching frequency. This is true of all of the power stages I'm going to talk about here but is not true for resonant topologies like the LLC.
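The expression on the slide is not reproduced in this transcript. For reference, a commonly quoted form of the duty-cycle-to-output transfer function of a CCM Buck power stage (the notation, and the inclusion of the capacitor ESR, are mine) is:

$$
G(s) = \frac{\hat{v}_{OUT}(s)}{\hat{d}(s)} = V_{IN}\,\frac{1 + s\,R_{ESR}\,C}{1 + \dfrac{s}{Q\,\omega_0} + \dfrac{s^2}{\omega_0^{\,2}}}, \qquad \omega_0 = \frac{1}{\sqrt{LC}}
$$

with Q set by the load and parasitic resistances. The switching frequency does not appear anywhere in this expression, which is the point being made above.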
This topic is based on analog control of switch mode power supplies, which, of course, implies a continuous time domain, and so the time variable can take on any value. Digital control uses a discrete time domain where the time variable can take on only discrete values. However, this is beyond the scope of what I want to talk about here, and I won't mention it again.
The last concept I want to look at is that of complex frequency. This is an extension of the frequency concept, and it has two parameters. The sigma parameter controls the magnitude of the signal, and the omega parameter controls the rotation rate of the signal. The rotation rate in radians per second is analogous to the usual frequency expressed in Hertz, where one Hertz is equal to 2π radians per second.
Complex frequency is widely used in control theory because a single calculation provides both amplitude and phase information. I'm going to be talking about situations where the amplitudes do not change over time, so in this case we set sigma to 0, and the complex frequency s reduces to jω, where ω is the angular frequency.
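In symbols (my notation, consistent with the description above):

$$
s = \sigma + j\omega, \qquad \sigma = 0 \;\Rightarrow\; s = j\omega = j\,2\pi f
$$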
Now, I want to talk about transfer functions in general. As I mentioned earlier, in order to control a system, we have to measure it, compare the measurement to some reference, and then make appropriate adjustments. This involves moving a signal through different processes, perhaps passing it through an amplifier in order to perform some function-- amplification or filtering, for example. If we do the calculations using complex frequency, then we can obtain both magnitude and phase information at the same time.
The plot here shows the transfer function G(s) of the amplifier circuit in the schematic. Bode plots are a convenient and useful way to present this information and are very widely used. The frequency is plotted on the horizontal axis using a logarithmic scale. The amplitude, or magnitude, is plotted on a dB scale where the conversion factor is 20·log10(VOUT/VIN).
The phase is plotted on a linear scale in degrees. Note that G(s) gives the phase information in radians, where one radian is equal to 360/2π degrees, or approximately 57.3 degrees. It's also useful sometimes to use an asymptotic approximation to the amplitude and phase, and these are shown as dotted lines on the plot above.
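As a quick illustration of how such a Bode plot can be generated, here is a minimal Python sketch using scipy and matplotlib. The amplifier modelled and the component values are assumptions for illustration only-- a single-pole stage with DC gain R2/R1 and a pole at fP = 1/(2π·R2·C), with the inverting sign ignored.

```python
# Minimal Bode-plot sketch (illustrative assumptions, not from the slides).
import numpy as np
import matplotlib.pyplot as plt
from scipy import signal

R1, R2, C = 1e3, 10e3, 10e-9                 # assumed example component values
wp = 1.0 / (R2 * C)                          # pole location in rad/s
# G(s) = (R2/R1) / (1 + s/wp): DC gain R2/R1, single pole at wp
sys = signal.TransferFunction([R2 / R1], [1.0 / wp, 1.0])

w = np.logspace(2, 7, 500)                   # angular frequency sweep, rad/s
w, mag_db, phase_deg = signal.bode(sys, w)   # magnitude in dB, phase in degrees

fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
ax1.semilogx(w / (2 * np.pi), mag_db)        # frequency axis in Hz
ax1.set_ylabel("Gain (dB)")                  # i.e. 20*log10(VOUT/VIN)
ax2.semilogx(w / (2 * np.pi), phase_deg)
ax2.set_ylabel("Phase (deg)")
ax2.set_xlabel("Frequency (Hz)")
plt.show()
```

The scipy.signal.bode call already returns the magnitude converted to dB and the phase converted to degrees, matching the axes described above.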
Transfer functions are defined mathematically by the location of their poles and zeros, so I want to talk about these terms now. The transfer function of the schematic here has a pole at fP = 1/(2π·R2·C). If we plot this, we can see that the gain has dropped by 3 dB at the pole. Using the asymptotic approximation mentioned earlier, the gain is approximately flat up to the pole. It then decreases at a rate of minus 20 dB per decade. The main feature of the pole is that the phase shift of the signal moves from 0 degrees at low frequency, passing through minus 45 degrees at the pole, and reaching a limit of minus 90 degrees at high frequencies.
The transfer function of the schematic here has a zero at fZ = 1/(2π·R2·C). In this example, the gain is decreasing at a rate of minus 20 dB per decade up to the frequency of the zero. It then flattens out and becomes constant-- in this case, at 20 dB, which, in this example, is the ratio R2/R1. The main feature of this zero is that the phase shift of the signal moves from 0 degrees at low frequency, passing through plus 45 degrees at the zero, and reaching a limit of plus 90 degrees at high frequencies.
I want to note that there is an alternative form of the zero where the frequency term appears as s/ωZ rather than ωZ/s. The gain characteristic and its rate of change are different-- visually, the gain plot rotates anti-clockwise by 45 degrees-- but the phase characteristic is unchanged. This alternative form also exists for poles. Again, the gain characteristic is rotated, but the phase remains the same.
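For reference (my notation, not from the slides), the two ways of writing the zero that are being compared are:

$$
G_1(s) = K\left(1 + \frac{\omega_Z}{s}\right) \qquad \text{and} \qquad G_2(s) = K\left(1 + \frac{s}{\omega_Z}\right), \qquad f_Z = \frac{\omega_Z}{2\pi} = \frac{1}{2\pi R_2 C}
$$

In the first form the gain asymptote falls at minus 20 dB per decade and flattens out above the zero; in the second it is flat at low frequency and rises at plus 20 dB per decade above the zero.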
The LC circuit shown here has a transfer function with a peak at its resonant frequency. The amplitude of the peak is a function of the circuit Q, or the ratio of resistance to reactance. The gain is flat at low frequency, peaks at the resonant frequency, and then rolls off at a rate of minus 40 dB per decade. The total phase shift is 0 at low frequency, minus 90 degrees at resonance, and minus 180 degrees at high frequency.
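A commonly used form for such a filter-- a series L feeding a parallel C and load R; the notation is mine and the slide's exact expression is not reproduced here-- is:

$$
G(s) = \frac{1}{1 + \dfrac{s}{Q\,\omega_0} + \dfrac{s^2}{\omega_0^{\,2}}}, \qquad \omega_0 = \frac{1}{\sqrt{LC}}, \qquad Q = R\sqrt{\frac{C}{L}}
$$

This reproduces the behavior described above: flat gain at low frequency, a peak of about 20·log10(Q) dB at resonance, and a minus 40 dB per decade roll-off above it.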
The final transfer function that I want to show is the so-called right half plane zero. This type of zero occurs in topologies which deliver energy to the output 180 degrees out of phase with the energy taken from the input-- Flyback, Boost, and Cuk converters, if operated in continuous conduction mode, for example. The RHPZ is almost impossible to compensate because the gain is increasing, but the phase is decreasing, as it would in a pole. The normal solution is to close the loop at frequencies much lower than the frequency of the RHPZ.
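For reference (my notation; the figure is not reproduced in this transcript), a right half plane zero appears in the numerator with a minus sign, so its gain rises like an ordinary zero while its phase falls like a pole:

$$
G_{RHPZ}(s) = 1 - \frac{s}{\omega_{RHPZ}}, \qquad \omega_{RHPZ} \approx \frac{R\,(1-D)^2}{L} \ \text{(commonly quoted estimate for a CCM Boost, with load resistance } R\text{)}
$$

This is why the usual advice is to close the loop well below this frequency.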
One of the most common applications for the boost converter is as a power factor corrector. But although the RHPZ does exist in this topology, it's not a problem because the loop bandwidth must normally be limited to less than about 10 Hertz, and also, the system is controlling the input current, rather than the output voltage.
Now, let's take a look at some typical control system block diagrams. So all switch mode power supply control systems must have some variable which they can use to control the output. I'm going to concentrate on those which use the duty cycle as the control variable. This includes Buck, Boost and Flyback topologies. Their conversion ratios are listed here on the right.
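The slide's list is not reproduced in this transcript; the standard CCM conversion ratios (with n the transformer turns ratio for the Flyback) are:

$$
\text{Buck: } \frac{V_{OUT}}{V_{IN}} = D, \qquad \text{Boost: } \frac{V_{OUT}}{V_{IN}} = \frac{1}{1-D}, \qquad \text{Flyback: } \frac{V_{OUT}}{V_{IN}} = \frac{n\,D}{1-D}
$$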
It's important to note that the switching frequency does not appear in the equations. This is true even if the switching frequency is being varied for EMI, or to improve efficiency, or for other reasons. There is another large class of resonant topologies which do use frequency as a control variable. The most important of these is probably the LLC. Much of what I will say in the rest of this presentation will apply to the LLC topology, too, but I won't mention it explicitly.
Finally, hysteretic control is a class of systems which turn the power switch on and off in order to control the output within upper and lower control limits. This method is widely used in low-power DC/DC converters, but little of the material I present here is applicable to them.
Now, here is a block diagram of a typical control system. It has five main components. There is an error amplifier, KEA. This amplifies the error between a scaled sample of the output voltage and a fixed reference voltage. The output of the error amplifier is the control signal used by the pulse width modulator, KPWM. The output of the error amplifier acts to minimize the difference between the output sample taken by KFB and the reference.
The error amplifier must have a high DC gain for good load regulation and low offset voltages for accuracy. The transfer function of the error amplifier is designed so that the overall loop is stable-- and I'll return to that topic later. The pulse width modulator takes the output of the error amplifier and translates it into a pulse-width modulated signal with a duty cycle in the range of 0% to 100%. The PWM modulates the input voltage according to the duty cycle to produce a switched voltage, SW. The switched voltage is applied to an output filter, KLC, where the high-frequency content is removed so that VOUT is a DC voltage.
This block diagram is a typical example of a Buck regulator, and I'm going to use it for the rest of this talk. Similar block diagrams can be drawn for the other main topologies-- Flyback, Boost, Cuk, et cetera-- but the general principles are unchanged. The main point to remember is that the system behavior is governed by the overall loop response T(s), and this is the product of the control-to-output transfer function G(s) and the output-to-control transfer function H(s).
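In terms of the blocks in the diagram, this amounts to (my grouping of the blocks, using the KEA, KPWM, KLC and KFB labels introduced above):

$$
T(s) = G(s)\,H(s), \qquad G(s) = K_{PWM}\,K_{LC}, \qquad H(s) = K_{FB}\,K_{EA}
$$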
There are two types of control loop that I want to talk about-- voltage mode control and current mode control. The original control method for switch mode power supplies is voltage mode control, where the output of the error amplifier controls the duty cycle directly. This has the advantage of being intuitively easy to understand. It is also less noise-sensitive than current mode control. The main disadvantages are that it's more difficult to stabilize than current mode control and does not offer as much protection against transformer saturation.
Current mode control adds a second inner loop to the system. Here, XI is a sample of the output inductor current, KCS is a scaling factor, and HE(s) is the peak current sampling block. Effectively, this is a comparator which compares the current sense signal with a control signal VC.
There are two varieties of current mode control in use. The most common is peak current mode control, or PCM. In PCM systems, the input to HE(s) is an unfiltered version of the current signal, so that the peak current information is available. In average current mode control systems, an averaged version of the inductor current is used as the input to HE(s). ACM is often used in power factor controllers for reduced input current distortion.
In voltage mode control, the error amplifier controls the duty cycle directly. The error amplifier output is compared to a fixed ramp. At the start of the switching cycle, the PWM output is on. A fast comparator detects when the ramp crosses the output of the error amplifier, and the PWM output is then turned off. You can see this clearly in the animation.
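As a minimal sketch of this behavior-- the ramp amplitude and the VC values are assumptions for illustration, not values from the talk-- the duty cycle produced is simply the fraction of the cycle for which the ramp is still below the error amplifier output, roughly VC/VRAMP:

```python
# Minimal voltage-mode PWM sketch (illustrative assumptions, not a TI reference model).
# Each switching cycle starts with the output ON; a comparator turns it OFF when
# the fixed ramp crosses the error-amplifier output VC, so D is roughly VC / V_RAMP.
import numpy as np

V_RAMP = 3.0      # assumed peak ramp amplitude in volts
STEPS = 1000      # time steps per switching period

def vmc_duty(vc):
    """Duty cycle produced for a given error-amplifier output vc."""
    t = np.linspace(0.0, 1.0, STEPS, endpoint=False)  # normalized time in one cycle
    ramp = V_RAMP * t                                  # fixed sawtooth ramp
    on = ramp < vc                                     # PWM stays ON until the ramp crosses vc
    return on.mean()                                   # fraction of the cycle spent ON

for vc in (0.5, 1.5, 2.5):
    print(f"VC = {vc:.1f} V -> D = {vmc_duty(vc):.2f}")  # expect roughly VC / V_RAMP
```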
This system is relatively insensitive to noise because both the ramp and the error amplifier output have a large amplitude-- 2 to 5 volts would be fairly typical. The main disadvantage of voltage mode control is that the output filter resonance appears in the transfer function. This makes it more difficult to stabilize, and the control loop bandwidth is normally less than that of an equivalent current mode controlled design. I'll go into more detail on the Type 3 compensation, which is normally used for VMC designs, later. Finally, despite its disadvantages, voltage mode control is often used, especially in high-power designs.
In current mode control the output of the error amplifier is used as a reference for the inner current loop. I find that it's best to think of the error amplifier output as a current demand signal, and the job of the inner current loop is to control the power stage so that the output current equals the demanded current. The dotted line in the graphic is the unmodified error amplifier output.
As I'll show later, there is an inherent instability in peak current mode control which requires the addition of a slope compensation ramp to IDEM, and the solid red line is the sum of the error amplifier output and the slope compensation ramp. At the start of a switching cycle, the PWM output is on. A fast comparator detects when the current sense signal in blue crosses the output of the error amplifier in red, and the PWM output is then turned off. This animation shows how the system works.
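Here is a minimal sketch of that comparator action. The slopes and the demand value are illustrative assumptions, and the slope compensation is shown subtracted from the demand, which is equivalent to adding a negative-going ramp to it (or a positive ramp to the sensed current):

```python
# Minimal peak-current-mode sketch (illustrative assumptions, not a TI reference model).
# The cycle starts with the switch ON; the sensed inductor current ramps up and the
# switch is turned OFF when it reaches the slope-compensated demand from the error amp.
import numpy as np

STEPS = 1000      # time steps per switching period
M_ON = 4.0        # assumed sensed-current up-slope, volts per switching period
M_COMP = 1.0      # assumed slope-compensation ramp, volts per switching period

def pcm_duty(vc, i_valley=0.0):
    """Duty cycle for a current demand vc and a starting (valley) sensed current."""
    t = np.linspace(0.0, 1.0, STEPS, endpoint=False)
    i_sense = i_valley + M_ON * t          # sensed current while the switch is ON
    demand = vc - M_COMP * t               # demand with the compensation ramp applied
    crossed = np.nonzero(i_sense >= demand)[0]
    return t[crossed[0]] if crossed.size else 1.0   # turn OFF at the first crossing

print(f"D = {pcm_duty(vc=2.0):.2f}")       # roughly 2.0 / (4.0 + 1.0) = 0.40
```

A commonly quoted rule of thumb is that the compensating slope should be at least half the inductor current down-slope (in sense-signal terms) to keep the current loop stable above 50% duty cycle.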
The advantage of peak current mode is that the system limits the peak current in each switching cycle. This gives some protection against inductor saturation or other events that could cause an over-current condition. It also ensures balanced currents in the transformer of full-bridge circuits, eliminating the bulky DC blocking capacitors needed if voltage mode control is used. However, peak current mode can be unstable at large duty cycles, and this requires slope compensation for stability.
There is also an inherent error between the average output current-- which is what we want to control-- and the peak current-- which is what the system is measuring. This can be a problem in Boost PFC stages because the peak-to-average ratio is not constant as D changes. Boost PFC systems often use average current mode to eliminate this peak-to-average error and so achieve higher power factors. Another advantage is that the current loop removes the output LC resonance from the transfer function, and this makes it possible to increase the loop bandwidth and to use a simpler Type 2 compensation network.
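As a hedged illustration of that error (my notation): the average inductor current sits half the ripple below the peak, and in a CCM Boost stage the ripple itself depends on the duty cycle, which varies across the mains half-cycle:

$$
I_{AVG} = I_{PK} - \frac{\Delta I_L}{2}, \qquad \Delta I_L = \frac{V_{IN}\,D\,T_S}{L}
$$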
As a side note, I want to mention that peak current mode control is inherently unstable in the half bridge circuit. The reason for the instability is described in the article in Ref 13, but the effect is that the voltage at the junction of the splitter capacitors is driven to either the input rail or to ground. Here's a list of references you may find useful. I'm going to leave it on the screen for a few seconds.
Thanks for your attention, and I hope you found this talk useful. Please feel free to contact me directly at this email address if you have any questions or if you have any suggestions for improvements that I could make. Now, I'm going to invite you to take a look at part two of this series.
Description
April 8, 2019
In this series of videos, I want to show that it is possible to gain a good qualitative understanding and feel for analog control of SMPS without using too much mathematics. I will show how a functioning switch mode power supply control system is designed and how the loop is stabilized. I will discuss the gain-phase and load transient test methods used to verify that a design is stable and indicate what the limits for acceptance might be.