James K. Roberge: 6.302 Lecture 17
JAMES ROBERGE: Hi. Today I'd like to illustrate how we use describing functions to analyze a class of systems that are known as conditionally stable systems. The name arises because these systems have a rather unusual property. They're perfectly well-behaved for a certain range of input signal amplitudes. However, in the classical conditionally stable system, when the amplitude of the driving signal becomes large enough, the systems change behavior and become unstable. And we'd like to see how that comes about and how we can predict the behavior of those systems by describing functions.
The common characteristic shared by all conditionally stable systems is that if we look at their angle as a function of frequency, the angle portion of the Bode plot associated with the negative of the loop transmission, we find out that there's a range of frequencies where the angle drops more negative than minus 180 degrees. This range of frequencies usually exists where the amplitude portion or the magnitude portion of the Bode plot has a magnitude greater than 1. The angle then recovers to a value more positive than minus 180 degrees at the crossover frequency.
The difficulty with this sort of system arises when we include a saturating element in the loop in addition to the sorts of dynamics that I've just described. If the element saturates, it can lower the loop transmission magnitude and force crossover in a region of negative phase margin.
Let's consider as an example a Bode plot with magnitude characteristics as shown. Here we start out at some high value for the negative of the loop transmission magnitude at low frequencies. And I've assumed that we have three coincident poles. Thus, there's a region where the magnitude versus frequency characteristics fall off as 1 over omega cubed. Then let's assume there are two additional zeros associated with the loop transmission, and that those zeros are located at frequencies below the crossover frequency. Consequently, in the vicinity of crossover, the magnitude portion of the loop transmission will roll off as 1 over omega.
The angle looks something like this. At sufficiently low frequencies where the magnitude versus frequency curve is flat, the angle will be near zero degrees. Providing the duration of the 1 over omega cubed region is sufficiently long-- the angle is heading toward minus 270 degrees as a consequence of the 1 over omega cubed or 1 over s cubed roll-off, and providing that duration is long enough, the angle will become more negative than minus 180 degrees. If we have no higher frequency poles-- in other words, if the 1 over s roll-off that we see here continues for all frequencies, the angle associated with the negative of the loop transmission will eventually approach minus 90 degrees. So we have this sort of angle versus frequency curve, and the magnitude versus frequency curve that I indicated before.
We notice that the system, as shown, has adequate phase margin. Here's the crossover frequency, and we have this much phase margin. Certainly a positive value for phase margin. We can look at the same system in gain-phase coordinates. That's of course the sort of plot we normally use for our describing function analysis. And here we start out at some high value at omega equals 0, which is actually this point on the curve, an angle of 0 degrees and a large magnitude.
As omega increases, the angle becomes progressively more negative, eventually drops below minus 180 degrees, remains more negative than minus 180 degrees over some range of frequencies, and then recovers, asymptotically approaching minus 90 degrees for very large values of omega. Again we notice, since we go through the magnitude equals 1 point with the curve to the right of the minus 180 degree line, we have a perfectly acceptable phase margin.
We can look at the system in one more representation. And as we go through the analysis, I'm going to try to play these various techniques off against each other so we can see the interrelationship among them. We said that the system has three coincident low-frequency poles and two higher-frequency zeros. Providing the zeros are far enough away from the poles, providing there's enough separation between the frequency at which these three poles are located and the frequency at which these two zeros are located, the root locus diagram for the system will look as shown. Initially we should be able to ignore the two zeroes. They're high-frequency singularities, ones that are further away from the origin.
And so we'd anticipate that this triplet of poles would give us the usual plus 60 degrees, minus 60 degrees, 180 degrees branches in the s plane. And again, depending on the relative spacing of the poles and the zeros, it's very possible that for some range of the a0f0 product, there are two branches in the right-half plane. We recognize that for larger values of the a0f0 product, the branches circle around, eventually meet on the real axis. One of the branches heads toward one of the zeros at this location. The other branch heads off along the negative real axis.
So if we look at the stability of this sort of a system, we'd find that it was stable in an absolute sense for small values of the a0f0 product. All three of the closed-loop poles would lie in the left-half plane. There'd be an intermediate range of a0f0 product where two of the system closed-loop poles lie in the right half of the s plane. Consequently, the system is unstable. And then for still higher values of the a0f0 product, once again all three branches lie on the left half of the s plane.
The situation with sufficiently large a0f0 magnitude, such that all three branches lie on the left half of the s plane, is the situation we've shown in this gain-phase plot. We have a stable system with a positive amount of phase margin. And similarly, in the Bode plot, the other stable situation would correspond to the magnitude curve being pushed down so far that system crossover occurred at frequencies below this one, or at frequencies below this one in the gain-phase presentation.
Now suppose we have a system that has this type of dynamics associated with it, and we add to that system an element that saturates. As we mentioned last time, saturation occurs in virtually all physical systems, so it's certainly something that we have to contend with. Let's suppose that in the loop, in addition to these linear elements, we have a saturating non-linearity. Suppose we have an ideal saturating element, where we have a slope of k for the output versus input characteristics, providing the magnitude of the input signal is less than some maximum value, E sub M.
So we assume the characteristics are symmetrical. As long as our input signal lies between here and here, the slope of the input-to-output transfer characteristic is constant and has a value of k. However, once the input signal exceeds E sub M, the input-to-output transfer characteristics flatten out and have 0 slope. So what this does-- when we apply a sinusoid, providing our test sinusoid has a peak amplitude less than E sub M, we find out that the gain of our element is simply k. We put in a sinusoid, we get out a sinusoid. The gain is simply equal to the slope of the transfer characteristics in the vicinity of the origin.
Consequently the describing function, as a function of the amplitude of our test signal E, is simply equal to k. Of course, the input and output sinusoids are in synchronism. There's no relative phase shift between them, so the angle associated with the describing function is 0 degrees. And that's the condition that exists for E less than E sub M.
When we exceed a value of E sub M for our test signal, of course the output sinusoid is flattened down. It has flat spots on the peak and the valley of the output sinusoidal signal. I go through the derivation of that describing function in the book, and if we go through that, we find out that the expression is that the describing function, as a function of amplitude, is simply 2 times k over pi times the arcsine of R plus R times the square root of 1 minus R squared, where we've defined R as the ratio E sub M-- the saturating magnitude on the input axis-- divided by the amplitude of our test signal. Once again, the fundamental component of the output is in phase with the test signal. So the angle associated with our describing function is 0.
We could also look at what happens for very, very large input voltages. Under those conditions, the output is very, very nearly a square wave. The large input test signal sweeps the non-linearity through its linear region over a very small fractional range of input voltages. The output is almost a square wave. The amplitude of the output square wave would be k times E sub M. The fundamental component of the output square wave would be 4k over pi times E sub M, the kE sub M being the peak magnitude of the output, the 4 over pi, giving us the magnitude of the fundamental term in the Fourier series expansion. So that's the magnitude of the output fundamental. We divide that by E to get our describing function. And that's in the limit, where our test signal amplitude E is much, much larger than the value necessary to saturate the element.
We can get the same expression here by recognizing that for small values of R, when E sub M is much smaller than E, the arcsine is approximately equal to the argument. This term is about equal to 1. We get simply a 2R. Once again we get the 4kE sub M over pi E.
If we plot the describing function, G sub D as a function of E, versus E, we have a value of k for input test amplitudes less than E sub M-- that's of course the slope of the transfer characteristics at the origin. As soon as we get a test signal larger than E sub M, the magnitude of the describing function begins to drop off and asymptotically approaches the hyperbola described by this equation. So we have this behavior with the curve, eventually approaching 0 for sufficiently large values of E.
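The describing function we've just derived is easy to tabulate numerically. Here's a minimal sketch, using the lecture's notation for E, E sub M, and k (the code itself is an illustration, not part of the original demonstration):

```python
import math

def saturation_df(E, EM=1.0, k=1.0):
    """Describing function G_D(E) of an ideal symmetric saturation:
    slope k for |input| < EM, flat (zero slope) beyond.  The angle of
    G_D is 0 degrees for all E, so only the magnitude is returned."""
    if E <= EM:
        return k                      # linear region: gain is simply k
    R = EM / E                        # R = E_M / E, as defined in the lecture
    return (2 * k / math.pi) * (math.asin(R) + R * math.sqrt(1 - R * R))

# For E <= EM the gain is k; for E >> EM the describing function
# approaches the square-wave limit 4*k*EM/(pi*E).
print(saturation_df(0.5))                      # linear region: 1.0
E = 100.0
print(saturation_df(E), 4 / (math.pi * E))     # both near 0.0127
```

The check in the last line is the limiting argument from the lecture: for small R the arcsine is approximately its argument and the square root is approximately 1, so the expression collapses to 4kE sub M over pi E.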
Now we can see a difficulty. Suppose we have a system that includes this sort of a non-linearity with this kind of a describing function. If we then analyze our system by describing functions, what are we really doing? We're effectively linearizing the system in a particular kind of a way. Describing function analysis really amounts to a linearization about an operating point that's described by the amplitude of the signal applied to the nonlinearity.
Consider what happens if the amplitude of the signal applied to the nonlinearity is such that we're into the saturating region. And if we now consider this to be a plot of the negative of the loop transmission for a nonlinear system, where we of course get the gain of the nonlinearity by our describing function analysis, we'd find out that operating in saturation shifted the curve down. We have a value for the loop transmission magnitude that's less than that which exists when we have small signals. The gain in the describing function sense of the nonlinear element is smaller. And so that pushes the magnitude curve down. And we could push it down sufficiently so the crossover occurred at this frequency, as one possibility. So let's look at that.
In other words, if we drove the system in some way so that the amplitude at the input to the nonlinearity, the quantity E in our describing function analysis, had an appropriate value, we could force crossover here at a position of 0 phase margin. Similarly, if we had an even larger amplitude applied to the nonlinearity in our overall loop, we could force crossover to this frequency. Once again, a frequency with 0 phase margin. So we might in that case, in the vicinity of crossover, have a magnitude versus frequency characteristic that looks something like so.
The corresponding pictures in the other two plots would be a case where we had a sufficiently large amplitude to push this curve down, if we view it that way, so that we either crossed over at the higher frequency of these two points, or at the lower frequency. If we use a conventional describing function analysis, what we do is plot minus 1 over G sub D of E on the same coordinates. Let's assume that k is 1, in which case minus 1 over G sub D of E would have a form like this. G sub D would have a maximum value of 1, so 1 over G sub D magnitude would have a minimum value of 1, and then would increase to larger values for increasing Es.
So we'd end up with a curve that goes like this. The angle of course of minus 1 over G sub D of E would be minus 180 degrees, since the angle of G sub D of E is 0 degrees. We'd have a magnitude of 1. This is minus 1 over G sub D of E. This is the direction of increasing E. We notice again two intersections in this case. Those intersections correspond to the case I mentioned earlier, where the gain, in a sense, has been lowered such that crossover is forced here, or where the gain has been lowered so the crossover is forced at this point.
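The amplitude at such an intersection can be found numerically: if the linear part of the loop transmission has magnitude M at a frequency where its angle is minus 180 degrees, the intersection amplitude satisfies G sub D of E times M equals 1. A sketch under an assumed, made-up value of M (not a number from the demonstration system), solving by bisection since the describing function falls monotonically for E greater than E sub M:

```python
import math

def saturation_df(E, EM=1.0, k=1.0):
    # describing-function magnitude of the ideal saturation (its angle is 0)
    if E <= EM:
        return k
    R = EM / E
    return (2 * k / math.pi) * (math.asin(R) + R * math.sqrt(1 - R * R))

def intersection_amplitude(M, EM=1.0, k=1.0):
    """Solve G_D(E) * M = 1 for E by bisection.  M is the linear loop
    magnitude at a -180 degree frequency; an intersection exists only
    if M * k > 1, so that saturation can pull the loop gain down to 1."""
    lo, hi = EM, EM * 1e6             # G_D is monotone decreasing on (EM, inf)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if saturation_df(mid, EM, k) * M > 1.0:
            lo = mid                  # loop gain still above 1: go larger
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical example: linear loop magnitude of 3.3 at the -180 degree point.
E_osc = intersection_amplitude(3.3)
print(E_osc, saturation_df(E_osc) * 3.3)   # the product is ~1 at the intersection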
Again, we can look at the same process from a root locus point of view. We assume that we might start out with a system designed so that all of our poles lie on the left half of the s plane. However, as we increase the amplitude of the signal at the input to the nonlinearity, the loop transmission magnitude-- the a0f0 product-- in a describing function sense drops. The branches move back for some sufficiently large amplitude at the input to the nonlinearity. We have a pair of complex conjugate poles on the imaginary axis. There's a potential for oscillation. If the amplitude at the input to the nonlinearity were further increased, again, we'd get a lower a0f0 product, since the gain of a nonlinear element in the describing function sense would drop. We might end up at this point. That's, again, a borderline stability point.
Well, we see either from the classical describing function analysis, from this sort of an analysis, or from the root locus analysis that there are two values of E that result in a system that has unity magnitude for the loop transmission at the frequency where the angle associated with the af product is minus 180 degrees. And we have to consider which, if either, of those represents a stable amplitude oscillation.
Let's first assume that somehow we've driven the system, we've put in a test signal maybe to the overall input of the feedback system, and we've carefully monitored the signal that exists at the input to the nonlinearity. Somehow we've driven the system so that the signal amplitude at the input to the nonlinearity is one that forces crossover at this frequency. We have 180 degrees of negative phase shift at that frequency. And now let's see what happens. The system now is presumably oscillating with the amplitude at the input to the nonlinearity that moves this magnitude curve down, such that crossover occurs here.
And now let's assume possibly that the amplitude of the oscillation increases just a little bit. The system's oscillating with some amplitude at the input to the nonlinearity. Let's suppose the amplitude increases just a little bit. Well, since the gain of our nonlinear element decreases with increasing amplitude, this curve is pushed down just a little bit further. Crossover occurs somewhat below this frequency. This curve moves down like so. We cross over now in a region of negative phase margin, and consequently the amplitude will continue to increase. We started this by assuming the amplitude increased a little bit. We find out that the system is not restorative at that point. The amplitude continues to increase.
Conversely, if we assume the system was oscillating at this frequency with the amplitude necessary to force unity magnitude or crossover at this frequency, and now we assume that the amplitude of that oscillation shrinks just a little bit, the magnitude curve is pushed up. We cross over at a slightly higher frequency. We cross over, therefore, in a region of positive phase margin. Again, the amplitude further decreases. The system is stable. The amplitude further decreases. Fine. So we conclude that we really can't get a stable amplitude oscillation with the value of E necessary to force crossover at this frequency.
Let's consider the lower frequency possibility. Suppose now we have an even larger value for the amplitude E. So the curve has been pushed down. Crossover now occurs at this frequency, again at a frequency with zero phase margin. And once again, let's perform our test. Let's assume that the amplitude of the signal increases a little bit. What happens? Crossover is pushed toward lower frequencies. The curve comes down. Crossover moves toward lower frequencies. We cross over in a region now with positive phase margin. The amplitude increased. We find we have positive phase margin. The system's damped. The amplitude of the oscillation shrinks back toward its original value.
Similarly, if we assume the magnitude decreases a little bit, the gain in the describing function sense goes up. We cross over at a higher frequency, a region of negative phase margin. The amplitude grows back to its original value. So here, the system's restorative, and consequently we'd anticipate the system would oscillate with parameters such that E was large enough to force crossover at this frequency. Again, we can make the same exact kind of an argument-- and I think it's worth comparing these cases-- in a root locus presentation.
Suppose for example we had the correct amplitude, so that when we considered the gain of the nonlinearity, the describing function for the nonlinearity, our pair of closed-loop poles was precisely on the imaginary axis. If we assume a slight increase in that amplitude, the magnitude of the describing function decreases. We move back in the left half plane. The system's stable. The amplitude shrinks back to its original value, and so forth.
Again, if we make the test at this intersection-- let's see, if we assume that the amplitude increases, the gain drops. The a0f0 product decreases. We have a system that has poles in the right half plane. Consequently, the amplitude further increases. Once again, the same result. A stable amplitude oscillation is not possible here. A stable amplitude oscillation is possible here.
I'd like to illustrate these ideas with a demonstration system. And what I've chosen for the negative of the loop transmission for the demonstration system is a negative loop transmission, or an af product, that's 3,100 times some quantity a0, divided by s. In other words, we have a pole at the origin of the demonstration system. We have two coincident poles located at 1.5 times 10 to the third radians per second, the reciprocal of this time constant. And then we have two higher frequency zeroes at 10 to the fourth radians per second. This differs just a little bit from the description I gave earlier, where I assumed three coincident poles. But that's not a fundamental difference. Here we have one pole at the origin, two poles that are coincident further out. And then the two zeros.
This is a system that will in fact exhibit the kind of angle versus frequency characteristic that I indicated characterizes conditionally stable systems. The angle will drop more negative than minus 180 degrees over some range of frequencies, and then recover to a value that's more positive than minus 180 degrees. We can see that pretty easily. We get a negative 90 degrees of phase shift from the pole at the origin, and then we can view this combination, the two poles and the two zeros, as simply two cascaded lag-type transfer functions.
If you recall, the expression for the maximum phase shift that we get out of either a lead-type transfer function or a lag-type transfer function, the maximum phase shift, is simply the arcsine of alpha minus 1 over alpha plus 1, where alpha is of course the ratio of the frequencies associated with the pole and the zero. And in this case, alpha is 6.7, the pole being located a factor of 6.7 below the zero. If we evaluate this expression, we find out that we get 47 and a half degrees of phase shift associated with each of those doublets. Since the pole is located at the lower frequency, we have negative phase shift.
So now let's keep track of the total phase shift that we get as a maximum. We get minus 90 degrees, minus twice 47.5 degrees. Consequently a maximum negative phase shift of minus 185 degrees. So we have this characteristic that I said describes conditionally stable systems, where the angle will drop more negative than minus 180 degrees over some range of frequencies. In this case it drops to about minus 185 degrees.
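That arithmetic can be checked directly. A small sketch, using the value alpha equals 10 to the fourth over 1.5 times 10 to the third from the lecture's pole and zero locations:

```python
import math

# Maximum phase shift of a lag (or lead) doublet: arcsin((alpha-1)/(alpha+1)),
# where alpha is the ratio of the zero frequency to the pole frequency.
alpha = 1e4 / 1.5e3                        # ~6.7 for the demonstration system
phi_max = math.degrees(math.asin((alpha - 1) / (alpha + 1)))
print(phi_max)                             # ~47.7 degrees per doublet

# Total maximum negative phase: -90 from the integrator, minus two lag doublets.
total = -90 - 2 * phi_max
print(total)                               # ~ -185 degrees
```

So the model confirms the conditionally stable signature: the angle dips to roughly minus 185 degrees, about 5 degrees beyond minus 180, before recovering.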
In order to implement a system like this, we'd simply build it up out of operational amplifiers. But it's not a completely made-up system. The way you get into this sort of thing, the reason that you get into this problem typically, is as follows. Let's look back at our original magnitude versus frequency curve. When we look at a typical feedback system, there's usually some process that limits the maximum achievable crossover frequency.
We found out when we looked at operational amplifiers that there was a whole collection of higher frequency poles. There was a phase shift associated with the lateral PNP transistors that are used in those operational amplifiers. And that collection of poles, the phase shift associated with time delay, basically, really as a realistic possibility forced us to cross over at some frequency that was bounded by possibly a megahertz for the usually encountered operational amplifiers.
If you consider a mechanical system, a servo mechanism, there are effects like resonances in various members in the system that connect the output with the input, or a motor with an output member. At sufficiently high frequencies, there are mechanical resonances. Again, it becomes very, very difficult to force the system crossover above those critical frequencies.
So in the usually encountered feedback system, there's really some maximum value for the crossover frequency. And if we design a well-behaved system, we'll try to have a slope of something like 1 over s in the vicinity of crossover, so that we can achieve adequate phase margin. But rather than maintaining that 1 over s roll-off all the way back to very low frequencies, we may very easily find it advantageous to have a faster roll-off. 1 over s squared, possibly. We looked at that with two-pole compensation in an op amp or the kind of transfer function that results when we include a lag network. We'd have a 1 over s squared kind of roll-off. The reason for that is that we get greater desensitivity over a wide range of frequencies compared to a true 1 over s roll-off at all frequencies.
We might want to extend that. We might want to get even greater desensitivity than we could get with even the two-pole roll-off. And so we might go to a higher-order roll-off, a three-pole roll-off-- again, in an attempt to improve desensitivity. So while the demonstration system we're building up is a made-up system, if you will, this sort of situation exists in many actual systems.
The building blocks we're going to use are an integrator, so that we can get the pole at the origin. The transfer function, of course, for this operational amplifier configuration is the familiar one, minus 1 over RCs. We want to get lag-type transfer functions. We get that with this operational amplifier configuration. I think we can see that at sufficiently low frequencies, where the capacitor is an open circuit, the input-to-output gain is simply minus R3 over R1. At higher frequencies, where the capacitor is a short circuit, the gain is minus R3 in parallel with R2, divided by R1. If you do the arithmetic, you find this expression for the input-to-output transfer function. We have some DC gain, and then we have an expression that is a pole-zero doublet, as we anticipate. So we can use one of these and two of these as building blocks for our demonstration system.
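Doing that arithmetic, the feedback impedance R3 in parallel with R2 in series with C gives Vout over Vin equals minus R3 over R1 times (R2 C s plus 1) over ((R2 plus R3) C s plus 1): a zero at 1 over R2 C and a pole, below it, at 1 over (R2 plus R3) C. A sketch with made-up component values chosen to land the pole at 1.5 times 10 to the third and the zero at 10 to the fourth radians per second (illustrative values, not necessarily those in the actual box):

```python
# Lag-network pole and zero from the op-amp feedback elements:
#   Vout/Vin = -(R3/R1) * (R2*C*s + 1) / ((R2 + R3)*C*s + 1)
# Hypothetical values picked so the doublet matches the demonstration system.
R2 = 1.0e3          # ohms
R3 = 5.67e3         # ohms
C = 0.1e-6          # farads

zero = 1 / (R2 * C)            # rad/s: the capacitor begins to short out R3
pole = 1 / ((R2 + R3) * C)     # rad/s: below the zero, so this is a lag

print(zero)                    # 1.0e4 rad/s
print(pole)                    # ~1.5e3 rad/s
print(zero / pole)             # alpha = (R2 + R3)/R2, ~6.7
```

Note that alpha is set purely by the resistor ratio, so the doublet spacing and the DC-to-high-frequency gain change are the same number, a factor of 6.7 here.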
And here we have the system that we're actually using. We have the integrator. Here's the 1 over Rcs. Actually, in our case, we have 10 to the fourth for 1 over the Rc product. So we have 10 to the fourth over s for this Rc. There's something up here that I'll ignore initially. We'll look at that a little bit later on. That's a way of improving the behavior of this kind of a system. And here we have the two operational amplifiers that give us the lag-type transfer functions. At low frequencies, the gain of the amplifier is 0.56. At higher frequencies, it decreases by a factor of 6.7, if you keep track of everything.
We have another really identical network here. We have the input to our system. This is really the feedback path if we consider this the output of the system, so this resistor provides the overall global feedback path. We have a limiter, and we make the limiter in a very simple way. We have the usual inverting connection operational amplifier with a 1k input resistor, a potentiometer for the feedback element. The gain, of course, of this configuration is simply the negative of the feedback resistor divided by the input resistor. So by adjusting the potentiometer, we can change the gain from basically 0 to minus 100. Actually, in the actual test setup, we have a little series resistor included with the potentiometer, so we can't make the gain of this block really 0. We can bring it down to about unity.
And then the way the nonlinearity is introduced is by putting in two back-to-back Zener diodes. The breakdown voltage of the Zener diodes is somewhere between five and six volts. If you look at it, there's a forward drop plus a Zener diode in either direction, one of the Zeners going into forward conduction, the other one into Zener breakdown for sufficiently large signals. The net result is that we get symmetrical limiting at least to the extent that the two Zener diodes are identical, and that limiting occurs at a peak output amplitude of about six volts.
This has all been done with inverting operational amplifier connections. We could have done it with non-inverting connections in some places, but the networks are a little bit more involved. They require more components. So we did it all with inverting amplifiers. We have one, two, three, four inversions. We have to do something to get a negative feedback loop. We put in one additional inversion here, just a unity gain inverter, so that we do, in fact, have a negative feedback loop.
We can look at a block diagram for that system. There's a very nice one-to-one relationship between the block diagram and the circuit that I'd like to emphasize. Here's the input and the fed-back signal that are really summed. And we saw that earlier. The two input resistors here and here, performing that summation. The second element is one of our lag-type transfer functions. We see that in the block diagram, the lag-type transfer function, the DC low-frequency gain of that operational amplifier connection being 0.56, and then the pole-zero doublet with actually a pole located at the reciprocal of this value, 1.5 times 10 to the third radians per second, with zero at the higher frequency.
The next element is the minus 10 to the fourth over s. And we of course get that with our integrator. The final lag network, and then the final gain term. And not shown in the linear block diagram is the limiter.
We can again look at this in all the ways that we described earlier. Here is the root locus plot. A pole at the origin. Two poles at 1.5 times 10 to the third radians per second. Two zeros at 10 to the fourth radians per second. We find out, as we'd predict from our Bode plot argument earlier, the maximum negative phase shift, that there should be a range of a0f0 products, or in our case a0, since everything else is fixed, for which two branches lie in the right half plane-- we can find out the frequencies, and in fact the values of a0 in our expression that cause the two poles to lie in the right half plane.
And we can do that by Routh analysis, for example. That's probably the easiest way to determine the frequencies at which the branches cross into the right half plane. If we do that and divide by 2 pi to get from radians per second to hertz or cycles per second, we find out that we cross into the right half plane at 390 hertz for an a0 of 2.7. The two complex branches re-enter the left half plane at 950 hertz, for a0 equals 24.
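The Routh computation can be sketched as follows. The characteristic equation s times (tau p s plus 1) squared plus 3100 a0 times (tau z s plus 1) squared equals 0 is a cubic a s cubed plus b s squared plus c s plus d, which has imaginary-axis roots exactly when bc equals ad; that condition is quadratic in a0, and the crossing frequency is the square root of d over b. (This is a check on the lecture's numbers, not part of the original setup; the exact values come out near, though not exactly at, the rounded ones quoted.)

```python
import math

tau_p = 1 / 1.5e3        # time constant of the two coincident poles
tau_z = 1e-4             # time constant of the two coincident zeros
K = 3100.0               # fixed part of the loop-transmission magnitude

# Characteristic equation  s*(tau_p*s + 1)**2 + K*a0*(tau_z*s + 1)**2 = 0,
# collected as  a*s^3 + b*s^2 + c*s + d = 0 with
#   a = tau_p**2,           b = 2*tau_p + K*a0*tau_z**2,
#   c = 1 + 2*K*a0*tau_z,   d = K*a0.
# Routh: roots sit on the imaginary axis exactly when b*c = a*d,
# a quadratic in a0; the crossing frequency there is w = sqrt(d/b).
a = tau_p ** 2
A = 2 * K ** 2 * tau_z ** 3                          # a0^2 term of b*c - a*d
B = 4 * tau_p * K * tau_z + K * tau_z ** 2 - a * K   # a0 term
Cc = 2 * tau_p                                       # constant term

disc = math.sqrt(B * B - 4 * A * Cc)
a0_vals = sorted([(-B - disc) / (2 * A), (-B + disc) / (2 * A)])

results = []
for a0 in a0_vals:
    b = 2 * tau_p + K * a0 * tau_z ** 2
    d = K * a0
    f_hz = math.sqrt(d / b) / (2 * math.pi)          # crossing frequency, hertz
    results.append((a0, f_hz))
print(results)    # roughly (2.9, 398 Hz) and (24, 955 Hz)
```

The two roots bracket the unstable range of a0: below the smaller value and above the larger one, all three closed-loop poles are in the left half plane.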
We can again look at this and all the other ways we've seen. Here's the magnitude portion of a Bode plot. We find out that if we choose a value of a0 of 80, very nearly the maximum value we can get with our 100k potentiometer, and take care of the rounding in here, the crossover occurs at just about 10 to the fourth radians per second. Here's the angle portion of the Bode plot. If we look at 10 to the fourth radians per second, we find out that we have a positive value of phase margin. So again, with a0 equals 80, we'd be crossing over, as I'd indicated the original Bode plot on the board, with a positive phase margin.
We can look at the same system in gain-phase form. Again, omega being a parameter, we find out that for a0 equals 80, crossover occurs. Here is the magnitude equals 1 portion. Crossover occurs at 10 to the fourth radians per second. We have roughly 15 degrees of phase margin. The angle is more negative than minus 180 degrees for frequencies between 2.5 times 10 to the third radians per second and 6.3 times 10 to the third radians per second. Those are the values that we determined earlier by Routh analysis. The 390 hertz and the 950 hertz.
Now let's look at how the system actually behaves. What we have here is, again, the usual test generator, a power supply to power our setup. And we have the actual conditionally stable system here. We see five operational amplifiers ganged up. So they perform the functions that we've shown in the actual schematic. We have the potentiometer that varies the magnitude or the gain of the quantity a0. We have a couple of switches that I'll ignore for a moment. We'll see what their function is later on, but this is the basic box.
Right now, we have things set for a low value of a0. This is an a0 actually of about 1. I indicated that when the potentiometer is all the way to the left, we get an a0 of about 1. And under those conditions, we have a fairly lightly damped system. We notice the frequency of ringing on the step response of the system-- we're at two milliseconds per division, so the ring frequency is about 250 hertz, two divisions per cycle. And we indicated for lightly damped systems, the ring frequency on the step response is just about equal to the crossover frequency of the system. So here we have a low value for a0, less than the 2.7 critical value. The crossover occurs below the critical frequency, below 390 hertz. And we notice it's fairly poorly damped. There's relatively small phase margin for an a0 of 1.
Let's try to go to the higher value for a0, something like 80, where the system should once again be stable. And let's say all of a sudden we find out we have some problems. The system has broken into oscillation. We're now in the region where crossover occurs with negative phase margin. But unfortunately, we can't recover. I can turn the a0 pot. Here we start out, low a0, we're all right. All of a sudden, we break into oscillation. And if I try to set a0 equal to 80, I can't do it, because we have large enough signals in the system, such that the system saturates. And in spite of my attempts to increase the a0f0 product, the system thwarts me. We're now into the limiting region of the nonlinearity. There's nothing I can do about it this way.
Well, how do we get out of this? First, I'll turn down the input amplitude. Well, that doesn't help either. The fact that we made the input go away doesn't really matter. The system is now oscillating. And I seem unable to stop it from the input. One thing we might be able to do is turn off the power supply. We have the pot turned up to the maximum a0 value, and let's hope that if we turn off the power supply-- it takes a long time to die, there's big capacitors in there.
All right, the system's quiet now. Let's see what happens if we turn on the power supply. Oops. We lost. This is really a flip-a-coin kind of situation, because it's entirely possible that during the startup transient, we somehow kick the system, get into the large amplitude case, and the system oscillates. That time we won. I turned on the power supply, turned it off, turned it on again. It turned out this time, just by flipping the coin a second time, the system started in the stable mode.
We now have the large value for a0, an a0 of about 80. And what we notice is that we now have, again, a poorly damped step response. We only have about 15 degrees of phase margin, as I indicated earlier, for a0 equals 80. But we're now certainly crossing over at a high frequency, above the 950 hertz critical frequency, the higher value where the branches in the s plane come back. We're now at 500 microseconds per division. The period of the ring on our response is just about a division. So crossover now is about two kilohertz. That corresponds to the a0 of 80. And again the system shows a fair amount of overshoot, corresponding to the low value of phase margin.
Let's see what happens. We'll up the input amplitude. We're just driving the system harder now. Let me change the generator, so I can get a little larger signal input. Now all of a sudden, we're beginning to get into the nonlinearity. Notice the flattening at the top and so forth. Things are getting worse. All of a sudden, the system pops into oscillation. Reducing the input drive amplitude doesn't help. The system's unstable. It's effectively supplying its own input. It's oscillating. We can take the input away. Effectively, we'll trigger on the oscillation, and there's our system oscillating. We notice the peak amplitude of the oscillation-- we're looking at the output of the limiter stage now. We're at two volts per division. The peak amplitude of the oscillation is about six volts.
We can check our describing function prediction. The prediction says that we should oscillate at the lower frequency at about 390 hertz. If we look at the picture, we note that we're at 500 microseconds per division now. We have five divisions horizontally. So that's 2 and 1/2 milliseconds. To within scope accuracy, we measure about 400 hertz for the frequency of oscillation. That's certainly within our experimental confidence in, first of all, building the system, and secondly measuring its frequency of oscillation. So that tells us that the prediction made by describing function analysis is in fact confirmed.
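The scope arithmetic above can be sketched as a quick check. This is just an illustration of the conversion; the division counts are the ones read off the screen in the demonstration:

```python
# Convert the scope reading to a frequency and compare it with the
# 390 Hz describing-function prediction for the low-frequency oscillation.
s_per_div = 500e-6     # 500 microseconds per division
divs_per_period = 5.0  # one period spans five divisions on the screen
period = divs_per_period * s_per_div  # 2.5 milliseconds
frequency = 1.0 / period              # in hertz
print(f"measured oscillation frequency: {frequency:.0f} Hz")  # 400 Hz
```

Within scope accuracy, the measured 400 hertz agrees with the predicted 390 hertz.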
Let's look at one other thing, which is the gain of the describing function element, or the gain of the nonlinear element in the describing function sense. Under these conditions, we have a very large amplitude signal at the input to the nonlinear element. And let's look at that input signal for a moment. I have that on the second trace.
We notice, first of all, it's very nearly sinusoidal. To, again, the extent that we can observe on the oscilloscope, the signal at the input to the nonlinear element is sinusoidal. The gain of the nonlinear element in its linear region is about 80. That says that we would get into the nonlinearity at an input signal level of six volts divided by 80, something under 100 millivolts. Our amplitude scale here is a volt per division, so we're getting nearly three volts peak at the input.
Clearly we're in the situation where we're driving the nonlinear element through its linear region on a very small fraction of the signal applied to it. And we said, under those conditions, we can get the approximate describing function for our element by taking the peak value of the output-- six volts-- times 4 over pi, which gives us the fundamental component of the output, and dividing by the input signal level, which is E. And our root-locus analysis told us that for oscillation at the low frequency, we need a gain through the nonlinearity, in the describing function sense, of 2.7. We solve that for the amplitude at the input to the nonlinearity. We find out that that amplitude should be 2.83 volts.
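The calculation just described can be sketched numerically. This is a sketch of the arithmetic, not part of the demonstration hardware; the six-volt peak and the required gain of 2.7 are the values quoted above:

```python
import math

# Describing-function gain of a limiter driven far into saturation: the
# output is nearly a square wave of peak M, whose fundamental component
# has amplitude 4*M/pi, so the gain for input amplitude E is 4*M/(pi*E).
def limiter_df_gain(E, M=6.0):
    return 4.0 * M / (math.pi * E)

# Oscillation at the low frequency requires a describing-function gain
# of 2.7 (the value from the root-locus analysis); solve for E.
M = 6.0            # limiter output peak, volts
required_gain = 2.7
E_osc = 4.0 * M / (math.pi * required_gain)
print(f"predicted input amplitude: {E_osc:.2f} V")  # 2.83 V
```

Solving 4M/(pi E) = 2.7 with M = 6 volts gives the 2.83-volt amplitude checked on the scope next.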
Let's look at the sine wave. Again we're at one volt per division. And certainly within experimental tolerance, we'd conclude that that's very nearly 2.83 volts peak. We have close to three volts peak, and if we look at the fine divisions, we just about cross the 2.8 volt minor division. So again, within the accuracy of our experimental configuration, we conclude that describing function analysis predicts the amplitude of the oscillation very, very well.
How do we cure this thing? It's certainly embarrassing to have to turn off the power supply, turn it back on. That'd be impractical in a satellite for example. How do we try to cure the situation in a somewhat more scientific way? Let's go back to the original schematic and look at the point that I glossed over that says, nonlinear compensation. What you do, I think, or at least one technique for compensating this sort of a system, is to consider what happens. If we keep track of signal levels, we find out that when the system is behaving normally, at least at the high-gain configuration, we normally want to cross over in this system above 950 hertz.
When the system is behaving in that manner, the signal level here is very, very small. If you keep track of the maximum signal at this point, we found out that that's about six volts. If you go back through the gains of these elements, you find out that as long as the system's in its linear region, the signal level at this point is quite small. It's less than a diode drop, less than the forward drop of a diode.
However, when the system pops into the mode where it's oscillating at the low frequency as predicted by our describing function analysis, the signal at this point becomes quite large. It becomes significantly larger than 6/10 of a volt. Consequently, if we close this switch, what happens, effectively, is that when the system's oscillating, or tries to oscillate, we don't have an integrator. We have the 10k resistors shunted across the feedback capacitors.
So we have an interesting sort of adaptive compensation, where when the system's behaving normally, when its signal amplitudes are small, these diodes are open circuits. We can have the switch closed. Nothing happens. The system behaves exactly as we showed it before. However, when the system tries to break into oscillation, the amplitude of the signal at this point gets large. Automatically, this 10k resistor is shunted across the 0.01 microfarad capacitor. What we do is change the dynamics associated with this integrator.
We can see how that happens. Let's look at the integrator configuration. And we had seen before, when the diodes were open, or when the switch was open, we just had the R and C. We have simply 1 over RCs, or minus 1 over RCs, for the Vout over Vi characteristic. When we have large signals, we can argue in the describing function sense that this resistor is shunted across the capacitor. To the extent that that's true, if we had a linear system, we'd get a Vout over Vi that would simply be 1 over (RCs plus 1).
We've chosen RC to be 10 to the minus 4 seconds. And so the result-- if we go back to our original loop transmission, under large signal conditions, to the extent that we're able to argue as I have that the net effect of having a large signal is to shunt that resistor across the feedback capacitor-- is that we remove the pole from the origin and replace it with a pole from the factor 1 over (10 to the minus 4 s plus 1), that is, 1 over (RCs plus 1). And that pole cancels one of these zeros.
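The effect of shunting the resistor across the capacitor can be sketched by comparing the two magnitude characteristics. The RC value of 10^-4 seconds is the one given above; the test frequencies are chosen only for illustration:

```python
import math

RC = 1e-4  # integrator time constant, seconds

def integrator_mag(w):
    """|Vout/Vi| with the switch open: a pure integrator, 1/(RC*s)."""
    return 1.0 / (RC * w)

def shunted_mag(w):
    """|Vout/Vi| with the resistor shunted across the capacitor: 1/(RC*s + 1)."""
    return 1.0 / math.hypot(RC * w, 1.0)

# Well below the pole at 1/RC = 10^4 rad/s, the shunted stage sits at
# unity gain instead of integrating; well above it, the two coincide.
for w in (1e2, 1e4, 1e6):
    print(f"w = {w:.0e} rad/s: integrator {integrator_mag(w):.4g}, "
          f"shunted {shunted_mag(w):.4g}")
```

Below the new pole the stage no longer integrates, which is what removes the pole from the origin in the large-signal loop transmission.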
The net result is that we end up with a system that has two poles at 1.5 times 10 to the third radians per second, and a single zero now out at higher frequencies, at 10 to the fourth radians per second, because the pole that replaced the pole at the origin cancels one of these zeros. The system is now absolutely stable. There's no value of the a0f0 product that gets us into trouble. The root-locus diagram for the system is as shown.
Let's look back at the output of our system. Here's the system oscillating. Here we have the compensation. If I throw the compensation switch, what we do is simply close that switch that I showed earlier in the schematic diagram. Let's trigger so that we can see what's happening. All of a sudden, the system becomes very well-behaved again. We can now drive it. Remember, we had the system simply oscillating by itself. We can now drive it. We have our nonlinear compensation on. We now get to the point that we got into trouble earlier.
Well, the waveform gets ugly. There's no doubt about that. We can get almost any kind of behavior out of the waveform while we're driving things. But if I take the input amplitude down once again, allow the system to re-enter its linear region, it does stabilize. Notice the difference. We'll throw the compensation off, and now let me up the amplitude. It jumps into the oscillating mode. We're getting a little modulation. I'm still driving it. But let's not worry about that. If I put the nonlinear compensation on, it recovers. I can get quite bad performance, but at least when I take things away, the system returns to its stable mode of operation. So we've found quite an effective way to really exploit the characteristics or the behavior of this particular nonlinear system.
And there's certainly no general procedure for nonlinear compensation, but what we do is look for a point in the loop where signal levels change rather dramatically when the system starts to oscillate, and see if we can somehow use that information to improve the performance. We found a rather easy way to do it in this particular case.
I'd like to very quickly show one more feature of this system. We also have the ability to include inside the loop a complementary emitter-follower pair. I think many of you are familiar with this sort of a connection. This is the kind of power-handling stage one very frequently uses. I think we mentioned it earlier in one of the demonstrations at the very beginning of the course. And the forward drop associated with the emitter-followers gives us a dead zone of about 0.6 volts on either side of zero. For signals smaller than about 6/10 of a volt, if the transistors are silicon transistors, we have very little output. As the input gets bigger than about 6/10 of a volt, the output follows it. So the slope of the input/output characteristics for signals considerably larger than 6/10 of a volt becomes about 1.
The describing function associated with that kind of an element is close to 0 until we get up to about 6/10 of a volt. When we get much larger than that, the gain of this complementary emitter-follower pair approaches 1. The input and output are very nearly identical for sufficiently large input signals. So our describing function as a function of test signal amplitude is as shown.
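The curve just described can be sketched with the standard describing function for a unit-slope dead zone. This is an illustrative sketch; the 0.6-volt half-width is the diode drop quoted above, and the test amplitudes are arbitrary:

```python
import math

def deadzone_df_gain(E, half_width=0.6):
    """Describing-function gain of a unit-slope dead zone (half-width
    about 0.6 V, the emitter-follower forward drop) for a sinusoid of
    amplitude E volts."""
    if E <= half_width:
        return 0.0  # the signal never leaves the dead zone
    r = half_width / E
    return 1.0 - (2.0 / math.pi) * (math.asin(r) + r * math.sqrt(1.0 - r * r))

# The gain is zero for small signals and approaches 1 as E grows large,
# matching the describing-function curve described in the lecture.
for E in (0.5, 1.0, 3.0, 30.0):
    print(f"E = {E:4.1f} V: gain = {deadzone_df_gain(E):.3f}")
```

Note this gain rises with amplitude, the opposite of the saturating limiter, which is why the stability arguments reverse in the next paragraph.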
But what this does is reverse the conditions that we indicated earlier. If we chase this through, we now find that the high-frequency crossover point becomes the stable amplitude oscillation. The kinds of arguments we made before, looking at what happens as we increase amplitude and seeing whether the system is conservative or restorative, are basically reversed.
Furthermore, we get into trouble with this system with too-small signals, rather than with too-large ones. And let's just look at how that works. I have the earlier nonlinear compensation on, so that we prevent ourselves from getting into the kind of instability we had before. Now let me put in the dead zone. And the system is still behaving pretty well here. We can in fact drive it as we did before. And for large signals in, it behaves pretty much as it did before.
But now let's see what happens as I begin to decrease the amplitude of the input signal. Now all the signals in the loop are getting smaller, and in particular the signal that's applied to the complementary emitter-follower is getting smaller. And now we're beginning to get into some problems there. Let me make the signal still smaller. And now the system oscillates with very small input signals.
Let's make the input signal go away completely. And we're now oscillating because we have too small an input signal. And in fact, we can look at the frequency of oscillation, and we notice that it's out around a kilohertz. We're now oscillating at the higher frequency point. In this case, we can cure the oscillation, make it go away, by putting in a large signal. That gets the gain of our describing function element to be higher, and that cures the oscillation. So let's go ahead, put in a larger signal. We go from the unstable case to a system with relatively light damping, but stable.
Well, I think this test system demonstrates how we can investigate this kind of rather interesting and yet frequently occurring system by describing functions. It also shows how we're able to improve the performance of a nonlinear system by using an appropriate kind of nonlinear compensation. It's a little bit harder to determine than some of the linear compensations-- there aren't quite as nice analytic ways of determining the compensation-- but we at least saw an example of how we could improve the performance of a nonlinear system.
This concludes our discussion of describing functions as one of the analytic techniques that we have for investigating nonlinear systems. Thank you.