In the last lecture, I introduced and illustrated the kinds of signals and systems that we'll be dealing with throughout this course. In today's lecture, I'd like to be a little more specific and, in particular, talk about some of the basic signals, both continuous time and discrete time, that will form important building blocks as the course progresses.
Let's begin with one signal, the continuous time sinusoidal signal, which perhaps you're already somewhat familiar with. Mathematically, the continuous time sinusoidal signal is expressed as I've indicated here. There are three parameters, A, omega 0, and phi.
The parameter A is referred to as the amplitude, the parameter omega 0 as the frequency, and the parameter phi as the phase. And graphically, the continuous time sinusoidal signal has the form shown here. Now, the sinusoidal signal has a number of important properties that we'll find it convenient to exploit as the course goes along.
One of these is the fact that the sinusoidal signal is what is referred to as periodic. What I mean by periodic is that under an appropriate time shift, which I indicate here as T0, the signal replicates, or repeats itself. Said another way, if we shift the time origin by an appropriate amount T0, the smallest such value of T0 being referred to as the period, then x of t is equal to itself shifted. We can demonstrate that mathematically by simply substituting t plus capital T0 in place of t in the expression for the sinusoidal signal. When we carry out the expansion, we then have, for the argument of the sinusoid, omega 0 t plus omega 0 capital T0 plus phi. Now, one of the things that we know about sinusoidal functions is that if you change the argument by any integer multiple of 2 pi, then the function has the same value. And we can exploit that here. In particular, with omega 0 capital T0 an integer multiple of 2 pi, the right-hand side of this equation is equal to the left-hand side. So with omega 0 capital T0 equal to 2 pi times an integer, or capital T0 equal to 2 pi times an integer divided by omega 0, the signal repeats. The period is defined as the smallest such value of capital T0, and so the period is 2 pi divided by omega 0. Going back to our sinusoidal signal, I've indicated here the period as 2 pi over omega 0, and that's the shift under which the signal repeats. In addition, a useful property of the sinusoidal signal is the fact that a time shift of a sinusoid is equivalent to a phase change. And we can demonstrate that again mathematically.
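As a quick numerical illustration of this periodicity property, here is a minimal Python sketch, with the amplitude, frequency, and phase chosen arbitrarily rather than taken from the lecture examples, checking that shifting the cosine by T0 = 2 pi / omega 0 leaves it unchanged:

```python
import numpy as np

# Arbitrary illustrative parameters (not from the lecture)
A, omega0, phi = 2.0, 3.0, 0.7        # amplitude, frequency (rad/s), phase
T0 = 2 * np.pi / omega0               # the period derived above

t = np.linspace(0.0, 10.0, 1000)      # a grid of time points
x = A * np.cos(omega0 * t + phi)
x_shifted = A * np.cos(omega0 * (t + T0) + phi)

# The shifted signal matches the original to numerical precision
print(np.allclose(x, x_shifted))      # True
```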
In particular, if we put the sinusoidal signal under a time shift, which I've indicated here by small t0, and expand this out, then we see that it is equivalent to a change in phase. An important thing to recognize about this statement is that not only does a time shift generate a phase change, but in fact, if we inserted a phase change, there is always a value of t0 that would correspond to an equivalent time shift. Said another way, if we take omega 0 t0 and think of that as our change in phase, then for any change in phase, we can solve this equation for a time shift. Or conversely, any value of time shift represents an appropriate phase. So a time shift corresponds to a phase change, and a phase change likewise corresponds to a time shift. And so, for example, if we look at the general sinusoidal signal that we saw previously, changing the phase in effect corresponds to moving this signal in time one way or the other.
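In the continuous time case this works in both directions: any phase change can be absorbed into the time shift t0 that solves omega 0 t0 equal to that phase change. A small sketch of the equivalence, again with arbitrarily chosen parameters:

```python
import numpy as np

omega0, delta_phi = 3.0, 1.2          # arbitrary frequency and phase change
t0 = delta_phi / omega0               # the equivalent time shift always exists

t = np.linspace(0.0, 10.0, 1000)
phase_changed = np.cos(omega0 * t + delta_phi)
time_shifted = np.cos(omega0 * (t + t0))

print(np.allclose(phase_changed, time_shifted))   # True
```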
For example, if we look at the sinusoidal signal with a phase equal to zero, that corresponds to locating the time origin at this peak. And I've indicated that on the following graph. So here we have illustrated a sinusoid with zero phase, or a cosine with zero phase, corresponding to taking our general picture and shifting it appropriately, as I've indicated here.
This, of course, still has the property that it's a periodic function, since we've simply displaced it in time. And by looking at the graph, what we see is that it has another very important property, a property referred to as even. That's a property that we'll find useful in general to refer to in relation to signals. A signal is said to be even if, when we reflect it about the origin, it looks exactly the same.
So it's symmetric about the origin. And looking at this sinusoid, it in fact has that property. Mathematically, the statement that it's even is equivalent to the statement that if we replace the time argument by its negative, the function itself doesn't change. Now, this corresponded to a phase shift of 0 in our original cosine expression. If instead we had chosen a phase shift of, let's say, minus pi over 2, then instead of a cosinusoidal signal, what we would generate is a sinusoid with the appropriate phase. Or, said another way, if we take our original cosine and substitute minus pi over 2 in for the phase, then, of course, we have this mathematical expression.
Using just straightforward trigonometric identities, we can express that alternately as A times the sine of omega 0 t. The frequency and amplitude, of course, haven't changed.
And that, you can convince yourself, is also equivalent to shifting the cosine by an amount in time that I've indicated here, namely a quarter of a period. So illustrated below is the graph when we have a phase of minus pi over 2 in our cosine, which is a sinusoidal signal.
Of course, it's still periodic. It's periodic with a period of 2 pi over omega 0 again, because all that we've done by introducing a phase change is introduce a time shift. Now, when we look at the sinusoid in comparison with the cosine, namely with this particular choice of phase, it has a different symmetry, and that symmetry is referred to as odd. What odd symmetry means graphically is that when we flip the signal about the time origin, we also multiply it by a minus sign. So it's, in effect, antisymmetric. It's not the mirror image, but the mirror image flipped over.
And we'll find many occasions to refer to signals more general than sinusoidal signals as even in some cases and odd in other cases. In general, mathematically, an odd signal is one which satisfies the algebraic statement that x of t, when you replace t by its negative, picks up a minus sign; that is, x of minus t equals minus x of t. So replacing the argument by its negative corresponds to an algebraic sign reversal. OK, so this is the class of continuous time sinusoids. We'll have a little more to say about it later. But I'd now like to turn to discrete time sinusoids.
What we'll see is that discrete time sinusoids are very much like continuous time ones, but also with some very important differences. And we want to focus not only on the similarities, but also on the differences. Well, let's begin with the mathematical expression. A discrete time sinusoidal signal is expressed mathematically, as I've indicated here, as A cosine of omega 0 n plus phi. And just as in the continuous time case, the parameter A is what we'll refer to as the amplitude, omega 0 as the frequency, and phi as the phase. I've illustrated here several discrete time sinusoidal signals.
And they kind of look similar. In fact, if you track what you might think of as the envelope, it looks very much like what a continuous time sinusoid might look like. But keep in mind that the independent variable in this case is an integer variable.
And so the sequence, in fact, only takes on values at integer values of the argument. We'll see shortly that this has a very important implication. Now, one of the issues that we addressed in the continuous time case was periodicity, and I want to return to that shortly, because that is, in fact, one of the areas where there is an important distinction.
Let's first, though, examine a statement similar to the one that we examined for continuous time, namely the relationship between a time shift and a phase change. Now, in continuous time, of course, we saw that a time shift corresponds to a phase change and vice versa. Let's first look at the relationship between shifting time and generating a change in phase. In particular, for discrete time, if in fact I implement a time shift, that generates a phase change. And we can see that easily by simply inserting a time shift, n plus n0. If we expand out this argument, we have omega 0 n plus omega 0 n0.
And so I've done that on the right-hand side of the equation here. And the omega 0 n0 then simply corresponds to a change in phase. So clearly, a shift in time generates a change in phase.
And for example, if we take a particular sinusoidal signal, let's say the cosine signal at a particular frequency and with a phase equal to 0, a sequence that we might generate is the one that I've illustrated here. So what I'm illustrating here is the cosine signal with zero phase. It has a particular behavior to it, which will depend somewhat on the frequency. If I now take the same sequence and shift it so that the time origin is shifted a quarter of a period away, then you can convince yourself, and it's straightforward to work out, that that time shift corresponds to a phase shift of pi over 2. So in that case, we have the cosine with a phase of minus pi over 2. That corresponds to the expression that I have here, and we can alternately write it, using again a trigonometric identity, as a sine function.
And that, I've stated, is equivalent to a time shift. Namely, this phase shift of pi over 2 is equal to a certain time shift.
And the time shift for this particular example is, in fact, a quarter of a period. So here we have the sinusoid. Previously, we had the cosine.
The cosine was exactly the same sequence, but with the origin located here. And in fact, that's exactly the way we drew this graph. Namely, we just simply took the same values and changed the time origin.
Now, looking at this sequence, which is the sinusoidal sequence with a phase of minus pi over 2, it has a certain symmetry. And in fact, what we see is that it has an odd symmetry, just as in the continuous time case. Namely, if we take that sequence, flip it about the time origin, and flip it over in sign, then we get the same sequence back again. Whereas with zero phase, corresponding to the cosine that I showed previously, we have an even symmetry, namely, if I flip it about the time origin and don't do a sign reversal, then the sequence is maintained.
So here we have an odd symmetry, expressed mathematically as I've indicated: replacing the independent variable by its negative attaches a negative sign to the whole sequence. Whereas in the previous case, what we have is zero phase and an even symmetry, and that's expressed mathematically as x of n equals x of minus n. OK, now one of the things I've said so far about discrete time sinusoids is that a time shift corresponds to a phase change. We can then ask whether the reverse statement is also true, and we knew that the reverse statement was true in continuous time.
Specifically, is it true that a phase change always corresponds to a time shift? We know that this statement works both ways in continuous time. Does it in discrete time?
Well, the answer, somewhat interestingly or surprisingly until you sit down and think about it, is no: it is not necessarily true in discrete time that any phase change can be interpreted as a simple time shift of the sequence. And let me just indicate what the problem is.
If we look at the relationship between the left side and the right side of this equation, expanding this out as we did previously, we have that omega 0 n plus omega 0 n0 must correspond to omega 0 n plus phi. And so omega 0 n0 must correspond to the phase change. Now, what you can see pretty clearly is that depending on the relationship between phi and omega 0, n0 may or may not come out to be an integer. In continuous time, the amount of time shift did not have to be an integer amount. In discrete time, when we talk about a time shift, the amount of the shift, obviously, because of the nature of discrete time signals, must be an integer.
So the phase changes that are related to time shifts must satisfy this particular relationship, namely that omega 0 n0, where n0 is an integer, is equal to the change in phase.
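To make that concrete, here is a small sketch, with parameters chosen only for illustration, that searches for an integer time shift n0 reproducing a given phase change. For omega 0 equal to 2 pi over 12, a phase change of pi over 2 is realizable as a shift of 3 samples, but most phase changes are not realizable as any time shift at all.

```python
import numpy as np

def shift_matching_phase(omega0, phi, max_shift=1000):
    """Search for an integer time shift n0 that reproduces the phase change phi."""
    n = np.arange(200)                                 # a stretch of the sequence
    target = np.cos(omega0 * n + phi)                  # the phase-changed sequence
    for n0 in range(max_shift):
        if np.allclose(np.cos(omega0 * (n + n0)), target):
            return n0                                  # found an equivalent time shift
    return None                                        # no integer shift reproduces this phase

print(shift_matching_phase(2 * np.pi / 12, np.pi / 2))   # 3 -- a quarter-period shift
print(shift_matching_phase(2 * np.pi / 12, 0.1))         # None -- no pure time shift works
```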
That's one distinction between continuous time and discrete time. Let's now focus on another one, namely the issue of periodicity. And what we'll see is that, whereas in continuous time all sinusoids are periodic, in the discrete time case that is not necessarily true. To explore that a little more carefully, let's look again at the expression for a general sinusoidal signal with an arbitrary amplitude, frequency, and phase. For this to be periodic, what we require is that there be some value, capital N, such that when we shift the sequence by that amount, we get the same sequence back again. And the smallest such value of capital N is what we've defined as the period. Now, when we try that on a sinusoid, we of course substitute little n plus capital N in for little n. When we expand out the argument here, we get the argument that I have on the right-hand side. And in order for this to repeat, in other words, in order for us, in effect, to discard this term, omega 0 times capital N, where capital N is the period, must be an integer multiple of 2 pi. So it's periodic as long as omega 0 times capital N, capital N being the period, is 2 pi times an integer.
Just simply dividing this out, we have capital N, the period, is 2 pi m divided by omega 0. Well, you can say, okay, what's the big deal? Whatever capital N happens to come out to be when we do that little bit of algebra, that's the period. But in fact, capital N, or 2 pi m divided by omega 0, may not ever come out to be an integer, or it may not come out to be the one that you thought it might. For example, let's look at some particular sinusoidal signals.
Let's see, we have the first one here, which is a sinusoid, as I've shown. And it has a frequency, what I've referred to as the frequency, omega 0 equal to 2 pi divided by 12. What we'd like to look at is 2 pi divided by omega 0 and, in effect, find an integer to multiply that by in order to get another integer. Let's just try that here.
If we look at 2 pi over omega 0, 2 pi over omega 0 for this case is equal to 12. Well, that's fine. 12 is an integer. So what that says, in fact, is that this sinusoidal signal is periodic, and in fact, it's periodic with a period of 12. All right, let's look at the next one.
For the next one, we would again look at 2 pi over omega 0, and that's equal to 31 fourths. So what that seems to say is that the period is 31 fourths. But wait a minute, 31 fourths isn't an integer.
We have to multiply that by an integer to get another integer. Well, we'd multiply it by 4. So 2 pi over omega 0 times 4 is 31, and 31 is an integer. So what that says is that this is periodic, not with a period of 2 pi over omega 0, but with a period of 2 pi over omega 0 times 4, namely, with a period of 31. Finally, let's take the example where omega 0 is equal to one sixth, as I've shown here. That actually looks, if you track it with your eye, like it's periodic.
2 pi over omega 0, in that case, is equal to 12 pi. Well, what integer can I multiply 12 pi by and get another integer? The answer is none, because pi is an irrational number. So what that says is that if you look at this sinusoidal signal, it's not periodic at all, even though you might fool yourself into thinking it is, simply because the envelope looks periodic. Namely, the continuous time equivalent of this is periodic; the discrete time sequence is not.
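A small sketch that carries out this periodicity test numerically for the three example frequencies, searching (up to an arbitrary limit) for the smallest integer N with omega 0 times N equal to an integer multiple of 2 pi:

```python
import numpy as np

def fundamental_period(omega0, max_N=10_000):
    """Smallest integer N > 0 with omega0 * N an integer multiple of 2*pi, or None."""
    for N in range(1, max_N + 1):
        m = omega0 * N / (2 * np.pi)       # this must come out to be an integer
        if abs(m - round(m)) < 1e-9:
            return N
    return None                            # no such N found within the search limit

print(fundamental_period(2 * np.pi / 12))  # 12
print(fundamental_period(8 * np.pi / 31))  # 31
print(fundamental_period(1 / 6))           # None -- not periodic, since pi is irrational
```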
Okay, well, we've seen then some important distinctions between continuous time sinusoidal signals and discrete time sinusoidal signals. The first one is the fact that in the continuous time case, time shift and phase change are always equivalent, whereas in the discrete time case, in effect, it works one way but not the other way. We've also seen that a continuous time sinusoidal signal is always periodic, whereas the discrete time signal is not necessarily. In particular, for the continuous time case, if we have the general expression for the sinusoidal signal that I've indicated here, that's periodic for any choice of omega 0. Whereas in the discrete time case, it's periodic only if 2 pi over omega 0 can be multiplied by an integer to get another integer. Now, another important and, as it turns out, useful distinction between the continuous time and discrete time cases is the fact that in the discrete time case, as we vary what I've called the frequency omega 0, we only see distinct signals as omega 0 varies over a 2 pi interval.
And if we let omega 0 vary outside the range of, let's say, minus pi to pi or 0 to 2 pi, we'll see the same sequences all over again, even though at first glance the mathematical expression might look different. So in the discrete time case, this class of signals is identical for values of omega 0 separated by 2 pi, whereas in the continuous time case, that is not true. In particular, if I consider these continuous time sinusoidal signals as I vary omega 0, what will happen is that I will always see different sinusoidal signals.
Namely, these won't be equal. In effect, we can justify that statement algebraically, and I won't take the time to do it carefully.
But let's look, first of all, at the discrete time case. The statement that I'm making is that if I have two discrete time sinusoidal signals at two different frequencies, and if these frequencies are separated by an integer multiple of 2 pi, namely if omega 2 is equal to omega 1 plus 2 pi times an integer m, then when I substitute this into this expression, because of the fact that n is also an integer, I'll have 2 pi times m times n, which is an integer multiple of 2 pi. And that term, of course, will disappear because of the periodicity of the sinusoid, and these two sequences will be equal.
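A quick numerical check of that statement, with an arbitrarily chosen base frequency: two discrete time sinusoids whose frequencies differ by 2 pi produce exactly the same sequence.

```python
import numpy as np

n = np.arange(50)                       # integer time index
omega1 = 0.4                            # an arbitrary base frequency
omega2 = omega1 + 2 * np.pi             # a frequency exactly 2*pi higher

x1 = np.cos(omega1 * n)
x2 = np.cos(omega2 * n)

print(np.allclose(x1, x2))              # True: the two sequences are identical
```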
On the other hand, in the continuous time case, since t is not restricted to be an integer variable, for different values of omega 1 and omega 2 these sinusoidal signals will always be different. OK. Now, many of the issues that I've raised so far in relation to sinusoidal signals are elaborated on in more detail in the text.
And of course, you'll have an opportunity to exercise some of this as you work through the video course manual. Let me stress that sinusoidal signals will play an extremely important role for us as building blocks for general signals and descriptions of systems, and this leads to the whole concept of Fourier analysis, which is very heavily exploited throughout the course.
What I'd now like to turn to is another class of important building blocks. And in fact, we'll see that under certain conditions, these relate strongly to sinusoidal signals. Namely, the class of real and complex exponentials. Let me begin first of all with the real exponential, and in particular in the continuous time case.
A real continuous time exponential is mathematically expressed as I indicate here, x of t equal to c e to the a t, where for the real exponential, c and a are real numbers. And that's what we mean by the real exponential. Shortly, we'll also consider complex exponentials, where these numbers can then become complex.
So this is an exponential function. And for example, if the parameter a is positive, that means that we have a growing exponential function. If the parameter a is negative, then that means that we have a decaying exponential function.
Now, somewhat as an aside, it's kind of interesting to note that for exponentials, a time shift corresponds to a scale change, which is somewhat different than what happens with sinusoids. In the sinusoidal case, we saw that a time shift corresponded to a phase change. With a real exponential, a time shift, as it turns out, corresponds to simply changing the scale.
There's nothing particularly crucial or exciting about that, and in fact, perhaps stressing it is a little misleading; for general functions, of course, about all you can say about what happens when you implement a time shift is that it implements a time shift. OK, so that's the real exponential, just c e to the a t. Let's look now at the real exponential in the discrete time case. In the discrete time case, we have several alternate ways of expressing it. We can express the real exponential in the form c e to the beta n.
Or, as we'll find more convenient, in part for a reason that I'll indicate shortly, we can rewrite this as c times alpha to the n, where of course alpha is equal to e to the beta. More typically, in the discrete time case, we'll express the exponential as c times alpha to the n. So, for example, this becomes essentially a geometric progression as n increases, for certain values of alpha.
Here, for example, we have, for alpha greater than 0, first of all on the top, the case where the magnitude of alpha is greater than 1, so that the sequence is exponentially or geometrically growing. On the bottom, again with alpha positive, but now with its magnitude less than 1, we have a geometric progression that is exponentially or geometrically decaying. OK, so both of these cases are with alpha greater than 0. Now, the function that we're talking about is alpha to the n, and of course what you can see is that if alpha is negative instead of positive, then when n is even, that minus sign is going to disappear, and when n is odd, there will be a minus sign.
And so for alpha negative, the sequence is going to alternate between positive and negative values. So, for example, here we have alpha negative with its magnitude less than 1. And you can see that, again, its envelope decays geometrically, and the values alternate in sign. And here we have the magnitude of alpha greater than 1, with alpha negative.
Again, the values alternate in sign, and of course, the envelope is growing geometrically. Now, if you think about alpha positive and go back to the expression that I have at the top, namely c times alpha to the n, you can see a straightforward relationship between alpha and beta. Namely, beta is the natural logarithm of alpha. Something to think about is what happens if alpha is negative, which is, of course, also a very important and useful class of real discrete time exponentials.
Well, it turns out that with alpha negative, if you try to express it as c e to the beta n, then in fact beta no longer comes out to be a real number. And that is one reason, but not the only reason, why in the discrete time case it's often most convenient to phrase real exponentials in the form alpha to the n rather than e to the beta n. In other words, to express them in this form rather than in this form.
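As a small numerical illustration, with values chosen arbitrarily: the relationship beta = ln(alpha) works directly for positive alpha, while for negative alpha the logarithm, and hence beta, picks up an imaginary part of pi.

```python
import cmath
import numpy as np

alpha_pos, alpha_neg = 0.8, -0.8

beta_pos = cmath.log(alpha_pos)          # ln(0.8) + 0j, i.e. purely real
beta_neg = cmath.log(alpha_neg)          # ln(0.8) + j*pi, no longer real

print(beta_pos, beta_neg)

# Either way, c * alpha**n and c * exp(beta*n) describe the same sequence
n = np.arange(10)
print(np.allclose(alpha_neg ** n, np.exp(beta_neg * n)))   # True (imaginary part is negligible)
```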
Those are real exponentials, continuous time and discrete time. Now let's look at the continuous time complex exponential. What I mean by a complex exponential, again, is an exponential of the form c e to the a t.
But in this case, we allow the parameters c and a to be complex numbers. And let's just track this through algebraically. If c and a are complex numbers, let's write c in polar form so it has a magnitude and an angle.
Let's write a in rectangular form so it has a real part and an imaginary part. When we substitute these two in here, I have this for the amplitude factor and this for the exponential factor. I can now pull out of this the term corresponding to e to the r t, combine the imaginary parts together, and I come down to the expression that I have here.
So following this further, an exponential of this form, with a purely imaginary exponent, can be expressed using Euler's relation as the sum of a cosine plus j times a sine. And so that corresponds to this factor. And then there is this time-varying amplitude factor on top of it. Finally, putting those together, we end up with the expression that I show on the bottom. And what this corresponds to are two sinusoidal signals, 90 degrees out of phase, as indicated by the fact that there's a cosine and a sine.
So there's a real part and an imaginary part with sinusoidal components 90 degrees out of phase and a time-varying amplitude factor, which is a real exponential. So it's a sinusoid multiplied by a real exponential in both the real part and the imaginary part. And let's just see what one of those terms might look like.
What I've indicated at the top is a sinusoidal signal with a time-varying exponential envelope, an envelope which is a real exponential, and in particular one which is growing, namely with r greater than 0. And on the bottom, I've indicated the same thing with r less than 0. This kind of sinusoidal signal, by the way, is typically referred to as a damped sinusoid. So with r negative, what we have in the real and imaginary parts are damped sinusoids, and the sinusoidal components are 90 degrees out of phase in the real part and in the imaginary part.
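A short sketch of that decomposition, with arbitrary illustrative values of the complex parameters, confirming that c e to the a t separates into a real exponential envelope times a cosine in the real part and times a sine in the imaginary part:

```python
import numpy as np

# Arbitrary illustrative parameters: c = |c| e^{j theta}, a = r + j omega0
c_mag, theta = 1.5, 0.4
r, omega0 = -0.3, 5.0                      # r < 0 gives a damped sinusoid

c = c_mag * np.exp(1j * theta)
a = r + 1j * omega0

t = np.linspace(0.0, 10.0, 1000)
x = c * np.exp(a * t)                      # the complex exponential itself

envelope = c_mag * np.exp(r * t)           # real exponential envelope
real_part = envelope * np.cos(omega0 * t + theta)
imag_part = envelope * np.sin(omega0 * t + theta)

print(np.allclose(x.real, real_part))      # True
print(np.allclose(x.imag, imag_part))      # True
```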
Now, in the discrete time case, we have more or less the same kind of outcome. In particular, we'll make reference to complex exponentials in the discrete time case. The expression for the complex exponential looks very much like the expression for the real exponential, except that now we have complex factors. So c and alpha are complex numbers. And again, if we track through the algebra, get to a point where we have a real exponential multiplied by a factor which is a purely imaginary exponential, and apply Euler's relation to that, we finally come down to a sequence which has a real exponential amplitude multiplying one sinusoid in the real part, and in the imaginary part exactly the same kind of exponential multiplying a sinusoid that's 90 degrees out of phase from that. And so if we look at what one of these factors might look like, it's what we would expect given the analogy with the continuous time case. Namely, it's a sinusoidal sequence with a real exponential envelope. In the case where the magnitude of alpha is greater than 1, it's a growing exponential envelope. Where the magnitude of alpha is less than 1, it's a decaying exponential envelope. And so I've illustrated that here.
Here we have the magnitude of alpha greater than 1, and here we have the magnitude of alpha less than 1. In both cases, there are sinusoidal sequences underneath the envelope, and the envelope is dictated by what the magnitude of alpha is. OK, so in the discrete time case, we have a result similar to the continuous time case, namely components in the real and imaginary parts that have a real exponential factor times a sinusoid. Of course, if the magnitude of alpha is equal to 1, then this factor disappears, or is equal to 1, and we have pure sinusoids in both the real and the imaginary parts. Now, one can ask whether, in general, the complex exponential with the magnitude of alpha equal to 1 is periodic or not periodic.
And the clue to that can be inferred by examining this expression. In particular, in the discrete time case, with the magnitude of alpha equal to 1, we have pure sinusoids in the real part and in the imaginary part. Likewise, in the continuous time case, with r equal to 0, we have sinusoids in the real part and the imaginary part. In the continuous time case, when we have a pure complex exponential, so that the terms aren't exponentially growing or decaying, the exponential is always periodic because, of course, the real and imaginary sinusoidal components are periodic. In the discrete time case, we know that the sinusoids may or may not be periodic, depending on the value of omega 0.
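A brief numerical illustration of that point, with frequencies chosen to mirror the earlier examples: the discrete time complex exponential e to the j omega 0 n repeats with period 12 when omega 0 is 2 pi over 12, but never repeats exactly when omega 0 is one sixth.

```python
import numpy as np

def is_periodic_with(omega0, N, n_max=500):
    """Check numerically whether exp(j*omega0*n) repeats when n is shifted by N."""
    n = np.arange(n_max)
    x = np.exp(1j * omega0 * n)
    x_shifted = np.exp(1j * omega0 * (n + N))
    return np.allclose(x, x_shifted)

print(is_periodic_with(2 * np.pi / 12, 12))                      # True: period 12
print(any(is_periodic_with(1 / 6, N) for N in range(1, 500)))    # False: no period exists
```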
So, in fact, in the discrete time case, the exponential e to the j omega 0 n that I've indicated here may or may not be periodic, depending on what the value of omega 0 is. OK, now to summarize: in this lecture I've introduced and discussed a number of important basic signals, in particular sinusoids and real and complex exponentials. One of the important outcomes of the discussion, emphasized further in the text, is that there are some very important similarities between them, but there are also some very important differences.
And these differences will surface when we exploit sinusoids and complex exponentials as basic building blocks for more general continuous time and discrete time signals. In the next lecture, what I'll discuss are some other very important building blocks, namely what are referred to as step signals and impulse signals. And those, together with the sinusoidal signals and exponentials that we've talked about today, will really form the cornerstone for essentially all of the signal and system analysis that we'll be dealing with for the remainder of the course.
Thank you.