In this video, we're going to learn how to process signals on a computer. We'll venture into the world known as digital signal processing. It turns out that my research area is DSP, so you're going to hear me talk about DSP a lot. We're going to start just like we did for analog signals: with the fundamental signals and the basic systems. One of the points of this video is that, in the digital world and in the analog world, there are a lot of similarities in how you think about signals and their spectra. So you're going to already know a lot of what I'm going to say, but a few little things are a bit different. Alright. The first thing, of course, is the complex exponential, our friend, just about the most important signal we have. If you recall, a discrete-time signal is a function only of the integers. I'm being a little careful here about saying discrete-time rather than digital. Digital signals usually refer to signals that are discrete in time, functions of the integers, and discrete in amplitude. I'm going to talk about discrete-time signals pretending that the amplitude is a continuous-valued variable; it could be any real number. That's not really true: in a computer, you can't represent all numbers exactly, so everything is discrete-valued, but we're going to hide that detail in the way we do our theory. Alright. The first thing to note about this complex exponential, e^(j2πfn), is that the frequency variable has to be dimensionless. It has no dimensions, because n has no dimensions, and you cannot exponentiate anything that has units. So f is dimensionless, and furthermore, here's a very interesting property. Suppose I consider the frequency f plus an integer l. If I write e^(j2π(f+l)n) as a product, e^(j2πfn) times e^(j2πln), then e^(j2π) raised to the product of two integers is one. So e^(j2π(f+l)n) equals e^(j2πfn).
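As a quick numerical check of this property (my sketch in Python, not part of the lecture), we can verify that adding any integer to the frequency leaves the discrete-time complex exponential unchanged:

```python
import cmath

def cexp(f, n):
    """Discrete-time complex exponential e^(j 2 pi f n), n an integer."""
    return cmath.exp(2j * cmath.pi * f * n)

# Adding any integer l to the frequency leaves the signal unchanged,
# because e^(j 2 pi l n) = 1 when both l and n are integers.
f, l = 0.2, 3
for n in range(-5, 6):
    assert abs(cexp(f + l, n) - cexp(f, n)) < 1e-9
```

The tolerance is only there for floating-point round-off; mathematically the two values are identical.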
That's the definition of a periodic function. So, as a function of f, the complex exponential is periodic for any integer l, and of course one is the smallest period. So this is periodic with period one. That's very important: every function of frequency that we're going to talk about has to be periodic with period one, because of the dimensionless nature of f and the fact that n is an integer. We didn't have this for the analog complex exponential, because there t was any real number and f had to have units of inverse seconds in order for the exponent to be dimensionless. Here n is dimensionless, and that makes f dimensionless. Alright. This periodic behavior has an interesting consequence. Consider the complex exponential at frequency 1-f. If you go through the math again, e^(j2π(1-f)n) equals e^(j2πn) times e^(-j2πfn), the first factor is one, and that's the same as the complex exponential at negative frequency -f. So, here's our f-axis, with 1/2, -1/2, and 1 marked; let's extend it out to -1. Any function of frequency has to be periodic with period one, so this part repeats over there. And it turns out that the part greater than a half has to be the same as the negative-frequency part, because of that 1-f relationship. So a frequency of 0.6, say, a little bit bigger than a half: according to this formula, f of 0.6 makes 1-f equal to 0.4, which means it has to have the same value as the frequency -0.4. So the highest frequency you can have, it turns out, is 1/2, because once you go above 1/2, you wind up at negative frequency. So, here's the name of the game. The lowest frequency sinusoid occurs at f = 0; the highest frequency complex exponential occurs at f = 0.5, and that's the highest frequency that makes any sense. Go a little bit higher than that and you wind up back at negative frequency. And this highest-frequency complex exponential, e^(jπn), turns out to alternate between +1 and -1 forever.
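These two claims, that frequencies above 1/2 fold back to negative frequencies, and that f = 1/2 gives an alternating signal, are easy to check numerically. Here's a small sketch of mine using real sinusoids (for a real cosine, the negative frequency -0.4 looks the same as +0.4, since cosine is even):

```python
import math

def cosine(f, n):
    """Discrete-time sinusoid cos(2 pi f n) at integer n."""
    return math.cos(2 * math.pi * f * n)

# A sinusoid at f = 0.6 is indistinguishable from one at f = 0.4,
# because 1 - 0.6 = 0.4 and cos(-x) = cos(x).
for n in range(10):
    assert abs(cosine(0.6, n) - cosine(0.4, n)) < 1e-9

# The highest-frequency signal, f = 1/2, just alternates +1, -1, +1, ...
assert [round(cosine(0.5, n)) for n in range(6)] == [1, -1, 1, -1, 1, -1]
```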
So, when we talk about signals that are functions of frequency, we're either going to define them over -1/2 to 1/2, which is one period, or we can talk about them going from 0 to 1 and realize that the part above 1/2 is the same as the negative-frequency part. Frequencies from 0 to 1/2 are positive frequencies; the part from 1/2 to 1 corresponds to negative frequencies. That's the one very important difference between discrete-time signals and analog signals. Alright. Let's extend this to a sinusoid, A cos(2πfn + φ). The formula looks the same, and the difference is that n is an integer, which again makes f dimensionless. We're usually going to pick f in the range -1/2 to 1/2; the highest value of f that makes any sense to talk about is 1/2. You get what looks like a sinusoid but, of course, it's only defined at the integers, so you get a stem plot. And what doesn't change, of course, is Euler's formula. This sinusoid can be written as the real part of A e^(jφ) times e^(j2πfn). So again, A e^(jφ) is the complex amplitude. That all carries over; nothing really changes in that regard. So, let's talk about some other fundamental signals. Here's our friend, the unit step, and I want to point out that I'm now defining it everywhere. [LAUGH] Be happy, I guess. So, here's the n-axis, and I'll plot u(n). It's zero for negative n, pops up to one at the origin, and is one forevermore, okay? So, that's the unit step; it's a familiar quantity we've seen before. Here's a new basic signal that we didn't have in the analog world, called the unit sample. This one is very simple. It is zero basically everywhere except at the origin, where it's one. That's why it's called a unit sample: it looks like a single sample of value one, and it's written as δ(n). Okay. Now, here's the reason we talk about the unit sample. Let's sketch out some signal which I'm just going to call S(n), and let's assume that this is the origin, n=1, n=2, etc.
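The two signals just described are simple enough to write down directly. Here's a minimal sketch of both (my Python definitions, mirroring the lecture's u(n) and δ(n)):

```python
def u(n):
    """Unit step: 0 for n < 0, 1 for n >= 0 (defined everywhere)."""
    return 1 if n >= 0 else 0

def delta(n):
    """Unit sample: 1 at n = 0, 0 everywhere else."""
    return 1 if n == 0 else 0

print([u(n) for n in range(-3, 4)])      # [0, 0, 0, 1, 1, 1, 1]
print([delta(n) for n in range(-3, 4)])  # [0, 0, 0, 1, 0, 0, 0]
```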
along that horizontal scale. The value of the signal at the origin is, of course, S(0). What is the signal at the origin? Well, it's a unit sample whose amplitude is S(0), okay? S(0) is just a number. So, what the signal is at the origin is a scaled unit sample. The signal at n=1 is S(1) times a delayed version of the unit sample, δ(n-1), because that piece of the signal is the unit sample delayed by one, etc. Then come S(2), S(3), S(4), S(5), and so on. What we get is a somewhat confusing but very important formula: any signal can be expressed as a superposition of unit samples. In fact, that's the definition of a discrete-time signal. It is a superposition of unit samples, where the amplitude of each delayed unit sample is whatever value the signal has at that delay. This may look a little funny, but the idea is that the delayed unit sample is the only part of the formula that depends on n. We're going to find this expression to be extremely handy a little bit later. Alright. Now, let's move on to some simple systems, and we've seen this one before: the simple amplifier, where G is the gain, of course. We had to use operational amplifiers, op amps, when we wanted to amplify analog signals. Now an amplifier is easy: it corresponds to a multiply. So, you just do it. I've written this as computer-like code, because that's exactly the way it works; multiplies are very simple to do. So are delays. A time delay simply delays the signal, of course only by an integer, because all discrete-time signals are defined only on the integers; it makes no sense to delay by a fraction, for example, because signals must have integer arguments. A delay is also easy to implement in a computer; no real difficulty there.
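The decomposition formula is worth seeing in action. Here's a quick sketch of mine that rebuilds an arbitrary signal as a weighted sum of delayed unit samples, S(n) = sum over m of S(m) δ(n-m), and checks that the result matches:

```python
def delta(n):
    """Unit sample: 1 at n = 0, 0 everywhere else."""
    return 1 if n == 0 else 0

# Some arbitrary finite-length signal, stored as a list indexed from n = 0.
s = [4, -1, 2, 7, 0, 3]

# Rebuild s(n) as a superposition of weighted, delayed unit samples:
# s(n) = sum over m of s(m) * delta(n - m).
rebuilt = [sum(s[m] * delta(n - m) for m in range(len(s)))
           for n in range(len(s))]

assert rebuilt == s
```

At each n, only the m = n term of the sum is nonzero, which is exactly why the superposition picks out the right value.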
Also, the definition of a linear system is basically the same as it was for analog systems. Notice the change in terminology here. We say shift-invariant because the delay can only be an integer. The word shift is supposed to mean delaying by an integer, so it's just a slight terminology change. Linear time-invariant system basically refers to an analog system, and linear shift-invariant system refers to a discrete-time system. The linear part of the definition is the same as it was for analog: superposition applies. Whenever you express an input as a sum of simpler signals, the output is equal to the superposition of the outputs you get when you put each signal in alone. So, nothing changes at all. And shift-invariance has the same basic form as time invariance. If you delay the input by some amount, the output that you get is the output for the undelayed input, delayed by that same amount. No matter what delay you pick, the system doesn't change with time. And all of the examples that we encountered for linear and time-invariant systems apply here in this special case of discrete time. All those examples carry over, so there's really no difference here. So, let's summarize the basics. Digital signals, discrete-time signals, are functions of the integers, and that has a very important consequence. When we talk about the complex exponential and things that are functions of frequency, frequency is dimensionless and is only defined uniquely over unit-length intervals. Usually it's going to be 0 to 1 or -1/2 to 1/2, and -1/2 to 1/2 is the one we're going to use a lot when we're doing theory. It's going to turn out that in practical calculations, it's convenient to take frequency going over 0 to 1. I'll point this out when we get to it.
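To make the shift-invariance definition concrete, here's a small sketch of mine using a first-difference system, y(n) = x(n) - x(n-1), which is my own example rather than one from the lecture. Delaying the input by an integer n0 delays the output by exactly n0:

```python
def system(x):
    """First-difference system y(n) = x(n) - x(n - 1): linear and shift-invariant."""
    def y(n):
        return x(n) - x(n - 1)
    return y

def x(n):
    """An arbitrary test input signal."""
    return n * n if n >= 0 else 0

n0 = 3  # shift amount; must be an integer for a discrete-time signal
y = system(x)
y_shifted_input = system(lambda n: x(n - n0))

# Shift-invariance: feeding in the delayed input gives the delayed output.
for n in range(-5, 10):
    assert y_shifted_input(n) == y(n - n0)
```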
The most intriguing thing, and the simplest thing from your viewpoint, is that linear signal and system theory is exactly the same as it is for analog signals. There is no difference; you don't have to learn anything new. So, we're on our way, and now we're going to start talking about some more details, in particular the frequency domain in discrete time.