Okay, in this video we're going to talk about the basic ideas behind systems. We'll first talk about some very simple systems, systems where the operations they perform on the input signal to produce an output are quite simple and straightforward. We're then going to talk about how you can hook systems together to make more complicated input-output relationships. Then finally we're going to talk about linear systems and time-invariant systems. And the ones that are both linear and time-invariant are going to be extremely important to us, as we're going to see. Alright, so let's talk about some simple systems, ones that are pretty easy to understand. The first one is quite simple. It's called a gain, and what the system does is this: the output is equal to the input times a gain, a number G, a scalar. I think that's pretty easy to understand; it just takes the input, multiplies it by a number, and that's its output. We can use the unit step as an example input to see how this works. The input here I've drawn as a unit step, and the output is the same unit step, except it now has an amplitude of G instead of 1, whatever G is. The terminology depends on G. If G is bigger than 1, the system is called an amplifier; it makes things bigger, it amplifies. If G is less than 1, it serves as an attenuator; it makes things smaller. And finally, a somewhat confusing piece of terminology: if G is negative, it's called an inverter. That does not mean take the reciprocal. If we use our unit step input again and G were negative, the output would be a step going downward. So "invert" doesn't mean take the reciprocal; it means change the sign. And I have no idea where the terminology "inverter" came from, but that's what it is.
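To make the gain system concrete, here is a minimal numeric sketch: a signal is a function of time, and the gain system returns a scaled copy of it. The value of G, the helper names, and the sample times are my illustrative choices, not from the lecture.

```python
# Gain system: y(t) = G * x(t), sketched on a sampled unit step.
# G, the helper names, and the sample times are illustrative choices.

def unit_step(t):
    """u(t): 0 for t < 0, 1 for t >= 0."""
    return 1.0 if t >= 0 else 0.0

def gain(x, G):
    """Return a new signal equal to x scaled by G."""
    return lambda t: G * x(t)

amplifier = gain(unit_step, 2.0)    # G > 1: amplifier
attenuator = gain(unit_step, 0.5)   # 0 < G < 1: attenuator
inverter = gain(unit_step, -1.0)    # G < 0: "inverter" (sign change, not reciprocal)

print([amplifier(t) for t in (-1, 0, 1)])  # [0.0, 2.0, 2.0]
print([inverter(t) for t in (0, 1, 2)])    # [-1.0, -1.0, -1.0]
```

Representing signals as plain functions keeps each system a one-line transformation, which matches the way the lecture treats systems as operators on signals.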
Okay, let's talk about some more systems, and this one we've seen already: the time delay. The output is equal to the input a little bit later; it's been delayed. Again using the unit step as our example, the output is also a unit step, one that now occurs later. Since we've already talked about delayed signals, it's pretty easy to understand what this system does. It turns out, though, to be more complicated to build; delay boxes are not easy. We do want to point out that delays occur naturally. Light, and in fact all electrical signals, travel at a finite speed; they don't get from one place to another instantly. So if something happens at the sun, say a change in the brightness of the sun, it won't reach the earth for about eight and a half minutes. That's a delay, a physical delay. Just running an electrical signal through a long piece of wire introduces a delay. So in some cases getting a delay is pretty easy; getting a controllable delay is much harder, and that can be a good bit more complicated. Like I said last time, if tau is greater than 0, that's called a time delay; if tau is negative, it's a time advance, and the output occurs earlier in time. It turns out such systems are pretty hard to build, because, again using our unit step as an example, if tau were negative, the output would come out before the change in the input occurred. So if tau is, for example, -1, the output's transition at time -1 would have to occur before the transition occurs at the input. Somehow the system would have to know the future value, the future behavior, of the input signal. Well, that's a little weird. Time advances are great mathematical things, but building them is more than just a little complicated. Another example along similar lines, something that's easy to write but hard to build, is called time reversal. Here, again using our unit step input, you put in the unit step and it comes out time-reversed.
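Continuing the same function-of-time sketch, delay, advance, and time reversal are each a one-line change of the time argument. The delay amount and the points at which the signals are evaluated are my illustrative picks.

```python
# Time delay, time advance, and time reversal as operations on a signal.
# The delay amount and sample points are illustrative picks.

def unit_step(t):
    return 1.0 if t >= 0 else 0.0

def delay(x, tau):
    """y(t) = x(t - tau); tau > 0 delays, tau < 0 advances."""
    return lambda t: x(t - tau)

def time_reverse(x):
    """y(t) = x(-t)."""
    return lambda t: x(-t)

delayed = delay(unit_step, 2.0)          # step now turns on at t = 2
reversed_step = time_reverse(unit_step)  # step that is 1 for t <= 0

print(delayed(1), delayed(3))               # 0.0 1.0
print(reversed_step(-1), reversed_step(1))  # 1.0 0.0
```

In math (and in code operating on a stored waveform) these are trivial; the lecture's point is that a physical device computing `x(-t)` or `x(t + 1)` in real time would need future values of the input, which is why they are hard or impossible to build.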
So notice that, again, the output's value at, let's say, time -1 is going to be the value of the input at time 1. The system would have to know what was going to happen two seconds ahead in that case to produce an output. Well, it gets even worse. What happens at time 10 would have to come out at the output at time -10, which means the system would have to be able to predict way ahead into the future; in fact, it would have to predict the entire waveform into the infinite future and then put it out starting in the infinite past. So time-reversal systems are very weird, very hard to build. However, they have a very simple mathematical description. Not all simple systems can be built. Well, let's talk about some systems that are maybe not so simple, but easy to understand, and the first is the derivative system. What this system does is simply take the input, evaluate its derivative, and that produces the output. And actually, derivative systems are easy to build. We can also talk about systems that integrate, and I'm going to point out something about these; again, they are easy to build. There's a convention in this course that's important: there is no such thing as an indefinite integral in this course. Every integral is a definite integral, and most of our integrals will start at minus infinity and go up to time t. I tend to use Greek letters for variables of integration; it's just a convention I have, to make clear which symbol is the variable of integration. This will come up very frequently. So the output y is equal to the integral over all previous time of the input x, and like I said, there is no trouble building this system. Okay, those were our simple systems. Let's put them together. So let's talk about some simple structures. Perhaps the simplest is called the cascade structure, and essentially all it is is one system after another. We've already seen this in the fundamental model of communication.
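A quick numerical sketch of the derivative and running-integral systems: the derivative via a finite difference and the integral as a Riemann sum from a finite lower limit, standing in for minus infinity when the input is zero before it. The step size, lower limit, and test signals are my illustrative choices.

```python
# Numerical sketch of the derivative system and the running-integral system.
# Step size dt, the lower limit, and the test signals are illustrative.

dt = 1e-4

def derivative(x):
    """Approximate y(t) = dx/dt with a symmetric difference."""
    return lambda t: (x(t + dt) - x(t - dt)) / (2 * dt)

def integrate(x, t, lower=-10.0):
    """Approximate y(t) = integral of x(alpha) d(alpha) from 'lower' to t.
    The true lower limit is -infinity; a finite one works when x is 0 before it."""
    n = int((t - lower) / dt)
    return sum(x(lower + k * dt) for k in range(n)) * dt

square = lambda t: t * t
print(derivative(square)(3.0))        # ~6.0, since d(t^2)/dt = 2t

step = lambda t: 1.0 if t >= 0 else 0.0
print(integrate(step, 2.0))           # ~2.0: integrating a step gives a ramp
```

The last line illustrates a fact the course uses often: running the unit step through the integrator produces the ramp.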
That model is a cascade of several systems, in fact five of them. So here the input to the entire chain is x; it goes through S1, which produces w, and then w in turn produces y. We can think about this as one system which is constructed as a cascade of two simpler systems. In fact, that's exactly what cascade structures are for. For example, suppose you wanted a system that delayed and inverted. I could do that by building an inverter, let's say here, then passing the output of the inverter into a delay. The overall relationship between x and y would then be a delayed, inverted version of the input. So that's the way you get something a bit more complicated from simpler systems. Another way you can hook systems together to do interesting things is what's called a parallel connection, and I think it's pretty easy to see why: you have two systems here that are in parallel with each other. There are a couple of conventions here we need to talk about. I want you to notice that I have x coming in, and it gets split; at least that's what the diagram would seem to indicate. In system theory, the convention is that when such a split occurs, it doesn't mean split the signal in two or anything like that. What it means is that the same signal is applied to both systems after the split; it's just a convention. That's why I labeled this very carefully x(t). When we get to special cases, you're not going to see me write this; it will be understood that the input to S1 and S2 is x(t). That's what this diagram means. We also have a new system over here, a two-input, one-output system, and I think it's pretty obvious what it is: it's an adder. If we call the output of the first system y1 and the output of the second y2, then y = y1 + y2. That's what an adder does; I think it's pretty self-explanatory in terms of this so-called block diagram. So that's a parallel system.
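The cascade idea, including the delay-and-invert example above, can be sketched as a higher-order function that composes two systems. The component systems and evaluation points are the lecture's example; the helper names are mine.

```python
# Cascade structure: feed the output of one system into the next.
# Helper names and sample times are illustrative.

def cascade(s1, s2):
    """Overall system: y = s2 applied to (s1 applied to x)."""
    return lambda x: s2(s1(x))

inverter = lambda x: (lambda t: -x(t))        # gain of -1
delay_by_1 = lambda x: (lambda t: x(t - 1.0)) # one-second delay

step = lambda t: 1.0 if t >= 0 else 0.0
inverted_delay = cascade(inverter, delay_by_1)

y = inverted_delay(step)
print(y(1.5))   # -1.0: the step, delayed by 1 and flipped in sign
```

The design point is that `cascade` knows nothing about its components; any two systems with the same signal-in, signal-out shape compose the same way.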
Here there are two systems in parallel, each having the same input x, and you just add up their outputs to produce the total output. For example, suppose you wanted to model an echo chamber. In that case, we let the system S1 do nothing; its output is equal to its input. S2 would be a delay box; it would delay its input. After you add them together, you get the same signal plus itself, delayed. That's what happens when you have an echo: you get the signal plus its delayed version. So I can construct something a bit more complicated out of very simple systems. Okay, finally, we have the most complicated structure that we're going to encounter: the feedback structure. I'll explain how it's used in a second, but let's go through it. We have the input here and the output here. The input goes through an adder, and the other input to the adder comes from something derived from y. The way we think about this is that the output of this adder is an error signal, because you can see this little minus sign sitting here. That's the standard convention, which means that e is equal to x minus the output of S2. So that little minus sign right there corresponds to that minus sign; that's how you indicate a difference instead of a sum. What this feedback configuration does is take e and pass it through S1 to produce the output. But that output gets sent back through S2 to appear at the input, where it gets subtracted from x and produces the error signal, which goes around and round and round. This is the way cruise control works on a car. You choose the setting for the velocity at which you want the car to go, and if that is different from the car's current velocity, that produces an error signal. The error signal drives the engine, which produces an output velocity y. That output goes back through the system S2 to produce the error signal.
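Both structures can be sketched numerically: the echo chamber as a parallel connection of an identity branch and a delay branch, and the cruise-control story as a toy discrete-time feedback loop where the error drives the output toward the setpoint. The echo delay, setpoint, loop gain, and iteration count are all my illustrative choices.

```python
# Parallel structure: same input to both branches, outputs added.
# Echo delay, setpoint, gain, and iteration count are illustrative.

def parallel(s1, s2):
    return lambda x: (lambda t: s1(x)(t) + s2(x)(t))

identity = lambda x: x                         # S1: do nothing
echo_delay = lambda x: (lambda t: x(t - 0.3))  # S2: delay box

echo = parallel(identity, echo_delay)
step = lambda t: 1.0 if t >= 0 else 0.0
y = echo(step)
print(y(0.1), y(0.5))   # 1.0 2.0 -- the delayed copy arrives 0.3 s later

# Toy feedback loop in the cruise-control spirit: the error e = setpoint - y
# drives the plant, and y settles as e goes to zero.
setpoint = 60.0          # desired velocity
velocity, k = 0.0, 0.2   # current output and loop gain
for _ in range(60):
    e = setpoint - velocity     # error signal (the minus sign at the adder)
    velocity += k * e           # plant nudges the output to reduce the error
print(round(velocity, 3))       # ~60.0: error has essentially gone to zero
```

The feedback loop shrinks the error by a fixed factor each step, which is the discrete analogue of the cruise control "settling down" once the error reaches zero.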
And finally, when the error goes to zero, the cruise control settles down and doesn't accelerate the car any more. So that's a great example of a feedback system. I want to point out that everything I just said in that example wasn't electrical. It turns out systems can describe a very general class of things; the fundamental reason is that they're just mathematical constructs. System theory, and signal theory for that matter, applies to electrical signals and beyond, so it's interesting that you can use these kinds of models to describe much more general things. Alright, let's talk about special cases, special systems, and perhaps the most important are linear systems. So let's look at this. We have a system S. It is linear if, when you take an input that consists of a weighted sum of signals, where x1 is a signal, x2 is a signal, and a1 and a2 are scalars, constants, the output to that sum is the sum of the outputs to the individual signals. So if you put x1 by itself into S, you get one output; put in x2 by itself and you get another. Add those two together with the weighting constants, and that's the same as if the system saw the sum presented to it as an input. This is a very important property of linear systems. It's called the principle of superposition, and we're going to use it a lot. We've already seen, when we talked about signals, that it is very convenient in many ways to think about signals as a sum of simpler signals. For example, that triangular signal we saw in the signals video we wrote as a sum of three signals. Well, to find out what the output of a linear system is to that signal, all I have to do is figure out the output to a step, to a ramp, and to a delayed ramp, add them all up, and that's going to be the output to our complicated signal. So this principle of superposition is very, very important.
It helps decompose problems: once you can decompose a signal into simpler parts, you can find the output, because you probably can easily find the output to each simple part. So here are some special cases of the linearity property. Let's assume the system here is linear. If you multiply the input by a number, the output is going to be the output you had before, multiplied by that same gain factor. A loose, almost sloppy way of saying this: you double the input, you double the output. If that happens, then the system could be linear. If that does not happen, if doubling the input does not double the output, then the system can't be linear; it violates the rules. That's a special case of what I wrote up here, because here we have a2 = 0 for the case that I'm showing down here. The other case is just what I described in some detail: if you decompose a signal into a sum, the system's output to that sum is the sum of the outputs to each piece. That just goes into detail on what we've said before. But this result turns out to have a very interesting consequence, so let's consider a special case. Let a2 be the negative of a1, and let's say that x1 and x2 are the same signal; we'll just call it x. So a1 and a2 are negatives of each other, and the two signals are the same thing. What that gives us is that the input to our system is a1·x − a1·x. Well, that's 0. But now let's look at what the linear system equation says in that special case. Since x1 and x2 are the same, we have S(x) and S(x), weighted by the same a1 but subtracted, giving 0. So if the system is linear and you stick in 0, identically 0, what you get out is 0. Any system which doesn't obey that rule cannot be linear. It's a very simple test, but it's not what's called sufficient; it's a necessary property, but not sufficient.
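These tests are easy to run numerically. The sketch below checks superposition for a gain system (which passes) and a squaring system (which fails), and then runs the zero-in, zero-out necessary test on both. The test signals, constants, and evaluation time are arbitrary illustrative picks.

```python
# Numeric superposition checks: a gain system passes, a squarer fails.
# Test signals, constants, and t0 are arbitrary illustrative picks.

G = 3.0
gain_sys = lambda x: (lambda t: G * x(t))
square_sys = lambda x: (lambda t: x(t) ** 2)

x1 = lambda t: t          # a ramp
x2 = lambda t: 2.0        # a constant
a1, a2 = 2.0, -1.5
combo = lambda t: a1 * x1(t) + a2 * x2(t)

t0 = 1.25
lhs = gain_sys(combo)(t0)
rhs = a1 * gain_sys(x1)(t0) + a2 * gain_sys(x2)(t0)
print(lhs == rhs)    # True: gain obeys superposition

lhs2 = square_sys(combo)(t0)
rhs2 = a1 * square_sys(x1)(t0) + a2 * square_sys(x2)(t0)
print(lhs2 == rhs2)  # False: squaring violates superposition

# Necessary (not sufficient) test: zero in must give zero out.
zero = lambda t: 0.0
print(gain_sys(zero)(t0), square_sys(zero)(t0))  # 0.0 0.0
```

Note how the squarer passes the zero-in, zero-out test even though it is nonlinear; that is exactly why the test is necessary but not sufficient.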
The real test of whether something is linear or not goes back to the original definition. And by the way, in detail, this principle of superposition has to apply for all signals x1 and x2 and all constants a1 and a2. So it's a pretty demanding requirement, and when you have a linear system, it's very, very nice. Another special class of systems are the so-called time-invariant systems. The idea is this: suppose you come into the lab, let's say, and you build a device, put in an input, and measure the output. Then you leave the system alone, you come back the next day, and you put in the same signal. What you should get out is exactly the same output as you had the previous day. The scenario I just described means that if I delay the input to the next day, what I get out that day is the same thing as what I got the previous day, but delayed by a day. So systems that do not change their behavior with time are called time-invariant systems. That does not mean signals can't vary with time; that's certainly not the case. What it does mean is that the system doesn't change its behavior with time; it stays the same. So let's go through a little set of examples and see if we can figure out how to classify systems according to whether they're linear or time-invariant. Here we have the gain box; it multiplies the signal by G, and let's say G is bigger than 1 and positive, so it's an amplifier. Is that linear? Well, if I change x to x1 + x2, do I get G·x1 + G·x2? I certainly do. It's linear. Is it time-invariant? I think it's pretty obvious that it is, but let's just change t to t − τ. What I get is the previous expression for y with t − τ stuck in. So this system is linear and time-invariant; no problem with that. Okay, another example: how about the derivative box, something that takes the derivative. Is that linear?
If you take the derivative of a sum, is that the sum of the derivatives? I think you will agree, yes, that's true. So that's linear. How about time-invariant? What's the derivative of x(t − τ)? Well, in calculus you learn that's just the derivative of x evaluated at t − τ. That makes it time-invariant. So we've got two examples here of linear and time-invariant systems. How about this squaring system, where the output is equal to the input squared? Well, that is certainly not linear, because the square of x1 + x2 is not equal to x1 squared plus x2 squared. How about time-invariant? If I delay the input, replacing x(t) by x(t − τ), is that the same formula I'd get by replacing t with t − τ in the input-output relationship? The answer is, it certainly is. So this is an example of a nonlinear but time-invariant system. And finally, how about this system, which is called a modulator? Our input comes in here, and the system multiplies it by cos(2π·fc·t) to produce the output. Is it linear? If I put in x1 + x2, do I get out x1 times the cosine plus x2 times the cosine? I certainly do. So it's linear. How about time-invariant? The thing to notice is that a delay applies only to x. So I replace x(t) here by x(t − τ); I don't replace the t over here, because that's inside the system, not part of the input being delayed. Well, that's not the same expression you get if you replace t with t − τ everywhere in this formula; there, you have to replace the t in both places by t − τ. So this system is not time-invariant: if you delay the input, you do not get the output you had before, delayed, because this cosine is changing its behavior with time. So we have examples of both of these cases, and of course you're also going to have systems that are neither linear nor time-invariant.
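The time-invariance test can be run numerically: delay the input and apply the system, then apply the system and delay the output, and compare. For the squarer the two agree; for the modulator they don't, because the cosine runs in absolute time. The carrier frequency, delay, test input, and evaluation time are my illustrative choices.

```python
# Time-invariance check: delay-then-system vs system-then-delay.
# fc, tau, the test input, and t0 are illustrative choices.
import math

fc = 1.0        # carrier frequency
tau = 0.25      # delay amount
t0 = 0.1        # evaluation time

def delay(x, tau):
    """Delay system: y(t) = x(t - tau)."""
    return lambda t: x(t - tau)

square_sys = lambda x: (lambda t: x(t) ** 2)
modulator = lambda x: (lambda t: x(t) * math.cos(2 * math.pi * fc * t))

x = lambda t: t + 1.0   # arbitrary test input

# Squarer: the two orders agree -> time-invariant.
a = square_sys(delay(x, tau))(t0)
b = delay(square_sys(x), tau)(t0)
print(a == b)   # True

# Modulator: the cosine inside keeps running in absolute time -> they differ.
c = modulator(delay(x, tau))(t0)
d = delay(modulator(x), tau)(t0)
print(c == d)   # False
```

One agreeing point doesn't prove time invariance (the definition quantifies over all inputs, delays, and times), but a single disagreeing point like the modulator's is enough to disprove it.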
And the ones that are really going to be special are the ones that are linear and time-invariant. The modulator is one we're going to have to deal with, but we clearly can't use linear time-invariant system theory to talk about it, because it is not time-invariant. We'll see that later. Well, okay, we now know a lot about systems. What systems do is operate on an input to produce an output for some purpose. You might want to amplify the signal, you might want to delay it, take its derivative, whatever. That's what systems do. They can do all kinds of interesting things to signals, and by building up structures from simple systems, you can make something that's a lot more complicated than any of the components. And that's basically the way electrical engineers do a lot of design: break down a complicated input-output relationship into a simpler set of systems that can be realized with cascade, parallel, and feedback structures. And I keep emphasizing that linear time-invariant systems, what we're going to call LTI systems, really are important. They occur a lot, and as we'll see when we start talking about circuits, we're going to talk about linear time-invariant circuits. When we get to signal processing, we'll talk about linear time-invariant signal processing. These special systems are very important because you can do a lot of very interesting things with them, and there's a very good theory for them, as we'll see when we start talking about them. That comes very soon.