In this video, we're going to talk about how computations are performed on computers, and in particular, how numbers are represented by computers. This turns out to be very important for understanding how computers function, so that we can convert from analog signals to digital signals, understand the nature of the errors involved in that conversion, and also the nature of the errors involved when you do computations on digital computers. So, the first thing we need to talk about is what's called positional notation. This amounts to something we're very familiar with, and it turns out that how we deal with numbers is very similar to how computers deal with numbers, at least in a conceptual way. We'll talk about base 2, which is the number system that computers use, and I'm going to emphasize that knowing your powers of two is a very, very useful skill to have.

So, let's talk about positional notation. Now, I claim that one of the critical discoveries in mathematics was positional notation. What positional notation means is that when we write a number like 257, the 2 means 2*100: the number is 2*100 + 5*10 + 7*1. The so-called base of the number representation has to do with the powers involved here, and the word "positional" comes from the fact that if you start counting from the right end, the position tells you what the exponent is; that's implicit in the positional representation. So, we know that the 7 is the number of 1's, the 5 corresponds to the number of 10's, etc. This is called base 10; that's what decimal means. And you write that with a little subscript, 257_10, to tell you the base. Also, in base 10, the digit in each position is an integer between 0 and 9, and this 9 is important: it's 1 less than the base. So, if you add two numbers together and one of the positions adds up to a number bigger than 9, you do a carry to the next position to the left and adjust the number accordingly so it fits in the range between 0 and 9. That's all we need to know about base 10.

I do want to point out how important positional notation really is. An alternative representation of numbers is Roman numerals, and 257 happens to be CCLVII in Roman numerals. I think you'd be hard pressed to multiply this by 2, which I'll also write in Roman numerals as II, without using positional notation. If you try to stay in the Roman numeral world, it's very, very hard to multiply, not as hard but still pretty difficult to add, and I would not even think about dividing in Roman numerals; that's going to be a disaster. So, positional notation, when it was discovered, made all the simple arithmetic operations very, very easy, and it's quite concise in representing very big numbers, and that's a very useful thing.

Well, in a computer, numbers are represented in what we call binary, and all that is, is base 2; it's still positional notation. So here, I have the number 100000001_2, and the little subscript 2 tells me it's base 2. So, I know that the position furthest to the right represents the 1's position, which is 2^0; the next position over represents 2^1; the next position to the left from that, 2^2, etc. And, I know my powers of 2, so I know these positions are worth 1, 2, 4, 8, 16, and so on. This red 1 here corresponds to the 2^8 position. So, our number here in binary is 2^8 + 2^0, and I happen to know that 2^8 is 256, so this is 256 + 1, which is 257. This turns out to be our old number, the one we've been talking about, now represented in binary.
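To make positional notation concrete, here is a minimal sketch in Python (my own illustration, not part of the lecture; the helper name from_digits is hypothetical). It expands a digit string in any base exactly the way the 2*100 + 5*10 + 7*1 expansion works:

```python
def from_digits(digits, base):
    """Interpret a digit string in the given base: the rightmost digit
    counts 1's (base**0), the next one base**1, and so on."""
    value = 0
    for position, digit in enumerate(reversed(digits)):
        value += int(digit) * base**position
    return value

print(from_digits("257", 10))        # 2*100 + 5*10 + 7*1 = 257
print(from_digits("100000001", 2))   # 2**8 + 2**0 = 257
print(int("100000001", 2))           # Python's built-in conversion agrees: 257
print(bin(257))                      # 0b100000001
```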
So, everything's the same in base 2 as in base 10. In base 2, the digit in each position is either 0 or 1, because 1 is 1 less than the base. Carries work the same way; it's all the same, except it's in base 2. And like I said, because a voltage is either 0 or some positive value, it's easy to represent numbers electrically in base 2. Now, here are the powers of 2, which I suggest you learn; it's very important to know them. I know that 2^5 is 32, for example. One of the more important ones, though, is the one at the bottom, and it's a very interesting and very useful result to know: 2^10, which is 1024, is about equal to 10^3. In fact, while "k" usually means 1000, 1k on some computer systems actually means 1024, because it's very close to 1000.

Now, how do you represent numbers, and what are some of the conventions for this positional notation? Computer memory is often arranged in terms of what are called words. Here, I show a word that's 8 bits long; "bit" means binary digit, in case you didn't know. An 8-bit word turns out to be what is known in the trade as a byte. Most computers these days devote many more bits than 8 to an integer, though usually in multiples of 8; when we talk about 64-bit computers, that means integers are represented by 8 bytes, etc. Let's talk about the simpler, easier-to-manage 8-bit integer.

Notice that the word here is labeled unsigned. This is exactly positional notation: the subscript on each bit refers to the power of 2, and in 8 bits, the powers of 2 go from 0 to 7. That means I can represent the numbers 0 through 255, and 255 is 2^8 - 1: since there is no eighth exponent, turning on every one of these digits gives you 255, which is 1 less than 2^8.

Now, we'd also like to be able to talk about integers with a sign. What is done is to take the same 8 bits you had before and devote one bit to the sign: if it's a 1, that tells you the number is negative; if it's a 0, it's positive. But now, you only have 7 bits left to represent the magnitude. When all is said and done, the range of numbers represented by a signed 8-bit integer runs from -128 to 127. You might wonder why there's one more number on the negative side than on the positive side. It turns out you get an extra number, in some sense, because +0 is the same as -0, and what is normally done is to allocate that extra value to the negative side, for all kinds of reasons that aren't worth getting into.
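Here is a minimal sketch of those ranges in Python (again my own illustration; the lecture doesn't name the signed convention, but the -128 to 127 range it quotes is what two's complement, the scheme modern machines use, gives you):

```python
BITS = 8

# Unsigned: all 8 bits carry magnitude, so values run 0 .. 2**8 - 1.
print(2**BITS - 1)            # 255
print(2**10)                  # 1024, the "1k" that is about 10**3

def as_signed(pattern, bits=BITS):
    """Reinterpret an unsigned bit pattern as a signed (two's complement) integer."""
    return pattern - 2**bits if pattern >= 2**(bits - 1) else pattern

print(as_signed(0b01111111))  # 127, the largest positive value
print(as_signed(0b10000000))  # -128, the "extra" value on the negative side
print(as_signed(0b11111111))  # -1
```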
Well, that's very nice, but how about anything with a fraction? What are we going to do about that? That's where floating point comes in. In floating point, a number is represented as m, the so-called mantissa, times 2^e, where e is the exponent. So, what you do is take a very long word, much longer than 8 bits. You devote one part of it to the exponent, including a sign, so the exponent is a signed integer. And you devote another part of the word to a signed quantity that serves as the mantissa, except it's not an integer; it turns out to be what we will call a floating point fraction. When you divide this part up into individual bits, the first is the 2^-1 bit, the next one is the 2^-2 bit, etc.

And the restriction that m has to be between 1/2 and 1 means that the first bit is always on, and the rest are whatever is required to represent the number in that range. So, what happens is that you take a number and scale it so the mantissa is in the right range, and that tells you what the exponent is, which is written as a signed integer. So, 1/32, which is 2^-5, is actually expressed in floating point as 1/2 * 2^-4. That means there is a 1 in the first position of the mantissa and a 0 everywhere else, since it's exactly 1/2, and the exponent is the signed integer -4. That's how floating point handles numbers that span a very large range, can be negative, and can be fractions: you can go from a very big negative number, through very small fractions, up to a very big positive number, all in one concise representation. On computers these days, you'll see 64-bit floating point, and you can even see 128-bit floating point for some calculations.

So, that's how numbers are represented on computers and how computations are done. Now, I want to point out that the numbers, which we're going to relate to signal values, can be represented in a variety of ways, signed or unsigned, floating point or not, but always with discrete values. A number like 1/3 is an infinite repeating decimal, and it's also an infinite repeating fraction in binary. Since you only have a fixed number of bits for any number on a computer, 1/3 cannot be represented exactly. There is always, in effect, an error in representing a number; how big it is depends on what the number is. And this also means, by the way, that computations are not quite exact either.

Furthermore, numbers are always stored in individual memory locations. That may seem like a minor point, but it means we cannot store a continuum of values. We've been talking about analog signals, which are functions of continuous time and usually have a continuous range of values. Storing in individual memory locations means we have to store just a few values of our signal; we can't store them all, because the number of values in any interval of continuous time is uncountably infinite, and that's a very big number. Furthermore, the amplitude values cannot be stored exactly, because there are some numbers we cannot represent exactly, and even beyond that, we can't represent all possible numbers between, say, -1 and 1. That's impossible. So, we're going to have to learn about quantization in time and quantization in amplitude, and that's going to be the subject of our next video.
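Before we get there, one last minimal sketch makes the representation-error point concrete (Python assumed; its ordinary floats are the 64-bit floating point mentioned above):

```python
# 1/32 is an exact power of 2, so it is represented exactly:
print(0.5 * 2**-4 == 1/32)        # True: 1/2 * 2^-4 is exactly 1/32

# 1/3 is an infinite repeating fraction in binary, so it is not exact.
# Printing many digits exposes the error in the stored value:
print(f"{1/3:.20f}")              # 0.33333333333333331483

# Computations inherit the error. Sometimes the rounding cancels out:
print(1/3 + 1/3 + 1/3 == 1.0)     # True: the errors happen to cancel
print(0.1 + 0.1 + 0.1 == 0.3)     # False: here they don't
```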