Okay, so one technique to do this is to take the TLB and, instead of having the TLB in front of our cache, put the TLB in parallel with the cache, or after it. What this means is that the addresses that go into our cache are virtual addresses, and that has some pretty big implications. Lots of processors do this these days, where they'll actually put the TLB in parallel with the cache. This picture is a little bit confusing, because it looks like the TLB is after the cache. To some extent it is and it isn't, depending on how you squint and look at it. If the cache is completely virtually indexed and virtually tagged, it would look something like this, because you only fire up the TLB when you take a cache miss and have to go out to the farther layers of memory. Now, if you have a virtually indexed but physically tagged cache, what that means is that the address that goes into the index of the cache array is a virtual address, but then you do the TLB access in parallel, out comes a physical address, and you do the tag match on physical addresses. That makes a lot of things in life a lot easier, and we'll look at that in a second.

So, one of the major challenges you end up with here for virtually indexed caches, which I wanted to point out, is that you start to get some aliasing problems. What do I mean by this? Well, before, when you went to put something in the cache, it could only be in basically one place in a direct-mapped cache. In an n-way set-associative cache it could be in n different places, so in a two-way set-associative cache it could be in either of the two ways. But you knew where to look for it, at least. All of a sudden, if you start to have bits above the minimum page size feeding into where a line sits in the cache, the data can actually be in multiple places in the cache.

So, a brief example here. We have a 32-bit address, and we have our cache line offset here. We have, let's say, a page size of four kilobytes, so that's twelve bits of page offset. And let's say our cache has, I don't know, more than four kilobytes in it; all of a sudden we've got a direct-mapped cache of eight kilobytes. Uh-oh. So this bit here: our index into our cache has one bit above the page boundary.
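To make the bit positions concrete, here is a minimal sketch of that address slicing, assuming the lecture's 32-bit addresses, 4 KB pages, and 8 KB direct-mapped cache; the 32-byte line size is an added assumption for illustration:

```c
#include <stdint.h>
#include <stdio.h>

/* Sketch of the address slicing in the running example: 32-bit
 * addresses, 4 KB pages (12 offset bits), 8 KB direct-mapped cache.
 * The 32-byte line size is an added assumption for illustration. */
enum {
    PAGE_OFFSET_BITS = 12,   /* 4 KB page              */
    LINE_OFFSET_BITS = 5,    /* assumed 32-byte lines  */
    INDEX_BITS       = 8     /* 8 KB / 32 B = 256 sets */
};

int main(void) {
    uint32_t va = 0x00001234;   /* any example virtual address */

    uint32_t line_off = va & ((1u << LINE_OFFSET_BITS) - 1);
    uint32_t index    = (va >> LINE_OFFSET_BITS) & ((1u << INDEX_BITS) - 1);

    /* Line offset (5 bits) + index (8 bits) = 13 bits, but only the
     * low 12 survive translation unchanged: the top index bit, bit 12,
     * comes from the virtual page number, so the OS's page placement
     * decides whether it is 0 or 1. */
    uint32_t bit_above_page = (va >> PAGE_OFFSET_BITS) & 1;

    printf("set index = 0x%x, line offset = 0x%x, index bit above page = %u\n",
           index, line_off, bit_above_page);
    return 0;
}
```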
And the OS could elect to have that bit be a zero or a one. So what that means is that, all of a sudden, when we go to index into our cache, the same physical piece of memory might be in two different locations. Depending on how the operating system lays out memory, you might look in the wrong spot, or you might need to check both places. So that's how to start thinking about this: if our cache is bigger than our minimum page size, the bits above the minimum page size are not guaranteed to match. We'll walk through an example of that in a second.

Also, virtually addressed caches have some other challenges here, because two applications can have the same virtual addresses. Let's say you're multiplexing between application one and application two. All of a sudden, both of these applications are going to go into your cache, and you might have one application hitting on the data of another application. So if two applications both try to access address five, and they have different values stored at address five, then in our virtually indexed cache you might start to get something weird: if you don't protect against this, one process ends up reading another process's data out of the cache. So you need to protect against this, and there are a couple of different approaches. One approach is simply to flush the cache on every context swap, so every time you change processes, flush the whole cache. That sounds really expensive, but believe it or not, it's done with non-trivial frequency; it's actually done in some real systems out there. A little bit nicer way to do this is to have address space identifiers: you tag the cache with an address space ID as part of the tag information. So it's not just the virtual address that matters, but also which process ID, or rather which address space ID, it belongs to. But that increases your tag bits, so you've got to be a little bit careful about it. And note this is mostly about having a virtually indexed, virtually tagged cache.
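As a hedged sketch of that ASID idea, assuming illustrative field widths rather than any particular machine's, the tag compare in a virtually indexed, virtually tagged cache gets widened like this:

```c
#include <stdbool.h>
#include <stdint.h>

/* Sketch of the ASID idea: in a virtually indexed, virtually tagged
 * cache, each line's tag is widened with an address-space ID, so two
 * processes using the same virtual address cannot hit on each other's
 * lines. Field widths are illustrative assumptions. */
struct vline {
    bool     valid;
    uint32_t vtag;      /* virtual tag bits         */
    uint8_t  asid;      /* address space identifier */
    uint8_t  data[32];  /* assumed 32-byte line     */
};

/* A hit requires the virtual tag AND the ASID to match; without the
 * ASID compare, the cache would have to be flushed on every context
 * switch to keep processes from hitting on each other's data. */
static bool vivt_hit(const struct vline *line,
                     uint32_t vtag, uint8_t cur_asid) {
    return line->valid && line->vtag == vtag && line->asid == cur_asid;
}

int main(void) {
    struct vline l = { true, 0x42, 1, {0} };
    return vivt_hit(&l, 0x42, 2) ? 1 : 0;  /* other ASID: miss */
}
```

The extra compare is what lets the cache survive a context switch without a flush, at the cost of the extra tag bits mentioned above.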
And if we look at how this fits into the pipeline, life actually gets a lot better from a hardware perspective. This sums up what we saw before: you only have to do translation on a cache miss. So your main processor pipeline looks the same as what we've been drawing up to this point, but on a cache miss you have to go through either your instruction translation lookaside buffer or your data translation lookaside buffer.

So, to show a little bit more pictorially what's happening with virtually addressed caches, let's take a little bit of a gander at this example here. We have two virtual addresses, virtual address one and virtual address two, and the operating system elects to map them to the same physical page in memory. This is something that virtual memory systems do many times; sometimes you want to have the same memory mapped twice, say when two of your applications share data. It's a pretty common thing to have happen. So we want to share some physical memory. Unfortunately, if you go look at our virtually addressed cache here, the data actually ends up in two different locations; this is going back to that first example. We have a first copy here and a second copy there, and it depends on where the data was located in the virtual address space, which has no relation to where it's located in the physical address space, so the bits don't match: this bit of the virtual address and the corresponding bit of the physical address do not match. And that causes a world of problems, because all of a sudden even the same application can write to, let's say, address ten and also write to address 4096 plus ten, and those are supposed to map to the same physical location. It's supposed to be the same address, but you could write five through one name and then, reading through the other name for that same location, get back, let's say, a thousand or some random number. The little sketch below shows that aliasing concretely.
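A minimal demonstration of the aliasing just described, again assuming the 8 KB direct-mapped, 4 KB page, 32-byte line parameters from the running example:

```c
#include <stdint.h>
#include <stdio.h>

/* Two virtual addresses that the OS maps to the same physical page
 * index different sets of a virtually indexed 8 KB direct-mapped
 * cache (4 KB pages, assumed 32-byte lines). */
enum { LINE_BITS = 5, INDEX_BITS = 8, NSETS = 1 << INDEX_BITS };

static uint32_t set_of(uint32_t va) {
    return (va >> LINE_BITS) & (NSETS - 1);
}

int main(void) {
    /* Same page offset (10), but the two pages differ in bit 12, the
     * one index bit above the 4 KB page boundary. Assume the OS maps
     * both virtual pages to the same physical page. */
    uint32_t va1 = 0x0000 + 10;
    uint32_t va2 = 0x1000 + 10;   /* 4096 + 10 */

    printf("va1 -> set %u, va2 -> set %u\n", set_of(va1), set_of(va2));
    /* Different sets: the same physical byte can be cached twice, so a
     * write through one alias is invisible through the other. */
    return 0;
}
```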
So there are a couple of techniques to deal with this. I don't want to go into too much detail, and your book goes into some more, but just to give you a little bit of insight on how to go about solving it: there are some systems out there which actually require that a virtually indexed page reside in the same location in the cache as any other virtual page that maps the same physical address. Now, you sit there and you scratch your head, and you might say, well, is this effectively decreasing the associativity of our cache, or moving things around in our cache? Well, a little bit [laugh] is the answer, but these are the trade-offs, and the OS can, to some extent, manage this layout. That's what I was saying here: early SPARCs actually used a system like this, where the OS ensures that the virtual addresses accessing the same physical address will not conflict in the direct-mapped cache in a bad way. So you're guaranteed you will always go to the same location.

So that's sort of the starting point, if you have virtually indexed and virtually tagged. But you can have other mixes of these things, and not all of them make sense. You can have physically indexed, physically tagged; that's what we were talking about at the beginning of last lecture, and it's sort of the simple case. Virtually indexed, virtually tagged has lots of challenges, we'll say. Virtually indexed, physically tagged, now, is actually a really good trade-off. You do the translation in parallel with the cache access, and then you do the tag check. You don't actually need to have ASIDs, address space identifiers, in this case, because you're guaranteed to have a correct physical match. You might still be accessing the wrong location in the cache; we don't get around this virtual-versus-physical problem of the data being in two locations, and we'll talk about that in a second. You still need to handle that, sort of, in the SPARC way. But at least you don't have to have address space identifiers, and at least you don't have to flush your cache on every process swap, because you're guaranteed that the check you do for a cache miss or hit is exact: you're doing it with a physically tagged cache.
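Here is a minimal sketch of that virtually indexed, physically tagged lookup. The sizes are the running example's, and the identity-mapping TLB stub is purely an assumption to keep the sketch self-contained:

```c
#include <stdbool.h>
#include <stdint.h>

/* Sketch of a virtually indexed, physically tagged (VIPT) lookup.
 * Sizes follow the running example (8 KB direct-mapped, assumed
 * 32-byte lines); the identity-mapping TLB stub exists only to keep
 * the sketch self-contained. */
enum { LINE_BITS = 5, INDEX_BITS = 8, NSETS = 1 << INDEX_BITS };

struct pline { bool valid; uint32_t ptag; uint8_t data[32]; };
static struct pline cache[NSETS];

static uint32_t tlb_translate(uint32_t va) { return va; }  /* stub TLB */

static bool vipt_lookup(uint32_t va, uint8_t *out) {
    uint32_t set  = (va >> LINE_BITS) & (NSETS - 1); /* virtual index  */
    uint32_t pa   = tlb_translate(va);               /* in parallel HW */
    uint32_t ptag = pa >> (LINE_BITS + INDEX_BITS);  /* physical tag   */

    if (cache[set].valid && cache[set].ptag == ptag) {
        *out = cache[set].data[va & ((1u << LINE_BITS) - 1)];
        return true;    /* hit: the compare used only physical bits */
    }
    return false;       /* miss: refill from the memory hierarchy   */
}

int main(void) {
    uint8_t b;
    return vipt_lookup(0x1234, &b) ? 0 : 1;  /* cold cache: miss */
}
```

The key point is that the set index never waits on the translation, while the compare uses only physical bits, which is why no ASID is needed.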
And then you can have something where, we'll say, the index is both virtual and physical, with physical tags. This is a cute little trick that a lot of architectures play when they just want to ignore all of these problems: they want to make something that looks like a physically indexed, physically tagged cache, but they still want a cache that's bigger than their minimum page size. So you have a 4K page size and you want to have an eight kilobyte cache. If you have a direct-mapped cache, this bit here, the one above the page size, is going to be part of your index, but you're not going to be able to control it. But what you can do is take that eight kilobyte cache and make it two-way set associative. All of a sudden it's still eight kilobytes, but that reduces the number of index bits you have, and the index now fits within the page offset. So it's a cute little trick: the index is both virtual and physical, because the virtual and physical addresses below the page boundary are the same. The index into the cache doesn't get changed when you go through address translation. So you'll see this, where people actually add associativity to their L1 caches just to avoid having to do address translation before indexing; then you do the address translation in parallel and you do the tag check physically.

There's this other one down here that I have an X through. I don't think I've ever seen one of these built. You could build it, but it kind of doesn't make sense: a physically indexed, virtually tagged cache. I'm not sure why you'd want to do that, because usually the hard part is generating the address that indexes into the cache. So I've never seen one of these, but it's always possible to go build something like that.

But the thing to take away here is that if, all of a sudden, the number of index bits that go into your cache is more than the number of bits in your minimum page size, an address is going to show up in multiple places, and you have to either deal with that or at least understand what's going on there. And it's usually dealt with by the operating system.
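The arithmetic behind that associativity trick is just a size comparison; a quick sketch, assuming the 4 KB page and 8 KB cache from the example:

```c
#include <stdio.h>

/* The arithmetic behind the associativity trick: with 4 KB pages, the
 * index stays inside the untranslated page-offset bits exactly when
 *     (cache size / ways) <= page size.
 * Sizes are the lecture's example. */
int main(void) {
    unsigned page = 4096, cache = 8192;

    for (unsigned ways = 1; ways <= 4; ways <<= 1) {
        unsigned way_size = cache / ways;
        printf("8 KB cache, %u-way: %4u bytes per way -> %s\n",
               ways, way_size,
               way_size <= page
                   ? "index fits in page offset (no aliasing)"
                   : "index needs a translated bit (aliasing!)");
    }
    return 0;
}
```

This is one reason you often see L1 caches whose size divided by their associativity comes out equal to the page size.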
One final note: we've mostly been talking about multi-level page tables, where you index with the virtual address and you get a tree of pages that comes out of it. That's only one approach, and only one structure to hold all the pages in; people have built stranger things out there and still implemented paging. So one thing you could think about is, if you have lots and lots of page tables, and they're all mapping, let's say, such that they all look similar, you could try to share different portions of the page table, and many times the operating system does that. But another way to look at it is that you can have a table which takes a physical page and maps it backwards to a virtual address. These are usually called inverted page tables. On first appearance this sounds weird, because it's the map in the direction you don't want, you would think. But to make up for it, what the architectures that have had inverted page tables usually do is keep a fast hash function and a really small hash table which does the correct direction. And then, for the slow direction, they basically walk the table with, we'll say, a complicated hashing scheme, but basically it's one of those linked-list hash structures: you check one location in the physical-to-virtual table, and if it's not there, there's a link to another location, and another location, and so on. It actually ends up working out not too badly, but it's not very common today. I just want to put out the idea that you can have different arrangements of page tables; the canonical one that we talked about in class last time is not the only way to go about doing it.
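A minimal sketch of that chained-hash lookup; the table size, hash function, and field widths are illustrative assumptions, not any particular machine's scheme:

```c
#include <stdint.h>

/* Minimal sketch of an inverted page table: one entry per physical
 * frame, found via a hash of (ASID, virtual page number) with a
 * linked-list chain for collisions. */
enum { NFRAMES = 1024, NO_FRAME = -1 };

struct ipte {
    uint32_t vpn;    /* virtual page number mapped into this frame */
    uint16_t asid;   /* owning address space                       */
    int32_t  next;   /* next frame on this hash chain, or NO_FRAME */
};

static struct ipte table[NFRAMES];
static int32_t     anchor[NFRAMES];   /* hash bucket -> first frame */

static void ipt_init(void) {
    for (int i = 0; i < NFRAMES; i++) anchor[i] = NO_FRAME;
}

/* Returns the physical frame number holding (asid, vpn), or NO_FRAME
 * if the page is not resident (i.e. take a page fault). */
static int32_t ipt_lookup(uint16_t asid, uint32_t vpn) {
    uint32_t h = (vpn ^ asid) % NFRAMES;          /* the fast hash step */
    for (int32_t f = anchor[h]; f != NO_FRAME; f = table[f].next)
        if (table[f].vpn == vpn && table[f].asid == asid)
            return f;
    return NO_FRAME;
}

int main(void) { ipt_init(); return ipt_lookup(1, 0x5) == NO_FRAME ? 0 : 1; }
```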
Okay, so, any questions on virtual memory caches before we move on? Yep. [Student question, partially inaudible.] That's a good question. [laugh] So, we said page relocation is when the operating system takes a page and decides to move it someplace else in physical memory, and you're saying the cache entry gets stale. Yes. So the problem is, let's draw this. Here we have a linear page table. We put in address 0x5000, and that maps to some location in our physical memory; let's say, to make life easy, it maps to address 0x8000. Now the OS comes along and swaps this page out to disk, and sometime in the future it decides to pull it back in, and it pulls it back in down here at 0xA000 and updates the page table to point there.

Now, what the question was getting at is: in our cache, we had some data tagged with the physical memory location it was in. All of a sudden we go and move everything around. Is that a problem? Because we're basically going to do an index, and the physical address that comes out is not going to match what was in there for that data. That's actually okay: we're just going to get a miss on that location, and then it's basically going to evict the line and go pull in that exact same piece of data. So that's actually not so bad.

Now, there are [laugh] other, more subtle challenges with these virtually indexed, virtually tagged sorts of caches, which will many times require you, when you go to do remapping, to actually invalidate all the memory that you take out. Because you might actually get a hit even though it's pointing to the wrong location. So let's say the other case: you're in a virtually indexed, virtually tagged cache, and we did this exact same remapping. Well, there's different physical memory underlying virtual address 0x5000 now, and we want to make sure we don't take a hit on old data which is still in our cache. So typically the scheme to handle this is to invalidate all of that memory out of your cache; actually, it's typically a flush-and-invalidate operation. And depending on what architecture you're on, some architectures actually have instructions that flush the entire cache. x86 has something called write-back and invalidate, which will write back all of the data and then flush the entire cache. Other architectures, something more like MIPS, do it on a line-by-line basis, with what is basically an operating-system-only instruction: you present an index into the cache, and given that index, it'll flush that data cleanly.
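A sketch of what the OS-side, line-by-line version of that invalidation might look like; cache_writeback_inv_index() is a hypothetical stand-in for a MIPS-style index-based cache op, and the sizes are the running example's:

```c
#include <stdint.h>

/* When a virtual page is remapped under a virtually indexed, virtually
 * tagged cache, every line the old mapping could occupy gets written
 * back and invalidated. */
enum { LINE = 32, PAGE = 4096 };

/* Hypothetical: a real OS would emit the privileged cache instruction. */
static void cache_writeback_inv_index(uint32_t addr) { (void)addr; }

static void flush_page_from_cache(uint32_t va) {
    /* Walk the page a line at a time and flush the set each chunk
     * indexes; with aliasing, this is repeated for each possible
     * "color" of the index bits above the page offset. */
    for (uint32_t off = 0; off < PAGE; off += LINE)
        cache_writeback_inv_index(va + off);
}

int main(void) { flush_page_from_cache(0x5000); return 0; }
```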
Now, there's also something sort of in the middle: if you actually want the user to be able to do this sort of flushing, you need to think a lot harder about it, because you want some way for the user to present a virtual address but have that name something about the cache. There are some ways to do that, but the corner cases there get pretty tricky. But, so, yeah, does that answer your question? You get a miss when you go to access it someplace else, and that's actually okay.

Now, the trickier thing is, let's say the OS decides to point to this page some other way, and, let's say, DMA or another processor goes and writes this piece of memory. Now your cache is stale. But that's a more involved question, getting into how, if you have multiple processors, you keep memory coherent across them, which we're going to be talking about in two lectures, on cache coherence between different processors. If it's on the same processor and it's accessed the same way, the cache is going to pick up that change. If it's accessed, let's say, some other way through that same address, the operating system is going to have to be very careful. And this is why these virtually indexed, physically tagged caches usually require some way for the operating system to make sure those index bits don't differ: if the bits match, you know you'll kick the old line out. And because it's physically tagged, even if you have, let's say, a four-way set-associative cache, you're going to get a hit on the physical address bits after the translation. So you can't actually end up with, say, way zero and way one holding data for the same physical address; that just can't happen in a physically tagged cache, because the physical tag information would be the same and the match would be detected.
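To close the loop on that OS-side rule, here is a minimal sketch of the SPARC-style page-coloring check mentioned earlier, assuming the one-color-bit case from the 8 KB direct-mapped example:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* The OS only accepts a virtual-to-physical mapping when the
 * cache-index bits above the page offset (the page's "color") agree,
 * so every alias of a physical page lands in the same cache location.
 * One color bit matches the 8 KB direct-mapped / 4 KB page example. */
enum { PAGE_BITS = 12, COLOR_BITS = 1 };

static uint32_t color_of(uint32_t addr) {
    return (addr >> PAGE_BITS) & ((1u << COLOR_BITS) - 1);
}

/* The OS page allocator would re-pick the frame when this fails. */
static bool mapping_allowed(uint32_t va, uint32_t pa) {
    return color_of(va) == color_of(pa);
}

int main(void) {
    printf("VA 0x1000 -> PA 0x8000: %s\n",
           mapping_allowed(0x1000, 0x8000) ? "ok" : "re-pick frame");
    printf("VA 0x1000 -> PA 0x9000: %s\n",
           mapping_allowed(0x1000, 0x9000) ? "ok" : "re-pick frame");
    return 0;
}
```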