You will recall that the value of Hubble's constant was fairly unsettled all the way through the 1980s. The reason for this is that these measurements are difficult. There are so many different relations, each one has to be calibrated against another, and there are many opportunities for errors, systematic errors in particular. As a result, published values of the Hubble constant scattered by a factor of 2, which means that distances were also uncertain by a factor of 2, and things like luminosities by a factor of 4, and that, clearly, was not a satisfactory situation. So when the Hubble Space Telescope was launched, measuring the Hubble constant was seen as one of its key goals, and it was the subject of the so-called Distance Scale, or Hubble Constant, Key Project. This took ten years of very diligent measurements using the Hubble Space Telescope, and even today the telescope continues to be used for this purpose, improving the results. The idea was to observe Cepheids in a number of nearby spiral galaxies. The reason Hubble was needed is that these stars are faint and they sit in crowded fields, so the superb resolving power of HST was needed in order to actually measure their brightness and populate their light curves. Then, using the locally calibrated Cepheid relation, one can determine the distances to these galaxies and use those to calibrate other indicators, such as supernovae. A choice was made to use the distance to the Large Magellanic Cloud to establish the zero point of the Cepheid period-luminosity relation. You will recall that this was the original period-luminosity relation discovered by Henrietta Leavitt, and it still plays a role. And so any uncertainty in the distance to the Large Magellanic Cloud maps directly into the uncertainty in the Hubble constant.
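As a rough illustration of how a period-luminosity relation turns into a distance, here is a minimal sketch in Python; the relation's coefficients and the example numbers are placeholders for illustration, not the Key Project's actual calibration:

```python
import math

def cepheid_distance_pc(period_days, apparent_mag):
    """Estimate the distance to a Cepheid from its period and mean apparent magnitude.

    The period-luminosity coefficients below are illustrative placeholders;
    the zero point is effectively what the adopted LMC distance sets.
    """
    # Period-luminosity relation: absolute magnitude from the log of the period
    abs_mag = -2.4 * (math.log10(period_days) - 1.0) - 4.0  # illustrative coefficients
    # Distance modulus: m - M = 5 log10(d / 10 pc), solved for d
    mu = apparent_mag - abs_mag
    return 10 ** (1.0 + mu / 5.0)

# A 30-day Cepheid observed at m ~ 25 comes out at roughly 10 Mpc
d = cepheid_distance_pc(30.0, 25.0)
print(f"{d / 1e6:.1f} Mpc")
```

The key point is that an error in the assumed zero point (the LMC distance) shifts every derived distance, and hence the Hubble constant, by the same factor.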
Now, not only did this team perform wonderful measurements, but they were also very careful about their analysis, and they tried to honestly account for every source of error they could think of. The final result is shown here: the Hubble constant turned out to be right in the middle of the disputed interval between 50 and 100 kilometers per second per megaparsec. It is 72, plus or minus 3 in purely random errors, but also plus or minus 7 kilometers per second per megaparsec in potential systematic errors, and that's an honest result. In fact, it is still perfectly consistent with all of the more modern measurements. Note, however, that there is still a dependence on the assumed distance to the Magellanic Clouds, and that is something people continue to improve.

So here is a sample image of what they were looking at. The picture on the left is one of the spiral galaxies used in their study, and superimposed on it is an outline of the field of view of the space telescope camera that was used. This was the Wide Field and Planetary Camera 2, which has that strange stair-step shape. The picture on the right shows a zoom-in on one of those images, with some of the candidate Cepheids circled. As you can see, this would be a very hard thing to do from the ground. Some of these Cepheids occur in star-forming regions, so there may well be other bright stars blended with them, and that has to be taken care of. And here are some light curves of Cepheids discovered by the Key Project. These represent many measurements at different times, folded together with the best-fit period, and you can see that they really look like those of nearby Cepheids. So here are some Hubble diagrams they obtained in the end. The bottom left one shows only the galaxies whose distances were derived from Cepheids; the one on the right includes all the calibration sources they could come up with.
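The Hubble-diagram step itself is just a straight-line fit through the origin, v = H0 d. Here is a toy version with made-up data points scattered around a slope of 72, not the Key Project's actual measurements:

```python
# Toy Hubble-diagram fit: recession velocity versus distance.
# These data points are invented for illustration only.
distances_mpc = [5.0, 8.0, 12.0, 15.0, 20.0]           # Mpc
velocities    = [380.0, 560.0, 850.0, 1100.0, 1420.0]  # km/s

# Least-squares slope for a line forced through the origin:
# H0 = sum(v * d) / sum(d * d)
num = sum(v * d for v, d in zip(velocities, distances_mpc))
den = sum(d * d for d in distances_mpc)
H0 = num / den
print(f"H0 ~ {H0:.0f} km/s/Mpc")
```

With real data the scatter comes from peculiar velocities and distance errors, which is why many galaxies, and ultimately many different indicators, are combined.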
And here is a table that accounts for the different sources of uncertainty in their measurement. I'm showing it for information only, so you can see how many things they could think of. And here is their final probability distribution for the Hubble constant. This is actually the good scientific way of presenting a result: it's not just a number, it's a probability distribution for that number. The peak value is the one that's quoted, and the width of the distribution is indicative of the uncertainty.

People continued to try to determine the Hubble constant using any one of a number of methods and combinations of different indicators and calibrators, and here again is a table of those. It's not there for you to remember all of it, but just to see how much the different measurements scatter around each other. Most of them are certainly within the error bars of the value determined by the Hubble Key Project.

Now, recall that the basis for the whole thing was the distance to the Large Magellanic Cloud, which is about 50 kpc from the Milky Way. The uncertainty in this distance maps directly onto the uncertainty in the Hubble constant. The distance to the Large Magellanic Cloud has been measured in many different ways by many different authors, and that alone has a spread of plus or minus 10 percent. There are maybe about ten different methods by which this was attempted, and here is a table that shows some of the results. Again, it's not there for you to remember all of it, but just to see, roughly, the spread of the numbers and the accuracies that are involved.

A more important check, which is actually becoming a very powerful new method, is as follows. In the nuclei of many large spiral galaxies there is a massive black hole, which, for all practical purposes, is like a point mass, just as, essentially, all of the mass in the solar system is in the Sun. Now, if we have test particles moving around that black hole, then from measuring their orbits we can find out how far away the galaxy is.
The orbits can be safely assumed to be Keplerian, and the suitable test particles are so-called interstellar masers. These are interstellar clouds that have a very sharp line due to coherent emission, and as they move around the center of the galaxy, they can be used to measure the central mass, but also to measure the semi-major axes of their orbits. This was the first case in which this was done; since then there have been more, and the distance to this particular galaxy was found to be consistent with the one determined from Cepheids.

Now, wouldn't it be nice if we could bypass all this messy distance-ladder climbing from one rung to another, and go directly into the Hubble flow? Well, there are two methods by which this can be accomplished that do not require any other calibration. They're both based on physical reasoning. The first one is the gravitational lens time delay, and the second is the so-called Sunyaev-Zel'dovich effect. While these are based on physics, they're still very much model-dependent. Initially, at least, they were producing values somewhat lower than those measured by the Hubble Key Project, but since then they have converged somewhat, and any of the small remaining discrepancies can be understood in terms of systematic errors.

So first, gravitational lens time delays. Assuming that we understand the geometry of the lensing, and I'll show this in a moment, we can in principle derive the distance between us, the lens, and the lensed object using the measured time delay. Modeling the lens geometry is the key uncertainty here, because the masses responsible for gravitational lensing are not always perfectly circularly symmetric, and there can be a combination of many potential wells, say of galaxies in a cluster or group. So here is how it works. Here's a schematic diagram of a gravitationally lensed source.
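The maser-orbit idea above can be sketched with simple circular-orbit kinematics. The numbers below are illustrative, loosely typical of a nuclear maser disk, not measurements of any real galaxy:

```python
import math

# Maser-orbit distance sketch (illustrative numbers). From the observations
# one gets, for masers on a roughly circular Keplerian orbit:
#   v      orbital speed, from the Doppler shift of the maser line
#   a      centripetal acceleration, from the slow drift of the line
#   theta  angular radius of the orbit on the sky, from radio imaging
v = 1.1e6          # m/s    (~1100 km/s orbital speed)
a = 2.85e-4        # m/s^2  (~9 km/s per year line-of-sight acceleration)
theta_mas = 4.0    # milliarcseconds, angular radius of the orbit

# Circular motion gives the physical orbit radius directly: r = v^2 / a.
r = v**2 / a                                          # meters
# Comparing physical and angular radius gives the distance: D = r / theta.
theta = theta_mas * 1e-3 / 3600.0 * math.pi / 180.0   # radians
D = r / theta                                         # meters

PC = 3.086e16  # meters per parsec
print(f"D ~ {D / (1e6 * PC):.1f} Mpc")
```

Notice that no ladder calibration enters at all: the speed of light, the line shifts, and simple geometry do all the work.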
There is a background source, say a quasar, and there is a foreground lens, which could be a galaxy or a cluster. It bends the light rays coming from the background source, and one usually sees multiple images on the sky. Each of these images corresponds to a particular path of light rays that came around the gravitational lens, and the paths will generally differ in length. That difference in length translates into a difference in arrival times. So if there is variability in the quasar, we will first see it in one image, and then some time later in the other. These time delays are typically in the range of weeks or months. The path difference between different rays, assuming that you know the geometry, will scale directly with every other length in the system, and the ratio of this path difference to the distance to the lens or the source is also something the model will tell you. So by measuring the time delay and multiplying by the speed of light, we directly measure the difference in path length, and if we know the lens model, we can then use it to infer the distance to the lens or the lensed object.

The Sunyaev-Zel'dovich effect is something entirely different. Clusters of galaxies contain galaxies and dark matter, but also a lot of hot gas, gas that was expelled from galaxies or accreted by the cluster. Since that gas sits in the potential well of the cluster, the speeds of the individual particles, protons and electrons, have to be such that the kinetic energy balances the potential energy. It turns out that this corresponds to temperatures of millions or tens of millions of degrees, which means that the gas will emit in X-rays. Now, consider looking at the cosmic microwave background through such a cluster. The photons of the microwave background will come through, and some of them will scatter off these hot, energetic electrons. In the case of forward scattering, this will generally result in an increased energy of the photon.
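Returning to the time-delay method for a moment, the scaling argument above can be sketched as follows. The delay and the model ratio here are invented numbers, since in practice all the hard work is in building the lens model that supplies that ratio:

```python
# Time-delay distance sketch (all numbers invented for illustration).
C = 2.998e8         # speed of light, m/s
delay_days = 30.0   # measured delay between two quasar images

# The delay times c is directly the path-length difference between rays.
delta_L = C * delay_days * 86400.0   # meters

# A lens model predicts the dimensionless ratio of this path difference
# to the overall distance scale of the system; here we simply assume one.
model_ratio = 2.5e-11   # hypothetical output of a lens model

D = delta_L / model_ratio   # inferred distance, meters
PC = 3.086e16               # meters per parsec
print(f"D ~ {D / (1e9 * PC):.1f} Gpc")
```

A non-symmetric mass distribution changes `model_ratio`, which is exactly why the lens modeling dominates the systematic error budget.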
The energy is gained from the electrons in the cluster, and of course an equal amount goes out the other side. So essentially, what you see on the microwave background sky is a bump that corresponds to this X-ray cloud: the spectrum of the cosmic microwave background is shifted toward somewhat higher energies. By measuring that shift, we can find out how long the path was, because the longer the path along the line of sight, the more chances the photons have to scatter. Therefore, from this measurement we can derive the physical depth of the cluster, and we can also measure its apparent angular size on the sky. On average, we expect a cluster to have the same size along the line of sight as orthogonal to it. So, since we have the observed apparent angular size of the cluster on the sky, and we know how much that is in physical units, plus the cluster's redshift, we can derive the angular diameter distance.

Now, any given cluster is not likely to be spherically symmetric, but the whole ensemble, on average, will probably work out. A beautiful thing about this method is that it does not depend on the distance to the cluster itself: the source that's observed is the cosmic microwave background, and the cluster could be nearby or very far away, so the method can work over a very broad range of redshifts. There are uncertainties in modeling the process, because the gas could be clumpy and there are density gradients; all of that has to be accounted for before we can derive the actual diameter of the cluster that the photons go through. Next we will talk about measurements of the age of the universe.
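Before moving on, the Sunyaev-Zel'dovich distance argument can be sketched numerically. The cluster's physical size and angular size below are assumed, illustrative values, standing in for what the SZ plus X-ray analysis would actually deliver:

```python
import math

# Sunyaev-Zel'dovich distance sketch (illustrative numbers). The SZ decrement
# combined with the X-ray data yields the cluster's physical depth along the
# line of sight; assuming spherical symmetry, that equals its transverse size,
# so comparing with the apparent angular size gives the angular diameter
# distance D_A = L / theta.
PC = 3.086e16            # meters per parsec
L = 2.0e6 * PC           # assumed inferred physical size: ~2 Mpc, in meters
theta_arcmin = 5.0       # assumed apparent angular size on the sky

theta = theta_arcmin / 60.0 * math.pi / 180.0   # radians
D_A = L / theta
print(f"D_A ~ {D_A / (1e6 * PC):.0f} Mpc")
```

As in the lecture, nothing here calibrates against another distance indicator; the price is the assumption of sphericity, which is why the method is applied to ensembles of clusters rather than trusted for any single one.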