Another Inverse-Square Law
Figure 10.4 shows light leaving a star and traveling through space. Moving outward, the radiation passes through imaginary spheres of increasing radius surrounding the source. The amount of radiation leaving the star per unit time—the star's luminosity—is constant, so the farther the light travels from the source, the less energy passes through each unit of area. Think of the energy as being spread out over an ever-larger area, and therefore spread more thinly, or "diluted," as it expands into space. Because the area of a sphere grows as the square of the radius, the energy per unit area—the star's apparent brightness—is inversely proportional to the square of the distance from the star. Doubling the distance from a star makes it appear 2², or four, times dimmer. Tripling the distance reduces the apparent brightness by a factor of 3², or nine, and so on.
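In symbols (a compact restatement of the dilution argument, with d denoting the radius of the imaginary sphere, that is, the distance from the star):

\[
\text{area of sphere} \;=\; 4\pi d^{2}
\quad\Longrightarrow\quad
\text{energy per unit area} \;\propto\; \frac{1}{d^{2}},
\]

so doubling d cuts the energy crossing each unit of area to (1/2)² = 1/4 of its former value.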
Of course, the star's luminosity also affects its apparent brightness. Doubling the luminosity doubles the energy crossing any spherical shell surrounding the star and hence doubles the apparent brightness. The apparent brightness of a star is therefore directly proportional to the star's luminosity and inversely proportional to the square of its distance:

apparent brightness ∝ luminosity / distance²
Thus, two identical stars can have the same apparent brightness if (and only if) they lie at the same distance from Earth. However, as illustrated in Figure 10.5, two non-identical stars can also have the same apparent brightness if the more luminous one lies farther away. A bright star (that is, one having large apparent brightness) is a powerful emitter of radiation (high luminosity), is near Earth, or both. A faint star (small apparent brightness) is a weak emitter (low luminosity), is far from Earth, or both.
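For instance (numbers chosen here purely for illustration, not taken from Figure 10.5), suppose star B is four times as luminous as star A but lies twice as far from Earth. The proportionality above then gives

\[
\frac{b_{B}}{b_{A}} \;=\; \frac{L_{B}/L_{A}}{(d_{B}/d_{A})^{2}} \;=\; \frac{4}{2^{2}} \;=\; 1,
\]

so the two stars have exactly the same apparent brightness.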
Determining a star’s luminosity is a twofold task. First, the astronomer must determine the star’s apparent brightness by measuring the amount of energy detected through a telescope in a given amount of time. Second, the star’s distance must be measured—by parallax for nearby stars and by other means (to be discussed later) for more distant stars. The luminosity can then be found using the inverse-square law. Note that this is basically the same reasoning we used earlier in our discussion of how astronomers measure the solar luminosity (in our new terminology, the solar constant is just the apparent brightness of the Sun). (Sec. 9.1)
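A minimal Python sketch of this two-step procedure (the numerical inputs and the parsec-to-meter conversion are illustrative assumptions, not measurements from the text):

```python
import math

# Hypothetical measured values, for illustration only.
apparent_brightness = 2.5e-9   # energy flux received at the telescope, in W/m^2
parallax_arcsec = 0.10         # measured parallax, in arcseconds

# Step 1: distance from parallax (distance in parsecs = 1 / parallax in arcseconds).
distance_pc = 1.0 / parallax_arcsec
distance_m = distance_pc * 3.086e16      # 1 pc is about 3.086 x 10^16 m

# Step 2: invert the inverse-square law: luminosity = 4 * pi * d^2 * apparent brightness.
luminosity_watts = 4.0 * math.pi * distance_m**2 * apparent_brightness
print(f"Luminosity: {luminosity_watts:.2e} W")
```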
The Magnitude Scale
Instead of measuring apparent brightness in SI units (for example, watts per square meter, W/m², the unit in which we expressed the solar constant in Section 9.1), optical astronomers find it more convenient to work in terms of a construct called the magnitude scale. This scale dates back to the second century B.C., when the Greek astronomer Hipparchus ranked the naked-eye stars into six groups. The brightest stars were categorized as first magnitude. The next brightest stars were labeled second magnitude, and so on, down to the faintest stars visible to the naked eye, which were classified as sixth magnitude. The range one (brightest) through six (faintest) spanned all the stars known to the ancients. Notice that a large magnitude means a faint star.
When astronomers began using telescopes with sophisticated detectors to measure the light received from stars, they quickly discovered two important facts about the magnitude scale. First, the one through six magnitude range defined by Hipparchus spans about a factor of 100 in apparent brightness—a first-magnitude star is approximately 100 times brighter than a sixth-magnitude star. Second, the characteristics of the human eye are such that a change of one magnitude corresponds to a factor of about 2.5 in apparent brightness. In other words, to the human eye a first-magnitude star is roughly 2.5 times brighter than a second-magnitude star, which is roughly 2.5 times brighter than a third-magnitude star, and so on. (By combining factors of 2.5, we confirm that a first-magnitude star is indeed (2.5)⁵ ≈ 100 times brighter than a sixth-magnitude star.)
In the modern version of the magnitude scale, astronomers define a change of five in the magnitude of an object to correspond to exactly a factor of 100 in apparent brightness. Because we are really talking about apparent (rather than absolute) brightnesses, the numbers in Hipparchus's ranking system are now called apparent magnitudes. In addition, the scale is no longer limited to whole numbers, and magnitudes outside the original range 1–6 are allowed—very bright objects can have apparent magnitudes much less than 1, and very faint objects can have apparent magnitudes far greater than 6. Figure 10.6 illustrates the apparent magnitudes of some astronomical objects, ranging from the Sun, at -26.8, to the faintest object detectable by the Hubble or Keck telescopes, at an apparent magnitude of +30—about as faint as a firefly seen from a distance equal to Earth's diameter.
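A minimal Python sketch of this definition (the function name is chosen here for illustration): a magnitude difference Δm corresponds to an apparent-brightness ratio of 100^(Δm/5).

```python
def brightness_ratio(delta_m):
    """Apparent-brightness ratio for a magnitude difference delta_m,
    using the modern definition: 5 magnitudes = a factor of exactly 100."""
    return 100.0 ** (delta_m / 5.0)

print(brightness_ratio(1))   # about 2.512, the factor per single magnitude
print(brightness_ratio(5))   # exactly 100, the span of Hipparchus's range 1-6
```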
Apparent magnitude measures a star's apparent brightness when seen at the star's actual distance from the Sun. To compare intrinsic, or absolute, properties of stars, however, astronomers imagine looking at all stars from a standard distance of 10 pc. (There is no particular reason to use 10 pc—it is simply convenient.) A star's absolute magnitude is its apparent magnitude when viewed from a distance of 10 pc. Because distance is fixed in this definition, absolute magnitude is a measure of a star's absolute brightness, or luminosity. The Sun's absolute magnitude is 4.8. In other words, if the Sun were moved to a distance of 10 pc from Earth, it would be only a little brighter than the faintest stars visible in the night sky. As discussed further in More Precisely 10-1, the numerical difference between a star's absolute and apparent magnitudes is a measure of the distance to the star.
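The quantitative form of that last statement is the standard distance-modulus relation, m − M = 5 log₁₀(d / 10 pc), where m is the apparent magnitude, M the absolute magnitude, and d the distance. A minimal Python sketch (the helper name is illustrative):

```python
import math

def absolute_magnitude(apparent_mag, distance_pc):
    """Absolute magnitude from apparent magnitude and distance in parsecs,
    via the distance modulus m - M = 5 log10(d / 10 pc)."""
    return apparent_mag - 5.0 * math.log10(distance_pc / 10.0)

# Example using numbers from the text: the Sun, with apparent magnitude -26.8
# at a distance of 1 AU = 1/206,265 pc, comes out near absolute magnitude 4.8.
print(round(absolute_magnitude(-26.8, 1.0 / 206265.0), 1))
```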