Proceedings of the National Academy of Sciences, is that despite triggering a similar maturation of the end buds, the drug and hormones offered vastly different levels of protection.
Perphenazine cut the 9-month cancer incidence to just under 75 percent, while breast-cancer rates in animals receiving the pair of hormones plummeted to between 4 and 11 percent. “What that told us,” Nandi says, “is that [end-bud] differentiation is probably not the reason for protection against breast cancer following pregnancy.” This had been the leading hypothesis.
The hormonal duo proved effective even when delivered several weeks before the carcinogen exposure. It also worked when its estrogen component was small—matching some of the lower blood concentrations measured during gestation, though still exceeding those that occur outside of pregnancy. Even cutting the rats’ therapy to a single week, the equivalent of treating women for just 3 months, didn’t reduce long-term protection.
Estrogen-only therapy worked less well, allowing 38 percent of the rats to develop cancer, and treatment with progesterone alone actually spurred the disease. Not only did all rats on the progesterone regimen get cancer, but each developed more tumors than did the carcinogen-exposed rats that were denied any treatment.
Though he is not yet sure how estrogen protects, Nandi told Science News that “we think the predominant effect is going to be at the brain and pituitary-gland level.” These organs may trigger the secretion of hormones that reduce the number of cellular receptors in breast tissue for hormones such as estrogen. The presence of fewer receptors for estrogen would reduce the breast’s response to this hormone, which can fuel cancer growth.
Investigating hormonal simulation of pregnancy to reduce breast-cancer risk “is something that had to be done,” says epidemiologist Malcolm C. Pike of the University of Southern California in Los Angeles. Nandi’s new data suggesting that short-term therapy might work “is really very exciting,” Pike maintains.
The USC researcher has data indicating that intensive treatments with novel contraceptives also block breast cancer (SN: 10/31/92, p. 298), and a start-up company is now making and testing them. Other researchers have developed drugs to selectively limit the effects of hormones in the breast. In the war on this cancer, Pike says, “I’m now very optimistic we’re going to win.” —J. Raloff
Asteroids get solar push toward Earth
When it comes to luring asteroids into the inner solar system, a little nudge goes a long way.
Most asteroids inhabit an elliptical set of tracks, known as the main asteroid belt, between the orbits of Mars and Jupiter. Although such rocks are unlikely to spell doomsday for Earth, they occasionally get flung from the belt onto paths that intersect our planet’s orbit. A new study suggests that tiny motions induced by the sun’s energy can play a crucial role in sending asteroids on such an inward journey.
Researchers realized in the 1980s that asteroids occupying certain zones, known as resonances, within the outer part of the main belt are profoundly influenced by Jupiter’s gravity. The giant planet’s pull can dramatically elongate the orbits of these asteroids, causing their paths to cross those of the inner planets. More recently, scientists have calculated that another set of resonances in the main belt nearer Mars also acts as an escape hatch, ejecting some rocks into the inner solar system.
These special zones are numerous but extremely narrow, making it hard to explain how so many asteroids end up in the inner solar system. In the March 5 Science, Paolo Farinella of the University of Trieste in Italy and David Vokrouhlicky of Charles University in Prague, Czech Republic, present computer simulations showing that a nongravitational effect so tiny it has often been ignored could account for the migration.
Named for the Russian engineer who discovered it a century ago, the Yarkovsky effect results from the way a spinning asteroid absorbs and reradiates solar energy. Because an asteroid’s surface gets hotter the longer sunlight falls on it, it does not reradiate energy evenly throughout its day or year.
If different parts of the surface don’t reemit radiation equally, the asteroid will receive a net kick in a particular direction, just as a rocket spewing a jet of gas recoils in the opposite direction.
Farinella and Vokrouhlicky calculate that over a period of 10 million to 1 billion years, the typical interval between collisions among such small asteroids, the Yarkovsky effect can shift an orbit by a few million kilometers. This effect is large enough to push a significant number of asteroids with diameters of less than 20 km into resonances that can deliver them into the inner solar system.
The smaller the asteroid, the greater the Yarkovsky effect. This could explain why the tiniest members of one family of asteroids, known as the Astrids, have the widest range of orbits, Farinella and Vokrouhlicky note.
“We’re learning more and more that small effects [like this] can have important consequences,” says Joseph A. Burns of Cornell University. For instance, the Yarkovsky effect may help explain why two classes of asteroid fragments, or meteorites, each with a distinct composition, take very different amounts of time to reach Earth. —R. Cowen
The near-Earth asteroid 433 Eros.
Census Sampling Confusion
Controversy dogs the use of statistical methods to adjust U.S. population figures
By IVARS PETERSON
Counting the number of people in the United States is a massive, costly enterprise, done once every 10 years.
For the year 2000 enumeration, the Bureau of the Census plans to mail out more than 90 million questionnaires and deliver millions more by hand to households across the nation before Census Day, April 1. An army of census takers will then fan out across the country, aiming to track down the significant number of people who fail to return forms.
Past experience suggests that, despite such a huge effort, the population count will still be incomplete. In 1990, the Census Bureau officially recorded 248,709,873 people. Evidence from other surveys and demographic analyses indicated that the population was closer to 253 million and that those not counted were mostly children, people from racial and ethnic minorities, and poor residents of both rural and urban areas.
“Coming out of the 1990 census, we recognized that you can’t count everyone by direct enumeration,” says Barbara Everitt Bryant, Census Bureau director during that head count.
To obtain the most accurate count possible in the year 2000, the Census Bureau has proposed integrating the results of conventional counting techniques with the results of a large sample survey of the population (SN: 10/11/97, p. 238). Bureau officials call this approach a “one-number census” because the statistically adjusted result will be the one and only official count.
“The plan will help lead to a result that includes more of the overall population, especially for certain subpopulations, and it will help to control costs,” Tommy Wright, chief of the Census Bureau’s statistical research division, argues in the Jan. 22 Science.
In January, however, the Supreme Court ruled against the use of statistical sampling methods to obtain population figures for determining how many of the 435 seats in the House of Representatives go to each of the 50 states. At the same time, the court upheld a law that mandates the use of statistically adjusted figures, when feasible, for all other purposes, such as distributing $180 billion in funds for federal programs and determining congressional and state district boundaries.
Last week, the Census Bureau responded to the court’s decision and unveiled a revised plan for Census 2000 that would, in effect, provide two population totals. After counting “everyone it possibly can,” the bureau would then conduct a quality-check survey, says Kenneth Prewitt, Census Bureau director. The new plan, however, could push the cost of conducting the census from $4 billion to more than $6 billion.
A handful of statisticians has also entered the fray. David A. Freedman of the University of California, Berkeley; Martin T. Wells of Cornell University in Ithaca, N.Y.; and several colleagues contend that current survey techniques don’t necessarily provide the accuracy needed to improve census data.
“It’s a very difficult problem,” Wells says. “There are many, many problems for which statistics is wildly successful, but statistics can’t solve everything.” Freedman and his colleagues present their critique in a paper posted on the Internet (Technical Report 537 at http://www.stat.berkeley.edu/tech-reports/).
Many other statisticians disagree with that stand. “You have to look at what else is feasible,” says David S. Moore of Purdue University in West Lafayette, Ind. “The real question is, Given the time constraints, the obvious weaknesses in straight enumeration, and the budget Congress is likely to supply, how good a job does the Census Bureau’s proposed method do?”
“Generally, the statistical community is very much in favor of trying to do sampling,” says Donald B. Rubin of Harvard University.
To illustrate the difficulties of obtaining accurate population figures, Wright uses an analogy. Suppose 10 enumerators, working independently, are asked to determine the number of people at a basketball game during halftime.
The ticket sales count won’t do because some people might have sneaked in or legitimately gained admission without tickets, while others who bought tickets may not have shown up. Counting people in the stands during halftime presents other problems. Spectators come and go, some exiting the arena and others seeking refreshments or switching seats.
As a result, some people may be counted twice and others missed completely. Almost certainly, the 10 enumerators would come up with 10 different counts. That variability represents what statisticians call measurement error.
Just like the estimates of attendance at a basketball game, censuses of the United States also contain measurement error, Wright says. Moreover, given limited resources, it’s difficult to keep that measurement error small when counting a large and highly mobile population.
As a quality check on the accuracy of the year 2000 enumeration, the Census Bureau has proposed surveying a nationwide sample of about 10,000 randomly selected areas, or “blocks,” containing about 300,000 households. The Census Bureau divides the United States into roughly 7 million blocks, of which about 5 million are inhabited. A block can be just one large apartment complex or a rural county.
In an effort entirely independent of the main count, census workers would knock on the door of each housing unit in the sample blocks and list every person who admits to having resided there on Census Day. The Census Bureau would compare the results from the main count conducted in those blocks with this second, independent count, looking for matches of housing units and persons. Those two measures would then be combined to yield a single set of numbers.
The venerable, widely used statistical procedure that underlies this approach is known as the capture-recapture method. In effect, it combines two estimates, both slightly off from the true value, to generate one number that is thought to be closer to the truth than either of the original measures.
To illustrate how the technique works, consider the problem of estimating the number of perch in a certain lake. By fishing simultaneously in several areas distributed across the lake, a research team catches, tags, and releases 40 perch. That represents the capture phase.
In a second effort, the recapture phase, the team goes out and catches, say, 50 perch, of which eight are tagged. Because one-fifth (8 out of 40) of the tagged fish were caught, you can conclude that this time approximately one-fifth of all the perch in the lake were caught. That means the total number of perch is about 250 (40/8 x 50).
In the Census Bureau’s approach, the traditional count is the capture phase and the survey is the recapture phase. The final population estimate would be the product of the first measure (based on counting) times the second measure (based on sampling) divided by the number of people found in both phases.
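The perch arithmetic and the census application both use the same estimator, often called the Lincoln-Petersen formula. A minimal sketch in Python (the function name is our own; the numbers are those from the article's perch example):

```python
def capture_recapture(first_count, second_count, matched):
    """Lincoln-Petersen estimate of a total population.

    first_count:  individuals found in the capture phase (e.g., the census count)
    second_count: individuals found in the recapture phase (e.g., the survey)
    matched:      individuals found in both phases
    """
    return first_count * second_count / matched

# The article's example: 40 perch tagged, 50 caught later, 8 of them tagged.
print(capture_recapture(40, 50, 8))  # prints 250.0
```

The intuition is the one the article gives: the fraction of tagged fish in the second catch (8 of 50) is taken as the fraction of the whole population that the first effort tagged, so the total is 40 divided by that fraction.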
“Although sampling techniques introduce sampling error, they offer the opportunity to diminish the larger measurement error and hence the overall error,” Wright notes.
That approach has been endorsed by the National Research Council’s Committee on National Statistics as well as a panel of the American Statistical Association and other groups.
“Estimation based on statistically designed samples is a standard and widely used method of obtaining information about large human populations,” Moore says. All the economic indicators and employment figures, for example, stem from such methods.
Though there may be questions about the details of how the Census Bureau might implement statistical sampling, proper use of this tool can improve the accuracy of the census, Moore maintains.
It’s the details, however, that bother Freedman and his associates. “Adjustment adds layer upon layer of complexity to an already complex census,” the statisticians contend. “The results of adjustment are highly dependent on somewhat arbitrary technical decisions. Furthermore, mistakes are almost inevitable, are very hard to detect, and have profound consequences.”
Flawed data processing and clerical work in both the census and the survey, for example, can accidentally introduce errors. In the 1990 quality-check effort, a computer glitch would have mistakenly added a million people to the initial count.
If they go undetected, such errors can have a significant impact. The census includes both overcounts (people erroneously counted twice) and undercounts (people missed). The quality-control survey provides estimates of both numbers. However, a relatively small error in approximating the number of omissions and the number of mistaken enumerations could lead to an unacceptably large error in calculating the difference of the two, the net undercount.
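A toy calculation shows why the net undercount is so fragile. The numbers below are hypothetical round figures chosen for illustration, not Census Bureau estimates:

```python
# Hypothetical component estimates from a quality-check survey:
omissions = 10_000_000  # people the census is estimated to have missed
erroneous = 6_000_000   # people estimated to have been counted in error

net_undercount = omissions - erroneous  # 4,000,000

# Suppose each component estimate is off by just 5 percent, in opposite
# directions (integer percent arithmetic keeps the illustration exact):
low = omissions * 95 // 100 - erroneous * 105 // 100    # 3,200,000
high = omissions * 105 // 100 - erroneous * 95 // 100   # 4,800,000

print(net_undercount, low, high)
```

Here a 5 percent error in each of the two large components becomes a 20 percent swing in their much smaller difference, which is the quantity the adjustment actually depends on.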
Even if it’s possible to improve the counts of demographic groups across the nation as a whole using sampling, critics of the method point out that getting an accurate estimate of the population make-up in an individual state, county, or city is much more difficult. That’s important because political representation and tax funds are generally allocated to geographical areas rather than specifically to racial, ethnic, or other groups nationwide.
Some groups of people, such as children, are likely to be missed in both the census and the follow-up survey. Other people, such as college students, may get counted twice. The difficulty is that the people missed or counted twice aren’t distributed evenly across the country but are concentrated in different regions.
“Unless the adjustment method is quite exact, it can make estimated shares worse by putting people in the wrong places,” the statisticians point out.
In planning for the 2000 census follow-up, the bureau has responded to criticisms of the 1990 post-enumeration survey, which was not actually used to adjust the 1990 counts. It intends to increase the sample size substantially, change the procedure for matching the enumeration and sampling, require that adjustments be made state by state rather than nationwide, and introduce other refinements.
Nonetheless, Freedman and his colleagues maintain that the central issue is whether the proposed adjustments to the census take out more error than they put in. Although sampling error goes down as the sample size increases, it becomes more difficult to recruit, train, and manage the personnel required to conduct an accurate survey, they contend. Other potentially sizable, hard-to-detect errors can then creep in.
“Those concerns about nonsampling error are not new,” says the Census Bureau’s Raj Singh. “We have quality-control programs, we monitor the various operations very closely as they are conducted, and we have an extensive training program for the staff performing those operations.”
“The Census Bureau does an outstanding job, but it has constraints,” Wells says. “There’s no easy solution. You have to be very careful and try to really understand where the problems lie.”
“This is a scientific question on which scientists ought to be able to use their best judgment without being attacked for the consequences of that judgment on, for example, apportionment or representation of minorities,” Moore says. However, “that doesn’t mean that the scientists have to be unanimous.”
The method also doesn’t have to be perfect. “You’re correcting the big errors,” Bryant says. “The whole point of the adjustment is to get the differential undercount down.”
Dress rehearsals last year allowed the Census Bureau to test and refine its procedures. For example, the bureau implemented its original census plan, which includes statistical sampling for completing the count and for conducting an independent post-enumeration survey in Sacramento, Calif. There, the initial population count was 349,197. Door-to-door enumerators and sampling techniques later added 28,544 people. The independent quality-check survey pointed to a net undercount of 6.3 percent, even after the additions. The final, adjusted population figure was 403,313.
In Columbia, S.C., and 11 surrounding counties, the Census Bureau conducted the dress rehearsal using only traditional counting methods and came up with a population total of 662,140. It also administered a post-enumeration survey to check accuracy but not to make an adjustment. The survey indicated that the count missed 9.4 percent of the population. The net undercount was 13.4 percent for blacks and 6.3 percent for whites.
“The evidence, repeatedly from the first census through the 1990 census, suggests that if Census 2000 were conducted . . . using the conventional methods of the past, even with increased outreach, the results would tend to be consistently below the truth,” Wright insists. “On the other hand, theory, simulations, and tests lead us to believe that a one-number census . . . would tend to yield results around and closer to the truth.”
The debate over sampling in the census, however, is far from over. In Congress, it has become a highly partisan battle.
Republicans vehemently oppose the use of sampling. They favor improving the traditional head count. Their proposal makes additional funds available to the Census Bureau to reach a greater number of people by advertising the census more widely, hiring extra census workers, and translating the census forms into additional languages.
Congressional Democrats strongly support the Census Bureau’s original plan and are ready to go along with the Clinton administration’s idea of pursuing an enumeration that produces two sets of figures. Some contend that, without sampling, the bureau will get a significant undercount no matter how much it spends on outreach.
With funding for the Census Bureau slated to run out in June, reaching a decision on the specific form the census will take becomes a major issue.
“The census is a very large undertaking,” Moore says. “With Congress not wanting to fully fund the census until these issues are settled, regardless of which way we go, [Census 2000] may end up being seen as a failure simply because there wasn’t enough time to do all the details as well as we would like.”
Year    Total U.S.    Enumerator    Headquarters/    Total cost of census
        population    staff         office staff     (thousands of dollars)
        (millions)
1790        3.9            650          (a)                    $44
1800        5.3            900          (a)                     66
1810        7.2          1,100          (a)                    178
1820        9.6          1,188          (a)                    208
1830       12.9          1,519           43                    378
1840       17.1          2,167           28                    833
1850       23.2          3,231          160                  1,423
1860       31.4          4,417          184                  1,969
1870       38.6          6,530          438                  3,421
1880       50.2         31,382        1,495                  5,790
1890       63.0         46,804        3,143                 11,547
1900       76.2         52,871        3,447                 11,854
1910       92.2         70,286        3,738                 15,968
1920      106.0         87,234        6,301                 25,117
1930      123.2         87,756        6,825                 40,156
1940      132.2        123,069        9,987                 67,527
1950      151.3        142,962        9,233                 91,462
1960      179.3        159,321        2,960                127,934
1970      203.2        166,406        4,571                247,653
1980      226.5        458,523        4,081              1,136,000
1990      248.7        510,200        6,763              2,600,000
(a) There was no official headquarters staff for the first four censuses. In addition, the records for the 1790, 1800, and 1810 censuses were accidentally destroyed; the numbers shown are estimates.
Growth of the decennial census from 1790 to 1990.
Changes in the U.S. population and its undercount by race and ethnicity between the 1980 and 1990 decennial censuses. Source: Bureau of the Census estimates of net undercounts based on demographic analysis and derived largely from administrative data, such as birth and death records, as of June 1991.
Once dreaded as industrial poisons, some of these compounds may prove to be natural—even beneficial
By JANET RALOFF
Within every cell of the body resides a protein known to scientists as the aryl hydrocarbon receptor. Most people, however, know it by its more ominous name: the dioxin receptor.
It’s the site to which a number of industrial chemicals or byproducts bind and start a process that can lead to damage. Like a key in a lock, these hydrocarbons slip into the receptor. If the fit is good, they trigger a cascade of gene actions that can disrupt normal development, impair fertility, and cause diseases, perhaps including cancer (SN: 12/9/95, p. 399).
The key that most effectively unlocks this receptor’s toxic potential is 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD). It appears to be so potent that even at background levels in the environment it can alter the development of a child’s teeth (SN: 2/20/99, p. 119).
Dozens of other pollutants can also turn on this receptor. To distinguish among these compounds, scientists usually describe TCDD as the dioxin and all others as functional dioxins. Though direct exposure to some of these others has been linked to IQ deficits (SN: 9/14/96, p. 165), sickness and behavioral problems can show up even in the offspring of heavily exposed people (SN: 11/11/95, p. 310).
Toxicologists have puzzled over the question of how fish, birds, and mammals evolved a receptor millions of years ago (SN: 5/17/97, p. 306) that could be activated by pollutants that would become ubiquitous only in the 20th century. Furthermore, why would this prescient receptor be distributed throughout the body?
The most reasonable hypothesis was that this lock developed to turn on gene activity in response to some natural keys in the body.
Strong support for this suspicion emerged 3 years ago when Frank J. Gonzalez of the National Cancer Institute and his colleagues bred a strain of mice lacking the gene for the aryl hydrocarbon (Ah) receptor. These so-called knockout mice proved sickly, suggesting that the body needs the Ah receptor for healthy development (SN: 5/6/95, p. 277).
The finding failed, however, to shed light on the dioxin receptor’s natural function. Lately, a host of researchers have been focusing on this mystery. While uncovering many tantalizing leads, they all agree on only one thing: The receptor most likely has multiple roles in normal cell functioning, none of which anticipated its response to TCDD.
If any of the potential roles are confirmed, calling this protein the dioxin receptor may malign a molecule critical to good health. What’s more, the fact that so many compounds can bind to the receptor may, contrary to conventional wisdom, prove a good thing.
Over the past 2 decades, the list of compounds known to turn on the Ah receptor has been growing. While the source and potency of these hydrocarbons can differ widely, all have tended to share a similar molecular structure—two rings and some appendages that lie flat in a common plane.
Last October, toxicologists at the University of California, Davis reported the surprising finding that carbaryl, a widely used insecticide, also activates the Ah receptor (SN: 11/28/98, p. 344). What distinguishes carbaryl is its nontraditional structure, explains Michael S. Denison, who led the research. While it possesses a pair of rings, the side chain that dangles off it does not lie in the same plane as the rings.
Suspecting that many other dioxinlike compounds might have been missed over the years because they, like carbaryl, don’t resemble TCDD, Denison’s group has begun a new screening program. The team hopes to uncover natural keys for the Ah lock. The researchers have selected some 600 prospects—including many of the body’s signaling agents, such as prostaglandins, lipids, and neurotransmitters—and have begun running them through bioassays to see whether they fit the Ah receptor.
The effort “has turned up a couple of these [nonclassically shaped] compounds that look quite promising,” Denison says.
In the Aug. 18, 1998 Biochemistry, he and his coworkers described finding some Ah receptor binding by tryptamine and indole acetic acid, two breakdown products of the ubiquitous amino acid tryptophan. They also reported, in the Sept. 1, 1998 Archives of Biochemistry and Biophysics, a binding by bilirubin and biliverdin, chemicals that form during the natural destruction of aging red blood cells. However, all four compounds appear to bind too weakly to activate the receptor under normal conditions.
Thomas A. Gasiewicz, a toxicologist at the University of Rochester (N.Y.) Medical Center, notes, however, that these breakdown products “may serve some very important local function at very focal times in a particular tissue’s development.”
Why did the body develop a lock that many keys can open? The explanation that Denison favors is that the body employs the Ah receptor as the trigger for a general-purpose detoxifying system.
Most compounds that bind to this receptor, he explains, are actually destroyed by the actions they initiate. The gene activity that they trigger produces enzymes that rapidly break down the invading chemical.
“Because we get exposed to so many natural toxicants, I think this turns out to be a beneficial protection system,” Denison argues.
Unfortunately, he adds, the system fails when it encounters TCDD, because TCDD is so resistant to the hydrocarbon-degrading enzymes whose production is triggered by its binding to the Ah receptor. Not only does the chemical not disappear, but TCDD loiters in the lock, triggering a seemingly endless production of the destructive enzymes.
Christopher A. Bradfield of the University of Wisconsin-Madison’s McArdle Laboratory for Cancer Research calls Denison’s detoxification explanation for the Ah receptor “a compelling story.” At least seven genes that foster the metabolic breakdown of functional dioxins are directly activated via the Ah receptor, he notes. However, he adds, “that doesn’t make it the only story.”
Work in a number of labs, including Bradfield’s, suggests that the Ah-receptor protein traces evolutionarily to sensory systems in bacteria. Independent of any chemical-detoxification role, Bradfield says, the Ah receptor seems capable of turning on signals that can play a critical role in animals.
Lending support to this idea is the Ah receptor’s resemblance to some very primitive proteins, maintains J. Clark Lagarias, a plant biochemist at the University of California, Davis. The Ah-receptor protein contains a region characterized by a pattern of repeating amino acids, called the PAS domain. Such a pattern shows up in proteins carrying biochemical signals within organisms from purple bacteria to people.
When this domain encounters its environmental trigger—be it light or some chemical—its shape changes, “setting in motion a cascade of biochemical events” including gene activation, Lagarias explains.
Some light-sensitive proteins with PAS domains help microbes move toward the sunlight they need for energy, he notes. Other PAS-domain proteins signal the presence of toxicants that a bacterium should avoid or foods that it should seek.
Through billions of years of evolution, many of the organisms that use these proteins have become more complex, and so have their locks. With the Ah receptor in animals, Lagarias says, “we’re looking at a modern descendant of those bacterial locks.” They can now respond not just to outside environmental cues but to internally generated ones as well.
Agneta Rannug of the Karolinska Institute in Stockholm and Ulf Rannug at the University of Stockholm report that they have uncovered one such signaling role that occurs in animals, including humans.
Denison’s work focused on the Ah receptor’s ability to trigger any of several genes, such as CYP 1A1, that make enzymes to begin detoxifying poisons. The Rannugs note that shining light on skin cells grown in the test tube also increases the activity of CYP 1A1.
“What has never been explained,” Agneta Rannug says, is why light should have this effect. The Rannugs and their colleagues now suggest that this light exposure breaks tryptophan down into its metabolites, which then “can spread throughout the body,” triggering the Ah receptor—and, eventually, the CYP 1A1 gene’s production of those enzymes.
At an international dioxin conference in Stockholm last year, the Rannugs and Yu-Dan Wei of the University of Stockholm unveiled test-tube studies with human cells. Their data indicate that some metabolites of tryptophan, different from those that Denison has been studying, are potent activators of the Ah receptor. The Rannugs believe that these chemicals “belong to a new class of signaling substances that may function as chemical messengers of light,” compounds that may even play a role in the body’s biological clock.
Pharmacologist Stephen Safe of Texas A&M University in College Station has identified another possible role for the Ah receptor.
The presence of a dioxinlike key has always been thought necessary to activate the receptor. However, Safe’s studies now indicate that when the Ah-receptor lock is keyless, it will link up with another protein, known as Arnt. In partnership, these proteins can then “act to regulate expression of some genes” by binding to specific parts of their DNA, Safe has found.
In the June 1998 Nucleic Acids Research, Safe and his colleagues showed that cellular production of an enzyme known as cathepsin-D can triple or quadruple under the influence of such a partnership. Once some compound binds to the Ah receptor, this enhanced gene expression drops. This suggests, he told