Monsanto's decision not to market bioengineered wheat was another victory for the wrongheaded idea that risk is bad.
In recent months opponents of genetically modified food have insisted on the application of a popular, yet misguided, standard for regulating risk known as the Precautionary Principle. The principle demands that all steps be taken to avoid risks of harm even if cause-and-effect relationships are not established scientifically. And it says an activity's proponents bear the burden of proving the activity is safe. The principle's influence is growing. In March voters in Mendocino County, Calif., showed their enthusiasm for it by passing a ballot initiative to ban the planting of genetically modified crops. Bowing to pressure from those who invoke the principle, Monsanto announced in May that it will not market the world's first bioengineered wheat.
The sweep of the principle is potentially limitless. It could be applied to medicines and air pollution, global warming and cell phones, human cloning, arsenic in drinking water or exposure to sunlight and radon. First used as a legal principle in Sweden in the late 1960s, it spread across Europe in the following decades to the point where it now promises (or threatens) to become a general principle of international law.
The Precautionary Principle has a lot of intuitive appeal. Sensible regulators should not require unambiguous proof of harm. Inconclusive evidence can be enough. But the problem is that while promising safety, it can be both dangerous and incoherent. Risks are on all sides of social situations, and regulation itself creates risks. Because risks are everywhere, the Precautionary Principle forbids action, inaction and everything in between. It is therefore paralyzing; it bans the very steps that it mandates.
Consider the question of whether regulators should take a highly precautionary approach before allowing new medicines to be introduced. If a government takes such an approach, it will protect people against harm from inadequately tested drugs--which is good. But a precautionary approach will also prevent people from receiving benefits from those very drugs--which is bad. If we're really interested in human health, is it "precautionary" to require extensive premarketing testing, or instead to do the opposite? The Precautionary Principle doesn't say.
Sometimes regulation would violate the Precautionary Principle because it would give rise to substitute risks in the form of hazards that materialize, or are increased, as a result of regulation. A reluctance to use DDT, often justified by reference to the Precautionary Principle, is now having really bad effects in the Third World. DDT may well be the cheapest and most effective way of combating the mosquitoes that cause malaria. So shouldn't the Precautionary Principle call for use of, rather than restrictions on, DDT?
The genetic modification of food poses risks and thus might be regulated in the name of the Precautionary Principle. But an argument can be made that the principle forbids restricting these foods. Modified crops could provide food that is both cheap and healthy, food desperately needed in some poor countries.
In England the Precautionary Principle has been invoked on behalf of limitations on cell phones; some studies suggest the use of cell phones might be associated with an increase in brain cancer. But cell phones are often used to call for police and medical assistance in the event of emergencies. Would it truly be precautionary to restrict their use?
We can go much further. A great deal of evidence suggests that expensive regulations can have harmful effects on life and health simply because they reduce income and employment. If regulatory policies increase the cost of producing goods, they will lower living standards. Poverty really isn't good for your health. It follows that a multibillion-dollar expenditure for "precaution" has--in the worst case--serious and harmful effects on human well-being. Such expenditures are therefore inconsistent with the Precautionary Principle!
Of course sensible societies take precautions, but they do so after balancing all of the relevant risks, not simply a few. Ordinary people are unlikely to do well if they always think, "Better safe than sorry." What is true for ordinary people is true for societies, too.
Terence Corcoran writes that one of the more painful experiences in business journalism is to watch an unprepared corporation or industry stumble through an encounter with junk science. Whether it's the oil industry's pathetic response to Kyoto, biotech firms capitulating to the genetically modified food scare, Coke and McDonald's ineptly coping with the latest health alarm or chemical companies grappling with a new pesticide study, the results are uniformly awful.
Corcoran says that in some cases, of course, there may be nothing a company or industry can do to neutralize a public relations disaster. A good example is the Canadian chemical industry's attempt to deal with a recent report from the Ontario College of Family Physicians on the alleged risks of pesticide use. The OCFP study landed in Toronto shortly before a city council vote on whether to ban the use of pesticides. It became an international story, based on the sensational conclusion that "exposure to all the commonly used pesticides -- phenoxyherbicides, organophosphates, carbamates and pyrethrins -- has shown positive association with adverse health effects." No pesticide is safe, it seemed to say.
The industry tried to respond. It held a news conference that few journalists attended and brought in industry officials to attempt to clear up risk concerns and answer questions. They couldn't. How could they? The OCFP report, called a Pesticides Literature Review, was a small masterpiece of junk science. Junk science occurs when facts are distorted, risks are exaggerated and science is warped by politics and ideology. Even the most knowledgeable specialist in the field could not hope to respond to something like the OCFP study without extensive research.
And even if some counter-research could have been quickly assembled, the industry would still be stuck with the task of proving a negative: demonstrating, in the face of a study that claims otherwise, that its products do not kill people when properly used. For the concluding item in our Junk Science series, we commissioned noted U.S. toxicologist Frank Dost to review the OCFP report. He questions its science and the fact that it was released without review, and says it should not be used to set policy.
But how many people, including journalists, would have the patience to come to grips with a serious look at the OCFP report? There are no sensational conclusions about deaths and cancers and risks. Mr. Dost, in his brief (by science report standards) commentary, finds the report flawed. It would take six months to a year to unravel the whole OCFP review.
Meanwhile, the industry is boxed in, unable to respond to a piece of junk science. While the Sierra Club and other groups churn out claim after claim through full-time professional agitators whose sole function in life is to flim-flam the media, industry officials cower in their offices. Corcoran says he is told that one of the world's biggest chemical companies has only one person in all of Canada assigned to manage such public relations events. Most companies pass the buck over to the industry associations -- collectives where nobody is responsible.
Industries are also, however, betrayed by the government agencies they came to depend on for regulation. Most businesses assume if they meet government safety rules and standards, they are protected. But governments have become notoriously unwilling to stand by their own science and regulations. Health Canada's pesticide scientists were muzzled throughout the OCFP event, leaving the industry in the cold.
So who's around to stand up for good science? Nobody has any incentive to take on the professional purveyors of health and environmental scares. Politicians and bureaucrats will flow with just about any claim that happens to capture public attention. More often, they will help generate public alarm --Kyoto, obesity, hormones -- if it helps expand their work and careers.
Businesses, unfortunately, also have incentives to capitulate to the latest junk science fad, even though the costs to the economy and the integrity of our system are expanding. Elizabeth Whelan, president of the American Council on Science and Health, cites the case of Procter & Gamble and Estée Lauder, who have agreed to reformulate two of their cosmetic lines to remove a chemical that is perfectly safe. "Great PR. Bad precedent," she says, noting the exercise is a waste of resources and a distraction.
Gerber and H.J. Heinz are reformulating their baby foods to avoid genetically modified content, a decision the Hoover Institution's Henry Miller yesterday on this page called "cowardly capitalism" and "selling out" the highest interests of the company, its commitment to making a superior product, and its customers.
This kind of expedience can't go on forever. The CEO of Coca-Cola, E. Neville Isdell, recently summed up corporate attitudes toward science scares: "We can't answer to what's out there philosophically. We have to respond to what consumers want." But if what consumers want is based on a growing culture of junk science, where ideologues have taken over and facts and risks are blown out of reality, business -- along with the rest of us -- is on a treadmill to some kind of scientific and bureaucratic hell.
Frank Dost writes in this op-ed that he has been asked by the editor to examine and comment on the recent Pesticides Literature Review prepared by the Ontario College of Family Physicians. Dost says that he understands the review, published on April 23, became an important element in the City of Toronto's debate over whether to ban the use of pesticides. In Dost's view, the OCFP report fails several important science tests and should not be used as a guide to setting policy on pesticide use. Indeed, he questions whether the OCFP paper should have been made public, given that it was not subject to rigorous external peer scrutiny.
Dost says that the OCFP report reviews a portion of the pesticide epidemiology literature. Epidemiology is the science of searching for possible connections between disease and environmental factors, in this case pesticide exposure. By definition, epidemiology cannot show a cause-and-effect relationship.
Dost's first and continuing impression of this review was that the authors see all pesticides as a common group of chemicals for which an effect observed for one may be attributed to any other. If chemical A is carcinogenic, then so is chemical B. Numerous specific pesticides that are clearly established as being non-carcinogenic by international regulatory action are thrown into this amorphous collection for which an alarm is sounded about cancer. This philosophy is the same as treating all therapeutic drugs as just "medicine." By this thinking, the nature of the chemical, its behavior in the body, the specific intended effect on the patient and the potential toxicity (side effects) all become unimportant, and the practice of medicine becomes very simple. The problem with such an approach is obvious.
All chemicals are different. An evaluation of potential risk of use of any given substance is possible only if the entire spectrum of its unique chemical, physical and biological properties is considered. In the OCFP report, the only information considered is possible associations between some indirect estimate of "pesticide" contact and disease. As important as this connection is for specific chemicals, it is usually the least precise information about chemical effects. The authors comment on the need for "linkage with animal studies, clinical case literature and other sources of information on particular pesticide use and toxicity." But then they ignore the extensive body of data describing individual chemicals, particularly their ability to affect genetic material or to cause cancer or reproductive effects or other toxicity.
In short, there is no recognition of national, international and state regulatory actions. These regulatory judgments -- by the U.S. Environmental Protection Agency, Health Canada and others -- are not made by a few people over coffee, but depend on analyses of effects down to the genetic level and over a lifetime, with several levels of expert review. Would the OCFP have the same dismissive attitude toward the enormous body of non-human study that enables preclinical and clinical trials of the therapeutic drugs prescribed by physicians?
Epidemiology depends on a core precept: Findings must meet standards of plausibility. Plausibility can be determined only by examination of all available information about each chemical of concern. Do the findings make sense? The OCFP review fails that test. First, it lumps grossly unlike chemicals together. Second, it fails to consider the mass of fundamental biological and environmental data available for each substance.
Along with the overriding principle of plausibility, the review also did not attend to several other vital precepts in epidemiology. I will briefly discuss only one: strength of association. This means that an estimate of added risk must be supported by dependable diagnoses, good exposure information, identification of other possible contributors to the risk, a large enough population of affected and presumably unaffected people (controls) and other factors. With all these qualifications, the estimated risk must be high enough relative to normal background risk that it is likely to be real. Depending on the kind of study, normal risk for a given effect is represented as one (or 100), and if the estimated risk in the study population is more than one, the initial implication is that risk has been increased.
It is not that simple, however. Low numbers of cases, unreliable exposure estimates, exposure to other carcinogens and other factors make such numbers unreliable, as does the normally high background risk. The total background risk of cancer in North America is one case in the lifetime of every three or four people. The statistical range into which the risk estimate will fall is the confidence interval (CI). Usually the estimate (the odds ratio) is stated to have a 95% probability of falling somewhere between the two extremes of the CI. If the lower limit of the CI is one (normal risk) or less, the estimate is usually considered not to represent an increased risk. The lower value can only approach zero, but the upper end has no limit. Reliability decreases as that gap widens. The OCFP authors have included numerous studies showing little strength of association to support their overall contention.
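To make the arithmetic behind this concrete, here is a minimal sketch of how an odds ratio and its 95% CI are conventionally computed from a 2x2 case-control table, using the standard log-odds method. The numbers are purely illustrative inventions, not figures from the OCFP report or any study it cites:

```python
import math

# Hypothetical 2x2 table (illustrative numbers only):
# a = exposed cases, b = exposed controls,
# c = unexposed cases, d = unexposed controls
a, b, c, d = 20, 80, 15, 85

odds_ratio = (a * d) / (b * c)  # point estimate of added risk

# Standard error of the log odds ratio, then the 95% CI on the odds scale
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

# Here the point estimate exceeds one, but the lower CI limit falls
# below one, so the apparent excess risk is not statistically
# distinguishable from normal background risk.
print(f"OR = {odds_ratio:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

In this example the odds ratio is about 1.42, yet the interval spans one, which is exactly the pattern Dost describes in many of the studies the OCFP review treats as positive findings.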
The OCFP tendency to find meaning where it is questionable can be illustrated with a couple of references, from among hundreds in the study. One is a reference to a 1999 report by Hardell and Eriksson of Sweden on a large number of herbicides and other substances, suggesting that several herbicides cause non-Hodgkin's lymphoma. However, the estimated risks are low, and the low ends of most CIs are one or less, indicating non-significant risk. The positive associations appear in groups so small that a single misdiagnosis could change the result. Furthermore, almost half of the respondents were next of kin, trying to recall activities of someone else 10 or 20 years earlier. In two lists of 33 substances in the Hardell paper, almost all odds ratios (risk estimates) were similar. Such similarity in a large group of widely divergent kinds of chemicals is biologically implausible, unless none has a significant effect. Another example: A 2001 paper by T.E. Arbuckle and others was stated to have "revealed an association between phenoxy herbicides and spontaneous abortions" and may "possibly point" to critical exposure "when the fetus may be more vulnerable to toxic exposures." In fact, the authors of the Arbuckle paper looked at exposure to nine different chemicals and stated they intended only to develop hypotheses for further work. They also acknowledged the difficulties with such studies. Most of the findings are below or just at statistical significance and therefore similar, which, again, is improbable unless there is no effect.
The OCFP literature search strategy is also questionable. The primary search term was "pesticides," with various modifiers, but apparently no specific chemicals were searched. There is no rationale given for ignoring the useful research conducted before 1990; certainly there was no great leap forward in methods at that time. I won't attempt to list the more recent work they missed. It is useful, however, to comment on their rejection of two studies that were funded by the "chemical industry" because of presumed bias. Perhaps they are not aware that such work by industry is subject to scientific and political scrutiny from every quarter. Flaws in industry studies carry a much higher penalty than do, say, the flaws in the OCFP report. The willingness to exclude valid studies on that basis itself suggests a bias that can influence the way the authors interpret research. Perhaps this is expressed in their advocacy of political action to address "this public health issue" and their recommendation to avoid pesticides on purchased foods. It is likely that they are not aware that almost all produce actually meets organic standards for pesticide residues.
Reviews in the OCFP paper also appear to be selective toward positive findings. An excellent example is the study by Fleming, who found an association between pesticide use and prostate cancer. OCFP didn't mention that the same study found no associations with cancers of the lung, breast, pancreas, kidney or colon, or with leukemia and non-Hodgkin's lymphoma. Fleming also mentioned that the applicators were consistently healthier than the general population. One of the stronger claims by OCFP in this review was the potential for pesticides to cause prostate cancer. It was based on a portion of the National Cancer Institute Agricultural Health Study, including more than 55,000 applicators.
However, the positive association occurred only with use of methyl bromide, a fumigant rarely used in Canada and scheduled for cancellation in North America. OCFP didn't consider it important to note that cancer incidence from all sites was significantly less than expected, or that for 35 other pesticides no association with prostate cancer was observed.
Selectivity also extends to exposure criteria. In many cases, this review comments that exposure data are inadequate or questionable, but then goes forward with a statement about evident outcomes.
Dost says that he has to wonder if philosophical issues may have caused good science to be pushed aside. It is not inappropriate that the authors use Rachel Carson's Silent Spring as a backdrop for their report, but almost no present-day pesticides fit her concerns. Carson was something of a visionary, and her rationale made sense at the time. Unfortunately, Colborn's Our Stolen Future is also considered fundamental. Amidst the author's frequent self-congratulation, this book also lumps all pesticides together, speaking of endocrine disruption and other effects, and only at the end do we find that she is really talking about persistent chlorinated hydrocarbons. The book doesn't belong on the same table with Carson's. (The authors of the OCFP report were wise enough to ignore these substances.)
These brief comments are not a thorough review of the OCFP report but are sufficient to seriously question its value. As it stands, this document does not describe the health impact of pesticides. It should not supplant the judgment of Health Canada on regulatory policy issues.