Assessing, managing, and communicating chemical food risks

This Scientific Status Summary describes the trinity of risk-related factors: assessment, management, and communication. It addresses current issues surrounding this trinity of factors as they pertain to the determination, management, and acceptability of risks posed by chemicals in foods.

February 1, 2009

First published in Food Technology Magazine, May 1997. 51[5]: 85-92.

Foods may contain mixtures of thousands of individual chemicals that confer a variety of characteristics, including texture, flavor, color, and nutritive value. The potential human health risks from consumption of specific food chemicals are an issue of considerable societal importance, as can be seen from the results of consumer food safety surveys (Opinion Research Corporation, 1996) and the enormous number of food chemical regulations in the United States and abroad. Concerns have been raised about different types of food chemicals, including those added to food during production and processing (e.g., pesticide residues, hormones, antibiotics, and food additives), environmental substances that inadvertently contaminate foods, and naturally-occurring toxins (Francis, 1992; NRC, 1993a, 1996a; Winter et al., 1990).

Intense debate centers on the magnitude of risks posed by chemicals in food and society’s acceptance of such risks. A trinity of risk-related factors — assessment, management, and communication — forms the basis for decisions about chemicals requiring regulation and the types of regulation needed. The relationships of these factors are shown in Figure 1. Risk assessment, risk management, and risk communication represent dependent yet unique fields of study, and each may be characterized as rapidly evolving and controversial. This summary addresses the current issues surrounding this trinity of factors that pertain to the determination, management, and acceptability of risks posed by chemicals in foods.

Risk Assessment

The 16th-century Swiss physician Paracelsus (Philippus Aureolus Theophrastus Bombastus von Hohenheim) established the basis of the modern study of toxicology through his observation that “all substances are poisons; there is none which is not a poison. The right dose differentiates a poison and a remedy.” To paraphrase Paracelsus, it is the dose that makes the poison. Modern risk assessment practices rely upon this relationship — between the dose of a chemical and the toxicological response — to predict the probabilities, types, and magnitudes of human health effects anticipated from exposure to specific levels of chemicals from foods or other sources.

Paracelsus arrived at this toxicological principle after conducting human studies investigating the use of mercurial compounds for the treatment of syphilis. Four hundred years later, human epidemiology, which uses human data to predict potential health risks, remains the preferred method for estimating risks because it avoids assumptions about the reliability of animal toxicology data as a surrogate for human data. Epidemiological studies have been used to correlate human cancers with various factors; a famous review on this subject is that of Doll and Peto (1981), who estimated that approximately 35% (with a range of 10–70%) of U.S. cancer deaths are attributable to variation in diet. Further studies have identified macronutrients and excess calories as the greatest contributors to dietary cancer risk in the United States (NRC, 1989a).

Unfortunately, there are severe limitations in the use of epidemiology to predict risks from human exposure to chemicals in the diet. Ethical considerations, appropriately, do not allow the type of human toxicological studies performed in Paracelsus’ time. Health effects with low probabilities of occurrence are difficult to measure with statistical confidence using epidemiology, and human data are often difficult to obtain and may be inaccurate due to recall bias. A major problem with epidemiology, particularly with respect to determining the risks from exposure to food chemicals, is the difficulty of identifying control groups that have not been exposed to the chemical being studied. Some diseases, such as cancer, have long latency periods, making epidemiology ineffective for assessing risks from more recently introduced chemicals. Additionally, epidemiology requires some documented level of human exposure; as such, it is not useful for predicting potential risks from newly-developed chemicals prior to their release.

As a result of the limitations of human epidemiology studies, toxicological risk assessment is typically performed to determine the probabilities, types, and magnitudes of human health effects anticipated from exposure to specific levels of food chemicals. Toxicological risk assessment normally relies upon the results of long-term animal toxicology studies performed in a variety of animal species; results are extrapolated to predict potential human health effects.

While animal toxicology studies have been widely used in the past half-century, the field of risk assessment is relatively new and rapidly evolving. The first major guidelines for conducting risk assessments were published in 1983; they define the four major components of risk assessment as (1) hazard identification, (2) dose/response assessment, (3) exposure assessment, and (4) risk characterization (NRC, 1983).

Fig. 1 — Trinity of risk issues

Hazard Identification

Hazard identification is the process by which specific chemicals are causally linked to the production of particular health effects. The process involves gathering and evaluating toxicity data obtained from animal and human studies to determine the types of health effects produced and the conditions of exposure under which the effects may be produced. Examples of such health effects include neurotoxicity, birth defects, reproductive abnormalities, developmental effects, immunotoxicity, toxicity to the liver, kidney, or lung, and cancer. Hazard identification in itself does not assess risks but determines whether and to what degree it is scientifically correct to infer that health effects produced in one setting (e.g., animals) will occur in other settings (e.g., adequately exposed humans) (Environ, 1986). The U.S. Environmental Protection Agency (EPA) has developed a variety of guidelines for the toxicology testing of pesticides, while the U.S. Food and Drug Administration (FDA) has published comprehensive guidelines for the safety assessment of direct food and color additives (FDA, 1993).

A critical component of hazard identification is the determination of whether a chemical does or does not cause cancer. This distinction is important because risk assessment practices use different criteria for carcinogens (cancer-causing chemicals) and non-carcinogens. It is typically assumed, as will be discussed later, that non-carcinogenic effects may exhibit toxicity threshold doses while carcinogenic effects may lack thresholds; this distinction may have dramatic effects upon the relative risks calculated from low levels of exposure to carcinogens and non-carcinogens.

Cancer studies usually involve long-term rodent (e.g., rat and mouse) feeding studies, in which test animals are exposed to various doses of a chemical typically including a control (zero) dose, a medium dose, and a high dose. The dosing is continuous throughout the animals’ lifetimes. The determination of whether a chemical is a carcinogen is made statistically through comparisons of the results of the exposed animal groups with those of the control group (Winter, 1992).
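
The statistical comparison described above can be illustrated with a short example. The following Python sketch, using invented tumor counts rather than data from any actual bioassay, applies Fisher's exact test to compare tumor incidence in a hypothetical high-dose group against concurrent controls; this is one common test for such comparisons, not necessarily the one used in any particular study.

```python
# A minimal sketch of a dose-vs.-control tumor incidence comparison.
# All counts are hypothetical.
from scipy.stats import fisher_exact

control_tumors, control_n = 2, 50        # hypothetical control group
high_dose_tumors, high_dose_n = 11, 50   # hypothetical high-dose group

# 2 x 2 table: rows are groups, columns are [tumor-bearing, tumor-free]
table = [[high_dose_tumors, high_dose_n - high_dose_tumors],
         [control_tumors, control_n - control_tumors]]

odds_ratio, p_value = fisher_exact(table, alternative="greater")
print(f"one-sided p = {p_value:.4f}")  # a small p suggests a dose-related increase
```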

The procedures used to determine carcinogenicity are themselves highly controversial. In an effort to maximize the chance of detecting cancer in the animal studies, special strains of animals that may be more susceptible to developing cancer are often used; this practice raises questions about the validity of extrapolating such results to humans (Abelson, 1993). Additionally, while cancer itself is characterized by tumors that invade other tissues, benign (non-invasive) tumors are also usually considered evidence of potential carcinogenicity.

Another critical issue in hazard identification of carcinogens is the use of the Maximum Tolerated Dose, or MTD, which typically represents the highest dose administered to the test animals. The MTD is usually determined following the results of 90-day toxicity studies and is roughly described as the highest dose that does not alter the test animal’s longevity or well-being because of non-cancer effects (NRC, 1993b). It has been argued that many chemicals may induce cancer at the MTD through biological mechanisms that do not occur at lower doses. Such mechanisms include increased cell proliferation rates in response to high-dose toxicity (Ames and Gold, 1990); exposure at lower doses, where these mechanisms are not active, would not result in cancer. The controversy over the use of the MTD is reflected in the polarity of opinions of the 17-member panel of the National Research Council’s (NRC’s) Committee on Risk Assessment Methodology. The Committee recommended continued use of the MTD, but a six-member minority recommended abandoning its use in favor of more moderate doses that could provide greater understanding of the mechanisms of carcinogenesis (NRC, 1993b).

Dose/Response Assessment

Once a specific toxicological hazard has been identified, it is possible to predict the relationship between human exposure to the chemical and the probability of adverse effects. The procedures used to establish this dose/response relationship are governed by the type of hazard; non-carcinogenic and carcinogenic hazards are typically treated differently.

For non-carcinogenic hazards, it is usually believed that toxic effects will not be observed until a minimum, or threshold, dose is reached. The toxicity threshold is a theoretical concept; in practice it can only be bracketed by observing what effects occur at exposures just above and just below it. To estimate the threshold, toxicology studies generally try to identify two dose levels: one above the threshold at which effects are seen (i.e., the Lowest Observed Effect Level, or LOEL) and one presumably below the threshold at which no effects are seen (i.e., the No Observed Effect Level, or NOEL). How closely the NOEL and LOEL approximate the threshold cannot be determined, due to the limited number of dose levels used in toxicology studies and to statistical and biological constraints. As a prudent measure, the NOEL is generally used as a conservative estimate of the threshold (Environ, 1986).
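
As an illustration of how a NOEL and LOEL might be read off a study, the following Python sketch compares hypothetical dose groups against controls with t-tests; the data, dose levels, and significance criterion are all invented, and real studies weigh many endpoints and test choices.

```python
# A minimal sketch: LOEL = lowest dose with a statistically significant
# effect; NOEL = highest dose below the LOEL. Data are invented and a
# monotone dose/response is assumed.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
control = rng.normal(100, 8, size=10)    # e.g., organ weight, arbitrary units
groups = {                               # hypothetical doses, mg/kg body weight/day
    10: rng.normal(101, 8, size=10),
    50: rng.normal(99, 8, size=10),
    250: rng.normal(85, 8, size=10),     # an effect built in at the top dose
}

loel = None
for dose in sorted(groups):
    _, p = ttest_ind(groups[dose], control)
    if p < 0.05:
        loel = dose
        break

noel = max((d for d in groups if loel is None or d < loel), default=None)
print(f"LOEL = {loel} mg/kg/day, NOEL = {noel} mg/kg/day")
```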

It is critical to realize that NOEL values are derived from toxicology studies involving small, homogeneous groups of animals and, therefore, may not represent appropriate thresholds for large, nonhomogeneous human populations. To allow for differences in the animal-to-human extrapolation and for variability in human responses, uncertainty factors (also known as safety factors) are used; “acceptable” levels of human exposure are determined by dividing the NOELs by the uncertainty factors. The choice of uncertainty factors is governed by the availability of human data; the nature, severity, and chronicity of the effect; the quality of animal toxicology data; and the need to accommodate response variability among sensitive subgroups. Overall uncertainty factors may range from 1 to 10,000 (NRC, 1993a). The most common uncertainty factor is 100, which is rationalized as a 10-fold factor for species variation (assuming humans are 10 times more sensitive than the animals studied) multiplied by another 10-fold factor for human variation (assuming some humans are 10 times more sensitive than “average” humans).

Historically, the division of the NOEL by an uncertainty factor has produced a term known as the Acceptable Daily Intake (ADI), expressed as the amount of chemical exposure per amount of body weight per day. The EPA has recently replaced the term ADI with an analogous term, the toxicity reference dose, or RfD, thereby removing the inference of “acceptability,” which may carry the connotation of a non-scientific value judgment (Rodricks, 1992).
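
The arithmetic is simple enough to show directly. The following sketch, with a hypothetical NOEL, applies the common 100-fold combined uncertainty factor described above:

```python
# A minimal sketch of RfD (formerly ADI) derivation; the NOEL is invented.
noel = 5.0            # hypothetical NOEL from an animal study, mg/kg body weight/day
uf_interspecies = 10  # animals may underpredict human sensitivity
uf_intraspecies = 10  # some humans may be more sensitive than average

rfd = noel / (uf_interspecies * uf_intraspecies)
print(f"RfD = {rfd:.3f} mg/kg/day")  # 0.050 mg/kg/day
```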

As an alternative to the NOEL approach in dose/response assessment, the concept of the benchmark dose has been proposed (Crump, 1984). The benchmark dose is defined as the lower confidence limit for the dose corresponding to a specific increase in the response rate over the background level (NRC, 1993a). This approach provides a consistent basis for calculating the RfD, considers the dose/response model, and uses all available experimental data, in contrast to the NOEL approach, which ignores the shape of the dose/response curve. The benchmark dose approach can also be applied to the risk assessment of carcinogens.
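
A rough sketch of the benchmark dose idea follows. It fits a one-parameter one-hit model, P(d) = 1 - exp(-bd), to invented quantal data by maximum likelihood, solves for the dose producing a 10% extra response (BMD10), and uses a parametric bootstrap for the lower confidence limit (BMDL). Regulatory implementations use richer models and profile-likelihood limits, so this is illustrative only.

```python
# A minimal benchmark-dose sketch. Model, data, and confidence method are
# simplified assumptions, not a standard regulatory implementation.
import numpy as np
from scipy.optimize import minimize_scalar

doses = np.array([0.0, 1.0, 3.0, 10.0])  # hypothetical dose groups
n = np.array([50, 50, 50, 50])           # animals per group
x = np.array([0, 3, 8, 22])              # responders per group

def neg_log_lik(b, counts, n):
    """Binomial negative log-likelihood under P(d) = 1 - exp(-b d)."""
    p = np.clip(1.0 - np.exp(-b * doses), 1e-9, 1 - 1e-9)
    return -np.sum(counts * np.log(p) + (n - counts) * np.log(1.0 - p))

def fit_b(counts):
    return minimize_scalar(neg_log_lik, bounds=(1e-6, 10.0),
                           args=(counts, n), method="bounded").x

b_hat = fit_b(x)
bmd10 = -np.log(0.9) / b_hat   # dose giving 10% extra risk

# Parametric bootstrap: resample counts from the fitted model, refit,
# and take the 5th percentile as a one-sided 95% lower bound.
rng = np.random.default_rng(1)
p_hat = 1.0 - np.exp(-b_hat * doses)
boot = [-np.log(0.9) / fit_b(rng.binomial(n, p_hat)) for _ in range(500)]
bmdl10 = np.percentile(boot, 5)
print(f"BMD10 = {bmd10:.2f}, BMDL10 = {bmdl10:.2f} (same dose units)")
```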

The major distinction in the dose/response assessment of carcinogens and non-carcinogens involves the treatment of thresholds. For carcinogens, it is assumed that no threshold level of exposure exists; this implies that carcinogens may be hazardous in any amount. Limited scientific evidence in support of the lack of thresholds for carcinogens comes from ionizing radiation studies, although such studies involved relatively high levels of human radiation exposure, and it is commonly argued that cancer from radiation may itself proceed by a threshold mechanism (Goldman, 1996). Mechanistically, it is assumed that many carcinogens act as mutagens that cause direct damage to genes; it has been proposed, in what is often called the “one-hit” model of carcinogenesis, that exposure to a single molecule of a carcinogen could ultimately lead to a mutation that develops into cancer.

Typical human exposure to animal carcinogens is often several thousand times lower than the doses that produced tumors in experimental studies. Calculation of carcinogenic risks therefore requires the results of high-dose animal studies to be extrapolated to predict human risks at low exposures. A number of mathematical models have been developed for the dose/response assessment of carcinogens; each yields a value known as the cancer potency factor, often denoted Q* or Q1* (NRC, 1987). Cancer potency factors may vary widely depending upon the choice of the model and its assumptions. The most commonly used model is the linearized multistage model, which assumes that a cell that may be a target for a carcinogenic chemical goes through a specific number of different stages and that the probability of a “hit” on the cell, which leads to the development of cancer, is stage-specific. At the lowest levels of exposure, the relationship between exposure level and excess cancers is linear (Figure 2). Also commonly performed are statistical corrections that express cancer risks on the basis of the upper 95% confidence limit of the slope of the dose/response curve, adding an additional element of conservatism to the risks (Winter, 1992). Upper confidence limit cancer risks may be orders of magnitude greater than the “best” estimates obtained using the mathematical models.
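
The low-dose linearity can be seen numerically. In the one-hit form, excess risk is 1 - exp(-qd), which approaches q times d as d becomes small; the potency and doses below are invented:

```python
# A minimal sketch of low-dose linear extrapolation; q and doses are invented.
import numpy as np

q = 0.058                         # hypothetical potency, (mg/kg/day)^-1
animal_dose = 10.0                # high dose in a hypothetical animal study
human_dose = animal_dose / 5000   # exposure thousands of times lower

exact = 1.0 - np.exp(-q * human_dose)   # one-hit excess risk
linear = q * human_dose                 # linearized estimate
print(f"exact = {exact:.3e}, linear = {linear:.3e}")  # essentially identical
```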

Considerable discussion has focused on the fact that chemicals may, in fact, use several biological mechanisms to cause cancer (Gori, 1992). Genotoxic chemicals, which cause mutations, may indeed lack threshold doses. In other cases, such as the induction of thyroid tumors (Paynter et al., 1988) or the induction of tumors resulting from increased cell proliferation (Ames and Gold, 1990), it is argued that such carcinogenic effects are exerted through mechanisms consistent with a threshold hypothesis. Currently, however, the non-threshold linearized multistage model is the one most commonly applied to carcinogenic chemicals, regardless of the potential mechanisms of carcinogenicity.

Exposure Assessment

To apply the information derived from the hazard identification and dose/response assessment processes to risk assessment, an estimate of the likely amount of human exposure is necessary. Human exposure to chemicals in the diet is typically expressed as the product of the concentration of the chemical in various foods and the amounts of the foods consumed. Estimation of both factors requires several assumptions and involves considerable uncertainty.

In some cases, such as with food additives, the concentration of a chemical in food may be well known and relatively constant. For incidental chemicals in foods (e.g., pesticide residues, hormones, antibiotics, and environmental contaminants), however, the concentrations may vary dramatically from sample to sample, making an accurate estimate of the “actual” level of concentration difficult to obtain. The choice of assumptions used to predict the concentration levels may also be related to the availability of reliable monitoring data.

As an example, there are a variety of techniques to determine pesticide residue levels in foods. Such approaches range from the highly theoretical and conservative assumption that all pesticides are present at a predetermined level, typically the maximum allowable level, to more complex, data-intensive approaches based upon actual measurements of residue levels at the time the food is ready to be consumed. Also of use are a variety of intermediate techniques that consider factors such as residue results from field monitoring studies, effects of post-harvest factors on residue levels, and actual pesticide use data (Winter, 1992). Results from the various techniques may differ dramatically. For example, the NRC, in an effort to examine the statutory basis for establishing legal limits for pesticide residues in food, estimated exposure to several carcinogenic pesticides in foods by making several assumptions: (1) all registered pesticides were always used on all commodities for which they were registered, (2) all residue levels were present at the maximum allowable level (tolerance), and (3) residue levels were not reduced by post-harvest factors (NRC, 1987). Archibald and Winter (1989), using more realistic pesticide residue data obtained from the FDA’s Total Diet Study, in which residue levels were determined at the time the food was ready to be consumed, reported that the NRC exposure estimates were exaggerated by factors ranging from hundreds to tens of thousands of times.

The development of accurate food consumption estimates is challenging. Typically, eight general methods are used to assess food consumption: food disappearance data (correcting food production and import data for food exports, waste, storage, and non-human food use), household disappearance data, dietary histories, dietary frequencies, 24-hour recalls, food records, weighed intakes, and duplicate portions (Pennington, 1991). The method used depends upon the purposes of the study and the availability of resources. For dietary risk assessment purposes, the most common food consumption estimates are derived from the results of the 1977–78 and 1987–88 USDA Nationwide Food Consumption Surveys, in which three-day dietary records of individuals were collected by interview; the amount of each food item consumed and the individual’s weight were specified. Additional information concerning demographic and socioeconomic background, age, gender, ethnicity, and geographic location was tabulated. To assist in the exposure assessment process, standard recipes for composite foods are acquired and the percentages, by weight, of the various raw agricultural commodities present in the composite foods are determined (Alexander and Clayton, 1986). An apple pie, for example, could be converted into components of apples, sugar, flour, and shortening; multiplying estimates of chemical concentrations in each component by estimated consumption of each component yields an estimate of chemical exposure.
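
A small sketch of this recipe-decomposition arithmetic follows; the recipe fractions, residue levels, and consumption figures are invented for illustration.

```python
# A minimal sketch: exposure = sum over components of
# (grams of component eaten) x (residue concentration), scaled to body weight.
pie_recipe = {"apples": 0.55, "flour": 0.25, "sugar": 0.15, "shortening": 0.05}
residue_ppm = {"apples": 0.02, "flour": 0.0, "sugar": 0.0, "shortening": 0.0}

pie_eaten_g = 125.0      # hypothetical serving of apple pie, g/day
body_weight_kg = 60.0    # hypothetical consumer

# ppm = mg residue per kg food, so convert grams of food to kg
exposure_mg = sum((pie_eaten_g * frac / 1000.0) * residue_ppm[c]
                  for c, frac in pie_recipe.items())
print(f"exposure = {exposure_mg / body_weight_kg:.2e} mg/kg body weight/day")
```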

A critical step in the exposure assessment phase is to identify the “target” population exposed. It is widely accepted that dietary chemical exposures of different population subgroups may differ dramatically due to differences in food consumption patterns. Infants and children, for example, eat fewer kinds of foods than adults but consume more food on a per-body-weight basis; as a result, their exposure to pesticide residues is often greater than that of adults (NRC, 1993a). Because of these differences, exposure assessments often use the subgroup of highest exposure. In the case of acute (short-term) risk assessments, the diets of the most highly-exposed individuals — those representing the upper 90th, 95th, or 99th percentiles — are often considered rather than those of median consumers, and chemical concentrations are often considered at the highest detected levels rather than at median levels. A newer approach, involving statistical convolution of the distributions of food consumption and chemical concentration levels, allows a distribution of dietary exposures to be calculated in place of the simple point estimates described above. This approach can be modified to address exposure to multiple chemicals with similar toxicological properties, such as organophosphate pesticides (NRC, 1993a).
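
A minimal sketch of the distributional approach follows: consumption and residue concentration are sampled from assumed lognormal distributions (the parameters are invented), and upper percentiles of the resulting exposure distribution are read off rather than a single point estimate.

```python
# A minimal Monte Carlo "convolution" of consumption and concentration.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
consumption = rng.lognormal(mean=0.0, sigma=0.8, size=n)             # g food/kg bw/day
concentration = rng.lognormal(mean=np.log(0.01), sigma=1.0, size=n)  # mg/kg food

exposure = consumption * concentration / 1000.0  # mg/kg body weight/day
for pct in (50, 90, 95, 99):
    print(f"{pct}th percentile exposure: {np.percentile(exposure, pct):.2e}")
```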

Risk Characterization

The final stage of risk assessment is called risk characterization. This involves describing the nature, and often the magnitude, of risk and includes any uncertainties. An accurate description hinges on the accuracy of the results of the first three steps, which, again, involve identifying a specific hazard, estimating the amount of exposure, and predicting the likelihood of adverse effects based on exposure. For non-carcinogens, risk characterization typically relates the estimated exposure to the toxicity reference dose or acceptable daily intake. It is critical to understand that the RfD or ADI is not a threshold level that divides “safe” and “unsafe” human exposures and is, therefore, not a direct expression of risk. Risk is a probability; exposure at the RfD or ADI presents a “very low risk,” although “very low” is undefined (Rodricks, 1992). Qualitatively, risk increases at levels above the RfD or ADI, with greater exposure resulting in greater potential risk.
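
For non-carcinogens, the comparison is often reduced to a ratio of estimated exposure to the RfD (sometimes called a hazard quotient, a term not used in the text above); a sketch with invented numbers:

```python
# A minimal sketch of non-carcinogen risk characterization. As the text
# stresses, a ratio below 1 is not a bright line between safe and unsafe.
exposure = 0.004   # estimated intake, mg/kg body weight/day (hypothetical)
rfd = 0.05         # toxicity reference dose, mg/kg/day (hypothetical)

hazard_quotient = exposure / rfd
print(f"hazard quotient = {hazard_quotient:.2f}")  # 0.08, below the RfD
```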

For carcinogens, estimated cancer risks are obtained by multiplying exposure estimates by cancer potency factors. This practice often results in numerical cancer risks such as 1 × 10⁻⁶, defined as one excess cancer over background per million persons exposed. Care should be taken to avoid misinterpreting results through “body count” analyses, in which risk estimates are multiplied by population numbers to suggest “actual” human cancer cases. As an example of this practice, the Natural Resources Defense Council, in a widely publicized report, predicted that between 5,500 and 6,200 of the current population of U.S. preschoolers may eventually develop cancer solely as a result of their exposure before six years of age to eight pesticides or metabolites commonly found in fruits and vegetables (NRDC, 1989). This practice ignores the fact that considerable uncertainty is inherent in the process of carcinogen risk assessment and that the estimated cancer risk typically represents an upper bound, while the “best” estimate of cancer risk may be several orders of magnitude lower or even zero (Winter, 1992).
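
The carcinogen calculation, and the “body count” pitfall, can be made concrete with invented numbers:

```python
# A minimal sketch: lifetime cancer risk = exposure x potency factor.
exposure = 2.0e-5   # hypothetical exposure, mg/kg body weight/day
q_star = 0.05       # hypothetical upper-bound potency, (mg/kg/day)^-1

lifetime_risk = exposure * q_star
print(f"upper-bound lifetime risk = {lifetime_risk:.1e}")  # 1.0e-06

# Because this risk is an upper bound (the "best" estimate may be far lower,
# or zero), multiplying it by a population size does not predict actual cases.
```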

Optimally, risk characterization should include qualitative evaluations in addition to single numerical depictions of risk or ranges of numerical depictions. Such qualitative factors include the strength of the evidence that a chemical produces the particular effect from which the risk was estimated. The numerous uncertainties and assumptions inherent in the risk assessment process should also be discussed (Hoerger, 1990). The NRC’s Committee on Risk Characterization recently proposed that the notion of risk characterization should be reconceived to increase the likelihood of achieving sound and acceptable decisions (NRC, 1996b). Current methods of risk characterization were criticized for their inappropriate portrayal of scientific and technical information that may be of little use to decision makers and that could lead to unwise decisions. It has been proposed that the process of risk characterization be considerably expanded so that it be viewed as a product of both analysis and deliberation. Risk characterization should be directed toward informing choices and solving problems. In doing so, risk characterization would encourage participation and a broader understanding of the consequences among interested and affected parties.

From the preceding discussion, it is clear that our current practices of risk assessment are far from ideal and introduce considerable uncertainty in the final risk estimates. At the same time, however, risk assessment has provided a relatively consistent framework that allows for open discussion and debate on how to best regulate chemicals in foods. The accuracy of risk assessments will undoubtedly increase as improvements emerge in the areas of hazard, dose/response, and exposure assessment.

Risk Management

Measuring risks and deciding how they should be managed are two related, yet distinct activities. Risk assessment provides regulators with probabilistic risk information; regulators may make use of the risk information in determining what action, if any, should be taken to manage the risk in question. Risk management should be viewed as a process by which actions to control a particular risk are identified, selected, and implemented (Covello and Merkhofer, 1993). In addition to considering the results of risk assessments, risk management represents a regulatory decision-making process that involves consideration of political, social, economic, and technological information and requires the use of value judgments on issues such as the acceptability of risk and the reasonableness of the costs of control (NRC, 1983).

Fig. 2 — Predicting cancer risks at low exposure levels using linear extrapolation.

The proper interpretation of risk assessment information is crucial for the development of scientifically appropriate risk management policies (Winter, 1994). Inherent in the risk assessment process are large gaps in knowledge that require many choices to be made among competing models and assumptions; this introduces considerable uncertainty into the risk estimates that must be appreciated in the risk management process. Optimally, risk managers should be allowed the flexibility to make risk-related decisions using a “weight-of-evidence” approach that allows for the consideration of all available valid scientific data. It has been suggested, however, that risk assessments are often conducted using a “strength-of-evidence” approach, in which experiments demonstrating positive toxicological effects are given more weight than any number of negative experiments of equal quality (Gray, 1996). This may be particularly true in the case of carcinogen risk assessment, where conservative assumptions may exaggerate risks greatly and, therefore, may distort regulatory practices (Nichols and Zeckhauser, 1988).

Legislative mandate largely determines the flexibility afforded risk managers in interpreting the results of risk assessments and in considering other factors before making regulatory decisions. A variety of laws pertain to chemicals in food and water (Table 1); different risk management practices are prescribed for the different laws. As such, an acceptable level of risk for one type of food chemical may differ greatly from what is considered legally acceptable for another type of food chemical, and the use of practices such as risk balancing (comparing risks with benefits and/or economic impact) and technical feasibility may be allowed under some laws and not allowed under others. In essence, it is possible for the same chemical to be subject to different allowable levels of risk depending upon whether consumers eat it, breathe it, or drink it. While this may seem counterintuitive, it is critical to realize that each law has its own history and was enacted rather independently from the other laws through a complex interaction of industry, consumer, environmental, and government constituencies, each providing input for their agendas in the legislative process (Rodricks, 1992).

FDA regulates carcinogenic food additives on a zero-risk basis through the Federal Food, Drug, and Cosmetic Act (FFDCA). Within the FFDCA is the Delaney Clause, which states that no additive can be used in food if the additive has been shown to induce cancer in humans or animals; as such, carcinogenic food additives are not allowed regardless of the level of exposure. At the same time, the FDA has applied the concept of a negligible risk (defined as one excess cancer above background per million persons exposed using conservative risk assessment models) to veterinary drugs while taking negligible risk and risk balancing approaches to regulate specific carcinogenic food contaminants, such as PCBs in fish and aflatoxins in peanuts and other products (Rodricks, 1992). Non-carcinogenic food additives are allowed for use if the exposure estimates are below the ADI.

The regulation of pesticide residues in foods has been the subject of considerable scientific and societal interest for much of the past decade (NRC, 1987; NRC, 1993a). The major law regulating pesticides, the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA), provides the EPA with the authority to permit specific pesticide uses when it has been determined that the potential benefits of the uses of the pesticides outweigh their potential risks. Some benefits of pesticides may be directly related to health; an example concerns the use of a fungicide that may result in food residues yet may prevent the formation of naturally-occurring fungal toxins of potentially greater health risk. Substitution risks are also important, since cessation of the use of a specific pesticide that may leave food residues could lead to an increase in the use of less-effective pesticides, resulting in greater potential for environmental disruption and worker-safety concerns in addition to food residues. Another benefit considered by the EPA is the pesticide’s ability to produce an abundant, available, and affordable food supply by increasing crop yields and reducing production costs and consumer prices.

Until recently, some pesticide residues were also subject to the Delaney Clause of the FFDCA, which, in contrast to the risk/benefit balancing approach of FIFRA, existed as a strict zero-risk statute for potentially carcinogenic pesticide residues in processed foods. This inconsistency led to what has been called the “Delaney Paradox”: pesticide residues on raw agricultural commodities were not subject to the Delaney Clause and could be regulated on the basis of risks and benefits, while those in processed foods were regulated solely on a risk basis (NRC, 1987). To complicate matters, EPA, through its coordination policy, would not allow pesticides to be used on raw commodities if processed forms were impacted by the Delaney Clause (Winter, 1993).

In August 1996, new legislation, the Food Quality Protection Act, was enacted that repealed the Delaney Clause with respect to pesticide residues. The legislation limited risk balancing provisions concerning pesticide residues in foods and instituted a “reasonable certainty of no harm” standard that considers risks from threshold effects (exposure below the RfD) and from non-threshold effects (one excess cancer above background per million persons exposed, using conservative risk assessment models). In addition, regulatory practices were prescribed to consider the exposures and sensitivities of specific population subgroups (e.g., infants and children), other types of toxicological effects such as endocrine disruption, and multiple exposures to pesticides possessing similar toxicological hazards.

EPA regulates drinking water contaminants under provisions of the Safe Drinking Water Act. For non-carcinogenic drinking water contaminants, allowable levels are set to ensure that a fraction of the ADI is not exceeded. For carcinogenic drinking water contaminants, it has been recognized that zero risk is not technologically attainable. As an alternative, Maximum Contaminant Levels are established at the lowest technologically feasible levels; these typically result in lifetime cancer risks on the order of one in 100,000 or lower, although risks for some chemicals at the Maximum Contaminant Level exceed one in 100,000 (Rodricks, 1992).
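
One way the “fraction of the ADI” idea can translate into a water concentration is sketched below; the RfD, body weight, intake, and allotted fraction are all assumed values patterned on common regulatory defaults, not figures from the text.

```python
# A minimal sketch tying a non-carcinogen drinking water level to the RfD.
rfd = 0.05              # mg/kg body weight/day (hypothetical)
body_weight_kg = 70.0   # assumed adult body weight
water_l_per_day = 2.0   # assumed daily drinking water intake
water_fraction = 0.2    # assumed fraction of the RfD allotted to water

allowable_mg_per_l = rfd * body_weight_kg * water_fraction / water_l_per_day
print(f"allowable level = {allowable_mg_per_l:.2f} mg/L")  # 0.35 mg/L
```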

Risk Communication

Common methods for communicating food chemical risk information have been characterized as one-way and technocratic, in which government leaders, industry, or regulatory agencies provide risk assessment and risk management information with the aim that the public accept the risk messages being conveyed and act accordingly (Scherer, 1991). Since public opinion directly influences risk management decisions (Figure 1), this one-way communication process presents a barrier to effective public involvement in the decision-making process. Consistent with the need to increase public involvement in the risk management process, the NRC broadly defined risk communication as “an interactive process of exchange of information and opinion among individuals, groups, and institutions. It involves multiple messages about the nature of risk and other messages, not strictly about risk, that express concerns, opinions, or reactions to risk messages or to legal and institutional arrangements for risk management” (NRC, 1989b).

Table 1 — U.S. federal laws regulating chemicals in food and water

Effective risk communication requires communicators to recognize and overcome several obstacles that are rooted in the limitations of scientific risk assessment and in public understanding. Technical barriers to effective risk communication include the need to make assumptions and subjective judgments in the risk assessment process as well as the existence of disagreements among experts. From the standpoint of public understanding, it has been noted that public perceptions of risk are often not consistent with those of experts, that risk information may frighten and frustrate the public, that strong beliefs are hard to modify, and that naive views are easily manipulated by the method of presentation (Slovic, 1986).

The sheer complexity and uncertainty inherent in risk assessment provides a significant barrier to public understanding and appreciation of the magnitude of risks. One method of explaining risk information is to make comparisons to other risks; it appears that comparisons are more meaningful to the public than absolute numbers or probabilities, particularly in cases where the absolute values are quite small. As an example of this approach, Wilson and Crouch (1987) collected and analyzed risk data for a variety of commonplace human risks, including motor vehicle accidents, cigarette smoking, electrocution, alcohol consumption, drinking contaminated water, and mountaineering; annual risks of death and attendant uncertainties of measurement were provided to enable comparisons of the various risks. Ames et al. (1987) and Gold et al. (1992) ranked the potential human carcinogenic risks of exposures to a variety of environmental pollutants, synthetic pesticide residues, naturally-occurring toxins, and pharmaceutical products using an index that relates predicted human exposure levels for carcinogens to their carcinogenic potency in rodents. Their results indicated that the risks posed by residues of synthetic pesticides or environmental pollutants ranked low in comparison to risks of naturally-occurring carcinogens; this finding was recently supported in a report of the NRC (1996a).
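
The ranking index of Ames et al. (1987), known as HERP, expresses human exposure as a percentage of the rodent TD50 (the chronic dose rate estimated to halve the proportion of tumor-free animals over a standard lifespan). A sketch with invented numbers:

```python
# A minimal sketch of a HERP-style calculation; both values are invented.
human_dose = 0.0005   # hypothetical human exposure, mg/kg body weight/day
rodent_td50 = 100.0   # hypothetical rodent TD50, mg/kg body weight/day

herp_percent = 100.0 * human_dose / rodent_td50
print(f"HERP = {herp_percent:.6f}%")  # larger values rank as greater concern
```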

Despite the appeal of using risk comparisons to put the results of risk assessments in perspective, such practices have been subject to criticism. It has been pointed out that risk comparisons reduce risks to a single dimension, such as loss of life expectancy, while many risks are multi-dimensional (Roth et al., 1990), involving different types of morbidity and affecting specific population subgroups. Direct comparisons of different types of risk often ignore the different levels of uncertainty inherent in the risk estimates; some actuarial risks, such as the risks of death in motor vehicle or home accidents, have relatively low uncertainties, while those predicted for lifetime exposure to low levels of carcinogens in the diet are subject to significant uncertainty.

While risk comparisons are helpful in communicating the magnitude of risks, they are not, by themselves, adequate guides to personal or public decision policies because they ignore critical elements concerning public values and acceptability of different types of risks. The scientific process provides information concerning the risks, costs, and benefits of policy choices, but the ultimate management of the risks is an issue of social policy that requires decisions to be made on the basis of value choices (Groth, 1991).

According to Sandman (1987), risk may be defined as the summation of “hazard” (defined as the probability of an adverse outcome) and “outrage” (defined as other nonquantitative, nonbiological attributes). Sandman points out that the public pays too little attention to hazard while the scientific experts pay absolutely no attention to outrage, which explains the common differences between public and expert opinions concerning risks; some risks possessing high hazard but low outrage may be of less public concern than those with low hazard but high outrage. A variety of “outrage” factors have been identified; several are listed in Table 2. Major outrage factors include whether the risk is voluntary, whether the risks and benefits are equitably distributed, whether the risk is from natural or synthetic sources, whether the risk is subject to individual control, and whether the risk is familiar.

Several strategies for effective risk communication through acknowledgment of scientific and social risk factors have been developed (Covello et al., 1988; Groth, 1991; NRC, 1989b; Scherer, 1991). Given the enormous complexity in both the scientific and social risk arenas, it is critical that risk communicators and risk managers maintain reasonable expectations for the outcome of their respective efforts. The NRC, in its exhaustive study of ways to improve risk communication, concluded that improvements in risk communication will not resolve problems in risk management and end controversy, although poor risk communication may create problems. The NRC also concluded that risk managers must consider risk communication as an important and integral aspect of risk management, and that risk communication will, in some instances, change the risk management process itself (NRC, 1989b).

Conclusion

This summary has reviewed critical issues concerning assessing, managing, and communicating chemical food risks. Each component of this trinity of risk issues is complex and is shaped by scientific and public limitations, subjectivity, and a reliance on value judgments. Optimal policy decisions concerning chemical food safety risks require an appreciation of each of the risk components rather than myopic focus upon only the assessment, management, or communication phase. Relatively speaking, the study of each of the three risk components is in its infancy. Significant improvements are needed and expected as society strives to develop appropriate policies concerning food chemical risks.

References

Abelson, P.H. 1993. Health risk assessment. Reg. Toxicol. Pharmacol. 17: 219-223.

Alexander, B.V., and Clayton, C.A. 1986. Documentation of the food consumption files used in the tolerance assessment system. Research Triangle Institute, Research Triangle Park, NC.

Ames, B.N., and Gold, L.S. 1990. Too many rodent carcinogens: Mitogenesis increases mutagenesis. Science 249: 970-971.

Ames, B.N., Magaw, R., and Gold, L.S. 1987. Ranking possible carcinogenic hazards. Science 236: 271-280.

Archibald, S.O., and Winter, C.K. 1989. Pesticide residues and cancer risks. California Agriculture 43(6): 6-9.

Covello, V.T., and Merkhofer, M.W. 1993. “Risk Assessment Methods: Approaches for Assessing Health and Environmental Risks.” Plenum, New York.

Covello, V.T., Sandman, P.M., and Slovic, P. 1988. “Risk Communication, Risk Statistics, and Risk Comparisons: A Manual for Plant Managers.” Chemical Manufacturers Assn., Washington, D.C.

Crump, K.S. 1984. A new method for determining allowable daily intakes. Fundam. Appl. Toxicol. 4: 854-871.

Doll, R., and Peto, R. 1981. The causes of cancer: Quantitative estimates of avoidable risks of cancer in the United States today. J. Natl. Cancer Inst. 66: 1192-1308.

Environ. 1986. “Elements of Toxicology and Chemical Risk Assessment.” Environ Corp., Washington, D.C.

FDA. 1993. Toxicological Principles for the Safety Assessment of Direct Food Additives and Color Additives Used in Food: “Redbook II.” U.S. Food and Drug Administration, Washington, D.C.

Francis, F.J. 1992. “Food Safety: The Interpretation of Risk.” Council for Agricultural Science and Technology, Ames, IA.

Gold, L.S., Slone, T.H., Stern, B.R., Manley, N.B., and Ames, B.N. 1992. Rodent carcinogens: Setting priorities. Science 258: 261-265.

Goldman, M. 1996. Cancer risk of low-level exposure. Science 271: 1821-1822.

Gori, G.B. 1992. Cancer risk assessment: The science that is not. Reg. Toxicol. Pharmacol. 16: 10-20.

Gray, G.M. 1996. “Key Issues in Environmental Risk Comparisons: Removing Distortions and Insuring Fairness.” Reason Foundation, Los Angeles, CA.

Groth, E. 1991. Communicating with consumers about food safety and risk issues. Food Technol. 45: 248-253.

Hoerger, F.D. 1990. Presentation of risk assessments. Risk Analysis 10(3): 359-361.

Nichols, A.L., and Zeckhauser, R.J. 1988. The perils of prudence: How conservative risk assessments distort regulation. Reg. Toxicol. Pharmacol. 8: 61-75.

NRC. 1983. “Risk Assessment in the Federal Government: Managing the Process.” National Academy Press, National Research Council, Washington, D.C.

NRC. 1987. “Regulating Pesticides in Food: The Delaney Paradox.” National Academy Press, National Research Council, Washington, D.C.

NRC. 1989a. “Diet and Health: Implications for Reducing Chronic Disease Risk.” National Academy Press, National Research Council, Washington, D.C.

NRC. 1989b. “Improving Risk Communication.” National Academy Press, National Research Council, Washington, D.C.

NRC. 1993a. “Pesticides in the Diets of Infants and Children.” National Academy Press, National Research Council, Washington, D.C.

NRC. 1993b. “Issues in Risk Assessment.” National Academy Press, National Research Council, Washington, D.C.

NRC. 1996a. “Carcinogens and Anticarcinogens in the Human Diet.” National Academy Press, National Research Council, Washington, D.C.

NRC. 1996b. “Understanding Risk: Informing Decisions in a Democratic Society.” National Academy Press, National Research Council, Washington, D.C.

NRDC. 1989. “Intolerable Risk: Pesticides in Our Children’s Food.” Natural Resources Defense Council, New York.

Opinion Research Corporation. 1996. “Trends, Consumer Attitudes, and the Supermarket.” Food Marketing Institute, Washington, D.C.

Paynter, O.E., Burin, G.J., Jaeger, R.B., and Gregorio, C.A. 1988. Goitrogens and thyroid follicular cell neoplasia: Evidence for a threshold process. Reg. Toxicol. Pharmacol. 8: 102-119.

Pennington, J.A.T. 1991. Methods for obtaining food consumption information. Chpt. 1 in “Monitoring Dietary Intakes,” ed. I. Macdonald, pp. 3-8. Springer-Verlag, New York.

Rodricks, J.V. 1992. “Calculated Risks: The Toxicity and Human Health Risks of Chemicals in our Environment.” Cambridge University Press, New York.

Roth, E., Morgan, M.G., Fischhoff, B., Lave, L., and Bostrom, A. 1990. What do we know about making risk comparisons? Risk Analysis 10: 375-387.

Sandman, P.M. 1987. Risk communication: Facing public outrage. EPA Journal 13(9): 21-22.

Scherer, C.W. 1991. Strategies for communicating risks to the public. Food Technol. 45: 110-116.

Slovic, P. 1986. Informing and educating the public about risk. Risk Analysis 6(4): 403-415.

Wilson, R., and Crouch, E.A.C. 1987. Risk assessment and comparisons: An introduction. Science 236: 267-270.

Winter, C.K. 1992. Dietary pesticide risk assessment. Rev. Environ. Contam. Toxicol. 127: 23-67.

Winter, C.K. 1993. Pesticide residues and the Delaney Clause. Food Technol. 47: 81-86.

Winter, C.K. 1994. Lawmakers should recognize uncertainties in risk assessment. California Agriculture 48(1): 21-29.

Winter, C.K., Seiber, J.N., and Nuckton, C.A. 1990. “Chemicals in the Human Food Chain.” Van Nostrand Reinhold, New York.