Risk Analysis: An International Journal
Comparative Probabilistic Assessment of Occupational Pesticide Exposures Based on Regulatory Assessments
Implementation of probabilistic analyses in exposure assessment can provide valuable insight into the risks of those at the extremes of population distributions, including more vulnerable or sensitive subgroups. Incorporation of these analyses into current regulatory methods for occupational pesticide exposure is enabled by the exposure data sets and associated data currently used in the risk assessment approach of the Environmental Protection Agency (EPA). Monte Carlo simulations were performed on exposure measurements from the Agricultural Handler Exposure Database and the Pesticide Handler Exposure Database along with data from the Exposure Factors Handbook and other sources to calculate exposure rates for three different neurotoxic compounds (azinphos methyl, acetamiprid, emamectin benzoate) across four pesticide-handling scenarios. Probabilistic estimates of doses were compared with the no observable effect levels used in the EPA occupational risk assessments. For all three compounds, a fraction of workers was predicted to exceed the level of concern: 54% for azinphos methyl, 5% for acetamiprid, and 20% for emamectin benzoate. This finding has implications for pesticide risk assessment and offers an alternative procedure that may be more protective of those at the extremes of exposure than the current approach.
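A minimal sketch of the kind of Monte Carlo exposure calculation described above, assuming hypothetical lognormal distributions for unit exposure and amount handled; the parameter values, NOEL, and target margin of exposure below are illustrative placeholders, not values from AHED/PHED, the Exposure Factors Handbook, or any EPA assessment:

```python
import math
import random

random.seed(42)

# Hypothetical placeholder parameters -- NOT regulatory values.
UNIT_EXPOSURE_GM, UNIT_EXPOSURE_GSD = 0.004, 3.0    # mg a.i. exposure per lb handled
AMOUNT_HANDLED_GM, AMOUNT_HANDLED_GSD = 200.0, 1.5  # lb a.i. handled per day
BODY_WEIGHT_MEAN, BODY_WEIGHT_SD = 80.0, 12.0       # kg
NOEL = 1.0          # mg/kg/day, illustrative
TARGET_MOE = 100    # margin-of-exposure benchmark

def sample_lognormal(gm, gsd):
    """Draw from a lognormal given geometric mean and geometric SD."""
    return math.exp(random.gauss(math.log(gm), math.log(gsd)))

def fraction_exceeding(n=50_000):
    """Fraction of simulated handlers whose margin of exposure (NOEL/dose)
    falls below the target, i.e., who exceed the level of concern."""
    exceed = 0
    for _ in range(n):
        dose = (sample_lognormal(UNIT_EXPOSURE_GM, UNIT_EXPOSURE_GSD)
                * sample_lognormal(AMOUNT_HANDLED_GM, AMOUNT_HANDLED_GSD)
                / max(random.gauss(BODY_WEIGHT_MEAN, BODY_WEIGHT_SD), 40.0))
        if NOEL / dose < TARGET_MOE:
            exceed += 1
    return exceed / n

frac = fraction_exceeding()
assert 0.0 < frac < 1.0
```

Substituting the AHED/PHED-derived distributions for these placeholders would reproduce the type of comparison against the level of concern reported above.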
A Comparison of the Bow-Tie and STAMP Approaches to Reduce the Risk of Surgical Instrument Retention
Although relatively rare, surgical instrument retention inside a patient following central venous catheterization still presents a significant risk. The research presented here compared two approaches to help reduce retention risk: Bow-Tie Analysis and Systems-Theoretic Accident Model and Processes. Each method was undertaken separately and then the results of the two approaches were compared and combined. Both approaches produced beneficial results that added to existing domain knowledge, and a combination of the two methods was found to be beneficial. For example, the Bow-Tie Analysis gave an overview of which activities keep controls working and who is responsible for each control, and the Systems-Theoretic Accident Model and Processes revealed the safety constraints that were not enforced by the supervisor of the controlled process. Such two-way feedback between both methods is potentially helpful for improving patient safety. Further methodology ideas to minimize surgical instrument retention risks are also described.
The Cancer Risk Associated with Residential Exposure to Soil Containing Radioactive Coal Combustion Residuals
Coal combustion residuals (CCRs) are composed of various constituents, including radioactive materials. The objective of this study was to utilize methodology on radionuclide risk assessment from the Environmental Protection Agency (EPA) to estimate the potential cancer risks associated with residential exposure to CCR-containing soil. We evaluated potential radionuclide exposure via soil ingestion, inhalation of soil particulates, and external exposure to ionizing radiation using published CCR radioactivity values for 232Th, 228Ra, 238U, and 226Ra from the Appalachia, Illinois, and Powder River coal basins. Mean and upper-bound cancer risks were estimated individually for each radionuclide, exposure pathway, and coal basin. For each radionuclide at each coal basin, external exposure to ionizing radiation contributed the most to the overall risk estimate, followed by incidental ingestion of soil and inhalation of soil particulates. The mean cancer risks by route of exposure were 2.01 × 10−6 (ingestion), 6.80 × 10−9 (inhalation), and 3.66 × 10−5 (external), while the upper-bound cancer risks were 3.70 × 10−6 (ingestion), 1.18 × 10−8 (inhalation), and 6.15 × 10−5 (external), using summed radionuclide-specific data from all locations. The upper-bound cancer risk from all routes of exposure was 6.52 × 10−5. These estimated cancer risks were within the EPA's acceptable cancer risk range of 1 × 10−6 to 1 × 10−4. If the CCR radioactivity values used in this analysis are generally representative of CCR waste streams, then our findings suggest that CCRs would not be expected to pose a significant radiological risk to residents living in areas where contact with CCR-containing soils might occur.
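The all-route upper-bound risk reported above is the sum of the route-specific upper-bound risks, which can be checked directly (values taken from the abstract):

```python
# Upper-bound cancer risks by exposure route (values from the text above)
ingestion = 3.70e-6
inhalation = 1.18e-8
external = 6.15e-5

total = ingestion + inhalation + external   # all-route upper bound
assert abs(total - 6.52e-5) < 1e-7          # matches the reported 6.52e-5

# The total falls within EPA's acceptable range of 1e-6 to 1e-4
assert 1e-6 <= total <= 1e-4
```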
Toxoplasmosis is a cosmopolitan disease with a broad range of hosts, including humans and several wild and domestic animals. Human infection is mostly acquired through the consumption of contaminated food, and pork has been recognized as one of the major sources of transmission. There are, however, fundamental differences between countries; the present study therefore specifically aims to evaluate the exposure of the Italian population to Toxoplasma gondii through the ingestion of several types of pork products habitually consumed in Italy, and to estimate the annual number of human infections within two subgroups of the population. A quantitative risk assessment model was built for this purpose and enriched with new elements relative to similar risk assessments in order to enhance its accuracy. Sensitivity analysis and two alternative scenarios were implemented to identify the factors with the highest impact on risk and to simulate different plausible conditions, respectively. The estimated overall average number of new infections per year is 12,513 among adults and 92 among pregnant women. The baseline model showed that almost all these infections are associated with the consumption of fresh meat cuts and preparations (mean risk of infection varied between 4.5 × 10−5 and 5.5 × 10−5) and only a small percentage is due to fermented sausages/salami. In contrast, salt-cured meat products seem to pose a minor risk, although further investigation is needed to clarify aspects that remain unclear. Among all the variables considered, cooking temperature and the concentration of bradyzoites in muscle had the greatest impact on risk.
Groundwater leakage into subsurface constructions can cause reduction of pore pressure and subsidence in clay deposits, even at large distances from the location of the construction. The potential cost of damage is substantial, particularly in urban areas. The large-scale process also implies heterogeneous soil conditions that cannot be described in complete detail, which creates a need to estimate the uncertainty of subsidence with probabilistic methods. In this study, the risk of subsidence is estimated by coupling two probabilistic models: a geostatistics-based soil stratification model and a subsidence model. Statistical analyses of stratification and soil properties are inputs to the models. The results include spatially explicit probabilistic estimates of subsidence magnitude and sensitivities of included model parameters. From these, areas with significant risk of subsidence are distinguished from low-risk areas. The efficiency and usefulness of this modeling approach as a tool for communication with stakeholders, decision support for prioritization of risk-reducing measures, and identification of the need for further investigation and monitoring are demonstrated with a case study of a planned tunnel in Stockholm.
Evacuation planning and management involves estimating travel demand in the event that such action is required. This is usually done as a function of people's decision to evacuate, which we show is strongly linked to their risk awareness. We use an empirical data set on tsunami evacuation behavior to demonstrate that risk recognition is not synonymous with objective risk, but is instead determined by a combination of factors including risk education, information, and sociodemographics, and that it changes dynamically over time. Based on these findings, we formulate an ordered logit model to describe risk recognition, combined with a latent class model to describe evacuation choices. Our proposed evacuation choice model with a risk-recognition class can quantitatively evaluate the influence of disaster mitigation measures, risk education, and risk information. The results of the risk recognition model show that risk information has the greater impact on people's recognition of their own high risk. The results of the evacuation choice model show that people who are unaware of their risk take longer to evacuate.
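An ordered logit model of the kind described above maps a continuous risk-recognition propensity into ordered categories (e.g., low/medium/high recognition) through estimated cutpoints. A minimal sketch of the mechanics; the cutpoints and linear predictor below are hypothetical illustrations, not the study's estimates:

```python
import math

def sigmoid(z):
    """Logistic function."""
    return 1.0 / (1.0 + math.exp(-z))

def ordered_logit_probs(x_beta, cutpoints):
    """Category probabilities under an ordered logit model:
    P(y <= k) = sigmoid(tau_k - x*beta) for each cutpoint tau_k."""
    cum = [sigmoid(t - x_beta) for t in cutpoints] + [1.0]
    return [cum[0]] + [cum[i] - cum[i - 1] for i in range(1, len(cum))]

# Hypothetical respondent: linear predictor 0.8, two cutpoints giving
# three ordered risk-recognition categories.
p = ordered_logit_probs(x_beta=0.8, cutpoints=[-0.5, 1.2])
assert abs(sum(p) - 1.0) < 1e-9 and all(q > 0 for q in p)
```

A larger linear predictor (e.g., from more risk information in the covariates) shifts probability mass toward the higher recognition categories, which is the mechanism behind the finding reported above.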
Flooding in urban areas during heavy rainfall, often characterized by short duration and high-intensity events, is known as “surface water flooding.” Analyzing surface water flood risk is complex as it requires understanding of biophysical and human factors, such as the localized scale and nature of heavy precipitation events, characteristics of the urban area affected (including detailed topography and drainage networks), and the spatial distribution of economic and social vulnerability. Climate change is recognized as having the potential to enhance the intensity and frequency of heavy rainfall events. This study develops a methodology to link high spatial resolution probabilistic projections of hourly precipitation with detailed surface water flood depth maps and characterization of urban vulnerability to estimate surface water flood risk. It incorporates probabilistic information on the range of uncertainties in future precipitation in a changing climate. The method is applied to a case study of Greater London and highlights that both the frequency and spatial extent of surface water flood events are set to increase under future climate change. The expected annual damage from surface water flooding is estimated to be £171 million, £343 million, and £390 million/year under the baseline, 2030 high, and 2050 high climate change scenarios, respectively.
Recovery of interdependent infrastructure networks in the presence of catastrophic failure is crucial to the economy and welfare of society. Recently, centralized methods have been developed to address optimal resource allocation in postdisaster recovery scenarios of interdependent infrastructure systems that minimize total cost. In real-world systems, however, multiple independent, possibly noncooperative, utility network controllers are responsible for making recovery decisions, resulting in suboptimal decentralized processes. With the goal of minimizing recovery cost, a best-case decentralized model allows controllers to develop a full recovery plan and negotiate until all parties are satisfied (an equilibrium is reached). Such a model is computationally intensive for planning and negotiating, and time is a crucial resource in postdisaster recovery scenarios. Furthermore, in this work, we prove this best-case decentralized negotiation process could continue indefinitely under certain conditions. Accounting for network controllers' urgency in repairing their system, we propose an ad hoc sequential game-theoretic model of interdependent infrastructure network recovery represented as a discrete time noncooperative game between network controllers that is guaranteed to converge to an equilibrium. We further reduce the computation time needed to find a solution by applying a best-response heuristic and prove bounds on ε-Nash equilibrium, where ε depends on problem inputs. We compare best-case and ad hoc models on an empirical interdependent infrastructure network in the presence of simulated earthquakes to demonstrate the extent of the tradeoff between optimality and computational efficiency. Our method provides a foundation for modeling sociotechnical systems in a way that mirrors restoration processes in practice.
This article presents a public value measure that can be used to aid executives in the public sector to better assess policy decisions and maximize value to the American people. Using Transportation Security Administration (TSA) programs as an example, we first identify the basic components of public value. We then propose a public value account to quantify the outcomes of various risk scenarios, and we determine the certain equivalent of several important TSA programs. We illustrate how this proposed measure can quantify the effects of two main challenges that government organizations face when conducting enterprise risk management: (1) short-term versus long-term incentives and (2) avoiding potential negative consequences even if they occur with low probability. Finally, we illustrate how this measure enables the use of various tools from decision analysis to be applied in government settings, such as stochastic dominance arguments and certain equivalent calculations. Regarding the TSA case study, our analysis demonstrates the value of continued expansion of the TSA trusted traveler initiative and increasing the background vetting for passengers who are afforded expedited security screening.
Influence of Distribution of Animals between Dose Groups on Estimated Benchmark Dose and Animal Welfare for Continuous Effects
The benchmark dose (BMD) approach is increasingly used as a preferred approach for dose–effect analysis, but standard experimental designs are generally not optimized for BMD analysis. The aim of this study was to evaluate how the use of unequally sized dose groups affects the quality of BMD estimates in toxicity testing, with special consideration of the total burden of animal distress. We generated continuous dose–effect data by Monte Carlo simulation using two dose–effect curves based on endpoints with different shape parameters. Eighty-five designs, each with four dose groups of unequal size, were examined in scenarios ranging from low- to high-dose placements and with a total number of animals set to 40, 80, or 200. For each simulation, a BMD value was estimated and compared with the “true” BMD. In general, redistribution of animals from higher to lower dose groups resulted in an improved precision of the calculated BMD value as long as dose placements were high enough to detect a significant trend in the dose–effect data with sufficient power. The improved BMD precision and the associated reduction of the number of animals exposed to the highest dose, where chemically induced distress is most likely to occur, are favorable for the reduction and refinement principles. These results thereby strengthen BMD-aligned design of experiments as a means for more accurate hazard characterization along with animal welfare improvements.
Perceptions of Risk and Vulnerability Following Exposure to a Major Natural Disaster: The Calgary Flood of 2013
Many studies have examined the general public's flood risk perceptions in the aftermath of local and regional flooding. However, relatively few studies have focused on large-scale events that affect tens of thousands of people within an urban center. Similarly, in spite of previous research on flood risks, unresolved questions persist regarding the variables that might influence perceptions of risk and vulnerability, along with management preferences. In light of the opportunities presented by these knowledge gaps, the research reported here examined public perceptions of flood risk and vulnerability, and management preferences, within the city of Calgary in the aftermath of extensive flooding in 2013. Our findings, which come from an online survey of residents, reveal that direct experience with flooding is not a differentiating factor for risk perceptions when comparing evacuees with nonevacuees who might all experience future risks. However, we do find that judgments about vulnerability—as a function of how people perceive physical distance—do differ according to one's evacuation experience. Our results also indicate that concern about climate change is an important predictor of flood risk perceptions, as is trust in government risk managers. In terms of mitigation preferences, our results reveal differences in support for large infrastructure projects based on whether respondents feel they might actually benefit from them.
This article focuses on conceptual and methodological developments allowing the integration of physical and social dynamics leading to model forecasts of circumstance-specific human losses during a flash flood. To reach this objective, a random forest classifier is applied to assess the likelihood of fatality occurrence for a given circumstance as a function of representative indicators. Here, vehicle-related circumstance is chosen as the literature indicates that most fatalities from flash flooding fall in this category. A database of flash flood events, with and without human losses from 2001 to 2011 in the United States, is supplemented with other variables describing the storm event, the spatial distribution of the sensitive characteristics of the exposed population, and built environment at the county level. The catastrophic flash floods of May 2015 in the states of Texas and Oklahoma are used as a case study to map the dynamics of the estimated probabilistic human risk on a daily scale. The results indicate the importance of time- and space-dependent human vulnerability and risk assessment for short-fuse flood events. The need for more systematic human impact data collection is also highlighted to advance impact-based predictive models for flash flood casualties using machine-learning approaches in the future.
In any crisis, there is a great deal of uncertainty, often geographical uncertainty or, more precisely, spatiotemporal uncertainty. Examples include the spread of contamination from an industrial accident, drifting volcanic ash, and the path of a hurricane. Estimating spatiotemporal probabilities is usually a difficult task, but that is not our primary concern. Rather, we ask how analysts can communicate spatiotemporal uncertainty to those handling the crisis. We comment on the somewhat limited literature on the representation of spatial uncertainty on maps. We note that many cognitive issues arise and that the potential for confusion is high. We note that in the early stages of handling a crisis, the uncertainties involved may be deep, i.e., difficult or impossible to quantify in the time available. In such circumstances, we suggest the idea of presenting multiple scenarios.
This study investigated whether, in the absence of chronic noncancer toxicity data, short-term noncancer toxicity data can be used to predict chronic toxicity effect levels by focusing on the dose–response relationship instead of a critical effect. Data from National Toxicology Program (NTP) technical reports have been extracted and modeled using the Environmental Protection Agency's Benchmark Dose Software. Best-fit, minimum benchmark dose (BMD), and benchmark dose lower limits (BMDLs) have been modeled for all NTP pathologist identified significant nonneoplastic lesions, final mean body weight, and mean organ weight of 41 chemicals tested by NTP between 2000 and 2012. Models were then developed at the chemical level using orthogonal regression techniques to predict chronic (two years) noncancer health effect levels using the results of the short-term (three months) toxicity data. The findings indicate that short-term animal studies may reasonably provide a quantitative estimate of a chronic BMD or BMDL. This can allow for faster development of human health toxicity values for risk assessment for chemicals that lack chronic toxicity data.
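Orthogonal regression, unlike ordinary least squares, treats both variables (here, the short-term and chronic effect levels) as subject to error. A minimal sketch of the closed-form orthogonal fit (Deming regression with error-variance ratio 1); the data pairs are toy stand-ins, not the NTP-derived BMD values:

```python
import math

def orthogonal_fit(x, y):
    """Orthogonal (Deming, delta = 1) regression: minimizes perpendicular
    distances, treating both x and y as measured with error."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x) / (n - 1)
    syy = sum((yi - my) ** 2 for yi in y) / (n - 1)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / (n - 1)
    slope = (syy - sxx + math.sqrt((syy - sxx) ** 2 + 4 * sxy ** 2)) / (2 * sxy)
    return slope, my - slope * mx

# Toy stand-in for (short-term effect level, chronic effect level) pairs,
# e.g., on a log scale; perfectly linear so the fit is exactly recoverable.
x = [0.0, 1.0, 2.0, 3.0]
y = [1.0, 3.0, 5.0, 7.0]
slope, intercept = orthogonal_fit(x, y)
assert abs(slope - 2.0) < 1e-9 and abs(intercept - 1.0) < 1e-9
```

With fitted slope and intercept in hand, a chronic BMD or BMDL can be predicted from a new chemical's short-term value, which is the use case the study describes.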
Accurate estimates of the amount and type of fish people eat are necessary to determine the health benefits and risks of consuming fish, and to assess compliance with fish consumption guidelines issued for fish affected by chemical contaminants. We developed a web-based and mobile-phone-enabled diary methodology to collect detailed fish consumption information for two 16-week periods in the summers of 2014 and 2015. We recruited study participants from two populations living in the Great Lakes region—women of childbearing age (WCBA) and urban residents who had purchased fishing licenses. In this article, we describe the methodology in detail and provide evidence related to participation rates, the representativeness of our sample over time, and both convergent validity and reliability of the data collection methods. Overall, 56% of WCBA and 50% of urban anglers provided complete data across both data collection periods. Among those who provided information at the beginning of Year 2, 97% of both audiences provided information throughout the entire 16-week period. Those who participated throughout the two-year period were slightly older on average (1.9–2.5 years) than other members of our original samples. We conclude that using diaries with web and smartphone technology, combined with incentives and persistent communication, has strong potential for assessing fish consumption in other areas of the country or for situations where the potential risks associated with fish consumption are substantial and the cost can be justified.
Phishing risk is a growing area of concern for corporations, governments, and individuals. Given the evidence that users vary widely in their vulnerability to phishing attacks, we demonstrate an approach for assessing the benefits and costs of interventions that target the most vulnerable users. Our approach uses Monte Carlo simulation to (1) identify which users were most vulnerable, in signal detection theory terms; (2) assess the proportion of system-level risk attributable to the most vulnerable users; (3) estimate the monetary benefit and cost of behavioral interventions targeting different vulnerability levels; and (4) evaluate the sensitivity of these results to whether the attacks involve random or spear phishing. Using parameter estimates from previous research, we find that the most vulnerable users were less cautious and less able to distinguish between phishing and legitimate emails (positive response bias and low sensitivity, in signal detection theory terms). They also accounted for a large share of phishing risk for both random and spear phishing attacks. Under these conditions, our analysis estimates much greater net benefit for behavioral interventions that target these vulnerable users. Within the range of the model's assumptions, there was generally net benefit even for the least vulnerable users. However, the differences in the return on investment for interventions with users with different degrees of vulnerability indicate the importance of measuring that performance, and letting it guide interventions. This study suggests that interventions to reduce response bias, rather than to increase sensitivity, have greater net benefit.
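In signal detection terms, each user's behavior can be summarized by a sensitivity d′ (ability to tell phishing from legitimate mail) and a criterion c (response bias). A minimal sketch of how those two parameters translate into per-email outcome probabilities; the parameterization and example values are illustrative, not estimates from the study:

```python
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def outcome_probs(d_prime, c):
    """Treat phishing as the 'signal'; the user flags an email as suspicious
    when internal evidence exceeds criterion c (equal-variance Gaussian model)."""
    hit = phi(d_prime / 2 - c)            # phishing correctly flagged
    false_alarm = phi(-d_prime / 2 - c)   # legitimate mail wrongly flagged
    return hit, false_alarm

# Illustrative users: a vulnerable one (low sensitivity, criterion biased
# toward trusting mail) versus a careful one.
vuln_hit, _ = outcome_probs(d_prime=0.5, c=1.0)
safe_hit, _ = outcome_probs(d_prime=2.5, c=0.0)
assert vuln_hit < safe_hit   # the vulnerable user misses more phishing
```

Sampling such per-user parameters and attack volumes in a Monte Carlo loop, and attaching costs to misses and false alarms, yields the kind of system-level risk attribution and intervention cost-benefit comparison the study describes.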
Risks of Allergic Contact Dermatitis Elicited by Nickel, Chromium, and Organic Sensitizers: Quantitative Models Based on Clinical Patch Test Data
Risks of allergic contact dermatitis (ACD) from consumer products intended for extended (nonpiercing) dermal contact are regulated by E.U. Directive EN 1811 that limits released Ni to a weekly equivalent dermal load of ≤0.5 μg/cm2. Similar approaches for thousands of known organic sensitizers are hampered by inability to quantify respective ACD-elicitation risk levels. To help address this gap, normalized values of cumulative risk for eliciting a positive (“≥+”) clinical patch test response reported in 12 studies for a total of n = 625 Ni-sensitized patients were modeled in relation to observed ACD-eliciting Ni loads, yielding an approximate lognormal (LN) distribution with a geometric mean and standard deviation of GMNi = 15 μg/cm2 and GSDNi = 8.0, respectively. Such data for five sensitizers (including formaldehyde and 2-hydroxyethyl methacrylate) were also ∼LN distributed, but with a common GSD value equal to GSDNi and with heterogeneous sensitizer-specific GM values each defining a respective ACD-eliciting potency GMNi/GM relative to Ni. Such potencies were also estimated for nine (meth)acrylates by applying this general LN ACD-elicitation risk model to respective sets of fewer data. ACD-elicitation risk patterns observed for Cr(VI) (n = 417) and Cr(III) (n = 78) were fit to mixed-LN models in which ∼30% and ∼40% of the most sensitive responders, respectively, were estimated to exhibit a LN response also governed by GSDNi. The observed common LN-response shape parameter GSDNi may reflect a common underlying ACD mechanism and suggests a common interim approach to quantitative ACD-elicitation risk assessment based on available clinical data.
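Under the lognormal elicitation-risk model described above, the fraction of Ni-sensitized patients predicted to respond at dermal load x is Φ(ln(x/GM)/ln(GSD)). A minimal sketch using the reported GM_Ni = 15 μg/cm2 and GSD_Ni = 8.0, evaluated at the EN 1811 weekly-equivalent limit of 0.5 μg/cm2; the resulting figure is simply the mechanical consequence of those fitted parameters, not a risk characterization stated in the abstract:

```python
import math

GM_NI, GSD_NI = 15.0, 8.0   # fitted lognormal parameters from the text above

def elicitation_risk(load, gm=GM_NI, gsd=GSD_NI):
    """Cumulative lognormal ACD-elicitation risk at a dermal load (ug/cm2)."""
    z = math.log(load / gm) / math.log(gsd)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

risk_at_limit = elicitation_risk(0.5)   # at the EN 1811 dermal-load limit
assert 0.04 < risk_at_limit < 0.06      # roughly 5% of sensitized patients
assert abs(elicitation_risk(GM_NI) - 0.5) < 1e-12  # by definition of the GM
```

Replacing GM_Ni with a sensitizer-specific GM (while keeping the common GSD_Ni) gives the analogous risk curve for the organic sensitizers discussed above.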
A better understanding of the uncertainty that exists in models used for seismic risk assessment is critical to improving risk-based decisions pertaining to earthquake safety. Current models estimating the probability of collapse of a building do not consider comprehensively the nature and impact of uncertainty. This article presents a model framework to enhance seismic risk assessment and thus gives decisionmakers a fuller understanding of the nature and limitations of the estimates. This can help ensure that risks are not over- or underestimated and the value of acquiring accurate data is appreciated fully. The methodology presented provides a novel treatment of uncertainties in input variables, their propagation through the model, and their effect on the results. The study presents ranges of possible annual collapse probabilities for different case studies on buildings in different parts of the world, exposed to different levels of seismicity, and with different vulnerabilities. A global sensitivity analysis was conducted to determine the significance of uncertain variables. Two key outcomes are (1) that the uncertainty in ground-motion conversion equations has the largest effect on the uncertainty in the calculation of annual collapse probability; and (2) the vulnerability of a building appears to have an effect on the range of annual collapse probabilities produced, i.e., the level of uncertainty in the estimate of annual collapse probability, with less vulnerable buildings having a smaller uncertainty.
The use of electronic cigarettes has grown substantially over the last few years. Currently, about 4% of adults use electronic cigarettes, about 16% of high school students report use in the past 30 days, as do approximately 11–25% of college students. A hallmark of the reduction in tobacco use has been the shift in social norms concerning smoking in public. Such norms may also drive views on acceptability of public electronic cigarette use. While normative factors have been given attention, little substantive application of the literature on risk perception has been brought to bear. The overall aim of this study was to place a cognitive–affective measure of risk perception within a model that also includes social cues for e-cigarettes, addictiveness beliefs, and tobacco use to predict perceived social acceptability for public use of e-cigarettes. To do so, a cross-sectional study using an online survey was conducted among a sample of undergraduate students at a Western university (n = 395). A structural equation model showed that the acceptability of public e-cigarette use was influenced by social cues, beliefs about addiction, and cognitive risk perception, even after controlling for nicotine use. The model reveals that cognitive assessment of e-cigarette risk and perceived addictiveness suppressed the perceived acceptability of public vaping, while greater exposure to social cues exerted a countervailing effect. This is evidence of the role that risk perception and social norms may play in the increases in electronic cigarette use that have been observed.