Risk Analysis: An International Journal
Evaluating and Visualizing the Economic Impact of Commercial Districts Due to an Electric Power Network Disruption
Critical infrastructure networks enable social behavior, economic productivity, and the way of life of communities. Disruptions to these cyber–physical–social networks highlight their importance. Recent disruptions caused by natural phenomena, including Hurricanes Harvey and Irma in 2017, have particularly demonstrated the importance of functioning electric power networks. Assessing the economic impact (EI) of electricity outages after a service disruption is a challenging task, particularly when interruption costs vary by the type of electric power use (e.g., residential, commercial, industrial). In contrast with most of the literature, this work proposes an approach to spatially evaluate EIs of disruptions to particular components of the electric power network, thus enabling resilience‐based preparedness planning from economic and community perspectives. Our contribution is a mixed‐method approach that combines EI evaluation, component importance analysis, and GIS visualization for decision making.
We integrate geographic information systems and an economic evaluation of sporadic electric power outages to provide a tool to assist with prioritizing restoration of power in commercial areas that have the largest impact. By making use of public data describing commercial market value, gross domestic product, and electric power distribution areas, this article proposes a method to evaluate the EI experienced by commercial districts. A geospatial visualization is presented to observe and compare the areas that are more vulnerable in terms of EI based on the areas covered by each distribution substation. Additionally, a heat map is developed to observe the behavior of disrupted substations and to identify the important components exhibiting the highest EI. The proposed resilience analytics approach is applied to analyze outages of substations in the boroughs of New York City.
The observed global sea level rise owing to climate change, coupled with the potential increase in extreme storms, requires a reexamination of existing infrastructural planning, construction, and management practices. Storm surge amplifies the effects of rising sea levels. The recent superstorms that hit the United States (e.g., Hurricane Katrina in 2005, Sandy in 2012, Harvey and Maria in 2017) and China (e.g., Typhoon Haiyan in 2013) inflicted serious loss of life and property. Water level rise (WLR) of local coastal areas is a combination of sea level rise, storm surge, precipitation, and local land subsidence. Quantitative assessments of the impact of WLR include scenario identification, consequence assessment, vulnerability and flooding assessment, and risk management using an inventory of assets from coastal areas, particularly population centers, to manage flooding risk and to enhance infrastructure resilience of coastal cities. This article discusses the impact of WLR on urban infrastructures with case studies of Washington, DC, and Shanghai. Based on the flooding risk analysis under possible scenarios, the property loss for Washington, DC, was evaluated, and the impact on the metro system of Shanghai was examined.
Recognizing Structural Nonidentifiability: When Experiments Do Not Provide Information About Important Parameters and Misleading Models Can Still Have Great Fit
In the quest to model various phenomena, the foundational importance of parameter identifiability to sound statistical modeling may be less well appreciated than goodness of fit. Identifiability concerns the quality of objective information in data to facilitate estimation of a parameter, while nonidentifiability means there are parameters in a model about which the data provide little or no information. In purely empirical models where parsimonious good fit is the chief concern, nonidentifiability (or parameter redundancy) implies overparameterization of the model. In contrast, nonidentifiability implies underinformativeness of available data in mechanistically derived models where parameters are interpreted as having strong practical meaning. This study explores illustrative examples of structural nonidentifiability and its implications using mechanistically derived models (for repeated presence/absence analyses and dose–response of Escherichia coli O157:H7 and norovirus) drawn from quantitative microbial risk assessment. Following algebraic proof of nonidentifiability in these examples, profile likelihood analysis and Bayesian Markov Chain Monte Carlo with uniform priors are illustrated as tools to help detect model parameters that are not strongly identifiable. It is shown that identifiability should be considered during experimental design and ethics approval to ensure generated data can yield strong objective information about all mechanistic parameters of interest. When Bayesian methods are applied to a nonidentifiable model, the subjective prior effectively fabricates information about any parameters about which the data carry no objective information. Finally, structural nonidentifiability can lead to spurious models that fit data well but can yield severely flawed inferences and predictions when they are interpreted or used inappropriately.
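The core point — that a nonidentifiable model can fit perfectly while the data say nothing about individual parameters — can be shown with a minimal sketch. The toy model below is an assumption for illustration (not one of the article's QMRA models): observations depend on parameters `a` and `b` only through their product, so the profile likelihood along the ridge `a * b = const` is perfectly flat.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "mechanistic" model: the observation mean is a * b, so the
# data inform only the product, never a and b separately.
y = rng.normal(loc=6.0, scale=1.0, size=100)  # true a * b = 6

def log_lik(a, b, y):
    """Gaussian log-likelihood (up to a constant) for mean a * b."""
    mu = a * b
    return -0.5 * np.sum((y - mu) ** 2)

# Every (a, b) pair on the ridge a * b = 6 fits the data equally
# well: the fit is excellent, yet a and b are nonidentifiable.
pairs = [(2.0, 3.0), (1.0, 6.0), (0.5, 12.0)]
vals = [log_lik(a, b, y) for a, b in pairs]
```

A profile likelihood plot over `a` alone would be a horizontal line here, which is exactly the diagnostic signature the abstract recommends checking before trusting parameter estimates.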
After smoking, exposure to radon and its progeny is the second leading cause of lung cancer. The probability of inducing lung carcinomas by inhaled radon progeny depends on the deposited radiation dose, and is significantly affected by physiological and morphometric changes induced by smoking. Due to irritation of the airways, the inhalation of cigarette smoke leads to the hyperproduction of mucus. Two concurrent processes occur: on one hand, increased production of mucus protects the target cells against radiation damage; on the other hand, in the case of long‐term smokers, a chronic lung obstruction develops, causing an increase in the radiation dose to the lungs. Depending on the duration and intensity of smoking, these processes contribute to the final radiation dose with different weights. The primary objective of this study was to investigate to what extent these smoke‐induced changes can modify the resulting absorbed dose of inhaled radon progeny relative to healthy nonsmokers. Since the bronchial dose depends on the degree of lung tissue damage, we have used this dose as a tool for detecting the effects of smoking on the lung epithelium. In other words, the biological effect of radon served as a tracer of changes induced by smoking.
Proposed as an advanced conceptualization of how to handle risk, risk governance begins with the critique and expansion of the traditional idea and standard practices of risk analysis. In developments over the last two decades, proponents of a more integrative approach to governing risks have moved further away from distinct conceptions of risk assessment, risk management, and risk communication and toward the processes and institutions that guide, restrain, and integrate collective activities of handling risk. In early formulations of what risk governance entails, the superiority of the interplay between risk evaluation and risk management over linear and simple deductions from risk assessment to risk management was established precisely by developing a distinctive rationality of how to proceed. Later, the International Risk Governance Council recaptured this distinctive rationality in the proposition that institutionalized processes should embody the interplay of the assessment of risks and related concerns, their sociopolitical appraisal, and the logical inference for risk management. Recently, this approach has been refined and augmented toward an integrative and adaptive concept of risk governance and toward a postnormal conception of risk governance. Its main characteristics are a new concept of differentiated responsibility and deliberation in which expertise, experience, and tacit knowledge are integrated, forming the core of legitimate political risk decision making.
Efficacy Foundations for Risk Communication: How People Think About Reducing the Risks of Climate Change
Believing action to reduce the risks of climate change is both possible (self‐efficacy) and effective (response efficacy) is essential to motivate and sustain risk mitigation efforts, according to current risk communication theory. Although the public recognizes the dangers of climate change, and is deluged with lists of possible mitigative actions, little is known about public efficacy beliefs in the context of climate change. Prior efficacy studies rely on conflicting constructs and measures of efficacy, and links between efficacy and risk management actions are muddled. As a result, much remains to be learned about how laypersons think about the ease and effectiveness of potential mitigative actions. To bring clarity and inform risk communication and management efforts, we investigate how people think about efficacy in the context of climate change risk management by analyzing unprompted and prompted beliefs from two national surveys (N = 405, N = 1,820). In general, respondents distinguish little between effective and ineffective climate strategies. While many respondents appreciate that reducing fossil fuel use is an effective risk mitigation strategy, overall assessments reflect persistent misconceptions about climate change causes, and uncertainties about the effectiveness of risk mitigation strategies. Our findings suggest targeting climate change risk communication and management strategies to (1) address gaps in people's existing mental models of climate action, (2) leverage existing public understanding of both potentially effective mitigation strategies and the collective action dilemma at the heart of climate change action, and (3) take into account ideologically driven reactions to behavior change and government action framed as climate action.
To test a possible boundary condition for the risk information seeking and processing (RISP) model, this study experimentally manipulates risk perception related to the 2014 Ebola outbreak in a nationally representative sample. Multiple‐group structural equation modeling results indicate that psychological distance was negatively related to systematic processing in the high‐risk condition. In the low‐risk condition, psychological distance was positively related to heuristic processing; negative attitude toward media coverage dampened people's need for information, which subsequently influenced information processing. Risk perception elicited more fear, which led to greater information insufficiency and more heuristic processing in the low‐risk condition. In contrast, sadness was consistently related to information processing in both conditions. Model fit statistics also show that the RISP model provides a better fit to data when risk perception is elevated. Further, this study contributes to our understanding of the role of discrete emotions in motivating information processing.
Risk Perceptions Toward Drinking Water Quality Among Private Well Owners in Ireland: The Illusion of Control
In rural areas where no public or group water schemes exist, groundwater is often the only source of drinking water and is extracted by drilling private wells. Typically, private well owners are responsible for the quality and testing of their own drinking water. Previous studies indicate that well owners tend to underestimate the risks of their well water being contaminated, yet little is known about why this is the case. We conducted a qualitative study by interviewing private well owners in Ireland to investigate their beliefs surrounding their water quality, which, in turn, inform their risk perceptions and their willingness to regularly test their water. Based on our findings we designed a theoretical model arguing that perceived control is central in the perceived contamination risks of well water. More specifically, we argue that well owners have the illusion of being in control over their water quality, which implies that people often perceive themselves to be more in control of a situation than they actually are. As a result, they tend to underestimate contamination risks, which, in turn, negatively impacts water testing behaviors. Theoretical and practical implications are highlighted.
Cigarette Smoking and Multiple Health Risk Behaviors: A Latent Class Regression Model to Identify a Profile of Young Adolescents
Cigarette smoking is often established during adolescence when other health‐related risk behaviors tend to occur. The aim of the study was to further investigate the hypothesis that risky health behaviors tend to cluster together and to identify distinctive profiles of young adolescents based on their smoking habits. To explore the idea that smoking behavior can predict membership in a specific risk profile of adolescents, with heavy smokers being more likely to exhibit other risk behaviors, we reanalyzed the data from the 2014 Health Behaviour in School‐Aged Children Italian survey of about 60,000 first‐ and third‐grade junior high school (JHS) and second‐grade high school (HS) students. A Bayesian approach was adopted for selecting the manifest variables associated with smoking; a latent class regression model was employed to identify smoking behaviors among adolescents. Finally, a health‐related risk pattern associated with different types of smoking behaviors was found. Heavy smokers engaged in higher alcohol use and abuse and experienced school failure more often than their peers. Frequent smokers reported below‐average academic achievement and self‐rated their health as fair/poor more frequently than nonsmokers. Lifetime cannabis use and early sexual intercourse were more frequent among heavy smokers. Our findings provide elements for constructing a profile of frequent adolescent smokers and for identifying behavioral risk patterns during the transition from JHS to HS. This may provide an additional opportunity to devise interventions that could be more effective at improving smoking cessation among occasional smokers and at adequately addressing other risk behaviors among frequent smokers.
Understanding healthcare viral disease transmission and the effect of infection control interventions will inform current and future infection control protocols. In this study, a model was developed to predict virus concentration on nurses’ hands using data from a bacteriophage tracer study conducted in Tucson, Arizona, in an urgent care facility. Surfaces were swabbed 2 hours, 3.5 hours, and 6 hours postseeding to measure virus spread over time. To estimate the full viral load that would have been present on hands without sampling, virus concentrations were summed across time points for 3.5‐ and 6‐hour measurements. A stochastic discrete event model was developed to predict virus concentrations on nurses’ hands, given a distribution of virus concentrations on surfaces and expected frequencies of hand‐to‐surface and orifice contacts and handwashing. Box plots and statistical hypothesis testing were used to compare the model‐predicted and experimentally measured virus concentrations on nurses’ hands. The model was validated with the experimental bacteriophage tracer data because the distribution for model‐predicted virus concentrations on hands captured all observed value ranges, and interquartile ranges for model and experimental values overlapped for all comparison time points. Wilcoxon rank sum tests showed no significant differences in distributions of model‐predicted and experimentally measured virus concentrations on hands. However, limitations in the tracer study indicate that more data are needed to instill more confidence in this validation. Next model development steps include addressing viral concentrations that would be found naturally in healthcare environments and measuring the risk reductions predicted for various infection control interventions.
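A stochastic discrete event model of the general shape described above can be sketched as follows. All parameter values (contact count, surface concentration, transfer fraction, handwashing probability, and log reduction) are illustrative assumptions, not the study's fitted values:

```python
import random

def simulate_hand_load(n_events=100, surface_conc=50.0,
                       transfer_frac=0.1, wash_prob=0.05,
                       wash_log_reduction=2.0, seed=1):
    """Toy discrete-event sketch: each event is either a hand-to-surface
    contact, which picks up a fraction of the surface virus load, or a
    handwash, which reduces the hand load by a fixed log reduction."""
    rng = random.Random(seed)
    hand = 0.0
    for _ in range(n_events):
        if rng.random() < wash_prob:
            hand *= 10 ** (-wash_log_reduction)   # handwashing event
        else:
            hand += transfer_frac * surface_conc  # surface contact event
    return hand

# Running many seeds yields a distribution of predicted hand loads,
# which is what would be compared against swab measurements.
loads = [simulate_hand_load(seed=s) for s in range(200)]
```

Comparing the simulated distribution against measured concentrations (e.g., with box plots and a Wilcoxon rank sum test, as in the study) is then a matter of standard statistical tooling.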
The dynamics of organizational risk communication is an understudied topic in risk research. This article investigates how public officials at six government agencies in Sweden understand and relate to risk communication and its uses in the context of agency organizational work on policy and regulation. Qualitative interviews were used to explore the practitioners’ views on some key topics in the academic literature on risk communication. A main finding is that there is little consensus on what the goals of risk communication are; if, and how, uncertainty should be communicated; and what role is to be played by transparency in risk communication. However, the practitioners agree that dissemination (top down) to the public of robust scientific and expert knowledge is a crucial element. Dialogue and participation are used mainly with other agencies and elite stakeholders with whom agencies collaborate to implement policy goals. Dialogue with the public on issues of risk is very limited. Some implications of the findings for the practice of risk communication by government agencies are suggested.
A Multicompartment SIS Stochastic Model with Zonal Ventilation for the Spread of Nosocomial Infections: Detection, Outbreak Management, and Infection Control
In this work, we study the environmental and operational factors that influence airborne transmission of nosocomial infections. We link a deterministic zonal ventilation model for the airborne distribution of infectious material in a hospital ward, with a Markovian multicompartment SIS model for the infection of individuals within this ward, in order to conduct a parametric study on ventilation rates and their effect on the epidemic dynamics. Our stochastic model includes arrival and discharge of patients, as well as the detection of the outbreak by screening events or due to symptoms being shown by infective patients. For each ventilation setting, we measure the infectious potential of a nosocomial outbreak in the hospital ward by means of a summary statistic: the number of infections occurring within the hospital ward until the end or declaration of the outbreak. We analytically compute the distribution of this summary statistic, and carry out local and global sensitivity analysis in order to identify the particular characteristics of each ventilation regime with the largest impact on the epidemic spread. Our results show that ward ventilation can have a significant impact on the infection spread, especially under slow detection scenarios or in overoccupied wards, and that decreasing the infection risk for the whole hospital ward might increase the risk in specific areas of the health‐care facility. Moreover, the location of the initial infective individual and the protocol in place for outbreak declaration both interact with the ventilation of the ward.
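The summary statistic described above — infections counted until the outbreak ends or is declared — can be illustrated with a minimal single-compartment, Gillespie-style SIS simulation. The rates, the detection mechanism, and all parameter values below are illustrative placeholders; the article's model additionally couples zonal ventilation, patient arrival/discharge, and screening:

```python
import random

def sis_outbreak_size(n=10, i0=1, beta=0.3, gamma=0.1,
                      detect_rate=0.05, seed=0):
    """Count infection events in a toy SIS ward of n occupants until
    the outbreak dies out (no infectives left) or is detected."""
    rng = random.Random(seed)
    i, infections = i0, 0
    while i > 0:
        rate_inf = beta * i * (n - i) / n   # new infection
        rate_rec = gamma * i                # recovery (back to susceptible)
        rate_det = detect_rate * i          # outbreak detection/declaration
        total = rate_inf + rate_rec + rate_det
        u = rng.random() * total            # pick the next event by rate
        if u < rate_inf:
            i += 1
            infections += 1
        elif u < rate_inf + rate_rec:
            i -= 1
        else:
            return infections               # outbreak declared
    return infections                       # outbreak died out

# Repeating over many seeds approximates the distribution of the
# summary statistic for one parameter setting.
sizes = [sis_outbreak_size(seed=s) for s in range(500)]
```

Sweeping `beta` (as a stand-in for a ventilation-dependent transmission rate) and `detect_rate` over grids would reproduce, in miniature, the kind of parametric and sensitivity study the abstract describes.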
Communicating with the Public About Marauding Terrorist Firearms Attacks: Results from a Survey Experiment on Factors Influencing Intention to “Run, Hide, Tell” in the United Kingdom and Denmark
Effective risk communication is an integral part of responding to terrorism, but until recently, there has been very little pre‐event communication in a European context to provide advice to the public on how to protect themselves during an attack. Following terrorist attacks involving mass shootings in Paris, France, in November 2015, the U.K. National Police Chiefs’ Council released a Stay Safe film and leaflet that advises the public to “run,” “hide,” and “tell” in the event of a firearms or weapons attack. However, other countries, including Denmark, do not provide preparedness information of this kind, in large part because of concern about scaring the public. In this survey experiment, 3,003 U.K. and Danish participants were randomly assigned to one of three conditions: no information, a leaflet intervention, and a film intervention to examine the impact of “Run, Hide, Tell” advice on perceptions about terrorism, the security services, and intended responses to a hypothetical terrorist firearms attack. Results demonstrate important benefits of pre‐event communication in relation to enhancing trust, encouraging protective health behaviors, and discouraging potentially dangerous actions. However, these findings also suggest that future communications should address perceived response costs and target specific problem behaviors. Cross‐national similarities in response suggest this advice is suitable for adaptation in other countries.
A Definition and Categorization System for Advanced Materials: The Foundation for Risk‐Informed Environmental Health and Safety Testing
Novel materials with unique or enhanced properties relative to conventional materials are being developed at an increasing rate. These materials are often referred to as advanced materials (AdMs) and they enable technological innovations that can benefit society. Despite their benefits, however, the unique characteristics of many AdMs, including many nanomaterials, are poorly understood and may pose environmental safety and occupational health (ESOH) risks that are not readily determined by traditional risk assessment methods. To assess these risks while keeping up with the pace of development, technology developers and risk assessors frequently employ risk‐screening methods that depend on a clear definition for the materials that are to be assessed (e.g., engineered nanomaterial) as well as a method for binning materials into categories for ESOH risk prioritization. The term advanced material lacks a consensus definition and associated categorization or grouping system for risk screening. In this study, we aim to establish a practitioner‐driven definition for AdMs and a practitioner‐validated framework for categorizing AdMs into conceptual groupings based on material characteristics. Results from multiple workshops and interviews with practitioners provide consistent differentiation between AdMs and conventional materials, offer functional nomenclature for application science, and provide utility for future ESOH risk assessment prioritization. The definition and categorization framework established here serve as a first step in determining if and when there is a need for specific ESOH and regulatory screening for an AdM as well as the type and extent of risk‐related information that should be collected or generated for AdMs and AdM‐enabled technologies.
Multiple hazard resilience is of significant practical value because most regions of the world are subject to multiple natural and technological hazards. An analysis and assessment approach for multiple hazard spatiotemporal resilience of interdependent infrastructure systems is developed using network theory and a numerical analysis. First, we define multiple hazard resilience and present a quantitative probabilistic metric based on the expansion of a single hazard deterministic resilience model. Second, we define a multiple hazard relationship analysis model with a focus on the impact of hazards on an infrastructure. Subsequently, a relationship matrix is constructed with temporal and spatial dimensions. Further, a general method for the evaluation of direct impacts on an individual infrastructure under multiple hazards is proposed. Third, we present an analysis of indirect multiple hazard impacts on interdependent infrastructures and a joint restoration model of an infrastructure system. Finally, a simplified two‐layer interdependent infrastructure network is used as a case study for illustrating the proposed methodology. The results show that temporal and spatial relationships of multiple hazards significantly influence system resilience. Moreover, the interdependence among infrastructures further magnifies the impact on resilience value. The main contribution of the article is a new multiple hazard resilience evaluation approach that is capable of integrating the impacts of multiple hazard interactions, interdependence of network components (layers), and restoration strategy.
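The expansion from a single-hazard deterministic resilience metric to a probabilistic multihazard one can be sketched as below. The performance curve, the restoration timing, and the scenario probabilities are all invented for illustration; they are not the article's case-study values:

```python
import numpy as np

def resilience(q, t):
    """Normalized area under the performance curve Q(t), trapezoid rule."""
    area = np.sum((q[1:] + q[:-1]) / 2.0 * np.diff(t))
    return float(area / (t[-1] - t[0]))

# Assumed single-hazard curve: full performance, a drop to 40% when
# the hazard strikes at t = 20, linear restoration until t = 60.
t = np.linspace(0.0, 100.0, 1001)
q = np.where(t < 20, 1.0,
             np.where(t < 60, 0.4 + 0.6 * (t - 20) / 40, 1.0))
r_single = resilience(q, t)

# Probabilistic expansion: expected resilience over hazard scenarios,
# each with an occurrence probability and its own resilience value
# (the second scenario's value, 0.6, is a made-up placeholder).
scenario_probs = np.array([0.7, 0.3])
scenario_resilience = np.array([r_single, 0.6])
r_multi = float(scenario_probs @ scenario_resilience)
```

In the article's setting, each scenario's resilience would itself come from the temporal/spatial hazard relationship matrix, the interdependent-network impact analysis, and the joint restoration model rather than from a hand-drawn curve.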
Experiments in Lay Cues to the Relative Validity of Positions Taken by Disputing Groups of Scientists
Risk analysis and hazard management can prompt varied intra‐scientific disputes, some of which have become, or will become, public, and thus potentially available for lay judgments of the relative validity of the positions taken. As attentive laypeople may include elites as well as the general public, understanding whether and how cues to credibility of disputing groups of scientists might shape those lay judgments can be important. Relevant literatures from philosophy, social studies of science, risk analysis, and elsewhere have identified potential cues, but not tested their absolute or relative effects. Two experiments with U.S. online panel members tested multiple cues (e.g., credentials, experience, majority opinions, research quality) across topics of varying familiarity that are subject to actual intra‐science disputes (dark matter, marijuana, sea‐level rise). If cues supported a position, laypeople were more likely to choose it as relatively more valid, with information quality, majority “vote,” experience, and degree source as the strongest cues, and interest, demographic, and values similarity as the weakest. These results were similar in overall rankings to those from implicit rankings of cue reliability ratings from an earlier U.S. online survey. Proposed moderators were generally nonsignificant, but topic familiarity and subjective topic knowledge tended to reduce cue effects. Further research to confirm and extend these findings can inform both theory about citizen engagement with scientific and risk disputes, and practice in communication about science and risk.
Several statistical models for salmonella source attribution have been presented in the literature. However, these models have often been found to be sensitive to the model parameterization, as well as the specifics of the data set used. The Bayesian salmonella source attribution model presented here was developed to be generally applicable with small and sparse annual data sets obtained over several years. The full Bayesian model was modularized into three parts (an exposure model, a subtype distribution model, and an epidemiological model) in order to separately estimate unknown parameters in each module. The proposed model takes advantage of the consumption and overall salmonella prevalence of the studied sources, as well as bacteria typing results from adjacent years. The latter were used for a smoothed estimation of the annual relative proportions of different salmonella subtypes in each of the sources. The source‐specific effects and the salmonella subtype‐specific effects were included in the epidemiological model to describe the differences between sources and between subtypes in their ability to infect humans. The estimation of these parameters was based on data from multiple years. Finally, the model combines the total evidence from different modules to proportion human salmonellosis cases according to their sources. The model was applied to allocate reported human salmonellosis cases from the years 2008 to 2015 to eight food sources.
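The modular logic — combining an exposure module with per-source subtype distributions to proportion cases among sources — can be sketched with point estimates as follows. The article's model is fully Bayesian with smoothing across years; the source names and all numbers here are invented for illustration:

```python
import numpy as np

# Exposure module (illustrative): consumption x prevalence per source.
sources = ["beef", "pork", "poultry"]
exposure = np.array([0.5, 0.3, 0.2])

# Subtype distribution module (illustrative): P(subtype | source)
# for two hypothetical salmonella subtypes, rows = sources.
subtype_freq = np.array([[0.7, 0.3],
                         [0.2, 0.8],
                         [0.5, 0.5]])

# Reported human cases of each subtype in one year (made up).
cases_by_subtype = np.array([120, 80])

# Epidemiological combination: P(source | subtype) is proportional to
# exposure[source] * P(subtype | source); cases of each subtype are
# then proportioned to sources by these posterior weights.
weights = exposure[:, None] * subtype_freq            # (source, subtype)
post = weights / weights.sum(axis=0, keepdims=True)   # columns sum to 1
attributed = (post * cases_by_subtype).sum(axis=1)    # cases per source
```

In the full model each of these point values would be an uncertain quantity with its own posterior, and source- and subtype-specific infectivity effects would enter the epidemiological step, but the proportioning arithmetic follows this shape.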
This article empirically examines the effectiveness of earthquake early warning (EEW) in Japan based on experiences of residents who received warnings before earthquake shaking occurred. In Study 1, a survey (N = 299) was conducted to investigate residents’ experiences of, and reactions to, an EEW issued in Gunma and neighboring regions on June 17, 2018. The main results were as follows. (1) People's primary reactions to the EEW were mental, not physical, and thus motionless. Most residents stayed still, not for safety reasons, but because they were focusing on mentally bracing themselves. (2) Residents perceived the EEW to be effective because it enabled them to mentally prepare, rather than take physical protective actions, before strong shaking arrived. (3) In future, residents anticipate that on receipt of an EEW they would undertake mental preparation as opposed to physical protective actions. In Study 2, a survey (N = 450) was conducted on another EEW issued for an earthquake offshore of Chiba Prefecture on July 7, 2018. Results were in line with those of Study 1, suggesting that the findings described above are robust. Finally, given people's lack of impetus to undertake protective action on receipt of an EEW, this article discusses ways to enhance such actions.
Farmers’ Risk‐Based Decision Making Under Pervasive Uncertainty: Cognitive Thresholds and Hazy Hedging
Researchers in judgment and decision making have long debunked the idea that we are economically rational optimizers. However, problematic assumptions of rationality remain common in studies of agricultural economics and climate change adaptation, especially those that involve quantitative models. Recent movement toward more complex agent‐based modeling provides an opportunity to reconsider the empirical basis for farmer decision making. Here, we reconceptualize farmer decision making from the ground up, using an in situ mental models approach to analyze weather and climate risk management. We assess how large‐scale commercial grain farmers in South Africa (n = 90) coordinate decisions about weather, climate variability, and climate change with those around other environmental, agronomic, economic, political, and personal risks that they manage every day. Contrary to common simplifying assumptions, we show that these farmers tend to satisfice rather than optimize as they face intractable and multifaceted uncertainty; they make imperfect use of limited information; they are differently averse to different risks; they make decisions on multiple time horizons; they are cautious in responding to changing conditions; and their diverse risk perceptions contribute to important differences in individual behaviors. We find that they use two important nonoptimizing strategies, which we call cognitive thresholds and hazy hedging, to make practical decisions under pervasive uncertainty. These strategies, evident in farmers' simultaneous use of conservation agriculture and livestock to manage weather risks, are the messy in situ performance of naturalistic decision‐making techniques. These results may inform continued research on such behavioral tendencies in narrower lab‐ and modeling‐based studies.