Risk Analysis: An International Journal

Wiley Online Library: Risk Analysis

Risk Communication Emergency Response Preparedness: Contextual Assessment of the Protective Action Decision Model

15 June 2017 - 1:15am

Studies are continually performed to improve risk communication campaign designs so that residents are better prepared to act safely during an emergency. To that end, this article investigates the predictive ability of the protective action decision model (PADM), which links environmental and social cues, predecision processes (attention, exposure, and comprehension), and risk decision perceptions (threat, alternative protective actions, and stakeholder norms) with protective action decision making. This quasi-longitudinal study of residents (N = 400 for each year) in a high-risk (chemical release) petrochemical manufacturing community investigated whether the PADM's core risk perceptions predict protective action decision making. Telephone survey data collected at four intervals (1995, 1998, 2002, 2012) reveal that perceptions of protective actions and stakeholder norms, but not of threat, currently predict protective action decision making (intention to shelter in place). Notably, it is perception of Wally Wise Guy (a spokes-character who advocates sheltering in place), rather than threat perception, that correlates with perceptions of protective action, stakeholder norms, and protective action decision making. Wally's response-efficacy advice predicts residents' behavioral intentions to shelter in place, offering contextually sensitive support for, and refinement of, the PADM.

Hazard Experience, Geophysical Vulnerability, and Flood Risk Perceptions in a Postdisaster City: The Case of New Orleans

15 June 2017 - 1:10am

This article investigates the determinants of flood risk perceptions in New Orleans, Louisiana (United States), a deltaic coastal city highly vulnerable to seasonal nuisance flooding and hurricane-induced deluges and storm surges. Few studies have investigated the influence of hazard experience and geophysical vulnerability (hazard proximity) on risk perceptions in cities undergoing postdisaster recovery and rebuilding. We use ordinal logistic regression techniques to analyze experiential, geophysical, and sociodemographic variables derived from a survey of 384 residents in seven neighborhoods. We find that residents living in neighborhoods that flooded during Hurricane Katrina exhibit higher levels of perceived risk than residents of neighborhoods that did not flood. In addition, findings suggest that flood risk perception is positively associated with female gender, lower income, and direct flood experiences. In conclusion, we discuss the implications of these findings for theoretical and empirical research on environmental risk, flood risk communication strategies, and flood hazards planning.
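
As a rough illustration of the kind of analysis the abstract describes, the sketch below fits an ordinal logistic regression with statsmodels' OrderedModel. All variable names, coefficients, and data are hypothetical stand-ins for the survey variables, not the authors' data.

```python
# Minimal sketch of an ordinal logistic regression of perceived flood risk
# on experiential, geophysical, and sociodemographic predictors.
# Everything below is a hypothetical illustration.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 384  # survey size reported in the abstract

df = pd.DataFrame({
    "neighborhood_flooded": rng.integers(0, 2, n),     # flooded during Katrina?
    "direct_flood_experience": rng.integers(0, 2, n),
    "female": rng.integers(0, 2, n),
    "income": rng.normal(50, 15, n),                   # household income, $1,000s
})
# Latent propensity mapped to an ordered four-level risk perception response
latent = (0.8 * df["neighborhood_flooded"] + 0.6 * df["direct_flood_experience"]
          + 0.4 * df["female"] - 0.02 * df["income"] + rng.logistic(size=n))
df["perceived_risk"] = pd.cut(latent, bins=[-np.inf, -1.5, -0.5, 0.5, np.inf],
                              labels=["low", "moderate", "high", "extreme"])

model = OrderedModel(df["perceived_risk"],
                     df[["neighborhood_flooded", "direct_flood_experience",
                         "female", "income"]], distr="logit")
print(model.fit(method="bfgs", disp=False).summary())
```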

Issue Information - TOC

14 June 2017 - 3:26pm

From the Editors

14 June 2017 - 3:26pm

A Reliability-Based Capability Approach

12 June 2017 - 4:50pm

This article proposes a rigorous mathematical approach, named a reliability-based capability approach (RCA), to quantify the societal impact of a hazard. The starting point of the RCA is a capability approach in which capabilities refer to the genuine opportunities open to individuals to achieve valuable doings and beings (such as being mobile and being sheltered), called functionings. Capabilities depend on what individuals have and what they can do with what they have. The article develops probabilistic predictive models that relate the value of each functioning to a set of easily predictable or measurable quantities (regressors) in the aftermath of a hazard. The predicted values of selected functionings for an individual collectively determine the impact of a hazard on his/her state of well-being. The proposed RCA integrates the predictive models of functionings into a system reliability problem to determine the probability that the state of well-being is acceptable, tolerable, or intolerable. Importance measures are defined to quantify the contribution of each functioning to the state of well-being, and can inform decisions on the optimal allocation of limited resources for risk mitigation and management.
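
A Monte Carlo sketch of the system-reliability step is given below, under assumed (hypothetical) predictive models for two functionings, "being mobile" and "being sheltered"; well-being is treated as a series system that is acceptable only if every functioning clears its threshold. The models, thresholds, and regressor values are illustrative, not the article's.

```python
# Monte Carlo sketch: P(well-being acceptable) from predictive models of
# functionings, plus a simple importance measure. All numbers hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n_sims = 100_000

# Hypothetical post-hazard regressors for one individual
road_damage_frac = 0.3
dist_to_shelter_km = 2.0

# Hypothetical probabilistic predictive models (regression + model error)
mobility = 1.0 - 0.8 * road_damage_frac + rng.normal(0.0, 0.15, n_sims)
shelter = 1.0 - 0.2 * dist_to_shelter_km + rng.normal(0.0, 0.20, n_sims)

threshold = 0.5  # value a functioning must reach to count as achieved
acceptable = (mobility > threshold) & (shelter > threshold)  # series system
p_acceptable = acceptable.mean()
print(f"P(well-being acceptable) ~ {p_acceptable:.3f}")

# Risk-achievement-style importance: gain in P(acceptable) when one
# functioning is guaranteed to clear its threshold.
print(f"importance of mobility: {(shelter > threshold).mean() - p_acceptable:.3f}")
print(f"importance of shelter:  {(mobility > threshold).mean() - p_acceptable:.3f}")
```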

Geographic Hotspots of Critical National Infrastructure

12 June 2017 - 4:45pm

Failure of critical national infrastructures can result in major disruptions to society and the economy. Understanding the criticality of individual assets and the geographic areas in which they are located is essential for targeting investments to reduce risks and enhance system resilience. In this study, we provide new insights into the criticality of real-life critical infrastructure networks by integrating high-resolution data on infrastructure location, connectivity, interdependence, and usage. We propose a metric of infrastructure criticality in terms of the number of users who may be directly or indirectly disrupted by the failure of physically interdependent infrastructures. Kernel density estimation is used to integrate spatially discrete criticality values associated with individual infrastructure assets, producing a continuous surface from which statistically significant infrastructure criticality hotspots are identified. We develop a comprehensive and unique national-scale demonstration for England and Wales that utilizes previously unavailable data from the energy, transport, water, waste, and digital communications sectors. Testing of 200,000 failure scenarios reveals that hotspots are typically located around the periphery of urban areas, where there are large facilities upon which many users depend or where several critical infrastructures are concentrated in one location.
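
The sketch below illustrates the hotspot-identification step: criticality-weighted kernel density estimation over asset locations, with a simple percentile cutoff standing in for the paper's statistical-significance test. All locations and criticality values are synthetic.

```python
# Criticality-weighted KDE: turn discrete per-asset criticality values into
# a continuous surface and flag high-density cells. Synthetic data only.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)
n_assets = 500
xy = rng.uniform(0, 100, size=(2, n_assets))                     # locations (km)
criticality = rng.lognormal(mean=8.0, sigma=1.5, size=n_assets)  # users disrupted

# Weighted KDE produces a continuous criticality surface
kde = gaussian_kde(xy, weights=criticality)

gx, gy = np.meshgrid(np.linspace(0, 100, 200), np.linspace(0, 100, 200))
surface = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gx.shape)
hotspots = surface > np.percentile(surface, 95)  # top 5% of grid cells
print(f"{hotspots.mean():.1%} of grid cells flagged as criticality hotspots")
```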

Public Understanding of Ebola Risks: Mastering an Unfamiliar Threat

9 June 2017 - 3:29am

Ebola was the most widely followed news story in the United States in October 2014. Here, we ask what members of the U.S. public learned about the disease, given the often chaotic media environment. Early in 2015, we surveyed a representative sample of 3,447 U.S. residents about their Ebola-related beliefs, attitudes, and behaviors. Where possible, we elicited judgments in terms sufficiently precise to allow comparing them to scientific estimates (e.g., the death toll to date and the probability of dying once ill). Respondents’ judgments were generally consistent with one another, with scientific knowledge, and with their self-reported behavioral responses and policy preferences. Thus, by the time the threat appeared to have subsided in the United States, members of the public, as a whole, had seemingly mastered its basic contours. Moreover, they could express their beliefs in quantitative terms. Judgments of personal risk were weakly and inconsistently related to reported gender, age, education, income, or political ideology. Better educated and wealthier respondents saw population risks as lower; females saw them as higher. More politically conservative respondents saw Ebola as more transmissible and expressed less support for public health policies. In general, respondents supported providing “honest, accurate information, even if that information worried people.” These results suggest the value of proactive communications designed to inform the lay public's decisions, thoughts, and emotions, and informed by concurrent surveys of their responses and needs.

Benchmarking Discount Rate in Natural Resource Damage Assessment with Risk Aversion

6 June 2017 - 12:20pm

Benchmarking a credible discount rate is of crucial importance in natural resource damage assessment (NRDA) and restoration evaluation. This article integrates a holistic framework of NRDA with prevailing low discount rate theory, and proposes a discount rate benchmarking decision support system based on service-specific risk aversion. The proposed approach has the flexibility of choosing appropriate discount rates for gauging long-term services, as opposed to decisions based simply on duration. It improves injury identification in NRDA since potential damages and side-effects to ecosystem services are revealed within the service-specific framework. A real embankment case study demonstrates valid implementation of the method.
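
A worked illustration (with hypothetical numbers) of why the benchmarked rate matters: the present value PV = Σ_t L_t / (1 + r)^t of a constant annual ecosystem-service loss over 100 years, under three candidate discount rates.

```python
# Present value of a constant annual ecosystem-service loss under three
# candidate discount rates. Loss stream and rates are hypothetical.
annual_loss = 1_000_000  # $ per year of lost services, held constant

def present_value(rate, horizon=100):
    return sum(annual_loss / (1 + rate) ** t for t in range(1, horizon + 1))

for rate in (0.07, 0.03, 0.01):
    print(f"r = {rate:.0%}: PV of 100-year loss = ${present_value(rate):,.0f}")
# Moving from 7% to 1% multiplies the assessed damage roughly fourfold,
# which is why a service-specific choice of rate drives the restoration decision.
```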

Risk Perception and Risk Talk: The Case of the Fukushima Daiichi Nuclear Radiation Risk

5 June 2017 - 11:35am

Individuals' perceptions and their interpersonal communication about a risk event, or risk talk, can play a significant role in the formation of societal responses to the risk event. As individuals formulate their risk opinions and speak to others, risk information can circulate through their social networks and contribute to the construction of their risk information environment. In the present study, Japanese citizens' risk perception and risk talk were examined in the context of the Fukushima Daiichi nuclear radiation risk. We hypothesized and found that the risk information environment and risk literacy (i.e., competencies to understand and use risk information) interact to influence citizens' risk perception and risk talk. In particular, risk literacy tends to stabilize people's risk perceptions and their risk communications. Nevertheless, there were some subtle differences between risk perception and communication, suggesting the importance of further examination of interpersonal risk communication and its role in societal responses to risk events.

Modeling the Transmission of Measles and Rubella to Support Global Management Policy Analyses and Eradication Investment Cases

31 May 2017 - 2:26pm

Policy makers responsible for managing measles and rubella immunization programs currently use a wide range of vaccine formulations and immunization schedules. With endemic measles and rubella transmission interrupted in the region of the Americas, all five other regions of the World Health Organization (WHO) targeting the elimination of measles transmission by 2020, and increasing adoption of rubella vaccine globally, integrated dynamic disease, risk, decision, and economic models can help national, regional, and global health leaders manage measles and rubella population immunity. Despite hundreds of publications describing models for measles or rubella and decades of use of vaccines that contain both antigens (e.g., the measles, mumps, and rubella vaccine, or MMR), no transmission models for measles and rubella exist to support global policy analyses. We describe the development of a dynamic disease model for measles and rubella transmission, which we apply to 180 WHO member states and three other areas (Puerto Rico, Hong Kong, and Macao), representing >99.5% of the global population in 2013. The model accounts for seasonality, age-heterogeneous mixing, and the potential existence of preferentially mixing undervaccinated subpopulations, which create heterogeneity in immunization coverage that impacts transmission. Using our transmission model with the best available information about routine, supplemental, and outbreak response immunization, we characterize the historical transmission dynamics of measles and rubella and compare the results with available incidence and serological data. We show results from several countries representing diverse epidemiological situations to demonstrate the performance of the model. The model suggests relatively high measles and rubella control costs of approximately $3 billion annually for vaccination based on 2013 estimates, yet current control still leaves approximately 17 million disability-adjusted life years lost, with associated treatment, home care, and productivity losses of approximately $4, $3, and $47 billion annually, respectively. Combined with vaccination and other financial cost estimates, our estimates imply that the eradication of measles and rubella could save at least $10 billion per year, even without considering the benefits of preventing lost productivity and potential savings from reductions in vaccination. The model should provide a useful tool for exploring the health and economic outcomes of prospective opportunities to manage measles and rubella. Improving the quality of data available to support decision making and modeling should be a priority as countries work toward measles and rubella goals.
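
The sketch below shows just one ingredient the abstract names, seasonal forcing, in a minimal SIR model; the authors' model is far richer (age-heterogeneous mixing, undervaccinated subpopulations, 183 geographies). All parameter values are illustrative, loosely measles-like.

```python
# Minimal seasonally forced SIR sketch (Euler integration, daily steps).
# Parameters are illustrative, not the authors' calibrated values.
import numpy as np

beta0, amp = 500.0, 0.15      # mean transmission rate (/year), seasonal amplitude
gamma = 365.0 / 14.0          # recovery rate (~14-day infectious period)
mu = 1.0 / 70.0               # birth/death rate (~70-year lifespan)
dt, years = 1.0 / 365.0, 20
S, I, R = 0.05, 1e-4, 1 - 0.05 - 1e-4   # population fractions

for step in range(int(years / dt)):
    t = step * dt
    beta = beta0 * (1.0 + amp * np.cos(2.0 * np.pi * t))  # seasonal forcing
    new_inf = beta * S * I * dt
    S += mu * (1.0 - S) * dt - new_inf
    I += new_inf - (gamma + mu) * I * dt
    R += (gamma * I - mu * R) * dt

print(f"state after {years} years: S = {S:.4f}, I = {I:.2e}")
```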

A Big Data Analysis Approach for Rail Failure Risk Assessment

31 May 2017 - 2:25pm

Railway infrastructure monitoring is a vital task in ensuring rail transportation safety. A rail failure can have a considerable impact not only on train delays and maintenance costs, but also on passenger safety. In this article, the aim is to assess the risk of a rail failure by analyzing a type of rail surface defect called squats, detected automatically in the large volume of records from video cameras. We propose an image processing approach for automatic detection of squats, especially the severe types that are prone to rail breaks. We measure the visual length of the squats and use these measurements to model the failure risk. For the assessment of the rail failure risk, we estimate the probability of rail failure based on the growth of squats. Moreover, we perform severity and crack growth analyses to consider the impact of rail traffic loads on defects in three different growth scenarios. Failure risk estimations are provided for several samples of squats with different crack growth lengths on a busy track of the Dutch railway network. The results illustrate the practicality and efficiency of the proposed approach.
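
A sketch of the core measurement idea follows: threshold a grayscale rail-surface image, label dark connected regions, and take the longitudinal extent of the largest region as the squat's visual length. The image here is synthetic; the authors' pipeline on real video frames is substantially more involved.

```python
# Toy squat detection: intensity threshold + connected-component labeling,
# then along-track extent of the largest dark region as "visual length".
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(3)
image = rng.normal(0.8, 0.05, size=(100, 400))  # bright rail surface (rows x cols)
image[45:60, 150:210] = 0.3                     # dark synthetic "squat" defect

defect_mask = image < 0.5                       # simple intensity threshold
labels, n_regions = ndimage.label(defect_mask)  # connected-component labeling
if n_regions:
    sizes = np.bincount(labels.ravel())[1:]     # pixel count per region
    largest = int(np.argmax(sizes)) + 1
    rows, cols = np.nonzero(labels == largest)
    visual_length_px = cols.max() - cols.min() + 1  # along-track extent
    print(f"squat visual length: {visual_length_px} px")
    # Longer squats map to higher severity classes and, via crack-growth
    # analysis, to a higher estimated probability of rail break.
```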

The Mediated Amplification of a Crisis: Communicating the A/H1N1 Pandemic in Press Releases and Press Coverage in Europe

31 May 2017 - 2:25pm

In the aftermath of the A/H1N1 pandemic, health authorities were criticized for failures in their crisis communication efforts, and the media were accused of amplifying the pandemic. Given these criticisms, A/H1N1 provides a suitable case for examining risk amplification processes that may occur in the transfer of information from press releases to print news media during a health crisis. We integrated the social amplification of risk framework with theories of news decisions (news values, framing) in an attempt to contribute to existing research both theoretically and empirically. We conducted a quantitative content analysis of press releases disseminated by health and governmental authorities, as well as of the quality and tabloid press, in 10 European countries between March 2009 and March 2011. Altogether, 243 press releases, 1,243 quality press articles, and 834 tabloid press articles were coded. Consistent with research on news values and framing, the results suggest that quality and tabloid papers alike amplified A/H1N1 risks by emphasizing conflict and damage, presenting information in a more dramatized way, and using risk-amplifying frames to a greater extent and risk-attenuating frames to a lesser extent than press releases. To some extent, the quality and tabloid press differed in how risk information was presented: while tabloid articles seemed to follow the leading quality press with regard to the content and framing of health crisis coverage, they placed a stronger emphasis on drama and emotion in the way they presented information.

Modeling Precheck Parallel Screening Process in the Face of Strategic Applicants with Incomplete Information and Screening Errors

29 May 2017 - 11:17am

In security check systems, tighter screening increases the security level but also causes more congestion and longer wait times, which in turn create difficulties for the screeners. The Transportation Security Administration (TSA) Precheck Program was introduced to create fast lanes in airports with the goal of expediting passengers whom the TSA does not deem to be threats. In this lane, the TSA allows passengers to enjoy fewer restrictions in order to speed up the screening time. Motivated by the TSA Precheck Program, we study parallel queueing imperfect screening systems in which potential normal and adversary participants decide whether or not to apply to the Precheck Program. Approved participants are assigned to a faster screening channel based on a screening policy set by an approver, who balances passenger safety against congestion. There exist three types of optimal application strategy for a normal applicant, depending on whether the marginal payoff is negative or positive, or whether the marginal benefit equals the marginal cost. An adversary applicant would not apply when the screening policy is sufficiently strict or the number of utilized benefits is sufficiently small. The basic model is extended by considering (1) applicants' parameters that follow different distributions and (2) applicants with risk levels, where the approver determines the threshold value needed to qualify for Precheck. This article integrates game theory and queueing theory to study the optimal screening policy and provides insights into imperfect parallel queueing screening systems.
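
The sketch below illustrates the congestion trade-off with M/M/1 waiting times; the article's queueing-game model is richer, and every parameter here (arrival rates, service rates, value of time, amortized fee) is hypothetical.

```python
# M/M/1 sketch of the Precheck congestion trade-off and a normal applicant's
# marginal payoff from applying. All parameters are hypothetical.
def mm1_time_in_system(arrival_rate, service_rate):
    """Mean time in system (queueing + service) for a stable M/M/1 queue."""
    assert arrival_rate < service_rate, "queue must be stable"
    return 1.0 / (service_rate - arrival_rate)

total_arrivals = 90.0                  # passengers per hour
frac_precheck = 0.4                    # share approved into the fast lane
mu_regular, mu_precheck = 60.0, 120.0  # looser screening serves faster

w_regular = mm1_time_in_system(total_arrivals * (1 - frac_precheck), mu_regular)
w_precheck = mm1_time_in_system(total_arrivals * frac_precheck, mu_precheck)

value_of_time = 30.0                   # $/hour
cost_per_trip = 85.0 / 20              # application fee amortized over trips
marginal_payoff = value_of_time * (w_regular - w_precheck) - cost_per_trip
print(f"regular: {w_regular*60:.1f} min, Precheck: {w_precheck*60:.1f} min")
print(f"normal applicant's marginal payoff per trip: ${marginal_payoff:.2f}")
# A positive payoff corresponds to the regime in which applying is optimal
# for a normal applicant in the article's strategy taxonomy.
```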

Tracking and Analyzing Individual Distress Following Terrorist Attacks Using Social Media Streams

29 May 2017 - 11:16am

Risk research has theorized a number of mechanisms that might trigger, prolong, or potentially alleviate individuals' distress following terrorist attacks. These mechanisms are difficult to examine in a single study, however, because the social conditions of terrorist attacks are difficult to simulate in laboratory experiments and appropriate preattack baselines are difficult to establish with surveys. To address this challenge, we propose the use of computational focus groups and a novel analysis framework to analyze a social media stream that archives user history and location. The approach uses time-stamped behavior to quantify an individual's preattack behavior after an attack has occurred, enabling the assessment of time-specific changes in the intensity and duration of an individual's distress, as well as the assessment of individual and social-level covariates. To exemplify the methodology, we collected over 18 million tweets from 15,509 users located in Paris on November 13, 2015, and measured the degree to which they expressed anxiety, anger, and sadness after the attacks. The analysis resulted in findings that would be difficult to observe through other methods, such as that news media exposure had competing, time-dependent effects on anxiety, and that gender dynamics are complicated by baseline behavior. Opportunities for integrating computational focus group analysis with traditional methods are discussed.
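
A toy version of the baseline-versus-postattack comparison is sketched below: score each time-stamped post against small emotion lexica and compare a user's postattack intensity to their own preattack baseline. The word lists and posts are stand-ins for the LIWC-style categories and the 18-million-tweet corpus the study describes.

```python
# Pre/post comparison of lexicon-based emotion scores for one user's
# time-stamped posts. Lexica and posts are toy illustrations.
from datetime import datetime

LEXICA = {
    "anxiety": {"afraid", "worried", "nervous", "scared"},
    "anger":   {"angry", "furious", "outraged", "hate"},
    "sadness": {"sad", "grief", "mourning", "crying"},
}
ATTACK = datetime(2015, 11, 13, 21, 0)

posts = [  # (timestamp, text); toy data
    (datetime(2015, 11, 10, 9, 0), "lovely morning in paris"),
    (datetime(2015, 11, 13, 23, 5), "so scared and worried right now"),
    (datetime(2015, 11, 14, 8, 30), "mourning with the whole city, so sad"),
]

def score(text):
    words = set(text.lower().split())
    return {emo: len(words & lex) for emo, lex in LEXICA.items()}

pre = [score(text) for ts, text in posts if ts < ATTACK]
post = [score(text) for ts, text in posts if ts >= ATTACK]
for emo in LEXICA:
    base = sum(s[emo] for s in pre) / max(len(pre), 1)
    after = sum(s[emo] for s in post) / max(len(post), 1)
    print(f"{emo}: baseline {base:.2f} -> postattack {after:.2f} matches/post")
```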

Incremental Sampling Methodology: Applications for Background Screening Assessments

29 May 2017 - 11:16am

This article presents the findings from a numerical simulation study conducted to evaluate the performance of alternative statistical analysis methods for background screening assessments when data sets are generated with incremental sampling methods (ISMs). A wide range of background and site conditions is represented in order to test different ISM sampling designs. Both hypothesis tests and upper tolerance limit (UTL) screening methods were implemented following U.S. Environmental Protection Agency (USEPA) guidance for specifying error rates. The simulations show that hypothesis testing using two-sample t-tests can meet standard performance criteria under a wide range of conditions, even with relatively small sample sizes. Key factors that affect performance include unequal population variances and small absolute differences in population means. UTL methods are generally not recommended, owing to conceptual limitations of the technique when applied to ISM data sets from single decision units and to insufficient statistical power at the sample sizes standard for ISM.
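
The two screening approaches compared in the simulations are sketched below: a Welch two-sample t-test of site versus background ISM replicates, and a normal-theory 95/95 upper tolerance limit from the background data. The data are synthetic and the replicate counts merely illustrative.

```python
# Welch t-test and a normal-theory UTL95/95 on synthetic ISM replicate means.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
background = rng.normal(10.0, 2.0, size=5)  # ISM replicate means, background DU
site = rng.normal(12.0, 2.0, size=5)        # ISM replicate means, site DU

# Hypothesis test: is the site mean elevated above background?
t_stat, p_value = stats.ttest_ind(site, background, equal_var=False,
                                  alternative="greater")
print(f"Welch t = {t_stat:.2f}, one-sided p = {p_value:.3f}")

# 95% coverage / 95% confidence UTL = mean + k*s, k from the noncentral t
n = len(background)
k = stats.nct.ppf(0.95, df=n - 1, nc=stats.norm.ppf(0.95) * np.sqrt(n)) / np.sqrt(n)
utl = background.mean() + k * background.std(ddof=1)
print(f"background UTL95/95 = {utl:.2f}; site mean = {site.mean():.2f}")
# The simulations found the t-test meets USEPA error-rate criteria far more
# reliably than UTL screening at ISM-typical sample sizes.
```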

Bias-Corrected Estimation in Continuous Sampling Plans

29 May 2017 - 11:15am

Continuous sampling plans (CSPs) are algorithms used for monitoring and maintaining the quality of a production line. Although considerable work has been done on the development of CSPs, to our knowledge there has been no corresponding effort to develop estimators with good statistical properties for data arising from a CSP inspection process. Such estimates matter: information about the failure rate affects how the process is managed, both in selecting appropriate CSP parameters to keep the post-inspection failure rate at a suitable level and in setting policy, for example, whether the process should be completely inspected or shut down. This work was motivated by the development of sampling protocols for Australia's Department of Agriculture and Water Resources for monitoring the biosecurity compliance of incoming goods at international borders. In this study, we show that maximum likelihood estimation of the failure rate under a sampling scheme can be biased, depending on when estimation is performed, and we provide explicit expressions for the main contribution of the bias under various CSPs. We then construct bias-corrected estimators and confidence intervals and evaluate their performance in a numerical study.
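
To make the setting concrete, the sketch below simulates a CSP-1 plan (100% inspection until a clearance run of conforming items, then fractional sampling, reverting on any defect) and the naive failure-rate estimate; any gap from the true rate in finite runs reflects the kind of bias the article derives expressions for and corrects. All parameters are illustrative.

```python
# Simulated CSP-1 inspection and the naive failure-rate MLE
# (defects found / items inspected). Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(5)
p_true = 0.02   # true process failure rate
i_clear = 10    # conforming run length needed to leave 100% inspection
frac = 0.2      # sampling fraction during partial inspection

def run_csp1(n_items):
    inspected = defects = run = 0
    full_inspection = True
    for _ in range(n_items):
        is_defective = rng.random() < p_true
        if full_inspection or rng.random() < frac:
            inspected += 1
            if is_defective:
                defects += 1
                full_inspection, run = True, 0   # revert to 100% inspection
            elif full_inspection:
                run += 1
                if run >= i_clear:
                    full_inspection = False      # switch to fractional sampling
    return defects / inspected

estimates = [run_csp1(500) for _ in range(1000)]
print(f"true p = {p_true}, mean naive MLE = {np.mean(estimates):.4f}")
```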

Bayesian Quantile Impairment Threshold Benchmark Dose Estimation for Continuous Endpoints

29 May 2017 - 11:15am

Quantitative risk assessment often begins with an estimate of the exposure or dose associated with a particular risk level, from which exposure levels posing low risk to populations can be extrapolated. For continuous exposures, this value, the benchmark dose, is often defined by a specified increase (or decrease) from the median or mean response at no exposure. This method of calculating the benchmark dose does not take the response distribution into account and, consequently, cannot be interpreted in terms of probability statements about the target population. We investigate quantile regression as an alternative to median or mean regression. By defining the dose–response quantile relationship and an impairment threshold, we specify the benchmark dose as the dose associated with a specified probability that the population will have a response equal to or more extreme than the impairment threshold. In addition, in an effort to minimize model uncertainty, we use Bayesian monotonic semiparametric regression to define the exposure–response quantile relationship, which gives the model the flexibility to estimate the quantal dose–response function. We describe this methodology and apply it to both epidemiology and toxicology data.
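
A minimal frequentist stand-in for the benchmark-dose idea is sketched below (the article uses Bayesian monotonic semiparametric quantile regression; this sketch uses plain linear quantile regression): fit the 10th-percentile dose-response line and solve for the dose at which it crosses an impairment threshold. The data, quantile, and threshold are hypothetical.

```python
# Quantile-regression benchmark dose sketch: dose where the fitted 10th
# percentile of response crosses an impairment threshold. Synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n = 400
dose = rng.uniform(0, 10, n)
# Response declines with dose, with a heavier lower tail at high dose
response = 100 - 3.0 * dose + rng.normal(0, 5 + 0.5 * dose, n)
df = pd.DataFrame({"dose": dose, "response": response})

fit = smf.quantreg("response ~ dose", df).fit(q=0.10)
b0, b1 = fit.params["Intercept"], fit.params["dose"]

impairment_threshold = 75.0  # responses at or below this count as impaired
# Solve b0 + b1 * dose = threshold for the benchmark dose
bmd = (impairment_threshold - b0) / b1
print(f"10th-percentile line: {b0:.1f} + {b1:.2f}*dose; BMD ~ {bmd:.2f}")
```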

Industrial Safety and Utopia: Insights from the Fukushima Daiichi Accident

29 May 2017 - 11:15am

Feedback from industrial accidents is provided by various state, or even international, institutions, and lessons learned can be controversial. However, there has been little research into organizational learning at the international level. This article helps to fill that gap through an in-depth review of official reports on the Fukushima Daiichi accident published shortly after the event. We present a new method for analyzing the arguments contained in these voluminous documents. Taking an intertextual perspective, the method focuses on the accident narratives, their rationale, and the links between “facts,” “causes,” and “recommendations.” The aim is to evaluate how far the findings of the various reports are consistent with (or contradict) “institutionalized knowledge,” and to identify the social representations that underpin them. We find that although the scientific controversy surrounding the results of the various inquiries reflects different ethical perspectives, they are integrated into the same utopian ideal. The involvement of multiple actors in this controversy raises questions about the public construction of epistemic authority, and we highlight the special status given to the International Atomic Energy Agency in this regard.
