The uncertainty of infectious disease outbreaks is underestimated
Uncertainty can be classified as either aleatoric (intrinsic randomness) or epistemic (imperfect knowledge of parameters). The majority of frameworks assessing infectious disease risk consider only epistemic uncertainty. We only ever observe a single epidemic, and therefore cannot empirically determine aleatoric uncertainty. Here, for the first time, we characterise both epistemic and aleatoric uncertainty using a time-varying general branching process. Our framework explicitly decomposes aleatoric variance into mechanistic components, quantifying the contribution to uncertainty produced by each factor in the epidemic process, and how these contributions vary over time. The aleatoric variance of an outbreak is itself a renewal equation, in which past variance affects future variance. Surprisingly, superspreading is not necessary for substantial uncertainty, and profound variation in outbreak size can occur even without overdispersion in the distribution of the number of secondary infections. Aleatoric forecasting uncertainty grows dynamically and rapidly, so forecasts based only on epistemic uncertainty significantly underestimate it. Failure to account for aleatoric uncertainty therefore misleads policymakers about the true, substantially higher, extent of potential risk. We demonstrate our method, and the extent to which potential risk is underestimated, using two historical examples: the 2003 Hong Kong severe acute respiratory syndrome (SARS) outbreak, and the early 2020 UK COVID-19 epidemic. Our framework provides analytical tools to estimate epidemic uncertainty with limited data, to provide reasonable worst-case scenarios and assess both epistemic and aleatoric uncertainty in forecasting, and to retrospectively assess an epidemic and thereby provide a baseline risk estimate for future outbreaks. Our work strongly supports the precautionary principle in pandemic response.
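As a minimal illustration of the aleatoric component described above (a simplified sketch, not the paper's time-varying general branching process framework), the following Python snippet simulates a basic branching process with Poisson offspring, i.e. no overdispersion and no superspreading. The reproduction number R, the number of generations, and the simulation count are arbitrary assumed values; with the parameters held fixed there is no epistemic uncertainty, so the spread of final outbreak sizes across repeated runs is purely aleatoric.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_outbreak(R, generations=10, initial_cases=1):
    """Simulate a simple branching process with Poisson offspring
    (no overdispersion) and return the cumulative outbreak size."""
    cases = initial_cases
    total = initial_cases
    for _ in range(generations):
        if cases == 0:
            break  # outbreak has gone extinct
        # Each current case independently produces Poisson(R) secondary cases.
        cases = rng.poisson(R, size=cases).sum()
        total += cases
    return total

# Fix the reproduction number exactly (no epistemic uncertainty) and repeat
# the stochastic simulation: the variation in outcomes is purely aleatoric.
R = 1.5
sizes = np.array([simulate_outbreak(R) for _ in range(10_000)])

print(f"mean outbreak size after 10 generations: {sizes.mean():.0f}")
print(f"5th-95th percentile range: {np.percentile(sizes, [5, 95])}")
print(f"fraction of outbreaks that died out early (<10 cases): "
      f"{(sizes < 10).mean():.2f}")
```

Even in this deliberately plain Poisson model, repeated runs with identical parameters range from early extinction to very large outbreaks, consistent with the abstract's point that substantial aleatoric uncertainty does not require overdispersion in the secondary-infection distribution.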