Three key lessons to improve risk-based decision-making
When the Start Network began anticipating crises and releasing funds before crisis onset in 2016, we aimed to catalyse a culture shift to a more risk-aware humanitarian system. By encouraging Start Network members to take risks and act in advance of a crisis, we expected a steep learning curve in terms of our forecasting skill. Some crises would happen as predicted while others would not.
In 2019, we commissioned our first evaluation of crisis anticipation at the Start Network. We were keen to reflect on our risk-taking, look at which hazards we needed to invest in to improve our skill, and learn how to better measure the quality of anticipation alert notes submitted to the Start Fund. A key element of this was to look back across anticipation alerts and see where our forecasted emergencies had happened as expected and what kind of differences we had seen.
The evaluation looked at fourteen anticipatory projects responding to thirteen different forecasted crises. It concluded that half of those forecasted crises had not occurred, which prompted a wider review of all projects where data was available, to determine whether their forecasts were correct. To do this, we used information submitted by implementing agencies once their projects had finished.
We looked at data from 37 projects, implemented across 24 different forecasted emergencies. Thirty-six percent of forecasted emergencies took place as predicted or with a more significant impact, meaning 64% either did not occur or occurred with less intensity. While the Start Network saw a few ‘false alarms’ as a characteristic of a healthy, risk-taking humanitarian system, the number of near misses seemed high. Looking into the data, we learned three key points that will inform our approach moving forward:
1. Whether or not a forecast was correct is not a simple ‘yes’ or ‘no’. Some of the variations we see are:
- The crisis happened but it was less (or more) severe, for example, there were rains but they weren’t torrential
- The political event happened, but it did not have the forecasted humanitarian impacts
- The crisis happened but not at the scale anticipated, or the impact happened in a different location
Understanding these nuances helps us to better understand common pitfalls of forecasting different hazards.
2. Each type of emergency, for example a disease outbreak, forced displacement, or river flooding, requires a very different forecast, and the reasons why these forecasts may not come to pass differ significantly. Thirty-four percent of the forecasts that did not happen as expected were for disease outbreaks, where the outbreak not occurring may in part be due to the success of Start Network projects. For example, the evaluation included a case study of alert 308, for cholera in Somalia, which indicated that project activities limited the disease's spread.
In all cases, we hope a forecasted emergency will not occur and harm will be avoided. By acting before a crisis, we are ultimately at the mercy of probability: a 100% correct forecast rate would be highly unlikely, and would itself raise questions about the quality of the data we collect on forecasted crises and about what happens next.
3. ‘False alarms’ are accepted more easily where the amount of funding allocated to a forecasted emergency is in proportion to the level of risk and underlying vulnerability is high. Eighty-five percent of Start Network members surveyed for the evaluation agreed or strongly agreed that funding levels were appropriate for the level of risk (only those who had direct alert experience answered this question).
We continue to work with decision makers to support them in allocating funds for anticipated crises effectively. With the exception of disease outbreaks, the Start Network cannot influence how a forecasted scenario plays out. Our objective is to ensure the highest quality information is provided to decision makers, so that our decisions are defensible and uncertainty is managed.
We have experimented with new ways to manage uncertainty. For alert 205, anticipating the forced return of refugees from Pakistan to Afghanistan, decision makers allocated funds knowing the crisis might not take place. Funds were transferred on the basis that, if arrival numbers were low, the funds would be recouped; this was achieved through close cooperation between members and the central Start Network team. We have also built an acceptance of uncertainty into the process of allocating funds, by encouraging Start Network members to consider multiple scenarios for predicted crises and to state the confidence they have in their forecasts.
The ‘no regrets’ approach has taken shape in practice, usually involving small investments that are of value to communities in any scenario. Examples include sustainable infrastructure to divert floods and landslides (gabion walls, alert 173, Tajikistan), training health staff in Ebola case management (alerts 254 and 283, anticipation of Ebola in Uganda and Rwanda), and training staff in emergency assessments (alert 175, anticipation of election violence in Kenya). All of these investments have value beyond the initial spike in risk they were designed to mitigate.
The Start Network will continue to experiment with different types of forecasting, risk analysis and ways of making risk-based decisions. This will likely result in further ‘false alarms’, accurate forecasts, and learning that will contribute toward a shift to a more risk-aware humanitarian system, where communities access support before a crisis hits.