The Early Warning Project strives to use the best methods to assess risks of mass atrocities around the world, so we are happy to see a new paper (PDF) from the Good Judgment Project (GJP)—a multi-year, U.S. government-funded forecasting experiment that's set to conclude later this year—getting lots of press coverage this week, as Kelsey Atherton describes for Popular Science (here).
The results from this tournament have surprised those accustomed to hearing that people, even experts, are generally poor forecasters. As the authors of the paper on GJP's findings report, "Overall, forecasters were significantly better than chance."
By design, the Early Warning Project’s Expert Opinion Pool uses many of the methods and strategies the paper discusses. For starters, the opinion pool approach—in which individuals are asked to estimate the probability of various events' occurrence and can update those forecasts as often as they like—is one of the methods GJP uses to elicit and combine forecasts from its participants.
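To make the opinion-pool idea concrete, here is a minimal sketch of one common way such pools are aggregated: take each participant's most recent probability estimate and combine them (the median is used here). The participant names, numbers, and the choice of median are illustrative assumptions, not a description of the Early Warning Project's actual aggregation method.

```python
from statistics import median

# Hypothetical opinion pool: each participant submits probability estimates
# over time and may update them as often as they like. All names and values
# below are invented for illustration.
forecasts = {
    # participant -> chronological list of (day, probability) updates
    "analyst_a": [(1, 0.20), (15, 0.35)],
    "analyst_b": [(3, 0.10)],
    "analyst_c": [(2, 0.40), (20, 0.30)],
}

def aggregate(pool):
    """Combine the pool by taking the median of each participant's
    most recent probability estimate."""
    latest = [updates[-1][1] for updates in pool.values()]
    return median(latest)

print(aggregate(forecasts))  # median of 0.35, 0.10, 0.30 -> 0.30
```

Using only the latest forecast per participant is what lets the aggregate shift as members update in response to news; a median (rather than a mean) is one simple way to keep a single outlying estimate from dominating the pool.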
GJP also found that forecasters with “political knowledge” tend to be more accurate than those who don't closely follow the news. For us, this means recruiting participants with subject-matter expertise and regional knowledge to answer questions about things like which countries are most likely to see an onset of mass killing in the coming year and whether the government of Colombia and the FARC will strike a peace deal.
GJP also reports that forecasters working in teams were about 10 percent more accurate than forecasters working alone. Our opinion pool doesn't implement this idea exactly, but we do mimic some of its important features. In the study, team members were encouraged to collaborate, debating and sharing their forecasts with one another. In our opinion pool, participants can see and track the aggregate forecast, and they can discuss their forecasts and relevant news with other members of the pool in comment threads attached to each question.
The article points to further findings that are also relevant to our work, as Max Nisen summarized for The Atlantic (here).
Based on these findings, we are looking to offer training in probabilistic reasoning and in avoiding common judgmental errors later this year. As our opinion pool grows, we also hope to experiment with explicit teaming to see if it boosts our pool's accuracy, too.
If you are interested in participating in our Expert Opinion Pool, we can be reached at ewp@ushmm.org.