sparcboxbuck
Not for an epidemiologist. This branch of science exists for a reason, one of which is that it's more reliable than the message-board opinions of people who haven't studied it.
They are not infallible. So far, some of the "top" scientists have been so far off with their predictions that I would be embarrassed if I were them. And they factored in "social distancing" etc.
Without question, a number of the forecasts have been incredibly inaccurate. I think the uncertainty in the core parameters that feed the models is clearly evident from the lack of forecasting accuracy. I don't know that I'd say the models are bad; given the sources and quantity of data, the models reflect(ed) the best information available at the time. I certainly do not begrudge the epidemiologist community the variation and inaccuracy in their forecasts -- at least in the earliest stages.
That said, I also think the forecasts could have been shared with the public in ways that would have gone a long way toward increasing confidence in their message. There have been a few good examples of this, too, so I'm not pointing fingers at everyone. Specifically, when modeling situations where there are significant questions about the reliability of the data, it's not uncommon to run simulations in which the inputs are varied across the range of plausible values rather than fixed at a single best guess. From those runs, confidence bands can be drawn around the point projections. That view simultaneously provides a best-case / worst-case look as well as the single best guess of the forecasted metric.
The problem is that the public has been given point estimates from these forecasts that have a near-zero chance of being exactly right (think: the area under a continuous density over an infinitesimally small interval goes to 0), whereas a much better representation would have been confidence bands around the point estimates reflecting a range of probabilistic outcomes... say 50%, 75% and 95%. Had the public been presented with ranges -- which, frankly, I think is what the epidemiologists and others contributing to the decisions are actually using -- the public would have much greater confidence in the information being shared.
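To make the "vary the inputs, report bands" idea concrete, here is a minimal sketch in Python (using NumPy). Everything in it is an illustrative assumption -- the simple exponential-growth model, the ranges for the effective reproduction number and serial interval, and the starting case count are made up for the example and are not anyone's actual forecast. It runs many simulations with the uncertain parameters drawn from plausible ranges, then summarizes the ensemble as 50%, 75% and 95% bands around the median instead of a single number.

```python
import numpy as np

# Illustrative sketch only: project daily new cases with a simple
# exponential-growth model whose parameters are uncertain. The parameter
# ranges and starting case count below are assumptions for the example.
rng = np.random.default_rng(42)

N_SIMS = 5000          # number of Monte Carlo runs
HORIZON = 60           # days to project
CURRENT_CASES = 500    # hypothetical observed daily cases at day 0

def simulate_one(rng):
    """One trajectory with parameters sampled from assumed plausible ranges."""
    r_eff = rng.uniform(0.9, 1.8)            # effective reproduction number
    serial_interval = rng.uniform(4.0, 7.0)  # days between case generations
    growth_rate = np.log(r_eff) / serial_interval
    days = np.arange(1, HORIZON + 1)
    # Multiplicative noise so runs with identical parameters still differ a bit.
    noise = rng.lognormal(mean=0.0, sigma=0.1, size=HORIZON)
    return CURRENT_CASES * np.exp(growth_rate * days) * noise

trajectories = np.array([simulate_one(rng) for _ in range(N_SIMS)])

# Summarize the ensemble as central bands instead of a single point estimate.
# Each band is the interval containing the stated share of simulated outcomes.
bands = {
    "50%": np.percentile(trajectories, [25, 75], axis=0),
    "75%": np.percentile(trajectories, [12.5, 87.5], axis=0),
    "95%": np.percentile(trajectories, [2.5, 97.5], axis=0),
}
median = np.percentile(trajectories, 50, axis=0)

day = HORIZON - 1
print(f"Day {HORIZON} projection of daily cases:")
print(f"  median:   {median[day]:,.0f}")
for label, (lo, hi) in bands.items():
    print(f"  {label} band: {lo[day]:,.0f} - {hi[day]:,.0f}")
```

Reported this way, the audience sees both the best guess and how wide the plausible range really is, which is exactly the framing argued for above.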
Thinking back to one of the now almost laughable forecasts... the initial projections in Ohio. They picked a point estimate that was wildly wrong, and even on the surface at the time it did not pass the red-face test. Think of how that information would have been consumed had they offered an expected range based on what they knew at the time.
That said, my guess is that the 'worst case scenario' may have been intentionally selected to get everyone's attention and to drive urgency. If that's the case, it was a matter of lies, damn lies and statistics.
Either way, here we are... and because of how things were communicated -- whether intentionally or because it was assumed the public knew the forecasts were rough estimates -- many people have lost trust in what is likely good information... and as we go deeper into this mess, the information gets better each day while confidence does not recover.