One to two million dead. Several million infected. These COVID-19 headlines have rattled the American people and their policymakers. As a result, the country is effectively shut down, with a whopping 16.8 million people losing their jobs in just the last few weeks. Some estimate that unemployment eventually could reach 32%. And others warn that we could see record-setting bankruptcies in the next 12 months if the federal stimulus package doesn't work as intended.

Most Americans are still trying to wrap their heads around what is happening and whether this will ever actually end. One would think that this once-in-a-lifetime COVID-19 chaos and the predictions of doom are based on reliable science and data.

Sadly, that may not be the case. In fact, perhaps the most outrageous part of the COVID-19 shutdown is that it is largely based on epidemiological models that would be considered "junk science" if offered in a courtroom to prove, for example, a fatality rate. "Junk science," of course, is a loaded term that has a special meaning for lawyers, and especially litigators.

The litigation process is aimed at finding the truth. And in that truth-finding process, experts are often called upon to provide testimony that helps the decision-maker—usually a jury of laypersons—reach the truth and render a verdict. But expert testimony isn't admissible simply because it's uttered by a scientist with impressive credentials. Instead, an expert's opinions must meet certain criteria to ensure that the information given to the decision-maker is reliable.

Judges play a critical role in that process, serving as gatekeepers who decide whether an expert's opinion is sufficiently reliable for a jury to consider. To assess reliability, judges evaluate a variety of factors, including whether a theory or methodology can be and has been tested; whether it has been subjected to peer review; whether there is a known rate of error; and whether the theory or methodology is generally accepted in the relevant scientific community.

In general, epidemiological models are used in U.S. courts to establish a cause-and-effect link between, for example, an allegedly harmful drug or product and a particular disease. The models are almost always based on published, peer-reviewed studies that collect data over a period of several years.

But the COVID-19 models that have been used to justify the U.S. shutdown appear to come nowhere near the level of rigor demanded in our legal system. We need look no further than the statements of the modelers themselves. "Don't Believe the COVID-19 Models: That's not what they're for," reads one headline from The Atlantic. "Why It's So Freaking Hard To Make A Good COVID-19 Model," reads another from FiveThirtyEight.

The admissions in these articles are striking and very disturbing, because the government-mandated shutdown that will continue to harm the lives of millions of Americans is based in large measure on these unreliable models. "Right answers are not what epidemiological models are for," and "[t]he models are not snapshots of the future," writes Zeynep Tufekci in The Atlantic. That is a stunning statement, because Americans are bombarded every day with doomsday scenarios that treat these very models as, in fact, snapshots of the future.

This is not to say that Americans should ignore the precautions that have been outlined by government officials. The point here is that the predictions that are so often emblazoned on televised charts and graphs must be put in the proper context.

For example, the inputs to the current COVID-19 models that produce doomsday predictions never have been peer-reviewed or subjected to the rigorous scientific scrutiny that a court would demand. The reasons are obvious: This is an emergency, and there is no time to engage in the normal course of scientific inquiry and validation. The modelers are not to blame for this. It's just the reality that comes with attempting to model a real-time event that has too many moving parts and too many unknowns.

The writers at FiveThirtyEight have identified many of the unknowns and what they call "fuzzy" inputs to these models, including: variable fatality and infection rates; symptomaticity ratios (how many people are symptomatic versus asymptomatic) that are simply "guess[es]" at this point; non-uniform rates of contact; a "moving target" rate of transmission per contact; and variable estimates of the duration of infectiousness.
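To see how much these fuzzy inputs matter, consider the toy simulation below. It is a minimal, textbook-style SIR (susceptible-infected-recovered) sketch written for this article, not any model actually used by policymakers, and every parameter value in it is a hypothetical stand-in for the disputed inputs listed above.

```python
# A minimal, textbook-style SIR sketch -- purely illustrative, not any
# official COVID-19 model. Every parameter value is hypothetical.

def projected_deaths(r0, infectious_days, fatality_rate,
                     population=330e6, initially_infected=10_000, days=365):
    """Run a discrete-time SIR epidemic; return the implied death toll."""
    beta = r0 / infectious_days    # transmissions per infectious person per day
    gamma = 1.0 / infectious_days  # daily rate of recovery (or removal)
    s, i, r = population - initially_infected, initially_infected, 0.0
    for _ in range(days):
        new_infections = beta * s * i / population
        new_removals = gamma * i
        s -= new_infections
        i += new_infections - new_removals
        r += new_removals
    return fatality_rate * (r + i)  # deaths among everyone ever infected

# Two input sets, both inside ranges that were being debated at the time:
low = projected_deaths(r0=2.0, infectious_days=5, fatality_rate=0.005)
high = projected_deaths(r0=2.8, infectious_days=7, fatality_rate=0.02)
print(f"low-end inputs:  {low:,.0f} deaths")   # roughly 1.3 million
print(f"high-end inputs: {high:,.0f} deaths")  # roughly 6 million
```

Two sets of inputs that sound equally plausible produce projections millions of deaths apart. That gap is the whole problem: until the inputs are pinned down, any single run of the model is a guess dressed up as a forecast.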

Of all these inputs, the most important, and the most widely reported, is the fatality rate. Remember, COVID-19 has been reported to be deadly, which most Americans hear as: if you get it, you might die. That is a very serious and scary concept.

However, when assessing the fatality rate of COVID-19, the data is hopelessly unreliable. Nina Schwalbe documents this problem in the World Economic Forum, concluding that "we are massively over projecting the percent of infected people who die of COVID-19. It's a dangerous message that is causing fear … ."

At the same time, others are reporting just the opposite—that the death toll from COVID-19 is actually higher than the models suggest. See Sarah Kliff and Julie Bosman, "Official Counts Understate the U.S. Coronavirus Death Toll," New York Times (April 5, 2020). Kliff and Bosman report that the true fatality rate has not been reliably captured due to "inconsistent protocols, limited resources and a patchwork of decision-making from one state or county to the next." They point out that there currently is no uniform protocol for reporting deaths related to COVID-19. Tellingly, the CDC, according to Kliff and Bosman, instructed "officials to report deaths where the patient has tested positive or, in an absence of testing, 'if the circumstances are compelling within a reasonable degree of certainty.'" That could mean death from pneumonia is actually reported as death from COVID-19, or vice versa.
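The fatality-rate confusion is easy to make concrete with a bit of arithmetic. The numbers below are invented for illustration only; the point is that the same death count yields wildly different fatality rates depending on what goes into the denominator, and, per the Times reporting, even the numerator is uncertain.

```python
# Invented numbers, for illustration only. The same death count implies
# very different fatality rates depending on the denominator -- and, per
# Kliff and Bosman, the death count itself (the numerator) is uncertain.

deaths_attributed = 2_000     # hypothetical deaths attributed to COVID-19
confirmed_cases = 50_000      # hypothetical positive test results
actual_infections = 500_000   # hypothetical true infections, mostly untested

cfr = deaths_attributed / confirmed_cases    # case fatality rate
ifr = deaths_attributed / actual_infections  # infection fatality rate

print(f"case fatality rate:      {cfr:.1%}")  # 4.0% -- alarming
print(f"infection fatality rate: {ifr:.1%}")  # 0.4% -- ten times lower
```

A model fed the 4 percent figure will project roughly ten times as many deaths as one fed the 0.4 percent figure, and if attribution rules shift the death count itself, both numbers move again.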

All of this means that there is no way to know now whether any of the models that have been used to shut down our society are reliable at all. According to Tufekci, "[w]ith this novel coronavirus, there are a lot of things we don't know because we've never tested our models, and we have no way to do so." And so it is that we get the sobering message from David Adam in Nature, that "[t]he true performance of simulations in this pandemic might become clear only months or years from now."

Perhaps the models upon which policymakers have been relying don't need to satisfy the standards imposed by U.S. courts. As Tufekci observes, these models may help us avoid potentially calamitous results by spurring us to act to chop the extreme branches off the models' decision tree (through social distancing, for example). Generally speaking, we don't disagree with Tufekci when she says that we should not "drown ourselves in endless discussions about the error bars, the clarity around parameters, the wide range of outcomes, and the applicability of the underlying data." In other words, let's not focus on whether these models are reliable truth-tellers—as required in our judicial system—but rather on whether they help us avoid catastrophic extremes.

But is that a sufficient basis upon which to shut down an entire society and put millions of people out of work for perhaps several months? The bottom line is that the data is all over the place, with no uniform way of accurately or reasonably measuring COVID-19's effects. Economists agree, which makes deciding when to re-open the economy that much more difficult.

The common refrain reverberating through social media and from politicians and media pundits is to follow the "science" and the "facts." But the epidemiological models, as we currently understand them, aren't reliable "science," and they don't convey reliable "facts," as the experts in this field have openly admitted and as the constantly changing model predictions confirm. Instead, the models rest on an endless pile of subjective assumptions that may or may not hold in the real world and that seem to change every day.

Just recently, for instance, Adam Raymond of Intelligencer reported that the University of Washington's Institute for Health Metrics and Evaluation (IHME) once again changed its predictions of COVID-19 deaths based on new data. The previous IHME predictions—100,000 to 240,000 dead—were flashed on our television screens by policymakers who used the model to impose a draconian economic shutdown.

But just in the past few days, the IHME model changed again—twice—showing fewer and fewer estimated COVID-19 deaths. Make no mistake, the model is likely to change again, and no one knows with any reasonable degree of certainty whether the death estimates will go back up or go back down.

It may be that the IHME and other models are acceptable even though they don't rise to the level of true science, because they are all we have in a world of imperfect information. But if that's the case, the politicians, pundits, and media outlets floating the doomsday estimates should level with the American people about the limitations of the models rather than scare everyone with big death-and-disease numbers. We know, of course, that they won't do that, because that doesn't make for a good headline or good ratings—or, as Tufekci correctly put it, doesn't jibe with the media's "horse-race, he-said-she-said scripts."

There is a lot at stake in these times of uncertainty. We have to balance protecting physical health with preserving economic health and, indeed, societal order as we know it. Of course, those goals are not mutually exclusive. The balancing act is extremely difficult, and the outcomes are momentous when many are dying, millions have lost their jobs, and businesses of all shapes and sizes face an uncertain future. But widely and repeatedly publicizing doomsday predictions from models that are, at this critical point in time, inherently unreliable for that purpose makes balancing these competing goals all but impossible. The model mania needs to stop.

Ugo Colella and John J. Zefutie Jr. are co-founders of Colella Zefutie LLC, a boutique law firm based in Washington, D.C. with offices in New York, New Jersey, and Northern Virginia.