It might seem intuitively obvious that the most important ingredient for ethics in any profession is human morality. And on one level that’s true. It’s your belief in right and wrong that motivates you to care about ethics in the first place. But on another level, it can be a devastating hindrance.
In the field of analytics, for example, the goal is to dispassionately analyze data to uncover truth. That truth is then used to take action in the world. As a human being, you may have some idea about what a “good action” would be. And you might be right. But, as an analyst, if you allow your idea of “good” to influence your interpretation of data, you’ve introduced bias, and that’s not good.
One example with worldwide, life-and-death consequences is the analysis surrounding the Covid-19 pandemic.
To illustrate the implications of morality in analysis, let’s consider a common and deeply held moral position regarding, in this case, vaccines. Here’s a hypothetical quote based on a moral judgment:
“Thank goodness the vaccines have arrived. They’re safe and effective, but they’ll only get us back to normal if almost everyone participates, so we need to convince people to take them. Anything that sows seeds of doubt is not good and will only embolden those who seek to misinform and frighten the public away from the vaccines.”
Whether you agree or disagree with this sentiment, can you see how holding this moral position could influence analysis? Moral conviction leads to bias, which leads to misleading analysis, which hardens the conviction, and so goes the cycle, thus often thwarting the very principles the moral judgment is based upon.
Consider the analysis included in this FDA Briefing Document, prepared for Moderna's request for Emergency Use Authorization (EUA). If you were asked to analyze this information in detail and report your findings in summary, how would you do it?
If you were to analyze the data with the belief that your job is to convince your audience that the vaccine is safe and effective, then you might quickly scan the document and point to the data in Table 9, indicating 94.5% efficacy, accepting this conclusion without critical analysis and ignoring potential issues with the claim. And you might summarize side effects as “mild”, or otherwise downplay them, while overlooking Table 23, which indicates that 17.4% of vaccine recipients in the 18–64 age group had at least one severe (grade 3) systemic adverse reaction within 7 days of the second injection.
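A first step in critically examining a headline efficacy figure is to reproduce it from the underlying case counts. Below is a minimal sketch of the standard vaccine-efficacy calculation; the counts used are illustrative placeholders, not the actual numbers from the Moderna trial:

```python
def vaccine_efficacy(cases_vax, n_vax, cases_placebo, n_placebo):
    """Vaccine efficacy = 1 - (attack rate in vaccinated / attack rate in placebo)."""
    attack_rate_vax = cases_vax / n_vax
    attack_rate_placebo = cases_placebo / n_placebo
    return 1 - attack_rate_vax / attack_rate_placebo

# Hypothetical counts, chosen only to illustrate the arithmetic:
# 5 cases among 15,000 vaccinated vs. 90 cases among 15,000 placebo recipients.
ve = vaccine_efficacy(5, 15_000, 90, 15_000)
print(f"{ve:.1%}")  # 94.4%
```

Reproducing the figure is only the start; the critical questions concern what counts as a “case,” how long participants were followed, and whether the confidence interval around the point estimate is wide.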
It may well turn out that the very common severe, systemic side effects will wear off after a few days and there will be few additional adverse reactions in the medium to long term. But to claim or imply that severe side effects are rare is simply untrue. And to uncritically accept the claims of effectiveness without proactively attempting to poke holes in them is unwise and, for an analyst or any other professional who should know better, like a journalist, a medical clinician, or a researcher, it’s unethical.
Yet this is exactly the bias you will see in nearly all analyses and media reports. Once the “correct moral position” reached critical mass, it quickly cascaded to near unanimity, reinforcing the belief and further biasing any analysis. Groupthink took over.
Knowing that severe systemic side effects were very common in the clinical trials, how will analysts form and express opinions on the safety and effectiveness of the vaccines – still in experimental status – as they are injected into millions of people outside of those trials? Will we, as analysts and other professionals, just get a general sense of the results from the mass media and carefully selected experts and use that vague perception to form firm beliefs? Or will we proactively evaluate news reports with a critical eye and keep our opinions held loosely as we seek the underlying data, ask appropriate questions about the quality of that data, and form our own opinions on what’s true and what isn’t? Will we even notice if reliable data on safety and effectiveness is or is not available?
Once an analyst has formed an opinion based on an ethical – that is, dispassionate and objective – analysis of the data, there is nothing wrong with using this information to passionately advocate a moral position. It’s when it works in reverse that there’s a problem. Once you’ve emotionally invested in a point of view, it’s difficult to change course, and it’s easy to be selective, consciously or not, looking for evidence to support a moral stance.
You can apply this same idea to many questions related to Covid-19: Does hydroxychloroquine work against the illness, or is it dangerous and ineffective? Have Covid-19 deaths been over-counted, under-counted, or are the official figures largely accurate? Do masks slow the spread of the virus?
Do you believe you know the answers to these questions? If you do, why do you hold these beliefs? Is it because you’ve examined the data and the best evidence on all sides of the issue? Or is it because you believe there are morally correct answers based on what “everybody knows” to be true?
We could apply this same thought process, of course, to any number of major news stories or to everyday business decisions that have moral implications, whether in health care, human resources, finance, criminal justice, or any other field. In any of these areas, to ethically evaluate data, you must put your ideas of morality aside. Find the truth and tell the truth, to the best of your ability.
It may seem counterintuitive and even wrong to suppress moral judgments when seeking to introduce a strong ethical component into analysis. But that’s exactly what’s needed. As an analyst, truth is your ethical obligation. As a human being, guided by morality, what you then do with that truth is up to you.
Update 2/1/2021 – Here is an important example of extreme bias, in this case from the US Centers for Disease Control and Prevention (CDC). The bias is so profound and the analysis so flawed that it’s difficult to believe it was published. Not only that, but the terrible analysis was repeated, uncritically, in USA Today.
Here’s an example from the article (emphasis mine), derived from slide 36 in the CDC presentation:
“One way to figure out whether COVID-19 vaccines kill people is to look at the number of people expected to die over a period of time and compare that with deaths that occurred within a few days of vaccination.
VAERS [Vaccine Adverse Event Reporting System] received reports of 196 deaths after COVID-19 vaccination.
Of those, 66% were residents of long-term care facilities. About 1.3 million nursing home residents were vaccinated from Dec. 21 to Jan. 18.
In a group that large followed over that length of time, 11,440 people would be expected to die of all causes. That led the CDC to conclude the much lower number of nursing home deaths were not caused by vaccination.”
Do you see the problem? USA Today dutifully repeated these numbers, saying that the CDC concluded that the deaths were not caused by the vaccine. But what these numbers really show is that the number of deaths following vaccination is obviously dramatically under-reported! Adverse events – including deaths – are supposed to be reported in VAERS if it is even theoretically possible that the event could have been caused by the vaccine.
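The arithmetic behind this objection is simple, and worth laying out. The sketch below uses only the figures quoted above from the CDC presentation; the reporting-ratio interpretation is the point being argued here, not a CDC conclusion:

```python
# Figures quoted from the CDC presentation (via the article):
reported_deaths_total = 196      # VAERS death reports after Covid-19 vaccination
ltcf_share = 0.66                # share of reports from long-term care facilities
expected_ltcf_deaths = 11_440    # expected all-cause deaths among ~1.3M residents,
                                 # Dec. 21 - Jan. 18

# Reported deaths among long-term care residents: 196 * 0.66 ~= 129
reported_ltcf_deaths = reported_deaths_total * ltcf_share

# If deaths following vaccination were faithfully reported to VAERS, the
# reports should approach the expected all-cause total. Instead:
reporting_ratio = reported_ltcf_deaths / expected_ltcf_deaths
print(f"{reporting_ratio:.1%} of expected deaths were reported")  # 1.1%
```

Roughly 129 reported deaths against 11,440 expected means only about 1% of deaths in this population made it into VAERS – which supports the under-reporting reading, not the “vaccines don’t kill people” reading.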
Another example from the article, derived from slide 24 in the CDC presentation:
“In the vaccinated group, four people reported Bell’s Palsy, a form of facial paralysis seen in a small number of people in each of the vaccine trials. In the unvaccinated group, there were 348 cases.”
So, is the CDC now claiming that not only does the vaccine not cause Bell’s Palsy – a concern that was raised due to the imbalanced number of cases that occurred in the vaccine arm of the phase 3 clinical trials – but that the vaccine actually prevents Bell’s Palsy, along with a long list of other potential ailments?
Imagine if the numbers were reversed. Imagine if 348 cases of Bell’s Palsy occurred in vaccinated individuals and only four occurred in an unvaccinated population. Do you think those numbers would have been published without careful scrutiny into the quality of the data and the legitimacy of the comparison?
NOTE: I realize that this post could be perceived as itself starting from a moral point of view, contradicting the very point I’m trying to make. It’s certainly conceivable that the moral position of “we must let people know that this vaccine is not safe; we must at least discourage any thought that the vaccine should be mandatory for any common activity” could bias analysis just as readily as the opposite moral position I noted in the post.

In fact, in the first draft of this post I tried to give equal weight to the bias created by both moral positions. But when I looked for examples where data from clinical trials (my focus area, given that the topic is analysis of data) was selectively misrepresented to exaggerate the dangers, I couldn’t find any, outside of social media posts and comments. I did find a headline or two that pointed to deaths in the clinical trials, seeming to imply that the deaths could have been caused by the vaccines, when there is no hard evidence of that. But this was rare, and those articles quickly noted that there were roughly equal numbers of deaths in the vaccine and placebo groups.

Trying to give equal weight to both moral positions felt forced. The truth is that the moral position noted in the post is the position taken nearly or completely universally in “credible” media, and the bias and its obvious influence on the content are likewise nearly or completely universal: accepting claims of efficacy without skepticism and downplaying adverse reactions. An honest assessment of the impact of moral judgment on analysis must convey this point, in proportion to the overwhelming evidence.