Thomas J. Froehlich, Ph.D., is Professor Emeritus in the School of Information at Kent State University. He has published extensively on ethical considerations in the information professions and on misinformation in the public sphere, including a chapter in a recent collection, Navigating Fake News, Alternative Facts, and Misinformation in a Post-Truth World. He spoke with us about disinformation in the twenty-first century. This is part one of a two-part interview.
How would you define “disinformation”?
Disinformation is lies or false information supplied with the deliberate intent to mislead, most often in a political context. In theory, this should be easy to identify; in practice, it is not always so clear.
What motivates someone to engage in disinformation?
There is no one motive for the creation, acceptance or spread of disinformation. At the top of the disinformation chain, the motive is to retain political and economic advantages. At the middle- or lower-class level, it is to nurse grievances, whether founded or not, to maintain privilege, or to engage in negative polarization (where voters side with a candidate not out of faith in him or her but out of fury with the other side).
There are predispositions to accept disinformation: cognitive bias, gullibility, willful ignorance, self-deception, avoiding discordant information, etc. These are heavily influenced by cognitive authorities, such as news sources, peers, religious leaders, social media channels and political associations.
Is there a meaningful difference between willful “disinformation” and perhaps inadvertent “misinformation” where wrong information is being given out without ulterior motivation?
There is not a simple answer to this question. For example, a public figure spreads misinformation about the coronavirus, and it is then repeated on certain news channels. On one level this may be seen as mere misinformation, but it is really disinformation disguised as misinformation, because the ultimate objective is control and manipulation.
Now, if I hear something on social media, misinterpret it, and tell a friend, such an action might be done without ulterior motivation. This is inadvertent misinformation. But these days even such "innocent" acts may reflect cognitive bias: I misunderstand the message in a way that proves my point or the "rightness" of my bias.
What is the typical transmission route of disinformation? Does it matter if the participants are willful or neutral transmitters?
There are some very willful transmitters at the top who make money and retain power from disinformation, but we also have a disinformation ecology consisting of messages and messengers. Like-minded (and like-propagating) cognitive authorities, news channels, social media and the like all reinforce the same messages, make it difficult to question the reliability of any given message, and control which communication channels are the "right" ones to pay attention to.
Any agency in the chain can start a disinforming message and have it echoed back to most, if not all, elements in the chain. Repetition is a major factor in cementing the disinformation: how could anyone reject a bit of misinformation when so many different sources are saying the same thing? To question any message is to question the whole edifice and one's stake in this ecology. That is why it is hard to change anyone's mind within what is called a propaganda feedback loop or filter bubble. The filter bubble is self-propagating and self-reinforcing, explicitly refuting any challenging sources.
How has social media transformed our disinformation consumption?
Social media has considerably aggravated the spread of disinformation and misinformation. Studies have shown that false information spreads more quickly and broadly than true information on the internet, and that false information is virtually impossible to retract: it stays in people's minds whether it is retracted or not. Prior to the internet and social media, information, misinformation and disinformation ran through clearance circles or reliability checks (e.g., newspapers and news broadcasts vetting their sources, especially for reliability and trustworthiness).
Now a single person with any theory, grudge or false belief can broadcast it on social media and attract millions of followers, with no filters to check their assertions. Not only that, but one or a few voices can be amplified to seem like millions by bots and software algorithms. Our constraints on lies, false information and attacks on genuine expertise have been replaced by self-righteous, unjustified opinions.
Part Two of this interview can be read here.