Why isn’t being an expert enough to be trusted?

We have a widespread problem of trust; in particular, there is much talk of a lack of trust in experts. This is not new: trust has always been fragile; it has been in crisis for as long as it has existed. Mistrust of experts is as old as democracy, and there has never been a golden age in which the competent reigned supreme, loved by the people. As in all relationships there have been ups and downs; this is certainly one of the low moments, neither the first nor the last in its history. What experts should ask themselves is not how to rebuild destroyed trust but how to recover their lost trustworthiness.

“I would like to propose for the benevolent consideration of the reader a theory that may perhaps seem paradoxical and subversive.” With these words Bertrand Russell opened his essay On the Value of Skepticism, written in 1928 and later collected in his Skeptical Essays. “The theory is this,” Russell continues, “that it would be wise not to believe a proposition until there is a well-founded reason to assume it is true.” Admittedly, this is a good idea, a moderate but demanding expression of skepticism; the world really would be better if we all behaved this way. “If this opinion became common, our social life and our political system would be completely transformed,” Russell acknowledged.

This theory can guide us even when we find ourselves having to judge between disagreeing experts. The English philosopher writes that the skepticism he hopes for comes down to three propositions:


1) that when experts agree in affirming one thing, the opposite opinion can no longer be considered certain;

2) that when they disagree, no option can be considered certain by non-experts;

3) that when experts agree that there is no sufficient reason for a positive opinion, the common man would do well to suspend his judgment.

That’s all. “These propositions may perhaps seem very simple: yet once accepted, they would completely revolutionize human life,” wrote Russell.

Almost a century later, we can say that the skeptical revolution never came. The author was aware of the difficulties; he knew that “opinions held with passion are always those for which there is no good justification.” A phrase perhaps too absolute for a skeptic (always?), but if we look at the public debates we are dealing with, it is a true statement in many cases.

Russell’s propositions were based on an implicit assumption: experts are epistemic authorities. It is precisely this idea that today seems most in question. Philosophers define it thus:

Epistemic authority is the expert’s ability to influence other individuals by “forcing” them to adopt a belief on the basis of epistemic superiority.

There is a fundamental difference here with respect to another type of authority, so-called “practical authority,” which is capable of imposing the execution of an action; on the epistemic level things are different: one cannot force someone else to believe something. The philosopher Michel Croce points this out in a 2020 essay (Whom Can I Trust, Il Mulino): “In general, the term ‘epistemic authority’ does not enjoy a good reputation, owing to the value we attribute to the ideal of the autonomous epistemic subject, which is apparently in contrast with the notion and role of epistemic authority.”

What is difficult to accept is the very idea that there are individuals authorized to tell us what to believe. For this reason, philosophers tend to treat “epistemic authority” simply as the property typical of those who are experts in a specific field. This position is also tacitly present in much of the public debate.

It may be useful, instead, to take a step forward in the debate and distinguish the two concepts: to think of an epistemic authority as something other than an expert. This is not as abstract as it seems. We owe the distinction to the American philosopher Linda Zagzebski. A criterion for distinguishing experts from epistemic authorities lies in classifying subjects according to the role they play within the community, and there are three relevant aspects: the reliability of the epistemic subjects, their relationship with their respective interlocutors, and their intellectual abilities. Both subjects, the expert and the epistemic authority, are reliable, while their relationship with inexpert interlocutors differs: “The notion of expert requires that the subject be epistemically superior to the majority of individuals in a given field, while the notion of epistemic authority simply requires that the latter be superior to the interlocutor”; the expert, then, may have a better understanding of the field in question than an epistemic authority does.
The relationship with the interlocutor is a necessary requirement for epistemic authority; it is not for the expert: an expert may be able to contribute to the progress of his discipline regardless of his relationships with particular interlocutors. An epistemic authority and an expert also have different intellectual virtues: the authority’s knowledge is addressed to those who are not insiders, while the expert’s qualities are oriented toward research.

Scientists, therefore, to be believed, must take a further step: they must remember who they are. Scientific research is synonymous with pact, community, consensus, but scientists often simply reaffirm their neutrality, insisting that science has no political, social, economic or moral color, saying things like: “To the force of gravity it does not matter whether you are on the right or on the left,” “Acid rain falls on rich and poor alike,” “Radioactive emissions affect you both before and after the elections.”

All true, but not enough. It is not only on this basis that we can trust them: whether it is correct or not, public opinion relates science to its implications; a clean distinction between facts and values does not work. People do not separate science from its moral implications: those who do not believe in climate change do so not because they dispute the evidence, but because those theses conflict with their own values, political or religious, or with their own economic interests and lifestyle.

Scientists usually see these criticisms as fallacious: ad hominem, therefore illegitimate. “But if we take seriously the conclusion that science is a consensual social process, then who the scientists are is a relevant question,” recalls Naomi Oreskes in her latest essay, Why Trust Science? One of the fundamental theses of the American author, a geologist by training, is that the idea of science as a neutral activity, devoid of values, is a myth: we should remember that utility, economic or otherwise, has long been a justification for supporting science, financially and culturally. Science is not a neutral enterprise, and neither are scientists as individuals. “No one can be truly neutral; when scientists say they are, they are being false, because they claim the impossible. Unless we want to consider them naive, or idiot savants, we should consider them dishonest,” writes Oreskes.
Honesty, openness, and transparency are values proper to scientific research. “How can scientists be honest and at the same time deny that they have values?” she asks. The reluctance of scientists to discuss their own values may give the impression that those values are problematic and therefore need to be hidden, or that there is nothing they believe in. Would you trust a person who has no values? Would you believe a person who believes in nothing? I would say no; we would be dealing with a sociopath. If instead we believe that a person shares at least some of our values, not necessarily all of them, we may be more willing to listen to them. Neutrality with respect to values can be defended from an epistemological point of view, but in practice it does not work.

We know that it is easier to establish trust between people who share values, but scientists and science communicators often overlook this aspect: “The dominant style in scientific research hides not only the values of the author but even his very humanity. Not only are values left unexpressed, emotions cancelled, adjectives removed; the very word ‘I’ is implicitly forbidden,” writes Oreskes. The method is linked to a certain idea of objectivity: the ideal scientific paper is written not only as if the author had no feelings but as if the author were not even a human being. This can turn out to be a strategic error: “By suppressing their own values and insisting on the neutrality of science, scientists have taken a wrong turn. They made the mistake of thinking that people would trust them if they believed in science’s disinterest in values.”

Instead, the opposite happened. The values of many scientists can (or should) be shared by the general public. For scientists, as for journalists, regaining trust means overcoming the myth of objectivity. “Even if we disagree on many political issues,” Oreskes concludes, “our values can coincide, at least in part. Clarifying the points on which we can agree, and explaining how they relate to scientific work, can help us overcome the mistrust that often seems to prevail.”
