The PhilPapers Fallacy (Part 6 of 9)
Table of contents
Part 2: Relevant expertise matters
Part 4: Selection effects matter
Part 5: How PhilPapers respondents interpreted the survey questions matters
Part 7: Philosophical fashions matter
Part 8: Demographic, social, and cultural forces matter
2.5 Independence matters
How much evidence the fact that many people believe X provides for X depends on a few considerations. These include whether the people who believe X are well-informed, reasonable, and so on. That philosophers have greater knowledge of and expertise with a given philosophical issue may make their belief in X better evidence of X, but we must also consider whether, and to what extent, philosophers arrive at their conclusions independently of one another. If 1000 philosophers were all tasked with coming to a conclusion about a given topic, worked in isolation for a decade, and reached the same conclusion, this would be better evidence for that conclusion than if all 1000 philosophers lived together, read the same works, spoke to one another, and so on. Why? Because their conclusions would be less independent.
Social interaction and shared methods could cause the group to converge on mistaken conclusions. A highly influential subset of thinkers could shape how the rest think, or they could all come to adopt the same methods or rely on the same texts. Discussions with one another could cause convergence on shared views for reasons unrelated to those views being correct. And this second scenario accurately captures the present state of philosophy. Philosophers rely on similar methods, read similar texts, interact with one another regularly, and tend to insulate what they do from the rest of the world (by carrying out ongoing conversations socially and academically sealed off from other fields, developing technical jargon that creates a high barrier to entry for outsiders, interacting with one another more than with others at conferences, talks, in publications, etc.).
How much stock should we put in the proportion of philosophers who endorse one position over another? Presumably, it's not just a numbers game. Suppose, for instance, we eventually develop the technology to create brain emulations with beliefs and intuitions identical to those of the philosopher in question, or a rogue philosopher moves to an earth-like planet with plenty of resources, builds clone factories, and makes 10 billion clones of themselves, all with the same philosophical beliefs. Under these circumstances, an adequately representative survey of all analytic philosophers would include all of these emulations or clones.
Take, for instance, responses to the trolley problem in the 2020 PhilPapers Survey. In the trolley problem, a trolley is headed down the tracks and cannot stop. There are five people on the track ahead. If the trolley continues, it will kill them. You may pull a switch. If you do, it will divert the trolley onto a second track, where it will kill one person instead of the five. You have only two choices: pull the switch, or don’t pull the switch.
A majority of philosophers accept or lean towards pulling the switch: a little over 63%. Only about 13% accept or lean towards not pulling the switch (with the remainder choosing a variety of other options).
Suppose a staunch Kantian who believes we have a moral obligation not to pull the switch travels to another planet and makes ten billion clones of themselves. 1,735 philosophers responded to the trolley problem question, and 13.31% of them, about 231 philosophers, favored not pulling the switch. Add the ten billion clones to the survey respondents, and we now have 10,000,000,231 opposed to pulling the switch. Redo the survey, and well over 99.99% oppose pulling it: an overwhelming consensus. Should we take this as compelling evidence that the morally correct response is to not pull the switch?
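To see how the arithmetic plays out, here's a minimal back-of-the-envelope sketch in Python. The pre-clone figures are the survey numbers quoted above; the ten billion clones are, of course, pure stipulation:

```python
# Back-of-the-envelope arithmetic for the clone thought experiment.
# The pre-clone figures are the 2020 PhilPapers Survey numbers quoted above;
# the ten billion clones are stipulated by the thought experiment.

respondents = 1735                            # answered the trolley question
against_switch = round(respondents * 0.1331)  # ~231 accept or lean towards not pulling

clones = 10_000_000_000                       # Kantian clones, all opposed to pulling

new_against = against_switch + clones
new_total = respondents + clones

print(f"Opposed to pulling: {new_against:,} of {new_total:,}")
print(f"That is {100 * new_against / new_total:.5f}% of respondents")
# -> roughly 99.99998%: an "overwhelming consensus" manufactured from one mind
```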
No. Of course not. For the obvious reason that clones with the same beliefs didn’t arrive at those beliefs independently of the Kantian who initially formed them. For comparison, if we found an error in the code of a common program used to make calculations, we wouldn’t conclude that its output was true simply because most mathematicians had endorsed it as true. Once we identified the error, we’d know they had all made a systematic error they wouldn’t endorse on reflection.
The same could be said of the Kantian in question. Suppose, after creating their ten billion clones, they set their clones up in perfect isolation from the rest of the universe, and provided them with enough TV shows and pool tables to leave them in a recreational malaise, while they went back to earth to declare their victory over the utilitarians.
However, on arrival, they encountered a convincing utilitarian who caused them to realize they’d made a mistake in their reasoning. Correcting this mistake entails that the morally correct action is to pull the switch.
Should the Kantian endorse this view? After all, over 99.99% of philosophers oppose pulling the switch. How should they weigh the overwhelming consensus against pulling the switch against what strikes them as clear and obvious reasons to favor doing so?
They shouldn’t put much stock in such conclusions at all. Presumably, all ten billion of their benighted clones made the same mistake. Their reasoning wasn’t independent of one another. Ultimately, what matters is the quality of the arguments, evidence, and reasoning for one’s view, not the sheer proportion of people who endorse one or another of competing views. And if everyone is making the same mistake, then it doesn’t matter how many people think a particular thing; what matters is why they think it. In practice, we take lots of experts thinking something as evidence of that thing because we expect each of them to bring a partially distinct perspective to the issue, so that each arriving at the same conclusion is a bit like different witnesses all agreeing on what they saw. If all of the witnesses had the same biases, or the same motivations to lie, or were looking out of the same window, this undermines the cumulative strength of their testimony. In just the same way, the degree to which philosophers provide cumulative evidence turns, in part, on the degree to which they serve as independent witnesses to the evidence. It’s hard to pin down precisely what constitutes “independence,” but note that even if philosophers were distinct in many respects, so long as they share one factor in common that leads them off track, they can all be mistaken.
At best, the proportion of experts who endorse a position is a proxy for the quality of the arguments and evidence for it. If a bunch of smart people investigate a topic and reliably come to the same conclusions, this plausibly provides some evidence in favor of whatever conclusions they tend to favor. However, the quality of such evidence crucially depends on the degree to which these experts differ from one another along potential biasing dimensions that could push their conclusions off track from the truth, and along whatever other factors are relevant to making their evaluations of the arguments or evidence independent of one another’s. The fewer such differences, the less independent their judgments. And the less independent their judgments, the more closely their situation approaches the one we face with the clones: any particular error that causes one of the clones to make a mistake will necessarily cause all of the other clones to make the same mistake, for the same reason, since, by stipulation, what makes them clones just is (among other things) that they are similar in precisely those respects relevant to the formation of their philosophical beliefs.
The value of a bunch of experts all converging on similar conclusions thus rests in their at least partial independence from one another with respect to such potentially relevant causal factors. Just as a whole bunch of mathematicians relying on the same code can make the same mistake, a bunch of philosophers can converge on the same philosophical mistake if they all share certain presuppositions or commitments, or if the conclusions they draw earlier in the chain of inference, or the patterns of belief and intuition they form, stem from a common source.
Take a step back, and suppose you don’t know that all of the mathematicians are relying on the same code to draw certain mathematical conclusions. We might instead suppose that each mathematician, using the same reliable set of mathematical axioms, is privately doing all the calculations on their own, and yet somehow they’re all making exactly the same mistake. The more mathematicians there are, the closer the likelihood of this comes to zero. Yet the moment we realize there is a common cause of their mistakes, the fact that they were all making exactly the same mistake is no longer implausible at all.
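A toy simulation makes the asymmetry vivid. The numbers below (a 1% per-mathematician error rate, 50 mathematicians) are made up purely for illustration; nothing in the argument depends on them:

```python
# Toy model with made-up numbers: how likely is it that ALL of the
# mathematicians make exactly the same mistake?

p_error = 0.01  # chance any one mathematician independently makes this exact error
n = 50          # number of mathematicians checking the result

# Case 1: fully independent checkers. Each must stumble into the identical
# error on their own, so the probabilities multiply.
p_unanimous_independent = p_error ** n
print(f"Independent checkers: {p_unanimous_independent:.2e}")  # ~1e-100, effectively zero

# Case 2: a common cause, e.g. a bug in shared code they all rely on.
# One event misleads everyone at once, so n is irrelevant.
p_shared_bug = 0.01
print(f"Shared common cause:  {p_shared_bug:.2e}")             # stays at 1e-02 for any n
```

The point of the sketch is just this asymmetry: adding more independent checkers drives the probability of unanimous error toward zero, while adding more checkers who share a common cause does nothing at all.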
Suppose, for instance, evolution had shaped the human mind to find belief in God incorrigible. People could reason, develop sophisticated sciences, and do basically anything, but no matter how hard they tried, they couldn’t shake their conviction in God. In this case, belief in God wouldn’t be responsive to what the world is like. Yet in practice, philosophers treat their intuitions just this way: as epistemic blank checks. Think about how moral realists react to arguments against realism. Louise Antony said:
Any argument for moral scepticism will be based on premises which are less obvious than the existence of objective moral values and duties themselves.
And this sentiment is routinely echoed among moral realists. It’s as though the strength of their convictions were itself a form of evidence. And if the realist insists it is, then all the arguments and evidence in the world may be irrelevant when thrown up against the walls of the realist’s conviction. How can we evaluate the degree to which the realist’s convictions are a good reason to endorse moral realism, independent of the publicly evaluable arguments and evidence? We can’t. The bottom line is, when realists make such appeals, they’re speaking only on behalf of themselves. They are, in effect, reporting their priors. Then they’ll make some additional maneuvers to suggest that, because most philosophers share their convictions, this isn’t so much private evidence as an indication that rational agents are converging on the truth.
But reasoning isn’t a black box, or at least, it doesn’t have to be. Members of any religious group could be said to converge on belief in the tenets of that religion, but this isn’t good evidence the religion is true. We know that cultural and social factors can prompt insular groups that interact with one another to share beliefs and adopt dogmas for reasons unrelated to the truth of those claims. Why should we suppose the same hasn’t happened, and isn’t continuing to happen, with moral realists? I don’t think we should. To really know whether convergence among philosophers is an indication of the truth, we would need to have knowledge of why they endorse a particular view. And that would mean understanding the psychological processes at play in any given instance of convergence.
What if the preponderance of realism is the result of self-selection of unrepresentative people predisposed to endorse moral realism for reasons unrelated to the quality of the arguments?
What if various social and cultural factors create an atmosphere of pressure to conform with current philosophical trends, and this prompts people to reason in biased or motivated ways?
What if the methods of analytic philosophy lend themselves towards conceptual and linguistic errors that increase the probability of endorsing and retaining a strong conviction in moral realism for reasons unrelated to its truth or the quality of arguments for it?
What if people critical of the methods that are leading to these mistakes are discouraged from entering or remaining in the field? What if they are made to feel unwelcome, or simply feel there’s little reason to study topics using a method that doesn’t resonate with them or that strikes them as ineffective?
What if surveyed populations come from cultures and societies whose members, relative to the rest of the world’s population, or relative to counterfactual conditions under which people developed in different circumstances, are disproportionately likely to favor realism?
If any one of these factors were in play, we could see a substantial skew towards moral realism among surveyed populations for reasons unrelated to its truth. And that’s just it: I think all of these factors are in play.
Unfortunately, it’s hard to demonstrate which of these factors are in play and to what extent they are influencing the proportion of philosophers who favor moral realism when surveyed. Philosophers don’t arrive at conclusions using a program whose code we can directly assess for errors. We’re not going to find that there was a typo in line 182,952 of some Plato++ code, and that’s why over 40% of philosophers inexplicably endorse aesthetic objectivism. So if they are making any errors, it’s going to be a lot harder to demonstrate precisely what those errors are. Perhaps they aren’t making any errors at all. But there’s one thing we should keep in mind: philosophers do not arrive at their philosophical conclusions independently of one another.