Incorrigibility and the futility of defeaters
1.0 Phenomenal gridlock
Suppose two philosophers endorse phenomenal conservatism. If it seems to them that something is true, then they are justified in believing it is true in the absence of defeaters.
To one philosopher, it seems that P. To the other, it seems that not-P. They consider themselves justified in believing P and not-P, respectively, in the absence of defeaters.
Each attempts to present defeaters to the other. Neither is convinced that the defeaters are sufficient to override the strength of their seemings that P and not-P, respectively. Are there any steps that can be taken to resolve these disagreements? If so, what are those steps?
Many philosophical disputes play out like an evidential game, where each party’s goal is to tally up the points for and against competing views and reach a conclusion based on which view has the most points in its favor.
Yet philosophers often seem to reserve a kind of “epistemic fudge factor” for the end of the game. Once the points are tallied, they add a seemingly arbitrary amount of points to whichever side they initially favored.
“Sure,” a philosopher might say, “you’ve presented many more points in favor of not-P than I did for P. But, it just really seems to me like P. No arguments against P can be sufficient to override how obvious it is that P is true.”
If this is the attitude of a philosopher at the outset of a discussion, or the attitude they anticipate having at its end, then what are they trying to achieve in the discussion?
If your position on the matter is effectively incorrigible, but you still want to persuade others that it is true, that would be one thing. But I often see philosophers simply argue that they are personally justified in believing whatever it is they believe.
If your position is decisively resolved by how things seem to you, but those seemings aren’t available to and don’t have the same epistemic status for whoever you are talking to, then they can’t persuade you, and you can’t persuade them. At this point, it’s unclear how philosophical dialectic could, in principle, resolve the matter in a satisfactory way.
For instance, suppose you have private access to a seeming that counts 1,000 points in favor of P. Suppose someone else has private access to a seeming that counts 1,000 points in favor of not-P. Now suppose any discussion for or against these positions could achieve a maximum of 100 points. If so, then it’s not possible for either person to change their mind by having a discussion. This is what a lot of philosophical disputes look like to me.
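To make the arithmetic of this stalemate explicit, here is a minimal sketch in Python. The point values are the hypothetical ones from the example above, not a real measure of evidential weight:

```python
# Hypothetical "evidential points" model of the stalemate described above.
# Each party's private seeming is worth 1,000 points; public discussion can
# shift at most 100 points. All values are illustrative assumptions.

PRIVATE_SEEMING = 1000      # points a party assigns to their own seeming
MAX_DISCUSSION_SWING = 100  # most that public argument could ever move the tally

def net_support(seeming_points, discussion_points_against):
    """Net points in favor of a view, from one party's perspective."""
    return seeming_points - discussion_points_against

# Even a maximally effective discussion leaves each side ahead by 900 points,
# so neither party can change their mind through discussion alone.
print(net_support(PRIVATE_SEEMING, MAX_DISCUSSION_SWING))  # 900
```

On this toy model, discussion is futile whenever the private seeming outweighs the maximum possible swing of the debate, which is exactly the situation the example describes.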
Nevertheless, I see many philosophers appeal to how things seem to them to argue that they are justified in believing whatever it is they believe in the absence of defeaters. Let’s just grant this for the sake of argument. Where do we go from there? Sure, okay, let’s say they’re justified. Typically, I expect someone who is arguing for some position, P, to present reasons that could, at least in principle, convince me to endorse P. Even if someone has correctly established that they are personally justified in believing P, that doesn’t, by itself, move the dial much with respect to my own confidence in P.
One concern I have with this kind of appeal is that, while it seems fairly innocuous, it is sometimes leveraged towards a further end. Philosophers will often report how something seems, or that something is “intuitive,” without specifying whose seemings or intuitions they’re referring to. Sometimes they even say “we,” e.g., “we find this intuitive.” Who is “we”? We (as in those of us reading what they say) aren’t told.
2.0 Not just my seemings!
Such appeals may be intended as appeals to commonsense, or to a presumptively shared intuition. Yet little or no effort is typically made to establish that the intuition or seeming is shared by others. Claims about the seemings or intuitions of people in general, or of any population other than philosophers (that is, the other ~8 billion people on the planet), are empirical claims. They aren’t the sorts of things that can be established by a priori reasoning alone. Even when philosophers grant this, they will often claim that their personal experiences, familiarity with the language, conversations with people, informal polls conducted in their classrooms, and so on are sufficient to justify broad generalizations about people.
Ironically, claims about whether these appeals generalize are themselves empirical claims. You can no more establish that such methods are highly generalizable via a priori reasoning than you could establish the claims themselves (e.g., “most people find it intuitive that P”) by a priori reasoning. If a philosopher’s informal polls of students in their classes are highly generalizable to how people around the world think, well, where’s the empirical evidence for that?
There’s no good evidence that these sorts of appeals tend to be reliable. And the only way we could find out if such generalizations were reliable would be to conduct a bunch of cross-cultural research on enough of the sorts of claims philosophers make to get a sense of whether their claims are generally correct or not.
Given what empirical evidence we do have about cross-cultural differences, it’s very unlikely philosophers are in a good position to make such claims. For instance, Henrich, Heine, and Norenzayan (2010) gathered substantial evidence that the populations that comprise most psychological research are outliers with respect to most of the rest of the world’s populations.
Furthermore, ongoing indications of substantial difficulties with conducting replicable, reproducible, and robust psychological research demonstrate that it is incredibly difficult to confidently establish a given psychological phenomenon even within a particular population, and that’s with much larger samples of participants and much better methods (e.g., statistical procedures) for evaluating responses than are available to philosophers, whose methods amount to introspection, discussions with colleagues, informal polls, and so on (Nosek et al., 2022).
In other words, even psychologists actively seeking to gather quantitative data within a given population face significant methodological difficulties, and even when they succeed, those findings are often limited to the populations sampled (e.g., members of particular cultures). Yet philosophers often speak as though their claims apply to everyone: such claims are even more general, yet are made with far less reliable data. If we can’t make strong inferences about people within a given population even with rigorous quantitative data gathered from hundreds or thousands of participants over the course of one or more empirical studies, why should we suppose philosophers can make even broader claims about everyone on earth with no empirical evidence at all? If efforts to address such questions using the very tools designed to do so are fraught with difficulty, why are philosophers so confident they can answer those questions without using tools designed to answer them, and without any empirical means of corroborating their judgments beyond the mutual assurances of their colleagues?
3.0 Generalizing from experience
I suspect philosophers would be more cautious about the generalizability of their claims if they were aware of how difficult it is to generalize from any given set of findings, not just with respect to different populations, but with respect to stimuli, experimenters, and so on. As Yarkoni (2022) argued in a recent paper, even psychologists are insufficiently attentive to these concerns. For instance, researchers will often employ one instance of, or a small sample of, stimuli; gather results; and then generalize to the entire domain from which those stimuli were drawn, without modeling the stimuli they used as a random and potentially unrepresentative subset of that domain.
To illustrate this problem, imagine you wanted to survey attitudes towards fruit in general. Imagine that you simply wanted to know whether people thought fruit, overall, tasted good or bad.
Suppose you sought to answer this question by asking people to rate how much they liked apples and bananas on a scale from (1) Very disgusting to (7) Very tasty. You then administered this survey to a group of participants, and found that the mean score was 6.5. Should we conclude that “people tend to like fruit”?
Not necessarily. One problem of generalizability is the familiar one: the attitudes of the participants in your sample may not generalize to whatever population you make inferences about, such as “all people.”
Yet there’s another serious issue with this survey: why think that the mean score, collapsing across measures of taste preferences towards apples and bananas, is an accurate measure of people’s preferences towards “fruit” as a category? Would you get similar results if you had chosen different fruit? How many fruits would you need to choose? Which fruits? What if you had chosen durian and passion fruit? Would you have obtained the same or similar results with that same sample?
The problem is that even if there were some fact of the matter about people’s average attitude towards fruit as a whole, any particular selection of fruit we use to make inferences about that attitude would be more or less representative of the category “fruit.” If we don’t take this into consideration, we can make mistaken inferences, e.g., that because the mean score in our study was 6.5, people tend to like the taste of fruit as a whole. Note, in this case, that we wouldn’t just be generalizing from our sample to people outside the sample. We’d be generalizing from the fruit we sampled to all other fruit. It may very well be that neither inference is justified. And, whereas psychologists may recognize that their participants are intended to reflect an approximately random sampling of the population they are drawn from, and factor this into their models, they rarely if ever treat stimuli the same way, even when they should for similar inferential reasons.
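The stimulus-sampling problem can be illustrated with a toy calculation. The ratings below are invented for illustration; nothing here is real data:

```python
# Made-up mean liking scores (1-7 scale) for a handful of fruits in some
# hypothetical population. The numbers are assumptions chosen only to show
# how the choice of stimuli drives the conclusion.
FRUIT_RATINGS = {
    "apple": 6.4, "banana": 6.6, "strawberry": 6.5, "grape": 6.2,
    "durian": 2.1, "passion fruit": 4.0, "grapefruit": 4.3, "lime": 3.0,
}

def mean_rating(fruits):
    """Mean rating across a chosen sample of fruit stimuli."""
    return sum(FRUIT_RATINGS[f] for f in fruits) / len(fruits)

# The "apples and bananas" study suggests people love fruit:
print(round(mean_rating(["apple", "banana"]), 2))         # 6.5
# The same design with different stimuli suggests the opposite:
print(round(mean_rating(["durian", "passion fruit"]), 2)) # 3.05
# Neither matches the category-wide mean:
print(round(mean_rating(list(FRUIT_RATINGS)), 2))         # 4.89
```

Both two-fruit studies used the same design and (by stipulation) the same participants; the conclusions differ only because the stimuli differ, which is exactly the inference problem described above.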
How does this relate to philosophers making inferences based on their experiences? Well, what kind of stimuli are philosophers using in their informal surveys? What kinds of questions are they posing to people? How well do the particular questions, discussion points, thought experiments, and so on that they employ in the service of testing a particular hypothesis about how the people around them think reflect the domain they’re sampled from?
Who knows. What even is the domain? What counts as stimuli? What counts as a “participant” in their inferences? In many cases, philosophers won’t have any kind of reliable access to the “data” of their experiences. It’s not like they have a quantifiable metric of the strength of each person’s intuitions towards P. Rather, they may have, at best, memories and impressions about how they think people tended to respond on the topic. Maybe they’ll have specific stories they can call on, e.g., “there was that one time in class when I asked…” Our recollections are likely to be riddled with error, and multiple iterations of recalling those experiences are likely to distort them, as we project our biases back onto our recollections, pulling our memories in line with our expectations and desires. This is part of the reason why we developed empirical psychology in the first place. If personal experience was sufficient, why bother with p-values and effect sizes?
There is yet another factor often missing from conventional psychological experiments: the experimenter. If the experimenter interacts with participants, or does anything that could cause outcomes to vary from one experimenter to another, then this, too, reflects a potential source of variation that could be modeled: the experimenter is an n=1 from the entire population of possible experimenters. For example, imagine one experimenter has a strong sense that the expected outcome goes in a particular direction, e.g., mean x > mean y. Another has a strong sense that the outcome goes in the opposite direction, mean y > mean x.
And suppose they must interact with participants, speaking to them for several minutes, presenting them with stimuli, or employing a script that is relevant to an experimental manipulation or could influence how the respondent’s answers or behavior are measured. Their expectations could influence participants in ways that alter the outcome of the study. This is one of the main reasons researchers should conduct double-blind experiments whenever this is an option: to ensure researcher bias is kept to a minimum. Yet a myriad of other factors could influence how participants interact with the experimenter, such as the experimenter’s personality, body language, tone of voice, and so on.
In any given instance, how well does one experimenter represent the domain of “all possible experimenters”? Well, how well would any individual represent all members of a much larger category of people? And if experimenters do influence outcomes, how much of an influence do they have in general? That question may have no sensible answer: it’s going to vary from study to study. If you’re conducting an online survey, there may be no effect, or the effect may be a negligible one. If you’re conducting a study that involves complicated interactions with the experimenter, it may matter a great deal. There may be no typical effect because studies vary too much. So we might then have to ask how much experimenter variation matters for any particular study. In most cases, the answer will be: who knows? Nobody bothers to check. And even if we wanted to check, what are we going to do, hire a thousand experimenters? It would be impractical. Experimenter variation, to the extent that it influences outcomes, is going to be extremely difficult to address. If a study involves fMRI, for instance, and you needed at least 30 experimenters to adequately ensure variation between experimenters isn’t an issue, how would you manage that? It’s not easy to find 30 graduate students with the requisite knowledge to run the same study.
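One way to see the “experimenter as n = 1” worry is with a toy simulation. Everything below is invented: the true effect, the experimenter biases, and the noise level are arbitrary assumptions chosen for illustration:

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

TRUE_EFFECT = 0.5  # the quantity the study is trying to estimate

def run_study(experimenter_bias, n_participants=200):
    """Mean measured response under a single experimenter.

    Each participant's response is the true effect, plus the experimenter's
    personal influence, plus individual noise.
    """
    responses = [TRUE_EFFECT + experimenter_bias + random.gauss(0, 1)
                 for _ in range(n_participants)]
    return sum(responses) / len(responses)

# Two experimenters with opposite expectations nudge participants in opposite
# directions; each study bakes its experimenter's bias into the estimate,
# because the experimenter is a sample of n = 1 from "all possible experimenters."
study_a = run_study(experimenter_bias=+0.4)
study_b = run_study(experimenter_bias=-0.4)
print(round(study_a, 2), round(study_b, 2))
```

With only one experimenter per study, nothing in the data separates the true effect from the experimenter’s contribution; modeling experimenters as draws from a population would require running the study with many of them, which is rarely practical.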
Why does this matter when it comes to philosophers claiming that their personal experiences and conversations are a sufficient guide as to how nonphilosophers think about philosophical issues? Well, let me address this, first, in the form of a question:
Do you think who the philosopher is, how they think, what they say, and how they interact with the people they talk to could influence how those people respond? And do you think that if a different philosopher were interested in the same philosophical question (e.g., the existence of God, whether we have free will, what the fundamental nature of reality is, etc.) that differences in how they think, act, and speak with others could result in differences in how others respond?
Philosophers bring their entire personality and demeanor to bear in every interaction with everyone they interact with: students, family members, colleagues, strangers at dinner parties, and so on. How they speak, how they think, and all their attendant interests, biases, ways of speaking, turns of phrase, expectations about how people are likely to respond, and so on. And they don’t follow a single, simple script. They engage in extended, ongoing, dynamic interactions with others, where they have numerous opportunities to shape and channel the discussion down particular pathways.
Philosophers should not picture the people they interact with as passively awaiting queries from philosophers, such that anyone receiving a general inquiry about the same topic would give the same response, regardless of which philosopher is asking or how they ask. Rather, philosophers play an active role in influencing the course of their interactions with others. Note, too, that in many of these interactions, their interlocutors may already know much about the philosopher in question’s philosophical views, personality, and so on (a factor that isn’t typically present in psychological studies), and they may already know the philosopher’s position. Even when they don’t, there is a good chance that philosophers will explicitly state their position, hint at it, or give off other signs as to where they stand on the issue (conveyed by, e.g., tone or the way they frame questions) that allow interlocutors to guess their likely position at far above chance levels.
Thus, one factor in these interactions is that the philosopher may hold a particular view, and is asking the participant to weigh in on the issue. The fact that the participant knows the philosopher probably has a view on the matter, and a view that they are committed to or are passionate about, could have a substantial influence on how the participant responds. Think about the difference between an idle conversation about the existence of God with a friend, compared to a conversation with missionaries who just came knocking at the door. Do you respond the exact same way in both cases? Of course not. How you present yourself, and the social contexts in which you do so, influence how others react to you.
And that’s just the point: philosophers can’t vary one major factor in all of their interactions with others: themselves.
We already have good reason to believe that a researcher’s expectations can influence the outcome of a study. Such influences are enough to invalidate results, even when they consist of a fairly innocuous interaction that occurs over a minute, and even when that behavior is held fairly consistently across participants.
Philosophers, on the other hand, engage in a far more deliberate and extensive way with many of their interlocutors. Inquiries about how other people think may follow extended discussion, readings, assignments, debates, and so on. They may occur in group discussions, where students have a chance to influence one another. They may be solicited verbally in front of the instructor, where social desirability and other factors could encourage students to respond differently than if they were able to respond anonymously on a written survey, or to answer from behind the relative anonymity of a computer screen. Students, and anyone else who interacts with an instructor, may already know about their beliefs and expectations. Even when they don’t, they make assumptions at better-than-chance rates based on the philosopher’s previous remarks, tone, body language, and so on. We have no way of assessing to what extent a philosopher’s way of interacting with people influenced how those people responded, because these interactions aren’t recorded, and participants don’t have their thoughts and attitudes measured afterwards. So we have no idea to what extent any given philosopher’s reports about the outcome of their interactions with others were influenced by how they interacted with them.
Consider this study by Strickland and Suben (2012), which provides some evidence of experimenter bias in experimental philosophy research. From the abstract:
It has long been known that scientists have a tendency to conduct experiments in a way that brings about the expected outcome. Here, we provide the first direct demonstration of this type of experimenter bias in experimental philosophy. Opposed to previously discovered types of experimenter bias mediated by face-to-face interactions between experimenters and participants, here we show that experimenters also have a tendency to create stimuli in a way that brings about expected outcomes. (p. 457)
Note that these studies were conducted with undergraduate experimenters. The results may or may not reflect whatever biases influence professional philosophers when they informally poll students in the courses they teach. Perhaps philosophy instructors are less subject to “experimenter” bias because they are more careful when phrasing their questions. Perhaps they are more subject to such biases because they have more entrenched views and greater susceptibility to confirmation bias. Perhaps these influences wash each other out. We really don’t know, nor could we readily find out. However difficult the matter may be to resolve, it is an empirical question. It is one thing for philosophers to lay claim to subject matters that fall outside the scope of empirical inquiry. But when philosophers make claims about how other people think or what other people mean, they are making psychological claims, and armchair reasoning is simply not an adequate tool for addressing them.
One final problem with these interactions is the risk of spontaneous theorizing. Spontaneous theorizing occurs when a person who holds no determinate position on a topic develops one through engagement with the very study context designed to assess which position they endorse. That is, the study context causes them to form a determinate position that doesn’t reflect any view they held prior to participating in the study.
When philosophers interact with people, they often present puzzles or questions in ways that (a) presuppose a host of background assumptions that may not be shared by whoever they are talking to and (b) present those people with a menu of prepackaged philosophical positions to choose from; they might even insist these are the only possible views.
Suppose a person has no position on a given topic because they’ve never thought about it at all, and none of what they say or do would commit them to one or another of the competing philosophical views. This person then suffers the misfortune of sitting next to a philosopher on a plane, and is interrogated about their philosophical views, including their views on this issue. Initially, they respond with “I don’t know, I’ve never thought about it.” The philosopher then enthusiastically launches into a detailed lecture about the topic. “You see, in the literature, there are exactly three positions one can take with respect to the question. Position A, position B, and position C” (note that they mention position C with an extra twinge of enthusiasm, a hint as to their preferred position; it might also make sense to list one’s favorite view last). The person may be biased towards choosing C, suspecting (correctly) that the philosopher endorses that view. But even if they don’t, they may choose A or B. Even if most people endorsed a view other than the philosopher’s, the philosopher may come away with the impression that:
(a) Ordinary people have philosophical views on these matters, and may have had those views prior to doing philosophy. It’s just a matter of sorting out which intuitions they had. Maybe they just lacked the terms to describe what they thought. But once the philosopher furnishes them with the appropriate labels, it becomes clear which view they held all along.
(b) People’s views reliably conform to the categories the philosopher presents.
(c) If the philosopher’s presentation is biased in favor of their position, they may find a majority of people favor their view in particular.
The very way in which philosophers present questions to nonphilosophers could create the illusion that nonphilosophers have pretheoretical positions on philosophical matters even when they don’t, that those positions conform to the positions discussed in philosophy, and perhaps that people are biased towards a particular view (often the views of the philosopher posing the question). Critically, this could occur even if none of this is true. This is part of the reason appealing to one’s personal experiences when describing how nonphilosophers think about philosophical issues is a questionable enterprise: we don’t have access to the counterfactual conditions in which people are prompted to express their philosophical positions without having the philosophical issue at stake presented to them using the terms, language, concepts, distinctions, and biases of philosophers presenting it to them. What we’d need is a method to assess how people think about these issues that doesn’t risk causing them to form particular philosophical positions as a feature of the measuring process. A measure should not produce its own outcomes.
In short: a philosopher’s interactions with others are not a reliable source of information about how those people think about philosophical issues. They are an especially poor indication of what those people thought about philosophical issues prior to studying philosophy. Philosophers who rely on their experiences with others are vulnerable to drawing conclusions that are subject to a host of biases and potential mistakes, including: Confirmation bias, sampling bias, availability heuristics, and distorted, mistaken, or selective memory.
4.0 Intuitive to whom?
When philosophers are content to claim that they are justified in believing something in the absence of defeaters, this can undermine productive discussion.
One of the primary purposes of engaging in philosophical discussions is to provide publicly evaluable arguments, evidence, and reasons for endorsing a particular philosophical position. Why publicly evaluable? Because these are the only sorts of considerations that have the persuasive heft to actually change anyone’s mind, at least in the sorts of ways philosophers would be inclined to endorse, on reflection. I assume they don’t think people should adopt philosophical views they didn’t previously hold merely on the basis of a philosopher assuring them that the claim in question seems true to them.
Compare this, for instance, to someone claiming that all life on earth evolved via natural selection. Would anyone for whom this didn’t seem to be true be persuaded by someone claiming that it “seems true” to them, and that they are “justified in believing that life evolved by natural selection” so long as nobody could provide defeaters? No. They would expect the kinds of empirical evidence typically offered in support of the theory. Just the same, if the philosopher’s goal is to convince someone else of a philosophical position, nobody is going to be persuaded by the philosopher saying “well, that’s how it seems to me.”
If the philosopher then wants to claim that the burden of proof is on anyone who disagrees with them to provide defeaters, this is a strange move to make. For comparison, if I said I saw Bigfoot in the woods, it may seem to me that Bigfoot exists, but it doesn’t necessarily seem that way to anyone else. It would be strange to insist the burden of proof is on them to show that I didn’t see Bigfoot.
I don’t think, then, that philosophers intend the mere fact that something seems a certain way to them to be a genuine burden-shifting move in any respect other than the trivial one: they may be personally justified in holding the view, but not in any publicly evaluable way. Yet they may present how things seem to them as just the kind of consideration others shoulder some burden to overcome. If it were explicit that the philosopher was merely claiming that that’s how things seemed to them, and not to anyone else, this would be a dialectical non-starter. This is where I think a subtle shift comes into play, and it may account for why philosophers frequently say things like:
Intuitively, X.
It seems that X.
It’s intuitive that X.
It’s obvious that X.
It’s commonsensical that X.
We think that X.
What philosophers often do is shift between what are in fact nothing more than appeals to how things seem to the philosopher (n = 1) and language that implicitly makes claims that generalize to other people, or that treats seemings or intuitions in an unusual way: as though there are facts about which sorts of claims are “intuitive” or “obvious” or “self-evident” in some kind of free-floating way, such that a given proposition could be “obvious” independent of how obvious it seems to anyone in particular, or of any quantitative claim about any specified proportion of the population. But a claim cannot be obvious simpliciter. This isn’t even intelligible. Obviousness is, by its nature, perspectival; something can only be obvious to someone; it can’t just be “obvious.” Now, one could insist that by “obvious” one isn’t making a claim about what is or isn’t obvious to anyone in particular, but is instead making some kind of broader or more abstract claim about the conditions under which the consideration in question would seem obvious to some specified type of agent, e.g.:
(1) Most people find it obvious.
(2) All people would find it obvious.
(3) Most people who have rationally reflected on the matter and met certain epistemic conditions would find it obvious.
(4) All people who have rationally reflected on the matter and met certain epistemic conditions would find it obvious.
(5) Most philosophers find it obvious.
(6) All philosophers find it obvious.
(7) Most rational agents would find it obvious.
(8) All rational agents would find it obvious.
…and so on. One could get more specific still, if one wished. Claims about how obvious something is are, at least in part, empirical (and, in particular, psychological) claims, since they describe not some abstract epistemic principle but facts about how things seem from a given perspective or perspectives. It makes no more sense to say that a given claim is “obvious” without implicitly referencing some perspective than it does to say that a given food is “tasty” without implicitly referencing some standard that may or may not apply to the speaker, to whoever is reading or listening, or to some set of real or possible agents. Tastiness is a relational property; so, too, is obviousness.
And yet philosophers routinely use terms like obvious, self-evident, or intuitive without any explicit indication of which perspective they’re talking about, and without sufficient context to make it clear who or what they’re talking about. It’s not even clear what they mean by these terms, exactly. Instead, we’ll simply be told that, “intuitively, it seems that such-and-such.” Intuitive to whom?
Why do I raise such a fuss about this? When I studied philosophy, and over the many years I’ve discussed philosophical topics, I have routinely found that I don’t have the intuitions most people in a given discussion report having. When presented with a scenario, I would find that I had no inclination whatsoever to report as “intuitive” whatever it was most other people were reporting. What the hell are these intuitions people are having? When confronted with thought experiments that present me with some discrete set of options: “yes” or “no,” “realism” or “antirealism,” “true” or “false,” etc., I don’t feel any consistent impulse to judge in accordance with the categories I’m given across situations. There is no sui generis feeling, like a sixth sense, some “intellectual seeming,” that is purportedly analogous to sensory experiences. As far as I can tell, I don’t have anything like this. What if the propensity for reporting these sorts of intuitions is a learned habit? What if it is a kind of constructed experience that one develops via engagement with particular approaches to doing philosophy? I’m skeptical of the very notion of a philosophical intuition.
Setting this aside, suppose we grant for the sake of argument that there is something sufficiently approximating an intuition. Suppose that I have them, too. Even if this is the case, there’s still something strange about claiming that something “is intuitive,” or insisting some response to some thought experiment is “counterintuitive.” For instance, I’ve often heard people insist that it would be “counterintuitive” to accept the repugnant conclusion, and that to do so would involve “biting the bullet.”
Such remarks are strange. They seem (to me!) to treat it as a property of the philosophical position that it is counterintuitive, and not a fact about the person reacting to it. So what exactly do people mean when they say that a response to a particular philosophical position is “counterintuitive”? If they actually think that the response itself is somehow “counterintuitive,” then I’m at a loss as to what they could mean by that.
In principle, couldn’t anyone find anything intuitive or counterintuitive? Isn’t whether something is intuitive or counterintuitive dependent on whether a particular agent finds it intuitive or counterintuitive? It would seem to me that the only way to make sense of saying something is “counterintuitive” is as an empirical claim about the conditions under which agents (perhaps with certain characteristics) find the claim in question to be intuitive or not.
I expect that if a philosopher reads this, they may have something to say about what all this talk of things being intuitive means. I expect I’d find that other philosophers would give other answers. At least some philosophers seem to think that intuitions aren’t even a standard feature of philosophical method. According to Cappelen (2012), the widespread assumption that analytic philosophers frequently appeal to intuitions as evidence is false. I’m not going to address that here, but what does seem clear to me is that philosophers routinely make claims about things seeming some way, or being intuitive, or whatever, in a way that is unclear, but could readily be made clear if they simply rephrased what they were saying.
I think we sometimes see a shift from “this seems true to me” to “this seems true” simpliciter. And if something “seems true,” then the onus is on anyone who denies this to demonstrate otherwise. But the slide from something seeming true to you to it seeming true simpliciter isn’t one you’re entitled to make in virtue of it seeming true to you. Philosophers might instead opt for more meaningful and defensible remarks, e.g., that when they say something seems true or intuitive, they mean that it seems true or intuitive “to most people,” or that it’s “commonsense,” or whatever. But once they start making claims like this, such claims reveal themselves to be empirical claims about human psychology. And this exposes the philosopher to a legitimate question: How do you know that?
This has been a long and winding road. I started out with a concern about phenomenal conservatism. I’ve ended up talking about intuitions and evidence. I’m not sure how I got here, but sometimes letting the pieces fall where they may is a useful exercise for starting to get my thoughts in order. If I can make one final attempt to tie all of this together, it’s this: familiarity with psychology, and especially research on cognitive biases and cross-cultural differences in psychology, has made me intensely aware of just how idiosyncratic any particular person or group of people's views about the world can be, and, more importantly, just how wrong we can be. I suspect that if philosophers engaged more with psychology, it would go a long way in tempering their confidence in the reliability of their intuitions.
I also recommend that when philosophers make claims about something being “intuitive,” or “seeming” to be a certain way, they specify what they mean by this. Intuitive to whom? Seeming to whom? If the claims in question aren’t empirical, it’s important we get clear on what is being claimed; it may be that those of us inclined to regard such remarks as meaningful only in an empirical sense will simply deny that the claims in question are “intuitive” in some other respect. If they are making empirical claims (and, in particular, psychological claims), then getting clear about what those claims are can provide critics the opportunity to evaluate whether the claims in question are actually true.
References
Cappelen, H. (2012). Philosophy without intuitions. Oxford, UK: Oxford University Press.
Henrich, J., Heine, S. J., & Norenzayan, A. (2010). The weirdest people in the world? Behavioral and Brain Sciences, 33(2–3), 61–83.
Nosek, B. A., Hardwicke, T. E., Moshontz, H., Allard, A., Corker, K. S., Dreber, A., ... & Vazire, S. (2022). Replicability, robustness, and reproducibility in psychological science. Annual Review of Psychology, 73, 719–748.
Strickland, B., & Suben, A. (2012). Experimenter philosophy: The problem of experimenter bias in experimental philosophy. Review of Philosophy and Psychology, 3, 457–467.
Yarkoni, T. (2022). The generalizability crisis. Behavioral and Brain Sciences, 45, e1.