1.0 Introduction
I’ve been working on a response to Bentham’s Bulldog’s (hereafter “BB”) blog post, “Moral realism is true,” for a while. That response ended up approaching the length of a short book, so I’ve opted to write a slightly shorter critique. It’s still very long, but I wanted to get it out there anyway. In a way, it’s a kind of retrospective of various points I’ve raised about the dispute between moral realists and antirealists in the past two years on this blog. As you’ll see, I frequently refer to earlier posts that address specific points.
In brief: BB presents an aggressive polemic in favor of moral realism, but fails to present any good arguments for moral realism. My “short” response will still be pretty long. I don’t think it’d be worthwhile to revisit the longer version. We’ll see.
Much of this outline is a point-by-point response that follows the order of commentary in BB’s post. That will probably not make for very exciting reading. I would strongly encourage reading BB’s article first, or at least referencing it as I proceed.
2.0 Response to “An Introduction to moral realism”
BB’s introduction consists almost entirely of empty rhetoric and dubious claims that seem intended to depict antirealism as an insane and disreputable view that no reasonable person would endorse. BB’s remarks suggest:
Antirealism is “crazy”
Antirealism isn’t worth taking seriously
Antirealism falls in the category of allegedly absurd views like external world skepticism
Members of this category don’t have much significance
Members of this category are rare
Members of this category exist largely as curiosities for philosophers
This is mostly just tendentious rhetorical sneering that reveals more about BB’s attitude towards the view than it does any substantive philosophical claims. What I find more puzzling is the suggestion that skeptical views are especially rare. It is very common for philosophers to hold skeptical positions. See the 2020 PhilPapers survey. To provide just a few examples:
15% endorse scientific antirealism
29.8% deny the hard problem of consciousness
11.2% deny free will
These are not tiny numbers, and they represent only a very small sample of the skeptical positions that appear in the PhilPapers survey. Taken in isolation, each might look fairly modest, but with so many positions one can take, it would be unsurprising if a majority of philosophers held at least one “skeptical” position. In fact, 5.4% are external world skeptics. That’s 96 of the respondents to that question: not an insignificant number of philosophers at all.
In short: it is common for philosophers to hold skeptical views. More generally, philosophers often hold unconventional or rare views that deviate from what most other philosophers think.
BB also trots out a lot of the mistakes I’ve documented in the past few years. Take this remark:
So if you think that the sentence that will follow this one is true and would be so even if no one else thought it was, you’re a moral realist. It’s typically wrong to torture infants for fun!
This is run-of-the-mill normative entanglement. I’m an antirealist, and I think “It’s typically wrong to torture infants for fun!” I even think it’d be wrong if no one else thought it was! Giving the impression that antirealism entails disagreeing with this is misleading. BB should know better.
BB also says:
We do not live in a bleak world, devoid of meaning and value. Our world is packed with value, positively buzzing with it, at least, if you know where to look, and don’t fall pray [sic] to crazy skepticism.
This is also normative entanglement. Antirealism only entails that we live in a world without stance-independent moral facts, not a world without “meaning and value”. The latter is a normative claim that is consistent with both realist and antirealist conceptions of meaning and value. Realists are not entitled to help themselves to the presumption that only realist conceptions of meaning and value are legitimate, and that therefore, if you’re an antirealist, you think the world has no meaning and value. Antirealists can both (a) reject moral realism and (b) reject realist conceptions of meaning and value, and hold that antirealist conceptions of meaning and value are correct.
BB does what many realists do: drop any explicit indication of a realist conception of first-order terms like “meaning” and “value,” which gives the impression that antirealism rejects not only the realist’s conception of these notions, but any conception of these notions at all (as if only the realist’s conceptions of meaning and value were legitimate). I call this the halfway fallacy, and discuss why I think it’s a problem here.
BB also employs what I take to be objectionable rhetoric by suggesting that antirealism involves biting a bullet. I explain why I think this is a mistake here. The short version of this is simple: I not only deny moral realism, I also deny that there’s anything appealing about realism or any presumption in its favor. I don’t think rejecting moral realism involves biting a bullet any more than denying the existence of vampires does.
BB and other realists often try to frame moral antirealism as some kind of insane fringe view that only gibbering idiots would endorse. It’s never a good sign when critics have to go out of their way to craft a misleading narrative to try to make rival positions look bad. If the position really is so stupid and insane, the arguments themselves should be enough to demonstrate this without all the rhetorical fireworks.
3.0 Responding to “A Point About Methodology”
BB’s next section introduces phenomenal conservatism (PC). This is, roughly, the view that you are justified in believing that things are the way they seem so long as there isn't a compelling reason to give up those beliefs.
I don’t endorse phenomenal conservatism, but granting it for the sake of argument does very little (if anything) to strengthen the case for moral realism.
Phenomenal conservatism at best provides only private, personal “evidence” for a view: if you find that things seem a certain way to you, then you are to that extent “justified” (whatever that means) in believing those things.
It seems to me that moral realism isn’t true. PC does just as much to “justify” my antirealism as it does to justify the realist’s realism (if it seems to them that realism is true). PC is neutral with respect to which of these positions is correct, and does not distinctly favor realism or antirealism.
BB also proposes “wise” PC:
Wise Phenomenal Conservatism: If P seems true upon careful reflection from competent observers, that gives us some prima facie reason to believe P.
The use of “careful reflection” and “competent observers” provides a ton of wiggle room. I regard myself as a competent observer who has carefully reflected on moral realism, and the result is that I am even more confident it isn’t true. This revision to PC probably isn’t going to achieve much, since it will just prompt a pivot towards discussion of what careful reflection and competent observation entail, and how we can determine who meets these conditions.
BB brings up some responses to PC, but I don’t care about those so I will move on.
4.0 Responding to “2 Some Intuitions That Support Moral Realism”
BB begins with the following:
The most commonly cited objection to moral anti-realism in the literature is that it’s unintuitive. There is a vast wealth of scenarios in which anti-realism ends up being very counterintuitive.
BB does something I frequently criticize: describing things as intuitive or counterintuitive without qualification. Counterintuitive to whom? No claim is intrinsically intuitive; how “intuitive” something is depends on the intuitions of the person evaluating the claim. I don’t find moral antirealism counterintuitive, nor do I think there are any scenarios where it is counterintuitive. It better accords with my intuitions, and I don’t think there are any good reasons to accord more weight to BB’s or anyone else’s intuitions than my own. So claims that it’s “counterintuitive”, in and of themselves, don’t have much dialectical force. One could say “I find such claims implausible” and then speculate about whether one’s readers will, too.
BB goes on to say each version of antirealism has distinct counterintuitive implications:
We’ll divide things up more specifically; each particular version of anti-realism has special cases in which it delivers exceptionally unintuitive results. Here are two cases
I now turn to these cases.
4.1 The first “counterintuitive” case
Here’s BB’s first case:
This first case is the thing that convinced me of moral realism originally. Consider the world as it was at the time of the dinosaurs before anyone had any moral beliefs. Think about scenarios in which dinosaurs experienced immense agony, having their throats ripped out by other dinosaurs. It seems really, really obvious that that was bad.
Of course it was bad. This in no way demonstrates anything counterintuitive about any form of antirealism. An antirealist can simply regard this as bad.
BB continues:
The thing that’s bad about having one’s throat ripped out has nothing to do with the opinions of moral observers. Rather, it has to do with the actual badness of having one’s throat ripped out by a T-Rex.
So BB describes a scenario, says that it “seems really, really obvious that that was bad,” which is an ambiguous remark that can be interpreted in ways that are trivially easy to show to be consistent with antirealism, and then follows this by simply asserting that it was bad in a way only consistent with realism. So BB’s first demonstration of something counterintuitive is to present an innocuous scenario and then assert that it’s bad in a realist sense.
When I say that such scenarios are bad, I am telling you something about what I think about them: That I disapprove of them, regard them as undesirable, don’t want them to occur, and so on. It absolutely has something to do with the opinion of a moral observer: me. Here, BB says this has to do with “actual badness.” Here we have the use of a deceptive modifier, “actual” (see this article where I elaborate on how realists misuse deceptive modifiers).
The implication here is that if something were bad in some nonrealist sense, like a subjectivist sense, then it isn’t actually bad. Well, I don’t like the taste of shit. But I’m not a gastronomic realist (i.e., I don’t think there are stance-independent facts about whether food tastes good or bad). Does that mean the taste of shit isn’t “actually bad”? Should I just be indifferent to the taste of what I eat since there are no stance-independent normative facts about taste? I don’t know about you, but that strikes me as ridiculous.
Realists have no business claiming that only their conception of morality involves “actual” badness. Antirealism isn’t the view that nothing is “actually” good or bad. It’s a rejection of the claim that anything is good or bad in the realist’s sense. The antirealist is not obliged to grant that things could only be good or bad in the realist’s sense, i.e., that they would only “actually” be good or bad if they were stance-independently good or bad: we can reject this, too. And I do. I think the only sense in which anything is “actually” good or bad is an antirealist sense. What you see here is, yet again, rhetoric, misleading framing, and a presumption in favor of realism running all through BB’s characterization of the dispute.
So far, BB has not presented any substantive critique of antirealism. BB appears to have presented a scenario and asserted that things in that scenario are bad in a realist’s sense. Assertions aren’t arguments. Perhaps a reader is supposed to read this and go “yeah, I think it’s that way, too.” I’d be surprised if as standard a scenario as this prompted someone to reflect or recognize they had realist intuitions where they didn’t previously. I suspect instead this would simply prompt them to affirm whatever they were already disposed to affirm. But perhaps this scenario would somehow prompt some readers to recognize realist inclinations. I don’t know. Whatever the case may be, this scenario does not strike me as having much argumentative force.
BB continues:
When we think about what’s bad about pain, anti-realists get the order of explanation wrong. We think that pain is bad because it is — it’s not bad merely because we think it is.
Again, who is “we”? BB makes a vague and unqualified assertion about how “we” think: that “we” think things are bad because they are, and not because we think they are. Again, this is simply an assertion. Assertions aren’t arguments. This is the very thing BB is supposed to be demonstrating, not simply declaring. Insofar as this assertion is supposed to characterize how anyone other than BB thinks, that’s an empirical claim, and not one BB is entitled to assert as true. It isn’t how I think. When I say things are “bad,” the order of explanation runs in the antirealist direction: their badness is constituted by my attitude towards them. I think it’s BB who gets the explanatory story backwards. BB has presented absolutely nothing that would suggest he’s right about this and I’m wrong. What we have here are mere assertions.
4.2 BB’s second “counterintuitive” case
BB’s second scenario is:
The second broad, general case is of the following variety. Take any action — torturing infants for fun is a good example because pretty much everyone agrees that it’s the type of thing you generally shouldn’t do. It really seems like the following sentence is true
“It’s wrong to torture infants for fun, and it would be wrong to do so even if everyone thought it wasn’t wrong.”
It seems to whom? That sentence doesn’t “seem true” to me. It’s also unclear what it means. Is BB asking whether I’d think torturing infants for fun would be wrong, relative to my standards, even if everyone else thought it wasn’t wrong? If so, that’d be consistent with antirealism: I do think it’d be wrong even if everyone else thought otherwise.
If, instead, I am included in this scenario, then I’m being asked whether it’d still be wrong even if I thought it wasn’t wrong. But note that there are two different versions of me: the actual me evaluating this scenario, and a hypothetical me with repugnant moral values. Again, who is this statement being relativized to? To the actual me or the hypothetical me? If you ask me whether torturing infants for fun would still be wrong relative to my actual moral values even if a hypothetical version of me thought it wasn’t wrong, then the answer is, again, consistent with antirealism: yes, it’d still be wrong, relative to my actual values.
It wouldn’t be wrong relative to the hypothetical me’s values, but this is trivially true for the antirealist: if on my view what it means for an action to be wrong just is for it to be wrong relative to an evaluative standard, and you say “suppose everyone held an evaluative standard according to which it wasn’t wrong,” then of course the action in question wouldn’t be wrong relative to the evaluative standards of anyone in that hypothetical. This is, again, trivially true. It’s like asking:
“If nobody liked the taste of chocolate, would anyone like the taste of chocolate?”
The answer will be a definitive “of course not.” BB’s scenario exploits ambiguity to give readers the misleading impression that if you’re a moral antirealist, you’re somehow contingently okay with torturing babies for fun in some nontrivial sense. After all, if BB wants to show there’s something mistaken or wrong or repugnant or unappealing about moral antirealism, it won’t do to ask “if you thought torturing babies for fun wasn’t wrong, would you think torturing babies for fun wasn’t wrong?” Conditional on antirealist views that relativize moral claims, that’s all such a question would amount to, so it’d be trivial. The only way for BB’s scenario to “work,” i.e., to not ask something trivial, is if a realist notion of wrongness is employed somewhere in it. In that case, though, the scenario is superfluous: it may ostensibly be intended to serve as an intuition pump, but since it’s worded in an ambiguous and misleading way, whatever value it has for this purpose is inextricably entangled with its ambiguous and confounding aspects; adequate disambiguation functionally amounts to simply asking the reader whether torturing infants for fun is stance-independently wrong. The “scenario” is at best a mere recapitulation of asking someone whether they’re a realist about a specific moral issue, and at worst is actively misleading.
The futility of this scenario is eclipsed by the next example BB gives:
Similarly, if there were a society that thought that they were religiously commanded to peck out the eyes of infants, they would be doing something really wrong. This would be so even if every single person in that society thought it wasn’t wrong.
I cannot stress this enough: I am an antirealist, and I completely agree with BB. That’s because whether *I* think something is morally right or wrong isn’t determined by whether individuals or societies approve of a particular action. I don’t think that if some society is okay with plucking out the eyes of infants, that this somehow makes it okay. Whether it’s morally good or bad relative to my values depends on my values. BB gives the impression here that what he’s really asking in the previous scenario is whether an action’s being morally right or wrong depends on the values of the agents performing the action, i.e., agent relativism. Agent relativism holds that whether an action is right or wrong is determined by the standards of the agent performing the action or, in the case of agent cultural relativism, the standards of that agent’s culture.
This form of relativism has the unusual but noteworthy implication that if Alex thinks torturing babies for fun is good, then it is, in fact, good, in a way that I and everyone else must respect: if Alex wants to torture babies, and attempts to do so, the rest of us are obliged to regard this as “good” and to stand aside. In other words, agent relativism imposes constraints on everyone else’s actions that are binding on those people independent of their own goals, standards, and values. It functions a lot more like realism than any antirealist view I’d consider remotely acceptable. I often refer to it as “a la carte realism.” While not technically a form of moral realism, since it takes moral facts to be stance-dependent, its most objectionable elements are, ironically, precisely the respects in which it most closely resembles moral realism. Critics of antirealism often depict relativism as if agent relativism were the only form of relativism.
Antirealists do not have to think that an action wouldn’t be wrong if the people performing that action think it’s not wrong. Our own evaluative standpoints don’t have to shift and move in accord with other people’s moral standards. We can (and I do) always judge in accord with our own standards. Moral antirealism doesn’t entail agent relativism. Insofar as BB’s scenario conflates antirealism and agent relativism, this scenario not only doesn’t serve as any substantive critique of antirealism or bolster the case for realism, it serves only to muddle the dispute.
5.0 Discovery vs. invention
BB next employs another line of reasoning that does nothing to bolster the case for realism:
This becomes especially clear when we consider moral questions that we’re not sure about. When we try to make a decision about whether abortion is wrong, or eating meat, we’re trying to discover, not invent, the answer.
Again, note that BB simply asserts a realist-friendly reaction to this, rather than presenting anything like an argument. Worse, this is a false dichotomy. The impression BB gives is that when it comes to moral deliberation, we have two options:
Discover the stance-independent moral truth
“Invent” an answer
I reject both of these options. BB gives the impression that if you don’t deliberate with an eye towards stance-independent truth, you just make up whatever answer you want. This gives the impression that an antirealist is relegated, when it comes to moral deliberation, to a transparently constructive and blatantly arbitrary process of simply deciding in the moment whether they’d like to torture babies or whatever. This is not at all what a moral antirealist is limited to. Consider everyday nonmoral decisions: what career to choose, what clothes to buy, what to have for lunch. Suppose, for a moment, that in these scenarios one’s goal is to optimize with respect to one’s own goals and preferences. That is, suppose we’re not gastronomic realists and fashion realists and so on: when we are trying to decide what clothing to buy, our goal is simply to buy clothing we want to wear and that will serve our goals.
Would this mean we instantly and immediately know which clothing to buy at the store?
Of course not. What clothing will best serve our interests is not immediate or transparent to us. We may have to think:
I like green more than blue, but I already have more green than blue shirts, so this one’s out
Hmmm, I like the texture of this one, but at that price? Perhaps not
This one is nice, but isn’t this kind of going out of fashion?
This one’s a bit too tight around the arms, but the other one’s too long…
Even when making decisions entirely with respect to our own goals, preferences, and values, and for matters of deliberation where I suspect many readers will agree that, at the very least, we’re not presumptively being realists about the matter at hand, we still have to deliberate and think things through. We don’t simply invent our answers.
When it comes to morality, I don’t invent my values, or invent solutions to moral dilemmas. Yet I still experience moral dilemmas. My moral values are not transparent and obvious to me, nor is their application to specific cases, nor is the degree to which I weigh one moral consideration against another, and so on.
In a certain respect, then, the antirealist can “discover” what is morally right or wrong relative to their own values, preferences, standards, epistemic frameworks, and so on. If BB’s use of “discover” is, by stipulation, limited to discovering the stance-independent moral facts, then BB has presented a genuinely false dichotomy. If, instead, it’s flexible enough to include conceptions of discovery consistent with antirealism, then an antirealist can choose “discover” just like the realist, and the dichotomy loses all its force: it no longer presents a distinction that favors the realist.
BB adds this line:
If the answer were just whatever we or someone else said it was — or if there was no answer — then it would make no sense to deliberate about whether or not it was wrong.
An antirealist is not obliged to characterize moral deliberation in terms of an action being right or wrong entirely on the basis of whether I “say” it is. We can and do employ self-imposed constructive procedures for determining whether specific actions are morally right or wrong. For instance, a moral antirealist can be personally committed to utilitarianism, in that they favor maximizing happiness and minimizing suffering. They may still struggle with the question of whether abortion is moral or immoral because they may not know whether any given abortion policy or stance towards abortion would, if implemented, maximize utility. The idea that if you’re an antirealist that it’d make no sense to deliberate is simply mistaken.
As far as what one would do if “there was no answer,” this again needs to be disambiguated. If there was no answer to whether abortion was morally right or wrong in a realist sense? Or in some other sense? Even if nothing is morally right or wrong in the realist sense, and even if that’s the only sense in which anything is or could be morally right or wrong, such that nothing is morally right or wrong at all, well, sure, then it wouldn’t make sense to deliberate about whether abortion is “morally right” or “morally wrong.”
So what? What follows from that? Does that mean that the error theorist who holds such a view should just stop thinking about abortion, or not care? No. Absolutely not. Even if you don’t think abortion is, technically speaking, “morally right” or “morally wrong,” you can still deliberate about what kind of society you want to live in, what policies would optimize for your preferences, and so on. Nothing about the inability to deliberate specifically about whether something is stance-independently right or wrong has any practical consequences for deliberating about what would optimize for your personal values and preferences. Realists routinely give the impression of something like a “realism or nihilism” gambit: that if you don’t accept a realist framing of deliberation or whatever, you’re left with nothing. This is complete nonsense.
Imagine if I did the same for taste: You have two options:
Accept that there are stance-independent gastronomic facts governing what you should or shouldn’t eat, according to which, when you set aside all dietary, ethical, financial, and other considerations and your sole consideration is how good or bad food tastes, you are obligated not to eat the foods that taste better to you, but to eat foods that are stance-independently tasty, i.e., tasty independent of how they taste to you or anyone else
Be completely indifferent to taste decisions. You have zero rational basis for preferring to eat foods that taste better to you than ones that taste terrible. It makes absolutely no sense to prefer eating chocolate over feces, or to eat bread instead of an equally nutritious and edible pile of nutritive goo that tastes like vomit.
This is, of course, ridiculous. If you reject (1) you’re not obliged to accept (2), or vice versa. I don’t think there are stance-independent facts about what food is tasty or not. This in no way makes it impossible or nonsensical for me to deliberate about what to eat for lunch. It also does not entail that I invent my food preferences.
Likewise, I’m not limited to either discovering stance-independent moral facts or just “inventing” moral truths on a whim. BB sets up what is at best a false dichotomy that an antirealist can trivially circumvent.
6.0 Arguing about morality
Next, BB says:
Whenever you argue about morality, it seems you are assuming that there is some right answer — and that answer isn’t made sense by anyone’s attitude towards it.
Again, “it seems.” It seems to whom? It doesn’t seem this way to me.
Once again, BB gives the impression that some feature of ordinary thought, like deliberation, or in this case, argumentation, presupposes realism. This is not true.
When people argue about things, they can and do appeal to intersubjective and shared goals and values. Suppose you and I are both committed antirealists and utilitarians. We could, given these commitments, argue about public policy or whether an action was right or wrong relative to our shared moral frameworks. If I argue with a complete stranger, I can and typically do intend to appeal to their own values, or I may wish to prompt them to reconsider their stance or commitments. They may be favoring a policy I dislike and don’t want implemented, so I may seek to get them to recognize this policy isn’t something they’d approve of on reflection. This does not require me to think that they must reflect on what the stance-independent moral facts are. I might just be trying to get them to realize they’re being inconsistent, or a bit of a jerk.
Furthermore, people routinely argue with the intention of achieving their goals, rather than arriving at the truth. Call this “coordination argumentation,” rather than “truth-targeted argumentation,” just to pick the first terms that came to mind. People haggle about the price of goods. This doesn’t entail “price realism.” Friends argue about what pizza toppings to get when ordering pizza. This doesn’t entail “pizza topping realism.” People argue because they want things and other people want things and they need to negotiate.
Absolutely nothing about the fact that people argue lends itself to moral realism. Antirealists can and do argue all the time. We want things, and we frequently assume others do, too. Many arguments are rooted in these simple facts.
In this case, BB seems to think the way people argue implies they think there’s a stance-independently correct answer. But I don’t think BB has shown that this is the case. It instead appears to me that BB is reporting on how things seem to him. I would invite BB to consider that his conception of how things seem may be predicated on a simplistic and inaccurate picture of the reasons and motivations people have for arguing.
Lastly, it’s worth noting just how weak this point would be even if it were true. Suppose that when many people argue about moral issues, they presuppose some type of moral realism. That might suggest a presumption of moral realism was baked into…what? Much contemporary English? The psychology of Americans? That’s hardly a powerful source of evidence for moral realism.
What BB seems to be trying to show here, which is typical of realist arguments, is that people are generally inclined towards realism, and that perhaps the reader is already inclined towards realism. I don’t think these attempts are successful, but even if they were, they’d establish very little: a very weak presumption in favor of realism. Nothing amounting to a substantive argument in its favor. You can try to build a cumulative case off of a bunch of weak lines of evidence like this, I suppose.
BB goes on to consider some potential rejoinders. They’re not very good, but let’s have a look at the second one (the first is too weak to bother with):
Principle 2: confirmation is needed for a believer to be justified when people disagree with no independent reason to prefer one belief or believer to the other.
BB responds:
Very few people disagree, at least based on initial intuitions, with the judgments I’ve laid out. I did a small poll of people on Twitter, asking the question of whether it would be wrong to torture infants for fun, and would be so even if no one thought it was. So far, 82.6% of people have been in agreement.
This is not a good response. First, BB claims that “Very few people disagree” with the judgments he’s presented. BB makes a vague, underspecified empirical declaration about the psychology of some unknown population of people. What evidence does BB present? A Twitter poll. Twitter polls of the people who see BB’s posts are not representative of any meaningfully relevant population of people. Do these findings generalize to people in general? Who knows, but probably not.
Note that BB has conducted what amounts to an empirical test of people’s metaethical views. There are several limitations with this measure, all of which center on the lack of rigor and care taken to construct a good measure and to assess its validity.
First, we have little information about the representativeness of the sample. That is, whatever proportion BB obtains in conducting this survey, we don’t know how well this proportion represents how people in general would respond. Respondents may be disproportionately likely to agree with BB, or even to disagree.
Second, the results may not be independent; that is, people could potentially see other people’s responses, or comments about those responses, prior to responding to the survey.
Third, we have little direct evidence about how participants interpreted the question that was asked. This is essential, since responses based on unintended interpretations tell us nothing about the hypothesis at issue.
Fourth, the study is likely underpowered; with a small sample, even the reported 82.6% figure comes with a wide margin of error, as the rough sketch below illustrates.
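To make the precision point concrete, here’s a rough back-of-the-envelope sketch. So far as I can tell, no sample size for the poll is reported, so the sample sizes below are purely hypothetical, and the calculation charitably pretends the poll were a simple random sample of some population of interest, which a self-selected Twitter poll is not.

```python
# Rough, illustrative sketch only. BB reported 82.6% agreement but, so far as
# I can tell, no sample size, so the n values below are hypothetical. The
# formula also assumes a simple random sample, which a self-selected Twitter
# poll is not, so treat these as best-case numbers.
import math

P_HAT = 0.826  # reported proportion agreeing
Z = 1.96       # ~95% confidence, normal approximation

def margin_of_error(p_hat: float, n: int, z: float = Z) -> float:
    """Normal-approximation margin of error for a sample proportion."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

for n in (50, 100, 500, 2000):
    moe = margin_of_error(P_HAT, n)
    print(f"n={n:>4}: 82.6% +/- {moe:.1%} "
          f"(roughly {P_HAT - moe:.1%} to {P_HAT + moe:.1%})")
```

Even in this charitable best case, a small poll leaves a lot of wiggle room, and no amount of margin-of-error math addresses the deeper problem that the respondents aren’t drawn from any population we care about.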
Furthermore, when you present ambiguous, poorly-phrased, and misleading questions, the proportion of people who judge superficially in the direction you find favorable isn’t a good indication that they share your specific position. It’s very, very tricky to craft appropriate questions for probing untrained people’s metaethical judgments. I developed this position because I wrote my dissertation specifically on this topic. BB, along with philosophers in general, underestimates the degree to which ambiguity, pragmatic considerations, normatively loaded remarks, and other factors can prevent people’s responses to scenarios (especially dichotomous or forced-choice scenarios that restrict one’s answers to a simple yes/no) from being diagnostic of whether they endorse the position you think they do.
My colleague David Moss and I wrote a short paper addressing this issue, which you can find here. The problems we outline there only scratch the surface. It’s incredibly difficult for nonphilosophers to respond to philosophical scenarios in a diagnostic way even when you carefully attempt to disambiguate those questions and avoid biasing factors. BB doesn’t appear to me to put even minimal effort into doing this, and, if anything, seems to me to dial up the ambiguity and biasing aspects of the questions he poses. As such, I don’t think people’s responses to these questions would be meaningful anyway.
What we’d really need to know is why people responded in one way or another, and it may turn out in many cases that the reason why had very little to do with a clear and distinctive endorsement of a specific metaethical position, to the exclusion of other, irrelevant considerations (e.g., normative considerations). Realists routinely wrap their framing of realist positions up in a confounding bow, such that normatively desirable implications are entangled with endorsing the realist’s position, while unappealing, repugnant, monstrous, or stupid notions are entangled with the antirealist’s view. You’d need to disambiguate these to show that people favor realist responses for their own sake. Realists presenting scenarios almost never do this, and BB’s scenarios are no exception.
Of course, these findings are only informative if the measures BB used were valid. So even if we set aside the fact that we have almost no reason at all to think BB’s results are representative of people in general, or of any informative population of people at all, there’s still the question of whether BB asked a valid question; that is, a question that consistently and reliably allows us to distinguish people’s views in accord with the stipulated operationalizations; in this case, presumably the goal is to distinguish realists from antirealists. BB simply hasn’t done the work to show that a Twitter poll is sufficient to provide much substantive evidence that his position is widely shared among any meaningful population of people.
Yet one of the most critical deficiencies of appealing to such evidence is that we already have a wealth of more carefully designed empirical studies on how ordinary people respond to questions about metaethics. Such studies have been designed by researchers with the knowledge and training to devise such measures, were gathered under more careful conditions, feature larger sample sizes, feature a wide variety of measures, and employ measures that have been refined for well over a decade to account for and avoid the many methodological pitfalls that I and others have identified with measures of metaethics. Why would BB conduct a flimsy online survey when there’s already an entire body of empirical literature on the question? Why not appeal to that empirical literature?
Conducting a survey like this is like using a flimsy personal poll as evidence for the proportion of atheists in the United States, while making no mention of Pew or Gallup data. It’s strange. If I conducted a survey of my friends and found most were atheists, I doubt anyone would take this as good evidence that most people are atheists. Does BB not know that there’s already research on folk metaethics? Does BB not care? Does BB think his own question is better than all the research out there? Thinking you can establish that most people agree with you by conducting a Twitter poll, which will be visible primarily to people who follow you, which will involve a self-selected sample, and for which you’ve shown no indication of having validated or workshopped the wording to iron out ambiguities or confounds, is bizarre. There’s a whole literature not just on how nonphilosophers think about questions like these, but a literature, one I myself am a part of, that evaluates the validity of these measures.
Even well-trained psychologists and philosophers who attempt to very carefully solicit people’s metaethical responses, using a variety of methods under far more rigorous conditions than those BB employed, struggle to do so. BB’s survey is, to put it bluntly, almost totally uninformative. It’s also weird that BB didn’t include a link to the poll or even the wording of the question that was asked. I don’t think there’s anything suspicious about that, but it suggests to me an attitude of complacency and a lack of awareness of the challenges of constructing a valid question (i.e., a question that actually tells you what you want to know).
I’ve discussed this so many times it’s become tedious: the best available empirical evidence does not suggest that most people are moral realists, act like moral realists, think like moral realists, speak like moral realists, endorse moral realism, or in any way favor moral realism. The bulk of the psychological research on folk metaethics goes back about twenty years. Over the past two decades, I and others, such as Thomas Pölzler, Jennifer Wright, James Beebe, Taylor Davis, and David Moss, to name some of the authors involved, have identified methodological shortcomings in earlier research and have sought to correct for those shortcomings. Newer and more robust empirical studies tend to find very high rates of antirealism. See, for instance, these results from Pölzler and Wright (2020):
[Figure omitted: Fig. 1, p. 73 of Pölzler and Wright (2020).]
Or consider this more recent, cross-cultural study from Pölzler, Tomabechi, and Suzuki. They employ a range of measures and ways of coding the data, but even if you use their most conservative measure, which inflates apparent realism to a much greater extent than their other methods, you still get results like this:
[Figure omitted: results from Pölzler, Tomabechi, and Suzuki.]
The measures used in these studies may not be valid, or may suffer methodological shortcomings. But they’re more informative than BB’s Twitter poll. At the very least, I’d hope such findings give BB and others pause in the constant and insistent presumption that most people are intuitive moral realists. The data just does not seem to point in this direction at the moment.
BB goes on to say:
Also, those who disagree tend to have views that I think are factually mistaken on independent grounds. Anti-realists seem more likely to adopt other claims that I find implausible.
It seems like BB is providing biographical details. If you share BB’s other views, then maybe this will have some traction. Otherwise, I don’t think it carries much weight if one’s goal is to present a case for moral realism.
Next, BB says:
Additionally, they tend to make the error of not placing significant weight on moral intuitions.
It’s not clear whether this is true. Which intuitions? Normative intuitions or metaethical ones? BB’s remark here simply isn’t clear. If BB is talking about metaethical intuitions, most antirealists I know find realism intuitive and put, if anything, more stock in intuitions than I think they should. I’m a bit of an outlier in that I don’t have realist intuitions in the first place. That may put me in good company relative to the general population, since I don’t think they typically have realist intuitions either, but at least among philosophers I think BB’s claim is probably not true. It’s hard to say, because the claim is too underspecified to readily evaluate.
BB then says:
Thus, I think we have independent reasons to prefer the belief in realism.
We have independent reasons because antirealists tend to hold views BB finds implausible? So is BB just assuming his readers share his intuitions? Maybe they do, but this is starting to look an awful lot like preaching to the choir.
BB next says:
It also seems like a lot of the anti-realists who don’t find the sentence “it’s typically wrong to torture infants for fun and would be so even if everyone disagreed” intuitive, tend to be confused about what moral statements mean — about what it means to say that things are wrong.
Note, again, that BB is talking about how things seem without qualification. Seems to whom? Antirealists don’t seem confused to me. If BB thinks we’re confused, it’s BB’s job to present arguments or evidence for that. Simply reporting that “it seems” we’re confused is hardly an argument. I don’t think we are confused. Rather, we simply disagree with BB about what those statements mean, or about what the people making those statements mean. Disagreement is not confusion.
BB then says:
I, on the other hand, like most moral realists, and indeed many anti-realists, understand what the sentence means. Thus, I have direct acquaintance to the coherence of moral sentences — I directly understand what it means to say that things are bad or wrong.
Again, this is a mere assertion. Why should I grant that BB and most moral realists “understand what the sentence means”? BB hasn’t presented any substantive arguments or evidence. What are we to conclude? That BB has demonstrated he understands what a sentence means because he conducted a Twitter poll? BB claims to have direct acquaintance, but what entitles BB to such a claim? Why couldn’t I similarly assert that I, contra BB, am acquainted with the meaning of these terms, and that what they mean better accords with antirealism than realism? Again, BB is not presenting arguments, but just asserting things, and, at best, providing very feeble evidence that is, if anything, overshadowed by evidence to the contrary. Carefully constructed psychological studies are at least better than improvised Twitter polls conducted on one’s personal Twitter account.
BB next says:
If it turned out that a lot of the skeptics of quantum mechanics just turned out to not understand the theory, that would give us good reason to discount their views. This seems to be pretty much the situation in the moral domain.
BB has presented no substantive arguments or evidence that I or any other moral antirealists don’t understand “the moral domain,” or what moral claims mean, or anything of the sort. As far as I can tell, this is simply asserted. There’s also no specification of what proportion of antirealists allegedly misunderstand these things, nor is any supporting evidence given of any specific antirealist failing to understand anything in particular. Who are these antirealists? What do they not understand? Why does BB think they don’t understand it?
7.0 Most philosophers are realists
Next, BB says:
Additionally, given that most philosophers are moral realists, we have good reason to find it the more intuitively plausible view. If the consensus of people who have carefully studied an issue tends to support moral realism, this gives us good reason to think that moral realism is true.
No, it doesn’t. I’ve addressed this at length in a nine-part series called The PhilPapers Fallacy, where I critique the notion that the fact that most analytic philosophers are moral realists is good evidence for moral realism. You can find that here, with the table of contents for the set of posts at the bottom. Here’s the synopsis, reproduced in full:
People often appeal to the proportion of philosophers who endorse a particular view in the PhilPapers survey as evidence that a given philosophical position is true. Such appeals are often overused or misused in ways that are epistemically suspect, e.g., to end conversations or imply that if you reject the majority view on the matter that you are much more likely to be mistaken, or that you’re arrogant for believing you’re correct but most experts aren’t.
That most respondents to the PhilPapers survey endorse a particular view is very weak evidence that the view is true. Almost everyone responding to the survey is an analytic philosopher, and the degree to which the convergence of their judgments provides strong evidence is contingent on, among other things, (a) the degree to which analytic philosophy confers the relevant kind of expertise and (b) the degree to which their judgments are independent of one another.
There is good reason to believe people trained in analytic philosophy represent an extremely narrow and highly unrepresentative subset of human thought, and there is little evidence that the judgments that develop as a result of studying analytic philosophy are reflective of how people from other populations, or people under different cultural, historical, and educational conditions, would think about the same issues (if they would think about those issues at all).
Since most philosophers responding to the 2020 PhilPapers survey come from WEIRD populations, most of them are psychological outliers with respect to most of the rest of humanity. Their idiosyncrasies are further reinforced by self-selection effects (those who pursue careers in philosophy are more similar to one another than two randomly selected members of the population they come from), a narrow education that focuses on a shared canon of predominantly WEIRD authors, and induction into an extremely insular academic subculture that serves to further reinforce the homogenization of the thinking of its members. As such, analytic philosophers are, psychologically speaking, outliers among outliers among outliers.
At present, there is little evidence or compelling theoretical basis for believing that human minds would converge on the same proportion of assent to particular philosophical issues as what we see in the 2020 PhilPapers survey results if they were surveyed under different counterfactual conditions.
There is also little evidence, and not much in the way of a compelling case, that analytic philosophy confers expertise at being correct about philosophical disputes. The presumption that the preponderance of analytic philosophers sharing the same view is evidence that the view is correct is predicated, at least in part, on the further presumption that the questions are legitimate and that mainstream analytic philosophical methods are a good way to resolve those questions. Both of these claims are subject to legitimate skepticism. Analytic philosophy is a subculture that inducts its members into an extremely idiosyncratic, narrow, and comparatively homogeneous way of thought that is utterly unlike how the rest of humanity thinks. It has little track record of success and little external corroborating evidence of its efficacy.
Critics are not, therefore, obliged to confer substantial evidential weight on the proportion of analytic philosophers who endorse a particular philosophical position. Resolving how much stock we should put in what most philosophers think rests, first and foremost, on resolution about the efficacy of their methods.
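One point in the synopsis worth making concrete is the bit about independence. Here’s a toy simulation, with entirely made-up numbers (it is not a model of actual philosophers or of the PhilPapers survey): if each of 1,000 hypothetical “experts” independently has a 60% chance of getting a yes/no question right, the majority verdict is almost always right; if most of them instead defer to a shared tradition that is itself only 60% reliable, the majority verdict is barely better than a single judge. Agreement among non-independent judges carries much less evidential weight than a headcount suggests.

```python
# Toy simulation of why independence matters when treating expert consensus
# as evidence. Every number here is an illustrative assumption, not an
# estimate about actual philosophers or the PhilPapers survey.
import random

random.seed(0)

N_EXPERTS = 1000   # hypothetical number of experts polled
P_CORRECT = 0.6    # chance a single, independent expert gets a yes/no question right
P_FOLLOW = 0.9     # chance a non-independent expert simply defers to a shared tradition
N_TRIALS = 500     # number of simulated questions

def majority_correct_rate(independent: bool) -> float:
    """Fraction of simulated questions on which the majority verdict is correct."""
    hits = 0
    for _ in range(N_TRIALS):
        # The shared "tradition" is itself only right P_CORRECT of the time.
        tradition_right = random.random() < P_CORRECT
        correct_votes = 0
        for _ in range(N_EXPERTS):
            if independent or random.random() >= P_FOLLOW:
                expert_right = random.random() < P_CORRECT
            else:
                expert_right = tradition_right
            correct_votes += expert_right
        if correct_votes > N_EXPERTS / 2:
            hits += 1
    return hits / N_TRIALS

print("Majority correct, independent judgments:      ", majority_correct_rate(True))
print("Majority correct, highly correlated judgments:", majority_correct_rate(False))
```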
8.0 Responding to “What if the folk think differently”
In the next section, BB addresses the suggestion that most people may not be realists. BB begins by saying:
I’m supremely confident that if you asked the folk whether it would be typically wrong to torture infants for fun, even if no one thought it was, they’d tend to say yes.
I already addressed this above: this is a terrible question that’s ambiguous and not a good way to measure whether respondents are realists or not. If BB thinks otherwise, BB is welcome to do empirical research that establishes the validity of such a question as a diagnostic tool for evaluating whether nonphilosophers are realists.
BB then says:
Additionally, it turns out that The Folk Probably do Think What you Think They Think.
BB seems to think this article justifies his claim that moral realism is an intuitive, commonsense view. Yet this seems to be based almost entirely on taking the title of the paper literally. Titles of papers published in journals are often intentionally cute or provocative. Taking this one at face value is bizarre and a little embarrassing. This paper does not show that the folk probably think what you think they think. How could it? What the folk “probably think” would then depend on who “you” are and what you happen to think they think. I think most people are not realists. Does that mean they probably aren’t realists?
BB could say that the claim is statistical: if most philosophers think most nonphilosophers are realists, then that’s probably what nonphilosophers think, i.e., most nonphilosophers probably think what most philosophers think they think. And perhaps most philosophers do think most people are moral realists. That seems plausible enough. So, does the paper establish this? That most nonphilosophers probably think what most philosophers think they think?
No, not really. Before considering this, note a few things:
First, this is only a single study. As Scott Alexander warns, one should beware the man of one study. It’s more than a little questionable for BB to trot out one study that purportedly supports his claims. I could provide at least a dozen that support my contentions, probably more than that, in large part because I myself conducted many of these studies.
Second, it at best provides extremely indirect evidence that most nonphilosophers are moral realists. Note that, in contrast, the data I’d appeal to directly addresses the question of whether the folk are moral realists (and suggests, I contend, that most of them aren’t).
Finally, there is the study itself. Does the study provide a good justification for thinking most people are moral realists, or that realism is intuitive or a commonsense view among most nonphilosophers? Not even close. Unfortunately, BB persisted in making this claim some time later, and was much more explicit about it. This occurred in a Facebook exchange on Joe Schmid’s wall, where BB stated that:
But philosophers are usually write [sic] about what the folk think.
This claim is beyond wrong. Right about what they think, in what context? Without context? Are philosophers usually right to any arbitrary level of specificity? With respect to any claim about what the folk think? This remark displays BB’s abject ignorance when it comes to making clear, precise, and appropriately well-specified claims. Suppose you gave doctors a detailed case report, told them that the patient had one of two diagnoses, (1) or (2), and found that they chose the correct diagnosis at rates significantly higher than chance. Would you conclude that “doctors are usually correct in their diagnoses of illnesses”?
Absolutely not, because most doctors aren’t diagnosing people under such highly narrow, specific, tailored conditions. If you wanted to show that doctors generally make accurate diagnoses, you’d need enough data to generalize from the results of your studies to the actual decisions doctors make in the scenarios you’re talking about. Getting a result in some lab study that is consistent with a hypothesis is not enough to definitively establish a claim that generalizes from those results to whatever you assert outside of that study. It is extremely difficult to establish the external validity of a particular study’s results; you often need a ton of data, or to triangulate on such a conclusion by appealing to a broad body of mutually corroborating literature, or to ground your interpretation of your data in a solid and well-supported theory, or to provide independent evidence that you’ve validated your measures…or, ideally, all of the above. Citing a single study this far removed from the conclusion BB wants to reach doesn't even come close to making a strong case for BB’s claims.
The only thing the study BB cites achieves is showing that if you give a handful of philosophers a description of a handful of unrepresentative study designs, with the response options and conditions available to them, they can predict whether there’d be a significant difference and what direction that difference is in. This is not very robust information. It doesn’t tell you much about the degree of the difference and, more importantly, as I address below, it doesn’t tell you why there is such a difference. This is not a good way to determine whether philosophers know what the folk think, and it especially doesn’t justify generalizing from the cases featured in these studies to claims made outside the context of these studies. I reinvent the wheel a lot, but even I have my limits. I addressed these points in the Facebook exchange with BB mentioned above, so I’ll simply reproduce that comment here, in its entirety:
Matthew Adelstein: Here’s one issue.
The authors canvass four studies. These studies are:
Knobe, J., & Fraser, B. (2008). Causal judgment and moral judgment: Two experiments. Moral psychology, 2, 441-447.
Knobe, J. (2003). Intentional action and side effects in ordinary language. Analysis, 63(3), 190-194.
Livengood, J., & Machery, E. (2007). The folk probably don't think what you think they think: Experiments on causation by absence. Midwest Studies in Philosophy, 31(1), 107-127.
Nichols, S., & Knobe, J. (2007). Moral responsibility and determinism: The cognitive science of folk intuitions. Nous, 41(4), 663-685.
The authors show that when you present the stimuli for these studies, the surveyed philosophers were able to accurately predict the outcomes of the studies most of the time. There are already a number of concerns with this framing, but I’ll set those aside for now to focus on some other issues.
(1) First, let’s look at who the participants in these studies were.
(i) Knobe & Fraser (2008): Two studies. The first was n=18 intro to philosophy students at UNC. I didn’t see a sample size for the second study, nor any other demographic info. It’s plausible they were all UNC students, but I’m trying to be quick here and didn’t look to see if this info is anywhere.
(ii) Knobe (2003): Study 1 and 2 consisted of 78 and 42 people in a Manhattan public park, respectively.
(iii) Livengood & Machery (2007): 95 students at the University of Pittsburgh
(iv) Nichols & Knobe (2007). All studies conducted with undergraduates at the University of Utah.
Taken together, these studies all reflect the attitudes and judgments of people responding to surveys in English in the United States. Most of the participants were college students. These studies were conducted in a particular cultural context: WEIRD societies. They were conducted on what were likely mostly WEIRD populations (though all of the studies did a bad job of providing significant demographic data), all of the studies had small samples, and most of the studies (3 of 4) were conducted on college students in particular.
WEIRD is an acronym that stands for “Western, Educated, Industrialized, Rich, and Democratic.” It was a term proposed to describe a cluster of demographic traits characteristic of the populations that comprise the vast majority of research participants in psychology, and the vast majority of those conducting this research.
When it comes to making generalizations about how “the folk,” or people in general, think, it is important to gather representative data. That is, you should sample from populations that are sufficiently representative of the population about which you wish to generalize that inferential statistics permits one to make judgments about that population based on the participants in one’s sample. If, for instance, you wanted to know whether most people in the United States were Taylor Swift fans, it would make no sense to survey attendees at a Taylor Swift concert, for the obvious reason that people attending the concert would be more likely to like Taylor Swift.
Why is this a problem for the four studies that figure in Dunaway et al.’s study? The problem is that all four studies were conducted in WEIRD populations. And WEIRD populations are psychological outliers. Along numerous measurable dimensions of human psychology, people from WEIRD populations tend to anchor one extreme or the other of these distributions. Thus, not only are people from WEIRD populations often unrepresentative of how people in general think, they are often the *least* representative population available. They are, at a population level, psychological outliers with respect to most of the world’s population. The evidence for this is strong, and only continues to grow with time. And ALL FOUR of the studies reported here were conducted in WEIRD populations (for what it’s worth, they were probably also conducted primarily by people from WEIRD populations, which could influence the way questions were framed, how results were interpreted, and so on, introducing a whole slew of additional biases I’m not even addressing directly). As such, the original studies have such low generalizability that they, themselves, don’t tell us about how “the folk” think. At best, they might tell us about how college students or people in public parks in Manhattan think, but it’s not at all clear that how people in these places think reflects how people everywhere think.
And if the original studies don’t even come close to telling us what “the folk” think, how on earth is the ability of philosophers to accurately predict the results of these studies supposed to indicate that philosophers know what “the folk” think? The answer is very simple: it doesn’t. Even if we ignored every other methodological problem with these studies, the bottom line is that, even under ideal conditions, the findings reported in this paper wouldn’t come close to providing robust evidence of how nonphilosophers think about the issues in question. And there are many other methodological problems with these studies.
There are even bigger problems with generalizability when one focuses on the judgments of college students in particular. Indeed, in some cases, we have empirical evidence that people around the ages of those most likely to be undergraduates are disproportionately likely to be *unrepresentative* of people of other age groups. See, for instance, Beebe and Sackris’s (2016) data on this with respect to metaethical views, which shows that people around college age are less likely to give responses interpreted by researchers as "realist" responses and more likely to give "antirealist" responses.
In short, the studies themselves have such low generalizability that they don’t tell us what “the folk” think. At best, they might tell us what college students in the US or people in Manhattan parks think. And I do mean “at best”: I doubt they are even successful at this modest goal. Yet what college students in the US or people in Manhattan parks think is unlikely to be representative of what most of the rest of the world thinks. As a result, the studies are not a good proxy for what “the folk” think.
Given this, even if philosophers could predict the outcomes of these studies, and even if those studies had valid measures, were correctly interpreted, and so on (all highly contestable claims in their own right), the findings don’t tell us what “the folk” think for one simple reason: the original studies themselves don’t tell us what the folk think.
Note that this alone is probably sufficient to undermine any strong claims about these findings. And yet there are still more problems with these studies. If anything, the conclusion that the original study isn’t a good indication of what Matthew seems to think it indicates is overdetermined by a variety of additional considerations, subsets of which would likely be independently sufficient to severely limit what the study tells us.
References
Beebe, J. R., & Sackris, D. (2016). Moral objectivism across the lifespan. Philosophical Psychology, 29(6), 912-929.
Henrich, J., Heine, S. J., & Norenzayan, A. (2010). The weirdest people in the world? Behavioral and Brain Sciences, 33(2-3), 61-83.
In short, there is little reason to believe the studies themselves were representative of how “people” think, so even if philosophers could accurately predict the results of these studies, those studies lack the generalizability to tell us about how people in general think. Compound this with the fact that the four studies the authors examined are not representative of research on how nonphilosophers think, and you basically have a lack of generalizability squared. That is, you have an unrepresentative sampling of studies that are themselves unrepresentative. This alone is sufficient to cast serious doubt on BB’s claim that philosophers generally know how nonphilosophers think.
There are yet more problems, however. In follow-up comments, I raised the following concerns:
Joe Schmid & Matthew Adelstein See, I told you this would take a bit of work! Unfortunately, raising objections to a claim often requires more work and words than making the claim in the first place, whether that claim ultimately turns out to be correct or not. Note that even these considerations are a truncated treatment of the problems with conducting studies on student and WEIRD populations.
And I didn't even get into several further problems: researcher bias (which empirical studies suggest does influence Xphi studies); low statistical power; problems of interpreting the results of the studies; the severe problem of stimulus sampling with these studies (see 1 below); the fact that 3 of the 4 studies include Knobe on the research project, which even further limits the representativeness of the studies; and the fact that most of the studies are about, or are adjacent to, moral/normative considerations, which makes them unrepresentative of Xphi in general. There is also the fact that the researchers selected studies based on whether the authors of those studies claimed the findings were “surprising.” This is a strange standard to choose, since the most important factor in having a paper accepted for publication in psychology is the novelty of the findings, and there are very strong norms in place for people to report that their findings are “surprising,” or to use terms indicating that one’s findings are novel, interesting, and ultimately worth publishing.
That is, we have good reasons to think people would call their findings “surprising” regardless of how surprising they were, because there are massive incentives in place to do so. And, at any rate, perhaps we should infer that Knobe is not a good judge of which findings are surprising before we leap to the conclusion that philosophers are really good at inferring how nonphilosophers think. That seems like a far more parsimonious account of this particular set of four studies, given that Knobe was an author on three of them.
(1) Stimulus sampling: Here’s a general problem with a lot of research. Researchers will use a particular set of stimuli, such as a set of four questions, or four examples of some putative domain, and then generalize from how participants respond to those stimuli to how people think about the domain as a whole.
Suppose, for instance, I wanted to evaluate whether people “like fruit,” and I need to choose four fruits to ask them about. You can imagine conducting two different studies:
(a) Ask about apples, bananas, oranges, and grapes
(b) Ask about durian, papaya, figs, and cranberries
We ask people to rate how much they like each fruit on a scale (1 = hate it, 5 = love it), then average across the four fruits to get a mean fruit preference score. Would you expect the same results if we ran these two studies? I wouldn’t. And would you expect either study to tell us what people think about fruit in general? Again, probably not. Why? Because there’s no good reason to think set (a) or (b) is representative of “fruit” as a domain. Of course, populations will vary in whether they prefer the fruits in (a) or (b) more, but setting that aside, suppose we wanted to know just about fruit preferences in the United States. If so, (a) is going to win by a landslide. Yet neither (a) nor (b) would tell us about “fruit” as a domain. This is because (a) and (b) are unsystematically and nonrandomly selected: they don’t *represent* fruit as a domain, but instead reflect very popular and much less popular fruits (in the US), respectively.
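To make the arithmetic of this toy example concrete, here is a small, purely illustrative Python simulation. The fruit names come from the lists above, but every liking score, sample size, and standard deviation is invented; nothing here comes from any real study. It simply shows that averaging ratings over stimulus set (a) versus stimulus set (b) yields very different “mean fruit preference” scores even with identical participants:

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical average 1-5 liking scores in a US sample for each fruit.
study_a = {"apple": 4.3, "banana": 4.1, "orange": 4.2, "grape": 4.0}
study_b = {"durian": 2.1, "papaya": 2.9, "fig": 2.8, "cranberry": 2.6}

def run_study(true_means, n=100, sd=1.0):
    # Simulate n participants rating each fruit, then average across fruits.
    ratings = np.array([
        np.clip(rng.normal(m, sd, n), 1, 5) for m in true_means.values()
    ])
    return ratings.mean()

print("Study (a) mean fruit preference:", round(run_study(study_a), 2))  # roughly 4.1
print("Study (b) mean fruit preference:", round(run_study(study_b), 2))  # roughly 2.6

Neither number is “how much people like fruit”; each is an artifact of which four fruits were picked.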
When researchers run studies, if they want those studies to generalize to all people, they already face the steep challenge that their *participants* are typically not representative of people in general. Yet another huge problem, one that is almost totally ignored, turns on considerations like those outlined above regarding fruit: researchers often want to generalize from their *stimuli* to some broader category of phenomena. Yet, whereas they recognize and model the participants in their studies as a random factor, they almost never bother to model their stimuli as a random factor. In effect, they treat their stimuli as though they were perfectly representative of the domain they are meant to stand in for, even though (a) this is almost certainly not true and (b) there are statistical methods available for avoiding this presumption. Of course, the problem with (b) is that it’s harder to do, and it will often make your findings look far less impressive. Who is going to put in the work to produce less impressive results? Not anyone who wants to win the competition for more publications.
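For readers curious what “modeling stimuli as a random factor” looks like in practice, here is a minimal, hypothetical sketch in Python using statsmodels. The data are simulated, the column names and effect sizes are invented, and this is not anyone’s actual analysis; it only illustrates the modeling choice that the Judd, Westfall, and Kenny article cited below develops properly (crossed random effects, which statsmodels supports via its single-group variance-components workaround):

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_participants, n_stimuli = 60, 4

# Simulate ratings that vary across BOTH participants and stimuli.
participant_effects = rng.normal(0, 0.5, n_participants)
stimulus_effects = rng.normal(0, 0.8, n_stimuli)  # stimuli differ substantially

rows = []
for p in range(n_participants):
    for s in range(n_stimuli):
        rows.append({
            "participant": p,
            "stimulus": s,
            "rating": 3.0 + participant_effects[p] + stimulus_effects[s]
                      + rng.normal(0, 0.7),
        })
df = pd.DataFrame(rows)

# Common practice: only participants are treated as a random factor, so the
# analysis implicitly acts as if the four stimuli exhausted the whole domain.
participants_only = smf.mixedlm("rating ~ 1", df, groups="participant").fit()

# Treating stimuli as a random factor too (crossed random effects): the
# uncertainty around the estimated domain mean becomes larger, reflecting
# that only four stimuli were sampled from the domain.
df["all_data"] = 1
crossed = smf.mixedlm(
    "rating ~ 1",
    df,
    groups="all_data",
    re_formula="0",
    vc_formula={
        "participant": "0 + C(participant)",
        "stimulus": "0 + C(stimulus)",
    },
).fit()

print(participants_only.summary())
print(crossed.summary())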
It would be absurd for me to go into much more detail than that here, so I’ll direct you to a blog post and an article that develop this problem at greater length.
https://www.r-bloggers.com/.../the-stimuli-as-a-fixed.../
https://psycnet.apa.org/doiLanding?doi=10.1037%2Fxge0000014
Judd, C. M., Westfall, J., & Kenny, D. A. (2012). Treating stimuli as a random factor in social psychology: A new and comprehensive solution to a pervasive but largely ignored problem. Journal of Personality and Social Psychology, 103(1), 54.
Why does all of this matter? For a very simple reason: the Dunaway et al. (2013) paper involves an analysis of four particular Xphi studies, yet there is no good evidence, nor any good reason, to think that these studies are a representative sampling of Xphi studies in general. As such, not only does the analysis suffer from ridiculously low generalizability with respect to the participants in the original studies, there’s also no good reason to think the studies themselves are representative of Xphi studies.
This results in a double-whammy of super low generalizability. Note, too, that (a) the criterion for study selection was explicitly nonrandom; (b) no principled methods were employed for randomly selecting representative studies; (c) Knobe is, respectively, sole, first, and second author on three of the studies, suggesting that we’re not dealing with a representative sample of Xphi studies so much as, at best, a representative sample of studies conducted by Josh Knobe (and even that’s questionable), which further limits generalizability; and (d) the studies are all on very closely related subjects: three are explicitly about causal judgments (and the other is about attributions of intentional action), and three are about moral/normative evaluation (including the one not in the first category). This means that the studies cover an extremely narrow range of the subject matter of folk philosophical thought and of Xphi research in general. We’re dealing with an extremely narrow slice of research: mostly early Xphi work on causality and moral judgment conducted by Knobe and colleagues. And we’re supposed to conclude that the ability to predict the outcomes of *those* studies, years after they were published and percolated through the academy, is a good indication that philosophers know how nonphilosophers think?
It’s incredibly difficult for psychologists to figure out how people think even with high-powered, representative, carefully-designed studies conducted by experts with years of experience. Researchers face methodological problems piled on top of one another. See here for example:
Yarkoni, T. (2022). The generalizability crisis. Behavioral and Brain Sciences, 45, e1.
…Yet we’re supposed to believe that philosophers can infer how nonphilosophers think based on their ability to predict the outcome of a tiny, unrepresentative handful of small studies?
Followed by this comment:
Joe Schmid & Matthew Adelstein Oh, and I *also* didn't add that there's a paper criticizing the results of the Dunaway et al. (2013) paper that takes a different angle than mine, further piling on the problems with this study:
https://www.tandfonline.com/.../10.../09515089.2016.1194971
Abstract: "Some philosophers have criticized experimental philosophy for being superfluous. Jackson (1998) implies that experimental philosophy studies are unnecessary. More recently, Dunaway, Edmunds, and Manley (2013) empirically demonstrate that experimental studies do not deliver surprising results, which is a pro tanto reason for foregoing conducting such studies. This paper gives theoretical and empirical considerations against the superfluity criticism. The questions concerning the surprisingness of experimental philosophy studies have not been properly disambiguated, and their metaphilosophical significance have not been properly assessed. Once the most relevant question is identified, a re-analysis of Dunaway and colleagues’ data actually undermines the superfluity criticism."
Liao, S. Y. (2016). Are philosophers good intuition predictors? Philosophical Psychology, 29(7), 1004-1014.
One of the most important points to stress is that, even if you can predict the results of a study, that does not necessarily tell you how the respondents to that study think. Such a claim relies on the presumption that the study used valid measures and that the observed response patterns have been interpreted correctly. Correctly predicting, for instance, that most people would choose a “realist response” over an “antirealist response” does not entail that most of the respondents are realists, since this would only follow if choosing what was operationalized as a realist response actually indicated that the respondent is a realist. This was a point stressed by Joe Schmid in his response to BB:
Matthew, Lance already hit most of the nails on the head. My three main problems, which largely reiterate Lance’s, are:
(1) We can only conclude that the philosophers surveyed are good at predicting some folk responses to certain survey questions; this doesn’t mean they’re good at predicting folk views or intuitions. Survey questions are very faint indicators of folk views and intuitions — they’re hugely liable to unintended interpretations, ambiguities, spontaneous theorizing, and other confounds.
(2) The results are not justifiably generalized to philosophers’ abilities to predict folk views more generally. From the fact that philosophers can predict some folk’s responses to some survey questions from 4 papers [with relatively small sample sizes], it is incredibly hasty to conclude that they’re good at predicting folk survey responses more generally on the dozens or hundreds of surveys that have and might be conducted — let alone folk views on the thousands of philosophical questions more generally (as opposed to responses to survey questions), and let alone folk intuitions.
(3) The surveys are done on WEIRD populations; even if those survey responses were good indicators of the views and intuitions of the survey respondents, and even if philosophers are generally good at predicting these responses, that doesn’t mean philosophers are generally good at predicting the views and intuitions of ‘the folk’; we need some reason to think the WEIRD survey populations are representative of ‘the folk’ more generally, including non-WEIRD populations.
Point (1) is critical: predicting the results of studies does not entail that you can predict what people think.
Finally, we now have a decade of research showing that seemingly reasonable attempts to address exactly this question have historically failed because participants do not interpret questions about metaethics as researchers intend. I cannot stress enough that I am not basing this on a superficial evaluation of the available data: I have personally spent the last decade specializing in this exact question. I quite literally specialize in the methods used to determine whether nonphilosophers are moral realists, and it is ludicrously difficult to devise valid measures. Even if the studies used in Dunaway et al. relied on valid measures, and even if predicting the results of those studies entailed predicting what people think, this wouldn’t necessarily generalize to the specific case of metaethics.
It is one thing to say that a given body of data has poor generalizability: that, in the absence of data to the contrary, you aren’t justified in drawing inferences from it about some larger population. But this is a case where we actually have quite a lot of evidence to the contrary. More importantly, those studies which have done the most to distance themselves from the methodological shortcomings of earlier studies tend to find fairly high rates of antirealist responses from participants. I talk about this at length on this blog, on my channel, and in my research. Simply put, there is no good empirical case that most people are moral realists, that most people find realism intuitive, or that moral realism is a “commonsense” view widely held by nonphilosophers. Given the current state of available evidence, BB just isn’t justified in suggesting otherwise. I don’t want to rehash my case against widespread moral realism here, so I’ll direct you to some of my other posts on the matter:
J. P. Andrew insists people are moral realists, but empirical data does not support this claim
J. P. Andrew: Let's schedule a debate on whether most people are moral realists
Watkins and the persistent (and probably false) claim that most people are moral realists
Unfortunately, despite specifically stating that he’d be interested in hearing what my objections were, BB never responded to Joe’s and my responses. I like to flatter myself that this is because we made a case too strong for BB to rebut, but threads always stop somewhere and I sometimes forget to reply or lose track of people’s responses.
In short: the Dunaway paper doesn’t license BB to claim that realism is a commonsense view.
9.0 A response to “Classifying anti-realists”
In the next section, BB says:
Given that, as previously discussed, moral realism is the view that there are true moral statements, that are true independently of people’s beliefs about them, there are three ways to deny it.
BB then lists noncognitivism, error theory, and subjectivism as the three ways to deny realism.
BB is, once again, incorrect. These are not the only ways to deny moral realism. I’m a moral antirealist, and I don’t endorse any of these positions. BB is echoing a claim made by Huemer in Ethical Intuitionism that there are only three possible antirealist positions. Huemer, like BB, is simply incorrect.
All three of these positions rely on a semantic thesis:
Error theory: Moral claims express propositions that purport to describe stance-independent moral facts
Noncognitivism: Moral claims do not express propositions and therefore cannot be true or false
Subjectivism: Moral claims are true or false relative to some standard
That is, all three positions turn on claims about the meaning of moral claims.
They all appear to presume that there is a category, “moral claim,” and that all members of this category share the same semantic content, in that they, as a category, refer to stance-independent moral facts, express nonpropositional content only, or express claims that are true or false relative to some standard.
Here’s the problem: I don’t think there is any category like this. I don’t think there is a category of claims, “moral claims,” whose members share some specific semantic content. As an aside, I don’t even think there is a moral domain at all, so I’d have considerable objections to there even being a well-defined category of “morality” in the first place. But setting that aside, the conception of language and meaning these positions share is one I don’t accept. I endorse a language-as-use view, according to which words, phrases, and claims themselves don’t mean anything: it’s the people using language that mean things. This is captured by my slogan that “words don’t mean things; people mean things.” I cannot emphasize enough that I mean this literally: I do not think words mean anything. I think the people using words mean things, and the words are their means of conveying what they mean.
When we talk about the “meaning of moral statements,” I take this as an awkward and non-literal characterization of what people mean when they make moral statements, where “moral statements,” would be operationalized as some rough attempt at capturing a subset of ordinary discourse. I do think that when people make moral claims, they mean things, but:
I take facts about what people mean to be empirical questions
I endorse folk metaethical indeterminacy, i.e., I think that with respect to metaethical considerations, ordinary people rarely mean to express claims that determinately fit a realist or antirealist analysis at all
Mainstream analytic philosophers addressing questions about the meaning of moral claims tend either (1) not to treat such questions as empirical at all, and instead to see such claims as a priori, or (2) to nominally regard such questions as empirical while believing armchair philosophy is sufficient to settle them reasonably well. I reject (1), and I disagree with (2) as well. As such, I either fundamentally reject the philosophical presuppositions behind the approach most philosophers take to addressing metaethical questions, or, at best, believe they’re using embarrassingly bad methods for addressing those questions: I am adamantly against generalizing from the armchair. If you want to know what nonphilosophers mean when they make moral claims, you need to engage in empirical research.
…And that’s what I did. I came away with the impression that, whatever they mean, they are not making claims that determinately fit error theory, noncognitivism, or subjectivism. These are not the only logical possibilities, pace Huemer; Huemer is simply mistaken about this. Alternative possibilities are not novel, nor did I invent them. Loeb describes one possibility, moral incoherentism: people may hold conflicting presuppositions at the same time, and, as a result, their moral claims could be incoherent. This is not, strictly speaking, error theory, subjectivism, or noncognitivism, and therefore reflects a genuinely distinct type of antirealism. Michael Gill proposes folk metaethical variability and indeterminacy: it could be that there is considerable variation in the metaethical presuppositions implicit in ordinary moral discourse, in which case no single antirealist view would be correct; one would need to be a pluralist: insofar as people speak like realists, one might be an error theorist, and insofar as they use moral language to express nonpropositional attitudes, one might be a noncognitivist. Arguably, this would simply yield a hybrid view: a combination of the three possibilities BB lists. However, Gill’s other proposal, indeterminacy, reflects a genuine departure from the three: in those instances in which a nonphilosopher’s moral claims don’t determinately comport with either a realist or an antirealist analysis, there may be no fact of the matter about whether the person’s claim fits the error theorist, noncognitivist, or subjectivist analysis.
Even when this occurs, you can still believe there are no stance-independent moral facts. Denying that there are such facts does not require a commitment to a determinate semantic analysis of the meaning of moral claims. Antirealists are not required to share BB or Huemer’s assumptions about language and meaning. Once again, this is a point I’ve already addressed on my blog, so if you want to see me develop this point further, I do so here.
In short: these three views are not the only antirealist views that one can endorse. Antirealism only requires you to reject moral realism. It does not require you to make a claim about the meaning of moral claims. You could be agnostic about the meaning of moral claims, or endorse indeterminacy, variability, or incoherentism. Fundamentally, the problem with the claim that these are the only three options is that all three positions rely on substantive assumptions about language and meaning that are not an inherent feature of moral antirealism as a position, and one is therefore not obliged to endorse one of these positions to qualify as an antirealist.
One of the strategies you can employ if you insist there are only three antirealist positions is an argument by process of elimination: if you can demonstrate some deficiency in each of these forms of antirealism, you can declare victory for realism by default. I think it’s dialectically important, as a result, for BB, Huemer, and others to sideline positions like mine and show they’re not legitimate possibilities. This allows the process-of-elimination arguments to go through. I have yet to see any good argument for why these are the only three options, though. This appears to be, once again, a matter of assertion.
Although I don’t endorse any of these antirealist views, realists’ critiques of all three are usually illuminating, in that they’re not very good: even when moral realists get to tee up the target position just to knock it down, they still fail. So let’s go through BB’s critiques of each of these positions.
10.0 The three traditional antirealist positions
10.1 Noncognitivism
Noncognitivism is the view that moral claims do not express propositions (statements that can be true or false) and that therefore moral claims are neither true nor false. Compare to utterances like “Eww!” (an emotive expression) or “Shut the door!” (an imperative). These utterances cannot be true or false. Noncognitivists hold that, appearances notwithstanding, moral claims likewise express only nonpropositional content like this. BB quotes a previous article that explains why BB thinks noncognitivism is implausible. The argument is a reiteration of the Frege-Geach problem.
The problem is supposed to be that, if moral claims didn’t express propositions, then we shouldn’t be able to place moral claims into syllogisms, arguments, lines of reasoning, and other chains of inference that only make sense if the phrases they contain express propositions. BB gives an example:
“It’s wrong to torture infants for fun, most of the time,”
is neither true nor false
The statement
“If it’s wrong to torture infants, then I shouldn’t torture infants
It’s wrong to torture infants
Therefore, I shouldn’t torture infants”
Is incoherent. It’s like saying if shut the door then open the window, shut the door, therefore, open the window.
I’ve always found this to be an atrocious and silly objection to noncognitivism. Perhaps a very crude and flat-footed form of noncognitivism is vulnerable to this criticism. According to such a view, the only appropriate use of “X is wrong” amounts to something like “X? Boo!” or “Don’t X.” This would make utterances like the ones BB provides appear incoherent, and the objection probably does succeed against such a crude view. Only, is a noncognitivist obliged to think that ordinary moral language is as rigid and inflexible as this?
No. People may use moral claims primarily to express nonpropositional content even if they can shift into using them in a propositional way in various contexts. How reflective of ordinary thought and language is a syllogism like the one BB offers, anyway?
Not very. It is the artificial construction of the philosopher. Most people don’t speak in modus ponens most of the time. And if terms and phrases are flexible and change based on context, it could still turn out that in most instances, the primary use of a moral claim is to express nonpropositional content, even if there are conceivable contexts where there is nothing especially strange about treating them as propositional. In other words, the phrase “X is wrong,” in ordinary language need not always and in every case mean exactly the same thing, for it to still be the case that moral claims primarily serve to express emotions or issue commands. There may be little reason to prioritize the assertoric functions ordinary moral claims can play if these are secondary to and parasitic on their primary uses. At best, Frege-Geach only reveals a problem for rigid semanticists, but similar problems emerge for rigidity in every direction, in that one can always find apparent pockets of English moral discourse that are hard to reconcile with an opposing view, provided one wishes to adhere to the uniformity and determinacy assumptions, as outlined by Gill. The real problem here is a problem for all rigid semantic accounts: there was never any good reason to think all moral claims share the same uniform and determinate set of semantic characteristics in the first place.
It’s also worth noting that there are a variety of hybrid accounts and various approaches to language that circumvent the rigidity and crudeness of classical noncognitivist views. The objection BB offers is, at best, only serviceable as a critique of crude, flat-footed, early forms of noncognitivism from decades ago.
BB goes on to cite Huemer, who provides other reasons to regard moral claims as propositions:
(a) Evaluative statements take the form of declarative sentences, rather than, say, imperatives, questions, or interjections. 'Pleasure is good' has the same grammatical form as 'Weasels are mammals'. Sentences of this form are normally used to make factual assertions. In contrast, the paradigms of non-cognitive utterances, such as 'Hurray for x' and 'Pursue x', are not declarative sentences.
Everyone recognizes this, and it was never in dispute. The mere fact that something is expressed as a declarative sentence doesn’t mean that it functions to express a proposition. People use declarative sentences to express nonpropositional content in English all the time: e.g., “You will not go to that party,” “This cake is delicious!” I don’t think that what a person means is determined by grammar; I think it’s determined by what they’re trying to do with their words.
Note, too, that these remarks only concern English. Such claims may or may not extend to the 7000+ other languages in the world.
(b) Moral predicates can be transformed into abstract nouns, suggesting that they are intended to refer to properties; we talk about 'goodness', 'rightness', and so on, as in 'I am not questioning the act's prudence, but its rightness'.
People reify and speak in metaphorical terms all the time. This is a common feature of English and probably most other languages (if not all of them). It demonstrates very little.
(c) We ascribe to evaluations the same sort of properties as other propositions. You can say, 'It is true that I have done some wrong things in the past', 'It is false that contraception is murder', and 'It is possible that abortion is wrong'. 'True', 'false', and 'possible' are predicates that we apply only to propositions. No one would say, 'It is true that ouch', 'It is false that shut the door', or 'It is possible that hurray'.
This recapitulates the same crude criticism of noncognitivism outlined above. If a noncognitivist has a rigid and inflexible conception of language, this criticism may land. But they don’t have to think language is that rigid.
I also question how common or representative such remarks are of ordinary moral language. These sound more like the sorts of things philosophers say. Note, again, that such considerations concern only English; they may or may not reflect how speakers of other languages tend to use moral language (if they use moral language at all).
Finally:
(d) All the propositional attitude verbs can be prefixed to evaluative statements. We can say, 'Jon believes that the war was just', 'I hope I did the right thing', 'I wish we had a better President', and 'I wonder whether I did the right thing'. In contrast, no one would say, 'Jon believes that ouch', 'I hope that hurray for the Broncos', 'I wish that shut the door', or 'I wonder whether please pass the salt'. The obvious explanation is that such mental states as believing, hoping, wishing, and wondering are by their nature propositional: To hope is to hope that something is the case, to wonder is to wonder whether something is the case, and so on. That is why one cannot hope that one did the right thing unless there is a proposition-something that might be the case-corresponding to the expression 'one did the right thing'.
There are a few more examples, but they’re more or less of the same flavor. This kind of reasoning tells you how people *can* use moral claims. That’s important. It at least means that there are instances in which people saying things like “murder is wrong” aren’t merely expressing an emotion or issuing a command. And insofar as noncognitivism supposes that that’s all and only what people do with moral language, well, it’s probably not a very sensible position. Note, however, that such “tests” are purely armchair tests: by thinking about how it seems reasonable for us to use certain terms and phrases, it appears that some of those terms and phrases can comfortably play propositional roles in at least some contexts.
What does this tell us, though? If we suppose that questions about what people mean when they make moral claims are empirical questions, then Huemer, BB, and others are not employing the best methods for determining how people use moral language. As the famous phrase goes, no plan survives contact with the enemy. Just so, no simple armchair hypothesis survives contact with people. People are complicated and messy, and so is language. All of the armchair considerations in the world could crumble the moment we conduct actual empirical research on what people are trying to do with moral language. And that’s just it: why suppose they’re only trying to do one thing, either assert or not assert propositions? While my own twist on noncognitivism may not be orthodox, we could discover as a matter of empirical fact that in most ordinary contexts in which people make moral claims, the primary function of those claims is to express nonpropositional content. And suppose that the use of declarative sentences reflected rhetorical and persuasive aims, not a belief that there really are “moral facts” (stance-independent or otherwise). If so, then the standard grammatical rules of our language, and many of the ways in which people talk, could both give the impression that moral language “is propositional” in some contexts, even if in most actual instances of use moral claims don’t serve to express propositions.
What then? Do moral claims express propositions or not? The answer would, in these cases, turn out to be, “it depends.” Even if people use moral language in a propositional way some of the time, we may still ask why people use language in that way: for what purpose? To what end? Such questions are not easy to answer from the armchair. These are, ex hypothesi, empirical questions, and are best addressed using empirical data. Why, then, do BB and Huemer rely on armchair considerations? This strikes me as a strange way to address empirical questions, if they are, in fact, empirical questions.
Suppose instead that questions about what people mean aren’t empirical questions. What are they questions about, then, if not what people mean? Are these supposed to be a priori questions? If so, about what? English sentences? There’d be something profoundly strange about the question of the meaning of moral claims not being an empirical question, but philosophers may buy into such a notion. Even if they do, we still face what I will call the empiricist’s dilemma: either
(a) these questions are empirical, in which case empirical methods, not the armchair theorizing BB and Huemer engage in, are the best means of addressing them, or
(b) these questions are not empirical, in which case they face a somewhat different challenge.
With respect to (b), suppose we declare that the cognitivism/noncognitivism question is not an empirical question. Now suppose we discover that, in 100% of actual, real-world instances in which people say things like “murder is wrong,” and “honesty is morally good,” they themselves intend only to express emotions or issue commands. That is, their communicative goal is to convey nonpropositional content alone. What are we to conclude?
One possibility is to suppose that these are not actual moral claims. Since these people are only expressing nonpropositional content, and moral claims are, a priori, propositions, then it would appear that nobody ever actually makes moral claims in everyday situations. This strikes me as a reductio ad absurdum: if you’re so committed to “moral claims” being propositional that this would lead you to conclude that nobody ever actually makes moral claims in ordinary contexts, there’s something wrong with your theory.
If, instead, we suppose that these are moral claims, then what? We might say that meaning isn’t determined by the goals of speakers, and that these people’s moral claims actually are propositions. This, too, strikes me as very strange: it would mean that while in practice, in everyday contexts, nobody ever used moral claims to express propositions, they nevertheless do, in fact, express propositions. This seems to me to rely on an insane view of language, where the language itself means things, rather than the people using it. If a philosopher wants to defend such a view, they’re welcome to it, and if noncognitivists and cognitivists alike want to go for something like this, I’d object to both. I don’t believe words and phrases have any sort of meaning independent of their use in actual contexts. Even so, views like this may be popular among philosophers, so they may opt for something like this. And that’s just it: what if I don’t accept this view of language? This brings us back to the claim that only the three views outlined by BB are available to the antirealist. If it turns out that these views rely on specific commitments to particular views in philosophy of language, and you don’t endorse those views, then what? Do you just not get to have a metaethical view? That’d be ridiculous. You certainly can have a metaethical position without endorsing a specific view about language and meaning.
None of this should be taken as some sort of defense of noncognitivism. I don’t endorse noncognitivism. However, I think criticisms of noncognitivism stem from and appear plausible only insofar as one endorses views of language that I don’t share: namely, the presumption that there is a category, “moral claim,” that shares a uniform semantics, such that all moral claims are propositional or nonpropositional. If we examine the actual patterns of use of moral claims in ordinary discourse, where “moral claim” is operationalized to refer to the sorts of things people actually say, it’s an open empirical question what people mean when they make those claims. Critics of noncognitivism rarely, if ever, engage with empirical literature on how people use language. If they think that the meaning of moral claims doesn’t turn on empirical considerations, critics like me will reject those presumptions. If they think these are empirical questions, then why aren’t they engaging in or consulting empirical research? Perhaps armchair considerations are sufficient to establish that ordinary English does appear to allow one, without it seeming strange or improper, to speak of moral claims in ways that treat them as propositions. Fair enough: perhaps so. This does not, by itself, show that moral claims are propositions. It only shows that they can be. And perhaps in other contexts they’re not. If that’s enough to refute noncognitivism, perhaps it’s refuted. But it doesn’t establish cognitivism, either, if cognitivism is the view that all moral claims are propositions. Perhaps the cognitivism/noncognitivism dispute is a false dichotomy based on incorrect conceptions about the rigidity of language.
In sum: insofar as noncognitivists buy into the same conception of language as realists do, they may very well be vulnerable to these charges. But there are ways of thinking about moral language that don’t require such views, and that still allow for a view of ordinary moral thought and discourse that is at least similar to noncognitivism. I suspect, then, that insofar as Huemer’s critiques of noncognitivism “work,” they work largely on the basis of sharing in the same mistaken presumptions about language and meaning as the target noncognitivist.
10.2 Error theory
BB begins with this description of error theory:
Error theory says that all positive moral statements are false.
This is a bad description of error theory, though BB later clarifies to some extent what the error theorist thinks. Error theorists specifically hold that all first-order moral claims involve an implicit commitment to one or more false presuppositions and are therefore systematically false. The most common form of error theory holds that moral claims purport to describe stance-independent moral facts, and that, since there are no such facts, all such claims are false. This is a bit like going around making claims about the various features of unicorns, even though unicorns don’t exist:
Unicorns like cupcakes.
Unicorns can magically heal you.
Unicorns dislike violence.
Since these claims implicitly presuppose that unicorns exist, then if unicorns don’t exist, these statements are false. Just so, if moral claims presuppose that there are stance-independent moral facts, but there aren’t any, then such claims are false.
BB’s description doesn’t clearly convey this initially, but BB later adds:
The error theorist has to say that the meaning of those terms is exactly the same as what the realist thinks.
What about BB’s objections to error theory?
Error theory is best described as in error theory, because of how sharply it diverges from the truth. It runs into a problem — there are obviously some true moral statements. Consider the following six examples.
What the icebox killers did was wrong.
The holocaust was immoral.
Torturing infants for fun is typically wrong.
Burning people at the stake is wrong.
It is immoral to cause innocent people to experience infinite torture.
Pleasure is better than pain.
The only way this objection would work is if BB meant that there are obviously some stance-independently true moral statements. Recall that error theory holds that there are no stance-independent moral facts. BB’s objection to this? That it’s obvious that there are such facts. This is not an objection. It is simply an assertion to the contrary. I could simply retort: “No, it’s obvious there are no such facts.” After all, it does seem obvious to me. That’s literally true.
BB’s response here is very feeble. At best, it’s simply a reiteration of some kind of Moorean foot-stomping insistence that it’s just obvious that moral realism is true. A realist could offer this response to any argument at all for an antirealist position, so what’s the point in doing it here? Why not just say: “It’s obvious moral realism is true. QED.” and end his post there?
If, in the absence of any specific criticism of a metaethical theory, one always just falls back on reiterating that realism is obviously true, what’s the point in describing these positions and framing one’s response as though one is offering a distinctive response to various alternatives?
BB also says a few other weird things. Consider this remark:
The error theorist has to say that the meaning of those terms is exactly the same as what the realist thinks.
Has to? BB frames this like the error theorist is making some sort of concession. They’re not.
The error theorist has to think that when people say the holocaust is bad, they’re actually making a mistake. However, this is terribly implausible. It really, really doesn’t seem like the claim ‘the holocaust is bad’ is mistaken.
If by “bad” you mean “stance-independently bad,” then it does seem mistaken to me. I don’t think anything is “stance-independently bad.” In fact, it seems obvious to me that the Holocaust isn’t stance-independently bad. If that strikes BB or anyone else as horrifying or repugnant, then I think they’re making the same mistake I keep pointing to: normative entanglement. Not thinking something is “stance-independently bad” doesn’t prevent you from opposing it with just as much passion, or feeling just as much repugnance, as a moral realist. An error theorist can, without any inconsistency, be exactly as opposed to murder, torture, and so on as the realist. There are no substantive practical implications to thinking the things on BB’s list aren’t stance-independently immoral, or bad, or whatever. BB and others constantly entangle metaethical claims with normative claims, giving the false and misleading impression that if you deny the realist’s metaphysical views, you’re somehow an awful, terrible, no-good person with repugnant views. I’m not the first or only person to draw attention to this problem. Joyce alludes to normative entanglement in the SEP entry on moral antirealism:
The last example (“Stealing is not morally wrong”) calls for an extra comment. In ordinary conversation—where, presumably, the possibility of moral error theory is not considered a live option—someone who claims that X is not wrong would be taken to be implying that X is morally good or at least morally permissible. And if “X” denotes something awful, like torturing innocent people, then this can be used to make the error theorist look awful. But when we are doing metaethics, and the possibility of moral error theory is on the table, then this ordinary implication breaks down. The error theorist doesn’t think that torturing innocent people is morally wrong, but doesn’t think that it is morally good or morally permissible either. It is important that criticisms of the moral error theorist do not trade on equivocating between the implications that hold in ordinary contexts and the implications that hold in metaethical contexts.
I believe all or most of the rhetorical force BB and others get out of depicting the error theorist as thinking that torturing babies is “not morally wrong” derives from just this conflation: the pragmatic implication that the error theorist’s views are truly ridiculous and awful and repugnant. Yet if the error theorist merely thinks that a certain metaphysical thesis is baked into everyday discourse, that that thesis is mistaken, but that this has absolutely no practical consequences at all and has no impact on their attitudes, behavior, or judgments, what exactly is so objectionable about the error theorist’s view? Merely rejecting baroque metaphysics is no cause for pearl-clutching horror, yet such pearl clutching seems to lurk beneath the incredulous declarations that the error theorist’s view is “implausible.” WHY is it implausible? I believe BB and others mistakenly find it implausible because they fail to disconnect the error theorist’s claim from the implications they and others project onto it, and I think that’s because they themselves think things only are or could be valuable if they’re valuable in a realist sense. Neither the error theorist nor anyone else is obliged to share in this view. The realist seems to partially impose their own preconceptions onto the error theorist’s (or other antirealist’s) response and worldview, then stand back in shock and horror at what the error theorist appears to be saying, thinking, and feeling. This is a partial failure of imagination, and an insistence on partially imposing one’s own philosophical views on other people. I call this the halfway fallacy, and discuss it here:
The halfway fallacy occurs when one argues that a position contrary to one’s own has one or more flaws or undesirable characteristics, but those flaws or undesirable characteristics only apply to the position if some set of claims one believes is in fact true, claims that one hasn’t argued for and that those the objections are directed at are free to reject (and in many cases probably would reject). In other words, the problem occurs when one holds certain presuppositions that those who hold the contrary view are free to reject (it might also be the case that they aren’t merely free to reject these presuppositions but do reject them, and it may even be that rejecting the presuppositions is a natural and synergistic feature of the contrary view).
One might also call this the fallacy of unshared presumption. The central problem with this form of reasoning is that an argument for or against some position is based on considering the implications of someone holding one or more views contrary to one’s own, but, critically, not considering that they reject certain other presuppositions you hold.
I believe BB and other realists are routinely guilty of the halfway fallacy when criticizing error theory and other antirealist positions.
Then we get this:
Any argument for error theory will be way less intuitive than the notion that the Holocaust was, in fact, bad.
What is this? An argument? It sounds like a slogan. It also seems like it’s declaring, in advance, that no argument for error theory will be intuitive, without even having to consider what the content of those arguments actually is. How on earth can BB possibly know this? Even if it were true, what is the implication? If an argument’s premises need to be intuitive for BB to accept them, then is BB implying no argument for error theory could be convincing in principle? If not, then is BB only suggesting that the premises of arguments for error theory will always be “way less intuitive” than the claim that something is stance-independently bad (note BB still just says “bad” without specification, as if antirealists like me don’t think these things are “bad”), but that BB would still consider accepting such arguments anyway? If so, then what’s the point of saying this?
Note, again, the use of “intuitive” as if intuitions aren’t features of people. Claims can’t be “intuitive” or “counterintuitive” in and of themselves; they can only be intuitive to someone. BB then says, “Let’s test these intuitions,” but for some reason repeats a handful of seemingly unrelated principles from earlier. There may be a typo or duplication here or something. I’m not sure, so I’ll move on and maybe BB will clarify.
10.3 Subjectivism
BB next turns to subjectivism, which he characterizes as follows:
Subjectivism holds that moral facts depend on some people’s beliefs or desires. This could be the desires of a culture — if so, it’s called cultural relativism.
This is ambiguous between agent and appraiser relativism. Agent relativism holds that moral claims are made true by the standards of the agent performing the action or the culture of that agent. Appraiser relativism holds that moral claims are true or false relative to the standards of whoever is evaluating the moral action (or principle, character trait, etc.) in question. Critics of relativism routinely focus only on agent relativism, critiquing it to the exclusion of appraiser relativism, as though the latter didn’t exist. BB does that here.
10.3.1 Cultural relativism
But BB also gets a little silly with the rhetoric. Look at how BB opens this discussion:
This is an embarrassing subheading. In any case, BB goes on to describe cultural relativism as follows:
Cultural relativism is — as the sub-header suggested — something that I find rather implausible. There are no serious philosophers that I know of who defend cultural relativism. One is a cultural relativist if they think that something is right if a society thinks that it is right.
BB makes no clear distinction between agent and appraiser cultural relativism. The last sentence indicates that this is agent cultural relativism, not appraiser cultural relativism. An agent cultural relativist thinks that if a particular culture regards an action as good or bad, then it is good or bad for members of that culture. That culture “fixes” or sets the moral standards for itself. An appraiser cultural relativist instead holds that when we say things like:
“That’s morally wrong.”
This means something like:
“That violates my culture’s standards.”
The appraiser cultural relativist can and does judge people according to their own culture’s moral standards, not according to the standards of the culture whose practices they’re evaluating. This distinction is mentioned in the SEP entry on moral relativism:
[...] that to which truth or justification is relative may be the persons making the moral judgments or the persons about whom the judgments are made. These are sometimes called appraiser and agent relativism respectively. Appraiser relativism suggests that we do or should make moral judgments on the basis of our own standards, while agent relativism implies that the relevant standards are those of the persons we are judging (of course, in some cases these may coincide). Appraiser relativism is the more common position, and it will usually be assumed in the discussion that follows. Finally, MMR may be offered as the best explanation of what people already believe, or it may be put forward as a position people ought to accept regardless of what they now believe. There will be occasion to discuss both claims below, though the latter is probably the more common one.
In any case, appraiser relativism does not carry the implication that if some culture or individual thinks torture is okay, that you must think it’s okay for them to torture people. That’s only a feature of agent relativism. I elaborate on the agent/appraiser distinction here.
Does BB’s critique “work” against agent cultural relativism?
Sort of. If you really think that if some culture thinks it’s okay to torture babies, then it is okay for them to torture babies, well, I’m not going to get on board with that. In that respect, I share the revulsion toward such a view that BB likely feels. But I don’t think this makes the view false as a metaethical theory. I reject such views because:
(a) They don’t reflect how I speak or think
(b) I don’t endorse the normative and practical implications of these views
(c) Insofar as they purport to capture ordinary thought and language, I doubt they capture any more than a small subset of some populations
I bet BB would agree with (a), (b), and (c). Note, however, that relativist accounts traditionally purport to capture ordinary moral semantics. In this respect, insofar as they succeed at doing so, they could turn out to be descriptively true, and, insofar as they are true in this respect, most critiques that appeal to one’s horror, shock, and outrage, would simply fail.
Suppose the relativist holds this view:
When ordinary people make moral claims, those claims serve to express a propositional statement about what is or isn’t consistent with the standards of that culture and, if it is consistent with the standards of that culture, this is what it means for the action to be good or permissible.
Now suppose that this is, in fact, what people mean: when they say things like “murder is wrong,” they mean “murder is inconsistent with my culture’s moral standards,” and suppose we further establish that what it means for an action to be right or wrong just is that it is right or wrong according to that culture’s moral standards. If one then makes the further normative leap of supposing that everyone is bound by this normative-value-fixing feature of ordinary moral language, then presto: agent cultural relativism is true, whether you like it or not.
In other words, agent cultural relativism could just turn out to be true in virtue of the semantics of ordinary moral language and the associated metaphysics of the moral-fact-fixing features of language. If this is just how language and metaphysics play out, and it really is the case that each culture “decides for itself” what is “good or bad” relative to that culture, well, tough shit. Facts about how people speak aren’t refuted by horror or repugnance.
Of course, it isn’t repugnance or horror at the implications of agent cultural relativism that serves as direct or decisive evidence that it’s false. Rather, it’s that such an account is counterintuitive: it may not seem to us that this is what people mean when they make moral claims. And that may very well be evidence that this isn’t what people mean. Fair enough: if this isn’t what people mean, and agent cultural relativism is in part supposed to be a descriptive account of what people mean, then it is to that extent false. But again, I would’ve thought that facts about what people mean are empirical questions. While our intuitions may provide some evidence of what people mean, and may be a fairly reliable guide, the final arbiter will be empirical data, not armchair hypotheses. Suppose it turned out that, as a matter of fact, this is what people mean: they really do speak and think like agent cultural relativists. If one holds a view of language where ordinary usage fixes the subject matter, then it simply follows that, as a matter of descriptive fact, agent cultural relativism correctly captures the meaning of ordinary moral claims.
This is where things get weird. Agent cultural relativism isn’t, strictly speaking, characterized as an account of ordinary language. It seems to carry normative moral implications. If one thinks that when people make moral claims, those claims refer to what is morally right or wrong relative to their cultures, this isn’t enough: one must also think that those claims in fact determine what is morally right or wrong for members of that culture. But how could that be a feature of the meaning of the claims? Simple: it can’t. This is a further, normative thesis that isn’t and couldn’t be a feature of ordinary moral language, because it’s a normative thesis, not a thesis about the meaning of moral claims. It’d be one thing to say that people who make moral claims intend for those claims to be interpreted as having these normative implications; it’s quite another to say that their claims in fact carry those normative implications. The latter could not in principle be a feature of the meaning of the claims in question, for the same reason that if, when I say “that is morally obligatory,” I mean “that maximizes utility,” this doesn’t thereby make it the case that maximizing utility is morally obligatory. Simply put: agent cultural relativism is not a purely metaethical theory at all; it is both a metaethical theory and a normative moral theory.
Ironically, it is the normative implications of “agent” relativism that are so repugnant, not the metaethical implications. Even more ironically, what makes these normative implications so objectionable is the fact that they function the way moral facts function for moral realists. What do I mean by this? I mean this:
Moral realists hold that there are stance-independent moral facts, and typically maintain (unless they’re some kind of naturalist) that these moral facts have some kind of “authority” over us: they compel us to act, they “bind” us, they “require” us to comply, and so on.
Agent relativists hold that there are stance-dependent moral facts that likewise bind us in more or less the same way. Whereas for the realist there are facts that bind us that are true independent of anyone’s standards, for the agent relativist there are facts that bind us independent of our own (or our culture’s) standards: namely, the standards of other individuals and cultures. If the members of another culture think it’s okay to torture babies, then agent relativists grant that it is okay for them to torture babies, in such a way that you must grant that it’s okay for them to do so.
This may very well be a ridiculous view, but it’s ridiculous because it compels us to honor the moral values of other individuals or cultures, even if those values conflict with our own values.
Let’s grant that agent relativism is a highly objectionable view. What about appraiser relativism? Unfortunately, BB doesn’t offer a critique of appraiser relativism. As such, BB has not adequately refuted subjectivism. Appraiser subjectivism and cultural relativism don’t carry the normative implications outlined above. They can simply construe moral claims to be reports of the speaker’s standards or the standards of their culture. Such statements carry no necessary implications about how anyone else ought to act. For instance, suppose a psychopath says:
It’s morally good for me to torture babies for fun.
The appraiser relativist interprets this as:
It’s consistent with my moral standards to torture babies for fun.
If it is consistent with the psychopath’s standards, this would be trivially true. But it also carries no normative or practical implications for anyone else: all such a truth amounts to is the truth that the psychopath has accurately reported their personal values. You don’t have to honor or care about those values at all, because, for the appraiser relativist, your own moral claims reflect your values (or your culture’s). It’s much harder to show that such a position carries repugnant implications. I haven’t seen anyone convincingly do so.
10.3.2 Individual subjectivism
BB next turns to individual subjectivism:
Individual subjectivism says that morality is determined by the attitude of the speaker. The statement murder is wrong means “I disapprove of murder.”
This appears to be appraiser individual subjectivism. Technically, the culture/individual and agent/appraiser distinctions are orthogonal to one another: one can be an agent cultural relativist or an appraiser cultural relativist, and one can likewise be an agent individual subjectivist or an appraiser individual subjectivist. BB only addresses agent cultural relativism and appraiser individual subjectivism. BB says he’s given objections in a previous article, and quotes himself as follows:
If it’s determined by the moral system of the speaker the following claims are true.
“When the Nazi whose ethical system held that the primary ethical obligation was killing jews said “It is moral to kill jews,”” they were right.
“When slave owners said ‘the interests of slaves don’t matter,’ they were right.”
“When Caligula says "It is good to torture people,” and does so, he’s right”
“The person who thinks that it’s good to maximize suffering is right when he says “it’s moral to set little kids on fire””
Additionally, when I say “we should be utilitarians,” and Kant says “we shouldn’t be utilitarians,” we’re not actually disagreeing.
BB seems to mix up agent and appraiser relativism here. BB’s initial characterization appears to be appraiser individual subjectivism. This is because, if when someone says “X is wrong” they mean “I disapprove of X,” then their claims only amount to reports about their personal standards. This does not, in itself, entail that those claims fix or determine what is morally right or wrong for that person. Yet these examples seem to indicate that BB is discussing agent individual subjectivism. First, BB says:
If it’s determined by the moral system of the speaker the following claims are true.
This is ambiguous and consistent with both agent and appraiser relativism. What matters is what it is that’s being determined as true. If when a person says “X is wrong” they just mean “I disapprove of X,” the only things being determined to be true when the person makes moral claims are facts about whether they approve or disapprove of the actions in question, and whether that attitude is consistent with the claim being made, e.g., if they say “X is wrong,” and they disapprove of X, the fact that they disapprove of X is what makes “X is wrong” true.
If this is all subjectivism amounts to, BB’s examples are all extremely misleading: they only appear counterintuitive, horrifying, repugnant, and awful if the reader fails to interpret them in line with the appraiser individual subjectivist’s analysis of the meaning of these claims. Let’s translate them in accord with that analysis:
“When the Nazi whose ethical system held that the primary ethical obligation was killing jews said “It is moral to kill jews,”” this statement meant “We approve of killing Jews.” Since they did approve of killing Jews, their statement was true.
“When slave owners said ‘the interests of slaves don’t matter,’ this meant “We don’t care about the interests of slaves.” Since they didn’t care about the interests of slaves, this statement was true.”
“When Caligula says “It is good to torture people,” and does so, he means “I like torturing people.” Since Caligula does like torturing people, this statement is true.
“The person who thinks that it’s good to maximize suffering is right when he says “it’s moral to set little kids on fire,” because he means “Setting little kids on fire will maximize suffering and that’s what I want to do,” and since it will maximize suffering and that is what he wants to do, what he says is true.
As you can see, once you translate all of these statements into the appraiser individual subjectivist’s analyses, all of these statements are true: they’re true reports about the attitude of approval or disapproval of the speaker. By obscuring this fact, BB’s remarks give the impression that if you endorse those statements because you agree with the subjectivist, you think killing Jews, slavery, and baby torture are good. But this is not an implication of the appraiser subjectivist’s analysis of the meaning of these statements.
The force of BB’s objections relies on actively misleading readers. I rarely say this, but BB’s “arguments” here are not just terrible, they are unethical: even if BB isn’t aware of how misleading his remarks are, he has a minimal moral obligation not to be so negligent and sloppy in his presentation as to mislead audiences. I believe BB’s objections here rely so extensively on rhetoric and misleading framing that, if doing so isn’t intentional (which would definitely make it unethical), it’s still culpably negligent. In short: I’m not just saying BB is wrong here (he is), I’m saying BB’s remarks are so misleading as to be unethical. It is morally wrong to misleadingly imply that philosophers who hold philosophical positions contrary to your own are okay with genocide and torture when this impression is entirely the result of your own negligence. BB and other realists should stop doing this: stop implying that antirealists are okay with genocide and torture, when you ought to know better.
BB’s final remark is also an interesting one:
Additionally, when I say “we should be utilitarians,” and Kant says “we shouldn’t be utilitarians,” we’re not actually disagreeing.
I suppose this is supposed to indicate some kind of counterintuitive implication of individual subjectivism. It isn’t. Once again, let’s translate it:
Additionally, when I say “I approve of us being utilitarians,” and Kant says “I disapprove of us being utilitarians,” we’re not actually disagreeing.
This is a disagreement. It’s not a disagreement about what’s true. It’s a disagreement about what to do. Suppose you and a friend want to order food. You’re in the mood for pizza. They’re in the mood for sushi. If you say “We should get pizza,” and your friend says, “We should get sushi,” does this require you to be a gastronomic realist in order to disagree? No: you need not think there is some stance-independent fact of the matter about the “correct” food to get, where the “correct” food to get has nothing to do with your preferences. Nothing strikes me (and, I hope, you) as amiss about construing these remarks in subjective terms:
You: “I would prefer we get pizza.”
Them: “I would prefer we get sushi.”
Neither of you disagrees about what’s true. But you do disagree about what to do. When realists claim that relativism and other views entail that nobody “disagrees,” they are implicitly employing a narrow conception of “disagreement” that involves only disputes about what’s true. But people also disagree on matters of coordination: people’s goals conflict, and when they do, people run into practical disputes. When you want to pay $5000 for a car, but the car salesperson wants you to pay $6000, you disagree about the price of the car. Such “disagreements” don’t require anyone to think there is a “true” car price.
Subjectivists can and do disagree in this respect: If I want to live in a world without torture, and a psychopath wants to live in a world with torture, we don’t agree about what we should do. We don’t have to think there are stance-independent moral facts for us to have conflicting attitudes about what to do, nor would it be senseless or insane or a waste of time or self-contradictory to argue. Arguing likewise doesn’t require that one argue about what’s true. Arguing can involve negotiation about what to do.
Realists, and philosophers more generally, have an arbitrarily narrow conception of “disagreement” that the rest of us are not obliged to endorse. Once again, I have an article addressing this, called “What’s true vs. what to do.” Check it out.
11.1 Irrational desires
BB next presents the following argument:
1 If moral realism is not true, then we don’t have irrational desires
2 We do have irrational desires
Therefore, moral realism is true
Deductive arguments like this are always silly and pointless. They always require disambiguation. Once disambiguated, the premises either just entail the conclusion in virtue of the definitions of the terms, or don’t. If the former, the argument is trivial. If the latter, the argument isn’t sound because at least one premise will be false (or at least, one will be free to reject at least one of the premises). That’s just how deductive arguments work. They simply repackage the conclusion.
This argument is no different. Once we disambiguate what “irrational” desires are, this will either require:
(a) Disambiguating “irrational” in such a way that for something to be irrational is for it to be inconsistent with stance-independent moral facts
(b) Disambiguating “irrational” in such a way that the antirealist can endorse that people can have “irrational desires”
If (a), then the argument is not meaningfully different from simply reiterating that moral realism is true in the second premise, in which case the antirealist can reject P2, while if it’s (b) then the antirealist can reject P1. Since any non-ridiculous version of this argument wouldn’t allow for an antirealist to hold that there are “irrational desires” in a way consistent with antirealism, we’d have to go with (a), in which case whatever “irrational desires” are, if they exist, they must entail that realism is true. Let’s see what BB does with this argument.
BB begins by saying:
Premise one seems the most controversial to laypersons, but it is premise 2 that is disputed by the philosophical anti-realists.
I don’t know if that’s true, but okay.
Morality is about what we have reason to do — impartial reason, to be specific. These reasons are not dependent on our desires.
Once again, BB simply…presents an assertion. I don’t agree that morality is “about what we have reason to do,” nor do I grant that it specifically is about what we have “impartial reason” to do. It is entirely consistent with conventional conceptions of “morality” for partiality not to be immoral or to fall outside the bounds of morality: one might even think partiality is morally obligatory: that we have a moral obligation to assign greater worth to ourselves, our families, or our societies, and that it would be outright immoral to be impartial. BB is imposing a highly intellectualized, philosophical, Western conception of morality on his very conception of what morality is “about.” None of the rest of us are obligated to accept this as part of the meaning of morality or what morality is about. Impartiality is a feature of some normative moral theories; it isn’t part of any definition of morality I’d accept. BB continues:
Morality thus describes what reasons we have to do things, unmoored from our desires.
Again, assertions without arguments. I can do the same: no it’s not. QED.
When one claims it’s wrong to murder, they mean that, even were one to desires murdering another, they shouldn’t do it — they have a reason not to do it, independent of desires.
When one claims this? Who is “one” or “they”? That’s not what I mean when I make moral claims. Who is BB talking about?
While we’re at it, what does BB mean by a “reason”?
All of these remarks are setup for an argument for the first premise:
1 If there are desire independent reasons, there are impartial desire independent reasons
2 If there are impartial desire independent reasons, morality is objective
Therefore, morality is objective.
…This doesn’t appear to be a valid argument. For it to be valid, you’d need another premise, like “There are desire independent reasons.” But since moral realism more or less just is the view that there are desire independent reasons, this would be blatantly question-begging.
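To make the gap explicit, here is a schematic gloss of my own (the letters are mine, not BB’s): let D be “there are desire independent reasons,” I be “there are impartial desire independent reasons,” and O be “morality is objective.” BB’s two premises then have the form

\[
D \to I, \qquad I \to O \;\;\nvdash\;\; O
\]

whereas a valid version needs D itself as an additional premise:

\[
D, \quad D \to I, \quad I \to O \;\;\vdash\;\; O
\]

And D, of course, is the very thing in dispute.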
Note that this also doesn’t specify that the reasons in question are moral reasons, so there’s some sloppiness there: it doesn’t follow that if normative realism is true that moral realism is true. I also see little reason to think that if there are desire independent reasons that there are impartial desire independent reasons. Why would that follow? BB says:
Premise 1 is trivial — impartial desire independent reasons are just identical to non-impartial desire independent reasons, but adding in a requirement of impartiality.
Yea, they’re identical other than in precisely the way they’re not…that still doesn’t mean that if there are non-impartial desire independent reasons that therefore there are impartial ones. One can simply reject the first premise without issue here. I’m not sure what BB is doing here but this argument is…not good.
BB concludes:
Thus, if you actual [sic] have reasons to have particular desires — to aim for particular things, then morality is objective. Let’s now investigate that assumption.
Sure, but this is trivial. Of course, if you have reason to have particular desires independent of your desires, then a view according to which there are facts about what reasons you have to do things independent of your desires is true, and a view according to which you don’t have such reasons is false. What is BB trying to do here? Defeat his opposition with tautologies? This is no different than saying:
Thus if there are bananas, then the view that “there are bananas” is true, and the view that “there are no bananas” is not true.
Next, BB seeks to defend the second premise:
Premise 2 states that there are, in fact, irrational desires. This premise is obvious enough.
No, it isn’t. It’s “obvious” only if you don’t disambiguate it. Of course I think there are “irrational desires,” but by this I don’t mean that there are desires which are inconsistent with what one has stance-independent reason to do. I mean that one can have desires that are inconsistent with achieving their broader reflective objectives. For instance, if a person deeply desires to be healthy, then a desire to eat nachos all day would be “irrational” in the trivial sense that this would be inconsistent with their other desire. I only endorse instrumental conceptions of rationality, according to which one is irrational insofar as one voluntarily acts out of accord with what would achieve their overall goals: some lower order desire may conflict with a higher order one, and insofar as the person acts on the former, they’re “irrational” in the sense that they’re failing to do what is in their own interests, relative to their own standards. Nothing about this implies or entails realism.
BB does what many philosophers do:
1. Present an innocuous statement that is ambiguous between a number of readings and that, in ordinary language, carries significant pragmatic implications.
2. Declare that the statement is obviously true or false.
3. Equivocate on this obvious truth or falsehood, which emerges from the ordinary reading of the statement, where those pragmatic implications are operative and yield the judgment that it’s obviously true or false: the “ordinary, pragmatic” use of the statement. The equivocation is achieved by:
4. Substituting in one’s purely semantic, non-pragmatic, stipulative account of the meanings of the key terms in the statement: the “stipulative, non-pragmatic” use of the same statement, and then
5. Claiming that the statement is obviously true or false in the sense described in (4), the stipulative, non-pragmatic use, rather than in the sense described in (3), when what actually causes people to conclude that it’s obviously true or false is the ordinary, pragmatic use, not the stipulative, non-pragmatic use of the terms.
BB, like many others, unwittingly relies on this conflation between ordinary language and technical discourse to give the impression that his philosophical positions are obvious, when in fact, once you strip away the pragmatic features of those statements, it’s no longer clear that his position is as obvious as he seems to think it is.
In everyday discourse, the claim that there are “no irrational desires” carries pragmatic implications for how you’d react to other people: that if you saw someone smoking cigarettes, or pursuing courses of action that seemed to reliably lead to their own misery, you’d think “nothing to see here, moving along…” This is a perfectly reasonable presumption to make, because everyday contexts in which people make such remarks typically only arise in conversations where something is at stake. People don’t just go around asserting abstract, technical philosophical truths for no purpose other than to make true statements. People talk for some purpose and to some end, and such purposes and ends usually center on that person’s desires or goals. Most people most of the time in most contexts aren’t just trying to say what they think is true; insofar as people bother to report one truth over another, there’s typically some reason why they’re doing so other than a mere desire to say what’s true.
Given this, if a person were to say in some real-world situation that “there are no irrational desires,” or “that person’s desires aren’t irrational,” this isn’t merely an expression of their philosophical views (if it is at all); rather, they’d be saying this for some purpose. They may be expressing their evaluation of the person, such as approval or disapproval, or otherwise signaling some social or conversational or personal goal.
Philosophers don’t treat language like this. They act like we’re just proposition-spewing automatons. They abstract what we say away from everyday contexts, then seek to analyze what we mean by what we say outside those contexts. But this is simply throwing out the baby with the bathwater: what we mean just is what we’re trying to do in those contexts.
That’s all language is: goal-directed behavior. It isn’t some bizarre quasi-mystical effort to align the mouth-sounds we make with some Platonic pattern. We talk to our ends and for purposes. Trying to distance the meanings of our terms from their pragmatic implications makes about as much sense as trying to study why people play baseball by studying baseballs and bats. If you want to study why people play baseball, you have to study the people playing baseball. If you want to know what people mean when they use certain terms or phrases, you have to study those people.
The philosopher’s misconceptions about language are, I believe, the original sin behind so many of these conflations and errors and bad arguments. It explains why BB and others think it’s “obvious” that there are “irrational desires.” The conflation between ordinary language, with its practical purposes and goal-directedness, which is captured in its pragmatic elements, and their weird, distilled, technical, theory-laden use of analogs to those terms, causes them to think that if something is “obvious” in the former sense this somehow carries over to their theories, because they think the ordinary use of the terms matches their technical use. If it doesn’t, then we’ve got a very, very big problem. In other words, suppose we have these two phrases:
Phrase 1 (Ordinary sense): There are irrational desires.
Phrase 2 (BB’s sense): There are irrational desires.
Now suppose we ask what each of these sentences means, and we discover they mean:
Phrase 1: [All sorts of things, but not typically the same thing as Phrase 2]. For instance, one thing people might mean is “People sometimes choose to do things that aren’t in their best interests, by their own lights, and they come to regret those things.”
Phrase 2: There are stance-independent reasons why we should have certain desires.
If this turns out to be the case, then the obviousness of Phrase 1 doesn’t transfer over to Phrase 2. BB is not entitled to presume that if it’s obvious that there are “irrational desires,” that therefore it’s obvious there are stance-independent reasons why we should have certain desires.
If BB wants to directly insist the latter is “obvious,” this is a much tougher sell: nonphilosophers won’t know what a “stance-independent reason” is, because this is a technical term and there’s little indication ordinary people would understand it. I’d also add that, on empirical grounds, I doubt they even have this concept or think about rationality in a realist way, quite apart from whether they have the vocabulary to express it.
BB goes on to say this:
Note here I use desire in a broad sense. By desire I do not mean what merely enjoys; that obviously can’t be irrational. My preference for chocolate ice-cream over vanilla ice cream clearly cannot be in error. Rather, I use desire in a broad sense to indicate one’s ultimate aims, in light of the things that they enjoy. I’ll use desire, broad aims, goals, and ultimate goals interchangeably.
Thus, the question is not whether one who prefers chocolate to vanilla is a fool. Instead, it’s whether someone who prefers chocolate to vanilla but gets vanilla for no reason is acting foolishly.
This is strange, because it’s consistent with antirealism to think someone is foolish if they do something that doesn’t optimize for their own preferences. It looks like an instrumentalist conception of rationality.
BB then says this, which seems like a weird pivot and departure from that characterization of rationality: the earlier characterization appears, at first glance, consistent with antirealism, whereas these remarks do not:
The anti-realist is in the difficult position of denying one of the most evident facts of the human condition — that we can be fools not merely in how we get what we want but in what we want in the first place.
11.2 Future Tuesday Indifference
Antirealists are in no more difficult a position in denying this than we are in denying moral realism itself. There’s nothing difficult about denying this: I think the idea that people can be fools with respect to what they want in the first place is ridiculous, a kind of category error. I don’t even think this makes sense. BB presents cases that allegedly support his claim:
1 Future Tuesday Indifference: A person doesn’t care what happens to them on a future Tuesday. When Tuesday rolls around, they care a great deal about what happens to them; they’re just indifferent to happenings on a future Tuesday. This person is given the following gamble — they can either get a pinprick on Monday or endure the fires of hell on Tuesday. If they endure the fires of hell on Tuesday, this will not merely affect what happens this Tuesday — every Tuesday until the sun burns out shall be accompanied by unfathomable misery — the likes of which can’t be imagined, next to which the collective misery of history’s worst atrocities is but a paltry, vanishing scintilla.
They know that when Tuesday rolls around, they will shriek till their vocal chords are destroyed, for the agony is unendurable (their vocal chords will be healed before Wednesday, so they shall only suffer on Tuesday). They shall cry out for death, yet none shall be afforded to them.
Yet they already know this. However, they simply do not care what happens to them on Tuesday. They do not dissociate from their Tuesday self — they think they’re the same person as their Tuesday self. However, they just don’t care what happens to themself on Tuesday.
I don’t find anything even remotely irrational about this person. I simply think they have weird preferences.
BB eventually says:
This person with indifference to future Tuesdays is clearly making an error.
Clearly to whom? It’s not clear to me that they are making an error. It’s clear to me they’re not making an error. This is a great thought experiment for demonstrating that only antirealism makes sense of human rationality! Once again, it appears BB is simply directly appealing to his own intuitions and reactions to these thought experiments. Sorry, BB, but I simply don’t care how you react to these scenarios. That’s not great evidence to me that you’re correct.
However, the anti-realist must insist that, not only is it not the greatest error in human history, it isn’t an error at all.
I straightforwardly do so and deny there’s anything even remotely “difficult” about doing so. Future Tuesday Indifference is a bad thought experiment and I’m wholly unimpressed with Parfit on the topic of metaethics.
BB says:
Only the moral realist can account for their error
This is silly. Obviously we deny there is an error. It’s only an “error” conditional on the realist being correct.
BB next considers some possible responses from an antirealist.
Now the anti-realist could try to avoid this by claiming that a decision is irrational if one will regret it. However, this runs into three problems.
Here’s the first one:
First, if anti-realism is true then we have no desire independent reason to do things. It doesn’t matter if we’ll regret them. Thus, regrettably, this criteria fails. Second, by this standard both getting the pinprick on a single Monday and the hellish torture on Tuesday would be irrational, because the person who experiences them will regret each of them at various points. After all, on all days of the week except Tuesday, they’d regret making the decision to endure a Monday pinprick. Third, even if by stubbornness they never swayed in their verdict, that would in no way change whether they choose rightly.
This is not a good response. An antirealist is not obliged to grant that things only “matter” if we have desire independent reasons to do them. Things can “matter” in a way consistent with antirealism. The antirealist can therefore hold that something is irrational if we’d regret it, and that this “matters” in the sense that we care about whether we’d regret something or not; i.e., they can offer a subjective conception of things “mattering” that’s consistent with antirealism. BB offers no reasons why an antirealist couldn’t do this. He simply helps himself to the presumption that the only sense in which things can matter is the realist sense. Antirealists don’t have to grant this. This is yet another instance of the halfway fallacy. BB next says:
Second, by this standard both getting the pinprick on a single Monday and the hellish torture on Tuesday would be irrational, because the person who experiences them will regret each of them at various points.
An antirealist can hold that, all else being equal, something is irrational if one would regret it, while adding that if one’s options are restricted to outcomes that one would, all else being equal, regret, it’s not irrational to choose the one that one would regret the least.
11.3 Picking grass
This is BB’s next scenario:
Picking Grass: Suppose a person hates picking grass — they derive no enjoyment from it and it causes them a good deal of suffering. There is no upside to picking grass, they don’t find it meaningful or causing of virtue. This person simply has a desire to pick grass. Suppose on top of this that they are terribly allergic to grass — picking it causes them to develop painful ulcers that itch and hurt. However, despite this, and despite never enjoying it, they spend hours a day picking grass.
Is the miserable grass picker really making no error? Could there be a conclusion more obvious than that the person who picks grass all day is acting the fool — that their life is really worse than one whose life is brimming with meaning, happiness, and love?
Once again, I simply do not find it “obvious” this person is making any sort of mistake. They’re doing exactly what they want to do. What, exactly, is the mistake they’re making? This is, once again, a brute appeal to realist intuitions. So far, BB doesn’t seem to have much else than such brute appeals.
BB’s other scenarios are all like this. They describe weird people with weird preferences, then BB prompts us to share in how “obvious” it is that these people are “fools.” Well, they may be “fools” by my lights: they’re doing things and living their lives in ways I find unfortunate and pointless and regrettable…relative to my values. But I don’t think they’re making any mistakes relative to their own values, and I don’t think there’s any other sense in which they’re making any kind of normative errors, because, after all, I’m not a realist.
For me, these kinds of thought experiments backfire: they’re so terrible and underwhelming that I think to myself, “If this is the best realists have, then they probably don’t have much” and it actually causes me to be even more confident that moral realists are wrong. Yet this is all BB has in support of his premises: a repetitive list of thought experiments that only serve to preach to the choir. There is nothing to make an antirealist question their stance.
I think what’s going on with these scenarios is that BB and other realists are simply projecting their own values and preferences onto the scenarios, failing to model things from the first-person POV of the people in these scenarios, conflating how awful they themselves would find it to live their lives in those ways with how it is for the people in question, and then concluding that it must be bad for the people living their lives in that way. This is a speculative hypothesis, but let me just spell it out explicitly: I think BB and others may very well be crypto-antirealists who actually think more or less in the way I do: they act in accord with their own goals and values, and only care about their own goals and values, but they mistakenly project or externalize these onto the world around them, mistakenly thinking their own preferences and values are actually coming from the outside and are part of the furniture of the world itself. I don’t have any proof of this, and I don’t think BB or any other realists are committed to agreeing with me about this: I take them at their word that they mean what they say. But I do think that something like this could be going on outside of their conscious awareness.
In short: I suspect realists actually think more in line with antirealists, but fail to realize it, and that it is, in fact, antirealists who are better attuned not only to their own thinking but to how people think in general. Another way to put this is that I think realists are often really bad at introspection. I think we antirealists are generally better at it, and that realists are tripped up over their phenomenology and lost in weird intellectual ratiocinations that abstract away from the world so much they lose track of it.
12.0 The discovery argument
BB’s next argument begins by talking about mathematical discovery:
One of the arguments made for mathematical platonism is the argument from mathematical discovery. The basic claim is as follows; we cannot make discoveries in purely fictional domains. If mathematics was invented not discovered, how in the world would we make mathematical discoveries? How would we learn new things about mathematics — things that we didn’t already know?
The idea that “discovery” implies realism about a domain does not strike me as very compelling. One can invent a set of rules, or axioms, then make discoveries relative to those rules. This happens all the time. We’ve made lots of discoveries in chess. One can likewise make discoveries about what would best promote human interests, or achieve one’s goals, or maximize utility, and so on, without any of this suggesting there are stance-independent moral facts. Discovery is not inconsistent with antirealism. Nevertheless, BB suggests otherwise:
Well, when it comes to normative ethics, the same broad principle is true. If morality really were something that we made up rather than discovered, then it would be very unlikely that we’d be able to reach reflective equilibrium with our beliefs — wrap them up into some neat little web.
Yet again BB just makes assertions, without explanations or arguments. Why would it be “very unlikely”? Who knows! BB doesn’t say why. He just asserts that this is the case. Reflective equilibrium involves resolving tensions and inconsistencies in one’s overall set of beliefs in order to move towards a state of greater internal coherence. Nothing about such a process is inconsistent with moral antirealism. The discovered/made-up dichotomy is a false dichotomy: you can discover things within something that’s “made up,” and, in any case, it’s simply false that antirealism entails that one’s moral values are “made up.” Compare: I didn’t make up my food preferences, yet there is still no stance-independent fact of the matter about what food is good or bad.
What BB is delivering here is a canard cake: layers upon layers of false dichotomies, misrepresentations, dubious framings of the dispute, and underdeveloped distinctions, all in the service of implying there’s something defective or objectionable about antirealism without actually explicating what it is.
But as I’ve argued at great length, we can reach reflective equilibrium with our moral beliefs — they do converge. We can make significant moral discovery. The repugnant conclusion is a prime example of a significant moral discovery that we have made.
People have generally converged on which chess openings are better or worse. That doesn’t mean chess wasn’t invented.
With respect to the repugnant conclusion: I don’t find it repugnant. So I deny it’s any sort of “discovery.” Note that BB doesn’t show how it’s a discovery or explain what the discovery is or how we discover it. He just casually asserts that the repugnant conclusion is something “we” discovered.
BB continues with the unargued assertions:
Thus, there are two facts about moral discovery that favor moral realism.
First, the fact that we can make significant numbers of non-trivial moral discoveries in the first place favors it — for it’s much more strongly predicted on the realist hypothesis than the anti-realist hypothesis.
Once again, this is misleading. Either the “discoveries” in question are consistent with antirealism, in which case the fact that we’ve made such discoveries doesn’t favor moral realism, or the “discoveries” are of such a kind that they’re somehow better evidence of realism than of antirealism, in which case I’d probably just deny that we made discoveries of this kind. Unfortunately, BB doesn’t present any arguments or reasons to think we’ve made any such discoveries, he just asserts that we have.
Second, the fact that there’s a clear pattern to the moral convergence. Again, this is a hugely controversial thesis — and if you don’t think the arguments I’ve made in my 36-part series are at least mostly right, you won’t find this persuasive. However, if it turns out that every time we carefully reflect on a case it ends up being consistent with some simple pattern of decision-making, that really favors moral realism.
There’s also convergence on drinking Coca-Cola and enjoying the same TV shows. Convergence isn’t better evidence of realism than antirealism unless realist explanations for convergence are better than antirealist explanations. BB doesn’t actually show that realist explanations are better (or at least, not here). He just asserts that convergence favors moral realism.
Next, BB presents an inductive argument:
Consider every other domain in which the following features are true.
1 There is divergence prior to careful reflection.
2 There are persuasive arguments that would lead to convergence after adequate ideal reflection.
3 Many people think it’s a realist domain
All other cases which have those features end up being realist. This thus provides a potent inductive case that the same is true of moral realism.
This is too vague a sketch to be very useful. If the best you can show is that some people are realists about other domains, so maybe realism is true in this case, well, which domains? Let’s actually look at some examples. Note, too, that the third point is simply that many people think these domains are realist domains. If BB wants to argue that there’s an inductive case in favor of thinking moral realism is true, great: I agree. But one can always suspect that people are inconsistent or mistaken in those other domains, as well.
We’d actually have to agree that realism is consistently the best explanation in these other domains for this argument to even get off the ground, but BB doesn’t provide any specific examples or actually do much to build an inductive case. This brings me to a general observation about BB’s arguments: they’re all pretty bad, but they also come off as half-hearted and low-effort, as though BB couldn’t be bothered to put much effort into making a case even if a case could be made.
13.0 Phenomenal introspection
BB next says:
If we have an accurate way of gaining knowledge and this method informs us of moral realism, then this gives us a good reason to be a moral realist, in much the same way that, if a magic 8 ball was always right, and it informed us of some fact, that would give us good reason to believe the fact.
Sure, that’s true. If we have a way of knowing stuff and this method indicates moral realism is true, then this is a good reason to think moral realism is true. This is trivially true of literally any claim at all, including antirealism. What is the point of saying something like this?
BB continues:
Neil Sinhababu argues that we have a reliable way to gain access to a moral truth — this way is phenomenal introspection. Phenomenal introspection involves reflecting on a mental state and forming beliefs about what its [sic] like. Here are examples of several beliefs formed through phenomenal introspection.
My experience of the lemon is brighter than my experience of the endless void that I saw recently.
My experience of the car is louder than my experience of the crickets.
My experience of having my hand set on fire was painful.
We have solid evolutionary reason to expect phenomenal introspection to be reliable — after all, beings who are able to form reliable beliefs about their mental states are much more likely to survive and reproduce than ones that are not. We generally trust phenomenal introspection and have significant evidence for its reliability.
Thus, if we arrive at a belief through phenomenal introspection, we should trust it.
Naturally, BB is going to claim that phenomenal introspection suggests that realism is true:
Well, it turns out that through phenomenal introspection, we arrive at the belief that pleasure is good. When we reflect on what it’s like to, for example, eat tasty food, we conclude that it’s good. Thus, we are reliably informed of a moral fact.
Once again, BB invokes an unspecified “we,” apparently presuming to speak on behalf of everyone in the world, rather than just himself. BB is not in a position to speak for other people. Phenomenal introspection is a private matter. If BB wants to claim that when he introspects, it seems things are “good” in a way that favors realism, fine. But BB is not in a position to make such declarations about the results of other people’s phenomenal introspection. That’s a matter for each of those people to report.
When I introspect, nothing about my experience of pleasure or tasty food suggests that it’s “good” in a realist sense. On the contrary, it seems good in a way that favors antirealism. In other words, my phenomenal introspection suggests that antirealism is true and that realism is not true. In fact, virtually nothing could seem more obvious to me in all the world.
BB next addresses me explicitly and directly:
Lance Bush has written a response to an article I wrote about this argument; I’ll address his response here.
BB starts by summarizing Sinhababu’s argument:
I summarize Sinhababu’s argument as follows.
Premise 1: Phenomenal introspection is the only reliable way of forming moral beliefs.
Premise 2: Phenomenal introspection informs us of only hedonism
Conclusion: Hedonism is true…and pleasure is the only good.
First, “good” is ambiguous: is this a normative claim, a metaethical claim, or both? If this is supposed to be an argument for moral realism, it’s misleading and objectionable to use normative language like “good” without qualification. I’m an antirealist, and I think things are good or bad, just not stance-independently good or bad. It would be far better for those presenting such arguments to be explicit about this. Failure to do so amplifies the risk of normative entanglement: falsely implying that antirealists don’t think things are good or bad. Some antirealists may not, but this isn’t an entailment of antirealism so it isn’t appropriate to imply it.
Second, there’s a lot baked into the first premise. What’s a “moral belief”? Why is phenomenal introspection the only way to gain knowledge of “moral beliefs”? In any case, even if we just grant whatever is going on in the first premise, I’d probably just reject the second premise. Why on earth would I accept that phenomenal introspection informs us that “hedonism” is true? If “hedonism” here is taken to be realist-hedonism, where pleasure is the only stance-independent good, then of course I’m not going to accept that there’s a method that informs us of what’s true, and that it informs us that pleasure is stance-independently good. Why on earth would an antirealist grant that? One would need an argument for this. Presenting this as the argument is dialectically toothless. I could present an analogously pointless argument myself:
P1: Lance’s epistemic framework is the only way to arrive at truth.
P2: According to Lance’s epistemic framework, moral realism is false.
C: Therefore, moral realism is false.
Absent a good argument for P1, nobody other than myself has any reason to take this seriously.
BB continues:
However, we can ignore premise one, because it serves as a reason other methods are unreliable — not as a reason phenomenal introspection is reliable. Lance says
I have a lot of concerns with (1), given that I don’t know what is meant by a “moral belief”
BB answers:
I take a moral belief to be a belief about what is right and wrong, or what one should or shouldn’t do, or about what is good and bad. Morality is fundamentally about what we have impartial reason to do, independent of our desires.
The first part of this is inadequate: I’d just have to ask BB what it means to say something is “right” or “wrong”, what one “should” or “shouldn’t” do, and what’s “good” or “bad.” Note that all of these terms have nonmoral uses: We talk about the answers to test questions being right or wrong, we talk about what a chef should or shouldn’t do in the kitchen, and we talk about which movies are good or bad. All of these terms are only distinctively moral insofar as one implicitly appends “moral” to each of these terms: morally right or wrong, morally should or shouldn’t, and morally good or bad, which, of course, doesn’t help clarify what “moral” means. What BB does in this first response is simply implicitly employ the concept of morality in explaining the concept of morality. This is circular and useless.
Next, BB says that a moral belief is “fundamentally about what we have impartial reason to do, independent of our desires.” Well, that sounds like realism. In that case, the first premise seems to imply that there is a method for acquiring knowledge of stance-independent moral facts, i.e., knowledge that presupposes that moral realism is true. Since I don’t think we have a “reliable” (i.e., truth-conducive) method of forming “moral beliefs,” where such beliefs entail realism, I’m not going to accept the first premise. Once one clarifies what is going on with the premise, either the premise itself begs the question in favor of moral realism, or BB has presented an enthymeme with one or more implicit question-begging premises. In short: BB’s summary of Sinhababu’s argument appears to presume realism is true. In which case, I see no reason to accept the argument.
BB quotes me criticizing Sinhababu’s claims about the reliability of phenomenal introspection, then says:
Lance here criticizes some types of introspection — however, none of this is phenomenal introspection. People are good at forming reliable beliefs about their experiences, less good at forming reliable beliefs about, for example, their emotions. Not all introspection is alike.
No, BB, I was specifically talking about phenomenal introspection. I’m claiming that phenomenal introspection is unreliable. Here’s what I had said:
However, my initial reaction is to reject (2) because it seems like Sinhababu overestimates what kinds of information is available via introspection on one’s phenomenology, at least not without bringing in substantial background assumptions that aren't themselves part of the experience or that might have a causal influence on the nature of the experience. It’s possible, for instance, that a commitment to or sympathy towards moral realism can influence one’s experiences in such a way that those experiences seem to confirm or support one’s realist views, when in fact it’s one’s realist views causing the experience. Since people lack adequate introspective access to their unconscious psychological processes, introspection may be an extraordinarily unreliable tool for doing philosophy.
My point here is that various causal factors can influence the content of one’s phenomenal introspection in ways that undermine its truth-tracking status, selectively or in general. I was not talking about non-phenomenal introspection. BB has simply misinterpreted me. Note, too, that BB once again simply asserts “People are good at forming reliable beliefs about their experiences.” Well, BB, that’s exactly what I am denying!
I am, after all, broadly speaking in the illusionist camp about consciousness. I think people’s experiences systematically misrepresent the way the world is and that people systematically fail to introspect in a reliable way about the nature of their experiences. I don’t grant that people are good at phenomenal introspection, least of all philosophers, who I believe are inducted into ways of thinking that further compromise their ability to introspect accurately. If anything, I think studying philosophy has probably made BB worse at introspection! For what it’s worth, if by “phenomenal” Sinhababu is alluding to actual phenomenal states, or qualia, then I don’t even think there are any such things, so I certainly don’t think one can introspect reliably about them, because I don’t even think they exist to be introspected about in the first place.
BB again quotes me in this article, where I said this:
Philosophers may think that they can appeal to theoretically neutral “seemings” to build philosophical theories, but not appreciate that the causal linkages cut both ways, and that their philosophical inclinations, built up over years of studying academic philosophy, can influence how they interpret their experiences, and do so in a way that isn’t introspectively accessible. If this does occur (and I suspect it not only does, but is ubiquitous), philosophers who appeal to how things seem to support their philosophical views are, effectively, appealing to their commitment to their philosophical positions as evidence in support of their commitment to their philosophical positions. Without a better understanding of the psychological processes at play in philosophical account-building, philosophers strike me as being in an epistemically questionable situation when they so confidently appeal to their philosophical intuitions and seemings.
BB responds:
I think this objection to phenomenal conservatism is wrong. One can reject a seeming. For example, to me, the conclusion I describe here seems wrong, however, I end up accepting it upon reflection, because the balance of seemings supports it.
There was no objection to phenomenal conservatism here, so I don’t know what BB is talking about. Phenomenal conservatism holds that if something seems true, then you’re justified in believing it’s true in the absence of defeaters. I’m not rejecting that. I’m pointing to defeaters for specific people and specific forms of phenomenal introspection. I never suggested one can’t reject a seeming, either, so I have no idea what BB is on about here.
My objections are, however, apparently irrelevant because Sinhababu’s argument doesn’t rely on “seemings” but on “phenomenal introspection”:
But we can table this discussion because Sinhababu doesn’t rely on seemings — he relies on phenomenal introspection.
I hope you can hear my eyes rolling at that pronouncement: swap out “seemings” for “phenomenal introspections” in my above remark and I’d say the same damn thing. The distinction isn’t important to any points I am making. So this move of drawing attention to the distinction is irrelevant: it doesn’t change the substance of my claims.
BB continues to quote large portions of my article so I won’t repost those here, then eventually responds:
I agree that generally introspecting on experiences doesn’t inform us of their mind-independent goodness. But if we introspect on experiences that we don’t want but are pleasurable, they still feel good, showing that their goodness doesn’t depend on our desires.
This is not a good line of reasoning, because it smuggles in realist presumptions. Note BB’s closing remark: “showing that their goodness doesn’t depend on our desires.”
Their goodness? Are you kidding me? Why on earth would I grant that because I can desire not to have a pleasurable state that this shows that the pleasurable state involves some kind of “goodness” independent of my desires? I want to pause here to draw attention to just how bad BB’s reasoning is here: BB is so caught up in thinking in realist terms that it doesn’t appear to have occurred to him that he’s reifying “goodness” in precisely the way I and other antirealists find objectionable. I don’t think pleasurable states have any sort of “goodness” independent of my desires!
When I introspect about experiences that I don’t want that are pleasurable, they do have a “pleasure” feel to them, but it’s misleading to say they feel “good.” Just what does one mean by this? They feel pleasurable, to be sure, and one might offer a descriptive account of what “pleasure states” are like and use the term “good” to refer to that description. But the use of the term “good” here is ambiguous: is it referring to the descriptive qualities of the state, or a normative evaluative stance towards the state? I think BB equivocates on the meaning of “good” and “goodness” here:
But if we introspect on experiences that we don’t want but are pleasurable, they still feel good [in a non-normative, descriptive sense], showing that their goodness [in a normative, non-descriptive sense] doesn’t depend on our desires.
I don’t think normativity is built into my experiences. I don’t think there are “goodness qualia.” Maybe BB does, but if so, and if BB appeals instead to the claim that when we introspect, that pleasurable states feel “good” in some normative sense, then I’d just deny this, since my experiences don’t feel “good” to me in that way.
BB does suggest this with his next response:
But when you reflect on pleasure it feels good in a way that seems to give one a reason to promote it — to produce more of it.
Speak for yourself! When I reflect on my pleasure, it does not feel good to me in a way that “seems to give one reason to promote it.” That is precisely the kind of theory-laden, philosophical nonsense I think BB and others infer, fail to realize that they’re inferring, and then mistakenly claim is a feature of their phenomenology. BB once again appears to be making claims about people’s phenomenology: if they’re claims about me, they’re false. If they’re claims about people in general, well, those are empirical claims, and I doubt BB has good empirical evidence for those claims. If they’re claims about BB’s own introspection, that’s fine, but I see no good reason to privilege BB’s introspections over my own.
BB then says:
This is a distinctly moral notion.
Not for me it isn’t. Nothing about my experience of pleasure is “distinctly moral.” Once again, BB seems to be engaged in some kind of phenomenal imperialism, implying some claim about how I or others think without any evidence. If this is how it seems to BB: once again, fine, but BB is not in a position to speak on behalf of everyone else.
BB just continues with similar assertions:
Pleasure feels good in the sense that it’s desirable, worth aiming at, worth promoting.
Not to me, and I don’t think it feels that way to most other people (but that’s an empirical claim). Again, speak for yourself.
If this argument successfully establishes that pleasure is worth promoting, then it has done all that it needs to do.
Yea, but it hasn’t.
I want to draw attention to another difference between BB and me. BB quotes me saying this:
I don’t think any of my experiences involve any distinctively moral phenomenology, and such experiences are better explained in nonmoral terms. I’d note, however, that the notion that “hedonism is true” doesn’t make clear that hedonism is the true moral theory which isn’t explicitly stated here. I don’t know if Sinhababu (or BB, or anyone else) claims to have distinctively moral phenomenology, but I don’t think that I do, and I’m skeptical that anyone else does.
Then responds:
This question is ambiguous, but I think the answer would be no.
…That’s it. Notice that BB doesn’t explain what the ambiguity is, or try to disambiguate two or more interpretations. He just says it’s “ambiguous” and moves on. Ambiguous how? Who knows! He doesn’t bother to explain! This is a difference between BB and me: I take the time to actually disambiguate what I claim are ambiguous remarks. In this case, at least, BB doesn’t. He just asserts something is the case and moves on. This is lazy philosophy.
Next, BB quotes me saying this:
In any case, if this remark: “Therefore, hedonism is true — pleasure is the only good,” … is meant to convey the notion that hedonism is true in a way indicative of moral realism, I am still very confident that it doesn’t mean anything; that is, I think this is literally unintelligible. I find my experiences to be good, in that I consider them good, but I don’t think this in any way indicates that they are good independent of me considering them as such, nor do I think this even makes any sense.
BB says he has a few things to say:
1 It seems that most people have an intuitive sense of what it means to say something is wrong. This normal usage acquaintance is going to be more helpful than some formulaic definition that appears in a dictionary.
Unfortunately, this is completely irrelevant to my point. Even if I grant that “most people”, presumably nonphilosophers, are competent users of the non-technical English word “wrong” in ordinary language, this has absolutely nothing to do with what I’m claiming is unintelligible. I am not claiming that everyday uses of the term “wrong” are unintelligible. I’m claiming Sinhababu’s use of the term in a technical context is unintelligible. Note that I specifically said:
[...] if this remark: “Therefore, hedonism is true — pleasure is the only good,” … is meant to convey the notion that hedonism is true in a way indicative of moral realism [...]
I’m talking about the meaning of BB’s use of the term “good,” in the context of BB’s argument. I’m not talking about ordinary language uses of “good.” After all, I don’t think ordinary people are moral realists! I explicitly maintain that BB’s use of the term doesn’t match ordinary language and thought, so ordinary language and thought is (on my view) irrelevant. If anything, the disparity between BB’s usage and ordinary language and thought is one of the issues I have with BB’s use of terms like “good”: I find them to be weird, technical, and idiosyncratic.
2 This seems rather like denying that there’s knowledge on the grounds that we don’t have a good definition of it. Things are very difficult to define — but that doesn’t mean we can’t be confident in our concepts of them. Nothing is ever satisfactorily defined.
No, it’s not. There are lots of things that are hard to define but that are perfectly intelligible. I am very much on board with the late Wittgenstein, and have been critical of the failed project of providing “necessary and sufficient conditions,” of engaging in conceptual analysis, and of attempting to treat philosophy as some kind of super-dictionary adventure for almost as long as I’ve done philosophy. I would be the absolute last person to demand “good definitions” for anything. BB has this completely wrong.
3 I take morality to be about what we have impartial reason to aim at. In other words, what we’d aim at if we were fully rational and impartial.
This is wildly unhelpful. Even if I employed standard analytic philosophical methods, I’d reject the first part of this: I don’t grant that morality is about what we have “impartial” reason to aim at. I think it’s perfectly consistent, even within analytic philosophy, to maintain that it isn’t analytically true that morality requires impartiality. I see no issue at all with normative moral theories explicitly including various forms of partiality towards oneself and others. These theories don’t fail merely on the grounds that they’re not impartial.
Second, notions like “fully rational,” insofar as they bake in realist conceptions of rationality (and, I would bet, they do), seem to characterize morality in realist terms from the outset. Why on earth would I grant that? BB is welcome to stipulate a definition like this, but I’m not obliged to grant it the status of being a good or correct account of morality. At best, I’d regard it as a proprietary account.
In any case, this explanation does nothing to address my charge of unintelligibility, nor do BB’s other two replies.
BB continues to make claims that don’t really seem to engage with my concerns:
The beliefs about what they’re like are beliefs about the experience. So, for example, the belief that hunger is uncomfortable is reliably formed through phenomenal introspection.
That they’re “uncomfortable” in what sense? If we’re to move from phenomenal introspection to something like realism, then either the realist “stuff” is part of the content of the phenomenal introspection (which I’m denying), or it’s an inference one makes based on the content of the phenomenal introspection, but is not itself a part of it, in which case (a) it isn’t even phenomenal introspection, but involves non-phenomenal inferences about our phenomenal introspection, and (b) even if phenomenal introspection were reliable, that doesn’t mean one’s inferences about one’s phenomenal introspection are reliable.
In other words, if phenomenal introspection is somehow intended to support moral realism, this is either because something in the content of our phenomenal experiences lends itself to realism, e.g., because we have access to “intrinsic goodness qualia” or whatever, or because the realism isn’t part of the experience at all, in which case we’re inferring realism from those experiences; and even if phenomenal introspection were reliable, that doesn’t entail that the theoretical inferences we make about our experiences are reliable.
BB quotes me again, and the chain of quotes within quotes is getting a little complicated, so I’ll indicate explicitly what I was saying and what BB was saying:
Me: There are other difficulties with BB’s framing here:
BB: “Premise 2 is true — when we reflect on pleasure we conclude that it’s good and that pain is bad.”
Me: This is ambiguous. What does BB mean by ‘good’ and ‘bad’? Since I understand these in antirealist terms, if Premise 2 is taken to imply that they’re true in a realist sense, then I simply deny the premise. I find it odd and disappointing that BB would echo the common tendency for philosophers to engage in such ambiguous claims. BB knows as well as I do that one of the central disputes in metaethics is between realism and antirealism. So why would BB present a premise that only includes, on the surface, normative claims, without making the metaethical presuppositions in the claim explicit?
BB says in response:
This was responded to above — when we reflect on pain we conclude that it’s the type of thing that’s worth avoiding, that there should be less of. We conclude this even in cases when we want pain. To give an example, I recall when I was very young wanting to be cold for some reason. I found that it still felt unpleasant, despite my desire to brave the cold.
Once again with the “we”. This is not what happens when I reflect on pain. I do not conclude that “it’s the type of thing that’s worth avoiding,” in general, or with respect to this particular scenario. I don’t think there should be less of the “unpleasant” experience associated with cold even in circumstances where I find it desirable. I don’t think this is the type of thing “worth avoiding.” Yet BB seems perfectly comfortable speaking on behalf of others. BB should get over this conceit: simply because you’ve reached a particular conclusion doesn’t mean others have reached, or should reach, the same conclusion.
Yet again, BB continues with the bizarre use of “we” without qualification. Imagine I did this:
When we introspect, we realize moral realism is obviously not true.
I doubt BB or anyone else would take this seriously, because that’s not what they conclude when they introspect. It’s just bizarre to go around talking about what “we” conclude when “we” introspect as if you’re speaking on behalf of others.
Can I nevertheless just declare that this is what BB concludes when BB introspects? Of course not. Because I don’t have enough intellectual conceit to imagine that when anyone else introspects, they will arrive at the same conclusions as me (or at least that they should if they’re thinking “properly,” i.e., like me). BB, apparently, does. BB is simply presenting himself as being in a privileged position to speak on behalf of others about how they think.
I hope BB will in the future either (a) speak only for himself, (b) speak on behalf of people who agree with him, or (c) specify who “we” is and present reasons or evidence to think his claims about this “we” are true. Just dropping “we” and “us” all over the place like this is a very poor way to argue; it’s unclear, and depending on what one means, it may simply not be true.
Finally, I say this:
The other problem with this remark is the claim that when “we” reflect on pleasure we conclude that it’s good and that pain is bad. Who’s “we”? Not me, certainly. I don’t reach the same conclusions as BB does via introspection. BB echoes yet another bad habit of contemporary analytic philosophers: making empirical claims about how other people think without doing the requisite empirical work. BB does not have any direct access to what other people’s phenomenology is like, so there’s little justification in making claims about what things are like for other people in the absence of evidence. And there’s little empirical evidence most people claim to have phenomenology that lends itself to moral realism.
BB responds:
I think Lance does — he’s just terminologically confused. When he reflects on his pain, he concludes it’s worth avoiding — that’s why he avoids it! I think if he reflected on being in pain even in cases when he wanted to be in pain, he’d similarly conclude that it was undesirable.
This is remarkable. BB really does appear not only to be making claims about his own phenomenology, but to be making claims about other people’s phenomenology. Here BB is quite literally just declaring—without any arguments or evidence—that my experiences are just like his. Based on what, exactly? Maybe BB does think this. Okay. Why? And why should any of us agree with him?
Not only does BB claim to know what my phenomenology is like, he even declares what my conclusions are. He says, “he concludes it’s worth avoiding.” Again, based on what?
BB presents little reason to think he knows better than I do what my experiences are like, what the results of my philosophical introspection are, or what my conclusions are about my experiences. He then makes this claim:
I think if he reflected on being in pain even in cases when he wanted to be in pain, he’d similarly conclude that it was undesirable.
I’ve already reflected on this and concluded otherwise. I’ve been studying this topic for a very long time. It’s really bizarre to suggest that if I were to reflect on the matter I’d reach the same conclusion, as if I hadn’t already reflected on it many times and reached contrary conclusions; that’s part of why I’m saying otherwise in the first place.
I am happy to grant that BB has reflected on his experiences and reached conclusions different from mine. I just think BB’s conclusions are wrong. Maybe BB has phenomenology similar to mine but is making incorrect inferences about it. Maybe his phenomenology is different. Absent compelling evidence one way or the other, I see little reason to think that my reports of what my experiences are like, and of how I’ve reflected on them, are any more likely to be the result of confusion or error than BB’s own.
14.0 Addressing BB’s section, “Responding to Objections”
BB responds to several objections. I don’t find any of these objections very good to begin with, so I have no interest in defending them.
15.0 Conclusion
I don’t have much to add. I don’t think BB has presented a single decent argument for moral realism. I hesitate to even say BB has done much to present many arguments at all. Almost everything BB has to say is some variation on the notion that realism is obvious, intuitive, and comports with “our” phenomenology. I believe I’ve adequately conveyed why I don’t find this convincing at all. Insofar as BB has presented much beyond this, I don’t find any of that very convincing, either. Many of BB’s objections turn on misleading characterizations and framings of antirealist commitments. Once these are disambiguated, a clearer picture of the dialectic emerges, one that reveals, I believe, that BB has very little to say in favor of realism or against antirealism. Antirealists can easily rebut everything of substance BB has to say here, what little there is.