David Friedman on Moral Realism
Some time ago, David Friedman wrote about moral realism. Friedman makes the following claim:
It is only a slight exaggeration to say that almost everyone believes in moral realism and almost everyone, at least in the circles I usually move in, denies believing in it. Everyone, with the possible exception of psychopaths, feels that some things — stealing from a friend who trusts you, for example — are wrong, not just illegal or imprudent but wrong. And yet most people other than the seriously religious, at least in the academic world where I have spent most of my life, would deny the existence of moral facts, interpret them, if pushed, as preferences, like the preference for chocolate ice cream over vanilla, or as social rules we have been trained into.
This post makes two questionable claims:
Most people believe in moral realism
Only psychopaths could actually be moral antirealists, with the implication that self-proclaimed moral antirealists aren’t really antirealists (this, at least, is how I’ve interpreted the remarks on psychopathy in the context of the full post).
The first claim is unsubstantiated and probably false. Whether most people believe in moral realism is an empirical claim, and it is not supported by available data.
The second claim is unsubstantiated, probably false, and strikes me as very implausible. Having moral feelings and acting on those feelings does not require belief (implicit or otherwise) in moral realism. Friedman presents no compelling reason to think that self-professed moral antirealists are generally engaged in some kind of performative contradiction that reveals that they’re really moral realists, nor that the only sorts of people that would reject moral realism are psychopaths.
I elaborate on both of these claims in this article, and respond to some of Friedman’s other remarks.
There is very little that is new in these objections. Those of you who are already familiar with the critiques I raise will get little out of reading this. So I will first offer a brief summary, since I delve into many of the same points in more detail below. If nothing in the summary seems new to you, I would encourage you to skip over this post. I wrote it largely in the hopes that Friedman or others unfamiliar with my objections would have a single place where all of the types of critiques I raise are leveled directly at Friedman’s post.
Summary
Friedman suggests in numerous ways that most people believe in realism, and have phenomenology and intuitions that support moral realism (e.g., feeling as though they “perceive” moral facts). As I have argued many times, this is an empirical claim, there is little evidence to support it, and what evidence we do have suggests this isn’t true.
I don’t know what Friedman’s stance is, but some philosophers are dismissive of the importance of empirical data for addressing such questions. Those philosophers, at least, are mistaken. I reiterate several reasons why armchair judgments are not adequate for addressing these questions, largely centering on the fact that most people’s ways of thinking about these issues are extremely culturally parochial and they typically fail to appreciate the degree to which human psychology can vary.
Friedman suggests that most people who claim to be moral antirealists aren’t actually moral antirealists, and that possibly only psychopaths lack the sorts of intuitions or phenomenology that support intuitionistic approaches to realism. This suggests a very unflattering picture of self-professed antirealists: either we are mistaken about our own beliefs and act in ways inconsistent with them, or we’re psychopaths. This implied dichotomy is not well-supported in Friedman’s article. Once again, these are empirical claims, they aren’t supported by available empirical data, and they are extremely implausible.
Friedman presents a Pascalian wager. It’s unclear what exactly the wager is, but it doesn’t look very promising. I’d like to hear it fleshed out more, but I am confident I’d have objections to it.
Friedman endorses ethical intuitionism. Once again, I doubt most people actually have realist intuitions of the sort Huemer and others report having. There is some discussion of a faculty for “perceiving” moral facts. Such language is dubious. If it’s intended to closely parallel actual sensory faculties, and is subject to empirical investigation, I doubt we’d find evidence in support of such a faculty. If it is supposed to be some empirically unassailable analog to actual perception, and relies on some kind of metaphor for real perception, I’m not sure what the metaphor is or how this faculty is supposed to work. Friedman appeals primarily to consistency in our moral phenomenology as evidence of the faculty’s reliability: people supposedly not only “perceive” moral facts, but perceive the same moral facts. According to Friedman, this suggests they’re really perceiving moral facts.
I challenge this on numerous grounds: I don’t think most people feel like they’re perceiving moral facts in the first place, I think there’s a lot more moral disagreement than Friedman seems to think there is, the degree to which people share similar moral attitudes is an empirical question anyway, and anecdotal and personal experience is a poor guide because the people Friedman and other intuitionists are familiar with represent a very small and psychologically unrepresentative sample of humanity from which generalizations are probably not justified. Even if, aside from all of these objections, most people did report realist phenomenology and did share similar moral values, this still wouldn’t be good evidence of realism, since there are better explanations for why people would think this way than realism being true.
Alright, that wraps up the short version of my objections. For most of you, that should be enough. For the rest of you, consider what follows to be extensive footnotes.
1.0 There is no good evidence most people believe in moral realism
People often make claims that would best be addressed by adequate empirical data in the almost total absence of any actual empirical data to support those claims. David Friedman begins his remarks on moral realism with the inauspicious proclamation that:
It is only a slight exaggeration to say that almost everyone believes in moral realism and almost everyone, at least in the circles I usually move in, denies believing in it.
The proportion of people who believe in moral realism is an empirical question about human psychology. I have argued extensively that there is little empirical evidence to support the claim that any significant number of people are moral realists, much less that almost everyone is. Friedman later makes a similar remark. In arguing for moral realism, Friedman adds:
No god is required for the argument, merely the nature of right and wrong, good and evil, as most human beings intuit them.
Here, Friedman appears to clearly express the view that “most human beings” intuit a realist conception of morality.
First, as I argue here, we are not in a good epistemic position to generalize about what the entirety of humanity believes from the armchair. Our personal experiences are nowhere near adequate to make claims about what most people believe.
I have also addressed the specific empirical claim that most people are moral realists numerous times, including the following posts:
Critiques of J. P. Andrew’s claim that most people are moral realists:
Critiques of Dominik’s claim that most people are moral realists:
Critiques of Ben Watkins’s claim that most people are moral realists:
Critiques of Mike Huemer’s claim that most people are moral realists:
…the tl;dr: available empirical evidence appears to show that many people endorse moral antirealism. However, these studies have extremely poor generalizability, meaning that we do not know how well their results generalize to populations that didn’t participate in them, which is to say most of the world’s population.
These studies also suffer serious methodological shortcomings, and may not provide valid measures. Thus, it may be that we don’t have an accurate estimate of the proportion of moral realists in any population of nonphilosophers. However, the most methodologically rigorous studies consistently find very high rates of antirealist responses among nonphilosophers. Even the less rigorous studies find high rates of antirealist responses. This leaves us in the following situation:
We probably know very little about what proportion of people endorse moral realism
If we do know anything, the best available evidence suggests that a majority of college students and people on Amazon’s Mechanical Turk in the United States are moral antirealists, while rates of antirealism or antirealist responses are very high, or at least non-negligible, among numerous populations across cultures
No plausible reading of available empirical data would support the claim that almost everyone believes in moral realism.
Note that the populations where we find high rates of antirealist responses are typically the same populations that people who claim most people are moral realists tend to come from. If they are relying on personal experience and anecdotes, one would think their judgments about the populations they come from would be the most reliable. If they’re not, we have even less reason to think they’d be in a good position to make accurate judgments about populations they’re unfamiliar with. In any case, what we don’t have is a body of literature consisting of well-validated measures that consistently show that the vast majority of participants endorse moral realism.
Nevertheless, the claim that “almost everyone believes in moral realism” could be true. I want to highlight a remarkable conceit about such claims. How would Friedman, or anyone without extensive knowledge of human psychology, be in a position to make such a claim? One might have compelling evidence that people are predisposed via natural selection towards a belief in moral realism, or one might have compelling cross-cultural data that people in most societies around the world believe in moral realism.
Such data simply does not exist. Despite over twenty years of research on the psychology of metaethics, we still haven’t devised a single, well-established, valid measure for evaluating whether native English speakers in the United States who share the same cultural and linguistic background as most researchers conducting these studies are moral realists. On the contrary, as I have pointed out ad nauseam in the blog posts linked above, as researchers have identified methodological shortcomings in prior research and done their best to mitigate these shortcomings, studies have begun to find extremely high and consistent rates of people endorsing moral antirealism.
Simply put, there is no good reason to think most people believe in moral realism.
I wrote my dissertation specifically on the question of whether nonphilosophers are moral realists. And after all that, while I am confident in my conclusions, they are still tentative, and there is still a lifetime’s worth of work to do to support (or discover that I am mistaken about) folk metaethical indeterminacy (the view that most nonphilosophers have no particular metaethical stances or commitments). Figuring out what everyone thinks is really difficult. I don’t understand why people are comfortable making sweeping claims about how everyone thinks.
2.0 Who needs data?
Philosophers and other academics may scoff at the necessity of such data. Why bother? We don’t need to do rigorous cross-cultural studies to determine whether almost everyone likes sex or believes the moon exists. Some facts are obvious, and, where they might not be immediately obvious, can be gleaned from a bit of experience. Perhaps one of those facts is that people are moral realists.
Is it, though? Why are people so confident about this? I suspect many philosophers and those in their orbit who opine on such matters suffer from a kind of culturally parochial nearsightedness: they consult how they and the people around them seem to think, or be, or act, or speak, or expand their scope to a crude gestalt of their surrounding culture, and generalize from these anecdotes and crude impressions to everyone. It would be a profound understatement to point out that generalizing from unsystematic impressions to the entirety of humanity is not a reliable way to make accurate inferences about how humans think. Psychologists struggle to draw confident conclusions even with decades of cross-cultural data.
Such judgments are sometimes justified: most of the people around me don’t like stepping on sharp objects, or being lied to. Most of them laugh at things they find funny, avoid things they don’t like, and want to be liked by others. I am confident most other populations are similar in all of these respects.
I am confident that one can make universal inferences in all of these cases because there are good reasons for thinking the causal factors associated with these behaviors are due to shared, fundamental features of human experience: there are good biological and evolutionary reasons why people would avoid pain; pain avoidance is found in nonhuman animals and is generally critical to survival. I don’t simply infer that, because I dislike pain and the people around me dislike pain, almost everyone dislikes pain. Rather, I also have good reasons to think there is a reason why people dislike pain, and this reason almost certainly applies to the circumstances every other human population experiences just as much as my own circumstances. Such inferences are thus not simply crude inductive inferences of the form “I’ve seen ten apples, and they were all red, so apples are probably typically red.” Rather, they are much closer to having a compelling model of gravity, and understanding that there is no good reason to think people living elsewhere wouldn’t be subject to the same gravitational forces you are subject to.
Is morality like this? Well, I don’t know! That depends on what we mean by “morality.” Morality is a notoriously slippery notion: simultaneously so familiar that its meaning may feel obvious (to me and probably to you, though perhaps not to other people), and so slippery that it has been highly resistant to any attempts to offer a reasonable empirical account that conceptually unifies moral judgments (see e.g., Sinnott-Armstrong & Wheatley, 2012; 2014) or to devise any reasonably well-specified characterization that garners broad consensus and isn’t objectionably broad or objectionably narrow in scope.
I endorse Stich and Machery’s views on the matter: like Stich (2018), I don’t even think there is a moral domain, and like Machery, I believe morality is a historical invention. Thinking in distinctively “moral” terms only arose in some cultures and, I believe, simply isn’t and wasn’t present in others, much as distinctively European notions of chivalry are cultural inventions that did not arise in all historical populations.
To be clear: I am not proposing the absurd notion that some cultures don’t have norms, or empathy, or institutions for distributing rewards and punishments; they have all of these and more. What I specifically deny is that there is any philosophically defensible account, or defensibly universal psychological operationalization of a particular construct, that adequately characterizes the normative judgments, attitudes, practices, and institutions of all present and historical human populations in such a way as to point to a universally shared conception of a distinctive moral domain. In short, I suspect that morality is culturally parochial, and that moral psychology is not a pancultural feature of human cognition.
Suppose I’m wrong. Suppose all human populations share an innate predisposition for thinking in distinctively moral terms. I don’t think the evidence supports this claim (see e.g., Machery & Mallon, 2010), but maybe future studies will vindicate such claims (I’m happy to bet against this, though). That would certainly increase the plausibility that everyone believes in moral realism. After all, it’s hard to see how people would be realists about morality if they don’t think in moral terms in the first place. But is there any good reason to suppose that a distinctive belief that one’s moral judgments pick out stance-independent moral facts is a universal feature of human thought? If so, why? One can speculate from the armchair. Perhaps believing that these facts are “out there” and not expressions of our personal values or the standards of our culture motivates greater compliance with those norms, and perhaps those norms have adaptive advantages. Perhaps such beliefs are associated with an evolved predisposition for believing in God, and are part of a suite of cognitive predispositions that reinforce prosocial behavior.
These are definite possibilities. But they are possibilities that would be best established by taking empirical evidence seriously, and this is no easy task. Such a task would involve weaving together a tapestry of findings that span numerous disciplines: many areas of psychology (e.g. cognitive, social, personality, etc.), neuroscience, behavioral genetics, anthropology, sociology, linguistics, history, paleontology, and philosophy. Reaching firm conclusions on such matters is a lifetime’s work, perhaps more, of comprehensive, integrative, and careful analysis. There are people who have made forays into human thinking of this kind, such as Sterelny (2012) in the wonderful book The Evolved Apprentice. Such efforts are far from the norm, and, in any case, I don’t know of any efforts that would corroborate the claim that almost everyone believes in moral realism.
I have dwelled on the claim that most people believe in moral realism enough. My plea is not simply that people reconsider whether a claim like “most people believe in moral realism” is true. My plea is for people to recognize that this is an empirical claim, that it can only be established by gathering adequate empirical data, and that doing so is incredibly difficult.
3.0 Denying moral realism
Let us return to the second part of Friedman’s claim: although Friedman claims that almost everyone believes in moral realism, he also states that many people in the circles he moves in would deny that they believe in moral realism. Here is the full remark:
It is only a slight exaggeration to say that almost everyone believes in moral realism and almost everyone, at least in the circles I usually move in, denies believing in it. Everyone, with the possible exception of psychopaths, feels that some things — stealing from a friend who trusts you, for example — are wrong, not just illegal or imprudent but wrong. And yet most people other than the seriously religious, at least in the academic world where I have spent most of my life, would deny the existence of moral facts, interpret them, if pushed, as preferences, like the preference for chocolate ice cream over vanilla, or as social rules we have been trained into.
I’m a moral antirealist. I believe some things “are wrong,” and that they are “not just illegal or imprudent but wrong.” This does not make me a moral realist. What I mean when I say that I believe some things are wrong varies based on context, but I usually mean some combination of: I disapprove of them, I don’t want them to happen, I want people who do those things to be punished, the action in question makes me upset or outraged or disgusted, or the action is inconsistent with the kind of world I want to live in and the kind of events I want to take place.
What I am not doing is ascribing moral properties to anything, or assigning intrinsic value to anything, or referring to stance-independent moral facts, or in any way pointing to some realm of facts or properties that could reasonably constitute the moral facts one presumably must believe in in order to be a “moral realist.”
In short, moral antirealists can believe that some things are “wrong.” Moral realists don’t have a monopoly on normative language, and nothing about being a moral antirealist bars me from thinking that some actions are “wrong” any more than being a gastronomic antirealist (someone that denies there are objective facts about what food is good or bad) prohibits me from saying that some foods are “gross.” Normative language is straightforwardly consistent with antirealism.
Friedman’s language in this passage is thus unhelpfully ambiguous. As I have pointed out many times before, people who discuss metaethics often shift between language that explicitly includes metaethical modifiers like “stance-independent” or “objective,” as in “stance-independently wrong,” and language that excludes such modifiers, as in “wrong.” When people employ the latter wording, their remarks become ambiguous: does “wrong” mean “stance-independently wrong” or not? If it does, then sure, I don’t think anything is “wrong,” but only in the sense that I don’t think anything is “stance-independently wrong”; that doesn’t mean I don’t think anything is wrong in any other sense.
If you’re discussing metaethics, why drop the clarificatory label of “stance-independent” or the infamously confusing “objective”? The result is that we are left with superficially normative moral language that is consistent with both moral realism and moral antirealism. Such ambiguity can allow moral realists to equivocate on the implied meaning of the terms, taking advantage of normative entanglement to imply moral antirealists are evil or, in this case, “psychopaths,” if they don’t share the moral realist’s metaethical standards.
Perhaps part of the issue comes from the quote Friedman begins with:
Moral realism is the view that there are facts of the matter about which actions are right and which wrong, and about which things are good and which bad. (Routledge Encyclopedia of Philosophy)
This quote (the author is Jonathan Dancy) is unfortunate because it cuts off the rest of what follows, which fleshes out the definition in a way that substantively elaborates on this remark taken in isolation.
One minor concern is the turn of phrase “fact of the matter.” What is a fact “of the matter”? Is something only a fact of the matter if it is an irreducibly normative stance-independent fact? Or are stance-independent descriptive moral facts, as one might find in a moral naturalist’s account, consistent with the claim? What about constructivist accounts? What about relativist accounts which hold that moral claims depend on the stances of individuals or cultures?
Consider the latter case. If you’re a subjectivist, and believe moral claims are true or false relative to the standards of individuals, then a claim like “stealing is wrong” would mean something like “stealing is inconsistent with my moral standards.” Such statements are descriptive claims about the standards of individuals, and when an individual makes such a claim, there is a stance-independent fact about whether or not they do, in fact, hold the moral standard in question. For instance, if Alex believes that stealing is morally wrong, and Alex says “Stealing is morally wrong,” and by this Alex means “stealing is inconsistent with my moral standards,” then this claim is true. Is it a fact of the matter that it’s true? I don’t know, because fact of the matter isn’t a technical term. I’d say it’s a fact of the matter, if someone didn’t stipulate a realist-only reading of this turn of phrase. After all, it is a stance-independent fact that Alex’s assertions about her own moral standard are, by stipulation, true.
Fortunately, if you read the rest of the passage, Dancy elaborates in a way that helpfully suggests that moral realism may not reflect so much a categorical distinction, such that one either is or isn’t a moral realist, but that one may be more or less of a moral realist depending on how many of a cluster of potential positions they endorse, the totality of which would make them a “full-blown” moral realist.
Moral realism is the view that there are facts of the matter about which actions are right and which wrong, and about which things are good and which bad. But behind this bald statement lies a wealth of complexity. If one is a full-blown moral realist, one probably accepts the following three claims.
Dancy goes on to highlight three claims that, taken together, would make one a more robust realist:
1. Moral claims are “special” and distinct from other facts (realists differ in the ways in which moral facts are special and distinct, so while this is vague, it’s a preliminary remark, and I think it’s fine)
2. Moral facts are “independent of any beliefs or thoughts we might have about them” (i.e., stance-independence)
3. We can make mistakes about what is right or wrong
This is a summary, so of course the nuances of these remarks aren’t cashed out in full. Taken at face value, I would actually agree that there’s something special and distinct about moral facts, and I think there are ways in which a person can make mistakes about what is right or wrong that are consistent with antirealism; I’d only more straightforwardly reject (2). Am I most of the way to moral realism? Hardly. Disambiguation of just what is meant by (1)-(3) would reveal that I am very far from a realist.
Note, for instance, (3). If you’re a constructivist or a cultural relativist, you could just be wrong about what moral standards result from a given constructivist procedure, or you could just be wrong about what your culture’s standards are. Being mistaken about what is morally right or wrong is straightforwardly consistent with various antirealist positions, so while there are construals of being capable of making mistakes about one’s moral values that are more consistent (or only consistent) with realism, antirealism has the resources to easily accommodate (3). Classical noncognitivists would have some trouble, though.
What’s important to highlight is that a lot of presuppositions about what’s meant by “wrong” can fly under the radar. A realist, in using the term “wrong,” may mean to say:
“Lying is morally wrong in a sense that is special and distinct, stance-independently true, and one can be mistaken about whether lying is right or wrong.”
That’s not what *I* mean by “wrong.” So what am I committing myself to if I agree that “[...] that some things — stealing from a friend who trusts you, for example — are wrong, not just illegal or imprudent but wrong”? I only deny that anything is stance-independently wrong; but I don’t think “wrong” just means “stance-independently wrong,” so it’s not clear why, as an antirealist, I would disagree with this claim, unless it were explicitly disambiguated in a way that made clear that “wrong” = “stance-independently wrong.”
In other words, I think that some things (including the example) are wrong, just not in the full-blown realist sense. Proponents of moral realism should be more explicit about which metaethical presuppositions they have in mind when invoking what appear to be normative moral terms like “wrong.” Judging that some things are “wrong” is consistent with many forms of moral antirealism, including my own views. It’s misleading to imply (or to state, as some do) that antirealists don’t think some things are morally wrong.
I know I’ve said this many times, but I am going to keep making this point as long as people keep failing to disambiguate metaethical claims and normative claims. This isn’t just idle pedantry: the failure to do so leads people who make such claims to mistakenly think antirealism is somehow associated with villainy and evil, and to mislead their audiences into thinking this as well. The result isn’t simply that realists critical of views like mine are mistaken: their mistakes literally depict me as an evil psychopath. I feel more than justified in taking issue with this.
This lack of clarity makes Friedman’s remarks difficult to evaluate, but it still seems clear that Friedman believes that (a) people believe in some form of moral realism but (b) many people deny this. I’m not clear on what exactly they’re supposed to be denying, but consider what Friedman goes on to say, which includes a partial positive characterization of what those in denial of their belief in moral realism purportedly do think:
And yet most people other than the seriously religious, at least in the academic world where I have spent most of my life, would deny the existence of moral facts, interpret them, if pushed, as preferences, like the preference for chocolate ice cream over vanilla, or as social rules we have been trained into.
I question whether Friedman really has good evidence or reason to believe this, but let’s suppose it’s true that most of them would endorse something like this. If so, excellent. That’s pretty much what I think, too. If they (and perhaps I) are really moral realists even though we’d say this, this would suggest that Friedman is putting forward the hypothesis that many people are crypto-realists: they purport to be moral antirealists of some kind, but are, in fact, wrong (perhaps they are lying, perhaps they’re confused, Friedman doesn’t explicitly say).
This is a very strong claim. And one would hope there’d be good evidence for it. Unfortunately, I don’t think Friedman provides any. Here we have Friedman claiming most people are moral realists, but Friedman’s interactions with others appear to indicate that many (perhaps most) people would claim to be antirealists. I already think the claim that almost everyone believes in moral realism is a remarkably bold claim to make. It is far bolder, and even a bit strange, to me, for someone to maintain such a claim when they themselves report evidence that would (all else being equal) suggest otherwise. Perhaps if a bunch of people tell you (or would tell you, if asked) that they’re moral antirealists…it’s because they are moral antirealists. I would at least take that to be my starting assumption, one to be overridden only if there are good grounds for doing so. I don’t think there are, and I don’t think Friedman provides any. What does Friedman say on the matter?
Friedman says that the inconsistency (presumably that people’s judgments or attitudes are misaligned with their purported moral antirealism) is illustrated by an example where people claim that you should not impose your moral standards on people who have moral practices contrary to your own. For instance, there may be cultures where under certain circumstances it is morally appropriate to kill one’s family members. Friedman states:
To which the obvious response is “why shouldn’t I stop him? According to my moral system, killing your father is wrong and should be prevented.”
There is a way in which some moral antirealists may find themselves in an inconsistent position. They might think:
(a) There are no objective moral facts
(b) It is morally wrong to impose your moral standards on members of cultures with different moral practices
First, there is no necessary inconsistency here. This would only be inconsistent if someone held that it is objectively morally wrong to impose your moral standards on members of cultures with different moral practices. But the relativist isn’t obligated to endorse such a view. Nor, for that matter, is the antirealist obligated to endorse any form of relativism at all!
Furthermore, there’s no inconsistency in holding that there are no objective moral facts while holding the subjective moral standard that one ought not impose one’s moral values on members of cultures with different moral practices. Once again, we have an ambiguity with (b): is this both a metaethical claim and a normative claim, or just a normative claim? Inconsistency only arises if it’s the former; there is no necessary inconsistency between antirealism, including forms of cultural relativism (e.g., appraiser relativism), and holding the view that it’s permissible (or impermissible), relative to one’s own standards or the standards of one’s culture, to impose one’s moral standards on members of cultures with different moral practices.
In addition, constructivist antirealists, error theorists, noncognitivists, and quietists are going to have little issue avoiding inconsistencies like these. So there are plenty of options for moral antirealists that wouldn’t plausibly saddle them with this supposed inconsistency. If people claim to be antirealists, well, what kind of antirealists do they claim to be? That should matter when claiming these people have inconsistent views. If, as Friedman’s remarks suggest, they hold that morality is a matter of preference, there is no necessary inconsistency between rejecting moral realism and having the preference that we not impose our values on members of other cultures.
I grant that some people may be confused, and may simultaneously claim that there are no stance-independent moral facts while also thinking that there are at least some stance-independent moral facts, but how many moral antirealists hold contradictory views like this? I have no idea. Does Friedman? If so, based on what arguments or evidence? It’s certainly not presented in the article.
Weirdly, the rest of Friedman’s post doesn’t seem to do much to support the claim that almost everyone believes in moral realism. Instead, Friedman offers a Pascalian wager in favor of moral realism. Friedman says:
One explanation of our moral feelings is that right and wrong are real and our beliefs about right and wrong at least roughly correct. The other is that morality is a mistake; we have been brainwashed by our culture, or perhaps our genes, into feeling the way we do, but there is really no good reason why one ought to feed the hungry or ought not to torture small children.
Here we have the underspecified use of the term “our.” Who is “our”? Which people, specifically? All of them? Most people across cultures? It’s frustrating that people will make claims like this without being clear on what they mean.
My feelings, at least, aren’t best explained by moral realism. I don’t have realist phenomenology or intuitions or judgments or beliefs in the first place. Maybe lots of people are like me. I don’t know. How would we know, unless we ran enough studies to find out?
Next, we have the claim that right and wrong are “real.” This is another one of those strange tragedies about the realist/antirealist debate: antirealism is not the view that morality isn’t real. It’s the view that there are no stance-independent moral facts. I don’t think morality “isn’t real,” I just don’t think it involves stance-independent moral facts.
If by “real” Friedman just means “stance-independent,” this would be helpful to clarify. Moral realists don’t have a monopoly on morality being “real,” the name of the position notwithstanding: realism isn’t the view that morality is “real.”
I think gastronomy is “real”. That doesn’t mean I think red wines taste better than white wines regardless of my or anyone else’s preference. Using “real” to refer to a distinctively realist characterization is unclear at best, and in many cases will be unnecessarily misleading.
4.0 Moral antirealism and psychopathy
I want to turn to another feature of Friedman’s remarks, quoted earlier, emphasis added:
It is only a slight exaggeration to say that almost everyone believes in moral realism and almost everyone, at least in the circles I usually move in, denies believing in it. Everyone, with the possible exception of psychopaths, feels that some things — stealing from a friend who trusts you, for example — are wrong, not just illegal or imprudent but wrong. And yet most people other than the seriously religious, at least in the academic world where I have spent most of my life, would deny the existence of moral facts, interpret them, if pushed, as preferences, like the preference for chocolate ice cream over vanilla, or as social rules we have been trained into.
I think the context implies that by “wrong” Friedman means “stance-independently wrong,” so the claim would amount to something like “Everyone except possibly psychopaths is a moral realist.” If “wrong” doesn’t mean stance-independently wrong here, then it’s not clear how the remark would connect to the surrounding remarks: an antirealist can (and I do) feel that things are wrong, just not stance-independently wrong. If that possibility is on the table, then this remark wouldn’t make much sense given the sentences it’s sandwiched between.
The most troubling aspect of this remark is that it appears to indicate that moral antirealists are psychopaths, that I and others are, in fact, psychopaths. But given the claim that almost everyone believes in moral realism, an alternative interpretation (suggested by people in my Discord server and on Facebook) would be that since I and other self-professed moral antirealists aren’t psychopaths, we’re not actually moral antirealists. I suspect something like this is the most plausible interpretation: (a) most people believe in moral realism, (b) many people in certain circles profess to be moral antirealists, (c) most of them probably aren’t actually antirealists, since they think and do things inconsistent with a genuine commitment to moral antirealism, and (d) maybe there are genuine moral antirealists, but they’d be psychopaths.
If so, this would be an instance of a common objection to moral antirealism: that its proponents “act like” moral realists, that the way we think, speak, judge, and behave belies our professed antirealism and reveals a commitment to moral realism, even if we claim not to be moral realists. The implication that moral antirealists are psychopaths would be pretty bad. But the implication that since we’re not psychopaths, we must be confused or mistaken about our own metaethical views is pretty objectionable, too. Before getting to that, let’s first consider the implied association between moral antirealism and psychopathy.
Part of the issue is that the relation between psychopathy and moral antirealism isn’t made explicit in that sentence. It is the surrounding sentences that support this interpretation. Here’s why. The first sentence indicates that people believe in moral realism even if they deny it, then the second pivots to a seemingly normative moral claim…that everyone “feels that some things [...] are wrong.” Then the very next sentence indicates that most people (other than the seriously religious) would deny the “existence of moral facts,” and interpret them instead as preferences. This seems to indicate that what the psychopath denies is that there are “moral facts.” Since these remarks immediately follow a definition of moral realism as “the view that there are facts of the matter about which actions are right and which wrong, and about which things are good and which bad,” this lends itself to the natural interpretation that everyone believes in moral realism, and that the only possible exceptions are psychopaths.
This language may be hyperbolic, and isn’t intended to include academics who endorse moral antirealism. That is, perhaps most of us aren’t really moral antirealists; we’re just moral realists who are mistaken about our own views. But to the extent that there is some allusion to the possibility that we’d have to be psychopaths, and perhaps are psychopaths if we’re genuinely committed to moral antirealism, this is a fairly objectionable remark. Incautiously lumping us in with the likes of Jeffrey Dahmer is a pretty negligent way of addressing those with contrary views. If it isn’t hyperbolic, and is meant literally, then it’s absurd. You do not have to be a psychopath to be a moral antirealist.
According to the 2020 PhilPapers survey, 26.1% of respondents endorsed moral antirealism. Are they all mistaken about their own views? How many are psychopaths?
One difficulty with interpreting these remarks is that Friedman states that non-psychopaths feel some things are (stance-independently?) wrong. One can feel a certain way but ultimately reject that feeling. I might feel that something supernatural happened, but reject this as a silly superstitious sentiment. Just so, perhaps moral antirealists generally feel moral realism is true, but reject it on intellectual grounds. It seems plausible to me that many, perhaps most, would say so. But it’s unlikely they’d all say so. So what proportion of psychopaths among professional philosophers are we talking about here?
Critically, it also doesn’t feel that way to me. Yet I am not a psychopath. Note that given the specific wording Friedman chose, not only would I have to be wrong about my professed belief in moral antirealism to not be a psychopath, I’d also have to be wrong about how things feel to me. If Friedman would in fact maintain that moral antirealists like me are really crypto-realists, what about my phenomenology? Am I even mistaken about that? If so, it’d be a remarkably strong claim to suggest that I’m even mistaken about my own phenomenology. Not impossible: I myself think people are mistaken about their own phenomenology all the time. I’d just want to know, if Friedman would think I’m mistaken about my phenomenology, why he thinks this.
The real problem with these remarks, however, is that there simply is no good reason to think that one could only be a moral antirealist if they were a psychopath in the first place. Psychopathy is a term for a cluster of psychological characteristics. Rather than offer you my own homebrewed characterization, I’ll cite an academic article:
Psychopathy is a neuropsychiatric disorder marked by deficient emotional responses, lack of empathy, and poor behavioral controls, commonly resulting in persistent antisocial deviance and criminal behavior. (Anderson & Kiehl, 2014)
Here is another account:
Psychopathy is a personality disorder characterized by a constellation of affective, interpersonal, lifestyle and antisocial features whose antecedents can be identified in a subgroup of young people showing severe antisocial behaviour. (De Brito et al., 2021)
The term “psychopathy” has a long history, and psychologists may eschew the label and endorsement of a distinctive phenomenon in favor of other labels and phenomena that capture elements commonly associated with psychopathy.
There may also be a distinction between colloquial usage of the term and more formal uses. In any case, in colloquial usage the term typically refers to a cluster of antisocial traits, including a callous disregard for other people’s welfare, extreme egocentrism, manipulativeness, Machiavellianism, poor impulse control, a propensity for violence, sadism, cruelty, and an inability to form close interpersonal relationships with others. In popular culture, psychopathy is associated with serial killers, cannibals, and perpetrators of the world’s worst atrocities. Calling someone a psychopath is a big deal. It’s a terrible label to ascribe to anyone without justification.
Is there any justification for associating moral antirealism with psychopathy? I don’t think so. To my knowledge, there is no evidence that people who don’t feel some things are (stance-independently) “wrong” have psychopathic tendencies like being manipulative, callous, lacking in empathy, egocentric, Machiavellian, sadistic, violent, and so on. It might be that psychopaths would tend to endorse moral antirealism, but that wouldn’t establish that antirealists tend to be (or are always) psychopaths. In other words, even if every psychopath were a moral antirealist, that wouldn’t mean that being a moral antirealist entails being a psychopath. Antirealism may even be positively correlated with psychopathy, but it’s not clear why that would necessarily be objectionable. If we found out that all psychopaths like mashed potatoes, would that mean eating mashed potatoes is an act of villainy?
5.0 Are moral antirealists crypto-realists?
The more interesting claim isn’t the suggestion that if you were really a moral antirealist, you’d be a psychopath. It’s the suggestion that since we’re not psychopaths, we must not be moral antirealists after all. I may claim to be a moral antirealist…but I’m not. Let’s call this the “crypto-realism hypothesis”: the hypothesis that a large proportion (perhaps most, or even nearly all) of people who claim to be moral antirealists are not, in fact, moral antirealists. For one reason or another, what these people claim to believe is not consistent with their actual beliefs and behavior. Here are a few hypotheses as to why this apparent inconsistency may arise:
1. They are lying. It could be that they genuinely, consciously endorse moral realism, but are lying about it. Friedman doesn’t raise this possibility, so we’ll set it aside.
2. They are confused about the terms. It’s also possible that people are moral realists but are confused about what realism and antirealism are, and simply don’t understand the relevant terminology. Friedman doesn’t mention this, either.
3. They are unaware of their own realist commitments. Another possibility is that people’s genuine commitments are revealed in the way they speak and act, and possibly in features of their thinking that they somehow fail to notice when they endorse moral antirealism. Such people claim to be moral antirealists, but what they say, think, and do belies the truth: they are committed to moral realism, whether they recognize this or not.
These aren’t mutually exclusive, but I suspect (3) is the most plausible candidate.
I’ve said a lot, but I don’t have much to say about this. Friedman simply does not present any good arguments or evidence that would suggest people who claim to be moral antirealists are actually realists. Friedman has not convincingly shown that such people tend to do anything consistent with such claims, and says very little that would support the conclusion that people who claim to be moral antirealists are mistaken about their own beliefs or commitments.
We should generally take people at their word, unless we have good reason not to. If someone says they’re a moral antirealist, it’s reasonable to suppose that they are. I grant that if that person said and did things that were genuinely inconsistent with a commitment to antirealism, and did so on a regular basis, it would be reasonable to question whether their endorsement of antirealism were a genuine reflection of their beliefs. However, there are many judgments and actions that realists sometimes depict as inconsistent with antirealism: imposing one’s moral standards on others, having moral standards at all, having strong moral reactions to things, and so on. None of this is inconsistent with moral antirealism. At all.
There are tensions in a certain kind of naïve, flat-footed agent relativism. Agent relativism is the view that whether an action is right or wrong depends on the moral standards of the person performing the action (or for agent cultural relativism on the standards of that person’s culture). If Alex thinks it’s okay to steal, then it is okay for Alex to steal. Such a view would require the rest of us to consider it permissible for Alex to steal if she wished. Yet, so the reasoning goes, the agent relativist will tend, in practice, to object if Alex were to attempt to steal from them. This reveals an inconsistency: on the one hand, the agent relativist claims to be committed to the view that it’s morally permissible for Alex to steal. On the other, they object to Alex stealing. Aha! Their commitment to relativism is inconsistent with their attitudes and actions.
This kind of reasoning seems to underwrite much of what people have in mind when they say that people don’t “act like” antirealists.
I am tempted to say “there is no nice way to put this” but there probably is. I’m going to put this not-so-nicely anyway: the insistence that antirealists “act like” moral realists if they engage in moral judgment, are passionate about their moral values, and so on, is fantastically stupid.
It shouldn’t take more than a moment’s reflection to realize that moral antirealism doesn’t require you to think that if someone else is totally okay with stealing, that you must be totally okay with them stealing. You can recognize that other people are okay with stealing while still being against those people stealing yourself. You don’t have to think it’s stance-independently wrong for other people to steal to think it’s wrong according to your own moral standards. And there is no magic rule that bars moral antirealists from objecting to other people lying or stealing and taking actions to stop them from doing so. Moral antirealists have no (stance-independent) moral obligation to respect or care about other people’s views about what is morally right or wrong. They might feel they have a moral obligation to respect other people’s moral beliefs, but this is not logically entailed by moral antirealism.
6.0 Pascalian argument for moral realism
Friedman also presents a version of Pascal’s Wager in favor of moral realism. Friedman remarks:
One explanation of our moral feelings is that right and wrong are real and our beliefs about right and wrong at least roughly correct. The other is that morality is a mistake; we have been brainwashed by our culture, or perhaps our genes, into feeling the way we do, but there is really no good reason why one ought to feed the hungry or ought not to torture small children.
“The other” implies that there are only two explanations. This isn’t true. First, this dichotomy turns on the presumption that “our” moral feelings are explained by right and wrong being “real” and our beliefs about them are “roughly correct.”
There are a few of the standard bad philosophy tropes here:
The presumption that people are moral realists.
The incautious use of “our.” People will just say “we” or “our” without specifying who they’re referring to.
The suggestion that realists think morality is “real” and antirealists don’t. This isn’t true.
Friedman then presents what appears to be a kind of Pascalian wager in favor of moral realism:
If morality is real and you act as if it were not, you will do bad things — and if morality is real you ought not to do bad things. If morality is an illusion and you act as if it were not you may miss the opportunity to commit a few pleasurable wrongs but since morality correlates tolerably, although not perfectly, with rational self interest, the cost is unlikely to be large. It follows that if you are uncertain which of the two explanations is correct you ought to act as if the first is.
First, let’s focus on this remark:
If morality is real and you act as if it were not, you will do bad things
If by “real” Friedman is referencing stance-independent moral facts, why should we think that this is true? Let’s suppose moral realism is true, but you don’t think that it is. Why would this lead you to do bad things? If you act in accordance with your goals and values, those goals and values could be aligned with the stance-independent moral facts. For instance, suppose realism is true, and that realists are generally correct about what the moral facts are. As a result, the following claims are true:
It’s stance-independently wrong to torture babies
It’s stance-independently wrong to steal just for fun
It’s stance-independently wrong to set cats on fire
…and so on. I am not aware of any studies that show antirealists are out torturing babies or setting cats on fire. Friedman or others might offer the following explanation:
These people are not actually moral antirealists. They’re committed to moral realism and believe these actions are wrong, even if they say otherwise.
I have a simpler explanation:
They aren’t doing these things because they don’t want to do them.
Which sounds more plausible to you? Leave a comment and let me know.
The realist’s explanation here is a weird, roundabout one. On my view, people act in accordance with their goals and values. People don’t set cats on fire because they don’t want to. The alternative view is one where people think there is some set of facts about what you should or shouldn’t do that is true independent of whether you want to do those things…and then what? You don’t do the things you shouldn’t do independent of whether you wanted to do them because you happen to not want to do anything that you “shouldn’t do” independent of whether you wanted to do it? Motivation is going to have to enter the story here at some point.
On my view, we go directly from desire to action. The only plausible way I can imagine that someone would comply with what the stance-independent moral facts were would be if they had a desire, or motivation, to comply with whatever the stance-independent moral facts are. I say “plausible” and this is, of course, coming from my own perspective. I’m not supposing that realists must endorse Humean views or something similar to them. One crude comparison between my own view and an alternative model would look like this:
Simple quasi-Humean antirealist model:
Desire → Act in accordance with desire
Simple quasi-Humean realist model:
Desire to do whatever the stance-independent moral facts are → Belief that something is a stance-independent moral fact → Act in accordance with the belief, because one desires to act in accordance with the belief.
I don’t think realists must (or are even likely to) endorse a model like this. They might be externalists about motivation and think that the moral facts are the moral facts, regardless of whether there is any necessary connection between those facts and our motivation. Or they might have some alternative view of rationality or agency or human cognition, or have a more sophisticated view that doesn’t comport with some crude, linear model of human cognition. My point isn’t that this is what realists think. My point is to stress that the antirealist can offer a fairly straightforward explanation of human psychology that raises serious doubts about the plausibility of the notion that disbelief in moral realism would somehow lead people to “do bad things.”
Now I want to return to the rest of the remark. We’re next given these claims:
If morality is real and you act as if it were not, you will do bad things — and if morality is real you ought not to do bad things. If morality is an illusion and you act as if it were not you may miss the opportunity to commit a few pleasurable wrongs but since morality correlates tolerably, although not perfectly, with rational self interest, the cost is unlikely to be large. It follows that if you are uncertain which of the two explanations is correct you ought to act as if the first is.
We appear to have two options on this view:
Act like moral realism is true
Act like moral realism is false
If we act like realism is true but it isn’t, we lose out on whatever enjoyable things we might have wanted to do but abstained from out of our commitment to realism. The cost of acting like moral realism is true would thus be low if it isn’t. Conversely, if moral realism is true but we act as though it isn’t, then the things we would have done are things we ought not to have done.
Somehow, Friedman thinks this tradeoff favors acting as if moral realism is true. I’m not super clear on what the argument is, but I suspect the issue is with this remark: “if morality is real you ought not to do bad things.”
This type of “ought” is a stance-independent ought. If moral realism is true, then there are facts about what you ought to do and what you ought not do. Regardless of whether moral realism is true, there are also facts about what you want to do, independent of whether or not those things are consistent with the stance-independent moral facts. Let’s call these the “stance-dependent” oughts, ought-nots, and so on. These roughly correspond to Friedman’s “rational self-interest.”
So the tradeoff appears to be this:
If realism is true, but you act like it isn’t, you increase your stance-dependent goods, but decrease your stance-independent goods. The increase in stance-dependent goods isn’t high, though, because most of our stance-dependent goods will align with the stance-independent goods.
This seems to treat stance-dependent and stance-independent goods as commensurable, and the presumption is that the tradeoff favors acting as if moral realism is true because you give up a small amount of stance-dependent goods for some presumably greater gain in stance-independent goods.
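To make the structure of the wager explicit, here is a minimal decision-matrix sketch of the tradeoff as I understand it. The payoff labels are my own, not Friedman’s, and the sketch presupposes exactly the commensurability I go on to question:

$$
\begin{array}{l|cc}
 & \text{realism true} & \text{realism false} \\
\hline
\text{act as if realism is true} & 0 & -c \\
\text{act as if realism is false} & -C & 0 \\
\end{array}
$$

Here $c$ is the (supposedly small) stance-dependent cost of forgoing a few pleasurable wrongs, and $C$ is the stance-independent cost of doing things one ought not to do. The wager only favors the first row if $c$ and $C$ can be weighed on a common scale and $C$ is taken to be much larger than $c$, which is precisely what the following questions probe.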
First, are they fungible? If so, what’s the exchange rate? How many units of rationally self-interested pleasure equal one unit of stance-independent good? How does one make these tradeoffs? Why should I care at all about stance-independent goods, even if there were such things? When I act, I act in accordance with my goals and values. I don’t care what the stance-independent moral facts are. Moral realists can insist as much as they like that such facts are facts about what I “should do” by definition.
Well, too bad. I won’t do what the stance-independent moral facts obligate me to do, unless I happen to already want to on stance-dependent grounds. What happens if I don’t comply with what I stance-independently ought to do?
As far as I can tell, absolutely nothing. There are no consequences. No substantive ones, anyway. I suppose people could correctly say that I am doing something I stance-independently ought not to, but why should I care about that? Insisting I “should care” by definition simply misses the point: I stance-independently should care, sure, but I don’t stance-dependently care about what I stance-independently “should” care about. And I act entirely and exclusively on the basis of what I stance-dependently care about. So unless the realist has some way of demonstrating that, as a matter of descriptive fact, I do care about what the stance-independent facts are, I simply won’t comply with them. The realist can call this “evil,” or “wrong,” or insist I “shouldn’t” do so as much as they like, and I simply won’t care.
A realist can insist that realism would nevertheless be true, under the circumstances. But if the best you could say of your moral theory is that it’s technically true, yet has absolutely no force whatsoever to compel people to comply with it, it’s a rather inert and worthless thing. And that’s just it: moral realism at best furnishes us with something inert and worthless. For me, that’s as good as saying it’s false, but even if one doesn’t want to dive into pragmatic waters, realism still strikes me as one of the most pointless philosophical views one could endorse.
In any case, if this is a Pascalian wager, it’s not a very clear one. I’ll cut my objections short, then, unless I encounter a more fleshed-out version of the wager.
7.0 Moral intuitionism
Friedman’s last section is on Hume’s law, but I want to focus on his endorsement of ethical intuitionism. Friedman states:
My argument starts with intuitionism, the philosophical position that holds that just as humans have senses such as sight and hearing that imperfectly sense physical facts so we have a moral sense that imperfectly senses moral facts.
Talk of senses and sensing moral facts has always struck me as highly suspicious. If the faculties in question are like our ordinary sensory faculties, one would expect them to be mediated by physiological processes: we have special organs for seeing, hearing, tasting, and so on, and scientists have uncovered the complicated physiological means by which these senses process external stimuli. Are our moral senses supposed to be like this? If so, claims about them fall squarely within the territory of empirical inquiry. I know of no good evidence to suggest people have a special moral faculty that allows them to sense moral facts.
If this isn’t what such a sense involves, then how does it work, and why should we believe we have such a sense? I don’t believe I sense moral facts, and I don’t think anyone else does, either.
One reason for thinking we have such a sense simply reiterates the claim that most people feel like they sense facts of this kind. Friedman states:
The argument for intuitionism, beyond the fact that it describes how most people feel about morality — that certain acts really are wicked — is consistency of perceptions.
How does Friedman know that this is how most people feel about morality? The other component of this claim is consistency:
If moral perceptions are similarly consistent that would be evidence that there is a moral reality out there which we are perceiving, just as the fact that multiple people report seeing the same thing is evidence, not proof but evidence, that that thing is really there.
The idea here seems to be something like this:
We seem to be perceiving stance-independent moral facts
If we all seem to be perceiving the same thing, this is evidence that there really is some stance-independent set of moral facts we’re all perceiving
We are perceiving roughly the same thing
This is evidence of moral realism, and evidence that we’re accurately perceiving some of the stance-independent moral facts
Shared moral values probably do provide some evidence of moral realism. However, it’s not very strong evidence. There are numerous alternative explanations for why we might all appear to perceive the same stance-independent moral facts.
But before we even get to those, the first and most serious problem with this line of reasoning is that it’s simply not true that “we” seem to be perceiving stance-independent moral facts. I grant that a few thousand academic philosophers claim to perceive such facts, and maybe many of the people around them claim to perceive those facts, but that’s hardly solid ground for generalizing to everyone else.
Second, we may question whether people who purport to “see” these stance-independent moral facts really do purport to see the same facts. It’s not so clear to me that they do. There’s quite a lot of moral disagreement, and I doubt all moral disagreement among philosophers is reducible to disagreements about nonmoral facts.
Third, the degree to which consistency is evidence turns in part on the degree to which people’s judgments are made independently of one another. Clearly, some people report having realist intuitions: explicit proponents of ethical intuitionism, like Mike Huemer and, presumably, Friedman, report having the sort of phenomenology that lends itself towards realism, and there are many others.
Suppose that, as I suspect, such intuitions are not acquired in virtue of some pancultural, shared faculty for detecting moral facts. Suppose instead that people acquire the illusion, or mistaken sense, of having such experiences as a result of being steeped in certain cultural, religious, and philosophical traditions: most notably the distinctive monotheistic religions of the Western world, the Western philosophical canon, and analytic philosophy in particular, a recent, idiosyncratic, Anglophone form of Western philosophy that is especially parochial, confined largely to one language and one cluster of intertwined cultures.
To massively simplify things, imagine that if you grow up in cultures with a significant Judeo-Christian influence, you are saturated in a tradition that depicts the world as split into the cosmic forces of good and evil, and that places a single God atop a throne from which they issue a host of laws. Rightness and wrongness, in such a world, come to be seen as cosmic, supernatural forces that transcend the material world and that dictate the fate and purpose of the universe. Generation after generation is exposed to these ideas. Then, only in the past few centuries, has the theistic grip on the human mind begun to ease. Secular moral realism emerged in force only in the last century. It’s that recent.
Strip away those theistic trappings of a Manichean struggle between light and darkness and of the moral law descending from Mount Sinai, and what are you left with? An immediate pivot towards subjectivism and constructivism and noncognitivism? For some of us: yes, apparently. But is it really such a stretch to imagine that, when you undercut people’s sense of a lawgiver, they’re still left with the sense that there’s a law out there?
My point here is that if particular cultural motifs cause people to hallucinate a moral reality “out there,” and most of the people who report realist phenomenology are steeped in those cultures, then they’re not independent witnesses all mutually corroborating the same account. That their judgments are consistent with one another should be no more surprising than if we discovered that in an isolated society with thousands of years of traditions that involve belief in ghosts and spirits, many people report, with near certainty, that they have witnessed ghosts and spirits. And, given their shared cultural heritage, would it come as any surprise if they tended to offer similar descriptions of how those spirits acted and what they looked like? Of course not.
Many of the rest of us wouldn’t take such testimony seriously, because we have good reason to think there are no ghosts or spirits, and in any case could readily attribute consistency in people’s reports to their shared culture and traditions.
So why can’t we do the same with ethical intuitionists who claim they can detect moral facts?
Well, suppose we found that, time after time, isolated populations all offered the same accounts of ghosts and spirits, even when we could confirm that they had no contact with one another. That would call for an explanation. I wouldn’t immediately jump to the conclusion that ghosts and spirits are probably real. Maybe some shared quirk of evolution led people to share an evolved predisposition for believing in ghosts and spirits. The point is, though, that before we even considered such hypotheses, we’d first have to actually establish that a bunch of independent societies all mutually converged on similar reports of the behavior and appearance of spirits.
So have moral realists actually established that a bunch of independent populations all mutually converged on similar phenomenological reports of stance-independent moral facts, and have they all tended to agree on the normative content of those moral facts (e.g., they all share the same moral values)? No. Not even close.
This is why the claim that most people are moral realists is actually important for the ethical intuitionist’s case. If it turns out ethical intuitionists are just weird outliers who don’t think like most of the rest of us, this undermines their claims. They start to sound much more like people who claim to have the power of astral projection than people who are simply reporting another of the mundane sets of facts in the world: tastes, colors, sounds, moral facts, etc.
It’s important for ethical intuitionists to frame their judgments as being as mundane and ordinary as possible. And ethical intuitionists historically (I’ll even dare to say typically) simply assert that most people are moral realists, often with little or no substantive evidence to support such claims. If they’re wrong, it may be that there is far less consistency than they suppose. And, as I have endeavored to suggest, I suspect that what little consistency there is is largely the product of a narrow appeal to members of a culturally homogeneous population, anyway.
People also claim that, when you really look into it, there’s not that much moral disagreement. Friedman echoes this sentiment, saying:
The equivalent moral claim is that there is also little disagreement about the moral status of a sufficiently well specified situation.
Agreement among whom? There may be broad agreement among contemporary members of industrialized civilizations, or among academics, or among WEIRD populations, or among the people Friedman has spoken to. But most of these people are steeped in similar cultures and traditions. It’s possible that, had history gone differently, mainstream moral standards would differ. And it’s possible that the more isolated cultures are, the more different their moral values are. But I also suspect that, were we to survey people in the past, their moral values would be quite unlike our own. Would the members of most ancient civilizations agree about various moral issues in most well-specified situations?
I don’t know. I doubt it. But without extensive knowledge of history, anthropology, or cross-cultural psychology, why would anyone be so confident that most people would tend to arrive at the same moral conclusions? Simply put: I am confident most of my neighbors would agree on well-specified moral issues. But my rough impressions of what my neighbors would think provide little insight into what, say, people living in the Old Kingdom of Egypt in 2500 BCE would have thought about those same issues. Does Friedman point to compelling empirical research revealing how all societies have historically converged on the same moral values? No. Instead, we get this:
Very few people of whatever political persuasion, reading “A Christmas Carol,” see Ebenezer Scrooge Mark I as the hero. C.S. Lewis in The Abolition of Man argued that all societies have, at some base level, the same moral code, which he referred to as the tao.
These examples are cute, but at the risk of sounding a bit rude, I find them so unconvincing as to be anti-persuasive. If people had better evidence, I’d expect them to present it. Instead, how do we know all societies have the same moral code? The guy who wrote The Chronicles of Narnia said they did and called it the “tao.” Riveting stuff, if you’re a pastor who’s taken a few hits from a bong.
Another reason why consistency provides little support for intuitionism is that consistency in normative moral values is compatible with, and plausibly predicted by, views of human psychology consistent with antirealism. Consider normative and evaluative judgments in nonmoral domains about which, on reflection, we may not be normative realists. As usual, let’s go with gastronomic judgments. I don’t think there are stance-independent gastronomic facts about what food is intrinsically good or bad, independent of how it tastes. I hope you don’t think so, either. Nevertheless, are people generally pretty consistent in their evaluative judgments about what sort of food is good or bad, at the same level of abstraction at which they purportedly agree, according to Friedman and many moral realists, about moral issues?
Yes. Sure, people vary in whether they like cilantro or how spicy they like their food or whether they prefer pasta or bread, but move up one level of abstraction: people tend to like foods that have similar balances of savory and sweet and salty and sour. Is it any surprise that french fries have taken the world by storm? That people love umami? That people all over the world tend to like chocolate and ice cream and pizza and fried foods and cookies and cake and sugar and bacon?
It shouldn’t be. So what we observe is an extraordinary degree of consistency in food preferences. How do we explain that? By invoking the notion that some foods are “intrinsically tasty” and that we have a special “gastropathy” sense for detecting the intrinsic, immaterial “tastiness” of food? Maybe some foods have more taste particles (tasticles?) in them?
No. Such theories are stupid and no reasonable person would bother with them. Our shared human psychology gets the job done just fine. We are all members of a species with a shared evolutionary history. We have similar physiologies, including similar tongues, metabolisms, taste receptors, and brains. Of course we’re going to have similar food preferences. So why on earth would it come as any surprise if we discovered that people tend to have similar attitudes about what, morally speaking, they think is right or wrong, good or bad? It wouldn’t.
Tack realism onto this, and the picture doesn’t change much: if some people end up projecting their attitudes and personal beliefs onto the world, and come to believe their moral values are instantiated out there in the world, it should come as no surprise if they tend to have similar moral values, any more than it would be surprising if a bunch of people we tricked into being gastronomic realists happened to like the same kinds of food.
Antirealists have no problem offering a straightforward account of how and why people would share similar moral standards. There is no need to suppose people have a special faculty for detecting moral facts, any more than we should suppose that chocolate cake is better than dirt because it has greater tasticle density.
Friedman also makes this remark:
My claim is not that moral perceptions are consistent or that intuitionism is true. It is that whether the moral perceptions reported by different people are consistent is a non-moral fact
Note the language here. The moral perceptions. As if people are perceiving facts. Every type of perception I’m familiar with is facilitated by specialized organs that detect identifiable features of the external world. If this “perception” is supposed to be the same kind of thing, then is it likewise facilitated by some empirically detectable means? If not, then what is meant by “perception”? Is it some kind of metaphor for, or analog of, familiar forms of (physically mediated) perception? If so, how does one cash the explanatory check and say what this “perception” actually is, rather than drawing a mysterious comparison to something it isn’t?
Friedman asks, perhaps a bit rhetorically:
Why would different people all perceive the same thing if it is not there to be perceived?
I suppose I’m supposed to answer “they wouldn’t. It really is there.” But people share a lot of bullshit beliefs in common. People share belief in superstitions, Bigfoot, alien abductions, paranormal powers, demonic possession, and so on. Would it really be such a shock if a bunch of people exposed to the same concepts and ideas could share in the same intellectual hallucination of some external moral reality?
Friedman considers possibilities a bit like those I’ve proposed:
A possible answer is that it would be very weak evidence because there are other plausible explanations for consistency of moral beliefs. Perhaps our common moral perceptions are the result of evolution hard wiring into us beliefs that caused our ancestors to behave in ways that led to reproductive success. Perhaps we have been indoctrinated by our societies with beliefs that make societies more likely to survive, consistent across societies because societies that didn’t conform didn’t survive.
These explanations are a bit crude, but they’re at least in the ballpark of what I might say. The problem is that they hinge a bit too much on the notion that the beliefs in question would enhance reproductive fitness. Friedman exploits this in discounting these explanations:
Those are possible explanations for consistent moral perceptions consistent with moral nihilism but they depend on non-moral facts. Suppose one could show that some widely held moral beliefs did not contribute to either reproductive or societal success. If such evidence existed, and if we observed consistency across humans of moral judgement, that would be evidence for the existence of moral facts that humans can perceive. Hence it would be evidence for those moral facts that humans do perceive.
I don’t find this persuasive. Not everything people do is adaptive or a direct product of natural selection. I don’t think the shared propensity for claiming to “perceive” moral facts is hardwired into us by evolution, nor do I think it makes societies more likely to survive. I just think it’s a mistake, plain and simple.
Human civilization has repeatedly demonstrated a propensity for shared false beliefs. Religions, bad medicine, ludicrous superstitions, and pseudoscientific nonsense have all persisted. Some of these practices may have adaptive benefits. But I don’t think they all have to in order to persist. Naturalistic explanations of some tendency among humans don’t require us to appeal to the benefits of the behavior. Is our only way of accounting for smoking cigarettes to figure out how it is that smoking actually leads us to have more babies? No. Human minds, at the individual and collective level, can get hijacked by bad ideas.
Friedman’s post ends abruptly on this topic. I like that choice, stylistically, so I think I’ll do the same.