1.0 Introduction
Over on Twitter, @DanKellyFreedom made the following claim:
The biggest practical problem with moral non-realism (incl. non-cognitivism, subjectivism, relativism, error theory) is that if moral claims are about our feelings, emotions, sentiments, or just plain wrong, then there’s nothing to discuss, reason, or be consistent about. (1/2)
If the difference between right and wrong, and good and bad, is determined by our widely varying emotions or feelings, then they’re all bullshit concepts and might really does make right. This is why so many anti-vegans claim to be non-realists. They want to avoid reason. (2/2)
Note that moral non-realism is defined in terms of the standard antirealist positions. However, antirealist positions like my own don’t fit any of these categories (I discuss this here). Hopefully, in time, characterizations of moral antirealism won’t force every position into the artificial strictures of the mainstream accounts, all of which rely on mistaken semantic theses and are therefore wrong.
1.1 The role of emotions in moral judgment
If our moral standards are determined by our emotions, and our emotions vary, then people are going to have varied moral standards. On that, I agree. Our moral standards probably are heavily influenced (rather than entirely determined) by our emotions, though I think “determined by our emotions” is a bit too vague, and that the reality is probably a lot more complex. Learning, cultural factors, and other influences likely cause changes in our emotions, e.g., witnessing the horrors of factory farming may prompt a change in one’s emotional reactions, which in turn causes changes in one’s moral values. So emotions probably exist in a dynamic relation to other cognitive states, and to the extent that they determine our moral values, they likely do so in a way that is causally intertwined with non-emotional processes. None of that gets you to realism, but it can complicate denunciations of moral antirealist accounts that are predicated on a potentially unsatisfying psychological thesis: “it’s just emotions” can sound objectionably reductive. Worse, “emotions” are also popularly contrasted with “reason,” introspection, and intellectual virtues more generally, with emotions relegated to a kind of impulsive, often irrational secondary role. This isn’t an accurate depiction of emotion, but this popular image can contribute to the dialectical stench that surrounds antirealism and makes it smell unappealing.
1.2 Emotions → moral values = bullshit?
What I’m not sure about is why it would follow from the view that emotions → moral values that concepts like right, wrong, good, and bad are “all bullshit concepts” or that “might really does make right.” I’m not quite sure what’s meant by “bullshit concepts.” Bullshit in what respect? If this is just cashed out as “they’re not objective” or “they’re not stance-independent,” that would be true, but it’s not clear to me why that would make them “bullshit.”
1.3 Might makes right
As far as might making right goes: interpreted literally, this wouldn’t follow from moral antirealism. It would not follow from the claim that nothing is stance-independently correct that, if one has the power to impose one’s will on others, this makes one’s actions morally “correct.” If by “correct” one means stance-independently correct, this definitely would not follow. If one means correct in some stance-dependent respect, this would also make little sense. Why would that follow? With respect to which stance? Certainly not mine! I don’t agree that might makes right. I think the phrase is better interpreted in some sort of non-literal, descriptive sense: if there is no stance-independent right or wrong, then those who are powerful will get their way and impose their moral standards on others. If so, what’s strange about this is that it would be true or false regardless of whether moral realism were correct. So I’m also at a loss as to why someone would say that one of the implications of antirealism is that “might makes right.” It doesn’t literally follow, and it’s not clear that it follows in any distinctive way even on a plausible non-literal reading. Simply put: a moral antirealist does not have to accept that if someone is stronger, they determine what is morally right or wrong. They don’t have to accept this in any normative moral sense, nor as a matter of practicality. It may be that if someone is more powerful they can impose their will, and perhaps the antirealist, unlike the realist, couldn’t say that this is stance-independently wrong.
But so what if it isn’t? If that were the case, then the undesirable implication of moral antirealism seems to be that it doesn’t give us stance-independent moral facts. That wouldn’t be a very substantive objection. It’d be like objecting to atheism on the grounds that it denies the existence of God, and thus cannot point to God to resolve disputes. This isn’t a very troubling objection to the atheist: sure, if God doesn’t exist, God can’t resolve disputes. And if there are no stance-independent moral facts, one cannot point to such facts when condemning the impositions of the strong. I would say that’s unfortunate but true, though I don’t think it would matter either way. A powerful person could just shrug at the moral facts and impose their will anyway.
1.4 Emotions aren’t bullshit
My emotions and feelings matter to me quite a bit. Yours very likely matter to you. How I feel about things is closely tied to what I care about. I care about my future and my wellbeing. I care about the future and wellbeing of others. Feelings seem to me like the best place to locate one’s moral values: my moral values are an expression of what I care about. If anything, I am being far too cautious here: emotions plausibly form a substantial foundation, if not the entirety of the foundation, of what it would even mean to care about or value anything. I suppose it may be possible to have goals without accompanying emotions, but the feelings of accomplishment, satisfaction, or joy that coincide with achieving our objectives or having the kinds of experiences we want to have are emotions.
What’s the alternative to the view that moral values reflect our emotions (and perhaps subjective beliefs, attitudes, and desires, which aren’t going to get you realism either, so they can probably be safely added here)? Moral facts that are “true” independent of whether I care about them or value them? Facts that “have authority” over me and everyone else regardless of whether complying with them would yield the kinds of lives we want to live, or make us into the kinds of people we want to be? Outsourcing morality to something other than one’s emotions and personal beliefs and values strikes me as one of the most profound and obvious missteps in all of philosophy.
The only way anyone could or would comply with moral facts out there is if they cared to do so, and the only way they’d care is if they had some desire to comply with such facts. So if not emotions, then something like a desire or subjective value strikes me as plausibly critical if anything other than our emotions or desires is to serve as a foundation for our moral concerns.
My gastronomic preferences are based entirely on how I feel about the food and the emotions that food induces (i.e., states of pleasure or disgust). By “gastronomic preferences” I mean to distinguish my judgment about what food tastes good or bad from other considerations relevant to deciding what to eat: cost, nutrition, ethics, convenience, and so on. Setting all of those factors aside, there is the distilled matter of whether the food tastes good or tastes bad. And when it comes to such judgments, they’re based entirely on my (emotional) reactions to that food. Does that make food preferences bullshit?
1.5 Nothing to reason about
The remark I find the most objectionable, however, is this one:
[...] if moral claims are about our feelings, emotions, sentiments, or just plain wrong, then there’s nothing to discuss, reason, or be consistent about.
This is simply not true. Again, consider the gastronomic example. If our food preferences are determined by our emotions or feelings, does this mean that there’s nothing to discuss, reason, or be consistent about when it comes to the culinary arts? Clearly not: people discuss and reason about food all the time, and there’s nothing stopping them from pursuing consistency, either.
A community of people whose moral values are likewise grounded entirely in their emotions is not bereft of the ability to discuss, reason, or be consistent.
Insofar as people share the same values, they can discuss, reason about, and develop consistent norms and institutions to mutually coordinate and regulate their behavior. For comparison, there is no stance-independent fact mandating that a group of people build a house. Yet if they all want to do so, and therefore share the same goals, they can then reason about and discuss the best methods for building it.
In other words, whenever a group of people share sufficient intersubjective overlap in their values, what they can reason about and discuss are the most effective means of acting in accord with those values and succeeding with respect to their shared goals. Thus, even if our moral values were entirely a product of our emotions, shared intersubjective value is perfectly adequate for people to continue discussing and reasoning. And if they wish to hold consistent values, and to endorse a set of practices consistent with those values, they can likewise work collectively to get into reflective equilibrium with respect to them. Suppose, for instance, a person finds that they both (a) believe that everyone should be treated equally and (b) support a policy under which some people are treated unequally. People often hold views that are in tension or outright conflict with one another. When they discover such inconsistencies, they can reflect on those values and either drop one or the other (or both), resulting in a more consistent set of values or beliefs. Such a process is fully available to moral antirealists.
Insofar as people discover that their values conflict with one another, they can also discuss, reason, and strive for consistency. Many disagreements are not disagreements about what’s true, but disagreements about what to do. If people have conflicting goals, they may not disagree about the stance-independent facts. That is, they may agree on what’s true about the world. Under these circumstances, they can still reason through the best means to cooperate with one another. Diplomacy, trade, negotiation, and compromise all emerge in situations in which people have conflicting goals. There need not be a stance-independent fact about the correct resolution of such conflicts for people to reason through and discuss those conflicting goals.
Reasoning, discussing, and striving for consistency are all consistent with moral antirealism. People don’t have to think that there are stance-independent moral facts in order to strive for consistent moral standards, to work together to build productive societies, or to resolve conflicts with people who have different values.
There is this persistent realist canard that moral disagreement simply doesn’t make sense if people aren’t realists. This is complete nonsense. Philosophers who press this line have devised an artificially narrow conception of “disagreement,” construing it exclusively in terms of disputes about what’s true. Under such a construal, antirealists may very well be unable to readily disagree. Of course, even this isn’t actually true. If cultural relativism were true, members of the same culture could disagree about what was right or wrong relative to their culture’s moral standards. And constructivists could disagree about what’s morally right or wrong with respect to a particular constructivist standard. Thus, even straightforward disagreement about what’s true is consistent with moral antirealism. But even if we set these particular antirealist positions aside and consider others that purportedly struggle with disagreement, e.g., individual subjectivism and noncognitivism, they don’t struggle at all. They may not be able to accommodate disagreements about what’s true. But let me introduce a slogan:
Many disagreements are not about what’s true, but about what to do.
Disagreements about what to do routinely occur in everyday life, and in circumstances where it would make little sense to suppose that there is or must be a stance-independent fact of the matter. If you and a friend are trying to decide which movie to see at the theater, one of you may wish to see a science fiction movie, while the other wants to see a romantic comedy. Must one of you be correct independent of your cinematic preferences? No. This is silly. Nevertheless, you disagree about what to do, and will hopefully devise some means of resolving the problem, e.g., “fine, let’s see the movie you want to see, but let’s see the other one next week.” When people have conflicting goals, they can reason through and discuss means of resolving those conflicts without striving to figure out what they “ought to do” independent of their goals and standards. This is just as true of morality as it is of picking out a movie.