I don't care if I should care if moral realism is true
1.0 Introduction
If moral realism were true, and there were stance-independent moral facts, I would not care at all. The reason for this is simple: I am only interested in acting on my values. And I do not value compliance with whatever the stance-independent moral facts are. If those moral facts happened to align with my values, I’d be motivated to perform those actions, but this would be coincidental and would have nothing to do with the fact that the actions in question were stance-independently moral. If they didn’t align with my values, I would have absolutely no interest in complying with them.
Unless I am radically mistaken about my own psychology or about how motivation works, this picture would remain unchanged were moral realism itself true. All the insistence in the world that refusing to comply with the moral facts would be immoral or irrational is irrelevant to me. David Lewis recognized the powerlessness of the realist’s “authority”:
Why care about objective value or ethical reality? The sanction is that if you do not, your inner states will fail to deserve folk-theoretical names. Not a threat that will strike terror into the hearts of the wicked! But whoever thought that philosophy could replace the hangman?
Both Sides Brigade (BSB) objects to this sentiment in a blog post “Of Course Everyone Should Care About Objective Moral Facts.” BSB’s article is a response to this one, from Discourse. It might make sense to read both of those first.
The title helps itself to a claim that doesn’t move me: I’m an antirealist about all normativity, including whatever domain the “should” in the title may fall in. Not only do I not care about stance-independent moral facts, I also don’t care whether I should care about them. “Shoulds” that aren’t reducible to facts about my values have no sway over me. Reality itself wields no axe. If anyone is to be decapitated, people must hold the blade. That’s how actual authority works. The “authority” realists believe in is a mockery of the notion, and has about as much power to enforce itself as a man in his basement declaring himself the Pope.
From the very outset, even if there were an argument with the true conclusion that I “should” care about such facts, I wouldn’t care, and, in any case, I just don’t care. If BSB doesn’t like that, too bad. This clip perfectly captures my reaction to even a hypothetical situation in which there were stance-independent normative facts about what I should do (or care about):
2.0 Oh, the seemings you’ll have!
BSB makes a few preliminary claims before getting to the meat of the dispute. Here is one:
Now, to be clear, what follows isn’t meant to be a criticism of Discourse in particular, since as I said, it’s a pretty widespread sentiment among relativists in general. But as a widespread sentiment, it still just seems obviously wrong!
If you’re a regular reader of my blog, you’ll already hear these words in your head, but I’ll say them anyway:
It “seems” obviously wrong to whom?
It’s not obviously wrong to me. In fact, it’s obviously right! If BSB is making a personal report about their own psychology, and some of the surrounding commentary suggests this is the case, well, that’s fine. God’s existence is obvious to many theists. Astrology’s efficacy is obvious to many astrologers. Bigfoot’s magnificently furry feet are quite real to those who believe in Bigfoot. None of the rest of us are obliged to care in the slightest. I simply don’t care if something is “obviously wrong” to BSB. BSB has wildly different priors and background beliefs to me. As a bit of autobiography, this is perhaps interesting, but if such remarks are intended to serve as rhetorical weight on the scales against the relativist, I’d advise readers to be cautious when people claim things are “obvious.” This has no force if we don’t share the critic’s judgments about what’s obvious. I don’t mean to be too harsh; as I said, there really is some casual autobiography going on here, as BSB next remarks:
I really can’t wrap my head around the logic — and since I’m stuck between Christmas dinners with nothing to do, I thought it would be fun to quickly lay out exactly why I find it so baffling.
Fair enough, but perhaps BSB’s struggles to understand the logic are a result of BSB failing to fully consider the matter from the relativist’s point of view. As we’ll see, I think BSB falls victim to the halfway fallacy, and continues to impose their own perspective or presuppositions onto the framing of the dispute in a way that a relativist isn’t obliged to grant (and that I certainly don’t).
To be fair: I may very well be guilty of the halfway fallacy myself in claiming I wouldn’t care if moral realism were true. Why? Because if moral realism is true, this may be associated with, strongly suggest, or even entail that much of what I think is mistaken, in addition to my mistaken rejection of moral realism. My reaction to the truth of moral realism could very well change were I persuaded of the truth of numerous auxiliary claims. I can’t escape my own point of view, so I will simply concede that if I am mistaken about enough positions at the periphery of moral realism that lend weight to its truth, I may not be in a position to adequately anticipate how I’d react were I both to believe moral realism and the host of auxiliary positions that support it.1 These concerns cut both ways. I am willing to concede as much, but as of yet I rarely see realists do the same. Perhaps some will.
Either way, our respective stances on moral realism and its implications stand or fall not merely on the basis of the positions themselves, but the auxiliary positions that surround them. I think this kind of position holism is an extremely important consideration and one of the main reasons philosophers struggle to resolve matters: they argue about positions in isolation from one another, but few if any positions are islands unto themselves. It’s like a grand competition between people deciding which move is best or who scored the last point, but nobody can agree on which game they’re playing. Until this latter matter is settled, disputes on the field seem like a profound waste of time.
3.0 Relativism and properties
My primary objection to BSB’s account centers on the way BSB frames the relativist’s interests. This occurs here:
Let me start by pressing a simple point, one that relativists themselves often complain about people forgetting: With the exception of a few clinical psychopaths and/or error theorists, everyone agrees that some things really are wrong! Take recreational puppy torture, for instance — the fundamental disagreement between realists and relativists has never been whether that sort of act has the property of wrongness, but rather what the nature of that wrongness consists in. The realist thinks the wrongness of torturing puppies is an objective moral fact, while the relativist thinks that same wrongness is the product of their own subjective stances. Still, both agree that, whatever the ultimate nature of wrongness turns out to be, it’s definitely something that recreational puppy torture has.
There is a sense in which I agree: I do think some things “really are wrong.” However, I think BSB begins to smuggle certain analytic presuppositions into the framing of this truth that I don’t accept. When I say that some things “really are wrong,” what I mean is that there are certain things that are inconsistent with my moral values. That’s it. Yet BSB says:
[…] the fundamental disagreement between realists and relativists has never been whether that sort of act has the property of wrongness, but rather what the nature of that wrongness consists in.
This may be true for some relativists, but I don’t accept this framing. I don’t think the act of puppy torture “has the property of wrongness,” in some unspecified way, where anything could serve as the property in question, and it’s only a matter of determining what that property is. BSB takes a sort of top-down approach: first we agree that something “has the property of wrongness,” whatever that means, then we try to figure out what that property consists in, in such a way that we’ve committed ourselves to caring about that wrongness, whatever it turns out to be.
This is not how I approach the matter. I take a kind of bottom-up approach. I disapprove of recreational puppy torture. I don’t frame my position in terms of actions having properties, and I don’t care what any analytic philosopher says, even if, e.g., they insist that what I think is equivalent to this; I refuse to affirm this framing. Sometimes people will be conciliatory and say something like “sure, if you want to frame this in terms of properties, then yes, puppy torture ‘has the property of wrongness’ where this just means that I disapprove of it,” or something along those lines. But I think we should stop doing this. It may be polite and agreeable to accept someone else’s framing, but much of the legerdemain (intentional or not) that analytic philosophy partakes in occurs when one accepts the analytic philosopher’s framing. Once one does, all manner of equivocation and confusion over words can occur. So no, I refuse. I do not think of puppy torture as “having the property of wrongness.” I don’t care if you stipulate that what you mean by this includes my perspective. If you do, then sure, it trivially follows that I think it “has the property of wrongness.” But I won’t say this, and I won’t agree to anyone framing my position in these terms.
3.1 A digression about framing and rhetoric
You can engage with my position in the terms I phrase it in, and I will engage with you; or you can decline to, and I won’t. It’s the prerogative of anyone to redescribe anyone else’s position in terms suitable to their preferred terminology. In that case, let me suggest that we frame “moral realism is true” as the view that “genocide and puppy torture are awesome,” where I trivially define this phrase to just mean that moral realism is true. Given this, technically all moral realists agree that genocide and puppy torture are awesome. Of course, this is trivially true, but I’ll bet you won’t see many moral realists publicly willing to accept this framing. And the reason why is obvious: they’d look bad if they did so, even if there’s no legitimate dispute about whether it’s trivially true that moral realists agree that genocide and puppy torture are awesome, given the way I’ve stipulated I’m using that phrase.
Of course, I’m being silly, and this is a ridiculous example. But it highlights in the extreme what often occurs in a less extreme way when realists address antirealist positions: they insist on framings that conveniently (again, none of this has to be intentional) give the impression by pragmatic implication that antirealists are evil monsters, psychopaths, or idiots. These impressions are, I believe, doing much if not all of the heavy argumentative lifting in these exchanges, especially among audiences that lack formal training. I want to pause on this for a moment to emphasize something:
(1) That realists routinely do this seems obvious to me (see how I specify who things seem obvious to, realists? Why can’t you do this?)
(2) Whether or not it seems obvious to anyone in particular is irrelevant, since realists demonstrably do this regularly
(3) The continued practice of leveraging pragmatic implicature to make moral antirealists look like evil monsters ought to be increasingly attributed to severe ignorance on the part of moral realists who don’t realize what they’re doing, culpable negligence in the case of those in a position to know better, and active malice for those who do know better but do it anyway
(4) I sometimes see people criticize me for focusing on rhetoric rather than the substance of arguments. To these people: have you stopped and thought about whether rhetoric might play a role in whether people accept or reject positions? And that it isn’t purely about arguments? Have you considered whether rhetoric may influence people’s attitudes and dispositions in ways that prompt motivated reasoning, or cause people to be stubborn, rigid, or overconfident, or to fail to adequately engage with the premises of arguments? Rhetoric and formal arguments are not independent of one another; they interact, because both have to be passed through the sieve of human judgment. We ignore rhetoric, and its relevance to philosophical dialectic, at our peril. I also focus on this because almost nobody else does, and because it’s important. If someone were trying to argue with you and throw shit at you at the same time, you’d probably want them to stop flinging feces before you responded to the arguments.
Note BSB’s remark above:
With the exception of a few clinical psychopaths and/or error theorists, everyone agrees that some things really are wrong!
There’s an unintentionally threatening implication here:2
People who don’t agree with me might be psychopaths.
If you don’t believe that moral realists literally and explicitly imply that everyone either agrees with them or is a psychopath (at least BSB allows that you could be a psychopath or an error theorist; more on this in a moment), well, here’s an example of one of them explicitly saying this:
It is only a slight exaggeration to say that almost everyone believes in moral realism and almost everyone, at least in the circles I usually move in, denies believing in it. Everyone, with the possible exception of psychopaths, feels that some things — stealing from a friend who trusts you, for example — are wrong, not just illegal or imprudent but wrong.
I don’t feel that things are “wrong” in a realist way. I am also not a psychopath. Friedman is wrong. Now, BSB recognizes you could be a psychopath or an error theorist. Do you think an unwary listener might pick up the vibe that some of the stain of psychopathy rubs off on the error theorist by syntactic osmosis? I certainly do. Just imagine someone saying:
The people who don’t endorse this position are either genocidal maniacs or moral realists.
Is it hard to imagine someone reacting with the impression that being a moral realist might be really really bad given where it’s placed in this sentence? This is probably testable, and I bet guilt-by-association like this works to create at least a subtle negative impression about a rival perspective. Realists often do this in online debate spaces. BSB’s remarks here aren’t the worst or best instance of it; they’re rather mild by comparison to much of what I’ve seen. And they’re not quite as bad as normative entanglement, but the constant, again probably unintentional pairing of moral antirealists with psychopaths does, I suspect, contribute to a kind of malevolent aura around antirealism, one that moral realists have actively created through the way they frame things and by the examples they choose.
3.2 Returning to properties
Rhetoric and framing aside, there are also substantive errors and confusions to disentangle in BSB’s account, and much of this is rooted in this talk of the “property” of wrongness. Even if there is a sense in which you could trivially stipulate that what I think can be cashed out in terms of actions having properties, I think cashing things out in this way opens the door to precisely the kinds of analytic shenanigans that cause so many confusions. What it does is allow the analytic philosopher to treat the relativist antirealist’s and the realist’s concerns as fixed in certain respects, but with different referents. So the antirealist and realist both agree:
Puppy torture has the property of wrongness.
But they then disagree about what “wrongness” is.
Antirealist relativist: Wrongness consists in facts about one’s stances.
Realist: Wrongness consists in stance-independent facts.
Now one can maintain that both sides are committed to (1) its being true that puppy torture has the property of “wrongness,” and (2) caring about puppy torture in virtue of its possession of this property. Given that, BSB can argue that it is possible for the relativist to be mistaken about what the wrongness they believe in consists in. Were the relativist to discover that what they took to be the wrongness-making feature of the action (that it is inconsistent with their values) is not the real one, and that the action is instead intrinsically, stance-independently wrong, everything else should remain intact: they clearly cared about the wrongness of recreational puppy torture when they mistakenly believed puppy torture “had the property of wrongness” in the sense of being inconsistent with their values, and now you can simply swap out “inconsistent with my values” for “inconsistent with the stance-independent moral facts.” Since what the relativist cares about is kept fixed (they care about the “property of wrongness”) while the referent of this property is allowed to vary, BSB can argue that the relativist is confused and mistaken when insisting they wouldn’t care if something is objectively/stance-independently wrong: they clearly do care whether things are “wrong,” and are just mistaken about what that wrongness consists in. This is exactly what BSB suggests:
So if objective moral facts do actually exist, which is the premise of the counterfactual here, then the relativist must be incorrect about something. But what, exactly, are they incorrect about? It can’t be their original judgment that torturing puppies is wrong, since that’s equally true given realism. Instead, all they’re actually incorrect about is whether that wrongness is subjective — they believe the property they’re talking about is a product of their own individual stances, when in reality, it’s just an objective moral fact. So once they learn that moral realism is true, what they realize is that, in some sense, they’ve been caring about objective moral facts this whole time, since it turns out that’s what wrongness actually is.
No. A thousand times: No! I think this remark highlights the deep and pervasive problem in the way BSB approaches these matters.
First, an important caveat: BSB’s objections probably do work against certain kinds of analytic stance-dependent relativist accounts. Certain analytic philosophers really do think within the analytic straitjackets characteristic of mainstream practices in the field, which are entrenched in bizarre semantic-centric and property-centric framings.
So much for these positions, if so. A relativist need not endorse them, and need not accept the framings and presuppositions operative in mainstream analytic metaethics. They shouldn’t, because they are the root of the problem for both mainstream analytic realists and antirealists, which is why I reject all contemporary analytic antirealist positions. But one can readily identify a post-analytic analog to relativism, constructivism, or whatever other stance-dependent cognitivist normative antirealist account one fancies, that simply discards the confused framings and presuppositions.
3.3 A previous comment on properties
I ended up responding to someone in a way that addresses where I planned to take this next here. So here is that comment, reproduced below. There will be some overlap with what I’ve already said, but I tried to cut out some of the redundancy:
The problem with BSB’s remark is that it uses property talk to treat judgments of “wrongness” in such a way that “wrongness” becomes separable from attitudes/values/preferences, such that the relativist can be “incorrect” about what’s driving the judgment that puppy torture is “wrong.”
It looks like BSB thinks that both relativists and realists come at the question of whether puppy torture is wrong in a kind of stepwise process:
(1) Have the intuition that puppy torture is wrong.
(2) Seek an explanation for why it’s wrong. We know it’s wrong; it’s just a matter of figuring out what philosophical account explains this.
I grant that this is the way a mainstream analytic philosopher who lands on relativism might approach things, but it isn’t how I or others on Substack appear to be approaching the matter, and it’s not how a relativist “must” approach the matter. Instead, what I and others are doing is finding that we oppose puppy torture, and then when we speak of things being right or wrong this is what we intend to convey when using ordinary language phrasings.
There’s no placeholder. It’s not that I think puppy torture is “wrong” in abstracto, first, then there’s a separate, distinct question about whether this wrongness that I’m picking up on is reducible to my values or is instead some kind of objective fact. No. When I say puppy torture is wrong, this just is an expression of my attitudes and values, full stop. There’s no ambiguity here, no reasonable possibility of me being “wrong” about this. When I say it’s wrong, I am not theorizing about what wrongness is. I’m just stipulating that my use of moral talk is a way of expressing my preferences. Being a “relativist” as an individual speaker doesn’t require or presuppose some kind of semantic theory about the meaning of ordinary language terms among ordinary language users, or a theory about the meaning of ordinary moral claims themselves, and so on.
In a way, BSB’s is a top-down approach to devising an account. We start at the top level by first fixing our use of terms: “puppy torture is wrong.” We agree on this. And, critically, realists and relativists alike will affirm that we care that it’s “wrong.” Then, because our commitment to it being “wrong” is fixed, BSB can argue that we have mislocated wrongness in our values, when it is instead located in the objective wrongness of puppy torture. Since our commitment to it being “wrong” is locked onto the first-order moral claim “puppy torture is wrong,” and because we’ve already granted we care about this truth, if we’re mistaken about the referent of the truth, then we’re still committed to caring, but we’re wrong about what it is we’ve cared about all along.
By analogy, suppose I and another person both agree that we care about what’s in a particular box. So we both affirm:
What’s in that box is important to me.
I think the box has X in it, and they think the box has Y in it.
Now they argue:
We’ve both agreed that what’s in the box is important to us. However, while you think that the important thing in the box is X, it isn’t X, it’s actually Y. So since you agree that what’s in the box is important, but you’re wrong that the thing in the box is X, actually what’s important to you is Y, so you should care about Y.
This is the mistake BSB is making, because I don’t just care about whatever is in the box, regardless of whether it is X or Y. Instead, my position is this:
X is what’s important to me, and I believe X is what’s in the box, so “What’s in the box is important to me” is only true conditional on the thing in the box being X. If it turns out to be Y, I won’t care about it.
Transposing this over to talk of moral values: when I say that puppy torture is morally wrong, I am not first isolating, in some abstract context, the notion that “puppy torture is wrong,” judging it true whatever wrongness turns out to be, and then happening to conclude that this wrongness consists in my preferences. Instead, what I am saying is “Puppy torture is against my preferences,” and that’s just what I mean when I say it’s “wrong.” There’s no placeholder content in “wrong” that could turn out to be objective wrongness instead.
The result of this is that if we go all the way back to the start of this response, where I quoted BSB, we can now see what the problem is. Here’s the remark again for reference:
So if objective moral facts do actually exist, which is the premise of the counterfactual here, then the relativist must be incorrect about something.
No, I am not incorrect about anything here. My language is “pre-reduced” in advance: my talk of puppy torture being wrong just is talk of my personal preferences. There is no gap between my preferences and the meaning of the term. There is no reasonable possibility of me being incorrect, in virtue of my own commitments or ways of speaking, about “whether that wrongness is subjective.” On my view there is no such thing as “wrongness” apart from subjectivity, from the very outset of describing my stance and what I take moral claims (mine, at least) to mean. As far as anyone else saying that puppy torture “is wrong,” well, it’s an open question to me what they mean. And they are welcome to tell me.
My way of approaching metaethics is thus not vulnerable at all to this objection from BSB. What any given instance of “puppy torture is wrong” means is, to me, an open question contingent on the communicative intent and philosophical commitments of any given speaker. There is no free-floating “puppy torture is wrong” sentence in abstracto about which BSB and I could disagree; I just reject outright that there are any meaningful sentences or claims outside some context of usage. There are only facts about what I mean, what BSB means, and what anyone else means, and we can simply report, or stipulate, what is meant by any given usage of “puppy torture is wrong” that is the present subject of discussion.
BSB seems not to understand this, and I think this is partially a result of misguided reification and property talk rooted in mainstream analytic philosophical methods.
So it’s not that I just have some inchoate sense that there’s something fishy about the way BSB is approaching the matter. It seems very clear to me what the problem is. I could be mistaken about all of this. And, as a final note, when I say it’s fishy I don’t mean to impute intent to BSB. I don’t think BSB is being, e.g., suspicious or sneaky or anything. I think BSB is employing a different metaphilosophical approach from my own and that BSB’s mistakes are located in those metaphilosophical differences.
In case there is any doubt that I’ve accurately described the move BSB is making, BSB is explicit about this:
In other words, recognizing the truth of moral realism wouldn’t magically expose the relativist to a whole new world of objective moral facts that they’d been previously cut off from — it would just correct their erroneous beliefs about the stance-dependence of the moral properties they already accept as motivationally relevant.
BSB gets this exactly wrong. Yes, recognizing the truth of moral realism would expose (many of) us to a whole new world of objective moral facts we’d previously been cut off from (magically or otherwise). That’s the whole point! That’s why antirealists like myself and others are stressing how much we don’t care about these moral truths. Whatever these moral facts are, they have nothing to do with what we care about. It’s not that I’m against puppy torture because I think it has the “property of wrongness” and am simply motivated to act on whatever has such properties. I am opposed to puppy torture and am motivated to stop it because I don’t like it and don’t want it to happen. I then label this opposition “wrong.” It’s bottom-up, not top-down. To put this in the simplest possible terms:
Puppy torture is immoral (to me) because I am opposed to it; I’m not opposed to puppy torture because I think it’s immoral.
BSB continues:
And it should be obvious that discovering the objectivity of a property which you already take to be subjectively important can’t possibly undercut that property’s motivational “oomph,” right?
BSB takes “wrongness” to be a “property,” and this “property” can be subjective or objective. The relativist cares about the property, but mistakenly thinks it’s subjective, when in fact it’s objective. So discovering it’s objective shouldn’t change whether they care about it.
But neither I nor any antirealist or relativist I know of (which isn’t to say there aren’t ones I don’t know) thinks of “wrongness” as something independent of our subjective values, such that it could even in principle turn out to be objective or stance-independent. When I say “puppy torture is wrong,” I just mean that it is against my preferences, i.e., that it’s inconsistent with my stance. So what BSB says here makes no sense on my view. It presupposes a conception of what’s under dispute that I reject, and that misconstrues what I and others think.
If wrongness simpliciter is something a relativist cares about, then the objectivity of that wrongness should, at the absolute very least, make no difference whatsoever.
Many of us don’t believe in “wrongness simpliciter.” It isn’t something we care about. I don’t even think such a notion is intelligible, personally.
BSB adds:
Rather, I think they’re just trying to emphasize that their moral reasoning has an essential affective aspect, and that “pure objective wrongness” wouldn’t move them apart from their own subjective cares and concerns. But this is a total non sequitur, since moral realism doesn’t require (or even suggest) that our moral decision-making should only be driven by some neutral, passionless detection of The Good. Instead, realism just requires that the things we naturally take to be relevant — fairness, respect, the flourishing of friends and family, and so on — carry an objective moral weight.
This just left me scratching my head. A “non sequitur” in this context would presumably mean something like “a claim that does not follow from the claims that preceded it.” But what claims preceded “I wouldn’t care if moral realism is true” that would make such a position a non sequitur? What is it a non sequitur to? Presumably, some kind of implied claim like:
[O]ur moral decision-making should only be driven by some neutral, passionless detection of The Good. Instead, realism just requires that the things we naturally take to be relevant — fairness, respect, the flourishing of friends and family, and so on — carry an objective moral weight.
The issue here is that if we don’t take these things to be relevant, or if we take some other things to be relevant, we’re not necessarily making any mistakes. And insofar as we take the things BSB lists here as relevant, they are relevant insofar as, and only insofar as, we subjectively care about them, and no further and in no other respect. BSB still doesn’t seem to grok the antirealist perspective on this matter: things like fairness and respect are relevant when they are relevant because we care about them subjectively, and only because we do so. If we didn’t care about them, then they wouldn’t be relevant.
Not everyone “naturally” (or unnaturally, I guess?) considers the same things “relevant,” much less in the same way or to the same extent, either. And we, as relativists, would not think that an alien species that considers going on an intergalactic crusade to torture and eat all other species to be the greatest moral good is making any factual errors. We’d just think they were dangerous maniacs.
BSB goes on to say:
There’s a very simple answer, then, for any relativist who demands a reason for caring about objective moral facts: Because you already do!
No, we don’t. And BSB hasn’t shown that we do.
You already care about right and wrong, and good and bad, and all that jazz, and if moral realism is true, then all those things are objective moral facts.
No, we don’t care about right and wrong, good and bad, and all that jazz. We care about our subjective values. Those subjective values are what we are referring to when we call things right and wrong, good and bad, not the other way around. You’ve got it entirely backwards.
BSB continues to reiterate the same errors and misguided presumptions, over and over:
Again, it’s important to remember that the objective moral fact and their own subjective judgment are both relating to the same property of wrongness, which is already motivationally relevant for everyone involved.
This exemplifies, more than anything else said in BSB’s post, the shenanigans wrought by this talk of “properties,” and clearly illustrates how property-talk has misled BSB into thinking we care about the “property of wrongness” independent of whether it is subjective or objective, and then the only dispute is whether it is subjective or objective. No, our use of the language is not relating to the same “property of wrongness.” We’re using the word wrongness to refer to something else, a different “property” entirely, if you insist.
And in my case, what I’m referring to is quite literally a matter of stipulation. I am informing BSB what I care about: my subjective preferences, and not the objective facts. It’s not some theory I have about what I take myself to be doing that is a serious matter of philosophical contention, any more than any other report of my personal preferences is a serious matter of philosophical dispute. I’m reporting this in the same way I’d report that I like the color purple. Now, I could be wrong about what I care about and what I think, but that’s a separate question entirely, and my position is what it is regardless of whether I’m personally deluded about what I care about. For comparison: the strength of the case for moral relativism/antirealism would remain largely unchanged even if a person presenting the case for it was lying or playing devil’s advocate.
Another angle to take is this: My subjective values are what are motivationally relevant to me, not whether something is “moral” or “immoral.” So if I discovered something was objectively moral or immoral, this would always be motivationally irrelevant to me. What I’d consider, at that point, is whether I subjectively cared, not whether it was right or wrong. If an act is something I subjectively favor and it happens to be morally right, the fact that I favor it is doing all the motivational work. If I subjectively favor something but it’s wrong, then I will do it anyway, because, again, my preferences are doing all the motivational work. What’s motivationally relevant to me, in other words, are my preferences. If this required me to completely divorce those preferences from all moral talk of things being right, wrong, morally good, or morally bad, then so be it: if morality isn’t about my preferences, then I don’t care about it.
BSB continues with this remark:
So if the relativist figures out an objective moral fact that surprises them, all they’d be learning is that some moral property they care about — goodness, rightness, badness, wrongness, whatever — is showing up somewhere they’d previously missed it.
But we don’t care about “moral properties” like “goodness” or “badness” or whatever. We directly care about whatever it is we care about: puppies not being tortured, our family being happy, and so on. This doesn’t pass through some middleman property like “goodness” or “rightness.” BSB seems to think relativists think about morality in the way BSB does, but the way BSB thinks about morality strikes me (and others) as extremely weird. BSB seems to think that motivation works like this:
(1) We consider what we care about.
(2) What we care about are moral properties {goodness, badness, rightness, wrongness, etc.}.
(3) Whatever it is these properties consist in, that’s what motivates us, so things that we care about, like fairness, respect, and the flourishing of friends and family, are cared about because they have moral properties.
(4) If those moral properties are subjective, that’s fine. But if they are objective, that’s fine, too. It doesn’t really matter. What matters is that we care about which things have moral properties.
In other words, we get something like this:
Subjective values → Intermediary target of value: Moral properties → Downstream target of value: Concrete matters that exhibit these properties (e.g., fairness, respect, etc.)
But this isn’t how we care about things. Our care is direct:
Subjective values → Target of value: Concrete matters of concern, e.g., fairness, respect, etc.
We then label these targets of value “good”, “bad”, etc. These are just verbal labels, or redescriptions, of the things we value on subjective grounds. We’re not attributing the “property of goodness” to them.
4.0 Hedgehogs in the light of the moon
[…] that morality has at least some inherent conceptual content that limits just how weird the facts could get; as Philippa Foot famously points out, “no one should look at hedgehogs in the light of the moon” just isn’t the sort of thing that could properly count as an ethical command.
One does not simply point out such things. This is a claim, and it is open to contention. I don’t agree that the hedgehog claim couldn’t be a moral claim; I don’t see any reason why anyone couldn’t moralize just about anything. If BSB or Foot think otherwise, they’re welcome to argue for such a claim, but they’re not entitled to just declare it so. The wisdom of Boromir is inevitable:
There are a few other loose ends, which I’ll address to BSB directly:
[…] Let’s imagine a relativist figures out a way to get in touch with all the objective moral facts, and they actually line up pretty nicely with what that relativist already takes to be subjectively true — cruelty and hatred are out, generosity and kindness are in, that sort of stuff. But then let’s say there’s one particular moral issue where that relativist sees the merit to both sides and isn’t quite sure where they land. If they consult the objective moral facts and learn the actual truth of the matter, and it turns out to be one of the positions that they already considered plausible, then how should they respond? Would anyone really argue that it shouldn’t matter one bit what the objective moral facts say in that case? I’m sorry, but that’s just ridiculous!
Yes. It wouldn’t matter one bit.
I mean, come on: We’ve got someone who already has a desire to do the right thing, but when they find out what the right thing objectively is, that knowledge somehow doesn’t make a difference? If that’s really what relativists are trying to say here, then it just seems more like normative stubbornness than any sincere challenge for a realist. But on the other hand, if the relativist does take a nibble of the bullet and agrees that objective moral facts could at least play a tie-breaking role, then that’s no good for them either, since objective moral facts could only ever have a normative authority like that on account of being, you know, facts about what’s right and wrong. And since that’s what all objective moral facts are, of course, what reason could we have for only considering them in cases like these?
No, BSB, you begin with an error: we don’t desire to “do the right thing” and are then open to doing whatever the “right thing” is. We just have desires and consider some of those desires to be “the right thing.” The latter is just a labeling, a rubber stamp, a verbal afterthought; what matters to us is our values themselves. That is, we have desires about what we want to do and not do, how we want the world to be and not be, and so on, and we act accordingly. That’s it. There’s no intermediary. You think there’s an intermediary, so you mistakenly think we do, too.
You just have a profoundly misguided model of our psychology, which appears to me to be a result of imposing aspects of your own presuppositions and ways of thinking onto us, then declaring our perspectives ridiculous (or applying whatever other negative appellations you’ve used across your various articles) based on a serious mischaracterization of what we actually think. Then, when we repeatedly tell you what we actually think, it seems to bounce off of you like you’ve got some kind of forcefield around you, and you go right back to making the same mistakes over and over. It’s tedious, and I know others with similar views feel much the same way. It feels like you’re not engaging with our positions, but with some imaginary, dopey relativist whose views are all tangled up into pretzels. You accuse us of stubbornness here:
If that’s really what relativists are trying to say here, then it just seems more like normative stubbornness than any sincere challenge for a realist.
That’s not what we (I’m roughly identifying as something like a relativist here) are trying to say. You could’ve just asked us.
Instead, it just seems obviously, near-tautologically true that moral facts about which acts are good, bad, right, and wrong should always be relevant for anyone who cares about goodness, badness, rightness, and wrongness, regardless of whether it turns out those properties are objective or subjective.
From my antirealist perspective, whether something is relevant is a matter of psychology, not something that could be settled a priori. As such, whether moral facts are “relevant” to me isn’t something that could, even in principle, be a tautology.
Weirdly, towards the end of the article, you stumble on the correct solution:
The only way out of this bind would be for the relativist to say their moral judgments necessarily center on a distinct property of “subjective wrongness” that has nothing whatsoever to do with the entirely distinct property of “objective wrongness” that objective moral facts involve.
This is more or less correct, but note how tortured and byzantine this is. The notion is that our moral judgments center on “a distinct property of ‘subjective wrongness.’” Not quite: the “moral judgments” just are judgments about what’s subjectively right or wrong. They don’t “center” on the “property” of subjective wrongness, they just are judgments of this kind.
But this just isn’t how properties work, or else (ironically) some of the more abrasive moral realists out there would be correct to say that relativists and other anti-realists aren’t really doing ethics in any meaningful sense.
BSB, you are the one framing all of this in terms of properties, and you are now saying this isn’t how “properties” work. You’re insisting on framing our positions in terms familiar and comfortable to you, then giving the imaginary relativist/antirealist who holds such a view a hard time for it. First, they could disagree with you about whether this is how properties work. Or, second, as I do, they could reject this framing altogether as a weird and confused way of talking about metaethics that they don’t accept from the outset.
Also, the conclusion that it is we, the relativists and antirealists, who “aren’t really doing ethics in any meaningful sense” seems to privilege realist conceptions of morality, but I don’t grant this: I would just flip this on its head and say that you and other realists aren’t really doing ethics in any meaningful sense. Only antirealists are talking about actual morality, because BSB and other realists are talking about things that are either unintelligible, don’t exist, or are trivial on reflection to practical deliberation (at least to me and, I maintain, most people).
That’s the funny thing about these labels: I actually do, quite literally, think something like this is the case. Since I, as an antirealist, think that only antirealist conceptions of morality are about something real and practically relevant, while realists are mistaken or confused, there is a literal sense (not just a for-the-sake-of-argument sense) in which what we antirealists are talking about is real, and what BSB and other realists are talking about isn’t. In other words, antirealists are the real realists, and realists are the real antirealists. I cannot stress enough: I do not privilege realist conceptions of morality over antirealist conceptions. If I had to privilege one over the other, I’d go with antirealist conceptions, and insist that realists have been and remain wildly out to lunch on these matters.
So if relativists want to secure the independence of first-order moral reasoning from metaethical assumptions — which, as far as I can tell, is a major goal of theirs — then they’ll have to accept that relativists and realists are referring to the same property of wrongness when they make their moral judgments.
No. I reject this. And I reject any attempt on the part of a philosopher to insist that I “have to” or “must” do something. There’s a simple way to show that they’re mistaken about this:
I don’t.
And if I didn’t do something, then clearly I didn’t have to!
In case it’s not obvious to you, I am being a bit tongue in cheek here. I presume “have to” means something like “is compelled by the arguments/logic” to do so, not in the sense of being literally incapable of doing otherwise. But if it’s not also obvious, I deny any convincing arguments have been presented that would support such a claim.
BSB ends with this remark:
In the same way, so long as a relativist cares about some things being wrong, then they should care just the same about those things being wrong objectively. Saying otherwise might be a good way to troll moral realists, but as an actual normative claim, it’s hard to take seriously.
“Trolling” is typically associated with attempting to provoke others. It is closely associated with being unserious or insincere, and with being inflammatory, obnoxious, or manipulative. It is a nasty thing to suggest about others, and this is a terrible way for BSB to end this post. We’re not trolling you, BSB. You just don’t understand our perspective.
Imagine if I reversed this by saying something like this:
BSB, you must be pretending to think the things you’ve said in this post. After all, nobody could present arguments as bad as you have and actually be serious about it. They’d have to be unserious and just trolling. It’s far more charitable to assume you’re trolling than that you’re this confused and incompetent.
That would be obnoxious, wouldn’t it? And yet that’s exactly how you’ve opted to frame our positions: to suggest we’re “trolls” if we present these objections.
I don’t think you’re a troll. I just think you’re wrong.
Footnotes
One objection is that I don’t actually know what I’d do if I became a moral realist. That’s true. I don’t know. I can only make judgments on the basis of my current beliefs and attitudes. And it may be that enough about the world would have to differ from my current beliefs about it that I somehow would be moved to act in accord with stance-independent moral facts. But for this to be the case, I’d need to be wrong about, e.g., voluntary action or other features of human psychology, or perhaps just be wrong about myself. Both of these are real possibilities. Another problem is that I think certain forms of moral realism aren’t intelligible. If they aren’t, then it wouldn’t make sense to speak of these positions being true, in which case any claims about how I would or wouldn’t react are moot. But it’s also possible they are intelligible and I just haven’t understood them. Something about their capacity to motivate me to comply could be hidden within meaningful content that doesn’t presently strike me as meaningful. I can’t rule these possibilities out. But I don’t think the latter concern should muzzle my present perspective on the implications of the truth of moral realism; just suppose that moral realism turns out to be true but that the other factors that would make my position mistaken don’t hold, when it comes to the points made here. In other words, treat the points as conditional: conditional on the position being intelligible, and on various extraneous considerations being false, this is how I think I’d react. I’m certainly open to those various extraneous considerations being true, but I don’t even think proponents of moral realism can pass modest, antecedent hurdles, so I don’t see that as a likely possibility.
At least I assume it is unintentional, and I will keep clarifying in this way because I have encountered dozens of instances where, the moment I am not crystal clear, someone will opportunistically complain about me implying they’re being intentionally malicious rather than what I actually think, which is that they are extremely negligent in a way that may be motivated but probably doesn’t rise to the level of conscious awareness. Roughly what I think is that rhetorically effective framings are rewarded by audience receptivity, so people are unconsciously conditioned to employ these approaches over others. I don’t think it’s malicious. I think it’s a basic feature of the way human cognition works to employ strategies that get positive feedback.