17 Comments

I don't think it's particularly useful to discuss moral realism without much more carefully defining it. That's really where all the action is.

For instance, suppose I think there are objective facts about what is right and wrong, but those facts are essentially just a definitional list (by right I mean ... list), and I admit that someone who had a different notion of shwrong and shright, defined differently than my wrong and right, wasn't making any kind of logical mistake in giving them the same action-guiding role. Am I a realist or not? What if I think there is a particularly elegant -- and, to me, compelling -- principle which implies my notion of right and wrong is a particularly appealing choice for something to treat as a reason to act, but I still don't think someone who disagrees is factually wrong?

If yes, the notion doesn't really track what most people assume it does. If no, then what determines whether you are a realist or not turns on the details of your beliefs about what precise kind of special status realist moral beliefs need to have.

I think that's where the discussion should happen but it's pretty hard to focus on that without clearer definitions.

Boo BB

Lance, you are what every philosopher should be, man! Amazing! Btw, I've been harassing you everywhere, asking if you could talk about normative ethics, effective altruism, and longtermism sometime. Since you're always as precise, rational, and lucid as you are, it would be extremely interesting for me to hear or read you discussing these topics. If you have already covered them somewhere else, my apologies.

The Dino argument is particularly weak, because Dino B being eaten by Dino A is only bad from Dino B's POV -- Dino A gets a nice meal -- so it's agent-relative!

Good response. I read BB’s essay and was frustrated by those sleights of hand.

Also, an unrelated question: are there antirealists who think ethical judgements are aesthetic judgements in disguise?

Ethical judgements as disguised aesthetic judgements?

It seems an obvious point to me (though not necessarily obviously *correct*).

I'm coming more from the art and software angle than the philosophy angle, personally (i.e. I don't have a degree in philosophy). It seems to me there is a certain appropriately general definition of 'image' that covers basically anything complex that we manage to more or less treat as a mental unit, and that people mostly fail to perceive complex things in a way that is distinctly different from the aesthetic. (Some of our moral language is fairly obviously making 'feels-like' distinctions.)

So personally I would posit that moral judgements in most cases pass this complexity threshold, and therefore it may be that people cannot relate to them in any way that is not basically aesthetic.

I am personally unsure whether this is primarily in the domain of philosophy or not. At first sight it seems like an empirical question to me. Let's assume aesthetic judgements can, through some neural correlates, be distinguished from certain other mental processes. Then whether or not those neural correlates also appear for ethical judgements seems relevant. I am also interested in what philosophers have thought here, though, hence my question.

I've heard people *say* that, but haven't seen it in print. But yeah, probably. I can't think of anyone in particular who's said it, though.

I think you made a minor editing error here: ‘So BB describes a scenario, says that it “seems really, really obvious that that was bad,” which is an ambiguous remark that can be interpreted in ways that are trivially easy to show are consistent with antirealism, then follows this by simply asserting that it was bad in a way only consistent with antirealism.’ I think the last word should probably be ‘realism’ rather than ‘antirealism’, if I’m not misunderstanding.

Yep that was a mistake. Fixed.

Will check and fix if I got them mixed up. Thanks.

It’s actually not clear to me that agent relativism implies that we should stand aside as Alex tortures babies. Agent relativism would deliver the verdict that it’s permissible for Alex to torture babies - given that he’s the one performing the action and also approves of it.

But does agent relativism deliver the verdict that it would be impermissible for us to interfere? Suppose I did decide to interfere and forcibly stopped Alex from torturing babies.

According to agent relativism, the moral status of an action is determined by the values of the person performing the action. Given that I’m the one who took the action of stopping Alex from torturing babies and I approve of my action, it seems agent relativism would say that what I did was permissible.

It would seem then that rather than demanding tolerance and non-interference from us, agent relativism would deliver the odd result that both Alex is justified in torturing the babies and I’m justified in intervening to stop him.

It's just a matter of definition. You could simply distinguish:

(1) Agent relativism_1

(2) Agent relativism_2

According to (1), if the other person thinks it's good to torture babies, it is in fact good for them to do so, in a way you have a moral obligation to respect, so it'd be wrong to intervene. According to (2), it's good for them to do it, but also good for you to stop them if you think it's wrong.

(2) would be less odious than (1). The way critics of "relativism" depict the view, they're often going with (1) rather than (2).

"agent relativism would deliver the odd result that both Alex is justified in torturing the babies and I’m justified in intervening to stop him. "

To me, moral theories that do not produce this kind of "conflicting" result are odd, or at least suspicious.

If your theory doesn't account seriously for other people having different views of morality, then I don't think it should be taken seriously; it might be better to take it as mere self-serving rationalization.

('Neither is justified', which would probably be some kind of error theory, would also be a satisfactory result from a moral theory, IMO.)

A separate point: when people talk of 'X being justified' in a way that isn't just 'people are able to produce justifications for action X', it usually seems like some kind of veiled moral realism to me. On my view, no action has this type of 'justified' status beyond the empirical question of whether a person has in fact furnished justifications for it. So I may require myself to justify things before I do them, but that justification exists within me and not as a property of the action.

EDIT: I think my original comment wasn't expressed very clearly, so here is an attempt to clarify:

I see cases where a meta-ethical theory is advanced in a way that appears to be intentionally clearing the path for a particular normative moral theory to win "by default" (it's hardly difficult, for example, to find people saying 'moral realism, therefore theism'). I believe real questions of morality tend to contain genuine ambiguity and conflict, and the move of rendering cases of apparently real conflict into a black-and-white 'this person is simply correct, that person is simply incorrect' is, in my opinion, both intellectually lacking and badly motivated.

"If your theory doesn't account seriously for other people having different views of morality, then I don't think it should be taken seriously"

Realism accounts for the existence of other views very easily: they are just opinions, and most of them are wrong. The hard but important problem is figuring out which is right...

or, at least, who goes to jail and why.

"To me, moral theories that do not produce this kind of "conflicting" result are odd, or at least suspicious."

Moral subjectivism, the agent-relative view, doesn't have a unique ability to predict that opinions vary. Instead, it has the unique implication that they are all true anyway. If there is no way of resolving the conflict by argument, there is a distinct possibility it will be resolved by force.

If your theory can't provide a way of arguing out differences, you are left with fighting them out.

"or, at least, who goes to jail and why."

That is the concern of a first-order moral theory like utilitarianism, not a meta-ethical theory like agent relativism or moral realism, though. Their business is just to describe what people *are doing* with moral talk. If they are saying anybody is incorrect, it should be an incorrectness in how they are *using language*, not in their basic positions in a moral conflict.

(To be fair, there probably are some cases where people appear to be in moral conflict based on how they talk, but are not. That would be in the domain of meta-ethics, AFAICS.

And I probably should have been more precise with my language to begin with ('moral theory'?).)

Good point. It seems pretty easy to think of actions that I would judge as both morally good for the agent and morally good for me to act to prevent, such as when the agent is factually mistaken (e.g. they pour what they believe is a can of water on a fire, but I know it's gasoline). Agent relativism could work the same way: we stop agents acting on values we judge to be badly mistaken, even while judging them by their own mistaken standards.
