13 Comments
John Ketchum:

Have you written a book on metaethics, or do you intend to do so?

Lance S. Bush:

Not yet, but I will.

John Ketchum:

I'd like to get your book when it's published. How can I find out when it's available?

Andrew Sepielli:

Lance -- I'm actually writing a piece about why relativism isn't a metaethical view, but rather a normative-ethical one (and also speculating about why people came to see it as meta-ethical). So yeah, why do you think it's meta-ethical? You define it in terms of the meanings of ethical terms/concepts, but that seems optional to me, no more a required part of the view than if a utilitarian claimed that "wrong" means "fails to maximize utility". (Also, I apologize in advance for my characterization of relativists in the post -- I try to balance it out by making fun of myself, too!)

Lance S. Bush:

I see it as a view about the truth status of moral claims. Doesn't get more metaethical than that. I don't know what you mean when you say that seems optional. Aren't all definitions optional? I don't really think there's a fact of the matter about what relativism is. There are just conventions on how we choose to use terms.

Andrew Sepielli:

What I mean is: Utilitarianism seems like an ordinary moral theory, not a meta-ethical theory. It says that an act is morally wrong iff. it fails to maximize utility. Relativism, then, seems like an ordinary moral theory; it says that an act is morally wrong iff. the speaker or the agent or their cultures or whoever disapproves of it. You could append some further claim about meaning to relativism as I've just presented it, just as you could append some further claim about meaning to utilitarianism, but doing so doesn't make "bare utilitarianism" a meta-ethical theory, nor does it make the bare claim of relativism or stance-dependence generally a meta-ethical theory. Does that make sense?

Lance S. Bush:

I think I can make some sense of that, but I see utilitarianism as setting aside questions about what makes utility good/bad (stances or something other than stances), instead presuming we've settled the metaethical questions and then asking: okay, so what's the good stuff? Utility.

Conversely, I see relativism as focused on what makes moral claims true in a more fundamental sense.

Shawn Ruby:

What’d you write this for? I’m just curious.

Lance S. Bush:

I am annoyed by critics of antirealist positions constantly mischaracterizing antirealism. I want my opposition to be well-informed and offer good objections, not an endless barrage of strawmen and confused nonsense. It's tedious to deal with.

Shawn Ruby:

Btw, I know you from your Discord server, and I've asked you this myself: are you still an error theorist?

Lance S. Bush:

I wasn't an error theorist to begin with.

Shawn Ruby:

I only remember moral anti realism. Anyways, I'm happy your ideas are getting out there. I still disagree with you and you never answered my killer argument, but I appreciate what you contribute.

Manuel del Rio:

Need to read it again carefully and think it over, to see if I find anything to disagree with and how it maps to what I feel is my own stance. Haven't read metaethics yet (my approach to these topics has mostly come from evolution, game theory, and evolutionary psychology texts).

I feel I have a very robust moral antirealism, in which moral claims are rejected as saying anything about a stance-independent world. Beyond that, I am not sure if this would take me to affirming that all ethical statements are false (that would be my initial inclination, in the sense of: assuming a correspondence theory of truth, there is no object to which moral claims of any type correspond).

I am not sure if I understood properly what stance-dependent moral facts would be. Is it something like: if you assume certain moral axioms (whether at the individual or the social level), some actions become 'true' or 'false' given those axioms and the people who accept them? I think I also accept a weak version of this, i.e., societies have evolved morality both biologically and culturally as a way of solving coordination problems, and if you accept some very minimal axioms you can perhaps build a contractarian view of morality as the set of freely agreed-upon norms and rules that maximize individual and group flourishing and well-being.
