The Can'tberra Plan
I often engage in discussions about moral realism online. However, my remarks tend to get buried in comment sections, where they're effectively inaccessible to anyone who doesn't dig through the thread. To address this, I will occasionally share comments I've written here. Much of the most substantive commentary on philosophical issues emerges in comments and, even more often, in in-person discussions. Unfortunately, much of it is lost to the ether. I'd like to salvage some of it. So here is a comment I recently wrote in response to this video.
To provide some context: James Fodor presents what appears to be a naturalist account of moral realism, one which holds that moral facts are facts about wellbeing. My focus is specifically on the methods used to arrive at the claim that morality is about wellbeing, and less on the account of reasons that takes up the latter half of the video (which is also a serious problem for this account, but that's a topic for another post).
The account you’re offering has significant difficulties.
First, the claim that morality is about wellbeing is vague and underspecified. At 1:04 you say that you ground the content of the moral code “in terms of actions that promote human wellbeing.” Is your position that the content of moral codes is exclusively about promoting wellbeing, or that this is one of the things that the content of a moral code is about?
If the former, why not say so explicitly? It would be easy to say that all moral concerns are reducible to concerns about wellbeing. If the latter, then what else is morality about, besides wellbeing? If it’s about other things as well, how much is it about wellbeing, and how much is it about these other things? And if the latter, non-wellbeing moral considerations are significant, why aren’t they mentioned?
Either way, the position remains ambiguous. If you think morality is reducible to wellbeing alone, I think you're going to face a simple empirical problem: there's little reason to believe this is how people actually speak, think, or act. That is, there's little empirical evidence that people's moral concerns are reducible to wellbeing alone. If, on the other hand, wellbeing is only one of morality's concerns, then your position remains underspecified in a way that makes it difficult to evaluate.
For comparison, imagine saying that the law “can be understood in terms of preventing murder.” This is vague. If it means that one of the purposes of a legal system is to prevent murder, this would be true of most legal systems. However, if it means that legal systems are exclusively about preventing murder, this would be false. At best, such a remark would be ambiguous; at worst, it would be misleadingly incomplete: while legal systems do include preventing murders, they also include preventing other crimes (stealing, fraud, etc.) and they also manage civil issues (so even saying that legal systems can be understood in terms of preventing crime would be inadequate).
Likewise, to say that morality is about wellbeing doesn’t tell us whether morality is *only* about wellbeing, or whether other concerns are moralized in a way that isn’t reducible to wellbeing. Are you a wellbeing reductionist? That is, do all apparently non-wellbeing-related concerns reduce to concerns about wellbeing, or do people moralize other concerns? If so, which other concerns? Are there individual and population level differences in which issues people moralize?
This brings me to a second concern. You outline the Canberra plan at about 4:20 as a science-friendly approach to studying philosophy. The first step involves an “analysis of words.” But what kind of analysis? Is this an a priori analysis done from the armchair? If so, it may not be especially friendly to science. If it’s an empirical analysis, what is it an empirical analysis of? Is it an analysis of how ordinary people use words, and what they think when they use them, in everyday circumstances? If so, conventional philosophical reflection is not going to be a good method for answering such questions. Why not, instead of doing something merely consistent with science, just…do science?
In other words, if you want to know what people mean when they use particular words, e.g., what people mean when they say things like “murder is wrong,” this is not a question that can be adequately addressed using conventional philosophical methods alone. You’d need to draw on the methods used by psychologists, linguists, and so on.
More importantly, if you want to make claims that generalize to humanity as a whole, it makes little sense to focus exclusively on members of particular cultures or speakers of particular languages using particular terms within those languages. You’d need to study cross-cultural psychology and comparative linguistics to begin to get a sense of how people from other cultures think and to study the terms and concepts employed in different languages. And you simply won’t be able to address these questions by appealing to how you think and how you use terms in a particular language.
In other words, if we really want to understand how people speak and think about morality, we’re going to have to conduct or engage with empirical research. I haven’t seen you provide much in the way of a substantive case for why we should think morality is exclusively about wellbeing, or why we should think wellbeing is at the center of our moral concerns. Most research in moral psychology suggests people moralize a broader range of concerns than wellbeing alone, and there isn’t much empirical support for the notion that morality is reducible exclusively to concerns about wellbeing.
Finally, even if morality were reducible to concerns about wellbeing, this is still not specific enough. Whose wellbeing? Is it impartial with respect to the wellbeing of all conscious entities? Is it about maximizing wellbeing? Satisficing? How is “wellbeing” to be understood? Hedonically? In terms of desire satisfaction? In some Aristotelian way?
Take, for instance, a case like this: if a parent must choose between saving their child and saving five random people, should they choose the five random people? Suppose this would result in greater total wellbeing. Are they morally obligated to save the five people over their own child? I suspect most people would think not. If so, this would suggest that people may not be exclusively concerned with, e.g., maximizing wellbeing, but instead moralize a degree of partiality.