1.0 Introduction
Why do people engaging in philosophical disputes present syllogisms or other formal arguments? For instance, you might see someone present a modus ponens:
If P, then Q.
P.
Therefore, Q.
One might suppose people use arguments like this to convince others, and that probably is one of the main reasons people use them. However, one of the more valuable aspects of formal arguments is clarification: logic is useful for extracting a valid argument out of an otherwise unclear, ambiguous, or messy set of remarks. If persuasion and clarification are the main purposes for presenting a formal argument, I’m a bit puzzled about some of the arguments I see people make. Consider this argument, from J.P. Andrew:
(P1) Torturing babies for fun is, in itself, wrong.
(P2) If torturing babies for fun is, in itself, wrong, then moral realism is true.
(C) Therefore, moral realism is true.
JPA refers to this as a “simple proof,” as if it were some type of mathematical formula. If the goal of this argument is to convince or clarify, I don’t think it’s successful in either respect.
2.0 Convince or clarify
Maybe JPA’s goals are to convince or clarify, but if so, JPA’s subsequent interactions don’t look to me like they’re consistent with either objective. JPA is fairly dismissive towards respondents, and doesn’t tend to respond in a way that seems to me to be receptive, engaging, didactic, and motivated by a desire to persuade skeptics. JPA is often impatient with requests for clarification and doesn’t seem to engage much with people who are critical of the argument or skeptical of the conclusion. This isn’t an isolated instance. I’ve critiqued JPA previously on this blog, and JPA has exhibited a similar pattern of behavior in the past (see note below). Given this type of response, I’m curious what JPA’s intentions are for presenting arguments like this.
If it were about clarification, I’d expect the surrounding context to be one where the realist’s position or reason for holding their position was unclear. But I didn’t see any context that would indicate this. JPA might also explicitly state that clarification was an objective, or respond to others in a way that consistently revealed a desire to do so. Yet I don’t observe any of that going on here. Consider how JPA responds to people who question the premises (especially premise 1):
JPA rarely follows up, even if the other person responds again. Check for yourself, and if JPA does engage further I’d be happy to retract that remark and update this post indicating as much. As it stands, it looks like when people question JPA on the first premise, he simply asserts that it’s obvious and that one doesn’t need an argument for it.
That’s fine, as far as it goes. But it’s obviously not obvious to the people questioning JPA. So who is it supposed to be obvious to? If it’s obvious to JPA himself, but not to his interlocutors, then the argument has no meaningful persuasive force: it only persuades if the first premise is “obvious” to one’s interlocutors as well. And given that (as we will see) the first premise only entails moral realism if it is read in a way that trivially builds realism into it (which, as it happens, is a built-in feature of deductive arguments of this kind), the first premise might as well just be “moral realism is true.”
3.0 Unpacking P1
The problem is that this isn’t obvious to people, and the wording of the argument is why: it obscures how trivial the argument’s content actually is. Let’s start with the first premise:
(P1) Torturing babies for fun is, in itself, wrong.
What does this mean? We probably don’t need an explication of “torture,” “babies,” or “for fun,” but what’s meant by “in itself” and “wrong”? Nothing about the meaning of these terms is made explicit. This premise is ambiguous. In order for the argument to be sound, and thereby guarantee that the conclusion is true, we’d have to disambiguate the premises. How should we disambiguate P1? We have two sets of options:
Characterizations of P1 that would entail moral realism
Characterizations of P1 that don’t entail moral realism.
First, the argument is a modus ponens, so we’re dealing with two propositions:
P
If P, then Q.
Therefore, Q.
P = Torturing babies for fun is, in itself, wrong.
Q = moral realism is true.
If JPA is using a standard characterization of moral realism, then it’d be something like the claim that “there is at least one stance-independent moral fact.” So Q isn’t really an issue here.
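As an aside, the validity of this form can be checked mechanically. Here is a minimal sketch in Lean (my own illustration, not anything JPA provides; the theorem name is mine) showing that modus ponens goes through for any propositions P and Q whatsoever. The validity comes entirely from the form, not from what P and Q happen to say:

```lean
-- Modus ponens holds for arbitrary propositions P and Q.
-- Nothing about morality (or anything else) is needed:
-- the proof just applies the conditional premise to the categorical one.
theorem modus_ponens (P Q : Prop) (hP : P) (hPQ : P → Q) : Q :=
  hPQ hP
```

This is precisely why the interesting question is never whether the form is valid, but whether the premises, once disambiguated, are true.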
The issue is what’s meant by P, “Torturing babies for fun is, in itself, wrong.” The most relevant distinction concerns two categories of interpretation:
P1 doesn’t mean or entail “stance-independently wrong”
P1 does mean or entail “stance-independently wrong.”
One option is that “in itself, wrong” does not mean or entail “stance-independently wrong.” If so, this is a serious problem for the argument: a moral antirealist could agree with P1. Yet in doing so, they’d be well within their rights to reject P2, because if P1 is consistent with antirealism, then it doesn’t follow that if P1 is true, moral realism is true. In other words, if P1 doesn’t mean or entail that torturing babies for fun is stance-independently wrong, then P2 is false.
A second option is that “in itself, wrong” does mean or entail “stance-independently wrong.” If so, then P1 amounts to asserting that a specific moral issue constitutes a stance-independent moral fact. If this is true, then both P and P→Q would be true, and the argument would thereby establish that moral realism is true.
4.0 All deductive arguments are question-begging
The problem is that this reveals just how empty this argument actually is. If “in itself, wrong” just means stance-independently wrong, then the first premise amounts to the assertion that there is at least one stance-independent moral fact. And what is moral realism? It’s the view that there is at least one stance-independent moral fact. If we substitute “there is at least one stance-independent moral fact” for “moral realism,” we can present more or less the same argument like this:
(P1) There is at least one stance-independent moral fact.
(P2) If there is at least one stance-independent moral fact, then there is at least one stance-independent moral fact.
(C) Therefore, there is at least one stance-independent moral fact.
In other words, JPA’s argument is completely vacuous. It simply asserts that there’s a stance-independent moral fact. And because there is a stance-independent moral fact, the view which holds that there’s at least one stance-independent moral fact is true. This is completely trivial, and doesn’t “prove” anything.
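To see just how little work the argument does, here is another minimal Lean sketch (again my own illustration; R is a stand-in for “there is at least one stance-independent moral fact”). The proof of the conclusion never even uses the conditional premise, because the conclusion just is the first premise handed back:

```lean
-- R stands in for "there is at least one stance-independent moral fact."
-- The reworded argument: R; if R, then R; therefore R.
-- Note that the conditional premise hRR goes unused: the conclusion
-- is obtained simply by restating the first premise.
theorem reworded_argument (R : Prop) (hR : R) (hRR : R → R) : R :=
  hR
```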
To be fair, the above simplification isn’t entirely accurate. There is one thing missing from it: the original argument presented by JPA relies on a claim about a specific moral fact, namely, that it is morally wrong to torture babies for fun. So the argument does include a specific normative moral claim as well.
The problem is that this claim is irrelevant to the conclusion insofar as it is a normative moral claim. It is only relevant to the argument insofar as the claim in question constitutes a stance-independent moral fact. The actual normative content of this specific issue is irrelevant. A person who thinks it’s morally wrong to torture babies for fun but doesn’t think it’s stance-independently wrong wouldn’t accept the second premise and wouldn’t be compelled to adopt the conclusion. The only way someone would accept the conclusion of this argument is if they both hold the normative moral stance that torturing babies for fun is morally wrong and believe that it is stance-independently wrong. So the only way this argument works is if you already believe there’s at least one stance-independent moral fact. And since moral realism just is the view that there’s at least one stance-independent moral fact, the argument is no different from this:
(P1) The God of the Bible exists.
(P2) If the God of the Bible exists, then theism is true.
(C) Therefore, theism is true.
If your goal was to convince people that theism is true, would this be a good argument?
No. This would be a ridiculous argument. Skeptics of theism are obviously not going to accept the first premise.
And that’s just it: this is just how logic works. If it looks like I’ve made an illicit move, or pulled a trick, or like something must be wrong with my interpretation for the argument to come out this way, that simply isn’t the case. JPA has presented a deductive argument. Deductive arguments are arguments in which, if the premises are true and the argument is valid, the conclusion necessarily follows.
This is just how deductive arguments work. Deductive arguments are always empty and vacuous like this. At best, they can help clarify the relation between certain terms and concepts, but since one is free to stipulate the meaning of the terms and concepts one uses, one can trivially “prove” anything with deductive logic, so long as one follows the rules. As a friend of mine put it:
deduction simply organizes what we already accept and is highly contingent on the definitions involved.
Deduction isn’t capable of “proving” anything that a person didn’t already (at least implicitly) accept to begin with. It simply involves the reshuffling of words and phrases.
In a certain sense, all deductive arguments “beg the question” in that the truth of the conclusion is contained in the premises. That’s the whole point! JPA’s argument isn’t pulling any tricks. It follows the rules. It’s just that the rules of deductive logic are so conservative that they don’t actually provide us with new knowledge at all. They repackage and clarify. That’s it. That’s super useful, and perhaps in clarifying they may prompt audiences to recognize confusions, and thereby come to accept the conclusions. But they’re not providing us with substantively new information. In this respect, deductive arguments are non-ampliative:
Valid arguments are non-ampliative, i.e., their conclusions do not contain any information that was not already contained in their premises.
I’ve gathered a few remarks that illustrate this point:
These lesson notes put the point well:
The conclusion actually just makes clear information that is already in the premises (though perhaps hidden). It doesn’t amplify or add anything new. That’s why it can’t go wrong (as long as the premises are true the conclusion must be, too).
The IEP entry on deductive and inductive arguments also gathered a number of quotes to this effect:
We may summarize by saying that the inductive argument expands upon the content of the premises by sacrificing necessity, whereas the deductive argument achieves necessity by sacrificing any expansion of content (Salmon, 1984, as quoted in Shanahan)
the conclusion of a deductively valid argument is already ‘contained’ in the premises [...] (Hausman, Boardman, & Howard, 2021, as quoted in Shanahan)
In a deductive argument, the … conclusion makes explicit a bit of information already implicit in the premises … Deductive inference involves the rearranging of information. (Churchill, 1986, as quoted in Shanahan)
This is nothing new, nor is it mysterious or unknown amongst philosophers. Deductive logic has its role, but its powers of persuasion are quite limited. Again, I cannot stress enough that I am not criticizing logic, or deduction, or suggesting there’s no point in presenting arguments of this kind. What I’m wondering is why people throw arguments out there in public spaces as if the force of the syllogism is so great one can use it to defenestrate skeptics and choke enemies into philosophical submission. Just what is JPA’s “proof” supposed to achieve? It’s an ambiguous repackaging of the realist’s position. Why would that “prove” moral realism? Why does JPA call this a “proof”?
The antirealist will simply reject the first premise. It isn’t appropriate to try to use syllogisms like this to “prove” positions. An appropriate use is to minimize confusions and errors of natural language by distilling valid chains of inferences out of the chaotic jumble of everyday speech. A useful purpose for such syllogisms is to separate the logical wheat from the rhetorical chaff.
5.0 The misuse of deductive arguments
Unfortunately, it is incredibly easy to misuse deduction, and I think that’s what JPA is doing. Why torturing babies for fun? Why that example? One answer is that it’s rhetorically useful to present concrete examples like this. It’s rhetorically useful because it entangles a normative moral claim with a metaethical claim.
Normative (first-order) claim: it is morally wrong to torture babies for fun
Metaethical (second-order) claim: it is stance-independently wrong to torture babies for fun.
I believe the primary effect of syllogisms like JPA’s is to give the impression that antirealists aren’t opposed, on normative moral grounds, to awful atrocities like torturing babies. This is achieved by embedding a first-order moral claim inside a second-order moral claim to create a double-barrelled claim, such that rejecting the second-order claim pragmatically implies rejecting the first-order claim. One is then expected to give a decisive “true” or “false” to the proposition, when the proposition in fact consists of two distinct propositions smashed together in a misleading way.
This is a well-documented problem in psychological research. Imagine you were filling out a survey that asked you the following question:
How much do you agree or disagree with the following statement?
Two plus two equals four and it’s okay to set cats on fire.
(1) Strongly disagree
(2) Moderately disagree
(3) Somewhat disagree
(4) Neither agree nor disagree
(5) Somewhat agree
(6) Moderately agree
(7) Strongly agree
This statement consists of two separate claims. If you express disagreement, it’s unclear whether you disagree with the first statement, the second statement, or both. If you agree, it’s also unclear whether you agree with one or the other statement, or both. When you force someone to give a single response to two questions, you stick them with a double-barrelled question: there is no way to clearly express agreement with one of the statements but not the other. Now consider this statement:
(P1) Torturing babies for fun is, in itself, wrong.
This statement embeds the normative moral claim “torturing babies is [...] wrong” inside of a metaethical statement, as implied by “in itself.” One could separate the two claims as follows:
(P1.1) Torturing babies for fun is wrong
(P1.2) There are stance-independent moral facts. Among these facts is that torturing babies for fun is wrong.
Once you separate out the two claims, it is easy to see how an antirealist could respond: they could accept P1.1, but reject P1.2.
Not all antirealists would respond this way. Many realists focus their objections on error theorists. Error theorists hold that moral claims presume moral realism, so saying “torturing babies for fun is wrong” just means “torturing babies for fun is stance-independently wrong.” As such, an error theorist would see little point in distinguishing P1.1 and P1.2, since P1.2 is implicit in P1.1. And since it’s implicit in P1.1, they’d reject both claims. Even so, this doesn’t carry the awful implications realists often suggest it does, because the issue is deeper than this. Suppose you agree that “torturing babies for fun is wrong” just means “torturing babies for fun is stance-independently wrong,” and, since you think nothing is stance-independently wrong, you conclude that it’s not “wrong,” full stop. What are the implications of that?
There aren’t any, really. The error theorist simply endorses a claim about how people use moral language: they believe that “wrong” in such contexts means “stance-independently wrong,” and so they think that ordinary moral discourse is riddled with systematic error. They might adopt some kind of practical nihilism as a result of this. But they don’t have to. The issue here is the further pragmatic implication that if one doesn’t endorse the first-order moral claim, one must be indifferent to the issue in question, not care about it, and would therefore act differently than the realist.
But this simply isn’t an implication of such views. It’s consistent with them, but not caring about the issue in question is consistent with moral realism, too. I’m not the first to notice the rhetorical aspects of realist critiques of error theory. Joyce wrote these remarks in the SEP entry on moral antirealism:
[...] the moral error theorist may allow that the following are true: “Moral wrongness does not exist,” “Augustine believed that stealing pears was wrong,” and “Stealing is not morally wrong.”
The last example (“Stealing is not morally wrong”) calls for an extra comment. In ordinary conversation—where, presumably, the possibility of moral error theory is not considered a live option—someone who claims that X is not wrong would be taken to be implying that X is morally good or at least morally permissible. And if “X” denotes something awful, like torturing innocent people, then this can be used to make the error theorist look awful. But when we are doing metaethics, and the possibility of moral error theory is on the table, then this ordinary implication breaks down. The error theorist doesn’t think that torturing innocent people is morally wrong, but doesn’t think that it is morally good or morally permissible either. It is important that criticisms of the moral error theorist do not trade on equivocating between the implications that hold in ordinary contexts and the implications that hold in metaethical contexts.
Joyce is absolutely correct here. JPA’s argument, like many others, is rhetorically effective insofar as it prompts audiences to conflate the technical commitments of a philosophical position with the use of similar phrasing in ordinary contexts. The technical position may not carry the pragmatic implications that would be associated with a structurally identical remark in those contexts, yet the syllogism may falsely give readers that impression, thereby pressuring them to accept the first premise of JPA’s argument for reasons unrelated to the merits or defensibility of the position.
Moral realists routinely do this, as do analytic philosophers more generally: syllogisms are employed not for the purposes of clarification but for the exact opposite, obfuscation. Such arguments serve to dupe audiences by conflating two things: the rarefied, technical, and precise philosophical commitments a term or phrase carries when employed outside ordinary discourse, where its highly specific meaning (often construed as its “semantic content”) is explicitly intended not to carry ordinary pragmatic associations, and the pragmatic associations the “same” remark would have in ordinary language. One of the primary purposes of technical philosophical discourse is to extract one’s remarks from ordinary contexts for, among other things, the express purpose of eliminating those pragmatic associations.
When you’re talking in a philosophical context, and you carefully lay out what you mean by your terms, this can be clear to competent, careful, and well-informed interlocutors familiar with philosophical dialectic. In these contexts, one’s interlocutors are expected to recognize that a phrase like “torturing babies for fun is not wrong” means something very specific to that context. If you then take remarks made in these contexts and reintroduce them into ordinary discourse as if they were part of that discourse, you expose those remarks to the pragmatic associations people make in ordinary contexts. By vacillating between the technical and ordinary-language meanings of those phrases, one can mislead audiences into thinking the antirealist (or error theorist) has beliefs and attitudes they don’t actually have, and imply that these are implications or entailments of their philosophical position. You can then leverage these misleading implications to suggest the philosophical position is itself mistaken.
Let me try to put this another way. Consider the following sentence:
“It is not wrong to torture babies for fun.”
This sentence can mean different things in different contexts. There are two relevant contexts here: ordinary language (OL), and philosophical discourse (PD). The OL version could mean something like this:
OL: “It is morally permissible to torture babies for fun and I am personally okay with it.”
While the PD version could mean something like this:
PD: “It is not stance-independently wrong to torture babies for fun.”
These technically mean something different. By conflating the OL and PD meanings, one gives the false impression that if one endorses PD, they also endorse OL.
I believe this is how people interpret syllogisms like JPA’s. Rather than serve to clarify, JPA’s syllogism actually serves to confuse people. Incidentally, it is this very aspect that probably makes arguments like this convincing to some people.
6.0 Intuition pumps
This brings us full circle. Maybe this argument is intended to persuade. But if it is, its persuasive force is derived primarily from the fact that it relies on misleading, emotionally charged conflations that falsely imply antirealists are evil psychopaths. If so, then its persuasive force is rhetorical, and not a good, above-board way to persuade people. To be clear, I do not think realists are “doing this on purpose.” I think normative entanglement is non-obvious and that ingrained habits in the field have caused people to not notice this is going on, or to disagree that it’s going on on various philosophical grounds.
However, there is another, alternative use of arguments: as intuition pumps. Arguments can prompt someone to recognize that a position they hold is inconsistent with their deeply held commitments. By rendering both their philosophical position and those deeply held commitments salient at the same time, one can draw attention to a tension or inconsistency that was not previously obvious to that person. This can then cause them to recognize that they are not in a state of reflective equilibrium: they must give up one or both beliefs, or reconcile the apparent inconsistency. Consider Alex:
Alex has always thought it obvious that “morality is relative.” There are no objective moral truths. It all depends on your culture and your beliefs. When Alex thinks about this belief, it's usually cashed out in vague, abstract terms, without thinking of concrete moral issues. When concrete moral issues do come up, they often come down to edge cases or practices that depend heavily on one’s cultural perspective.
But Alex is also deeply opposed to baby torture. Alex doesn’t know of any cultures that are in favor of baby torture, so moral opposition to something so awful has never been a salient consideration. Alex sees JPA’s post, and realizes that they’re against baby torture, regardless of what any culture has to say about it, so cultural relativism has got to be wrong! There’s at least one stance-independent moral fact. And since there is at least one such fact, well, moral realism must be true!
I think this is the most charitable interpretation of at least one useful purpose for syllogisms like JPA’s: they can prompt readers to recognize inconsistencies in their own beliefs and correct them.
Of course, this doesn’t change the fact that such arguments are still, technically, vacuous. They simply reshuffle and repackage claims that are logically related in such a way that they trivially entail one another, once one is familiar with the terms in question. Insofar as such an argument has the force to sway anyone, it doesn’t do so by presenting them with ironclad facts they didn’t previously endorse. It does so by, at best, prompting them to notice that they already held beliefs that entailed the conclusion. Such arguments only convince someone of the conclusion insofar as they’re already convinced by the premises. The premises aren’t doing the work of convincing the person. Rather, the structure of the argument causes a person to recognize that they either have to accept the conclusion because of what they already believe, or else reject the conclusion, by abandoning one or more of their beliefs (beliefs that would prompt them to judge one or more of the premises to be true). In the end, then, such arguments persuade, at best, indirectly via clarification.
That is useful. People do hold conflicting views, and it can be immensely helpful to prompt them to recognize this by clarifying the relation between premises and a conclusion. But recall that I never said syllogisms were useless. This is exactly the purpose they’re intended to serve, and when used appropriately, they can serve this purpose quite well.
But again, and I cannot stress this enough: they only work by reshuffling terms and concepts around in such a way that you only come to accept the conclusion if you recognize that you endorse the premises. But what if you don’t endorse all of the premises? If this is the case, the syllogism itself has no power to sway you. It can’t. That isn’t its purpose. The purpose of the premises is to support the conclusion, not to support themselves. And any skeptic who isn’t confused and in need of clarification is a skeptic insofar as they reject at least one of the premises. Deductive arguments, by design, cannot (legitimately) convince skeptics of the premises. That’s just not what they’re designed to do.
JPA has not presented us with a “proof” of moral realism, insofar as a “proof” is some kind of ironclad demonstration of the truth of the conclusion. Moral skeptics (people who don’t endorse moral realism) are not idiots who already agree that it’s stance-independently wrong to torture babies for fun but are somehow too confused and incompetent to recognize that this entails moral realism. They are people who don’t believe realists have demonstrated that “moral realism is true” in some abstract sense, and who don’t believe anyone has convincingly demonstrated any particular stance-independent moral fact. For comparison, suppose I claim that “cryptids” exist. What are “cryptids”? Cryptids are organisms that allegedly exist but whose existence is not generally recognized by science. Examples include Bigfoot and the Loch Ness Monster. Now, suppose I present the following proof that cryptids exist:
(P1) Bigfoot exists.
(P2) If Bigfoot exists, then cryptids exist.
(C) Therefore, cryptids exist.
Bigfoot is a cryptid, so if Bigfoot exists then cryptids exist. But if someone says “I don’t believe in cryptids,” why would they accept P1? P1 is exactly the sort of thing you’d be expected to prove in order to prove that cryptids exist.
If JPA wants to “prove” moral realism is true, one way to do this would be to prove that it’s stance-independently wrong to torture babies just for fun. But JPA doesn’t present a proof of this. He simply asserts that it’s true. For comparison, suppose someone is asked to present arguments or evidence that Bigfoot exists, and they say “If you need an argument for this, that's not on me.” I cannot stress enough how unconvincing and silly this would be. And yet it is no different than what JPA is doing. Nobody would take this seriously. Just the same, nobody should take JPA’s “proof” of realism seriously.
7.0 Syllogisms are bad intuition pumps
Even if we abandon any pretense that this argument is any sort of “proof” and instead view it as an intuition pump that can expose inconsistencies in people’s views and prompt them to recognize that they were realists all along, it’s still not very good for that purpose. It still suffers from normative entanglement, and thus introduces irrelevant rhetorical considerations that obscure whatever merits it has as a tool of clarification. It may yield some benefits as such a tool: prompting people to place abstract considerations (“there are no stance-independent moral facts”) alongside their beliefs about concrete cases (“it’s not stance-independently wrong to torture babies for fun”). But it pays for whatever advantages it has in revealing a potential conflict between these claims by packing this clarification into a syllogism whose statements are ambiguous between an OL and a PD context, such that it confounds audiences and entangles irrelevant normative and attitudinal considerations with the philosophical considerations it’s ostensibly intended to isolate.
It may be useful for people who aren’t caught up in the rhetorical implications of normative entanglement, but you could achieve this objective without those rhetorical implications by simply asking someone to consider the abstract and concrete positions alongside one another, instead of presenting a syllogism that you allege “proves” moral realism is true. You don’t need a syllogism or indeed any argument at all to expose someone’s potential state of reflective disequilibrium. This is trivially easy to do:
“Hey, do you think that nothing is stance-independently wrong? If so, then do you think it’s not stance-independently wrong to torture babies for fun? Oh, you do think that’s stance-independently wrong? Well then, I guess you don’t think that nothing is stance-independently wrong!”
This might take more unpacking of the relevant terms and concepts. Someone may say they’re a “relativist,” and you might explore what they take this to mean and what its implications might be.
Whatever the case may be, exposing inconsistencies doesn’t require or benefit from JPA’s syllogism. So why use it? In other words, one can obtain all the advantages that this syllogism could, at its absolute best, achieve, without any of the downsides. So what advantage is there in presenting this syllogism if one’s goal is clarification and/or genuine persuasion based on non-rhetorical reasons?
I contend that there is none whatsoever, and that it is therefore actively unhelpful to present syllogisms in the way JPA has. Philosophers should stop misusing deductive logic as though it “proves” things. Whatever “proofs” it provides are trivial.