The Intuitions We Don’t Have
1.0 Distinctions, Distinctions, Distinctions
Hunch. Vibe. Sixth sense. Gut feeling. These terms capture some of the ways “intuition” is understood in everyday, colloquial use among native English speakers. Yet the term “intuition” is not always used this way in academic philosophy.
We (the authors, not you, the reader!) are skeptical of at least some of the ways philosophers use the term. However, philosophers use “intuition” in such varied and underspecified ways that it can be difficult to single out a distinct target of critique. Sometimes “intuition” just refers to a belief, or a disposition to believe, with appropriate epistemic hedges about the uncertainty of the one expressing the intuition.
But sometimes it refers to something… more. Philosophers draw analogies to perception, and sometimes cast intuition as a distinct mental faculty that tends to point toward the truth.
If that strikes you as a bit sketchy, you’re not alone.
Yet these more theoretically heavy, more contestable conceptions of intuition borrow much of their acceptability from the term “intuition” itself. Since people are so familiar with the term “intuition,” and because it is so obvious that we have intuitions of the more innocuous sort (e.g., gut feelings, inclinations to believe, and so on), any skepticism directed against “intuition” can feel like skepticism about mundane and ubiquitous features of our mental lives, even when it’s not.
In other words, by adopting the same name as familiar and uncontested notions, these stranger notions of “intuitions” hide like doppelgängers among innocent instances of ordinary language.
The situation is the terminological equivalent of the science fiction trope of a person standing side by side with an evil clone:
“Shoot him! You know me, Steve! I’m the real one!”
“No, Steve. He’s lying! He’s the imposter! Remember I got you that—”
If you’re like Steve, you may wonder: What can I do to distinguish the real Roger from the fake one?
One way is to wait for someone to come along and clarify the distinction for you. Substack writer Talis Per Se has done just that with a fantastic articulation of the sort of intuitions of which we are skeptical.
(Since this entire post will turn on what’s said in that article, it’s almost pointless to read on without first reading that article, so we will assume you’ve read it as we proceed.)
Talis distinguishes intuitions of this more theoretically loaded kind from that cluster of innocuous, everyday uses:
When philosophers talk of intuition — especially on the subject of justified beliefs — they seldom mean anything like the common usages of the term. If you look up synonyms for intuition on the internet you’ll find words like hunch, instinct, and natural inclination. But this is a far cry from what philosophers are referring to — they’re not suggesting that hunches, instinct and/or natural inclinations can be an adequate basis for the forming of beliefs.
Instead, such intuitions are something along the lines of:
[...] an intellectual striking or intellectual presentation, a type of mental experience.
Many philosophers would report having intuitions of this sort, as would many nonphilosophers who read or discuss philosophy. But it is possible that their propensity to report having “intellectual presentations” or “intellectual strikings” is similar to, e.g., the feeling one might get when reading tarot cards, engaging in cold reading, using a dowsing rod, or exercising any number of other purported paranormal abilities. While people having these experiences are undergoing some kind of psychological process, this does not mean that their account of those experiences is accurate or that the process is achieving the outcomes they believe it is.
Consider what’s going on with people who claim to be oracles and to prophesy through reading tea leaves or the casting of bones. Are they all liars and charlatans? Probably not; many people believe these practices yield genuine insight. And when these folks sincerely engage in these practices, they may have a kind of mental experience, a “seeming” or “presentation” of some kind that has a ring of truth to it.
However, most of us here probably agree that whatever is going on, people who are reading tea leaves or interpreting tarot cards are not, in fact, employing a distinct faculty or ability to detect the truth, or even to be steered toward it. “It seems that X” and “there is a presentation that X” and “it is intuitive that X” look just like “I feel that X.”
The work has not been done to show that whatever they are experiencing is a legitimate way to discover anything about the world. The mere feeling that one is somehow sensing the truth is no indication that one is in touch with some feature of the world, physical or otherwise. One may instead be responding to features of one’s own psychology, mistakenly interpreting memories, emotions, or other psychological states as insight into the truth when all one is doing is misinterpreting one’s own phenomenology.
Something similar could be going on with philosophical intuitions: When people learn to do philosophy, they are prompted to look inward and identify a kind of “presentation” of the truth by considering thought experiments or classical philosophical topics. If so, philosophical intuition may be a kind of inward-directed apophenia. Much as people see faces in clouds that aren’t really there due to quirks in the way our perceptual systems work, philosophers may find themselves drawn toward a sense that something is true due to spurious features of their phenomenology. And rather than philosophers learning to use a preexisting faculty awaiting discovery, this pseudofaculty or pseudoperception may be constructed via a process of social induction, much as one might learn to speak in tongues or to collapse and go limp when touched by a faith healer.
One might question whether this kind of mistake is even possible for reports of intuitions. After all, isn’t the presentation the reality? If it strikes or seems to one that something is the case, how can this reasonably be doubted? The phenomenology just is the content.
At this point, we’d have to put our philosophical cards on the table. Claims about the infallibility of introspection are not incontestable, and there are philosophical positions according to which we can and do systematically misinterpret our own introspective experiences. Conditional on the infallibility of certain kinds of introspection, it could be that we can readily recognize the reality of intuitions via introspection alone, with no need to conduct empirical research. But this would simply pivot the dispute to more fundamental questions about the reliability of introspection.
In any case, even if we grant the impression as-is, the stickier question remains: “Is this impression (in some particular instance) worth anything?” Yes, it inclines belief, but this is just a description of what happens, not a mark that this happening is good. Note that reading tea leaves and other presumptively non-efficacious practices can exhibit all of the same qualities Talis ascribes to sui generis intuitions:
Non-factive — in that they’re fallible.
Conscious — in that it’s a type of experience we can only have when we’re conscious.
Contentful — in that they have content.
Non-doxastic — in that they’re not beliefs.
Representational — in that they represent the world as being a certain way.
Presentational — in that they present the world as being a certain way.
(Bengson, 2015, pp. 715-717)
… But we don’t lend tea leaves any trust, right? So such intuitions must involve additional criteria. And Talis provides them.
2.0 The Self-Congratulatory Stowaway
Talis compares intuitions to perceptual experiences, describing them collectively as “presentations,” which exhibit these features:
Baseless — in that they’re not consciously formed.
Gradable — in that their quality differs depending on the manner in which they represent, e.g. how clear or hazy they are.
Non-voluntary — in that it’s not something you can choose to have happen to you.
Compelling — in that they tend to incline us to accept their contents via the forming of beliefs, because they present the world as being a certain way.
Rationally assenting — in that they make the forming of such beliefs fitting, justifiable, and reasonable.
(Bengson, 2015, pp. 720-723)
Reading tea leaves meets all of these criteria except the last: rationally assenting, which purports to “make the forming of such beliefs fitting, justifiable, and reasonable.” This last one is the one that should stand out. Even if we grant that perceptions make the forming of beliefs on their basis fitting, justifiable, or reasonable, it would beg the question about the epistemic status of intuitions to bake this feature into the characterization of intuitions themselves. Absent an argument that a particular intuition is, in fact, “rationally assenting,” there’s no good reason to build “rationally assenting” into the feeling itself.
If in fact we are defining “intuition” such that it includes its own rational praiseworthiness (to even a slight degree), something funny happens: It is no longer controversial that intuitions provide some measure of justification. It’s right there in the definition!
If you have a baseless, gradable, non-voluntary, compelling presentation that compels the formation of some belief but does not reasonably justify the formation of that belief, then it’s not an “intuition,” so-defined. Hmm, rather convenient, eh?
What might we call something that is like such an intuition except that it does not necessarily adequately rationalize that which it compels? We might call it an “ambi-tuition.” It’s not consciously formed, it can be clear or hazy, it’s not voluntary, it compels toward certain beliefs, but it may or may not do so in a reasonable way.
So we have feelings that are ambi-tuitions, and in any given instance of feeling one, it’s an open question whether that ambi-tuition is an intuition or not. So how do we “close” that question? Well, we have to do some work beyond the feeling (that baseless, fallible, murky, involuntary, compelling-but-maybe-in-a-bad-way feeling), don’t we? Needless to say!
Look what happened when we distinguished between a version of the “intuition” concept that includes its own applauding audience and a version that stays humble in proportion to its baselessness & fallibility (calling it “ambi-tuition” to keep that distinction clear). What a wake-up call: a sudden realization that the experience with which we’re all intimately familiar is that of ambi-tuitions and the lurch in which they leave us until we do more work.
So there we have it! We don’t experience “intuitions” so-defined, because our experiences are of ambi-tuitions, these feelings that have no epistemic grading since they haven’t yet been put under the microscope.
Now, slapping “A+” on an ambi-tuition is something you could do. The most narcissistic and overconfident people likely do this with nearly every ambi-tuition they have. But are such folks our exemplars, or our counterexemplars? (This one’s easy.)
3.0 Cutting the Applause
Neurobiological research tells us over and over again to stop patting ourselves on the back just because we feel inclined toward some belief. Why? Because our brains are full of activity with different fidelity, different tendencies, and different time delays.
For example (and roughly speaking), our vindictive amygdala with blurry vision is faster to the punch than our slower, more acute, “hold your horses” prefrontal cortex. Sometimes the literal punch.
From Robert Sapolsky’s Behave:
When sensory information enters the brain… most is funneled through that sensory way station in the thalamus and then to appropriate cortical regions (e.g., the visual or auditory cortex) for the slow, arduous process of decoding light pixels, sound waves, and so on into something identifiable. And finally information about it (‘It’s Mozart’) is passed to the limbic system.
As we say, there’s that shortcut from the thalamus directly to the amygdala, such that while the first few layers of, say, the visual cortex are futzing around with unpacking a complex image, the amygdala is already thinking, ‘That’s a gun!’ and reacting. And as we say, there’s the trade-off: Information reaches the amygdala fast but is often inaccurate. The amygdala thinks it knows what it’s seeing before the frontal cortex slams on the brakes; an innocent man reaches for his wallet and dies.
And this all gets jerked around by our precommitments, base instincts, and hormones like testosterone:
Endless self-help books urge us to be confident and optimistic. But testosterone makes people overconfident and overly optimistic, with bad consequences. In one study, pairs of subjects could consult each other before making individual choices in a task. Testosterone made subjects more likely to think their opinion was correct and to ignore input from their partner. Testosterone makes people cocky, egocentric, and narcissistic.
Testosterone boosts impulsivity and risk taking, making people do the easier thing when it’s the dumb-ass thing to do. Testosterone does this by decreasing activity in the prefrontal cortex and its functional coupling to the amygdala and increasing amygdaloid coupling with the thalamus — the source of that shortcut path of sensory information into the amygdala. Thus, more influence by split-second, low-accuracy inputs and less by the let’s-stop-and-think-about-this frontal cortex.
Read that last line again real quick. “Let’s stop and think about this.” “I’m going to Google that real quick.” “Is there something I’m missing?” That is the kind of scrutiny to which we (who have serious concerns about making the right calls) subject our dubious ambi-tuitions. And at the far end of that further evaluation, we say (1) we have enough to call it worthy of rational assent — or (2) we don’t. We don’t experience “1” from the get-go if we’re careful people…
4.0 Emergency Bar-Lowering
… Unless we have no other choice.
Under conditions of high pressure, like fleeting time or opportunity, our standards for “enough” get lowered, because the “iron is hot” and the time to strike is brief. Sometimes there’s no time to “look before you leap” because there’s a grizzly bear in hot pursuit.
But what best describes those feelings? Are those instances of the self-congratulatory concept of intuition at play, the one that contains its justification in its own definition? Nah. Those are hopeful resignations, in which we actively abdicate our careful evaluations because careful evaluation is not available (indexed to provisions & restrictions of the “I don’t want to be eaten” kind).
So no, it doesn’t look like we experience “intuitions” so-defined. We experience ambi-tuitions, and then from there, some folks “call it good” because they’re overconfident, some folks say “let’s check that” because they’re careful, and some folks go “ahhhhh I don’t know but I’m gonna jump that chasm ahhhhhh!” because they’re being chased by a grizzly bear.
Are we all on the same page? If so, we’re inoculated against anyone suggesting that this feeling we have automatically confers justification. That’s because:
This feeling we all have does not wear a normative halo. It’s an ambi-tuition at best.
We choose to lend trust/credit to those ambi-tuitions per our standards & tradeoffs, and these hinge upon the circumstances in which we’re making those choices.
We can even meet Talis halfway…
In it being presented to us that P, we have, at least, prima facie justification for believing that P, absent defeaters… we then have a clear answer to how intuitions can provide justification for beliefs.
… by noting that research of the kind cited above provides a blanket defeater for the trustworthiness of knee-jerk instincts, immediate appearances, and uncritical biases. Unless under the gun, a careful person gives these nothing until subjecting them to some measure of justificatory interrogation.
That is, you can just reject prima facie justification because you’re more careful than that; indeed, granting it is nearly a contradiction in terms if your justificatory method, when not in panic mode, always waits a beat in order to put first appearances to the test.
5.0 A Syncretic Opportunity Appears
Now, so far our posture has been adversus Talis, but if we look a little more closely, we may not be so different after all.
Consider this objection Talis notes:
One might worry that if we accept that one’s intuitions can justify forming certain beliefs, that almost anything is on the table. For instance, someone might have the intuition that invisible witches–that cannot be discovered by empirical investigation–exist. Let’s call this intuition W. We would presume that such a belief is unjustified, even if someone has the intuition that it is the case.
Yes, or (to be precise), we’d rather use a justificatory method that doesn’t lend that intuition anything quite yet. Talis continues:
[There are] ways to address this concern. For starters we can inquire about how clear W is, and whether it clashes with any of their other beliefs. Recall that Presentationalism doesn’t say that one must form a belief on the basis of an intuition (clear or not). We are simply inclined towards belief in most cases. But if an intuition sufficiently clashes with other intuitions and beliefs, we will judge that we would be unjustified in forming a belief on the basis of the suspect intuition. This is only one story about how there can be good reason to doubt W, and thus undermine its ability to confer justification. There may very well be all sorts of additional stories to tell that further undermine W.
Now hold on; you need a minute to run that comparison, right? And you need some impetus to perform it? I know what I call that: doubting my first appearances. I have the ambi-tuition, which inclines me to belief, and it would have me believing… were it not for our handy blanket defeater of “first appearances suck.” So instead I give my prefrontal cortex a beat to do its thing, and then I go off and engage in a more rigorous justificatory method, after which some measure of epistemic approval or disapproval is conferred.
Isn’t that what we do with invisible witches? We think so. This is what careful folks do with invisible witches, tea leaves, osteomancy, etc. (Indeed, our hope & dream is that everything we’ve described about our experiences of distrusting first impressions is, dare we say it, intuitive — in a more normie sense of “intuitive,” i.e., “this account makes boring sense, is coherent, and is familiar.”)
But still, at the end of the day, it looks like similar things result from Talis’s example and from our alternative account. Could this be an opportunity to syncretize our framings? Could it be possible to speak the same language? After all, we might have some spooky impression of invisible witches, but after a beat, neither Talis nor we end up believing in them. So, what’s going on here?
In the model at the top (with blue text), intuitions are characterized as having some measure of justification innately, and then other intuitions & beliefs put them to the test, possibly defeating them.
By contrast, in the model at the bottom (with green text), ambi-tuitions (whose “rational” status is left an open question) may incline belief, but they don’t get any approval yet. Why? Because approval and disapproval (to whatever degree) are a function of the act of justifying per some method (including principles, processes, and data).
That method does not have to be intuitive to the person having that initial intuition. The person is free to apply whatever method they wish, which makes any resulting approval provisional. They can try all sorts of methods on for size, even ones they find rather counterintuitive (in whatever sense). The approval or disapproval the method spits out will be “tagged” according to that method (notice the “M”-tag on the ambi-tuition’s new crown).
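For the programmatically inclined, here is a minimal toy sketch of how we read that bottom, green-text model. Everything in it (the names AmbiTuition, Method, and Verdict, and the toy “panic-mode” and “careful” methods) is our own illustrative invention rather than anything drawn from Talis or Bengson: the impression carries no epistemic status of its own, and whatever approval comes out is tagged with the method that issued it.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AmbiTuition:
    """A belief-inclining impression with no built-in epistemic status."""
    content: str    # what the impression represents as being the case
    clarity: float  # gradable: 0.0 (hazy) to 1.0 (vivid)

@dataclass
class Verdict:
    """Approval or disapproval, always tagged with the method that issued it."""
    content: str
    approved: bool
    method_tag: str  # the "M"-tag: justification is relative to a method

@dataclass
class Method:
    """A justificatory method: the principles/process that evaluates ambi-tuitions."""
    name: str
    evaluate: Callable[[AmbiTuition], bool]

def adjudicate(impression: AmbiTuition, method: Method) -> Verdict:
    # Epistemic status is conferred here, by the method, not carried by the impression.
    return Verdict(impression.content, method.evaluate(impression), method.name)

def corroborated(claim: str) -> bool:
    # Stand-in for the further work (checking, googling, weighing other evidence).
    return claim in {"the kettle is whistling"}

# Two toy methods: a "panic mode" method that rubber-stamps first appearances,
# and a "careful" method that demands vividness plus independent corroboration.
panic = Method("panic-mode", lambda imp: True)
careful = Method("careful", lambda imp: imp.clarity > 0.8 and corroborated(imp.content))

witches = AmbiTuition("invisible witches exist", clarity=0.9)
print(adjudicate(witches, panic))    # approved=True,  method_tag='panic-mode'
print(adjudicate(witches, careful))  # approved=False, method_tag='careful'
```

The same impression can earn different, method-tagged verdicts, which is the point of leaving its “rational” status an open question until some method has weighed in.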
This is a really safe framing because it reminds thinkers & writers that if they haven’t articulated their justificatory method, phrases like “be justified” and “is justified” are underspecified. Furthermore, this framing in fact contains the first one: any method that elects to give approval out “for free” to first impressions (like a “panic mode” method) can still be expressed within it — and yet, by carefully relativizing justification to the method by which it was granted, we’re also reminded that we don’t have to grant it that way. How robust! It’s all upside!
But it comes at a cost, at least for the heavier notion of “intuition.” It makes clear that the sense of “intuition” folks are promoting isn’t just conceptually harboring its own self-congratulation; the justification it boasts is also underspecified, and thus the concept itself inherits that underspecification.
And so “intuition,” if given that meaning, has gotta go.
There are a million ways to put this, but one way is just to say that such “intuition” is not a thing. It’s not a thing with coherent meaning, it’s not a thing we experience, and it’s not a thing we need or want in epistemology, ethics, or whatever else. All we need to account for “what’s going on” are ambi-tuitions (belief-inclining, but evaluatively-neutral feelings, impressions, perceptions, etc.) and evaluative methods (which determine epistemic approval or disapproval per that method).
And so those of us who said “It’s gotta go,” however weird & hostile “intuitions aren’t real” may have felt prima facie, turned out to be your pals all along.
Good thing we took a minute to talk it out.
6.0 Epilogue
We used language like “evil clone” and “stowaway” in this piece. Elsewhere we’ve used language like “smuggling” and “trick.” This kind of telic language can help convey the stealthy nature of the problems at play, but this isn’t to suggest that most people are making these moves deliberately. As with the widespread use of telic language when describing evolutionary adaptation, the underlying causes are likely more a matter of selection (the rhetorical success & virulence that stealthy bugs, like underspecification, can foster). Folks then take growing convention as a signal of good practice and become part of the problem, yet all in good faith.
To get even nerdier, check out Herman Cappelen’s Philosophy Without Intuitions. Cappelen puts the term “intuition” through the linguistic & conceptual wringer, exploring the plethora of both “normie” and philosophers’ usages.
While we’ve indulged Talis’s criteria above, we must object to the notion that the criteria he provided reflect “what philosophers mean.” Philosophers mean a lot of different things, and only some of the time do they notice that they mean different things from one another. (It’s kind of a problem.)
Learn about “midstream bivalence,” which makes lending grace to syllogisms with humble, hedged premises a perilous thing to do.
Learn more about how normativity can become conceptual cargo and drive endemic underspecification (a major problem in ethics & epistemology).
Learn more about how philosophers often state that something “is intuitive” without clarifying whose intuitions they’re referring to, and why this is a problem.
Learn more about how reliance on intuitions or “seemings” can foster incorrigibility by providing one with an unlimited private source of evidence that can defeat any defeater, effectively granting one an epistemic blank check.
References
Bengson, J. (2015). The intellectual given. Mind, 124(495), 707-760.
Sapolsky, R. M. (2017). Behave: The biology of humans at our best and worst. Penguin Press.
Note about the authors
This article was coauthored by Lance S. Bush and Stan Patton. Consider subscribing to Stan’s blog “Philosophy Stan” here.