A few days ago I wrote a response to an article about the repugnant conclusion. You can find that remark here, but I’ll quote it in full below:
I appreciate articles like this one but I do think your case may be overstated. You say:
"And, contra what many people seem to think, avoiding the Repugnant Conclusion is everyone’s problem. It is not something that only arises if you are a utilitarian."
It's not everyone's problem. It's not a problem for me.
"The answer seems blindingly obvious: you should hope A is the case! It is preferable to Z!"
Who does it seem blindingly obvious to? It does not seem obvious to me. Neither answer seems obvious to me. I think the thought experiment involves underdescribed and highly complex scenarios that nobody could accurately model. Whatever people are doing, it's a bit like running a very bad simulation; those simulations likely vary between people, and I see no reason to think people are reliably converging on conclusions based on imagining the same sorts of things. I don't think anyone can accurately model either scenario to render any meaningful sort of judgment, and even if they did, I think this would only tell us about that person's psychology; it wouldn't tell us anything about what's true in any deeper sense.
There's a far simpler response to the repugnant conclusion than to try to address it: I simply deny there's anything repugnant in the first place. I don't find it repugnant. And I take whether something is repugnant or not to be a matter of individual psychology. Since I don't find it repugnant, I simply don't think there's a problem to solve.
The author and I had a productive exchange about the matter, but Both Sides Brigade also weighed in, expressing difficulty with knowing quite what I was saying. I probably wasn’t as clear as I could’ve been. Comments aren’t the sort of thing I toil over for hours. This sparked an exchange in which both of us ended up writing quite a lot, culminating in a very long comment from me. A few people seemed to like some of the remarks and someone suggested I turn it into a blog post, so, why not?
On a personal note: I haven’t blogged in over a month. I rarely write personal notes here, but why the hell not? The past month has been a lot. I won’t get into the specifics but it has involved a bunch of trips, crises, and illnesses that have left me unable to sit down and write productively. Sorry about that. I have a few blog posts in the works but with a baby on the way, I am not sure I’ll finish any before my hiatus.
In the meantime, maybe this post will be interesting. I don’t want to spend a lot of time retooling the comment, so here’s what I’ll do. First, I’ll give a link to the start of the exchange. Here it is. BSB said:
It’s hard for me to understand exactly what you’re saying here - are you saying that your ethical framework does endorse the repugnant conclusion as not repugnant at all, or are you saying your ethical framework makes comparing the two situations involved impossible in some way?
Here was my initial response:
I don’t think any particular answer is obvious, intuitive, and so on. I reject the whole notion that, on considering thought experiments like this one, it’s at all a normal reaction to have some kind of impulse towards finding one or another answer “obvious.” One might have a preference for one or another outcome, but that’s about it. And since I don’t think people have the ability to accurately simulate these scenarios to know what they’d be like, I think it’s strange for people to think their judgments on the matter mean much of anything.
I think the intuitive impulses people get in response to these sorts of cases are a learned behavior that results in the pseudopsychological phenomenon of “having an intuition.” There are no such things.
This led to a longer exchange. If you want the full context for the remarks that follow, you may want to go check it out at the link I gave above, which I’ll give again here.
We had a bit of a back and forth that led BSB to write quite a long post, so I won’t requote it, but here it is.
What I’ll do now is repost my response to this, which ended up turning into an essay of sorts. Here we go:
Regarding the matter being an empirical question: It would be helpful for me to understand your perspective if you directly addressed claims (1)-(3). Which of those points (if any) do you disagree with?
When a subject has been discussed for decades and the vast majority of relevant experts near-unanimously report a particular conclusion [...]
…I can stop you right there. I don’t accept that the people in question have any relevant kind of “expertise.” I emphasize relevant expertise here; philosophers have expertise on lots of things but I don’t think they’re experts at reacting to thought experiments, though I’m not sure what kind of relevant expertise you think they have. The vast majority of astrologers, the vast majority of homeopaths, etc. may think this or that, but none of us should take them seriously. What I am questioning is the entire metaphilosophical and methodological framework in which these so-called “experts” are operating, and thus I am directly challenging the notion that they have the kind of expertise needed to render substantive judgments on these matters.
[...] and I see that same conclusion reported in all my personal experiences with non-experts as well
Anecdotes are not good evidence. When you engage with people in these contexts there are all manner of ways in which you could be influencing, biasing, and skewing results. Here are just a few potential issues:
Selection effects. You are more likely to interact with people who will think the same way about these issues, more likely to elicit responses from such people, and so on.
Cultural narrowness. The people you interact with are going to come from a tiny bandwidth of the human population, will tend to be homogeneous, and will tend to be more similar to you than people from other cultures, societies, times, and traditions.
Interviewer effects. People may pick up on your beliefs, attitudes, or expectations, or may respond to your body language, tone of voice, the way you frame questions, and so on. Think of it this way. When a pastor asks someone about their sex life, is that person going to respond in the same way as if their close friend does? Absolutely not. People’s knowledge of or beliefs about who you are and what you expect or think will influence their responses. People have duped the world into thinking horses can do math using cues; the people doing this don’t even necessarily realize they’re doing it. When a person with biases or expectations directly interacts with others, they influence them. This is part of why we do surveys or have researchers blind to condition conduct studies: so we don’t corrupt the data.
Framing effects. When philosophers present issues to people, they often engage in a setup that naturally gives the impression that there’s a legitimate response to be had. They often employ forced choices, frequently dichotomous ones between one of two possibilities: the trolley problem, Mary’s Room, and so on are all instances of this. Ordinary human social behavior is naturally cooperative. It’d be considered weird, even rude, to say “I think this question is nonsense” or “fuck if I know.” The very act of framing questions in these ways can have several effects: it can imply the issue is a legitimate one; it can cause people to respond to shallow, superficial stimuli with shallow, superficial responses; and it can cause a person to feel social pressure to respond in a cooperative way.
Other framing effects: when you present problems to people, wording and other details can influence how people respond in ways that can systematically bias people towards one or another of different responses.
The list goes on. And there are further issues once people do opt to participate in thought experiments. There are issues related to how well people can accurately conceive of these situations, how they interpret them, how to interpret people’s responses, and so on.
I teach a course on the psychology of thought experiments. In the class, we review, discuss, and write about the nature and use of thought experiments. I was already deeply skeptical of their value, but teaching the course has only led me to become even more skeptical. Here are some of the papers:
Bauman, C. W., McGraw, A. P., Bartels, D. M., & Warren, C. (2014). Revisiting external validity: Concerns about trolley problems and other sacrificial dilemmas in moral psychology. Social and Personality Psychology Compass, 8(9), 536-554.
Meier, L. J. (2022). Can thought experiments solve problems of personal identity?. Synthese, 200(3), 221.
Take the latter of these. It begins with a quote from Quine:
“The method of science fiction has its uses in philosophy, but (...) I wonder whether the limits of the method are properly heeded. To seek what is ‘logically required’ for sameness of person under unprecedented circumstances is to suggest that words have some logical force beyond what our past needs have invested them with.”
And here is the abstract:
“Good physical experiments conform to the basic methodological standards of experimental design: they are objective, reliable, and valid. But is this also true of thought experiments? Especially problems of personal identity have engendered hypothetical scenarios that are very distant from the actual world. These imagined situations have been conspicuously ineffective at resolving conflicting intuitions and deciding between the different accounts of personal identity. Using prominent examples from the literature, I argue that this is due to many of these thought experiments not adhering to the methodological standards that guide experimental design in nearly all other disciplines. I also show how empirically unwarranted background assumptions about human physiology render some of the hypothetical scenarios that are employed in the debate about personal identity highly misleading.”
This is just one problem with one thought experiment, though similar issues may generalize. My point here is that there is a veritable galaxy of psychological depth to questions about the function, utility, and value of thought experiments. Why? Because our responses to thought experiments are a type of behavior, and they are the output of human cognitive processes. If we don’t understand what processes are involved and how they work, we are effectively using a tool without knowing what it is or how it works. It just renders our primary philosophical toolbox, our own minds, an unopened black box that we only access via its outputs and not by examining its internal workings. That’s not a way I think we can or should do philosophy, and it is highly vulnerable to producing mountains of baloney.
[...] then, in the absence of any reason to believe otherwise - which so far hasn’t been provided - it seems appropriate to believe that the conclusion is widespread.
…but I have provided reasons to believe otherwise. Please check out this article:
Henrich, J., Heine, S. J., & Norenzayan, A. (2010). The weirdest people in the world? Behavioral and Brain Sciences, 33(2-3), 61-83.
Henrich and colleagues show that people from WEIRD (Western, Educated, Industrialized, Rich, and Democratic) nations tend to be psychologically idiosyncratic compared to the rest of the world’s populations. Not only has almost everyone you’ve encountered probably been from a WEIRD population or heavily influenced by them, you’ve likely encountered an especially unrepresentative and non-systematically-gathered sample of people from WEIRD populations who are probably even more WEIRD than most WEIRD people. Why? Because many of these people are exposed to or interested in analytic philosophy, which is a distinctively Anglophone philosophical tradition. What is the WEIRDest population of all? The Anglophone population. In short, what you have is your memory (prone to error) of a non-representative and unsystematically gathered body of anecdotal information (prone to error), which you may have elicited with skewed, biased, or leading questions (further errors), from an extremely unrepresentative population (which is highly prone to error for any kind of generalization about other populations). I mean no disrespect in saying this, but if you think that it “seems appropriate to believe that the conclusion is widespread” then I would strongly encourage you to learn more about psychology. It may seem appropriate to you, but it isn’t appropriate to well-informed social scientists who understand the pitfalls of such generalizations.
But, with that said, I think you might be interpreting my claim in a way that’s far stronger than I intend. I’m not claiming that opposition to the repugnant conclusion is somehow deeply hardwired in any essential way into every single human being, regardless of culture or education or anything else.
I didn’t think you thought that.
After all, it’s very likely that, in order to understand or engage with the question in the first place, you probably need to have at least some shared foundation in basic concepts related to the modern study of population ethics, which many people on planet Earth don’t have.
Right. And my concern is that it’s not just a matter of understanding them but being inducted into methods, dogmas, and ways of thinking that are misguided and result in confusions, errors, and preoccupations with pseudoproblems. A bit like gurus trying to teach people to see auras.
Rather, I’m just claiming that, for those who attempt the question - that is, for those who are engaging with it as intended in the context of analytic philosophy more broadly - a conclusion in favor of one world is much more common than a conclusion in favor of the other.
If so, then what does it mean to say the “conclusion is widespread”? When you say “I think it’s very hard to deny that many, if not most, people have a fairly immediate and strong reaction that one world is better than the other” what exactly are you saying? That they have such a reaction under distinctive conditions in which they’re taught about population ethics and encouraged to think in terms of the categories, distinctions, methods, and so on of analytic philosophy? What exactly is the story here? My whole point is that people don’t naturally think in terms of these categories and distinctions, and don’t go around having “intuitions” in response to thought experiments in the way analytic philosophers approach them. This is something people have to be taught or induced to do. I think this is a largely made up way of thinking, a bit like a game, that parasitizes our ordinary reasoning processes to engage in intellectual exercises that are largely pointless. Mainstream analytic philosophy is, for the most part, not engaged in an effective pursuit of constructing substantive bodies of knowledge about much of anything.
Otherwise, I don’t think it matters at all whether we’ve agreed on what constitutes a life barely worth living, or if we’re imagining the same thing, or anything like that. And frankly, I’m not sure why you think it would matter.
I do think it matters. We have a bit of a beetle box situation here. When you give people an abstract, underdescribed phrase like “lives barely worth living,” it is so lacking in concrete detail that different people may readily imagine very different things. But it’s worse than that. Just what are these lives like? What do the people living them report about their lives? What would they say if asked? What does the day-to-day experience of such people look like? Is it mostly boring but safe, with few ups or downs, or is it lots of ups and lots of downs?
When you don’t adequately describe scenarios, people fill in the blanks with their own thoughts. Often this can lead to substitution: responding to a different question than the one asked. This is especially likely when the question being asked is difficult. What happens is that we’ll lean on some easier question to answer, and not realize we’re doing so, e.g.: “Which of these worlds would I rather live in?” We don’t know what exactly is going on when people have “strong reactions” or “intuitions” in response to these cases. Are they engaged in some kind of inferential process? If so, is it conscious or nonconscious? A non-inferential process? What is causing it? What are people actually rendering judgments about?
Everyday human judgment and decision-making deals with practical concerns in concrete, contextually rich, embodied situations in which we have access to the full panoply of details. When a detective is trying to identify suspects, when a chef is trying to choose what to put on a menu, when a musician is selecting their next song, they are embedded in broad social contexts in which their problem solving is highly situated. When you abstract away from these situations, you strip your reasoning of all the contextual cues that ordinarily figure into our actual judgments. We must rely, instead, on dubious imaginative capacities. I know being on fire wouldn’t be good, but I don’t know if I’d enjoy a kind of food I’ve never eaten before without any information about it.
Thought experiments operate at a weird nexus of the abstract and the concrete, the stipulated and the underdescribed, and so on. In the case of Mary’s Room, for instance, we’re supposed to imagine she knows “all the physical facts.” What the hell does that mean? How are we supposed to imagine that? I simply deny, outright, that anyone can accurately model what such a scenario would be like. Whatever they are doing, it’s not producing an accurate enough reflection of what this would actually be like to yield a meaningful output.
It’s not like, for instance, asking a chemist what would happen if you combine one chemical with another. Such knowledge has to be acquired. Nobody has the faintest clue what it would be like to know “all the physical facts.” It’s not even clear what that means. If we had complete knowledge of physics, perhaps it’d be obvious then whether knowing all such facts would or wouldn’t provide all the information necessary to judge whether Mary would learn something new. But we just don’t know. The thought experiment is a joke. The same applies to many other thought experiments.
The question is about these particular states themselves, and not what actualizes or instantiates them.
That’s not what’s at issue for me. The issue is that judgments about which state of affairs is better overall than another are hard to draw when one doesn’t know any of the qualitative details. I don’t think it’s so easy to abstract away “good” and “bad” experience apart from the specific ways in which it emerges in actual contexts. There’s a big difference between a bunch of people plugged into an opium drip and people living rich, full lives, even if those lives include pain and suffering and loss. What kind of life is barely worth living? Here are at least two ways this might occur:
A normal human life, with lots of ups, downs, and everything in between. Somehow we’re supposed to judge that, on net, it’s just barely worth living.
A totally abnormal life: We generate conscious beings who experience one second of pleasure then stop existing. Their life was good, but was barely worth living.
I could generate hundreds of examples, all of which vary in the qualitative details about the kinds of lives being lived. Which of these are “barely worth living” and which aren’t? I’m not sure people would agree. And when people are responding to the repugnant conclusion, are they imagining any of these in particular? If so, then they may be making judgments about qualitatively different situations than other people, and this effectively results in them answering a different question than other people. If they’re not imagining any of these specific situations, what are they imagining? Good and bad lives in the abstract? What the hell are those? I don’t even think there is any such thing, or that it makes any sense to think about such things. If people think they can do that, then I think they’re mistaken. The situation we’re actually in is much worse than this though. It’s not clear to me whether people even know what they’re doing, and it’s not as if we know whether people’s responses are all over the map. We don’t even know.
We don’t know much about what’s going on in people’s heads when they respond to these questions, because nobody bothers to ask. And it’s not like people themselves are going to be especially reliable judges, anyway. Most laypeople are not good at introspection, and I doubt philosophers are that much better. It may not even be possible. If we want to crack open the lid, we’re going to need to do cognitive psychology. Introspection and self-reports aren’t going to cut it. I also don’t accept analogies to ordinary decision-making. I don’t grant that because people can make accurate judgments about whether they’d prefer pancakes over waffles, or favor a new law, that they can make sweeping judgments about abstract and bizarre science fiction scenarios in a way that would match what they’d end up judging were they to actually observe such situations.
The GDP/war scenario may or may not accurately reflect what that person would prefer on reflection or on experiencing such situations. But people have knowledge of and prior experience with policies like wars and economic increases. Such judgments are based on real, everyday experiences and information that they already possess or could go do research on. Nobody has a good sense of what a world with a few million very happy people or a world with trillions of lives “barely worth living” would be like. This is imaginative science fiction.
Similarly, if two people say they prefer a hundred lives that are extremely good over a billion that are barely worth living, that’s an important shared judgment regardless of what they take those claims to be describing in any detailed sense
I don’t grant the comparison. I don’t think people are typically in a position to know what they’d prefer about science fiction scenarios. And prefer for what? GDP and wars affect our actual lives. When you’re judging the repugnant conclusion, are you being asked which world you’d live in? Presumably not. Instead people are expected to judge which world would be “better” in some obscure abstract sense that they have no stakes in. This is not the sort of thing people typically make judgments about.
Regarding the distinction between the “states” and “what instantiates them,” I’m denying that typical judgments about how good or bad people’s lives are can be meaningfully reduced to “states” of happiness and sadness. I completely deny there even is any such thing as a commensurable metric or common currency by which we can judge “states” as “good” or “bad” in some abstract sense. There are qualitative differences in the kinds of lives people live, and those qualitative details matter and are a rich and essential part of how I judge which states of affairs I prefer over others. So I reject the presuppositions behind your question. I don’t think it makes any sense to judge these scenarios without the concrete and specific details of what the lives of the people in question are like. I have no clue what the hell a life “barely worth living” is supposed to be. I don’t even think there’s a fact of the matter about such a question. There’s a bunch of different ways you can cash this out, and I’d have different judgments about each. I’d bet most people would, too.

And just what is a life that’s a thousand times better than another life even supposed to be like? What does that look like, in practice? If I’m told it doesn’t matter, just imagine a person whose life is “1000 times better” than another person’s life, no, I don’t think I can. I don’t think anyone can. I don’t think people are readily capable of judging, or in practice tend to judge, the quality of a life according to some quantized metric like this. I don’t think that’s how human experience or happiness or suffering work. I don’t think they admit of discrete quantization, or that it’s easy to imagine a life that’s literally a “thousand times better” than some other life.
I think this idea of mathematizing the human experience is a fanciful armchair farce and that if we were actually in a position to enact the imaginary outcomes population ethicists speculate about, we’d revise our attitudes and totally change what we do the moment our speculations came into contact with reality and we actually acted on our armchair accounts. I think much of population ethics is a ridiculous intellectual game that has nothing to do with how actual policy making can or should be done. Maybe some of it has value. But the kind that has value will tend to be grounded and practically applicable.
The comparison with asking whether you’d like to live with elves or dwarves doesn’t make any sense to me, because in that case the issue would be that the overall goodness or badness of those worlds is obviously underdetermined by the description
It was not the purpose of the example to suggest that the relative happiness level would be underdetermined. It was to illustrate how difficult it is to make judgments when we lack concrete details about the situations in question. I don’t think most people can just abstract “happiness” away from the specific ways in which it manifests; instead, I think meaningful judgments about how good lives are must be grounded in the specific qualitative aspects of those lives. I also think most philosophers would realize this if they weren’t caught up in pantomiming the sciences with the artificial rigor of math.
You’d ask for more details - what is the society like, what level of technology do they have, whatever - because you’d need those answers to determine the overall quality of the life you’d be living. But in the case of the repugnant conclusion, it’s the exact opposite: What you’re given is only the overall quality of the life, and if you already have that, no details should matter.
I don’t think the details and the quality can be separated in this way.
That’s why I still can’t wrap my head around what more information you could possibly want!
I need the concrete and specific details of the lives in question. I don’t just care about happy lives in the abstract. I want to know the specifics.
All the information anyone could ever give would, by stipulation, necessarily align with the original description of the overall quality.
I don’t think the quality of lives is reducible to numeric values, so I don’t think this is the case. Simply stipulating that it’s the case won’t do. I judge whether I prefer one thing or another based on the qualitative specifics of that thing, not on some abstract quantification of “happiness.” I simply would have to know the specific nature of the lives in question to make any meaningful sorts of judgments about them.
It’s sorta like if I asked you to think about a shiny red car you wanted to buy, and you asked what kind of paint the car had and what model it was - the answer would just be “Whatever paint would make it shiny and red, and whatever kind of car would make you want to buy it.”
I don’t think it is like this. “Lives barely worth living” is underspecified and not something I’ve experienced before. I’ve seen shiny red things. I can easily think about shiny red things. And I am already aware that there’s lots of ways to paint cars shiny and red that don’t differ much in their qualitative aspects in ways I care about. Happiness, on the other hand, is not like this. I do not think happiness can be abstracted away from the experiences and circumstances that bring it about. And I am absolutely serious about that. I must know the details. For instance, I would not favor a world with a small number of “extremely happy” people, if this consisted of people being on drugs all day or lying around grinning at the ceiling because they were genetically engineered to be in a state of perpetual catatonic joy.
It’s just not clear to me why anyone would need access to the first-order facts of the situation, whose only role would be to help you determine the qualitative appraisal that’s already been provided.
Because the qualitative appraisal is based not just on the quantitative aspects but also on the other qualitative aspects.
How could observing the situation change their judgments, if the nature of the stipulated situation itself is pegged to ensuring that a particular judgment is given?
Sorry, I am not sure what you are asking here.
It just doesn’t seem to me like anyone could be wrong about judgments like this
There’s a book by Dr. Seuss called Green Eggs and Ham. In the book, Sam-I-Am keeps hassling someone to try green eggs and ham. They keep refusing, insisting they wouldn’t like it. But Sam-I-Am persists, and eventually they try it. It turns out they do like green eggs and ham. They were wrong about their preferences.
People cannot judge in advance whether they’d prefer or like something unless they have the right kinds of prior experiences to compare it to. I am confident I wouldn’t like ice cream made out of broken glass because I know what broken glass is like. But people aren’t in a position to readily conceive of a world with trillions of people with “lives barely worth living,” such that they could render any meaningful sort of judgment about such a scenario. I also don’t think people can readily imagine individual lives being thousands of times better than other lives while all are still “worth living,” whatever that means. And for that matter, what does it even mean for a life to be “worth living”? I literally have no idea what that means. Is it based on my subjective appraisal of the worth of human lives? Is it based on the appraisal of the person living the life? Some objective standard? If so, which one? The situation is hilariously, hopelessly underdescribed, far, far more so than shiny red cars.
And when people experience things that are outside the norms of their experiences, their judgments about whether they’d ultimately prefer or disprefer those things on reflection can and frequently do change. People discover they like food or music or people they thought they didn’t or wouldn’t like. People are routinely wrong about mundane things every day. But somehow they can’t be wrong about whether they’d think a world with lots of people with lives “barely worth living” sucks or not? To invoke Matthew Adelstein: that doesn’t seem very plausible. People frequently revise their assessments of things after experiencing them, even for mundane things. I find the idea that people could lock into and have perfect accuracy on abstract descriptions of science fiction scenarios absurd.
It would be like if I asked someone, “What would you do if someone stole your shoes, but they were a pair you didn’t really like?” and them responding “Hard to say, I might end up really liking them.”
My entire position is predicated on the fact that I deny that it is like this. This is a grounded judgment about actual experiences. Science fiction thought experiments aren’t.
But if I just assert, axiomatically, that I’m discussing a society in which a life would be barely worth living, then the idea of my opinion changing once I experienced that world is literally incoherent to me; I can’t understand how it would be possible, even in theory.
I think you may have misunderstood me. I’m not challenging whether the lives in question are “barely worth living.” I’m challenging whether people are in a position to reliably judge that worlds with lives that are barely worth living are better or worse than worlds with a smaller number of people with higher average happiness, given how underdescribed the scenarios are. You’re welcome to stipulate that one set of lives are “much better” than others and that others are “barely worth living,” but without knowing the specifics of what that looks like in practice, I have no idea what people are even making judgments about, and I don’t think those people know what they’re making judgments about either. There’s no such thing as abstract lives “barely worth living.” There are only actual lives people are living, and what we’d say about those lives if we knew enough about them.
it’s hard to not notice that your approach generally forecloses on the possibility of meaningfully exploring a large number of questions that I am extremely confident other people have made real progress on through the use of other competing frameworks.
This isn’t something you need to pick up on from what I say. It’s something I openly, repeatedly, and explicitly claim: yes, large numbers of the questions you think people have made progress on are, I think, complete nonsense, and all of this alleged progress is a sham. I think many of the smartest people in the world are completely wasting their lives on what amounts to Dennett’s idea of chmess:
Dennett, D. C. (2006). Higher-order truths about chmess. Topoi, 25(1), 39-41.
I don’t think your framework works very well, and for some things, I don’t think it works well at all. That I think this is precisely why I am so animated about all of this. I don’t just think other people are mistaken on the details. I think they are profoundly, hopelessly misguided, because I think they are caught up in metaphilosophical paradigms that are almost entirely dysfunctional.
And that to me is a major point in favor of those other frameworks, especially since the alternative explanation - that the entire field of population ethics is literally meaningless and/or confused - is in direct contradiction with my own first-person experience of reasoning through these questions in productive ways myself.
Christians will tell me that all their experiences with prayer, miracles, witnessing, and so on are in direct contradiction with my atheistic worldview. Members of other religions would say likewise. Maybe they’re right. Maybe they’re not. But such self-assurance isn’t going to move the dial much for me.
If I’m correct that much of what you and other philosophers are doing involves mistakenly thinking you’re making progress when you’re not, of course it’s going to seem that way from the first-person point of view. I’m sure astrologers would tell us that they have first-hand experiences of the efficacy of their practice. They’re still mistaken. My first person experiences could be wildly distorted or mistaken, too. Anyone’s could be.
I guess it’s just hard for me to understand what motivates this skepticism of abstraction, given that I can’t see where it’s actually going wrong, as opposed to just generating conclusions that don’t meet an (imo arbitrary) level of specificity.
There’s a reason I asked you if you’ve looked into the psychology of thought experiments much. It’s hard to present a bunch of studies and reasons to doubt the value of thought experiments in Substack comments, when it’s the sort of thing it would take more than the course I teach to start stitching together a more comprehensive picture of the issue. My skepticism is motivated by the years I’ve spent studying both philosophy and psychology and leveraging the unique virtues of each against the other. Psychology yields so much insight into philosophy it’s insane to me that more philosophers don’t study psychology.
I would at least encourage you to consider what interesting, self-consistent conclusions are possible if you did accept the possibility of abstracting out this way, and whether they might be worth “purchasing” through your own methodology.
I appreciate this kind of concluding remark but I’ve already done this. I spent years going through the same kinds of training in analytic philosophy many other people receive. My conclusions are based on having already tried to do this and coming to reject it. I’m not an outsider to analytic philosophy. I’m an expatriate.
There you have it. There are probably some mistakes in there. Feel free to point them out and I’ll fix them. I must have written this in some kind of fever dream. I’m not feeling well so I’m going to go rest some more. Hopefully I’ll feel better soon and can get back to regular posting.
I accept intuition-based philosophy, but I agree with most of what you say. I am embarrassed that I did not realize the serious flaws you point out in thought experiment methodology, and I am shocked I haven’t seen these problems more prominently discussed in analytic philosophy. Nevertheless, I think intuitions can justify belief in philosophical theories. (I’d construe intuitions as intellectual seemings, though I know you don’t believe in such things.)
Michael Huemer has defended phenomenal conservatism by claiming that alternative epistemologies are self-defeating. He argues that all beliefs are based on seemings (such as intuitions), and if seemings do not justify belief, then no beliefs are justified. I think you’d deny that beliefs are based on seemings because there are no such things as seemings. Elsewhere, you’ve argued that we shouldn’t believe in seemings because they may be ruled out by a completed cognitive science. I think this assumes a realist view of cognitive science, namely, that a completed theory of cognitive science would be true. Since I don’t accept realism about science, I’m not convinced by this argument.
> “It just renders our primary philosophical toolbox, our own minds, an unopened black box that we only access via its outputs and not by examining its internal workings. That’s not a way I think we can or should do philosophy, and it is highly vulnerable to producing mountains of baloney.”
You’re right. Intuitions probably make us believe mountains of baloney because they don’t have the right connections to the world to reliably produce true beliefs. Nevertheless, I offer two justifications for intuition-based philosophy. First, intuitions may provide epistemic justification for belief because there is no alternative. (This is true only if basically all beliefs are based on seemings.) If there is nothing better than intuition to get at metaphysical truth, then it may be reasonable to use our intuitions to try to discover metaphysical truths. I think this holds even if my beliefs based on intuition are true, say, 0.1% of the time. Of course, no one is required to use intuitions, and if you want to be a skeptic about intuition, I have no problem with that. Second, there are pragmatic justifications for intuition-based philosophy. I find using intuitions satisfying because I like holding metaphysical beliefs. It’s not the case that the only purpose of inquiry is avoiding error. Personally, I am willing to take the epistemic risk to form beliefs based on intuitions. I want to believe.
Great post as always! Sad to hear about your personal problems, but congratulations on the baby.