Do most people believe in Geospiritual Relocation?
Forced choice and top-down presumptions in psychological research
1.0 You must choose A or B!
Do most people believe in Geospiritual Relocation? Probably not, because it’s a nonsense idea I invented for the purposes of this post. Nevertheless, I could probably run a study where I gave a brief description of what it was, and secure at least some agreement among participants. Would this mean that these people genuinely endorsed the phenomenon? Maybe. Would it mean that a similar percentage of people in the population that they were drawn from do? I very much doubt it, and I hope you do, too.
That researchers could prompt people to endorse views that they don’t really endorse, or didn’t endorse before participating in the study, strikes me as an overlooked but significant problem for at least some psychological research, especially when it comes to studies that solicit responses to abstract and unfamiliar notions that may not be reflected in ordinary thought. Nevertheless, I keep seeing this line of reasoning in psychological research:
We gave participants a scenario engineered to prompt one of two responses.
We then asked participants to choose between two responses, A or B, using a multiple choice question where they could bubble in only one or the other.
100% of participants chose A or B. It appears participants think the puzzle has one of two solutions, just like we do, and are divided about which one is correct. Furthermore, it appears they’re not ambivalent, conflicted, or drawn to both or neither answer.
This is ridiculous. If you intentionally engineer a scenario that primes people to think of only two responses, then require them to choose one or the other, how on earth does this tell you that everyone thinks in terms of A or B? You made them do this. Imagine you were filling out a survey and you came across this question:
There are only two explanations for why continents have moved apart:
Geospiritual Relocation
God personally directs the plates around to create constant variation in the world’s geography based on the theological principle of Geospiritual Relocation. The gradual movement of the earth’s landmasses may seem like a natural phenomenon, but is in fact subtly directed by God to optimize spiritual opportunities for the people of earth.
Perpetual Inertia
An ancient collision caused a phenomenon known as Perpetual Inertia: once set into motion, the near frictionlessness of the mantle causes plates to move in a process that gradually slows over geological time.
Which is the correct account?
(A) Geospiritual Relocation
(B) Perpetual Inertia
Both of these positions are complete nonsense. Requiring people to choose from these two does not indicate that they sincerely endorse either, nor does it show that if people vary in which they choose that there’s conflict, disagreement, or anything of psychological interest. The problem is that you’re forcing people to choose from options they may not endorse at all. They may:
Endorse a third option
Be uncertain which of the two options is correct, even if they think one of them is
Believe the question is insoluble: there’s no determinate answer, the question is nonsensical, etc.
None of these options are available in forced choice paradigms, and even if a person would be disposed towards one of these, the framing and instructions imply this isn’t an option. This can cause a variety of problems.
First, people often want to be compliant and helpful participants. They may not wish to sabotage your study by not choosing one of the two options, even if they’re not sold on it.
Second, people may choose a response that is a close but not identical match to their own view. This may obscure variation that isn’t reducible to a categorical dichotomy.
Third, the study itself may cause participants to adopt a position they didn’t hold prior to participating in the study. When this occurs, it undermines the generalizability of the study. If your study causes 70% of participants to endorse A, this does not justify the claim that 70% of people in the population the sample was drawn from endorse A.
Fourth, even if your goal was to find out whether people would endorse A or B if you presented them with these as options, such endorsement may be transient. Even if a person appears to genuinely endorse A or B at the time of a study, they may or may not continue to endorse it outside the context of the study. Endorsement may also be misleading in other ways. Saying you endorse one theory over another may involve little more than verbally assenting to its truth when prompted to do so, even if this has no impact on one’s behavior in contexts where a more internalized belief would. This is a general problem for any research, but it’s still worth reiterating.
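The first and third problems above can be made vivid with a toy simulation. The proportions here are invented for illustration: suppose only 30% of a population holds any genuine view (20% endorsing A, 10% endorsing B), while the remaining 70% have no prior opinion and, wanting to be compliant, simply pick an option at random when forced to choose.

```python
import random

random.seed(0)

# Hypothetical population: these proportions are invented for illustration.
N = 10_000
genuine_a = 0.20   # genuinely endorse A before the study
genuine_b = 0.10   # genuinely endorse B before the study
# The remaining 70% hold no prior view; under a forced choice they guess.

choices = []
for _ in range(N):
    r = random.random()
    if r < genuine_a:
        choices.append("A")
    elif r < genuine_a + genuine_b:
        choices.append("B")
    else:
        choices.append(random.choice("AB"))  # compliant participant guesses

observed_a = choices.count("A") / N
print(f"Observed endorsement of A: {observed_a:.0%}")  # ~55%
```

Under these assumptions, the forced choice reports roughly 55% “endorsement” of A even though only 20% of the population genuinely endorses it; the aggregate figure conflates sincere belief with compliant guessing.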
There is a tendency for researchers in at least some subfields that study people’s beliefs and attitudes to operate in a top-down fashion, supposing in advance that people’s attitudes can be, at least to a reasonable approximation, tossed into a discrete set of categories. This wouldn’t be a problem if these researchers had already gathered a lot of observational data and sifted through it, forming hypotheses in an organic, bottom-up way that was based on target populations.
The problem is that sometimes they don’t do this. In experimental philosophy, researchers often take popular theories in academic philosophy, treat them like psychological constructs that have distinct analogs in ordinary thought, then design studies intended to elicit agreement with one or another of these popular categories among the general population…as if people are invariably disposed to implicitly adhere to one or another of these philosophical categories. This does not strike me as a very good idea. We simply don’t know in any given instance whether the categories, labels, and distinctions analytic philosophers use map onto discrete features of ordinary thought.
2.0 Rose et al. and the Ship of Theseus
One actual example of the kind of study I have in mind appears in Rose et al. (2020). Their goal was to explore whether the Ship of Theseus thought experiment could serve a provocative function. Provocative thought experiments elicit ambivalence: those who engage with them feel the pull of two or more answers, and are torn between them, leading to a state of uncertainty and indecisiveness. The thought experiment that they employ presents a scenario along these lines:
A person has a boat. Let’s call the boat “Boaty McBoatface.”
They gradually replace the planks of the boat with new planks over time. We will call this boat, with its replaced planks, Boat A.
They store all of the old planks.
Eventually, they replace all of the original planks in Boat A with new planks.
As a result, all of the original planks are now in storage.
They build a boat, Boat B, out of the old planks.
The question is whether Boat A or Boat B is Boaty McBoatface. This thought experiment could induce ambivalence because you may be torn between various theories about what features of an object are relevant to whether it is the same object. Across all participants, 64% favored the study’s analog to Boat A, with the remainder favoring Boat B. The authors argue that this demonstrates that the Ship of Theseus serves a provocative function. But does it?
It doesn’t appear so. As Campdelacreu et al. (2020) point out, ambivalence is an intrapersonal phenomenon: a person would be ambivalent if they were torn between choosing Boat A or Boat B. But all the study shows is that some people favor Boat A, and some favor Boat B. This only shows interpersonal disagreement. As a result, the study does not support its intended conclusion. Campdelacreu et al. add that if you wanted to test for ambivalence, you’d need a different set of measures. The measures Rose et al. used were only suitable for testing differences between participants, and were not suited to testing whether individuals themselves felt conflicted or indecisive about these scenarios. Arguably, because Rose et al. included a measure of confidence, they may have addressed this to some extent, but this is still not the best means of testing for ambivalence. As Campdelacreu et al. observe:
It might be argued that readers of RMS’s vignette will put two and two together and gauge the potential conflict. That may be right. But RMS include no measure to indicate that this is the case, nor an acknowledgment that they are counting on readers making the connections. More importantly, RMS do not allow readers who have gauged the conflict, and feel intrapersonal ambivalence, to express it. The reason is that readers of their vignette have only two options: they have to choose the reassembled ship or the gradually replaced one. But for the reader to be able to express intrapersonal ambivalence, options such as “both,” “neither” and “I do not know” should be offered as possible answers as well. (p. 556, emphasis mine)
Note that such options wouldn’t even necessarily reflect ambivalence. They could reflect indifference or confusion as well. All such responses are ruled out by design when participants are presented with a forced choice. This can give the misleading impression that such reactions are absent from such studies.
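The gap between interpersonal disagreement and intrapersonal ambivalence can be sketched with another toy simulation. Assume, purely for illustration, two hypothetical populations that would yield the same 64/36 split on a one-shot forced choice: one made of confident camps with fixed views, and one in which every individual is ambivalent and effectively flips a weighted coin each time they answer. A single forced-choice administration cannot tell these apart, but a repeated measure could:

```python
import random

random.seed(1)
N = 10_000

def ask_twice(p_confident):
    """Each simulated participant answers the same forced choice twice.
    p_confident: fraction of people with a fixed answer; the rest are
    ambivalent and flip a weighted coin (P(A) = 0.64) on each occasion.
    Returns the proportion who gave the same answer both times."""
    consistent = 0
    for _ in range(N):
        if random.random() < p_confident:
            a1 = a2 = "A" if random.random() < 0.64 else "B"  # fixed view
        else:
            a1 = "A" if random.random() < 0.64 else "B"  # fresh coin flip
            a2 = "A" if random.random() < 0.64 else "B"  # fresh coin flip
        consistent += (a1 == a2)
    return consistent / N

print(ask_twice(1.0))  # confident camps: within-person consistency = 1.0
print(ask_twice(0.0))  # all ambivalent: consistency ≈ 0.64² + 0.36² ≈ 0.54
```

Both populations produce roughly the same aggregate split, which is why a forced-choice percentage alone cannot license conclusions about ambivalence; only measures that look within individuals, such as test-retest consistency, confidence ratings, or explicit “both/neither/don’t know” options, can separate the two.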
These remarks also indicate that some researchers are aware of these issues. Yet I worry these problems persist. These concerns only became apparent and prompted a response because the original study claimed to address ambivalence but doesn’t appear to have done so. What about all the studies out there that don’t explicitly claim to be testing for ambivalence (or confusion, or indifference, or a commitment to an option that isn’t possible to express, etc.)?
3.0 Conclusion
I worry that these top-down approaches generalize to other areas of psychology. And I worry that, as a result, many studies channel participants down prearranged pathways that reinforce top-down presumptions on the part of researchers, presumptions that never arose from observational or descriptive research on the target population in the first place. Patterns in data are then taken at face value as indications of “individual differences” with respect to the target measures, when they may in fact reflect individual differences in the strategies people take to respond to studies that require them to answer weird and unfamiliar questions using a set of options that don’t actually reflect what they think.
References
Campdelacreu, M., García-Moya, R., Martí, G., & Terrone, E. (2020). How to test the Ship of Theseus. dialectica, 74(3), 551-559.
Rose, D., Machery, E., Stich, S., Alai, M., Angelucci, A., Berniūnas, R., ... & Grinberg, M. (2020). The ship of Theseus puzzle. In T. Lombrozo, J. Knobe, & S. Nichols (Eds.), Oxford studies in experimental philosophy (Vol. 3, pp. 158-174). Oxford, UK: Oxford University Press.