Academic philosophy is currently facing a challenge: there don’t seem to be enough people available to review papers submitted to journals. Over on Daily Nous, there are some interesting proposals for how to address this problem; you can find those here. I have no objections to these proposals. Philosophers should be working on ways to improve things. But I worry that they are trying to sew patches onto rags. I’m unconvinced that the current academic publishing industry, with its overreliance on judging people’s contributions by how many papers they’ve published in “top” journals, is a good thing. One commenter stated that:
if you get something in Mind, for example, it kind of speaks for itself.
I said the following in response:
Does it? I would’ve thought what should matter is whether what you wrote is insightful and contributes something to the field. Getting published in a “top journal” is at best only an indirect indicator of this. I’m not sure it’s best for us to outsource our assessment of the quality of work to journals on the basis of their prestige.
One response agreed that direct assessment would be ideal when we have the “time and energy” to do it and the relevant expertise. But in other contexts, such as needing to quickly evaluate hundreds of candidates for a position, it may be better to outsource our assessments to journals. I agree. In these contexts, reading an entire body of work may literally not be an option. If you have 200 applicants to get through, two days to do it, and reading each person’s work would take 4-5 hours, that’s 800-1,000 hours of reading; you would be physically incapable of directly evaluating the quality of everyone’s work.
However, if we’re going to rely on heuristics, what should those heuristics be? Currently, my impression is that philosophers use something like “number of publications in top journals” as a very important criterion. But I have doubts about its effectiveness, and, more importantly, I worry that reliance on it isn’t confined to evaluating job applicants. I said this in response:
Let’s say you need to assess the quality of hundreds of people in a few days. How did philosophers figure out that “Number of publications in the following list of specified prestigious journals” was the best heuristic for evaluating the competence and quality of philosophers? Did they carefully weigh this method of evaluation against other heuristics? Do we have empirical evidence supporting its efficacy? On what basis was this means of evaluation established?
And if you do use that method, it would be one thing if it were confined to those specific contexts. A necessary evil: sometimes we have to use heuristics. But “number of papers in top journals” becomes a target, an axis around which entire academic careers rotate. And when that happens, the measure is no longer merely a convenient tool for sorting through applicants; it becomes the goal of aspiring academics, and performance against it determines whether they rise, fall, or remain in the field at all.
The stakes here are much higher than the mere need to review applicants quickly. The entire field gravitates around this metric. And rather than seeing philosophers seriously question whether this is a good thing and whether we need more fundamental changes, I see most people impatient with, or uninterested in, the question of whether the problems philosophy faces are far more foundational than a shortage of referees. I am not at all convinced that the need to quickly review applicants justifies placing so much of the weight of success on journal publications. People can be excellent lecturers, or mentors, or bloggers. People may write great textbooks, or syllabi, or be great debaters or produce amazing videos. Should we be downplaying all the many other ways someone may do good philosophy just because sorting through applicants without a convenient heuristic would be really cumbersome?
My biggest concern is Goodhart's law: that when a measure becomes a target, it ceases to be a good measure. If the entire field of academic philosophy rewards people almost exclusively on the basis of how many papers they publish in top journals, then aspiring and successful philosophers will be those who write whatever it is that gets them published in these top journals. More importantly, aspiring philosophers will know this in advance, and they will focus on developing the skills and knowledge to get papers published in top journals.
In other words, when people become aware that a specific metric is used to evaluate their success, they don’t organically do whatever they think is best and let themselves be passively evaluated according to that metric; they shift their focus, interests, attitudes, behavior, writing style, and practices around that metric. People often complain that when students know their performance will be evaluated by a test, they study for the test instead of learning organically. How much of a problem this is in any given context will likely vary. Maybe it isn’t a problem. Maybe it is. But the problem for academic philosophers is that it could be a problem and nobody is bothering to check. One would think that philosophers, of all people, would worry about these sorts of things; that they wouldn’t want to set up standards that incentivize people to think and write and do philosophy in order to pass the test.
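To see the mechanism in miniature, here’s a toy simulation. Everything in it is invented for illustration (the effort split, the weights, the uniform distributions); it’s a sketch of the incentive structure, not a model of any real data. Each philosopher divides a fixed budget of effort between genuine insight and optimizing for the metric itself, and we track how well the metric continues to reflect actual quality as the field diverts more effort toward the metric:

```python
# Toy Goodhart simulation. All parameters are made up for illustration.
import random

random.seed(42)

def pearson(xs, ys):
    """Plain Pearson correlation, no external dependencies."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def simulate(target_share, n=10_000):
    """target_share: fraction of effort the field diverts from doing
    good philosophy toward optimizing the metric itself
    (0 = the metric is a passive measurement,
     1 = careers are organized entirely around gaming it)."""
    qualities, metrics = [], []
    for _ in range(n):
        talent = random.random()   # capacity for genuine insight
        savvy = random.random()    # skill at matching journal conventions
        quality = talent * (1 - target_share)          # real contribution
        metric = 0.5 * quality + savvy * target_share  # what gets rewarded
        qualities.append(quality)
        metrics.append(metric)
    return pearson(qualities, metrics), sum(qualities) / n

for share in (0.0, 0.25, 0.5, 0.75):
    r, avg_quality = simulate(share)
    print(f"targeting {share:.0%}: metric-quality correlation {r:.2f}, "
          f"mean quality {avg_quality:.2f}")
```

The particular numbers don’t matter. What matters is that, in this toy setup, the correlation between the metric and quality collapses (and average quality falls) without anyone being dishonest; it only requires that effort responds to incentives.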
To illustrate how this could be disastrous in principle, imagine if a specific group of philosophers, let’s say Randian Objectivists, took over all evaluation for publication and jobs. Even if they did their absolute best to be open-minded, what would you expect in upcoming generations of philosophers?
Here’s what I’d expect: a dramatic shift towards people espousing pro-Objectivist views; remaining critics who are sympathetic to Rand’s work, or who at least take it more seriously; and an exodus of those who found the whole idea of Objectivists being in charge of everything revolting, or simply uninteresting. There would be a titanic paradigm shift in the field.
Is the current situation that dire? No. Of course not. But this example illustrates in extreme form what is very likely happening in a more subtle, relaxed form: reinforcement of the status quo.
Are “top” journals perfect, unbiased repositories of whatever happens to be good? I doubt it. There will be established conventions, norms, and expectations about what’s good or bad that are reinforced by editors and peers, and these may serve to stifle innovative or unconventional approaches, or approaches that don’t buy into mainstream methodological presuppositions, stylistic conventions, and other internalized norms and expectations.
Imagine, for instance, that artists established art evaluation clubs that, for decades, judged art by the standards of their editors and reviewers. When you submit your art, how is it going to be evaluated? By the standards of those editors and reviewers. Are they going to be good judges of art? Probably, yes. But they’re also going to bring with them all their narrow, entrenched preconceptions about what kind of art is good or bad. If your art isn’t fashionable or doesn’t appeal to them personally, it can be ignored even if the public would love it. Even if it were beautiful. Even if it were amazing. And over time, this can cause young artists to adapt their style to what they know will receive praise and yield success.
The same issue is going on in the sciences. Scientific research relies on funding. You need to pitch your research to funders. Do people simply pitch projects based on what they are passionate about? No. They pitch projects based on what they expect to get funded to do:
Researchers may fit their research to match funder ideas and programs at the expense of own [sic] interest and ideas, which can imply impoverishment of science [...] (Meirmans, 2024, p. 6)
Studies do find that incentives influence the direction of research:
Our analysis, which estimates the similarity of the grantees’ research focus before, during and after a grant, suggests that scientists acquiring thematic funding alter their research interests more than comparable scientists funded through responsive-mode schemes. (Madsen & Nielsen, 2024)
Unfortunately, this shift in research “interests” can stifle creativity. Meirmans (2024) reports that:
Researchers across all 11 groups submitted many comments (61) on whether and how funding impacts novelty and risk in science. The comments were predominantly negative. Many expressed that while funders often aim to fund innovative and risky projects, the opposite typically happens. One Dutch researcher commented that the “rhetoric of innovation and breakthrough” does not reflect how most funding is awarded in practice (hum jun, NL). The reason for this is that research projects are designed to be funded, not designed towards what researchers themselves would consider to be novel ideas, and to be creative and original science [...] (p. 6)
Meirmans quotes this researcher, who states that:
in principle, good effort to support the best science, but the measures of success are in favour of ‘productive’ science, not necessarily creative science [...] the competitive system only works for ideas and methodologies that are well established, well known, not for ideas and methodologies that are new and really original [...] (as quoted in Meirmans, 2024, p. 6)
More generally, by placing so much emphasis on metrics (citation counts, which journals you publish in, and so on), academics face growing pressure to produce publishable papers, and publishable may or may not line up with good. Even if the published papers are good, that still isn’t necessarily a good thing. Does that seem paradoxical? It shouldn’t. Suppose we required all academics to write papers only on Nietzsche. We might get a lot of good papers, but we’d have a field with a range of publications so narrow that most topics would receive little or no coverage. It’s not enough that the papers we publish are good; the goal of publication isn’t just “publish as many good papers as possible.” What we need is a diverse and innovative range of papers on a plurality of distinct topics, with unique perspectives and unconventional contributions that push the field forward. Is this achieved by the present publish-or-perish dynamic? It’s hard for me to see how it could be.
If you set up a system in which a person doesn’t publish papers because they are great, but is considered great because of where they publish, you create a perverse incentive to write whatever will get one published in those journals. Even if it’s true that what gets published in these journals tends to be of better quality than what doesn’t, these journals aren’t serving as perfect and unbiased arbiters of anything and everything that is “good.” Precedent will determine what people will and won’t publish.
Let me give one more example. On a show like MasterChef, one’s food is judged by three chefs. Are these chefs good judges? Yes. But like all humans, they have preferences and biases. Successful contestants in later seasons will typically have carefully studied earlier competitions and the distinctive likes and dislikes of the chefs themselves. To win, they adapt their cooking accordingly: what they cook, which techniques they develop, and how they flavor their food. While this can and does result in good cooking, it also results in narrow cooking.
The exact same applies to academic philosophy. If editors and reviewers publish papers on specific topics, or with a specific tone, or that give off the right vibe by employing the expected jargon and ways of framing things, then they are not judging purely on whether a paper is good so much as whether it is good-according-to-a-familiar-set-of-internalized-expectations-and-norms. And as these norms are acted on, and papers that conform to them and signal the appropriate features are published, the very standards of what counts as good or bad are reinforced and entrenched: what’s good isn’t anything good, it’s good-in-the-way-previous-papers-in-the-journal-are-good. After all, one of the primary ways philosophers come to distinguish good papers from bad ones is to read papers in the top journals. “Good” papers become those that look like these papers. That read the same, that sound the same, that tickle that same sense of familiarity.
Whenever standards or metrics become too entrenched, they influence how people in the field comport themselves, and this leads to a self-reinforcing cycle, which in turn produces a narrow, dogmatic, and difficult-to-escape conception of quality that can stifle innovation and suppress unconventional ideas.
These concerns are not totally speculative. The sciences are rife with concerns, and with empirical data supporting those concerns, that institutional pressures can and do influence what kind of science gets done, often to the detriment of scientists and society:
indicators which denoted of prestige and competition were generally rated as important to career advancement, but irrelevant or even detrimental in advancing science [...]
Some comments mentioned that publications are a better indicator of the status and resources of a laboratory than they are of the “actual research capabilities” of researchers [...] (Aubert Bonn & Pinxten, 2021)
I don’t think philosophers appreciate that devising a narrow, exclusive set of standards for judging the quality of philosophical contributions can lead to a degree of ideological and stylistic capture: filtering not just for quality but for conformity to various articulable (and in some cases inscrutable) criteria that aren’t good indicators of the kind of philosophy that can advance human knowledge or improve our lives, but that happen to appeal to the most entrenched paradigms and ways of viewing the world.
In short, I suspect that evaluating philosophers by the number of their publications in top journals incentivizes everyone to conform to the field’s conventions, thereby reinforcing the status quo. And it’s hard for me to see how that’s a good thing.
References
Aubert Bonn, N., & Pinxten, W. (2021). Advancing science or advancing careers? Researchers’ opinions on success indicators. PLoS One, 16(2), e0243664.
Madsen, E. B., & Nielsen, M. W. (2024). Do thematic funding instruments lead researchers in new directions? Strategic funding priorities and topic switching among British grant recipients. Research Evaluation, rvae015.
Meirmans, S. (2024). How Competition for Funding Impacts Scientific Practice: Building Pre-fab Houses but no Cathedrals. Science and Engineering Ethics, 30(1), 6.
Will getting rid of peer review or publish-or-perish fix this problem? Suppose that “top” journals are biased towards, say... analytic philosophy, framing metaphysical claims in grounding terms, and standpoint epistemology. Now suppose we abolished all of that, so philosophers had their work evaluated solely on its content. Even so, wouldn’t we continue to see papers that use metaphysical grounding or standpoint epistemology privileged because they’re seen as trendier or more in tune with the field? The trends and popular ideas are rooted in a general sentiment that doesn’t depend on journals to persist; instead, the journals bias that way *because* of the sentiment.