In the absence of sustained normative moral theorizing, people nevertheless have attitudes about what they want the world to be like: they’re against stealing, they favor one policy over another, they think kindness is good, and so on.
Once people begin engaging in normative moral philosophy, they are inducted into a long tradition that hosts a variety of prepackaged templates: utilitarianism, deontology, and so on, each with its distinctive proponents and the views, arguments, and thought experiments that have accrued over the years. This strikes me as a kind of idiosyncratic and culturally contingent “subculture of thought.”
People inducted into this body of literature are encouraged to systematize and identify more fundamental principles behind their various moral views. Often this is depicted as a matter of discovery. From a realist perspective, one may be attempting to sort out which normative moral theory is correct. But even from an antirealist perspective, one may come to believe one’s moral values are underwritten by an implicit adherence to one or another normative moral theory. What ties all my moral views together? Perhaps it is utilitarianism! Perhaps threshold deontology! Maybe virtue ethics does the best job of describing how I truly think, on reflection. Even for the antirealist, normative moral theorizing may be seen as a process of self-discovery.
I worry that in all of these cases, there are no deeper principles. Not only are there no stance-independent moral truths for us to discover, there is no hidden, principled core to our own normative moral values. I suspect, instead, that what we are observing amounts to a kind of culturally constructed game: one is given the rules of the game of contemporary analytic normative ethics, then encouraged to pick a team. I suspect analytic normative moral philosophy is even less a matter of discovery than choosing which Hogwarts house most suits you. And if it seems like a frivolous waste of time to decide whether you’re a Hufflepuff or a Gryffindor, perhaps there is something even more pointless about deciding whether you’re a deontologist or a utilitarian. At least people are unlikely to engage in a protracted process of reflection that fundamentally changes how they think when they opt for Ravenclaw. I am far less certain that the process of coming to believe utilitarianism is correct, or of “discovering” that you’re a utilitarian, is so benign. Hogwarts house selection is transparently frivolous. Moral theorizing at least has the pretense of being a serious enterprise. And if it actually functions as a means of inducing changes in our ways of thinking, and if those changes result from a semi-arbitrary engagement with a culturally idiosyncratic intellectual game, I worry that moral theorizing may do more harm than good.
I'd agree that in most cases it's a game; I'm not sure it's frivolous. A lot of intellectual activity is somewhat game-like. How would it be harmful? I think deciding one is, for example, a utilitarian (or at least a consequentialist of some sort) can lead, for some people (people who engage in those intellectual exercises for a reason -- e.g. to inform their actions or life plans), to meaningful decisions about, in very broad terms, "how to live their lives", and from that come specific rules or heuristics or acts. For others it will be mostly a rationalisation exercise (of the "what best fits what I already do" type) or pure "fun" (without any consequences for how they act). Either way, where are the possible harms?
[Standard caveat: not at all a philosopher here]
I think the game part comes not from discovering or developing the rules of a given moral theory, so much as from applying (i.e. playing by) them.