Tuesday, December 12, 2017

Dreidel: A seemingly foolish game that contains the moral world in miniature

[Also appearing in today's LA Times. Happy first night of Hanukkah!]

Superficially, dreidel appears to be a simple game of luck, and a badly designed game at that. It lacks balance, clarity, and (apparently) meaningful strategic choice. From this perspective, its prominence in the modern Hanukkah tradition is puzzling. Why encourage children to spend a holy evening gambling, of all things?

This perspective misses the brilliance of dreidel. Dreidel's seeming flaws are exactly its virtues. Dreidel is the moral world in miniature.

For readers unfamiliar with the game, here's a tutorial. You sit in a circle with friends or relatives and take turns spinning a wobbly top, the dreidel. In the center of the circle is a pot of several foil-wrapped chocolate coins, to which everyone has contributed from an initial stake of coins they keep in front of them. If, on your turn, the four-sided top lands on the Hebrew letter gimmel, you take the whole pot and everyone needs to contribute again. If it lands on hey, you take half the pot. If it lands on nun, nothing happens. If it lands on shin, you put one coin in. Then the next player takes a spin.
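
If it helps to see the mechanics laid out, here is a toy sketch of a single round in Python. Everything in it is a simplifying assumption of the sketch rather than official doctrine: the dreidel is fair, coin sizes are ignored (just as the rules ignore them), the ante is one coin, and a half-pot rounds down.

```python
import random

LETTERS = ["gimmel", "hey", "nun", "shin"]

def take_turn(player, pot, stakes, ante=1):
    """Spin once and apply the outcomes described above.

    stakes maps each player to their pile of coins; coin sizes are ignored,
    just as the rules ignore them.
    """
    letter = random.choice(LETTERS)      # toy assumption: a fair dreidel
    if letter == "gimmel":               # take the whole pot; everyone antes again
        stakes[player] += pot
        pot = 0
        for p in stakes:
            stakes[p] -= ante
            pot += ante
    elif letter == "hey":                # take half the pot (rounding down --
        taken = pot // 2                 # one of the contested house rules)
        stakes[player] += taken
        pot -= taken
    elif letter == "shin":               # put one coin in
        stakes[player] -= 1
        pot += 1
    # nun: nothing happens
    return letter, pot

# Example: four players with five coins each, after a one-coin ante into the pot.
stakes = {name: 5 for name in ["Ada", "Ben", "Cam", "Dot"]}
pot = 4
for name in stakes:
    letter, pot = take_turn(name, pot, stakes)
    print(name, "spun", letter, "| pot is now", pot)
```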

It all sounds very straightforward, until you actually start to play the game.

The first odd thing you might notice is that although some of the coins are big and others are little, each counts as just one coin under the rules of the game. This is unfair, since the big coins contain more chocolate, and you get to eat your stash at the end.

To compound the unfairness, there is never just one dreidel -- each player may bring her own -- and the dreidels are often biased, favoring different outcomes. (To test this, a few years ago my daughter and I spun a sample of eight dreidels 40 times each, recording the outcomes. One particularly cursed dreidel landed on shin an incredible 27 out of 40 spins.) It matters a lot which dreidel you spin.
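
For anyone who wants to check the arithmetic behind calling a dreidel "cursed," the relevant quantity is a binomial tail probability: how often would a fair dreidel, landing on shin a quarter of the time, produce 27 or more shins in 40 spins? Here is a minimal standard-library sketch; the counts are from our kitchen-table test, and the fairness assumption is the thing being tested.

```python
from math import comb

def binomial_tail(k, n, p):
    """P(X >= k) when X ~ Binomial(n, p): the chance of at least k successes."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# How likely is 27 or more shins in 40 spins if shin truly comes up 1/4 of the time?
p_value = binomial_tail(27, 40, 0.25)
print(f"P(at least 27 shins in 40 spins of a fair dreidel) = {p_value:.2e}")
```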

And the rules are a mess! No one agrees whether you should round up or round down with hey. No one agrees when the game should end or how low you should let the pot get before you all have to contribute again. No one agrees how many coins to start with or whether you should let someone borrow coins if he runs out. You could try to appeal to various authorities on the internet, but in my experience people prefer to argue and employ varying house rules. Some people hoard their coins and favorite dreidels. Others share dreidels but not coins. Some people slowly unwrap and eat their coins while playing, then beg and borrow from wealthy neighbors when their luck sours.

Now you can, if you want, always push things to your advantage — always contribute the smallest coins in your stash, always withdraw the largest coins in the pot when you spin hey, insist on always using what seems to be the "best" dreidel, always argue for rule interpretations in your favor, eat your big coins and use that as a further excuse to contribute only little ones, et cetera. You could do all of this without ever breaking the rules, and you'd probably end up with the most chocolate as a result.

But here's the twist, and what makes the game so brilliant: The chocolate isn't very good. After eating a few coins, the pleasure gained from further coins is minimal. As a result, almost all of the children learn that they would rather enjoy being kind and generous than hoard the most coins. The pleasure of the chocolate doesn't outweigh the yucky feeling of being a stingy, argumentative jerk. After a few turns of maybe pushing only small coins into the pot, you decide you should put a big coin in next time, just to be fair to others and to enjoy being perceived as fair by them.

Of course, it also feels bad always to be the most generous one, always to put in big, take out small, always to let others win the rules arguments, and so forth, to play the sucker or self-sacrificing saint.

Dreidel, then, is a practical lesson in discovering the value of fairness both to oneself and to others, in a context where the rules are unclear and where there are norm violations that aren't rules violations, and where both norms and rules are negotiable, varying from occasion to occasion. Just like life itself, only with mediocre chocolate at stake. I can imagine no better way to spend a holy evening.

Friday, December 08, 2017

Women Have Been Earning 30-34% of Philosophy BAs in the U.S. Since Approximately Forever*

* for values of "forever" ≤ 30 years.

The National Center for Education Statistics has data on the gender of virtually all Bachelor's degree recipients in the U.S. back into the 1980s, publicly available through the IPEDS database. For Philosophy, the earliest available data cover the 1986-1987 academic year. [For methodological details, see note 1].

The percentage of Philosophy Bachelor's degrees awarded to women has been remarkably constant over time -- a pattern not characteristic of other majors, many of which have shown at least a modest increase in the percentage of women since 1987. In the 1986-1987 academic year, women received 33.6% of Philosophy BAs. In the most recent available year, 2015-2016 (preliminary data), it was 33.7%. Throughout the period, the percentage never strays outside the band between 29.9% and 33.7%.
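
For anyone who wants to recompute percentages like these, the calculation is simple once the data are in a tidy form. The sketch below assumes a hypothetical CSV with columns year, gender, and degrees -- not the actual IPEDS export format, which takes some additional wrangling (see Note 1 for what was included).

```python
import pandas as pd

# Hypothetical tidy extract of the IPEDS completions data: one row per
# year x gender, with the number of Philosophy BAs awarded. (The actual
# IPEDS download is wider and messier than this.)
completions = pd.read_csv("philosophy_ba_completions.csv")  # columns: year, gender, degrees

by_year = completions.pivot_table(index="year", columns="gender",
                                  values="degrees", aggfunc="sum")
# Assumes the gender column takes the values "women" and "men".
by_year["pct_women"] = 100 * by_year["women"] / (by_year["women"] + by_year["men"])
print(by_year["pct_women"].round(1))
```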

I have plotted the trends in the graph below, with Philosophy as the fat red line, including a few other disciplines for comparison: English, History, Psychology, the Biological Sciences, and the Physical Sciences. The fat black line represents all Bachelor's degrees awarded.

Philosophy is the lowest of these, unsurprisingly to those of us who have followed gender issues in the discipline. (It is not the lowest overall, however: Some of the physical science and engineering majors are as low or lower.) To me, more striking and newsworthy is the flatness of the line.

I also thought it might be worth comparing high-prestige research universities (Carnegie classification: Doctoral Universities, Highest Research Activity) with colleges that have much more of a teaching focus (Carnegie classification: Baccalaureate Colleges, Arts & Sciences Focus or Diverse Fields).

Women were a slightly lower percentage of Philosophy BA recipients in the research universities than in the teaching-focused colleges (30% vs. 35%; and yes, p < .001). However, the trends over time were still approximately flat in both groups.
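
For readers who want to reproduce that sort of comparison, a chi-square test on the two-by-two table of counts is one standard way to get a p-value. The counts below are placeholders chosen only to illustrate a 30% vs. 35% split, not the actual IPEDS tallies.

```python
from scipy.stats import chi2_contingency

# Placeholder (women, men) counts -- substitute the actual IPEDS tallies.
research_universities = [3000, 7000]    # hypothetical split giving 30% women
teaching_colleges     = [1750, 3250]    # hypothetical split giving 35% women

chi2, p, dof, expected = chi2_contingency([research_universities, teaching_colleges])
print(f"chi-square = {chi2:.1f}, p = {p:.2g}")
```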

For kicks, I thought I'd also check whether my home state of California was any different -- since we'll be seceding from the rest of the U.S. soon (JK!). Nope. Again, a flat line, with women receiving 33% of Philosophy BAs overall.

Presumably, if we went back to the 1960s or 1970s, a higher percentage of philosophy majors would be men. But whatever cultural changes there have been in U.S. society in general and in the discipline of philosophy in particular in the past 30 years haven't moved the dial much on the gender ratio of the philosophy major.

[Thanks to Mike Williams at NCES for help in figuring out how to use the database.]

-----------------------------------------

Note 1: I looked at all U.S. institutions in the IPEDS database, and I included both first and second majors. Before the 2000-2001 academic year, only first major is recorded. I used the major classification 38.01 specifically for Philosophy, excluding 38.00, 38.02, and 38.99. Only people who complete the degree are included in the data. Although gender data are available back to 1980, Philosophy and Religious Studies majors are merged from 1980-1986.
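
Translated into code, those filters amount to something like the following sketch; the file and column names are hypothetical stand-ins rather than the real IPEDS variable names.

```python
import pandas as pd

# Hypothetical tidy layout of the completions file: one row per institution x
# year x major slot, with a CIP code, an award level, and completer counts by
# gender. Column names here are stand-ins, not the real IPEDS variable names.
df = pd.read_csv("ipeds_completions.csv")

philosophy_bas = df[
    (df["cip_code"] == "38.01")            # Philosophy only; excludes 38.00, 38.02, 38.99
    & (df["award_level"] == "bachelors")   # completed Bachelor's degrees only
]                                          # both first and second majors are kept
totals = philosophy_bas.groupby("year")[["women", "men"]].sum()
```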

Friday, December 01, 2017

Aiming for Moral Mediocrity

I've been working on this essay off and on for years, "Aiming for Moral Mediocrity". I think I've finally pounded it into circulating shape and I'm ready for feedback.

I have an empirical thesis and a normative thesis. The empirical thesis is that most people aim to be morally mediocre. They aim to be about as morally good as their peers, not especially better, not especially worse. This mediocrity has two aspects. It is peer-relative rather than absolute, and it is middling rather than extreme. We do not aim to be good, or non-bad, or to act permissibly rather than impermissibly, by fixed moral standards. Rather, we notice the typical behavior of people we regard as our peers and we aim to behave broadly within that range. We -- most of us -- look around, notice how others are acting, then calibrate toward so-so.

This empirical thesis is, I think, plausible on the face of it. It also receives some support from two recent subliteratures in social psychology and behavioral economics.

One is the literature on following the (im-)moral crowd. I'm thinking especially of the work of Robert B. Cialdini and Cristina Bicchieri. Cialdini argues that "injunctive norms" (that is, social or moral admonitions) most effectively promote norm-compliant behavior when they align with "descriptive norms" (that is, facts about how people actually behave). People are less likely to litter when they see others being neat, more likely to reuse their hotel towels when they learn that others also do so, and more likely to reduce their household energy consumption when they see that they are using more than their neighbors. Bicchieri argues that people are more likely to be selfish in "dictator games" when they are led to believe that earlier participants were mostly selfish, and that convincing communities to adopt new health practices, such as family planning and indoor toilet use, typically requires persuading people that their neighbors will also comply. It appears that people are more likely to abide by social or moral norms if they believe that others are also doing so.

The other relevant literature concerns moral self-licensing. A number of studies suggest that after performing a good act, people are likely to behave less morally well than after performing a bad or neutral act. For example, after having done something good for the environment, people might tend to make more selfish choices in a dictator game. Even just recalling recent ethical behavior might reduce people's intentions to donate blood, money, and time. The idea is that people are more motivated to behave well when their previous bad behavior is salient and less motivated to behave well when their previous good behavior is salient. They appear to calibrate toward some middle state.

One alternative hypothesis is that people aim not for mediocrity but rather for something better than that, though short of sainthood. Phenomenologically, that might be how it seems to people. Most people think that they are somewhat above average in moral traits like honesty and fairness (Tappin and McKay 2017); and maybe then people mostly think that they should more or less stay the course. An eminent ethicist once told me he was aiming for a moral "B+". However, I suspect that most of us who like to think of ourselves as aiming for substantially above-average moral goodness aren't really willing to put in the work and sacrifice required. A close examination of how we actually calibrate our behavior will reveal us wiggling and veering toward a lower target. (Compare the undergraduate who says they're "aiming for B+" in a class but who wouldn't be willing to put in more work if they received a C on the first exam. It's probably better to say that they are hoping for a B+ than that they are aiming for one.)


My normative thesis is that it's morally mediocre to aim for moral mediocrity. Generally speaking, it's somewhat morally bad, but not terribly bad, to aim for the moral middle.

In defending this view, I'm mostly concerned to rebut the charge that it's perfectly morally fine to aim for mediocrity. Two common excuses, which I think wither upon critical scrutiny, are the Happy Coincidence Defense and The-Most-You-Can-Do Sweet Spot. The Happy Coincidence Defense is an attractive rationalization strategy that attempts to justify doing what you prefer to do by arguing that it's also for the moral best -- for example, that taking this expensive vacation now is really the morally best choice because you owe it to your family, and it will refresh you for your very important work, and.... The-Most-You-Can-Do Sweet Spot is a similarly attractive rationalization strategy that relies on the idea that if you tried to be any morally better than you in fact are, you would end up being morally worse -- because you would collapse along the way, maybe, or you would become sanctimonious and intolerant, or you would lose the energy and joie de vivre on which your good deeds depend, or.... Of course it can sometimes be true that by Happy Coincidence your preferences align with the moral best or that you are already precisely in The-Most-You-Can-Do Sweet Spot. But this reasoning is suspicious when deployed repeatedly to justify otherwise seemingly mediocre moral choices.

Another normative objection is the Fairness Objection, which I discussed on the blog last month. Since (by stipulation) most of your peers aren't making the sacrifices necessary for peer-relative moral excellence, it's unfair for you to be blamed for also declining to make such sacrifices. If the average person in your financial condition gives X% to charity, for example, it would be unfair to blame you for not giving more. If your colleagues down the hall cheat, shirk, lie, and flake X amount of the time, it's only fair that you should get to do the same.

The simplest response to the Fairness Objection is to appeal to absolute moral standards. Although some norms are peer-relative, so that they become morally optional if most of your peers fail to comply with them, other norms aren't like that. A Nazi death camp guard is wrong to kill Jews even if that is normal behavior among his peers. More moderately, sexism, racism, ableism, elitism, and so forth are wrong and blameworthy, even if they are common among your peers (though blame is probably also partly mitigated if you are less biased than average). If you're an insurance adjuster who denies or slow-walks important health benefits on shaky grounds because you guess the person won't sue, the fact that other insurance adjusters might do the same in your place is again at best only partly mitigating. It would likely be unfair to blame you more than your peers are blamed; but if you violate absolute moral standards you deserve some blame, regardless of your peers' behavior.

-----------------------------------------

Full length version of the paper here.

As always, comments welcome either by email to me or in the comments field of this post. Please don't feel obliged to read the full paper before commenting, if you have thoughts based on the summary arguments in this post.

[Note: Somehow my final round of revisions on this post was lost and an old version was posted. The current version has been revised in an attempt to recover the lost changes.]