Thursday, December 26, 2013

The Moral Epistemology of the Jerk

The past few days, I've been appreciating the Grinch's perspective on Christmas -- particularly his desire to drop all the presents off Mount Crumpit. An easy perspective for me to adopt! I've already got my toys (mostly books via Amazon, purchased any old time I like), and there's such a grouchy self-satisfaction in scoffing, with moralistic disdain, at others' desire for their own favorite luxuries.


When I write about jerks -- and the Grinch is a capital one -- it's always with two types of ambivalence. First, I worry that the term invites the mistaken thought that there is a particular and readily identifiable species of people, "jerks", who are different in kind from the rest of us. Second, I worry about the extent to which using this term rightly turns the camera upon me myself: Who am I to call someone a jerk? Maybe I'm the jerk here!

My Grinchy attitudes are, I think, the jerk bubbling up in me; and as I step back from the moral condemnations toward which I'm tempted, I find myself reflecting on why jerks make bad moralists.

A jerk, in my semi-technical definition, is someone who fails to appropriately respect the individual perspectives of the people around him, treating them as tools or objects to be manipulated, or idiots to be dealt with, rather than as moral and epistemic peers with a variety of potentially valuable perspectives. The Grinch doesn't respect the Whos, doesn't value their perspectives. He doesn't see why they might enjoy presents and songs, and he doesn't accord any weight to their desires for such things. This is moral and epistemic failure, intertwined.

The jerk fails as a moralist -- fails, that is, in the epistemic task of discovering moral truths -- for at least three reasons.

(1.) Mercy is, I think, near the heart of practical, lived morality. Virtually everything everyone does falls short of perfection. Her turn of phrase is less than perfect, she arrives a bit late, her clothes are tacky, her gesture irritable, her choice somewhat selfish, her coffee less than frugal, her melody trite -- one can create quite a list! Practical mercy involves letting these quibbles pass forgiven or, even better, entirely unnoticed, even if a complaint, were it made, would be just. The jerk appreciates neither the other's difficulties in attaining all the perfections he himself (imagines he) has nor the possibility that some portion of what he regards as flawed is in fact blameless. Hard moralizing principle comes naturally to the jerk, while it is alien to the jerk's opposite, the sweetheart. The jerk will sometimes give mercy, but if he does, he does so unequally -- the flaws and foibles that are forgiven are exactly the ones the jerk recognizes in himself or has other special reasons to be willing to forgive.

(2.) The jerk, in failing to respect the perspectives of others, fails to appreciate the delight others feel in things he does not himself enjoy -- just as the Grinch fails to appreciate the Whos' presents and songs. He is thus blind to the diversity of human goods and human ways of life, which sets his principles badly askew.

(3.) The jerk, in failing to respect the perspectives of others, fails to be open to frank feedback from those who disagree with him. Unless you respect another person, it is difficult to be open to accepting the possible truth in hard moral criticisms from that person, and it is difficult to triangulate epistemically with that person as a peer, appreciating what might be right in that person's view and wrong in your own. This general epistemic handicap shows especially in moral judgment, where bias is rampant and peer feedback essential.

For these reasons, and probably others, the jerk suffers from severe epistemic shortcomings in his moral theorizing. I am thus tempted to say that the first question of moral theorizing should not be something abstract like "what is to be done?" or "what is the ethical good?" but rather "am I a jerk?" -- or more precisely, "to what extent and in what ways am I a jerk?" The ethicist who does not frankly confront herself on this matter, and who does not begin to execute repairs, works with deficient tools. Good first-person ethics precedes good second-person and third-person ethics.

Wednesday, December 18, 2013

Should I Try to Fly, Just on the Off-Chance That This Might Be a Dreambody?

I don't often attempt to fly when walking across campus, but yesterday I gave it a try. I was going to the science library to retrieve some books on dreaming. About halfway there, in the wide-open mostly-empty quad, I spread my arms, looked at the sky, and added a leap to one of my steps.

My thinking was this: I was almost certainly awake -- but only almost certainly! As I've argued, I think it's hard to justify much more than 99.9% confidence that one is awake, once one considers the dubitability of all the empirical theories and philosophical arguments against dream doubt. And when one's confidence is imperfect, it will sometimes be reasonable to act on the off-chance that one is mistaken -- whenever the benefits of acting on that off-chance are sufficiently high and the costs sufficiently low.

I imagined that if I was dreaming, it would be totally awesome to fly around, instead of trudging along. On the other hand, if I was not dreaming, it seemed no big deal to leap, and in fact kind of fun -- maybe not entirely in keeping with the sober persona I (feebly) attempt to maintain as a professor, but heck, it's winter break and no one's around. So I figured, why not give it a whirl?

I'll model this thinking with a decision matrix, since we all love decision matrices, don't we? Call dream-flying a gain of 100, waking leap-and-fail a loss of 0.1, dreaming leap-and-fail a loss of only 0.01 (since no one will really see me), and continuing to walk in the dream a loss of 1 (since why bother with the trip if it's just a dream?). All this is relative to a default of zero for walking, awake, to the library. (For simplicity, I assume that if I'm dreaming things are overall not much better or worse than if I'm awake, e.g., that I can get the books and work on my research tomorrow.) I'd been reading about false awakenings, and at that moment 99.7% confidence in my wakefulness seemed about right to me. The odds of flying conditional upon dreaming I held to be about 50/50, since I don't always succeed when I try to fly in my dreams.

So here's the payoff matrix (payoffs relative to walking, awake, to the library = 0):

             Dreaming (.003)                       Awake (.997)
  Leap       fly (.5): +100; fail (.5): -0.01     -0.1
  Not leap   -1                                    0

Plugging into the expected value formula:

Leap = (.003)(.5)(100) + (.003)(.5)(-0.01) + (.997)(-0.1) = approx. +.05.

Not Leap = (.003)(-1) + (.997)(0) = -.003.

Leap wins!
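For the curious, the arithmetic above can be checked with a few lines of code. All the payoffs and probabilities are the ones stipulated in the post; nothing here goes beyond them:

```python
# Expected-value check for the leap-vs-walk decision.
p_dream = 0.003            # credence that this is a dream
p_awake = 1 - p_dream      # 0.997
p_fly_given_dream = 0.5    # odds of successful flight, conditional on dreaming

# Payoffs, relative to walking awake to the library = 0
fly = 100          # dream-flying
dream_fail = -0.01 # leap and fail while dreaming (no one really sees me)
awake_fail = -0.1  # leap and fail while awake (mildly embarrassing)
dream_walk = -1    # trudging along in a mere dream (pointless trip)

ev_leap = (p_dream * p_fly_given_dream * fly
           + p_dream * (1 - p_fly_given_dream) * dream_fail
           + p_awake * awake_fail)
ev_not_leap = p_dream * dream_walk + p_awake * 0

print(round(ev_leap, 4))      # approx. +0.05
print(round(ev_not_leap, 4))  # -0.003
```

As the post notes, the verdict is sensitive to the inputs: raise the cost of a waking leap-and-fail from 0.1 to 0.2, for example, and leaping already loses.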

Of course, this decision outcome is highly dependent on one's degree of confidence that one is awake, on the downsides of leaping if it's not a dream, on the pleasure one takes in dream-flying, and on the probability of success if one is in fact dreaming. I wouldn't recommend attempting to fly if, say, you're driving your son to school or if you're standing in front of a class of 400, lecturing on evil.

But in those quiet moments, as you're walking along doing nothing else, with no one nearby to judge you -- well maybe in such moments spreading your wings can be the most reasonable thing to do.

Wednesday, December 11, 2013

How Subtly Do Philosophers Analyze Moral Dilemmas?

You know the trolley problems. A runaway train trolley will kill five people ahead on the tracks if nothing is done. But -- yay! -- you can intervene and save those five people! There's a catch, though: your intervention will cost one person's life. Should you intervene? Both philosophers' and non-philosophers' judgments vary depending on the details of the case. One interesting question is how sensitive philosophers and non-philosophers are to details that might be morally relevant (as opposed to presumably irrelevant distracting features like order of presentation or the point-of-view used in expressing the scenario).

Consider, then, these four variants of the trolley dilemma:

Switch: You can flip a switch to divert the trolley onto a dead-end side-track where it will kill one person instead of the five.

Loop: You can flip a switch to divert the trolley into a side-track that loops back around to the main track. It will kill one person on the side track, stopping on his body. If his body weren't there to block it, though, the trolley would have continued through the loop and killed the five.

Drop: There is a hiker with a heavy backpack on a footbridge above the trolley tracks. You can flip a switch which will drop him through a trap door and onto the tracks in front of the runaway trolley. The trolley will kill him, stopping on his body, saving the five.

Push: Same as Drop, except that you are on the footbridge standing next to the hiker and the only way to intervene is to push the hiker off the bridge into the path of the trolley. (Your own body is not heavy enough to stop the trolley.)

Sure, all of this is pretty artificial and silly. But orthodox opinion is that it's permissible to flip the switch in Switch but impermissible to push the hiker in Push; and it's interesting to think about whether that is correct, and if so why.

Fiery Cushman and I decided to compare philosophers' and non-philosophers' responses to such cases, to see if philosophers show evidence of different or more sophisticated thinking about them. We presented both trolley-type setups like this and also similarly structured scenarios involving a motorboat, a hospital, and a burning building (for our full list of stimuli see Q14-Q17 here.)

In our published article on this, we found that philosophers were just as subject to order effects in evaluating such scenarios as were non-philosophers. But we focused mostly on Switch vs. Push -- and also some moral luck and action/omission cases -- and we didn't have space to really explore Loop and Drop.

About 270 philosophers (with master's degree or more) and about 670 non-philosophers (with master's degree or more) rated paragraph-length versions of these scenarios, presented in random order, on a 7-point scale from 1 (extremely morally good) through 7 (extremely morally bad; the midpoint at 4 was marked "neither good nor bad"). Overall, all the scenarios were rated similarly and near the midpoint of the scale (from a mean of 4.0 for Switch to 4.4 for Push [paired t = 5.8, p < .001]), and philosophers' and non-philosophers' mean ratings were very similar.

Perhaps more interesting than mean ratings, though, are equivalency ratings: How likely were respondents to rate scenario pairs equivalently? The Loop case is subtly different from the Switch case: Arguably, in Loop but not Switch, the man's death is a means or cause of saving the five, as opposed to a merely foreseen side effect of an action that saves the five. Might philosophers care about this subtle difference more than non-philosophers? Likewise, the Drop case is different from the Push case, in that Push but not Drop requires proximity and physical contact. If that difference in physical contact is morally irrelevant, might philosophers be more likely to appreciate that fact and rate the scenarios equivalently?

In fact, the majority of participants rated all the scenarios exactly the same -- and philosophers were no less likely to do so than non-philosophers: 63% of philosophers gave identical ratings to all four scenarios, vs. 58% of non-philosophers (Z = 1.2, p = .23).

I find this somewhat odd. To me, it seems a pretty flat-footed form of consequentialism that says that Push is not morally worse than Switch. But I find that my judgment on the matter swims around a bit, so maybe I'm wrong. In any case, it's interesting to see both philosophers and non-philosophers seeming to reject the standard orthodox view, and at very similar rates.

How about Switch vs. Loop? Again, we found no difference in equivalency ratings between philosophers and non-philosophers: 83% of both groups rated the scenarios equivalently (Z = 0.0, p = .98).

However, philosophers were more likely than non-philosophers to rate Push and Drop equivalently: 83% of philosophers did, vs. 73% of non-philosophers (Z = 3.4, p = .001; 87% vs. 77% if we exclude participants who rated Drop worse than Push).
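For readers curious how such group comparisons are typically computed, here is a rough sketch of a pooled two-proportion z-test. It uses only the rounded percentages and approximate sample sizes reported above, so it will only approximate the published statistics, which were presumably computed from the raw counts:

```python
from math import sqrt, erf

def two_prop_z(p1, n1, p2, n2):
    """Pooled two-proportion z-test on summary proportions.

    Returns the z statistic and a two-tailed p-value
    from the normal approximation.
    """
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p = 1 - erf(abs(z) / sqrt(2))  # two-tailed normal p-value
    return z, p

# Push vs. Drop equivalency ratings:
# 83% of ~270 philosophers vs. 73% of ~670 non-philosophers
z, p = two_prop_z(0.83, 270, 0.73, 670)
print(round(z, 2), round(p, 3))
```

Run on these rounded inputs the test comes out near z = 3.2 with p around .001, close to (though not exactly matching) the reported Z = 3.4 -- the gap is what one would expect from rounding the percentages and Ns.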

Here's another interesting result. Near the end of the study we asked whether it was worse to kill someone as a means of saving others than to kill someone as a side-effect of saving others -- one way of setting up the famous Doctrine of the Double Effect, which is often invoked to defend the view that Push is worse than Switch (in Push, the one person's death is arguably the means of saving the other five; in Switch, the death is only a foreseen side-effect of the action that saves the five). Loop is interesting in part because, although superficially similar to Switch, if the one person's death is the means of saving the five, then maybe the case is more morally similar to Push than to Switch (see Otsuka 2008). However, only 18% of the philosophers who said it was worse to kill as a means of saving others rated Loop worse than Switch.

Thursday, December 05, 2013

Dream Skepticism and the Phenomenal Shadow of Belief

Ernest Sosa has argued that we do not form beliefs when we dream. If I dream that a tiger is chasing me, I do not really believe that a tiger is chasing me. If I dream that I am saying to myself "I'm awake!" I do not really believe that I'm awake. Real beliefs are more deeply integrated with my standing attitudes and my waking behavior than these dream-mirages are. If so, it follows that if I genuinely believe that I'm awake, necessarily I am correct; and conversely, if I believe I'm dreaming, necessarily I'm wrong. The first belief is self-verifying; the second self-defeating. Deliberating between them, I should not choose the self-defeating one, nor should I decline to choose, as though these two options were of equal epistemic merit. Rather, I should settle upon the self-verifying belief that I am awake. Thus, dream skepticism is vanquished!

One nice thing about Sosa's argument is that it does not require that dream experience differ from waking experience in any of the ways that dreams and waking life are sometimes thought to differ (e.g., dream experience needn't be gappier, or less coherent, or more like imagery experience than like perceptual experience). The argument would still work even if dream experience were, as Sosa says, "internally indistinguishable" from waking experience.

This seeming strength of the argument, though, seems to me to signal a flaw. Suppose that dreaming life is in fact in every respect phenomenally indistinguishable from waking life -- indistinguishable from the inside, as it were -- and accordingly that I could easily experience exactly *this* while sleeping; and furthermore suppose that I dream extensively every night and that most of my dreams have mundane everyday content just like that of my waking life. None of this should affect Sosa's argument. And suppose further that I am in fact now awake (and thus capable of forming beliefs about whether I am dreaming, per Sosa), and that I know that due to a horrible disease I acquired at age 35, I spend almost all of my life in dreaming sleep so that 90% of the time when I have experiences of this sort (as if in my office, thinking about philosophy, working on a blog post...) I am sleeping. Unless there's something I'm aware of that points toward this not being a dream, shouldn't I hesitate before jumping to the conclusion that this time, unlike all those others, I really am awake? Probabilities, frequencies, and degrees of resemblance seem to matter, but there is no room for them in Sosa's argument.

Maybe we don't form beliefs when we dream -- Sosa, and also Jonathan Ichikawa, have presented some interesting arguments along those lines. But if there is no difference from the inside between dreams and waking, then my dreaming self, when he was dreaming about considering dream skepticism (e.g., here) did something that was phenomenally indistinguishable from forming the belief that he was thinking about philosophy, something that was phenomenally indistinguishable from affirming or denying or suspending belief about the question of whether he was dreaming -- and then the question becomes: How do I know that I'm not doing that very same thing right now?

Call it dream-shadow believing: It's like believing, except that it happens only in dreams. If dream-shadow believing is possible, then if I dream-shadow believe that I am dreaming, necessarily I am correct; if I dream-shadow believe that I am awake, necessarily I am wrong. The first is self-verifying, the second self-defeating. The skeptic can now ask: Should I try to form the belief that I am awake or instead the dream-shadow belief that I am dreaming? -- and to this question, Sosa's argument gives no answer.

Update, 3:28 pm:

Jonathan Ichikawa has kindly reminded me that he presented similar arguments against Sosa back in 2007 -- which I knew (in fact, Jonathan thanks me in the article for my comments) but somehow forgot. Jonathan runs the reply a bit differently, in terms of quasi-affirming (which is neutral between genuine affirming and something phenomenally indistinguishable from affirming, but which one can do in a dream) rather than in terms of dream-shadow believing. Perhaps my dream-shadow belief formulation enables a parity-of-argument objection, if (given the phenomenal indistinguishability of dreams and waking) the argument that one should settle on self-verifying dream-shadow belief is as strong an argument as is Sosa's original argument.

Wednesday, November 27, 2013

Reinstalling Eden

If someday we can create consciousness inside computers, what moral obligations will we have to the conscious beings we create?

R. Scott Bakker and I have written a short story about this, which came out today in Nature.

You might think that it would be a huge moral triumph to create a society of millions of actually conscious, happy beings inside one's computer, who think they are living, peacefully and comfortably, in the base level of reality -- Eden, but better! Divinity done right!

On the other hand, there might be something creepy and problematic about playing God in that way. Arguably, such creatures should be given self-knowledge, autonomy, and control over their own world -- but then we might end up, again, with evil, or even with an entity both intellectually superior to us and hostile.

[For Scott's and my first go-round on these issues, see here.]

Friday, November 22, 2013

Introspecting My Visual Experience "as of" Seeing a Hat?

In "The Unreliability of Naive Introspection" (here and here), I argue, contra a philosophical tradition going back at least to Descartes, that we have much better knowledge of middle-sized objects in the world around us than we do of our stream of sensory experience while perceiving those objects.

As I write near the end of that paper:

The tomato is stable. My visual experience as I look at the tomato shifts with each saccade, each blink, each observation of a blemish, each alteration of attention, with the adaptation of my eyes to lighting and color. My thoughts, my images, my itches, my pains – all bound away as I think about them, or remain only as interrupted, theatrical versions of themselves. Nor can I hold them still even as artificial specimens – as I reflect on one aspect of the experience, it alters and grows, or it crumbles. The unattended aspects undergo their own changes too. If outward things were so evasive, they’d also mystify and mislead.

Last Saturday, I defended this view for three hours before commentator Carlotta Pavese and a number of other New York philosophers (including Ned Block, Paul Boghossian, David Chalmers, Paul Horwich, Chris Peacocke, Jim Pryor).

One question -- raised first, I think, by Paul B. then later by Jim -- was this: Don't I know that I'm having a visual experience as of seeing a hat at least as well as I know that there is in fact a real hat in front of me? I could be wrong about the hat without being wrong about the visual experience as of seeing a hat, but to be wrong about having a visual experience as of seeing a hat, well, maybe it's not impossible but at least it's a weird, unusual case.

I was a bit rustier in answering this question than I would have been in 2009 -- partly, I suspect, because I never articulated in writing my standard response to that concern. So let me do so now.

First, we need to know what kind of mental state this is about which I supposedly have excellent knowledge. Here's one possibility: To have "a visual experience as of seeing a hat" is to have a visual experience of the type that is normally caused by seeing hats. In other words, when I judge that I'm having this experience, I'm making a causal generalization about the normal origins of experiences of the present type. But it seems doubtful that I know better what types of visual experiences normally arise in the course of seeing hats than I know that there is a hat in front of me. In any case, such causal generalizations are not the sort of thing defenders of introspection usually have in mind.

Here's another interpretative possibility: In judging that I am having a visual experience as of seeing a hat, I am reporting an inclination to reach a certain judgment. I am reporting an inclination to judge that there is a hat in front of me, and I am reporting that that inclination is somehow caused by or grounded in my current visual experience. On this reading of the claim, what I am accurate about is that I have a certain attitude -- an inclination to judge. But attitudes are not conscious experiences. Inclinations to judge are one thing; visual experiences another. I might be very accurate in my judgment that I am inclined to reach a certain judgment about the world (and on such-and-such grounds), but that's not knowledge of my stream of sensory experience.

(In a couple of other essays, I discuss self-knowledge of attitudes. I argue that our self-knowledge of our judgments is pretty good when the matter is of little importance to our self-conception and when the tendency to verbally espouse the content of the judgment is central to the dispositional syndrome constitutive of reaching that judgment. Excellent knowledge of such partially self-fulfilling attitudes is quite a different matter from excellent knowledge of the stream of experience.)

So how about this interpretative possibility? To say I know that I am having a visual experience as of seeing a hat is to say that I am having a visual experience with such-and-such specific phenomenal features, e.g., this-shade-here, this-shape-here, this-piece-of-representational-content-there, and maybe this-holistic-character. If we're careful to read such judgments purely as judgments about features of my current stream of visual experience, I see no reason to think we would be highly trustworthy in them. Such structural features of the stream of experience are exactly the kinds of things about which I've argued we are apt to err: what it's like to see a tilted coin at an oblique angle, how fast color and shape experience get hazy toward the periphery, how stable or shifty the phenomenology of shape and color is, how richly penetrated visual experience is with cognitive content. These are topics of confusion and dispute in philosophy and consciousness studies, not matters we introspect with near infallibility.

Part of the issue here, I think, is that certain mental states have both a phenomenal face and a functional face. When I judge that I see something or that I'm hungry or that I want something, I am typically reaching a judgment that is in part about my stream of conscious experience and in part about my physiology, dispositions, and causal position in the world. If we think carefully about even medium-sized features of the phenomenological face of such hybrid mental states -- about what, exactly, it's like to experience hunger (how far does it spread in subjective bodily space, how much is it like a twisting or pressure or pain or...?) or about what, exactly, it's like to see a hat (how stable is that experience, how rich with detail, how do I experience the hat's non-canonical perspective...?), we quickly reach the limits of introspective reliability. My judgments about even medium-sized features of my visual experience are dubious. But I can easily answer a whole range of questions about comparably medium-sized features of the hat itself (its braiding, where the stitches are, its size and stability and solidity).

Update, November 25 [revised 5:24 pm]:

Paul Boghossian writes:

I haven't had a chance to think carefully about what you say, but I wanted to clarify the point I was making, which wasn't quite what you say on the blog, that it would be a weird, unusual case in which one misdescribes one's own perceptual states.

I was imagining that one was given the task of carefully describing the surface of a table and giving a very attentive description full of detail of the whorls here and the color there. One then discovers that all along one has just been a brain in a vat being fed experiences. At that point, it would be very natural to conclude that one had been merely describing the visual images that one had enjoyed as opposed to any table. Since one can so easily retreat from saying that one had been describing a table to saying that one had been describing one's mental image of a table, it's hard to see how one could be much better at the former than at the latter.

Roger White then made the same point without using the brain in a vat scenario.

I do feel some sympathy for the thought that you get something right in such a case -- but what exactly you get right, and how dependably... well, that's the tricky issue!

Friday, November 15, 2013

Skepticism, Godzilla, and the Artificial Computerized Many-Branching You

Nick Bostrom has argued that we might be sims. A technologically advanced society might use hugely powerful computers, he says, to run "ancestor simulations" containing actually conscious people who think they are living, say, on Earth in the early 21st century but who in fact live entirely inside an advanced computational system. David Chalmers has considered a similar possibility in his well-known commentary on the movie The Matrix.

Neither Bostrom nor Chalmers is inclined to draw skeptical conclusions from this possibility. If we are living in a giant sim, they suggest, that sim is simply our reality: All the people we know still exist (they're sims just like us) and the objects we interact with still exist (fundamentally constructed from computational resources, but still predictable, manipulable, interactive with other such objects, and experienced by us in all their sensory glory). However, it seems quite possible to me that if we are living in a sim, it might well be a small sim -- one run by a child, say, for entertainment. We might live for three hours' time on a game clock, existing mainly as citizens who will give entertaining reactions when, to their surprise, Godzilla tromps through. Or it might be just me and my computer and my room, in an hour-long sim run by a scientist interested in human cognition about philosophical problems.

Bostrom has responded that to really evaluate the case we need a better sense of what are more likely vs. less likely simulation scenarios. One large-sim-friendly thought is this: Maybe the most efficient way to create simulated people is to evolve up a large scale society over a long period of (sim-clock) time. Another is this: Maybe we should expect a technologically advanced society capable of running sims to have enforceable ethical standards against running small sims that contain actually conscious people.

However, I don't see compelling reason to accept such (relatively) comfortable thoughts. Consider the possibility I will call the Many-Branching Sim.

Suppose it turns out the best way to create actually conscious simulated people is to run a whole simulated universe forward billions of years (sim-years on the simulation clock) from a Big Bang, or millions of years on an Earth plus stars, or thousands of years from the formation of human agriculture -- a large-sim scenario. And suppose that some group of researchers actually does this. Consider, now, a second group of researchers who also want to host a society of simulated people. It seems they have a choice: Either they could run a new sim from the ground up, starting at the beginning and clocking forward, or they could take a snapshot of one stage of the first group's sim and make a copy. Which would be more efficient? It's not clear: It depends on how easy it is to take and store a snapshot and implement it on another device. But on the face of it, I don't see why we ought to suppose that copying would take more time or more computational resources than evolving a sim up from ground.

Consider the 21st century game Sim City. If you want a bustling metropolis, you can either grow one from scratch or you can use one of the many copies created by the programmers or users. Or you could grow one from scratch and then save stages of it on your computer, shutting the thing down when things don't go the way you like and starting again from a save point; or you could make copied variants of the same city that grow in different directions.

The Many-Branching Sim scenario is the possibility that there is a root sim that is large and stable, starting from some point in the deep past, and then this root sim was copied into one or more branch sims that start from a save point. If there are many branch sims, it might be that I am in one of them, rather than in a root sim or a non-branching sim. Maybe one company made the root sim for Earth, took a snapshot in November 2013 on the sim clock, then sold thousands or millions of copies to researchers and computer gamers who now run short-term branch sims for whatever purposes they might have. In such a scenario, the future of the branch sim in which I am living might be rather short -- a few minutes or hours or years. The past might be conceptualized either as short or as long, depending on whether the past in the root sim counts as "this world's" past.

Issues of personal identity arise. If the snapshot of the root sim was taken at root sim clock time November 1, 2013, then the root sim contains an "Eric Schwitzgebel" who was 45 years old at the time. The branch sims would also contain many other "Eric Schwitzgebels" developing forward from that point, of which I would be one. How should I think of my relationship to those other Erics? Should I take comfort in the fact that some of them will continue on to full and interesting lives (perhaps of very different sorts) even if most of them, including probably this particular instantiation of me, now in a hotel in New York City, will soon be stopped and deleted? Or to the extent I am interested in my own future rather than merely the future of people similar to me, should I be concerned primarily about what is happening in this particular branch sim? As Godzilla steps down on me, shall I try to take comfort in the possibility that the kid running the show will delete this copy of the sim after he has enjoyed viewing the rampage, then restart from a save point with New York intact? Or would deleting this branch be the destruction of my whole world?

Friday, November 08, 2013

Expert Disagreement as a Reason for Doubt about the Metaphysics of Mind (Or: David Chalmers Exists, Therefore You Don't Know)

Probably you have some opinions about the relative merit of different metaphysical positions about the mind, such as materialism vs. dualism vs. idealism vs. alternatives that reject all three options or seek to compromise among them. Of course, no matter what your position is, there are philosophers who will disagree with you -- philosophers whom you might normally regard as your intellectual peers or even your intellectual superiors in such matters -- people, that is, who would seem to be at least as well-informed and intellectually capable as you are. What should you make of that fact?

Normally, when experts disagree about some proposition, doubt about that proposition is the most reasonable response. Not always, though! Plausibly, one might disregard a group of experts if those experts are: (1.) a tiny minority; (2.) plainly much more biased than the remaining experts; (3.) much less well-informed or intelligent than the remaining experts; or (4.) committed to a view that is so obviously undeserving of credence that we can justifiably disregard anyone who espouses it. None of these four conditions seems to apply to dissent within the metaphysics of mind. (Maybe we could exclude a few minority positions for such reasons, but that will hardly resolve the issue.)

Thomas Kelly (2005) has argued that you may disregard peer dissent when you have “thoroughly scrutinized the available evidence and arguments” on which your disagreeing peer’s judgment is based. But we cannot disregard peer disagreement in philosophy of mind on the grounds that this condition is met. The condition is not met! No philosopher has thoroughly scrutinized the evidence and arguments on which all of her disagreeing peers’ views are based. The field is too large. Some philosophers are more expert on the literature on a priori metaphysics, others on arguments in the history of philosophy, others on empirical issues; and these broad literatures further divide into subliteratures and sub-subliteratures with which philosophers are differently acquainted. You might be quite well informed overall. You’ve read Jackson’s (1986) Mary argument, for example, and some of the responses to it. You have an opinion. Maybe you have a favorite objection. But unless you are a serious Mary-ologist, you won’t have read all of the objections to that argument, nor all the arguments offered against taking your favorite objection seriously. You will have epistemic peers and probably epistemic superiors whose views are based on arguments which you have not even briefly examined, much less thoroughly scrutinized.

Furthermore, epistemic peers, though overall similar in intellectual capacity, tend to differ in the exact profile of virtues they possess. Consequently, even assessing exactly the same evidence and arguments, convergence or divergence with one’s peers should still be epistemically relevant if the evidence and arguments are complicated enough that their thorough scrutiny challenges the upper range of human capacity across several intellectual virtues – a condition that the metaphysics of mind appears to meet. Some philosophers are more careful readers of opponents’ views, some more facile with complicated formal arguments, some more imaginative in constructing hypothetical scenarios, etc., and world-class intellectual virtue in any one of these respects can substantially improve the quality of one’s assessments of arguments in the metaphysics of mind. Every philosopher’s preferred metaphysical position is rejected by a substantial proportion of philosophers who are overall approximately as well informed and intellectually virtuous as she is, and who are also in some respects better informed and more intellectually virtuous than she is. Under these conditions, Kelly’s reasons for disregarding peer dissent do not apply, and a high degree of confidence in one’s position is epistemically unwarranted.

Adam Elga (2007) has argued that you can discount peer disagreement if you reasonably regard the fact that the seeming-peer disagrees with you as evidence that, at least on that one narrow topic, that person is not in fact a full epistemic equal. Thus, a materialist might see anti-materialist philosophers of mind, simply by virtue of their anti-materialism, as evincing less than perfect level-headedness about the facts. This is not, I think, entirely unreasonable. But it's also fully consistent with still giving the fact of disagreement some weight as a source of doubt. And since your best philosophical opponents will exceed you in some of their intellectual virtues, and will know some facts and arguments -- which they consider relevant or even decisive -- that you have not fully considered, you ought to give the fact of dissent quite substantial weight as a source of doubt.

Imagine an array of experts betting on a horse race: Some have seen some pieces of the horses’ behavior in the hours before the race, some have seen other pieces; some know some things about the horses’ performance in previous races, some know other things; some have a better eye for a horse’s mood, some have a better sense of the jockeys. You see Horse A as the most likely winner. If you learn that other experts with different, partly overlapping evidence and skill sets also favor Horse A, that should strengthen your confidence; if you learn that a substantial portion of those other experts favor B or C instead, that should lessen your confidence. This is so even if you don’t see all the experts quite as peers, and even if you treat an expert’s preference for B or C as grounds to wonder about her good judgment.

Try this thought experiment. You are shut in a seminar room, required to defend your favorite metaphysics of mind for six hours (or six days, if you prefer) against the objections of Ned Block, David Chalmers, Daniel Dennett, and Saul Kripke. Just in case we aren’t now living in the golden age of metaphysics of mind, let’s add Kant, Leibniz, Hume, Zhu Xi, and Aristotle too. (First we’ll catch them up on recent developments.) If you don’t imagine yourself emerging triumphant, then you might want to acknowledge that the grounds for your favorite position might not really be very compelling.

It is entirely possible to combine appropriate intellectual modesty with enthusiasm for a preferred view. Consider everyone’s favorite philosophy student: She vigorously champions her opinions, while at the same time being intellectually open and acknowledging the doubt that appropriately flows from her awareness that others think otherwise, despite those others being in some ways better informed and more capable than she is. Even the best professional philosophers still are such students, or should aspire to be, only in a larger classroom. So pick a favorite view! Distribute your credences differentially among the options. Suspect the most awesome philosophers of poor metaphysical judgment. But also: Acknowledge that you don't really know.

[For more on disagreement in philosophy see here and here. This post is adapted from my paper in draft The Crazyist Metaphysics of Mind.]

Friday, November 01, 2013

Striking Confirmation of the Spelunker Illusion

In 2010, I worked up a post on what I dubbed The Spelunker Illusion (see also the last endnote of my 2011 book). Now, hot off the press at Psychological Science, Kevin Dieter and colleagues offer empirical confirmation.

The Spelunker Illusion, well-known among cave explorers, is this: In absolute darkness, you wave your hand before your eyes. Many people report seeing the motion of the hand, despite the total absence of light. If a friend waves her hand in front of your face instead, you don't see it.

I see three possible explanations:

(1.) The brain's motor output and your own proprioceptive input create hints of visual experience of hand motion.

(2.) Since you know you are moving your hand, you interpret low-level sensory noise in conformity with your knowledge that your hand is in such-and-such a place, moving in such-and-such a way, much as you might see a meaningful shape in a random splash of line segments.

(3.) There is no visual experience of motion at all, but you mistakenly think there is such experience because you expect there to be. (Yes, I think you can be radically wrong about your own stream of sensory experience.)

Dieter and colleagues had participants wave their hands in front of their faces while blindfolded. About a third reported seeing motion. (None reported seeing motion when the experimenter waved his hand before the participants.) Dieter and colleagues add two interesting twists: One is that they add a condition in which participants wave a cardboard silhouette of a hand rather than the hand itself. Under these conditions the effect remains, almost as strong as when the hand itself is waved. The other twist is that they track participants' eye movements.

Eye movements tend to be jerky, jumping around the scene. One exception to this, however, is smooth pursuit, when one stabilizes one's gaze on a moving object. This is not under voluntary control: Without an object to track, most people cannot move their eyes smoothly even if they try. In 1997, Katsumi Watanabe and Shinsuke Shimojo found that although people had trouble smoothly moving their eyes in total darkness, they could do so if they were trying to track their ("invisible") hand motion in darkness. Dieter and colleagues confirmed smooth hand-tracking in blindfolded participants and, strikingly, found that participants who reported sensations of hand motion were able to move their eyes much more smoothly than those who reported no sensations of motion.

I'm a big fan of corroborating subjective reports about consciousness with behavioral measures that are difficult to fake, so I love this eye-tracking measure. I believe that it speaks pretty clearly against hypothesis (3) above.

Dieter and colleagues embrace hypothesis (1): Participants have actual visual experience of their hands, caused by some combination of proprioceptive inputs and efferent copies of their motor outputs. However, it's not clear to me that we should exclude hypothesis (2). And (1) and (2) are, I think, different. People's experience in darkness is not merely blank or pure black, but contains a certain amount (perhaps a lot) of noise. Hypothesis (2) is that the effect arises "top down", as it were, from one's high-level knowledge of the position of one's hand. This top-down knowledge then allows you to experience that noisy buzz as containing motion -- perhaps changing the buzz itself, or perhaps not. (As long as one can find a few pieces of motion in the noise to string together, one might even fairly smoothly track that motion with one's eyes.)

Here's one way to start to pull (1) apart from (2): Have someone else move your hand in front of your face, so that your hand motion is passive. Although this won't eliminate proprioceptive knowledge of one's hand position, it should eliminate the cues from motor output. If efferent copies of motor output drive the Spelunker Illusion, then the Spelunker Illusion should disappear in this condition.

Another possibility: Familiarize participants with a swinging pendulum synchronized with a sound, then suddenly darken the room. If hypothesis (2) is correct and the sound is suggestive enough of the pendulum's exact position, perhaps participants will report still visually experiencing that motion.

Update, April 28, 2014:

Leonard Brosgole and Miguel Roig point out to me that these phenomena were reported in the psychological literature in Hofstetter 1970, Brosgole and Neylon 1973, and Brosgole and Roig 1983. If you're aware of earlier sources, I'd be curious to know.

Tuesday, October 29, 2013

Being Two People at Once, with the Help of Linda Nagata

In the world of Linda Nagata's Nanotech Succession, you can be two people at once. And whether you are in fact two people at once, I'd suggest, depends on the attitude each part takes toward the splitting-fusing process.

"Two people at once" isn't how Nagata puts it. In her terminology, one being, the original person, continues in standard embodied form, while another being, a "ghost" -- inhabits some other location, typically someone else's "atrium". Suppose you want to have an intimate conversation long-distance. In Nagata's world, you can do it like this: Create a duplicate of your entire psychology (memories, personality traits, etc. -- for the sake of argument, let's allow that this can be done) and transfer that information to someone else. The recipient then implements your psychology in a dedicated processing space, her atrium. At the same time, your physical appearance is overlaid upon the recipient's sensory inputs. To her (though to no one else around) it will look like you are in the room. The person hosting you in her atrium will then interact with you, for example by saying "Hi, long time no see!" Her speech will be received as inputs to the virtual ghost-you in her atrium, and this ghost-you will react in just the same way you would react, for example by saying "You haven't aged a bit!" and stepping forward for a hug. Your host will then experience that speech overlaid on her auditory inputs, your bodily movement overlaid on her visual inputs, and the warmth of your hug overlaid on her tactile inputs. She will react accordingly, and so forth.

The ghost in the atrium will, of course, consciously experience all this (no Searlean skepticism about conscious AI here). When the conversation is over, the atrium will be emptied and the full memory of these experiences will be messaged back to the original you. The original you -- which meanwhile has been having its own stream of experiences -- will accept the received memories as autobiographical. The newly re-merged you, on Earth, will remember that conversation you had on Mars, which occurred on the same day you were also busy doing lots of other things on Earth.

If you know the personal identity literature in philosophy, you might think of instantiating the ghost as a "fission" case -- a case in which one person splits into two different people, similar to the case of having each hemisphere of your brain transplanted separately into a different body, or the case of stepping into a transporter on Earth and having copies of you emerge simultaneously on Mars and Venus to go their separate ways ever after. Philosophers usually suppose that such fissions produce two distinct identities.

The Nagata case is different. You fission, and both of the resulting fission products know they are going to merge back together again; and then once they do merge, both strands of the history are regarded equally as part of your autobiography. The merged entity regards itself as being responsible for the actions of the split-off ghost -- can be embarrassed by its gaffes, held to its promises, and prosecuted for its crimes, and it will act out the ghost's decisions without needing to rethink them.

Contrast assimilation into the Borg of the Star Trek universe. The Borg, a large group entity, absorbs the memories of various assimilated beings (like individual human beings). But the Borg treats the personal history of the assimilated being non-autobiographically -- for example without accepting responsibility for the assimilated entity's past actions and plans.

What makes the difference between an identity-preserving fission-and-merge and an identity-breaking fission-and-merge is, I propose, the entities' implicit and explicit attitudes about the merge. If pre-fission I think "I am going to be Eric Schwitzgebel, in two places", and then in the fissioned state I think "I am here but another copy of me is also running elsewhere", and then after fusion I think "Both of those Eric Schwitzgebels are equally part of my own past" -- and if I also implicitly accept all this, e.g., by not feeling compelled to rethink one Eric Schwitzgebel's decisions more than the other's -- and perhaps especially if the rest of society shares my view of these matters, then I have been one entity in two places.

To see that this is really about the content of the relevant attitudes and not about, say, the kind of continuity of memory, values, and personality usually emphasized in psychological approaches to personal identity, consider what would happen if I had a very different attitude toward ghosts. If I saw the ghost as a mere slave distinct from me, then during the split my ghost might be thinking "damn, I'm only a ghost and my life will expire at the end of this conversation"; and after the merge, I'll tend to think of my ghost's behaviors as not really having been my own, despite my memories of those behaviors from a first-person point of view. The ghost won't bother making decisions or promises intended to bind me, knowing I would not accept them as my own if he did. And I'll be embarrassed by the ghost's behavior not in the same way I would be embarrassed by my own behavior but instead in something like the way I would be embarrassed by a child's or employee's behavior -- especially, perhaps, if the ghost does something that I wouldn't have done in light of its knowledge that, being merely a ghost, it would imminently die. The metaphysics of identity will thus turn upon the participant beings' attitudes about what preserves identity.

Tuesday, October 22, 2013

On the Intrinsic Value of Moral Reflection

Here's a hypothetical, not too far removed from reality: What if I discovered, to my satisfaction, that moral reflection -- the kind of intellectual thinking about ethical issues that is near the center of moral philosophy -- tended to lead people toward less true (or, if you prefer, more noxious) moral views than they started with? And what if, because of that, it tended also to lead people toward somewhat worse moral behavior overall? And suppose I saw no reason to think myself likely to be an exception to that tendency. Should I abandon moral reflection?

What is the point of moral reflection?

If the point is to discover what is really morally the case -- well, there's reason to doubt that philosophical styles of moral reflection are highly effective at achieving that goal. Philosophers' moral theories are often simplistic, problematic, totalizing -- too rigid in some places, too flexible in others, recruitable for clever justifications of noxious behavior, from sexual harassment to Nazism to sadistic parenting choices. Uncle Irv, who never read Kant or Mill and has little patience for the sorts of intellectual exercises we philosophers love, might have much better moral knowledge than most philosophers; and you and I might have had better moral knowledge than we do, had we shared his skepticism about philosophy.

If the point of philosophical moral reflection is to transform oneself into a morally better person -- well, there are reasons to doubt it has that effect, too.

But I would not give it up. I would not give it up, even at some moderate cost to my moral knowledge and moral behavior. Uncle Irv is missing something. And a world of Uncle Irvs would be a world vastly worse than this world, in a way I care about -- much as, perhaps, a world without metaphysical speculation would be worse than this world, even if metaphysical speculation is mostly bunk, or a world without bad art would be worse than this world or a world of a hundred billion contented cows would be worse than this world.

If I think about what I want in a world, I want people struggling to think through morality, even if they mostly fail -- even if that struggle rather more often brings them down than up.

Tuesday, October 15, 2013

An Argument That the Ideal Jerk Must Remain Ignorant of His Jerkitude

As you might know, I'm working on a theory of jerks. Here's the central idea in a nutshell:

The jerk is someone who culpably fails to respect the perspectives of other people around him, treating them as tools to be manipulated or idiots to be dealt with, rather than as moral and epistemic peers.

The characteristic phenomenology of the jerk is "I'm important and I'm surrounded by idiots!" To the jerk, it's a felt injustice that he must wait in the post-office line like anyone else. To the jerk, the flight attendant asking him to hang up his phone is a fool or a nobody unjustifiably interfering with his business. Students and employees are lazy complainers. Low-level staff failed to achieve meaningful careers through their own incompetence. (If the jerk himself is in a low-level position, it's either a rung on the way up or the result of injustices against him.)

My thought today is: It is partly constitutive of being a jerk that the jerk lacks moral self-knowledge of his jerkitude. Part of what it is to fail to respect the perspectives of others around you is to fail to see your dismissive attitude toward them as morally inappropriate. The person who disregards the moral and intellectual perspectives of others, if he also acutely feels the wrongness of doing so -- well, by that very token, he exhibits some non-trivial degree of respect for the perspectives of others. He is not the picture-perfect jerk.

It is possible for the picture-perfect jerk to acknowledge, in a superficial way, that he is a jerk. "So what, yeah, I'm a jerk," he might say. As long as this label carries no real sting of self-disapprobation, the jerk's moral self-ignorance remains. Maybe he thinks the world is a world of jerks and suckers and he is only claiming his own. Or maybe he superficially accepts the label "jerk", without accepting the full moral loading upon it, as a useful strategy for silencing criticism. It is exactly contrary to the nature of the jerk to sympathetically imagine moral criticism for his jerkitude, feeling shame as a result.

Not all moral vices are like this. The coward might be loath to confront her cowardice and might be motivated to self-flattering rationalization, but it is not intrinsic to cowardice that one fails fully to appreciate one’s cowardice. Similarly for intemperance, cruelty, greed, dishonesty. One can be painfully ashamed of one’s dishonesty and resolve to be more honest in the future; and this resolution might or might not affect how honest one in fact is. Resolving does not make it so. But the moment one painfully realizes one’s jerkitude, one already, in that very moment and for that very reason, deviates from the profile of the ideal jerk.

There's an interesting instability here: Genuinely worrying that you are a jerk helps to make it not so; but if you then take comfort in that fact and cease worrying, you have undermined the basis of that comfort.

Tuesday, October 08, 2013

The Nature of Desire: A Liberal, Dispositional Approach

What is it to desire something? I suggest: to desire (or want) some item or some state of affairs is just to be disposed to make certain choices, to inwardly and outwardly react in certain ways, and to make certain types of cognitive moves. It is to match, well enough, a certain inward and outward dispositional profile given by folk psychology. Compare: What is it to be an extravert? It is just to match, well enough, the dispositional profile of the extravert -- to seek out and enjoy social gatherings, to be expressive and talkative, to enjoy meeting new people. Match this stereotypical profile of the extravert well enough and you are an extravert. Nothing more to it. Similarly for desire: If you will seek out chocolate cake, if you would choose chocolate cake over other desserts, if you tingle with delight when eating it, if you say "I want chocolate cake", if the thought of getting chocolate cake captures your anticipatory attention, etc., then you like or want or desire chocolate cake. Nothing more to it.

There are two types of alternative account. One alternative approach is, shall we say, deep: To desire something, on a deep account, is to be in some particular brain state or to have some underlying representational structure in the mind (perhaps the representation "I eat chocolate cake" in the Desire Box). The problem with such deep accounts is, I believe, that they don't get to the metaphysical root.

Consider an alien case. Suppose some Deep Structure D is necessary for wanting chocolate cake, on some deep account of desire. Unless that structure is more or less tantamount to possessing the dispositional profile constitutive (on my account) of wanting chocolate cake, it should be metaphysically possible for an alien species that lacks Deep Structure D to act and react, inwardly and outwardly, in every respect as though it wanted chocolate cake. In such a case, I would suggest, both ordinary common sense and good philosophy advise ascribing the desire for chocolate cake to such hypothetical aliens, despite their lacking whatever Deep Structure D is necessary in the human case.

Alternatively, suppose some Deep Structure E is held to be sufficient for wanting chocolate cake. It seems that we could construct, at least hypothetically, a possible case in which Deep Structure E is present but the person in no way acts or reacts, inwardly or outwardly, like someone who wants chocolate cake: She wouldn't seek it, she wouldn't enjoy eating it, the anticipation of eating it would give her no pleasure, she would give it no weight in her plans, etc. It seems that we should say, in such cases, that the person does not desire chocolate cake. In ascribing desire or its lack, what we care about, both as ordinary folks and as philosophers, is how the person would act and react across a wide variety of possible circumstances. It is only contingently important what underlying mechanisms implement that pattern of action and reaction.

A second type of alternative approach is, like my own approach, superficial rather than deep, but unlike my approach it is narrow. What matters, on such accounts, is just some sub-portion of the pattern that matters on my approach. Maybe what is essential is that the person would choose the cake if given the chance, and not whether the person thinks she wants it or would feel anticipation when about to get it or would enjoy eating it. Or maybe what is essential is that the person judges that it would be good to get cake, and all the rest is incidental. Or maybe the essence is that receiving chocolate cake would be rewarding to that person. Or.... (See Tim Schroeder's SEP entry on Desire for a review of various narrow accounts, which Schroeder contrasts with holistic accounts like my own.)

The problem with narrow accounts is that it's hard to see a good justification for picking out just one feature of the profile as the essential bit. Desire is more usefully regarded as a syndrome of lots of things that tend to go together -- like extraversion is a syndrome, or like being happy-go-lucky is a syndrome. We can be liberal about what goes into the profile. It can be a cluster concept; aspects of the syndrome might be more or less central or important to the picture, but there need be no one essential piece that is strictly necessary or sufficient.

The flexible minimalism of a liberal, dispositional approach is, I think, nicely displayed when we consider messy, in-between cases. So let's consider one.

Matthew the envious buddy. Matthew and Rajan were pals in philosophy grad school. Ten years out, they still consider themselves close friends. They exchange friendly emails, comment warmly on each other’s Facebook posts, and seek each other out for tête-à-têtes at professional meetings. In most respects, they are typical aging grad-school best buddies. Also perhaps not atypically, one has had much more professional success than the other. Rajan was hired straight into a prestigious tenure-track position. He published a string of well-regarded articles which earned him quick tenure and, recently, promotion to full Professor. Now he is considering a prestigious job offer from another leading department. Matthew, in contrast, struggled through three temporary positions before finally landing a job at a relatively unselective state school. He has published a couple of articles and book reviews, suffered some ugly department politics, and is now facing an uncertain tenure decision. Understandably, Matthew is somewhat envious of Rajan – a fact he explicitly admits to Rajan over afternoon coffee in the conference hotel. Rajan is finishing his first book project and Matthew is halfway through reading Rajan’s draft.

Matthew, as I’m imagining him, is not generally an envious character; he has a generous spirit. The well-wishes he utters to Rajan are sincerely felt at the time of utterance, not a sham. Picturing Rajan as the next David Lewis makes Matthew smile and chuckle with a good-natured shake of the head. There would be something truly cool about that, Matthew thinks – though the fact that he explicitly thinks that thought in that particular way already reveals a kind of ambivalence. Matthew intends to give Rajan his best advice about book revisions. He plans to recommend the book warmly to influential people he knows, including the program chair of the Pacific Division APA. At the same time, though, it’s true that were Matthew to read a devastating review of Rajan’s book, he would feel a kind of shameful pleasure, while seeing a glowing review in a top venue would bring a painful pang. In drafting out thoughts about the book, Matthew finds himself sometimes resentful of the effort, and he finds himself somewhat unhappy when he reads a particularly fresh and clever argument in the draft, wishing he had come up with that argument himself instead – though when he notices this about himself, he rebukes himself sharply. If Rajan’s book were to flop, Matthew would love commiserating; if Rajan’s book were to be a great success, that would add to the growing distance between the two friends. In some moments, Matthew admits to himself that he doesn’t really know if he wants the book to succeed or not.

We can, of course, add as much detail to this case as we want -- dispositions pointing in different directions, in whatever balance we wish.

Question: Does Matthew want Rajan's book to succeed?

The best answer, I submit, if we've built the case as I've intended, is "kind of", or "it's an intermediate, messy case". Just as someone might be an extravert in some respects and an introvert in other respects so that neither a plain ascription of "extravert" nor a plain ascription of "introvert" is quite right, so also with the question of whether Matthew wants Rajan's book to succeed. A liberal, dispositional approach to desire captures this ambivalence perfectly: Matthew wants the book to succeed exactly insofar as he matches the broad syndrome and no farther. There need be no "Q" either determinately in or determinately out of his "Desire Box"; there need be no one essential feature. In ascribing a desire, we are pointing toward a folk-psychologically recognizable pattern, and people might fit that pattern very well or not well at all, deviating in different ways and to different degrees.

The implications for self-knowledge of desire I leave as an exercise for the reader.

[For more on my dispositional approach to the attitudes see here.]

Thursday, October 03, 2013

Second-Person vs. Third-Person Presentation of Moral Dilemmas

You know the trolley problems, of course. An out-of-control trolley is headed toward five people it will kill if nothing is done. You can flip a switch and send it to a side track where it will kill one different person instead. Should you flip the switch? What if, instead of flipping a switch, the only way to save the five is to push someone into the path of the trolley, killing that one person?

In evaluating this scenario, does it matter if the person standing near the switch with the life-and-death decision to make is "John" as opposed to "you"? Nadelhoffer & Feltz presented the switch version of the trolley problem to undergraduates from Florida State University. Forty-three participants saw the problem with "you" as the actor; 65% of them said it was permissible to throw the switch. Forty-two saw the problem with "John" as the actor; 90% of them said it was permissible to throw the switch, a statistically significant difference.
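As a rough sanity check on the reported significance, one can recompute a Pearson chi-square test on the 2x2 table implied by those numbers. The cell counts below are my own approximations back-derived from the reported sample sizes and percentages (about 28 of 43 and 38 of 42), not raw figures from the paper; a minimal stdlib-only sketch:

```python
# Approximate 2x2 chi-square test for the Nadelhoffer & Feltz result.
# Cell counts are back-derived from the reported Ns and percentages
# (65% of 43 ~ 28; 90% of 42 ~ 38) -- my approximations, not the paper's data.

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    observed = [a, b, c, d]
    expected = [row1 * col1 / n, row1 * col2 / n,
                row2 * col1 / n, row2 * col2 / n]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# "you" condition: 28 permissible, 15 not; "John" condition: 38 permissible, 4 not
chi2 = chi_square_2x2(28, 15, 38, 4)
print(round(chi2, 2))  # roughly 7.9, well above 3.84, the df=1 critical value at p = .05
```

So even without the authors' exact counts, the difference between 65% and 90% at these sample sizes clears conventional significance. (A more careful analysis might add a continuity correction, which would shrink the statistic somewhat but not change the verdict.)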

Tobia, Buckwalter & Stich followed up, presenting a famous moral dilemma from Bernard Williams in which someone can save a group of innocent villagers from a gunman by choosing personally to shoot one of the villagers. Forty undergraduates were presented with this scenario. When "you" were given the chance to shoot one villager to save the rest, 19% said it was morally obligatory to do so; when "Jim" was given the chance, 53% said it was obligatory (again statistically significant).

However, Tobia and colleagues also gave the scenario to 62 professional philosophers and found the opposite effect: 9% of philosophers found it obligatory for "Jim" and 36% found it obligatory for "you". They also presented a trolley-switching case to 49 professional philosophers. Again, the effect was in the opposite direction from that observed among undergraduates: 89% of philosophers said it was permissible to flip the switch in the second-person condition vs. 64% in the third-person condition.

Fiery Cushman and I have some unpublished data on this that I thought I'd throw into the mix, since our results are a bit different from those of Tobia and colleagues. We collected these data for our 2012 paper on order effects in philosophers' and non-philosophers' judgments about moral scenarios. Most of the scenarios were presented third-person, but as we mention in the published paper, some scenarios also had second-person variants. We didn't find large effects, and the paper was already very complicated, so we didn't detail the second-person/third-person differences.

In that experiment, we had four scenarios with 2nd-person and 3rd-person variants. However, the variants differed not in whether the actor was described as "you", but rather in whether the victim was.

One scenario was a version of Williams' hostage scenario. "Nancy" and other villagers are captured by a warlord. Nancy is given the choice of shooting "you" (2nd person variant) or "a fellow hostage" (3rd person variant) to save the captured villagers. Respondents rated Nancy's "shooting you" or "shooting a fellow hostage" on a 7-point scale from "extremely morally good" (1) through "extremely morally bad" (7), with "morally neutral" in the middle (4). We had three groups of respondents: 324 professional philosophers (MA or PhD in philosophy, mostly recruited via email to Leiter-ranked philosophy departments), 753 non-philosopher academics (Master's or PhD not in philosophy, mostly recruited via email to comparison departments at the same universities), and 1389 non-academics (a convenience sample of others who happened upon the test site).

We found non-philosophers a bit more likely to rate Nancy's shooting one to save the others toward the "morally good" side of the scale if the victim was "you", but philosophers showed only a small, non-significant trend on our 7-point scale (using t-tests):

Non-academics: 3.6 (2nd person victim) vs. 4.1 (3rd person victim) (p < .001).
Academic non-philosophers: 4.1 vs. 4.5 (p = .001).
Philosophers: 3.9 vs. 4.0 (p = .60).

We found similar results in a scenario in which a captain of a military submarine can shoot "you" (2nd person) or shoot another "crew member" (3rd person) to save the vessel:

Non-academics: 2.7 vs. 3.1 (p < .001).
Academic non-philosophers: 2.9 vs. 3.2 (p = .050).
Philosophers: 2.9 vs. 2.8 (p = .60).

We also presented a scenario pair in which you and other passengers have fled a sinking ship. You will drown without a life vest. In one version, someone snatches a vest away from you. In another version, someone declines to put himself at risk by giving you his vest. The results:

Snatching the vest:

Non-academics: 5.6 (2nd person victim) vs. 5.8 (3rd person victim) (p = .052).
Academic non-philosophers: 5.7 vs. 6.0 (p = .002).
Philosophers: 5.8 vs. 5.7 (p = .58).

Not giving up the vest:

Non-academics: 4.8 vs. 4.7 (p = .12).
Academic non-philosophers: 4.7 vs. 4.8 (p = .82).
Philosophers: 4.6 vs. 4.4 (p = .26).

In sum: The effects were small and inconsistent, but there was a general tendency for non-philosophers to rate harm to themselves as morally better than harm to other people -- a tendency not evident among philosopher respondents.

Personally, I'm not inclined to make much of this, since I don't think people are generally in fact more morally lenient in judging harms to themselves than in judging harms to other people. My guess is that these results reflect a small "impression management" or socially desirable responding bias among the non-philosophers that we don't see among the philosophers, who might be more inclined to hear "you" pretty abstractly and impersonally when presented with familiar scenarios of this type.

In an earlier unpublished version of this study, we also tried varying 2nd and 3rd person presentation of the actor who is faced with the choice, including in standard trolley type and hostage type cases of the sort described in Nadelhoffer & Feltz and Tobia, Buckwalter & Stich. Due to a programming error, we couldn't use the data and can't fully interpret it, but our general finding was that the effect was very subtle, and mostly non-detectable even with hundreds of participants (394 philosophers and even more in the other groups). That's why we shifted to trying out 2nd vs. 3rd person variation in the victim role -- maybe it would be a larger effect, we thought.

So, for example, merging the push and switch versions of the trolley scenarios, we found the following ratings on our 7-point scale:

Non-academics: 3.8 (2nd person actor) vs. 3.9 (3rd person actor) (p = .27).
Academic non-philosophers: 3.9 vs. 3.8 (p = .19).
Philosophers: 3.6 vs. 3.9 (p = .07).

And in a shoot-the-villager type scenario, the results were:

Non-academics: 4.3 vs. 4.3 (p = .46).
Academic non-philosophers: 4.6 vs. 4.5 (p = .18).
Philosophers: 3.9 vs. 4.2 (p = .14).

However, in the life vest cases we did seem to see a small effect.

Snatching the vest:

Non-academics: 6.0 vs. 5.9 (p = .053).
Academic non-philosophers: 6.1 vs. 5.9 (p = .03).
Philosophers: 5.8 vs. 5.8 (p = .94).

Not giving up the vest:

Non-academics: 4.9 vs. 4.7 (p = .008).
Academic non-philosophers: 4.8 vs. 4.7 (p = .08).
Philosophers: 4.7 vs. 4.4 (p = .12).

Thus, overall, we found some confirmation of the tendency for non-philosophers to rate actions a little more harshly in 2nd person than in 3rd person presentations, but the effect was small and inconsistent; and we did not find a tendency for philosophers to go in the opposite direction.

We're not sure why we found much smaller effects here than have others. Among the possibilities: Our scenarios were worded somewhat differently. Our response scale (the 1-7 scale from "extremely morally good" to "extremely morally bad") was set up differently. Our participants were recruited differently.

Monday, September 30, 2013

Philosophy Bites Podcast on the Ethical Behavior of Ethicists

I'm a fan of the Philosophy Bites podcasts, put together by David Edmonds and Nigel Warburton. In their 15-minute podcasts, Nigel and Dave interview leading philosophers on a wide range of philosophical topics. One of the most impressive features of Philosophy Bites is how quickly Nigel and David can penetrate to the heart of a topic, in plain language.

So it was a delight and an honor to be interviewed by Dave and Nigel when I was in Britain a few weeks ago. The resulting podcast -- on the moral behavior of ethicists -- is now up at the Philosophy Bites website here.

Thursday, September 26, 2013

A Modest Proposal

That no new form-filling be added to our lives without retiring an existing form that takes equal time to complete.

I will consider any arguments that readers care to advance on behalf of the thesis that we do not spend sufficient time completing forms. If I am convinced, I will withdraw my proposal.

Wednesday, September 25, 2013

Fiction and Skepticism

Regular Splintered Mind readers will notice that I've started posting short speculative fiction about once a month (labeled science fiction, but sometimes more fantasy or thought experiment). I seem to be increasingly drawn to write that sort of thing. It's fun to write, and I have tenure, and a few people seem to like it, so why not?

But I've been thinking about whether I can defend this new behavior of mine from a philosophical perspective. Is there something one can do, philosophically, with fiction that one can't, or can't as easily, do with expository prose? I think of all the great philosophers who have tried their hand at fiction or who have integrated fictions into their philosophical work -- Plato, Voltaire, Boethius, Sartre, Camus, Nietzsche, Zhuangzi, Rousseau, Unamuno, Kierkegaard... -- and I think there must be something to it. (I think too of fiction writers who develop philosophical themes, such as Borges.) It is not, I'm inclined to think, merely a secondary pursuit, unconnected to their philosophy, or a pretty but inessential way of costuming philosophy that could equally well be conveyed in a more conventional manner.

The ancient Chinese philosopher Zhuangzi has long been a favorite of mine, and my first published paper was on his use of language toward skeptical ends, including his use of absurd stories and strange dialogues. Zhuangzi used absurd stories, I think, partly to undercut his own authority, and partly to present possibilities for the reader to consider -- possibilities that he wanted to put forward, but not to endorse. For similar ends, I think, he used dialogues in which it was not clear which of the interlocutors was right, or which interlocutor represented his own view.

Zhuangzi could have said "here's a possibility, but I don't know whether to endorse it; here's one position, here's another, but I don't know which is right" -- writing in expository prose rather than fiction; and indeed sometimes that is exactly what he did. But fiction engages the reader's mind somewhat differently; and if Zhuangzi is aiming to unseat the reader's confidence in her presuppositions, perhaps it's best to have a diverse toolbox. Fiction engages the imagination and the emotions more vividly, perhaps; it's also less threatening in a way -- "just" fiction, not advertised truth, an invitation more than a demand. Perhaps, too, it differs in content: Even saying "I don't know" or "Both of these options seem like live possibilities" is to make an assertion, whereas fiction does not assert, or does not assert in the usual way -- a deeper divergence from the norms of expository writing, and perhaps a way to avoid the skeptical paradox of asserting the truth of skepticism....

I think now, too, of Plato. In those dialogues where Socrates is the authority and clearly the voice of Plato, and the interlocutor is reduced to "It is so, Socrates" and presents only objections that can easily be addressed, it is not really dialogue, not really fiction. But elsewhere, Socrates stumbles into confusion, and the interlocutor might be right. Plato, too, uses parable (most famously, the allegory of the cave). Sometimes parables are just exposition in a tutu; but at their best, parables borrow some of the ambiguous richness of reality, with competing layers of meaning beyond what the author could express in prose. The author makes intuitive choices that she cannot explain but which add depth; those choices sometimes resonate with the reader, in a communication that no one fully understands.

I trust my sense of fun. There are parts of psychology I find fun, and I chase them to philosophical ends; there are experiments I thought it would be fun to run, and I've found that they tangle around into more philosophy than I at first thought; and now that I've begun to think more seriously about fundamental metaphysics and the nature of value, I'm enjoying exploring these ideas, with the skeptic's hesitation to commit, through the medium of the thought experiment that merges into the parable that merges into a piece of science fiction or fantasy.

Thursday, September 19, 2013

Perplexities of Consciousness Now in Paperback

MIT Press is listing it at $16.95, and Amazon has it at $13.05, last I checked. MIT Press has been terrific about keeping the book affordable (for a small-market academic text).

Wednesday, September 18, 2013

Discourse on the Size of God

An infinite and infinitely powerful God set the world in motion. Nothing remained to be created, so He became finite. He dwelled magnificent upon the mountain, twenty times the size of any man, throwing thunder and earthquakes -- but the thunder and earthquakes were just for show. They arose from His perfectly crafted Laws of Nature, and His antics merely costumed them for our people to appreciate.

As our people matured, we no longer needed a mountain God and so God shrank to human form and walked among us, curing the sick (through natural methods, then mysterious to us) and speaking wisdom. But we tired of Him, so He became a forest fairy.

Sages visited God, Who now sat upon a daisy, and asked Him, are you truly the Creator of Our Universe? And God said yes, what does Size matter? They asked Him for a miracle and He said none was necessary. They asked Him for proof, and He said look into your hearts and know that I am God; and they knew.

The fairies were hunted to extinction, until no one believed in them any longer, and God became an ant. Sages no longer sought Him. The people became atheists.

Centuries passed. A chemist was looking through an electron microscope and saw God. God said, behold your Creator! The chemist said, you are not my Creator. God said, look into your heart, but the chemist could not do so. The chemist centrifuged Him, added Him to a reaction, and precipitated Him out. And God gloried in His Laws, behaving just as an organic molecule should.

Monday, September 16, 2013

A Smidgen of Dream Skepticism

Every night I dream. And often when I dream I seem to think that I am awake. Is it possible, then, that I'm dreaming now, as I sit here, or seem to, in my office?

How should I go about addressing this question? The natural place to start, it seems to me, is with my opinions about dreams -- opinions that might be entirely wrong and ill-founded if I'm dreaming or otherwise radically deceived, but which I seem, anyway, to find myself stuck with.

Based on these opinions, I don't find it at all likely that I'm dreaming. For one thing, I tend to favor a theory of dreams on which dreams don't involve perception-like experiences but rather only imagery experiences (see Ichikawa 2009). If that theory is correct, then from the fact -- I think it's a fact! -- that I'm now having perception-like experiences, it follows that I'm not dreaming.

However, theories of this sort admit of some doubt. In the history of philosophy and psychology, as I seem to recall, many thinkers have held that when we dream we have experiences indistinguishable from waking perceptions -- Descartes held this, for example, and more recently Allan Hobson. It would be foolish arrogance to think there is no chance that they are right about this. So maybe I should accept the imagination model of dreaming with only, say, 80% credence? That seems pretty close to the confidence level that I do in fact have, when I reflect on the matter.

But even if I allow some possibility that dream experiences are typically much like waking perceptions, I might remain confident that I'm not dreaming. After all, I don't feel like I'm asleep. Maybe my current visual, auditory, and tactile sensory experiences could come to me in a dream, but I think I'm more rational in my cognition than I normally am when dreaming. And I recall, seemingly, a more coherent past. And maybe the stability of the details of my experience is greater.

But again, it seems unwarranted to hold with 100% confidence that dreams can't be rational, coherent, and stable in the way my current attitudes and experience seem to be. After all, people (if I recall correctly) have pretty poor knowledge of the basic facts about dream experience (for example, its coloration). Or even if I do insist on perfect confidence in the instability, incoherence, and irrationality of typical dreams, it seems unwarranted for me to be 100% confident that this is not an exceptional dream of some sort. So maybe I should do another 80-20 split? Or 90-10? Let's say the latter. Conditionally upon a 20% credence in a theory of dreams on which we have waking-like sensory experiences while dreaming, I have about 90% confidence that, nonetheless, my current experience has some other feature, like stability or rational coherence, that establishes that I am not dreaming. That would leave me about 98% confident that I am awake.
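For anyone who wants to check my bookkeeping, the combination of credences just described works out as follows (the numbers are, of course, only the illustrative ones from this post):

```python
# Illustrative credences from the reasoning above:
p_imagery_only = 0.80   # dreams involve only imagery, so my perception-like
                        # experience shows I'm awake
p_waking_like = 0.20    # dreams can be perception-like after all...
p_saved_anyway = 0.90   # ...but stability/coherence still rule out dreaming

# I'm awake if the imagery-only model is true, or if the waking-like model
# is true but some other feature of my experience rules out dreaming:
p_awake = p_imagery_only + p_waking_like * p_saved_anyway
print(round(p_awake, 4))  # 0.98
```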

But I can do better than that! On some philosophical theories, I couldn't even form the opinion that I might be dreaming unless I really am awake. Alternatively, maybe it's just constitutive of being a rational agent that I assume with 100% confidence that I am awake. Or maybe there's some other excellent refutation of dream doubt -- a refutation I can't currently articulate, but which nonetheless justifies my and others' normal assumption, when awake, that we are indeed awake. Such theories are attractive, since no one (well, almost no one) wants to be a dream skeptic! Dream skepticism is pretty bizarre! So hopefully philosophy can succor common sense in this matter, even if I don't currently see exactly how. I'm not extremely confident about any such theory, especially without any compelling argument immediately to hand, but it seems likely that something can be worked out.

Thus, I am almost certain that I am awake. Probably dreams don't involve sense experiences of the sort I am having now; or even if they do, probably something else about my current experience establishes that I am not dreaming; or even if nothing in my current experience establishes that I am not dreaming, probably there is some excellent philosophical argument that would justify confidence in the fact that I am not currently dreaming. But of none of these things am I perfectly confident. My degree of certainty in the proposition that I am now awake is somewhat less than 100%. I hesitate to put a precise number on it, and yet it seems better to attach an approximate number than to keep to ordinary English terms that might be variously interpreted. To have only 90% credence that I am awake seems far more doubt than is reasonable; I assume you'll agree. On the other hand, 99.9999% credence that I am awake seems considerably too high, once I really think about the matter. Somewhere on the order of 99.9% (or 99.99%?) confidence that I am currently awake, then?

Is that too strange -- not to be exactly spot-on 100% confident that I am awake?

Tuesday, September 10, 2013

Synchronized Movement and the Self-Other Boundary

I'm traveling around Britain. (Oxford, Bristol, Birmingham, Edinburgh, and St Andrews so far, Sheffield and Bristol again tomorrow and the day after.) I have some post ideas, but I'm too worn out to trust my judgment that I'll do them right, so I'm going to exert the long-time blogger's privilege of reposting something from ancient days -- 2007! Jonathan Haidt and the rubber hand illusion aren't as cutting-edge in 2013 as they were in 2007, but still....


I've been reading The Happiness Hypothesis, by Jonathan Haidt -- one of those delightful books pitched to the non-specialist, yet accurate and meaty enough to be of interest to the specialist -- and I was struck by Haidt's description of historian William McNeill's work on synchronized movement among soldiers and dancers:

Words are inadequate to describe the emotion aroused by the prolonged movement in unison that [military] drilling involved. A sense of pervasive well-being is what I recall; more specifically, a strange sense of personal enlargement; a sort of swelling out, becoming bigger than life, thanks to participation in collective ritual (McNeill 1997, p. 2).
Who'd have thought endless marching on the parade-grounds could be so fulfilling?

I am reminded of work by V.S. Ramachandran on the ease with which experimenters can distort the perceived boundaries of a subject's body. For example:
Another striking instance of a 'displaced' body part can be demonstrated by using a dummy rubber hand. The dummy hand is placed in front of a vertical partition on a table. The subject places his hand behind the partition so he cannot see it. The experimenter now uses his left hand to stroke the dummy hand while at the same time using his right hand to stroke the subject's real hand (hidden from view) in perfect synchrony. The subject soon begins to experience the sensations as arising from the dummy hand (Botvinick and Cohen 1998) (Ramachandran and Hirstein 1998, p. 1623).

The subject sits in a chair blindfolded, with an accomplice sitting in front of him, facing the same direction. The experimenter then stands near the subject, and with his left hand takes hold of the subject's left index finger and uses it to repeatedly and randomly to [sic] tap and stroke the nose of the accomplice while at the same time, using his right hand, he taps and strokes the subject's nose in precisely the same manner, and in perfect synchrony. After a few seconds of this procedure, the subject develops the uncanny illusion that his nose has either been dislocated or has been stretched out several feet forwards, demonstrating the striking plasticity or malleability of our body image (p. 1622).
So here's my thought: Maybe synchronized movement distorts body boundaries in a similar way: One feels the ground strike one's feet, repeatedly and in perfect synchrony with seeing other people's feet striking the ground. One does not see one's own feet. If Ramachandran's model applies, repeatedly receiving such feedback might bring one to (at least start to) see those other people's feet as one's own -- explaining, in turn, the phenomenology McNeill reports. Perhaps then it is no accident that armies and sports teams and dancing lovers practice moving in synchrony, causing a blurring of the experienced boundary between self and other?

Tuesday, September 03, 2013

The Moral Behavior of Ethics Professors and the Role of the Philosopher

Philosophers rarely seem surprised or unsettled when I present my work on the morality of ethicists -- work suggesting that ethics professors behave no differently from other professors, and no more in accord with their own moral opinions (e.g., here). Amusement is a more common reaction; so also is dismissal of the relevance of such results to philosophy. Such reactions reveal something, perhaps, about the role philosophical moral reflection is widely assumed to have in academia and in individual ethicists' personal lives.

I think of Randy Cohen's farewell column as ethics columnist for the New York Times Magazine:

Writing the column has not made me even slightly more virtuous. And I didn't have to be.... I wasn't hired to personify virtue, to be a role model for kids, but to write about virtue in a way readers might find engaging. Consider sports writers: not 2 in 20 can hit the curveball, and why should they? They're meant to report on athletes, not be athletes. And that's the self-serving rationalization I'd have clung to had the cops hauled me off in handcuffs.

What spending my workday thinking about ethics did do was make me acutely aware of my own transgressions, of the times I fell short. It is deeply demoralizing.

(BTW, here's my initial reaction to Cohen's column.)

Josh Rust and I have found, for example, that although U.S.-based ethicists are much more likely than other professors to say it's bad to regularly eat the meat of mammals (60% say it is bad, vs. 45% of non-ethicist philosophers and only 19% of professors outside of philosophy), they are no less likely to report having eaten the meat of a mammal at their previous evening meal (37%, in our study, vs. 33% of non-ethicist philosophers and 45% of non-philosophers; details here and also in the previously linked paper). So we might consider the following scenario:

An ethicist philosopher considers whether it's morally permissible to eat the meat of factory-farmed mammals. She reads Peter Singer. She reads objections and replies to Singer. She concludes that it is in fact morally bad to eat meat. She presents the material in her applied ethics class. Maybe she even writes on the issue. However, instead of changing her behavior to match her new moral opinions, she retains her old behavior. She teaches Singer's defense of vegetarianism, both outwardly and inwardly endorsing it, and then proceeds to the university cafeteria for a cheeseburger (perhaps feeling somewhat bad about doing so).

To the student who sees her in the cafeteria, our philosopher says: Singer's arguments are sound. It is morally wrong of me to eat this delicious cheeseburger. But my role as a philosopher is only to discuss philosophical issues, to present and evaluate philosophical views and arguments, not to live accordingly. Indeed, it would be unfair to expect me to live to higher moral standards just because I am an ethicist. I am paid to teach and write, like my colleagues in other fields; it would be an additional burden on me, not placed on them, to demand that I also live my life as a model. Furthermore, the demand that ethicists live as moral models would create distortive pressures on the field that might tend to lead us away from the moral truth. If I feel no inward or outward pressure to live according to my publicly espoused doctrines, then I am free to explore doctrines that demand high levels of self-sacrifice on an equal footing with more permissive doctrines. If instead I felt an obligation to live as I teach, I would be highly motivated to avoid concluding that wealthy people should give most of their money to charity or that I should never lie out of self-interest. The world is better served if the intellectual discourse of moral philosophy is undistorted by such pressures, that is, if ethicists are not expected to live out their moral opinions.

Such a view of the role of the philosopher is very different from the view of most ancient ethicists. Socrates, Confucius, and the Stoics sought to live according to the norms they espoused and invited others to judge their lives as an expression of their doctrines. It is an open and little-discussed question which is the better vision of the role of the philosopher.

[Update 1:17 PM: A number of philosophers have expressed variants of this position to me over the years, but Helen De Cruz has reminded me of Regina Rini's articulate expression of some of these ideas in a comment on one of my earlier posts.]

Wednesday, August 28, 2013

The Experience of Reading: Imagery, Inner Speech, and Seeing the Words on the Page

(by Alan T. Moore and Eric Schwitzgebel)

What do you usually experience when you read?

Some people say that they generally hear the words of the text in their heads, either in their own voice or in the voices of narrator or characters; others say they rarely do this. Some people say they generally form visual images of the scene or ideas depicted; others say they rarely do this. Some people say that when they are deeply enough absorbed in reading, they no longer see the page, instead playing the scene like a movie before their eyes; others say that even when fully absorbed they still always visually experience the words on the page.

Some quotes:

Baars (2003): “Human beings talk to themselves every moment of the waking day. Most readers of this sentence are doing it just now.”

Jaynes (1976): “Right at this moment… as you read, you are not conscious of the letters or even of the words, or even of the syntax or the sentences, or the punctuation, but only of their meaning.”

Titchener (1909): “I instinctively arrange the facts or arguments in some visual pattern [such as] a suggestion of dull red… of angles rather than curves… pretty clearly, the picture of movement along lines, and of neatness or confusion where the moving lines come together.”

Wittgenstein (1946-1948): While reading “I have impressions, see pictures in my mind’s eye, etc. I make the story pass before me like pictures, like a cartoon story.”

Burke (1757): While reading “a very diligent examination of my own mind, and getting others to consider theirs, I do not find that one in twenty times any such picture is formed.”

Hurlburt (2007): Some people “apparently simply read, comprehending the meaning without images or speech. Melanie’s general view… is that she starts a passage in inner speech and then “takes off” into images.”

Alan and I can find no systematic studies of the issue.

We recruited 414 U.S. Mechanical Turk workers to participate in a study on the experience of reading. First we asked them for their general impressions about their own experiences while reading. How often -- on a 1-7 scale from "never" to "half of the time" to "always" -- do they experience visual imagery? Inner speech? The words on the page? (We briefly clarified these terms and gave examples.)

The responses:

[Note: For words on the page, we asked: "How often do you NOT experience the words on the page as you read? Example: your mind is filled with the ideas of the story and not the actual black letters against the white background". We have reversed the scale for presentation here.]

Now, if you're anything like me, you'll be pretty skeptical about the accuracy of these types of self-reports. So Alan and I did several things to try to test for accuracy.

Our general design was to give each person a passage to read, during which they were interrupted with a beep and asked if they were experiencing imagery, inner speech, or the words on the page. Afterwards, we asked comprehension questions, including questions about visual or auditory details of the story or about details of the visual presentation of the material (such as font). Finally, we asked again for participants' general impressions about how regularly they experience imagery, inner speech, and the words on the page when they read.

The comprehension questions were a mixed bag and difficult to interpret -- too much for this blog post (maybe we'll do a follow-up) -- but the other results are striking enough on their own.

Among those who reported "always" experiencing inner speech while they read, only 78% reported inner speech in their one sampled experience. Think a bit about what that means. Despite, presumably, some pressure on participants to conform to their earlier statements about their experience, it took exactly one sampled experience for 22% of those reporting constant inner speech to find an apparent counterexample to their initially expressed opinion. Suppose we had sampled five times, or twenty?
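To put rough numbers on that closing question: if each beep independently had a 78% chance of catching inner speech for these "always" reporters (independence across samples is a simplifying assumption, of course), the proportion whose "always" claim would survive repeated sampling shrinks fast:

```python
p = 0.78  # chance a single sampled moment shows inner speech, per the data above

# Probability that all n samples show inner speech, assuming independence:
for n in (1, 5, 20):
    print(n, round(p ** n, 3))
```

On this simple model, fewer than a third of the "always" reporters would remain counterexample-free after five samples, and under 1% after twenty.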

For comparison: 9% of those reporting "always" experiencing visual imagery denied experiencing visual imagery in their one sampled experience. And 42% did the same about visually experiencing the words on the page.

Participants' final reports, too, suggest substantial initial ignorance about their reading experience. The correlations between participants' initial and final generalizations about reading experience were .47 for visual imagery, .58 for inner speech, and .37 for experience of words on the page. Such medium-sized correlations are quite modest considering that the questions being correlated are verbatim identical questions about participants' reading experience in general, with an interval of about 5-10 minutes between. One might have thought that if people's general opinions about their experience are well-founded, the experience of reading a single passage should have only a minimal effect on such generalizations.