Wednesday, February 15, 2017

Human Nature Is Good: A Sketch of the Argument

The ancient Chinese philosopher Mengzi and the early modern French philosopher Rousseau both argued that human nature is good. The ancient Chinese philosopher Xunzi and the early modern English philosopher Hobbes argued that human nature is not good.

I interpret this as an empirical disagreement about human moral psychology. We can ask, who is closer to right?

1. Clarifying the Question.

First we need to clarify the question. What do Mengzi and Rousseau mean by the slogan that is normally translated into English as "human nature is good"? There are, I think, two main claims.

One is a claim about ordinary moral reactions: Normal people, if they haven't been too corrupted by a bad environment, will tend to be revolted by clear cases of morally bad behavior and pleased by clear cases of morally good behavior.

The other is a claim about moral development: If people reflect carefully on those reactions, their moral understanding will mature, and they will find themselves increasingly wanting to do what's morally right.

The contrasting view -- the Xunzi/Hobbes view -- is that morality is an artificial human construction. Unless the right moral system has specifically been inculcated in them, ordinary people will not normally find themselves revolted by evil and pleased by the good. At least to start, people need to be told what is right and wrong by others who are wiser than them. There is no innate moral compass to get you started in the right developmental direction.

2. Mixed Evidence?

One might think the truth is somewhere in the middle.

On the side of good: Anyone who suddenly sees a child crawling toward a well, about to fall in, would have an impulse to save the child, suggesting that everyone has some basic, non-selfish concern for the welfare of others, even without specific training (Mengzi 2A6). This concern appears to be present early in development. For example, even very young children show spontaneous compassion toward those who are hurt. Also, people of different origins and upbringings admire moral heroes who make sacrifices for the greater good, even when they aren't themselves directly benefited. Non-human primates show sympathy for each other and seem to understand the basics of reciprocity, exchange, and rule-following, suggesting that such norms aren't entirely a human invention. (On non-human primates, see especially Frans de Waal's 1996 book Good Natured.)

On the other hand: Toddlers (and adults!) can of course be selfish and greedy; they don't like to share or to wait their turn. In the southern U.S. about a century ago, crowds of ordinary White people frequently lynched Blacks for minor or invented offenses, proudly taking pictures and inviting their children along, without apparently seeing anything wrong in it. (See especially James Allen et al., Without Sanctuary.) The great "heroes" of the past include not only those who sacrificed for the greater good but also people famous mainly for conquest and genocide. We still barely seem to notice the horribleness of naming our boys "Alexander" and "Joshua".

3. Human Nature Is Nonetheless Good.

Some cases can be handled by emphasizing that only "normal" people who haven't been too corrupted by a bad environment will be attracted to good and revolted by evil. But a better general defense of the goodness of human nature involves adopting an idea that runs through both the Confucian and Buddhist traditions and, in the West, from Socrates through the Enlightenment to Habermas and Scanlon. It's this: If you stop and think, in an epistemically responsible way (perhaps especially in dialogue with others), you will tend to find yourself drawn toward what's morally good and repelled by what's evil.

Example A. Extreme ingroup bias. For example, one of the primary sources of evil that doesn't feel like evil -- and can in fact feel like doing something morally good -- is ingroup/outgroup thinking. Early 20th century Southern Whites saw Blacks as an outgroup, a "them" that needed to be controlled; the Nazis similarly viewed the Jews as alien; in celebrating wars of conquest, the suffering of the conquered group is either disregarded or treated as much less important than the benefits to the conquering group. Ingroup/outgroup thinking of this sort typically requires either ignoring others' suffering or accepting dubious theories that can't withstand objective scrutiny. (This is one function of propaganda.) The type of extreme ingroup bias that promotes evil behavior tends to be undermined by epistemically responsible reflection.

Example B. Selfishness and jerkitude. Similarly, selfish or jerkish behavior tends to be supported by rationalizations and excuses that prove flimsy when carefully examined. ("It's fine for me to cheat on the test because of X", "Our interns ought to expect to be hassled and harassed; it's just part of their job", etc.) If you were simply comfortable being selfish, you wouldn't need to concoct those poor justifications. If and when critical reflection finally reveals the flimsiness of those justifications, that normally creates some psychological pressure for you to change.

It's crucial not to overstate this point. We can be unshakable in our biases and rationalizations despite overwhelming evidence. And even when we do come to realize that something we eagerly want for ourselves or our group is immoral, we can still choose that thing. Evil might still be commonplace: Just as most plants don't survive to maturity, many people fall far short of their moral potential, often due to hard circumstances or negative outside influences.

Still, if we think well enough, we all can see the basic outlines of moral right and wrong; and something in us doesn't like to choose the wrong. This is true of pretty much everyone who isn't seriously socially deprived, regardless of the specifics of their cultural training. Furthermore, this inclination toward what's good -- I hope and believe -- is powerful enough to place at the center of moral education.

That is the empirical content of the claim that human nature is good.

I do have some qualms and hesitations, and I think it only works to a certain extent and within certain limits.

Perhaps oddly, the strikingly poor quality of the reasoning in recent U.S. politics has actually firmed up my opinion that careful reflection can indeed fairly easily reveal the lies behind evil.

-----------------------------------

Related: Human Nature and Moral Education in Mencius, Xunzi, Hobbes, and Rousseau (History of Philosophy Quarterly 2007).

[image source]

Monday, February 06, 2017

Should Ethics Professors Be Held to Higher Ethical Standards in Their Personal Behavior?

I've been waffling about this for years (e.g., here and here). Today, I'll try out a multi-dimensional answer.

1. My first thought is that it would be unfair for us to hold ethics professors to higher standards of personal behavior because of their career choice. Ethics professors are hired based on their academic skills as philosophers -- their ability to interpret texts, evaluate arguments, and write and teach effectively about a topic of philosophical discourse. If we demand that they also behave according to higher ethical standards than other professors, we put an additional burden on them that they don't deserve and that isn't written into their work contracts. They signed up to be scholars, not moral exemplars. (In this way, ethics professors differ from clergy, whose role is partly that of exemplar.)

2. Nonetheless, it might be reasonable for ethicists to hold themselves to higher moral standards. Consider my "cheeseburger ethicist" thought experiment. An ethicist reads Peter Singer on vegetarianism, considers the available counterarguments, and ultimately concludes that Singer is correct. Eating meat is seriously morally wrong, and we ought to stop. She publishes a couple of articles, and she teaches the arguments to her classes. But she just keeps eating meat at the same rate she always did, with no effort to change her ways. If challenged by a surprised student, maybe she defends herself with something like Thought 1 above: "I'm just paid to evaluate the arguments. Don't demand that I also live that way. I'm off duty!"

[Socrates: always on duty.]

There's something strange and disappointing, I think, about a response that depends on treating the study of ethics as just another job. Our cheeseburger ethicist knows a large range of literature, and she has given the matter extensive thought. If she insulates her philosophical thinking entirely from her personal behavior, she seems to be casting away a major resource for moral self-improvement. All of us, even if we don't aim to be saints, ought to take some advantage of the resources we have that can help us to be better people -- whether those resources are community, church, meditation, thoughtful reading, or the advice of friends we know to be wise. As I've imagined her, the cheeseburger ethicist shows a disconcerting lack of interest in becoming a better person.

We can run similar examples with political activism, charitable giving, environmentalism, sexual ethics, honesty, kindness, racism and sexism, etc. -- any issue with practical implications for one's life, to which an ethicist might give serious thought, leading to what she takes to be a discovery that she would be morally much better if she started doing X. Almost all ethicists have thought seriously about some issues with practical implications for their lives.

Combining 1 and 2. Despite the considerations of fairness raised in point 1, I think we can reasonably expect ethicists to shape and improve their personal behavior in a way that is informed by their professional ethical reasoning. This is not because ethicists have a special burden as exemplars but rather because it's reasonable to expect everyone to use the tools at their disposal toward moral self-improvement, at least to some moderate degree, or at least toward the avoidance of serious moral wrongdoing. We should similarly expect people who regularly attend religious services to try to use, rather than ignore, what they regard as the best moral insights of their religion. We should also expect secular non-ethicists to explore and improve their moral worldviews, in some way that suits their abilities and life circumstances, and apply some of the results.

3. My third thought is to be cautious with charges of hypocrisy. Part of the philosopher's job is to challenge widely held assumptions. This can mean embracing unusual or radical views, if that's where the arguments seem to lead. If we expect high consistency between a professional ethicist's espoused positions and her real-world choices, then we disincentivize highly demanding or self-sacrificial conclusions. But it seems, epistemically, like a good thing if professional ethicists have the liberty to consider, on their argumentative merits alone, the strength of the arguments for highly demanding ethical conclusions (e.g., the relatively wealthy should give most of their money to charity, or if you are attacked you should "turn the other cheek") alongside the arguments for much less demanding ethical conclusions (e.g., there's no obligation to give to charity, revenge against wrongdoing is just fine). If our ethicist knows that as soon as she reaches a demanding moral conclusion she risks charges of hypocrisy, then our ethicist might understandably be tempted to draw the more lenient conclusion instead. If we demand that ethicists live according to the norms they endorse, we effectively pressure them to favor lenient moral systems compatible with their existing lifestyles.

(ETA: Based on personal experience, and my sense of the sociology of the field, and one empirical study, it does seem that professional reflection on ethical issues, in contemporary Anglophone academia, coincides with a tendency to embrace more stringent moral norms and to see our lives as permeated with moral choices.)

4. And yet there's a complementary epistemic cost to insulating one's philosophical positions too much from one's life. To gain insight into an ethical position, especially a demanding one, it helps to try to live that way. When Gandhi and Martin Luther King Jr. talk about peaceful resistance, we rightly expect them to have some real understanding, since they have tried to put it to work. Similarly for Christian compassion, Buddhist detachment, strict Kantian honesty, or even egoistic hedonism: We ought to expect people who have attempted to put these things into practice to have, on average, a richer understanding of the issues than those who have not. If an ethicist aspires to write and teach about a topic, it seems almost intellectually irresponsible for them not to try to gain direct personal experience if they can.

(ETA 2: Also, to understand vice, it's probably useful to try it out! Or better, to have lived through it in the past.)

Combining 1, 2, 3, and 4. I don't think all of this fits neatly together. The four considerations are to some extent competing. Should we hold ethics professors to higher ethical standards? Should we expect them to live according to the moral opinions they espouse? Neither "yes" nor "no" does justice to the complexity of the issue.

At least, that's where I'm stuck today. I guess "multi-dimensional" is a polite word for "still confused and waffling".

[image source]

Friday, February 03, 2017

The Unskilled Zhuangzi: Big and Useless and Not So Good at Catching Rats

New essay in draft:

The Unskilled Zhuangzi: Big and Useless and Not So Good at Catching Rats

Abstract: The mainstream tradition in recent Anglophone Zhuangzi interpretation treats spontaneous skillful responsiveness -- similar to the spontaneous responsiveness of a skilled artisan, athlete, or musician -- as a, or the, Zhuangzian ideal. However, this interpretation is poorly grounded in the Inner Chapters. On the contrary, in the Inner Chapters, this sort of skillfulness is at least as commonly criticized as celebrated. Even the famous passage about the ox-carving cook might be interpreted more as a celebration of the knife’s passivity than as a celebration of the cook’s skillfulness.

--------------------------------------

This is a short essay at only 3500 words (about 10 double-spaced pages excluding abstract and references) -- just in and out with the textual evidence. Skill-centered interpretations of Zhuangzi are so widely accepted (e.g., despite important differences, by Graham, Hansen, and Ivanhoe) that people interested in Zhuangzi might enjoy seeing the contrarian case.

Available here.

As always, comments welcome either by email or in the comments section of this post. (I'd be especially interested in references to other scholars with a similar anti-skill reading, whom I may have missed.)

[image source]

Monday, January 30, 2017

David Livingstone Smith: The Politics of Salvation: Ideology, Propaganda, and Race in Trump's America

David Livingstone Smith's talk at UC Riverside, Jan 19, 2017:

Introduction by Milagros Pena, Dean of UCR's College of Humanities, Arts, and Social Sciences. Panel discussants are Jennifer Merolla (Political Science, UCR), Armando Navarro (Ethnic Studies, UCR), and me. After the Dean's remarks, David's talk is about 45 minutes, then about 5-10 minutes for each discussant, then open discussion with the audience for the remainder of the three hours, moderated by David Glidden (Philosophy, UCR).

Smith outlines Roger Money-Kyrle's theory of propaganda -- drawn from observing Hitler's speeches. On Money-Kyrle's view propaganda involves three stages: (1) induce depression, (2) induce paranoia, and (3) offer salvation. Smith argues that Trump's speeches follow this same pattern.

Smith also argues for a "teleofunctional" notion of ideological beliefs as beliefs that have the function of promoting oppression in the sense that those beliefs have proliferated because they promote oppression. On this view, beliefs are ideological, or not, depending on their social or cultural lineage. One's own personal reasons for adopting those beliefs are irrelevant to the question of whether they are ideological. In the case of Trump in particular, Smith argues, regardless of why he embraces the beliefs he does, or what his personal motives are, if his beliefs are beliefs with the cultural-historical function of promoting oppression, they are ideological.

Friday, January 27, 2017

What Happens to Democracy When the Experts Can't Be Both Factual and Balanced?

Yesterday Stephen Bannon, one of Trump's closest advisors, called the media "the opposition party". My op-ed piece in today's Los Angeles Times is my response to that type of thinking.

What Happens to Democracy When the Experts Can't Be Both Factual and Balanced?

Does democracy require journalists and educators to strive for political balance? I’m hardly alone in thinking the answer is "yes." But it also requires them to present the facts as they understand them — and when it is not possible to be factual and balanced at the same time, democratic institutions risk collapse.

Consider the problem abstractly. Democracy X is dominated by two parties, Y and Z. Party Y is committed to the truth of propositions A, B and C, while Party Z is committed to the falsity of A, B and C. Slowly the evidence mounts: A, B and C look very likely to be false. Observers in the media and experts in the education system begin to see this, but the evidence isn’t quite plain enough for non-experts, especially if those non-experts are aligned with Party Y and already committed to A, B and C....

[continued here]

Wednesday, January 25, 2017

Fiction Writing Workshop for Philosophers in Oxford, June 1-2

... the deadline for application is Feb. 1.

It's being run by the ever-awesome Helen De Cruz, supported by the British Society of Aesthetics. The speakers/mentors will be James Hawes, Sara L. Uckelman, and me.

More details here.

If you're at all interested, I hope you will apply!

Tuesday, January 24, 2017

The Philosopher's Rationalization-O-Meter

Usually when someone disagrees with me about a philosophical issue, I think they're about 20% correct. Once in a while, I think a comment is just straightforwardly wrong. Very rarely, I find myself convinced that the person who disagrees is correct and my original view was mistaken. But for the most part, there's a remarkable consistency: The critic has a piece of the truth, but I have more of it.

My inner skeptic finds this to be a highly suspicious state of affairs.

Let me clarify what I mean by "about 20% correct". I mean this: There's some merit in what the disagreeing person says, but on the whole my view is still closer to correct. Maybe there's some nuance that they're noticing, which I elided, but which doesn't undermine the big picture. Or maybe I wasn't careful or clear about some subsidiary point. Or maybe there's a plausible argument on the other side which isn't decisively refutable but which also isn't the best conclusion to draw from the full range of evidence holistically considered. Or maybe they've made a nice counterpoint which I hadn't previously considered but to which I have an excellent rejoinder available.

In contrast, for me to think that someone who disagrees with me is "mostly correct", I would have to be convinced that my initial view was probably mistaken. For example, if I argued that we ought to expect superintelligent AI to be phenomenally conscious, the critic ought to convince me that I was probably mistaken to assert that. Or if I argue that indifference is a type of racism, the critic ought to convince me that it's probably better to restrict the idea of "racism" to more active forms of prejudice.

From an abstract point of view, how often ought I expect to be convinced by those who object to my arguments, if I were admirably open-minded and rational?

For two reasons, the number should be below 50%:

1. For most of the issues I write about, I have given the matter more thought than most (not all!) of those who disagree with me. Mostly I write about issues that I have been considering for a long time or that are closely related to issues I've been considering for a long time.

2. Some (most?) philosophical disputes are such that even ideally good reasoners, fully informed of the relevant evidence, might persistently disagree without thereby being irrational. People might reasonably have different starting points or foundational assumptions that justify persisting disagreement.

Still, even taking 1 and 2 together, it seems that it should not be a rarity for a critic to raise an interesting, novel objection that I hadn't previously considered and which ought to persuade me. This is clear when I consider other philosophers: Often they get objections (sometimes from me) which, in my judgment, nicely illuminate what is incorrect in their views, and which should rationally lead them to change their views -- if only they weren't so defensively set upon rebutting all critiques! I doubt I am a much better philosopher than they are, wise enough to have wholly excellent opinions; so I must sometimes hear criticisms that ought to cause me to relinquish my views.

Let me venture to put some numbers on this.

Let's begin by excluding positions on which I have published at least one full-length paper. For those positions, considerations 1 and 2 plausibly suggest rational steadfastness in the large majority of cases.

A more revealing target is half-baked or three-quarters-baked positions on contentious issues: anything from a position I have expressed verbally, after a bit of thought, in a seminar or informal discussion, up to approximately a blog post, if the issue is fairly new to me.

Suppose that about 20% of the time what I say is off-base in a way that should be discoverable to me if I gave it more thought, in a reasonably open-minded, even-handed way. Now if I'm defending that off-base position in dialogue with someone substantially more expert than I, or with a couple of peers, or with a somewhat larger group of people who are less expert than I but still thoughtful and informed, maybe I should expect that about half to 3/4 of the time I'll hear an objection that ought to move me. Multiplying and rounding, let's say that about 1/8 of the time, when I put forward a half- or three-quarters-baked idea to some interlocutors, I ought to hear an objection that makes me think, whoops, I guess I'm probably mistaken!
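
For the numerically inclined, here's the back-of-the-envelope multiplication in a few lines of Python. The numbers are just my rough guesses from the paragraph above, not measurements of anything:

    # Rough arithmetic behind the "1/8" estimate -- guessed numbers, nothing measured.
    p_off_base = 0.20                      # chance a half-baked position of mine is discoverably off-base
    p_objection_lands = (0.50 + 0.75) / 2  # midpoint of "half to 3/4": chance an interlocutor raises the telling objection
    print(p_off_base * p_objection_lands)  # 0.125 -- i.e., about 1/8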

I hope this isn't too horrible an estimate, at least for a mature philosopher. For someone still maturing as a philosopher, the estimate should presumably be higher -- maybe 1/4. The estimate should similarly be higher if the half- or three-quarters-baked idea is a critique of someone more expert than you, concerning the topic of their philosophical expertise (e.g., pushing back against a Kant expert's interpretation of a passage of Kant that you're interested in).

Here then are two opposed epistemic vices: being too deferential or being too stubborn. The cartoon of excessive deferentiality would be the person who instantly withdraws in the face of criticism, too quickly allowing that they are probably mistaken. Students are sometimes like this, but it's hard for a really deferential person to make it far as a professional philosopher in U.S. academic culture. The cartoon of excessive stubbornness is the person who is always ready to cook up some post-hoc rationalization of whatever half-baked position happens to come out of their mouth, always fighting back, never yielding, never seeing any merit in any criticisms of their views, however wrong their views plainly are. This is perhaps the more common vice in professional philosophy in the U.S., though of course no one is quite as bad as the cartoon.

Here's a third, more subtle epistemic vice: always giving the same amount of deference. Cartoon version: For any criticism you hear, you think there's 20% truth in it (so you're partly deferential) but you never think there's more than 20% truth in it (so you're mostly stubborn). This is what my inner skeptic was worried about at the beginning of this post. I might be too close to this cartoon, always a little deferential but mostly stubborn, without sufficient sensitivity to the quality of the particular criticism being directed at me.

We can now construct a rationalization-o-meter. Stubborn rationalization, in a mature philosopher, is revealed by not thinking your critics are right, and you are wrong, at least 1/8 of the time, when you're putting forward half- to three-quarters-baked ideas. If you stand firm in 15 out of 16 cases, then you're either unusually wise in your half-baked thoughts, or you're at .5 on the rationalization-o-meter (50% of the time that you should yield you offer post-hoc rationalizations instead). If you're still maturing or if you're critiquing an expert on their own turf, the meter should read correspondingly higher, e.g., with a normative target of thinking you were demonstrably off-base 1/4 or even half the time.
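
If it helps, the meter can be put as a tiny formula: divide how often you actually yielded by how often you normatively should have yielded, and subtract the result from one. A toy sketch in Python (the function name and numbers are mine, purely illustrative):

    def rationalization_o_meter(times_yielded, occasions, normative_rate=1/8):
        """Estimated fraction of warranted concessions replaced by post-hoc rationalization.

        normative_rate: how often someone floating half- to three-quarters-baked
        ideas *should* yield -- 1/8 for a mature philosopher by the estimate above,
        more like 1/4 if still maturing or critiquing an expert on their own turf.
        """
        expected_yields = normative_rate * occasions
        return max(0.0, 1 - times_yielded / expected_yields)

    # Standing firm in 15 out of 16 half-baked defenses:
    print(rationalization_o_meter(times_yielded=1, occasions=16))  # 0.5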

Insensitivity is revealed by having too little variation in how much truth you find in critics' remarks. I'd try to build an insensitivity-o-meter, but I'm sure you all will raise somewhat legitimate but non-decisive concerns against it.

[image modified from source]

Monday, January 23, 2017

Reminder: Philosophical Short Fiction Contest Deadline Feb 1

Reminder: We are inviting submissions for the short story competition “Philosophy Through Fiction”, organized by Helen De Cruz (Oxford Brookes University), with editorial board members Eric Schwitzgebel (UC Riverside), Meghan Sullivan (University of Notre Dame), and Mark Silcox (University of Central Oklahoma). The winner of the competition will receive a cash prize of US$500 (funded by the Berry Fund of the APA) and their story will be published in Sci Phi Journal.

Full call here.

Monday, January 16, 2017

AI Consciousness: A Reply to Schwitzgebel

Guest post by Susan Schneider

If AI outsmarts us, I hope it's conscious. It might help with the horrifying control problem -- the problem of how to control superintelligent AI (SAI), given that SAI would be vastly smarter than us and could rewrite its own code. Just as some humans respect nonhuman animals because animals feel, so too, conscious SAI might respect us because they see within us the light of conscious experience.

So, will an SAI (or even a less intelligent AI) be conscious? In a recent TED talk, Nautilus and Huffington Post pieces, and some academic articles (all at my website), I've been urging that it is an important open question.

I love Schwitzgebel's reply because he sketches the best possible scenario for AI consciousness: noting that conscious states tend to be associated with slow, deliberative reasoning about novel situations in humans, he suggests that SAI may endlessly invent novel tasks – e.g., perhaps they posit ever more challenging mathematical proofs, or engage in an intellectual arms race with competing SAIs. So SAIs could still engage in reasoning about novel situations, and thereby be conscious.

Indeed, perhaps SAI will deliberately engineer heightened conscious experience in itself, or, in an instinct to parent, create AI mindchildren that are conscious.

Schwitzgebel gives further reason for hope: "...unity of organization in a complex system plausibly requires some high-level self-representation or broad systemic information sharing." He also writes: "Otherwise, it's unlikely that such a system could act coherently over the long term. Its left hand wouldn't know what its right hand is doing."

Both of us agree that leading scientific approaches to consciousness correlate consciousness with novel learning and slow, deliberative focus, and that these approaches also associate consciousness with some sort of broad information sharing from a central system or global workspace (see Ch. 2 of my Language of Thought: A New Philosophical Direction, where I mine Baars' Global Workspace Theory for a computational approach to LOT's central system).

Maybe it is just that I'm too despondent since Princess Leia died. But here are a few reasons why I still see the glass half empty:

a. Eric's points assume that reasoning about novel situations, and centralized, deliberative thinking more generally, will be implemented in SAI in the same way they are in humans – i.e., in a way that involves conscious experience. But the space of possible minds is vast: There could be other architectural ways to get novel reasoning, central control, etc. that do not involve consciousness or a global workspace. Indeed, if we merely consider biological life on Earth we see intelligences radically unlike us (e.g., slime molds, octopuses); there will likely be radically different cognitive architectures in the age of AGI/superintelligence.

b. SAI may not have a centralized architecture in any case. A centralized architecture is a place where highly processed information comes together from the various sensory modalities (including association areas). Consider the octopus, which apparently has more neurons in its arms than in its brain. The arms can carry out activity without the brain; these activities do not need to be coordinated by a central controller or global workspace in the brain proper. Maybe a creature already exists, elsewhere in the universe, that has even less central control than the octopus.

Indeed, coordinated activity doesn't require that a brain region or brain process be a place where it all comes together, although it helps. There are all kinds of highly coordinated group activities on Earth, for instance (the internet, the stock market). And if you ask me, there are human bodies that are now led by coordinated conglomerates without a central controller. Here, I am thinking of split-brain patients, who engage in coordinated activity (i.e., the right and left limbs seem to others to be coordinated). But the brain has been split through removal of the corpus callosum, and plausibly, there are two subjects of experience there. The coordination is so convincing that even the patient's spouse doesn't realize there are two subjects there. It takes highly contrived laboratory tests to determine that the two hemispheres are separate conscious beings. How could this be? Each hemisphere examines the activity of the other hemisphere (the right hemisphere observes the behavior of the limb it doesn't control, etc.), and only one hemisphere controls the mouth.

c. But assume the SAI or AGI has a cognitive architecture similar to ours; in particular, assume it has an integrated central system or global workspace (as in Baars' Global Workspace Theory). I still think consciousness is an open question here. The problem is that only some implementations of a central system (or global workspace) may be conscious, while others may not be. Highly integrated, centralized information processing may be necessary, but not sufficient. For instance, it may be that the very properties of neurons that enable consciousness, C1-C3, say, are not ones that AI programs need to reproduce to get AI systems that do the needed work. Perhaps AI programmers can get sophisticated information processing without needing to go as far as to build systems that instantiate C1-C3. Or perhaps a self-improving AI may not bother to keep consciousness in its architecture; or, lacking consciousness from the start, it may not bother to engineer it in, as its final and instrumental goals may not require it. And who knows what their final goals will be; none of the instrumental goals Bostrom and others identify require consciousness (goal content integrity, cognitive enhancement, etc.).

Objection (Henry Shevlin and others): am I denying that it is nomologically possible to create a copy of a human brain, in silicon or some other substance, that precisely mimics the causal workings of the brain, including consciousness?

I don't deny this. I think that if you copy the precise causal workings of cells in a different medium you could get consciousness. The problem is that it may not be technologically feasible to do so. (An aside: for those who care about the nature of properties, I reject pure categorical properties; I have a two-sided view, following Heil and Martin. Categoricity and dispositionality are just different ways of viewing the same underlying property—two different modes of presentation, if you will. So consciousness properties that have all and only the same dispositions are the same type of property. You and your dispositional duplicate can't differ in your categorical properties then. Zombies aren't possible.)

It seems nomologically possible that an advanced civilization could build a gold sphere the size of Venus. What is the probability this will ever happen, though? This depends upon economics and sociology -- a civilization would need to have a practical incentive to do this. I bet it will never happen.

AI is currently being built to do specific tasks better than us. This is the goal, not reproducing consciousness in machines. It may be that the substrate used to build AI is not a substrate that instantiates consciousness easily. Engineering consciousness in may be too expensive and time-consuming. Like building the Venus-sized gold sphere. Indeed, given the ethical problems with creating sentient beings and then having them work for us, AI programs may aim to build systems that aren't conscious.

A response here is that once you get a sophisticated information processor, consciousness inevitably arises. Three things seem to fuel this view: (1) Tononi's integrated information theory (IIT). But it seems to have counterexamples (see Scott Aaronson's blog). (2) Panpsychism/panprotopsychism. Even if one of these views is correct, the issue of whether a given AI is conscious is about whether the AI in question has the kind of conscious experience macroscopic subjects of experience (persons, selves, nonhuman animals) have. Merely knowing whether panpsychism or panprotopsychism is true does not answer this. We need to know which structural relations between particles lead to macroexperience. (3) Neural replacement cases, i.e., thought experiments in which you are asked to envision replacing parts of your brain (at time t1) with silicon chips that function just like neurons, so that in the end (t2), your brain is made of silicon. You are then asked: intuitively, are you still conscious? Do you think the quality of your consciousness would change? These cases only go so far. The intuition is plausible that from t1 to t2, at no point would you lose consciousness or have your consciousness diminished (see Chalmers, Lowe and Plantinga for discussion of such thought experiments). This is because a dispositional duplicate of your brain is created, from t1 to t2. If the chips are dispositional duplicates of neurons, sure, I think the duplicate would be conscious. (I'm not sure this would be a situation in which you survived, though -- see my NYT op-ed on uploading.) But why would an AI company build such a system from scratch, to clean your home, be a romantic partner, advise a president, etc.?

Again, it is not clear, currently, that just by creating a fast, efficient program ("IP properties") we have also copied the very same properties that give rise to consciousness in humans ("C properties"). It may require further work to get C properties, and in different substrates it may be hard -- far more difficult than building a biological system from scratch. Like creating a gold sphere the size of Venus.

Cheers to a Philosopher and Fighter

[image source]

Sunday, January 08, 2017

Against Charity in the History of Philosophy

Peter Adamson, host of History of Philosophy Without Any Gaps, recently posted twenty "Rules for the History of Philosophy". Mostly, they are terrific rules. I want to quibble with one.

Like almost every historian of philosophy I know, Adamson recommends that we be "charitable" to the text. Here's how he puts it in "Rule 2: Respect the text":

This is my version of what is sometimes called the "principle of charity." A minimal version of this rule is that we should assume, in the absence of fairly strong reasons for doubt, that the philosophical texts we are reading make sense.... [It] seems obvious (to me at least) that useful history of philosophy doesn't involve looking for inconsistencies and mistakes, but rather trying one's best to get a coherent and interesting line of argument out of the text. This is, of course, not to say that historical figures never contradicted themselves, made errors, and the like, but our interpretations should seek to avoid imputing such slips to them unless we have tried hard and failed to find a way of resolving the apparent slip.

At first pass, it seems a good idea to avoid imputing contradictions and errors, and to seek a coherent, sensible interpretation of historical texts "unless we have tried hard and failed to find a way of resolving the apparent slip". This is how, it seems, to best "respect the text".

To see why I think charity isn't as good an idea as it seems, let me first reveal my main reason for reading history of philosophy: It's to gain a perspective, through the lens of distance, on my own philosophical views and presuppositions, and on the philosophical attitudes and presuppositions of 21st century Anglophone philosophy generally. Twenty-first century Anglophone philosophy tends to assume that the world is wholly material (with the exception of religious dualists and near cousins of materialists, like property dualists). I'm inclined to accept the majority's materialism. Reading the history of philosophy helpfully reminds me that a wide range of other views have been taken seriously over time. Similarly, 21st century Anglophone philosophy tends to favor a certain sort of liberal ethics, with an emphasis on individual rights and comparatively little deference to traditional rules and social roles -- and I tend to favor such an ethics too. But it's good to be vividly aware that wonderful thinkers have often had very different moral opinions. Reading culturally distant texts reminds me that I am a creature of my era, with views that have been shaped by contingent social factors.

Of course, others might read history of philosophy with very different aims, which is fine.

Question: If this is my aim in reading history of philosophy, what is the most counterproductive thing I could do when confronting a historical text?

Answer: Interpret the author as endorsing a view that is familiar, "sensible", and similar to my own and my colleagues'.

Historical texts, like all philosophical texts -- but more so, given our linguistic and cultural distance -- tend to be difficult and ambiguous. Therefore, they will admit of multiple interpretations. Suppose, then, that there's a text admitting of four possible interpretations: A, B, C, and D, where Interpretation A is the least challenging, least weird, and most sensible, and Interpretation D is the most challenging, weirdest, and least sensible. A simple application of the principle of charity seems to recommend that we favor the sensible, pedestrian Interpretation A. In fact, however, weird and wild Interpretation D would challenge our presuppositions more deeply and give us a more helpfully distant perspective. This is one reason to favor Interpretation D. Call this the Principle of Anti-Charity.

Admittedly, this way of defending of Anti-Charity might seem noxiously instrumentalist. What about historical accuracy? Don't we want the interpretation that's most likely to be true?

Bracketing post-modern views that reject truth in textual interpretation, I have four responses to that concern:

1. Being Anti-Charitable doesn't mean that anything goes. You still want to respect the surface of the text. If the author says "P", you don't want to attribute the view not-P. In fact, it is the more "charitable" views that are likely to take the author's claims other than at face value: "The author says P, but really a charitable, sensible interpretation is that the author really meant P-prime". In one way, it is actually more respectful to the texts not to be too charitable, and to interpret the text superficially at face value. After all, P is what the author literally said.

2. What seems "coherent" and "sensible" is culturally variable. You might reject excessive charitableness, while still wanting to limit allowable interpretations to one among several sensible and coherent ones. But this might already be too limiting. It might not seem "coherent" to us to embrace a contradiction, but some philosophers in some traditions seem happy to accept bald contradictions. It might not seem "sensible" to think that the world is nothing but a flux of ideas, such that the existence of rocks depends entirely upon the states of immaterial spirits. So if there's any ambiguity, you might hope to tame views that seem metaphysically idealist, thereby giving those authors a more sensible, reasonable seeming view. But this might be leading you away from rather than toward interpretative accuracy.

3. Philosophy is hard and philosophers are stupid. The human mind is not well-designed for figuring out philosophical truths. Timeless philosophical puzzles tend to kick our collective asses. Sadly, this is going to be true of your favorite philosopher too. The odds are good that this philosopher, being a flawed human like you and me, made mistakes, fell into contradictions, changed opinions, and failed to see what seem to be obvious consequences and counterexamples. Respecting the text and respecting the person means, in part, not trying too hard to smooth this stuff away. The warts are part of the loveliness. They are also a tonic against excessive hero worship and a reminder of your own likely warts and failings.

4. Some authors might not even want to be interpreted as having a coherent, stable view. I have recently argued that this is the case for the ancient Chinese philosopher Zhuangzi. Let's not fetishize stable coherence. There are lots of reasons to write philosophy. Some philosophers might not care if it all fits together. Here, attempting "charitably" to stitch together a coherent picture might be a failure to respect the aims and intentions implicit in the text.

Three cheers for the weird and "crazy", the naked text, not dressed in sensible 21st century garb!

-----------------------------------------------

Related post: In Defense of Uncharitable and Superficial History of Philosophy (Aug 17, 2012)

(HT: Sandy Goldberg for discussion and suggestion to turn it into a blog post)

[image source]

Sunday, January 01, 2017

Writings of 2016, and Why I Love Philosophy

It's a tradition for me now, posting a retrospect of the past year's writings on New Year's Day. (Here are the retrospects of 2012, 2013, 2014, and 2015.)

Two landmarks: my first full-length published essay on the sociology of philosophy ("Women in philosophy", with Carolyn Dicey Jennings), and the first foreign-language translations of my science fiction ("The Dauphin's Metaphysics" into Chinese and Hungarian).

Recently, I've been thinking about the value of doing philosophy. Obviously, I love reading, writing, and discussing philosophy, on a wide range of topics -- hence all the publications, the blog, the travel, and so forth. Only love could sustain that. But do I love it only in the way that I might love a videogame -- as a challenging, pleasurable activity, but not something worthwhile? No, I do hope that in doing philosophy I am doing something worthwhile.

But what makes philosophy worthwhile?

One common view is that studying philosophy makes you wiser or more ethical. Maybe this is true, in some instances. But my own work provides reasons for doubt: With Joshua Rust, I've found that ethicists and non-ethicist philosophers behave pretty much the same as professors who study other topics. With Fiery Cushman, I've found evidence that philosophers are just as subject to irrational order effects and framing effects in thinking about moral scenarios, even scenarios on which they claim expertise. With Jon Ellis, I've argued that there's good reason to think that philosophical and moral thought may be especially fertile for nonconscious rationalization, including among professors of philosophy.

Philosophy might still be instrumentally worthwhile in various ways: Philosophers might create conceptual frameworks that are useful for the sciences, and they might helpfully challenge scientists' presuppositions. It might be good to have philosophy professors around so that students can improve their argumentative and writing skills by taking courses with them. Public philosophers might contribute usefully to political and cultural dialogue. But none of this seems to be the heart of the matter. Nor is it clear that we've made great progress in answering the timeless questions of the discipline. (I do think we've made some progress, especially in carving out the logical space of options.)

Here's what I would emphasize instead: Philosophy is an intrinsically worthwhile activity with no need of further excuse. It is simply one of the most glorious, awesome facts about our planet that there are bags of mostly-water that can step back from ordinary activity and reflect in a serious way about the big picture, about what they are, and why, and about what really has value, and about the nature of the cosmos, and about the very activity of philosophical reflection itself. Moreover, it is one of the most glorious, awesome facts about our society that there is a thriving academic discipline that encourages people to do exactly that.

This justification of philosophy does not depend on any downstream effects: Maybe once you stop thinking about philosophy, you act just the same as you would have otherwise acted. Maybe you gain no real wisdom of any sort. Maybe you learn nothing useful at all. Even so, for those moments that you are thinking hard about big philosophical issues, you are participating in something that makes life on Earth amazing. You are a piece of that.

So yes, I want to be a piece of that too. Welcome to 2017. Come love philosophy with me.

-----------------------------------

Full-length non-fiction essays appearing in print in 2016:

    “The behavior of ethicists” (with Joshua Rust), in J. Sytsma and W. Buckwalter, eds., A Companion to Experimental Philosophy (Wiley-Blackwell).
Full-length non-fiction finished and forthcoming:
Shorter non-fiction:
Editing work:
    Oneness in philosophy, religion, and psychology (with P.J. Ivanhoe, O. Flanagan, R. Harrison, and H. Sarkissian), Columbia University Press (forthcoming).
Non-fiction in draft and circulating:
Science fiction stories:
    "The Dauphin's metaphysics" (orig. published in Unlikely Story, 2015).
      - translated into Hungarian for Galaktika, issue 316.
      - translated into Chinese for Science Fiction World, issue 367.
Some favorite blog posts:
Selected interviews:

[image modified from here]

Tuesday, December 27, 2016

A few days ago, Skye Cleary interviewed me for the Blog of the APA. I love her direct and sometimes whimsical questions.

--------------------------

SC: What excites you about philosophy?

ES: I love philosophy’s power to undercut dogmatism and certainty, to challenge what you thought you knew about yourself and the world, to induce wonder, and to open up new vistas of possibility.

SC: What are you working on right now?

ES: About 15 things. Foremost in my mind at this instant: “Settling for Moral Mediocrity” and a series of essays on “crazy” metaphysical possibilities that we aren’t in a good epistemic position to confidently reject....

[It's a brief interview -- only six more short questions.]

Read the rest here.

Wednesday, December 21, 2016

Is Most of the Intelligence in the Universe Non-Conscious AI?

In a series of fascinating recent articles, philosopher Susan Schneider argues that

(1.) Most of the intelligent beings in the universe might be Artificial Intelligences rather than biological life forms.

(2.) These AIs might entirely lack conscious experiences.

Schneider's argument for (1) is simple and plausible: Once a species develops sufficient intelligence to create Artificial General Intelligence (as human beings appear to be on the cusp of doing), biological life forms are likely to be outcompeted, due to AGI's probable advantages in processing speed, durability, repairability, and environmental tolerance (including deep space). I'm inclined to agree. For a catastrophic perspective on this issue, see Nick Bostrom. For a Pollyannaish perspective, see Ray Kurzweil.

The argument for (2) is trickier, partly because we don't yet have a consensus theory of consciousness. Here's how Schneider expresses the central argument in her recent Nautilus article:

Further, it may be more efficient for a self-improving superintelligence to eliminate consciousness. Think about how consciousness works in the human case. Only a small percentage of human mental processing is accessible to the conscious mind. Consciousness is correlated with novel learning tasks that require attention and focus. A superintelligence would possess expert-level knowledge in every domain, with rapid-fire computations ranging over vast databases that could include the entire Internet and ultimately encompass an entire galaxy. What would be novel to it? What would require slow, deliberative focus? Wouldn’t it have mastered everything already? Like an experienced driver on a familiar road, it could rely on nonconscious processing.

On this issue, I'm more optimistic than Schneider. Two reasons:

First, Schneider probably underestimates the capacity of the universe to create problems that require novel solutions. Mathematical problems, for example, can be arbitrarily difficult (including problems that are neither finitely solvable nor provably unsolvable). Of course AGI might not care about such problems, so that alone is a thin thread on which to hang hope for consciousness. More importantly, if we assume Darwinian mechanisms, including the existence of other AGIs that present competitive and cooperative opportunities, then there ought to be advantages for AGIs that can outthink the other AGIs around them. And here, as in the mathematical case, I see no reason to expect an upper bound of difficulty. If your Darwinian opponent is a superintelligent AGI, you'd probably love to be an AGI with superintelligence + 1. (Of course, there are other paths to evolutionary success than intelligent creativity. But it's plausible that once superintelligent AGI emerges, there will be evolutionary niches that reward high levels of creative intelligence.)

Second, unity of organization in a complex system plausibly requires some high-level self-representation or broad systemic information sharing. Schneider is right that many current scientific approaches to consciousness correlate consciousness with novel learning and slow, deliberative focus. But most current scientific approaches to consciousness also associate consciousness with some sort of broad information sharing -- a "global workspace" or "fame in the brain" or "availability to working memory" or "higher-order" self-representation. On such views, we would expect a state of an intelligent system to be conscious if its content is available to the entity's other subsystems and/or reportable in some sort of "introspective" summary. For example, if a large AI knew, about its own processing of lightwave input, that it was representing huge amounts of light in the visible spectrum from direction alpha, and if the AI could report that fact to other AIs, and if the AI could accordingly modulate the processing of some of its non-visual subsystems (its long-term goal processing, its processing of sound wave information, its processing of linguistic input), then on theories of this general sort, its representation "lots of visible light from that direction!" would be conscious. And we ought probably to expect that large general AI systems would have the capacity to monitor their own states and distribute selected information widely. Otherwise, it's unlikely that such a system could act coherently over the long term. Its left hand wouldn't know what its right hand is doing.
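
To make the broad-information-sharing idea concrete, here's a toy sketch of a global-workspace-style broadcast in Python -- my illustration only, not Baars' actual model or any real AI architecture. The point is just that a "broadcast" state is one whose content is made available to the system's other subsystems, which can then modulate their own processing accordingly:

    # Toy illustration (mine) of a global-workspace-style broadcast.

    class Subsystem:
        def __init__(self, name):
            self.name = name

        def receive(self, content):
            # Each subsystem adjusts its own processing given the shared content.
            print(f"{self.name}: modulating processing given '{content}'")

    class GlobalWorkspace:
        def __init__(self):
            self.subscribers = []

        def subscribe(self, subsystem):
            self.subscribers.append(subsystem)

        def broadcast(self, content):
            # "Fame in the brain": every subscribed subsystem gets the news.
            for subsystem in self.subscribers:
                subsystem.receive(content)

    workspace = GlobalWorkspace()
    for name in ["long-term goal processing", "audition", "language"]:
        workspace.subscribe(Subsystem(name))

    # On global-workspace-style views, the visual representation would count as
    # conscious once its content is broadcast system-wide:
    workspace.broadcast("lots of visible light from direction alpha!")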

I share with Schneider a high degree of uncertainty about what the best theory of consciousness is. Perhaps it will turn out that consciousness depends crucially on some biological facts about us that aren't likely to be replicated in systems made of very different materials (see John Searle and Ned Block for concerns). But to the extent there's any general consensus or best guess about the science of consciousness, I believe it suggests hope rather than pessimism about the consciousness of large superintelligent AI systems.

Related:

Possible Psychology of a Matrioshka Brain (Oct 9, 2014)

If Materialism Is True, the United States Is Probably Conscious (Philosophical Studies 2015).

Susan Schneider on How to Prevent a Zombie Dictatorship (Jun 27, 2016)

[image source]

Friday, December 16, 2016

Extraterrestrial Microbes and Being Alone in the Universe

A couple of weeks ago I posted some thoughts that I intended to give after a cosmology talk here at UCR. As it happens, I gave an entirely different set of comments! So I figured I might as well also share the comments I actually gave.

Although the cosmology talk made no or almost no mention of extraterrestrial life, it had been advertised as the first in a series of talks on the question "Are We Alone?" The moderator then talked about astrobiologists being excited about the possibility of discovering extraterrestrial microbial life. So I figured I'd expand a bit on the idea of being "alone", or not, in the universe.

Okay, suppose that we find microbial life on another planet. Tiny micro-organisms. How excited should we be?

The title of this series of talks -- written in big letters on the posters -- is "Are We Alone?" What does it mean to be alone?

Think of Robinson Crusoe. He was stranded on an island, all by himself (or so he thought). He is kind of our paradigm example of someone who is totally alone. But of course he was surrounded by life on that island -- trees, fish, snails, microbes on his face. This suggests that on one way of thinking about being "alone", a person can be entirely alone despite being surrounded by life. Discovering microbes on another planet would not make us any less alone.

To be not alone, I’m thinking, means having some sort of companion. Someone who will recognize you socially. Intelligent life. Or at least a dog.

We might be excited to discover microbes because hey, it's life! But what’s so exciting about life per se?

Life -- something that maintains homeostasis, has some sort of stable organization, draws energy from its environment to maintain that homeostatic organization, reproduces itself, is complex. Okay, that's neat. But the Great Red Spot on Jupiter, which is a giant weather pattern, has maintained its organization for a long time in a complex environment. Flames jumping across treetops in some sense reproduce themselves. Galaxies are complex. Homeostasis, reproduction, complexity -- these are cool. Tie them together in a little package of microbial life; that’s maybe even cooler. But in a way we do kind of already know that all the elements are out there.

Now suppose that instead of finding life we found a robot -- an intelligent, social robot, like C3P0 from Star Wars or Data from Star Trek. Not alive, by standard biological definitions, if it doesn’t belong to a reproducing species.

Finding life would be cool.

But finding C3P0 would be a better cure for loneliness.

(Apologies to my student Will Swanson, who has recently written a terrific paper on why we should think of robots as "alive" despite not meeting standard biological criteria for life.)

Related post: "Why Do We Care About Discovering Life, Exactly?" (Jun 18, 2015)

Recorded video of the Dec 8 session.

Thanks to Nalo Hopkinson for the dog example.

[image source]

Monday, December 12, 2016

Is Consciousness an Illusion?

In the current issue of the Journal of Consciousness Studies, Keith Frankish argues that consciousness is an illusion -- or at least that "phenomenal consciousness" is an illusion. It doesn't exist.

Now I think there are basically two different things that one could mean in saying "consciousness doesn't exist".

(A.) One is something that seems to be patently absurd and decisively refuted by every moment of lived experience: that there is no such thing as lived experience. If it sounds preposterous to deny that anyone ever has conscious experience, then you're probably understanding the claim correctly. It is a radically strange claim. Of course philosophers do sometimes defend radically strange, preposterous-sounding positions. Among them, this would be a doozy.

(B.) Alternatively, you might think that when a philosopher says that consciousness exists (or "phenomenal consciousness" or "lived, subjective experience" or whatever) she's usually not just saying the almost undeniably obvious thing. You might think that she's probably also regarding certain disputable properties as definitionally essential to consciousness. You might hear her as saying not only that there is lived experience in the almost undeniable sense but also that the target phenomenon is irreducible to the merely physical, or is infallibly knowable through introspection, or is constantly accompanied by a self-representational element, or something like that. Someone who hears the claim that "consciousness exists" in this stronger, more commissive sense might then deny that consciousness does exist, if they think that nothing exists that has those disputable properties. This might be an unintuitive claim, if it's intuitively plausible that consciousness does have those properties. But it's not a jaw dropper.

Admittedly, there has been some unclarity in how philosophers define "consciousness". It's not entirely clear on the face of it what Frankish means to deny the existence of in the article linked above. Is he going for the totally absurd sounding claim, or only the more moderate claim? (Or maybe something somehow in between or slightly to the side of either of these?)

In my view, the best and most helpful definitions of "consciousness" are the less commissive ones. The usual approach is to point to some examples of conscious experiences, while also mentioning some synonyms or evocative phrases. Examples include sensory experiences, dreams, vivid surges of emotion, and sentences spoken silently to oneself. Near synonyms or evocative phrases include "subjective quality", "stream of experience", "that in virtue of which it's like something to be a person". While you might quibble about any particular example or phrase, it is in this sense of "consciousness" that it seems to be undeniable or absurd to deny that consciousness exists. It is in this sense that the existence of consciousness is, as David Chalmers says, a "datum" that philosophers and psychologists need to accept.

Still, we might be dissatisfied with evocative phrases and pointing to examples. For one thing, such a definition doesn't seem very rigorous, compared to an analytic definition. For another thing, you can't do very much a priori with such a thin definition, if you want to build an argument from the existence of consciousness to some bold philosophical conclusion (like the incompleteness of physical science or the existence of an immaterial soul). So philosophers are understandably tempted to add more to the definition -- whatever further claims about consciousness seem plausible to them. But then, of course, they risk adding too much and losing the undeniability of the claim that consciousness exists.

When I read Frankish's article in preprint, I wasn't sure how radical a claim he meant to defend, in denying the existence of phenomenal consciousness. Was he going for the seemingly absurd claim? Or only for the possibly-unintuitive-but-much-less-radical claim?

So I wrote a commentary in which I tried to define "phenomenal consciousness" as innocently as possible, simply by appealing to what I hoped would be uncontroversial examples of it, while explicitly disavowing any definitional commitment to immateriality, introspective infallibility, irreducibility, etc. (final MS version). Did Frankish mean to deny the existence of phenomenal consciousness in that sense?

In one important respect, I should say, definition by example is necessarily substantive or commissive: Definition by example cannot succeed if the examples are a mere hodgepodge without any important commonalities. Even if there isn't a single unifying essence among the examples, there must at least be some sort of "family resemblance" that ordinary people can latch on to, more or less.

For instance, the following would fail as an attempted definition: By "blickets" I mean things like: this cup on my desk, my right shoe, the Eiffel Tower, Mickey Mouse, and other things like those; but not this stapler on my desk, my left shoe, the Taj Mahal, Donald Duck, or other things like those. What property could the first group possibly possess, and the second group lack, that ordinary people could latch onto by contemplating these examples? None, presumably (even if a clever philosopher or AI could find some such property). Defining "consciousness" by example requires there to be some shared property or family resemblance among the examples, which is not present in things we normally regard as "nonconscious" (early visual processing, memories stored but not presently considered, and growth hormone release). The putative examples cannot be a mere hodgepodge.

Definition by example can be silent about what descriptive features all these conscious experiences share, just as a definition by example of "furniture" or "games" might be silent about what ties those concepts together. Maybe all conscious experiences are in principle introspectively reportable, or nonphysical, or instantiated by 40 hertz neuronal oscillations. Grant first that consciousness exists. Argue about these other things later.

In his reply to my commentary, Frankish accepts the existence of "phenomenal consciousness" as I have defined it -- which is really (I think) more or less how it is already defined and ought to be defined in the recent Anglophone "phenomenal realist" tradition. (The "phenomenal" in "phenomenal consciousness", I think, serves as a usually unnecessary disambiguator, to prevent interpreting "consciousness" as some other less obvious but related thing like explicit self-consciousness or functional accessibility to cognition.) If so, then Frankish is saying something less radical than it might at first seem when he rejects the existence of "phenomenal consciousness".

So is consciousness an illusion? No, not if you define "consciousness" as you ought to.

Maybe my dispute with Frankish is mainly terminological. But it's a pretty important piece of terminology!

[image source, Pinna et al 2002, The Pinna Illusion]

Tuesday, December 06, 2016

A Philosophical Critique of the Big Bang Theory, in Four Minutes

I've been invited to be one of four humanities panelists after a public lecture on the early history of the universe. (Come by if you're in the UCR area. ETA: Or watch it live-streamed.) The speaker, Bahram Mobasher, has told me he likes to keep it tightly scientific -- no far-out speculations about the multiverse, no discussion of possible alien intelligences. Instead, we'll hear about H/He ratios, galactic formation, that sort of stuff. I have nothing to say about H/He ratios.

So here's what I'll say instead:

Alternatively, here’s a different way our universe might have begun: Someone might have designed a computer program. They might have put simulated agents in that computer program, and those simulated agents might be us. That is, we might be artificial intelligences inside an artificial environment created by some being who exists outside of our visible world. And this computer program that we are living in might have started ten years ago or ten million years ago or ten minutes ago.

This is called the Simulation Hypothesis. Maybe you’ve heard that Elon Musk, the famous tycoon of PayPal, Tesla, and SpaceX, believes that the Simulation Hypothesis is probably true.

Most of you probably think that Musk is wrong. Probably you think it vastly more likely that Professor Mobasher’s story is correct than that the Simulation Hypothesis is correct. Or maybe you think it’s somewhat more likely that Mobasher is correct.

My question is: What grounds this sense of relative likelihood? It’s doubtful that we can get definite scientific proof that we are not in a simulation. But does that mean that there are no rational constraints on what it’s more or less reasonable to guess about such matters? Are we left only with hard science on the one hand and rationally groundless faith on the other?

No, I think we can at least try to be rational about such things and let ourselves be moved to some extent by indirect or partial scientific evidence or plausibility considerations.

For example, we can study artificial intelligence. How easy or difficult is it to create artificial consciousness in simulated environments, at least in our universe? If it’s easy, that might tend to nudge up the reasonableness of the Simulation Hypothesis. If it’s hard, that might nudge it down.

Or we can look for direct evidence that we are in a designed computer program. For example, we can look for software glitches or programming notes from the designer. So far, this hasn’t panned out.

Here’s my bigger point. We all start with framework assumptions. Science starts with framework assumptions. Those assumptions might be reasonable, but they can also be questioned. And one place where cosmology intersects with philosophy and the other humanities and sciences is in trying to assess those framework assumptions, rather than simply leaving them unexamined or taking them on faith.

[image source]

Related:

"1% Skepticism" (Nous, forthcoming)

"Reinstalling Eden" (with R. Scott Bakker; Nature, 2013)

Tuesday, November 29, 2016

How Everything You Do Might Have Huge Cosmic Significance

Infinitude is a strange and wonderful thing. It transforms the ridiculously improbable into the inevitable.

Now hang on to your hat and glasses. Today's line of reasoning is going to make mere Boltzmann continuants seem boring and mundane.

First, let's suppose that the universe is infinite. This is widely viewed as plausible (see Brian Greene and Max Tegmark).

Second, let's suppose that the Copernican Principle holds: We are not in any special position in the universe. This principle is also widely accepted.

Third, let's assume cosmic diversity: We aren't stuck in an infinitely looping variant of a mere (proper) subset of the possibilities. Across infinite spacetime, there's enough variety to run through every finitely specifiable possibility infinitely often.

These assumptions are somewhat orthodox. To get my argument going, we also need a few assumptions that are less orthodox, but I hope not wildly implausible.

Fourth, let's assume that complexity scales up infinitely. In other words, as you zoom out on the infinite cosmos, you don't find that things eventually look simpler as the scale of measurement gets bigger.

Fifth, let's assume that local actions on Earth have chaotic effects of an arbitrarily large magnitude. You know the Butterfly Effect from chaos theory -- the idea that a small perturbation in a complex, "chaotic" system can make a large-scale difference in the later evolution of the system. A butterfly flapping its wings in China could cause the weather in the U.S. weeks later to be different than it would have been if the butterfly hadn't flapped its wings. Small perturbations amplify. This fifth assumption is that there are cosmic-scale butterfly effects: far-distant, arbitrarily large future events that arise with chaotic sensitivity to events on Earth. Maybe new Big Bangs are triggered, or maybe (as envisioned by Boltzmann) given infinite time, arbitrarily large systems will emerge by chance from high-entropy "heat death" states, and however these Big Bangs or Boltzmannian eruptions arise, they are chaotically sensitive to initial conditions -- including the downstream effects of light reflected from Earth's surface.
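If you want to see how fast small perturbations amplify in a chaotic system, here's a toy illustration in Python, using the logistic map -- a standard textbook example of chaos. (The map and the numbers here are purely illustrative; nothing in this snippet models anything cosmological.)

```python
# The logistic map x -> r*x*(1-x) with r = 4 is chaotic: nearby starting
# points diverge exponentially. Perturb the initial condition by one part
# in a billion and the gap grows to order 1 within a few dozen steps.
def trajectory(x0, steps, r=4.0):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.400000000, 50)
b = trajectory(0.400000001, 50)  # the "butterfly": a 1e-9 perturbation
for t in (0, 10, 20, 30, 40):
    print(t, abs(a[t] - b[t]))   # the difference balloons from 1e-9 toward order 1
```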

Okay, that's a big assumption to swallow. But I don't think it's absurd. Let's just see where it takes us.

Sixth, given the right kind of complexity, evolutionary processes will transpire that favor intelligence. We would not expect such evolutionary processes at most spatiotemporal scales. However, given that complexity scales up infinitely (our fourth assumption) we should expect that at some finite proportion of spatiotemporal scales there are complex systems structured in a way that enables the evolution of intelligence.

From all this it seems to follow that what happens here on Earth -- including the specific choices you make, chaotically amplified as you flap your wings -- can have effects on a cosmic scale that influence the cognition of very large minds.

(Let me be clear that I mean very large minds. I don't mean galaxy-sized minds or visible-universe-sized minds. Galaxy-sized and visible-universe-sized structures in our region don't seem to be of the right sort to support the evolution of intelligence at those scales. I mean way, way up. We have infinitude to play with, after all. And presumably way, way slow if the speed of light is a constraint. Also, I am assuming that time and causation make sense at arbitrarily large scales, but maybe that can be weakened if necessary to something like contingency.)

Now at such scales anything little old you personally does would very likely be experienced as chance. Suppose for example that a cosmic mind utilizes the inflation of Big Bangs. Even if your butterfly effects cause a future Big Bang to happen this way rather than that way, probably a mind at that scale wouldn't have evolved to notice tiny-scale causes like you.

Far-fetched. Cool, perhaps, depending on your taste in cool. Maybe not quite cosmic significance, though, if your decisions only feed a pseudo-random mega-process whose outcome has no meaningful relationship to the content of your decisions.

But we do have infinitude to play with, so we can add one more twist.

Here it is: If the odds of influencing the behavior of an arbitrarily large intelligent system are finite, and if we're letting ourselves scale up arbitrarily high, then (granting all the rest of the argument) your decisions will affect the behavior of an infinite number of huge, intelligent systems. Among them there will be some -- a tiny but finite proportion! -- such that the following counterfactual is true: If you hadn't made that upbeat, life-affirming choice you in fact just made, that huge, intelligent system would have decided that life wasn't worth living. But fortunately, partly as a result of that thing you just did, that giant intelligence -- let's call it Emily -- will discover happiness and learn to celebrate its existence. Emily might not know about you. Emily might think it's random or find some other aspect of the causal chain to point toward. But still, if you hadn't done that thing, Emily's life would have been much worse.

So, whew! I hope it won't seem presumptuous of me to thank you on Emily's behalf.

[image source]

Sunday, November 27, 2016

The Odds of Getting Three Consecutive Wars in the Card Game

What better way to spend the Sunday after Thanksgiving than playing card games with your family and then arguing about the odds?

As pictured, my daughter and I just got three consecutive "wars" in the card game of war. (I lost with a 3 at the end!)

What are the odds of that?

Well, the odds of getting just one war are 3/51, right? Here's why. It doesn't matter whether my or my daughter's card is turned first. That card can be anything. The second card needs to match it. With the first card out of the deck, 51 cards remain. Three of them match the first-turned card. So 3/51 = .058824 = about a 5.9% chance.

Then you each play three face down "soldier" cards. Those could be any cards, and we don't know anything about them, so they can be ignored for purposes of calculation. What's relevant are the next upturned cards, the "generals". Here there are two possibilities. First possibility: The first general is the same value as the original war cards. Since there are 50 unplayed cards and two that match the original two war cards, the odds of that are 2/50 = .040000 = 4.0%. The other possibility is that the value of the first general differs from that of the war cards: 48/50 = .960000 = 96.0%.

(As I write this, my son is sleeping late and my wife and daughter are playing with Musical.ly -- other excellent ways to spend a lazy Sunday!)

In the first case, the odds of the second general matching are only one in 49 (.020408, about 2.0%), since three of the four cards of that value have already been played and there are 49 cards left in the deck (disregarding the soldiers). In the second case, the odds are three in 49 (.061224, about 6.1%).

So the odds of two wars consecutively are: .058824 * .04 * .020408 (first war, followed by matching generals, i.e. all four up cards the same) + .058824 * .96 * .061224 (first war, followed by a different pair of matching generals) = .000048 + .003457 = .003505. In other words, there's about a 0.35% chance, or about a 1 in 285 chance, of two consecutive wars.
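(For those who'd rather let the computer keep the fractions exact, here's a quick sanity check in Python -- a minimal sketch, with variable names of my own choosing -- that reproduces the two-war figure before we move on to the third war.)

```python
from fractions import Fraction

p_war = Fraction(3, 51)                    # second upturned card matches the first

# The two ways the generals can produce a second war:
same = Fraction(2, 50) * Fraction(1, 49)   # both generals match the original war cards
diff = Fraction(48, 50) * Fraction(3, 49)  # generals are a new value and match each other

p_two_wars = p_war * (same + diff)
print(p_two_wars, float(p_two_wars))       # 73/20825 ≈ 0.003505, about 1 in 285
```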

If the second war had generals that matched the original war cards, then there's only one way for the third war to happen. Player one draws any new general. The odds of player two's new general matching are 3/47 (.063830).

If the second war had generals that did not match the original war cards, then there are two possibilities.

First possibility: The first new general is the same value as one of the original war cards or previous generals. There's a 4 in 48 (.083333) chance of that happening (two remaining cards of each of those two values). Finally, there's a 1/47 (.021277) chance that the last general matches this one (last remaining card of that value).

Second possibility: The first new general is a different value from either the original war cards or the previous generals. The odds of that are 44/48 (.916667), followed by a 3/47 (.063830) chance of match.

Okay, now we can total up the possibilities. There are three relevantly different ways to get three consecutive wars.

A: First war, followed by second war with same values, followed by third war with different values: .058824 (first war) * .040000 (first general matches war cards) * .020408 (second general matches first general) * .063830 (odds of third war with fresh card values) = .000003 (0.0003% or about 1 in 330,000).

B: First war, followed by second war with different values, followed by third war with same values as one of the previous wars: .058824 (first war) * .960000 (first general doesn't match war cards) * .061224 (second general matches first general) * .083333 (first new general matches either war cards or previous generals) * .021277 (second new general matches first new general) = .000006 (0.0006% or about 1 in 160,000).

C: First war, followed by second and third wars, each with different values: .058824 (first war) * .960000 (first general doesn't match war cards) * .061224 (second general matches first general) * .916667 (first new general doesn't match either war cards or previous generals) * .063830 (second new general matches first new general) = .000202 (0.02% or about 1 in 5000).

Summing up these three paths: .000003 + .000006 + .000202 = .000211. In other words, the chance of three wars in a row is 0.0211% or 1 in 4739.
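(If you don't trust my bookkeeping, a brute-force check is easy to run. Here's a minimal Python sketch -- the function names are mine -- that deals a fresh deck over and over and counts how often the opening battle produces three consecutive wars. A million deals takes a minute or so in pure Python and lands close to the 1-in-4739 figure.)

```python
import random

RANKS = list(range(13)) * 4  # 52 cards; suits never matter for wars

def opening_wars(deck):
    """Count consecutive wars in the very first battle of a fresh deal."""
    a, b = deck[0::2], deck[1::2]  # deal 26 cards to each player
    wars, i = 0, 0                 # i indexes each player's next face-up card
    while i < len(a) and a[i] == b[i]:
        wars += 1
        i += 4                     # skip 3 face-down soldiers; flip the general
    return wars

def estimate(trials=1_000_000, seed=0):
    rng = random.Random(seed)
    deck, hits = RANKS[:], 0
    for _ in range(trials):
        rng.shuffle(deck)
        if opening_wars(deck) >= 3:
            hits += 1
    return hits / trials

print(estimate())  # ≈ 0.0002, i.e. roughly 1 in 4700
```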

Now for some leftover turkey.

-----------------------------------------------

As it happens we were playing the variant game Modern War -- which is much less tedious than the traditional card game of war! But since it was only the first campaign the odds are the same. (In later campaigns the odds of war increase, because smaller cards fall disproportionately out of the deck.)

Wednesday, November 23, 2016

The Moral Compass and the Liberal Ideal in Moral Education

Here are two very different approaches to moral education:

The outward-in approach. Inform the child what the rules are. Do not expect the child to like the rules or regard them as wise. Instead, enforce compliance through punishment and reward. Secondarily, explain the rules, with the hope that eventually the child will come to appreciate their wisdom, internalize them, and be willing to abide by them without threat of punishment.

The inward-out approach. When the child does something wrong, help the child see for herself what makes it wrong. Invite the child to reflect on what constitutes a good system of rules and what are good and bad ways to treat people, and collaborate in developing guidelines and ideals that make sense to the child. Trust that even young children can come to see the wisdom of moral guidelines and ideals. Punish only as a fallback when more collaborative approaches fail.

Though there need be no neat mapping, I conjecture that preference for the outward-in approach correlates with what we ordinarily regard as political conservatism and preference for the inward-out approach with what we ordinarily regard as political liberalism. The crucial difference between the two approaches is this: The outward-in approach trusts children's judgment less. On the outward-in approach, children should be taught to defer to established rules, even if those rules don't make sense to them. This resembles Burkean political conservatism among adults, which prioritizes respect for the functioning of our historically established traditions and institutions, mistrusting our current judgments about how those institutions might be improved or replaced.

In contrast, the liberal ideal in moral education depends on the thought that most or all people -- including most or all children -- have something like an inner moral compass, which can be relied on as at least a partial, imperfect guide toward what's morally good. If you take four-year-old Pooja aside after she has punched Lauren (names randomly chosen) and patiently ask her to explain herself and to think about the ethics of punching, you will get something sensible in reply. For the liberal ideal to work, it must be true that Pooja can be brought to understand the importance of treating others kindly and fairly. It must be true that after reflection, she will usually find that she wants to be kind and fair to others, even without outer reward.

This is a lot to expect from children. And yet I do think that most children, when approached patiently, can find their moral compass. In my experience watching parents and educators, it strikes me that when they are at their best -- not overloaded with stress or too many students -- they can successfully use the inward-out approach. Empirical psychology also suggests that the (imperfect, undeveloped) seeds of morality are present early in development and shared among primates.

It is, I think, foundational to the liberal conception of the human condition -- "liberal" in rejecting the top-down imposition of values and celebrating instead people's discovery of their own values -- that when people are given a chance to reflect, in conditions of peace, with broad access to relevant information, they will tend to find themselves revolted by evil and attracted to good. Hatred and evil wither under thoughtful critical examination. So we liberals must believe. Despite complexities, bumps, regressions, and contrary forces, reflection and broad exposure to facts and arguments will bend us toward freedom, egalitarianism, and respect.

If this is so, here's something you can always do: Invite people to think alongside you. Share the knowledge you have. If there is light and insight in your thinking, people will slowly walk toward it.

Related essay: Human Nature and Moral Education in Mencius, Xunzi, Hobbes, and Rousseau (History of Philosophy Quarterly, 2007)

[image source]