
MORALITIES OF EVERYDAY LIFE

BY YALE UNIVERSITY
Video 04 Reason vs. Emotions 15:00
Video Transcript

0:07
Okay, here's where we are so far.

0:10
We started off with a discussion of what morality is, gave some examples of it, and talked about
the scope of morality. Then we introduced the distinction between the philosophical
question of how we should be, what's the right way to live, and the psychological
question of our moral reasoning and moral action, how we actually do live, how we actually think.
We talked about philosophical theories, and we introduced a distinction between
consequentialism and deontology: roughly, the theory that all that matters is the consequences
of an action versus the theory that there are abstract principles that apply above
and beyond consequences.

0:52
Now, many experimental psychologists, developmental psychologists, and social
psychologists have been very excited at the idea of framing the psychological question in
terms of the philosophical question.

1:08
So, one way to think about people is that we're, sort of, commonsense moral philosophers.
We think about morality. We have a sense of morality. We have an unconscious theory,
perhaps, of what's right and wrong. And then the question is, what
kind of moral philosophers are we? There's been a lot of research asking the question: are
people, in their gut, consequentialists,

1:34
or are people deontological thinkers? Are they Kantians?
1:39
And one way to explore this, and this has launched dozens, hundreds of experiments, is in
terms of a famous thought experiment. This thought experiment is known as the Trolley
Problem. The Trolley Problem was developed in its initial form by the philosopher Philippa Foot
and then was expanded and developed by many other philosophers, in particular Judith
Jarvis Thomson. When I was a graduate student at MIT, she was a professor there and I
actually took one of her philosophy courses, which was just terrific, though I did drop it after
the second class. But that was me, not her. So here's the trolley problem.

2:21
There is a runaway trolley, a runaway train. And if it is left to do what it's going to do,
it will go on the track to the left, and it will kill five people. These are workers. They're
banging away at the track. Maybe they have headphones on. They can't see the train coming.
You are the person at the switch, and these people on the tracks are too far away to warn.
If you do nothing, they will die. The train will come and it will run them over.

2:51
You, however, are standing next to a switch.

2:54
If you pull that switch, the train will be diverted and will go on the other track. Unfortunately,
there is one other worker there, also too far away to warn, and that worker will be killed.

3:07
And the question, a question that I would bet has been asked of over
100,000 people in various experiments and surveys and online studies, is: should you throw
the switch? Not what would you do (they never ask people to actually do it), but
should you throw the switch? What's the right thing to do?

3:25
So, we call that the switch case. Think about your answer to that. And now compare it to
another case. This is sometimes called the Footbridge Case.

3:35
Same situation to some extent. There's a runaway train. Unless something happens, that train
will proceed and kill five people, and there's not a thing you can do to warn them.
3:45
But you are standing on a bridge, and next to you is a man.

3:49
And if you push him, he will plummet down, land on the tracks, and
stop the train.

3:59
Sometimes this is called the Fat Man case, because people always ask, why
shouldn't you jump yourself? So the example is set up such that he's a big enough guy that he'll stop
the train, and you're not big enough to stop the train.

4:13
So now your question is, should you push the man? Should you let the train proceed and kill five
people, or should you push the man, which would kill him but save those five? As I said,
countless people have been asked these questions. These questions have been asked of
young children; they have been asked of people from different cultures around the world. They've
been asked of criminal psychopaths.

4:41
They've been asked of people with autism, with schizophrenia, with various forms of brain
damage.

4:49
And when you look at the data for normal people, there's a striking pattern. And I would bet
that this comes up in your intuitions, too. Most people think it's okay to throw the switch in
the switch case. Most people think, maybe not 100%, that throwing the
switch is the right thing to do. You save five and one dies.

5:10
But most people think it's wrong to push the man.

5:14
Most people think you shouldn't do that.
5:18
So, what does this tell us? Well, the people that do this research will tell you that one
thing it suggests is that we're not consequentialists, because notice these cases are kind of
similar.

5:30
The consequences are the same. If you do nothing, five people will die. If you act, one person
will die.

5:42
So, if all we were doing was making our decision based on the consequences of our acts, these
two cases would elicit the identical intuition. But they don't. There seems to be a critical
difference between them. And so, what could this difference be? Well, there are different
theories about this.

6:02
One theory is that we have in our brains, unconsciously usually, a rather subtle philosophical
principle and this principle is sometimes called the Doctrine of Double Effect.

6:16
And it was developed, among others, by Thomas Aquinas. The Doctrine of Double
Effect says there's a distinction between doing something bad, like killing or harming
somebody, as an unintended consequence of causing a greater good to happen, which could be
the right thing to do, versus doing something bad, like killing or harming somebody, in order
to bring about a greater good, which you shouldn't do.

6:47
The consequences are identical.

6:50
But the difference is that in one case, the bad thing is a regrettable byproduct.

6:57
I'm sorry, in the good case, the bad thing is a regrettable byproduct, and that's
okay. In the bad case, it's an instrument through which you act.
7:06
So, this often comes up in cases of just war, or in philosophical and
political debates over what's permissible in wartime. And people who talk about the
Doctrine of Double Effect say, consider two examples.

7:22
In one example there's a munitions factory, a weapons factory.

7:27
And you have to decide whether or not to bomb it. If you bomb it, there's good reason to
believe it would defeat the enemy, the war would come to an end, and millions of lives
would be saved.

7:39
But if you bomb it, there are people who work in the factory who are innocent, and they will
die.

7:44
Should you bomb it? Well, that's a hard question. But many people would argue that if the benefit
is great enough, having some innocent people die as a byproduct is permissible.

7:56
Now compare it to another case. There are innocent people, and if you blow them up, if you kill
them, that will terrorize the enemy population. They'll know you're serious, and then the war
will end, saving millions of lives and so on. In fact, you can construct these cases so that they have
identical consequences, so that the same number of people die in the same way.

8:20
But still there's an intuition that these cases are different. And the Doctrine of Double Effect says
the difference is this: in the case where you may do it, the innocents are collateral damage, but you
don't want them to die. They're just a regrettable byproduct. But here, you're killing people in
order to bring about an effect, and you shouldn't do that. And the connection to the trolley
problem is pretty clear.
8:46
This could explain, some people believe, why we think so differently about these two
cases. In the switch case, one person dies, but their death is an accident and their death
accomplishes nothing. If they were to leave the track, you'd be happy. They don't have to die, and
that's wonderful.

9:04
In the bridge case, the person's death is necessary.

9:08
If they were to leave the bridge, the plan doesn't work anymore; the five will die. You need them.

9:13
And this difference, people argue, explains our intuition.

9:19
Why it's permissible in the switch case, but not in the bridge case.

9:24
I will add, there are all sorts of philosophical dilemmas that are so fanciful and weird and crazy
that they would never happen in the real world, but this actually is a case where these things
happen in the real world. On 9/11, the U.S. government had to make a decision over whether
or not to shoot down a plane that was headed for the Pentagon, and I think the German
government had a similar case a few years ago, where they debated
whether to do this. And this is a trolley problem: is it permissible to kill some people in order
to save a greater number of people? Or imagine you're driving your car home on
a cold winter's day, on ice. You're driving a bit too fast, you shoot through a stop sign,
and you're headed right towards five people. They're standing in front of you, [NOISE]
and if you do nothing your car will go into them and kill them.

10:21
But you can swerve; you could swerve and your car will spin, but there's one person over
there. You are living in a trolley problem. And you might think, well, those things are so rare
that we never have to deal with them. But people have pointed out that solving the trolley problem,
figuring out the right thing to do in these cases, is a matter of real-world relevance. For one
thing, as Gary Marcus writes in an interesting discussion, people are now building self-driving
cars. Google is building self-driving cars. And self-driving cars may one day have to deal with
variants of the trolley problem, and solve them.

10:58
So, this way of looking at things, where we take these abstract philosophical theories, use
the philosophical examples, test people on them, and then try to explain people's
intuitions based on these philosophical approaches, is one way to do moral psychology.
And it's been a very influential approach. It treats people as moral philosophers and uses
abstract philosophical theories as a way to make sense of our deepest intuitions and
judgments. But it's not the only way to do moral psychology. In fact, I think it's fair to
say that in the last decade or so there's been a backlash. In this backlash, people have said
this is entirely the wrong way to study moral psychology. We should not think of people as
these philosophical creatures applying abstract rules; rather, morality is driven to a
large extent by gut feelings.

12:00
And this is a view I want to talk about in the lectures that follow. It was actually
nicely summarized, I think, by the cultural commentator and public intellectual David
Brooks, in a New York Times article of a few years ago. The article was titled "The End of
Philosophy." Brooks writes this: Think of what happens when you put a new food into your
mouth. You don't have to decide if it's disgusting. You just know. You don't have to decide if a
landscape is beautiful. You just know. Moral judgments are like that. They are rapid intuitive
decisions and involve the emotional-processing parts of the brain. Most of us make snap
moral judgments about what feels fair or not, or what feels good or not. We start doing this
when we are babies, before we have language, and even as adults, we often can't
explain to ourselves why something feels wrong.

12:51
And a lot of what he talks about, about the brain and about babies, we're going to talk about
in this course, and we'll talk about the evidence underlying his claims, which I think are for the most
part correct.

13:03
From this perspective, we're not these rational moral philosophers at all. A lot of our
morality is driven by our gut.
13:12
And in fact, David Brooks then wrote a best selling book, The Social Animal, that expanded
and elaborated on this theory and it's one I highly recommend.

13:21
In his work, David Brooks drew, to a large extent, on the work of the psychologist Jonathan
Haidt. Jonathan Haidt wrote, many years ago, a very influential article in Psychological
Review, one of our top journals, called "The Emotional Dog and Its Rational Tail: A Social
Intuitionist Approach to Moral Judgment." And you can see his argument from the title. The
idea is that if we think reason is important, we are mistaking the tail for the dog. It's the
emotion that counts.

13:51
Haidt writes, moral reasoning does not cause moral judgment; rather, moral reasoning is
usually a post hoc construction generated after a judgment has been reached. And Haidt has
recently written a book summarizing his view, called The Righteous Mind. This is the
American cover, but I think the British cover better conveys the audacious nature of his
claim.

14:14
In all of this work, scholars like Brooks and Haidt, and the many psychologists and
philosophers who endorse this emotional approach to morality, are drawing upon an important
philosophical tradition, one nicely summarized by David Hume. David Hume,
talking about moral reasoning, says: look, our moral decisions, our moral understanding, is not
driven by reason. Rather, he says, reason is, and ought only to be, the slave of the passions.

14:50
Now, so far I have not given evidence for this; I've just made the abstract argument for it. What I
want to do in the next three lectures is give three case studies of how our gut feelings, how
our emotional responses, influence, in a profound way, our sense of right and wrong. [MUSIC]

MORALITIES OF EVERYDAY LIFE


BY YALE UNIVERSITY
Video 04 Reason vs. Emotions 15:00
Video Transcript

0:07
Okay, here's where we are so far.
Ok, aqui é onde estamos tão longe.
0:10
We started off with a discussion of what morality is, gave some examples of it, talked about
the scope of morality. And then we introduced the distinction between the philosophical
question of how we should be, what, what's the right way to live, and the psychological
question of our moral reasoning and moral action, how we actually do live, how we do think.
We, we talked about philosophical theories, and we introduced a distinction between
consequentialism and deontology: roughly the theory that all that matters is the consequences
of an action versus the theory that there are sort of these abstract principles that apply above
and beyond consequences.
Começamos com uma discussão sobre o que é moralidade, apresentamos alguns exemplos,
falamos sobre o escopo da moralidade. E então nós introduzimos a distinção entre a questão
filosófica de como deveríamos ser, o que, o modo certo de viver, e a questão psicológica do
nosso raciocínio moral e ação moral, como realmente vivemos, como pensamos. Nós, falamos
sobre teorias filosóficas, e introduzimos uma distinção entre consequencialismo e
deontologia: aproximadamente a teoria de que tudo o que importa são as consequências
de uma ação versus a teoria de que existem alguns desses princípios abstratos que se aplicam
acima e além das conseqüências.
0:52
Now, many experimental psychologists and developmental psychologists and social
psychologists have been very excited. At the idea of framing the psychological question in
terms of they philosophical question.
Agora, muitos psicólogos experimentais e psicólogos do desenvolvimento e psicólogos
sociais estão muito entusiasmados. Na ideia de enquadrar a questão psicológica em termos de
sua questão filosófica.
1:08
So, one way to think about people is that we're, sort of, common sense moral philosophers.
We, we think about morality. We, we have a sense of morality. We have a, a unconscience,
perhaps, an unconscience theory of what's right and wrong. And then the question is, what
kind of moral philosophers are we? There's been a lot of research asking the question, are
people, in their gut, consequentialists
Então, uma maneira de pensar nas pessoas é que somos, de certa forma, filósofos morais do
senso comum. Nós pensamos em moralidade. Nós temos um senso de moralidade. Temos uma
inconsciência, talvez, uma teoria inconsciente do que é certo e errado. E então a questão é:
que tipo de filósofos morais somos nós? Tem havido muita pesquisa fazendo a pergunta, são
pessoas, em seu intestino, conseqüencialistas
1:34
or are people deontologic, deontological thinkers? Are they Kantians?
ou são pensadores deontológicos, deontológicos? Eles são kantianos?

1:39
And one way to explore this, and this has launched dozens, hundreds of experiments, is in
terms of a famous thought experiment. This thought experiment is known as the Trolly
Problem. The Trolly Problem was developed in its initial form by the philosopher Philipa Foot
and then was expanded and developed by many other philosophers and in particular Judith
Jarvis Thompson. When I was a graduate student at MIT, she was a professor there and I
actually took one of her Philosophy courses. Which was just terrific though I did drop it after
the second class. But that was me and not her. So here's the trolly problem.
E uma maneira de explorar isso, e isso lançou dezenas, centenas de experimentos, é em
termos de um experimento de pensamento famoso. Este experimento mental é conhecido
como o Problema Trolly. O Trolly Problem foi desenvolvido em sua forma inicial pela
filósofa Philipa Foot e depois foi expandido e desenvolvido por muitos outros filósofos e em
particular por Judith Jarvis Thompson. Quando eu era estudante de pós-graduação no MIT, ela
era professora lá e eu realmente fiz um de seus cursos de Filosofia. O que foi ótimo, embora
eu tenha desistido depois da segunda aula. Mas isso era eu e não ela. Então aqui está o
problema do trole.
2:21
There is a run away trolly, a run away train. And if it is left on its, to do what it's going to do,
it will go on the track to the left. And it will kill five people. These are workers. They're,
they're banging on it. Maybe they have headphones on. They can't see the train coming. They
are too, you are the person at the switch. These people on the tracks are too far away to warn.
If you do nothing, they will die. The train will come and it will run them over.
Há um trenó fugido, um trem fugido. E se for deixado no seu, para fazer o que vai fazer, ele
vai na pista para a esquerda. E isso matará cinco pessoas. Estes são trabalhadores. Eles estão
batendo nele. Talvez eles tenham fones de ouvido. Eles não podem ver o trem chegando. Eles
também são, você é a pessoa no interruptor. Essas pessoas nos trilhos estão longe demais para
avisar. Se você não fizer nada, eles vão morrer. O trem virá e vai atropelá-los.
2:51
You, however, are standing next to a switch.
Você, no entanto, está de pé ao lado de um interruptor.
2:54
If you pull that switch, the train will be diverted and will go on the other track. Unfortunately,
there is one other worker there, also too far away to warn and that worker will be killed.
Se você puxar o interruptor, o trem será desviado e irá para a outra pista. Infelizmente, há
outro trabalhador lá, também longe demais para avisar e esse trabalhador será morto.
3:07
And the question, this is a question that has been asked, I would bet has been asked to over
100,000 people in various experiments and surveys and online studies is, should you throw
the switch? Should you, not what would you do, not, they don't ask people ever to do it, but
should you throw the switch? What's the right thing to do?
E a pergunta, esta é uma pergunta que foi feita, eu aposto que foi pedido a mais de 100.000
pessoas em vários experimentos e pesquisas e estudos on-line é, você deve jogar o
interruptor? Se você não, o que você faria, não, eles não pedem às pessoas que façam isso,
mas você deveria jogar o interruptor? Qual é a coisa certa a fazer?
3:25
So, we call that the switch case. Think about your answer to that. And now compare it to
another case. This is sometimes called the Footbridge Case.
Então, chamamos isso de switch case. Pense na sua resposta para isso. E agora compare com
outro caso. Às vezes, isso é chamado de caso da passarela.
3:35
Same situation to some extent. There's a runaway train. Unless something happens, that train
will proceed and kill five people, and there's not a thing you can do to warn them.
Mesma situação até certo ponto. Há um trem desgovernado. A menos que algo aconteça, esse
trem irá prosseguir e matar cinco pessoas, e não há nada que você possa fazer para avisá-las.
3:45
But you are standing on a bridge, and next to you is a man.
Mas você está em pé em uma ponte, e ao seu lado é um homem.
3:49
And he's, and, and, and, and if you push him, he will plummet down, land on the tracks, and
stop the train.
E ele, e, e, e, e se você o empurrar, ele irá despencar, cair nos trilhos e parar o trem.
3:59
Sometimes this is called a fat man case. Because the idea is that people always ask why
shouldn't you jump yourself? So, the example is such that he's a big enough guy that he'll stop
the train. You're not big enough to stop the train.
Às vezes isso é chamado de caso do homem gordo. Porque a ideia é que as pessoas sempre
perguntam por que você não deveria pular sozinho? Então, o exemplo é que ele é um cara
grande o suficiente para parar o trem. Você não é grande o suficiente para parar o trem.
4:13
So, now your question is should you push the man? Should you let the train proceed, kill five
people or should you push the man, which would kill him but save those five. As I said,
countless people have been asked these questions. If these questions have been asked, for
young children, they have been asked people from different cultures in the world. They've
been asked of criminal psychopaths.
Então, agora sua pergunta é se você empurrar o homem? Se você deixar o trem prosseguir,
mate cinco pessoas ou se você empurrar o homem, o que o mataria, mas salvaria esses cinco.
Como eu disse, inúmeras pessoas fizeram essas perguntas. Se essas perguntas foram feitas,
para crianças pequenas, foram perguntadas a pessoas de diferentes culturas do mundo. Eles
foram perguntados sobre psicopatas criminosos.
4:41
They've been asked of people with autism, with schizophrenia, with various forms of brain
damage.
Eles foram questionados sobre pessoas com autismo, com esquizofrenia, com várias formas
de danos cerebrais.
4:49
And when you look at the data for normal people there's a striking pattern. And I would bet
that this comes up in your intuitions, too. Most people think it's okay to throw the switch for
the switch case. Most people think that, that maybe, you know, it's not 100% you throw the
switch it's the right thing to do. You save five and one die.
E quando você olha os dados para pessoas normais, há um padrão impressionante. E eu
apostaria que isso surge em suas intuições também. A maioria das pessoas acha que não há
problema em acionar a chave do switch case. A maioria das pessoas pensa que, talvez você
não seja 100%, você é o correto. Você economiza cinco e um morre.
5:10
But most people think it's wrong to push the man.
Mas a maioria das pessoas pensa que é errado empurrar o homem.
5:14
Most people think you shouldn't do that.
A maioria das pessoas acha que você não deveria fazer isso.

5:18
So, what does this tell us? Well, the people that do this research will, will tell you that, one
thing it suggests is that we're not consequentialists. because notice these cases are, kind of
similar.
Então, o que isso nos diz? Bem, as pessoas que fizerem essa pesquisa, dirão, uma coisa que
sugere é que não somos consequencialistas. porque note que esses casos são semelhantes.
5:30
The consequences are the same. If you do nothing, five people will die. If you act, one person
will die.
As conseqüências são as mesmas. Se você não fizer nada, cinco pessoas morrerão. Se você
agir, uma pessoa vai morrer.
5:42
So, if all we were doing was making our decision based on the consequences of our acts, these
two cases would elicit the identical intuition. But they don’t. There seems to be a critical
difference between them. And so, what could this difference be? Well, there’s different
theories of this.
Então, se tudo o que estávamos fazendo era tomar nossa decisão com base nas conseqüências
de nossos atos,
dois casos provocariam a intuição idêntica. Mas eles não. Parece haver uma diferença crítica
entre eles. E então, o que poderia ser essa diferença? Bem, existem teorias diferentes disso.
6:02
One theory is that we have in our brains, unconsciously usually, a rather subtle philosophical
principle and this principle is sometimes called the Doctrine of Double Effect.
Uma teoria é que temos em nossos cérebros, inconscientemente geralmente, um princípio
filosófico bastante sutil e esse princípio é às vezes chamado de Doutrina do Duplo Efeito.
6:16
And it was it was developed, among others, by, Thomas Aquinas. And the Doctrine of Double
Effect says there's a distinction between doing something bad, like killing or harming
somebody as an unintended consequence of causing greater good to happen, and that could be
the right thing to do, versus doing something bad, like killing or harming somebody in order
to bring about a greater good, and that you shouldn't do.
E foi desenvolvido, entre outros, por Thomas Aquinas. E a Doutrina do Duplo Efeito diz que
há uma distinção entre fazer algo ruim, como matar ou ferir alguém como uma consequência
não intencional de causar um bem maior, e isso poderia ser a coisa certa a fazer, versus fazer
algo ruim, como matar ou ferir. alguém para trazer um bem maior, e que você não deveria
fazer.
6:47
The consequences are identical.
As conseqüências são idênticas.
6:50
But the difference is that in one case, the bad thing is a regrettable byproduct.
Mas a diferença é que, em um caso, o ruim é um subproduto lamentável.
6:57
I'm sorry, in the good case, the bad thing is a regrettable byproduct. That's what you, that's
okay. Well, in the bad case, it's an instrument through which you act.
Me desculpe, no bom caso, o ruim é um subproduto lamentável. Isso é o que você está bem.
Bem, no pior dos casos, é um instrumento através do qual você age.
7:06
So, this often comes up, in case, cases of just war. Or, or, or in philosophical debates and
political debates over, over what's permissible in war time. And people who talk about a
Doctrine of Double Effects say, consider two examples.
Então, isso geralmente surge, no caso, casos de guerra justa. Ou, ou, em debates filosóficos e
debates políticos, sobre o que é permissível no tempo de guerra. E as pessoas que falam sobre
uma Doutrina de Efeitos Duplos dizem, considerem dois exemplos.
7:22
In one example there's a munition's factory, weapon's factory.
Em um exemplo, há uma fábrica de munição, a fábrica de armas.
7:27
And you have to decide whether or not to bomb it. And if you bomb it, there's good reason to
believe it would, defeat the enemy and the war would come to an end and millions of lives
would be saved.
E você tem que decidir se deve ou não bombardeá-lo. E se você bombardeá-lo, há boas razões
para acreditar que isso aconteceria, derrotaria o inimigo e a guerra chegaria ao fim e milhões
de vidas seriam salvas.
7:39
But if you bomb it, there's people who work in the factory who are innocent, and they will
die.
Mas se você bombardear, há pessoas que trabalham na fábrica que são inocentes, e eles vão
morrer.
7:44
Should you bomb it? Well, that's a hard question. But many people would argue if the benefit
is great enough, having some innocent people die as a byproduct is permissible.
Você deveria bombardeá-lo? Bem, essa é uma pergunta difícil. Mas muitas pessoas
argumentariam se o benefício é grande o suficiente, tendo algumas pessoas inocentes
morrendo como um subproduto é permissível.
7:56
Now compare to another case, there innocent people and if you blow them up, if you kill
them, that will terrorize the enemy population. They'll know you're serious and then the war
will end, saving millions of lives and so on. In fact, you can make these case so that they have
identical consequences so that the same number of people die in the same way.
Agora compare com outro caso, há pessoas inocentes e se você as explodir, se você as matar,
isso irá aterrorizar a população inimiga. Eles saberão que você está falando sério e então a
guerra terminará, salvando milhões de vidas e assim por diante. De fato, você pode fazer esses
casos para que eles tenham consequências idênticas, de modo que o mesmo número de
pessoas morra da mesma maneira.
8:20
But still there's intuition that these cases are different. And the Doctrine of Double Effect says,
the difference is, in the case where you may do it, the innocents are collateral damage, but you
don't want them to die. They're just a regrettable byproduct. But here, you're killing people in
order to bring about an effect and you shouldn't do that. And the connection to trolley
problem, is pretty clear.
Mas ainda há intuição de que esses casos são diferentes. E a Doutrina do Duplo Efeito diz, a
diferença é que, no caso em que você pode fazer isso, os inocentes são danos colaterais, mas
você não quer que eles morram. Eles são apenas um subproduto lamentável. Mas aqui, você
está matando pessoas para causar um efeito e você não deveria fazer isso. E a conexão com o
problema do carrinho é bem clara.

8:46
this, this could explain, some people believe, why we think so differently about these two
cases. In the switch case, one person dies but their death is an accident and their death does
nothing. If they were to leave the track, you'd be happy. They don't have to die and it's
wonderful.
isso, isso poderia explicar, algumas pessoas acreditam, por que pensamos de forma diferente
sobre esses dois casos. No caso da chave, uma pessoa morre, mas sua morte é um acidente e
sua morte não faz nada. Se eles saíssem da pista, você seria feliz. Eles não têm que morrer e é
maravilhoso.
9:04
In the bridge case, the person's death is necessary.
No caso da ponte, a morte da pessoa é necessária.
9:08
If they were to leave the bridge, it doesn't work any more, the five will die. You need them.
Se eles deixassem a ponte, não funcionaria mais, os cinco morreriam. Você precisa deles.
9:13
And this difference, people argue, explains our intuition.
E essa diferença, argumentam as pessoas, explica nossa intuição.
9:19
Unwise permissible in a switch case, but not in a bridge case.
Imprudente permissível em um caso de switch, mas não em um caso de bridge.
9:24
I will add, there are all sorts of philosophical dilemmas that are so fanciful and weird and crazy
that they would never happen in the real world, but this actually is a case where these things
happen in the real world. On 9/11, the U.S. government had to make a decision over whether
or not to shoot down a plane that was headed for the Pentagon, and I think the German
government had a similar case a few years ago, where they debated
whether to do this. And this is a trolley problem. Is it permissible to kill some people in order
to save a greater number of people? Or imagine you're driving your car home on
a cold, icy winter's day. And you're driving a bit too fast, and you shoot through a stop sign,
and you're headed right towards five people. They're standing in front of you [NOISE]
and if you do nothing your car will go into them and kill them.

10:21
But you can swerve, you could swerve and your car will spin, but there's one person over
there. You are living in a trolley problem. And you might think, well, those things are so rare
we never have to deal with them. But people have pointed out that solving the trolley problem,
figuring out the right thing to do in these cases, is a matter of real-world relevance. For one
thing, as Gary Marcus writes in an interesting discussion, people are now building self-driving
cars. Google is building self-driving cars. And self-driving cars may one day have to deal with
variants of the trolley problem, and solve them.
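To make the contrast between the two philosophical theories concrete, here is a toy sketch, not anything from the lecture or from any real self-driving system: the function names and casualty counts are purely illustrative assumptions. It contrasts a consequentialist rule, which simply minimizes deaths, with a deontological rule, which refuses to actively redirect harm onto a bystander regardless of the count.

```python
# Illustrative sketch only: contrasts the two moral theories from the
# lecture as decision rules for a trolley-style driving dilemma.
# Function names and numbers are hypothetical, not a real system's API.

def consequentialist_choice(deaths_if_stay: int, deaths_if_swerve: int) -> str:
    """Consequentialism: pick whichever action minimizes total deaths."""
    return "swerve" if deaths_if_swerve < deaths_if_stay else "stay"

def deontological_choice(deaths_if_stay: int, deaths_if_swerve: int) -> str:
    """Deontology (one reading): never actively redirect harm onto a
    bystander, regardless of the body count, so always stay the course."""
    return "stay"

# The icy-road example from the lecture: five people ahead, one to the side.
print(consequentialist_choice(deaths_if_stay=5, deaths_if_swerve=1))  # swerve
print(deontological_choice(deaths_if_stay=5, deaths_if_swerve=1))     # stay
```

The two rules agree whenever staying already minimizes harm; they come apart exactly in switch-style cases like this one, which is why such dilemmas are diagnostic of which kind of "common-sense moral philosopher" a person is.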
10:58
So, this way of looking at things, where we take these abstract philosophical theories, we use
the philosophical examples, we test people on them, and then we try to explain people's
intuitions based on these philosophical approaches, is one way to do moral psychology.
And it's been a very influential approach. It treats people as moral philosophers and uses
abstract philosophical theories as a way to make sense of our deepest intuitions and
judgments. But it's not the only way to do moral psychology. And in fact, I think it's fair to
say that in the last decade or so there's been a backlash. And in this backlash people have said
this is entirely the wrong way to study moral psychology. We should not think of people as
these philosophical creatures following these abstract rules; rather, morality is driven to a
large extent by gut feelings.
12:00
And this is a view I want to talk about in the lectures that follow. It was actually
nicely summarized, I think, by the cultural commentator and public intellectual David
Brooks, in a New York Times article of a few years ago. The article was titled The End of
Philosophy. So Brooks writes this: Think of what happens when you put a new food into your
mouth. You don't have to decide if it's disgusting. You just know. You don't have to decide if a
landscape is beautiful. You just know. Moral judgments are like that. They are rapid intuitive
decisions and involve the emotional-processing parts of the brain. Most of us make snap
moral judgments about what feels fair or not, or what feels good or not. We start doing this
when we are babies, before we have language, and even as adults we often can't
explain to ourselves why something feels wrong.
12:51
And a lot of what he talks about, about the brain and about babies, we're going to talk about
in this course, and talk about the evidence underlying his claims, which I think are for the most
part correct.
13:03
From this perspective, we're not these rational moral philosophers at all. A lot of our
morality is driven by our gut.
13:12
And in fact, David Brooks then wrote a best selling book, The Social Animal, that expanded
and elaborated on this theory and it's one I highly recommend.
13:21
In his work, David Brooks drew, to a large extent, on the work of the psychologist Jonathan
Haidt. And Jonathan Haidt wrote, many years ago, a very influential article in Psychological
Review, one of our top journals, called The Emotional Dog and Its Rational Tail: A Social
Intuitionist Approach to Moral Judgment. And you can see his argument from the title. The
idea is that if we think reason is important, we are mistaking the tail for the dog. It's the
emotion that counts.
13:51
Haidt writes: moral reasoning does not cause moral judgment; rather, moral reasoning is
usually a post hoc construction generated after a judgment has been reached. And Haidt has
recently written a book summarizing his view, called The Righteous Mind. This is the
American cover, but I think the British cover better conveys the audacious nature of this
claim.
14:14
In all of this work, scholars like Brooks and Haidt, and the many psychologists and
philosophers who endorse this emotional approach to morality, are drawing upon an important
philosophical tradition. And it's actually nicely summarized by David Hume. So David Hume,
talking about moral reasoning, says, look, our moral decisions, our moral understanding, is not
driven by reason. Rather, he says, reason is, and ought only to be, the slave of the passions.
14:50
Now, so far I have not given evidence for this. I've just made the abstract argument for it. What I
want to do in the next three lectures is give three case studies of how our gut feelings, how
our emotional responses, influence, in a profound way, our sense of right and wrong. [MUSIC]