
Information & Communications Technology Law

Vol. 18, No. 2, June 2009, 79–82

The challenge of hate speech online


Chris Reed*

School of Law, Queen Mary University of London, 67–69 Lincoln’s Inn Fields, London WC2A 3JB, UK

Real-world hate speech crimes are defined by reference to the special attributes of
the victim. A thought experiment about hate speech in virtual worlds indicates
that this approach to the criminalisation of hate speech may not be sustainable.
Keywords: hate speech; virtual worlds

The obvious and immediate challenge posed by the online spaces which have recently
come into existence is how to apply and enforce existing laws against hate speech.
Difficult questions such as where hate speech is ‘uttered’, and who is responsible for
its utterance, will need to be answered.
Beyond that lies a further challenge to the fundamental theoretical basis for the
criminalisation of hate speech. Digital technology has already led us to re-examine
concepts which previously needed no detailed analysis, such as copying or making a
signature. In just the same way, crimes involving hate speech make implicit
assumptions about matters such as personality and the attributes of victims,
assumptions which may not so readily be applicable in online worlds.
Most hate speech crimes exhibit two basic elements. They require the
defendant to have incited hatred, and in addition that incitement must be directed
against identifiable groups (or individuals belonging to such a group) which are
defined by an attribute which merits special protection. The latter is usually because
the incitement of hatred against such groups has historically led to their oppression.
It is important to note that the mere incitement of hatred against an individual or
a group is not a hate speech crime, though it may amount to harassment or some
kind of public order or communications offence.1 Only the incitement of hatred
against specially protected groups is an offence. These protected groups are normally
defined by one or more of four characteristics: race, colour, sexual orientation or
creed.
In the offline world these groups are quite easy to identify as having protected
status, largely because they have existed for a long period of time and have
achieved social recognition as meriting stronger protection than other groups. New
groups can arise very rapidly online, apparently meeting many of the law’s criteria
for protection from hate speech. How should we decide whether these groups can be
the subject of criminal hate speech?

*Email: chris.reed@qmul.ac.uk

ISSN 1360-0834 print/ISSN 1469-8404 online
DOI: 10.1080/13600830902812202
http://www.informaworld.com

The problem is best illustrated via a thought experiment. The scenario outlined
below is purely fictitious, but readers who are familiar with the new online spaces will
recognise it as being based on reality (perhaps virtuality would be a better word) and
reasonably plausible.
BattleVenture is an online role-playing game, with similarities to World of
Warcraft and EverQuest. As in both those games, there is a warfare element. Players
select an avatar from a wide range of available ‘species’ which includes humans, elves
and dragons. These avatars can be customised in appearance.
Initially players operated in mixed guilds to conduct warfare under the rules of
the game, but recently a tendency has developed for those with similar avatars to
band together and fight other groups. The four most vigorous groupings are:

• elves, who adopt the standard appearance from fantasy novels and films;
• dragons, who are also traditionalists;
• the human-shaped followers of the war god Qzwlch, who engage in rituals
which include the sacrifice of captured avatars of whatever form; and
• a group of human-shaped avatars whose players have chosen green as their
skin colour – these are nicknamed Kermits by their opponents.

Battle raids are usually preceded by bonding sessions in which the intended
opponents are reviled and insulted in terms which are similar to hate speech in the
physical world. Participants are encouraged to kill and maim their opponents
because of their allegedly shared characteristics – thus all dragons are scheming and
money-grabbing, all elves indulge in disgusting sexual practices (which are not
possible in the physical world because of the differences between human and elven
anatomy), Qzwlchist beliefs are a danger to the continued existence of the
BattleVenture world and the skin colour of Kermits is an unnatural abomination.2
Thus far we might legitimately leave this kind of speech, unpleasant though it
may be, to be dealt with in the context of the game. Participants have chosen
voluntarily to participate in a world whose rules permit conflict, and those rules can
if necessary be modified to restrict the range of permissible behaviour (as already
happens in World of Warcraft and EverQuest).3 We should note, however, that the
online activities of some inhabitants of virtual worlds become an integral part of
their lives,4 so that attacks and insults in the virtual world cause as much distress as
similar activities in the physical world. I have argued elsewhere5 that the time may
come when physical world criminal law will need to extend itself into these online
spaces, though it is still too early to do so.
Suppose, though, that we take our thought experiment a little further.
BattleVenture players begin to take their conflicts outside the boundaries of the
game world, and to insult and threaten rival players (as opposed to their avatars).
Friends who used to play as friendly rivals come to despise each other because one is
an elf and the other a Kermit. A dragon runs a blog via which he urges other dragons
to assault anyone seen wearing a Qzwlchist T-shirt. Fights break out at the annual
BattleVenture convention in Blackpool, and in the following months a rash of other
blogs appears, each denigrating and threatening rival groups in terms which
would amount to hate speech if directed against those of a physical world race or
religion.
These incitements to hatred and violence constitute very much the kind of
mischief that existing hate speech criminal laws are designed to control. Does this
mean that we should prosecute those authors under that legislation? To do so would
require a number of difficult conceptual problems to be solved.
First, there is the difficult question whether race, colour or sexuality are innate
and objective, or subjective and elective. We know the answer for religion, which (in
the UK at least) is a matter of personal choice, but in our example the victims of the
hate speech have chosen their race and sexual practices. Even more difficult, they
have chosen race (dragons and elves), sexuality (elven) and colour (green) which do
not exist among physical world humans. It might be thought that this would be a
fatal objection to applying existing laws, but the UK jurisprudence on crimes of
racial hatred has been careful not to limit the categories of race to innate and
objective groupings. Epithets in such broad terms as African,6 or even foreigner,7
have been held to be terms with a potential racial connotation. Those cases appear to
define terms of racial hatred by reference to the xenophobic intent of the speaker,
rather than any characteristic actually possessed by the victim. If this is a correct
analysis, hate speech against dragons or Kermits is equally capable of being racially
motivated. The same must surely be true in relation to sexuality.8
The second challenge from our thought experiment is to the nature of religious
belief. It seems likely that the Qzwlchists only hold their religious beliefs whilst
playing the game. Do religious believers need to believe all the time to receive
protection from hate speech? If not, the way is open for a defendant charged with
inciting religious hatred against a member of a physical world religious group to
argue that her victim was having doubts on the occasion of the incitement, or that
the victim is not a ‘true’ member of the religion because he has broken its dietary
laws, used contraception or swatted a fly. As with racial hatred, the only workable
test is the subjective intent of the defendant. The cry of ‘Kill the Christian’ is surely a
crime, even if the person referred to is a Buddhist. If so, ‘Kill the Qzwlchist’ is no
different.
Finally, we already recognise that humans have multiple aspects to their
identities. I am, amongst other things, a legal academic, a glider pilot and a ukulele player,
and which of these is currently the most salient aspect of my identity varies with
context. There is no contradiction with my ukulele playing if I take up the tuba, or
with my legal academic status if I also begin to write about literature. And yet we
have, until now, seen race, colour and religion as matters which are inevitably
singular. Our thought experiment indicates that this may have to change, now that
online worlds give us the opportunity to adopt virtual extensions to our identities
which are not constrained to match our physical world identities.
If our lives in virtual worlds become so deeply embedded as to be a fundamental
element of self, one long-term effect might even be to force a re-examination of the
fundamental justification for hate speech laws. The heart of the mischief addressed
by those laws is incitement to hatred of the outsider, rather than the race, colour or
creed of those singled out. These matters are aggravating factors not because there is
something inherently worth protecting about them, but rather because the history of
persecution of those groups gives an additional intensity to the incitement. Thus in
the UK, incitement of anti-Semitic hatred is surely worse than incitement of hatred of
Christians and deserves stronger criminal sanctions. This is not because Jews are
inherently more worthy of protection than Christians but because of the injuries
done to Jews in the past.
Encompassing within hate speech laws the potentially unlimited range of races,
colours and religions which virtual worlds make possible creates real conceptual
difficulties. An attempt to solve them by excluding victims’ virtual world identities
from the ambit of legal protection would throw into doubt the legal protections
given to physical world groups, because it would then be necessary to define those
groups more narrowly and in objective terms. We have already noted that religion is
necessarily a subjective matter, and even race is difficult to define objectively. It
appears now to be well-established that the genetic variation within any particular
racial group is no less than the variation between members of different groups,9 and
the human proclivity to base sexual attraction on considerations other than race
complicates matters further. As an example, is US President Obama black, white,
Hawaiian, Kenyan, Luo or even Irish? If he also chose to be an elf, would that be
conceptually more difficult for the law to encompass?
One consequence of the development of virtual worlds might thus be the
abandonment of aggravated hate speech offences. The real problem which the law
needs to address is incitement to xenophobic hatred, no matter what the nature of
the victim group. Race, colour, sexuality or creed would then be merely an
aggravating feature of the incitement, taken into account only in its punishment and
not as part of the definition of the crime.

Notes
1. See Walden, I. (2007). Computer crimes and digital investigations. Oxford: Oxford
University Press, 3.187–3.212.
2. Though see contra, ‘Bein’ Green’ (Raposo, 1970), Sesame Street.
3. World of Warcraft Terms of Use, Clauses 4C and 5, available at
http://www.worldofwarcraft.com/legal/eula.html; EverQuest II User Agreement,
clause 6(iv) & (v), available at http://help.station.sony.com
4. See Yee, N. (2006). The psychology of massively multi-user online role-playing games:
Motivations, emotional investment, relationships and problematic usage. In Schroeder, R.,
& Axelsson, A.-S. (Eds.), Avatars at work and play: Collaboration and interaction in shared
virtual environments (Chap. 9, pp. 187–207). Netherlands: Springer.
5. Reed, C. (November 2008). Why must you be mean to me? – Crime, punishment and
online personality. Queen Mary University of London School of Law Working Paper,
available at http://ssrn.com/abstract=1305125
6. R v. White [2001] EWCA Crim 216 (14 February 2001).
7. Director of Public Prosecutions v. M (A Minor) [2004] EWHC 1453 (Admin) (25 May
2004).
8. In English v. Thomas Sanderson Ltd [2008] EWCA Civ 1421, para. 38, Sedley LJ was
prepared to find sexual harassment based on an imaginary sexuality.
9. See e.g. Templeton, A. (2002). Out of Africa again and again. Nature, 416, 45.
