
2 December, 2015

United Nations, NY

Monitoring, Regulating and Limiting Hate Speech


Dr Andre Oboler
CEO, Online Hate Prevention Institute
@onlinehate | facebook.com/onlinehate
Andre Oboler, 2015

Point 1: Social Media & Search are special


Three separate hate speech problems:
Hate speech on the internet
Hate speech in social media
Hate speech found via search engines

One can allow freedom of expression on the Internet, while still denying hate speech access to the tools to go viral or to mislead.

Mainstream media

How do the news sites rank?

BBC Online: 58 at 1.795%
CNN: 70 at 1.478%
Huffington Post: 93 at 1.284%
The New York Times: 118 at 0.191%
Compare this to #2 Facebook at 42.981%, or #5 Wikipedia at 12.633%

Hate speech, technology, and regulation


Prof. Jeremy Waldron (New York University School of Law) on hate speech:
Undermines the public good of inclusiveness in society
Becomes embedded in the permanent visible fabric of society, and victims' assurance that there will be no need to face hostility, violence, discrimination, or exclusion by others in going about their daily life vanishes

Prof. Lawrence Lessig (Harvard Law School):
"unless we understand how cyberspace can embed, or displace, values from our constitutional tradition, we will lose control over those values. The law in cyberspace – code – will displace them"

Let's combine these ideas...

Point 2: The Fabric of Online Space


The Internet is a space whose fabric is speech. Hate
speech embeds itself in the very fabric of this space.
Some of these spaces are vital public spaces, others are
more private. In a world where space is made of speech,
when the public spaces are built of hate and become
harmful to some, it denies them access to what should
be a right for all.
The environment itself can become exclusionary. In this environment, a distinction between hate speech and hate acts is illusory.

Point 3: A technological accelerant for hate


The Internet, and particularly social media, is a technological accelerant for memes, including messages of hate and extremism.
Accelerant is a term usually used in firefighting: any substance that can accelerate the development of a fire. It's a fitting term.
A meme is a broader concept than the familiar internet meme of an image plus text. A meme is an idea, a unit of culture, which can spread like a virus and morph as it does. The concept was developed by Richard Dawkins in his book The Selfish Gene back in 1976. Racism, xenophobia, and antisemitism in particular are all memes.

The idea of a technological accelerant for memes can be amusing if the meme is Grumpy Cat, but downright scary if the meme is the sort of hate that has inspired genocides.
Just as the car accelerated movement, and new laws (i.e. road rules) had to be created in response, so too are some laws needed to halt, or at least slow down, the viral spread of hate online.

So if we need to monitor and remove hate, how do we do it?

Response 1: Reports on examples compiled by experts

Reports available online by theme: http://ohpi.org.au/

Response 2: Briefings on specific items of hate in SM


Briefings available online by theme: http://ohpi.org.au/

Expert work: Breakdown of 191 Examples


Security Threat / Threat to Public Safety: (42)
Cultural Threat (29)
Economic Threat (11)
Dehumanising or demonising Muslims (37)
Incitement & general threats (24)
Targeting Refugees (12)
Other Forms of Hate (36)

50 Facebook pages | 249 images | 191 excluding reposts


Access via: http://ohpi.org.au/anti-muslim-hate/

This doesn't scale...


YouTube
2,056,320 videos are uploaded each day

Facebook
350,000,000 images are uploaded each day

Even if only a small percentage of them are hate... that's still going to be a huge volume of content every day. And it's being seen by a huge audience.
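To put that scale in perspective, a back-of-envelope calculation (the upload figures are from the slides above; the 0.1% hate rate is an assumed figure for illustration only, not a number from this talk):

```python
# Back-of-envelope scale estimate: even a tiny hate-content rate
# yields a large daily volume. Upload figures are from the slides;
# the 0.1% rate is an assumed illustrative figure.
DAILY_UPLOADS = {
    "YouTube videos": 2_056_320,
    "Facebook images": 350_000_000,
}
ASSUMED_HATE_RATE = 0.001  # 0.1%, purely illustrative

for platform, uploads in DAILY_UPLOADS.items():
    print(f"{platform}: ~{uploads * ASSUMED_HATE_RATE:,.0f} hateful items/day")
```

Even at that assumed rate, Facebook alone would see hundreds of thousands of hateful images per day, far beyond what manual expert review can handle.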

The FightAgainstHate.com Approach


The problem of monitoring and analysis at scale was
first raised in 2009 in the online antisemitism
working group of the Global Forum to Combat
Antisemitism
In 2011 a software proposal was discussed. The key
aspects of this approach were:

Crowdsourcing reports from the public

Artificial intelligence (AI) for quality control of the reports
AI is to be based on calibration to experts' opinions
Platform is to provide sharing of data between experts to
enable further analysis
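A minimal sketch of what that quality-control step could look like: weight each reporter's vote by their historical agreement with expert classifications. The scheme and all names below are assumptions for illustration, not the actual FightAgainstHate.com algorithm:

```python
# Sketch (assumed scheme): weight crowdsourced hate-speech reports by each
# reporter's historical agreement with expert classifications.

def reporter_weight(history):
    """history: list of (reporter_label, expert_label) pairs for one reporter.
    Returns the agreement rate in [0, 1]; 0.5 if there is no history yet."""
    if not history:
        return 0.5
    agree = sum(1 for reporter, expert in history if reporter == expert)
    return agree / len(history)

def score_item(reports, histories):
    """reports: {reporter_id: bool} is-hate votes for one item.
    Returns the weighted fraction of 'is hate' votes."""
    total = weighted = 0.0
    for rid, vote in reports.items():
        w = reporter_weight(histories.get(rid, []))
        total += w
        if vote:
            weighted += w
    return weighted / total if total else 0.0

# Example: a reporter who tracks the experts outweighs one who does not.
histories = {
    "alice": [(True, True), (True, True), (False, False)],  # always agrees
    "bob": [(True, False), (True, False)],                  # never agrees
}
print(score_item({"alice": True, "bob": False}, histories))
```

Items scoring above a threshold could be routed to experts for confirmation, so expert time is spent on the reports most likely to be genuine.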

With this we have Response 3: Monitoring & Analysis, Transparency & Accountability

[Diagram linking: Public Reporting, Experts, Transparency, Accountability, Responding Public]

Final report to be released Jan 27, 2016

At the Global Forum to Combat Antisemitism in May we released a report based on data from the FightAgainstHate.com reporting tool. Here are some of the results:

[Pie chart: Antisemitism by social media platform, Facebook / YouTube / Twitter; sample size: 2,024 items; values shown: 36%, 41%, 23%]

[Pie chart: Antisemitism by classification sub-type: promoting violence against Jews, Holocaust denial, traditional antisemitism (not Israel-related), new antisemitism (Israel-related); values shown: 34%, 49%, 5%, 12%]
Drilling deeper, the results are even more startling. We see that different kinds of antisemitism are more prevalent on different platforms. Prevalence is a combination of what users upload, and what action the platform is taking to remove such content.

[Bar charts of item counts by platform (Facebook, YouTube, Twitter) for each sub-type. Promoting violence against Jews and Holocaust denial: values shown 16, 42, 44, 27, 72, 105. Traditional antisemitism and new antisemitism: values shown 120, 253, 167, 137, 214, 433.]

Forthcoming data:
Removal rates range from 2% (new antisemitism on YouTube) to 50% (promoting violence on Facebook)
The final report will provide a full breakdown by platform and hate type
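Removal rate here is simply the share of reported items that are no longer online. A sketch (the counts below are hypothetical, chosen only to reproduce the quoted 2% and 50% endpoints):

```python
# Removal rate = (reported - still online) / reported.
# The counts below are hypothetical illustrations, not report data.

def removal_rate(reported: int, still_online: int) -> float:
    """Fraction of reported items the platform has removed."""
    if reported == 0:
        return 0.0
    return (reported - still_online) / reported

print(f"{removal_rate(100, 98):.0%}")  # hypothetical counts giving 2% removed
print(f"{removal_rate(100, 50):.0%}")  # hypothetical counts giving 50% removed
```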

More on the SAMIH campaign at: http://fightagainsthate.com/samih/

Draft report to be released Dec 10, 2015. Full report Feb 2016.

Spotlight on Anti-Muslim Hate Report

Based on a sample of 1,111 items of anti-Muslim hate speech.
Anti-Muslim hate classification sub-types:
Muslims as a cultural threat: 33%
Muslims as a security risk: 19%
Demonising Muslims: 17%
Inciting anti-Muslim violence: 9%
Xenophobia / anti-refugee: 7%
Undermining Muslim allies: 5%
Other anti-Muslim hate: 4%
Socially excluding Muslims: 3%
Muslims as dishonest: 3%


Spotlight on Anti-Muslim Hate Report

Take-down rates so far:

[Pie charts of online vs offline (removed) shares on Facebook for three sub-types: demonising Muslims, Muslims as a security risk, and xenophobia / anti-refugee; offline shares shown: 6%, 20%, 31%, with 94%, 80%, 69% respectively still online.]

These items have been reported to the platforms through the usual reporting mechanisms. We will be offering senior management the list we are using, and allowing them time to review the items, before publishing the final report.

The Big Picture

Contact details

Websites: oboler.com / ohpi.org.au / fightagainsthate.com


Twitter: @oboler / @onlinehate
Facebook: facebook.com/onlinehate
E-mail via: http://ohpi.org.au/contact-us/
Help promoting FightAgainstHate.com will enable us to collect and share better data. NGOs and government agencies can endorse it (39 organisations endorsing it so far).
