
No. 61 / 1st quarter 2006

The economic journal on telecom, IT and media

Competition in
two-sided markets
Application to information and communication industries

Edited by Marc BOURREAU & Nathalie SONNAC


A Strategic Guide on Two-Sided Markets Applied to the ISP Market


Retail Payment Systems:
What Can we Learn from Two-Sided Markets?


Mobile Call Termination: a Tale of Two-Sided Markets

Impact of Mobile Usage on the Interpersonal Relations

Interview with David EVANS, Vice Chairman of LECG Europe

Foreword

Research into industrial economics has recently yielded a new concept, that of two-sided markets. This is the study of situations whereby one or several platforms facilitate interactions between users on two different sides of a market.
Several examples of two-sided markets spring to mind, notably in the ICT
industry. Video games platforms such as Atari, Nintendo and Sega, for example,
need to attract players to encourage developers to provide games for these
platforms. At the same time, these platforms need games in order to attract new
players. Similarly, TV networks are platforms that compete against each other in the advertising and viewer markets and give rise to inter-group externalities.
Applications in the telecommunications market are another example of such
platforms. More specifically, two-sided markets offer a highly pertinent tool for
analysing the call termination market.
This new method of analysis may encourage some competition authorities and regulators to reconsider the functioning of ICT markets and incite decision-makers to think about the industry strategies to be implemented. That is why
COMMUNICATIONS & STRATEGIES has decided to begin the year by
publishing a dossier focusing on these issues.
It is followed by three papers that round off the Dossier of our last issue, which was devoted to the first 'Transatlantic Telecom Forum' seminar held in Montpellier on November 22nd 2005 in the framework of IDATE's international conference.

We hope that you enjoy reading this issue.

Edmond Baranes
Editor

Yves Gassot
Publishing Director

Call for papers


Dossiers
No. 62 2nd quarter 2006

When convergence really means something


to the media industry
Editors: Rémi LE CHAMPION, Bernard GUILLOU & André LANGE

The convergence of the telecommunications, IT and media industries is no longer a prophecy drawn up by luminaries. The portability of media usage, the ubiquity of
content and continuous improvements in networks and platforms have all combined
to turn convergence into a reality. Its impact on market structure is proving swift and
massive. Convergence is also raising new issues and fuelling debates on the
evolution of the regulatory framework. We welcome both theoretical approaches and
case studies in this instance and would be happy to consider paper proposals on
topics such as:
- Convergence/discrimination by regulatory frameworks in broadcasting and online services
- Player strategies in the audiovisual, print, IT and telecommunications
industries
- Long term economic sustainability of a disseminated services offering
- Concentration versus diversification of content supply
- The role of public service broadcasters
- Analysis of consumers' emerging practices towards the media.
Please send proposals to:
remylc@noos.fr - guillou_bernard@yahoo.fr - a.lange@obs.coe.int

No. 63 3rd quarter 2006

Bundling strategies and competition in ICT industries


Editors: Edmond BARANES & Gilles LE BLANC

Bundling strategies and their effects on competition are now a well-established research topic in industrial economics. Since the technological "convergence" of voice, video, and data services in the late 1990s, the telecom sector has offered a rich experimental
field for bundling proposals. This special issue aims to confront recent advances in
the economic theory of bundling with the specific experiences and perspectives of
the telecom industry. We welcome theoretical papers, as well as empirical
contributions based on practical case studies, on the following issues: bundling and
competition policy, the implication of bundling strategies on competition and market
structure, bundling and innovation, bundling and network effects etc. Papers
addressing these questions with applications or illustrations in the software or media
industries will also be considered.
Please send proposals to:
edmond.baranes@univ-montp1.fr - leblanc@ensmp.fr

As far as practical and technical questions are concerned, proposals for papers must be
submitted in Word format (.doc) and must not
exceed 20-22 pages (6,000 to 7,000 words).
Please ensure that your illustrations (graphics,
figures, etc.) are in black and white - excluding any
color - and are of printing quality. It is essential that
they be adapted to the journal's format (with a
maximum width of 12 cm). We would also like to
remind you to include bibliographical references at
the end of the article. Should these references
appear as footnotes, please indicate the author's
name and the year of publication in the text.

Coordination and information


Sophie NIGON
s.nigon@idate.org
+33 (0)4 67 14 44 16
www.idate.org

No. 61, 1st quarter 2006

Contents
Dossier
Competition in two-sided markets:
Application to information and communication industries
Introduction
Marc BOURREAU & Nathalie SONNAC ..................................................... 11
A Strategic Guide on Two-Sided Markets Applied to the ISP Market
Thomas CORTADE ..................................................................................... 17
Retail Payment Systems: What can we Learn from Two-Sided Markets?
Marianne VERDIER ..................................................................................... 37
Mobile Call Termination: a Tale of Two-Sided Markets
Tommaso VALLETTI ................................................................................... 61
Impact of Mobile Usage on the Interpersonal Relations
AeRee KIM & Hitoshi MITOMO ................................................................... 79

Opinion
Interview with David EVANS, Vice Chairman of LECG Europe
Conducted by Marc BOURREAU, David SEVY & Nathalie SONNAC ......... 97

Articles
Municipal Wi-Fi Networks:
The Goals, Practices, and Policy Implications of the US Case
François BAR & Namkee PARK ................................................................ 107

The EU Regulatory Framework for Electronic Communications:
Relevance and Efficiency Three Years Later
Audrey BAUDRIER .................................................................................... 127
Modelling Scale and Scope in the Telecommunications Industry:
Problems in the Analysis of Competition and Innovation
Alain BOURDEAU de FONTENAY & Jonathan LIEBENAU ...................... 139

Features
Regulation and Competition
Competitive compliance:
streamlining the Regulation process in Telecom & Media
Gérard POGOREL ..................................................................................... 159

Firms and Markets


The world broadband access market
Loïc LE FLOCH .......................................................................................... 163

Technical Innovations
Instant messaging: Towards a convergent multimedia hub
Vincent BONNEAU .................................................................................... 171

Use Logics
Mobile CE - The nomadic era
Laurent MICHAUD ..................................................................................... 179

Book Review
Peter HUMPHREYS & Seamus SIMPSON, Globalisation, Convergence
and European Telecommunications Regulation, by Zdenek HRUBY ....... 185
Byung-Keun KIM, Internationalizing the Internet - The co-evolution of
Influence and Technology, by Bruno LANVIN ........................................... 186
Bethany McLEAN & Peter ELKIND, The Smartest Guys in the Room: The
Amazing Rise and Scandalous Fall of Enron, by James ALLEMAN ......... 187
Peggy VALCKE, Robert QUECK & Eva LIEVENS, EU Communications
Law: Significant Market Power in the Mobile Sector, by Petros KAVASSALIS ..... 190
Summary ................................................................................................. 192
The authors ................................................................................................. 193
ITS News..................................................................................................... 199
Announcements .......................................................................................... 203

Dossier

Competition in two-sided markets:


Application to information
and communication industries

Introduction
A Strategic Guide on Two-Sided Markets Applied
to the ISP Market
Retail Payment Systems: What can we Learn
from Two-Sided Markets?
Mobile Call Termination:
a Tale of Two-Sided Markets
Impact of Mobile Usage
on the Interpersonal Relations

Introduction
Marc BOURREAU
Ecole Nationale Supérieure des Télécommunications, Paris
Nathalie SONNAC
Laboratoire d'Economie industrielle du CREST-INSEE
& Université de Paris II

For the last five years, researchers in the theory of industrial organization have been showing an increasing interest in the economics of "platforms" and "two-sided markets". This interest was first sparked by antitrust cases
concerning the credit card market in the United States and the development
of business models in the "new economy."
There is no unified theory of two-sided markets to date, but rather
different models co-exist that are applied to specific industry cases: payment
systems (ROCHET & TIROLE, 2002), media markets (FERRANDO et alii,
2004), the internet (LAFFONT, MARCUS, REY & TIROLE, 2003; CAILLAUD
& JULLIEN, 2003; BAYE & MORGAN, 2001), etc.
Although these different models were developed simultaneously and
independently, it soon became clear that they shared some common
features, giving birth to the two-sided markets theory. In a nutshell, a two-sided market is one in which a platform (or more than one platform) brings
together two (or more) groups of consumers that are interdependent. More
formally, there are inter-group (or indirect) externalities in a two-sided
market.
The concepts of direct and indirect network effects (or network
externalities) are not new 1. There is a direct network effect when the utility
of a consumer who purchases a product or a service depends on the
number of consumers who purchase the same product or service. A

1 See the survey of Nicholas Economides for instance (ECONOMIDES, 1996).


standard example is the telephone; the higher the number of subscribers to the telephone network, the higher the utility of each current subscriber, and
the higher the incentives to connect to the network for those not yet
connected. This is because users value the possibility of making calls;
hence, the higher the number of users, the higher the number of potential
calls and the value of having a telephone line. The telephone is an example
of a positive network effect, as utility increases with the number of users.
There are also negative network effects; for instance, the higher the number
of drivers on a motorway, the lower the utility of any driver (when a network
suffers from congestion, there are negative network effects; see
GABSZEWICZ & SONNAC, 2006).
Though research in industrial organization is mainly focused on direct
network effects, the impact of indirect network effects (or "inter-group"
network effects) has also been analyzed (see ECONOMIDES, 1996, for
instance). There is an indirect (or inter-group) network effect when the utility
of a consumer belonging to one group of consumers depends on the number
of consumers in the other group(s).
Having defined indirect (inter-group) network effects, we can now
introduce the concepts of platforms and two-sided markets. In a two-sided
market, a platform offers a joint product to two (or more) different groups of
economic agents, who receive a benefit if a transaction takes place. The
platform is not neutral; it plays the role of an intermediary and facilitates
transactions between groups of agents. More precisely, the platform helps
both groups of agents to maximise gains from transactions. This is because
there are indirect (or inter-group) externalities that the platform can
internalize (and not the agents). For instance, in the media industry,
advertisers (which constitute one group of consumers for a newspaper or a
television channel) are willing to purchase more advertising slots the larger
the number of readers or viewers. The readers or viewers (which constitute
the other group of consumers) can either like advertisements (ad-lovers) or
not (ad-avoiders). Therefore, the number of advertisements represents a
positive externality for ad-lovers and a negative externality for ad-avoiders;
besides, readers or viewers generate a positive externality on advertisers, as
an advertisement has a higher impact if the audience is larger.
A platform generates revenues only if it can attract both sides of the
market, that is, if both groups of agents connect to the platform. Since one
group can be more price-sensitive than the other, the price structure set by
the platform plays an important role; for a given total price, different
allocations of this price between the two groups of consumers will generate


different volumes of transactions. This is one key characteristic of two-sided markets. In the terminology of the media industry, the price structure issue can be restated as follows: which combination of advertiser-support and pay-support should a media platform choose?
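To make this non-neutrality concrete, consider a stylized numerical example (ours, purely illustrative, with linear demands assumed for simplicity). Let a media platform face advertiser demand $D^A(p^A) = 10 - p^A$ and reader demand $D^R(p^R) = 10 - 2p^R$, and take the volume of interactions to be proportional to the product of the two:

$$V(p^A, p^R) = D^A(p^A) \times D^R(p^R)$$

For the same total price $p^A + p^R = 6$, the split $(p^A, p^R) = (5, 1)$ yields $V = 5 \times 8 = 40$, whereas $(3, 3)$ yields $V = 7 \times 4 = 28$: an identical price level generates very different transaction volumes depending on its allocation.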
Apart from the media industry, there are many other examples of
industries of a two-sided nature. Payment systems and the internet are both
cited above. Other examples of such industries include: shopping malls,
dating sites, software operating systems, estate companies, etc.
The theory of two-sided markets contributes to a better understanding of
information and communication technologies (ICT) markets. Indeed,
conventional economic reasoning cannot be applied to two-sided markets
very effectively. For instance, whereas the structure of prices does not play a
role in "one-sided markets", it does in two-sided markets; in other words, the
effect of competition in two-sided markets might be surprising. For these
reasons, regulatory and competition authorities are increasingly likely to take
two-sided markets theory into account in their analysis of competition cases.
This is another good reason for practitioners to investigate this new theory.
Finally, we believe that information and communication technologies provide
the tools that are needed to establish efficient platforms in two-sided
markets. Think, for example, of a local estate agency versus an online estate
agency; or an offline dating agency versus an online dating site.
Contribution and limits of two-sided markets theory

The literature available on two-sided markets provides formal settings for analysing pricing decisions and competition issues in these markets; given
that the latter are characterized by strong interdependencies, these models
are useful tools for strategic analysis. Moreover, the platform is not
presented as a neutral player in this literature; but plays the role of an active
intermediary between the groups of agents. Through its pricing strategies in
particular, a platform internalises inter-group externalities. Platforms have to
decide on price instruments (whether to price discriminate among groups or
not; whether to set access prices, usage prices etc.), price structure and the
total price.
As for the price structure, there are various possible scenarios. It may
prove advantageous for the platform only to charge one group of consumers
(free television and free press charge advertisers, for example, but not
viewers/readers). Apart from this extreme case, the two groups are likely to

14

No. 61, 1st Q. 2006

be charged different prices; in online dating sites, for instance, women are
charged less than men.
Moreover, when competition among platforms is introduced, this can lead
to surprising results. A monopolistic platform can be more efficient than
competing platforms, for instance. The results obtained under competition
depend on the assumptions with respect to consumers' connection
decisions: do consumers connect to one platform at most ("single-homing")
or can they connect to more than one platform ("multi-homing")? Finally, one
last important contribution of this theory is that it provides an interesting and
convincing analysis of the well-known "chicken and egg" problem (see
CAILLAUD & JULLIEN, 2003).
To-date, the theory of two-sided markets suffers from two main
weaknesses. Firstly, there is no unified framework as yet. However, we
believe that it might be difficult to find such a unified setting. Better prospects
may be offered by developing models tailored to specific industry situations.
Secondly, research has remained mainly theoretical so far, and empirical
evidence is weak.
The papers in this dossier

The following collection of papers provides interesting insights into how the theory of two-sided markets can be used to address strategic and policy
issues.
In his paper "A strategic guide to two-sided markets applied to the ISP
market", Thomas CORTADE provides an overview of the literature on twosided markets and applies this literature to analyse pricing strategies in the
internet access market. After a discussion of the main features of a twosided market, he analyzes pricing strategies and the effect of competition in
a two-sided market. This paper ends with a discussion of the implications of
these models in terms of regulatory and competition policy.
In her paper "Retail payment systems: what can we learn from two-sided
markets", Marianne VERDIER discusses the extent to which the theory of
two-sided markets provides useful insights into understanding payment
systems. She highlights the contribution and limits of this theory when
applied to payment systems. More specifically, she argues that some
payment systems, such as credit transfer or direct debit, do not fulfil all the
criteria of two-sided markets. Similarly, she points out that some key


features of payment systems, such as risk management, are ignored by existing literature on this topic.
In his paper "Mobile call termination: a tale of two-sided markets",
Tommaso VALLETTI argues that the mobile telephony market can be
viewed as a two-sided market, where senders and receivers constitute the
two groups of agents and the mobile network acts as the platform. He then
applies the theory of two-sided markets to discuss market definition in the
mobile industry, the analysis of market power and the regulation of mobile
call termination.
In their paper "Impact of Mobile Usage on the Interpersonal Relations",
AeRee KIM and Hitoshi MITOMO analyze the role of voice and text
messaging in social relations among young people. Based on a survey of
students at several major universities in Seoul, Taipei and Tokyo, they show
that the mobile platform has little impact on the width and depth of
relationships among young people. It appears that mobile communications, both text and voice, essentially help to maintain existing relations.
Finally, the interview with David EVANS provides several highly
interesting insights into the definition of two-sided markets, the contribution
that this theory can offer practitioners and policy makers and its relevance to
ICT markets.


References
BAYE M.R. & MORGAN J. (2001): "Information Gatekeepers on the Internet and the
Competitiveness of Homogeneous Product Markets", American Economic Review,
vol. 91, pp. 454-474.
CAILLAUD B. & JULLIEN B. (2003): "Chicken & egg: Competition among
intermediation service providers", Rand Journal of Economics, vol. 34, pp. 309-328.
ECONOMIDES N. (1996): "The Economics of Networks", International Journal of
Industrial Organization, vol. 14, pp. 673-699.
FERRANDO J., GABSZEWICZ J., LAUSSEL D. & SONNAC N. (2004): "Two-Sided Network Effects and Competition: an Application to Media Industries", Lucarnes bleues 2004/09.
GABSZEWICZ J., LAUSSEL D. & SONNAC N. (2004): "Programming and
Advertising Competition in the Broadcasting Industry", Journal of Economics and
Management Strategy, vol. 13, pp. 657-669.
GABSZEWICZ J. & SONNAC N. (2006): L'industrie des médias, Editions La Découverte, Paris, "Repères".
LAFFONT J.-J., MARCUS S., REY P. & TIROLE J. (2003): "Internet Interconnection and the off-net cost pricing principle", Rand Journal of Economics, vol. 34, pp. 370-390.
ROCHET J.-C. & TIROLE J. (2002): "Cooperation Among Competitors: The
Economics of Payment Card Associations", Rand Journal of Economics, vol. 33, pp.
549-570.

A Strategic Guide on Two-Sided Markets
Applied to the ISP Market
Thomas CORTADE
LASER-CREDEN, University of Montpellier

Abstract: This paper looks at a new body of literature that deals with two-sided markets
and focuses on the Internet Service Provider (ISP) segment. ISPs seem to act as a
platform enabling transactions between web sites and end consumers. We propose a
strategic guide for ISPs that covers features of two-sided markets such as strong
externalities and discuss how these market characteristics can affect competition policy.
Key words: Platform, externalities, price allocation, competition policy.

On the internet, interactions between firms and/or consumers play an important role. Interactions between two (or more) parties are possible due to the existence of "platforms" owned by third parties. The economics of these "two-sided markets" is the focus of a large amount of literature which has been published recently.
Following EVANS (2004), we can argue that a platform constitutes the
set of the institutional arrangements necessary to realise a transaction
between two users groups. Many markets can be seen as two-sided:
- the academic review market, since reviews compete to attract authors
and readers;
- the video game market, where the Sony Playstation, for example, is the platform and Sony is trying to attract video game providers and final users;
- a newspaper (or more generally media) is a platform between
advertisers and readers.
One important characteristic of two-sided markets is the presence of
network externalities between the two different groups using the platform.
There is a large amount of literature on positive network externalities (KATZ
& SHAPIRO, 1985, 1986; FARRELL & SALONER, 1985, 1986). However, in this literature, users belong to the same group and externalities are "intra-group" externalities, whereas in a two-sided market there are two different groups of users, and externalities are "inter-group" externalities.
The paper deals with the following questions. How can a two-sided
market be identified? What are the features of such markets? What are the
implications of these markets for competition policy and platform strategies?
We propose to focus on the Internet Service Providers (ISP) market.
Indeed in this market, ISPs attempt to attract web sites and users, and so
would appear to be platforms. This situation is particularly true of the B2B
segment. Indeed, Internet users on the platform are willing to buy CDs or books from Fnac or Amazon, for example.
The second section of the paper studies the features of two-sided
markets and their pricing implications. This is followed by a strategic guide
for ISPs, while the fourth section focuses on ISP strategy in the presence of
multihoming. Lastly, we propose to study the implications of ISPs on
regulatory and competition policy. The paper ends with a few concluding
remarks.

Features of two-sided markets

This section analyses the main features of two-sided markets by focusing on the nature of the externalities involved and then assessing their implications on the prices set by platforms.
We retain the same definition of the two-sided market adopted by both
EVANS (2004) and REISINGER (2003). A market is said to be two-sided if:
"at any point in time there are (a) two distinct groups of customers; (b)
the value obtained by one kind of customers increases with the
number of the other kind of customers; and (c) an intermediary is
necessary for internalizing the externalities created by one group for
the other group".

Externalities in two-sided markets


The presence of two different user groups calls for a modification to the
standard analysis of externalities. We can distinguish between two main sets
of externalities in a two-sided market: membership externalities and usage


externalities. The first set closely resembles classical externalities, such as positive network externalities. Indeed there are positive externalities in the
telecommunications industry, which have been analysed by KATZ &
SHAPIRO (1985), as well as FARRELL & SALONER (1985).
The first feature of a two-sided market is the membership externality. We can describe this external effect as follows: the more consumers are connected to the platform, the more consumers will want to join this platform. ARMSTRONG (2004a) referred to this effect as the membership
externality. For example, the greater the number of consumers connected to
an ISP, the more consumers will be willing to pay to join the ISP in order to
be able to exchange traffic. In two-sided markets, however, the membership
externality results from the presence of the two different user groups. This
means that the greater the number of web sites (ergo consumers) connected
to the platform, the more attractive the latter (and the web sites accessed via
the ISP) become from a consumer's point of view.
The second feature results from the interaction between the two user
groups. That is referred to as the usage externality. The usage externality
arises from one or several interactions caused by the ISP between web sites
and internet users. As ROSON (2004) notes, there are:
- markets where only one interaction exists, such as estate agencies;
- markets with several interactions, as is the case with the ISP market.
The interactions can be repeated. From this point of view, each agent
receives some benefit from each interaction. This is true of the Google
web site, for example, and for B2B more generally.
To summarize, there are two kinds of externalities. The usage externality
results from the interaction between two different user groups, whereas the
membership externality refers to the installed base.

General implications of externalities on ISPs' strategy


The presence of externalities in two-sided markets has implications on
the prices set by platforms, allowing us to draw a distinction between two-sided markets and their classical counterparts. This presence impacts both
the price level and the price structure. In this respect, ROCHET & TIROLE
(2004) argue that price structure can provide a basis for identifying two-sided
markets. Since there are two different user groups, ISPs face two distinct
types of demand. Thus the global end price is composed of a price paid by
the web site and a price paid by internet users. The presence of externalities


and the existence of two different prices raise the issue of price allocation.
This in turn poses key questions. What are an efficient price level and an efficient allocation of prices from the platform's point of view? What are
the implications of the presence of positive externalities?
The first implication is essential. EVANS (2003) affirms that the price on
each side can be different. In cases where demand is developed on each
side, price level and allocation play an important role in maintaining two
different types of consumer. We can argue with ROCHET & TIROLE (2003)
that since there is a membership externality, the price charged by ISPs for a
transaction decreases with the size of the installed base. Again, this effect
closely resembles the positive network externality. However, the usage
externality may be internalised by the user groups through the price
structure set by the platform. In this case EVANS (2004) argues that the
service is jointly consumed by the two types of users in two-sided markets,
and the usage externality exists only if the transaction takes place.
Figure 1 - An ISP as a platform
[Diagram: the platform (ISP) provides access to both sides, charging an access price a^B to internet users (buyers, B) and an access price a^S to web sites (sellers, S)]

Therefore the presence of externalities 1 implies that the aim of an ISP is not to offer cost-oriented and symmetric prices, but to balance demand between websites and internet users. In other words, discrimination is possible. In these conditions, a market can be described as two-sided if it is characterised by the following phenomenon: there is a confrontation between supply (websites) and demand (internet users), which, when combined, also express a demand for ISP access to realise transactions. The ISP market structure is illustrated in the diagram above.

1 It is worth noting that potential negative externalities do exist. This is the case with advertising
in newspapers. Indeed, consumers are willing to pay more to have less advertising. For a more
detailed analysis of this point see FERRANDO, GABSZEWICZ, LAUSSEL & SONNAC (2004)
and GABSZEWICZ, LAUSSEL & SONNAC (2002).


In this situation it is easy to understand the distinction proposed by ROCHET & TIROLE (2004) between the price level (the total price set by the
platform) and price structure (allocation). Thus there is no evidence that the
two types of users equally share the total price for access to the ISP. As
underlined by the definition above, the benefit gained by a consumer (website or internet user) results from their interactions on the platform. Hence, in such markets, a consumer on side i earns a positive net surplus from interactions with another consumer on side j ≠ i. This feature refers to
the usage externality, whereas the membership externality refers to the
decision (ex ante) to join the platform, for a given fixed fee.
ROCHET & TIROLE (2004) explain the features of two-sided markets in
the following way. They argue that, from a theoretical point of view, it is
impossible to apply Coase's theorem (1960) to two-sided markets, since the
transaction between sellers and buyers takes place only if there is a
platform. This implies the presence of a third party, which owns the platform,
and prevents direct bargaining between the agents. The authors conclude
that, in a Coasian world, the price structure would be neutral. In other words,
there would be neutrality in the allocation of the total price. However, as
explained above, this is not the case in two-sided markets. Since there is no
pricing neutrality, ISP strategy should be based on price allocation.

Pricing strategy for ISPs: a strategic guide

To introduce this topic, we can distinguish between internal competition occurring within the same platform, and external competition, which occurs between two or more platforms (see ROSON, 2004).
In this context, externalities have major implications for price structure.
Thus, if the price on one side of the market decreases, for internet users for
example, they tend to use the platform more. However, at the same time, the
other side, which consists of web sites, also stands to benefit from this.
Indeed when the price decreases, the direct effect is as follows: there are
more internet users, so the incentive for web sites to join the platform
increases. This result is not surprising. However, interaction between the
different user groups modifies the standard results of competition à la Bertrand, where prices are cost-oriented. Thus the utility derived by one
group depends on the number of users in the other group.


In this context price allocation is an important issue. ARMSTRONG (2004b) and ROCHET & TIROLE (2003) present an overview of this price
allocation problem. Their analyses insist on externalities and their
implications for prices, in line with LAFFONT, MARCUS, REY & TIROLE
(2003) who study ubiquitous connectivity. More precisely, they consider a
platform as a monopoly 2 in order to explain how price allocation is affected
by different factors such as:
- multihoming,
- user costs,
- platform differentiation,
- a platform's capacity to use a price based on the number of
transactions (ROCHET & TIROLE, 2003),
- the number of users (ARMSTRONG, 2004b),
- externalities between user groups (ROCHET & TIROLE, 2003) and
within a group (ARMSTRONG, 2004b).
In order to illustrate this topic, this paper studies the pricing strategy for a
monopolistic ISP in order to underline the impact of a two-sided market's
features.

A theoretical framework: a monopolistic ISP


In line with ARMSTRONG (2004b), ROCHET & TIROLE (2003), we
consider that the monopoly offers linear prices on the two sides. In this
situation the aim for an ISP should be to define a price level, but also an
efficient price allocation between web sites and internet users.
ARMSTRONG (2004b) focuses on the membership externality, while
ROCHET & TIROLE (2003) focus on the usage externality.
ARMSTRONG (2004b) compares a situation whereby a platform
maximizes the global welfare of the industry to a situation whereby it sets
prices to maximize its own profits. In cases where platforms maximize social
welfare, prices are below fixed costs, since prices are defined by this cost
minus a parameter of the externality related to the other side of the market.
In cases where the platform maximizes its own profit, the price is equal to
the fixed cost minus the externality plus a factor related to the demand
elasticity of the group in question, and given the participation of the other

2 Or that the connectivity is the same for the platforms.


side. ARMSTRONG (2004b) concludes that it is the membership externality that determines the allocation of prices.
The first insight into pricing strategy is consequently provided by the
membership externality. It is worth noting that the pricing strategy for a
monopolistic ISP could be to identify the side of the market most sensitive to
the network effect.
ROCHET & TIROLE (2003) focus on the usage externality. This could be
more realistic in our framework since several transactions take place
between the internet and web sites.
As a result, the price depends on the elasticity of the side in question.
This is noted by a Lerner index as follows:

$$\frac{p^S + p^B - c}{p^S + p^B} = \frac{1}{\eta^S + \eta^B}$$

where $p^B$ and $p^S$ are the prices paid respectively by the buyers (internet users) and the sellers (web sites), and $\eta^B$ and $\eta^S$ represent the respective elasticities of demand from each group. The interesting insight afforded by ROCHET & TIROLE (2004) is that the platform's relative margin is inversely proportional to the sum of the two elasticities.
It follows that ISP strategy should identify the side of the market that is more sensitive to price, by analysing the direct elasticity on each side as impacted by the usage externality. Internet users should thus be more sensitive than web sites.
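As a reading aid, here is a minimal sketch (ours) of how this condition obtains for a monopoly platform charging usage prices, in the spirit of ROCHET & TIROLE (2003). The platform chooses the two prices to maximize

$$\pi = (p^B + p^S - c)\, D^B(p^B)\, D^S(p^S)$$

The first-order conditions give

$$p^B + p^S - c = \frac{p^B}{\eta^B} = \frac{p^S}{\eta^S}, \qquad \eta^k = -\frac{p^k \, D^{k\prime}(p^k)}{D^k(p^k)},$$

and substituting $p^B = (p - c)\,\eta^B$ and $p^S = (p - c)\,\eta^S$ into $p = p^B + p^S$ yields the Lerner index above. Note that the price structure itself satisfies $p^B / p^S = \eta^B / \eta^S$.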

A strategic pricing guide in the presence of ISP competition


ISP competition with single-homing

In line with the principles described above (network effect and elasticity),
this section considers competition between ISPs, in cases of single-home connection, i.e. where each side can only be connected to one platform.
ARMSTRONG (2004b) focuses on competition between ISPs that
provide services perceived as different by users. The author supposes that
each consumer, web site or internet user, can be connected to one exclusive


ISP only. The first insight provided by this study is that the net surplus for
each group is a function of the external benefit of having an additional
consumer in the group. Its main conclusion is that ISPs should consider this
external benefit as a measure of the opportunity cost.
This means that, since there is competition between ISPs, their strategy
should be based on avoiding price hikes to discourage consumers from
switching to a competitor's platform. The expression of price is simple. It is
the sum of fixed costs and the substitutability parameter (since services are
perceived as different), minus the valorisation of the inter-group externality
resulting from the transaction. Moreover, this means that pricing is generally
not cost-oriented.
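A minimal sketch of this price expression, assuming the symmetric Hotelling specification commonly used in this strand of literature (the notation f, t, α is ours, not the paper's):

$$p_1 = f_1 + t_1 - \alpha_2$$

where $f_1$ is the per-user cost on side 1, $t_1$ the differentiation (substitutability) parameter on that side, and $\alpha_2$ the external benefit that one extra side-1 user brings to each side-2 user. The more the other group values a side's presence, the lower the price charged to that side, and the further the price departs from cost.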
The impact of single homing on pricing strategy can be summarised as
follows:
- In the presence of single homing, the more the users on one side value the presence of the other group, the lower the price determined by the ISP should be.
- However, the single homing hypothesis is not really consistent with the ISP market. Web sites, in particular, can be connected to several platforms.
ISP competition with multihoming

Following on from ARMSTRONG (2004b) and ROCHET & TIROLE (2003), this section considers cases whereby one side of the market can
multi home, i.e. connect to several platforms. In these conditions the result is
naturally as follows.
The inter-group externality arising from transactions, i.e. the usage externality, is valued more by multihoming users. As a result, competition takes place only on the single-home side.
ROCHET & TIROLE (2003) propose a more general model than
ARMSTRONG (2004b). They suppose that web sites are connected to two
different platforms, and that internet users choose the platform where
transactions take place. Transactions will take place when the benefit to
each user on each side (buyer and seller) is higher than the price set by the
platform. At first the authors postulate that the price levels proposed by each
ISP are the same.


In these conditions web sites have three different possibilities. They realise no transaction if the price is higher than the value generated by the
transaction. Secondly, the choice of connection to one or two platforms
implies the following trade off: a web site compares its net surplus expressed
by the difference between the benefit stood to be gained and the price in
view of demand from internet users in the two situations (multihoming versus
single home). Thus, the ISP's strategy consists of setting a price lower than
its rivals in order to limit the incentive for web sites to become multihoming.
Indeed, when a platform decreases its price, this increases its own demand
and attracts web sites that were previously multihoming.
To represent this trade-off ROCHET & TIROLE (2003) define the following index 3:

$$\sigma_i = \frac{d_1^B + d_2^B - D_j^B}{d_i^B}$$

With $\sigma_i \in [0,1]$, this index provides a measure of consumer loyalty to the platform $i$. $D_j^B$ corresponds to the proportion (demand) of internet users (buyers) who are willing to use the platform $j$ when web sites are exclusively connected to the platform $j$ ($i, j = 1, 2$). $d_i^B$ corresponds to the proportion (demand) of the internet users who are willing to proceed to a transaction on the platform $i$ when the seller is multihoming. So when $\sigma_i = 0$, all web sites are multihoming ($D_j^B = d_1^B + d_2^B$); on the contrary, when $\sigma_i = 1$, all web sites are single-home ($D_j^B = d_j^B$).
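A worked numerical illustration (the figures are ours, purely for intuition): take $d_1^B = 0.5$, $d_2^B = 0.4$ and $D_2^B = 0.7$; then

$$\sigma_1 = \frac{0.5 + 0.4 - 0.7}{0.5} = 0.4.$$

At the polar values, $D_2^B = d_1^B + d_2^B = 0.9$ gives $\sigma_1 = 0$ (all web sites multihome), while $D_2^B = d_2^B = 0.4$ gives $\sigma_1 = 1$ (all web sites single-home).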

The outcome of competition, where the aim of the platform is to maximize its own profit, closely resembles the outcome of the monopoly situation. We can note 4:

$$p^B + p^S - c = \frac{p^B}{\eta^B} = \frac{p^S}{\eta^S / \sigma_i}$$

where $\eta^B$ and $\eta^S$ respectively represent the demand-elasticity of internet users and of web sites for a given platform, where the transaction takes place. It is worth noting that the web sites' elasticity is corrected by the index

3 Where the index B refers to buyers, thus to internet users.


4 Where the index S refers to sellers, thus to web sites.


of loyalty. When web sites are connected to an exclusive platform ($\sigma_i = 1$), this produces the monopoly result.

In cases where multihoming is widespread, the sensitivity of web site demand to price variations is higher.
price of one platform implies that web sites have incentives to move from
multihoming to the single-home model. ROCHET & TIROLE (2004) conclude
that there are cross subsidies between the two sides. The authors call this
principle the "topsy-turvy principle". This can be defined as follows: an
increase in the price on one side implies an increase in the mark-up for the
platform, but also implies a decrease in price on the other side, in order to
attract users and to preserve balanced demands.
As a result, the more widespread multihoming becomes, the more
platform competition implies a decrease in price on the web site side. Finally,
the volume of transactions depends not only on the overall price, but also on
price allocation. The price structure is again not neutral in the presence of
competition with multihoming.
Finally ISP pricing strategy should be guided by the following factors,
which all have an impact on price allocation:
- Elasticity: For example, if the installed base on one side increases and if this side is captive, then it is profitable for the platform to increase its price (for web sites, for example) in order to decrease the price on the other side and attract new buyers (internet users).
- The web site's market power: if web sites enjoy significant market power, then the platform could decrease the price it charges those service providers to decrease the double marginalization effect.
- In the ISP market internet users can be seen as "marquee buyers," as highlighted by ROCHET & TIROLE (2003). Indeed, their presence has a high value for web sites and thus modifies the price structure. This effect implies that ISPs could set a lower price for buyers and a higher price for web sites.
- The consequences of multihoming are not clear. Indeed, if some users on the internet side are connected to several platforms, then price sensitivity appears to increase on this side (higher elasticity). Platforms can react in the following way: to create an incentive for web sites to stop multihoming, ISPs may decide to charge them low prices.


According to EVANS (2004), other factors impact the price structure, such as investment on one side of the market, since an investment allows the
platform to decrease the price on this side. As a result, this strategy makes it
possible to attract new consumers on the other side. Moreover Evans
argues that multihoming offers a key insight in the study of two-sided
markets. Multihoming consequently implies higher competitive pressure and
tends to decrease prices.

Competition for market share among ISPs


The analysis above explains how the features of two-sided markets affect
price structure, making them subject to economic consequences that differ
from standard effects. Under such circumstances, a platform may have an
incentive to modify price structure according to the valorisation of the usage
externality. Thus, the demand of one side tends to decrease if the demand
of the other is too low. In this context the following two questions arise:
- Which strategy should a platform adopt to attract both sides and reach critical installed bases on each side?
- On which side should demand be stimulated first by the platform?
In a competitive environment ISPs must be able to defend their existing
market share, as well as bidding for new clients.
CAILLAUD & JULLIEN (2003) and JULLIEN (2001) look at this issue in
greater detail. Following Caillaud and Jullien, we can argue that ISPs must
own an important installed base of web sites in order to attract internet
users. However, web sites will only be willing to pay if they anticipate that a
large number of internet users will be present. That is the chicken and egg
problem.
The authors argue that a possible strategy for platforms (in our case:
ISPs) is to "divide and conquer" the market. This platform strategy is based
on dividing one side to conquer the other, with price discrimination arising in
two-sided markets. Caillaud and Jullien focus on market structure and
platform strategies. The study considers imperfect competition with a two-part tariff between platforms, whereby the services provided can be exclusive (single-home) or non-exclusive (multihoming).


Competition for market share with exclusive services

Exclusive services denote a single-home connection. In this case, all users on both sides prefer to belong to the same platform. The ISP's strategy is consequently based on subsidies on one side. As a result, with exclusive services, externalities tend to favour market concentration. This appears to be an efficient market structure, which generates low profits as a result. CAILLAUD & JULLIEN (2003, 2004) explain this effect as
follows.
Let us suppose that two platforms compete against each other for exclusive services. This implies that all users single-home. A platform could decrease
the price on the Internet users' side in order to attract web sites, which stand
to gain a higher net surplus from connection to this platform. This process
can be repeated, turning the monopoly into an efficient structure with low
profits. In other words, when services are exclusive, competitive pressure is
high. This is true as long as transaction prices are not distorted, that is,
as long as the price enables ISPs to collect all the profit on one side and
subsidize the other. Under such circumstances subsidies would appear to
represent a competitive strategy and entail a concentrated market structure.
When there is intense competition for market share with exclusive
services, a concentrated market may offer an efficient market structure.
Competition for market share with non-exclusive services

This section examines cases where competition takes place with non
exclusive services. In many cases users are connected to several platforms
(multihoming). This is particularly true of internet users. CAILLAUD &
JULLIEN (2003) show that service providers have incentives to propose non
exclusive services when competitive pressure is not too high in order to
exercise their market power. In such cases it is easy to divide (to subsidize),
but more difficult to conquer. With non-exclusive services the competitive pressure is lower, making it more difficult to attract new users.
Finally ARMSTRONG & WRIGHT (2004) provide an analysis of this topic
based on endogenous users' decisions between exclusive and non-exclusive services. Their results closely resemble those cited above. We can
consequently argue that:
An optimal strategy for ISPs is to sustain losses on one side in order to
achieve a critical installed base on this side. The "divide and conquer"


strategy consists of ISPs subsidizing internet users in order to attract them. Once their participation is obtained on this side, there is a bandwagon effect
that allows the platform to recover the subsidy through the fixed fee paid by
web sites on the other side. This platform strategy is based on the idea of
"buying" the participation of one side in order to create some value for the
other due to the presence of inter group externalities. As seen above, many
factors have implications for the price structure and ISP strategy.
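The underlying arithmetic of "divide and conquer" can be sketched in a back-of-the-envelope way (the notation and figures are ours, not the authors'):

$$\underbrace{n^S \, \alpha \, n^B}_{\text{fees recovered from web sites}} \; - \; \underbrace{s \, n^B}_{\text{subsidy to internet users}} \; > \; 0 \quad \Longleftrightarrow \quad n^S \alpha > s$$

where each of the $n^B$ internet users receives a subsidy $s$, and each of the $n^S$ web sites pays a fixed fee equal to its valuation $\alpha n^B$ of the audience. Subsidizing one side pays off whenever the inter-group externality monetized on the other side exceeds the per-user subsidy.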

Implications for regulatory and competition policy

It seems that the usual principles of competition in terms of price level and allocation are modified in two-sided markets. More specifically, the
membership externality and usage externality imply that ISP strategy is not
based on cost-oriented prices, but on the ability to achieve balanced
demand. We have shown that the price strategy depends not only on
competitive pressure and elasticity, but also on externalities and their
valorisation for each group, according to whether there is multihoming or not.
As a result, the different level of valorisation impacts both on pricing strategy
and on competition to maintain and conquer market share. Under such
circumstances, a pricing strategy could consist of subsidizing one side to
attract consumers on the other. Those questions need to be debated from a
regulatory and competition policy point of view.

Competitive policy and price structure


A first insight is afforded by the impact of externalities on price structure,
which is not neutral in two-sided markets. An efficient price structure is no
longer cost-oriented. However, it seems essential to take into account the
surplus received by each consumer, web site and internet user from
transactions.
A price above marginal cost does not reflect market power
Indeed interactions between the two sides imply counter-intuitive effects.
As shown with the "divide and conquer" strategy, we can affirm with EVANS
(2004) that the estimation of market power should take both sides of the
market into consideration. This is particularly true if a price is higher than
marginal cost on one side, and below marginal cost on the other.


Competition policy in a traditional market can embrace price distortion (price below marginal cost) in the short term, but is opposed to this principle once the market becomes mature.
Thus, competition policy cannot consider these prices separately. This
policy is not relevant for two-sided markets, where goods or services are
only sold if the platform attracts sufficient users on both sides. In this
framework the competition authority seems unable to analyse collective
welfare without taking the price level, price allocation and the external effects
created by the presence of the two sides into account.
A strategy based on cross subsidies is not a predatory strategy
in a two-sided market
ROCHET & TIROLE (2004) compare a two-sided market with a vertically related market structure. They consider a vertical organization in which there is no direct relation with consumers (downstream market), but only with sellers.
platforms may have an incentive to subsidize prices in order to increase the
buyers' surplus and their willingness to pay. Another strategy according to
ROCHET & TIROLE (2004) is to encourage competition on one side, in
order to attract users on the other side. Platforms thus have an incentive to
offer cost-oriented prices. This stimulates interactions and tends to make the
volume of transactions optimal.
If we consider a vertical market structure such positive effects are limited
because there is no internalization of the benefits resulting from transactions
when platforms contract with sellers only. The authors demonstrate that
foreclosure is less likely in two-sided markets.
The key insight of their study is the existence of differences in the economic effects of one-sided and two-sided markets. According to ROCHET & TIROLE (2004) a platform is able to control or regulate interactions. This is not the case in a vertically related market. Here again, a price lower than marginal cost does not imply a predatory pricing strategy. In a two-sided market, it is
essential to consider that a given service is provided to each user on each
side at the same time.


Competitive policy and market concentration


A concentrated market is not an inefficient market structure
Increasing the number of firms in a market, as is the case in a
competitive multihoming scenario, has no positive impact on price structure.
Under such circumstances we have shown that internet users may pay a
lower price, since the ISP's strategy consists of reaching a critical installed
base on this side. On the other side, web sites are usually willing to pay a
higher price to participate in transactions.
As a result, a more competitive two-sided market does not imply that the
price structure is more balanced.
Moreover, if, like EVANS (2004), we consider a merger between ISPs, we
can argue that when competition policy faces a merger between two
platforms, the presence of the two sides must be considered. In general
terms, competition policy accepts or refuses the merger in view of the
evolution of prices. However, the total price must be considered in two-sided
markets. Indeed a price increase on one side can reflect a decrease on the
other in order to preserve balanced demand. So a price decrease on one
side increases willingness to pay on the other side. In the end the variation
in the total price may be low, although the price structure has changed
significantly.

Price regulation and interconnection in two-sided markets


Price regulation is not neutral if this regulation only attributes a competitive advantage to non-regulated firms. In two-sided markets WRIGHT (2004) underlines that a non-regulated firm will not want to match a suboptimal price structure imposed on a regulated firm. In other words, suppose regulation prevents one side from participating. The first impact of regulation is to decrease prices. However, users may prefer to pay more to access the non-regulated platform if installed bases are larger, thus enabling the non-regulated firm to increase its market share and profits.
This analysis of regulatory policy can be extended in line with LAFFONT,
MARCUS, REY & TIROLE (2003). They provide a model which considers a
reciprocal access charge in a two-sided market. The framework analysis is
as follows: two ISPs compete at the same time for final users and for web
sites. The authors suppose that to exchange traffic, the ISPs set a reciprocal


access charge for termination. This means that the ISP at the origination of
the traffic must pay an access charge to its rival for termination. Finally,
users' decision to join one exclusive ISP (i.e. single homing) is endogenous.
The ISPs are considered as perfect substitutes from the consumer's point
of view. The total price set by an ISP consists of the price set for consumers,
plus the price fixed for web sites. The authors adopt the "off-net cost
principle". Moreover, they suppose that the hypothesis of the "balanced
calling pattern" is respected. This highlights an important difference between
their views and theoretical literature on the telecommunications industry. The
receivers of traffic pay a price to receive calls, which is not true in
telecommunications. This has two major implications. The first is related to
prices, while the second is linked to competition stability.
The impact on price is as follows: when a consumer receives traffic
without paying, ISPs are left to pick up the perceived marginal cost (as
pointed out by LAFFONT, REY & TIROLE, 1998a). However, when
consumers pay to receive traffic, the perceived marginal cost is only equal to
the opportunity cost of losing a consumer who may switch to another ISP.
This is the result of the usage externality in two-sided markets. Moreover,
competition stability is stronger in this context. Indeed, when receivers do
not pay for traffic, then equilibrium can only exist if the access charge is
close to the marginal cost for termination or if the networks are close
substitutes. Yet in the scenario outlined above this is never the case, since
the sum of the prices (for each side) is just equal to the traffic cost,
independently of the access charge level. The access charge only
determines how cost is allocated between the two sides.
As a result, the price structure implied by externalities modifies the
access pricing problem. Here again, it is the study of the total price that is
relevant.
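A minimal sketch of this allocation logic (our notation; a simplification of the full model): under the off-net cost principle, with origination cost $c_O$, termination cost $c_T$ and reciprocal access charge $a$, the prices charged to the two sides take the form

$$p^{\text{sender}} = c_O + a, \qquad p^{\text{receiver}} = c_T - a, \qquad p^{\text{sender}} + p^{\text{receiver}} = c_O + c_T = c$$

so the access charge cancels out of the total price and only shifts the cost allocation between the two sides.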
All the features can potentially modify the tools used by competition
policy. In short, two main difficulties for competition policy arise with regard
to two-sided markets. The first is characterized by the utility received by
consumers, since there are usage and membership externalities. Although it
is difficult to measure these externalities, they must be taken into account in
studies of two-sided markets. The second difficulty concerns the advantages
that consumers derive from price structure that enable them to perform
transactions at the lowest possible cost. It is important to consider that the
benefits on one side increase with participation on the other. Again, it is not
easy to take this effect into account in competition policy.


However, there is no reason to believe that non-competitive behaviour is more widespread in two-sided markets. In fact, behaviour is just different,
with prices not set based on cost on each side, for example. Moreover, price
level and allocation must be determined in order to maximize output. In such
cases, it is important to increase installed bases on each side to solve the
chicken and egg problem. From this point of view, CAILLAUD & JULLIEN
(2003) show how dominant firms prefer to set prices related to volumes of
transactions, rather than a fixed fee when entry is impossible. Like
ARMSTRONG (2004b), CAILLAUD & JULLIEN (2003) show that the
pressure of competition is more intense without multihoming. This fact
seems to oppose economic intuitions.

Conclusion

This paper has attempted to offer a strategic guide to two-sided markets, and to identify the difficulties that their features raise for competition and regulatory policy.
First, our analysis shows that two-sided markets differ from their classical counterparts because a third party is involved that faces two different types of demand. The platform allows transactions between different user groups. As a result, there are two types of externality. The first, also present in the telecommunications industry, is the membership externality, whereas the usage externality is specific to the two-sided market structure: users of the platform benefit from the presence of members on the other side.
Such interactions have an impact on the price level, and especially on the allocation of the total price between the two sides of the market. Indeed, platforms charge each side a price. It is thus possible for the third party to charge one side a price below marginal cost and the other side a price above it. However, as demonstrated above, such prices do not reflect cross-subsidies or market power. Price allocation is not neutral.
As a result, we show that competition policy tools may need to be adapted to these features of two-sided markets. The most efficient market structure is not always competition (multihoming). On the contrary, concentrated markets can be justified when there are strong externalities. Similarly, mergers are not necessarily detrimental to the industry. The second insight


of our paper concerns the impact of these externalities on competition and regulatory policy. We show that a price higher than marginal cost does not necessarily reflect market power, while a strategy based on cross-subsidies is not a predatory strategy in a two-sided market. We also demonstrate that a concentrated market is not necessarily an inefficient market structure and that price regulation in two-sided markets would not be neutral.
Finally, we assess the impact of such features on the interconnection market. When we consider two-way interconnection in telecommunications, the theoretical literature shows that competition between symmetric networks can give rise to collusion, and implies exclusion when the networks are asymmetric. In two-sided markets, on the other hand, the role of the reciprocal access charge is modified, since this charge makes it possible to determine the price allocation. In such cases it is essential for competition policy to study a two-sided market and the strategic behaviour of its players by considering the total price, not the price paid by each side.


References
ARMSTRONG M. (2004b): "Competition in two-sided markets", mimeo.
ARMSTRONG M. & WRIGHT J. (2004): "Two-sided markets with multihoming and exclusive dealing", working paper.
CAILLAUD B. & JULLIEN B.:
- (2003): "Chicken & egg: Competition among intermediation service providers", RAND Journal of Economics, vol. 34, pp. 309-328.
- (2004): "Two-sided markets and electronic intermediaries", working paper, IDEI.
EVANS D.S.:
- (2003): "Some empirical aspects of multi-sided platform industries", Review of Network Economics, vol. 2, no. 3, pp. 191-209.
- (2004): "The antitrust economics of two-sided markets", Yale Journal on Regulation, vol. 20, pp. 325-382.
FERRANDO J., GABSZEWICZ J., LAUSSEL D. & SONNAC N. (2004): "Two-Sided Network Effects and Competition: An Application to Media Industries", conference on "The Economics of Two-Sided Markets", Toulouse, January 23rd-24th.
GABSZEWICZ J., LAUSSEL D. & SONNAC N. (2002): "Network effects in the press and advertising industries", mimeo, CORE Discussion Paper.
JULLIEN B.:
- (2001): "Competing with network externalities and price competition", mimeo, IDEI Toulouse.
- (2004): "Two-sided markets and electronic intermediaries", working paper, IDEI.
LAFFONT J.J., MARCUS S., REY P. & TIROLE J. (2003): "Internet interconnection and the off-net-cost pricing principle", RAND Journal of Economics, vol. 34, pp. 370-390.
LAFFONT J.J., REY P. & TIROLE J. (1998a): "Network competition: I. Overview and nondiscriminatory pricing", RAND Journal of Economics, vol. 29, pp. 1-37.
LAFFONT J.J. & TIROLE J. (2000): Competition in Telecommunications, MIT Press, Cambridge.
REISINGER M. (2003): "Two-sided markets with negative externalities", mimeo.
ROCHET J.C. & TIROLE J.:
- (2003): "Platform competition in two-sided markets", Journal of the European Economic Association, vol. 1, pp. 990-1029.
- (2004): "Two-sided markets: an overview", mimeo.
ROSON R. (2004): "Two-sided markets", mimeo.
WRIGHT J. (2004): "One-sided logic in two-sided markets", Review of Network Economics, vol. 3, pp. 42-63.

Retail Payment Systems:


What can we Learn from Two-Sided Markets?
Marianne VERDIER (*)
Department of Economics, ENST, Paris

Abstract: Some retail payment systems can be modelled as two-sided markets, where a
payment system facilitates money exchanges between consumers on one side and
merchants on the other. The system sets rules and standards, to ensure usage and
acceptance of its payment instruments by consumers and merchants respectively.
Some retail payment systems exhibit indirect network externalities, which is one of the
main criteria used to define two-sided markets. As more consumers use the payment
platform, more merchants are encouraged to join it. Conversely, the value of holding
payment instruments increases with the number of merchants accepting them. The theory
of two-sided markets contributes to a better understanding of these retail payment
systems, by showing that an asymmetric allocation of costs is needed to maximise the
volume of transactions. It also starts to offer results that could explain competition
between payment platforms.
However, this theory has its limits for a thorough understanding of retail payment systems. Firstly, we show that some retail payment systems, such as credit transfer or direct debit systems, do not necessarily fulfil all the theoretical criteria used to define two-sided markets. Moreover, this theory does not take into account specific features of the payment industry, such as risk management or fraud prevention. This leads us to propose new research directions.
Key words: payment systems, two-sided markets, platform competition, payment cards.

On December 4th 2004, a failure in the Belgian payment card system paralysed merchants' card transactions for over two hours.
According to the local newspaper Le Soir, retailers sustained losses
of an estimated EUR 20 million 1. The consequences of this failure show the
economic importance of payment systems for commercial exchanges. Over
231 billion transactions worth EUR 52,000 billion are processed each year in
European payment systems, at a cost representing between 2% and 3% of European GDP 2.

(*) I wish to thank "le Groupement des cartes bancaires" CB for its helpful support.
1 Source : www.silicon.fr, Thursday, December 9th 2004.
2 Study conducted by McKinsey in 2005, cited by the European Commission in its directive
proposal for payment services in the internal market. Directive COM(2005)603.



These transactions can be routed through several types of payment system: payment systems for interbank transfers, payment systems for securities traded on financial markets, and retail payment systems (for cash, cheques, card payments, credit transfers, direct debits, etc.). All these payment
systems, whether for retail or wholesale transactions, share a clearly defined
set of rules 3, processes, and instruments that enable their members to
exchange money.
It is increasingly difficult to understand how financial systems work,
because money exchange mechanisms now involve complex competitive
interactions between payment institutions on the one hand, and payment
systems on the other. In fact, private payment systems organised as
networks compete to offer services to payment institutions, so as to ease
and expand their money exchanges, while meeting the prudential constraints defined by central banks. It is now possible to speak of a genuine payment services industry, because monetary authorities no longer wholly control the competitive game. This trend is likely to intensify in years to
come, notably in Europe, where there is a project to liberalise payment
services 4. This ongoing revolution in the organisation of the European
financial system encourages us to think about the contributions of industrial
organization to the field of payment economics. The literature on networks is
a good starting point for understanding how payment systems are
organised 5. Indeed, this literature analyses the way a payment system
prices access to its infrastructure and usage of its services, in the presence
of network externalities 6. However, we will see that the theory on two-sided
markets provides us with new elements to explain the way retail payment
systems work, because it formalises the existence of indirect network
externalities between two distinct groups of users, consumers and
merchants. The payment system acts as an intermediary, which facilitates
the interactions between end-users, trying to get both sides of the market on
board by choosing appropriate prices.

3 The rules specify which payment instruments are accepted by the system, the characteristics
of acceptance points, risk management, the clearing mechanism and the processing of funds transfers.
4 For further information about the directive proposal on a "New Legal Framework" for payment
services, see: http://europa.eu.int/comm/internal_market/payments/framework/index_en.htm
5 DAVID (1985), KATZ & SHAPIRO (1985), FARRELL & SALONER (1985), et alii.
6 The value of a payment system increases with the number of its users. Network economics
also deal with a number of essential issues for payment systems, such as standard setting,
compatibility among service providers, and the role of an installed base of network facilities.


The purpose of this article is to underline that some private retail payment
systems fit in well with the theory of two-sided markets. Our analysis goes
beyond payment card systems. Our aim is also to highlight the limits of this
theory in its analysis of the payment industry, due to its failure to take into account some of the industry's peculiarities. The paper begins by discussing the two hypotheses provided by ROCHET & TIROLE (2004) to characterise two-sided markets, namely the presence of indirect network externalities and the
impact of price structure on transaction volume. We show that, unlike
wholesale payment systems, retail payment systems fit in well with the first
hypothesis, because they act as intermediaries between two distinct groups
of users, consumers on the one hand, and merchants on the other. We
subsequently draw a distinction between closed-loop and open-loop
payment systems, which is necessary to discuss the second hypothesis.
This typology enables us to show that two-sided market theory contributes to
a better understanding of the asymmetric prices chosen by payment
platforms. Meanwhile, we point out that it is less obvious to define direct
debit and credit transfer systems as two-sided markets. This is followed by a
discussion of the results provided by previous research on platform
competition. We show that it is difficult to apply these results to competition
between payment systems because the models do not take platform
differentiation sufficiently into account. Finally, we try to propose some
research perspectives. Indeed, the theory of two-sided markets could be
developed to account for specific features from the payment industry.

Contributions of two-sided market theory


to retail payment systems economics
Why does two-sided market theory contribute to a better understanding
of retail payment systems? Do all retail payment systems meet the criteria
used to define two-sided markets? In this section, we discuss the two
hypotheses provided by ROCHET & TIROLE (2004) to characterise two-sided markets. Then we try to identify the retail payment systems that fit in
with these assumptions.

Definition chosen for two-sided markets


Two-sided market theory starts from the following observation: in
many markets, a platform intervenes to facilitate the interactions between


two distinct groups of users (say group B for buyers and S for sellers) 7. This
platform chooses its prices (denoted $a^B$ and $a^S$ respectively) so as to attract
the two groups of agents in the market, and in order to internalise the
indirect network externalities that each group causes to the other. Indeed,
the number of agents from a given group willing to trade on the platform
depends on the number of agents on the other side of the market. The
presence of indirect network externalities between two distinct groups of
users builds a first criterion to define two-sided markets.
However, ROCHET & TIROLE (2004) consider that the first criterion is
not sufficient to conclude that a market is two-sided. They suggest a more
precise definition, whereby the transaction volume depends not only on the total price $a^S + a^B$, but also on the price structure $(a^B, a^S)$. For instance, the transaction volume should be sensitive to a small reduction in the price paid by one group of users, if the aggregate price level remains constant. According to Rochet and Tirole, the failure of the Coase theorem is the key feature that links transaction volume to price structure. In other words, end-users should not be able to pass interaction costs on from one side to the
other, and bargain to internalise indirect network externalities. This situation
may arise when transaction costs are high or when the platform sets up
rules that prevent end-users from bargaining 8.
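A minimal numerical sketch of this second criterion may help (the linear demand functions and all numbers below are hypothetical, chosen purely for illustration): hold the total price constant and reallocate it between the two sides; if the market is two-sided, the transaction volume varies with the split.

# Illustrative sketch: transaction volume under a constant total price.
# Hypothetical linear demands; a market is two-sided in the
# Rochet-Tirole sense if volume varies with the split (a_B, a_S)
# even when a_B + a_S is held fixed.

def demand_buyers(a_B):
    """Share of buyers willing to trade at per-transaction price a_B."""
    return max(0.0, 1.0 - a_B)

def demand_sellers(a_S):
    """Share of sellers willing to trade at per-transaction price a_S."""
    return max(0.0, 1.0 - 2.0 * a_S)  # sellers assumed more price-sensitive

def volume(a_B, a_S):
    """A transaction requires both sides on board."""
    return demand_buyers(a_B) * demand_sellers(a_S)

total_price = 0.5
for a_B in (0.1, 0.25, 0.4):
    a_S = total_price - a_B
    print(f"a_B={a_B:.2f}, a_S={a_S:.2f} -> volume={volume(a_B, a_S):.3f}")
# Output: volume rises from 0.180 to 0.375 to 0.480 as the burden
# shifts towards buyers, even though the total price never changes.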

Indirect network externalities in retail payment systems


The rising number of transactions carried out using paper money
accelerated the development of private retail payment systems. We show
that these systems meet the first criterion used to define two-sided markets.
The specificity of retail payment systems is that they deal with a great number of creditors and debtors for transactions of small or average value.
payment platforms act as intermediaries between two distinct groups of
users, consumers on the one hand and merchants on the other. By contrast,
wholesale payment systems only work with financial institutions, which can
be seen as a homogenous group of users.

7 We assume that sellers and buyers are homogenous.


8 We will see for instance that a payment platform can forbid surcharges. A merchant is said to
surcharge when he charges a higher retail price to a consumer using a specific payment
instrument.


The development of retail payment systems is closely related to the existence of indirect network externalities in retail banking markets. For
instance, in joining a payment platform, consumers take into account the
number of merchants accepting the payment instruments marketed by the
system. Conversely, the merchants' benefits from membership will increase
with the number of consumers holding the payment instruments of the
system. As a result, demands from consumers and merchants are heavily
inter-dependent. That is why new retail payment systems often face what is
referred to as "the chicken-and-egg problem". These payment systems
must use appropriate prices to attract both groups of users in the market,
and to balance demands. This creation of incentives for two distinct groups
of users is not an issue for wholesale payment systems, because the latter
only involve relatively homogeneous financial institutions 9. Thus, retail payment systems follow a specific logic, which corresponds to the first criterion used to define two-sided markets.

Typology of retail payment systems


There are two types of retail payment systems: closed-loop and open-loop systems. At this point in our analysis, it is important to understand this
typology, because the results presented by economic literature on platform
pricing are heavily influenced by the type of system considered.
In a closed-loop system 10, the platform is managed by a single
company, which signs all contracts directly with cardholders and merchants.
Amex, Diners Club, and private label cards like the "Pass" card issued by
the French retailer Carrefour are often referred to as closed-loop retail
payment systems. Amex issues cards that can only be accepted by
merchants affiliated with its platform and charges both consumers and
retailers directly. The system used by Carrefour for its "Pass" card is very
similar, except that its acceptance network is limited to Carrefour stores.
Those systems are not necessarily specific to payment cards: for example, the issuance of gift vouchers accepted by a group of shops corresponds to a three-party system 11.

9 Two banks can play different roles during the settlement of a transaction, but these roles may
be switched during the following deal.
10 Also referred to as "three-party" systems.
11 In many countries, gift vouchers are not legally considered as payment instruments.


Figure 1 - Closed-loop system
[Diagram: the payment system charges the buyer B a price $a^B = p^B$ and the seller S a price $a^S = p^S$; B purchases a good from S.]

The organisation of open-loop payment systems is more complex, for their members act as intermediaries between the platform and its end-users,
consumers and merchants. Two levels of pricing must be taken into account:
the pricing of the services provided by the platform to banks, and the pricing
of services provided by banks to end-users. In this case, the impact of the
prices chosen by the platform on end-users depends on the degree of
competition between banks 12. The Visa and MasterCard payment card
systems are examples of open-loop systems. Banks pay fees to become
members, but remain free to choose their pricing policy as regards
cardholders and merchants.
Figure 2 - Open-loop system
[Diagram: the consumer B pays a price $p^B$ to the consumer's bank (I); the merchant S pays a price $p^S$ to the merchant's bank (A); the two banks exchange a commission through the payment system; B purchases a good from S.]

12 For instance, if retail-banking markets are perfectly competitive, banks' costs are completely
passed on to consumers and merchants.


Examples of indirect network externalities in retail payment systems


In this section, we show that card and cheque payment systems meet the first criterion used to define two-sided markets. However, it is less obvious to identify indirect usage network externalities between end-users for remote payment systems, such as direct debit and credit transfer systems.
ROCHET & TIROLE (2002) first used card payment systems to illustrate
the two-sided markets theory. Indeed, these payment systems exhibit
indirect network membership and usage externalities. To build a card
payment system, banks have to sign up merchants to accept cards, and to
provide incentives for cardholders to use them. The launching of the
payment card "Carte Bleue" in France aptly illustrates the issue of
membership externalities. To overcome merchants' resistance to card
acceptance, the French banks decided to provide them with new services,
such as payment guarantee and partnerships with international networks like
BankAmericard and BarclayCard in 1973 13.
The existence of network externalities in the payment card industry was
first underlined by Baxter in 1983, a long time before the emergence of
literature on two-sided markets.
                                    Benefit from a transaction   Price of a transaction
Consumer, Bank I ("issuer")                  $b^B$                      $c_I$
Merchant, Bank A ("acquirer") (*)            $b^S$                      $c_A$

(*) In a closed-loop system, I and A are identical.

Baxter noticed that each user will be willing to proceed with a transaction if, and only if, the benefit of that transaction exceeds its price, which is equal to
the bank's marginal cost under perfect competition. Baxter assumes that the
merchant cannot discriminate according to the payment instrument 14.
Therefore, a consumer will be able to use his payment card if at the same

13 For more details about the launching of the payment card in France, see: "La carte bleue: la petite carte qui change la vie", Patricia Kapferer and Tristan Gaston-Breton, Édition Le Cherche Midi. The payment guarantee was a good way of competing with cheques, which were not guaranteed.
14 Otherwise, there is no externality associated with card usage, because the merchant can
always charge a higher price for this instrument. This rule is called "Non Discrimination Rule"
(NDR).

time $b^B \geq c_I$ and $b^S \geq c_A$. Consequently, socially optimal transactions 15 are sometimes refused, either by the consumer if $b^B < c_I$, or by the merchant if $b^S < c_A$ 16.
Economic literature has provided an in-depth analysis of payment card
networks, which is discussed in greater detail below. However, let us first
examine if there are other retail payment systems that also share this first
characteristic of two-sided markets. Cheque payment systems also entail
externalities of the same kind as payment card systems. There are retailers
who do not accept cheques at all, or set an upper limit (often at EUR 15 in
France) for cheque payments 17, resulting in a negative externality for the
consumer. The issue does not arise for cash payments, because this payment instrument is universal. There is no acceptance network for cash, which shows that cash does not meet the criteria used to define two-sided markets. The existence of usage externalities is also less obvious for remote payment systems, such as direct debit and credit transfer systems. Indeed, when a consumer transfers funds to a merchant's account, s/he transmits a payment order to his/her bank that cannot be refused
by the merchant 18. Furthermore, credit transfer and direct debit systems do
not need to be supported by specific investments and equipment from
banks' clients. Consequently, those systems do not need to provide
incentives for end-users to participate in these platforms 19.

15 Socially optimal transactions verify $b^S + b^B \geq c_A + c_I$.

16 Several factors can account for this negative externality. Merchants' resistance to card acceptance can be high, or there may be an imbalance between issuers' and acquirers' costs, generating a higher price on one side of the market.
17 Cheque payments are not guaranteed in France for payments exceeding EUR 15.
18 This analysis is based on the French direct debit and credit transfer systems. Systems in
Germany are very different. For further information, please refer to the study conducted by
Bogaert&Vandemeulebrooke at:
http://europa.eu.int/comm/internal_market/payments/directdebit/index_en.htm.
At the same time, one could argue that there are indirect membership externalities between banks in these payment systems.
19 This does not mean that both banks in direct debit and credit transfer systems offer the same
services for the settlement of a transaction.


Relation between pricing and transaction volume


in retail payment systems
This section of the paper tries to identify the conditions under which card
and cheque payment systems meet the second criterion used to define two-sided markets. How does the pricing chosen by the platform affect the
volume of transactions processed via the platform? We distinguish between
closed-loop and open-loop payment systems, and assume that merchants
cannot charge consumers different prices according to the type of payment
instrument used.
Case of closed-loop payment systems

Closed-loop systems using a linear tariff fall perfectly into line with the theoretical framework built by ROCHET & TIROLE (2003b) to analyse platform pricing. To begin with, they assume that a monopoly platform chooses its prices $a^B$ and $a^S$ for buyers and sellers, respectively, to maximise its profits. They show that two conditions must be satisfied to achieve an optimal outcome:
- The total price $a^T = a^S + a^B$ is given by the Lerner formula
$$\frac{a^T - c}{a^T} = \frac{1}{\eta},$$
where $c$ represents the platform's marginal cost and $\eta$ the sum of merchants' and consumers' demand elasticities.
- The price structure must reflect the ratio of consumers' and merchants' demand elasticities: $\frac{a^B}{a^S} = \frac{\eta^B}{\eta^S}$ 20.
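These two conditions can be recovered in a few lines. As a sketch (our simplification of the Rochet-Tirole setting, not their general model), suppose the transaction volume is $V = D^B(a^B)\,D^S(a^S)$ and the platform maximises $\pi = (a^B + a^S - c)\,D^B(a^B)\,D^S(a^S)$. The first-order conditions give:

$$\frac{\partial \pi}{\partial a^B} = 0 \;\Rightarrow\; a^T - c = -\frac{D^B}{dD^B/da^B} = \frac{a^B}{\eta^B}, \qquad \frac{\partial \pi}{\partial a^S} = 0 \;\Rightarrow\; a^T - c = \frac{a^S}{\eta^S}$$

Taking the ratio of the two conditions yields the structure rule $a^B/a^S = \eta^B/\eta^S$, while summing $a^B = (a^T - c)\,\eta^B$ and $a^S = (a^T - c)\,\eta^S$ yields $a^T = (a^T - c)(\eta^B + \eta^S)$, i.e. the Lerner formula with $\eta = \eta^B + \eta^S$.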

In reality, the price structures of payment systems are often skewed towards one side of the market 21. For example, EVANS (2003) shows that
the credit card system Diners Club developed thanks to the asymmetric
prices it charged consumers and merchants: two years after its creation in

20 If the platform chooses its prices to maximise the social surplus, the price structure also
reflects the difference between the average surplus generated on each side of the market (see
Rochet and Tirole 2003 for further information).
21 In reality, payment card platforms also charge fixed membership fees, but this does not modify the results obtained by Rochet and Tirole substantially. The platform uses per-interaction prices $p^B$ and $p^S$ which take usage pricing and fixed costs into account on a transaction-by-transaction basis.


1949 Diners Club was deriving over three quarters of its revenues from
merchants. Initially, credit cards were even given away to consumers to
encourage them to participate in the system and solve the chicken-and-egg
problem. At the same time, merchants were ready to pay more for
membership to attract buyers they perceived as valuable. This asymmetric
pricing is not specific to card payment systems. Gift vouchers, for example,
are often given away to consumers, while merchants must pay a
commission to the platform on acceptance 22. These examples show that
two-sided market theory provides us with a strong framework for explaining
asymmetric pricing on payment platforms.
Case of open-loop systems

As far as open-loop systems are concerned, the issue is more complex, because two levels of prices must be taken into account: the prices that
banks are charged by the platform, and the prices that end-users are
charged by banks. The impact of platform pricing on end-users will depend
on the kind of competition between the payment institutions that are members of the system. Economic literature on this subject provides an in-depth analysis of
open-loop payment card systems managed by payment card
associations 23. These systems use a specific mechanism of commissions to
charge platform usage, referred to as "interchange fees" 24.
Prices of payment card systems and interchange fees
The literature on payment card systems assumes that the platform
chooses a special tariff: the merchant's bank, A, (A for "Acquirer") pays to
the consumer's bank, I, (I for "Issuer") a price per interaction "a", which is
called the "interchange fee". In that case, using the notations we introduced
previously, we have: $a^S = -a^B = a$. If the interchange fee is positive, the
cardholder's bank is subsidised each time the card is used. Consequently, if
this subsidy is partially passed on to the cardholder, who pays a lower price
p per transaction, it serves to boost consumer demand. In compensation, the
acquirer (A) can totally or partially pass on its cost "a" to the commissions

22 This information is confirmed by the French company Kadeos.


23 Payment card systems are not the only example of open-loop payment systems. One can
cite, for instance, the system of cheque exchange and storage managed by the American
company Viewpointe. The literature focuses largely on payment card systems because of the popularity of this payment instrument.
24 Interchange fees have been subject to many controversies, which are not discussed in this
article.


"m" paid by merchants. This linear pricing studied in literature on the topic is
a good representation of a system like Visa. Indeed, the merchant's bank
pays the consumer's bank a fixed percentage per transaction, which
corresponds exactly to interchange fees as defined by literature on this
subject. However, other systems have chosen to implement more complex
pricing methods. The French payment card system "CB", for example, chose
to use a two-part tariff, which involves a fixed multilateral part, and a variable
bilateral part 25. This example suggests that the theoretical results shown by
the literature rely heavily on the modelling choice. Indeed, in all articles, the
interchange fee is modelled using a linear and multilateral tariff. In reality,
the definition of interchange fees varies a lot across countries and payment
card systems. The reader will find useful information in the comparative
analysis carried out by WEINER & WRIGHT (2005).
Interchange fees and externalities
Baxter's basic model shows that an appropriate choice of interchange fee
enables the platform to internalise the fundamental externality described
above. Let us look once again at the benefits and costs of an interaction for
each user.
              Benefit from a transaction   Cost of a transaction
Consumer               $b^B$                     $c_I - a$
Merchant               $b^S$                     $c_A + a$

Assume that the platform chooses an interchange fee $a = b^S - c_A$ 26. In this case, social optimality is restored, because each agent agrees to proceed with a socially optimal transaction. Consequently, when there is perfect competition, the platform pricing perfectly internalises the negative externality, if any, caused by the consumer to the merchant.
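A worked numerical illustration (the figures are ours, chosen purely for exposition): suppose $b^B = 10$, $c_I = 12$, $b^S = 8$ and $c_A = 3$. The transaction is socially optimal, since

$$b^B + b^S = 18 \;\geq\; c_I + c_A = 15,$$

yet with a zero interchange fee the consumer refuses it, because $b^B = 10 < c_I = 12$. With Baxter's fee $a = b^S - c_A = 5$, the consumer's cost falls to $c_I - a = 7 \leq b^B$ while the merchant's cost rises to $c_A + a = 8 = b^S$, so both sides agree and the socially optimal transaction goes through.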
Literature on interchange fees
Baxter's analysis of interchange fees opened up a new branch of economic literature on payment card systems, and several authors extended

25 The variable part varies across issuer-acquirer pairs (I, A). Its first element depends on the transaction volume, and its second on another bilateral component, calculated according to the relative number of cards from each bank used fraudulently.
26 The interchange fee in this model is either positive or negative. The hypothesis of a positive
interchange fee is equivalent to the assumption of a negative usage externality caused by the
consumer to the merchant.


their work to cover this issue 27. In fact, this literature mainly considers two
questions. Firstly, under what conditions does the interchange fee chosen by
the platform impact the transaction volume generated by end-users?
Secondly, if the interchange fee is not neutral, is the interchange fee chosen
by the platform socially optimal? The reader can refer to ROCHET's review
of the literature for further details (2003), and to WEINER & WRIGHT (2005)
for a cross-country analysis.

Modelling competition between payment systems:


perspective from two-sided markets theory
Two-sided market theory offers a good starting point for modelling platform competition, shedding light on the way payment systems interact strategically. This section shows that payment platforms can compete either
to attract new consumers or to affiliate new merchants. Afterwards, when
several payment platforms are available, platforms compete for usage. The
intensity of competition for membership depends on whether users can
belong to several platforms. We then discuss the findings of research on the
usage prices chosen by competing payment platforms. This section will
essentially follow the literature and focus on payment cards.

Competition between payment systems: access and usage


There are two kinds of competition between payment platforms:
competition for access or membership and competition for usage. Access
competition characterises the fact that payment systems seek to encourage
as many users as possible to become members of the platform. For closed-loop payment systems, like Amex, it simply consists of getting a large
number of consumers and merchants on board. For open-loop systems, as
usual, the mechanism is more complex, because there are two levels of
players. The system must provide incentives for banks to participate and
become members. Afterwards, the latter compete in retail banking markets to offer payment services to consumers and merchants.

27 ROCHET & TIROLE, SCHMALANSEE, WRIGHT, GANS & KING they model different
sorts of competition between banks, between merchants on retail markets, consider
heterogeneous consumers, differentiated merchants etc.


This difference between closed-loop and open-loop systems is extremely important to an understanding of the way payment systems compete to
provide payment services on the one hand and the way banks compete in
retail markets on the other, as members of the same payment association.
Platform usage is necessarily impacted by the fact that an open-loop system
does not directly charge its end-users. Indeed, banks will not necessarily
provide incentives for consumers to buy the payment instruments specifically
accepted by the system and to use them at the point of sale. Most of the
time, banks multihome, which means that they offer consumers a bundle of
payment instruments, which can be issued by competing systems. For
instance, for a given transaction, a bank may offer its clients the possibility of
using the cheque payment system, the Visa card payment system, or other
competing systems. Multihoming can offer a competitive edge, because
consumers will choose the bank that provides them with the most
comprehensive bundle of payment instruments at the best price. To some
extent, banks can sometimes encourage the use of a payment instrument at
the point of sale, by providing consumers with bonus points or rebates.
This shows that the issue of multihoming and singlehoming is essential to
understanding payment platform competition. If consumers hold several
payment instruments at the point of sale, for instance, systems cannot charge merchants excessive commissions; otherwise the latter would
not choose to be affiliated with the platform. Merchants need not be affiliated
with several platforms, because they know that consumers can substitute
one payment instrument for another. In order to compete with Amex for
merchants' acceptance, for instance, Visa chose to charge merchants lower
fees. This strategy was supported by an advertising campaign claiming that
Visa was "everywhere you want to be", whereas some merchants do not
accept Amex. This enabled Visa to overtake Amex in the credit card market,
despite the fact that the Visa brand appeared 18 years later 28. Conversely,
if payment systems know that merchants need to accept as many payment
instruments as possible to be valuable for consumers, they will tend to
compete on the other side of the market by opting for lower fees for
consumers. The cost allocation is then skewed towards merchants.

28 Amex started issuing cards in 1958.


Competition for usage 29


ROCHET & TIROLE obtain the first results about the prices charged by
two competing closed-loop platforms. As a first step, they assume that buyers
and sellers are affiliated with two proprietary platforms, and that they can
use one platform or both. They also consider that merchants are not
strategic, and that consumers end up choosing the platform on which the
transaction will be processed, when sellers are ready to deal on both
platforms. This assumption is perfectly realistic for most payment systems
because the consumer chooses the instrument used at the point of sale,
provided the merchant leaves him the choice. However, it is perhaps not
empirically true to assume that merchants always accept all cards. For
instance, in 1991, Boston restaurant owners started to decline American
Express cards in their establishments in a highly public campaign against
Amex's high merchant commissions. The incident became known as "The
Boston Fee Party."
ROCHET & TIROLE show that platform competition results in prices very
similar to those chosen by a monopoly platform. Indeed, the symmetric
equilibrium is characterised by the following equation:

aB  aS  c

aB

K 0B

aS

KS

On the buyers' side, demand elasticity is replaced with $\eta_0^B$, the "own-brand" elasticity (the demand elasticity of buyers who choose platform i when the seller offers transactions on both platforms 30). On the sellers' side, demand elasticity is multiplied by the inverse of $\sigma$, the singlehoming index. The index $\sigma$, which can also be seen as a loyalty index, measures the proportion of consumers that stop trading when their favourite platform is no longer
Empirical results from Marc RYSMAN's work (2004) show that consumers
often hold several payment cards, but tend to use one platform. Over 75% of

29 ARMSTRONG (2005), "Competition in two-sided markets", analyses platform competition when heterogeneous agents differ in their fixed adoption benefits (in ROCHET & TIROLE, agents differ in their usage benefits). He suggests that his assumptions (lump-sum pricing, fixed costs) are intended to reflect two-sided markets other than payment cards (nightclubs, shopping malls and newspapers).
30 If $\sigma_i = 1$, then own-brand elasticity is equal to demand elasticity.


the participants surveyed in his study put over 87% of their monthly card spending on the same card. However, some consumers switch to another platform periodically.
Marc Rysman computes a transition matrix, which provides a good
estimation of the loyalty index 31. The indices computed are relatively high
(from 73.1% for Amex to 85.4% for Visa) 32.
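To fix ideas on how a transition matrix yields a loyalty index, here is a minimal sketch with made-up figures (they are not Rysman's estimates): each row gives the probability that a consumer whose favourite network this month is the row network favours each network next month, and the diagonal entries are the loyalty indices.

# Illustrative month-to-month transition matrix between card networks.
# Rows/columns: (Visa, MasterCard, Amex). All figures are hypothetical,
# not Rysman's estimates; each row sums to one.
import numpy as np

networks = ["Visa", "MasterCard", "Amex"]
P = np.array([
    [0.85, 0.10, 0.05],  # favourite network was Visa
    [0.12, 0.80, 0.08],  # favourite network was MasterCard
    [0.15, 0.12, 0.73],  # favourite network was Amex
])
assert np.allclose(P.sum(axis=1), 1.0)

# The loyalty index of each network is the probability of keeping the
# same favourite network next month: the diagonal of P.
for name, loyalty in zip(networks, np.diag(P)):
    print(f"{name}: loyalty index = {loyalty:.1%}")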
How are Rochet and Tirole's results modified if competition takes place between open-loop platforms? The authors assume constant margins for banks competing in retail markets, denoted by $m^B$ and $m^S$ respectively, with $m = m^S + m^B$ 33. The prices charged by competing platforms at a symmetric equilibrium are characterised by the following equation:
$$p^B + p^S - c - m = \frac{p^B}{\eta_0^B} = \frac{p^S}{\eta^S/\sigma}$$

Even if competition in retail markets is intense ($m = 0$), platform competition does not result in a socially optimal price structure, as the equation above fails to account for end-user surpluses.
GUTHRIE & WRIGHT (2003) extend ROCHET & TIROLE's models
(2002 and 2003b) 34. If consumers hold only one payment card, platforms
compete to attract this group of users. Consequently, they wish to distort the
price structure to favour them, which can only take place through a rise in
the interchange fee subsidising the issuer. However, if merchants are
homogenous in terms of costs and acceptance benefits, platform
competition does not modify the prices chosen by a monopoly platform.
Indeed, the latter already chooses the maximal level of interchange fee
compatible with card acceptance. On the contrary, if all merchants are
heterogeneous, platform competition causes an increase in interchange
fees and, at the same time, in the commissions paid by merchants. This situation

31 For example, if a consumer mostly used the Visa network in a given month, what would be
the probability that the Visa network would be his/her favourite network again the following
month?
32 ARMSTRONG & WRIGHT analyse the role of exclusive contracts that prevent multihoming
in platform competition.
33 According to this hypothesis, maximisation of profits and volumes are equivalent for the
platform.
34 They consider strategic merchants with no surcharges and perfect competition between
identical platforms. Like ROCHET & TIROLE (2002 and 2003), they also assume constant
margins for banks on each group of users.


has been observed in reality: platform competition can paradoxically trigger a rise in the prices paid by one group of users 35. Indeed, a study conducted by Kenneth Posner and Athina Meehan from Morgan Stanley's research department 36 shows that Visa and MasterCard raised their interchange fees to face competition from Amex.
If consumers hold two payment cards, platforms compete on the
merchant side to encourage the latter to accept their card, or even to favour
it over the second card held by the consumer. In such cases, platform
competition decreases interchange fees, which may even become lower
than the socially optimal fees. This situation is less frequently observed in
reality, because merchants are not allowed to temporarily refuse a payment
card when they multihome. Sometimes they must even accept all cards
offered by a given network, due to the "honour-all-cards rule" 37.
Furthermore, as we saw previously in Rysman's study, consumers often
express a strong preference for one card.
CHAKRAVORTI & ROSON (2005) extend the literature in two directions.
Firstly, they assume that platforms are differentiated and that the benefits of
usage of each platform are different for consumers and merchants.
Secondly, they take into account the impact of competition on the total price
charged by the platform 38. However, they do not consider strategic
merchants and assume that consumers use one card at most. Relaxing the constant-margin hypothesis modifies the results obtained first by Rochet and Tirole, then by Guthrie and Wright. Indeed, Chakravorti and Roson
compare the prices chosen by a duopoly of two differentiated platforms and
a cartel. They show that competition reduces the total price charged by the
platform, which increases the welfare of consumers and merchants.
However, the impact of the price reduction on each sub-market (consumer
and merchant) is not uniform. Consequently, when banks' margins are not
fixed, platform competition has a positive impact on welfare. Assuming uniformly distributed benefits, they determine the conditions under which consumers stand to benefit more from the price decrease than merchants.

35 However, Guthrie and Wright's hypotheses do not offer a clear description of the situation
observed in reality.
36 Diversified financials. Industry Overview "Attacking the death star", April 15th, 2004.
37 In practice, it is extremely difficult to verify whether merchants respect this rule for payments
at the point of sale.
38 Banks' margins are not fixed. As we saw previously, maximisation of profits and volumes are
not equivalent for the platform.


Research perspectives and changes needed


to better understand retail payment systems

The limits of the "two-sided" markets approach


The lack of empirical evidence quantifying indirect network externalities
in payment systems

The first natural criticism of the two-sided market theory pertains to the
lack of empirical research to quantify indirect network externalities between
consumers and merchants. In order to estimate membership and usage
externalities in payment card systems, for instance, one should first use data
to estimate demand on both sides of the market. This would be difficult to
achieve, since most merchants are already equipped with terminals to
accept cards in the majority of developed countries. When they are affiliated
with a system, merchants are generally not allowed to turn down cards
because of the "honour-all-cards" rule. It would consequently be impossible
to estimate the negative usage externality that merchants would be likely to
cause to consumers. It would also be rather difficult to derive a demand
function for cards on the consumer side, because prices vary significantly
from bank to bank, according to the bundle of services sold with the card.
The appropriate theoretical framework from the literature on two-sided markets should consequently be chosen to estimate the links between
transaction volumes and price structure. This would also prove difficult to
estimate for the payment card industry, since consumers usually pay yearly
or quarterly membership fees, while merchants are charged per transaction.
Compared to other two-sided markets, like the media industry, it seems
more difficult to gather the appropriate data and develop a theoretical
framework to analyse the payment card industry 39.
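As a minimal sketch of the first step described above, the following toy regression estimates a constant-elasticity demand on one side of the market from (price, quantity) data. The data are synthetic and the specification deliberately naive: with real payment data one would need controls for the service bundle, the fee structure and, crucially, participation on the other side of the market.

# Toy estimation of a constant-elasticity card demand (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
true_elasticity = -0.8
prices = rng.uniform(10, 40, size=200)  # e.g. hypothetical yearly card fees
quantities = 5000 * prices**true_elasticity * rng.lognormal(0.0, 0.1, 200)

# Log-log OLS: log q = alpha + beta * log p, where beta is the elasticity.
X = np.column_stack([np.ones_like(prices), np.log(prices)])
beta = np.linalg.lstsq(X, np.log(quantities), rcond=None)[0]
print(f"estimated price elasticity: {beta[1]:.2f}")  # close to -0.8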
The lack of specific elements from the payments industry

As far as theory is concerned, the models developed in the literature on two-sided markets do not take into account key aspects of the payment industry, such as risk management or fraud prevention. Payment systems

39 Readers interested in empirical analysis of other two-sided markets can refer to KAISER &
WRIGHT (2006) for the media industry and RYSMAN (2004) for the business directory market.


often choose more complex pricing methods to provide incentives for their
members to invest in security or in fraud detection programs, as already
mentioned for the French Carte Bleue system. For the moment, the quality
of payment services (which can depend on different elements such as
security or payments guarantee) is absent from the analysis of payment
systems. However, current evolutions in the European legal framework
(NLF) should encourage economists to analyse other aspects of payment
systems, such as the impact of risk management on access and usage
pricing. Let us consider another example. If a payment system gives access
to two different types of users, a mobile network operator that is a simple
"payment institution" and a credit institution, which pricing method should it
implement? Both players are subject to different regulatory constraints in the
New Legal Framework. However, an incident caused by an erroneous risk
management strategy could eventually affect all members of the system in
the same way. It would also be interesting to take the quality and risk
management issues into account when studying platform competition. The
following subsection provides some research perspectives for competition
between payment platforms.

The limits of platform competition models for the payment industry


It is worth noting that there are some limits to the current approach to competition between payment systems. Firstly, payment system
differentiation is not sufficiently taken into account. Moreover, the models
used in the literature do not help us to understand competition between
closed-loop and open-loop systems. Finally, evolutions observed in the
industry encourage us to study other issues, and enable us to provide some
research perspectives.
The issue of payment system differentiation

It seems difficult to use the results obtained by Rochet and Tirole or Guthrie and Wright to compare the prices chosen by Amex and Diners, or Visa and MasterCard. Indeed, payment systems generally offer
differentiated services for the settlement of a transaction: payment
guarantee, transaction security, clearing method, etc. Even the Visa and
MasterCard networks, which seem very similar, try to differentiate
themselves through innovation and technologies. Consequently, system competition is unlikely to generate a symmetric equilibrium, as is often assumed in the literature. Moreover, differentiation of payment services could be


useful to model the entry of newcomers in the payment industry. This issue
is all the more important, since new players, like retailers or mobile network
operators, have shown their willingness to participate in the payment industry, and to offer alternative payment technologies. Under what
conditions will these newcomers compete with existing payment systems?
Will competition between payment systems involve some infrastructure
sharing, differentiation or complementarity of payment services?
CHAKRAVORTI & ROSON (2005) already started to analyse the impact
of payment system differentiation on platform competition. It would be
interesting to develop their study to find analytical results 40. Meanwhile, it
seems very important to lift the constant margin hypothesis to analyse
platform competition 41. Are the results obtained by Chakravorti and Roson
sensitive to the assumption of uniform distributions of benefits for consumers
and merchants? It would also be interesting to see how these conclusions
may evolve with strategic merchants. This would confirm whether Guthrie
and Wright's results are related to this specific hypothesis and enable us to
compare both models. At the same time, it may be useful to study platform
system differentiation through the prism of unbundling. Will payment
systems benefit from the unbundling of essential functions, such as clearing,
to better differentiate themselves from other services? When the transaction
chain is unbundled, how do payment systems manage their complementarity? What is the impact of unbundling on the risks borne by each payment system?
Competition between closed-loop and open-loop systems

The literature on two-sided markets does not often model competition between closed-loop and open-loop systems 42. In the panel data used by
Marc Rysman, 12% of the consumers that prefer to use the Discover
network one month will switch to Visa the following month. This proves that
there is also a kind of competition between closed-loop and open-loop
systems. From a bank's point of view, it would be interesting to analyse the
incentives to participate in an open-loop system, rather than building its own

40 CHAKRAVORTI & ROSON only give numerical simulations for the results of competition
between differentiated platforms.
41 These authors work on the hypothesis that banks' margins are constant. Consequently, as
we saw previously, maximisation of volume and profits for the platform are identical.
42 To our knowledge, the only paper on this subject was written by MANENTI & SOMMA
(2002).


proprietary network. This issue seems all the more important nowadays
since open-loop systems are increasingly subject to regulatory pressure,
forcing them to decrease their interchange fees. Is it possible for open-loop
systems to decrease their interchange fees while facing competition from
closed-loop systems? And is it socially optimal to set up an asymmetric
regulation of interchange fees as the Australian regulator has done?
New research perspectives inspired by the single European payments area

The creation of a single European payments area and the future of the
various national payments systems open interesting research perspectives.
For instance, the literature on two-sided markets does not address the incentives that could encourage two payment systems to merge. In fact, the
single European payments area will certainly encourage national systems to
seek economies of scale. Mergers between payment systems are not the
only scenario to consider. One could also imagine that the national systems
would cooperate so as to accept payment instruments issued by other
platforms. National payment systems can also decide to use common
supports for payment instruments that can be used in several different
systems. For instance, the cards issued in the French system CB are
cobranded with the Visa or the MasterCard logo, which means that they are accepted by these networks when French cardholders use them abroad. Under what
conditions and rules will the payment systems be able to use common
instruments 43? What is the impact of cobranding alliances on competition?

Conclusion

The literature on two-sided markets sheds light on the way some retail
payment systems work and interact. The essential contribution of this branch
of literature is to show that asymmetric user pricing is needed to optimise the
volume of transactions that are routed through the platform. Consumers and
merchants are charged prices by payment systems that do not reflect the
cost of serving them. Nevertheless, we show that the two-sided market
approach does not enable us to cover all the features of the payments

43 It would be interesting to examine whether the example provided by SCHIFF (2003), whereby open-loop systems offer shared access to one side of the market, could be applied to payment systems.


industry. Moreover, while card payments have been widely analysed, we still
lack results on other payment instruments such as cheques. Furthermore,
risk management and fraud prevention deserve more attention. One way of
tackling those issues may be to consider the quality of the service provided
by the platform. Finally, the literature on platform competition has not yet
dealt with the issue of cooperation or mergers between payment systems.
Our view is that a better understanding of retail payment systems is
needed to ensure an appropriate regulation of these markets. For instance,
we do not yet know which definitive rules will be adopted in the European
directive to define European payment instruments. However, we think that
the contributions of the two-sided market theory should not be neglected.
For example, payment cards and direct debits do not obey the same logic
and should not be dealt with in the same way. We also believe that fraud
prevention is a key issue, which should inform reflections on the various
statuses that the Commission intends to define for payment service
providers.
Moreover, the emergence of new payment instruments and technological
evolutions, such as contactless payments, will perhaps provide us with some
data to empirically test the hypotheses of two-sided market theory.


References
ARMSTRONG M. (2005): "Competition in Two-Sided Markets", working paper, May.
ARMSTRONG M. & WRIGHT J. (2004): "Two-Sided markets, Competitive
Bottlenecks and Exclusive Contracts", working Paper, November.
BAUMOL W. (1952): "The Transactions Demand for Cash", Quarterly Journal of Economics, vol. 67, no. 4, pp. 545-556.
BAXTER, W. (1983): "Bank Interchange of Transactional Paper: Legal and Economic
Perspectives", Journal of Law & Economics, vol. 26, no. 3, October, pp. 541-588.
BORDES CH., HAUTCOEUR P-C., LACOUE-LABARTHE D. & MISHKIN F. (2005):
"The economics of money, banking, and financial markets", Pearson education.
CHAKRAVORTI S. (2003): "Theory of Credit Card Networks: A survey of the
literature", Review of Network Economics, vol. 2, no. 2, June, pp. 50-68.
CHAKRAVORTI S. & ROSON, R. (2005): "Platform competition in Two-Sided
Markets: The Case of Payment Networks", working paper, May.
DAVID P.A. (1985): "Clio and the economics of QWERTY", American Economic
Review, vol. 75, no. 2, pp. 332-337.
EVANS D. (2003): "Some Empirical Aspects of Multi-sided Platform Industries",
Review of Network Economics, vol. 2, issue 3, September, pp. 191-209.
FARRELL J. & SALONER G. (1985): "Standardization, Compatibility and Innovation",
RAND Journal of Economics, vol. 16, pp. 70-83.
GANS Joshua & KING Stephen (2003): "The Neutrality of Interchange Fees in
Payment Systems", Topics in Economic Analysis & Policy, Berkeley Electronic Press,
vol. 3, no. 1, pp. 1069-1069.
GASTON-BRETON T. & KAPFERER P. (2004): Carte bleue: la petite carte qui change la vie, Édition Le Cherche Midi.
GUTHRIE G. & WRIGHT J. (2003): "Competing Payment Schemes", working paper
no. 245, Department of Economics, University of Auckland.
HUNT R. (2003): "An introduction to the Economics of Payment Card Networks",
Review of Network Economics, vol. 2, no. 2, June, pp. 80-96.
KAISER U. & WRIGHT J. (2006): "Price structure in two-sided markets: Evidence
from the magazine industry", International Journal of Industrial Organization, vol. 24,
pp. 1-28.
KATZ M. & SHAPIRO C. (1985): "Network Externalities, Competition and
Compatibility", American Economic Review, vol. 75 (3), pp. 424-440.

M. VERDIER

59

MANENTI F. & SOMMA E. (2002): "Plastic Clashes: Competition among Closed and
Open Systems in the Credit Card Industry", working paper.
RYSMAN M.:
- (2004): "An empirical analysis of Payment Card Usage", working paper.
- (2004): "Competition between networks: a study of the market for yellow pages",
Review of Economic Studies, vol. 71(2), pp. 483-512.
ROCHET J-C. (2003): "The Theory of Interchange Fees: A synthesis of recent contributions", Review of Network Economics, vol. 2, no. 2, June, pp. 97-124.
ROCHET J-C. & TIROLE J.:
- (2002): "Cooperation Among Competitors: The Economics of Payment Card
Associations", RAND Journal of Economics, vol. 33, no. 4, winter, pp. 549-570.
- (2003a): "An Economic Analysis of the Determination of Interchange Fees in Payment Card Systems", Review of Network Economics, vol. 2, no. 2, June, pp. 69-79.
- (2003b): "Platform competition in two-sided markets", Journal of the European Economic Association, vol. 1, no. 4, June, pp. 990-1029.
- (2004): "Two-sided markets: an overview", IDEI-CEPR conference.
ROSON R. (2005): "Two-sided markets: a tentative survey", Review of Network
Economics, vol. 4, Issue 2, June, pp. 142-160.
SCHMALENSEE R. (2002): "Payment systems and Interchange Fees", Journal of
Industrial Economics, vol. 50, no. 2 (June), pp. 103-122.
SCHIFF A. (2003): "Open and Closed systems of Two-sided Networks", Information
Economics and Policy, vol. 15, pp. 425-442.
TOBIN J. (1956): "The Interest Elasticity of the Transactions Demand for Cash",
Review of Economics and Statistics, vol. 38, no. 3, pp. 241-247.
WEINER S.E. & WRIGHT J. (2005): "Interchange Fees in Various Countries:
Developments and Determinants", working paper 05-01, Federal Reserve Bank of
Kansas City, September.
WRIGHT Julian:
- (2002): "Optimal Payment Card Systems," European Economic Review, vol. 47, no.
4, August, pp. 587-612.
- (2004): "Determinants of Optimal Interchange Fees in Payment Systems", Journal
of Industrial Economics, vol. 52, no. 1, March, pp. 1-26.

Mobile Call Termination: a Tale of Two-Sided Markets (*)

Tommaso VALLETTI
Imperial College London, University of Rome "Tor Vergata" and CEPR

Abstract: Mobile telephony is described as a "two-sided" market where customers are seen as senders and receivers of communications that are mutually beneficial both to callers and receivers. This has implications in terms of market definition and market power. The economics of mobile call termination is discussed in this context.
Key words: mobile telephony, market definition and call termination.

Market definition in mobile telephony

The standard test adopted by most anti-trust and regulatory authorities to
identify markets is the so-called SSNIP test (sometimes also referred to as
the "hypothetical monopolist test"). This is designed to explore the
consequences of a (hypothetical) Small but Significant and Non-transitory
Increase in Price on the profitability of the (hypothetical) firm that initiates it.
At the heart of this test lies the question of what might make such a price
rise unsustainable. Some consumers may switch to substitute products
("demand-side substitutability") and some firms operating "near" to the
(narrowly defined) candidate market may alter their plans and supply similar
products ("supply-side substitutability"). If there are close demand- or
supply-side substitutes, then the price increase initiated by the hypothetical
monopolist will lead to a large reduction in its sales, and its profits will, as a
result, fall.
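The break-even logic behind this question can be sketched numerically. The following minimal Python sketch of a standard critical-loss check uses purely illustrative numbers: the margin, the size of the price rise and the candidate elasticities are assumptions, not estimates for any actual market:

```python
# Minimal critical-loss sketch of the SSNIP logic (all numbers are
# illustrative assumptions). A price rise of size t on a percentage
# margin m breaks even when the fraction of sales lost equals t / (t + m);
# losing more than that makes the hypothetical monopolist worse off,
# suggesting the candidate market is too narrow.

def critical_loss(t: float, m: float) -> float:
    """Fraction of sales at which a price rise of t just breaks even."""
    return t / (t + m)

def actual_loss(t: float, elasticity: float) -> float:
    """Sales lost under a constant own-price elasticity of demand."""
    return elasticity * t

t, m = 0.10, 0.40          # hypothetical 10% SSNIP on a 40% margin
for eps in (1.0, 3.0):     # candidate own-price elasticities
    lost = actual_loss(t, eps)
    verdict = ("unprofitable -> widen the market"
               if lost > critical_loss(t, m)
               else "profitable -> candidate market may stand")
    print(f"elasticity {eps}: sales lost {lost:.0%} "
          f"vs critical loss {critical_loss(t, m):.0%} ({verdict})")
```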

(*) Parts of this paper are based on work carried out by the author for the European
Commission. The comments from an anonymous referee are gratefully acknowledged. The
opinions expressed in this paper are the sole responsibility of the author.


A number of difficulties arise in identifying market boundaries, including
deciding how to treat firms that operate in many related markets, dealing
with intermediate goods markets, applying the test to markets that are
already monopolised (known as the "cellophane fallacy"), and determining
what is "small but significant".
All of these difficulties occur when applying these general principles to
mobile telephony markets 1. Customers buy mobile telephones for many
reasons. Customer profiles are extremely heterogeneous in terms of calling
patterns, needs, mobility, etc., which is partly reflected in the vast number of
tariffs on offer in these markets. The needs of a certain customer are
themselves not immutable, and will depend on factors such as
circumstances and locations. In principle, therefore, if one defined an
antitrust market in a very narrow way and purely on the basis of
substitutability at a given point in time, this exercise would result in a
proliferation of very narrowly defined markets. At the same time, however, a
mobile operator is a provider of different products and services that satisfy
these various needs. In other words, a mobile operator can be seen as a
multi-product firm. The fact that a firm manufactures or sells more than one
product may suggest, but by no means implies, that there should be a much
bigger market for that firm's total output. According to this view, the relevant
market should include a "cluster" of products, where non-substitutes should
be included in the same market.
The concept of cluster markets clearly applies to most services in mobile
telephony. Customers typically want one handset and one SIM card to
handle almost all their calls, SMS, etc. Even if one accepts the broader
concept of a cluster market, an extra layer of complications arises in the
context of mobile telephony because benefits and costs associated with calls
generally do not accrue to the same party. When a conversation happens,
there must be both "senders" and "receivers" involved, which are, by
definition, different individuals. Clearly, no one would ever want to place a
call if that call is known not to be received or ever retrieved. Even more
obviously, one cannot receive a call if this call has not been made! As
obvious as this may sound, it is a healthy reminder of the type of economic
considerations that must be taken into account when defining markets if fictional market definitions are to be avoided.

1 This paper deals with mobile telephony only, although most of its arguments are also valid
more generally, including for a deregulated fixed telephony sector. I prefer to stick to the mobile
case to avoid crucial factors specific to fixed telephony, such as extremely large incumbency
advantages, public ownership or universal service obligations.


As an example, it is a common and useful practice to think of a retail
market for call origination, although it is clear that this market cannot exist in
isolation without termination. When the SSNIP test is applied to the market
for call origination, the analysis should try to assess how the call originator
would respond to an increase in price, looking for possible substitute
services etc. This analysis presupposes that the same change in demand for
calls originated by the sender will also occur on the receiving side, i.e., every
call is accepted by the receiver. This is, indeed, a very likely situation since
receivers will not pay for the call in most cases. According to this line of
analysis, the retail market for call origination is de facto extended to include
termination as a necessary input for an originated call to be completed.
Termination is an input that is not directly bought by the call originator, but is
needed to satisfy the call originator's needs.
According to this view, there is a retail market for call origination, but not
a retail market for termination, which is a derived demand instead. Call
origination and call termination are in a vertical relationship where the
provider of call origination takes as given the input price for termination, and
then charges a mark-up depending on the price elasticity of outgoing calls. A
market analysis could therefore find that the retail market for call origination
is competitive, but the input market for termination is monopolised (and vice versa). The distinction between call origination at the retail level and call
termination at the wholesale level is, to a large extent, fictitious and merely
reflects common billing practices, rather than the underlying economic
vertical relationship in the production of a (completed) telephone call (see
box 1 below).
Box 1 - Termination: retail or wholesale market?
Imagine customer A calls customer B and pays pAB to A's provider. A's provider then pays a termination charge tB to B's provider. The competitive environment that leads to the setting of pAB (at the retail level) may have nothing to do with the competitive environment that leads to the setting of tB (at the wholesale level). Alternatively, imagine a situation where there is no inter-carrier compensation, and customer A pays directly pA to provider A for call origination and pB to provider B for call termination. In the eyes of customer A, the two situations are formally equivalent if, for instance, pA = pAB − tB and pB = tB. Once again, the competitive conditions that lead to the setting of pA and pB (both at the retail level under this alternative pricing arrangement) could be very different.
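To make the equivalence concrete with purely illustrative numbers: suppose pAB = 20 cents per minute and tB = 8 cents. Under the alternative arrangement, customer A would pay pA = 20 − 8 = 12 cents to provider A and pB = 8 cents to provider B, a total of 12 + 8 = 20 cents per minute in both cases; customer A's outlay is identical, even though the competitive conditions determining each component may be entirely different.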

The previous example is coherent, but incomplete. In fact, we argued that
a market for call origination can only exist if there is also a market for
termination. Implicitly in the previous lines of argument, we assumed that
termination was needed only by the sender. However, if a call is accepted by
a receiver, then this implies that there is also a demand for termination of


calls on the side of the receiver! If one then applies the SSNIP test to this
market, the exercise looks less straightforward. Which price should one
increase? And who pays for it? The response of a customer to an increase
in the price of termination, and therefore the profitability of the (hypothetical)
firm that initiates it, will differ depending on whether the party that bears its cost is the receiver or the sender.
A less formal market definition would at this stage consider the whole
economic environment, starting from the fact that customers do not demand
calls per se; rather, they want to communicate, for example to exchange
information. Calls sent and received are just inputs in this exchange of
information. According to this view, a mobile operator is a provider of a
"platform" that allows the exchange of communications between these two
different sides, the senders and the receivers. In this sense, a mobile firm
should be analysed in the context of the "two-sided markets" framework,
which has recently received much attention both in academic literature and
in court cases.

Two-sided platforms

The term "two-sided platforms" (2SPs) refers to products and services that
must be used by two (or more) groups of customers to be of value to them.
The "platform" enables interactions between the different "sides", trying to
get the two sides "on board", and charging each side.
2SPs are the subject of a recent body of academic literature in economics
that usually refers to them as "two-sided markets" 2. Since the term "market"
is used in a different way for the purposes of antitrust policy, this paper
adopts the more neutral 2SP terminology 3. There is no unequivocal
definition of 2SPs in existing literature. Rochet and Tirole (2003) proposed
the following definition: "A market is two-sided if the platform can affect the
volume of transactions by charging one side of the market more and
reducing the price on the other side by an equal amount; in other words, the
price structure matters".

2 See ROCHET & TIROLE (2003), EVANS (2003), WRIGHT (2004), ARMSTRONG (2006).
3 See EVANS & NOEL (2005).


The previous definition draws an important distinction between price
structure and price level. This makes 2SPs different from markets
encountered in textbook economics, where the price structure is typically
neutral. For instance, in competitive markets it is irrelevant who is charged
VAT, whether this be the producer or the consumer, since only the price
level matters for the level of transactions between the two sides (buyers and
sellers). In 2SPs, on the other hand, the price structure that the two sides
are charged has an impact on allocation. If the two sides cannot internalise
externalities between them, then the Coase theorem does not apply and
market failures can arise. The role of the platform can therefore be that of an
intermediary, finding the right pricing structure between the two sides and
allowing trade to take place.
An alternative definition immediately follows from the previous
discussion. A 2SP arises in a situation where: (a) there are two (or more)
sides, with (uninternalised) inter-group network externalities, and (b)
platforms have the ability to price discriminate between the two sides.
Definitions aside, it is helpful to give a few examples of 2SPs. EVANS
(2003) introduces a useful taxonomy of 2SPs:
- Exchanges, such as security exchanges, auction houses, brokers, and various matchmaking activities (for example, employment agencies and real estate agents). Exchanges help buyers and sellers search for feasible contracts. The externality here arises from the fact that having a large number of participants on both sides increases the probability that participants will find a match.
- Advertising-supported media, such as newspapers, directories, television, and web portals. Media provide content that attracts audiences. Audiences, in turn, are used to attract advertisers. There are two kinds of externalities between the two sides. Audiences exert a positive externality on advertisers, as advertisers value platforms that have more viewers. On the contrary, advertisers exert a negative effect on viewers, at least to the extent that commercials interrupt a programme, or make it more difficult to consume content.
- Transaction systems, such as credit cards. These are similar to
exchanges in some respects, as cardholders and merchants are more likely
to adopt a particular credit card the greater the number of adopters of the
same card on the other side. They also have some peculiar features, namely
card associations are cooperative 2SPs: for a transaction to be completed
there must be an agreement as to the division of profits and the allocation of


various risks between the entity that services the cardholder and the entity
that services the merchant.
- Software platforms, such as PCs, video games and music players. The
two sides here are represented by users who want to run software
applications and developers who write applications and sell them to users.
Are 2SPs relevant for telephony? Clearly, any network operator is a
multi-product firm. However, the mere fact that multiple product or "cluster"
markets are involved does not imply that a 2SP is implicated. If the various
products are bought and consumed by the same customer, there is no 2SP
involved since there are no inter-group network externalities. Therefore,
services such as access and call origination can be analysed, to a large
degree, with standard antitrust tools that do not need to be extended to the
analysis of 2SPs.
There are situations where 2SPs can be applied to telephony too. An
important case in point is call termination. A network operator, in this case,
falls in the category of "exchanges" introduced above, as it allows "senders"
and "receivers" to complete their match, i.e., communicate. There is an
externality involved as senders can communicate more the higher the
number of receivers they can contact, and receivers are likely to benefit from
receiving many calls the larger the number of senders there are 4. More
generally, termination revenues form an integral part of the way an operator
sets prices for both termination and outgoing services. These can be distinct
services, but have close inter-relationships since the demand and price for one service affect the other.
Although we will analyse call termination markets only in a later section,
we anticipate here that the exercise of market power when setting
termination rates is likely to differ when calls are sent and received "on-net"
(i.e., senders and receivers both subscribe to the same network operator)
and when they are "off-net" (i.e., senders and receivers belong to different networks).
In the former case, the "platform" is likely to internalise externalities
between the two sides, and the presence of competition limits the ability of

4 It could be argued that mobile users belong to the same group. One should therefore speak of
"intra-group externalities", rather than "inter-group" externalities typical of 2SPs. However,
please note that my description of the problem relies on having "senders" and "receivers", which
represent the two groups that need a platform to conduct an exchange of communications. In
this sense, I would argue that the definition of a 2SP applies to mobile telephony literally.


the network operator to raise termination prices. In the latter case, the
network operator will not internalise the effects on senders when setting the
termination rate and market failure is likely to arise. A specific example of
such market failure is the case of fixed-to-mobile (F2M) calls 5.

Two-sided platforms: market definition and market power


When applying market definitions to 2SPs one has to be particularly
careful to avoid mechanical applications of commonly used concepts due to
the possibly intricate relationship between the various sides. When dealing
with a 2SP, it is essential to evaluate if network effects (i.e., links between
the two sides) are: (a) present, and (b) limit the extent to which a price
increase on either side is profitable. This exercise is tricky as it mixes
several factors: which price should be increased? Who pays for this
increase? What is the starting level for the price increase? Should a firm readjust its entire structure of prices when only one price changes?
Take, as an example, the case of F2M calls and mobile access. Are they
complements or substitutes? The answer to this type of question is of some
use in "normal" markets, as substitute goods are typically presumed to
belong to the same relevant market. Imagine first an increase in the price of
mobile access. Demand for mobile access would go down as a direct
consequence of the price increase. As there would then be fewer mobile
customers to call, demand for F2M calls would also fall. As seen from this
perspective, F2M calls and mobile access seem to be complements. Now
imagine increasing the price of mobile termination, starting from the
termination cost. Demand for F2M calls would decrease because fixed users
would have to pay more to call mobile phones. However, the increase in the
price of mobile termination has also introduced some termination revenues
that did not exist when termination was set at its cost. If there is some
competition for mobile users, these termination profits should, at least to
some extent, be passed on to mobile users. A likely scenario would be for
the mobile network operator to push down the price of mobile access. This,
in turn, should boost demand for mobile access. From this point of view, an
increase in the price of F2M termination increases demand for mobile
access, while F2M calls and mobile access now seem to be substitutes!
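The ambiguity in this thought experiment can be made concrete with a deliberately stylised sketch. Everything below, the linear demands, the pass-through rate and all parameter values, is an illustrative assumption rather than a calibrated model of any actual market:

```python
# Stylised "waterbed" sketch (all functional forms and numbers assumed).
# Raising the termination charge t above cost c cuts F2M calls per
# subscriber, but pass-through of the termination profit lowers the
# access price and raises the number of mobile subscribers.

c = 0.05                  # termination cost per minute (assumed)
pass_through = 0.8        # share of termination profit competed away (assumed)

def f2m_minutes(t):
    """Linear F2M call demand per subscriber (assumed)."""
    return max(0.0, 100 - 400 * t)

def subscribers(t):
    """Linear subscription demand after pass-through (assumed)."""
    margin = (t - c) * f2m_minutes(t)       # termination profit per subscriber
    access_price = 10 - pass_through * margin
    return max(0.0, 50 - 2 * access_price)

for t in (0.05, 0.10, 0.15):
    print(f"t={t:.2f}: F2M minutes/sub={f2m_minutes(t):5.1f}, "
          f"subscribers={subscribers(t):5.1f}")
```

In this sketch, moving t from cost (0.05) to 0.15 cuts F2M minutes per subscriber in half while the number of subscribers rises, which is exactly why the complements-or-substitutes question has no mechanical answer here.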

5 The theory of two-sided markets received some prominence in the recent case on mobile
termination rates in New Zealand; see NZ Commerce Commission (2005).


It is beyond the scope of this paper to conduct a full analysis of the
termination problem here 6. Our main point is that questions, such as
whether F2M calls and mobile access are complements or substitutes, do
not make much sense when standard notions of substitutability and complementarity are mechanically applied to highly specific market realities, such
as 2SPs. As we have already seen, a mobile operator is a "platform" that
provides access among other things (and the corresponding price is paid by
mobile consumers), but also enables the termination of calls initiated by
fixed users. The price for termination is indirectly paid by fixed users and,
typically, not by mobile users. These are the main features that have to be
taken into account when conducting an economic analysis of the termination
problem.
Another important caveat, when defining markets in the presence of
2SPs, applies to the use of the SSNIP test. Firstly, when a price is
increased, the corresponding demand will decrease, as in standard markets,
but there may also be additional effects arising from the other side that may,
or may not, decrease the profitability of the price increase, according to the
type of inter-group network externalities involved. For instance, in an
exchange such as a matchmaker, where one side benefits from the
presence of high numbers from the other side, imagine the platform
increases the price it charges to one particular side. This will reduce the
number of buyers from this side, making it less appealing for the other side
to join the platform, further reducing demand from the original side. In this
case there is a "multiplier" effect, as a price increase reduces demand more
than in standard one-sided markets. In the case of advertising-supported
media, on the other hand, imagine the platform increases the price it
charges one side (advertisers). This should decrease the number of
commercials bought by advertisers, making it more appealing for the other
side (viewers) to join the platform 7. Secondly, it is not clear where the
hypothetical price increase should originate from. The cost of a product is
typically not an efficient benchmark in the presence of 2SPs. Perhaps more
disappointingly, even the price level set in a "competitive market" is not
efficient. This should not come as a surprise since it is well known in
economics that competitive markets "work", i.e., they are efficient and any intervention could just make things worse, only in the absence of externalities. This

6 See WRIGHT (2002), and VALLETTI & HOUPIS (2005).


7 I assume that other variables such as programme quality or content are not affected. The
important point here is that it is easy to construct situations where the "multiplier" effect can go
either way.


fundamental result can be rephrased by saying that, in the presence of
externalities, even competitive markets do not work and some appropriate
intervention can increase the welfare of society.
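A minimal numerical sketch may help fix the two cases discussed above. The linear demand system and all parameters are illustrative assumptions; the only point is that cross-side feedback can either amplify or dampen the demand response to a price increase relative to the one-sided benchmark:

```python
# Fixed-point sketch of the cross-side "multiplier" (assumed linear
# demands; every number is illustrative). Side 1's participation depends
# on side 2's, and vice versa, through the externality parameters b12, b21.

def participation(p1, p2, b12, b21, iters=200):
    n1 = n2 = 1.0
    for _ in range(iters):                    # iterate to the fixed point
        n1 = max(0.0, 1.0 - p1 + b12 * n2)    # side 1 values side 2
        n2 = max(0.0, 1.0 - p2 + b21 * n1)    # side 2 values side 1
    return n1, n2

cases = {"matchmaker (+,+)": (0.5, 0.5),        # both sides like the other
         "ad-supported media (+,-)": (0.5, -0.5)}  # viewers dislike ads
for label, (b12, b21) in cases.items():
    base, _ = participation(0.5, 0.5, b12, b21)
    after, _ = participation(0.6, 0.5, b12, b21)   # a 0.1 price rise on side 1
    print(f"{label}: side-1 demand falls by {base - after:.3f} "
          f"(one-sided benchmark: 0.100)")
```

With mutually positive externalities the fall (0.133 here) exceeds the one-sided benchmark, the "multiplier" effect; with the mixed signs of advertising-supported media it is dampened (0.080 here), so a mechanical SSNIP reading would err in opposite directions in the two cases.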
It therefore seems that trying to define sharp boundaries can be a risky
exercise with 2SPs. Since from a legal standpoint, in practice, market
definition requires that a product is found to be either in the market or
outside it, a possible reasonable compromise would be to look at standard
(possibly narrow) market definition to start with. Then, the impact on
competition in "affected" markets (therefore, extending the analysis beyond
the original market definition) could be considered at a later stage when
conducting a full economic analysis, eventually leading to the imposition of
appropriate remedies. Alternatively, one could start with the whole set of products under consideration, avoiding the exercise of market definition and directly
delving into the economic problem at stake. For an economist, this second
approach is bound to give the same answer (and therefore the same set of
possible remedies) as the first approach. However, it is not clear if, from a
legal standpoint, these two approaches are also identical. For instance, SMP (significant market power)
may be found over the narrowly defined market, which would imply the
introduction of some remedies, "adjusted" for the two-sidedness feature of
the market investigated. However, SMP may not be found if one started from
the whole set of interlinked products, where SMP is linked to the presence of
some extra rent that the firm can sustain overall. An investigation might therefore never start, even though "welfare-enhancing" regulation would also be available in this case.
While there is not much disagreement on economic analysis, there may
be some divergence between the legal and economic approach to the main
questions addressed. This is a fundamental and controversial point that
harks back to the meaning of SMP and the ultimate objective of regulatory
and antitrust intervention. Competition law can maintain competition, but
typically cannot create it or cure defects or market failures. It also cannot
impose very precise obligations. On the contrary, regulation usually has
aims that are wider than those of competition law, and has methods that go
beyond those of competition law, because regulators can impose additional
or new duties necessary to promote the objectives specified. In the specific
context of 2SPs, it therefore follows that competition law may be unable
to deal with inefficient pricing structures arising from competition in two-sided
markets. This is because competition law assumes that firms can unilaterally
desist from the conduct that is undesirable. Fines and other anti-trust
sanctions rely on firms being able to take unilateral action to comply and to
act competitively. However, in 2SPs a firm cannot unilaterally lower a


particular price that is deemed to be "wrong" (for example, too high) if the
other competitors do not - that would result in losses relative to the rivals.
The threat of fines thus does not work in this context, because no individual
firm can comply. The consequence of this reasoning is that any intervention
has to ensure collective compliance - either by all firms having the same
unilateral incentives at the same time (for example, by setting up a position
in which the authority effectively requires them to "collude"!) or by their
conduct being subject to some exogenous constraint (which is another word
for regulation).

Conclusions on 2SPs
2SPs involve inter-group network externalities and are relevant in many
industries, including telecommunications. As a result of these externalities,
socially-optimal prices in 2SPs typically depend in some intricate way on
price elasticities of demand, inter-group network effects and costs. This is a
complex exercise that can be conducted by taking into account market
realities and avoiding mechanical applications of standard definitions and
tools.
Another result of externalities is that socially-optimal prices in 2SPs,
generally, are not purely cost-based. By understanding the nature of the
problem, it is therefore easy to avoid possible fallacies. For instance,
incremental cost pricing is typically not efficient with 2SPs. High individual
mark-ups may also not indicate standard market power. A more balanced
pricing structure (interpreted as prices being more in line with costs) is not
necessarily produced by fiercer competition. Moreover, the removal of
alleged cross-subsidies, such as decreasing one price (A) and increasing
another price (B), does not necessarily benefit the side (A) that pays a price
above cost. This is because, by increasing the other price (B), some B users
may drop off, thus making the product less valuable to A users as well.
Firms with the features of a 2SP are correct to stress the fact that these are
special markets, which policy-makers consequently need to be very careful
with. We agree with this point and always advocate a full and appropriate
economic analysis of these markets. However, we conclude by recalling
that, even if a two-sided market is assumed to be perfectly competitive,
the market does not work. This is in stark contrast with standard one-sided
markets: when these markets are competitive, they are also efficient and no
regulator should interfere with their working. In two-sided markets, on the


other hand, privately chosen prices, even when ideally set by competing
firms, will differ from socially-optimal prices. An appropriate intervention can
increase consumer and social welfare. 2SPs should therefore be subject to
more, rather than less, regulatory oversight.

Incoming and outgoing calls

Let me now return to market definition in mobile telephony. People buy
mobile phones to have access, that is, what they buy is the ability to make
and receive different kinds of calls while travelling in different places. Access
typically involves the purchase of a handset and a SIM card. After having
secured access, customers then use their phones, that is, they do make and
receive different kinds of calls while travelling in different places. Access,
outgoing calls, and incoming calls are the three general groups of services
that represent the starting point of the analysis of market definition in mobile
telephony. One consequently needs to understand how a customer would
react when a hypothetical monopolist increases the price of one of these
three services.
This apparently simple exercise has to be done while taking into account
relevant features of the economic environment under consideration. A
crucial aspect in the mobile telephony industry is that, in the absence of any
intervention, the party making and paying for the call is typically the sender
and not the receiver of the call. This arrangement, known as CPP ("Calling
Party Pays") is adopted in all countries in the EU. Under CPP, the service is
initiated by, and paid for by, the caller to the mobile phone, not the mobile
phone owner. A SSNIP test conducted on the price of access or outgoing
calls is therefore a very different exercise to a hypothetical increase in the
price of incoming calls, since, under the current pricing arrangements, in the
former case it is the telephone owner that pays directly for the price
increase, while in the latter there is no direct payment involved, although the
receiver may indirectly suffer from receiving fewer calls. As a result of this
fundamental difference, the analysis of access and outgoing calls has to be
kept separate from the analysis of incoming calls. This paper does not
consider access and outgoing calls here, as the analysis can be conducted,
to a large extent, with standard tools (such as "cluster" markets), but focuses
instead only on the market for "incoming" calls.


Incoming calls
Mobile customers want to receive calls. Under the CPP system, these
calls are initiated and paid for by other customers. Given this peculiar feature,
the exercise of market definition should be conducted looking at the
behaviour of both the sender and the receiver. Let us start with the sender
first. The sender has a demand for calls to a particular person owning a
mobile phone. Calls to mobile phones do not have strong demand
substitutes, as senders typically are willing to pay a premium if they need to
contact a person without knowing her exact location. If the price of a call to a
mobile network goes up, a caller would probably reduce the number and/or
length of calls, according to her demand elasticity, but it is very unlikely that
the caller can find good alternative substitutes. A call is typically placed to a
mobile user when the caller wants to be sure to contact and interact in real
time with the called party, for which there is no effective substitute. The
sender therefore has very limited ability to find substitutes if the price of calls
to mobile goes up because of a price increase initiated by the mobile
operator that terminates the call 8.
The behaviour of senders therefore does not impose any limit on the ability
of the mobile firm to increase the price of incoming calls. However, this
analysis is incomplete since constraints on increases in the price of incoming
calls can also arise if receivers themselves react to an increase in the price
of a call to a mobile. For instance, if the receiver cares about the satisfaction
of the sender, then the price of calls to mobile telephones will be
internalised. This case is sometimes referred to as "closed user
groups" and can correspond to families that behave under a single budget
constraint, or some business users who provide different sorts of telephony
services to their employees. These can constitute a large part of the
customer base of a mobile operator. Mobile operators, however, have the
ability to price discriminate among different groups, for instance by offering
discounts to large business users, hence their presence does not seem to
restrict overall price levels for other customers.

8 Continuing with the example presented in box 1, where customer A is the caller and customer
B is the receiver, this price increase could be paid directly by the sender if the price pB for
termination is paid directly by customer A to B's provider at the retail level. If, instead, A's
provider bills customer A and then pays a termination charge to B's provider, the price increase
would be initiated at the wholesale level (tB) and have repercussions at the retail level (pAB). In
this latter case (the most common situation in practice), the demand for B's provider is a derived
(input) demand to be analyzed at the wholesale level. In both cases, however, customer A has
a limited ability to find a substitute means of contacting customer B.


The receiver may still limit the provider's ability to charge others high
prices. In fact, if the price of incoming calls increases, the number of calls
received will decrease, which has a negative effect on the satisfaction of the
receiver, since receiving calls is clearly one of the incentives of subscribing
to a mobile telephone in the first instance. However, this is not necessarily a disadvantage that receivers can easily see or react to. It is
documented by several NRAs (for example, Ofcom) that receivers'
awareness of the price of calls to mobile telephones is low and that the price
of incoming calls is not considered by subscribers to be an important factor
in their choice of mobile operator; other factors are more influential. The
mobile owner cares most about the prices s/he has to pay to subscribe to
and place calls with a mobile operator, but in most cases will not take into
account the prices paid by other callers to contact him/her. In fact, mobile
telephone owners may enjoy a higher level of overall satisfaction if an
increase in the price of incoming calls, despite reducing the number of
incoming calls, induces the mobile operator to decrease other prices directly
paid by subscribers.
When assessing what type of dominant behaviour might arise in the
market for incoming calls, it is useful to distinguish between the following
three types of mobile incoming calls:
- calls to mobile (on-net),
- calls to mobile (off-net),
- calls to mobile (from other non-mobile networks, mostly F2M calls in
practice).
In principle, given that a mobile firm is by definition the only firm that can
terminate calls for its own customers, SMP in the form of single dominance
should arise, no matter what type of call is under consideration. However, as
mentioned repeatedly above, in this market both a sender and a receiver are
involved and their identity cannot be neglected.
In the case of on-net calls to mobile, if the mobile firm tried to increase
the price of the termination end of the call, the sender that would suffer the
price increase would be one of its own customers. An increase in termination
price would make the overall package offered by the firm to its subscribers
less appealing, and the firm would lose customers as a result. Competitive
forces do act as a constraint on the firm's behaviour, hence there is unlikely to be any abuse of market power in this case. In terms of the analogy with
two-sided markets, in this case the mobile firm is a platform that perfectly
"internalises" transactions that only affect its customers.


Contrary to on-net calls, single dominance is likely to exist for the other
two kinds of incoming calls, mobile off-net calls and calls to mobile from
other networks (F2M calls). In these two instances, the sending party that
pays the call is not one of the firm's customers and the firm's receiving
customers would not react to a price increase, which gives the mobile firm
the ability to set the price at monopoly levels. From the point of view of
single dominance, these two types of calls are therefore quite similar.
There is nonetheless one possible important difference between these
two types of incoming calls to mobile from other customers. The difference
lies in the strategic environment. Off-net calls are charged to customers
belonging to a rival mobile network, while there is no strategic interaction
between a mobile firm and a fixed firm, as these are to a large extent
separate markets.
As customers buy mobile phones with the purpose of receiving calls from
other customers, a firm might be tempted to increase its off-net termination
price in order to distort competition in the market. This incentive exists, on
top of the termination monopolisation effect, only for mobile off-net calls. For
instance, a mobile firm could set a high off-net termination charge, so that
the overall off-net price paid by rival customers is high. Customers would then be willing to join a bigger network: on-net calls, to the extent that they are cheaper than off-net calls, imply that customers on a bigger network would receive relatively
more incoming calls.
What we have described so far can be said about the price of incoming calls in general, without distinguishing whether these prices are set at the "wholesale" level as termination charges or at the "retail" level charging
senders directly (see box 1 again for this analogy). There is, however, a
possible main difference with the "retail" market analysis of incoming calls. If
the sending party were billed directly by the receiving operator, it seems
natural that the termination price is set directly by the receiving network, thus
the sending customer has no bargaining power. Instead, at the wholesale
level, the termination price is more likely to be negotiated between the
sending and the receiving network. Countervailing buyer power (i.e.,
bargaining, negotiations) should therefore be taken into account when
analysing the wholesale market for incoming calls in order to determine the
presence of SMP.
In particular, a bargaining model seems quite appropriate to an analysis
of the market for "off-net" M2M calls, as this is a bilateral problem of "two-way" interconnection, where two wholesale prices have to be negotiated,


one in each direction. One network, when negotiating the wholesale price for
sending calls to the rival network, can always use its own wholesale price for
receiving calls from the rival as an effective threat in the bargaining game. In
this context, there are different sets of results from the literature 9:
- Bilateral wholesale negotiations can get rid of inefficiencies, given the
reciprocal nature of bargaining. This is true particularly for negotiations
among symmetrically-placed networks.
- Bilateral negotiations may be used to affect the intensity of
competition at the retail level. The nature of collusion may be different:
- Collusion may happen in a "static" framework by setting high
termination rates because of a "raise-each-other's-cost" effect 10. This
result holds true only under particular circumstances, namely retail prices
should be linear (which may be applicable to pre-paid cards), while it
does not apply under more sophisticated retail pricing structures (two-part tariffs, for example, monthly rental plus price per minute of usage).
- Collusion may also happen in a more standard "dynamic" framework,
where networks repeatedly interact with each other. The role of
wholesale termination charges may be one of giving a "focal" reference
point to set collusive retail prices. Please note that, in this case, joint
dominance should be established at the retail level, while the wholesale
level may facilitate reaching the collusive agreement.
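The deterrence mechanism at work in the "static" case can be stated compactly (the notation is ours and purely illustrative). Let a be the reciprocal per-minute termination charge, cT the cost of terminating a call, and qout and qin a network's outgoing and incoming off-net minutes. Under balanced traffic (qout = qin), termination payments net out whatever the level of a. A network that undercuts the collusive retail price, however, generates qout > qin and incurs a net interconnection loss of approximately

(a − cT) × (qout − qin),

which grows with a: a high reciprocal termination charge makes deviation costly and thereby helps sustain high retail prices.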
The applicability of a bargaining model to the determination of the
wholesale price for termination of F2M calls is more controversial 11. In a
bargaining model, two parties have to find a way to divide the surplus
created by finding a deal. This division is influenced by the outside options
that the parties have, i.e., what they could get if they threaten not to strike a

9 See ARMSTRONG (2002), LAFFONT & TIROLE (2000), VOGELSANG (2003), CAMBINI &
VALLETTI (2005).
10 To see this, imagine what happens when operators charge customers collusive (monopoly)
retail prices. If mobile customers call each other with the same probability, the traffic is balanced
and an operator pays the rival the same amount for termination services that it receives from
the rival for similar services, independently of the value taken by the termination charge. This
can be an equilibrium only if no one has a unilateral incentive to deviate. If one firm deviates
from the monopoly retail charges by undercutting the rival, it induces its subscribers to call
more. Since part of the calls made is destined for the rival's network, the effect of a price cut is
to send out more calls than it receives, on net, from the rival. The resulting net outflow of calls
has an associated deficit that is particularly burdensome if the unit termination charge is high.
This will discourage under-pricing in the first place. Some conditions are necessary to produce
this outcome, for instance products should not be too homogeneous, otherwise the incentive to
undercut would have the additional benefit of increasing market share.
11 See BINMORE & HARBORD (2005), UK Competition Appeal Tribunal (2005).


deal. The "threat" points are not as natural as in the bilateral negotiation of
termination of M2M calls. In the case of F2M calls, the negotiated price is
only "one way", as the other way (M2F) is typically regulated. This
asymmetric treatment of M2F and F2M calls is a possible source of
distortion that must be noted.
This problem of "bargaining in the shadow of regulation" still has to be
clarified in full. However, some related aspects have received partial
answers. For instance, an argument put forward has been that, to have a
viable business, a small MNO must have an interconnection agreement with
the incumbent fixed-network operator. This argument mixes up incoming
calls and all other services. In fact, as a first cut, the bargaining problem
does not seem to be affected by the size of an MNO. The size of the MNO
affects the total surplus to be bargained over, not its division. This is
because, once MNOs have some subscribers, bargaining might occur over
calls destined for those customers, and therefore without substitution possibilities.
As a result, we can conclude that the existence of countervailing buyer
power over the setting of termination prices does not seem more likely for
small MNOs 12.

Conclusions

Practitioners and policy makers should not forget that the role of market
definition is to provide a basis on which regulators or anti-trust authorities
calculate important indicators such as market shares, etc., in making their
prima facie case. However, one should be very careful not to make too much
of market delineations. Market definition is not a substitute for a full analysis
of the likely competitive effects in a certain economic environment under
examination. The task of defining markets should not be confused with the
assessment of competitive effects and efficiencies. In practice, this means
that many subtle interactions that may be missed when defining markets as
a first cut can be taken into account at later stages, for example when
assessing market power and eventually imposing remedies.

12 In fact, there are theoretical arguments (and some empirical evidence) for supporting the
opposite result: smaller networks charge more for F2M termination than bigger networks in the
presence of consumer ignorance, mobile number portability, or non-discrimination requirements
for F2M calls. See GANS & KING (2000) and WRIGHT (2002).


References
ARMSTRONG M.:
- (2002): "The Theory of Access Pricing and Interconnection," in: M. Cave, S.
Majumdar & I. Vogelsang (Eds.), Handbook of Telecommunications Economics,
North Holland, Amsterdam.
- (2006): "Competition in Two-sided markets", RAND Journal of Economics.
BINMORE K. & D. HARBORD (2005): "Bargaining over fixed-to-mobile termination
rates: countervailing buyer power as a constraint on monopoly power", Journal of
Competition Law & Economics.
CAMBINI C. & T. VALLETTI (2005): "Information Exchange and Competition in
Communications Networks", CEPR Discussion Paper.
EVANS D. (2003): "The Antitrust Economics of Multi-Sided Platform Markets", Yale
Journal on Regulation.
EVANS D. & M. NOEL (2005): "Defining Antitrust Markets when Firms Operate Two-sided Platforms", Columbia Business Law Review.
GANS J. & S. KING (2000): "Mobile Competition, Customer Ignorance, and Fixed-to-mobile Call Prices", Information Economics & Policy.
LAFFONT J.J. & J. TIROLE (2000): Competition in Telecommunications, MIT Press,
Cambridge (MA).
ROCHET J.C. & J. TIROLE (2003): "Platform Competition in Two-sided Markets",
Journal of the European Economic Association.
VALLETTI T. & G. HOUPIS (2005): "Mobile Termination: What is the 'Right'
Charge?", Journal of Regulatory Economics.
VOGELSANG I. (2003): "Price Regulation of Access to Telecommunications
Networks", Journal of Economic Literature.
WRIGHT J.:
- (2002): "Access Pricing under Competition: an Application to Cellular Networks",
Journal of Industrial Economics.
- (2004): "One-sided Logic in Two-sided Markets", Review of Network Economics.

Impact of Mobile Usage on the Interpersonal Relations

AeRee KIM and Hitoshi MITOMO (*)
Waseda University, Tokyo, Japan

Abstract: Communication via mobile telephones is widespread in East Asian metropolises
such as Seoul, Taipei and Tokyo. In the last ten years, the number of mobile telephone
users has increased dramatically, with the younger generation in particular depending on
the services available via mobile telephones. This paper explores the relationship between
the voice and text messaging communications of these young consumers through their
mobile telephones and their interpersonal relations. It analyses how mobile telephone
usage affects relationships between respondents by comparing models of the cause-effect
relationship of several latent factors in different environments, namely dependency on
mobile telephone communication, perception of friendships, individual factors and IT
literacy. By applying a covariance structure analysis, the correlations between latent and
observable variables can be successfully visualized. The results show that mobile
telephones have little influence on the perception of relationships among the younger
generation, although somewhat different structures of interdependency exist in these
metropolitan areas.
Key words: mobile telephone calls, text messaging, Seoul, Taipei, Tokyo,
communication, younger generation, relationship and covariance structure analysis.

This paper explores the relationship between the calls and text
messages exchanged via mobile telephones owned by young people
in East Asia and their influence on perceptions of friendship with
communication partners. In the last ten years, the number of mobile
telephone users has increased dramatically. Mobile telephones have
become a popular medium of interpersonal communication. The younger
generation in particular depends heavily upon mobile telephone text
messaging, also known as SMS (short message service) and mobile e-mail
to keep in touch with one another, any time and any place. Indeed, the
mobile telephone has become an indispensable communication tool in the
daily lives of the younger generation.

(*) The authors are indebted to Philip Sugai, Miyuki Aoshima and Chiu Chia Hua for their
comments and assistance.


Since the advent of mobile telephone calls and text messaging,
communication between young people has been actively conducted via such
devices, and as a result, their relations with friends and acquaintances have
changed, heralding a new communication culture (ITO & OKABE, 2003).
While Tokyo's youth has been identified as one of the world's leading text-messaging populations, young people in other major metropolitan areas
such as Seoul and Taipei seem to be following similar trends. Moreover, the
usage of mobile telephones has conspicuously influenced the structure of
human relationships and methods of communication; with text messages,
picture messages and related technological developments influencing the
level and frequency of personalized interactions. Consequently, individual
perceptions of interpersonal relationships have been influenced by the
usage of these mobile devices, with a far-reaching impact on the way that
communication communities are developed and structured. Despite the
significant impact of the mobile platform, only a few published research
papers have focused on the influence of mobile telephone usage upon
society.
This paper consequently aims to show how the usage of mobile
telephone calls and text messaging affects relationships between
individuals. To achieve this goal, the structure of the relationship between
several observable variables and latent variables 1 is explored. The
differences and similarities between these relationships are also examined
across three East Asian cultures. The results of the paper show that,
although somewhat different patterns of interdependency among the factors
influencing relationships with friends and acquaintances exist in these
metropolitan areas, communication through mobile telephones has a
relatively low impact on the actual "depth" or "width" of relationships. This
implies that mobile telephones only weakly influence the behaviour of the
younger generation.
This paper opens with a discussion of patterns of mobile telephone
usage specific to young users and analyses primary data collected from
surveys in Seoul, Taipei and Tokyo. We then elaborate on how the usage of
mobile telephone influences perceptions of relationships among young
users. Finally, we present a structural model that describes the effect that
mobile telephone usage has upon these relationships within and across the
metropolitan areas surveyed.

1 Latent variables are conceptual variables that cannot be directly observed, but are rather
inferred from other observable, measurable variables.


Usage of mobile telephones by young people in Seoul, Taipei and Tokyo

Changes in communication
Today's younger generation uses mobile telephones as its primary
platform for communication. Use of such advanced communication
technologies has fostered the development of new types of friendships,
which are created and sustained through electronic connections like mobile
internet connections, SMS, e-mail, etc. Furthermore, mobile telephones
have changed the way friends and acquaintances communicate, as well as
the perceptions of individuals regarding these interpersonal relationships.
Recent research has shown that participating in communication is often
more important than its content (SOUKUP et al., 2001). Some research suggests that there are indeed significant opportunities for establishing
better and closer relationships among friends through the use of mobile
telephones (VANCLEAR, 1991).
On the other hand, MATSUDA (2001) suggests that the mobile platform
has significantly lowered the quality of communication because "consumer
related communication" and "brief messaging (just to kill time)" have been
identified as the main uses of mobile-related communications. Yet even
though the content of these mobile text messages has been labelled as
trivial or superficial, small talk itself could be considered an enabler of
interpersonal communications (KOPOMAA, 2000).

High mobile telephone penetration rates


Japan, Korea and Taiwan are home to large populations of mobile users.
The number of mobile telephone users in Japan is well over 85 million,
which represented approximately 65% of the total population in December
2004 (Telecommunications Carrier Association, 2004). In Korea, there are
over 32 million users, with a subscription rate of over 75% of the population
(Ministry of Information and Communication, Republic of Korea, November
2004). In Taiwan, the number of mobile telephones was well over 22 million
in 2003 (Directorate General of Telecommunications, Ministry of Transportation and Communications), representing an adoption level of
106% and implying that some individuals possess more than one handset.


Recent research has shown that mobile telephones have become
inseparable from the daily lives of Japanese consumers (SUGAI, 2005). The
diffusion levels stated above suggest that this is also true for subscribers in
Taiwan and Korea.

Characteristics of mobile telephone use


Mobility and portability are the mobile telephone's main advantages over
other communications devices that are restricted by either time or place.
Moreover, the mobile telephone is no longer merely a tool for voice
communication and is turning into a more general communication device
(WEILENMAN & LARSSON, 2001), with young people being the most
advanced users of the mobile platform's text messaging and data services.
One of the reasons for the rapid adoption and widespread usage of
mobile text messaging by younger users is the relatively low cost of sending
messages versus other communications media. Furthermore, short, "to-the-point" messages can be sent that would traditionally not be considered
important enough to make a telephone call or send a PC email. Moreover,
an ongoing record of these communications can be kept and re-read at a
later date. Since people can read, write and send text messages even in
subways, classrooms, and other situations where making a telephone call is
almost impossible, the mobile platform has become an extremely popular
communications medium.

Methodology

Relevant literature
For this study, mobile usage is defined as voice and text communication
via the mobile telephone. A prior study (TANAKA, 2001) suggests that when
considering usage behaviour, three broad factors are to be considered:
- attributes of the medium, in this case a mobile telephone, including
technological constraints,
- cost,
- interpersonal (including personality, emotional, and social) factors.


Another study (DALY, 1987) suggests that social, functional, and
personal factors may have a more significant impact on user behaviour than
the characteristics of the medium. In other words, perceptions of social
norms and personality traits such as a "willingness to communicate" affect
user behaviour. Cost is also a key factor in the adoption and use of a
technology. Text messaging, for example, can be cheaper than voice calls.
Moreover, the social influence model of technology use points out that
"individuals' media perceptions and use are, in part, socially constructed"
(SCHMITZ & FULK, 1991). The model proposes that people around the user
influence the user's perception of the medium, including its usefulness, and
that a user's skills and experience of the communication technology facilitate
use of the medium (TANAKA, 2001). For example, if one person uses text
messaging frequently, another may perceive it to be a useful medium. One
individual's use of text messaging thus indirectly influences another's use of
text messaging and the perceived usefulness of the medium.
Finally, we should refer to the concept of "deepening" and "widening"
friendships. Such technical words used for this analysis are based on other
relevant studies that take the effect of mobile telephone use on the
development of social relationships and communities into consideration
(see, for example, LONGMATE & BABER, 2002). "Deepening" refers to an
increased sense of closeness or intimacy between individuals, while
"widening" refers to an increase in the number of people one considers a
friend.

The structural equation model


We construct a structural model based on covariance structure analysis 2, which is used to understand the relative strengths of direct and
indirect relationships among a particular set of latent variables. The overall
causality was demonstrated with statistical significance. In this study, the
Analysis of Moment Structures (AMOS) 4.0 software in SPSS 11.5 was
used.

2 The covariance structure analysis is a statistical model that describes the overall structure and
relationships among observable variables via some conceptual unobservable variables, i.e.,
latent variables, representative of some relevant observable variables. The analysis is very
flexible and allows endogeneity between variables.


We first assumed a hypothetical model, in which "Relationship" was
explained by "Personal attributes", "Usage of mobile" and "IT literacy" as
shown in figure 1.
Figure 1 - Basic structure of the model
[Path diagram: arrows run from "Personal attributes", "Usage of mobile" and "IT literacy" to "Relationship".]
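As an illustration of how such a hypothesised structure can be specified and estimated, the sketch below writes the model of figure 1 in lavaan-style syntax and fits it with the open-source semopy package (the original study used AMOS 4.0 under SPSS 11.5). The indicator names other than V1 and V2, and the data file name, are placeholders, not the paper's actual variable list:

```python
# A sketch, not the authors' actual AMOS specification. Assumes the
# semopy package and a CSV file with one column per observed variable.
import pandas as pd
import semopy

MODEL_DESC = """
# measurement part: each latent factor and its observed indicators
PersonalAttributes =~ V1 + V2 + V3
UsageOfMobile =~ V4 + V5 + V6
ITLiteracy =~ V7 + V8
Relationship =~ V9 + V10 + V11
# structural part: the paths of figure 1
Relationship ~ PersonalAttributes + UsageOfMobile + ITLiteracy
"""

data = pd.read_csv("survey_responses.csv")  # hypothetical survey file
model = semopy.Model(MODEL_DESC)
model.fit(data)                 # maximum-likelihood estimation by default
print(model.inspect())          # path coefficients, std. errors, p-values
```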

Analysis

Questionnaire surveys were conducted in Seoul, Taipei and Tokyo
between November 2001 and May 2003 at major universities in each city.
Survey targets were confined to university students as they were considered
the best representatives of young users of the mobile platform. We collected
data from over 1,600 university students from those metropolitan areas, 616
from Seoul, 566 from Taipei and 429 from Tokyo. The survey was conducted
in each nation's language respectively (Korean, Chinese, and Japanese) 3.
The questionnaire contained 51 questions that were classified into four subcategories: personal information, consciousness of friendship within a similar
age group 4, IT usage and dependency on mobile telephones. These
questions were designed to test a few of the hypotheses formulated prior to
the survey.

3 Samples were distributed over 4 universities in Seoul, 5 universities in Taipei, and 2 universities in Tokyo.
4 In this study, friendship refers to a relationship created voluntarily by people who lack kinship.


Table 1 - Basic information on the questionnaire surveys

Categories              Seoul                Taipei               Tokyo
Target                  University students  University students  University students
Number of respondents   593                  436                  406
Gender                  Male: 300            Male: 207            Male: 206
                        Female: 293          Female: 229          Female: 200
Date                    Oct. 02 to Nov. 02   May 03 to July 04    Nov. 01 to Apr. 02

Hypotheses
H1: Usage of mobile telephones and text messaging deepens
relationships with friends and acquaintances.
H2: Usage of mobile telephones and text messaging widens
relationships with friends and acquaintances.

Use of mobile telephones by university students

Younger users of mobile telephones made an average of 4.9 and 4.5 calls
per day in Seoul and Taipei respectively, versus 2.4 calls per day for young
Japanese users. However, mobile telephone text messaging was used more
frequently in Tokyo and Seoul than in Taipei. The number of text messages
sent per day totalled 23 in Tokyo, 22 in Seoul, and only 5 in Taipei. The
survey data suggests that Japanese university students make the most of
text messaging because the charge for calls is considerably higher. In
Seoul, the younger generation frequently uses both mobile telephone call
and text messaging services. In Taipei, text messaging is used less often,
probably due to its complicated input method. People in Taipei use SMS,
whereas in Tokyo messages are formatted in HTML, which is fully
compatible with PCs.
One question in the survey related to the number of participants' "invisible
friends", who were seldom seen but communicated with through mobile
telephone text messaging. The ratio of "invisible friends" was lower in Tokyo
and Taipei than in Seoul. In fact, most respondents in Tokyo (64.5%) and
Taipei (59.4%) claimed that they had 1-10 friends who were contacted via
mobile telephone text messages, while the most frequent answer in Seoul
(52.8%) was 11-20.
As for the number of friends with whom communication did not depend
on mobile telephones, the most frequent answer in Tokyo (84%) was an
average of 1-5 friends, and in Seoul over 10 friends (54.3%). It is not
surprising that there are many friends who do not contact one another
through mobile telephones. However, approximately 60% of the young
generation in Taipei replied that they communicated via mobile telephone
with all of their friends. Tables 2 and 3 summarize the primary results of the
questionnaire survey.
Table 2 - Number of mobile telephone calls and text messages placed per day

Categories                                              Seoul   Taipei   Tokyo
Average number of mobile telephone calls per day        4.9     4.5      2.4
Average number of mobile telephone text messages
sent per day                                            22      5        23

Table 3 - Number of friends through mobile telephone communications

Categories                           Number of people   Seoul    Taipei   Tokyo
Friends who are seldom seen, but     1-10               29%      59.4%    64.5%
contacted via mobile telephone       11-20              52.8%    22.7%    20.7%
text messages recently               Over 20            18.2%    17.9%    14.8%

Friends to whom one neither sends    0                  5%       59.2%    3%
mobile telephone text messages       1-5                15%      14%      84%
nor makes mobile telephone calls     6-10               25.7%    6.7%     10%
recently                             Over 10            54.3%    23.1%    3%

Experience in mobile telephone
text messaging (years)               N/A                2.5      2.1      2.8

Analysis with the structural equation model


Four latent variables and fifteen observable variables were identified for
this paper 5.
Personal attributes

Personal attributes were represented by three observable variables:


[V1] Amount of pocket money (Monthly disposable income)
[V2] Number of hours worked at part-time jobs
[V3] Number of days spent attending school

5 Originally, a total of 22 observable variables were identified, but a factor analysis deemed 7 to
be statistically insignificant. Specifically, for personal attributes, 5 out of 8 observable variables
(clothing expenses, cosmetic expenses, hobbies, lodging, and marital status) were statistically
insignificant; for mobile usage, 1 out of 4 observable variables (cost of sending and receiving
text messages and voice calls) was statistically insignificant; and for IT literacy, 1 out of 4
observable variables (use of game players) was statistically insignificant.

Relationships

Relationships with friends and acquaintances are represented by six
observable variables:
[V4] Number of friends contacted only via mobile telephone text messaging
[V5] Number of friends contacted not only via mobile telephone text messaging, but also with other media
[V6] Number of friends not contacted via mobile telephone text messaging
[V7] Number of shopping trips with friends per month
[V8] Number of sightseeing trips with friends per month
[V9] Number of dinners out with friends per month
Literacy

IT literacy has been shown to influence communication skills. Whether
young people use mobile devices for communication may depend, more or
less, on their use of PCs. In view of this, three observable variables are
adopted:
[V10] Hours of PC use per day
[V11] Hours of internet use per day
[V12] Number of e-mails sent from a PC per week
Usage of mobile telephones

The following three observable variables are assumed to represent
mobile telephone usage:
[V13] Number of mobile telephone calls placed per day
[V14] Hours or minutes of mobile telephone usage per day
[V15] Number of mobile telephone text messages sent per day


The full specification of the model is demonstrated in figure 2.

Figure 2 - Full specification of the model
[Path diagram linking the four latent variables and the fifteen observable variables [V1]-[V15].]
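
As an illustration only - the authors used AMOS 4.0, and the multiple population analysis discussed below is omitted here - the specification in figure 2 could be written along the following lines in an open-source SEM package for Python such as semopy (the data file name and column names are assumptions):

    # pip install semopy pandas
    import pandas as pd
    from semopy import Model, calc_stats

    # Measurement model ("=~") and structural model ("~"), mirroring the
    # four latent and fifteen observable variables defined above.
    DESC = """
    PersonalAttributes =~ V1 + V2 + V3
    Relationship =~ V4 + V5 + V6 + V7 + V8 + V9
    ITLiteracy =~ V10 + V11 + V12
    MobileUsage =~ V13 + V14 + V15
    Relationship ~ PersonalAttributes + MobileUsage + ITLiteracy
    """

    data = pd.read_csv("survey.csv")  # hypothetical file with columns V1..V15
    model = Model(DESC)
    model.fit(data)
    print(model.inspect())    # factor loadings and path coefficients
    print(calc_stats(model))  # fit indices, including GFI, AGFI, RMSEA and AIC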

Since we adopted a non-parametric approach, the empirical test was
quite flexible. Even if a model did not fit well, modified models could be
applied by adding new pathways or removing original ones. This feature is
advantageous when analysing a single data set. However, in a comparative
study, the lack of a single definitive structure for testing the hypotheses can
prove a disadvantage. In addition, the sample populations were obviously
different in the three metropolitan areas. Therefore, a rigorous comparative
study using such an approach would not be optimal. To avoid this problem,
we applied the multiple population analysis of AMOS, which enabled the
structure model to be created using multiple data sets. Table 4 shows the
results and fit of this model. The detailed results of the model applied to
each of the metropolitan areas are given in the annex.


Table 4 - Fit of the model

            GFI     AGFI    RMSEA    AIC
Model 1     0.94    0.93    0.032    844.25

< Criteria >
GFI (Goodness of Fit Index) / AGFI (Adjusted Goodness of Fit Index): more than 0.9
RMSEA (Root Mean Square Error of Approximation): less than 0.08
AIC (Akaike Information Criterion): the lower, the better

Empirical results

Influence of "mobile telephone usage" on "relationships"


The coefficients estimated on the path from the latent variable "mobile
telephone usage" to the latent variable "relationships" are 0.26 for Tokyo,
0.32 for Seoul and 0.40 for Taipei. These results suggest that mobile
telephone usage has a certain influence on relationships, but that this effect
is not significant. However, compared with the other factors that affect
"relationships", "mobile telephone usage" has a larger impact in Tokyo and
Taipei. In Seoul, "personal attributes" have a larger impact. This suggests
that even though "mobile telephone usage" does not have a remarkable
impact on relationships alone, it is likely to contribute to the enhancement of
relationships with friends and acquaintances.
One aspect of the hypothesis assumes that the use of mobile telephones
deepens closeness with friends and acquaintances. The correlations
between the latent variable "relationships" and the observable variables
[V7]-[V9] represent how much the latent variable is reflected in these
observable variables. They are as follows: 0.38, 0.42 and 0.42 in Tokyo,
0.42, 0.53 and 0.45 in Seoul, and 0.47, 0.54 and 0.48 in Taipei. The effect of
"mobile telephone usage" on deepening relationships can be measured by
joint correlations, calculated by multiplying the correlation between the
above two latent variables by the correlation between "relationships" and
each of the observable variables [V7]-[V9]. These values are 9-11%,
13-17%, and 18-22%, respectively. These values cannot be compared
between metropolitan areas, but in Taipei young people are more likely to
develop closeness with friends through mobile communications.
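
Because the joint correlations are simple products, they can be checked directly from the figures just quoted; the following minimal sketch (values transcribed from the text) reproduces the reported ranges:

    # Path coefficients from "mobile telephone usage" to "relationships"
    PATH = {"Tokyo": 0.26, "Seoul": 0.32, "Taipei": 0.40}
    # Correlations of "relationships" with [V7], [V8] and [V9]
    LOADINGS = {
        "Tokyo": (0.38, 0.42, 0.42),
        "Seoul": (0.42, 0.53, 0.45),
        "Taipei": (0.47, 0.54, 0.48),
    }
    for city, coefficient in PATH.items():
        joint = [coefficient * r for r in LOADINGS[city]]
        print(city, [f"{value:.0%}" for value in joint])
    # Tokyo  -> 10%, 11%, 11%  (reported as 9-11%)
    # Seoul  -> 13%, 17%, 14%  (reported as 13-17%)
    # Taipei -> 19%, 22%, 19%  (reported as 18-22%)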
The other aspect of the hypothesis assumes that mobile telephone users
tend to develop a wider relationship with friends and acquaintances. This
can be seen in the correlations between "relationships" and the variables
[V4]-[V6]. The correlations are relatively low in Tokyo (0.18, 0.32 and 0.21,
respectively), which implies that "relationships" is less reflected in the
variables representing a wide relationship than in those representing a deep
relationship. A similar tendency can be found in Taipei, but in Seoul both
sets of variables have an almost equal impact. Joint correlations of 5-8%,
8-16% and 7-16% are found between "mobile telephone usage" and the
observable variables in Tokyo, Seoul and Taipei, respectively. It seems that
young people in Tokyo are less likely to increase their number of friends
and acquaintances through mobile telephone communications than those
from Seoul and Taipei.
These results clearly indicate that mobile telephone communications do
not make a significant contribution to the development of either wider or
deeper relationships with friends in these metropolitan areas. Thus, neither
hypothesis was accepted. Although young people use the mobile platform
for their communication needs, it does not strengthen relationships between
friends as we had hypothesized.

Relative importance of mobile telephone calls and text messaging


The difference between the impact of mobile telephone calls and text
messaging can be seen in the correlations between the latent variable
"mobile telephone usage" and the corresponding observable variables [V13]
and [V15]. The values are: 0.72 and 0.62, 0.51 and 0.46, and 0.46 and 0.3,
in Tokyo, Seoul and Taipei, respectively. The difference between the
correlations is small in each city, suggesting that young people place a
similar importance on calls and text messaging, although they may
differentiate between the purposes of call and text messaging
communications. Both the number of calls placed and the number of
messages sent affect "mobile telephone usage" most significantly in Tokyo.
The overall structure of the model shows that "relationships" are not
greatly influenced by either "personal attributes" or "mobile telephone usage"
in Tokyo. One distinct feature is that "mobile telephone usage" is
independent of "IT literacy". In the other two metropolitan areas, IT literacy
was shown to have a small impact on mobile telephone usage. It has often
been pointed out that young people in Tokyo in particular tend to feel that
they do not need a PC if they have a mobile telephone (CLARK, 2003). Our
results support this argument.


Personal disposable income, represented by the observable variable
[V1], was included in the latent variable "personal attributes" most
significantly in Tokyo and Taipei, but this latent variable showed a weak
correlation with the other latent variables.

Conclusion

Mobile telephones are now an indispensable communication tool among
young people. In Seoul, Taipei and Tokyo, the younger generations
especially enjoy telephone calls and text messaging, and use mobile
telephones very frequently. This has been shown most clearly in Japan,
where CLARK (2003) asserted that "mobile telephones have replaced
computers as the de facto e-mail terminal of choice for the majority of
Japanese." However, the situation in Korea and Taiwan is quite similar to
Japan and would seem to follow on from the Japanese experience.
This paper examined how relationships with friends could be affected by
dependency on mobile telephones in the context of IT literacy and personal
attributes. The analysis of this paper showed that despite the existence of
somewhat different patterns of factor interdependency influencing the
relationship with friends and acquaintances in these metropolitan areas,
communication through mobile telephones has a relatively low impact on the
actual depth or width of relationships.
We originally hypothesized that the usage of mobile telephones for voice
calls and text messages would deepen or widen relationships with friends
and acquaintances. It is evident in the analysis that these hypotheses were
incorrect in all three metropolitan areas that we studied. These results
suggest that the mobile platform only holds a weak influence upon the
relationships of the younger generation. As a result, communications via the
mobile platform can be considered to be of a superficial nature, maintaining
the strength of relationships developed through other communications
media.


References
ANDERSON J.C. & GERBING D.W. (1988): "Structural Equation Modeling in
Practice: A Review and Recommended Two-Step Approach", Psychological Bulletin,
103(3), pp. 411-423.
CLARK T. (2003): "Japan's Generation of Computer Refuseniks", Japan Media
Review, http://www.ojr.org/japan/wireless/1047257047p.php
CUNNINGHAM H. (1988): "Digital Culture: the View from the Dance Floor", in J.
Sefton-Green (Ed.), Digital Diversions: Youth Culture in the Age of Multimedia, UCL
Press, pp. 128-148.
DALY J.A. (1987): "Personality and interpersonal communication: Issues and directions", in J.C. McCroskey & J.A. Daly (Eds.), Personality and interpersonal
communication, pp. 13-41, Beverly Hills, CA: SAGE.
GREEN S.J. (2000): Digital Diversions: Youth Culture in the Age of Multimedia, UCL
Press.
HYERS K. (2003): Mobile Messaging in Japan, Asia and AME: 2003 Through 2007,
Wireless Data Markets.
Institute of Socio-Information and Communication Studies (2001): Information
Behavior 2000 in Japan, University of Tokyo Press (in Japanese), pp. 33-38.
ITO M. & OKABE D. (2003): Paper presented at the conference 'Front Stage - Back
Stage: Mobile Communication and the Renegotiation of the Social Sphere', June
22-24, Grimstad, Norway.
KOPOMAA T. (2000): The City in Your Pocket: Birth of the Mobile Information
Society, Helsinki: Gaudeamus, ISBN 951-662-802-8.
LONGMATE E. & BABER C. (2002): "A Comparison of Text Messaging and Email
Support for Digital Communities: A Case Study", in X. Faulkner, J. Finlay & F.
Detienne (Eds.), People and Computers XVI - Memorable Yet Invisible, Proceedings
of HCI 2002, Springer-Verlag, London, pp. 257-262.
MATSUDA M. (2001): "University Students and the Use of Mobile Telephones", The
Study of Socio-Information, Bunkyo University (in Japanese). Available at:
http://www10.plala.or.jp/misamatsuda/youth-mobile.html
SCHMITZ J. & FULK J. (1991): "Organizational colleagues, media richness, and
electronic mail", Communication Research, 18(4), pp. 487-523.
SOUKUP P.A., BUCKLEY F.J. & ROBINSON D.C. (2001): "The influence of
information technologies on theology", Theological Studies, Vol. 62 (2), pp. 366-373.


SUGAI P. (2005): "Mapping the Mind of the Mobile Consumer", International
Marketing Review, 22(6), pp. 641-656.
TANAKA K. (2002): "Small talk with friends and family: Does text messaging on the
mobile telephone help users enhance relationships?", University of Washington,
pp. 48-54.
VANLEAR C.A. (1991): "Testing a cyclical model of communicative openness in
relationship development: Two longitudinal studies", Communication Monographs,
58(4), pp. 337-361.
WEILENMANN A. & LARSSON C. (2001): "Local Use and Sharing of Mobile
Telephones", in B. Brown, N. Green & R. Harper (Eds.), Wireless World: Social and
Interactional Aspects of the Mobile Age, Springer-Verlag, pp. 99-155.
Ministry of Public Management, Home Affairs, Posts and Telecommunications,
Japan. See: http://www.soumu.go.jp/
Ministry of Information and Communication, Republic of Korea. See:
http://www.mic.go.kr/index.jsp

The Directorate General of Telecommunications, Ministry of Transportation and
Communications, Taiwan. See:
http://www.dgt.gov.tw/English/International-org/International-org.shtml
Telecommunications Carriers Association, Japan. See: http://www.tca.or.jp/


Annex

Detailed results of the model

[Path diagram: "Personal attributes", "Usage of mobile" and "IT literacy" linked to "Relationship". The values in the box beside each arrow are the coefficients estimated for each metropolitan area: top = Tokyo, middle = Seoul, bottom = Taipei.]

Opinion

Interview with David EVANS
Vice Chairman of LECG Europe, London

Conducted by Marc BOURREAU, David SEVY & Nathalie SONNAC

C&S: How would you define two-sided markets (2SM)?

David EVANS: Two-sided markets are those in which entities we call
catalysts bring together two (or more) groups of customers who need each
other in some way, but who cannot capture the value of the mutual attraction
on their own. For-profit businesses, joint ventures, cooperatives,
standard-setting bodies and governments operate catalysts and therefore
operate in two-sided markets.
What distinguishes catalysts and two-sided markets from the more
conventional one-sided business is the interdependency among the two (or
multiple) customer groups, as well as the need for an intermediary - or
catalyst - to realize the full potential of that interdependency. In a
single-sided business, suppliers work across the value chain to deliver
products or services to their customers. Each cog in that wheel is completely
independent of the others; the demand for a particular component is
completely independent of any other member of the value chain. In
two-sided markets, interdependency among customer groups is the defining
characteristic of the business, and in the absence of a catalyst to facilitate
that mutual interaction, neither side is able to realize its full potential, which
is to provide profitable products or services to the end users of its products
or services.

C&S: Which information and communication technology (ICT) market would


you qualify as 2SM? Are there ICT markets that could be considered 2SM at
first glance, but are not really 2SM?

DE: A large part of the ICT market is two-sided, and with the advent of more
sophisticated technology and content capabilities, some markets are now
evolving towards becoming multi-sided.

Mobile telephony is a two-sided market and, with the introduction of content
and television programming on mobile phones, it is becoming a multi-sided
market. Mobile telephony is two-sided when viewed in several ways. Firstly,
these mobile networks bring together callers and receivers. While that may
seem trivial, an examination of pricing strategies shows that it is actually
quite central to the business. Moreover, modern mobile telephones are built
on top of an operating system, which attracts developers to write
applications, such as ring tones, that make the device more attractive to
subscribers. As content continues to find its way onto mobile phones, the
dynamics between operating systems, content providers, advertisers,
handset manufacturers and network operators must be perfectly aligned in
order for subscribers to find the offer attractive, at a price point they find
pleasing, and for the various customer groups to get and stay on board.
Terrestrial or free television is two-sided: it brings together viewers and
advertisers. Pay TV is also two-sided, even though it may not initially
appear that way. Even though viewers pay for content, advertisers,
sponsors and other third-party revenue sources are often important as well,
and the interdependency among advertisers and viewers is becoming very
important.
The information market is obviously huge and by and large is also
two-sided. Whether information is received via newspapers, magazines or
web portals, readers (or web site visitors) and advertisers have an
interdependency that can only be managed by a catalyst.
Search engines, which are information tools, are often used to support
two-sided businesses. Site visitors are granted free use of the search tool
and advertisers pay for the privilege of reaching these visitors. Without
visitors, advertisers wouldn't be interested and, without advertisers, search
engines would lack a revenue stream to fund their operations. Some of
these sites are giving rise to the phenomenon of "mash-ups": websites or
web applications that seamlessly combine content from more than one
source into an integrated experience for the end user. In this instance, the
interdependency among content sources and web visitors is the dynamic
that must be managed.

C&S: Do you think that the development of ICTs can stimulate the development
of 2SM? For instance, in the last few years, online dating sites, which can be
viewed as 2SM, have been developing fast, thanks to the possibilities offered
by the internet. More generally, it could be argued that ICTs facilitate the
development of platforms.

DE: Technology offers catalysts the opportunity to transition from operating
only in the physical world, such as dating clubs, auctions, retailers or even
publishers, to operating in a virtual world or possibly both. It also gives birth
to many new catalysts that have embraced the catalyst concept and created
new businesses operating purely in a virtual world. Nowhere is this more
obvious than in search and other online information portals.
For catalysts that operate in both worlds, physical and virtual, pricing and
product design decisions become interesting and complex. They must make
design and pricing decisions across media and not just within media. By
that I mean that they must decide whether the physical or virtual world
will subsidize the other, instead of being limited to thinking only in terms of
how the customer groups in a single instance must be engaged, for
example, readers and advertisers of a newspaper.
Technology stimulates the development of catalysts in another important
way as well. Many information and communication based catalysts are also
software platforms or operating systems. As the internet makes it easier and
easier for developers to write new applications, the catalyst becomes
increasingly valuable to the end user. This often has the effect of attracting
more developers who write more applications, which, in turn, attracts more
end users, and so on.
It must also be said that in some cases, technology makes it harder for
catalysts to operate profitably. A recent study from the newspaper
industry suggested that one reader of the print version of a
newspaper is worth 100 online readers. This means that for each reader that
a physical newspaper loses to its online portal, it must attract 100 times as
many viewers in order to remain profitable. American newspapers have
been losing a significant portion of their value as advertising has migrated to
the internet.

C&S: What do you think about the current state of research on 2SM? In
particular, do you think that a general theory of 2SM could emerge, that would
provide general insights applicable to any particular industry situation? Or do
you think that future research on 2SM has to focus on more specific markets
(software, media etc.)? Besides, to our knowledge, there is not yet any
econometric work on 2SM. What are the specific difficulties of doing
econometrics on 2SM?

DE: A robust set of findings - on pricing structure, for example - has
generally emerged for two-sided businesses. More robust findings are
probably within reach. However, we are learning that the nature of
interactions between the various market participants, the technology, and
other institutions are very important for how two-sided businesses operate
in practice. Further progress on the theoretical front will most probably
require specialization in different types of two-sided businesses. As the
theory evolves, it wouldn't be surprising to find that the categories in which
we think of two-sided businesses evolve as well. There has been some
econometric work, mainly evaluating the extent of indirect network effects,
and I've done some work on how a market disruption affected the pricing
structure for payment systems in Australia. Econometric analysis -
particularly modern estimation of structural models - is difficult because of
the complexity and interdependence in these businesses. My guess is that
the results will be fragile. My own work has focused on detailed case
studies concerning the evolution and operation of these businesses.

C&S: Do you think that the theory of 2SM provides sufficiently robust and
unambiguous insights for decision makers (from firms, competition or
regulatory authorities)?

DE: Yes and no. Businesses understand the dynamic nature of catalysts
and the fundamental need to get both sides on board. But I think that they
underestimate the difficulty of crafting the business models needed to
sustain it. This is due, in part, to the fact that business schools just don't
provide the tools or the training to think in this two- or multi-dimensional
world. Setting prices is one example of this complexity. Pricing models must
be carefully constructed to reflect the impact of interdependencies on
demand and price. Cost-plus and value-based pricing, the tools of the
modern business executive, are lethal when applied to catalysts. The failure
to understand the role of pricing subsidies in stimulating demand was largely
the reason for many of the dotcom failures of the late 1990s/early 2000s.
Unfortunately, human nature makes us distrustful of all that is different and
that we don't understand. And as a result, catalysts have had more than
their fair share of run-ins with government authorities, not to mention the
media, which tend to see chicanery in business practices that don't fit
preconceived molds. Microsoft's run-in with antitrust cops in the United
States is a good example of this.
Some economists - and lawyers for public and private entities - suggested
that Microsoft had gotten developers to write thousands of applications for
the Windows operating system to make it harder for others to compete.
What they didn't recognize was that operating systems are catalysts that
serve developers, users, hardware makers, and possibly other communities
as well. All operating systems compete by encouraging application
developers and others to make complementary products that help
consumers. Microsoft may well have crossed the line in other actions that it
took, but encouraging developers to write applications was an essential part
of the catalytic reaction that created value for the many communities this
operating system served.


Two-sided markets and strategy


C&S: What are the new and useful messages of 2SM theory for strategy
makers? For instance, in the telecommunications industry, does the 2SM
approach help us to understand the current process of vertical integration,
whereby telecommunications operators are becoming increasingly active in
video program markets (television via mobiles or ADSL etc.)?

DE: The role of the catalyst in a two-sided market is to get all customer
groups on board. It does this in a number of ways. Often, product design is
essential to creating an attractive, safe and convenient mechanism for
customer groups to take advantage of their mutual interaction. Pricing
strategies are also important, since two-sided markets work because the
catalyst often subsidizes one customer group in order to attract the other.
This fundamental principle does help to explain the organization of the
telecommunications ecosystem. The network operator, as the catalyst, has
an incentive to get content providers interested in its platform so that it can
attract new subscribers who pay not only for that content, but also for the
air-time consumed. However, this does not necessarily imply that operators
must be vertically integrated. In fact, with the right product design, pricing
strategy and business model, catalysts can ignite markets without the need
to own their suppliers.

C&S: In 2SM, price structure is fundamental. Do you think that, in some hi-tech
markets, the current structure of prices is inefficient and should be revised to
take into account 2SM effects?

DE: No. I don't believe that economists, at least, have any basis for
concluding that pricing is inefficient. In many high-tech markets, such as
internet television, businesses are struggling to find the right pricing model.
It is a very difficult task; but businesses have strong financial incentives to
find a pricing structure that gets customers on board and creates a profitable
business. Economists, armed with the theory of two-sided markets, can help
guide businesses in this effort.

Two-sided markets and regulation (competition policy)


C&S: What are the important messages of the 2SM theory for regulators or
competition authorities?

DE: The most important message is that regulatory and antitrust analysis
must take into account the interdependencies between the multiple sides of
the market. It is analytically unsound to treat particular sides of the market in
isolation. The second most important message is that regulators and
authorities need to be cognizant of the economics involved in creating and
managing two-sided businesses. These businesses can release value for
consumers only if they set prices and design their products in a way that will
get both sides on board and both sides interacting with each other.
Regulators and authorities need to be careful not to prevent business
strategies that are ultimately beneficial to consumers and to recognize that
there could be pro-competitive reasons for business behavior that may seem
odd. Of course, two-sided businesses engage in anti-competitive behavior,
just as one-sided ones do. So the third message is that, while regulators and
authorities should be cognizant of the peculiarities of these businesses, they
should by no means give them a free pass.

C&S: In a 2SM, do you think that the efficient market structure is a monopoly?
If this is the case, what does this mean for regulators or competition
authorities in sectors like the media and hi-tech markets?

DE: The evidence is overwhelming that hardly any industries based on
two-sided markets evolve towards a monopoly. Just look at
advertising-supported media, exchanges and matchmaking services,
payment systems, and software platforms. Some - such as Microsoft in
personal computers - have tended towards single-firm dominance. Most
others have tended towards oligopoly and some are quite competitive. In
practice, product differentiation and multi-homing work to offset indirect
network effects.

Two-sided markets and specific markets


C&S: In the payment systems market, different systems (Visa, MasterCard
(MC), Amex etc.) compete. Do you think that we could see concentration in this
market (in particular, a duopoly Visa/MC in the world)? What do you think
about the project of creating a "European payment system"? What is the role
of innovation and differentiation in competition between payment systems?

DE: The natural tendency for payment systems is to have competition as a
result of product differentiation and multi-homing - the fact that it is easy for
cardholders and merchants to have and use different card brands. However,
for payment systems, market evolution depends heavily on institutional and
legal structures. For example, in Europe, as one goes from country to
country, the organization of the payment system depends very much on the
organization of the banking system, national laws affecting competition and
other institutional factors. Compare France and the UK, for example.
believe that the internet and mobile telephony are two major forces that will
make it easier for new card systems to come into existence in local markets
and also, more importantly, around the world. It is more likely that the global
card industry is at an inflection point where MasterCard and Visa will face
new and interesting challenges, than that these global brands will secure a
hegemony.


C&S: In the computer and software industries, two types of platforms are
competing for gamers and game developers: game consoles on the one hand
and the Windows platform on the other. Do you think that the coexistence of
consoles and PCs will continue or do you think that a convergence of the two
types of platforms is likely in the future?

DE: Gaming platforms are highly specialized and follow a different business
model from personal computer software platforms. I therefore think that
these platforms will remain distinct and that customers, application
developers, and console makers will not find that it makes sense to have
"one size fits all." An interesting question, though, is which - if either - of
these platforms captures home entertainment, including television.

Other theme
C&S: You have an MIT Press book coming out called Invisible Engines on
software platforms as two-sided businesses, and a Harvard Business Press
book on
management and strategic aspects. Could you tell us more about these two
books?

DE: Invisible Engines (forthcoming, MIT Press, Fall 2006) is the story of
software platforms, the technology that powers everything from mobile
phones to search engines, from automobile navigation systems to digital
video recorders, and from smart cards to web portals. The book tells the
story of how this malleable and adaptable code - and the products that it
facilitates - is challenging, and perhaps even destroying, many
long-established industries. Entrepreneurs and executives should find
insights that they can apply in their own businesses, whether these be in an
industry on the verge of creative destruction brought about by software
platforms, or in one that is just emerging as a result of the opportunities that
these invisible engines can facilitate.
Catalyst Code (forthcoming, Harvard Business School Press, Winter 2007)
describes the business value of catalysts as they build, stimulate and govern
two-sided markets. The book is organized around a practical, yet strategic,
framework for creating a successful catalyst and outlines the steps needed
to create and maintain a profitable catalyst ecosystem. It offers insights from
hundreds of successful catalysts on issues such as product design, pricing
and the structure of business models. Catalyst Code also debunks many
traditional business school theories on pricing, in particular, and illustrates
how the application of conventional management theories spells disaster for
catalysts and two-sided markets.

Articles
Municipal Wi-Fi Networks: The Goals, Practices, and Policy Implications of the US Case
The EU Regulatory Framework for Electronic Communications: Relevance and Efficiency Three Years Later
Modelling Scale and Scope in the Telecommunications Industry: Problems in the Analysis of Competition and Innovation

Municipal Wi-Fi Networks: The Goals, Practices,
and Policy Implications of the U.S. Case (*)

François BAR & Namkee PARK
Annenberg School for Communication
University of Southern California, Los Angeles, CA

Abstract: This paper explores three broad questions about municipal Wi-Fi networks in
the U.S.: why are cities getting involved, how do they go about deploying these networks,
and what policy issues does this new trend raise? To explain municipal involvement, the
paper points out that cities have both the means to provide relatively inexpensive
deployment and the motives to provide wireless connectivity to city employees, foster the
economic development of communities and offer universal and affordable broadband
services to residents. The paper then explores nine possible business models, ordered
according to two questions: who owns the network and who operates it. Each of the
possible business models is described and its policy implications are discussed. Finally,
the paper addresses the political and legal fight over the right of cities to build these
networks. The authors argue in conclusion that the current municipal Wi-Fi movement
should be allowed to proceed without federal restrictions.
Key words: municipal wireless, internet policy.

In the wake of Wi-Fi's spectacular rise during the past ten years, a new
twist has emerged: the last few years have seen a growing number of
municipal governments deploy Wi-Fi networks, in the U.S. and abroad.
According to VOS (2005), there were 82 municipal Wi-Fi networks in the
U.S. as of July 2005, up from 44 the previous year (VOS, 2004). In addition,
another 35 municipalities are currently planning to deploy such networks
(VOS, 2005). Outside the U.S., over the same period, this number increased
from 40 to 69 (VOS, 2005). Such municipal enthusiasm for deploying and
operating telecommunication networks comes as a surprise given the
prevailing trend of deregulation and privatization in public utilities.
This paper explores the deployment of municipal Wi-Fi networks within
the U.S. context, where the deployment of wireless broadband takes on

(*) This paper was prepared for the First Transatlantic Telecom Forum, IDATE, Montpellier,
November 22, 2005.


particular urgency because U.S. wired broadband penetration is seriously
lagging behind other developed countries 1. We address three broad
questions about municipal Wi-Fi networks in the U.S.: why are cities getting
involved, how do they go about deploying these networks, and what policy
issues does this new trend raise? We first examine the reasons why local
governments are interested in deploying Wi-Fi networks. Local governments'
involvement in the deployment and operation of telecommunications
networks or public utilities is not a new phenomenon. Their direct
involvement with public utilities such as electricity, water, or gas has a
history of over one hundred years, and cities have owned cable networks
since the late 1980s (for a review of municipal entry into the cable market,
see CARLSON, 1999; TONGUE, 2001). With Wi-Fi, they now have access
to a relatively inexpensive technology that leverages other municipal assets,
such as the many antenna site locations controlled or owned by the city,
like street lights and traffic signals. However, against the prevailing
deregulation and privatization of the telecommunications marketplace, local
governments' involvement in the deployment of Wi-Fi networks introduces a
new element in the broadband internet 2 service market. Therefore, we need
to understand the reasons why local governments have now decided to
enter this market.
Secondly, we explore the various ways in which U.S. cities are pursuing
the construction and operation of municipal Wi-Fi networks. Some
municipalities are building and managing public "hotzones," which provide
wireless internet access in downtown areas or public parks, while others
plan to provide wireless access simply for governmental uses - for police or
fire departments, or for city employees. Yet other local governments pursue
broader goals through the deployment of citywide Wi-Fi networks. Local
governments propose to play various roles, ranging from simply financing
networks to building and operating them (GILLETT, LEHR & OSORIO,
2003). Although Wi-Fi networks require less capital than their wired
counterparts, they still have to be funded and maintained. Cities are
pursuing a variety of business models to support their efforts, ranging from

1 According to ITU broadband statistics released in January 2005, U.S. broadband penetration
dropped from 13th place in 2004 to 16th in 2005. See:
http://www.itu.int/osg/spu/newslog/ITUs+New+Broadband+Statistics+For+1+January+2005.aspx

2 The term broadband is commonly used to refer to "data services that are fast, always
available, and capable of supporting advanced applications requiring substantial bandwidth"
(FCC, 2005, p. 11). More specifically, however, it means "an advanced telecommunications
service that has the capability of supporting, in both downstream and upstream directions, a
transmission speed in excess of 200 kilobits per second (kbps)" (FCC, 2004, p. 12).


subsidized free nets to advertising-supported or various forms of fee
collection. This raises questions about capital investment to cover
construction costs, revenue streams to cover operations, cost of capital, and
customer maintenance and service, even the possibility that municipalities
may use the new networks as a revenue source. These business models in
turn promise to shape the long-term sustainability and future evolution paths
of the municipal Wi-Fi networks. Where cities choose to charge citizens for
Wi-Fi services, they come into direct competition with industrial players,
including incumbent wired network providers.
Finally, the approach cities decide to follow will shape to a large extent
the local policy goals they might pursue through their Wi-Fi networking
activities. For example, it will affect their ability to explore synergies between
the provision of traditional city services and network access, the extent to
which they can use Wi-Fi as a tool for economic development, or whether
Wi-Fi can help them bridge the digital divide within their communities. These
local issues form the backdrop of intensifying state and national policy
debates over whether local governments should be allowed to provide
wireless internet services. The paper's third section explores the arguments
on both sides of this debate.
Altogether, the current wave of municipal Wi-Fi deployment unleashes a
wide variety of technology, application, organizational and policy
experiments. Behind the current nationwide debate about the proper role for
cities with respect to broadband network deployment and operation, one
interesting policy question is: should these multiple experiments be allowed
to proceed, or do they represent wasteful, duplicative, uncoordinated
efforts? We argue that there is much to be learned from letting these local
stories unfold, so that we can explore a variety of deployment trajectories for
Wi-Fi. Relying simply on nationwide carriers following a nationwide
coordinated blueprint might be more efficient in the short-term pursuit of
known opportunities, but at the cost of discovering possibly richer
alternatives through multiple local experiments. Given its low deployment
cost and technical features, Wi-Fi makes sense for limited-scale
deployments and constitutes an ideal platform for engaging in such multiple
simultaneous experiments.


Why municipal Wi-Fi networks?

Two main forces are driving the current wave of municipal Wi-Fi
deployment. Firstly, with mass-produced low-cost unlicensed wireless
technology, municipalities have easy access to the means: Wi-Fi networks
are relatively inexpensive to deploy and operate, and they take advantage of
available city assets such as street lights and urban furniture, which make
ideal antenna sites. Secondly, municipal governments point to a growing list
of motives: Wi-Fi networks can help them to provide connectivity for city
employees, entice businesses to locate in their downtowns, make their local
convention centers more desirable, and offer broadband internet access to
citizens whose homes were beyond DSL's reach.
The mass-market development of Wi-Fi technology has given local
governments the means to deploy pervasive local broadband networks that
are relatively inexpensive when compared to earlier wired alternatives. The
technology's success resulted from three main forces that led to its
widespread diffusion (BAR & GALPERIN, 2004). Firstly, the absence of a
licensing requirement for the 2.4 GHz and 5 GHz spectrum in which Wi-Fi
operates has led to wide-ranging participation in the technology's
development. Secondly, industry-led standardization of the technology
through the IEEE and the Wi-Fi Alliance has ensured broad interoperability.
Finally, the resulting large-scale production of Wi-Fi chipsets brought low
unit costs for Wi-Fi equipment, fueling the technology's integration as
standard equipment in laptop computers and allowing widespread diffusion
of Wi-Fi access points for private and public use. The availability of
unlicensed spectrum and low equipment costs have spurred new demand
from households, businesses, and government agencies for wireless
internet access, thus motivating local governments to explore wireless
broadband provision as part of their commitment to serve their communities.
Furthermore, the related development of new mesh wireless
architectures gives cities a distinct advantage. Early uses of Wi-Fi mainly
saw the connection of access points at the end of broadband lines, offering
a 'cordless Ethernet' for data access reminiscent of cordless phones for
voice. Mesh architectures allow data to bounce from one Wi-Fi device to the
next, reaching its ultimate destination through a series of wireless hops.
Importantly, while cordless Ethernet deployments required each access
point to be wired to the network, meshed devices only need a power source
and self-organize into an alternative network. This gives municipalities a
distinctive advantage since they control a large number of powered
locations dispersed throughout urban areas - street lights, traffic signals,
municipal buildings, police and fire stations - that can all serve as excellent
sites for mesh devices. Maintenance of the mesh network can become part
of ongoing maintenance of these facilities, allowing cities to benefit from
economies of scale and scope in their operations (LEHR, SIRBU &
GILLETT, 2004). By contrast, private carriers who have attempted to build
public urban Wi-Fi networks have had to negotiate access rights to sites,
and have largely resorted to making deals with store chains such as
Starbucks coffee shops and Kinko's copy centers. Moreover, local
governments that own public utilities have decades of experience in
operating complex technologies, serving customers of many kinds,
managing billing and collection systems, and providing technical support
(BALLER & STOKES, 2001; BAR & GALPERIN, 2005). As a result, many
municipalities have come to see Wi-Fi networks as a natural extension of
their on-going activities.
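
To make the multi-hop principle concrete, the sketch below treats a mesh as a graph in which each powered node relays traffic towards a single wired gateway, and uses a breadth-first search to count the wireless hops a packet needs. This illustrates the routing idea only; the node names are invented and no deployment described here is being modelled:

    from collections import deque

    # Hypothetical mesh: street lights and traffic signals as nodes, with
    # an edge between any two devices that are within radio range.
    LINKS = {
        "gateway": ["light_1", "signal_a"],
        "light_1": ["gateway", "light_2"],
        "signal_a": ["gateway", "light_3"],
        "light_2": ["light_1", "light_3"],
        "light_3": ["signal_a", "light_2"],
    }

    def hops_to_gateway(start):
        """Count the wireless hops from a mesh node to the wired gateway."""
        queue, seen = deque([(start, 0)]), {start}
        while queue:
            node, hops = queue.popleft()
            if node == "gateway":
                return hops
            for neighbour in LINKS[node]:
                if neighbour not in seen:
                    seen.add(neighbour)
                    queue.append((neighbour, hops + 1))
        return None  # the node cannot reach the gateway

    print(hops_to_gateway("light_3"))  # -> 2 (light_3 -> signal_a -> gateway)

Only the gateway needs a wired backhaul; every other node needs nothing but power, which is why city-controlled street furniture is such an attractive set of sites.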
Not only do municipal governments have the means to deploy Wi-Fi
networks, they also have a number of key motivations for doing so. Firstly,
local governments need broadband networks to support their internal
operations and deliver services to their citizens. Networks are essential for
governments to provide services such as code enforcement, utility
monitoring, meter reading, and police or fire protection, and cities operate
internal networks to support their employees in all those tasks, as well as
inter-governmental communications (GILLETT, LEHR & OSORIO, 2003).
Insofar as Wi-Fi technology allows them to deploy these networks less
expensively and to offer better network coverage throughout urban areas, it
helps enhance government work performance and lower provision costs for
those services. Furthermore, governments are increasingly eager to provide
on-line services to residents and businesses, ranging from building permit
applications and access to government meetings or municipal databases,
such as property information and crime statistics, to on-line participation in
civic debate.
Secondly, local governments see the provision of broadband wireless
access as a way to promote local economic development. They believe that
pervasive broadband access can create significant incentives for businesses
to locate in a particular area, and can attract business visitors and
conventions. The third rationale is to provide universal and affordable
broadband internet access to their residents, thus bridging the digital
divide. Internet access and ICTs are important for people to sustain and
enhance their social, economic, educational, and cultural lives in the
Information Society. However, recent reports indicate that U.S. broadband
penetration has fallen from third to sixteenth in the world during the last four
years and that broadband access for American citizens continues to lag
behind most other industrialized countries (TURNER, 2005). Some local
governments, faced with what they perceive as lukewarm private sector
efforts to solve this problem, are seeking to provide "public information
utilities" as they did in the past with essential public utilities such as
electricity or water (SACKMAN & BOEHM, 1972; SACKMAN & NIE, 1973).
Sifting through the various justifications for municipal Wi-Fi deployment
projects, we find evidence of these three rationales. Municipal Wi-Fi
networks serve to increase the effectiveness of government service delivery
in many public safety networks. For instance, the police department of San
Mateo, CA, claims it was able to improve the productivity of its officers
without increasing the number of patrols on the street by giving them better
access to police resources through a metro-scale Wi-Fi network. Municipal
deployments are pursued for economic development goals in metropolitan
cities such as Philadelphia, Los Angeles, or San Francisco, which are
currently planning to construct citywide Wi-Fi networks or to provide free
broadband access in hotzones. In the case of Philadelphia, the city
government decided early this year to spend an estimated USD 10 million to
build a Wi-Fi network that would cover the entire 135-square-mile city area
as a way to remain a competitive location for businesses and an attraction
for visitors (The Wireless Philadelphia Executive Committee, 2005). In a plan
to deploy a citywide Wi-Fi network, the city government of Los Angeles also
claims that it is essential to have a reliable, affordable and accessible
broadband network in order to maintain and enhance the economic activities
of the city (Mayor's Wi-Fi & Beyond Executive Advisory Panel, 2005).
Small counties where other broadband services have not been available
are also considering citywide Wi-Fi networks or hotzones in order to entice
businesses to locate in their community or keep them from leaving. For
instance, some local governments have built Wi-Fi networks in an effort to
meet the needs of businesses that require high speed communication
facilities. The city government of Scottsburg, IN, which has a population of
8,000, decided to build its own Wi-Fi network in response to local auto
dealerships' request for broadband internet service provision. In order not to
lose approximately 70 jobs provided by the local dealerships, the city
government has invested USD 385,000 in the construction of a Wi-Fi network
(REARDON, 2005), which currently has more than 400 customers
(Muniwireless.com, 2005). Long Beach, CA, provides free wireless internet
access in its downtown and convention center in an effort to attract visitors
and convention organizers (BAR & GALPERIN, 2004).


Many local governments seek to provide universal and affordable
broadband internet access to underserved areas and to residents who
cannot afford to pay high prices for commercial internet service (cable
modem or DSL). For example, before the city of Chaska, MN built its own
Wi-Fi network, the local phone company, Sprint, provided DSL service for
USD 40-45 per month, and the local cable company, Time Warner, offered
cable modem service at USD 45-50 (REARDON, 2005). Now the city-owned
ChaskaNet offers Wi-Fi internet service for USD 15.99 a month (Tropos
Networks, 2004a), and over 2,000 of its citizens were paying subscribers as
of October 2004 (COX, 2004). In the case of Cerritos, CA, before the city
government built its municipal Wi-Fi network, no area of the city was covered
by cable modem, and only a small portion was served by DSL with
inconsistent access at best (Tropos Networks, 2004b). Since the inception of
the network, the residents of the city have enjoyed internet access at about
USD 30 per month (Pronto Networks, 2004).
Local governments also hope their Wi-Fi networks will generate
additional revenues, but this may take a while. It is projected that it will take
a few more years for local governments to reach a break-even point, since
they have invested a considerable amount of funds in the construction of
Wi-Fi networks. Even one of the most ambitious municipal Wi-Fi networks,
in Philadelphia, anticipates that it will not break even until its fourth year of
operation (The Wireless Philadelphia Executive Committee, 2005).
Especially in cases where cities provide free access to users, relying on
advertising revenues from local businesses to recoup their expenses,
additional revenues look harder to achieve in the short term. One case in
point is the recent cancellation of Orlando's city Wi-Fi network, where
wireless access was free. The city government of Orlando anticipated as
many as 200 users a day, but only 27 users per day showed up, forcing the
network to shut down after 18 months of operation because the city could
not justify spending USD 1,800 per month to keep the system running
(EWALT, 2005). Thus, whether or not municipal Wi-Fi networks can provide
additional revenues to local governments still remains to be seen.

Searching for viable business models

In their efforts to deploy municipal Wi-Fi networks, local governments
have proposed a wide array of organizational structures and business
models. Various proposals have discussed structures such as private
consortium, public community enterprise, cooperative wholesale, public
authority, nonprofit, and private sector partnership, and it can be confusing
to sort through the multiple options. Fundamentally, two questions guide our
analysis of the various options under discussion: who would own the
network and who would operate it? For each of these two questions, there
are essentially three proposed answers - the same set of three for both
questions: the city, one private player (usually a telco or an ISP, or a
company like Google in San Francisco), or multiple others (a set that can
include local merchants, Wi-Fi cooperatives, multiple ISPs, or community
organizations). The nine possible options are mapped out in table 1, each
one leading to a unique set of policy issues. Of course, these nine options
are not mutually exclusive and local governments can elect to combine
several of them. Each, however, represents an archetype, useful for
exploring specific policy issues. We review the three ownership options in
turn, examining the implications of different operating arrangements.
Table 1 - Municipal Wi-Fi business models

                                           Who owns?
Who operates?        City                      One private actor    Multiple others
City                 Public utility            Hosted services      Public overlay
One private actor    Wholesale                 Franchise            Private overlay
Multiple others      Wholesale open platform   Common carrier       Organic mesh
City-owned networks
In a first set of cities, local governments choose to own the Wi-Fi network
infrastructure. This option is often chosen in cities where the initial motivation
is to provide communication facilities for the city's internal needs. They
typically contract with an equipment maker to install network equipment on
city-owned sites. When their plans go beyond internal use to include offering
Wi-Fi services to the public, municipalities have three main choices.
The first is for the city itself to operate the service and retail it, through a
public utility along the lines of municipal water or power utilities. The most
prominent reason for adopting this model is to take advantage of the past
experience of public utility companies in the provision of other infrastructure.


Through such an arrangement, cities can leverage their existing resources
for subscriber acquisition, customer service, technical support, and billing.
The city of Owensboro, KY, has adopted this model: Owensboro Municipal
Utilities contracted with Alvarion Networks to build the network it owns and
operates. Alvarion Networks also built Wi-Fi networks in Scottsburg, IN, and
Island Pond, VT, where the cities own and operate their citywide municipal
networks with service fees of USD 35 and USD 30, respectively. The city of
Chaska, MN, is another example: Chaska.net, a city-owned ISP, hired
Tropos to build the Wi-Fi network it owns and operates, charging residents
USD 16 per month. Thus far, only a few cities have adopted this business
model.
These have typically been relatively small cities, ones where private telcos
and cable companies were not providing broadband internet access. The
advantage of the public utility model is that it gives cities the greatest control
over the network, its operation and the services it provides.
A second option is for the city to act as a wholesaler, reselling excess
capacity in the network to a private operator, usually a telecom company or
an internet service provider, who then retails Wi-Fi service to the city
residents. In this model, while a city funds the design, construction and
operation of a municipal Wi-Fi network, service providers perform customer
acquisition, customer service, technical support, and billing. The city can
receive benefits through reduced telecommunication costs by owning the
network, instead of leasing it from private companies. The city of Pasco, WA, follows that approach: the city's utility company owns the network while a private ISP is in charge of network operation, charging residents USD 25-75 per month. The Wi-Fi network of Long Beach, CA, is another example: the city government contracted with a private vendor, Venier Networks, to build the network, while Color Broadband and G-Site are responsible, respectively, for the operation of the network and the management of the login website. In Hermosa Beach, CA, the city hired Strix Systems for the construction of the network, leaving LA Unplugged in charge of its operation. The planned Philadelphia network adopts this model, and the city
recently appointed a commercial ISP, EarthLink, as the service provider.
A third option offers a variant on the wholesaler model in which the city
offers excess capacity in its network to several ISPs, as an open platform.
No city has followed that path so far, although this model is one of those
under consideration in San Francisco for the city's upcoming Wi-Fi
deployment. It will be particularly interesting to follow the development of
such an arrangement, now that DSL is moving away from providing an open
network for ISPs 3.
In all three operation approaches, a city's ownership of the network gives
it substantial control over its deployment, coverage and service conditions.
However, these modes put the city in direct competition with private telcos
and cable companies for the provision of internet service.

Single private owner


In a second set of cities, local governments contract with one private company to build and own the network, under an agreement that allows the company to use city-owned antenna sites. In such cases the same three options exist for network operation.
The first option, municipal operation of hosted services on that private
infrastructure, is possible in theory, but unlikely in practice. A city choosing
that approach would essentially set up a municipally-controlled ISP offering
services on privately-owned Wi-Fi facilities. So far, no city has explored that
avenue.
The second, and by far the most prevalent, option is for the private network owner to operate the Wi-Fi service as well and sell it directly to consumers. This arrangement mimics the franchising of cable system operators, and cities can structure agreements that carve out city benefits similar to the public/education/government (PEG) access channels, in addition to possible franchise fees and access fees for antenna siting. In most of these cases, the city is the private concern's main customer, in effect the "anchor tenant" of that Wi-Fi system. In many cases also, the city uses its control over rights of access to antenna sites to negotiate provision conditions with the network provider. These can range from limits on monthly fees (for all subscribers or specifically targeted to economically disadvantaged city residents) to requirements ensuring network coverage throughout the city. The city of Cerritos, CA, was one of the first to adopt this business model for its Wi-Fi network. In this case, equipment makers Tropos Networks and Pronto Networks built the citywide network, which Aiirmesh then owned and operated. The city provided access to municipal buildings

3 Cable-based broadband networks have always been closed in the U.S., since the terms of their franchise agreements do not require them to be common carriers.

F. BAR & N. PARK

117

and intersection signal light structures for antenna installation (Pronto


Networks, 2004). After its construction, the Wi-Fi service is now available to
more than 50,000 city residents (Pronto Networks, 2004). The city is
Aiirmesh's largest customer and engaged in negotiations that resulted in
Aiirmesh offering a range of service levels such as Aiirmesh Home for local
residents, Aiirmesh In-Town for visitors, and Aiirmesh BusinessPro for
business use.
A number of cities have adopted this franchise model largely because, by
delegating the deployment and operation to private vendors and ISPs, local
governments simply take on the role of organizer, and thus rely on the
franchisee for the construction, operation and administrative tasks. Their
control over city-owned rights of way allows local governments to influence
the shape and character of Wi-Fi service, to receive revenue as they charge
for access to this physical infrastructure, and to negotiate terms of access to
the network for their internal use. This business model does not require the
city to invest up-front in the construction of the network and the franchisee
has incentives to operate the network efficiently to generate profits and
compete with other forms of internet service.
In a number of smaller cities that have adopted this model, such as
Cerritos, CA, there was no alternative broadband service available. The
project unfolded smoothly as residents directly benefited from newfound
internet access, while there were no existing broadband providers to
complain about unfair competition. However, the way in which cities use
their power over antenna location sites to choose specific network operators
seems bound to raise controversy, especially as larger cities adopt this
model. For instance, the city government of Milwaukee, WI, recently
declared that it would follow such an approach for the construction of its
citywide municipal network, at no cost to city tax-payers. Critical concerns
have been raised about how much the local government would charge for
the use of city infrastructures, how it would control ISP rates, and how it
would go about selecting the franchisee.
The third option, one that is also theoretically possible but so far not implemented in practice, would see the private network owner function as a common carrier, making its Wi-Fi network infrastructure available to multiple ISPs, city services, and possibly others such as private networks. The owner may
choose to do so because of requirements imposed by the city (for example
in exchange for access rights to antenna sites) or because it makes
business sense to have others retail the service to individual customers. This
is one of the options currently under discussion in the city of San Francisco.


Multiple private owners


Finally, cities may choose to encourage construction of Wi-Fi networks by
multiple players. This could include local Wi-Fi co-operatives, retail
businesses (in shopping districts for example), or community organizations,
in addition to for-profit network providers. In a sense, this constitutes a
continuation of the current deployment mode for Wi-Fi, where a number of
independent private and public efforts have led to the deployment of uncoordinated Wi-Fi coverage areas. As local governments ponder how to foster more consistent coverage and services, one option is to use their authority to promote greater coordination among these various efforts and to encourage more consistent coverage within municipal areas.
Here again, there are multiple options for network operation.
A first option is for municipalities to offer a common public overlay to
these multiple networks, that could provide features ranging from a common
city 'branding' to uniform login and authentication. A similar concept has
been promoted by wireless community activist project "NoCat.net" in the
form of a suite of software services including NoCatAuth (a centralized
authentication system that works across multiple independent co-op
networks), NoCatSplash (a user front-end for access authentication) and
NoCatMaps (a free node database and mapping tool) 4. This approach is
closely related to the way in which the city of Austin, TX, has encouraged the
coordinated deployment of multiple independent Wi-Fi systems through the
Austin Wireless City project (FUENTES-BAUTISTA & INAGAKI, 2005).
A second option would be for the multiple network owners to outsource service provision and retail/billing operation to a private overlay operator such as Boingo or iPass: this is currently one of the prevailing models for commercial public Wi-Fi provision in coffee shops and hotels, a model that could conceivably be extended to other types of venues. For example, Boingo currently lists free networks on its Wi-Fi location finder interface, although it obviously does not charge for access through them.
The third option, a set of diversely-owned network facilities operated by multiple players, would provide an interesting test of the self-organizing,
4 For more information, see http://nocat.net. The project, hosted by O'Reilly and associates in
Sebastopol, CA, takes its name from a quote attributed to Albert Einstein, who is said to have
described radio in the following way: "You see, wire telegraph is a kind of a very, very long cat.
You pull his tail in New York and his head is meowing in Los Angeles. Do you understand this?
And radio operates exactly the same way: you send signals here, they receive them there. The
only difference is that there is no cat."

F. BAR & N. PARK

119

organic mesh envisioned by proponents of a broadly open spectrum


common (see BENKLER, 2002; REED, 2002). Optimistic visions expect that
current Wi-Fi deployments might naturally emerge into a more ubiquitous
self-organizing coherent mesh network, where the multiple players seek
interconnection or collaboration arrangements as they see fit. However, one
could envision a local government taking an active role to usher in such an
outcome, for example by promoting broader Wi-Fi deployment in city-owned
buildings such as libraries and municipal offices, or by making antenna sites
available in exchange for a commitment to cooperate with other Wi-Fi
networks in the area.

The debate over municipal Wi-Fi

Opponents of municipal Wi-Fi deployments raise three main objections. Firstly, they claim that the involvement of cities represents unfair competition for private carriers because cities can use public assets. Secondly, they argue that municipal governments have no particular technological expertise and are likely to prove incompetent in selecting technological approaches, applications and business models. Thirdly, they believe that government intervention, by favoring one specific technology, creates distortion by foreclosing competition among alternatives in the marketplace. The debate has quickly turned to quasi-ideological fights about the proper place of government in modern society.
Perhaps more productively, it would be useful to examine clearly the four key government roles that underlie the various business models and policy arguments explored in this paper, and to articulate the policy rationale for specific kinds of government involvement in Wi-Fi deployment. We sketch these roles here, leaving a full development of the related arguments for another paper.
- Use of city-owned structures. A recurring theme in the reasons invoked by cities to justify their Wi-Fi deployments (and a core gripe of the private players they then compete with) is the free access cities have to ideal antenna sites. Rather than questioning the right of cities to deploy Wi-Fi, it would be more useful to examine the conditions under which public and private players alike should be able to gain access to these facilities.
- City use of the network. Local governments often stand to become the principal user of the Wi-Fi network. The question then is whether the most efficient way for cities to serve their own networking needs is to in-source or out-source the Wi-Fi infrastructure and its operation. One important related factor that should be included in that analysis is the potential for synergies that could be opened by having a single player, the local government, be both provider and user of the network, which opens useful avenues for virtuous learning cycles.
- City funding of the network. In many cases, municipal involvement in Wi-Fi deployment has an important financial component: cities might propose to fund the network's construction or subsidize its operations. In such cases, it is important to examine the two underlying rationales for such use of taxpayers' money. Firstly, it might be argued that private parties would be short-sighted, or their capital too "impatient", to wait for long-term returns. A second argument would encourage the inclusion of social goals such as bridging the digital divide in the evaluation of such funding decisions. In both cases, the underlying political debates should be confronted directly.
- City regulation of prices. Finally, some local governments seek to justify their involvement on the grounds that there is a social need for them to influence the service's pricing level and structure, so as to encourage access by certain population categories. Critics have pointed out that providing target user populations with vouchers toward commercial internet access may be a more effective way to achieve these goals (THIERER, 2005). Here again, the underlying social policy deserves to be debated directly, and its mechanisms clearly articulated.
The deployment of municipal networks has provoked a political and legal fight with regard to local governments' right to build those networks. Confronted with the growing number of municipal Wi-Fi networks, legacy network providers (telecom and cable companies) and their supporters argue that cities and municipalities have unfair advantages over private companies, because they regulate those private companies, avoid fees and taxes, obtain low-cost financing, and utilize public workforces and facilities. They argue that city and municipality subsidies allow them to offer network access at below-cost prices, which in turn distorts fair competition and puts private companies at a serious disadvantage.
Telcos Verizon and SBC initially led the battle, but they have since been joined by cable companies such as Comcast. After the announcement from the city government of Philadelphia that EarthLink would be the ISP for Philadelphia's citywide Wi-Fi network, Comcast argued that the role of local governments should be limited to that of a disinterested arbiter, and that they should not be the ones to pick Wi-Fi winners (COOPER, 2005). Carriers have also lobbied, successfully in several states, for legislation to prohibit local governments from providing broadband internet services.
Thus far, at least 14 states, including Texas, Virginia and Missouri, have enacted laws that prohibit municipal provision of broadband internet services, while Nebraska has explicitly allowed local governments to provide such services. The debate has now extended beyond the states to the federal level. Early this year, Texas Representative Pete Sessions introduced a bill that would effectively prohibit state and local governments from providing internet, telecommunications, or cable service if a private company offers a substantially similar service. Senators John McCain and Frank Lautenberg introduced an opposing bill, the so-called "Community Broadband Act of 2005", which would, by contrast, explicitly authorize local governments to deploy broadband networks. Later, Senator John Ensign introduced another bill, the "Broadband Investment and Consumer Choice Act of 2005", which would require local governments that want to offer broadband services to their residents to first notify carriers and allow them to bid for the provision of those services. While Ensign's bill does not abolish municipal governments' right to deploy broadband networks, it places heavy administrative burdens in their way, making it fairly close to Sessions' bill (TAPIA, STONE & MAITLAND, 2005). The debate over municipalities' provision of broadband internet access is ongoing in the U.S. Congress.

Conclusion

Part of the excitement surrounding the current wave of municipal Wi-Fi in the United States stems from the wide-ranging experimentation that accompanies these deployments, in areas ranging from technology and applications to institutional arrangements and policy approaches. From a technological standpoint, municipal Wi-Fi networks represent the largest deployment of mesh architecture, a chance to test in a real-life environment whether this promising architectural approach is indeed as resilient as expected, and how it truly scales. From an applications standpoint, we can expect fascinating experimentation to result from municipal governments' combined role as user and provider of networking applications, one that allows them to leverage synergies associated with simultaneously learning "by using" and "by doing" Wi-Fi networking, as well as the development of a new class of civic applications that could potentially transform the relationship between citizens and their governments. Equally interesting will be the institutional experimentation. Diverse cities, pursuing a variety of organizational arrangements and business models, will constitute many testing grounds for policies exploring alternative allocations of roles and responsibilities, and diverse combinations of public and private incentives. If only to learn the lessons from these technological, applications and institutional experiments, the municipal Wi-Fi movement should be allowed to proceed without federal restrictions. Its local character and relatively small minimum efficient scale make Wi-Fi networking perfectly adapted to experimentation at the municipal level.

F. BAR & N. PARK

123

References
BALLER J. & STOKES S. (2001): "The case for municipal broadband networks:
Stronger than ever", Journal of Municipal Telecommunications Policy, 9(3), from
th
http://www.baller.com/library-art-natoa.html [October 9 , 2005]
BAR F. & GALPERIN H.:
- (2004): "Building the wireless internet infrastructure: From cordless Ethernet
archipelagos to wireless grids", COMMUNICATION & STRATEGIES, 54(2), pp. 4568.
- (2005): "Geeks, cowboys and bureaucrats: Deploying broadband, the wireless
way", paper presented for the Network Society and the Knowledge Economy, Lisbon,
Portugal.
BENKLER Y (2002): "Some economics of wireless communications", Harvard
Journal of Law & Technology, 16(1), pp. 25-83.
CARLSON S.C. (1999): "A historical, economic, and legal analysis of municipal
ownership of the information highway", Rutgers Computer & Technology Law
Journal, 25(1), pp. 3-60.
Cooper C. (2005): "Should you have a right to broadband?" CNet News.com,
October 21st from:
http://news.com.com/Should+you+have+a+right+to+broadband/2010-1071_3-5905711.html

[October 25th, 2005]

th
COX A. (2004): "More municipalities offering the service", October 18 CNN.com,
from http://www.cnn.com/2004/TECH/internet/10/18/wireless.city [October 15th, 2005]

EWALT D.M. (2005): "Orlando kills municipal Wi-Fi project", June 23

rd,

Forbes, from

http://www.forbes.com/home/technology/2005/06/23/municipal-wifi-failure-cx_de_0623wifi.html

[October 9th, 2005]

FCC:
- (2004): "Availability of advanced telecommunications capability in the United
States", Fourth Broadband Deployment Report, September, FCC 04-208, GN Docket
No. 04-54.
- (2005): "Connected and on the go: Broadband goes wireless", Wireless Broadband
Access Task Force Report, February from:
http://hraunfoss.fcc.gov/edocs_public/attachmatch/DOC-256693A1.pdf
[October 10, 2005]
FUENTES-BAUTISTA M. & INAGAKI N. (2005): Wi-Fi's promise and broadband
divides: Reconfiguring public internet access in Austin, Texas. Paper presented at
the Telecommunications Policy Research Conference 2005 Conference, Arlington,
VA.
GILLETT S., LEHR W. & OSORIO C. (2003): "Local government broadband
initiatives", paper presented at Telecommunications Policy Research 2003
Conference, Arlington, VA.

124

No. 61, 1st Q. 2006

LEHR W., SIRBU M. & GILLETT S. (2004): "Municipal wireless broadband: Policy
and business implications of emerging access technologies", from:
http://itc.mit.edu/itel/docs/2004/wlehr_munibb_doc.pdf [October 8th, 2005]
Mayor's Wi-Fi and Beyond Executive Advisory Panel (2005): "Fast & easy: The future
of Wi-Fi & beyond in the city of Los Angeles", April 25th, from:
th
http://www.lacity.org/mayor/LA_Wifi&Beyond_0504.pdf [October 8 , 2005]
Muniwireless.com (2005): "Scottsburg, Indiana wireless network saves the
community", from:
http://muniwireless.com/municipal/projects/295 [October 16th, 2005]
Pronto Networks (2004): "Metro-scale broadband city network in Cerritos, California",
Pronto Networks Case Study, from:
th
http://www.prontonetworks.com/CerritosCaseStudy.pdf [October 8 , 2005]
REARDON M. (2005): "Local officials sound off on municipal wireless", May 3, CNet
News.com, from:
http://news.com.com/Local+officials+sound+off+on+municipal+wireless/2100-7351_3-5694248.html

[October 9th, 2005]

REED D. (2002): "How wireless networks scale: the illusion of spectrum scarcity",
presentation given at ISART 2002, Boulder, CO.
SACHMAN H. & BOEHM B. (1972): Planning community information utilities.,
Montvale, NJ: AFIPS Press.
SACHMAN H. & NIE N. (1973): The information utility and social choice, Montvale,
NJ: AFIPS Press.
TAPIA A., STONE M. & MAITLAND C. (2005): "Public-private partnership and the
role of state and federal legislation in wireless municipal networks", paper presented
at Telecommunications Policy Research 2005 Conference, Arlington, VA.
The Wireless Philadelphia Executive Committee (2005): "Wireless Philadelphia
business plan: Wireless broadband as the foundation for a digital city", from:
http://www.phila.gov/wireless/pdfs/Wireless-Phila-Business-Plan-040305-1245pm.pdf
[October 8th, 2005]
THIERER A.D. (2005): "Risky business: Philadelphia's plan for providing Wi-Fi
service. Periodic Commentaries on the Policy Debate", The Progress & Freedom
Foundation Report, from http://www.pff.org/issues-pubs/pops/pop12.4thiererwifi.pdf
th
[October 15 , 2005]
TONGUE K.A. (2001): "Municipal entry into the broadband cable market:
Recognizing the inequities inherent in allowing publicly owned cable systems to
compete directly against private providers", Northwestern University Law Review,
95(3), pp. 1099-1139.
Tropos Networks:
- (2004a): "Metro-scale Wi-Fi as city service chaska.net, Chaska, Minnesota", A
Tropos Network Case Study, from:

F. BAR & N. PARK

125

http://www.tropos.com/pdf/chaska_casestudy.pdf [October 8th, 2005]


- (2004b): "Aiirmesh Communications uses Tropos equipment to go live with
America's first unwired Wi-Fi city", Tropos Networks Press Release, from:
th
http://www.tropos.com/pdf/tropos_aiirmesh_release.pdf [October 10 , 2005]
TURNER S.D. (2005): "Broadband reality check: The FCC ignores America's digital
divide. Free Press Report, August, from:
http://www.baller.com/pdfs/FP-CU-CFA_Broadband_Reality_Check.pdf
[October 8th, 2005]
VOS E.:
- (2004): "Muniwireless.com first anniversary report", from:
http://www.muniwireless.com [October 8th, 2005]
- (2005): "Muniwireless.com July 2005 report", from: http://www.muniwireless.com
th
[October 8 , 2005]

The EU Regulatory Framework for Electronic Communications:
Relevance and Efficiency Three Years Later

Audrey BAUDRIER
ATOM, Paris I Panthéon Sorbonne University

Abstract: In 2002 the European Union implemented a new regulatory framework to oversee electronic communications services and networks across Europe. Three years down the line, the multiplicity of players and the quasi-contractualisation of their relations through the Framework Directive have complicated the European regulatory governance structure. Is the implementation of the new regulatory framework relevant and efficient? This paper uses the concept of transaction costs to find an answer to this question.
Key words: Regulatory governance structure, transaction costs and temporal specificity.

Three years ago the European Union implemented a new regulatory framework to regulate electronic communications services and networks across Europe. However, the multiplicity of players and the quasi-contractualisation of their relations through the Framework Directive have complicated the European regulatory governance structure. Is the implementation of the new regulatory framework relevant and efficient?
This question relates to the debate that followed the implementation of
the new European regulatory framework for electronic communications
adopted in 2002. This, in turn, points to the need for an economic analysis of
the relevance and the efficiency of the regulatory governance structure. The
answer to this question may determine the scope and objectives of the
future review of the regulatory framework to be conducted by the European
Commission in the course of 2006.
To understand what is at stake here, regulation needs to be considered
in the context of the regulatory changes that have affected the organization
of the electronic communications industry, an issue that is covered in the
first section. The complexity of the regulatory governance structure raises
the question of its relevance and its efficiency in the light of the concepts of
transaction costs, which is covered in the second section. In the third part of

COMMUNICATIONS & STRATEGIES, no. 61, 1st quarter 2006, p. 127.

128

No. 61, 1st Q. 2006

the paper, our analysis shows how time matters and how the temporal
specificity of the regulation device has an impact on the relevance and the
efficiency of the European regulatory governance structure.

Genesis and consequences of the European regulatory framework for electronic communications
The changes in the electronic communications sector over the last twenty
years have been accompanied by a transformation in the regulatory
coordination of these activities. The majority of EU member states have
completely or partially privatised their national public operators and founded
a national system of market regulation. These institutional changes,
generally driven by the rapid evolution of electronic communications
technologies and the structure of demand for services, led to an in-depth
reorganization of public authorities. Following the liberalization process promoted by the European Commission in the late 1990s, the economic organization of the electronic communications industry shifted from a monopolistic sector to an open and regulated one, governed at the same time by competition and other socio-economic objectives. Thus, regulation has appeared as a new mode of public action.
Unlike the model of the Federal Communications Commission in the
United States of America, there is no supranational institution to regulate
electronic communications services and networks across Europe. The first
feature of the European institutional environment lies in the complex
interplay of institutions that are involved in the regulatory process. This
process is deeply rooted in the subsidiarity principle 1, which makes it
possible to take national market specificities into account. The second
feature lies in the articulation between rules, which are mainly defined at the
European level (treaty, directives and sector regulations), and the actions
of regulatory authorities, whose scope of activities is mainly national. The
implementation of the regulatory framework is consequently entrusted to two
different types of institutions: governments and national parliaments charged

1 A principle whereby competences are distributed between member states and the Union: "in the fields which do not depend on its exclusive competence, the Union intervenes only if and insofar as the objectives of the action considered cannot be carried out adequately by the Member States and can thus, because of the dimensions or effects of the action considered, be realized better at Community level" (ISAAC, 1994: 49-51).


with transposing it into national legislation; and national regulatory authorities, whose mission is to implement the framework on a daily basis. This duality has led to the emergence of new forms of cooperation across Europe, without, however, exhausting the need for regulation functioning on a European level.
The European Regulators' Group in the field of electronic communications networks and services (E.R.G.), which brings together the national regulatory authorities of 32 European countries 2 and the European Commission, is an example of the development of consultative bodies that have conferred a growing role on national regulatory authorities. The E.R.G. was set up as a forum aimed at advising and assisting the European Commission in the field of electronic communications. It enables cooperation between national regulators and the European Commission and serves as a body for reflection, debate and advice on the implementation of the electronic communications framework, as required by article 7(2) of the Framework Directive.
The Independent Regulators Group (I.R.G.) is a complementary organization to the E.R.G. It was set up in 1997 as an informal group of national regulators. The I.R.G. serves as a platform for discussion between regulators through several working groups.
The governance structure resulting from the reform is described in Figure 1 below.
The new governance structure is composed of four levels: fundamental political choices translated into regulatory principles (level 1); the more detailed technical measures necessary to achieve legislative objectives, adopted in accordance with regulatory principles (levels 2 and 3); and, finally, control over implementation (level 4). The institutional innovation lies both in the introduction of a checks and balances mechanism, and in the networking of national regulators with European institutions. The regulatory process shows two main features: the articulation of competition law and sector rules to regulate the relevant markets; and contractual-type coordination between national regulators and European institutions, particularly through the E.R.G.

2 The E.R.G. was established in July 2002. Its members are the heads of the National Regulatory Authorities (NRAs). These comprise the 25 EU member states, the four EFTA countries (Switzerland, Norway, Iceland and Liechtenstein) and the 3 EU accession/candidate states (Bulgaria, Romania and Turkey). The European Commission attends and participates in E.R.G. meetings.


Figure 1 - Structure of the European regulatory framework

Level 1: Adoption of main principles
- The European Commission adopts formal rules (directives and sector regulations).
- The European Parliament and the Council of Ministers agree on the principles and on the definition of executive powers in the directive or in the sector regulation.

Level 2: Measures of implementation
- The European Commission, after consulting the CO.COM., asks for the advice of the E.R.G. on the technical implementation of measures.
- The E.R.G. prepares, in consultation with operators, final users and consumers, advice to be submitted to the European Commission.
- The European Commission examines this advice and presents a position to the CO.COM.
- The CO.COM. votes on the proposal within a defined time period.
- The European Commission adopts the measure or, in case of a negative vote, submits it to the Council of Ministers and to the European Parliament.
- The European Parliament is fully informed throughout and can adopt a resolution if the measures exceed the executive powers.

Level 3: Cooperation between regulators
- The E.R.G. studies common recommendations, guidelines and common standards (in the fields which are not covered by EU legislation), organizes mutual evaluations and compares regulatory practices in order to help the European Commission to ensure the implementation of rules. The E.R.G. can request experts from the Independent Regulators' Group (I.R.G.).

Level 4: Implementation and control
- The European Commission verifies that member states conform with European Union legislation.
- The European Commission can lodge an appeal against any member state that does not respect Community legislation.

This complex coordination structure can challenge economists. To what extent is this organizational choice efficient and relevant? The question of the efficiency and relevance of the organizational form of regulation is important because the electronic communications sector is at the crossroads of three important European challenges: the build-up of a large single competitive market and the development of pan-European networks; the recent widening of the European Union to ten new member states whose needs are specific; and the deepening of the market economy as a means of making the European Union the most competitive economic area in the world by 2010.
This rapid overview raises questions regarding the efficiency of the new
governance structure in the light of the concepts of transaction cost theory.

Regulation and transaction costs

Through the constraints that it imposes on economic agents, regulation is part of the economic problem of allocating resources to maximize growth and wealth. However, when one turns to economic analysis for an explanation of choices concerning the organizational modes of regulation, the insufficiency of existing tools becomes painfully apparent. The standard normative approach considers regulation as a black box maximizing social welfare. This approach does not account for regulatory failures or the effects of the characteristics of institutions, and fails to deal with the choice between alternative organizational modes for public intervention and the comparative efficiency of such choices. It thus cannot explain the relevance and the efficiency of organizational forms of regulation.
With reference to the importance of the costs associated with exchange relations, Ronald Coase highlighted the benefits of a positive approach to studying the world "such as it is" (COASE, 1988: 14). This approach is based on the comparison of the transaction costs attached to the various modes of exchanging rules. From this point of view, the characteristics of transactions, uncertainty, and opportunism are important factors to be taken into account. It is in the spirit of this approach that we choose to consider the implementation of the European regulatory framework for electronic communications.
The research programme of neo-institutional economics is based on the concepts of ownership rights and transaction costs. Through the lens of transaction costs, regulation can be viewed under two different aspects: firstly, the governance structure of regulation, which determines the roles of and the interactions between institutions; and secondly, the temporal specificity characterizing regulatory coordination.
The governance structure of regulation appears as the institutional framework in which political ownership rights between economic agents are defined and exchanged. This institutional framework determines the organizational form of regulation through its effects on quasi-contractual mechanisms, whose object is to exchange "rights to regulate", i.e. rights relating to the design, implementation, interpretation and control of rules. These exchange mechanisms conceal coordination costs, i.e. costs for organising regulatory powers and distributing implementation competences between institutions.
The institutional environment is a key element in the design of this exchange mechanism. It refers to the fundamental constraints, or the rules of the game, which frame the behaviour of individuals (NORTH, 1990), i.e. formal explicit rules (constitutions, laws, etc.) and informal, often implicit ones (social conventions, standards) that affect economic performance by making it possible to create and distribute wealth (NORTH, 1991: 97). Thus, choices in matters of regulatory governance structure are partially conditioned by the institutional environment (LEVY & SPILLER, 1994, 1996; MENARD & SHIRLEY, 2002).
According to this approach, regulation is a question of design whose two components are the regulatory governance structure on the one hand, and regulatory incentives on the other (LEVY & SPILLER, 1994, 1996). The first component covers the mechanisms by which political and legal institutions reduce arbitrary power and regulate possible conflicts. The second component relates to the rules that apply to firms, such as price, subsidies, competition and market entry. Even if the regulatory incentives affect performance, their impact (positive or negative) is manifest only if a regulatory governance structure has been implemented successfully (LEVY & SPILLER, 1994: 205).
The first of these two elements, i.e. the regulatory governance structure, is particularly important to our understanding of the relevance and efficiency of the European regulatory framework. Indeed, the regulatory governance structure determines the form and the gravity of regulatory coordination problems, and the limits of the institutional options available to solve them.
Among these institutional constraints, the vertical divisions of competences between the federal and national levels are limitations created to generate inefficiencies, which contribute to the protection of ownership rights (WEINGAST & MARSHALL, 1988). For example, the federal Constitution of the United States of America intentionally imposed friction between and inside executive branches in order to increase administrative costs and to lengthen the period of time required to implement or to change policies (CHERRY & WILDMAN, 1999: 613). Thus, we can conclude that organizational inefficiency is a means of fragmenting powers to safeguard some individual rights.
One way of fragmenting powers involves the temporality of the regulation process. Few research papers refer to this notion (QUELIN & RICCARDI, 2004: 74). However, in addition to the usual concept of physical asset specificity, "temporal specificity" is worth mentioning (CROCKER & MASTEN, 1996: 8, 27). Indeed, this form of specificity refers to a situation whereby it is difficult to replace a missing partner within a reasonable time period, or whereby there is a very tight time limit for realizing transactions (BICKENBACH, KUMKAR & SOLTWEDEL, 1999: 3). In these situations, one of the partners is dependent on the other. There then occurs a form of temporal monopoly resulting from the fact that one partner is, to some extent, dependent on the other's action (MacKAAY, 2004: 12-13).
The notion of temporal specificity seems a particularly judicious tool for analysing the relevance and efficiency of the implementation of the EU regulatory framework in electronic communications.

Analysis of the implementation of the EU regulatory framework
In order to apply the notion of temporal specificity to our empirical subject of analysis, it is necessary to isolate a relevant sequence of regulatory relations forming a coherent "system". As a result, we have chosen to study the relations between national regulatory authorities and European institutions within the relevant market analysis framework. We show that these relations form a complex web that lends itself to a study in terms of coordination costs.

The choice of the empirical subject of analysis


The regulation of relevant markets is governed by the Framework Directive, which defines the contractual-type relations between national regulators and European institutions, and the relations between national regulators and firms.


The legal framework is designed so as to allow ex ante regulation only if the degree of competition in some markets is considered insufficient at the end of an analysis based on competition law methodology. This analysis consists of defining the relevant markets, evaluating the degree of competition in these markets, designating the powerful operators, and applying or, on the contrary, withdrawing regulatory obligations for these operators.
This contractual perspective introduces a dynamic vision of interactions between institutions and market players, which enables us to analyse the efficiency of the new governance structure according to its transaction costs. These interactions, which consist of transferring regulation rights, are governed by detailed procedures that conceal coordination costs, i.e. the costs of organizing regulatory capacities and of distributing executive competences between institutions.
Contrary to most studies, which are based on the traditional criteria of economic analysis to provide methodologies or indicators for evaluating effective competition in relevant markets, our approach is based on the institutional coordination problems raised by the interplay of the various institutions concerned with relevant market regulation. More specifically, our approach focuses on the temporal specificity of the regulatory device to evaluate its relevance and efficiency.

The temporal specificity of relevant market regulation


EU regulatory coordination is burdened by heavy time constraints and by uncertainty. This paper examines the temporal specificity and the frequency with which relevant market regulation is implemented, as well as the uncertainty surrounding the conditions for implementing regulation. We show that the efficiency and relevance of the coordination device depend heavily on the deadlines and conditions necessary for its setting up.
The temporal specificity arises from the fact that the production and delivery modes of products and services evolve at the rate of technological progress. As the characteristics of products and services change, demand- and supply-side substitution effects also evolve. Due to the rapid evolution of electronic communications markets, the relevance of any regulatory coordination system depends heavily on the time required to set it up. The time required by institutional coordination is one factor that makes the various parties dependent on each other. This interdependence is reinforced

by the fact that relevant market regulation is a prospective exercise that should be renewed every 18 to 24 months.
Figure 2 below illustrates the European regulatory process for relevant market regulation.
Figure 2 - European regulatory process

- The national regulator defines the relevant market, designates the operators with significant market power and drafts a regulatory measure; pre-notification meetings on draft regulatory measures take place during this phase (24 weeks on average).
- The national regulator carries out a public consultation (+ 6 weeks).
- The national competition authority gives its advice (+ 6 weeks).
- The regulator notifies the draft measure and modifies it if need be; other national regulators and the CoCom give their advice (+ 4 weeks).
- The European Commission indicates its agreement or its disagreement. In case of disagreement, it opens a further examination, during which other national regulators can present their observations (+ 8 weeks).
- If the European Commission does not exert its veto, the draft measure is ratified and the national regulator adopts and implements the measure. If the Commission exerts its veto, the national regulator modifies or redrafts the measure.
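As a simple consistency check on the durations in figure 2, the phases can be tallied sequentially. The sketch below is our own restatement; it counts the competition authority's 6-week advice as a sequential step, which is how the 48-week average quoted below is obtained:

# Cumulative duration of the relevant-market regulatory process
# (weeks, as quoted in the text; phases counted sequentially).
phases = [
    ("market analysis and draft measure (average)", 24),
    ("national public consultation", 6),
    ("advice of the national competition authority", 6),
    ("observations by the Commission and other regulators", 4),
    ("Commission examination in case of serious doubts", 8),
]

elapsed = 0
for name, weeks in phases:
    elapsed += weeks
    print(f"{name}: +{weeks} weeks (cumulative: {elapsed})")

print(f"total: {elapsed} weeks on average")  # -> 48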

Figure 2 shows that relevant market regulation requires a total of 48 weeks on average to respect all the procedures to be followed by institutions and stakeholders (national regulatory authorities, European Commission, etc.). In order to understand the specificity of the relations implemented in the European regulatory process, it is necessary to analyse its different phases.
For national regulators, the first phase consists of defining the relevant market in question, designating the operators with significant market power and drafting a regulatory measure. This first phase is important insofar as any delay lengthens the prescribed schedule for the following phases. On average, national regulatory authorities require 24 weeks to complete this first phase. This duration varies from one member state to another according to the financial and human resources available to the national regulator 3.
The regulator then carries out a national public consultation on the draft measure for 6 weeks. It transmits the draft for advice to the national competition authority, which comes to a decision on the definition of markets and the designation of powerful operators within 6 weeks.
The European Commission and other national regulatory authorities are subsequently notified of the measures considered and their motivations, giving these bodies 4 weeks to present their observations. In two cases, the European Commission can delay its decision for 8 weeks and has a right of veto: when the decision aims to define a market that is not already defined in the European Commission's recommendation, and when the decision aims to designate one or more powerful operators.
A veto by the European Commission intervenes following a three-step procedure (letter of serious doubts, opening of a second examination, then, if necessary, veto). If the European Commission considers that the regulator's draft is not compatible with Community legislation, it can raise serious doubts. The European Commission must explain the reasons for any decision to delay its examination for 8 weeks. This means that regulators are given precise information regarding the points that have been criticised and can answer these charges. Other regulators can present their observations during this time.
Before making its decision, the European Commission consults the Communications Committee (CoCom), whose opinion is not binding. Any decision by the European Commission must be accompanied by a detailed and objective analysis of the reasons why it considers that the draft measure should not be adopted, as well as precise proposals relating to the modifications to be made to the draft measure.

3 National regulators informed the Commission of their market analyses in a disparate way and
at different times. While seven regulators (the United Kingdom, Finland, Ireland, Portugal,
Austria, Sweden, Hungary) had notified the Commission of at least one market analysis by the
end of January 2005, twelve regulators (Belgium, Cyprus, Spain, Estonia, Italy, Latvia,
Lithuania, Luxembourg, Malta, Poland, Czech Republic, and Slovenia) had still not notified the
Commission of any market analysis by this date.


In the end, it appears that the multiplicity of players and the quasi-contractualisation of their relations complicate the governance structure. The regulatory framework, by regulating markets on a national and Community level, is admittedly an asset insofar as it offers institutions a degree of flexibility and adaptability. However, the complexity of the organizational structure, by multiplying coordination and temporal costs, could ultimately be detrimental to market development. Thus, although the decentralization of implementation powers seems in accordance with the subsidiarity principle, the long and onerous procedures involved in the relevant market regulation device are likely to harm its relevance and its adequacy in terms of building a single electronic communications market.

Conclusion

Our study shows that it is important to take into account the temporal specificity of regulation in designing a regulatory governance structure. The governance structure resulting from the reform is relatively effective, insofar as it meets the need for guarantees against discretionary power and the inherent risk of opportunism. The complexity of the institutional structure, through the dispersion of powers and competences, seems deliberate. The regulatory governance structure serves legitimate interests such as political balance and transparent relations between national regulatory authorities, European institutions and firms.
However, the multiplicity of players and the quasi-contractualisation of their relations complicate the governance structure. Is this complexity, by multiplying coordination costs and temporal costs, detrimental to the fast-moving electronic communications markets? How can such problems be prevented?
These questions raise important concerns to be taken into account when
the implementation of the EU regulatory framework is reviewed in the future.


References
BICKENBACH F., KUMKAR L. & SOLTWEDEL R. (1999): "The New Institutional Economics of Antitrust and Regulation", Kiel Working Paper no. 961, Kiel Institute of World Economics.

CHERRY B.A. & WILDMAN S.S. (1999): "Institutional Endowment as Foundation for Regulatory Performance and Regime Transitions", Telecommunications Policy, 23.

COASE R. (1988): The Firm, the Market, and the Law, The University of Chicago Press, Chicago.

CROCKER K.J. & MASTEN S.E. (1996): "Regulation and Administered Contracts Revisited: Lessons from Transaction-Cost Economics for Public Utility Regulation", Journal of Regulatory Economics, vol. 9, pp. 5-39.

ISAAC G. (1994): Droit communautaire général, 4ème édition, Collection Droit Sciences économiques, Masson.

LEVY B.D. & SPILLER P.T.:
- (1996): Regulations, Institutions, and Commitment: Comparative Studies of Telecommunications, Cambridge, UK: Cambridge University Press.
- (1994): "The Institutional Foundations of Regulatory Commitment", Journal of Law, Economics and Organization, 10 (Fall), pp. 201-246.

MacKAAY E. (2004): Le théorème de Coase, version préliminaire, chapitre "Analyse économique du droit II. Institutions juridiques", Editions Thémis, Montréal, et Bruylant, Bruxelles.

MENARD C. & SHIRLEY M. (2002): "Reforming Public Utilities: Lessons from Urban Water Supply in Six Developing Countries", The World Bank.

NORTH D.C.:
- (1991): "Institutions", Journal of Economic Perspectives, vol. 5, no. 1, pp. 97-112.
- (1990): Institutions, Institutional Change and Economic Performance, Cambridge University Press.

QUELIN B. & RICCARDI D. (2004): "La régulation nationale des télécommunications: une lecture économique néo-institutionnelle", Revue française d'administration publique, no. 109, pp. 65-82.

WEINGAST B.R. & MARSHALL W.J. (1988): "The Industrial Organization of Congress; or, Why Legislatures, Like Firms, Are Not Organized Like Markets", The Journal of Political Economy, vol. 96, no. 1, February, pp. 132-163.

Modelling Scale and Scope in the Telecommunications Industry:
Problems in the Analysis of Competition and Innovation (*)

Alain BOURDEAU de FONTENAY
Columbia University Institute for Tele-Information (CITI), New York

Jonathan LIEBENAU
London School of Economics and Columbia University, New York

Abstract: A theory of scale and scope that takes into account the endogenous nature of technology and the contextual manner in which systems architecture and functionality are shaped by market structures requires an alternative approach to modelling and analysis. Following on from "A new view of scale and scope in the telecommunications industry; implications for competition and innovation" (BOURDEAU et al., 2005), we apply the concepts of embeddedness, integration and competition to show how the current models can be improved. We also show how the many-layered "network of networks" can be evaluated.

Key words: scale and scope; modelling; innovation and competition

Introduction: the problem of modelling scale and scope

Regulators, business strategists and industry analysts face major challenges in trying to understand the structural transformations that the telecommunications industry is undergoing. One major inhibitor to our ability to conceptualize an industry structure differing from current forms is that we generally start with inappropriate assumptions about how to model scale and scope for network industries. An alternative approach to analyzing scale and scope will provide us with the means not only of understanding the status of alternative structures, but also of seeing how the dynamic characteristics of such transformations can emerge.

(*) We are extremely grateful to James Alleman, Dimitris Boucas, Paul David, Catherine de
Fontenay and Christiaan Hoggendorn for their influential assistance.

COMMUNICATIONS & STRATEGIES, no. 61, 1st quarter 2006, p. 139.

No. 61, 1st Q. 2006

140

Our approach entails a reconsideration of firm structures in a dynamic market context (e.g., STIGLER, 1951; CHANDLER, 1990; WILLIAMSON, 1985; and YANG, 2001) and addresses the static-technology models that characterize neoclassical analysis (ARMSTRONG, 2002; WOROCH, 2002). By starting from a dynamic market framework, we can demonstrate that scale and scope economies of the kind that can generate substantial market growth and profit opportunities today operate at a network-of-networks level, rather than at a firm level. Furthermore, the "network of networks" structure is many-layered, which means that economies need to be evaluated at the level of individual activities, not "end to end" vertical services.
Consequently, the legacy structure of the incumbents, characterised most notably by their vertical and horizontal integration, serves as both a handicap to and an opportunity for their greater profitability. Their structure, and the technology they developed to support their business form, protects legacy markets by constituting entry barriers, but that structure also prevents incumbents from responding fully to market signals, investing efficiently and, ultimately, sustaining their competitive advantage.

The problem of data

The most concrete (and often only) data available are the data that describe the incumbent's performance. Those data, at the aggregate level, do not reflect competitive market forces, except to the limited extent that incumbents' stock shares are publicly traded in the economy-wide capital market. Therefore such data do not offer an understanding of the competition problem and the dimension of the challenges actually faced by incumbents and entrants alike. Even if cost data are adjusted through cost allocation formulae, these methods are at bottom arbitrary and without any direct link to what firms could be expected to do when confronted by competitors. The problem this creates can be illustrated by Gould's 1889 monopolization of all the railroad facilities in Saint Louis, Missouri (LIPSKY & SIDAK, 1999). Through this action, all market signals about the relative value of the various bridges, stations, etc. disappeared. Information about the efficient use of assets in a monopoly-dominated environment is similarly difficult to come by. While there have been occasional small steps to recognize and correct some of these problems (see SIDAK & SPULBER, 1997), most are grossly inadequate, if not misleading (ECONOMIDES, 1997) 1. Much more significantly, none of the attempts reflect the proper characteristics of incompletely developed markets, yet the existence of a competitive market that is unaffected by any one firm's decision to integrate vertically is set out by COASE (1937) and WILLIAMSON (1985) as a prerequisite 2. The fundamental problem appears to have been acknowledged by Spulber when he developed the "market-determined" ECPR [M-ECPR] access pricing method.
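For reference, the ECPR (or Baumol-Willig) rule discussed here is conventionally stated as follows; the notation is ours, not that of the papers cited:

$$a = c_A + (p - c_R)$$

where $a$ is the access charge paid by the entrant, $c_A$ the incremental cost of providing access, $p$ the incumbent's retail price and $c_R$ its incremental retail cost, so that $(p - c_R)$ is the retail margin the incumbent foregoes when it loses a downstream customer. Under perfect regulation, with the retail price driven to cost ($p = c_A + c_R$), the opportunity-cost term vanishes and $a$ reduces to the incremental cost of access, which is the sense in which footnote 1 notes that ECPR collapses to LRIC.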
The distortion of economic analysis by inadequate data has myriad ramifications. For example, FUSS & WAVERMAN's (2002) work implies that it is impossible for an entrant to recover its fixed cost from the incumbent's technology unless it duplicates the entire range of outputs the incumbent produces. This means that cost-based pricing methods that do not look at alternative technologies will be biased in favour of the legacy technology. Cost-based pricing approaches such as ECPR accept the legacy cost structure as efficient and free of entry barriers. Once the basic problems of these approaches are understood, it is not unreasonable to argue that prices so derived exceed the market value that would emerge as competition becomes established 3.
The inadequate data problem is much more than a problem for regulatory
debates, although many of these debates have important outcomes for the
players and policy. It is, moreover, a fundamental challenge to the internal
management of incumbents and would-be entrants alike. Without an ability
to know more about how to efficiently deploy assets to best meet evolving
markets, there is an unusually high degree of risk attached to making
business decisions. Useful thinking about what competition might look like
can be accomplished using our estimation approach. Achieving this
successfully relies primarily on using the limited historical experience we
have had with elements of competition and carefully extrapolating from
them 4. It also implies integrating into one's model assumptions that have at
least some market basis.

1 SIDAK & SPULBER (1997, p. 371) do consider the social opportunity cost, but only when
regulation is perfect. At that point, the opportunity cost that incumbents would receive for
foregoing a downstream customer is zero and the "efficient component pricing rule" [ECPR]
reduces to LRIC. They do not consider how the private opportunity cost deviates from the social
opportunity cost at other regulated prices in the form of a private rent for the incumbent.
2 WILLIAMSON (1976) illustrates what a player with market power may be able to do and how
neglecting the competitive assumptions can produce totally different results. His analysis helps
to highlight the contrast between his competitive model and a model where a player has market
power.
3 TELRIC ("total element long run incremental cost") represents a conceptual step toward
correcting cost-based pricing from this inherent "monopoly-centric" bias (MANDY & SHARKEY,
2003).

Modelling

In building useful models, we believe it is important to begin with an
understanding of the policy change toward competition and why it is both
responsive to, and itself an instrument for changing, the technology and
industry structure environment.
We have previously discussed how scale and scope economies have
been misunderstood and how that misunderstanding has clouded a
realistic picture of what is happening in today's telecommunications
marketplace (BOURDEAU de FONTENAY et al. 2005). At the same time, an
accurate understanding of scale and scope economies at work in the sector
is critical to industry members, potential investors and policy makers who
must navigate today's uncharted waters. To be useful for this purpose, scale
and scope measurements must be considered at both a sector level and the
level of the individual firm, although properly applied economic analysis
casts strong doubts about there being efficiency characteristics to most
currently alleged economies. Measurements at the product (or function) level
can also be important, because they define the parameters of technology
givens.
At the sector level, we have observed that a government-acquiesced
monopoly is not compatible with the common neoclassical hypothesis of an
exogenous technology that constrains the monopoly's decision-making
ability. Effectively, the technology hypothesis is currently being improperly
used, with the consequent perception that the technology restricts the
incumbents' degrees of freedom when, in fact, it is the reverse that holds.
Therefore, observed legacy scale and scope does not inform us about what
competitive efficiency may be like other than highlighting the complexity that
is involved in a transition process from legacy to competition. The
incumbent's scale economies are of no forward-looking value to the policy
planner implementing open entry.

4 For the policy maker, there is the Schumpeterian model that all works out in the end - but it
may take a very long time to work itself out. That model is of little use to the businessperson.
Besides, the approach implied by STIGLER (1951) and CHANDLER (1990) should be more
efficient once the time dimension and its associated cost are taken into account.
Telecommunication systems almost always exist as layers within
networks of networks 5. In addition, they often tend to be interconnected. In
every dimension they are complex systems. For instance, at the access
level, an IP or ATM layer may overlay a DSL layer, itself layered over the
construction layer of the infrastructure. Even the infrastructure layer is
typically layered on more basic layers like poles or conduits that are built
along city streets, country roads and railways.
Although it is rarely modelled in this way, economies of scale and scope
can be assessed at any of these levels and even among them. Properly
assessing them can help us to understand the cost advantages and
disadvantages in structures, such as between BT and Kingston
Communications, or the internet compared with centralized networking.
Ultimately, such investigations help us to discern the critical trade-off points
between markets and organization that Coase described in his theory of the
firm (COASE, 1988). That knowledge can in turn better enable planners to
recognize the potential in and for markets, as well as for optimizing asset
use.
It is also necessary to translate the implications for modelling and
planning so that appropriate benchmarks can be applied to assess changes
in efficacy of network layers. For an existing firm, over a short period it can
be hard to adjust production processes, especially those having significant
fixed and sunk costs. In such circumstances short run economies of scale
and scope can be relatively high 6. Similarly, a change in the output
measure, from time-based circuit usage to "always on" access, for example,
will have a significant and often hard-to-predict impact on economies of
scale and scope, even though nothing is changed in the technology itself.
Still, established firms often need to change in response to changing market
conditions. Frequently it is changing scale and scope opportunities made
possible by technology or competitive innovation that pressure such moves.

5 PUPILLO & CONTE (1998) is the only econometric study to date that has taken a credible
step toward looking at activity and layer-specific economies of scale and scope.
6 Properly, we should speak of returns to a set of factors that are changed in a fixed proportion,
while the remaining factors are held constant. It is a basic result of classical economics that,
eventually, those factors that are increased 'to scale' will exhibit diminishing returns. In
many situations, say, the local loop, those diminishing returns are rarely reached before the
telephone company increases capacity. The same is true of most industrial processes since few
things are as harmful to a firm as not being able to satisfy demand, and prices can rarely be
changed, even in a competitive sector, as flexibly as presumed in theoretical neoclassical
models. It follows that in most industrial processes, and especially in telecommunications, the
fixed costs or inputs that cannot be changed in the short run do not act as a constraint, since
they are available in excess capacity (CAVES & BARTON, 1990). For such reasons, in spite of
the eventual diminishing returns to a subset of inputs, telecommunications firms commonly
operate where there are indeed substantial increasing returns to variable factors. This factor is
rarely considered in policy debates, where most assertions about scale economies do not even
consider the long run and only refer to situations where a subset of inputs are held constant.
We talk of short-run economies of scale to specify those assertions, found in the literature, in
reports published by operators and in regulatory decisions.
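
The distinction drawn in footnote 6 between scaling all inputs and scaling
only a subset can be made precise. A sketch in standard production-theory
notation (the notation is ours, not the authors'):

% f(x, z): output from variable inputs x and quasi-fixed inputs z.
% Long-run (true) increasing returns to scale:
\[
  f(\lambda x, \lambda z) > \lambda f(x, z) \quad \text{for } \lambda > 1
\]
% The short-run "economies of scale" asserted in policy debates instead
% hold z fixed at \bar{z} and scale only x:
\[
  f(\lambda x, \bar{z}) \gtrless \lambda f(x, \bar{z})
\]
% With \bar{z} carried in excess capacity (loops, conduits, poles), the
% short-run measure can show large increasing returns to the variable
% factors even where long-run returns are constant or diminishing.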
Sunk costs and monopoly legacy may be the cause of apparent
economies within an existing structure, but they may not relate to true
economic efficiencies that benefit the firm (or society). Instead, these factors
may have both inefficiency and anticompetitive consequences.
The example of the numbering plan invention can again be used to
illustrate this point. Once trunking was placed under the control of a
ubiquitous monopoly service provider, the manufacture of specialized
equipment became possible that might not otherwise have been developed
under open market conditions.
In the American networking environment, one might think of a class 4
switch that routes large amounts of traffic across regions to a small set of
assigned points. Economists recognize that specialized equipment can
reduce the time of producing a unit of output, but the set-up time required for
such specialized equipment can be high 7. Therefore, small firms, and those
operating in a competitive environment, might instead choose to use more
general-purpose equipment "off the shelf." Such a decision may make the
most sense in a market environment where costs are constantly challenged.
The class 4 switch represents a large commitment of "sunk" resources that
may make it harder to introduce innovation, but which does provide a
substantial advantage for a limited set of functions.
It is probably inefficient as a technology when compared to distributed
processing, and it may well constitute a barrier to entry, although by some
measures it improved efficiency. Regardless of the choices made, the
normal systems model poorly informs operators about the efficiency of their
investments.

7 See virtually any text on industrial organization, such as WALDMAN & JENSEN (2000):
Industrial Organization, Addison-Wesley.


Whether the incumbent's net short-run costs, i.e., the sum of all its implicit and
explicit short-run costs, are negative or positive, the entrants' corresponding
short-run costs are unlikely to be negative, and hence unlikely to produce a
net benefit. The incumbent's net direct costs are always negative. As far as
incumbents are concerned, it is futile to compute meaningful implicit short-run
costs from accounting data. That does not mean that the situation is hopeless.
Those costs relate primarily to factors such as the incumbent's obligation to
provide certain services, universal service, etc. ARMSTRONG (2000) provides a
partial list of these costs, as well as the costs borne by incumbents. If the total
cost, i.e., the sum of the direct and indirect costs, is higher than it would be
under some alternative organization, then it is reasonable to argue that the
incumbent, in the light of its fiduciary obligations to its owners, would choose the
alternative organization. An alternative organization would consist in some
divestiture of those elements of the business that are the source of losses. The
advantage of such a divestiture is its ability to provide an improved estimate of
the incumbent's local costs. If we assume that the regulator has no intention of
making the local loop financially unviable, it is rational to conclude that the
regulator will act on the revealed cost of the local bottleneck and impose
reasonable access tariffs without concerns about cross-subsidization. This
scenario implies that the benefits the incumbent derives from the existing
organization are superior to those derived from a divestiture, i.e., that the
incumbent is able to derive a rent from the service obligations. That rent was
probably in the form of influencing public authorities to set access conditions
that would make competitive entry less likely. If this were not the case, the kinds
of arguments found in the literature or in such places as Justice Breyer's
dissents would make it a fiduciary obligation for incumbents to seek such a
divestiture. One of the authors was asked informally, as a member of a team of
experts, to prepare a proposal for an overseas incumbent to evaluate whether
the latter would not be better off divesting itself of its local networks in light of
what it saw as new and exceptionally extreme regulated access pricing
conditions. Within weeks that incumbent changed its mind (not its public
relations campaign). It never divested itself of its local network. Similarly, a few
years ago BT made a proposal to divest its local facilities. After having received
two offers within a very short time, it pulled out of this project and is still
vertically integrated. In all of these situations and in many others, had service
obligations truly been a net cost, those incumbents would have had the
opportunity to reveal their implicit losses by placing the source of those losses
on the market and thus revealing the actual size of those so-called losses. On
the contrary, it is probable that an increase in their fixed costs, including an
increase in uncertainties, can only undermine new entrants' chances of
success. From that perspective, what we observed beginning in 2001, with the
fall of competitor after competitor, is hardly surprising. Moreover, the
correspondence between economies of scale and high entry costs is generally
taken for granted by economists (WOROCH, 2002) and some regulators (e.g.
POWELL, 2001) as an efficient outcome, yet there is no logical basis for such a
conclusion.
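
The revealed-preference argument running through this passage can be
compressed into a single inequality; the notation is ours and purely
illustrative:

% \pi_int: incumbent's profit under the existing integrated structure
% \pi_div: profit after divesting the loss-making elements
% L: implicit losses from the service obligations; R: the rent those
%    obligations help secure (e.g., entry-deterring access conditions)
\[
  \pi_{\mathrm{int}} = \pi_{\mathrm{div}} - L + R \;\ge\; \pi_{\mathrm{div}}
  \quad\Longleftrightarrow\quad R \ge L
\]
% Observed non-divestiture thus reveals that the rent at least offsets the
% so-called losses; otherwise fiduciary duty would push the incumbent
% toward the divestiture it never undertakes.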

The competitive failure that comes about when investors are reluctant to
fund entrants' non-recurring costs is clearly a benefit to the incumbent
(DAWSON, 2002). However, this benefit is likely to be short run since
innovations that might have assisted incumbents to better anticipate and
react to market changes are eliminated (HORAN, 2002). Entrants who
anticipate this will be far better informed about effective entry conditions and
thus far more dangerous.


Firm-based economics versus practices

In an interconnected networking economy, the firm's production structure
becomes disconnected from its output 8. Firms correspond to piece-parts of
the overarching greater network web and are no longer entirely
self-sufficient. This is a feature of the emerging informatics governance
structure.
In even a simplified model such as the one studied by SENGUPTA (2001) 9,
economies of scale and scope have been shown to depend upon the
system's governance and its institutions. Observed "ex post" economies of
scale and scope include the inefficiencies that are integral to its governance
structure. Consequently, one must start the study of economies of scale and
scope at a level where there is some reasonable hope of isolating
engineering and organizational dimensions from other factors, say, agency
problems due to information asymmetries. This can only be done at both the
most elementary levels, i.e., at individual layers within the operator's
production process (reduced to reasonable geographic areas to address
horizontal integration concerns) and at the complete system level. Literature
on telecommunications economics generally does not get to these levels.
We must also incorporate the time dimension required of useful analysis,
that is, the need to recognize and accommodate short run and long run cost
differences. We know that short-run pricing does not provide a stable
long-term solution since it needs to cover only variable costs. In the case of a
sector where costs are largely fixed, whether the economies of scale and
scope properly reflect the long-run is quite relevant and generally has not
been subjected to much scrutiny.
A further dimension that needs to be taken into account is the process of
output. Here the specification becomes hard to interpret where scale and
scope are dealt with at the network level. Since the time of Marshall, the
analysis of economies is typically presented in the context of a plant, i.e., a
geographically constrained place with a small number of discrete functions
that can be modelled as a black box with fixed inputs and outputs. However,
a product only becomes a commodity, and subject to analysis, once it is
traded. The problem is far greater in the monopoly environment where many
activities are not subject to trade. This can be illustrated by the local loop.
One could consider the output in terms of a capacity measure, a usage
measure or many other ways. In this situation, one output measure, such as
the number of calls, as in Australia, implies a very different form of scale and
scope analysis from an output measure such as the flat-rate charge for the
line used in New Zealand. In addition, calls and usage become, at a
minimum, customer-specific, with the need to specify both origination and
termination. A local loop situated in a city such as Wellington, New Zealand
will also have different characteristics from one situated in, say, Montpellier
in France because of the characteristics of the soil, the way the cities are
laid out, the way people live, etc. In practice, those characteristics are
averaged and typically treated as differences in efficiency 10.

8 See, for example, GULATI, LAWRENCE & PURANAM (forthcoming).
9 While Sengupta's analysis is used to study a very different environment, he is studying a
complex and fully interconnected network. Individual agents' opportunistic behaviour is an
integral dimension of his analysis and the extrapolation to, for example, a telecommunication
network appears to be straightforward. His analysis covers most of the issues addressed in the
economic analysis of networks, including the issues of network externalities. Evidently, his
scope is much broader since he also studies the institutions and their evolution. His analysis is
also interesting because, both at the activity level and at the level of the complete network, it is
characterized by very large economies of scale with very large inter-firm dependence.
The unique nature of a telecom incumbent's integration warrants a
careful analysis of its key determinants and how those determinants affect
the balance between efficiencies and inefficiencies. GASMI et al. (2002)
show that, under information asymmetries, a profit-maximizing incumbent
will automatically discriminate against competitors in making perfectly
natural day-to-day decisions as basic as resource allocation. The initial
question to ask then is, what determinants might affect our best guess of the
efficiency's cost/benefit of vertical integration? This effectively leads us, after
weighing all the determinants, to evaluate the possibility of inefficiency
caused by the aggregation of functions within the incumbent firm's
management structure.
The interpretation of vertical integration is sensitive to the environment
within which it is observed. COASE (1988) and WILLIAMSON (1971, 1985)
have shown how one could establish the efficiency of vertical integration in a
competitive environment. A firm that integrates functions that are more
efficiently produced by a market places itself at a cost disadvantage vis-à-vis
its competitors. Correspondingly, properly integrating a function within a firm
means that the function is not efficiently separable and exhibits scale or
scope economies. As this does not easily lend itself to clear product or
functional segmentation, it lacks the many intermediate markets that could
be expected in a functionally competitive environment. Thus, there is little
hope that meaningful discussion can be had of upstream and downstream

10 Cost models, especially engineering-based cost models, attempt to address those factors.

148

No. 61, 1st Q. 2006

markets and their technology the starting point of most integration


analysis. ARMSTRONG (2002) considers another possibility, namely, a
monopoly that does not produce an intermediate good, i.e., whose
production process is not separable, or, equivalently, that is not vertically
integrated. Armstrong assumes also that the monopoly is able to produce an
intermediate good at a cost (to compensate for the lack of separability) 11.
Under those circumstances, the entrant has to be sufficiently innovative to
be able to compensate for the incumbent's cost for producing the
intermediate good. In addition, because there is no intermediate goods
market, it would have limited or no ability to extract rent from its innovation.
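
The entry condition implicit in this paragraph can be written out; the symbols
are ours, a gloss rather than Armstrong's own formulation:

% \kappa: incumbent's extra cost of producing the intermediate good
%         (compensating for the lack of separability)
% \Delta: entrant's innovation/efficiency advantage downstream
\[
  \text{entry is viable only if } \Delta \;\ge\; \kappa
\]
% and, absent an intermediate-goods market, even a successful entrant
% has little or no ability to extract the residual rent \Delta - \kappa
% from its innovation.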
ARMSTRONG's model is commonly found in the literature that considers the
(dis)incentive of incumbents to invest when unbundling is mandated
(HARRIS & KRAFT, 1997; JORDE, SIDAK & TEECE, 2000; QUIGLEY,
2004) 12. The model's construction and the assumption that the unbundling
rate is too low result tautologically in the incumbent's disincentive to invest.
In those models, the assumption that the vertically integrated incumbent
would base its investment decision exclusively on the aggregate returns of
the integrated upstream and downstream activities is inconsistent with its
profit maximization. The incumbent has to consider the probability of a more
efficient downstream provider who would take advantage of such greater
efficiency (STIGLER, 1951). A former Hewlett-Packard CEO illustrates this
process:
"We used to bend all the sheet metal, mould every plastic part that
went into our products. We don't do those things anymore, but
somebody else is doing them for us." 13

If incumbents do not follow that path, then they are not subject to
competitive discipline in a meaningful way, and an incumbent's actions,
such as investment and resource allocation decisions, may not be efficient
for society or its shareholders.

11 While Armstrong lists a number of potential market failures that can be expected to affect
consumers, entrants, and the monopoly incumbent respectively, he models only the gross cost
impact of universal service on incumbents.
12 JORDE, SIDAK & TEECE (2000) state that "[m]andatory unbundling decreases an ILEC's
incentive to invest in upgrading its existing facilities by reducing the ex ante payoffs of such
investment... It makes no economic sense for the ILECs to invest in technologies that lower its
own marginal costs, so long as competitors can achieve the identical cost savings by regulatory
fiat" (p. 8). Their approach implies that there is no way for incumbents to differentiate between
the return on upstream and downstream activities. Yet, it is interesting to note that their results
are only based on assertions about TELRIC being too low that they have not established and
that cannot be established.
13 Quoted from BESANKO, DRANOVE & SHANLEY (2000), p. 109.


The task of identifying conditions that may help decide, in a particular
setting, whether the integration of functions is efficient is most usually based
on game theory. While game theory is ideal for studying comparative-static
environments that can be differentiated by their level and form of vertical
integration, the telecommunication sector raises unique challenges. Of all of
them, perhaps the most basic challenge is the lack of experience with
competition, hence the problems associated with establishing a benchmark.
The problem requires some attempt to identify the issues inherent in
transitioning from monopoly to competition and constructing a transition
path.
Incumbents and policy makers tend to emphasize sunk costs in the
implementation of competition. Insofar as competition takes place through
the duplication of facilities, then the problem is a real one. While we do not
know enough, at the most basic level, it may well be that we are going
through what Justice BREYER (1979, 2004) saw as a major policy concern,
the wasteful duplication of facilities 14. The danger of wasteful duplication of
facilities was addressed to some extent when Congress forced U.S.
telephone companies to provide cable operators with access to the
telecommunication zone of poles and, later, to ducts and conduits. Those
conditions have been extended to some extent to new entrants. The
economies of scale and the sunk costs dimensions of this form of real estate
are real. While CAVE et al. (2002) acknowledge Spulber's conclusion that
sunk costs are probably relatively low, they also acknowledge examples by
Woroch that would suggest economies of scale and sunk costs in parts of
the local network. They consider that the low sunk costs, if real, stop at the
local loop.
In practice, we argue that discussions of economies of scale and of sunk
costs have little meaning as long as what they relate to has not been clearly
specified. In CAVE et al. (2002), such discussions always relate to wireline
telephone companies as we still know them. The same is true for Spulber
and Woroch. With such an aggregation of heterogeneous activities,
concepts such as scale and sunk costs can only be ex post concepts that
have no policy or business relevance. Once we look at activities such as,
say, conduits or cables or switches, we observe significant scale economies
and significant sunk costs that are restricted to small geographical areas,
say, sections of Newark, NJ or of Montpellier, France or Wellington, NZ.

14 See BOURDEAU de FONTENAY & LIEBENAU (forthcoming).


Those economies of scale and sunk costs do not relate to the incumbent
telephone companies we observe today.
When we look at higher layers, the layer of interconnected networks, we
observe large economies of scale at the aggregate level, i.e., at the network
of networks. However, once more those economies are unrelated to any
individual operator. Whether those individual networks are large or small has
little to do with economies of scale and with sunk costs. The only economies
of scale that are relevant are the economies of scale that all networks are
able to achieve when they are interconnected into a network of networks.
Similarly, the sunk costs are related to limited geographical areas,
neighbourhoods in cities, and the extent to which they are actually "sunk,"
i.e., not recoverable, is questionable. Individuals in any of those
neighbourhoods would still have the same demand for telecommunications
services, i.e., there would be entrepreneurs willing to take over those
facilities to continue offering those services. This may not be done at the
price existing incumbents may like, but this may reflect nothing more than,
for example, increased efficiency in the use of rights of way, conduits, and
poles. Telephone companies as monopolies created artificial costs by
refusing to share conduits or ducts, while new entrants have been efficient at
making use of existing rights of way such as sewage systems, canals, and
subways. So, we can see that in these cases, the problem of sunk costs is
largely fictitious. It mostly reflects the organization incumbents have chosen
for themselves, organizations that have fought against vertical and horizontal
disintegration, hence against what appear to be normal long-term trends
(STIGLER, 1951; CHRISTENSEN, 1997; BOURDEAU de FONTENAY &
HOGGENDORN, 2005).

Innovation

The telecommunications environment exhibits an extraordinarily high
degree of innovation, and capturing that innovation potential has been from
the outset a primary purpose of the change of policy from monopoly to open
entry. Innovation certainly poses potential threats to existing markets. At the
same time, it offers greater growth and profit potential, and stimulates
differing views.
From the monopoly-centric vantage point, innovation appears particularly
focused on lowering costs to improve profits. HDSL (high-speed digital
subscriber line) is an example. HDSL implementation happened to be largely
transparent to customers and it increased profitability. HDSL did not
contribute to the expansion of the private line segment, as retail prices were
largely maintained. Its primary contribution was to lower the cost of
procuring private lines, hence increasing incumbents' profits.
From the competitive-centric vantage point, innovation primarily seeks
new market opportunity. Deployment of ADSL and SDSL are cases in point.
At the same time, combined with unbundling, those technologies created an
immediate threat to incumbents in the form of cannibalization of private lines'
revenue. A legacy monopoly that is not a natural monopoly can be expected
to be susceptible to innovation. This can be illustrated by operations and
support systems (OSS) development. New entrants did not want to carry the
cost of the internal software development organizations incumbents use to
build the huge network management systems, OSS, and BSS (business
support systems). Consequently, a
market developed to serve entrants, resulting in dramatically lower costs.
These systems are now modularized and built largely with off-the-shelf
elements. Although incumbents have been slow to follow this trend in their
core activities, it is not unusual to observe an arm's-length subsidiary, such
as a long-distance subsidiary subject to the challenges of competition,
adopting such new technologies and rejecting the legacy systems.
Optimizing planning calls for a fuller understanding of innovation
opportunities from all perspectives including cost savings, anticipating
competitive challenges, new market opportunity, and managing transition
markets. Economists have consistently argued that beyond traditional
regulatory oversight, higher social welfare can be achieved in this sector
through a credible threat of entry that pressures established players to be
efficient. The credibility of that threat is a direct function of the cost of entry
(TEECE, 1995), as well as the institutional costs of facilitating the innovation
process, including improved cost management. In addition, that threat must
identify and address the cost imposed by entry barriers 15. Such an objective
is complex and costly to implement, but policy makers accept that challenge
because of the common assessment that the welfare cost of continued
monopoly in a dynamic environment is much higher 16. By the same
token, the dynamic environment creates new challenges for existing players
and existing markets, but it also creates new opportunities for growth and
profit. The public policy purpose of achieving greater efficiency to benefit
public welfare is mirrored by the opportunity and the need for existing
players to become more efficient.

15 This is well illustrated by many experiences in telecommunications and not just by the
regulator's often successful efforts to lower wholesale transaction costs by creating wholesale
markets throughout the 1980s and 1990s. For instance, the videotext experience of the 1980s
produced the same results. While videotext was introduced in such countries as Canada
(Telidon), France (Teletel), Germany (Bildschirmtext), Japan (Captain), Sweden, the U.K., and
the U.S., it was only in France that its deployment was successful. One of the key differences
between France and all the other countries is that France was the only nation to provide
information service providers a decentralized (i.e., not vertically integrated) public address on
the X-25 network. All the others adopted a vertically integrated, centralized approach. Teletel's
entry cost was so low that a majority of the information service providers created their services
and managed them on the Apple E computers. Some of those were exceptionally successful.
16 FCC Chairman Powell (2001) argues that "[s]uch an approach requires heavy regulation to
protect against the anticompetitive and anti-consumer tendencies of a monopolist. And, it
requires heavy government management of expenses, revenues and rates... Economic scale
does matter and it does take a great deal of resources to deploy these networks...".

Conclusion: models of static monopolies do not apply to dynamic networks of networks

There are other activities in telecommunications that would seem to
exhibit large economies of scale and scope that are not bounded
geographically in the manner that the construction and maintenance of the
network's physical layer are. One example is the perceived economies of
scale and scope associated with network management. Such activities have
historically been centralized within the operator and this is still the dominant
situation for most incumbents. However, the emergence of competition has
given some impetus to the division of labour, as Stigler implies, with new
markets developing that allow the outsourcing of such functions (CRANDALL,
1988; FRANSMAN, 2002; Telstra and Telecom New Zealand's recent move
to outsource some of the tasks associated with local service provision). It is
the emergence of competition that gave rise to markets for such functions,
although many are still vulnerable to fluctuations in demand despite the
efficiency improvements being offered. Insofar as incumbents maintain their
highly integrated structure, and entrants remain marginal, the market for
such new services is small and most fragile. The traditional economic
modelling of competitive telecommunications markets and performance has
been biased by unexpressed and unchallenged assumptions concerning
economies of scale and scope. Those assumptions are most likely false and
thereby threaten a good deal of the analysis made by industry managers and
policy makers alike, specifically the analysis devoted to anticipating



markets, assessing competitive challenges and determining pricing. It also
has an impact on a firm's organization, its efficient use of assets and its
profit maximization. Analysis based upon the old policy and old industry structure
model will be accurate only insofar as the old model is preserved and
unchallenged by Schumpeter's "winds of destructive innovation." What is
being missed is a better ability to recognize new market opportunities
opened by the change of policy and a better way to prepare to prosper within
a dynamic environment. There are tools that can be created for planning
within this world. However, creating such tools cannot be done on the basis
of existing economic assessments of the sector; it involves seeking market
benchmarks that help us identify the levels at which economies of scale
and scope actually operate in today's dynamic environment.
In the case of the local provision of telecommunication services, the
activities to be considered would include activities such as rights-of-way, real
estate and buildings, ducts, conduits and poles, switching, fibre optic and
copper cable etc. Stigler shows that some of those activities are indeed
characterized by large economies of scale and scope, activities such as
ducts and conduits and poles, both in the construction and in the operational
phases. However, those kinds of activities have no "telephony-specific" or
"telecom-specific" characteristics, hence their economies of scale and
scope, to be properly exploited, would need to be exploited across a much
broader range of activities including, say, electricity and water 17. There is a
hint of Stigler's model in the sharing of poles by incumbent telephone
companies and incumbent public utilities. However, poles were not treated
as a profit maximizing activity commercially separate from other activities in
the production chain, as demonstrated by the telephone companies' refusal
to provide excess capacity on the poles' communication space when cable
companies began to expand. In other words, incumbent operators have
preferred to foster vertical integration over divesting such activities in order
to achieve lower costs and it is left to governments to impose the obligation
to share. Our approach to modelling explains this sort of competitive
behaviour.

17 Even where there are significant "sector-specific" constraints, as in the case of electricity,
those need not prevent the sharing of facilities, for example, between telecommunications and
electricity. In that arrangement, dominant in the U.S., the poles built for the transport of
electricity are used because, for safety and security reasons, they have to be higher. The poles
are organized in horizontal zones: an electricity transport zone in the upper part; under it, a "no
man's land" that acts as a safety zone; and below that, a communication zone for all
telecommunications needs. This type of arrangement would seem to be unique to North
America, even though there is considerable pressure on a large number of municipalities
around the world to share facilities, even at the construction stage.


Bibliography
ARMSTRONG Mark (2002): "The Theory of Access Pricing and Interconnection", in
Handbook of Telecommunications Economics: Structure, Regulation and
Competition, CAVE Martin, MAJUMDAR Sumit Kumar & VOGELSANG Ingo (Eds.),
New York: North-Holland.
BESANKO David, DRANOVE David & SHANLEY Mark (2000): Economics of
Strategy, 2nd edition, New York, NY: John Wiley & Sons.
BOURDEAU de FONTENAY Alain & HOGGENDORN Christiaan (2005): "The
Economics of Vertical Integration: Adam Smith, Allyn Young and George Stigler",
working paper and CITI Workshop Reforming Telecom Markets: A Commons
Approach to Organizing Private Transactions, New York: Columbia University, at:
http://www.citi.columbia.edu
BOURDEAU de FONTENAY Alain & LIEBENAU Jonathan (forthcoming): "Innovation
in Telecommunications: the Judicial Process and Economic Interpretations of
Costing, Transacting and Pricing".
BOURDEAU de FONTENAY Alain, LIEBENAU Jonathan & SAVIN Brian (2005): "A
new view of scale and scope in the telecommunications industry: implications for
competition and innovation", COMMUNICATIONS & STRATEGIES, no. 60.
BREYER Stephen:
- (1979): "Analyzing regulatory failure: mismatches, less restrictive alternatives, and
reform", Harvard Law Review, 92 (3): 547-609.
- (2004): Economic reasoning and judicial review, AEI-Brookings Joint Center 2003
Distinguished Lecture, Washington, D.C.: AEI-Brookings Joint Center for Regulatory
Studies.
CAVE Martin, MAJUMDAR Sumit Kumar & VOGELSANG Ingo (Eds.) (2002):
Handbook of Telecommunications Economics: Structure, Regulation and
Competition, New York: North-Holland.
CAVES Richard & BARTON David (1990): Efficiency in U.S. Manufacturing
Industries, Cambridge, MA: MIT Press.
CHANDLER Alfred D. Jr. (1990): Scale and Scope: The Dynamics of Industrial
Capitalism, Cambridge, MA: Bellknap.
CHRISTENSEN Clayton M. (1997): "Making Strategy: Learning By Doing", Harvard
Business Review, November-December.
COASE Ronald H. (1988): "The Nature of the Firm", reprinted from Economica, 1937
as Chapter 2 in The Firm, The Market, and the Law, Chicago, IL: The University of
Chicago Press.
CRANDALL Robert W. (1988): "Surprises with Telephone Deregulation and with
AT&T Divestiture", American Economic Review, 78 (2), pp. 323-327.


DAWSON Fred (2002): "The Powell Doctrine: FCC Chairman Talks Policy with
XCHANGE as Action on Broadband Intensifies", Xchange Web Extras, at:
http://www.xchangemag.com/webextra/241webx1.html accessed in 2002
ECONOMIDES Nicholas (1997): "The Tragic Inefficiency of the M-ECPR", working
Papers 98-01, New York University, Leonard N. Stern School of Business,
Department of Economics.
FRANSMAN Martin (2002): Telecoms in the Internet Age: From Boom to Bust to ?,
Oxford: Oxford University Press.
FUSS Melvyn & WAVERMAN Leonard (2002): "Econometric Cost Functions", in
Handbook of Telecommunications Economics: Structure, Regulation and
Competition, CAVE Martin, MAJUMDAR Sumit Kumar & VOGELSANG Ingo (Eds.),
New York: North-Holland.
GASMI Farid, KENNET David Mark, LAFFONT Jean-Jacques & SHARKEY William
W. (2002): Cost Proxy Models and Telecommunications Policy, Cambridge, MA: MIT
Press.
GULATI Ranjay, LAWRENCE Paul R. & PURANAM Phanish (forthcoming):
"Adaptation in vertical relationships: beyond incentive conflict", Strategic
Management Journal.
HARRIS Robert G., & KRAFT Jeffrey C. (1997): "Meddling Through: Regulating
Local Telephone Competition in the United States", Journal of Economic
Perspectives, 11, pp. 93-112.
HORAN Tim (2002): "Communications Restructuring: The Long and Winding Road",
CITI, Columbia University.
JORDE Thomas M., SIDAK J. Gregory & TEECE David J. (2000): "Innovation,
Investment, and Unbundling", Yale Journal on Regulation, 17 (1), pp. 1-37.
LIPSKY Abbot B. & SIDAK Gregory J. (1999): "Essential Facilities", Stanford Law
Review, 51, pp. 1187-1249.
MANDY David M. & SHARKEY William W. (2003): "Dynamic pricing and investment
from static proxy models", working paper 40, OSP Working Paper Series.
Washington, DC: FCC.
POWELL Michael K. (2001): "Remarks", FCC National Summit on Broadband
Deployment, Washington, D.C., October 25th, 2001.
PUPILLO Lorenzo & CONTE Andrea (1998): "The Economics of Local Loop
Architectures for Multimedia Services", Information Economics and Policy, 10, pp.
107-126.
QUIGLEY Neil (2004): "Dynamic competition in telecommunications implications for
regulatory policy", Toronto, Canada: C.D. Howe Institute Commentary, 194, February
2004, at: www.cdhowe.org accessed October 2004


SENGUPTA Nirmal (2001): A New Institutional Theory of Production: an Application,


New Delhi: Sage Publications.
SIDAK Gregory J. & SPULBER Daniel F. (1997): Deregulatory Takings and the
Regulatory Contract: The Competitive Transformation of Network Industries in the
United States, Cambridge, MA: Cambridge University Press.
STIGLER George J. (1951): "The Division of Labour is Limited by the Extent of the
Market", The Journal of Political Economy, 59 (3).
TEECE David J. (1995): "Telecommunications in Transition: Unbundling,
Reintegration, and Competition", Michigan Telecommunications Technology Law
Review, 47.
WILLIAMSON Oliver:
- (1971): "The Vertical Integration of Production: Market Failure Considerations",
American Economic Review, 61, pp. 112-125.
- (1976): "Franchise Bidding for Natural Monopolies in general and with respect to
CATV", The Bell Journal of Economics, 7 (1), 73-104.
- (1985): The Economic Institutions of Capitalism, New York: Free Press.
WOROCH Glenn A. (2002): "Local Telecommunication Network Competition", in
Handbook of Telecommunications Economics: Structure, Regulation and
Competition, CAVE Martin, MAJUMDAR Sumit Kumar & VOGELSANG Ingo (Eds.),
New York: North Holland.
YANG Xiaokai (2001): Economics: New Classical versus Neoclassical Frameworks,
Malden, MA: Blackwell Publishers.

Features

Regulation and Competition


Firms and Markets
Technical Innovations
Public Policies
Use Logics
Book Review

Regulation and Competition

Competitive Compliance:
streamlining the Regulation process in Telecom & Media
Gérard POGOREL
Ecole Nationale Supérieure des Télécommunications
CNRS UMR 5141 LTCI-ENST, Paris

Current consultations within the European Union 2006 Electronic
Communications Review have emphasised the burden and delays of
procedures associated with the regulation process. This burden is deeply
resented by regulators and regulated parties alike. How could these
procedures be made more efficient and more closely aligned with the limited
resources of regulators?
To achieve this goal, the method put forward in this paper draws upon
some useful elements of regulation and practice as applied to governance
rules by the European Union Financial Action Plan and Sarbanes-Oxley in
the U.S. We propose to extend the concept of "compliance" in the form of
"competitive compliance" requirements to electronic communications. As
generally intended by compliance, this involves shifting the monitoring of
regulatory rules from an outside party (the regulator) to inside the company.
With regulation internalised by regulated parties in their organisation and
functioning, regulators would be left with a lighter, more hands-off role.
In the realm of electronic communications regulation, this would lead to
the elaboration of competitive compliance rules and procedures within
electronic communications undertakings, embedding references to the
regulatory framework. Authoritative compliance guidelines would refer to the
varied areas of ex-ante electronic communications regulation and some


generic competition areas in cases where remedies would have had to be
applied.
These compliance guidelines would be monitored from inside
companies by internal 'compliance' officers and services, which already exist
in banks.
The regulator relations or government affairs unit of operators would
preserve its role of defending special interests and pursue its lobbying
activities. However, in addition to this department, a separate unit, in the
form of a competitive compliance officer or body, would be granted an
"independent" status.
This officer or body in charge of competitive compliance within firms
subjected to electronic communications regulation would be provided ample
powers within the company to check the conformity of the firm's governance,
activities and practices with the regulatory framework. It would gather
information on costs, rates, market structures and commercial practices and
behaviours, etc., provided by competent departments within the firm, and
assemble this information in an appropriate, predetermined format for the
regulator's sake. Such an officer or body would also be awarded extensive
enquiry and decision powers and report directly to the company's senior
management (CEO or COO). Its mission would be to embed Competitive
Compliance in corporate organisation, strategy, and behaviour by playing
the whistleblower if necessary.
Competitive compliance would fit into the institutional interaction,
consultation and negotiation process between firms, players and regulators,
which is presently taking place, but would make this more efficient. The data
to be monitored for regulatory purposes, for instance, would be elaborated in
direct connection with the strategies and decisions potentially under
regulatory scrutiny. This data would thus be more quickly available, ideally in
real time, and of a better quality, with rules and obligations designed to
effectively match the needs of the regulatory process. As a result, the
competitive compliance report would make it easier for the regulator to
monitor that legal and regulatory requirements are fulfilled.
What would be the costs and benefits of setting up this kind of
"competitive compliance" model in electronic communications? The costs of
the EU Financial Action Plan and Sarbanes-Oxley have provoked cries of
outrage from some quarters, as was to be expected. The enlightened
consensus, however, is that the benefits of regulation clearly outweigh the

expenses in many market failure situations. The burden of compliance is
offset by notable gains in economic welfare.
This procedure could be tested on volunteer companies and, if
successful, be made compulsory. To complement the administrative and
procedural nature of the process, economic incentives to implement the
necessary governance rules, comply and report sincerely, would conform to
market mechanisms and rules governing decisions regarding risk and
uncertainty. Penalties, based on the assessment of damages to consumers
and preferably to be handed over to the damaged parties, could be levied,
as well as proportionate fines for inadequate behaviour.
There has been a lot of talk to date about the necessary evolution of
sector-specific regulation in telecoms and the media towards an ex-post,
more generic, competition law approach. This gap has not yet been bridged.
Competitive compliance would help to achieve this by facilitating ex ante and
ex post integration, thus bringing telecoms and the media closer to generic
competition regulation. In the end, competitive compliance would shift the
balance in favour of alleviated regulation. It would respond to requests for
less day-to-day intervention on the part of regulators and streamline the
regulation process. The overall outcome of this shift in regulatory
organisation and processes would ultimately be highly positive for all parties.

Firms and Markets

The world broadband access market (*)


Loïc LE FLOCH
IDATE, Montpellier

(*) This article is an abstract of a report carried out by IDATE entitled "The world broadband
access market". With detailed national analyses of 11 key countries and an assessment of
leading access providers' strategies, this report provides a comparative analysis of markets
around the world, the key challenges facing the sector and market growth up to 2009.

200 million broadband subscribers worldwide, by end 2005: steady growth
momentum being sustained from year to year. DSL confirms its status as
broadband internet's main driving force, ahead of cable modem access.
Very high-speed broadband is becoming a reality in South Korea and Japan
while, over in the United States, the RBOCs have begun deploying FTTx
infrastructures.

Analysis by geographic zone

Asia-Pacific is still in the lead, but Europe is closing the gap. The
Asia-Pacific zone is still number one in terms of broadband access,
accounting for over 40% of the globe's user base by mid-2005. For the first
time ever, Europe is now home to more broadband subscribers than North
America.
The US is the country with the largest base of broadband subscribers,
but reporting declining growth rates. And China is expected to take the lead
in 2006: China's broadband user base virtually doubled during the period
running from mid-2004 to mid-2005. Meanwhile, in Europe, Germany, the


UK and France are in a dead heat for the largest broadband base, each one
having around 10 million connections by end 2005.
South Korea is still the benchmark in broadband penetration. Despite
being near saturation, South Korea is still ahead of the world's other mature
markets (the Netherlands, Switzerland, Canada) by a nose. France and the
UK, which have been enjoying significant growth rates, are reporting
penetration rates comparable to the one found in the US (around 14%).

Analysis by access technology


The decline of dial-up
Subscribers' migration from dial-up to broadband access is a clear and
irreversible trend worldwide, although dial-up does still account for a
sizeable user base in some countries (e.g. the US where the dial-up to
broadband ratio is still 2 to 1).
DSL still broadband's driving force
DSL's role as the primary driving force behind growth of the broadband
user base is well entrenched, even in the US. Its majority share of the
European market is increasing, even in the Netherlands and the UK where
cable modem had been leading the way until recently.
The cable modem alternative
Cable's dominance of the North American market remains an exception
in the global broadband landscape, thanks to exceptional coverage and the
presence of heavyweight players (Comcast, TWC).
Advent of very high-speed broadband
Backed by the authorities, and enabled by highly concentrated
populations in certain cities, the migration to very high-speed broadband is
stepping up in Japan and South Korea. In the Japanese market, there were
in fact more new FTTx/ETTx subscribers than new DSL subscribers in the
first half of 2005, with the country reporting a base of 3.4 million FTTx/ETTx
subscribers in mid-2005.


In the US, very high-speed broadband has become a reality now that
fibre optic unbundling obligations have been lifted, thus paving the way for
RBOCs' investments in FTTx.
In Europe, most very high-speed broadband projects are being financed
by local authorities or public utilities. But incumbent carriers are beginning to
be keener on very high-speed broadband, following the relative failure
of the models implemented by FastWeb in Italy and B2 in Sweden, which
extended their coverage through unbundling.

[Figure: Growth of the broadband base, by geographic zone (millions), comparing mid-2004
and mid-2005 for the AP, EU, NA, LA and AME zones, with each bar split between DSL and
cable modem/other. Source: IDATE]

Changing structures and growing competition


A state of lasting competition has taken hold in the broadband sector, as
revealed by the drop in incumbent carriers' market share. The degree of
competition does, however, vary considerably in Asia, North America and
Europe. Even within Europe, despite there being certain common trends,
significant disparities remain in terms of industrial organisation, either due to
cable's prominence, the aggressiveness of the incumbent's strategies, or the
prominence of wholesale DSL and unbundling.
Rise of unbundling in Europe
Little used by alternative carriers up to 2003, unbundling has now
become one of the prime axes in operators' and ISPs' broadband growth
strategies. France currently boasts Europe's largest base of unbundled DSL.


After months of swift growth for shared access, a new trend giving priority
to full unbundling is emerging in several markets.
Growth of unbundled DSL in Europe

                    31/12/2002   31/12/2003   30/12/2004   30/06/2005
Germany                175 000      465 000      870 000    1 500 000
   % of DSL lines           5%          10%          13%          18%
Spain                    3 000       16 000      114 000      297 000
   % of DSL lines           0%           1%           4%           9%
France                   3 000      273 000    1 591 000    2 330 000
   % of DSL lines           0%           8%          25%          30%
Italy                   52 000      240 000      450 000      653 000
   % of DSL lines           6%          11%          10%          12%
Netherlands             50 000      232 000      462 000      584 000
   % of DSL lines          14%          24%          25%          27%
United Kingdom           2 000        8 000       47 000       69 000
   % of DSL lines           0%           0%           1%           1%
Sweden                   9 000       30 000      210 000      299 000
   % of DSL lines           2%           5%          24%          25%

Source: IDATE
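
As an illustrative aside (ours, not part of the IDATE report), the growth
multiples implicit in the table can be read off with a few lines of Python; the
figures below are copied from the table above:

# Unbundled DSL lines by country, at the four dates in the table above.
lines = {
    "Germany":        [175_000, 465_000,   870_000, 1_500_000],
    "Spain":          [  3_000,  16_000,   114_000,   297_000],
    "France":         [  3_000, 273_000, 1_591_000, 2_330_000],
    "Italy":          [ 52_000, 240_000,   450_000,   653_000],
    "Netherlands":    [ 50_000, 232_000,   462_000,   584_000],
    "United Kingdom": [  2_000,   8_000,    47_000,    69_000],
    "Sweden":         [  9_000,  30_000,   210_000,   299_000],
}

# Multiple by which the unbundled base grew between consecutive dates.
for country, series in lines.items():
    multiples = [round(later / earlier, 1)
                 for earlier, later in zip(series, series[1:])]
    print(f"{country:15s} {multiples}")

For France, for example, this prints multiples of 91.0, 5.8 and 1.5, showing
the take-off of unbundling after 2002 and its deceleration as the base grows.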

Rapid broadband deployment in South Korea was enabled by open networks
Benefiting from a population living in collective buildings (local loop
owners), and from cities equipped with dense fibre and open cable
infrastructures, South Korean ISPs were able to deploy their services rapidly
under a neutral model (an ISP's ability to use DSL, cable modem and
Ethernet LAN).
Japan, unbundling's champion
The Japanese market is undoubtedly the most competitive in the world.
In mid-2005, NTT East and NTT West's combined market share (number of
broadband connections) totalled only 36%. Very affordable (1 EUR/month)
shared access and the ability to unbundle dark fibre for connecting to
distribution frames have created a high level of DSL competition between
alterative operators. Because of this situation, NTT has been stepping up
deployment of its FTTx/ETTx offers.
America's singularity
In mid-2005, cable modem connections accounted for 57% of the
broadband base in the US. DSL and FTTx are slowly gaining ground,
however, going head to head with cablecos who still lead the way with their
triple play bundles.
The FCC took a series of measures to alter and define the new scope of
unbundling. Lifting unbundling obligations for FTTx had a decisive impact on
RBOCs' investment strategies (Verizon FiOS, SBC Lightspeed).

Recent changes in the players' industrial strategies


Incumbent carriers
The steady decline in revenues generated by fixed telephony (due to the
massive rise of mobile) and the growing ubiquity of broadband have forced
incumbent carriers to place internet access at the heart of their strategies. All
have been reincorporating their ISP subsidiaries which were originally
created as start-ups. This reintegration process is enabling the incumbents
to market broadband multi-service bundles.
Cable operators
A long string of mergers, in a bid to achieve critical mass and increase
service coverage, has been taking place in the cable industry in recent
months, particularly in Europe. NTL's merger with Telewest in the UK is
emblematic of this trend.
Alternative operators/ISPs
As unbundling is gaining ground, recent months have been marked by
the transformation of ISPs with no infrastructure into broadband operators.
Two options are available to them: either invest in the network or take over
an operator that owns the infrastructures. A case in point here is T-Online
which has elected to acquire Albura's network assets in Spain, and invest in
its own infrastructures in France.
The race for critical mass will continue
The issue of critical mass (subscriber base), and the capacity to mobilise
massive investments are becoming key to success in the marketplace.
Consolidation is thus likely to continue.


The offers' momentum and positioning in 2005


A decline in the growth momentum and a drop in prices,
a sign of the sector's increasing maturity
After having dropped sharply, the tariffs charged for broadband access
remained relatively unchanged in the first half of 2005. The prices of basic
DSL and broadband cable offers are now largely the same across the board.
While Asia-Pacific is home to the lowest prices in the world, particularly
given the bitrates on offer (Japan, Taiwan), European operators too are
offering very competitive prices, particularly in France, in the Netherlands
and in Sweden.
The trend of ever higher bitrates likely to continue
The never-ending rise of the bitrates on offer, fuelled by fierce
competition and enabled by new technologies (ADSL2+, FTTx/Ethernet,
VDSL2, and DOCSIS 2.0 for cable), has fostered the emergence of the triple play
model, which combines internet access and telephony with a TV and
video offer. To prepare for these offers, operators in the US, notably SBC
and Verizon, invested in the deployment of FTTx networks, which are crucial
to their ability to offer TV services (e.g.: FiOS TV).
All for triple play
Marketing a TV-over-DSL offer puts ISPs head to head with
satellite operators and cablecos. A great many IPTV services have been launched on
DSL networks in Europe (France, Italy, Sweden), confirming that we have
moved beyond the experimental stage. Despite regulatory uncertainties,
Asia remains one of the globe's major centres for IPTV development, given
the high bitrates available there, particularly in South Korea and Japan. In
the US, the RBOCs are heralding pioneer deployments of TV services,
although the law requires that they sign agreements with local authorities in
each city where they want to market their service. VoIP's development is
equally disparate. It has become a killer app in Japan's broadband market
(close to half of Japan's broadband users subscribe to VoIP). In the US,
VoIP is being pushed chiefly by cablecos, who view it as a promising growth
driver. In Europe, in the meantime, some incumbent carriers have launched a
service to counter the VoIP offers being marketed by alternative operators
(France, Italy, the UK).


Coming soon: quadruple play


Operators are now working to implement quadruple play offers,
combining mobility services, internet access, telephony and TV. The
Americans are proving the pioneers in this area, and the Baby Bells were the
first to unveil quadruple play offers either via their mobile or MVNO
subsidiary. Beyond the bundles, the next stage will centre around broadband
fixed-mobile convergence.
It nevertheless remains that, for now, these services appear geared to
standing out from the competition and cementing customer loyalty,
rather than being fuelled by a logic of increasing ARPU.

Technical Innovations

Instant messaging:
Towards a convergent multimedia hub (*)
Vincent BONNEAU
IDATE, Montpellier

Instant messaging developed at the end of the 1990s as an inter-personal
real-time communication system based on text messages. IM has now
outgrown its initial role and has transformed itself into a strategic multimedia
hub for promoting other services. This service consequently lies at the heart
of the battle between ISPs and internet portals (and between portals
amongst themselves), attracting a large number of new entrants. After
gaining popularity among consumers on PCs, the tool is now spreading to
mobile devices. Its potential has yet to be fully exploited.
In Europe and China, operators (fixed and mobile) have in many cases
preferred to develop proprietary tools. Their initiatives have met with
very little success. In South Korea and the USA, on the other hand,
operators are working together with major portals, exploiting their brands
and their communities and seem to be achieving better results. IM has also
spread to the business world. This market obeys a very different logic,
closer to the IT world of the leaders Microsoft and IBM. Presence is
enabling new applications, incorporated into existing applications that have
already been deployed, based on collaboration and contextualisation.

(*) This report carried out by IDATE offers an in-depth analysis of the instant messaging (IM)
market and its developments (interoperability, VoIP, mobility etc.) in the consumer and business
segments. This is a major market in terms of advertising for portals, traffic for mobile operators
and licences for professional software vendors.



IM is already a "killer application" on PC


Since its initial launch in 1996 with ICQ, instant messaging has
become one of the key fixed internet applications. It is a mass
market of over 10 billion messages per day, reaching one in two internet
users in the developed countries of North America, Europe and Asia.
Moreover, IM is steadily reaching relative saturation in these countries, while
retaining a sustained rate of growth in other countries like China.
All internet users are affected by the phenomenon, regardless of their
gender, level of education or profession. Only the type of connection
(broadband promotes usages thanks to its unlimited nature) and above all
age are really discriminating factors. The youngest users, notably
adolescents, are the most experienced users. They use IM for exchanges
more often than email or mobile telephony and make massive use of
some of the new functions available. The time spent using the IM tool
already exceeded 3 hours per month in Europe and 5 hours per month in the
USA in 2004. Furthermore, many young people use IM on a daily basis.

IM has assumed the role of a remote control and multimedia hub


By relying on its innovative functions such as presence management (the
availability of interlocutors), IM has been enhanced by several services that
are very popular with users, based on communication (file exchange, voice
from PC to PC and even PC to telephone, video conferencing) and
customisation (emoticons, avatars and skins). IM also stands out from
other communication tools by virtue of its multi-tasking nature, enabling
other activities simultaneously via the PC (web browsing, games, etc.) and
other devices (telephone and TV). Portals are capitalising on this
converging tool to promote their other activities (webmail, blogs, content etc.)
thanks to tab systems. The latest additions involve a search engine built into IM.
Furthermore, IM is steadily being incorporated into web sites (notably
dating, video game and social networking communities) and could become
part of tomorrow's television.

IM market remains dominated by a few big portals


Since its very beginnings the market has been dominated by the major
internet portals. Their positions vary tremendously from country to country,


mainly for historical reasons, and primarily due to the advantage enjoyed by
first entrants, as products are almost identical. The leading ISP in the
narrowband market, AOL, consequently dominates the North American
market and is capitalising on the installed base of ICQ, acquired in 1998 and
interoperable with its AIM tool since 2004. MSN is the clear leader in the
European market and benefits from versions embedded in Windows.
For its part, Yahoo! is well positioned in various markets. In Asia the market
is dominated by local players such as QQ in China, which has since become
a major portal, and NateOn in South Korea, which is the only IM service
offered by a major ISP to have overtaken these portals. Similar initiatives
have been carried out in Europe (FT/Wanadoo, Tiscali, DT/T-Online, etc.), but have only achieved very limited success. North American
ISPs have preferred to form alliances with major portals.
Generating revenues from advertising, portals have been the
only players in a position to financially justify such a free service, which acts
as a loss leader and an instrument for winning customer loyalty to the portal
brand, and is paid for by other services. Pay services (SMS, voice and a
few avatars) remain marginal and are struggling to develop due to the lack of
a suitable billing system and the ample free offering.

IM's multimedia capabilities attract new players


ISPs have recently shown renewed interest in IM, with tools that are
proprietary or adapted for the major portals. The majority are no longer
fighting for IM itself, but for the possibilities that it offers, notably in terms of
VoIP. Along the same lines, but with a different positioning, VoIP players
such as Skype are highlighting their IM functions. Other new players
are the generalist portals such as Google and more local, community-based
portals (Skyrock and Shanda). For these players the attraction of IM is the
hub itself, which is capable of generating traffic for long periods of time by
winning customer loyalty. IM thus offers another way of earning money from
traffic. Google's entry into this market, expected since its acquisition of
Picasa, illustrates the battle between the internet portals (MSN, Yahoo!),
which has already begun in other segments of the market (webmail, search
engines etc.). Google had to react, notably since other portals are offering
their search engines via IM. The various toolbars (desktop sidebars)
fulfil some of the roles of the multimedia hub, but have not yet sufficiently
reached the mass market.


IM is strategic for portals and remains a blocked market without widespread interoperability
The competitive structure of the fixed IM consumer market has evolved
very little since its launch, due to a lack of interoperability between tools. IM
is based on a club effect, encouraging users to turn to the most
popular services and rewarding the first entrants that captured the market.
Technical solutions do exist based on XMPP (Jabber), supported by France
Telecom and Google, and SIMPLE, derived from SIP, the protocol that has
come to dominate the VoIP market. Several companies also offer clients
supporting a large number of protocols, but their usage remains limited and
is not guaranteed. Moreover, many users are using several tools dictated by their
circle of friends.
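This club effect can be put in rough numbers (a Metcalfe-style sketch of our own; neither the formula nor the figures come from the text): if the value a user derives from an IM service grows with the number of contacts they can reach, the biggest community wins disproportionately.

    # Minimal sketch, assuming value tracks the number of possible
    # one-to-one links (a Metcalfe-style view; illustrative figures only).

    def contact_pairs(users: int) -> int:
        """Number of possible one-to-one links among `users` members."""
        return users * (users - 1) // 2

    for n in (1_000_000, 2_000_000):
        print(f"{n:>9,} users -> {contact_pairs(n):>22,} possible links")
    # Doubling the community roughly quadruples the possible links,
    # which is why new users gravitate to the most popular service
    # and why first entrants are so hard to dislodge.

On this view, a bilateral deal between two large portals captures most of the benefit for their own users while still excluding smaller challengers, which is consistent with the agreements discussed below.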
The advertising model does not encourage players to open up their
networks. Monetisation depends almost entirely on the number of
users of the portal. IM is a key tool for portals, because this service is
sometimes the most popular internet service, ahead of webmail (as is
the case for MSN in Europe). Moreover, this model is now profitable for the
biggest players thanks to the renewed appeal of online advertising, including
for the IM tool itself, whether this be via the usual methods (banners and pop
ups) or via the new customisation options (major brands,
film launches, etc.) to better reach the target audience of young internet users.
Widespread interoperability does not look likely at present, since it would
facilitate market access for Google and community based portals. Bilateral
agreements between the major portals, on the other hand, like that between
MSN and Yahoo!, remain a possibility. Setting aside the agreement with AOL,
Google therefore poses no threat in IM. Its pared-down offering should
attract only technophiles and destroy only a very small share of the advertising
market.

IM looks likely to have little impact on VoIP


Exchanges of text messages via IM facilitate the move to a telephone
call. There have consequently been a large number of announcements of
VoIP developments in the latest IM tools. However, these do not call into
question the development of the VoIP market. These
solutions are still too complicated for regular usage in the mass market,
making it necessary to use headphones and a microphone and for the PC to
be switched on. Above all, they require a broadband connection, which itself


is increasingly distributed in a bundle with unlimited flat rate packages (at
highly competitive prices) for national calls from a traditional telephone. VoIP
consequently remains an operator market for the general public, outside of a
few niches that are already highly competitive such as international traffic.
Unless they become operators themselves (which goes far beyond the
problem of IM), portals therefore do not present any serious threat. However,
they must offer this service to complement their multimedia hub offering.
Moreover, they have offered this service for almost 5 years, but with
offerings that were technically inadequate in the past. For ISPs, on the other
hand, integrating the telephony service with an IM tool (such as BT
with Yahoo! or AOL TotalTalk) can facilitate usage through a familiar
interface, the better to compete with new VoIP entrants,
notably those with an offering like Vonage.

IM is now developing on mobiles


Given the success of IM on PCs, and notably the level of traffic, initiatives
are rapidly growing to transpose the service to mobile telephones, albeit with
more restricted functions. The first attempts, which began in 2000/2001
(SMS codes, WAP-type interfaces), all failed. They were notably not
sufficiently ergonomic and did not make real use of presence. The
market did not really start to develop until the advent of solutions based on
client software on open OS (Symbian, J2ME, Windows etc.), making it
possible to reproduce the PC experience more fully. Software vendors like
Oz and Comverse are offering multi-application clients. "Adapted" solutions
date back to 2003, but usage remains very modest. Unsurprisingly, the main
users are those who already use the fixed service, notably the youngest.
However, as the service is pay-for and requires a fairly expensive device,
income is also a discriminating factor. Among the youngest mobile users,
mobile IM arouses a level of interest comparable to that in video and games
entertainment services. Older users are also interested in this
technology, but more for business usage, alongside email and web
browsing. Mobile IM can therefore be billed like other mobile services, although it
is available free on PCs.

Mobile IM growth depends on fixed IM


Unlike fixed IM, the level of development of mobile IM varies
significantly from region to region. Mobile IM is more developed in
South Korea and the USA, even though those countries lag behind in
mobile telephony itself.
The South Korean and North American operators have tried to offer a
service making it possible to access the main fixed IM services, notably
those on AOL's and NateOn's portals. The majority offer a subscription-free
service billed per SMS (on a pay-as-you-go basis or in bundles). The
results are more than encouraging, notably for Danger and
Verizon Wireless, which already reached 20 billion messages per month in
2004.
Most European operators, on the other hand, are looking to develop a
proprietary solution that is interoperable with other mobile operators based
on the IMPS standard, created especially for them by mobile equipment
manufacturers (Ericsson, Nokia etc.). The results are rather disappointing.
Users want to be able to communicate with their existing fixed IM
contacts. Some operators (Vodafone, KPN etc.) have consequently recently
decided to change their positioning and to enable access to the services of
MSN, the European market leader. The services are offered in the form of a
subscription, with incoming traffic deducted from the flat rate package as
with other mobile services.
Interoperability between mobile IM services is consequently not a priority,
even though the service is pay-for on mobiles. The current priority is being
able to chat with fixed communities, which are already highly developed.
Interoperability between mobile IM and fixed IM could accelerate the
development of an almost generalised interoperability between the major
portals (AOL, MSN, Yahoo!, and even QQ and NateOn) without imposing
widespread interoperability.

Mobile IM creates opportunities for mobile operators


Mobile IM imposes a relationship with the major fixed IM portals. However,
mobile operators retain strict control over the subscriber in developed
countries via the invoicing relationship, terminals (subsidised or distributed
by operators in 70% of cases) and the service itself, which requires network
gateways. Apart from pure SMS, operators can even filter traffic. Portals
cannot concretely offer "tailored" solutions without mobile operators. Several
operators are also hesitant about switching to mobile IM for fear that
it may cannibalise revenues from SMS; some usages are indeed
similar. The U.S. model does not have this drawback, as it is based on


pure and simple substitution. However, even in the European model, mobile
IM should be able to compensate for the losses in SMS thanks to flat rate
models.
Mobile IM is a big plus in operators' data strategies. It can
increase data traffic and facilitate the migration to
unlimited data flat rate packages that imply greater usage. It can also enable
operators to differentiate themselves for young users, notably for MVNOs.
Mobile IM can also make it possible to reproduce the fixed multimedia
hub on mobiles, offering easier access to certain existing services (voice,
push-to-talk, video conferencing, SMS, sending photos) or to new services,
notably based on localisation. Mobile IM will not necessarily be a killer
application, but could be a killer hub, if it is more clearly promoted.
[Figure: Positioning of the various players involved in the instant messaging market. The original chart maps players along two axes (geographical presence of IM: global, pan-European initiatives, national, special features of Asia; and use logics: pro IM, internet giants, multi-protocols, VoIP initiatives, U.S. alliances) and by value chain (equipment manufacturers, software vendors, portals, operators), covering, among others, IBM, Microsoft (MSN, LCS), Nokia, Novell, Oracle, Alcatel, Yahoo!, Sun, Vodafone, Siemens, RIM, AOL (AIM and ICQ), Skype, Reuters, IMLogic, Google, Apple, Orange, Odigo, Trillian, T-Online, Gaim, Ericsson, Tiscali, Wanadoo/Voila, Oz, Comverse, Danger, BT, China Telecom, Earthlink, NateOn, Colibria, QQ, Movistar, KPN, Daum, Shanda, Skyrock, SBC, Cingular, Verizon, KTF and Sprint. Source: IDATE]

Mobile CE
The nomadic era (*)
Laurent MICHAUD
IDATE, Montpellier

Mobile CE is a market that appears ready to be shaped by new
consumption modes, including mobile TV and portable digital players,
as new ways of accessing content emerge. The devices themselves, from
simple mobile handsets to portable video players, are the first to undergo a
swift and radical evolution of the features they offer, embodying a wide array
of possibilities for consuming nomadic content: music and video downloads,
live TV programme viewing, etc. The market's value chain and business
models will be greatly affected by the availability of content, of which music
is currently the most advanced segment. Thanks to their economic power,
mobile telephony players appear to be in a solid position for enjoying a
lasting influence on nomadic services, albeit without the absolute certainty of
being able to corner the bulk of market value.

A nascent market
Mobile CE involves most content, and most areas of entertainment,
business and communication. This content can be accessed using a host of
devices: portable video players, MP3 players, handheld gaming consoles,
PDAs, smartphones and mobile handsets.

(*) This IDATE market report provides an analysis of the emerging trends in the world of mobile
Consumer Electronics (CE), now having to contend with increasing competition from the
telecom and computing industries.
Initially viewed as a measure of quality, digital technology soon became a
way to store and distribute content using a computer and the web. Music
files exchanged over online peer-to-peer networks shook up the recording
industry, whose sales have been plummeting, to ultimately give birth to
online music stores, like iTunes, and to trends like offering advance release
singles to a mobile operator's subscribers. With the potential of being fully
digital, content distribution is undergoing a radical transformation.
At the same time, progress made in data storage technologies, and with
all components, fuelled in large part by the mobile telephony industry, now
makes it possible to design multi-functional miniaturised personal products
(offering communication, photos, music, video, e-mail). A simple cellphone
can now act as a veritable multimedia jukebox, both for playing content and
producing it.
Mobile CE is not yet a homogeneous and structured market,
guaranteeing all of its players a steady income. It is more of a vast bundle of
market segments, populated by players all obeying the rules of different
economic systems, many of which are old and now being severely
undermined.

The devices will shape consumption modes


First, digitising content marks a break with the traditional logic
followed by the CE industry, wherein a given type of content was associated
with its counterpart equipment. The problem for consumers now is choosing
the type of content they want based on the its availability for the type of
equipment they own.
With this in mind, influenced by the mobile phone phenomenon, changes
in the way that equipment is being used are leading to a proliferation of
personal devices, both mobile and multifunctional. It is no longer households
equipping themselves, but individuals according to their needs and buying
power. Hence the growing trend of increasingly personalised content
consumption.
Often viewed as the chief obstacle by classic CE content distributors, the
base of compatible equipment (decoders, set-top boxes, DVD players, etc.)


is now becoming a major selling point for mobile CE. Devices aim to be
appealing, trendy image boosters. They will be the best way for service
providers (mobile operators, mobile TV, music, etc.) to attract customers and
make them loyal, while also embodying their offer.

Value chain shaped by content availability


Only distribution modes and formats that offer protection for copyright
holders will supply a viable economic base for the long term. While mobile
operators currently appear well-equipped to play a major role in nomadic
content distribution, thanks to the internet access they provide to their
millions of subscribers, they are by no means invulnerable to competition
that by-passes their business, such as free mobile TV broadcasting or direct
digital content sales at hotspots. Content distribution can also take an
alternative approach: a good case in point is Nokia's CoolZone
initiative, whereby retailers can sell mobile phone content via Bluetooth in
their shops.

Music fuelling mobile CE


Music is currently the most advanced mobile CE segment, with close to
60 million MP3 players sold in 2005, and as many mobile phones with a
built-in MP3 player. Mobile operators have proven their ability to create a
market in this segment, thanks to ringtone downloads. But, for actual songs,
consumer habits and the devices themselves well preceded the creation of
viable paid distribution solutions. Since the launch of the iTunes store in
2003, geared to allowing users to legally download songs for their iPods, the
music industry has seen an opportunity for marketing digitised music. The
number of music offers is expected to increase exponentially in 2006 and,
beyond that, video could take the same direction and enjoy the same
success.

The mobile phone momentum


The number one personal device, the mobile phone is helping to boost
the penetration of nomadic digital applications. Photo phones are now
outselling classic and digital cameras while, in 2005, there were as
many phones with a built-in MP3 player sold as there were portable MP3


players. Although offering lower performance than an iPod or a digital
camera, mobile handsets are making their way into all the markets whose
basic features they can integrate. Through an effect of unparalleled volume
(1 billion mobile phones will be sold in 2009), the mobile phone is now
positioned as the device of reference for all nomadic uses. Nokia is thus likely to be the
leading vendor of mobile music devices in 2006, followed closely by
Samsung and Motorola.

Properties of the devices of the future


2006 will mark the expansion of the popularity enjoyed by Apple in the
music business to video and television. Ultimately, nomadism, mobility and
portability concern all digital entertainment. The consumer electronics value
chain is converging into an organisation already tried and tested by the
mobile telephony sector. The mobile phone is currently the device that best
embodies what mobile CE may well become.
Omnipresent and able to communicate
A mobile CE device is, first and foremost, portable and pocket-size.
Close to 2 billion individuals will own a mobile phone by the end of the year.
Having a web-enabled, Wi-Fi, Bluetooth and eventually WiMAX-compatible
device is now the norm in the most mature markets.
Personal
Mobile devices can be personalised. Telephones are the prime example
of the almost emotional attachment people can have with their machines.
For young people, they have become identity shapers and ways of
communicating with their group of friends. Plus, they are equipped with
identification, payment and locating systems.
Display-centric
There has been a trend of miniaturising phones whose main function is
calling, and of increasing the size of those used for a variety of functions. It
is chiefly the screen that is getting bigger, to offer friendlier browsing,
viewing and greater immersion in the content.


Multiservice kiosks
The devices' network capacities are exploited primarily by telecom
operators which, in addition to telephony, are offering a body of information,
communication and entertainment services: SMS, MMS, push mail, ringtone
and music extract downloads, games, access to news, etc. Here, telcos are
becoming service providers by distributing content. In future, they will be
offering access to TV programmes and possibly to video content and films
for download.
Multifunctional
They are multifunctional and multimedia. They are a platform for storing
content, playing it, recording it, viewing it, distributing it, and for
communicating. They manage audio, video, voice, pictures and text, as well
as business applications.
Interoperable
They are interoperable with other fixed and mobile devices. Mobile CE
devices now communicate essentially amongst themselves and with PCs,
with computers being the main source of content thanks to the internet. But
mobile CE devices could well become autonomous in the area of content
supply, taking the same direction as mobile telephony.
Standardised
The interoperability of mobile CE requires not only standardisation of
content formats, but also standardised communication tools, operating
systems, APIs, players, etc. It also implies the integration of technical rights
protection and management mechanisms, without which content publishers
may well be reluctant to allow distribution of their products.
Portable media library
Mobile CE devices are equipped with storage capacities that now allow
them to hold the equivalent of between 1,000 and 2,000 songs, depending
on the level of compression used, or the equivalent of 60 movies in DivX
format. In future, they will be capable of storing tens of thousands of songs
and several hundred films.
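These orders of magnitude follow from simple arithmetic, as the sketch below shows (assumed figures: an 8 GB device, 4-minute songs and a 700 MB DivX film; none of these values come from the report).

    # Back-of-the-envelope storage arithmetic with assumed figures.

    def songs_per_gb(bitrate_kbps: float, song_minutes: float = 4.0) -> float:
        """How many songs fit in 1 GB at a given audio bitrate."""
        song_mb = bitrate_kbps * song_minutes * 60 / 8 / 1000  # kbit -> MB
        return 1000 / song_mb  # 1 GB taken as 1000 MB for simplicity

    capacity_gb = 8  # assumed device capacity
    for kbps in (128, 64):
        print(f"{kbps} kbps: ~{capacity_gb * songs_per_gb(kbps):,.0f} songs")
    print(f"700 MB DivX films: ~{capacity_gb * 1000 / 700:.0f}")
    # About 2,100 songs at 128 kbps, 4,200 at 64 kbps and 11 films:
    # varying the capacity and compression spans the 1,000-2,000 song
    # range quoted above.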


What winning strategy(ies) for industry players?


There are a great many players in the mobile CE market, and from a
wide variety of backgrounds. The challenges ahead for all players in this
fledgling market involve integrating a portion of the other players'
businesses, both upstream and downstream. A classic CE manufacturer can
combine its product with a distribution and/or broadcasting service (e.g.
Apple in the music and later video business; portable video player
manufacturer Archos and satellite pay-TV operator Echostar).
The content publisher has taken the digital path, and now needs to
undertake digital distribution of its works. It also aims to increase the time
spent consuming its content thanks to mobile devices, and heighten the value
of this consumption by increasing its advertising revenues. The technical
service, and particularly the content protection service, needs to move
towards interoperability, and convince publishers and distributors with its
business model, or become a distributor itself. Software solutions providers
are taking up a position throughout the mobile device value chain. They
need to persuade device manufacturers that their solutions, and especially
their operating systems, are the key to the compatibility and interoperability
of the devices. Telecom operators working in tandem with mobile phone
manufacturers enjoy a head start in the mobile CE market. But they too face
a sizeable challenge, namely convincing consumers. This will require efforts
not only in making their offers intelligible, but also in creating business models
that are clear, simple and adapted to a wide array of users' budgets.
[Table: Prominence of nomadic applications on mobile devices. The original matrix rates applications (video games, audio player, video player, telephony, radio, diary & business applications, TV, community tools/messaging, photography, video camera, GPS) against devices (video game console, MP3 player, video player, mobile phone, mobile TV, PDA); the rating symbols did not survive extraction. Source: IDATE]

Book Review

Peter HUMPHREYS & Seamus SIMPSON

Globalisation, Convergence and European Telecommunications Regulation

Edward Elgar Publishing, 2006, 218 pages

by Zdenek HRUBY
The book addresses the issue of development and changes in
telecommunications, liberalisation and regulation over the course of the last two
decades. Changes in regulation, technology, the economy, globalisation and
competition are the key words, while the main focus of the book is
European. Its authors analyse facts and ideas in detail and provide a good,
comprehensive guide to the topics to date. Special attention is given to the
interrelation between the effects of globalisation and European and national
regulatory and business responses. The review of liberalisation and re-regulation
is transparent, providing readers with full orientation. The analysis
and explanation of technological changes, globalisation, institutional
structures, European integration, ideas and their implementation in practical
regulation reform are particularly interesting. Although the common findings
are European, with a key role played by EU regulatory policy, legislation,
responsible bodies and institutions, the authors see diversity among EU
member states. The features of a regulatory state both at a European and a
national level are identified. Different perspectives are evoked, including
Stigler's and Peltzman's interest-group concepts.
The respective chapters of the book cover the following: the status
quo ante and an explanation of the set of variables that resulted in a paradigmatic
policy change towards liberalisation, with remaining state intervention; the
emergence of a telecommunications policy at EU level, culminating in the
1998 package of directives; the transposition and implementation of EU
policies by member states and the emergence of the "regulatory state"; and the
Electronic Communications Regulatory Framework for services, with the
infrastructure and content points of view also analysed. The EU's
position, role and influence on policies and processes driven by
organisations like the WTO and ITU are also spotlighted. Finally, the relationship
between the EU and regulation at a national level is considered, as well as
further convergence and the increasing role of economic goals vis-à-vis public
service provision.


This book is written from a political science point of view, and is neither a
technical economic publication nor an engineer's technological approach. It
should consequently prove transparent, usable and valuable to a wide
variety of readers, ranging from academics to industry professionals and
officials.

Byung-Keun KIM
Internationalizing the Internet
The Co-evolution of Influence and Technology
Edward Elgar publishing, Coll. New horizons in the economics of innovation, 2005,
300 pages

by Bruno LANVIN
Based on research carried out at SPRU (Sussex University), Professor
Byung-Keun Kim offers a refreshing and thought-provoking perspective on
the genesis, evolution and internationalization of the internet.
An original perspective with a pessimistic conclusion
The author's main concern is to explore how the interaction of various social
groups and countries has shaped the internet as we know it today. In this
venture, the internet is not considered as a given, but as the concretization
of one among many possible futures: starting with the observation that
studies on the genesis and history of the internet have essentially focused
on its institutional U.S. background (DARPA, NSF, etc.), the author calls
attention to the international dimensions of internet growth and asks what
made it a global infrastructure. Is the internet reducing or compounding
existing technological and economic divides across the planet?
Pointing to the limitations of traditional analyses (techno-economic and
socio-technological), Professor Kim covers new ground by focusing on
governance issues, and raises interesting historical perspectives on
competition/cooperation between the U.S. and Europe with regard to the
architecture, management and politics of the internet. A highly compelling
part of the book includes a consideration of 'population dynamics', and of the
effects that demography could have on the future shape of internet usage.
Although Professor Kim's reflections on the use and diversity of languages
on the internet are predictable, the picture nevertheless remains striking.
The author's conclusion is rather pessimistic, since he sees the internet as a
factor of additional divides between countries, rather than a contributor to
more equality and equity at the global level.

Book Review

187

A highly rich analysis that calls for more


Based on an impressive survey of existing literature covering a variety of
fields (including technology, economics, sociology), as well as rigorous
analyses and comparisons of available data, the book offers a new and truly
inter-disciplinary approach to one of the crucial issues of our time. Its main
limitations are of two kinds: on the one hand, the author sometimes seems
reluctant to move away from a comparison of existing bodies of thought and
analysis to provide a truly independent vision of where the internet is
heading; on the other, the data he relies on is already dated. In
Chapter 6 (Internationalization and Digital Divide), for example, none of the
tables/graphics use data more recent than 2000. In a field where changes
are so rapid, this is a significant shortcoming.
In spite of these limitations, this is a quite remarkable book, which will
provide useful background to all those interested in the internet, including
researchers and decision makers, both public and private.
As we enter the 'post-WSIS' era, and as the Internet Governance Forum
starts its work, research of this calibre is urgently needed. Perhaps
Professor Kim, the SPRU and others could be encouraged to pursue this
work by looking at the most recent developments (i.e. after the 2000 bubble) in
this area? Maybe a more optimistic vision would emerge from such an
update, based on the growing number of examples of developing countries
where the internet has been used to reduce poverty, create jobs, enhance
education, and reduce the gap between rich and poor.

Bethany McLEAN & Peter ELKIND


The Smartest Guys in the Room:
The Amazing Rise and Scandalous Fall of Enron
Portfolio, Penguin Group, Inc., New York, NY, USA
Updated paperback edition, 2004, 440 pages

by James ALLEMAN
If you did not read about the bankruptcy of Enron, you could mistake The
Smartest Guys in the Room for a novel: certainly implausible, although
entertaining. But, incredibly, it was true. Arrogance, avarice, gluttony, and
even sex were all part of this "novel." McLean and Elkind offer a picture of
corporate hubris and greed that gives the reader pause for thought about the
social control of large corporations and the infrastructure that supports them.
We will concentrate on two areas: how telecommunications played a role in
the collapse of Enron and the failure of corporate governance in the Enron
case (and its lessons for policy makers).


The authors, Fortune reporters who were among the first to
detect problems at Enron, offer a disturbing picture of the corporation.
The company's chairman, Ken Lay, seems to worry more about the interior
decor of the corporate jet than the health of the corporation. Its greedy CFO,
Andrew Fastow, was out for himself, and its board members remained
unconcerned by the conflicts of interest to which he alerted them. Fastow raised billions
through debt and equity vehicles, yet his office did not know when this debt
matured. It did not have any concept of the magnitude or maturity of the
debt! CEO Jeff Skilling was apparently more concerned about making the
quarterly "number" for Wall Street than ensuring the corporation's viability
and, hence, did not probe the financial stability, legality or propriety of
the financial vehicles Fastow designed.
These are only the major characters in a cast of players who, in one way or
another, led to the fall of Enron. But it was not just people inside Enron that
were culpable. The accountants, auditors and investment bankers all "strutted
and fretted upon" Enron's stage, which in itself signifies failure.
How could a corporation that was considered one of the top U.S. companies,
recognized as such by well-regarded business monthlies such as Fortune
and used as a case study at leading business schools, fail so dramatically
and so fast?
Clearly, I cannot give away the ending: the company went bankrupt,
leaving thousands of employees without jobs or pensions. One of the top
accounting firms, Arthur Andersen, was destroyed. Executives have been and are
being prosecuted; people have already gone to jail, and more will. Yet to me, the
phenomena that are of interest are the magnitude of the fraud and how long
it continued. How could banks support this level of debt? Why was the fraud
not detected sooner? What could prevent such an occurrence again? The
financial checks and balances (credit agencies, government regulators,
Wall Street analysts, bankers and lawyers) failed, creating uncertainty and
a loss of confidence in the financial markets. Not only do the authors shed
light on these issues, they do it in an entertaining way.
Where does information and communication technology (ICT) fit into Enron?
The company began as an oil and gas enterprise (an energy company), but
transformed its public persona into a company engaged in ICT and trading.
ICT was part of the hype, as was trading, since these businesses were the
"darlings" of Wall Street. They were the growth industries, not the staid, old
energy business. When Skilling realized what was happening to internet
stocks, he began to promote and fund Enron Broadband, a small part of an
Oregon utility Enron had purchased. Since the company was already
(successfully?) trading energy, it was an easy step to consider broadband
trading, or so it seemed. The requisite skills were already in-house. The
value of the company would increase substantially, according to Skilling's
calculation whereby USD 1 billion in investment would add USD 20 billion to
market capitalization (p. 185). However, broadband trading was not


equivalent to energy trading and the venture failed. In the end, the
management was not there; employees overspent, over-acquired, under-managed and understood little of the business.
When the company made a significant gain from a pre-IPO purchase of
Rhythms NetConnections (USD 300 million on a USD 10 million
investment), Fastow offered to set up one of his infamous Special Purpose
Entities (SPEs) to hedge the gain (since Enron could not sell the stock
immediately). But, as with the other SPEs, this was a sham. Ultimately, the
hedge was supported by Enron stock, and thus was not a hedge at all,
although it did provide Fastow and his partners with over a quarter million
dollars in compensation for the "risks" they took. It was all part of the hubris;
so long as Enron stock did not fall, the SPEs worked. However, gravity
even applies to the stock market when the weight of debt becomes too great
(and is revealed).
What are the lessons to be learnt from this experience? This is a difficult
question. Analysts are supposed to thoroughly understand corporations; the
auditors' job is to uncover and report deceptions and weaknesses in company
accounts; credit rating agencies are supposed to probe into the depths of the
financials to understand the exposure of bondholders; and the Securities and
Exchange Commission (SEC) is supposed to be the guardian of the veracity
of corporate financials. All failed. Yet many public indications of malfeasance
were available. Journalists, in addition to the authors of The Smartest
Guys, wrote articles skeptical of Enron's practices as early as 1993. Short
sellers were also aware of problems with the company. So the lesson I draw
from all of this is that there is a systemic failure, which needs serious
scrutiny, analysis and remedial legislation. The Sarbanes-Oxley legislation,
adopted in the United States in the wake of the Enron and MCI WorldCom
bankruptcies, was passed in haste, more to reassure the financial
community that Congress could fix what was clearly broken than to provide a
long-term solution. While the book points to the fractures in the system, it
does not offer any clear remedies, nor does Sarbanes-Oxley. Much more
analysis needs to be completed to avoid future disasters like the failures of
Enron and MCI WorldCom.
One of the more useful aspects of the book is the list of players and a short
note on their role in this drama. I found it a useful reference while reading
the book. While The Smartest Guys in the Room is entertaining and
readable, it does have a few minor flaws. There is some repetition in the four
hundred plus pages and the index was not updated for the paperback
edition, but this should not deter anyone from reading the book. It certainly
offers an instructive guide to anyone interested in seeing what can go wrong
in the corporate environment. Although one may not learn what remedies to
apply, the issues are clearly identified. The next step is to analyze the
complexity of these problems and find remedies.


Peggy VALCKE, Robert QUECK & Eva LIEVENS


EU Communications Law
Significant market power in the mobile sector
Edward Elgar Publishing, 2005, 187 pages

by Petros KAVASSALIS
In the era of communications policies that promote competition, the
regulation of network-based industries has in recent years focused
increasingly on the correction of undesirable market outcomes. In this
context, independent (sector-specific) national regulatory authorities (NRAs)
in most developed countries are responsible for closely monitoring the
competition process, with the objective of sustaining a "sufficient" level of
competition and of imposing remedies whenever they are needed 1. To
achieve this goal, NRAs, in accordance with EU competition law, have at
their disposal a variety of regulatory instruments ranging from simple persuasion to
enforcement and licence adaptation. More recently, this regulatory arsenal
has been supplemented with a new framework that aims to impose remedies on
industry players with significant market power (SMP). The book by P. Valcke
et al. offers a detailed explanation of the properties of this new EU regulatory
framework (2003) and examines how the latter currently applies in the
European mobile sector. With this book, the authors make a valuable
contribution to regulation policy literature and raise issues of great
importance for the SMP management process.
The term SMP defines a situation whereby a company enjoys a position
equivalent to dominance, that is to say a position "of economic strength
affording it the power to behave to an appreciable extent independently of its
competitors, customers and ultimately consumers" 2. In economic terms, this
essentially signifies the power to raise prices above the competitive level.
The authors of the book carefully draw the distinction between SMP and
competition policy: "In competition policy, the triggering factor is to be found
in specific conduct (abuse) or a change in market structure (merger). In the
new regulatory framework, this triggering factor comes at the 'market
selection' stage, where some relevant markets are singled out for the SMP
procedure." Obviously, in assessing whether a company possesses SMP,
the definition of the "relevant market" (which includes the products or

1 For an overview of regulatory conditions in the post-liberalization era with reference to mobile
communications markets, see: J. UBACHT, 2004, Regulatory Practice in Telecommunications
Markets: How to Grasp the 'Real World of Public Action', C&S, no. 54, pp. 219-242.
2 Article 14 of the Framework Directive, O.J. 24.04.2002, L 108/33.


services making up the market and the geographical boundaries of that
market) for ex ante regulation is of fundamental importance. This is the first
stage in a three-step procedure that also encompasses: analysis of the
markets selected in order to determine companies ("undertakings" in the
jargon of the sector) with SMP and the imposition of obligations or remedies.
The authors elaborate meaningfully on how relevant markets should be
defined in practice on the basis of a "three criteria test": i) the existence of
high and non-transitory entry barriers, ii) the presence of a market structure
that does not tend towards effective competition within a relevant time frame
and, iii) the inefficiency of competition law alone to adequately address the
market failure(s) concerned. This is opposed to the theoretical approach
adopted in the past of defining relevant markets in advance by directives.
The authors also provide a number of cases from relevant markets in the
mobile sector (i.e. access and call origination, the wholesale national market
for international roaming and voice call termination) and propose an
analytical grid of the legal criteria for "relevant market" selection applied to
mobile markets.
The authors are also careful to distinguish between different forms of
SMP: i) single dominance, ii) joint dominance (as in the case of "tacit
collusion" and oligopolistic markets) and, iii) leveraging of market power to
an adjacent market (in the sense that NRAs should be able, for example, to
prevent companies with SMP in an upstream wholesale or access market
from leveraging this market power downstream into the market for retailing
services). Ultimately, the authors extend their analysis to discuss specific
issues regarding the imposition of remedies at two levels: i) general
principles applying to the regulatory action of NRAs, such as impartiality,
transparency etc. and, ii) specific principles applying to the correcting action
of NRAs (i.e. the imposition of remedies on undertakings with SMP), to guide
NRAs in selecting appropriate remedies based on the "reasoned decision"
principle (i.e. remedies proportionate to the nature of the problem identified),
promoting service- and/or infrastructure-based competition and designing
incentive compatible remedies (with built-in mechanisms that give incentives
to the undertaking with SMP to comply with the remedy).
Beyond this extensive and highly informative discussion of possible
actions and effective remedies against SMP, there is a lot to like about this
book. The (sector-specific) SMP regulation regime is convincingly discussed,
supported by a strong body of empirical evidence and successfully
compared, at the end of the book, with general competition law principles
and methodologies.


Summary

Jean GABSZEWICZ & Nathalie SONNAC


L'industrie des médias
Ed. La Découverte, Coll. Repères, 2006, 121 pages
This text offers answers to issues concerning the media industry such as the role
played by the interaction between the media market and the advertising market, the
effect of this interaction on the diversity of media content, as well as an analysis of
concentration in the sector and its consequences in terms of pluralism, diversity and
industry regulation. The book provides an overview of this industry's characteristics
and the economic tools for their analysis, including: the nature of production costs, a
breakdown of revenues, consumers' cultural practices, market players and forms of
state intervention.
The book is aimed at all students, researchers and professionals working on the
economics of the media, information and communication.

The authors



James ALLEMAN is a professor in the College of Engineering and Applied
Science, University of Colorado, Boulder. In the fall of 2005 Dr. Alleman was a
visiting scholar at IDATE in Montpellier, France; previously, he was a visiting
professor at the Graduate School of Business, Columbia University, and director of
research at Columbia Institute of Tele-Information (CITI). Professor Alleman
continues his involvement at CITI in research projects as a senior fellow. He has also
served as the director of the International Center for Telecommunications
Management at the University of Nebraska at Omaha, director of policy research for
GTE, and an economist for the International Telecommunication Union. Dr. Alleman
was also the founder of Paragon Service International, Inc., a telecommunications
call-back firm and has been granted patents (numbers 5,883,964 & 6,035,027) on the
call-back process widely used by the industry.

François BAR is Associate Professor of Communication in the Annenberg
School for Communication at the University of Southern California. He directs the
Annenberg Research Network on International Communication. Prior to USC, he
held faculty positions at Stanford University and at the University of California at San
Diego. Since 1983, he has been a member of the Berkeley Roundtable on the
International Economy (BRIE), at UC Berkeley, where he previously served as
program director for research on telecommunications policy and information
networking. He has held visiting faculty appointments at the University of Toronto, the
University of Paris-XIII, Théséus, and Eurécom. His current research interests
include comparative telecommunication policy, as well as economic, strategic and
social dimensions of computer networking, new media and the internet.

Audrey BAUDRIER is President of Study Group I in the Development Sector of
the International Telecommunication Union (ITU), which deals with
telecommunications policies and national regulatory strategies. Affiliated to the
Research Centre ATOM of the University Paris I Panthéon-Sorbonne, her research
focuses on the political economy of regulation and the governance of public service
network markets.
http://atom.univ-paris1.fr

Vincent BONNEAU is a Senior Consultant in the Marketing and Forecasting
Department of IDATE. He is mainly in charge of the impact of the software industry
on the telecom markets. Prior to IDATE, Vincent BONNEAU worked for the French

COMMUNICATIONS & STRATEGIES, no. 61, 1st quarter 2006, p. 193.

194

No. 61, 1st Q. 2006

Trade Commission (Economic Department of the Embassy of France) in San
Francisco as an analyst in charge of the software industry. He has also worked for
marketing departments at several telecommunication companies including NOOS
(the leading French cable operator), Wanadoo and France Telecom. Vincent graduated
from Ecole Polytechnique (1997) and from Ecole Nationale Supérieure des
Télécommunications (2002). He also holds an MS from HEC in IT Management (2002).
v.bonneau@idate.org

Alain BOURDEAU de FONTENAY teaches in the department of economics at
Queen's College, City University of New York and is senior research fellow at the
Columbia Institute for Tele-Information. He has written extensively on
telecommunications economics and has worked at Bell Laboratories and for the
Canadian telecommunications regulator. He is also a cofounder of de Fontenay, Savin and
Kiss, an international consulting firm. His current work focuses on concepts of
markets and other forms of exchange relations, especially in ICTs. He holds a
doctorate from Vanderbilt University.
Marc BOURREAU is an assistant professor in the Department of Economics and
Social Sciences at Ecole Nationale Supérieure des Télécommunications (ENST,
Paris). He is also a research associate in the laboratory of industrial economics (LEI)
at the Center for Research in Economics and Statistics (CREST). Prior to joining the
ENST faculty, Professor Bourreau worked for France Télécom (1997-2000) as a
regulatory economist. He received a master's degree in engineering science from
ENST. He also holds a Ph.D. in economics from Université Paris 2 Panthéon-Assas
and an "habilitation à diriger des recherches" from Université Paris 1 Panthéon-Sorbonne. His main research interests are economic and policy issues relating to
broadcasting, telecommunications and the internet. He has published several articles
on these topics in journals such as The European Economic Review, Information
Economics and Policy, Telecommunications Policy, Revue Economique and Revue
d'Economie Politique.
Thomas CORTADE completed his PhD in economics at the University of
Montpellier 1 (France) in 2005. His main research interests are industrial
organization, regulation and competition policy applied to network industries, and
more precisely to the ISP market. He recently published a paper on the impact of ISP
mergers on welfare in the journal Economie et Prévision.

David EVANS is an authority on the economics of high-technology and platform-based businesses, primarily as they relate to competition policy and intellectual
property. He is the author of four books and over 70 articles in journals ranging from
the American Economic Review and Foreign Affairs to The Yale Journal on
Regulation. His many opinion pieces have appeared in newspapers worldwide
including the Washington Post, Wall Street Journal, Financial Times, Les Echos and
El Pais. A specialist on competition policy in the U.S. and European Union, he has
served as an expert and testified before courts, arbitrators, regulatory authorities and


legislatures in the U.S. and Europe. He has led economic analysis in several
important antitrust cases over the last 25 years including U.S. v AT&T. Most recently,
Dr. Evans led an international economic team on a landmark series of cases
involving a large global technology firm in the U.S. and Europe. Since September
2004 he has been visiting professor of competition law and economics at University
College, London. He has a Ph.D. from the University of Chicago. He is also co-author
of Paying with Plastic (MIT Press, 2005), which is considered by many as "the
definitive account of the trillion-dollar payment card industry".

Zdenek HRUBY is Deputy Minister of Finance in the Czech Republic, where he is
responsible for international relations, privatizations and revitalization. He is also the
Czech Republic's national coordinator of EU programmes, governmental coordinator
for negotiations regarding the state aid to the banking sector and a member of the
statutory bodies of several companies. In an academic capacity, Mr. Hruby presently
teaches at the Institute of Economic Studies at the Charles University in Prague and
at the Czech Technical University in Prague. He has also been a guest lecturer at
several other universities including London Business School, the University of
Cambridge and Institut fr Wirtschaftsforschung Halle. Zdenek Hruby graduated in
cybernetics from the Czech Technical University in Prague, of Electrical Engineering
and completed postgraduate studies in economics.

Petros KAVASSALIS is one of the founders of the MIT Internet Telecommunications
Convergence Consortium (MIT-ITC). Currently, he is the Director of the
ATLANTIS Group at the University of Crete. Petros Kavassalis holds a degree in Civil
Engineering from the National Technical University of Athens and a Ph.D. from
Dauphine University in Paris (Economics and Management). In the past, he worked
as a Researcher at the Ecole polytechnique, Paris (Centre de Recherche en Gestion,
Laboratoire d'Econométrie) and at MIT (Research Programme on Communications
Policy at CTPID). His interests focus on the fields of information management &
e-business, information economies, and the emergence of organizational patterns on
the Web and in mobile applications and services.

http://atlantis.uoc.gr

AeRee KIM is a PhD student at the Graduate School of Global Information and
Telecommunication Studies at Waseda University, Tokyo, Japan. She holds a
Masters in Global Information and Telecommunication from Waseda University.
AeRee Kim has been a guest research officer at the NTT Cyber Communication
Laboratory Group since 2005. In recent years, she has received various scholarships
and was a fellow of the Rotary Foundation. Her research interests mainly
focus on information and telecommunications economic analysis and the effect of
information on society.

A World Bank senior advisor on e-strategies, Bruno LANVIN has occupied senior
positions at the World Bank (Manager of the Information for Development Program
(infoDev), and Executive Secretary of the G-8 DOT Force), and in the United Nations
(Head of Electronic Commerce in UNCTAD, Chief of Cabinet of the Director General
of the UN). Author of a significant number of articles and books on international
trade, the 'e-economy', knowledge and innovation, he has an MA (Mathematics and
Physics), an MBA (HEC) and a PhD in Economics (La Sorbonne).

As part of the "Industrial Analyses" Department of IDATE, Loïc LE FLOCH has
specialised in telecom industry development in the countries of the Middle East and
North Africa region, in particular in regulatory and technical issues. He is also in
charge of surveys concerning the broadband access market (DSL, cable, fibre optic).
Prior to joining IDATE, Loïc worked for the French telecommunications operator Neuf
Telecom, where he carried out various research projects related to the deployment of
Neuf Telecom's network. He is a graduate engineer, holding a diploma from the
National Institute of Telecommunications (INT, 2001).
l.lefloch@idate.org

Jonathan LIEBENAU teaches in the department of management in the
information systems group at the London School of Economics and Political Science.
He is also affiliated with the Columbia Institute for Tele-Information, Columbia
University Graduate School of Business. He has published articles on high-technology
businesses, fundamental concepts of information, and technology policy.
His current research focuses on the use of ICTs in business, and on the
telecommunications industry and its regulation in Europe, the USA and Central Asia.
He holds a doctorate from the University of Pennsylvania.

Laurent MICHAUD joined IDATE in February 2000 as a consultant in the "Media
Economics" department. His skills cover the fields of economic and financial analysis
and evaluation, statistical data processing, computer-assisted simulation systems,
short and mid-term forecasts and database management. Laurent Michaud is in
charge of IDATE's multi-client digital entertainment studies. He carries out expert
missions on video games issues, and also contributes to strategic, sector-based
market reports. He was head of research at Montpellier University of Economics'
research laboratory, Le Centre d'études et de Projets, and holds a post-graduate
professional degree (D.E.S.S.) in Economic and Financial Project Engineering.

l.michaud@idate.org

Hitoshi MITOMO is Professor of Telecommunications Economics at the
Graduate School of Global Information and Telecommunication Studies, Waseda
University, Tokyo, Japan. Before joining the GITS faculty, he was an associate
professor (1992-1998) and a professor (1998-2000) of Transportation and
Telecommunication Economics at Senshu University. Since 1992, he has been a
guest research officer at the Institute for Posts and Telecommunications Policy,
Ministry of Posts and Telecommunications (MPT). He has served as a member of the
Telecommunications Council of the Japanese Government and on several
committees in the MPT. Hitoshi Mitomo graduated from Yokohama National
University with a BA in Management Science, holds a Masters in Environmental
Science from the University of Tsukuba, where he was a doctoral student at the
Institute of Socio-Economic Planning, and received his PhD in Engineering from
Toyohashi University of Technology.

Namkee PARK is a doctoral student at the Annenberg School for Communication,
University of Southern California. His studies and research focus on
communication technology and policy, as well as information/media economics.
Before coming to USC, he studied at Michigan State University (MA,
Telecommunication) and Yonsei University, Seoul, Korea (BA & MA, Mass
Communication). He is currently working on his dissertation project, which examines
the deployment of municipal Wi-Fi networks.

Gérard POGOREL is Professor of Economics and Management at Ecole Nationale
Supérieure des Télécommunications (ENST, Paris). He has published numerous
articles, books and reports, and has acted as an evaluator, auditor and reviewer for
the NSF, the Harvard Business Review, Research Policy and EU research
programmes in Information and Communications Technologies. He has been a
frequent member of monitoring committees (composed of independent experts) of
the EU Framework Research Programme, Chair of the Monitoring Panel for
2000-2001, and Chair of the Monitoring Committee of the European Union
Information Society and Technologies Research Programme for 2001-2002. He has
participated in the international panel of experts for the World Competitiveness
Yearbook (IMD, Lausanne) since 1996. He is a member of the Board of Directors of
the International Telecommunications Society (ITS).

www.enst.fr/egsh/pogorel

David SEVY is a principal in the European Competition Policy Group of LECG
and heads the Paris team. David has advised clients on a wide variety of competition
policy issues, covering diverse sectors. He graduated from Ecole Polytechnique
(1986) and received his Ph.D. in economics from Ecole Polytechnique (1993). After a
post-doc at AT&T Bell Laboratories (1994), David worked for France Télécom
(1994-1999) as a senior regulatory economist. He then joined McKinsey & Company
in Paris, from 1999 to 2002, where he was involved in numerous strategy and
management projects for telecommunications, high technology and pharmaceuticals
companies.
David's academic research focuses mainly on theoretical and applied industrial
economics, especially on topics relevant for competition in the telecommunications
industry. He is currently Adjunct Professor of Economics at Ecole Polytechnique,
where he teaches business and antitrust economics. He is also co-author of the
recently published textbook Economie de l'entreprise (Editions de l'Ecole
Polytechnique).
Nathalie SONNAC, who was awarded a PhD in economics, is an associate
professor at the University Paris II and a researcher at CARISM and CREST/LEI.
She obtained in 2004 an "Habilitation à diriger des recherches" in Information &
Communication Sciences. Her research focuses on media economics (press, TV
broadcasting and the internet) and the information economy. Her publications
include: Economie de la presse, with Patrick Le Floch (2nd Ed., La Découverte, coll.
Repères, 2005); L'industrie des médias, with Jean Gabszewicz (Ed. La Découverte,
coll. Repères, 2006); and several articles in refereed journals.

Tommaso VALLETTI is Professor of Economics at Imperial College London
(U.K.) and the University of Rome "Tor Vergata" (Italy). He is a research fellow at the
Centre for Economic Policy Research (CEPR, London) and a research affiliate of the
Global Consortium for Telecommunications (London Business School). Professor
Valletti is also a member of the panel of academic advisors to Ofcom, the U.K.
communications regulator, and served as a board director of Consip, the Italian
Public Procurement Agency, from 2002 to 2005. He has advised numerous bodies, including
the European Commission, the OECD, and the World Bank on topics such as
network interconnection, mobile termination and spectrum auctions. He is the editor
of Information Economics & Policy, associate editor of the Journal of Industrial
Economics, and a member of the advisory board of both the Journal of Network
Industries and COMMUNICATIONS & STRATEGIES. He has published numerous
articles in various journals.

Marianne VERDIER is a PhD student in economics at Ecole Nationale
Supérieure des Télécommunications de Paris (ENST). She also works for "le
Groupement des Cartes Bancaires", CB. A former student of HEC (2004), she also
graduated from the research master APE (Campus Paris Jourdan) and from the
ENSAE (Ecole Nationale de la Statistique et de l'Administration Economique) in
2005. In 2001 and 2002, she participated in risk management projects conducted by
two French banks.

ITS News

Message from the Chair


Dear ITS Members,
Biennial Conferences are ITS' most important events. At our biennials, ITS
members from around the world gather to discuss papers, present new findings
and forge new contacts. ITS biennials attract many first-time attendees who
often return to future ITS events and become long-standing members of our
organization. The conference registration fees at biennials include a two-year
ITS membership, providing the major source of members for our organization.
ITS has been very fortunate with its past biennials. We have had a long
series of committed organizers, strong academic leadership, generous sponsors
and exciting new venues. ITS has had a global reach in its past biennials,
convening in places such as Tokyo, Boston, Venice, Sophia Antipolis, Sydney,
Seville, Stockholm, Buenos Aires, Seoul and, most recently, Berlin. Just
mentioning these cities evokes many happy memories!
For many years, it was an explicit priority of ITS to convene a biennial in
China. Finally, we are at such a fortunate point in our history: in Beijing, 12-16 June
2006, together with our conference host, the Beijing University of Posts
and Telecommunications (BUPT), ITS is convening its 16th Biennial Conference,
with the theme "Information Communication Technology (ICT): Opportunities
and Challenges for the Telecommunications Industry". ITS is particularly thankful
for the leadership of Professor Ting-Jie Lu, Dean of the School of Economics
and Management at BUPT, and Professor Xu Yan of the Hong Kong University
of Science and Technology for making this biennial a reality. The conference has
many supporters in the scientific community, and a solid core of corporate
sponsors. I encourage you to visit the conference website for an overview of the
distinguished conference committee, the preliminary program, and other useful
information regarding registration, hotels, and post-conference tours.

Foremost, the 2006 biennial will be an important meeting place to discuss the
future of telecommunications and ICTs and their impact on our societies. The
conference will fulfil the basic mission of ITS, which is to provide a non-aligned
and non-profit forum where academic, private sector, and government
communities can meet to identify pressing new problems and issues, share
research results, and form new relationships and approaches to address
outstanding issues. ITS places particular emphasis on the interrelationships
among socioeconomic, technological, legal, regulatory, competitive,
organizational, policy, and ethical dimensions of the evolving applications,
services, technology, and infrastructure of the communications, computing,
Internet, information content, and related industries.
Likewise, ITS regional conferences are critical to the mission of ITS and
provide the same basic forum and values, although in a smaller format and
setting. ITS is pleased to announce that the 17th European Regional ITS
Conference will be convened in Amsterdam, The Netherlands, August 22-24,
2006. The Conference will focus on next generation telecommunications
infrastructure and services and will be hosted by the University of Amsterdam.
Co-chairs of the conference are Prof. N.A.N.M. van Eijk, University of
Amsterdam, Prof. Harry Bouwman, Delft University and Dr. Brigitte Preissl,
Deutsche Telekom.
ITS is grateful for the regional ITS conferences, and depends on the
commitment of regional organizers. Notable is the long and faithful tradition of
ITS European Regional conferences, convening regional ITS events in 14
European countries since 1987. ITS Board member Prof. Juergen Mueller has
been the main organizer since the start, and with great skill he has put together
many memorable and fine ITS events. ITS is proud to have Dr. Brigitte
Preissl, also an ITS Board member, as a new main driver of ITS European Regional
Conferences. The recent ITS European Regional conference in Porto, 2005,
showcased Dr. Preissl's fine hand in putting together thematic sessions, engaged
discussants and quality papers. The Porto conference was also one of the
best-attended European regional events, and had excellent local arrangements
thanks to the efforts of ITS Board member Prof. João Confraria da Silva. Furthermore,
ITS Board member Prof. Gary Madden has convened two Asia-Australian
Regional ITS conferences in Perth, the most recent one also in 2005. Both of
these events have been highly acclaimed and well-attended, and have generated
several books and journal issues, testifying to the high academic and professional
qualities of the local organizers at Curtin Business School, Perth. For several years
now, ITS has also supported workshops at the IDATE International Conferences,
events for which a special issue of the ITS journal COMMUNICATIONS &
STRATEGIES has been published, with many contributions from ITS members.
ITS participates in jointly organized conferences as well. The most recent
example involves ITS joining with Medetel's annual e-health conference, to be
held 5-7 April in Luxembourg. ITS shares the conference together with a large
number of other participating major international organizations including the
International Telecommunication Union (ITU), the International Society for
Telemedicine & eHealth (ISfTeH), the European Health Telematics Association
and the World Health Organization (WHO). This joint conference represents a
broadening of the traditional topics examined by ITS, and is therefore a very
welcome initiative. Furthermore, ITS will sponsor special sessions at the 11th
Congress of the International Society for Tele-health, 26-29 November 2006, in
Cape Town. Significantly, ITS Vice-Chair Lucy Firth has secured benefits for ITS
members attending these events.
ITS is also pleased to welcome two new Board members, Professor Ting-Jie
Lu, Dean of the School of Economics and Management at the Beijing University
of Posts and Telecommunications; and Dr. Aniruddha Banerjee, Vice President,
NERA Economic Consulting. At the same time, we would like to thank the following
departing Board members: Professor Cristiano Antonelli of the University of
Torino, Professor Gérard Pogorel of ENST Paris, Dr. Lorenzo Pupillo of Telecom
Italia and Dr. Momin Hayee of BT. If I were to elaborate on all the fine
contributions they have made, it would be a very long list. Let me just mention
that Professor Antonelli, Professor Pogorel and Dr. Pupillo each served in the
important role of Program Chair of an ITS biennial (1990 in Venice, 1992 in
Sophia Antipolis, and 2000 in Buenos Aires, respectively).
ITS is also proud of its many corporate and society members, the support of
which is critical to the vitality of ITS: Arnold & Porter, BT, BAKOM, Deutsche
Telekom, France Telecom, IDATE, Infocom, KT, NERA Economic Consulting,
NTT DoCoMo, the Telecommunications Authority of Turkey and TELUS. Without
such government and industry engagement, ITS would not be able to fulfil its mission.
To be effective, ITS depends on hearing from our members and the public.
Please let us know how we can continue to most effectively serve your
professional interests and ambitions. For more information on ITS and all the
events I mentioned above, please visit our website (www.itsworld.org). I look
forward to working with you and hope to see you at a future ITS event.
Sincerely,

Erik Bohlin
ITS Chair

34th Research Conference on Communication, Information, and Internet Policy
September 29 to October 1, 2006
Arlington, Virginia

www.tprc.org

TPRC hosts an annual conference on communication, information, and Internet
policy that brings a diverse, international group of researchers from
academia, industry, government, and nonprofit organizations together with
policy makers. It serves two primary goals: (1) dissemination of research
relevant to current communications regulatory and policy debates around the
world; and (2) promotion of new research on emerging issues.
TPRC welcomes national, international, comparative, and multidisciplinary or
interdisciplinary studies. Subject areas of particular interest include but are not
limited to the following:
- Comparative Studies of Networked Industries
- Competition Policy in Network Technologies and Industries
- Laws and Regulation in a Time of Rapid Change
- ICTs for Development and Community Informatics
- Intellectual Property and Digital Rights
- The Transformation and Future of Content/Media
- Next Generation Devices and Networks
- e-Applications and Internet Governance
- Spectrum Policy and Wireless Applications


DIGIWORLD2006
The digital world's challenges

Mobile telephony's success worldwide, the ubiquity of broadband access, the
entrenchment of new practices centred around blogs and podcasting, and the
recognition of the Internet titans' growing clout and ambitions: 2005 confirmed
the vitality of the DigiWorld.
But this effervescence is not all smooth sailing.
While certain innovations can prove to be value destroyers (IP telephony), others are fuelling
myriad questions concerning the business
models that consumers will be willing to accept,
which players will be the market's prime
beneficiaries, and the consequent changes that
will need to be made to regulatory frameworks.

Print & online
English & French
EUR 29 (excluding tax)

In this 2006 edition of our DigiWorld yearbook, you will find valuable data on the central
components of the new digital world, along with
analyses from IDATE's experts and a
comprehensive round-up of the highlights of the
year gone by.

Drawing on IDATE data and the analyses supplied by its experts, this sixth
edition of the DigiWorld report provides an up-to-date and structured view of the
challenges facing our digital world.

Information: www.idate.org
Contact: Sophie MONJO - +33 (0)4 67 14 44 56 - s.monjo@idate.org

Call for papers


If you would like to contribute to a forthcoming issue of
COMMUNICATIONS & STRATEGIES, you are invited to submit your
paper via email or by post on diskette/CD.
As far as practical and technical questions are concerned, proposals
for papers must be submitted in Word format (.doc) and must not
exceed 20-22 pages (6,000 to 7,000 words). Please ensure that your
illustrations (graphics, figures, etc.) are in black and white - excluding
any color - and are of printing quality. It is essential that they be
adapted to the journal's format (with a maximum width of 12 cm). We
would also like to remind you to include bibliographical references at
the end of the article. Should these references appear as footnotes,
please indicate the author's name and the year of publication in the
text.
All submissions should be addressed to:
Sophie NIGON, Coordinator
COMMUNICATIONS & STRATEGIES
c/o IDATE
BP 4167
34092 Montpellier CEDEX 5 - France
s.nigon@idate.org - +33 (0)4 67 14 44 16
http://www.idate.org
