Competition in two-sided markets
Application to information and communication industries

Interview with David EVANS, Vice Chairman of LECG Europe

Foreword
Edmond Baranes, Editor
Yves Gassot, Publishing Director
Contents
Dossier
Competition in two-sided markets:
Application to information and communication industries
Introduction
Marc BOURREAU & Nathalie SONNAC ..................................................... 11
A Strategic Guide on Two-Sided Markets Applied to the ISP Market
Thomas CORTADE ..................................................................................... 17
Retail Payment Systems: What can we Learn from Two-Sided Markets?
Marianne VERDIER ..................................................................................... 37
Mobile Call Termination: a Tale of Two-Sided Markets
Tommaso VALLETTI ................................................................................... 61
Impact of Mobile Usage on the Interpersonal Relations
AeRee KIM & Hitoshi MITOMO ................................................................... 79
Opinion
Interview with David EVANS, Vice Chairman of LECG Europe
Conducted by Marc BOURREAU, David SEVY & Nathalie SONNAC ......... 97
Articles
Municipal Wi-Fi Networks:
The Goals, Practices, and Policy Implications of the US Case
François BAR & Namkee PARK ................................................................ 107
Features
Regulation and Competition
Competitive compliance:
streamlining the Regulation process in Telecom & Media
Gérard POGOREL ..................................................................................... 159
Technical Innovations
Instant messaging: Towards a convergent multimedia hub
Vincent BONNEAU .................................................................................... 171
Use Logics
Mobile CE - The nomadic era
Laurent MICHAUD ..................................................................................... 179
Book Review
Peter HUMPHREYS & Seamus SIMPSON, Globalisation, Convergence
and European Telecommunications Regulation, by Zdenek HRUBY ....... 185
Byung-Keun KIM, Internationalizing the Internet - The co-evolution of
Influence and Technology, by Bruno LANVIN ........................................... 186
Bethany McLEAN & Peter ELKIND, The Smartest Guys in the Room: The
Amazing Rise and Scandalous Fall of Enron, by James ALLEMAN ......... 187
Peggy VALCKE, Robert QUECK & Eva LIEVENS, EU Communications
Law: Significant Market Power in the Mobile Sector, by Petros KAVASSALIS ..... 190
Summary ................................................................................................. 192
The authors ................................................................................................. 193
ITS News..................................................................................................... 199
Announcements .......................................................................................... 203
Introduction
Marc BOURREAU
École Nationale Supérieure des Télécommunications, Paris
Nathalie SONNAC
Laboratoire d'Economie industrielle du CREST-INSEE
& Université de Paris II
and "two-sided markets". This interest was first sparked by antitrust cases
concerning the credit card market in the United States and the development
of business models in the "new economy."
There is no unified theory of two-sided markets to date; rather,
different models co-exist that are applied to specific industry cases: payment
systems (ROCHET & TIROLE, 2002), media markets (FERRANDO et alii,
2004), the internet (LAFFONT, MARCUS, REY & TIROLE, 2003; CAILLAUD
& JULLIEN, 2003; BAYE & MORGAN, 2001), etc.
Although these different models were developed simultaneously and
independently, it soon became clear that they shared some common
features, giving birth to the theory of two-sided markets. In a nutshell, a two-sided market is one in which a platform (or more than one platform) brings
together two (or more) groups of consumers that are interdependent. More
formally, there are inter-group (or indirect) externalities in a two-sided
market.
The concepts of direct and indirect network effects (or network
externalities) are not new 1. There is a direct network effect when the utility
of a consumer who purchases a product or a service depends on the
number of consumers who purchase the same product or service. A
be charged different prices; in online dating sites, for instance, women are
charged less than men.
Moreover, when competition among platforms is introduced, this can lead
to surprising results. A monopolistic platform can be more efficient than
competing platforms, for instance. The results obtained under competition
depend on the assumptions with respect to consumers' connection
decisions: do consumers connect to one platform at most ("single-homing")
or can they connect to more than one platform ("multi-homing")? Finally, one
last important contribution of this theory is that it provides an interesting and
convincing analysis of the well-known "chicken and egg" problem (see
CAILLAUD & JULLIEN, 2003).
To date, the theory of two-sided markets suffers from two main
weaknesses. Firstly, there is no unified framework as yet. However, we
believe that it might be difficult to find such a unified setting. Better prospects
may be offered by developing models tailored to specific industry situations.
Secondly, research has remained mainly theoretical so far, and empirical
evidence is weak.
The papers in this dossier
References
BAYE M.R. & MORGAN J. (2001): "Information Gatekeepers on the Internet and the
Competitiveness of Homogeneous Product Markets", American Economic Review,
vol. 91, pp. 454-474.
CAILLAUD B. & JULLIEN B. (2003): "Chicken & egg: Competition among
intermediation service providers", Rand Journal of Economics, vol. 34, pp. 309-328.
ECONOMIDES N. (1996): "The Economics of Networks", International Journal of
Industrial Organization, vol. 14, pp. 673-699.
FERRANDO J., J. GABSZEWICZ, D. LAUSSEL & N. SONNAC (2004): "Two-Sided
Network Effects and Competition: an Application to Media Industries", Lucarnes
bleues 2004/09.
GABSZEWICZ J., LAUSSEL D. & SONNAC N. (2004): "Programming and
Advertising Competition in the Broadcasting Industry", Journal of Economics and
Management Strategy, vol. 13, pp. 657-669.
GABSZEWICZ J. & SONNAC N. (2006): L'industrie des médias, Éditions La
Découverte, Paris, "Repères".
LAFFONT J.-J., MARCUS S., REY P. & TIROLE J. (2003): "Internet Interconnection
and the off-net cost pricing principle", Rand Journal of Economics, vol. 34, pp. 370-390.
ROCHET J.-C. & TIROLE J. (2002): "Cooperation Among Competitors: The
Economics of Payment Card Associations", Rand Journal of Economics, vol. 33, pp.
549-570.
Abstract: This paper looks at a new body of literature that deals with two-sided markets
and focuses on the Internet Service Provider (ISP) segment. ISPs seem to act as a
platform enabling transactions between web sites and end consumers. We propose a
strategic guide for ISPs that covers features of two-sided markets such as strong
externalities and discuss how these market characteristics can affect competition policy.
Key words: Platform, externalities, price allocation, competition policy.
T. CORTADE
and the existence of two different prices raise the issue of price allocation.
This in turn poses key questions. What is the efficient price level, and what
is the efficient allocation of prices, from the platform's point of view? What are
the implications of the presence of positive externalities?
The first implication is essential. EVANS (2003) affirms that the price on
each side can be different. In cases where demand is developed on each
side, price level and allocation play an important role in maintaining two
different types of consumer. We can argue with ROCHET & TIROLE (2003)
that since there is a membership externality, the price charged by ISPs for a
transaction decreases with the size of the installed base. Again, this effect
closely resembles the network positive externality. However, the usage
externality may be internalised by the user groups through the price
structure set by the platform. In this case EVANS (2004) argues that the
service is jointly consumed by the two types of users in two-sided markets,
and the usage externality exists only if the transaction takes place.
Figure 1 - An ISP as a platform
[Diagram: the ISP platform provides access to both sides of the market - web sites (S) and internet users (B) - charging the access prices a_S and a_B respectively.]
1 It is worth noting that potential negative externalities do exist. This is the case with advertising
in newspapers. Indeed, consumers are willing to pay more to have less advertising. For a more
detailed analysis of this point see FERRANDO, GABSZEWICZ, LAUSSEL & SONNAC (2004)
and GABSZEWICZ, LAUSSEL & SONNAC (2002).
(p_S + p_B - c) / (p_S + p_B) = 1 / (η_S + η_B)

where p_B and p_S are the prices charged respectively to the buyers (internet
users) and the sellers (web sites), c is the platform's marginal cost, and η_B
and η_S represent the respective demand elasticities of each group. The
interesting insight afforded by ROCHET & TIROLE (2004) is that prices are
inversely proportional to elasticity.
It follows that ISP strategy should consider the side of the market more
sensitive to price by analysing the direct elasticity on each side impacted by
the usage externality. Internet users should thus be more sensitive than web
sites.
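As a rough numerical illustration of this inverse-elasticity logic (the cost and elasticity values below are invented, not taken from the paper), a Lerner-type condition on the total price implies that a higher overall elasticity lowers the platform's markup:

```python
# Illustrative numbers only (not from the paper): platform marginal cost and
# side-specific demand elasticities for internet users (B) and web sites (S).
c = 1.0
eta_B, eta_S = 2.0, 1.5
eta = eta_B + eta_S                  # total elasticity, here 3.5

# Lerner-type condition for the total price T = p_B + p_S:
#   (T - c) / T = 1 / eta   =>   T = c * eta / (eta - 1)
T = c * eta / (eta - 1)
print(T)                             # 1.4
print(abs((T - c) / T - 1 / eta) < 1e-12)   # True: markup is 1/eta
```

A larger total elasticity eta drives T toward the cost c, in line with the standard inverse-elasticity intuition recalled in the text.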
In line with the principles described above (network effect and elasticity),
this section considers competition between ISPs under single-homing,
i.e. where each side can only be connected to one platform.
ARMSTRONG (2004b) focuses on competition between ISPs that
provide services perceived as different by users. The author supposes that
each consumer, web site or internet user, can be connected to one exclusive
ISP only. The first insight provided by this study is that the net surplus for
each group is a function of the external benefit of having an additional
consumer in the group. Its main conclusion is that ISPs should consider this
external benefit as a measure of the opportunity cost.
This means that, since there is competition between ISPs, their strategy
should be based on avoiding price hikes to discourage consumers from
switching to a competitor's platform. The expression of the price is simple: it is
the sum of fixed costs and the substitutability parameter (since services are
perceived as differentiated), minus the value of the inter-group externality
resulting from the transaction. Moreover, this means that pricing is generally
not cost-oriented.
The impact of single homing on pricing strategy can be summarised as
follows:
- In the presence of single-homing, the more the users on one side place a high value on the presence of the other group, the lower the price determined by the ISP should be.
- However, the single-homing hypothesis is not really consistent with the ISP market. Web sites, in particular, can be connected to several platforms.
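A minimal sketch of this price expression, under assumed notation (f for the fixed cost, t for the substitutability parameter, alpha_other for the value the other group places on this side's presence; none of these symbols come from the paper):

```python
# Assumed notation, for illustration only: price charged to one side equals
# cost plus differentiation, minus the value the OTHER side places on having
# an extra member on the side being priced.
def singlehoming_price(f, t, alpha_other):
    return f + t - alpha_other

# Web sites value extra internet users highly (alpha = 1.5), so users get a
# low price; the reverse externality is assumed weaker (alpha = 0.5).
print(singlehoming_price(f=2.0, t=1.0, alpha_other=1.5))  # 1.5
print(singlehoming_price(f=2.0, t=1.0, alpha_other=0.5))  # 2.5
```

The side that is valued more by the other group is charged less, which is the opportunity-cost reading of the external benefit described above.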
ISP competition with multihoming
p_B + p_S - c = p_B / η_B = p_S / (η_S / σ_i)

where η_B and η_S respectively represent the demand-elasticity of internet
users and of web sites for a given platform i, where the transaction takes
place, and σ_i is a singlehoming index constructed from the demands d_iB
addressed to each platform. It is worth noting that the web sites' elasticity is
corrected by the index
According to EVANS (2004), other factors impact the price structure, such
as investment on one side of the market, since an investment allows the
platform to decrease the price on this side. As a result, this strategy makes it
possible to attract new consumers on the other side. Moreover, Evans
argues that multihoming offers a key insight in the study of two-sided
markets: multihoming implies higher competitive pressure and tends to
decrease prices.
This section examines cases where competition takes place with non-exclusive
services. In many cases users are connected to several platforms
(multihoming). This is particularly true of internet users. CAILLAUD &
JULLIEN (2003) show that service providers have incentives to propose
non-exclusive services when competitive pressure is not too high, in order to
exercise their market power. In such cases it is easy to divide (to subsidize),
but more difficult to conquer. With non-exclusive services the competitive
pressure is lower, making it more difficult to attract new users.
Finally, ARMSTRONG & WRIGHT (2004) provide an analysis of this topic
based on endogenous users' decisions between exclusive and non-exclusive
services. Their results closely resemble those cited above. We can
consequently argue that:
An optimal strategy for ISPs is to sustain losses on one side in order to
achieve a critical installed base on this side. The "divide and conquer"
access charge for termination. This means that the ISP at the origination of
the traffic must pay an access charge to its rival for termination. Finally,
users' decision to join one exclusive ISP (i.e. single homing) is endogenous.
The ISPs are considered as perfect substitutes from the consumer's point
of view. The total price set by an ISP consists of the price set for consumers,
plus the price fixed for web sites. The authors adopt the "off-net cost
principle". Moreover, they suppose that the hypothesis of the "balanced
calling pattern" is respected. This highlights an important difference between
their setting and the theoretical literature on the telecommunications industry: the
receivers of traffic pay a price to receive calls, which is not the case in
telecommunications. This has two major implications. The first is related to
prices, while the second is linked to competition stability.
The impact on price is as follows: when a consumer receives traffic
without paying, ISPs are left to pick up the perceived marginal cost (as
pointed out by LAFFONT, REY & TIROLE, 1998a). However, when
consumers pay to receive traffic, the perceived marginal cost is only equal to
the opportunity cost of losing a consumer who may switch to another ISP.
This is the result of the usage externality in two-sided markets. Moreover,
competition stability is stronger in this context. Indeed, when receivers do
not pay for traffic, then equilibrium can only exist if the access charge is
close to the marginal cost for termination or if the networks are close
substitutes. Yet in the scenario outlined above this is never the case, since
the sum of the prices (for each side) is just equal to the traffic cost,
independently of the access charge level. The access charge only
determines how cost is allocated between the two sides.
As a result, the price structure implied by externalities modifies the
access pricing problem. Here again, it is the study of the total price that is
relevant.
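The allocation result can be illustrated with a small numeric sketch (the cost figures and the linear pass-through below are assumptions for illustration, not the model's equations):

```python
# Illustration with assumed numbers (not from the paper): under the off-net
# cost principle, the access charge a shifts cost between the two sides but
# leaves the total price equal to the traffic cost c_orig + c_term.
c_orig, c_term = 1.0, 1.0            # origination and termination costs
for a in (0.0, 0.4, 1.2):            # access charge paid to the terminating ISP
    p_send = c_orig + a              # the sending side bears the access charge...
    p_receive = c_term - a           # ...the receiving side is credited with it
    print(a, round(p_send + p_receive, 10))   # total is always 2.0, whatever a
```

Only the split between sender and receiver moves with a; the sum of the two prices stays pinned to the traffic cost, which is the neutrality point made in the text.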
All these features can potentially modify the tools used by competition
policy. In short, two main difficulties arise for competition policy with regard
to two-sided markets. The first concerns the utility received by
consumers, since there are usage and membership externalities. Although it
is difficult to measure these externalities, they must be taken into account in
studies of two-sided markets. The second difficulty concerns the advantages
that consumers derive from a price structure that enables them to perform
transactions at the lowest possible cost. It is important to consider that the
benefits on one side increase with participation on the other. Again, it is not
easy to take this effect into account in competition policy.
Conclusion
This paper attempts to offer a strategic guide to two-sided markets, and
to identify the difficulties for competition and regulatory policy that arise from
the features of two-sided markets.
First, our analysis shows that two-sided markets differ from their
classical counterparts because there is a third party involved that is subject
to two different types of demand. The platform allows transactions between
different user groups. As a result, there are two types of externality. The first
externality, also present in the telecommunications industry, is the
membership externality, whereas the usage externality is specific to the
two-sided market structure. Thus users of the platform benefit from the presence
of members on the other side.
Such interactions have an impact on price level, and especially on the
allocation of the total price between the two sides of the market. Indeed,
platforms charge each side a price. In such cases, it is possible for the third
party to charge one side a price below marginal cost and the other a price
that is higher than this cost. However, as demonstrated above, such prices
do not express cross subsidies or market power. Price allocation is not
neutral.
As a result, we show that competition policy tools can be modified by
such features of two-sided markets. The most efficient market structure is
not always competition (multihoming). On the contrary, concentrated
markets can be justified since there are strong externalities. Similarly,
mergers are not necessarily detrimental to the industry. The second insight
References
ARMSTRONG M. (2004b): "Competition in two-sided markets", mimeo.
ARMSTRONG M. & WRIGHT J. (2004): "Two-sided markets with multihoming and
exclusive dealing", working paper.
CAILLAUD B. & JULLIEN B.:
- (2003): "Chicken & egg: Competition among intermediation service providers",
Rand Journal of Economics, vol. 34, pp. 309-328.
- (2004): "Two-sided markets and electronic intermediaries", working paper IDEI.
EVANS D.S.:
- (2003): "Some empirical aspects of multi-sided platform industries", Review of
Network Economics, vol. 3, pp. 191-209.
- (2004): "The antitrust economics of two-sided markets", Yale Journal on Regulation,
vol. 2, pp. 325-382.
FERRANDO J., GABSZEWICZ J., LAUSSEL D. & SONNAC N. (2004): "Two-Sided
Network Effects and Competition: An Application to Media Industries", Conference on
"The economics of two-Sided markets", Toulouse, January 23rd-24th.
GABSZEWICZ J., LAUSSEL D. & SONNAC N. (2002): "Network effects in the press
and advertising industries", mimeo, CORE Discussion Paper.
JULLIEN B.:
- (2001): "Competing with network externalities, and price competition", mimeo IDEI
Toulouse.
- (2004): "Two-sided markets and electronic intermediaries", working paper IDEI.
LAFFONT J.J. & TIROLE J. (2000): "Competition in telecommunications", MIT Press,
Cambridge.
LAFFONT J.J., MARCUS S., REY P. & TIROLE J. (2003): "Internet Interconnection
and the off-net cost pricing principle", Rand Journal of Economics, vol. 34, pp. 370-390.
REISINGER M. (2003): "Two sided markets with negative externalities", Mimeo.
ROCHET J.C. & TIROLE J.:
- (2004): "Two-sided markets: an overview", Mimeo.
- (2003): "Platform competition in two-sided markets", Journal of the European
Economic Association, vol. 1, pp. 990-1029.
ROSON R. (2004): "Two-sided Markets", Mimeo.
WRIGHT J. (2004): "One-sided logic in two-sided markets", Review of Network
Economics, vol. 3, pp. 42-63.
Abstract: Some retail payment systems can be modelled as two-sided markets, where a
payment system facilitates money exchanges between consumers on one side and
merchants on the other. The system sets rules and standards, to ensure usage and
acceptance of its payment instruments by consumers and merchants respectively.
Some retail payment systems exhibit indirect network externalities, which is one of the
main criteria used to define two-sided markets. As more consumers use the payment
platform, more merchants are encouraged to join it. Conversely, the value of holding
payment instruments increases with the number of merchants accepting them. The theory
of two-sided markets contributes to a better understanding of these retail payment
systems, by showing that an asymmetric allocation of costs is needed to maximise the
volume of transactions. It also starts to offer results that could explain competition
between payment platforms.
However, this theory entails some limits to a thorough understanding of retail payment
systems. Firstly, we show that some retail payment systems, such as credit transfer or
direct debit systems, do not necessarily fulfil all the theoretical criteria used to define
two-sided markets. Moreover, this theory does not take into account specific features of the
payment industry, such as risk management or fraud prevention. This leads us to propose
new research directions.
Key words: payment systems, two-sided markets, platform competition, payment cards.
(*) I wish to thank "le Groupement des cartes bancaires" CB for its helpful support.
1 Source: www.silicon.fr, Thursday, December 9th 2004.
2 Study conducted by McKinsey in 2005, cited by the European Commission in its directive
proposal for payment services in the internal market. Directive COM(2005)603.
38
3 The rules specify which payment instruments are accepted by the system, the characteristics
of acceptance points, risk management, the clearing mechanism and the processing of funds
transfers.
4 For further information about the directive proposal on a "New Legal Framework" for payment
services, see: http://europa.eu.int/comm/internal_market/payments/framework/index_en.htm
5 DAVID (1985), KATZ & SHAPIRO (1985), FARRELL & SALONER (1985), et alii.
6 The value of a payment system increases with the number of its users. Network economics
also deal with a number of essential issues for payment systems, such as standard setting,
compatibility among service providers, and the role of an installed base of network facilities.
M. VERDIER
The purpose of this article is to underline that some private retail payment
systems fit in well with the theory of two-sided markets. Our analysis goes
beyond payment card systems. Our aim is also to highlight the limits of this
theory in its analysis of the payment industry, due to its failure to take into
account some of its peculiarities. The paper begins by discussing the two
hypotheses provided by ROCHET & TIROLE (2004) to characterise two-sided markets, namely the presence of indirect network externalities and the
impact of price structure on transaction volume. We show that, unlike
wholesale payment systems, retail payment systems fit in well with the first
hypothesis, because they act as intermediaries between two distinct groups
of users, consumers on the one hand, and merchants on the other. We
subsequently draw a distinction between closed-loop and open-loop
payment systems, which is necessary to discuss the second hypothesis.
This typology enables us to show that two-sided market theory contributes to
a better understanding of the asymmetric prices chosen by payment
platforms. Meanwhile, we point out that it is less obvious that direct
debit and credit transfer systems can be defined as two-sided markets. This is followed by a
discussion of the results provided by previous research on platform
competition. We show that it is difficult to apply these results to competition
between payment systems because the models do not take platform
differentiation sufficiently into account. Finally, we try to propose some
research perspectives. Indeed, the theory of two-sided markets could be
developed to account for specific features of the payment industry.
two distinct groups of users (say group B for buyers and S for sellers) 7. This
platform chooses its prices (denoted a_B and a_S respectively) so as to attract
the two groups of agents in the market, and in order to internalise the
indirect network externalities that each group causes to the other. Indeed,
the number of agents from a given group willing to trade on the platform
depends on the number of agents on the other side of the market. The
presence of indirect network externalities between two distinct groups of
users constitutes a first criterion for defining two-sided markets.
However, ROCHET & TIROLE (2004) consider that the first criterion is
not sufficient to conclude that a market is two-sided. They suggest a more
precise definition, whereby the transaction volume depends not only on the
total price a_B + a_S, but also on the price structure (a_B, a_S). For instance, the
transaction volume should be sensitive to a small reduction in the price paid
by one group of users, if the aggregate price level remains constant.
According to Rochet and Tirole, the failure of the Coase theorem is the key
feature that links transaction volume to price structure. In other words, end-users should not be able to pass interaction costs on from one side to the
other, and bargain to internalise indirect network externalities. This situation
may arise when transaction costs are high or when the platform sets up
rules that prevent end-users from bargaining 8.
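As a toy illustration of this definition (the demand functions below are invented for the example, not taken from Rochet and Tirole): with the total price held constant, the transaction volume still reacts to the price structure:

```python
# Invented illustrative quasi-demands (not from the paper): buyers are more
# price-sensitive than sellers, so shifting cost toward sellers raises volume.
def volume(a_B, a_S):
    D_B = max(0.0, 1.0 - a_B)        # buyers' participation
    D_S = max(0.0, 1.0 - 0.5 * a_S)  # sellers' participation
    return D_B * D_S                 # a transaction needs both sides

print(round(volume(0.6, 0.4), 2))    # total price 1.0 -> volume 0.32
print(round(volume(0.4, 0.6), 2))    # same total, new structure -> 0.42
```

If the Coase theorem held, the two splits would yield the same volume; here the failure of side payments makes the structure, not just the level, matter.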
9 Two banks can play different roles during the settlement of a transaction, but these roles may
be switched during the following deal.
10 Also referred to as "three-party" systems.
11 In many countries, gift checks are not legally considered as payment instruments.
[Diagram: closed-loop ("three-party") system - the payment platform itself charges the buyer (B) a price a_B = p_B and the seller (S) a price a_S = p_S for the purchase of a good.]
[Diagram: open-loop system - the consumer (B) pays a price p_B to his bank (I), the "issuer"; the merchant (S) pays a commission p_S to his bank (A), the "acquirer"; the two banks settle the purchase through the payment system.]
12 For instance, if retail-banking markets are perfectly competitive, banks' costs are completely
passed on to consumers and merchants.
                           Consumer, Bank I,     Merchant, Bank A,
                           "issuer"              "Acquirer" (*)
Price of a transaction     c_I                   c_A
Benefit                    b_B                   b_S
Baxter noticed that each user will be willing to proceed with a transaction if,
and only if, the benefit of that transaction exceeds its price, which is equal to
the bank's marginal cost under perfect competition. Baxter assumes that the
merchant cannot discriminate according to the payment instrument 14.
Therefore, a consumer will be able to use his payment card if at the same
13 For more details about the launching of the payment card in France, see: "La carte bleue: la
petite carte qui change la vie", Patricia Kapferer and Tristan Gaston-Breton, Éditions Le Cherche
Midi. The payment guarantee was a good way of competing with cheques, which were not
guaranteed.
14 Otherwise, there is no externality associated with card usage, because the merchant can
always charge a higher price for this instrument. This rule is called the "Non Discrimination Rule"
(NDR).
b_S + b_B ≥ c_A + c_I.
16 Several factors can account for this negative externality. Merchants' resistance to card
acceptance can be high, or there may be an imbalance between issuers' and acquirers' costs,
generating a higher price on one side of the market.
17 Cheque payments are not guaranteed in France for payments exceeding EUR 15.
18 This analysis is based on the French direct debit and credit transfer systems. Systems in
Germany are very different. For further information, please refer to the study conducted by
Bogaert & Vandemeulebrooke at:
http://europa.eu.int/comm/internal_market/payments/directdebit/index_en.htm.
At the same time, one could argue that there are indirect membership externalities between
banks in these payment systems.
19 This does not mean that both banks in direct debit and credit transfer systems offer the same
services for the settlement of a transaction.
Closed-loop systems using a linear tariff fall perfectly into line with the
theoretical framework built by ROCHET & TIROLE (2003b) to analyse
platform pricing. To begin with, they assume that a monopoly platform
chooses its prices a_B and a_S for buyers and sellers, respectively, to
maximise its profits. They show that two conditions must be satisfied to
achieve an optimal outcome:
(a_T - c) / a_T = 1 / η

where a_T = a_B + a_S is the total price, c represents the platform's marginal
cost and η the sum of merchants' and consumers' demand elasticities.
The price structure must reflect the ratio of the elasticity of consumers'
and merchants' demands: a_B / a_S = η_B / η_S 20.
S
20 If the platform chooses its prices to maximise the social surplus, the price structure also
reflects the difference between the average surplus generated on each side of the market (see
Rochet and Tirole 2003 for further information).
21 In reality, payment card platforms also charge fixed membership fees, but this does not
modify the results obtained by Rochet and Tirole substantially. The platform uses per-interaction
prices
p B and p S which take into account usage pricing and fixed costs, which are incurred
1949 Diners Club was deriving over three quarters of its revenues from
merchants. Initially, credit cards were even given away to consumers to
encourage them to participate in the system and solve the chicken-and-egg
problem. At the same time, merchants were ready to pay more for
membership to attract buyers they perceived as valuable. This asymmetric
pricing is not specific to card payment systems. Gift vouchers, for example,
are often given away to consumers, while merchants must pay a
commission to the platform on acceptance 22. These examples show that
two-sided market theory provides us with a strong framework for explaining
asymmetric pricing on payment platforms.
Case of open-loop systems
"m" paid by merchants. This linear pricing studied in literature on the topic is
a good representation of a system like Visa. Indeed, the merchant's bank
pays the consumer's bank a fixed percentage per transaction, which
corresponds exactly to interchange fees as defined by literature on this
subject. However, other systems have chosen to implement more complex
pricing methods. The French payment card system "CB", for example, chose
to use a two-part tariff, which involves a fixed multilateral part, and a variable
bilateral part 25. This example suggests that the theoretical results shown by
the literature rely heavily on the modelling choice. Indeed, in all articles, the
interchange fee is modelled using a linear and multilateral tariff. In reality,
the definition of interchange fees varies a lot across countries and payment
card systems. The reader will find useful information in the comparative
analysis carried out by WEINER & WRIGHT (2005).
Interchange fees and externalities
Baxter's basic model shows that an appropriate choice of interchange fee
enables the platform to internalise the fundamental externality described
above. Let us look once again at the benefits and costs of an interaction for
each user.
            Benefit from a transaction   Cost of a transaction
Consumer    b_B                          c_I - a
Merchant    b_S                          c_A + a
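A small numerical sketch of this internalisation logic, in the spirit of Baxter's argument (all figures below are invented for the example):

```python
# Invented numbers: per-transaction benefits and bank marginal costs.
b_B, b_S = 0.3, 1.2     # consumer's and merchant's benefit
c_I, c_A = 0.6, 0.5     # issuer's and acquirer's marginal cost

# The transaction is jointly efficient...
print(b_B + b_S >= c_I + c_A)          # True
# ...but without an interchange fee the consumer refuses (price c_I > b_B):
print(b_B >= c_I)                      # False

a = 0.4                                # interchange fee, acquirer pays issuer
# Under perfect bank competition, prices become c_I - a for the consumer
# and c_A + a for the merchant:
print(b_B >= c_I - a, b_S >= c_A + a)  # True True: both sides now accept
```

An interchange fee in the right range moves cost toward the side with the larger benefit, so that every jointly efficient transaction takes place.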
25 The variable part varies across the pairs of banks (I, A). The first element of the variable part
depends on the transaction volume, and the second on another bilateral part, which is
calculated according to the relative number of cards from each bank used fraudulently.
26 The interchange fee in this model is either positive or negative. The hypothesis of a positive
interchange fee is equivalent to the assumption of a negative usage externality caused by the
consumer to the merchant.
their work to cover this issue 27. In fact, this literature mainly considers two
questions. Firstly, under what conditions does the interchange fee chosen by
the platform impact the transaction volume generated by end-users?
Secondly, if the interchange fee is not neutral, is the interchange fee chosen
by the platform socially optimal? The reader can refer to ROCHET's review
of the literature for further details (2003), and to WEINER & WRIGHT (2005)
for a cross-country analysis.
27 ROCHET & TIROLE, SCHMALENSEE, WRIGHT, and GANS & KING model different
sorts of competition between banks and between merchants on retail markets, and consider
heterogeneous consumers, differentiated merchants, etc.
a_B + a_S - c = a_B / η_0B = a_S / (η_S / σ)

On the buyers' side, demand elasticity is replaced with η_0B, the "own-brand"
elasticity (demand elasticity of buyers who choose platform i when
the seller offers transactions on both platforms 30). On the sellers' side, demand
elasticity is multiplied by the inverse of σ, the singlehoming index.
The index σ, which can also be seen as a loyalty index, measures the proportion of
consumers that stop trading when their favourite platform is no longer
available. In reality, is there a lot of multihoming for payment systems?
Empirical results from Marc RYSMAN's work (2004) show that consumers
often hold several payment cards, but tend to use a single platform: over
75% of the participants surveyed in his study put over 87% of their monthly
card spending on the same card. However, some consumers switch to another
platform periodically. Rysman computes a transition matrix, which provides a
good estimation of the loyalty index 31. The indices computed are relatively high
(from 73.1% for Amex to 85.4% for Visa) 32.
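The loyalty index derived from a transition matrix can be illustrated with a minimal sketch. The matrix entries below are invented for illustration (only the 73.1% and 85.4% figures quoted in the text come from Rysman's study); the index of a platform is simply the diagonal entry of its row:

```python
# Toy transition matrix over "favourite network this month -> favourite network
# next month". Rows sum to one; the diagonal entry is the loyalty index, i.e.
# the probability that the favourite platform stays the favourite.

networks = ["Visa", "MasterCard", "Amex"]
transition = [
    [0.854, 0.096, 0.050],   # row: favourite this month was Visa
    [0.120, 0.800, 0.080],   # row: favourite this month was MasterCard
    [0.169, 0.100, 0.731],   # row: favourite this month was Amex
]

def loyalty_index(i):
    """Probability that platform i remains the favourite next month."""
    assert abs(sum(transition[i]) - 1.0) < 1e-9    # valid probability row
    return transition[i][i]

assert loyalty_index(networks.index("Visa")) == 0.854
assert loyalty_index(networks.index("Amex")) == 0.731
```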
How are Rochet and Tirole's results modified if competition takes place
between open-loop platforms? The latter assume constant margins for
banks competing in retail markets, denoted by m_B and m_S respectively, with
m = m_B + m_S 33. The prices charged by competing platforms at a symmetric
equilibrium are characterised by the following equations:

p_B + p_S = c + m
p_B / p_S = σ η0_B / η_S
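This symmetric equilibrium with constant bank margins can be sketched as follows: the total price equals cost plus the total margin m, and the price structure is tilted by the own-brand buyer elasticity, the seller elasticity and the singlehoming index (all numerical values are illustrative):

```python
# Hedged sketch of the symmetric-equilibrium price structure under the
# constant-margin hypothesis m = m_B + m_S. Parameter values are invented.

def equilibrium_prices(c, m, eta0_B, eta_S, sigma):
    total = c + m                      # p_B + p_S = c + m
    ratio = sigma * eta0_B / eta_S     # p_B / p_S
    p_S = total / (1.0 + ratio)
    p_B = total - p_S
    return p_B, p_S

p_B, p_S = equilibrium_prices(c=1.0, m=0.2, eta0_B=2.0, eta_S=1.0, sigma=0.8)
assert abs(p_B + p_S - 1.2) < 1e-9                  # total price covers c + m
assert abs(p_B / p_S - 0.8 * 2.0 / 1.0) < 1e-9      # structure set by elasticities
```

With heavy multihoming (low σ), the seller side becomes effectively more elastic and the structure tilts prices away from buyers.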
31 For example, if a consumer mostly used the Visa network in a given month, what would be
the probability that the Visa network would be his/her favourite network again the following
month?
32 ARMSTRONG & WRIGHT analyse the role of exclusive contracts that prevent multihoming
in platform competition.
33 According to this hypothesis, maximisation of profits and volumes are equivalent for the
platform.
34 They consider strategic merchants with no surcharges and perfect competition between
identical platforms. Like ROCHET & TIROLE (2002 and 2003), they also assume constant
margins for banks on each group of users.
35 However, Guthrie and Wright's hypotheses do not offer a clear description of the situation
observed in reality.
36 Diversified financials. Industry Overview "Attacking the death star", April 15th, 2004.
37 In practice, it is extremely difficult to verify whether merchants respect this rule for payments
at the point of sale.
38 Banks' margins are not fixed. As we saw previously, maximisation of profits and volumes are
not equivalent for the platform.
The first natural criticism of the two-sided market theory pertains to the
lack of empirical research to quantify indirect network externalities between
consumers and merchants. In order to estimate membership and usage
externalities in payment card systems, for instance, one should first use data
to estimate demand on both sides of the market. This would be difficult to
achieve, since most merchants are already equipped with terminals to
accept cards in the majority of developed countries. When they are affiliated
with a system, merchants are generally not allowed to turn down cards
because of the "honour-all-cards" rule. It would consequently be impossible
to estimate the negative usage externality that merchants would be likely to
cause to consumers. It would also be rather difficult to derive a demand
function for cards on the consumer side, because prices vary significantly
from bank to bank, according to the bundle of services sold with the card.
The appropriate theoretical framework from the literature on two-sided
markets should consequently be chosen to estimate the links between
transaction volumes and price structure. This would also prove difficult to
estimate for the payment card industry, since consumers usually pay yearly
or quarterly membership fees, while merchants are charged per transaction.
Compared to other two-sided markets, like the media industry, it seems
more difficult to gather the appropriate data and develop a theoretical
framework to analyse the payment card industry 39.
The lack of specific elements from the payments industry
As far as theory is concerned, the models developed by the literature on
two-sided markets do not take into account key aspects of the payment
industry such as risk management or fraud prevention. Payment systems
39 Readers interested in empirical analysis of other two-sided markets can refer to KAISER &
WRIGHT (2006) for the media industry and RYSMAN (2004) for the business directory market.
often choose more complex pricing methods to provide incentives for their
members to invest in security or in fraud detection programs, as already
mentioned for the French Carte Bleue system. For the moment, the quality
of payment services (which can depend on different elements such as
security or payments guarantee) is absent from the analysis of payment
systems. However, current evolutions in the European legal framework
(NLF) should encourage economists to analyse other aspects of payment
systems, such as the impact of risk management on access and usage
pricing. Let us consider another example. If a payment system gives access
to two different types of users, a mobile network operator that is a simple
"payment institution" and a credit institution, which pricing method should it
implement? Both players are subject to different regulatory constraints in the
New Legal Framework. However, an incident caused by an erroneous risk
management strategy could eventually affect all members of the system in
the same way. It would also be interesting to take the quality and risk
management issues into account when studying platform competition. The
following subsection provides some research perspectives for competition
between payment platforms.
useful to model the entry of newcomers in the payment industry. This issue
is all the more important, since new players, like retailers or mobile network
operators, have shown their willingness to participate in the payment
industry, and to offer alternative payment technologies. Under what
conditions will these newcomers compete with existing payment systems?
Will competition between payment systems involve some infrastructure
sharing, differentiation or complementarity of payment services?
CHAKRAVORTI & ROSON (2005) already started to analyse the impact
of payment system differentiation on platform competition. It would be
interesting to develop their study to find analytical results 40. Meanwhile, it
seems very important to lift the constant margin hypothesis to analyse
platform competition 41. Are the results obtained by Chakravorti and Roson
sensitive to the assumption of uniform distributions of benefits for consumers
and merchants? It would also be interesting to see how these conclusions
may evolve with strategic merchants. This would confirm whether Guthrie
and Wright's results are related to this specific hypothesis and enable us to
compare both models. At the same time, it may be useful to study platform
system differentiation through the prism of unbundling. Will payment
systems benefit from the unbundling of essential functions, such as clearing,
to better differentiate themselves from other services? When the transaction
chain is unbundled, how do payment systems manage their complementarity?
What is the impact of unbundling on the risks borne by each payment system?
Competition between closed-loop and open-loop systems
40 CHAKRAVORTI & ROSON only give numerical simulations for the results of competition
between differentiated platforms.
41 These authors work on the hypothesis that banks' margins are constant. Consequently, as
we saw previously, maximisation of volume and profits for the platform are identical.
42 To our knowledge, the only paper on this subject was written by MANENTI & SOMMA
(2002).
proprietary network. This issue seems all the more important nowadays
since open-loop systems are increasingly subject to regulatory pressure,
forcing them to decrease their interchange fees. Is it possible for open-loop
systems to decrease their interchange fees while facing competition from
closed-loop systems? And is it socially optimal to set up an asymmetric
regulation of interchange fees as the Australian regulator has done?
New research perspectives inspired by the single European payments area
The creation of a single European payments area and the future of the
various national payments systems open interesting research perspectives.
For instance, the literature on two-sided markets does not yet address the
incentives that could encourage two payment systems to merge. In fact, the
single European payments area will certainly encourage national systems to
seek economies of scale. Mergers between payment systems are not the
only scenario to consider. One could also imagine that the national systems
would cooperate so as to accept payment instruments issued by other
platforms. National payment systems can also decide to use common
supports for payment instruments that can be used in several different
systems. For instance, the cards issued in the French system CB are
cobranded with the Visa or the MasterCard logo, which means that they are
accepted by these networks when French cardholders use them abroad. Under what
conditions and rules will the payment systems be able to use common
instruments 43? What is the impact of cobranding alliances on competition?
Conclusion
The literature on two-sided markets sheds light on the way some retail
payment systems work and interact. The essential contribution of this branch
of literature is to show that asymmetric user pricing is needed to optimise the
volume of transactions that are routed through the platform. Consumers and
merchants are charged prices by payment systems that do not reflect the
cost of serving them. Nevertheless, we show that the two-sided market
approach does not enable us to cover all the features of the payments
industry. Moreover, while card payments have been widely analysed, we still
lack results on other payment instruments such as cheques. Furthermore,
risk management and fraud prevention deserve more attention. One way of
tackling those issues may be to consider the quality of the service provided
by the platform. Finally, the literature on platform competition has not yet
dealt with the issue of cooperation or mergers between payment systems.
Our view is that a better understanding of retail payment systems is
needed to ensure an appropriate regulation of these markets. For instance,
we do not yet know which definitive rules will be adopted in the European
directive to define European payment instruments. However, we think that
the contributions of the two-sided market theory should not be neglected.
For example, payment cards and direct debits do not obey the same logic
and should not be dealt with in the same way. We also believe that fraud
prevention is a key issue, which should inform reflections on the various
statuses that the Commission intends to define for payment service
providers.
Moreover, the emergence of new payment instruments and technological
evolutions, such as contactless payments, will perhaps provide us with some
data to empirically test the hypotheses of two-sided market theory.
References
ARMSTRONG M. (2005): "Competition in Two-Sided Markets", working paper, May.
ARMSTRONG M. & WRIGHT J. (2004): "Two-Sided Markets, Competitive
Bottlenecks and Exclusive Contracts", working paper, November.
BAUMOL W. (1952): "The Transactions Demand for Cash", Quarterly Journal of
Economics, vol. 67, no. 4, pp. 545-556.
BAXTER, W. (1983): "Bank Interchange of Transactional Paper: Legal and Economic
Perspectives", Journal of Law & Economics, vol. 26, no. 3, October, pp. 541-588.
BORDES CH., HAUTCOEUR P-C., LACOUE-LABARTHE D. & MISHKIN F. (2005):
"The economics of money, banking, and financial markets", Pearson education.
CHAKRAVORTI S. (2003): "Theory of Credit Card Networks: A survey of the
literature", Review of Network Economics, vol. 2, no. 2, June, pp. 50-68.
CHAKRAVORTI S. & ROSON, R. (2005): "Platform competition in Two-Sided
Markets: The Case of Payment Networks", working paper, May.
DAVID P.A. (1985): "Clio and the economics of QWERTY", American Economic
Review, vol. 75, no. 2, pp. 332-337.
EVANS D. (2003): "Some Empirical Aspects of Multi-sided Platform Industries",
Review of Network Economics, vol. 2, issue 3, September, pp. 191-209.
FARRELL J. & SALONER G. (1985): "Standardization, Compatibility and Innovation",
RAND Journal of Economics, vol. 16, pp. 70-83.
GANS Joshua & KING Stephen (2003): "The Neutrality of Interchange Fees in
Payment Systems", Topics in Economic Analysis & Policy, Berkeley Electronic Press,
vol. 3, no. 1, pp. 1069-1069.
GASTON-BRETON Tristan & KAPFERER Patricia (2004): Carte bleue: la petite carte
qui change la vie, Éditions le cherche-midi.
GUTHRIE G. & WRIGHT J. (2003): "Competing Payment Schemes", working paper
no. 245, Department of Economics, University of Auckland.
HUNT R. (2003): "An introduction to the Economics of Payment Card Networks",
Review of Network Economics, vol. 2, no. 2, June, pp. 80-96.
KAISER U. & WRIGHT J. (2006): "Price structure in two-sided markets: Evidence
from the magazine industry", International Journal of Industrial Organization, vol. 24,
pp. 1-28.
KATZ M. & SHAPIRO C. (1985): "Network Externalities, Competition and
Compatibility", American Economic Review, vol. 75 (3), pp. 424-440.
MANENTI F. & SOMMA E. (2002): "Plastic Clashes: Competition among Closed and
Open Systems in the Credit Card Industry", working paper.
RYSMAN M.:
- (2004): "An empirical analysis of Payment Card Usage", working paper.
- (2004): "Competition between networks: a study of the market for yellow pages",
Review of Economic Studies, vol. 71(2), pp. 483-512.
ROCHET J-C. (2003): "The Theory of Interchange Fees: A Synthesis of Recent
Contributions", Review of Network Economics, vol. 2, no. 2, June, pp. 97-124.
ROCHET J-C. & TIROLE J.:
- (2002): "Cooperation Among Competitors: The Economics of Payment Card
Associations", RAND Journal of Economics, vol. 33, no. 4, winter, pp. 549-570.
- (2003a): "An Economic Analysis of the Determination of Interchange Fees in
Payment Card Systems", Review of Network Economics, vol. 2, no. 2, June, pp. 6979.
- (2003b): "Platform Competition in Two-Sided Markets", Journal of the
European Economic Association, vol. 1, no. 4, June, pp. 990-1029.
- (2004): "Two-sided markets: an overview", IDEI-CEPR conference.
ROSON R. (2005): "Two-Sided Markets: A Tentative Survey", Review of Network
Economics, vol. 4, issue 2, June, pp. 142-160.
SCHMALENSEE R. (2002): "Payment systems and Interchange Fees", Journal of
Industrial Economics, vol. 50, no. 2 (June), pp. 103-122.
SCHIFF A. (2003): "Open and Closed systems of Two-sided Networks", Information
Economics and Policy, vol. 15, pp. 425-442.
TOBIN J. (1956): "The Interest Elasticity of the Transactions Demand for Cash",
Review of Economics and Statistics, vol. 38, no. 3, pp. 241-247.
WEINER S.E. & WRIGHT J. (2005): "Interchange Fees in Various Countries:
Developments and Determinants", working paper 05-01, Federal Reserve Bank of
Kansas City, September.
WRIGHT Julian:
- (2002): "Optimal Payment Card Systems," European Economic Review, vol. 47, no.
4, August, pp. 587-612.
- (2004): "Determinants of Optimal Interchange Fees in Payment Systems", Journal
of Industrial Economics, vol. 52, no. 1, March, pp. 1-26.
(*) Parts of this paper are based on work carried out by the author for the European
Commission. The comments from an anonymous referee are gratefully acknowledged. The
opinions expressed in this paper are the sole responsibility of the author.
1 This paper deals with mobile telephony only, although most of its arguments are also valid
more generally, including for a deregulated fixed telephony sector. I prefer to stick to the mobile
case to avoid crucial factors specific to fixed telephony, such as extremely large incumbency
advantages, public ownership or universal service obligations.
T. VALLETTI
calls on the side of the receiver! If one then applies the SSNIP test to this
market, the exercise looks less straightforward. Which price should one
increase? And who pays for it? The response of a customer to an increase
in the price of termination, and therefore the profitability of the (hypothetical)
firm that initiates it, will differ depending on whether the party that bears its
cost is the receiver or the sender.
A less formal market definition would at this stage consider the whole
economic environment, starting from the fact that customers do not demand
calls per se; rather, they want to communicate, for example to exchange
information. Calls sent and received are just inputs in this exchange of
information. According to this view, a mobile operator is a provider of a
"platform" that allows the exchange of communications between these two
different sides, the senders and the receivers. In this sense, a mobile firm
should be analysed in the context of the "two-sided markets" framework,
which has recently received much attention both in academic literature and
in court cases.
Two-sided platforms
The term "two-sided platforms" (2SPs) refers to products and services that
must be used by two (or more) groups of customers to be of value to them.
The "platform" enables interactions between the different "sides", trying to
get the two sides "on board", and charging each side.
2SPs are the subject of a recent body of academic literature in economics
that usually refers to them as "two-sided markets" 2. Since the term "market"
is used in a different way for the purposes of antitrust policy, this paper
adopts the more neutral 2SP terminology 3. There is no unequivocal
definition of 2SPs in existing literature. Rochet and Tirole (2003) proposed
the following definition: "A market is two-sided if the platform can affect the
volume of transactions by charging one side of the market more and
reducing the price on the other side by an equal amount; in other words, the
price structure matters".
2 See ROCHET & TIROLE (2003), EVANS (2003), WRIGHT (2004), ARMSTRONG (2006).
3 See EVANS & NOEL (2005).
various risks between the entity that services the cardholder and the entity
that services the merchant.
- Software platforms such as PCs, video games and music players. The
two sides here are represented by users who want to run software
applications and developers who write applications and sell them to users.
Are 2SPs relevant for telephony? Clearly, any network operator is a
multi-product firm. However, the mere fact that multiple product or "cluster"
markets are involved does not imply that a 2SP is implicated. If the various
products are bought and consumed by the same customer, there is no 2SP
involved since there are no inter-group network externalities. Therefore,
services such as access and call origination can be analysed, to a large
degree, with standard antitrust tools that do not need to be extended to the
analysis of 2SPs.
There are situations where 2SPs can be applied to telephony too. An
important case in point is call termination. A network operator, in this case,
falls in the category of "exchanges" introduced above, as it allows "senders"
and "receivers" to complete their match, i.e., communicate. There is an
externality involved as senders can communicate more the higher the
number of receivers they can contact, and receivers are likely to benefit from
receiving many calls the larger the number of senders there are 4. More
generally, termination revenues form an integral part of the way an operator
sets prices for both termination and outgoing services. These can be distinct
services, but have close inter-relationships since the demand and price for
one service affects the other.
Although we will analyse call termination markets only in a later section,
we anticipate here that the exercise of market power when setting
termination rates is likely to differ when calls are sent and received "on-net"
(i.e., senders and receivers both subscribe to the same network operator)
and when they are "off-net" (i.e., senders and receivers belong to different
networks).
In the former case, the "platform" is likely to internalise externalities
between the two sides, and the presence of competition limits the ability of
4 It could be argued that mobile users belong to the same group. One should therefore speak of
"intra-group externalities", rather than "inter-group" externalities typical of 2SPs. However,
please note that my description of the problem relies on having "senders" and "receivers", which
represent the two groups that need a platform to conduct an exchange of communications. In
this sense, I would argue that the definition of a 2SP applies to mobile telephony literally.
the network operator to raise termination prices. In the latter case, the
network operator will not internalise the effects on senders when setting the
termination rate and market failure is likely to arise. A specific example of
such market failure is the case of fixed-to-mobile (F2M) calls 5.
5 The theory of two-sided markets received some prominence in the recent case on mobile
termination rates in New Zealand; see NZ Commerce Commission (2005).
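The internalisation argument can be illustrated with a toy model: when senders are the operator's own customers (on-net), their lost surplus enters the operator's objective; for off-net calls the sender is a rival's customer and only termination revenue counts. The functional forms below are invented for illustration, not drawn from the literature:

```python
# Hedged sketch: preferred termination price on-net vs off-net under a toy
# linear call demand. The linear demand and surplus formulas are assumptions.

def call_volume(t):
    return max(0.0, 1.0 - t)           # toy linear demand in the termination price

def sender_surplus(t):
    return 0.5 * call_volume(t) ** 2   # consumer surplus under linear demand

def operator_objective(t, on_net):
    revenue = t * call_volume(t)
    # On-net, the sender is the operator's own customer, so the operator
    # internalises the sender's surplus; off-net, it does not.
    return revenue + (sender_surplus(t) if on_net else 0.0)

grid = [i / 100 for i in range(101)]
t_on = max(grid, key=lambda t: operator_objective(t, on_net=True))
t_off = max(grid, key=lambda t: operator_objective(t, on_net=False))
assert t_off > t_on    # market power bites harder off-net
```

In this toy version the on-net operator prefers termination at cost (here zero), while the off-net operator picks the monopoly price, mirroring the market failure described above.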
particular price that is deemed to be "wrong" (for example, too high) if the
other competitors do not - that would result in losses relative to the rivals.
The threat of fines thus does not work in this context, because no individual
firm can comply. The consequence of this reasoning is that any intervention
has to ensure collective compliance - either by all firms having the same
unilateral incentives at the same time (for example, by setting up a position
in which the authority effectively requires them to "collude"!) or by their
conduct being subject to some exogenous constraint (which is another word
for regulation).
Conclusions on 2SPs
2SPs involve inter-group network externalities and are relevant in many
industries, including telecommunications. As a result of these externalities,
socially-optimal prices in 2SPs typically depend in some intricate way on
price elasticities of demand, inter-group network effects and costs. This is a
complex exercise that can be conducted by taking into account market
realities and avoiding mechanical applications of standard definitions and
tools.
Another result of externalities is that socially-optimal prices in 2SPs,
generally, are not purely cost-based. By understanding the nature of the
problem, it is therefore easy to avoid possible fallacies. For instance,
incremental cost pricing is typically not efficient with 2SPs. High individual
mark-ups may also not indicate standard market power. A more balanced
pricing structure (interpreted as prices being more in line with costs) is not
necessarily produced by fiercer competition. Moreover, the removal of
alleged cross-subsidies, such as decreasing one price (A) and increasing
another price (B), does not necessarily benefit the side (A) that pays a price
above cost. This is because, by increasing the other price (B), some B users
may drop off, thus making the product less valuable to A users as well.
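A toy numerical version of this cross-subsidy fallacy (all values are illustrative): cutting side A's above-cost price while raising side B's can leave A worse off, because B participation falls and with it the platform's value to A:

```python
# Hedged sketch of the cross-subsidy argument: A's surplus depends on B's
# participation. Linear participation and unit valuation are assumptions.

def a_side_surplus(p_A, p_B):
    n_B = max(0.0, 1.0 - p_B)      # B participation falls in B's price
    gross_value_A = 1.0 * n_B      # A values the platform through B's presence
    return gross_value_A - p_A

before = a_side_surplus(p_A=0.6, p_B=0.2)    # A pays "too much", B is subsidised
after = a_side_surplus(p_A=0.4, p_B=0.7)     # prices "rebalanced" toward cost
assert after < before    # A is worse off despite paying a lower price
```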
Firms with the features of a 2SP are correct to stress that these are special
markets, which policy-makers consequently need to treat with care. We
agree with this point and always advocate a full and appropriate
economic analysis of these markets. However, we conclude by recalling
that, even if a two-sided market is assumed to be perfectly competitive, the
market does not work efficiently. This is in stark contrast with standard one-sided
markets: when these markets are competitive, they are also efficient and no
regulator should interfere with their working. In two-sided markets, on the
other hand, privately chosen prices, even when ideally set by competing
firms, will differ from socially-optimal prices. An appropriate intervention can
increase consumer and social welfare. 2SPs should therefore be subject to
more, rather than less regulatory oversight.
Incoming calls
Mobile customers want to receive calls. Under the CPP system, these
calls are initiated and paid for by other customers. Given this peculiar feature,
the exercise of market definition should be conducted looking at the
behaviour of both the sender and the receiver. Let us start with the sender
first. The sender has a demand for calls to a particular person owning a
mobile phone. Calls to mobile phones do not have strong demand
substitutes, as senders typically are willing to pay a premium if they need to
contact a person without knowing her exact location. If the price of a call to a
mobile network goes up, a caller would probably reduce the number and/or
length of calls, according to her demand elasticity, but it is very unlikely that
the caller can find good alternative substitutes. A call is typically placed to a
mobile user when the caller wants to be sure to contact and interact in real
time with the called party, for which there is no effective substitute. The
sender therefore has very limited ability to find substitutes if the price of calls
to mobile goes up because of a price increase initiated by the mobile
operator that terminates the call 8.
The behaviour of senders therefore does not impose any limit on the ability
of the mobile firm to increase the price of incoming calls. However, this
analysis is incomplete since constraints on increases in the price of incoming
calls can also arise if receivers themselves react to an increase in the price
of a call to a mobile. For instance, if the receiver cares about the satisfaction
of the sender, then the price of calls to mobile telephones will be
internalised. The latter case is sometimes referred to as "closed user
groups" and can correspond to families that behave under a single budget
constraint, or some business users who provide different sorts of telephony
services to their employees. These can constitute a large part of the
customer base of a mobile operator. Mobile operators, however, have the
ability to price discriminate among different groups, for instance by offering
discounts to large business users, hence their presence does not seem to
restrict overall price levels for other customers.
8 Continuing with the example presented in box 1, where customer A is the caller and customer
B is the receiver, this price increase could be paid directly by the sender if the price pB for
termination is paid directly by customer A to B's provider at the retail level. If, instead, A's
provider bills customer A and then pays a termination charge to B's provider, the price increase
would be initiated at the wholesale level (tB) and have repercussions at the retail level (pAB). In
this latter case (the most common situation in practice), the demand for B's provider is a derived
(input) demand to be analyzed at the wholesale level. In both cases, however, customer A has
a limited ability to find a substitute means of contacting customer B.
The receiver may still limit the provider's ability to charge others high
prices. In fact, if the price of incoming calls increases, the number of calls
received will decrease, which has a negative effect on the satisfaction of the
receiver, since receiving calls is clearly one of the incentives of subscribing
to a mobile telephone in the first instance. However, this is not necessarily a
disadvantage that receivers can easily perceive or react to. It is
documented by several NRAs (for example, Ofcom) that receivers'
awareness of the price of calls to mobile telephones is low and that the price
of incoming calls is not considered by subscribers to be an important factor
in their choice of mobile operator; other factors are more influential. The
mobile owner cares most about the prices s/he has to pay to subscribe to
and place calls with a mobile operator, but in most cases will not take into
account the prices paid by other callers to contact him/her. In fact, mobile
telephone owners may enjoy a higher level of overall satisfaction if an
increase in the price of incoming calls, despite reducing the number of
incoming calls, induces the mobile operator to decrease other prices directly
paid by subscribers.
When assessing what type of dominant behaviour might arise in the
market for incoming calls, it is useful to distinguish between the following
three types of mobile incoming calls:
- calls to mobile (on-net),
- calls to mobile (off-net),
- calls to mobile (from other non-mobile networks, mostly F2M calls in
practice).
In principle, given that a mobile firm is by definition the only firm that can
terminate calls for its own customers, SMP in the form of single dominance
should arise, no matter what type of call is under consideration. However, as
mentioned repeatedly above, in this market both a sender and a receiver are
involved and their identity cannot be neglected.
In the case of on-net calls to mobile, if the mobile firm tried to increase
the price of the termination end of the call, the sender that would suffer the
price increase would be one of its own customers. An increase in termination
price would make the overall package offered by the firm to its subscribers
less appealing, and the firm would lose customers as a result. Competitive
forces do act as a constraint on the firm's behaviour, hence there is not likely
to be any abuse of market power in this case. In terms of the analogy with
two-sided markets, in this case the mobile firm is a platform that perfectly
"internalises" transactions that only affect its customers.
Contrary to on-net calls, single dominance is likely to exist for the other
two kinds of incoming calls, mobile off-net calls and calls to mobile from
other networks (F2M calls). In these two instances, the sending party that
pays the call is not one of the firm's customers and the firm's receiving
customers would not react to a price increase, which gives the mobile firm
the ability to set the price at monopoly levels. From the point of view of
single dominance, these two types of calls are therefore quite similar.
There is nonetheless one possible important difference between these
two types of incoming calls to mobile from other customers. The difference
lies in the strategic environment. Off-net calls are charged to customers
belonging to a rival mobile network, while there is no strategic interaction
between a mobile firm and a fixed firm, as these are to a large extent
separate markets.
As customers buy mobile phones with the purpose of receiving calls from
other customers, a firm might be tempted to increase its off-net termination
price in order to distort competition in the market. This incentive exists, on
top of the termination monopolisation effect, only for mobile off-net calls. For
instance, a mobile firm could set a high off-net termination charge, so that
the overall off-net price paid by rival customers is high. Customers would be
willing to join a bigger network: on-net calls, to the extent that they are
cheaper than off-net calls, imply that customers would be receiving relatively
more incoming calls.
What we have described so far applies to the price of incoming calls
in general, without distinguishing whether these calls are set at the
"wholesale" level as termination charges or at the "retail" level charging
senders directly (see box 1 again for this analogy). There is, however, a
possible main difference with the "retail" market analysis of incoming calls. If
the sending party was billed directly by the receiving operator, it seems
natural that the termination price is set directly by the receiving network, thus
the sending customer has no bargaining power. Instead, at the wholesale
level, the termination price is more likely to be negotiated between the
sending and the receiving network. Countervailing buyer power (i.e.,
bargaining, negotiations) should therefore be taken into account when
analysing the wholesale market for incoming calls in order to determine the
presence of SMP.
In particular, a bargaining model seems quite appropriate for an analysis
of the market for "off-net" M2M calls, as this is a bilateral problem of "two-way" interconnection, where two wholesale prices have to be negotiated,
one in each direction. One network, when negotiating the wholesale price for
sending calls to the rival network, can always use its own wholesale price for
receiving calls from the rival as an effective threat in the bargaining game. In
this context, there are different sets of results from the literature 9:
- Bilateral wholesale negotiations can get rid of inefficiencies, given the
reciprocal nature of bargaining. This is true particularly for negotiations
among symmetrically-placed networks.
- Bilateral negotiations may be used to affect the intensity of
competition at the retail level. The nature of collusion may be different:
- Collusion may happen in a "static" framework by setting high
termination rates because of a "raise-each-other's-cost" effect 10. This
result holds true only under particular circumstances, namely retail prices
should be linear (which may be applicable to pre-paid cards), while it
does not apply under more sophisticated retail pricing structures (two-part tariffs, for example, monthly rental plus price per minute of usage).
- Collusion may also happen in a more standard "dynamic" framework,
where networks repeatedly interact with each other. The role of
wholesale termination charges may be one of giving a "focal" reference
point to set collusive retail prices. Please note that, in this case, joint
dominance should be established at the retail level, while the wholesale
level may facilitate reaching the collusive agreement.
The applicability of a bargaining model to the determination of the
wholesale price for termination of F2M calls is more controversial 11. In a
bargaining model, two parties have to find a way to divide the surplus
created by finding a deal. This division is influenced by the outside options
that the parties have, i.e., what they could get if they threaten not to strike a
9 See ARMSTRONG (2002), LAFFONT & TIROLE (2000), VOGELSANG (2003), CAMBINI &
VALLETTI (2005).
10 To see this, imagine what happens when operators charge customers collusive (monopoly)
retail prices. If mobile customers call each other with the same probability, the traffic is balanced
and an operator pays the rival the same amount for termination services that it receives from
the rival for similar services, independently of the value taken by the termination charge. This
can be an equilibrium only if no one has a unilateral incentive to deviate. If one firm deviates
from the monopoly retail charges by undercutting the rival, it induces its subscribers to call
more. Since part of the calls made is destined for the rival's network, the effect of a price cut is
to send out more calls to the rival's network than it receives from it. The resulting net outflow of calls
has an associated deficit that is particularly burdensome if the unit termination charge is high.
This will discourage under-pricing in the first place. Some conditions are necessary to produce
this outcome, for instance products should not be too homogeneous, otherwise the incentive to
undercut would have the additional benefit of increasing market share.
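The deterrence arithmetic in footnote 10 can be made concrete with a small numerical sketch. The linear demand function and all prices below are illustrative assumptions, not part of the cited models.

```python
# Illustrative sketch of footnote 10 (assumed linear call demand; all numbers
# are hypothetical): with balanced traffic, a network that undercuts the
# collusive retail price sends out more off-net calls than it receives, and
# pays the termination markup on the difference.

def call_demand(price):
    """Calls per subscriber at a given retail price (assumed linear demand)."""
    return max(0.0, 10.0 - 2.0 * price)

def termination_deficit(p_collusive, p_deviant, markup):
    """Net termination outflow per subscriber for the deviating network.

    Half of all calls terminate off-net; the deviant sends q(p_dev)/2 calls
    off-net but still receives only q(p_col)/2 from the rival's unchanged
    subscribers, so it pays the markup on the difference.
    """
    extra_offnet_calls = (call_demand(p_deviant) - call_demand(p_collusive)) / 2.0
    return markup * extra_offnet_calls

# The deficit from undercutting (price 4 -> 3) grows with the termination
# markup, which is what discourages the deviation in the first place.
for markup in (0.0, 0.5, 1.0):
    print(markup, termination_deficit(4.0, 3.0, markup))
```

As the loop shows, a zero markup leaves the deviant's termination bill unchanged, while a positive markup turns the net outflow of calls into a deficit that rises one-for-one with the markup.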
11 See BINMORE & HARBORD (2005), UK Competition Appeal Tribunal (2005).
deal. The "threat" points are not as natural as in the bilateral negotiation of
termination of M2M calls. In the case of F2M calls, the negotiated price is
only "one way", as the other way (M2F) is typically regulated. This
asymmetric treatment of M2F and F2M calls is a possible source of
distortion that must be noted.
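The role of outside options in dividing the surplus follows the textbook symmetric Nash bargaining rule. The sketch below is a hedged illustration under assumed numbers, not the formal model of the literature cited.

```python
# Symmetric Nash bargaining split (illustrative sketch, not the cited model):
# each party receives its outside option plus half of the remaining gains
# from trade, so a better threat point shifts the division, not the total.

def nash_split(total_surplus, outside_a, outside_b):
    """Divide a surplus between two parties with given outside options."""
    gains = total_surplus - outside_a - outside_b
    if gains < 0:
        raise ValueError("no deal: outside options exceed the total surplus")
    return outside_a + gains / 2.0, outside_b + gains / 2.0

# Equal threat points give an even split; raising one party's threat point
# from 0 to 4 shifts the division in its favour.
print(nash_split(10.0, 0.0, 0.0))  # (5.0, 5.0)
print(nash_split(10.0, 4.0, 0.0))  # (7.0, 3.0)
```

This mirrors the contrast drawn in the text: in M2M negotiation each network's own reciprocal termination price supplies a natural threat point, while in the F2M case, with M2F regulated, no such threat point arises.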
This problem of "bargaining in the shadow of regulation" still has to be
clarified in full. However, some related aspects have received partial
answers. For instance, an argument put forward has been that, to have a
viable business, a small MNO must have an interconnection agreement with
the incumbent fixed-network operator. This argument mixes up incoming
calls and all other services. In fact, as a first cut, the bargaining problem
does not seem to be affected by the size of an MNO. The size of the MNO
affects the total surplus to be bargained over, not its division. This is
because, once MNOs have some subscribers, bargaining might occur over
calls destined to those customers, therefore without substitution possibilities.
As a result, we can conclude that the existence of countervailing buyer
power over the setting of termination prices does not seem more likely for
small MNOs 12.
Conclusions
Practitioners and policy makers should not forget that the role of market
definition is to provide a basis on which regulators or anti-trust authorities
calculate important indicators such as market shares, etc., in making their
prima facie case. However, one should be very careful not to make too much
of market delineations. Market definition is not a substitute for a full analysis
of the likely competitive effects in a certain economic environment under
examination. The task of defining markets should not be confused with the
assessment of competitive effects and efficiencies. In practice, this means
that many subtle interactions that may be missed when defining markets as
a first cut can be taken into account at later stages, for example, when
assessing market power and eventually imposing remedies.
12 In fact, there are theoretical arguments (and some empirical evidence) for supporting the
opposite result: smaller networks charge more for F2M termination than bigger networks in the
presence of consumer ignorance, mobile number portability, or no discrimination requirements
for F2M calls. See GANS & KING (2000) and WRIGHT (2002).
References
ARMSTRONG M.:
- (2002): "The Theory of Access Pricing and Interconnection," in: M. Cave, S.
Majumdar & I. Vogelsang (Eds.), Handbook of Telecommunications Economics,
North Holland, Amsterdam.
- (2006): "Competition in Two-sided markets", RAND Journal of Economics.
BINMORE K. & D. HARBORD (2005): "Bargaining over fixed-to-mobile termination
rates: countervailing buyer power as a constraint on monopoly power", Journal of
Competition Law & Economics.
CAMBINI C. & T. VALLETTI (2005): "Information Exchange and Competition in
Communications Networks", CEPR Discussion Paper.
EVANS D. (2003): "The Antitrust Economics of Multi-Sided Platform Markets", Yale
Journal on Regulation.
EVANS D. & M. NOEL (2005): "Defining Antitrust Markets when Firms Operate Two-sided Platforms", Columbia Business Law Review.
GANS J. & S. KING (2000): "Mobile Competition, Customer Ignorance, and Fixed-to-mobile Call Prices", Information Economics & Policy.
LAFFONT J.J. & J. TIROLE (2000): Competition in Telecommunications, MIT Press,
Cambridge (MA).
ROCHET J.C. & J. TIROLE (2003): "Platform Competition in Two-sided Markets",
Journal of the European Economic Association.
VALLETTI T. & G. HOUPIS (2005): "Mobile Termination: What is the 'Right'
Charge?", Journal of Regulatory Economics.
VOGELSANG I. (2003): "Price Regulation of Access to Telecommunications
Networks", Journal of Economic Literature.
WRIGHT J.:
- (2002): "Access Pricing under Competition: an Application to Cellular Networks",
Journal of Industrial Economics.
- (2004): "One-sided Logic in Two-sided Markets", Review of Network Economics.
Impact of Mobile Usage on the Interpersonal Relations
AeRee KIM & Hitoshi MITOMO

This paper explores the relationship between the calls and text
messages exchanged via mobile telephones owned by young people
in East Asia and their influence on perceptions of friendship with
communication partners. In the last ten years, the number of mobile
telephone users has increased dramatically. Mobile telephones have
become a popular medium of interpersonal communication. The younger
generation in particular depends heavily upon mobile telephone text
messaging, also known as SMS (short message service) and mobile e-mail
to keep in touch with one another, any time and any place. Indeed, the
mobile telephone has become an indispensable communication tool in the
daily lives of the younger generation.
(*) The authors are indebted to Philip Sugai, Miyuki Aoshima and Chiu Chia Hua for their
comments and assistance.
1 Latent variables are conceptual variables that cannot be directly observed, but are rather
inferred from other observable, measurable variables.
Changes in communication
Today's younger generation uses mobile telephones as its primary
platform for communication. Use of such advanced communication
technologies has fostered the development of new types of friendships,
which are created and sustained through electronic connections like mobile
internet connections, SMS, e-mail, etc. Furthermore, mobile telephones
have changed the way friends and acquaintances communicate, as well as
the perceptions of individuals regarding these interpersonal relationships.
Recent research has shown that participating in communication is often
more important than its content (SOUKUP et al., 2001). Some research
suggests that there are indeed significant opportunities for establishing
better and closer relationships among friends through the use of mobile
telephones (VANCLEAR, 1991).
On the other hand, MATSUDA (2001) suggests that the mobile platform
has significantly lowered the quality of communication because "consumer
related communication" and "brief messaging (just to kill time)" have been
identified as the main uses of mobile-related communications. Yet even
though the content of these mobile text messages has been labelled as
trivial or superficial, small talk itself could be considered an enabler of
interpersonal communications (KOPOMAA, 2000).
Methodology
Relevant literature
For this study, mobile usage is defined as voice and text communication
via the mobile telephone. A prior study (TANAKA, 2001) suggests that when
considering usage behaviour, three broad factors are to be considered:
- attributes of the medium, in this case a mobile telephone, including
technological constraints,
- cost,
- interpersonal (including personality, emotional, and social) factors.
2 The covariance structure analysis is a statistical model that describes the overall structure and
relationships among observable variables via some conceptual unobservable variables, i.e.,
latent variables, representative of some relevant observable variables. The analysis is very
flexible and allows endogeneity between variables.
[Figure: conceptual model linking the latent variables Personal Attributes, IT Literacy and Usage of Mobile to Relationship]

Analysis
Survey samples:

          Seoul                     Taipei                    Tokyo
Sample    University Students       University Students       University Students
          593                       436                       406
          Male: 300, Female: 293    Male: 207, Female: 229    Male: 206, Female: 200
Period    Oct. 02 to Nov. 02        May 03 to July 04         Nov. 01 to Apr. 02
Hypotheses
H1: Usage of mobile telephones and text messaging deepens
relationships with friends and acquaintances.
H2: Usage of mobile telephones and text messaging widens
relationships with friends and acquaintances.
[Table: average daily calls and text messages by city; figures not recoverable from extraction]
Number of people:

            Seoul     Taipei    Tokyo
  1 - 10    29%       59.4%     64.5%
  11 - 20   52.8%     22.7%     20.7%
  Over 20   18.2%     17.9%     14.8%
            Seoul     Taipei    Tokyo
  0         5%        59.2%     3%
  1 - 5     15%       14%       84%
  6 - 10    25.7%     6.7%      10%
  Over 10   54.3%     23.1%     3%
  N/A       2.5       2.1       2.8
5 Originally, a total of 22 observable variables were identified, but a factor analysis deemed 7 to
be statistically insignificant. Specifically, for personal attributes, 5 out of 8 observable variables
(clothing expenses, cosmetic expenses, hobbies, lodging, and marital status) were statistically
insignificant; for mobile usage, 1 out of 4 observable variables (cost of sending and receiving
text messages and voice calls) was statistically insignificant; and for IT literacy, 1 out of 4
observable variables (use of game players) was statistically insignificant.
            GFI      AGFI     RMSEA    AIC
  Model 1   0.94     0.93     0.032    844.25

< Criteria >
GFI/AGFI (Goodness of Fit Index / Adjusted Goodness of Fit Index): more than 0.9
RMSEA (Root Mean Square Error of Approximation): less than 0.08
AIC (Akaike Information Criterion): the lower, the better
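The acceptance criteria above can be read as a simple screening rule. In the sketch below the thresholds and the Model 1 values come from the text, while the function name is ours.

```python
# Screening check for the fit criteria quoted above (thresholds from the
# text: GFI/AGFI more than 0.9, RMSEA less than 0.08; AIC is only
# comparative across models, so it is not thresholded here).

def fit_acceptable(gfi, agfi, rmsea):
    """Return True if the model passes the three absolute fit criteria."""
    return gfi > 0.9 and agfi > 0.9 and rmsea < 0.08

# Model 1 as reported: GFI 0.94, AGFI 0.93, RMSEA 0.032 (AIC 844.25).
print(fit_acceptable(0.94, 0.93, 0.032))  # True
```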
Empirical results
Conclusion
References
ANDERSON J.C. & GERBING D.W. (1988): "Structural Equation Modeling in
Practice: A Review and Recommended Two-Step Approach", Psychological Bulletin,
No. 103(3), pp. 411-423.
CLARK T. (2003): "Japan's Generation of Computer Refuseniks", Japan Media
Review, http://www.ojr.org/japan/wireless/1047257047p.php
CUNNINGHAM H. (1998): "Digital Culture: the View from the Dance Floor", in J.
Sefton-Green (Ed.), Digital Diversions: Youth Culture in the Age of Multimedia, UCL Press,
pp. 128-148.
DALY J.A. (1987): "Personality and interpersonal communication: Issues and directions", in J.C. McCroskey & J.A. Daly (Eds.), Personality and interpersonal
communication, pp. 13-41, Beverly Hills, CA: SAGE.
LONGMATE E. & BABER C. (2002): "A Comparison of Text Messaging and Email
Support for Digital Communities: A Case Study", in X. Faulkner, J. Finlay & F.
Detienne (Eds.), People and Computers XVI- Memorable Yet Invisible, Proceedings
of HCI 2002, Springer-Verlag, London.
GREEN S.J. (2000): Digital Diversions Youth Culture in the Age of Multimedia, UCL
Press.
HYERS K. (2003): Mobile Messaging in Japan, Asia and AME: 2003 Through 2007,
Wireless Data Markets.
Institute of Socio-Information and Communication Studies (2001): Information
Behavior 2000 in Japan, University of Tokyo Press (in Japanese), pp. 33-38.
ITO M. & OKABE D. (2003): Paper presented at the conference 'Front Stage - Back
Stage: Mobile Communication and the Renegotiation of the Social Sphere', June 22-24, Grimstad, Norway.
KOPOMAA T. (2000): The City in Your Pocket: Birth of the Mobile Information Society,
Helsinki: Gaudeamus, ISBN 951-662-802-8.
MATSUDA M. (2001): "University Students and the Use of Mobile Telephones", The Study of Socio-Information, Bunkyo University (in Japanese), from:
http://www10.plala.or.jp/misamatsuda/youth-mobile.html
SCHMITZ J. & FULK J. (1991): "Organizational colleagues, media richness, and
electronic mail", Communication Research, 18(4), pp. 487-523.
SOUKUP P.A., BUCKLEY F.J. & ROBINSON D.C. (2001): "The influence of
information technologies on theology", Theological Studies, Vol. 62 (2), pp. 366-373.
Communication, Republic of Korea.
Annex

[Figure: estimated path model linking Personal Attributes, IT Literacy and Usage of Mobile to Relationship]
The values in the box beside each arrow are the coefficients estimated for each metropolitan
area:
Top = Tokyo
Middle = Seoul
Bottom = Taipei
Opinion
Interview with
David EVANS
Vice Chairman of LECG Europe, London
DE: A large part of the ICT market is two-sided, and with the advent of more
sophisticated technology and content capabilities, some markets are now
evolving towards becoming multi-sided.
COMMUNICATIONS & STRATEGIES, no. 61, 1st quarter 2006, p. 97.
Mobile telephony is a two-sided market and, with the introduction of content
and television programming on mobile phones, it is becoming a multi-sided
market. Mobile telephony is two-sided when viewed in several ways. Firstly,
mobile networks bring together callers and receivers. While that may
seem trivial, an examination of pricing strategies shows that it is actually
quite central to the business. Moreover, modern mobile telephones are built
on top of an operating system, which attracts developers to write
applications, such as ring tones, that make the device more attractive to
subscribers. As content continues to find its way onto mobile phones, the
dynamics between operating systems, content providers, advertisers,
handset manufacturers and network operators must be perfectly aligned in
order for subscribers to find the offer attractive, at a price point they find
pleasing, and for the various customer groups to get and stay on board.
Terrestrial or free television is two-sided: it brings together viewers and
advertisers. Pay TV is two-sided as well, even though it may not initially
appear that way. Even though viewers pay for content, advertisers,
sponsors and other third-party revenue sources are often important too,
and the interdependency between advertisers and viewers is becoming very
important.
The information market is obviously huge and by and large is also two-sided.
Whether information is received via newspapers, magazines or web
portals, readers (or web site visitors) and advertisers have an
interdependency that can only be managed by a catalyst.
Search engines, which are information tools, are often used to support two-sided businesses. Site visitors are granted free use of the search tool and
advertisers pay for the privilege of reaching these visitors. Without visitors,
advertisers wouldn't be interested and without advertisers, search engines
would lack a revenue stream to fund their operations. Some of these sites
are giving rise to the phenomenon of "mash-ups": websites or web
applications that seamlessly combine content from more than one
source into an integrated experience for the end user. In this instance, the
interdependency among content sources and web visitors is the dynamic
that must be managed.
C&S: Do you think that the development of ICTs can stimulate the development
of 2SM? For instance, in the last few years, online dating sites, which can be
viewed as 2SM, have been developing fast, thanks to the possibilities offered
by the internet. More generally, it could be argued that ICTs facilitate the
development of platforms.
DE: ICTs have given rise to many new catalysts that have embraced the catalyst concept and created
new businesses operating purely in a virtual world. Nowhere is this more
obvious than in search and other online information portals.
For catalysts that operate in both worlds, physical and virtual, pricing and
product design decisions become interesting and complex. They must
make design and pricing decisions across media and not just within media.
By that I mean that they must decide whether the physical or virtual world
will subsidize the other, instead of being limited to thinking only in terms of
how the customer groups in a single instance must be engaged, for
example, readers and advertisers of a newspaper.
Technology stimulates the development of catalysts in another important
way as well. Many information and communication based catalysts are also
software platforms or operating systems. As the internet makes it easier and
easier for developers to write new applications, the catalyst becomes
increasingly valuable to the end user. This often attracts
more developers who write more applications, which, in turn,
attract more end users, and so on.
It must also be said that in some cases, technology makes it harder for
catalysts to operate profitably. A recent study from the
newspaper industry suggested that one reader of a print version of a
newspaper is worth 100 online readers. This means that for each reader a
physical newspaper loses to its online portal, it must attract 100 online
readers in order to remain profitable. American newspapers have
been losing a significant portion of their value as advertising has migrated to
the internet.
C&S: What do you think about the current state of research on 2SM? In
particular, do you think that a general theory of 2SM could emerge, that would
provide general insights applicable to any particular industry situation? Or do
you think that future research on 2SM has to focus on more specific markets
(software, media etc.)? Besides, to our knowledge, there is not yet any
econometric work on 2SM. What are the specific difficulties of doing
econometrics on 2SM?
DE: There has been some empirical work on how a market disruption affected the pricing structure for payment
systems in Australia. Econometric analysis - particularly modern estimation of
structural models - is difficult because of the complexity and
interdependence in these businesses. My guess is that the results will be
fragile. My own work has focused on detailed case studies concerning the
evolution and operation of these businesses.
C&S: Do you think that the theory of 2SM provides sufficiently robust and non-ambiguous insights for decision makers (from firms, competition or regulatory
authorities)?
DE: Yes and no. Businesses understand the dynamic nature of catalysts
and the fundamental need to get both sides on board. But I think that they
underestimate the difficulty in crafting the business models needed to
sustain it. This is due, in part, to the fact that business schools just don't
provide the tools or the training to think in this two or multi-dimensional
world. Setting prices is one example of this complexity. Pricing models must
be carefully constructed to reflect the impact of interdependencies on
demand and price. Cost-plus and value-based pricing, the standard tools of the
modern business executive, are lethal when applied to catalysts. The failure
to understand the role of pricing subsidies in stimulating demand was largely
the reason for many of the dotcom failures of the late 1990s/early 2000s.
Unfortunately, human nature makes us distrustful of all that is different and
that we don't understand. And as a result, catalysts have had more than
their fair share of run-ins with government authorities, not to mention the
media, which tend to see chicanery in business practices that don't fit
preconceived molds. Microsoft's run-in with antitrust cops in the United
States is a good example of this.
Some economists - and lawyers for public and private entities - suggested
that Microsoft had gotten developers to write thousands of applications for
the Windows operating system to make it harder for others to compete.
What they didn't recognize was that operating systems are catalysts that
serve developers, users, hardware makers, and possibly other communities
as well. All operating systems compete by encouraging application
developers and others to make complementary products that help
consumers. Microsoft may well have crossed the line in other actions that it
took, but encouraging developers to write applications was an essential part
of the catalytic reaction that created value for the many communities this
operating system served.
DE: The role of the catalyst in a two-sided market is to get all customer
groups on board. It does this in a number of ways. Often product design is
essential to creating an attractive, safe and convenient mechanism for
customer groups to take advantage of their mutual interaction. Pricing
strategies are also important, since two-sided markets work because the
catalyst often subsidizes one customer group in order to attract the other.
This fundamental principle does help to explain the organization of the
telecommunications ecosystem. The network operator, as the catalyst, has
an incentive to get content providers interested in its platform so that it
can attract new subscribers who pay not only for that content, but also for
the air-time consumed. However, this does not necessarily imply that operators
must be vertically integrated. In fact, with the right product design, pricing strategy
and business model, catalysts ignite markets without the need to own their
suppliers.
C&S: In 2SM, price structure is fundamental. Do you think that, in some hi-tech
markets, the current structure of prices is inefficient and should be revised to
take into account 2SM effects?
DE: No. I don't believe that economists, at least, have any basis for concluding
that pricing is inefficient. In many high-tech markets, such as internet
television, businesses are struggling to find the right pricing model. It is a
very difficult task; but businesses have strong financial incentives to find a
pricing structure that gets customers on board and creates a profitable
business. Economists, armed with the theory of two-sided markets, can help
guide businesses in this effort.
The most important message is that regulatory and antitrust analysis must
take into account the interdependencies between the multiple sides of the
market. It is analytically unsound to treat particular sides of the market in
isolation. The second most important message is that regulators and
authorities need to be cognizant of the economics involved in creating and
C&S: In a 2SM, do you think that the efficient market structure is a monopoly?
If this is the case, what does this mean for regulators or competition
authorities in sectors like the media and hi-tech markets?
DE: The evidence is overwhelming that hardly any industries based on two-sided markets evolve towards a monopoly. Just look at advertising-supported media, exchanges and matchmaking services, payment systems,
and software platforms. Some - such as Microsoft in personal computers - have tended towards single-firm dominance. Most others have tended
towards oligopoly and some are quite competitive. In practice, product
differentiation and multi-homing work to offset indirect network effects.
C&S: In the computer and software industries, two types of platforms are
competing for gamers and game developers: game consoles on the one hand
and the Windows platform on the other. Do you think that the coexistence of
consoles and PCs will continue or do you think that a convergence of the two
types of platforms is likely in the future?
DE: Gaming platforms are highly specialized and follow a different business
model than personal computer software platforms. I therefore think that
these platforms will remain distinct and that customers, application
developers, and console makers will not find that it makes sense to have
"one size fits all." An interesting question, though, is which - if either - of these
platforms captures home entertainment, including television.
Other theme
C&S: You have an MIT Press book coming out called Invisible Engines on
software platforms as two-sided, and a Harvard Business Press book on
management and strategic aspects. Could you tell us more about these two
books?
DE: Invisible Engines (forthcoming MIT Press Fall 2006) is the story of
software platforms, the technology that powers everything from mobile
phones to search engines, from automobile navigation systems to digital
video recorders and from smart cards to web portals. The book tells the
story of how this malleable and adaptable code - and the products that it
facilitates - is challenging, and perhaps even destroying, many long-established industries. Entrepreneurs and executives should find insights
that they can apply in their own businesses, whether these be in an industry on
the verge of creative destruction brought about by software platforms, or in one
that is just emerging as a result of the opportunities that these invisible
engines can facilitate.
Catalyst Code (forthcoming, Harvard Business School Press, Winter 2007)
describes the business value of catalysts as they build, stimulate and govern
two-sided markets. The book is organized around a practical, yet strategic,
framework for creating a successful catalyst and outlines the steps needed
to create and maintain a profitable catalyst ecosystem. It offers insights from
hundreds of successful catalysts on issues such as product design, pricing
and the structure of business models. Catalyst Code also debunks many
traditional business school theories on pricing, in particular, and illustrates
how the application of conventional management theories spells disaster for
catalysts and two-sided markets.
Articles
Municipal Wi-Fi Networks: The Goals, Practices,
and Policy Implications of the US Case
The EU Regulatory Framework for Electronic
Communications: Relevance and Efficiency
Three Years Later
Modelling Scale and Scope
in the Telecommunications Industry: Problems
in the Analysis of Competition and Innovation
Abstract: This paper explores three broad questions about municipal Wi-Fi networks in
the U.S.: why are cities getting involved, how do they go about deploying these networks,
and what policy issues does this new trend raise? To explain municipal involvement, the
paper points out that cities have both the means to provide relatively inexpensive
deployment and the motives to provide wireless connectivity to city employees, foster the
economic development of communities and offer universal and affordable broadband
services to residents. The paper then explores nine possible business models, ordered
according to two questions: who owns the network and who operates it. Each of the
possible business models is described and its policy implications are discussed. Finally,
the paper addresses the political and legal fight over the right of cities to build these
networks. The authors argue in conclusion that the current municipal Wi-Fi movement
should be allowed to proceed without federal restrictions.
Key words: municipal wireless, internet policy.
In the wake of Wi-Fi's spectacular rise during the past ten years, a new
twist has emerged: the last few years have seen a growing number of
municipal governments deploy Wi-Fi networks, in the U.S. and abroad.
According to VOS (2005), there were 82 municipal Wi-Fi networks in the
U.S. as of July 2005, up from 44 the previous year (VOS, 2004). In addition,
another 35 municipalities are currently planning to deploy such networks
(VOS, 2005). Outside the U.S., over the same period, this number increased
from 40 to 69 (VOS, 2005). Such municipal enthusiasm for deploying and
operating telecommunication networks comes as a surprise given the
prevailing trend of deregulation and privatization in public utilities.
This paper explores the deployment of municipal Wi-Fi networks within
the U.S. context, where the deployment of wireless broadband takes on
(*) This paper was prepared for the First Transatlantic Telecom Forum, IDATE, Montpellier,
November 22, 2005.
1 According to ITU broadband statistics released in January 2005, U.S. broadband penetration
2 The term broadband is commonly used to refer to "data services that are fast, always
available, and capable of supporting advanced applications requiring substantial bandwidth"
(FCC, 2005, p. 11). More specifically, however, it means "an advanced telecommunications
service that has the capability of supporting, in both downstream and upstream directions, a
transmission speed in excess of 200 kilobits per second (kbps)" (FCC, 2004, p. 12).
Two main forces are driving the current wave of municipal Wi-Fi
deployment. Firstly, with mass-produced low-cost unlicensed wireless
technology, municipalities have easy access to the means: Wi-Fi networks
are relatively inexpensive to deploy and operate, and they take advantage of
available city assets such as street lights and urban furniture, which make
ideal antenna sites. Secondly, municipal governments point to a growing list
of motives: Wi-Fi networks can help them to provide connectivity for city
employees, entice businesses to locate in their downtowns, make their local
convention centers more desirable, and offer broadband internet access to
citizens whose homes were beyond DSL's reach.
The mass-market development of Wi-Fi technology has given local
governments the means to deploy pervasive local broadband networks that
are relatively inexpensive when compared to earlier wired alternatives. The
technology's success resulted from three main forces that led to its widespread diffusion (BAR & GALPERIN, 2004). Firstly, the absence of licensing
requirements for the 2.4 GHz and 5 GHz spectrum in which Wi-Fi operates
has led to wide-ranging participation in the technology's development.
Secondly, industry-led standardization of the technology through the IEEE
and the Wi-Fi Alliance has ensured broad interoperability. Finally, the
resulting large scale production of Wi-Fi chipsets resulted in low unit costs
for Wi-Fi equipment, fueling the technology's integration as standard
equipment in laptop computers and allowing widespread diffusion of Wi-Fi
access points for private and public use. The availability of unlicensed
spectrum and low equipment costs have spurred new demand from
households, businesses, and government agencies for wireless internet
access, thus motivating local governments to explore wireless broadband
provision as part of their commitment to serve their communities.
Furthermore, the related development of new mesh wireless
architectures gives cities a distinct advantage. Early uses of Wi-Fi mainly
saw the connection of access points at the end of broadband lines, offering a
'cordless Ethernet' for data access reminiscent of cordless phones for voice.
Mesh architectures allow data to bounce from one Wi-Fi device to the next,
reaching its ultimate destination through a series of wireless hops.
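The multi-hop behaviour described above can be sketched in a few lines of code. Everything below — the node layout, the 120-metre radio range, and the function names — is an illustrative assumption, not a description of any deployed mesh system:

```python
from collections import deque

RANGE = 120.0  # assumed radio reach in metres (illustrative)

def neighbours(node, nodes):
    """Nodes within radio range of `node`, i.e. candidate next hops."""
    x, y = nodes[node]
    return [n for n, (nx, ny) in nodes.items()
            if n != node and (nx - x) ** 2 + (ny - y) ** 2 <= RANGE ** 2]

def hop_path(nodes, src, dst):
    """Shortest relay path from src to dst, found by breadth-first search."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for n in neighbours(path[-1], nodes):
            if n not in seen:
                seen.add(n)
                queue.append(path + [n])
    return None  # destination unreachable

# Street lights 100 m apart: only one of them needs a wired backhaul, and
# traffic from a distant laptop reaches the gateway in three wireless hops.
lamps = {"gateway": (0, 0), "lamp1": (100, 0), "lamp2": (200, 0),
         "laptop": (300, 0)}
print(hop_path(lamps, "laptop", "gateway"))
```

The point of the sketch is the one made in the text: each added node only needs power and proximity to another node, since the path to the wired gateway is discovered hop by hop rather than provisioned.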
Importantly, while cordless Ethernet deployments required each access
point to be wired to the network, meshed devices only need a power source
and self-organize into an alternative network. This gives municipalities a
distinctive advantage since they control a large number of powered locations
years and broadband access for American citizens continues to lag behind
most other industrialized countries (TURNER, 2005). Some local
governments, faced with what they perceive as lukewarm private sector
efforts to solve this problem, are seeking to provide "public information
utilities" as they did in the past with essential public utilities such as
electricity or water (SACKMAN & BOEHM, 1972; SACKMAN & NIE, 1973).
Sifting through the various justifications for municipal Wi-Fi deployment
projects, we find evidence of these three rationales. Municipal Wi-Fi
networks serve to increase the effectiveness of government service delivery
in many public safety networks. For instance, the police department of San
Mateo, CA, claims it was able to improve the productivity of its officers
without increasing the number of patrols on the street by giving them better
access to police resources through a metro-scale Wi-Fi network. Municipal
deployments are pursued for economic development goals in metropolitan
cities such as Philadelphia, Los Angeles, or San Francisco, which are
currently planning to construct citywide Wi-Fi networks or to provide free
broadband access in hotzones. In the case of Philadelphia, the city
government decided early this year to spend an estimated USD 10 million to
build a Wi-Fi network that would cover the entire 135-square-mile city area
as a way to remain a competitive location for businesses and an attraction
for visitors (The Wireless Philadelphia Executive Committee, 2005). In a plan
to deploy a citywide Wi-Fi network, the city government of Los Angeles also
claims that it is essential to have a reliable, affordable and accessible
broadband network in order to maintain and enhance the economic activities
of the city (Mayor's Wi-Fi & Beyond Executive Advisory Panel, 2005).
Small counties where other broadband services have not been available
are also considering citywide Wi-Fi networks or hotzones in order to entice
businesses to locate in their community or keep them from leaving. For
instance, some local governments have built Wi-Fi networks in an effort to
meet the needs of businesses that require high speed communication
facilities. The city government of Scottsburg, IN, which has a population of
8,000, decided to build its own Wi-Fi network in response to local auto
dealerships' request for broadband internet service provision. In order not to
lose approximately 70 jobs provided by the local dealerships, the city
government has invested $385,000 for the construction of a Wi-Fi network
(REARDON, 2005), which currently has more than 400 customers
(Muniwireless.com, 2005). Long Beach, CA, provides free wireless internet
access in its downtown and convention center in an effort to attract visitors
and convention organizers (BAR & GALPERIN, 2004).
Models of municipal Wi-Fi deployment (who owns the network vs. who operates it):

Who owns? \ Who operates? | City                    | One private actor | Multiple others
City                      | Public utility          | Hosted services   | Public overlay
One private actor         | Wholesale               | Franchise         | Private overlay
Multiple others           | Wholesale open platform | Common carrier    | Organic mesh
City-owned networks
In a first set of cities, local governments choose to own the Wi-Fi network
infrastructure. This option is often chosen in cities where the initial motivation
is to provide communication facilities for the city's internal needs. They
typically contract with an equipment maker to install network equipment on
city-owned sites. When their plans go beyond internal use to include offering
Wi-Fi services to the public, municipalities have three main choices.
The first is for the city itself to operate the service and retail it, through a
public utility along the lines of municipal water or power utilities. The most
prominent reason for adopting this model is to take advantage of the past
experience of public utility companies in the provision of other infrastructure.
such an arrangement, now that DSL is moving away from providing an open
network for ISPs 3.
In all three operation approaches, a city's ownership of the network gives
it substantial control over its deployment, coverage and service conditions.
However, these modes put the city in direct competition with private telcos
and cable companies for the provision of internet service.
3 Cable-based broadband networks have always been closed in the U.S. since the terms of
their franchise agreement do not require them to be common carriers.
4 For more information, see http://nocat.net. The project, hosted by O'Reilly and associates in
Sebastopol, CA, takes its name from a quote attributed to Albert Einstein, who is said to have
described radio in the following way: "You see, wire telegraph is a kind of a very, very long cat.
You pull his tail in New York and his head is meowing in Los Angeles. Do you understand this?
And radio operates exactly the same way: you send signals here, they receive them there. The
only difference is that there is no cat."
efficient way for cities to serve their own networking needs should be to
in-source or out-source the Wi-Fi infrastructure and its operation. One
important related factor that should be included in that analysis is the
potential for synergies opened by having a single player, the local
government, act as both provider and user of the network, which opens
useful avenues for virtuous learning cycles.
- City funding of the network. In many cases, municipal involvement in
Wi-Fi deployment has an important financial component: cities might
propose to fund the network's construction or subsidize its operations. In
such cases, it is important to examine the two underlying rationales for such
use of taxpayers' money. Firstly, it might be argued that private parties would
be short-sighted, or their capital too "impatient" to wait for long-term returns.
A second argument would encourage the inclusion of social goals such as
bridging the digital divide in the evaluation of such funding decisions. In both
cases, the underlying political debates should be confronted directly.
- City regulation of prices. Finally, some local governments seek to
justify their involvement on the grounds that there is a social need for them
to influence the service's pricing level and structure, so as to encourage
access by certain population categories. Critics have pointed out that
providing target user populations with vouchers toward commercial internet
access may be a more effective way to achieve these goals (THIERER,
2005). Here again, the underlying social policy deserves to be debated
directly, and its mechanisms clearly articulated.
The deployment of municipal networks has provoked a political and legal
fight over local governments' right to build those networks.
Confronted with the increase in municipal Wi-Fi networks, legacy network
providers (telecom and cable companies) and their supporters argue that
cities and municipalities have unfair advantages over private companies,
because they regulate those private companies, avoid fees and taxes, obtain
low-cost financing, and utilize public work forces and facilities. They argue
that city and municipality subsidies allow them to offer network access at
below-cost prices, which in turn distorts fair competition and puts private
companies at a serious disadvantage.
Telcos Verizon and SBC initially led the battle, but they have since been
joined by cable companies such as Comcast. After the
announcement by the city government of Philadelphia that EarthLink
would be the ISP for Philadelphia's citywide Wi-Fi network, Comcast argued
that the role of local governments should be limited to that of a disinterested
arbiter, and that they should not be the ones to pick Wi-Fi winners
(COOPER, 2005). Carriers have successfully lobbied for state and federal
legislation to prohibit local governments from providing broadband internet
services.
Thus far, at least 14 states including Texas, Virginia and Missouri have
enacted laws that would prohibit municipal provision of broadband internet
services, while Nebraska explicitly allowed local governments to provide
such services. The debate has now extended beyond the States to the
national level. Early this year, Texas Representative Pete Sessions introduced
a bill that would effectively prohibit state and local governments from providing
internet, telecommunications, or cable services if a private company offers a
substantially similar service. Senators John McCain and Frank Lautenberg
introduced an opposing bill, the so-called "Community Broadband Act of
2005", which would, by contrast, explicitly authorize local governments to
deploy broadband networks. Later, Senator John Ensign introduced another
bill, the "Broadband Investment and Consumer Choice Act of 2005",
suggesting that local governments must first notify carriers and allow them to
bid for the provision of broadband services if they want to offer the services
to their residents. While Ensign's bill does not abolish municipal
governments' right to deploy broadband networks, it places heavy
administrative burdens in their way, making it fairly close to Sessions' bill
(TAPIA, STONE, & MAITLAND, 2005). The debate with respect to the
municipalities' provision of broadband internet access is ongoing in the U.S.
Congress.
Conclusion
References
BALLER J. & STOKES S. (2001): "The case for municipal broadband networks:
Stronger than ever", Journal of Municipal Telecommunications Policy, 9(3), from
http://www.baller.com/library-art-natoa.html [October 9th, 2005]
BAR F. & GALPERIN H.:
- (2004): "Building the wireless internet infrastructure: From cordless Ethernet
archipelagos to wireless grids", COMMUNICATIONS & STRATEGIES, 54(2), pp. 45-68.
- (2005): "Geeks, cowboys and bureaucrats: Deploying broadband, the wireless
way", paper presented for the Network Society and the Knowledge Economy, Lisbon,
Portugal.
BENKLER Y. (2002): "Some economics of wireless communications", Harvard
Journal of Law & Technology, 16(1), pp. 25-83.
CARLSON S.C. (1999): "A historical, economic, and legal analysis of municipal
ownership of the information highway", Rutgers Computer & Technology Law
Journal, 25(1), pp. 3-60.
COOPER C. (2005): "Should you have a right to broadband?", CNet News.com,
October 21st, from:
http://news.com.com/Should+you+have+a+right+to+broadband/2010-1071_3-5905711.html
COX A. (2004): "More municipalities offering the service", October 18th, CNN.com,
from http://www.cnn.com/2004/TECH/internet/10/18/wireless.city [October 15th, 2005]
(2005): June 23rd, Forbes, from:
http://www.forbes.com/home/technology/2005/06/23/municipal-wifi-failure-cx_de_0623wifi.html
FCC:
- (2004): "Availability of advanced telecommunications capability in the United
States", Fourth Broadband Deployment Report, September, FCC 04-208, GN Docket
No. 04-54.
- (2005): "Connected and on the go: Broadband goes wireless", Wireless Broadband
Access Task Force Report, February from:
http://hraunfoss.fcc.gov/edocs_public/attachmatch/DOC-256693A1.pdf
[October 10, 2005]
FUENTES-BAUTISTA M. & INAGAKI N. (2005): "Wi-Fi's promise and broadband
divides: Reconfiguring public internet access in Austin, Texas", paper presented at
the Telecommunications Policy Research 2005 Conference, Arlington, VA.
GILLETT S., LEHR W. & OSORIO C. (2003): "Local government broadband
initiatives", paper presented at Telecommunications Policy Research 2003
Conference, Arlington, VA.
LEHR W., SIRBU M. & GILLETT S. (2004): "Municipal wireless broadband: Policy
and business implications of emerging access technologies", from:
http://itc.mit.edu/itel/docs/2004/wlehr_munibb_doc.pdf [October 8th, 2005]
Mayor's Wi-Fi and Beyond Executive Advisory Panel (2005): "Fast & easy: The future
of Wi-Fi & beyond in the city of Los Angeles", April 25th, from:
http://www.lacity.org/mayor/LA_Wifi&Beyond_0504.pdf [October 8th, 2005]
Muniwireless.com (2005): "Scottsburg, Indiana wireless network saves the
community", from:
http://muniwireless.com/municipal/projects/295 [October 16th, 2005]
Pronto Networks (2004): "Metro-scale broadband city network in Cerritos, California",
Pronto Networks Case Study, from:
http://www.prontonetworks.com/CerritosCaseStudy.pdf [October 8th, 2005]
REARDON M. (2005): "Local officials sound off on municipal wireless", May 3, CNet
News.com, from:
http://news.com.com/Local+officials+sound+off+on+municipal+wireless/2100-7351_3-5694248.html
REED D. (2002): "How wireless networks scale: the illusion of spectrum scarcity",
presentation given at ISART 2002, Boulder, CO.
SACKMAN H. & BOEHM B. (1972): Planning community information utilities,
Montvale, NJ: AFIPS Press.
SACKMAN H. & NIE N. (1973): The information utility and social choice, Montvale,
NJ: AFIPS Press.
TAPIA A., STONE M. & MAITLAND C. (2005): "Public-private partnership and the
role of state and federal legislation in wireless municipal networks", paper presented
at Telecommunications Policy Research 2005 Conference, Arlington, VA.
The Wireless Philadelphia Executive Committee (2005): "Wireless Philadelphia
business plan: Wireless broadband as the foundation for a digital city", from:
http://www.phila.gov/wireless/pdfs/Wireless-Phila-Business-Plan-040305-1245pm.pdf
[October 8th, 2005]
THIERER A.D. (2005): "Risky business: Philadelphia's plan for providing Wi-Fi
service", Periodic Commentaries on the Policy Debate, The Progress & Freedom
Foundation Report, from http://www.pff.org/issues-pubs/pops/pop12.4thiererwifi.pdf
[October 15th, 2005]
TONGUE K.A. (2001): "Municipal entry into the broadband cable market:
Recognizing the inequities inherent in allowing publicly owned cable systems to
compete directly against private providers", Northwestern University Law Review,
95(3), pp. 1099-1139.
Tropos Networks:
- (2004a): "Metro-scale Wi-Fi as city service chaska.net, Chaska, Minnesota", A
Tropos Network Case Study, from:
the paper, our analysis shows how time matters and how the temporal
specificity of the regulation device has an impact on the relevance and the
efficiency of the European regulatory governance structure.
1 A principle whereby competences are distributed between member states and the Union: "in
the fields which do not fall within its exclusive competence, the Union intervenes only if and
insofar as the objectives of the proposed action cannot be sufficiently achieved by the Member
States and can therefore, by reason of the scale or effects of the proposed action, be better
achieved at Community level" (ISAAC, 1994: 49-51).
A. BAUDRIER
2 The E.R.G. was established in July 2002. Its members are the heads of the National
Regulatory Authorities (NRAs). These comprise the 25 EU member states, the four EFTA
states (Switzerland, Norway, Iceland and Liechtenstein) and the 3 EU accession/candidate
states (Bulgaria, Romania and Turkey). The European Commission attends and participates
in E.R.G. meetings.
The European Parliament is fully informed and can adopt a resolution if the
measures overlap the executive powers.
making the European Union the most competitive economic area in the
world by 2010.
This rapid overview raises questions regarding the efficiency of the new
governance structure in the light of the concepts of transaction cost theory.
policies (CHERRY & WILDMAN, 1999:613). Thus, we can conclude that the
organizational inefficiency is a means of fragmenting powers to safeguard
some individual rights.
One way of fragmenting powers involves the temporality of the regulation
process. Few research papers refer to this notion (QUELIN &
RICCARDI, 2004: 74). However, in addition to the usual concept of physical
asset specificity, "temporal specificity" is worth mentioning (CROCKER &
MASTEN, 1996: 8, 27). Indeed, this form of specificity refers to a situation
whereby it is difficult to replace a missing partner within a reasonable time
period, or whereby there is a very tight time limit for realizing transactions
(BICKENBACH, KUMKAR & SOLTWEDEL, 1999: 3). In these situations,
one of the partners is dependent on another. There then occurs a form of
temporal monopoly resulting from the fact that one partner is, to some
extent, dependent on the other's action (MacKAAY, 2004: 12-13).
The notion of temporal specificity seems a particularly judicious tool for
analysing the relevance and efficiency of the implementation of the EU
regulatory framework in electronic communications.
Timeline of the notification procedure for draft regulatory measures:
- Pre-notification meetings for the draft regulatory measures: 24 weeks on average.
- The national regulator carries out a public consultation, with the advice of the
national competition authority and of other national regulators: + 6 weeks.
- The notified measure is examined, with the advice of CoCom and of other national
regulators: + 6 weeks, after which the European Commission indicates its agreement,
or (+ 4 weeks) indicates its disagreement.
- In case of disagreement, after an in-depth examination (+ 8 weeks), the European
Commission exerts its veto.
3 National regulators informed the Commission of their market analyses in a disparate way and
at different times. While seven regulators (the United Kingdom, Finland, Ireland, Portugal,
Austria, Sweden, Hungary) had notified the Commission of at least one market analysis by the
end of January 2005, twelve regulators (Belgium, Cyprus, Spain, Estonia, Italy, Latvia,
Lithuania, Luxembourg, Malta, Poland, Czech Republic, and Slovenia) had still not notified the
Commission of any market analysis by this date.
In the end, it appears that the multiplicity of players and the
quasi-contractualisation of their relations complicate the governance structure.
The regulatory framework, by regulating markets at both national and Community
level, is admittedly an asset insofar as it offers institutions a degree of
flexibility and adaptability. However, the complexity of the organizational
structure, by multiplying coordination and temporal costs, could ultimately be
detrimental to market development. Thus, although the
decentralization of implementation powers seems in accordance with the
subsidiarity principle, the long and onerous procedures involved in the
relevant-market regulation device are likely to harm its relevance and its
adequacy in terms of building a single electronic communications market.
Conclusion
Our study shows that it is important to take into account the temporal
specificity of regulation in designing a regulatory governance structure. The
governance structure resulting from the reform is relatively effective, insofar
as it meets the need for guarantees against discretionary power and the
inherent risk of opportunism. The complexity of the institutional structure,
through the dispersion of powers and competences, seems deliberate. The
regulatory governance structure is useful to legitimate interests such as
political balance and transparent relations between national regulatory
authorities, European institutions and firms.
However, the multiplicity of players and the quasi-contractualisation of their
relations complicate the governance structure. Is this complexity, by
multiplying coordination costs and temporal costs, detrimental to the fast
moving electronic communications markets? How can such problems be
prevented?
These questions raise important concerns to be taken into account when
the implementation of the EU regulatory framework is reviewed in the future.
References
BICKENBACH F., KUMKAR L. & SOLTWEDEL R. (1999): "The New Institutional
Economics of Antitrust and Regulation", Kiel Working Paper no. 961, Kiel Institute of
World Economics.
CHERRY B.A. & WILDMAN S.S. (1999): "Institutions Endowment as Foundation for
Regulatory Performance and Regime Transitions", Telecommunications Policy, 23.
COASE R. (1988): The Firm, the Market, and the Law, The University of Chicago
Press, Chicago.
CROCKER K.J. & MASTEN S.E. (1996): "Regulation and Administered Contracts
Revisited: Lessons from Transaction-Cost Economics for Public Utility Regulation",
Journal of Regulatory Economics, vol. 9, pp. 5-39.
ISAAC G. (1994): Droit communautaire général, 4ème édition, Collection Droit
Sciences économiques, Masson.
LEVY B.D. & SPILLER P.T.:
- (1996): Regulations, Institutions, and Commitment: Comparative Studies of
Telecommunications, Cambridge, UK: Cambridge University Press.
- (1994): "The Institutional Foundations of Regulatory Commitment", Journal of Law,
Economics and Organization,10 (Fall), pp. 201-246.
MacKAAY E. (2004): Le théorème de Coase, version préliminaire, chapitre "Analyse
économique du droit II. Institutions juridiques", Éditions Thémis, Montréal et
Bruylant, Bruxelles.
MENARD C. & SHIRLEY M. (2002): "Reforming Public Utilities: Lessons from Urban
Water Supply in Six Developing Countries", The World Bank.
NORTH D.C.:
- (1991): "Institutions", Journal of Economic Perspectives, vol. 5, no. 1, pp. 97-112.
- (1990): Institutions, Institutional Change and Economic Performance, Cambridge
University Press.
QUELIN B. & RICCARDI D. (2004): "La régulation nationale des
télécommunications : une lecture économique néo-institutionnelle", Revue française
d'administration publique, no. 109, pp. 65-82.
WEINGAST B.R. & MARSHALL W.J. (1988): "The Industrial Organization of
Congress; or, Why Legislatures, Like Firms, Are Not Organized Like Markets", The
Journal of Political Economy, vol. 96, no. 1, February, pp. 132-163.
Abstract: A theory of scale and scope that takes into account the endogenous nature of
technology and the contextual manner in which systems architecture and functionality are
shaped by market structures requires an alternative approach to modelling and analysis.
Following on from "A new view of scale and scope in the telecommunications industry;
implications for competition and innovation" (BOURDEAU et al., 2005), we apply the
concepts of embeddedness, integration and competition to show how the current models
can be improved. We also show how the many-layered "network of networks" can be
evaluated.
Key words: scale and scope; modelling; innovation and competition
(*) We are extremely grateful to James Alleman, Dimitris Boucas, Paul David, Catherine de
Fontenay and Christiaan Hogendorn for their influential assistance.
The most concrete (and often only) data available are the data that
describe the incumbent's performance. Those data at the aggregate level do
not reflect competitive market forces, except to the limited extent that
incumbents' stock shares are publicly traded in the economy-wide capital
market. Therefore such data do not offer an understanding of the
competition problem and the dimension of the challenges actually faced by
incumbents and entrants alike. Even if cost data are adjusted through cost
allocation formulae, these methods are at bottom arbitrary and without any
direct link to what firms could be expected to do when confronted by
competitors. The problem this creates can be illustrated by Gould's 1889
monopolization of all the railroad facilities in Saint Louis, Missouri
(LIPSKY & SIDAK, 1999). Through this action, all market signals about the
relative value of the various bridges, stations, etc. disappeared. Information
about the efficient use of assets in a monopoly-dominated environment is
similarly difficult to come by. While there have been occasional small steps
to recognize and correct some of these problems (see SIDAK & SPULBER,
1997), most are grossly inadequate, if not misleading (ECONOMIDES,
1997) 1. Much more significantly, none of the attempts reflect the proper
characteristics of incompletely developed markets, yet the existence of a
competitive market that is unaffected by any one firm's decision to integrate
vertically is set out by COASE (1937) and WILLIAMSON (1985) as a
prerequisite 2. The fundamental problem appears to have been
acknowledged by Spulber when he developed the "market determined"
ECPR [M-ECPR] access pricing method.
The distortion of economic analysis by inadequate data has myriad
ramifications. For example, FUSS & WAVERMAN'S (2002) work implies that
it is impossible for an entrant to recover its fixed cost from the incumbent's
technology unless it is duplicating the entire range of outputs the incumbent
produces. This means that cost-based pricing methods that do not look at
alternative technologies will be biased in favour of the legacy technology.
Cost-based pricing approaches such as ECPR accept the legacy cost
structure as efficient and free of entry barriers. Once the basic problems of
these approaches are understood, it is not unreasonable to argue that prices
so derived exceed the market value that would emerge as competition
becomes established 3.
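For readers unfamiliar with ECPR, a standard textbook statement of the rule can be sketched as follows; the notation is ours, not the authors', and this is a simplification of the rule, not the article's own formulation:

```latex
a \;=\; c_a \;+\; \underbrace{(p - c_a - c_d)}_{\text{incumbent's opportunity cost}}
```

where $a$ is the access charge, $p$ the incumbent's retail price, $c_a$ the incremental cost of the upstream (access) component, and $c_d$ the incremental cost of the downstream component. The rule therefore builds the legacy cost structure, via $p$, directly into the access charge. If regulation were "perfect" in the sense discussed below, the retail price would be cost-based ($p = c_a + c_d$), the opportunity-cost term would vanish, and $a = c_a$, i.e., ECPR would reduce to LRIC.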
The inadequate data problem is much more than a problem for regulatory
debates, although many of these debates have important outcomes for the
players and policy. It is, moreover, a fundamental challenge to the internal
management of incumbents and would-be entrants alike. Without an ability
to know more about how to efficiently deploy assets to best meet evolving
markets, there is an unusually high degree of risk attached to making
business decisions. Useful thinking about what competition might look like
can be accomplished using our estimation. Achieving this successfully
relies primarily on using the limited historical experience we have had with
1 SIDAK & SPULBER (1997, p. 371) do consider the social opportunity cost, but only when
regulation is perfect. At that point, the opportunity cost that incumbents would receive for
foregoing a downstream customer is zero and the "efficient component pricing rule" [ECPR]
reduces to LRIC. They do not consider how the private opportunity cost deviates from the social
opportunity cost at other regulated prices in the form of a private rent for the incumbent.
2 WILLIAMSON (1976) illustrates what a player with market power may be able to do and how
neglecting the competitive assumptions can produce totally different results. His analysis helps
to highlight the contrast between his competitive model and a model where a player has market
power.
3 TELRIC ("total element long run incremental cost") represents a conceptual step toward
correcting cost-based pricing from this inherent "monopoly-centric" bias (MANDY & SHARKEY,
2003).
Modelling
4 For the policy maker, there is the Schumpeterian model that all works out in the end - but it
may take a very long time to work itself out. That model is of little use to the businessperson.
Besides, the approach implied by STIGLER (1951) and CHANDLER (1990) should be more
efficient once the time dimension and its associated cost are taken into account.
5 PUPILLO & CONTE (1998) is the only econometric study to date that has taken a credible
step toward looking at activity- and layer-specific economies of scale and scope.
6 Properly, we should speak of returns to a set of factors that are changed in a fixed proportion,
while the remaining factors are held constant. It is a basic result of classical economics that,
eventually, those factors that are increased 'to scale' will exhibit diminishing returns. In
many situations, say, the local loop, those diminishing returns are rarely reached before the
telephone company increases capacity. The same is true of most industrial processes since few
things are as harmful to a firm as not being able to satisfy demand and prices can rarely be
changed, even in a competitive sector, as flexibly as presumed in theoretical neoclassical
models. It follows that in most industrial processes, and especially in telecommunications, the
fixed costs or inputs that cannot be changed in the short run do not act as a constraint, since
they are available in excess capacity (CAVES & BARTON, 1990). For such reasons, in spite of
the eventual diminishing returns to a subset of inputs, telecommunications firms commonly
operate where there are indeed substantial increasing returns to variable factors. This factor is
rarely considered in policy debates, where most assertions about scale economies do not even
consider the long-run and only refer to situations where a subset of inputs are held constant.
measure, from time-based circuit usage to "always on" access, for example,
will have a significant and often hard-to-predict impact on economies of scale
and scope, even though nothing is changed in the technology itself. Still,
established firms often need to change in response to changing market
conditions. Frequently it is changing scale and scope opportunities made
possible by technology or competitive innovation that pressure such moves.
Sunk costs and monopoly legacy may be the cause of apparent
economies within an existing structure, but they may not relate to true
economic efficiencies that benefit the firm (or society). Instead, these factors
may have both inefficiency consequences and anticompetitive
consequences.
The example of the numbering plan invention can again be used to
illustrate this point. Once trunking was placed under the control of a
ubiquitous monopoly service provider, the manufacture of specialized
equipment became possible that might not otherwise have been developed
under open market conditions.
In the American networking environment, one might think of a class 4
switch that routes large amounts of traffic across regions to a small set of
assigned points. Economists recognize that specialized equipment can
reduce the time of producing a unit of output, but the set-up time required for
such specialized equipment can be high 7. Therefore, small firms, and those
operating in a competitive environment, might instead choose to use more
general-purpose equipment "off the shelf." Such a decision may make the
most sense in a market environment where costs are constantly challenged.
The class 4 switch represents a large commitment of "sunk" resources that
may make it harder to introduce innovation, but which does provide a
substantial advantage for a limited set of functions.
It is probably inefficient as a technology when compared to distributed
processing, and it may well constitute a barrier to entry, although by some
measures it improved efficiency. Regardless of the choices made, the
normal systems model poorly informs operators about the efficiency of their
investments.
We talk of short-run economies of scale to specify those assertions found in the literature, in
reports published by operators and in regulatory decisions.
7 See virtually any text on industrial organization, such as WALDMAN & JENSEN (2000):
Industrial Organization, Addison-Wesley.
Whether the incumbent's net short-run costs, i.e., the sum of all (implicit and
explicit) short-run costs, are negative or positive, the entrants'
corresponding short-run costs are unlikely to be negative, hence producing a
net benefit. The incumbent's net direct costs are always negative. As far as
incumbents are concerned, it is futile to compute meaningful implicit short-run
costs from accounting data. That does not mean that the situation is hopeless.
Those costs relate primarily to factors such as the incumbent's obligation to
provide certain services, universal service etc. ARMSTRONG (2000) provides a
partial list of these costs, as well as the costs borne by incumbents. If the total
cost, i.e., the sum of the direct and indirect costs, is higher than it would be
under some alternative organization, then it is reasonable to argue that the
incumbent, in the light of its fiducial obligations to its owner, would choose the
alternative organization. An alternative organization would consist in some
divestiture of those elements of the business that are the source of losses. The
advantage of such a divestiture is its ability to provide an improved estimate of
the incumbent's local costs. If we assume that the regulator has no intention of
making the local loop financially unviable, it is rational to conclude that the
regulator will act on the revealed cost of the local bottleneck and impose
reasonable access tariffs without concerns about crosssubsidization. This
scenario implies that the benefits the incumbent derives from the existing
organization are superior to those derived from a divestiture, i.e., that the
incumbent is able to derive a rent from the service obligations. That rent was
probably in the form of influencing public authorities to set access conditions
that would make competitive entry less likely. If this were not the case, the kinds
of arguments found in the literature or in such places as Justice Breyer's
dissents would make it a fiduciary obligation for incumbents to seek such a
divestiture. One of the authors was informally asked, as a member of a team of
experts, to prepare a proposal for an overseas incumbent, to evaluate whether
the latter would not be better off divesting its local networks in light of what it
saw as new and exceptionally extreme regulated access pricing conditions.
Within weeks that incumbent changed its mind (though not its public relations
campaign). It never divested itself of its local network. Similarly, a few years ago
BT made a proposal to divest its local facilities. After having received two offers
within a very short time, it pulled out of this project and is still vertically
integrated. In all of these situations and in many others, had service obligations
not been a cost, those incumbents would have had the opportunity to reveal
their implicit losses by placing the source of those losses on the market and
thus revealing the actual size of those so-called losses. On the contrary, it is
probable that an increase in their fixed costs, including an increase in
uncertainties, can only undermine new entrants' chance of success. From that
perspective, what we observed beginning in 2001 with the fall of competitor
after competitor is hardly surprising. Moreover, the correspondence between
economies of scale and high entry costs is generally taken for granted by
economists (WOROCH, 2002) and some regulators (e.g. POWELL, 2001) and treated as an
efficient outcome, yet there is no logical basis for such a conclusion.
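The total-cost test behind the divestiture argument above can be written out as a toy calculation. Every figure below is hypothetical and purely illustrative; the point is only the comparison of organizational forms:

```python
# Toy version of the divestiture test discussed above: compare the total
# cost (direct plus indirect, e.g. service-obligation, costs) of the current
# organization with that of an alternative, divested organization.
# All figures are hypothetical (millions per year).

def total_cost(direct: float, indirect: float) -> float:
    """Total cost of an organizational form."""
    return direct + indirect

integrated = total_cost(direct=900.0, indirect=250.0)  # status quo
divested = total_cost(direct=950.0, indirect=80.0)     # local loop sold off

# Absent an offsetting rent, fiduciary duty would point to the cheaper form;
# keeping the costlier one suggests the status quo yields such a rent.
prefer_divestiture = integrated > divested
print(integrated, divested, prefer_divestiture)
```

Under these illustrative numbers the divested organization is cheaper, so an incumbent that nonetheless stays integrated is, by the article's logic, revealing a rent from the existing arrangement.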
The competitive failure that comes about when investors are reluctant to
finance entrants' non-recurring costs is clearly a benefit to the incumbent
(DAWSON, 2002). However, this benefit is likely to be short run since
innovations that might have assisted incumbents to better anticipate and
react to market changes are eliminated (HORAN, 2002). Entrants who
anticipate this will be far better informed about effective entry conditions and
thus far more dangerous.
10 Cost models, especially engineering-based cost models, attempt to address those factors.
If incumbents do not follow that path, then they are not subject to
competitive discipline in a meaningful way. In that case, an incumbent's
actions, such as investment and resource allocation decisions, may be
efficient neither for society nor for its shareholders.
11 While Armstrong lists a number of potential market failures that can be expected to affect
consumers, entrants, and the monopoly incumbent respectively, he models only the gross cost
impact of universal service on incumbents.
12 JORDE, SIDAK & TEECE (2000) state that "[m]andatory unbundling decreases an ILEC's
incentive to invest in upgrading its existing facilities by reducing the ex ante payoffs of such
investment. ... It makes no economic sense for the ILECs to invest in technologies that lower its
own marginal costs, so long as competitors can achieve the identical cost savings by regulatory
fiat" (p. 8). Their approach implies that there is no way for incumbents to differentiate between
the returns on upstream and downstream activities. Yet, it is interesting to note that their results
rest only on assertions, which they have not established and which cannot be established, that
TELRIC is too low.
13 Quoted from BESANKO, DRANOVE & SHANLEY (2000), p. 109.
Those economies of scale and sunk costs do not relate to the incumbent
telephone companies we observe today.
When we look at higher layers, the layer of interconnected networks, we
observe large economies of scale at the aggregate level, i.e., at the network
of networks. However, once more those economies are unrelated to any
individual operator. Whether those individual networks are large or small has
little to do with economies of scale and with sunk costs. The only economies
of scale that are relevant are the economies of scale that all networks are
able to achieve when they are interconnected into a network of networks.
Similarly, the sunk costs are related to limited geographical areas,
neighbourhoods in cities, and the extent to which they are actually "sunk,"
i.e., not recoverable, is questionable. Individuals in any of those
neighbourhoods would still have the same demand for telecommunications
services, i.e., there would be entrepreneurs willing to take over those
facilities to continue to offer those services. This may not be done at the price
existing incumbents may like, but this may reflect nothing more than, for
example, increased efficiency in the use of rights of way, conduits, and
poles. Telephone companies as monopolies created artificial costs by
refusing to share conduits or ducts, while new entrants have been efficient at
making use of existing rights of way such as sewage systems, canals, and
subways. So, we can see that in these cases, the problem of sunk costs is
largely fictitious. It mostly reflects the organizations incumbents have chosen
for themselves, organizations that have fought against vertical and horizontal
disintegration, hence against what appear to be normal long-term trends
(STIGLER, 1951; CHRISTENSEN, 1997; BOURDEAU de FONTENAY &
HOGGENDORN, 2005).
Innovation
15 This is well illustrated by many experiences in telecommunications and not just by the
regulator's often successful efforts to lower wholesale transaction costs by creating wholesale
markets throughout the 1980s and 1990s. For instance, the videotext experience of the 1980s
produced the same results. While videotext was introduced in such countries as Canada
(Telidon), France (Teletel), Germany (Bildschirmtext), Japan (Captain), Sweden, the U.K., and
the U.S., it was only in France that its deployment was successful. One of the key differences
between France and all the other countries is that France was the only nation to provide
information service providers a decentralized (i.e., not vertically integrated) public address on
the X.25 network. All the others adopted a vertically integrated, centralized approach. Teletel's
entry cost was so low that a majority of the information service providers created their services
and managed them on Apple E computers. Some of those were exceptionally successful.
is complex and costly to implement, but policy makers accept that challenge
because of the common assessment that continued monopoly in a dynamic
environment is much more costly 16. By the same token, the dynamic
environment creates new challenges for existing players and existing
markets, but it also creates new opportunities for growth and profit. The
public policy purpose of achieving greater efficiency to benefit public welfare
is mirrored by the opportunity and the need for existing players to become
more efficient.
16 FCC Chairman Powell (2001) argues that "[s]uch an approach requires heavy regulation to
protect against the anticompetitive and anti-consumer tendencies of a monopolist. And, it
requires heavy government management of expenses, revenues and rates... Economic scale
does matter and it does take a great deal of resources to deploy these networks...".
17 Even where there are significant "sector-specific" constraints, as in the case of electricity,
those need not prevent the sharing of facilities, for example, between telecommunications and
electricity. In that case, dominant in the U.S., electricity transport poles are used because, for
safety and security reasons, they have to be taller. The poles are divided into horizontal zones:
in the upper part, an electricity transport zone; below it, a "no man's land" that acts as a safety
zone; and below that, a communication zone for all telecommunications needs. This type of
arrangement would seem to be unique to North America, even though there is considerable
pressure on a large number of municipalities around the world to share facilities, even at the
construction stage.
Bibliography
ARMSTRONG Mark (2002): "The Theory of Access Pricing and Interconnection", in
Handbook of Telecommunications Economics: Structure, Regulation and
Competition, CAVE Martin, MAJUMDAR Sumit Kumar & VOGELSANG Ingo (Eds.),
New York: North-Holland.
BESANKO David, DRANOVE David & SHANLEY Mark (2000): Economics of
Strategy, 2nd edition, New York, NY: John Wiley & Sons.
BOURDEAU de FONTENAY Alain & HOGGENDORN Christiaan (2005): "The
Economics of Vertical Integration: Adam Smith, Allyn Young and George Stigler",
working paper and CITI Workshop Reforming Telecom Markets: A Commons
Approach to Organizing Private Transactions, New York: Columbia University, at:
http://www.citi.columbia.edu
BOURDEAU de FONTENAY Alain & LIEBENAU Jonathan (forthcoming): "Innovation
in Telecommunications: the Judicial Process and Economic Interpretations of
Costing, Transacting and Pricing".
BOURDEAU de FONTENAY Alain, LIEBENAU Jonathan & SAVIN Brian (2005): "A
new view of scale and scope in the telecommunications industry: implications for
competition and innovation", COMMUNICATIONS & STRATEGIES, no. 60.
BREYER Stephen:
- (1979): "Analyzing regulatory failure: mismatches, less restrictive alternatives, and
reform", Harvard Law Review, 92 (3): 547-609.
- (2004): Economic reasoning and judicial review, AEI-Brookings Joint Center 2003
Distinguished Lecture, Washington, D.C.: AEI-Brookings Joint Center for Regulatory
Studies.
CAVE Martin, MAJUMDAR Sumit Kumar & VOGELSANG Ingo (Eds.) (2002):
Handbook of Telecommunications Economics: Structure, Regulation and
Competition, New York: North-Holland.
CAVES Richard & BARTON David (1990): Efficiency in U.S. Manufacturing
Industries, Cambridge, MA: MIT Press.
CHANDLER Alfred D. Jr. (1990): Scale and Scope: The Dynamics of Industrial
Capitalism, Cambridge, MA: Bellknap.
CHRISTENSEN Clayton M. (1997): "Making Strategy: Learning By Doing", Harvard
Business Review, November-December.
COASE Ronald H. (1988): "The Nature of the Firm", reprinted from Economica, 1937
as Chapter 2 in The Firm, The Market, and the Law, Chicago, IL: The University of
Chicago Press.
CRANDALL Robert W. (1988): "Surprises with Telephone Deregulation and with
AT&T Divestiture", American Economic Review, 78 (2), pp. 323-327.
DAWSON Fred (2002): "The Powell Doctrine: FCC Chairman Talks Policy with
XCHANGE as Action on Broadband Intensifies", Xchange Web Extras, at:
http://www.xchangemag.com/webextra/241webx1.html (accessed 2002).
ECONOMIDES Nicholas (1997): "The Tragic Inefficiency of the M-ECPR",
Working Paper 98-01, New York University, Leonard N. Stern School of Business,
Department of Economics.
FRANSMAN Martin (2002): Telecoms in the Internet Age: From Boom to Bust to ?,
Oxford: Oxford University Press.
FUSS Melvyn & WAVERMAN Leonard (2002): "Econometric Cost Functions", in
Handbook of Telecommunications Economics: Structure, Regulation and
Competition, CAVE Martin, MAJUMDAR Sumit Kumar & VOGELSANG Ingo (Eds.),
New York: North-Holland.
GASMI Farid, KENNET David Mark, LAFFONT Jean-Jacques & SHARKEY William
W. (2002): Cost Proxy Models and Telecommunications Policy, Cambridge, MA: MIT
Press.
GULATI Ranjay, LAWRENCE Paul R. & PURANAM Phanish (forthcoming):
"Adaptation in vertical relationships: beyond incentive conflict", Strategic
Management Journal.
HARRIS Robert G., & KRAFT Jeffrey C. (1997): "Meddling Through: Regulating
Local Telephone Competition in the United States", Journal of Economic
Perspectives, 11, pp. 93-112.
HORAN Tim (2002): "Communications Restructuring: The Long and Winding Road",
CITI, Columbia University.
JORDE Thomas M., SIDAK J. Gregory & TEECE David J. (2000): "Innovation,
Investment, and Unbundling", Yale Journal on Regulation, 17 (1), pp. 1-37.
LIPSKY Abbot B. & SIDAK Gregory J. (1999): "Essential Facilities", Stanford Law
Review, 51, pp. 1187-1249.
MANDY David M. & SHARKEY William W. (2003): "Dynamic pricing and investment
from static proxy models", working paper 40, OSP Working Paper Series.
Washington, DC: FCC.
POWELL Michael K. (2001): "Remarks", FCC National Summit on Broadband
Deployment, Washington, D.C., October 25, 2001.
PUPILLO Lorenzo & CONTE Andrea (1998): "The Economics of Local Loop
Architectures for Multimedia Services", Information Economics and Policy, 10,
pp. 107-126.
QUIGLEY Neil (2004): "Dynamic Competition in Telecommunications: Implications
for Regulatory Policy", C.D. Howe Institute Commentary, no. 194, Toronto, Canada,
February 2004, at: www.cdhowe.org (accessed October 2004).
Features
Competitive Compliance:
streamlining the Regulation process in Telecom & Media
Gérard POGOREL
École Nationale Supérieure des Télécommunications
CNRS UMR 5141 LTCI-ENST, Paris
(*) This article is an abstract of a report carried out by IDATE entitled "The world broadband
access market". With detailed national analyses of 11 key countries and an assessment of
leading access providers' strategies, the report provides a comparative analysis of markets
around the world, the key challenges facing the sector, and market growth up to 2009.
UK and France are in a dead heat for the largest broadband base, each
having around 10 million connections by the end of 2005.
South Korea is still the benchmark in broadband penetration. Despite
being near saturation, it remains ahead of the world's other mature
markets (the Netherlands, Switzerland, Canada) by a nose. France and the
UK, which have been enjoying significant growth rates, report
penetration rates comparable to that of the US (around 14%).
L. LE FLOCH
In the US, very high-speed broadband has become a reality now that
fibre optic unbundling obligations have been lifted, thus paving the way for
RBOCs' investments in FTTx.
In Europe, most very high-speed broadband projects are being financed
by local authorities or public utilities. But incumbent carriers are becoming
keener on very high-speed broadband, following the relative failure of the
models implemented by FastWeb in Italy and B2 in Sweden, which
extended their coverage through unbundling.
[Figure: Broadband access by region (AP, EU, NA, LA, AME), split between DSL and cable modem/other. Source: IDATE]
After months of swift growth for shared access, a new trend giving priority
to full unbundling is emerging in several markets.
Growth of unbundled DSL in Europe (unbundled lines and % of DSL lines)

                 31/12/2002      31/12/2003      31/12/2004        30/06/2005
Germany          175 000 (5%)    465 000 (10%)   870 000 (13%)     1 500 000 (18%)
Spain            3 000 (0%)      16 000 (1%)     114 000 (4%)      297 000 (9%)
France           3 000 (0%)      273 000 (8%)    1 591 000 (25%)   2 330 000 (30%)
Italy            52 000 (6%)     240 000 (11%)   450 000 (10%)     653 000 (12%)
Netherlands      50 000 (14%)    232 000 (24%)   462 000 (25%)     584 000 (27%)
United Kingdom   2 000 (0%)      8 000 (0%)      47 000 (1%)       69 000 (1%)
Sweden           9 000 (2%)      30 000 (5%)     210 000 (24%)     299 000 (25%)

Source: IDATE
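As an illustration of how differently full unbundling took off across these markets, the period-over-period growth multiples can be computed directly from the line counts (figures transcribed from the table; the computation is just a sketch, not IDATE's methodology):

```python
# Unbundled DSL lines per country at successive dates (from the table above).
unbundled = {
    "Germany":        [175_000, 465_000, 870_000, 1_500_000],
    "Spain":          [3_000, 16_000, 114_000, 297_000],
    "France":         [3_000, 273_000, 1_591_000, 2_330_000],
    "Italy":          [52_000, 240_000, 450_000, 653_000],
    "Netherlands":    [50_000, 232_000, 462_000, 584_000],
    "United Kingdom": [2_000, 8_000, 47_000, 69_000],
    "Sweden":         [9_000, 30_000, 210_000, 299_000],
}

def growth_multiples(series):
    """Ratio of each observation to the previous one, rounded to 1 decimal."""
    return [round(b / a, 1) for a, b in zip(series, series[1:])]

for country, series in unbundled.items():
    print(country, growth_multiples(series))
```

France's trajectory (a 91.0x jump in 2003, then 5.8x, then 1.5x) stands out against the UK's persistently low base, which matches the percentages shown in the table.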
however, going head to head with cablecos, which still lead the way with their
triple play bundles.
The FCC took a series of measures to redefine the scope of
unbundling. Lifting unbundling obligations for FTTx had a decisive impact on
RBOCs' investment strategies (Verizon FiOS, SBC Lightspeed).
Technical Innovations
Instant messaging:
Towards a convergent multimedia hub (*)
Vincent BONNEAU
IDATE, Montpellier
(*) This IDATE report offers an in-depth analysis of the instant messaging (IM) market and its
developments (interoperability, VoIP, mobility etc.) in the consumer and business segments.
This is a major market in terms of advertising for portals, traffic for mobile operators and
licences for professional software vendors.
mainly for historical reasons and primarily due to the advantage enjoyed by first
entrants, as products are almost identical. The leading ISP in the
narrowband market, AOL, consequently dominates the North American
market and is capitalising on the installed base of ICQ, acquired in 1998 and
interoperable with its AIM tool since 2004. MSN is the leader in the
European market, benefiting from versions embedded with Windows.
For its part, Yahoo! is well positioned in various markets. In Asia the market
is dominated by local players such as QQ in China, which has since become
a major portal, and NateOn in South Korea, the only IM service offered by a
major ISP to have overtaken the portals. Similar initiatives have been
carried out in Europe (FT/Wanadoo, Tiscali, DT/T-Online, etc.), but have
achieved only very limited success. North American ISPs have preferred to
form alliances with major portals.
Generating revenues from advertising, portals have effectively been the
only players in a position to justify such a free service financially: it acts as a
draw and as an instrument for winning customer loyalty to the portal brand,
and is paid for by other services. Pay services (SMS, voice and a few
avatars) remain marginal and are struggling to develop, due to the lack of a
suitable billing system and to the breadth of the free offering.
South Korea and the USA, even if those countries are lagging behind on
mobile.
The South Korean and North American operators have tried to offer a
service making it possible to access the main fixed IM services, notably
those on AOL's and NateOn's portals. The majority offer the service without
a subscription, invoiced on the basis of SMS (pay-as-you-go or in packs).
The results are more than encouraging, notably for Danger and
Verizon Wireless, which had already reached 20 billion messages per month
in 2004.
Most European operators, on the other hand, are looking to develop
proprietary solutions that are interoperable between mobile operators, based
on the IMPS standard created especially for them by mobile equipment
manufacturers (Ericsson, Nokia etc.). The results are rather disappointing:
users effectively want to communicate with their existing fixed IM contacts.
Some operators (Vodafone, KPN etc.) have consequently decided recently
to change their positioning and to enable access to the services of MSN, the
European market leader. The services are offered on a subscription basis,
with incoming traffic deducted from the flat-rate package as with other
mobile services.
Interoperability between mobile IM services is consequently not a priority,
although the service is chargeable on mobiles. For now, the point is being
able to chat with fixed communities, which are already highly developed.
Interoperability between mobile IM and fixed IM could accelerate the
development of an almost generalized interoperability between the major
portals (AOL, MSN, Yahoo!, and even QQ, NateOn) without imposing
widespread interoperability.
substitution pure and simple. However, even in the European model, mobile
IM should be able to compensate for losses in SMS revenues thanks to
flat-rate models.
Mobile IM is a big plus in operators' data strategies. It can effectively
increase data traffic and facilitate the migration to unlimited data flat-rate
packages that imply greater usage. It can also enable operators to
differentiate themselves among young users, notably for MVNOs.
Mobile IM can also make it possible to reproduce the fixed multimedia
hub on mobiles, offering easier access to certain existing services (voice,
push-to-talk, video conferencing, SMS, photo sharing) or to new services,
notably location-based ones. Mobile IM will not necessarily be a killer
application, but it could be a killer hub if it is more clearly promoted.
[Figure: Positioning of the various players involved in the instant messaging market. Players are mapped by geographical presence of their IM offering (global, pan-European, national) and by position in the value chain (equipment manufacturer, software vendor, portal, operator), covering, among others, IBM, Microsoft (MSN, LCS, Windows, Office), Novell, Oracle, Sun, Jabber, AOL (AIM and ICQ), Yahoo!, Google, Apple, Skype, Reuters, RIM, Nokia, Ericsson, Siemens, Alcatel, Comverse, Colibria, Oz, Danger, Vodafone, Orange, T-Online, Tiscali, Wanadoo/Voila, KPN, Movistar, BT, China Telecom, SBC, Cingular, Verizon, Sprint, KTF, Earthlink, NateOn, Daum, Shanda and Skyrock, as well as multi-protocol clients (Trillian, Gaim, Odigo, IMLogic); VoIP initiatives, U.S. alliances and the special features of Asia are also flagged. Source: IDATE]
Use Logics
Mobile CE
The nomadic era (*)
Laurent MICHAUD
IDATE, Montpellier
A nascent market
Mobile CE involves most content, and most areas of entertainment,
business and communication. This content can be accessed using a host of
(*) This IDATE market report provides an analysis of the emerging trends in the world of mobile
Consumer Electronics (CE), now having to contend with increasing competition from the
telecom and computing industries.
is now becoming a major selling point for mobile CE. The devices seek to be
appealing and trendy, and to boost their owner's image. They will be the best
way for service providers (mobile operators, mobile TV, music, etc.) to attract
customers and win their loyalty, while also embodying their offer.
Multiservice kiosks
The devices' network capacities are exploited primarily by telecom
operators which, in addition to telephony, are offering a body of information,
communication and entertainment services: SMS, MMS, push mail, ringtone
and music extract downloads, games, access to news, etc. Here, telcos are
becoming service providers by distributing content. In future, they will be
offering access to TV programmes and possibly to video content and films
for download.
Multifunctional
They are multifunctional and multimedia. They are a platform for storing
content, playing it, recording it, viewing it, distributing it, and for
communicating. They manage audio, video, voice, pictures and text, as well
as business applications.
Interoperable
They are interoperable with other fixed and mobile devices. Mobile CE
devices now communicate essentially amongst themselves and with PCs,
with computers being the main source of content thanks to the internet. But
mobile CE devices could well become autonomous in the area of content
supply, taking the same direction as mobile telephony.
Standardised
The interoperability of mobile CE requires not only standardisation of
content formats, but also standardised communication tools, operating
systems, API, players, etc. It also implies the integration of technical rights
protection and management mechanisms, without which content publishers
may well be reluctant to allow distribution of their products.
Portable media library
Mobile CE devices are equipped with storage capacities that now allow
them to hold the equivalent of between 1,000 and 2,000 songs, depending
on the level of compression used, or the equivalent of 60 movies in DivX
format. In future, they will be capable of storing tens of thousands of songs
and several hundred films.
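The song-count range quoted above follows directly from capacity and compression. A rough sketch of the arithmetic (the capacity and per-track file sizes are illustrative assumptions, not IDATE figures):

```python
# Rough capacity arithmetic behind the "1,000 to 2,000 songs" claim.
# Assumed averages (illustrative): a typical track is ~8 MB at a high
# bitrate and ~4 MB at a lower one.

def tracks_per_device(capacity_gb: float, track_mb: float) -> int:
    """How many tracks of a given average size fit on the device."""
    return int(capacity_gb * 1000 // track_mb)  # using 1 GB = 1000 MB

capacity_gb = 8.0  # a plausible mid-2000s player (hypothetical)

high_quality = tracks_per_device(capacity_gb, track_mb=8.0)
low_quality = tracks_per_device(capacity_gb, track_mb=4.0)
print(high_quality, low_quality)  # 1000 2000
```

Halving the bitrate doubles the song count, which is exactly why the article quotes a range "depending on the level of compression used" rather than a single figure.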
[Table: Functions offered by mobile CE device types. Rows: video games, audio player, video player, telephony, radio, diary & business applications, TV, community tools (messaging), photography, video camera, GPS. Columns: video game console, MP3 player, video player, mobile phone, mobile TV, PDA, each function rated on a four-point scale per device. Source: IDATE]
Book Review
Convergence
and
European
Telecommunications
by Zdenek HRUBY
The book addresses development and change in telecommunications,
liberalisation and regulation over the course of the last two decades.
Changes in regulation, technology, the economy, globalisation and
competition are the keywords, while the main focus of the book is
European. Its authors analyse facts and ideas in detail and provide a good,
comprehensive guide to the topic to date. Special attention is given to the
interrelation between the effects of globalisation and the European and
national regulatory and business responses. The review of liberalisation and
re-regulation is transparent, providing readers with full orientation. The
analysis and explanation of technological change, globalisation, institutional
structures, European integration, and of ideas and their implementation in
practical regulatory reform are particularly interesting. Although the common
findings are European, with a key role played by EU regulatory policy,
legislation, responsible bodies and institutions, the authors see diversity
among EU member states. The features of a regulatory state at both the
European and the national level are identified. Different perspectives are
evoked, including Stigler's and Peltzman's interest-group concept.
The respective chapters of the book cover the following: the status quo
ante and an explanation of the set of variables that produced a paradigmatic
policy change towards liberalisation, with remaining state intervention; the
emergence of a telecommunications policy at EU level, culminating in the
1998 package of directives; the transposition and implementation of EU
policies by member states and the emergence of the "regulatory state"; and
the Electronic Communications Regulatory Framework for services, with the
infrastructure and content points of view also analysed. The EU's position,
role and influence on policies and processes driven by organisations such
as the WTO and the ITU are also spotlighted. Finally, the relationship
between the EU and regulation at the national level is considered, as well as
further convergence and the increasing role of economic goals vis-à-vis
public service provision.
COMMUNICATIONS & STRATEGIES, no. 61, 1st quarter 2006, p. 185.
This book is written from a political science point of view; it is neither a
technical economic publication nor an engineer's technological study. It
should consequently prove transparent, usable and valuable to a wide
variety of readers, ranging from academics to industry professionals and
officials.
Byung-Keun KIM
Internationalizing the Internet
The co-evolution of Influence and Technology
Edward Elgar publishing, Coll. New horizons in the economics of innovation, 2005,
300 pages
by Bruno LANVIN
Based on research carried out at SPRU (Sussex University), Professor
Byung-Keun Kim offers a refreshing and thought-provoking perspective on
the genesis, evolution and internationalization of the internet.
An original perspective with a pessimistic conclusion
The author's main concern is to explore how the interaction of various social
groups and countries has shaped the internet as we know it today. In this
venture, the internet is not considered a given, but the concretization
of one among many possible futures: starting with the observation that
studies on the genesis and history of the internet have essentially focused
on its institutional U.S. background (DARPA, NSF, etc.), the author calls
attention to the international dimensions of internet growth and asks what
made it a global infrastructure. Is the internet reducing or compounding
existing technological and economic divides across the planet?
Pointing to the limitations of traditional analyses (techno-economic and
socio-technological), Professor Kim covers new ground by focusing on
governance issues, and raises interesting historical perspectives on
competition/cooperation between the U.S. and Europe with regard to the
architecture, management and politics of the internet. A highly compelling
part of the book includes a consideration of 'population dynamics', and of the
effects that demography could have on the future shape of internet usage.
Although Professor Kim's reflections on the use and diversity of languages
on the internet are predictable, the picture nevertheless remains striking.
The author's conclusion is rather pessimistic, since he sees the internet as a
factor of additional divides between countries, rather than a contributor to
more equality and equity at the global level.
by James ALLEMAN
If you had not read about the bankruptcy of Enron, you could mistake The
Smartest Guys in the Room for a novel, certainly implausible though
entertaining. But, incredibly, it was true. Arrogance, avarice, gluttony, and
even sex were all part of this "novel." McLean and Elkind offer a picture of
corporate hubris and greed that gives the reader pause for thought about the
social control of large corporations and the infrastructure that supports them.
We will concentrate on two areas: how telecommunications played a role in
the collapse of Enron and the failure of corporate governance in the Enron
case (and its lessons for policy makers).
The authors, Fortune reporters who were among the first to
detect problems at Enron, offer a disturbing picture of the corporation.
The company's chairman, Ken Lay, seems to worry more about the interior
decor of the corporate jet than the health of the corporation. Its greedy CFO,
Andrew Fastow, was out for himself, and its board members remained
oblivious to the conflicts to which he alerted them. Fastow raised billions
through debt and equity vehicles, yet his office did not know when this debt
matured. It did not have any concept of the magnitude or maturity of the
debt! CEO Jeff Skilling was apparently more concerned about making the
quarterly "number" for Wall Street than ensuring the corporation's viability
and, hence, did not probe the financial stability, legality, or propriety of
the financial vehicles Fastow designed.
These are only the major characters in a cast of players who, in one way or
another, contributed to the fall of Enron. But it was not just people inside Enron
who were culpable. The accountants, auditors and investment bankers all
"strutted and fretted upon" Enron's stage, which in itself signifies failure.
How could a corporation that was considered one of the top U.S. companies,
recognized as such by well-regarded business monthlies such as Fortune
and used as a case study at leading business schools, fail so dramatically
and so fast?
Clearly, I cannot give away the ending: the company went bankrupt,
leaving thousands of employees without jobs or pensions. One of the top
accounting firms, Arthur Andersen, was destroyed. Executives have been and
are being prosecuted; people have already gone to jail, and more will. Yet to
me, the phenomena that are of interest are the magnitude of the fraud and
how long it continued. How could banks support this level of debt? Why was
the fraud not detected sooner? What could prevent such an occurrence
again? The financial checks and balances (credit agencies, government
regulators, Wall Street analysts, bankers and lawyers) failed, creating
uncertainty and a loss of confidence in the financial markets. Not only do the
authors shed light on these issues, they do it in an entertaining way.
Where does information and communication technology (ICT) fit into Enron?
The company began as an oil and gas enterprise, an energy company, but
transformed its public persona into a company engaged in ICT and trading.
ICT was part of the hype, as was trading, since these businesses were the
"darlings" of Wall Street. They were the growth industries, not the staid, old
energy business. When Skilling realized what was happening to internet
stocks, he began to promote and fund Enron Broadband, a small part of an
Oregon utility Enron had purchased. Since the company was already
(successfully?) trading energy, it was an easy step to consider broadband
trading, or so it seemed. The requisite skills were already in-house. The
value of the company would increase substantially, according to Skilling's
calculation whereby USD 1 billion in investment would add USD 20 billion to
market capitalization (p. 185). However, broadband trading was not
equivalent to energy trading and the venture failed. In the end, management
oversight was lacking; employees overspent, over-acquired, under-managed and
understood little of the business.
When the company made a significant gain from a pre-IPO purchase of
Rhythms NetConnections (USD 300 million on a USD 10 million investment),
Fastow offered to set up one of his infamous Special Purpose Entities (SPEs)
to hedge the gain, since Enron could not sell the stock immediately. But, as
with the other SPEs, this was a sham. Ultimately, the
hedge was supported by Enron stock, and thus was not a hedge at all,
although it did provide Fastow and his partners with over a quarter million
dollars in compensation for the "risks" they took. It was all part of the hubris;
so long as Enron stock did not fall, the SPEs worked. However, gravity
applies even to the stock market when the weight of debt becomes too great
(and is revealed).
What are the lessons to be learnt from this experience? This is a difficult
question. Analysts are supposed to thoroughly understand corporations; the
auditors' job is to uncover and report deceptions and weaknesses in company
accounts; credit rating agencies are supposed to probe the depths of the
financials to understand the exposure of bondholders; and the Securities and
Exchange Commission (SEC) is supposed to be the guardian of the veracity
of corporate financials. All failed. Yet many public signs of malfeasance
were available. Journalists, in addition to the authors of The Smartest
Guys, wrote articles skeptical of Enron's practices as early as 1993. Short
sellers were also aware of problems with the company. So the lesson I draw
from all of this is that there was a systemic failure, one which needs serious
scrutiny, analysis and remedial legislation. The Sarbanes-Oxley legislation,
enacted in the United States in the wake of the Enron and MCI WorldCom
bankruptcies, was passed in haste, more to reassure the financial
community that Congress could fix what was clearly broken than to provide a
long-term solution. While the book points to the fractures in the system, it
does not offer any clear remedies, nor does Sarbanes-Oxley. Much more
analysis needs to be completed to avoid future disasters like the failures of
Enron and MCI WorldCom.
One of the more useful aspects of the book is the list of players and a short
note on their role in this drama. I found it a useful reference while reading
the book. While The Smartest Guys in the Room is entertaining and
readable, it does have a few minor flaws. There is some repetition in the four
hundred plus pages and the index was not updated for the paperback
edition, but this should not deter anyone from reading the book. It certainly
offers an instructive guide to anyone interested in seeing what can go wrong
in the corporate environment. Although one may not learn what remedies to
apply, the issues are clearly identified. The next step is to analyze the
complexity of these problems and find remedies.
by Petros KAVASSALIS
In the era of communications policies that promote competition, the
regulation of network-based industries has in recent years focused
increasingly on the correction of undesirable market outcomes. In this
context, independent (sector-specific) national regulatory authorities (NRAs)
in most developed countries are responsible for closely monitoring the
competition process, with the objective of sustaining a "sufficient" level of
competition and of imposing remedies whenever they are needed 1. To
achieve this goal, NRAs, in accordance with EU competition law, have at
their disposal a variety of regulatory instruments ranging from simple persuasion
to enforcement and license adaptation. More recently, this regulatory arsenal
has been complemented by a new framework that aims to impose remedies on
industry players with significant market power (SMP). The book by P. Valcke
et al. offers a detailed explanation of the properties of this new EU regulatory
framework (2003) and examines how the latter currently applies in the
European mobile sector. With this book, the authors make a valuable
contribution to the regulation policy literature and raise issues of great
importance for the SMP management process.
The term SMP defines a situation whereby a company enjoys a position
equivalent to dominance, that is to say a position "of economic strength
affording it the power to behave to an appreciable extent independently of its
competitors, customers and ultimately consumers" 2. In economic terms, this
essentially signifies the power to raise prices above the competitive level.
The authors of the book carefully draw the distinction between SMP and
competition policy: "In competition policy, the triggering factor is to be found
in specific conduct (abuse) or a change in market structure (merger). In the
new regulatory framework, this triggering factor comes at the 'market
selection' stage, where some relevant markets are singled out for the SMP
procedure." Obviously, in assessing whether a company possesses SMP,
the definition of the "relevant market" (which includes the products or
1 For an overview of regulatory conditions in the post-liberalization era with reference to mobile
communications markets, see: J. UBACHT, 2004, Regulatory Practice in Telecommunications
Markets: How to Grasp the 'Real World of Public Action', C&S, no. 54, pp. 219-242.
2 Article 14 of the Framework Directive, O.J. 24.04.2002, L 108/33.
The authors
David EVANS is an authority on the economics of high-technology and platform-based businesses, primarily as they relate to competition policy and intellectual
property. He is the author of four books and over 70 articles in journals ranging from
the American Economic Review and Foreign Affairs to The Yale Journal on
Regulation. His many opinion pieces have appeared in newspapers worldwide
including the Washington Post, Wall Street Journal, Financial Times, Les Echos and
El Pais. A specialist on competition policy in the U.S. and European Union, he has
served as an expert and testified before courts, arbitrators, regulatory authorities and
legislatures in the U.S. and Europe. He has led economic analysis in several
important antitrust cases over the last 25 years including U.S. v AT&T. Most recently,
Dr. Evans led an international economic team on a landmark series of cases
involving a large global technology firm in the U.S. and Europe. Since September
2004 he has been visiting professor of competition law and economics at University
College, London. He has a Ph.D. from the University of Chicago. He is also co-author
of Paying with Plastic (MIT Press, 2005), which is considered by many as "the
definitive account of the trillion-dollar payment card industry".
Petros KAVASSALIS is one of the founders of the MIT Internet Telecommunications Convergence Consortium (MIT-ITC). Currently, he is the Director of the
ATLANTIS Group at the University of Crete. Petros Kavassalis holds a degree in Civil
Engineering from the National Technical University of Athens and a Ph.D. from
Dauphine University in Paris (Economics and Management). In the past, he worked
as a Researcher at the Ecole polytechnique, Paris (Centre de Recherche en Gestion,
Laboratoire d'Econometrie) and at MIT (Research Programme on Communications
Policy at CTPID). His interests focus on the fields of information management & e-business, information economies, the emergence of organizational patterns on
the Web, and mobile applications and services.
http://atlantis.uoc.gr
AeRee KIM is a PhD student at the Graduate School of Global Information and
Telecommunication Studies at Waseda University, Tokyo, Japan. She holds a
Masters in Global Information and Telecommunication from Waseda University.
AeRee Kim has been a guest research officer at NTT Cyber Communication
Laboratory Group since 2005. In recent years, she has received various scholarships,
including a fellowship from the Rotary Foundation. Her research interests mainly
focus on economic analysis of information and telecommunications and the effect of
information on society.
A World Bank senior advisor on e-strategies, Bruno LANVIN has occupied senior
positions at the World Bank (Manager of the Information for Development Program
(infoDev), and Executive Secretary of the G-8 DOT Force), and in the United Nations
include: Économie de la presse, with Patrick Le Floch (2nd Ed. La Découverte, coll.
Repères, 2005); L'industrie des médias, with Jean Gabszewicz (Ed. La Découverte,
coll. Repères, 2006); and several articles in refereed journals.
ITS News
Foremost, the 2006 biennial will be an important meeting place to discuss the
future of telecommunications and ICTs and their impact on our societies. The
conference will fulfil the basic mission of ITS, which is to provide a non-aligned
and non-profit forum where academic, private sector, and government
communities can meet to identify pressing new problems and issues, share
research results, and form new relationships and approaches to address
outstanding issues. ITS places particular emphasis on the interrelationships
among socioeconomic, technological, legal, regulatory, competitive,
organizational, policy, and ethical dimensions of the evolving applications,
services, technology, and infrastructure of the communications, computing,
Internet, information content, and related industries.
Likewise, ITS regional conferences are critical to the mission of ITS and
provide the same basic forum and values, although in a smaller format and
setting. ITS is pleased to announce that the 17th European Regional ITS
Conference will be convened in Amsterdam, The Netherlands, August 22-24,
2006. The Conference will focus on next generation telecommunications
infrastructure and services and will be hosted by the University of Amsterdam.
Co-chairs of the conference are Prof. N.A.N.M. van Eijk, University of
Amsterdam, Prof. Harry Bouwman, Delft University and Dr. Brigitte Preissl,
Deutsche Telekom.
ITS is grateful for the regional ITS conferences, and depends on the
commitment of regional organizers. Notable is the long and faithful tradition of
ITS European Regional conferences, with regional ITS events convened in 14
European countries since 1987. ITS Board member Prof. Juergen Mueller has
been the main organizer since the start, and he has with great skill put together
many memorable and fine ITS events. ITS is proud to have Dr. Brigitte
Preissl, also an ITS Board member, as a new main driver of ITS European Regional
Conferences. The recent ITS European Regional in Porto, 2005, showcased Dr.
Preissl's fine hand in putting together thematic sessions, engaged discussants
and quality papers. The Porto conference was also one of the more well-attended
European regional events, and had excellent local arrangements due
to the efforts of ITS Board member Prof. João Confraria da Silva. Furthermore,
ITS Board member Prof. Gary Madden has convened two Asia-Australian
Regional ITS conferences in Perth, the most recent one also in 2005. Both of
these events have been highly acclaimed and well-attended, and have generated
several books and journal issues, testifying to the high academic and professional
qualities of the local organizers at Curtin Business School, Perth. For some
years now, ITS has also supported workshops at the IDATE International
Conferences, events for which a special issue of the ITS journal Communications
& Strategies has been published, with many contributions from ITS members.
ITS participates in jointly organized conferences as well. The most recent
example involves ITS joining with Medetel's annual e-health conference, to be
held 5-7 April in Luxembourg. ITS shares the conference with a large
number of other major international organizations including the
International Telecommunication Union (ITU), the International Society for
Erik Bohlin
ITS Chair
DIGIWORLD2006
The digital world's challenges
Drawing on IDATE data and the analyses supplied by its experts, this sixth
edition of the DigiWorld report provides an up-to-date and structured view of the
challenges facing our digital world.
Information
www.idate.org
Contact
Sophie MONJO - +33 (0)4 67 14 44 56 - s.monjo@idate.org