by
Sergey L. Sundukovskiy
Doctor of Philosophy
Capella University
August, 2009
UMI Number: 3369490
Copyright 2009 by
Sundukovskiy, Sergey L.
© Sergey L. Sundukovskiy, 2009
Abstract
Since its emergence, the Internet and Internet-related technologies have permeated almost
all aspects of modern life. The impact of the Internet on the day-to-day activities of its users
has been quite dramatic, but its effect on the business community has been even more
profound. Besides yielding additional opportunities for existing businesses, the Internet has
facilitated a new era of companies that have the Internet at the center of their business model,
businesses that simply would not be able to exist without it. Such Internet-centric and
Internet-enabled companies rely on interactive marketing as a source of differentiation and
competitive advantage. Despite its obvious importance to many businesses, improvement in the
efficiency of interactive marketing has largely stagnated or at best proceeds at a slow pace.
This study examined interactive marketing through the prism of active experimentation.
Active experimentation has been utilized as a product strategy in numerous business fields, but it
has been largely ignored in interactive marketing. This study examined a number of exogenous
and endogenous variables pertaining to interactive marketing and their impact on interactive
marketing goals.
Dedication
I dedicate this work to my wife Galina for her commitment, dedication and unconditional
love. Also, to my children: Aaron, Rebekah and Miriam, for giving meaning to my life. Your love
keeps encouraging me to see physical, emotional and intellectual limits as imaginary lines that are
meant to be crossed.
Last, but by no means least, to my brother Aleksey, for seeing me the way I wish I could
truly be.
Acknowledgements
experimentation and helping me generate the ideas that made this dissertation possible. Thanks
for encouraging me, listening to the things I say and hearing the things I do not say.
Table of Contents
Qualitative Data Display for Describing the Phenomenon
Qualitative Data Display for Explaining the Phenomenon
CHAPTER 4: DATA COLLECTION AND ANALYSIS
Overall Response Analysis
Research Objective One
Research Objective Two
Research Objective Three
Research Objective Four
Research Objective Five
Research Objective Six
Validity and Bias
CHAPTER 5: RESULTS, CONCLUSIONS, AND RECOMMENDATIONS
Results
Conclusions
Recommendations
Limitations of the Study
REFERENCES
APPENDIX A. SURVEY TEMPLATE
APPENDIX B. CONCEPTUAL FRAMEWORK
APPENDIX C. INTERACTIVE MARKETING
APPENDIX D. MARKETING
APPENDIX E. EXPERIMENTATION CYCLE
APPENDIX F. DATA ANALYSIS
List of Figures
Figure F28. Experimentation product deployment risk impact and experimentation experience cross-tabulation
Figure F29. Experimentation product deployment risk and job function breakdown
Figure F30. Experimentation competitive standing of the company impact breakdown
Figure F31. Experimentation competitive standing of the company impact and job function cross-tabulation
Figure F32. Experimentation competitive standing of the company impact and experimentation experience cross-tabulation
Figure F33. Experimentation revenue impact
Figure F34. Experimentation profit impact
Figure F35. Experimentation market size impact
Figure F36. Experimentation market penetration impact
Figure F37. Experimentation marketing goals impact horizontalization
Figure F38. Experimentation marketing levers impact horizontalization
Figure F39. Experimentation conditions impact horizontalization
Figure F40. Experimentation product lifecycle impact horizontalization
Figure F41. Experimentation product innovation impact horizontalization
Figure F42. Experimentation product improvement impact horizontalization
Figure F43. Experimentation product deployment risk impact horizontalization
Figure F44. Experimentation competitive standing impact horizontalization
CHAPTER 1: INTRODUCTION
Internet Marketing
The introduction of the World Wide Web, enabled by the Internet, saw explosive user
growth (Newman, 2001). Consequently, both online-based companies and traditional companies
with a significant Internet presence started to realize the full potential of the World Wide Web as
a marketing medium. Since the popularization of the Internet and Internet-related technologies,
several new marketing paradigms have emerged that take advantage of the Internet. Such
emergence was not planned and occurred spontaneously as an offshoot of traditional marketing
(Mark, 2003). Internet marketing was formulated as one form of non-traditional marketing. Due
to early disappointments and a lack of experience with the new non-traditional marketing
medium, Internet marketing was initially used to supplement traditional marketing efforts.
According to eMarketer Research (2007), US online marketing spending shrank by 15.8% in
2002. Companies did not see Internet marketing as a separate marketing channel with a unique
set of characteristics and considered it the same as radio and television (Raman, 1996). Internet
marketing was combined with traditional television and radio campaigns because, at first, it was
not considered capable of functioning as an independent marketing channel.
However, over time marketers began to realize that traditional marketing campaigns
consisting of television and radio advertisements and print media designed to attract users to
their websites had limited success, whereas the Internet itself proved quite promising in that
regard. According to eMarketer Research (2007), online advertising spending in 2007 had
reached $19.5 billion and was projected to reach $36.5 billion in 2011. Companies had realized
that Internet marketing was not only a strong and independent marketing medium, but also had a
number of advantages over traditional forms of marketing, such as television, radio and print
media (Mark, 2003). In order to earn its independence and be considered comparable to one of
the traditional dominant marketing channels, Internet marketing had to be evaluated against the
criteria of marketing science. Internet marketing showed its potential for consumer accessibility,
defining corporate identity, promoting brand awareness, and enabling market segmentation.
Capabilities such as personalization, interactivity and traceability were counted among its
advantages over traditional dominant medium channels such as television, radio and print media.
The Internet allowed synchronous and asynchronous user behavior analysis, which in turn
allowed for previously unavailable personalization capabilities (Milley, 2000). It had the
potential to reach a vast number of World Wide Web users that otherwise would be either
difficult to reach, or in some cases completely unreachable. According to Internet World Stats
(2007), by the end of 2007 the Internet had reached 1,319,872,109 users, with the largest
percentage growth accounted for by countries in the Middle East, Africa and Latin America.
Since its inception, the Internet has developed into one of the strongest and most mature
marketing channels, segmented into several sub-channels. At the present time, Internet
marketing serves as an umbrella for a set of diverse marketing sub-channels. The following
distinct Internet marketing sub-channels have now emerged: (a) search engine marketing (SEM);
(b) e-mail marketing; (c) affiliate marketing; (d) display marketing; and (e) social media
marketing (see Appendix C). These Internet marketing approaches, founded upon sociological,
economic and cultural marketing aspects, are now interwoven into the Internet marketing fabric
as a cohesive entity. At the same time, each of these Internet marketing sub-channels has a set of
unique characteristics based on its execution and marketing strategies. These characteristics
distinguish the sub-channels from one another.
Interactive Marketing
The affiliate marketing, search engine marketing and e-mail marketing sub-channels have
another aspect in common: their interactivity. Each of these sub-channels relies on the user to
make an interactive step directly related to the advertisement stimuli. As such, these marketing
sub-channels can be placed under another umbrella called “Interactive Marketing”. In its purest
form, Deighton (1996) defines “Interactive Marketing” as the ability of a computer-based
system to interact with the user for marketing purposes. In many cases, however, interactive
marketing consists of a multi-transactional interaction that is based on user purchase and visit
history, declared or implied preferences, and demographic information such as age and sex.
Most of the interactive marketing accomplished through the World Wide Web is enabled
by Internet technologies, yet Internet marketing is not synonymous with interactive marketing
(Deighton, 1996). Based on the Venn diagram (see Appendix D), not all Internet marketing is
interactive and not all interactive marketing is conducted on the World Wide Web or is Internet
based. For instance, display marketing, one of the Internet marketing sub-channels, is not
interactive in nature. In the same vein, iPods, mobile devices, gaming devices, interactive screen
devices and e-books are capable of carrying a marketing message by means of wireless or
hard-connected communication. Each of these devices has its own marketing specifics related to
its size and interactivity, tracking power, and display and communication capabilities.
However, based on the Venn diagram (see Appendix D), it is evident that a great amount
of overlap exists between Internet and interactive marketing. Chulho (1998) defined the
“Interactive Age” as an age of full interactivity between humans and machines and among
technology-connected humans. Similar parallels can be drawn between the “Information Age”
and the “Participation Age”, a term coined by Schwartz (2005). Blogging and social networking
are two recent examples of the Interaction or Participation Age that are utilized by social media
marketing.
At the heart of interactive marketing are the two main concepts of “Personalization” and
“Customer Engagement”. Interactive marketing itself can be defined in terms of personalized
customer engagement by means of the push or pull marketing model. According to Haag (2006),
“Personalization” is a process of generating a unique experience for an ever finer-grained
segment of customers. Taken to the logical extreme, the process of personalization is a process
of targeting a single individual with stimuli uniquely tailored to that individual and that
individual alone. The logical conclusion of these definitions is the hypothesis that the most
effective customer engagement is achieved through personalization by means of data mining and
information analysis. The data mining and analysis constitutes the bulk of the personalization
effort in both temporal terms and relative importance. The personalization process consists of
three steps: (a) data gathering; (b) data analysis; and (c) personalized content generation. According to
comScore (2007), the data gathering portion of the personalization process undertaken by major
interactive marketing companies such as Yahoo, Google and Facebook has reached a significant,
previously unseen level. According to comScore (2007), in a single month in 2007 Yahoo on
average collected data 2,520 times for every visitor to its site. Mark (2003) posited that Internet
marketing companies collect as much data as possible even without a specific goal in mind.
Since it is not always clear which customer data will become useful in the future, all available
data is collected (Mark, 2003). It usually consists of webpage visits, visit history, self-declared or
inferred financial information, geographic information, and age and sex information (Mark,
2003). The collected data is carefully cataloged and analyzed. The purpose of this data analysis
is to determine the most appropriate marketing strategy for a particular user and other users that
fit similar characteristics. Milley (2000) describes this approach as a “Data Empowered
Marketing Strategy”.
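The three-step personalization process described above can be sketched in miniature. The following Python fragment is purely illustrative: the event log, the function names and the recommendation text are invented for the example and are not taken from the study.

```python
from collections import Counter

# Hypothetical event log: each entry is (visitor_id, page_category).
EVENTS = [
    ("u1", "sports"), ("u1", "sports"), ("u1", "news"),
    ("u2", "finance"), ("u2", "finance"),
]

def gather(events, visitor_id):
    """Step (a): collect the page-visit history for one visitor."""
    return [category for vid, category in events if vid == visitor_id]

def analyze(history):
    """Step (b): infer the visitor's dominant interest from the history."""
    counts = Counter(history)
    return counts.most_common(1)[0][0] if counts else None

def personalize(interest):
    """Step (c): generate content targeted at the inferred interest."""
    if interest is None:
        return "Welcome!"
    return f"Recommended for you: latest {interest} offers"

for uid in ("u1", "u2"):
    print(uid, "->", personalize(analyze(gather(EVENTS, uid))))
```

In a production system the analysis step would of course be a statistical model rather than a frequency count, but the gather/analyze/generate pipeline is the same.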
The intention of any marketing strategy in general, and interactive Internet marketing in
particular, is to achieve a desired goal. In general, interactive Internet marketing tries to achieve
a goal of conversion, where the user is synchronously driven to a purchasing decision. According
to Macias (2000), even though conversion is certainly the dominant interactive marketing goal,
other goals may include customer retention, customer acquisition, lifetime value maximization
and product up-sell. Each of these goals can be considered a question that interactive marketing
tries to answer. In particular, interactive marketing companies seek to provide answers to the
following questions: (a) what is the most effective customer retention strategy? (b) what is the
most effective way to drive the customer to a purchasing decision? (c) what is the most effective
customer acquisition strategy? and (d) what is the most effective product up-sell strategy?
By analyzing collected user data, interactive marketing researchers try to answer these
questions in a predictive manner. The predictive data analysis, which is at the core of the data
empowered marketing strategy, is accomplished through diverse data modeling techniques. Dou
(1999) described the use of Catastrophe Theory to model online store sales. Other techniques
include the Recency-Frequency-Monetary Value (RFM) model, designed to assess the customer
purchasing decision. However, these and other models have several fundamental problems.
First, these models are predictive and as such carry a fair amount of statistical error due to the
need to make assumptions and use guesswork. Consequently, these models are subject to
Simpson’s Paradox, where a lack of understanding of data granularity can lead to incorrect
conclusions. Second, the Internet is an open system where not all of the variables and their
effects are clearly defined (Dou, 1999). As such, confounding variables become a real issue.
Third, the feedback loop between data collection, data analysis and interactive marketing
execution is long and indirect. These drawbacks make the data empowered marketing strategy
fundamentally unsuitable for the set of marketing tasks related to product development and risk
mitigation. The data empowered marketing strategy absolutely requires extensive amounts of
data in order to come up with a predictive marketing model. However, at the stage of new
product development, such data does not exist or is very limited. As such, predictive modeling
marketing strategies are by definition not suitable for new product development. In the past,
marketers had to rely on focus groups to gain some guidance on new and existing product
development (Mark, 2003). However, the focus group method has proven to be extremely error
prone due to the Survey Paradox, where people say one thing and do another. According to
Mark (2003), in many cases companies would also fail to generate a focus group of the required
diversity. As such, it was not uncommon to see focus group results being skewed toward a
particular group of users.
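Simpson’s Paradox is easy to reproduce with a few lines of code. The conversion counts below are invented purely for illustration: marketing flow A converts better than flow B within every traffic segment, yet B appears to win once the segments are pooled, because the two flows received very different segment mixes.

```python
# Invented counts (conversions, visitors). Flow A beats flow B within each
# traffic segment, but loses in the aggregate: Simpson's Paradox.
data = {
    "new":       {"A": (81, 87),   "B": (234, 270)},
    "returning": {"A": (192, 263), "B": (55, 80)},
}

def rate(conversions, visitors):
    """Conversion rate for a (conversions, visitors) pair."""
    return conversions / visitors

for segment, flows in data.items():
    a, b = rate(*flows["A"]), rate(*flows["B"])
    print(f"{segment:9s}: A={a:.1%}  B={b:.1%}  -> A wins: {a > b}")

# Pool the counts across segments for each flow.
totals = {
    f: tuple(map(sum, zip(*(flows[f] for flows in data.values()))))
    for f in "AB"
}
a, b = rate(*totals["A"]), rate(*totals["B"])
print(f"aggregate: A={a:.1%}  B={b:.1%}  -> A wins: {a > b}")
```

A model that only sees the aggregate row would pick flow B, while a segment-level analysis picks flow A, which is exactly the granularity trap described above.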
Traditionally, product ideas were generated by company managers and corporate executives.
Kohavi, Longbotham, Henne and Sommerfield (2007) described this model as the “Highest Paid
Person’s Opinion” (HiPPO), where the most senior member of the team comes up with the
product idea or is responsible for choosing among multiple product options. When the product
idea is selected, the resultant product is manufactured in a monolithic fashion and subsequently
demonstrated to the focus group. One possible name for the risk profile in this type of product
development is “Risk Backloading”, where the bulk of the risk is borne at the end of the product
development lifecycle. Consequently, HiPPO-initiated and focus-group-validated product
development is subject to Risk Backloading.
The main goal of this study is to propose an alternative interactive Internet marketing
product development strategy that mitigates the shortcomings of predictive data modeling:
statistical uncertainty, a long data feedback loop and confounding variables. In addition, it
recommends an interactive Internet marketing product development approach that places
product initiative in the hands of the end-users and allows for a more advantageous risk profile
related to product innovation and product development. Even though this study treats Internet
and interactive marketing synonymously, it more specifically considers only the part of
interactive marketing that is enabled by World Wide Web and Internet technologies. The
findings of the study should, however, be applicable to the interactive devices mentioned above.
Product Development
As mentioned earlier, the use of experimentation to investigate a problem has been well
established in numerous business fields. Product development is in essence this type of problem,
and so the process of experimentation should be fully applicable to product development in
fields such as electronics. Furthermore, von Hippel (1998) posited that experimentation
facilitates product innovation and allows companies to get away from the HiPPO principle,
where product decisions are made by the people who are furthest removed from the product
itself. Product users themselves consciously or subconsciously galvanize and propel product
development forward. More importantly, they do so with their actions instead of their words,
and in this way they eliminate the Survey Paradox.
Thomke, von Hippel, and Franke (1997) argued that employing experimentation in product
development allows companies to improve their competitive standing. Since products are shaped
by the end-users directly, the chances of product failure are greatly decreased. The
experimentation process allows controlled failures early and often, when the cost of change is
minimal. By employing the experimentation approach, companies reverse the risk profile from
back-loading to front-loading. Depending on the nature of the product, companies may utilize
several experimentation modes: (a) simulation; (b) prototype; and (c) sample. Experimentation
modes depend on such factors as experiment result applicability and experiment cost. It would
seem logical that the process of experimentation would be pertinent to Internet marketing and
interactive marketing as well. However, few if any empirical studies have been conducted to
explore this matter. Researchers such as Li (2003), Raman (1996) and Mark (2003), among
others, have broached the subject from the predictive modeling point of view. Even when
experimentation is mentioned, it lacks continuity with interactive marketing itself. Since
empirical research on experimentation in the field of interactive marketing is sorely lacking, this
study draws on experimentation experience from other industries and product categories. Its
background is rooted in the fields of experimentation, product development, and
interactive marketing. The narrative of the study is centered on the product development
strategies in interactive Internet marketing. It is designed to address what can be called the
Information Age Product Development Paradox, where companies whose survival depends on
sophisticated technology nonetheless disregard technology-driven product development
strategies and instead favor low-tech solutions. The paradox consists of a seeming contradiction
between relying on the technologies as the source of survival, yet disregarding them when it
comes to product development. With regard to product development, companies involved in
Internet and interactive marketing tend to ignore product development strategies that have been
developed in other fields. As such, they face problems related to product risk, product innovation
and product improvement.
The main purpose of the study is to examine the applicability of experimentation as the
primary product development strategy in interactive marketing. The study examines the use of
experimentation in all parts of the interactive marketing product development lifecycle: (a) idea
generation; (b) idea screening; (c) concept development and testing; (d) business analysis;
(e) beta testing and market testing; (f) technical implementation; and (g) commercialization. At
the present time, the use of experimentation across this lifecycle is a practice that is quite
limited. Furthermore, this study looks at the unique exogenous and endogenous interactive
marketing factors and their effect on interactive marketing goals. The secondary purpose of the
study is to point out the gaps in current practice and propose improvements.
Research Questions
This study aims to answer six main research questions. What is the impact of
experimentation on interactive marketing goals? What are the key experimentation levers and
conditions? What is the impact of experimentation on the product lifecycle? What is the impact
of experimentation on product innovation and product improvement? What is the impact of
experimentation on product development and deployment risk? What is the impact of
experimentation on the competitive standing of interactive marketing companies?
Online conversion is one of the key goals of interactive marketing companies. Even
though conversion is not the only goal, the majority of interactive marketing companies are
striving to achieve higher conversion. Even though the conversion events of interactive
marketing companies are quite diverse, in essence a conversion event is equivalent to a revenue
generating event. It is paramount to determine whether the use of experimentation has a positive
effect on online conversion. This study examines a number of exogenous and endogenous
variables pertaining to interactive marketing and analyzes their impact on interactive marketing
goals. The aim of the study is not to define an exhaustive list of exogenous and endogenous
factors across all interactive marketing products and environments, but rather to determine an
effective subset of such factors.
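Determining whether experimentation improves conversion ultimately reduces to comparing conversion rates between competing variations. The sketch below applies a standard two-proportion z-test to hypothetical conversion counts for a baseline variation and a challenger; the counts, the function name and the 0.05 threshold are assumptions made for the example, not data from the study.

```python
import math

def z_test_two_proportions(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variation B's conversion rate
    significantly different from variation A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value via the normal CDF (math.erf is standard library).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts: baseline converts 200 of 5000 visits (4.0%),
# challenger converts 260 of 5000 (5.2%).
z, p = z_test_two_proportions(200, 5000, 260, 5000)
print(f"z = {z:.2f}, p = {p:.4f}")
if p < 0.05:
    print("The difference in conversion is statistically significant")
```

With these invented counts the challenger's lift is significant at the 0.05 level; with a smaller lift or fewer visits the same test would fail to reject the null hypothesis.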
A significant number of the research questions posed by this study are devoted to the
product aspects of experimentation. Specifically, the study aims to examine the impact of
experimentation on product innovation, product improvement, product development and product
deployment risk. Like the goals, the products of interactive marketing companies are complex
and diverse. However, it is important to understand the fundamental benefits offered by
experimentation as it relates to product development.
Lastly, this study examines the impact of experimentation on the competitive standing
of companies in the interactive marketing space. The aim of the study is to show that companies
in the interactive marketing space that abstain from experimentation in relation to product
development do so at their own peril, putting their competitive standing and quite possibly their
survival at risk.
Qualitative research is designed to achieve the best results through social interaction.
Despite the fact that the social interaction in this context occurs between man and machine, the
social context of the interaction is preserved. As such, the use of a qualitative design approach is
consistent with social constructivism and fully justified as the research approach for this
dissertation. It would have been reasonable to assume a quantitative approach, since
experimentation relies heavily on statistical analysis and statistical data modeling. However, the
topic of this dissertation and the state of the empirical research in the experimentation field of
interactive marketing are more conducive to qualitative analysis. This study is theory forming in
nature, and as such it is consistent with one of the strong fundamentals of qualitative analysis.
The experimentation approach can give companies involved in interactive marketing the
means to test marketing ideas without putting the majority of their traffic at risk. In the past,
marketing ideas have been tested using surveys and focus groups. However, this approach is
slow and inexact. It also gives rise to what can be called a “Survey Paradox”, where people say
one thing and do another (Sheffrin, 1996). The process of experimentation presents qualitative
and quantitative results and represents an innovative development in testing Internet marketing
and interactive marketing ideas. However, very little empirical research on the subject exists. As
such, companies involved in experimentation in the field of interactive and Internet marketing
are forced to develop a practical base for experimentation through trial and error. Such an
approach results in limited success and might lead to companies abandoning experimentation
altogether. In addition, companies involved in interactive marketing often limit their success
through poor alignment between the business and information technology functions. One of the
key implications of the study is the assertion that companies that are involved in interactive and
Internet marketing but operate outside of an experimentation framework increase their product
development risk and reduce their likelihood of success in achieving their marketing goals.
Another essential contribution of the study is an outline of the implementation process in the
context of the experimentation framework that allows for cohesion between departmental units
in order to optimize the marketing goals of companies involved in interactive and Internet
marketing.
Definition of Terms
Base Flow: the current web site execution path with the highest fitness value. All related
Flows are measured against the Base Flow.
Challenger Flow: a variation of the Base Flow designed to achieve a higher fitness value.
Confirmation Flow: the previous Base Flow, reintroduced to confirm its losing status
against the current Base Flow. The introduction of the Confirmation Flow can be classified as
confirmation testing, where the results of the previous Experiment are reconfirmed. The purpose
of confirmation testing is to make sure that the previous Base Flow lost to the current Base Flow
in a head-to-head comparison, and that the loss was not a result of statistical error or of the
impact of exogenous variables.
Experiment: manipulation of endogenous variables, in the context of the exogenous
variables, with the purpose of achieving Silo Goals. Several Flows are typically compared within
a single Experiment.
Flow: a single variation of the web site execution path. Typically an Experiment consists
of several Flows that manipulate a single variable for a side-by-side comparison.
Multivariate Experiment: an experiment that manipulates several endogenous
variables simultaneously.
Phantom Flow: a flow that does not cause a variation in behavior. It represents a virtual control.
Silo: an area under experimentation. A Silo usually consists of a collection of endogenous
variables that can be manipulated in a particular context. In typical interactive marketing, Silos
consist of landing pages, up-sell pages, offer pages, etc.
Silo Goal: the purpose of the experimentation. In the majority of interactive marketing
cases, the Silo Goal is represented by conversion, also known as a purchasing decision. Other
Silo Goals may include link-offs, up-sell actions, subscriptions, content contribution, etc. In
general terms, the goal of experimentation is to achieve the highest fitness value.
Singlevariate Experiment: an experiment that manipulates a single endogenous variable.
View: a single web page along the web site execution path. Typically a Flow consists of
several Views that contain one or more endogenous variables that are being manipulated.
Imposed Flow: a flow that allows predetermined execution of the web site path,
bypassing Traffic Distribution and analysis. The Imposed Flow is often utilized for regulatory or
testing purposes.
Silo Visit: represents a visit to one of the areas under experimentation, typically a visit to
a web page within a Silo.
Site Visit: represents a visit to the web site or any other Traffic Origin of the interactive
marketing company.
Traffic Origin: the source where web traffic originates. A Traffic Origin is often
represented by one of the interactive marketing channels, such as email, search, display,
affiliates, call center, etc.
Traffic Criteria: criteria that are intrinsic to the incoming web traffic, but extrinsic to the
Experiment itself. Traffic Criteria are a collection of exogenous variables.
Traffic Distribution: the allocation of web traffic between Experiments and Flows
according to the distribution algorithm. Traffic Distribution between Experiments and Flows
must add up to 100%; otherwise some of the incoming traffic will fail to be distributed.
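A minimal sketch of such a distribution algorithm is shown below, assuming hypothetical flow names and weights; the weights are validated to add up to 100% before any traffic is assigned.

```python
import random

# Hypothetical Traffic Distribution: weights must add up to 100%,
# otherwise some incoming traffic would fail to be distributed.
DISTRIBUTION = {"base_flow": 80, "challenger_flow": 15, "confirmation_flow": 5}

assert sum(DISTRIBUTION.values()) == 100, "weights must add up to 100%"

def assign_flow(rng=random):
    """Pick a Flow for one incoming visit according to the weights."""
    roll = rng.uniform(0, 100)
    cumulative = 0
    for flow, weight in DISTRIBUTION.items():
        cumulative += weight
        if roll < cumulative:
            return flow
    return flow  # guard against a floating-point roll of exactly 100

# Simulate 10,000 visits with a seeded generator for reproducibility.
counts = {flow: 0 for flow in DISTRIBUTION}
rng = random.Random(42)
for _ in range(10_000):
    counts[assign_flow(rng)] += 1
print(counts)  # counts are roughly proportional to the configured weights
```

Every visit is guaranteed an assignment precisely because the weights cover the whole 0–100 range, which is the point of the 100% constraint in the definition above.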
View Visit: represents a display of a web page or any other asset of interactive marketing.
Assumptions
This study relies on a number of domain assumptions. First, it assumes the synonymous
nature of the following terms: A/B Testing,
Split Testing, A/B/C Testing, Singlevariate Testing, Singlevariate Multi-Experiment Testing, and
Multivariate Testing. All of these terms describe the same phenomenon and have evolved under
a common umbrella (see Appendix C). It should be noted that the term “Multivariate Testing” is
related to these strategies, yet it is not strictly synonymous with them. However, for all intents
and purposes, the practice of multivariate testing, as well as all univariate testing techniques,
will be assumed to describe a common phenomenon. In cases where the study discusses the
intricacies of a particular experimentation approach, the name of the approach is noted
explicitly. In the same vein, the terms Internet marketing and interactive marketing are treated
synonymously and interchangeably. In cases where a distinction between these concepts needs
to be made, it is made explicitly.
This study also assumes that all participants of the study are not aware of their active
participation in the study. This assumption is essential for eliminating participant bias. It is also
assumed that all participants of the study have at least cursory knowledge of interactive and
Internet marketing.
Besides listed domain assumptions this study has a number of paradigmatic and
methodological assumptions. In some respects it follows the basic assumptions of the systems
approach. It assumes objective reality that is independent of the observer. Similar to the systems
approach the key aim of experimentation is to model the real world phenomenon. As such it
employs inductive and deductive reasoning as well as verification as the basis for its conclusions.
It assumes that the results of any given experiment may be a result of causal relationships and
According to Becker (1992), the researcher of the phenomenological study is viewed as a co-
creator of knowledge alongside the participants of the study. Even though such researcher
involvement is beneficial, in this particular case the discrepancy between the participant’s and the
researcher’s knowledge of the studied phenomenon is too great. As such, the level of threat posed
by researcher bias to the validity, generalization and repeatability of the study is too great. Consequently,
methodology that allowed the researcher to detach his experience from the experience of the participants of the study.
Organization of the Remainder of the Study
The remainder of the study consists of four additional chapters. Chapter Two examines
experimentation. The literature has been selected to describe each of the aspects of the study
product development concepts. Chapter Three describes the design methodology that was chosen
to conduct the study. It defends the choice of research methodology by examining the
methodological fit to the research approach and the subject of the study. It also describes
elements of the study including the data collection procedures, data sampling, data collection
instruments and data coding. Chapter Four illustrates the findings of the study and undertakes a
detailed analysis of the results of the study. It consists of data analysis and data display for
describing and explaining the phenomenon of the study. Chapter Five summarizes the findings of
the study, introduces an alternative hypothesis for the findings, discusses validity and bias,
and examines the trends in the findings. It also proposes topics for future research.
CHAPTER 2: LITERATURE REVIEW
Experimentation Introduction
Humans have utilized experimentation since ancient times, with one of the classic examples of early
experimentation originating from Egypt around 2613–2589 BC. When the Egyptians attempted to
build a smooth-sided pyramid, they engaged in the process of experimentation. They initially
started building a pyramid at Meidum, which collapsed due to the acuteness of its angle. Based on the
result of this failed experiment, the Egyptians altered the angle of the Bent Pyramid at Dahshur more
than half way through to save it from collapse, resulting in a bent shape. Subsequent smooth-
sided Egyptian pyramids have utilized the correct angle from the outset. By looking at the
experiments often fail (sometimes they are designed to fail); (b) full-scale experiments can be
quite expensive; (c) conducting experiments consecutively may take a long time; and (d)
them, this Egyptian experiment created the basis for the Design of Experiments theory. This
early experiment and subsequent application of the experiment results contained the majority of
short, Thomke (1997) defined experimentation as a trial-and-error process. According to Thomke
(1998), the experimentation process consists of four major steps: (1) design (design consists of
coming up with an improved solution based on the previous experience from the preceding
experiment cycles); (2) build (the build step consists of modeling and constructing products to be
experimented upon); (3) run (the run step of the experimentation process consists of executing
the experiment in the real or simulated environment); and (4) analyze (analysis consists of
mining and investigating the data collected during experiment execution). Thomke (1998) pointed
out that the experimentation process changes under pressure from exogenous elements. These
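The four-step cycle can be sketched as a simple loop; this is a schematic illustration only, with placeholder functions standing in for the design, build, run and analyze activities:

```python
def run_experiment_cycle(initial_design, build, run, analyze, cycles=3):
    """Iterate Thomke's design-build-run-analyze experimentation cycle.

    Each pass feeds the analyzed findings of the preceding cycle back
    into the design of the next one.
    """
    design = initial_design
    history = []
    for _ in range(cycles):
        artifact = build(design)     # build: model/construct the product to experiment on
        results = run(artifact)      # run: execute in a real or simulated environment
        findings = analyze(results)  # analyze: investigate the collected data
        history.append(findings)
        design = findings            # design: improve based on the preceding experience
    return history

# Toy illustration with numeric stand-ins for the four activities
history = run_experiment_cycle(0, lambda d: d + 1, lambda a: a * 2, lambda r: r)
```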
Experimentation Strategies
The process of conducting experimentation is not uniform. The choice of the experimentation
(Beerenwinkel, Pachter & Sturmfels, 2007). Quite often the solution to a particular problem is
not singular. A fitness landscape consists of all possible solutions to a particular problem. A fitness
function defines the quality of the solution in relation to other solutions to the same problem. The
optimal solution to the problem is thought to have the highest fitness value.
A real-world demonstration of the fitness landscape could be observed by looking at a
simple problem. For example, suppose the trip between home and the office can follow any of n routes. The set
of all routes constitutes a fitness landscape. Among all of the possible routes one of the routes is
the “best”, where best = f(n). Since best is a relative term, it must be defined in the context of the
fitness landscape. It is likely that the majority of people would consider the best route to
be the one with the shortest travel time. However, it is possible that some would choose a scenic route to be the
best. It is also possible to have multiple optimal solutions where the best route changes based on
exogenous elements such as time of day, day of the week and weather conditions. In that case,
the best = df(n) / dt, taking the time of the day into consideration.
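Under these definitions the route example can be written out directly; a sketch with made-up travel times, where fitness is defined as negated travel time so that the shortest route has the highest fitness value:

```python
# Hypothetical fitness landscape: every route between home and the
# office, with an assumed travel time in minutes for each.
routes = {"highway": 25, "downtown": 40, "scenic": 55}

def fitness(route):
    # Higher fitness means a better solution; with "best" defined as the
    # shortest time, fitness is the negated travel time.
    return -routes[route]

# The optimal solution is the point in the landscape with the highest fitness.
best_route = max(routes, key=fitness)
```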
Thomke (1998) defines three experimentation strategies: (a) parallel experimentation; (b)
serial experimentation with minimal learning; and (c) serial experimentation with learning. It is
possible to demonstrate these experimentation strategies by continuing with the “driving from
home to office” problem. Finding the route that yields the shortest driving time between the
home and the office in the quickest way would require employing a parallel experimentation
strategy. However, it is impossible to accomplish this using a single driver, since it requires
taking all possible routes simultaneously. The driver of the car would need help from his friends.
If the driver of the car was determined to find the shortest route himself/herself, he/she could
employ serial experimentation with minimal learning or experimentation with learning. The
serial experimentation with minimal learning would require the driver to follow a predefined
plan of taking different routes until all possible routes were exhausted. After all routes have
been tried, the driver would have to analyze the time each has taken to determine the one with the
shortest time. The serial experimentation with learning would allow the driver to avoid trying all
routes by analyzing results of initial experiments and eliminating routes that certainly would not
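A schematic sketch of the three strategies, continuing the route example (the travel times and the learning rule are illustrative assumptions):

```python
def parallel(candidate_routes, time_of):
    # Parallel experimentation: all routes are tried at once (one driver
    # per route) and the fastest is kept.
    return min(candidate_routes, key=time_of)

def serial_minimal_learning(candidate_routes, time_of):
    # Serial with minimal learning: follow a predefined plan until all
    # routes are exhausted, then compare the recorded times.
    recorded = {route: time_of(route) for route in candidate_routes}
    return min(recorded, key=recorded.get)

def serial_with_learning(candidate_routes, time_of, good_enough=30):
    # Serial with learning: analyze each result as it arrives and stop
    # once a route beats a target time, skipping the remaining trials.
    best, best_time = None, float("inf")
    for route in candidate_routes:
        elapsed = time_of(route)
        if elapsed < best_time:
            best, best_time = route, elapsed
        if best_time <= good_enough:
            break  # learned enough; later routes are never driven
    return best

# Assumed travel times in minutes for three candidate routes
times = {"downtown": 40, "highway": 25, "scenic": 55}
```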
Experimentation Models
The experiments are further complicated by endogenous and exogenous elements. It
is certainly possible to affect the experimental results by changing the endogenous elements of
the system (see Appendix E). For instance, besides the different routes, changing drivers, car
type, fuel type, and the number of drivers, among many other things, would possibly affect the
driving time. The experiments with multiple changing endogenous elements are called
multivariate. The experiments with a single endogenous variable are called univariate. It is
always possible to represent multivariate experiments as a series of univariate ones, by
temporarily freezing all but one variable. Ideally, in order to understand the impact of all
exogenous variables, each of them requires a separate experiment. In other words, in order to
understand the impact of the time of departure from the office on the overall travel time, the
driver needs to conduct experiments by leaving the office every hour on the hour. In the same
Based on the analysis of the experimental models it is quite obvious that the complexity
elements, such that Number of Experiments (E) = k * (m^n)!, where k is the number of steps
in the process, m is the number of endogenous variables and n is the number of exogenous
variables. Based on the formula above, it is possible to arrive at the conclusion that the chance of
guessing the option with the highest fitness value, when the number of endogenous and the
At the same time, running experiments on all combinations of endogenous and
exogenous factors in order to determine the best possible combination yielding the highest
fitness value would constitute a full factorial experiment (Xu & Wu, 2001). However, even with a
trivial number of exogenous and endogenous factors the number of resulting experiments that
satisfies full factorial design is truly staggering. By applying the formula listed above, a single-step
process with 3 endogenous factors and 3 exogenous factors would result in approximately
1.0888 × 10^28 experiments. For all intents and purposes, running full factorial experiments beyond a
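The quoted count can be checked numerically against the formula (k = 1 step, m = 3 endogenous and n = 3 exogenous factors):

```python
import math

def num_experiments(k, m, n):
    # E = k * (m^n)!, with k process steps, m endogenous and
    # n exogenous variables
    return k * math.factorial(m ** n)

# A single-step process with 3 endogenous and 3 exogenous factors:
# (3^3)! = 27!, roughly 1.0889e28, matching the order of magnitude
# quoted in the text.
E = num_experiments(1, 3, 3)
```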
experimentation it is necessary to look at the nature of exogenous and endogenous factors. Not
all of the variables involved in the experiment affect the outcome of the experiment equally.
Depending on the nature of the experiment it is possible to find a smaller subset of variables that
have the greatest effect on the experiment. Anderson (1972) called this approach to
experimentation partial factorial. There are a number of statistical methods of determining which
variables truly matter to the outcome of the experimentation. Li (2003) identified the following
partial factorial reduction models: (a) Univariate Poisson (relies on the analysis of the variable of
all of the involved variables); (b) Univariate Tobit without Log Transformation; (c) Univariate
Tobit with Log Transformation; (d) Discretized Univariate Tobit with Log Transformation; (e)
Discretized Univariate Tobit with Heterogeneity; (f) Multivariate Count; (g) Multivariate Count
with Mixture; and (h) Multivariate two-state hidden Markov Chain Tobit.
Even a cursory look at the listed models allows them to be separated into two categories:
(a) univariate; and (b) multivariate. Fundamentally, univariate models, where ANOVA tests are
applied in succession, are designed to ascertain the effect of the independent variables on the
dependent variables (Biskin, 1980). In the context of the experimentation, univariate models are
designed to highlight the exogenous and endogenous factors that have a significant impact on the
outcome. On the other hand, multivariate models, where a MANOVA test is conducted, are
designed to come up with sets of independent variables that have an impact on the dependent
variables (Huberty, 1986). Again, by taking the experimentation context into account, the
multivariate models are designed to highlight sets of exogenous and endogenous factors that
have a significant impact on the outcome of the experimentation. According to Huberty and
Morris (1989), the fundamental difference between multiple univariate ANOVA tests and
multivariate MANOVA tests consists of the consideration of the effects of the independent
variables on each other and their compound effect on the outcome. More specifically, univariate
models tend to ignore the relationship between exogenous and endogenous factors and their
compounding effect, whereas multivariate models take this relationship into account.
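In this context a univariate test reduces to comparing between-group variance to within-group variance; a minimal stdlib sketch of the one-way ANOVA F statistic over synthetic driving times (the data are invented, not from the study):

```python
def one_way_anova_f(*groups):
    """Classic one-way ANOVA F statistic: between-group variance
    divided by within-group variance. A large F suggests the grouping
    factor affects the dependent variable."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g)
                    for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n_total - k))

# Synthetic driving times (minutes) grouped by one endogenous factor,
# the route taken; here each factor is tested in isolation.
f_stat = one_way_anova_f([24, 26, 25, 27, 25],
                         [41, 39, 40, 42, 38],
                         [56, 54, 55, 57, 53])
```

A MANOVA would instead test the factor's effect on several dependent variables jointly, which is the distinction Huberty and Morris (1989) draw.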
In practical terms, going back to the “home to the office” driving example, univariate
models consider the independent impact of the weather conditions and the time of the departure
on the driving time, whereas multivariate models would consider these two factors in conjunction
Even though interactive and Internet marketing are relatively new phenomena they have
generated a fair amount of popular and scientific ferment. It is even fair to say that the scientific
community has been lagging behind interactive and Internet marketing practitioners who have
pushed exploration boundaries. At the same time, in recent years, interactive and Internet
marketing, as a subject of scientific inquiry, have seen an increased rate of exciting empirical
research. These studies have focused on interactive and Internet marketing from social,
psychological, and physiological perspectives (Jebakumari, 2002; Milley, 2000; Macias, 2000;
Liu, 2002; Newman, 2001; Mark, 2003; Bezjian-Avery, 1997; Raman, 1996).
There is abundant evidence that ancient Egyptians conducted experiments during pyramid
construction. In a more recent example of experimentation, James Lind, while serving aboard
the Salisbury, conducted an experiment of using citrus to cure scurvy. One of the notable
differences between early experimentation efforts and the experiments conducted by Lind was
the use of control and treatment groups. The results of the control group were compared with the
results of treatment groups in order to confirm or reject the hypothesis of the experiment. In the
early 20th century Ronald Fisher formulated a mathematical method for designing and analyzing
several factors or variables at a time (Fisher, 1935). Fisher (1926) initially used “complex
experimentation” as the term describing experimentation with multiple variables at the same
time. In more recent years researchers have focused on experimentation in the context of product
development.
Enlarged experimentation methods have been researched in the context of Electrical and
Mechanical Engineering (Wang, 1999; Hansen, 1989; Donne, 1996). It is certainly not surprising
that experimentation practice has been widely employed in industrial manufacturing since
manufacturing product commitment is quite expensive and may result in significant losses and
even, in some cases, impact on the long-term survival of the company. The product development
lifecycle often requires experimentation to be part of the product development process. The
deceptive ease of change in the Internet and interactive marketing product development has
resulted in the situation where experimentation best practices found in the industrial product
engineering are ignored. There is certainly a glaring lack of empirical research into
development. There are a few empirical works that have broached this subject (Dou, 1999; Li
2003; Ozdemir, 2000); however, these studies are primarily dedicated to data mining and
innovation (Thomke, Von Hippel & Franke, 1997; Thomke, 1998; Thomke 2001; Von Hippel
1998; Thomke; 1995). These studies have asserted that product innovation is driven by product
users themselves through interaction and experimentation. Product innovation was particularly
highlighted in these studies and was considered separately from the remaining phases of product
development lifecycle. By and large, this part of the dissertation capitalizes on the mentioned studies
by applying the findings of the above-mentioned research to both Internet and interactive
marketing.
Interactive Marketing
If the Internet timeline could be separated into three decadal stages: (a) mid 1990s –
introduction stage; (b) early 2000 – development stage; and (c) late 2000 – maturity stage; then
the Jebakumari (2002) study could be classified as a study of the stages of Internet development. It
was during this time that Internet interactivity came into strong researcher and practitioner focus.
The overall purpose of the Jebakumari (2002) study was to describe interactivity in the context
of Internet marketing. Lyons (2007) offered several research questions: (a) what are the nature,
characteristics and components of interactivity? (b) what are the shortcomings of the traditional
marketing models in context of the interactive medium? (c) how is interactivity related to
comprehension?
Jebakumari (2002) examined traditional marketing and its shortcomings to explain this
interactive phenomenon. Both traditional and interactive Internet marketing were compared and
contrasted. The conclusions reached by Jebakumari (2002) were reminiscent of a similar study
conducted by Mark (2003). Jebakumari (2002) found that a number of traditional marketing
techniques were inconsistent with the interactive media and did not adequately address the
interactive audience.
A study by Milley (2000) could be attributed to the late introductory and early
development stages of Internet marketing. Milley (2000) explored what he calls Web-enabled
consumer marketing, its intricacies and specifics. Milley (2000) tried to formulate a theoretical
model of interactive marketing on the basis of numerous case studies, presented and analyzed in
succession. An additional focus of the study was related to the operational recommendations of
running a consumer-oriented interactive web site. Milley (2000) proposed the following research
questions: (a) what is the theoretical basis of Web-enabled consumer marketing? and (b) how
should the company align its operations to be congruent with the Web-enabled consumer
marketing model?
Milley (2000) reached the conclusion that Web-enabled consumer marketing requires
analysis of behavioral user data to guide future actions and marketing decisions. He also
concluded that in order to facilitate comprehensive data analysis, interactive user data must be
marketing companies must position their human and systems resources, as well as establish
The Raman (1996) study could be attributed to the introductory period of interactive and
Internet Marketing. Raman (1996) explored interactivity on the Web at the time when it was an
emerging phenomenon. In particular, Raman (1996) examined the desired customer exposure to
online banners. Similar to the later studies by Mark (2003) and Jebakumari (2002) that focused
on the comparison and contrast between traditional and interactive marketing, Raman (1996)
contrasted banner exposure in traditional and interactive marketing models. The Raman (1996)
study is similar to a parallel study by Bezjian-Avery (1997), which attempted to define interactive
Raman (1996) proposed the following research questions: (a) what are the factors
affecting the desired interactive exposure? and (b) how do the levels of interactive exposure
affect the desired advertising outcome? Raman (1996) concluded that the dominant factor
affecting desired interactive exposure is predominantly related to interactive content richness.
Additionally, Raman (1996) concluded that an interactive advertisement that speaks to the
consumer on an individual level at the same time as being pertinent and engaging has a high
Experimentation
design is heavily focused on major engineering disciplines. The empirical research on the subject
of experimentation in interactive and Internet marketing is scarce and tangential. This study
relies on several seminal works on Design of Experiments, data modeling and product
innovation. In the area of Design of Experiments this study examined several research papers
related to the Taguchi Method. Weng (2007) presented a detailed analysis of experiment
optimization methods. These methods were compared on the basis of (a) global optimization; (b)
discontinuous objective function; (c) non-differentiable function; and (d) convergence rate. Weng
(2007) found that the Taguchi Method scored extremely well in all of the compared categories.
Weng (2007) gave a detailed review of the Taguchi Method itself and its benefits over other
optimization methods. Weng (2007) also suggested several improvements to the Taguchi Method
dissertation is based on another tangential topic related to data mining and data modeling in
experimentation without being engaged in some form of data mining and data modeling. The
experimentation is enabled by data analysis and data mining. More specifically, the
experimentation process is data analysis driven. In his essays on interactive marketing Li (2003)
examined three cases of interactive marketing. In the first essay Li (2003) described the
functionality of cross-selling services on an interactive banking web site. Li (2003) analyzed the
behavioral reasoning behind online user actions as they pertain to the purchasing of products and
services offered by the interactive banking web site. In conducting behavioral analysis Li (2003)
utilized several multivariate probit models implemented by a hierarchical Bayes framework. This
interactive real-time online experiments. In his second essay Li (2003) analyzed the browsing
behavior of users on several interactive web sites. In order to predict future browsing paths Li
(2003) utilized several Poisson and discretized tobit models. These models were compared and
contrasted in the context of their ability to accurately predict user browsing behavior. This
dissertation utilizes the modeling technique findings presented by Li (2003). In his third essay Li
(2003) analyzed purchase and conversion data from several eCommerce web sites. He used this
data to build a predictive purchase model. Li (2003) concluded that his hierarchical Bayes
framework supplemented with a hidden Markov model could accurately predict a path reflecting
user goals ultimately leading to a potential purchase. This dissertation capitalizes on the findings
of this essay during the set up and analysis of the effect of experimentation on reaching
Research by Dou (1999) utilized statistical analysis similar to that of the Li (2003) study
for modeling online sales. Dou (1999) examined the applicability of Catastrophe Theory to
modeling actual behavior and predicting potential purchasing online decisions. Dou (1999)
explored what he termed the data empowered marketing strategy, where data was mined through
tracking users to guide the interactive marketing decisions of the company. Even though Dou
(1999) did not mention this concept as interactive marketing experimentation by name, he
hypothesized that interactive marketing data can be used to alter the interactive user experience
in real time as more of the user data was collected and analyzed. Dou (1999) called this approach
observation. Dou (1999) proposed that interactive marketing data can be modeled using the
Catastrophe model. He hypothesized that Catastrophe Theory is eminently suitable for this type
of analysis and predictive modeling. Dou (1999) concluded that it was indeed possible to model
and ultimately predict the browsing and purchasing behavior of users on interactive marketing
web sites.
The significance of both the Li (2003) and Dou (1999) studies is the fact that interactive
marketing data has been actively analyzed using a multitude of statistical models in the context of
interactive marketing. However, it is important to note that the use of the Taguchi Method for similar
analysis has not been empirically researched. Additional computing paradigms for predictive
data modeling such as Evolutionary Computing and Genetic Algorithms have been explored by
several researchers (Ozdemir, 2002). Ozdemir (2002) argued that Evolutionary Computing offers
real potential in deriving a best fitness value. As such it holds significant promise for online data
Product Innovation
on the product development lifecycle in the context of interactive marketing. Even though there
are few empirical studies that directly deal with experimentation in interactive marketing,
emphasizing the web site as an interactive marketing product, there is a significant body of
empirical work that is devoted to experimentation in the context of product development. This
dissertation capitalizes on several seminal works by Thomke and Von Hippel. Thomke
(1995) hypothesized that the mode of experimentation such as prototyping and simulation has a
proposed that the use of simulation experimentation is more economical and therefore far more
viewed as a product enabler and innovation driver. Thomke (1995) presented two case studies
where experimentation was used in the design of new pharmaceutical drugs and integrated-
circuit based systems. Thomke (1995) proposed experimentation design cycles consisting of
designing, building, running and analyzing activities performed in a contiguous manner. Each
successive cycle was built taking into account the findings of the previous cycle. This study
hypothesized the applicability of this cycle in general, and the process in particular, to interactive
marketing product development (see Appendix E). Thomke (1995) found that switching between
In a seemingly unrelated study Von Hippel (1998) argued that product innovation should
be driven by the people who would benefit from the end product of innovation, end users of the
product themselves. Von Hippel (1998) described what could be called the Von Hippel paradox,
where product specialists should not be primarily responsible for product innovation, but rather
defer to product users as a source of ultimate innovation. Von Hippel (1998) described this
paradox as a shift in locus of problem-solving. In this dissertation Von Hippel’s ideas are
combined with the approach proposed by Thomke (1995), where interactive marketing product
CHAPTER 3: METHODOLOGY
This study utilized a qualitative research paradigm. The choice of qualitative research methodology was
related to the nature of the topic and the innate characteristics of the field of the study.
Employing qualitative research methods makes the quality of the data of paramount importance.
Consequently, emphasis is placed on how and under what circumstances the data is collected
(Morgan & Smircich, 1980). In contrast to quantitative research methods, it is rare to see a
qualitative researcher working with large quantities of data. This is the case with the current study.
Maxwell (1992) defined qualitative research methods as theory-forming. These
methods are used to generate new theories or introduce new hypotheses. Maxwell (1992) called
qualitative research a paradigm that is concerned with a “breadth first” approach as opposed to a
“depth first” as is the case with quantitative research. More specifically, qualitative research is
behind it. Based on the paradigmatic characteristics provided by Maxwell (1992), the use of
qualitative research methods was consistent with the goals of the study and the state of
under the qualitative paradigm umbrella. The phenomenological method was first formulated by
Husserl (1983). Creswell (2007) defined the phenomenological method as a description of the
meaning for several individuals of their direct experience of a concept or a phenomenon. In this
execution of three consecutive steps. The first step consisted of adopting a phenomenological
method that encouraged the researcher to infuse quantitative data with the qualitative context that
allowed the data to be meaningful. The second step consisted of seeking out an instance where
the phenomenon can be studied in its natural context in order to distill the essence of the
phenomenon. The third and final step was described by Husserl (1983), and consisted of
The experimentation phenomenon in the context of interactive marketing sits well in the
who have experienced the phenomenon first hand. Even though participants of the study have
experienced the phenomenon, they are not necessarily aware of its meaning (Giorgi, 2006). This
point of view is certainly consistent with the description of the experimentation phenomenon.
The participants of this study have certainly experienced the experimentation phenomenon in the
context of interactive marketing, but by and large they are not aware of its meaning and its
the researcher to extract meaning from the experiences of the participants of the study.
The phenomenological method allows the researcher of the study to quantify his/her own
experience by supplementing findings of the study with his/her own observations and
interpretations in the context of the experience. Creswell (2007) described this type of
phenomenological method as hermeneutical. Van Manen (1990) described the researcher of the
study as one of the participants of the study. This researcher has extensive experience with
method allows the researcher to understand his own experience while maintaining a strong
relationship to the topic (van Manen, 1990). However, in order to address generalization,
validation, validity and bias, the researcher must employ bracketing to distinguish his own
experience from the experience of the participants of the study. As such this researcher tried to
deemphasize his own experience. This researcher employed a combination of transcendental and
placed on the experience of the participants of the study and the experience of the researcher is
focus on the practical application of the phenomenon rather than the philosophical side of it.
grounded theory or other narrative approaches and phenomenological methods. Creswell (2007)
made a distinction between narrative study and phenomenological study, where the former is
Even though participants of the study were selected from several groups participating in
of the group to which they belong. According to Creswell (2007), phenomenological methods
organization as a whole.
Design of the Study
The qualitative research paradigm employs interviews as its predominant data collection
instrument (Barbour, 1998). When an interview is conducted in a purely qualitative manner, the
researcher takes an active part in the interview process. In that case, the researcher
is considered to be an actual instrument of the study. Barbour (1998) pointed out that participants
in a study often receive major guidance from the researcher throughout the interview process.
The qualitative research paradigm thus encourages researcher participation in order to reduce
language ambiguity and supplement the possible lack of context associated with quantitative data
collection instruments. This study however, did not employ interviews as the means of data
collection. The major concerns of the study were related to credibility, validity and bias. Since
the researcher of the study is employed by the company where the research is being conducted,
actual or potential undue influence was a paramount concern. The researcher strove to
maintain a balance between extricating himself from the data collection process on the one hand
and maintaining the qualitative nature of the study on the other.
In order to address validity, credibility and bias concerns, as well as to maintain the
qualitative nature of the study, the researcher employed a research instrument used in mixed-method
research studies. More specifically, this study utilized a mixed-method survey as a
research instrument. Johnson and Onwuegbuzie (2004) also described a mixed-method survey
that embodies both qualitative and quantitative aspects. Mixed-method surveys typically contain
questions found in fixed surveys. These types of questions are referred to as close-ended, where
the set of responses is limited. In addition to quantitative questions, these surveys contain
corresponding sections that allow free-form expression, providing a qualitative context for what
otherwise would be purely quantitative data. In contrast to close-ended questions these questions
are open-ended.
In order to eliminate possible researcher influence, the survey was administered over the Internet
in an anonymous fashion. In addition, data was collected under false pretenses. The participants
of the study were not told that the data was being collected for the purposes of research due to
the possibility of participant bias. The survey was positioned as providing helpful feedback on
The study included 23 human participants. The participants of the study work in the same
organization as the author of the study. A particular set of participants was chosen from all of the
groups involved in the interactive marketing experimentation. The technology group was
excluded from study participation, since the author of the study works for the technology group
and may exert undue influence on the participants of the study. The participants of the study
were randomly chosen from Interactive Marketing, Business Development, User Experience,
The participants from each of the mentioned business units provided information relevant
to the results of experimentation and its impact on various aspects of interactive marketing. They
were asked to elaborate on their experiences of experimentation in the context of the interactive
marketing. The participants of the study were asked about the perceived success of experimentation relative to its goals, such as improved conversion, product innovation, product improvement, and product targeting.
Assuming that the chosen sources were both valid and credible, further research
credibility and validity depended only on the researcher himself/herself. One of the ways of
selecting credible and valid sources is to select them at random. More specifically, only a single representative of each business unit was selected at random. This type of selection method minimized the possibility of selection bias.
Measurement Strategy
The study was conducted within a single company. The study participants were selected at random to represent their interactive marketing channel. The respondents of the study were asked to complete a mixed-method survey consisting of questions related to experimentation in interactive marketing. The research questions of the survey were designed to understand the relationship
between the corresponding dependent and independent variables. Since the number of
independent variables was too great they were grouped under common categories. For instance,
independent variables related to experimentation such as color, font, font size, and images were
grouped under a visual category. It is important to note that independent variable categories were
classified as either endogenous or exogenous. The resultant survey included five categories of
endogenous independent variables (see Appendix B): (a) visual; (b) functional; (c) positional; (d) informational; and (e) behavioral, as well as four categories of exogenous independent variables (see Appendix B): (a) temporal; (b) demographical; (c) seasonal; and (d) contextual.
In addition to the independent variables each of the research questions had a number of
dependent variables associated with it. These dependent variables were assigned as follows (see
Appendix B) (a) competitive standing (revenue, market size, profit, market share, and market
segmentation); (b) interactive marketing product development lifecycle (product risk, product
innovation, product improvement, product life cycle, and product targeting); and (c) interactive
marketing goals (cost per acquisition, cost per impression, cost per action, upsell, and click-
through rate).
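As an illustration, the variable taxonomy described above can be captured in a small data structure. The category and variable names below follow the lists in this section; the dictionary layout and the helper function are illustrative assumptions, not part of the study's actual instrumentation.

```python
# Sketch of the study's variable taxonomy. Names follow the lists above;
# the structure and the helper function are illustrative assumptions only.
ENDOGENOUS_CATEGORIES = ["visual", "functional", "positional",
                         "informational", "behavioral"]
EXOGENOUS_CATEGORIES = ["temporal", "demographical", "seasonal", "contextual"]

DEPENDENT_VARIABLES = {
    "competitive standing": ["revenue", "market size", "profit",
                             "market share", "market segmentation"],
    "product development lifecycle": ["product risk", "product innovation",
                                      "product improvement",
                                      "product life cycle",
                                      "product targeting"],
    "interactive marketing goals": ["cost per acquisition",
                                    "cost per impression", "cost per action",
                                    "upsell", "click-through rate"],
}

def category_of(variable):
    """Return the dependent-variable category containing `variable`, or None."""
    for category, members in DEPENDENT_VARIABLES.items():
        if variable in members:
            return category
    return None
```

For example, `category_of("upsell")` maps that variable back to the interactive marketing goals group, mirroring the grouping rationale described above.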
Each research question was directly represented in the survey in the form of several
survey questions. In addition to asking participants of the study to answer research questions
directly, each of the dependent variables was investigated in isolation (see Appendix A). The survey questions represented the independent variables in amortized form together with the dependent variables associated with the research questions. The amortized independent variables were referred to as “Experimentation” (see Appendix A), whereas the dependent variables were called out exactly as they were specified in the “Conceptual Framework.”
Even though quantitative questions were utilized alongside qualitative questions, the quantitative data was not used in drawing the conclusions of the study. The point of analysis associated with the quantitative data was to ascertain consistency between the qualitative and quantitative answers. The quantitative survey questions used single- and multiple-choice scales.
The qualitative survey questions utilized a measurement strategy based on open-ended description.
Instrumentation
The research instrument consisted of a survey related to the experimentation efforts of the company in the context of interactive marketing. The company where the research was conducted utilized multifaceted interactive content. More specifically, the company used email, search, social, display, internet, and affiliate interactive
marketing approaches. The participants of the study were asked to fill out the survey relating to
interactive marketing areas. The questions of the study were crafted to meet the objectives of the
study. The close-ended questions of the study were not used in the final analysis; rather, they were designed to check consistency with the corresponding open-ended questions and to guide respondents to stay within the confines of the intended question. The survey was designed and
implemented using online survey software and conducted over the Internet. The participants of
the study were invited by the CEO of the company, via email, to complete the survey. The email contained the link to the online survey as well as an explanation of the purpose of the survey.
The survey contained five major sections: (a) introduction (questions related to overall
experimentation experience); (b) interactive marketing goals (questions related to the impact of
experimentation process on the goals of various interactive marketing channels); (c) interactive
marketing product (questions related to the impact of the experimentation process on the various aspects of the interactive marketing product); (d) competitive standing (questions related to the impact of the experimentation process on the various key competitive indicators); and (e) sustaining effects (questions related to the sustaining effects of experimentation).
Data Collection
The data was collected via the SurveyMonkey.com web site. The initial survey was pre-tested and modified according to the feedback from the pilot group and the mentors of the study.
The pilot group consisted of three members chosen from a pool of potential participants. The
participants of the study were given a week to complete the survey, with multiple reminders sent two days before and on the day before the survey expiration date. All questions in the study were designated as mandatory, and the only two ways to exit the survey were either to complete it or to abandon it. If the survey was abandoned, the participant had to start over in order to proceed with the survey at a future date. According to the SurveyMonkey.com statistics, none of the surveys were abandoned and the effective survey completion rate was 100%.
Because the survey was administered online, participant anonymity was preserved. After all surveys were
completed the survey results were downloaded onto the researcher’s computer and analyzed. At
all times the survey results were protected from inadvertent or intentional disclosure. The
surveys were conducted online over secure protocol and access to the survey results was
username and password protected. When the results of the survey were downloaded to the researcher's computer, access to the computer itself was likewise username and password protected.
The data analysis procedures roughly consisted of the steps outlined by Creswell (1998)
with slight adaptation for the needs of this study. These steps consisted of (a) horizontalization
and bracketing; (b) clusters of meaning; (c) textual description; (d) composite description. It is
important to note that the usual phenomenological step of transcription was omitted since the
data was collected via an open-ended survey administered over the Internet. As such the data
transcription consisted of downloading the results of the completed surveys. The data analysis
was conducted on two occasions. The initial data analysis consisted of interpretation of the
surveys submitted by the “pilot” group, comprised of a small population sample. The subsequent
data analysis was conducted during the analysis phase of the actual study.
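The steps above can be sketched in code. This is a minimal illustration of the horizontalization-to-description flow; the function names and the keyword-matching heuristic are assumptions for demonstration, not the coding scheme actually used in the study.

```python
# A minimal sketch of the analysis steps adapted from Creswell (1998):
# horizontalization -> clustering -> textual description. The keyword
# heuristic below is an illustrative assumption, not the actual procedure.
def horizontalize(responses):
    """Treat every non-empty statement as an equally weighted 'horizon'."""
    statements = []
    for response in responses:
        statements.extend(s.strip() for s in response.split(".") if s.strip())
    return statements

def cluster(statements, themes):
    """Group statements under each theme whose keyword they mention."""
    clusters = {theme: [] for theme in themes}
    for statement in statements:
        for theme, keyword in themes.items():
            if keyword in statement.lower():
                clusters[theme].append(statement)
    return clusters

def textual_description(clusters):
    """Reduce each cluster to a per-theme statement count."""
    return {theme: len(stmts) for theme, stmts in clusters.items()}

# Hypothetical response and theme labels, for demonstration only.
responses = ["Visual experiments improved conversion. Seasonal timing mattered."]
themes = {"visual": "visual", "seasonal": "seasonal"}
summary = textual_description(cluster(horizontalize(responses), themes))
```

In practice each cluster would be distilled into narrative prose rather than counts, but the pipeline shape (statements, then labeled groups, then a reduced description) is the same.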
Horizontalization
The horizontalization step consisted of extracting common themes and significant statements from the responses of the participants. The researcher
read responses several times while trying to comprehend and interpret their meaning. The
qualitative data collected was correlated with similarly intentioned quantitative answers. For
instance, the quantitative survey question, “Please select experimentation lever types that you feel are most instrumental in achieving interactive marketing goals? (a) visual; (b) functional; (c) positional; (d) behavioral; (e) informational” was paired with the qualitative question, “Please describe in your own words what experimentation levers were most useful in achieving interactive marketing
goals?” All discrepancies were recorded and analyzed. These discrepancies were resolved in the
final version of the survey and modifications were based on the analysis of the pilot sample.
During the second reading pass, significant statements and common themes were underlined and
extracted. All similar statements were grouped under a common umbrella and labeled accordingly. The process of horizontalization was performed several times. The resultant groups formed the input to the clustering step.
Clusters of Meaning
According to Van Manen (1990), the process of clustering consists of extracting meaning
from the grouped quotes. The process of clustering capitalized on the previous horizontalization
step. The responses grouped under common labels were examined for the common clumps of
meaning. In order to make this process simpler, groups were attributed to the corresponding
research questions. Since some of the survey responses could have been attributed to one or more research questions, it was imperative to keep them organized. In some cases, however, quotes were attributed to multiple emerging themes. At the same time, some of the quotes contained several themes or meanings at once.
Textual Description
The textual description step of the phenomenological method described by Alvesson and
Sköldberg (2000) consists of the reflection of the participant experience written by the
researcher. The textual description process logically followed a clustering step, where clusters of
meanings were extracted and recorded. After analyzing clusters of meaning, this researcher tried
to come up with the textual narrative of what survey participants were trying to convey. The
resultant narrative contained a number of expressions used by the study participants themselves
as well as the meaning phrases derived in the earlier step. Prior to coming up with the textual
description the researcher of the study prepared a written description of his own thoughts and
feelings about the experimentation phenomenon in the context of Internet and interactive
marketing. By utilizing bracketing techniques the researcher tried to separate his experience from
the experience of the participants of the study. Even though it was difficult for the researcher to do so completely, this step of the phenomenological method, a written description prepared prior to the textual description step, raised his awareness of his own preconceptions.
Composite Description
The composite description step consisted of distilling the essence of the participant experience in the context of the research itself. The narrative produced in the textual description step of the phenomenological method is wordy and lacks coherence. The researcher of the study tried to
distill the meaning of the earlier produced narrative into a single thought that could be easily
expressed. In most cases however, the composite description step of the data analysis related to
phenomenological research methodology was combined with the textual description step. One of
the key reasons for doing so was the fact that the qualitative responses were short and as such,
represented a concise summary that required no further reduction. This is often the case with
mixed-method research instrumentation, where the qualitative portion is filled out by the
participants of the study rather than the researcher. The composite description section of the data
analysis was used to discuss issues related to the analysis of a particular research question.
The qualitative data display used to describe experimentation in interactive marketing utilized a matrix display method. Since the study utilized a phenomenological design approach, the horizontalization step of extracting the most meaningful aspects of the survey answers was followed by clustering. Horizontalized data was clustered based on the business units of the participants. The clustered data was displayed using a role-ordered display. Every research question was thereby tied to the roles of the research participants.
Qualitative Data Display for Explaining the Phenomenon
The qualitative data display used to explain experimentation in interactive marketing utilized a matrix display method. Based on the problem type, the
experimentation phenomenon is ideally suited for the explanatory effects matrix display type.
According to Miles and Huberman (1994), the explanatory effect matrix is used to answer
questions of the following type, “Why were these outcomes achieved?” and “What caused them
generally or specifically?” In the case of experimentation in interactive marketing, the study tried to explain the relationship between experimentation and improvements along the product development lifecycle, as well as improved online conversion. The explanatory effects matrix consists of a cross-tabulation between the various
groups that participated in the study and the research question itself.
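An explanatory effects matrix is, at bottom, a cross-tabulation of participant groups against reported outcomes. The sketch below illustrates the mechanics with hypothetical data; it is not the study's actual matrix, and the group and outcome labels are assumptions for demonstration.

```python
from collections import Counter

# Hypothetical (group, reported outcome) observations, used only to
# illustrate how an explanatory effects matrix is assembled.
observations = [
    ("Interactive Marketing", "positive"),
    ("Interactive Marketing", "neutral"),
    ("User Experience", "positive"),
    ("Data Analysis", "positive"),
]

def effects_matrix(pairs):
    """Cross-tabulate groups (rows) against outcomes (columns)."""
    counts = Counter(pairs)
    groups = sorted({g for g, _ in pairs})
    outcomes = sorted({o for _, o in pairs})
    return {g: {o: counts[(g, o)] for o in outcomes} for g in groups}

matrix = effects_matrix(observations)
```

Reading across a row shows how one group's responses distribute over outcomes, which is exactly the "Why were these outcomes achieved?" view Miles and Huberman (1994) describe.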
This researcher considers the explanatory effects matrix the most useful display technique for explaining and predicting the phenomenon studied in this dissertation. It is a very good fit for examining experimentation in interactive marketing for several key reasons. First of all, the matrix format allows an analysis of the experimentation practice across multiple business units. It was not expected that different departmental units were going to feel differently about the usefulness of experimentation and its positive impact on the dependent variables of the study, such as competitive standing, operational risks, and attainment of interactive marketing goals. Possible differences could be explained by the Proximity Principle described by Torre and Rallet (2005). The group closest to a particular aspect of the
phenomenon is more familiar with its nuances when compared with other groups that have more distant exposure to it.
CHAPTER 4: DATA COLLECTION AND ANALYSIS
The survey was sent out to 23 participants over several installments. Initial responses
were analyzed for consistency and clarity. In order to increase the validity and decrease bias
survey links were only sent to departments that do not fall under direct supervision of the
researcher. In addition survey participation was proposed under false pretenses. The participants
of the study were asked to fill out a survey in the context of their job function with an express
purpose stated as evaluating and improving experimentation efforts undertaken by the company.
All participants were asked to participate on a voluntary basis with strict assurances of confidentiality. General survey settings were as follows: (a) allow only one response per computer;
(b) respondents can go back to previous pages in the survey and update existing responses until
the survey is finished or until they have exited the survey. After the survey is finished, the
respondent will not be able to re-enter the survey; (c) respondents can exit the survey and come
back at any time, unless the survey is finished; and (d) do not display a thank you page.
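For reference, the settings enumerated above could be recorded as a simple configuration map. The key names are hypothetical shorthand chosen for this sketch; they do not correspond to SurveyMonkey's actual option names or API.

```python
# Hypothetical encoding of survey settings (a)-(d) above; the key names
# are illustrative and do not correspond to any real survey-platform API.
SURVEY_SETTINGS = {
    "one_response_per_computer": True,        # (a)
    "allow_backtracking_until_finished": True,  # (b)
    "allow_resume_until_finished": True,      # (c)
    "display_thank_you_page": False,          # (d)
}
```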
Based on the survey statistics, 11 out of the 23 participants completed the survey (see
Figure 1) which constitutes a 47.8% overall survey uptake. Under open-field study conditions
such a response rate would be considered extremely high especially given the length of the
survey and its mixed nature. However, under controlled conditions consistent with an
organizational study, such a response rate cannot be considered unusual. Furthermore, the response rate was most likely buoyed by the belief that participants were filling out the survey in the normal course of their job functions.
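The uptake figure follows directly from the counts reported above (11 completions out of 23 invitations):

```python
# Response-rate arithmetic for the figures reported above.
invited = 23
completed = 11
response_rate = round(100 * completed / invited, 1)
# response_rate is 47.8, matching the reported overall survey uptake
```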
Based on the statistical analysis (see Figure 2), the study participants fell into the
following self-identified groups: (a) Interactive Marketing – eight participants or 34.8%; (b)
Business Development – two participants or 8.7%; (c) User Experience – seven participants or
30.4%; (d) Creative Design – three participants or 13%; and (e) Data Analysis – three
participants or 13%. The self-identified groups were presented as a set of options without the
possibility of adding additional groups. The group names were specifically chosen to represent
job functions instead of an actual department name. One of the key reasons behind doing so was
a desire to tie experimentation responses to the job function of the participants instead of an
actual department, because the actual responsibilities of the participants vary greatly even within the same department, which could have led to wrong conclusions during analysis. In addition,
the actual job function served as a key determinant in assigning relative weight to the responses
of the study participants. In other words, more weight was given to the members of Business
Development group when questions were centered on the effects of experimentation on the competitive standing of the company.
The second question of the survey asked the participants to identify their level of
expertise with respect to experimentation. Based on the statistical analysis of the responses to the second question, the self-identified experimentation expertise levels were distributed as follows: (a) Novice – eight participants or 34.8%; (b) Intermediate – 13 participants or 56.5%; and (c) Expert
– two participants or 8.7%. The experience level data has been cross-tabulated with job function
data (see Figure 3). The distribution of participants that identified themselves in the Novice and
Intermediate categories was fairly uniform across all of the groups; however, self-identified
experts belonged only to the Interactive Marketing and User Experience groups. It is difficult to
draw any conclusions based on the distribution of the self-identified experts; it could most likely be explained by the scarcity of the collected data. In a much larger survey, a more even distribution of experts would be expected.
Research Objective One
One of the main research questions was designed to ascertain the impact of experimentation on the overall interactive marketing goals. The objective was represented by the following qualitative and quantitative survey questions: (a) Please Rate the Impact of Experimentation on
the Overall Interactive Marketing Goals (Cost Per Acquisition, Cost Per Impression, Cost Per
Action, Upsell, and Click-Through Rate); (b) Please Describe in Your Own Words the Impact of
Experimentation on the Overall Interactive Marketing Goals (Cost Per Acquisition, Cost Per
Impression, Cost Per Action, Upsell, and Click-Through Rate); (c) Please Rate The Impact of
Experimentation on the Cost Per Acquisition (CPA); (d) Please Describe in Your Own Words
How Experimentation Has Reflected upon the Cost Per Acquisition (CPA); (e) Please Rate The
Impact of Experimentation on the Cost Per Impression (CPI); (f) Please Describe in Your Own
Words How Experimentation Has Reflected upon the Cost Per Impression (CPI); (g) Please Rate
The Impact of Experimentation on the Cost Per Action (CPA); (h) Please Describe in Your Own
Words How Experimentation Has Reflected upon the Cost Per Action (CPA); (j) Please Rate
The Impact of Experimentation on the Upsell; (k) Please Describe in Your Own Words How
Experimentation Has Reflected upon the Upsell; (l) Please Rate The Impact of Experimentation
on the Click-Through Rate (CTR); and (m) Please Describe in Your Own Words How Experimentation Has Reflected upon the Click-Through Rate (CTR).
Horizontalization
The first step of qualitative data analysis consisted of the technique of horizontalization. This step consisted of extracting significant statements and checking for consistency between the qualitative and quantitative responses.
According to the statistical analysis of the quantitative question number three (see Figure 4), 13
or 92.9% of participants that had completed this question found that experimentation had a
positive impact on their overall interactive marketing goals. Only one participant or 7.1%,
determined that the impact of the experimentation had been neutral. Based on the analysis of the
qualitative data (see Figure 37) it is clear that the qualitative responses were consistent with the
quantitative response breakdown. In order to gain additional insights into the data the researcher
produced two cross-tabulation tables. The first cross-tabulation analyzed participant responses by
job function (see Figure 5), whereas the other looked at the participant experience in relation to
given responses (see Figure 6). The majority of the survey participants indicated a positive impact regardless of job function. The only participant who found the impact of experimentation on interactive marketing goals to be neutral came from the interactive marketing group. In the same vein, the
majority of the survey participants that found the experimentation impact positive described their
experimentation experience as intermediate. The two self-identified experts were split between
positive and neutral impact. In order to gain more insight into the responses of the two self-
identified experts, corresponding qualitative data was also analyzed (see Figure 36). In the
qualitative portion of the answer, one self-identified expert indicated that, “Our initial experimentation efforts had mixed results. However, as experimentation practices matured the overall impact was consistently positive.” As such, it is possible to speculate that the other self-identified expert was referring to early-stage efforts. Indeed, that expert's response suggested that initial experimentation efforts yielded mixed results, with the overall result being positive: “Initial
experimentation efforts had neutral impact on the business overall. After experimentation has
been coupled with the process improvements it had a positive impact on the Interactive
Marketing in general and listed goals in particular”. Even though the majority of participants felt
that experimentation had a positive impact on the marketing goals as a whole, when each
particular marketing goal was itemized some participant answers exhibited inconsistency. More
specifically several participants answered that experimentation had a positive impact on the Cost
Per Action (CPA), one of the goals of interactive marketing. However, when asked to describe this positive effect in their own words, they answered, “Not sure.” This discrepancy could be
explained by the fact that these participants are not part of the group that keeps track of
individual interactive marketing metrics (see Figure 13), (see Figure 14), (see Figure 15), (see
Figure 16), (see Figure 17). In contrast with these responses, study participants that interacted
with the marketing data on a daily basis were able to articulate the impact in detail; for example, one noted that the experimentation “strategy had a positive impact on the CPA”, and another that “CPA was reduced through experimentation with webpage content and execution workflow.” The complete extraction of the answers related to
research question one was grouped and analyzed for significant statements (see Figure 37).
Clusters of Meaning
The second step of qualitative data analysis consisted of clustering. The process of clustering capitalized on the previous horizontalization step. The responses grouped under common labels were examined for common
clusters of meaning. In some cases quotes were attributed to multiple emerging themes. At the
same time some of the quotes contained several themes or meanings at once. The quantitative
survey questions pertaining to individual marketing goals were cross tabulated with an
assessment of overall experimentation impact. Based on the analysis of the cross-tabulations, the
majority of the survey participants displayed consistency in responses. The participants of the
study felt that experimentation had an equally consistent impact on the interactive marketing
goals, as a whole, as well as the individual interactive marketing goals in particular. In other
words, if a participant felt that experimentation had a positive overall impact, he/she felt the
same way when the interactive marketing goals were itemized. However, in several cases
responses were inconsistent, where two participants felt that the overall impact was positive, but
were not sure about the impact of experimentation on individual interactive marketing goals.
This could have been considered an anomaly; however, after looking at several responses by the
same participants of the study it became clear that “not sure” response was given to all itemized
individual interactive marketing goals as opposed to some. The most likely explanation for the
observed anomaly is that the participants of the study who gave inconsistent responses did not deal with individual interactive marketing goals directly, but rather were only familiar with them at a general level.
The analysis of the qualitative and quantitative data revealed several emerging themes:
(a) the overall impact of experimentation on interactive marketing goals was positive; (b) initial
experimentation efforts were less than successful until the company found a way to couple
experimentation with a mature process; and (c) the impact of experimentation was viewed in the aggregate, as a process rather than as a series of individual experiments.
Textual Description
The third step of qualitative data analysis involved textual description. Textual description consisted of organizing the extracted statements and themes into a cohesive narrative. In summary, the majority of the surveyed participants felt that
experimentation had a positive impact on their interactive marketing goals as a whole. However,
early efforts related to experimentation yielded mixed results due to the lack of an organizational
process and strong experimentation experience. When an experimentation process was fully
established its impact remained consistently positive. Assessment of the experimentation impact
on individual interactive marketing goals was largely consistent with the assessment of
experimentation as a whole. The majority of participants felt that individual marketing goals
such as Cost Per Action, Cost Per Impression, Cost Per Acquisition, Upsell and Click-Through Rate were positively impacted as well.
Composite Description
The composite description step of the data analysis related to phenomenological research
methodology was combined with the textual description step. One of the key reasons for doing
so was the fact that the qualitative responses were short and as such, represented a concise
summary that required no further reduction. This is often the case with mixed-method research
instrumentation, where the qualitative portion is filled out by the participants of the study rather
than the researcher. The composite description section of the data analysis was used to discuss issues related to the nature of experimentation. It is expected that some experiments yield negative results. These do not, however, have a negative connotation in the sense of being bad or poor. The nature of experimentation is such that some experiments result in conversion increases while others result in conversion decreases. Hence, it is important to talk about the impact of
experimentation as a process rather than the impact of the individual experiment. It is however
expected that the experimentation process should result in a positive outcome as a whole.
Research Objective Two
Another key research question was designed to identify key experimentation levers that
impact upon the interactive marketing goals. The objective was represented by the following
qualitative and quantitative survey questions: (a) Please Select Experimentation Lever Types
That You Feel Are Most Instrumental In Achieving Interactive Marketing Goals (Cost Per
Acquisition, Cost Per Impression, Cost Per Action, Upsell, and Click-Through Rate); (b) Please
Describe in Your Own Words What Experimentation Levers Were Most Useful In Achieving
Interactive Marketing Goals (Cost Per Acquisition, Cost Per Impression, Cost Per Action,
Upsell, and Click-Through Rate); (c) Please Select Experimentation Conditions That You Feel
Had the Strongest Impact upon Interactive Marketing Goals (Cost Per Acquisition, Cost Per
Impression, Cost Per Action, Upsell, and Click-Through Rate); and (d) Please Describe in Your
Own Words What Experimentation Conditions Were Most Impactful upon Interactive Marketing
Goals (Cost Per Acquisition, Cost Per Impression, Cost Per Action, Upsell, and Click-Through
Rate).
Horizontalization
According to the statistical analysis of the quantitative question five (see Figure 7), 11
participants or 78.6% who completed this question found that visual experimentation levers had
the strongest impact on their overall interactive marketing goals. Similarly, seven participants or
50% felt the same way about the functional experimentation levers. Lastly, three participants or 21.4% and four participants or 28.6% felt that behavioral and informational experimentation levers, respectively, had the strongest impact on the interactive marketing goals.
With regards to external conditions related to experimentation, 11 participants or 71.4% felt that
seasonal variations had the strongest impact on the interactive marketing goals. The feelings
related to Temporal, Demographical, and Contextual external conditions were almost evenly
split, with a slight preference given to experimentation taking into account contextual conditions
Based on the analysis of the qualitative data (see Figure 38) it is clear that the qualitative
responses were consistent with the quantitative response breakdown. In order to gain additional
insights into the data, the researcher produced two cross-tabulation tables. The first cross-
tabulation analyzed participants’ responses by job function (see Figure 8), whereas the other
looked at the participants’ experience in relation to given responses (see Figure 9). It was found
that 100% of the survey participants from Creative Design, Data Analysis and User Experience
agreed that Visual experimentation levers had the strongest impact on the interactive marketing goals. In addition, 50% of participants from the Interactive Marketing group identified Visual experimentation levers as the most impactful. In contrast, the only survey participants that identified Informational levers came from the Interactive Marketing and User Experience groups. This result seems consistent with the fact that Informational levers are
underrepresented as the percentage of the overall experiments performed by the company. The
fact that survey participants from the other groups did not identify the Informational experimentation levers as impactful could be attributed to their lack of awareness that such levers
even exist.
One of the key insights gained from the analysis of the cross-tabulation of the participant
experience levels and their thoughts related to the most impactful experimentation levers was the
fact that survey participants with more significant experimentation experience focused on fewer
experimentation levers. In other words self-identified novice participants identified all levers as
equally impactful, self-identified intermediate participants focused more on the Visual and
Functional levers, and self-identified experts chose Visual and Functional experimentation levers
exclusively.
Additional cross-tabulations analyzed participant responses related to experimentation by
job function and experience on the basis of external conditions (see Figure 11), (see Figure 12).
Members of all groups identified experimentation with seasonal variations as most impactful
with the exception of the Interactive Marketing group members. Self-identified novice participants uniformly felt that seasonality had a strong impact. This opinion held largely true for self-identified intermediate participants, accounting for more than 70% of responses. There was split
opinion among self-identified experts as to whether seasonality experimentation had the strongest
impact.
The complete extraction of the answers related to research question two was grouped and analyzed for significant statements.
Clusters of Meaning
Based on the results of the horizontalized data it was evident that an overwhelming
majority of the participants felt that experimentation with visual elements yielded the most
impactful results on their interactive marketing goals. The following statement clearly expresses
a prevailing opinion: “Visual experimentation more than any other factors contributed to the […].” Several participants cited visual levers in conjunction with functional levers: “Experimentation with the visual and functional elements of
our customer facing solutions had the most measurable effect on the interactive marketing KPIs.”
It is important to note that several participants indicated that experimentation with visual
elements was not only most impactful, but was the element that the company experimented with
the most, “More than 80% of the experiments have been conducted using visual elements. It is
not surprising that visual experimentation had the greatest impact on the listed marketing goals.”
In addition, several participants of the study chose to identify all experimentation levers, indicating the importance of all factors: “I believe that all the lever types mentioned above are […].” Responses related to external conditions were heavily clustered in favor of seasonal factors (see Figure 39): “Our
business is highly seasonal. In addition achieving marketing goals is very dependent on the
demographics of the users. Experimenting in the context of user demographics and seasonal
changes had a significant impact on the listed marketing goals.” The most significant statement
is the following response which ties the results of the analysis to the company specifics, “The
economics of the company change dramatically on the micro and macro temporal bases. Taking
Textual Description
The analysis of the horizontalized data revealed several prevailing themes and conclusions. The participants of the study felt that experimentation with visual levers had a very strong positive impact on the interactive marketing goals of the company. A significant number of participants also felt that, in addition to visual experimentation elements, functional levers had a strong positive impact as well. One of the prevailing themes clearly identifiable in many survey responses had to do with the fact that the company chose to emphasize visual experimentation elements over other possible options. At the same time, several participants of the study emphasized that all experimentation levers yielded positive results and therefore all need to be considered as part of the experimentation process.
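Lever experimentation of the kind described above is typically run as an A/B test: traffic is split between a control and a variant of a visual or functional element, and a marketing KPI such as click-through rate is compared across the two groups. The following sketch illustrates one common way to judge such a comparison, a two-proportion z-test; the element, sample sizes and click counts are hypothetical and are not data from this study.

```python
import math

def two_proportion_ztest(clicks_a, n_a, clicks_b, n_b):
    """Two-proportion z-test comparing click-through rates of two variants."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    # Pooled proportion under the null hypothesis of equal CTRs.
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se  # positive z favors variant B

# Hypothetical experiment on a visual lever (e.g. a button treatment):
# control CTR 3.0%, variant CTR 3.8% over 10,000 impressions each.
z = two_proportion_ztest(clicks_a=300, n_a=10000, clicks_b=380, n_b=10000)
significant = abs(z) > 1.96  # roughly a 95% two-sided confidence threshold
```

In this invented example the variant's lift would be judged statistically significant; a real experimentation program would also track the effect size against the marketing goal, not only significance.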
With regards to experimentation with external factors, the opinions of the participants
were similarly strongly centered on seasonal factors. Several responses highlighted the seasonal
nature of the business. As such, taking seasonality into account had a strong positive impact on the marketing goals of the company. Most responses referred to the macro temporal context, where seasonality implied changes in the business patterns from month to month and even from quarter to quarter. However, several participants also emphasized experimentation at the micro temporal level, where experimentation accounted for changing traffic patterns.
Composite Description
Similar to research question one, responses given by the participants of the study in the
qualitative portion of research question two were short enough not to require textual description.
Therefore, textual description has been combined with composite description, and this section highlights possible anomalies and points of focus. One such point was related to causality. It was not clear whether the company chose to perform the majority of the experiments with visual elements because they yielded the most significant results, or whether experimentation with visual elements yielded superior results because the company chose to concentrate on them. There are strong arguments that could be made in support of both points. Visual elements are the easiest to experiment with and they are the most abundant. At the same time, the largest improvements in the interactive marketing metrics were related to experimentation with visual elements. It is quite likely that both effects reinforced each other.
Research Objective Three
Another key research question was designed to ascertain the impact of experimentation
on product innovation. It was represented by the following qualitative and quantitative survey
questions: (a) Please Rate the Impact of Experimentation on Interactive Marketing Product Innovation; and (b) Please Describe in Your Own Words How Experimentation Has Reflected upon Interactive Marketing Product Innovation.
Horizontalization
Based on the statistical analysis of the quantitative results related to the impact of
experimentation on product innovation, nine participants or 81.8% felt that experimentation had a positive impact on interactive marketing product innovation (see Figure 21). A total of two participants or 18.2% indicated that they were not sure how to answer this question. These results
are strongly supported by qualitative data contributed by the participants (see Figure 41). Even
though the questions related to the interactive marketing product lifecycle did not directly address product innovation, participants of the study offered several insights while answering them that were very helpful in understanding experiment driven product innovation (see Figures 19 and 20). Eight participants or 72.7% felt that experimentation has a positive impact on product lifecycle, whereas three participants or 27.3% were not sure how to answer this question.
In order to gain further insights into the qualitative and quantitative answers given by the
participants, several cross-tabulations were produced (see Figures 22 and 23). The first cross-tabulation correlated the participants' opinions about the impact of experimentation on product innovation with their job function. The majority of the participants of the study that indicated that experimentation had a positive impact on product innovation were either directly involved in product development or had a tangential job function. Two participants that were not sure about
the impact of experimentation on product innovation belonged to the Interactive Marketing and Data Analysis groups. The two participants of the study who were not sure about the impact of experimentation on product innovation self-assessed their experience as intermediate and novice respectively. On a percentage basis, 100% of expert participants and 50% of novice participants assessed the impact of the experimentation on product innovation as positive.
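A cross-tabulation of the kind used above simply counts survey responses along two dimensions at once, for example job function against answer, or experience level against answer. A minimal sketch, using invented records rather than the study's actual data:

```python
from collections import Counter

# Hypothetical survey records: (job function, self-assessed experience,
# response about the impact of experimentation on product innovation).
responses = [
    ("Product Development", "expert", "positive"),
    ("Product Development", "intermediate", "positive"),
    ("User Experience", "expert", "positive"),
    ("Interactive Marketing", "intermediate", "not sure"),
    ("Data Analysis", "novice", "not sure"),
    ("Business Development", "novice", "positive"),
]

def crosstab(records, row_index, col_index):
    """Count co-occurrences of two fields: a simple cross-tabulation."""
    table = Counter((r[row_index], r[col_index]) for r in records)
    rows = sorted({r[row_index] for r in records})
    cols = sorted({r[col_index] for r in records})
    return {row: {col: table[(row, col)] for col in cols} for row in rows}

# Cross-tabulate the answer first by job function, then by experience level.
by_function = crosstab(responses, 0, 2)
by_experience = crosstab(responses, 1, 2)

for row, counts in by_function.items():
    print(row, counts)
```

With real survey data, the resulting counts are what allow statements such as "the two unsure participants belonged to the Interactive Marketing and Data Analysis groups" to be read directly off the table.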
The complete extraction of the questions related to research question three was grouped and analyzed for significant statements.
Clusters of Meaning
The following answer expressed the prevailing opinion of the survey participants: “… emerged as a result.” Some qualitative statements were more specific in identifying the exact impact: “… experimentation yielded several innovative ideas.” Those participants that were not sure about the impact of the experimentation on product innovation gave the following qualitative answers,
“I am not sure how to answer this question” and “No specific data was collected.”
The answers to the product innovation questions were cross-referenced with the questions
related to the impact of experimentation on product lifecycle. Several phases of the product
development lifecycle are closely related to product innovation. In particular, idea incubation is
almost directly synonymous with product innovation (see Figure 40). We find support for this in the following response: “… experimentation impacted various phases of the product lifecycle, especially in the early stages of product development.” In addition to the insights gained on product innovation, we find that the
same survey participants gave identical answers, “I am not sure how to answer this question” and
“No specific data was collected” to the questions related to product lifecycle.
Textual Description
Based on the analysis of the clustered data, a compound description emerged pointing to
the fact that the majority of survey participants felt that experimentation had a positive impact on
product innovation. More specifically, study participants stated that interactive marketing
experimentation helped product innovation by allowing the company to churn through several
product ideas on an expedited basis. In addition, it allowed the company to gather customer feedback before the product was fully crafted and the company had committed to the full product development effort. Furthermore, experimentation exposed the company to several innovative ideas that were either not actively discussed or had previously not been considered as promising by the decision makers of the company. As such, these ideas came to fruition in a true customer-driven fashion. In general, the company observed a shift from internally driven to customer-driven product innovation.
Composite Description
It is important to note that product related questions are strongly correlated with the job
function of the participants. In other words, participants of the study that did not focus on product development or did not perform job duties that were at least tangential to product development may not have been aware of the impact of the experimentation on product development or might not have had an informed opinion on the subject.
Research Objective Four
Another research question related to product development was designed to ascertain the impact of experimentation on product improvement. It was represented by the following qualitative and quantitative survey questions: (a) Please Rate the Impact of Experimentation on Interactive Marketing Product Improvement; and (b) Please Describe in Your Own Words How Experimentation Has Reflected upon Interactive Marketing Product Improvement.
Horizontalization
Based on the statistical analysis of the quantitative results related to the impact of experimentation on product improvement, nine participants or 81.8% felt that experimentation had a positive impact on interactive marketing product improvement (see Figure 24). Only two participants or 18.2% indicated that they were not sure how to answer this question. These
results are strongly supported by qualitative data contributed by the participants (see Figure 42).
In order to gain further insights into the qualitative and quantitative answers given by the
participants, several cross-tabulations were produced (see Figures 25 and 26). The first cross-tabulation correlated the participants' opinions about the impact of experimentation on product improvement with their job function. The majority of participants of the study that indicated that experimentation had a positive impact on product improvement were either directly involved in product development or had a tangential job function. Two participants that were not sure about the impact of experimentation on product improvement belonged to the Interactive Marketing and Data Analysis groups. The complete extraction of the questions related to research question
four, was grouped and analyzed for significant statements (see Figure 42).
Clusters of Meaning
The following answer expressed the prevailing opinion of the survey participants: “Experimentation affected product improvement in a positive manner. It manifested itself in the increased pace of product improvement.” Some qualitative statements were more specific in
identifying the exact impact. The participants of the study that were not sure about the impact of experimentation on product improvement gave the following qualitative answers: “I am not sure how to answer this question” and “No specific data was collected.”
Textual Description
Based on the analysis of the clustered data a compound description emerged pointing to
the fact that the majority of survey participants felt that experimentation had a positive impact on
product improvement. More specifically, study participants stated that interactive marketing experimentation helped product improvement by allowing the company to churn through several feature ideas on an expedited basis. In addition, experimentation allowed the company to gather customer feedback on the newly planned features. Furthermore, customer feedback either validated the planned features or prompted their revision.
Composite Description
It is important to note that the survey participants gave almost identical answers to the
questions related to product innovation and product improvement. In fact some answers were
seemingly cut and pasted from the product innovation section. It is possible that participants of
the survey combined product innovation and product improvement concepts into one.
Research Objective Five
Another key research question was designed to ascertain the impact of experimentation on product deployment risk. It was represented by the
following qualitative and quantitative survey questions: (a) Please Rate the Impact of
Experimentation on Interactive Marketing Product Deployment Risk; and (b) Please Describe in
Your Own Words How Experimentation Has Reflected upon Interactive Marketing Product
Deployment Risk.
Horizontalization
Based on the statistical analysis of the quantitative results related to the impact of experimentation on product deployment risk, eight participants or 72.7% felt that experimentation had a positive impact on interactive marketing product deployment risk (see Figure 27). Only two participants or 18.2% indicated that they were not sure how to answer this question and one participant or 9.1% felt that the impact of experimentation on product deployment risk was neutral. These results are strongly supported by qualitative data contributed by the participants.
In order to gain further insights into the qualitative and quantitative answers given by the
participants, several cross-tabulations were produced (see Figures 28 and 29). The first cross-tabulation correlated the participants' opinions about the impact of experimentation on product deployment risk with their job function. The majority of study participants that indicated
experimentation had a positive impact on product deployment were either directly involved in
product development or had a tangential job function. Two participants that were not sure about
the impact of experimentation on product deployment risk belonged to Interactive Marketing and
Data Analysis groups. One of the participants that indicated that the impact of the
experimentation on the product deployment risk was neutral belonged to a User Experience
group. The second cross-tabulation correlated the answers with the participants' self-assessed experimentation experience. The participants that felt that experimentation had a
positive impact on product deployment risk were either self-assessed experts or intermediate
experimentation users. The participants of the study that indicated that they were not aware of
the specific impact or were not sure how to answer this question self-identified as either novice or intermediate experimentation users. The complete extraction of the questions related to research question five was grouped and analyzed for significant statements.
Clusters of Meaning
The following answer expressed the prevailing opinion of the survey participants: “Experimentation practice has reduced product deployment risk by allowing limited product deployment.” Some qualitative statements were more specific in identifying the exact impact of experimentation: “… by allowing easy rollback in the case of errors.” Those participants that were not sure about the
impact of the experimentation on product deployment risk gave the following qualitative
answers, “I am not sure how to answer this question” and “No specific data was collected.” A
participant that indicated in the quantitative portion of the survey that he/she felt that the impact of the experimentation on product deployment had been neutral gave the following answer in the qualitatively paired question: “I am not aware of any product deployment risks that were
mitigated.”
Textual Description
Based on the analysis of the clustered data, a compound description emerged pointing to the fact that the majority of survey participants felt that experimentation had a positive impact on product deployment risk. More specifically, study participants stated that
interactive marketing experimentation helped mitigate product deployment risk by allowing the
company to make a partial commitment to product deployment. It allowed the company to assert
a greater control over the product deployment cycle. Only partial traffic was directed to the
newly deployed products. Full traffic flow was only turned on when the company felt that the new product did not adversely affect key performance indicators of the company such as CTR, CPA, CPI, Upsell Score and so on. If an adverse effect was discovered, the company had an easy route to either further reduce the flow of traffic to the newly deployed product in order to observe it in a more controlled environment, or reduce web traffic to zero, a decision that is tantamount to a product rollback. Since the control of traffic is virtually instantaneous and the amount of potential loss can be strictly controlled, companies are more willing to allow investigative analysis while products are still exposed to only a limited number of users.
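The partial-traffic deployment described above can be sketched as two small rules: a stable per-user routing decision, and a ramp that increases exposure while KPIs hold and cuts it sharply (ultimately to zero, i.e., a rollback) when a KPI such as CTR degrades. The bucketing constant, thresholds and CTR figures below are illustrative assumptions, not the company's actual policy.

```python
def route_request(user_id: int, new_product_fraction: float) -> str:
    """Deterministically route a fixed fraction of users to the new product."""
    # Multiplicative hashing keeps each user's assignment stable across visits.
    bucket = (user_id * 2654435761 % 1000) / 1000.0
    return "new" if bucket < new_product_fraction else "current"

def adjust_fraction(fraction: float, new_ctr: float, baseline_ctr: float) -> float:
    """Ramp traffic up if the new product's CTR holds, ramp down if it degrades.

    Reducing the fraction to 0.0 is equivalent to a product rollback.
    """
    if new_ctr < 0.9 * baseline_ctr:       # adverse effect: cut exposure sharply
        return max(0.0, fraction / 4)
    return min(1.0, fraction * 2)          # healthy: double exposure

# Start by exposing 5% of traffic to the new product, then react to KPIs.
fraction = 0.05
fraction = adjust_fraction(fraction, new_ctr=0.031, baseline_ctr=0.030)  # holds
fraction = adjust_fraction(fraction, new_ctr=0.020, baseline_ctr=0.030)  # degrades
```

The design choice worth noting is the deterministic hash: because a given user always lands in the same bucket, observed KPI differences reflect the product change rather than users bouncing between versions.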
Composite Description
Similar to the other research questions, the responses given by the study participants in the qualitative portion of research question five were short enough not to require textual description. Therefore, textual description has been combined with composite description. This section highlighted possible anomalies and points of focus. It is important to note that research
questions three, four and five induced almost identical responses from the survey participants.
The answers to these questions were consistent among study participants. In other words, if a
participant felt that experimentation had a positive impact on product improvement he/she felt
the same way about product innovation and product deployment risk. However, if the participant
was not sure what impact experimentation had on product innovation, he/she was also unsure
about its impact on product improvement and product deployment. As such it is possible to
conclude that the answers to the qualitative and quantitative portions of the survey related to the impact of various product aspects were strongly correlated to knowledge of the interactive marketing product development process.
Research Objective Six
The remaining research question was designed to ascertain the impact of experimentation
on the competitive standing of the company. It was represented by the following qualitative and
quantitative survey questions: (a) Please Rate the Impact of Experimentation on Competitive Standing of the Company; (b) Please Describe in Your Own Words How Experimentation Has Reflected upon Competitive Standing of the Company; (c) Please Rate the Impact of Experimentation on Revenue of the Company; (d) Please Describe in Your Own Words How Experimentation Has Reflected upon Revenue of the Company; (e) Please Rate the Impact of Experimentation on Profit of the Company; (f) Please Describe in Your Own Words How Experimentation Has Reflected upon Profit of the Company; (g) Please Rate the Impact of Experimentation on Market Size of the Company; (h) Please Describe in Your Own Words How Experimentation Has Reflected upon Market Size of the Company; (i) Please Rate the Impact of Experimentation on Market Penetration of the Company; (j) Please Describe in Your Own Words How Experimentation Has Reflected upon Market Penetration of the Company; (k) Please Rate the Impact of Experimentation on Market Segmentation of the Company; and (l) Please Describe in Your Own Words How Experimentation Has Reflected upon Market Segmentation of the Company.
Horizontalization
According to the statistical analysis of the quantitative question 29 of the survey (see
Figure 30), five participants or 45.5% that had completed this question found that experimentation had a positive impact on the competitive standing of the company. Four participants or 36.4% indicated that they were not sure how to answer this question. Lastly, two participants or 18.2% felt that experimentation had a neutral impact on the competitive standing of
the company. Other questions related to competitive standing of the company that touched upon
market penetration, market segmentation, market targeting and market size in the context of
experimentation, exhibited a similar percentage breakdown, with the notable exceptions of two
questions that inquired about the impact of experimentation on the revenue and profit of the
company. 100% of the participants of the survey felt that experimentation positively impacted
upon the revenue and profit of the company (see Figure 44).
In order to gain further insights into qualitative and quantitative answers given by the
participants, several cross-tabulations were produced (see Figures 31 and 32). The first cross-tabulation correlated the participants' opinions about the impact of experimentation on the competitive standing of the company with their job function. Opinions of the participants of the survey were uniformly distributed across all job functions represented in the survey. The second cross-tabulation correlated the answers with the participants' self-assessed experimentation experience (see Figure 32). Similarly, the opinions of the self-identified expert, intermediate and novice users were uniformly distributed and ranged from positive to unsure.
The complete extraction of the questions related to research question six was grouped and analyzed for significant statements.
Clusters of Meaning
The following answers expressed the prevailing opinion of the survey participants in each
category, “Experimentation made the company more competitive by remaining relevant to our
customers”, “I am not sure how to answer this question” and “Competitive impact is difficult to
ascertain.” Based on the divergent statements above, it was clear that study participants were
divided on the impact of experimentation on the competitive standing of the company. A
similar divergence of opinion was observed during the composite analysis of the question related
to the competitive standing of the company that touched upon market penetration, market
segmentation, market targeting and market size in the context of experimentation, even though a
slightly higher percentage of participants had a positive opinion about the impact of the
experimentation on the revenue and profits of the company. The following statements express
the prevailing opinions, “Experimentation had a direct impact on the bottom line of the
company” and “Experimentation allowed the company to expand our customer base, resulting in
higher revenues.” It was clear that study participants did not consider revenue and company profit to be part of the overall competitive standing of the company.
Textual Description
Based on the analysis of the clustered data, it was clear that study participants did not have a uniform opinion about the impact of experimentation on the competitive standing of the company. While examining the impact of the various elements that make up the competitive standing of the company, their opinion remained mixed. At the same time, study participants were
unhesitant in expressing their strong opinion in support of experimentation as one of the key
contributors to the profits and revenue of the company. It is also clear that some participants had
a difficulty interpreting the exact nature of the questions related to competitive standing of the
company. Analysis of the qualitative data revealed that the highest percentage of participants were either unsure how to answer this question or gave the shortest answer possible. In contrast, the revenue and profit questions yielded expressive answers and indicated a strong degree of conviction.
Composite Description
One of the biggest anomalies and the point of focus of this research question is the fact
that study participants felt drastically different about the impact of experimentation on the
overall competitive standing of the company and the impact of experimentation on the profit and
revenue of the company. It is highly likely that study participants did not consider revenue and profit to be significant contributors to the competitive standing of the company.
Credibility, Validity and Bias
In order to ensure validity and eliminate bias, study participants were invited to complete the survey under false pretenses by the COO of the company where the research was conducted and the researcher is employed. The survey was presented to the participants as a means of gathering internal feedback about their work. Since all of the study participants work in the same organization as the researcher of the
study, deception was absolutely vital to avoid credibility and bias issues. Even if any possible undue influence of the researcher could be somehow mitigated, knowledge of the true nature of the study could affect the results through attempts by the participants to be helpful to the researcher, thus contributing erroneous data that they felt would advance the aims of the research. Since the researcher of the study holds an executive position at the company, the risk of even an appearance of impropriety was too great for the true nature of the study to be revealed. If participants of the study became aware of the true intentions of the research, they would potentially try to skew their answers in ways that they would perceive to be helpful to the researcher.
Since the study employed a phenomenological approach, it was essential for the
researcher to mitigate his personal bias on the subject while interpreting the results of the study.
In order to do that, the researcher utilized a bracketing approach. Namely, prior to the analysis of the survey results, the researcher expressed his own thoughts regarding each of the research questions. While analyzing the results, the researcher compared his opinions with the opinions of the participants to make sure that the participants' views were not misinterpreted. The full set of results is presented in the next chapter. The issues of credibility, validity and bias are further explored in the next chapter as well.
CHAPTER 5: RESULTS, CONCLUSIONS, AND RECOMMENDATIONS
The last chapter presented a detailed analysis of the collected data. The issues of
credibility, validity and bias were specifically outlined and discussed. This chapter centers on
presenting the study results, outlining potential gaps, as well as providing recommendations for
future research. The results of the study were summarized as a cohesive narrative that allowed
the final conclusions of the study to be formulated. In addition, the study results and conclusions were compared with the findings of previous empirical research.
Results
One of the key aims of the study was to examine interactive marketing through the prism of active experimentation, an area whose improvement has largely failed to keep pace with technological advances. The following research questions were examined and analyzed: (a) What is the impact of experimentation on interactive marketing goals? (b) What are the key experimentation levers pertaining to interactive marketing? (c) What is the impact of experimentation on interactive marketing product innovation? (d) What is the impact of experimentation on interactive marketing product improvement? (e) What is the impact of experimentation on interactive marketing product development and deployment risk? (f) What is the impact of experimentation on the competitive standing of companies involved in interactive marketing?
In addition to these research questions the study proposed the following research null
hypotheses: (a) experimentation has a positive impact on interactive marketing goals; (b) experimentation has a positive impact on product innovation in interactive marketing; (c) experimentation has a positive impact on interactive marketing product improvement; (d) experimentation has a mitigating impact on product development and deployment risk; and (e) experimentation has a positive impact on the competitive standing of the companies involved in interactive marketing.
In order to confirm or reject these null hypotheses, the study comprised a mixed-method
survey where participants of the study were asked to reflect on various aspects of
experimentation in interactive marketing in the context of their job. The study was conducted in
the organization where the researcher is employed. The participants of the study were invited to
participate in the study under false pretenses by the Chief Operating Officer (COO) of the
company. Only the researcher of the study and the COO were aware of the true intentions of the
survey. Keeping the true intentions of the study a secret was a key to increasing validity,
credibility and reducing bias. The study was conducted in a double blind manner in order to
preserve confidentially and insure impartiality. In other words, the study participants were
unaware of the true reasons for the survey and the researcher was unaware of the identities of the
survey participants.
The survey was sent out in two installments to 23 participants. Based on the application of horizontalization, clusters of meaning, textual description and composite description, the analysis of the received data yielded the following results: (a) the null hypothesis asserting that experimentation has a positive impact on interactive marketing goals was strongly confirmed. 93.3% of the participants indicated that
experimentation had a positive impact on interactive marketing goals. This result was consistent
with the conclusions reached by Raman (1996), where user-tailored interactive advertising had a
high conversion outcome; (b) the null hypothesis asserting that experimentation has a positive
impact on product innovation in interactive marketing was confirmed as well. 81.8% of the participants felt that experimentation strongly contributed to product innovation. The results of this particular research question were consistent with the conclusions outlined by Von Hippel (1998), who concluded that product innovation was successfully driven by product users; (c)
the null hypothesis asserting that experimentation had a positive impact on interactive marketing
product improvement was also strongly confirmed. 81.8% of the participants indicated that
experimentation had a positive and lasting impact on product improvement. Similar conclusions
were reached by Thomke (1995) who found that experimentation significantly contributed to
product improvement; (d) the hypothesis asserting that experimentation had a positive impact on
product development and product deployment risk was similarly confirmed with 72.7% of study
participants indicating that experimentation had mitigated product development and deployment
risk. Both Von Hippel (1998) and Thomke (1995) reached conclusions in support of these
findings. In particular, their studies asserted that the economics of product development were improved by experimentation, which lowered cost and consequently reduced product development and deployment risk; (e) the hypothesis
asserting that experimentation has a positive impact on the competitive standing of the company
was rejected by the participants of the study. Only 45.5% of the survey participants felt that the
experimentation impact on the competitive standing of the company had been positive. There
are currently no empirical studies relating the competitive standing of the company with experimentation that could be used to corroborate or contradict these research findings.
The only research question without an associated research hypothesis, which examined the impact of experimentation on interactive marketing levers, yielded the following results: 80% of the participants felt that visual experimentation levers had the strongest impact on interactive marketing goals. This was closely followed by functional levers with 53.3% of the vote. These findings are supported in part by the conclusions from Jebakumari (2002), Mark (2003) and Milley (2002). In particular, Milley (2002) reached conclusions about interactive marketing that partially support these findings.
Conclusions
The results of the study indicated that experimentation had a profound impact on almost
all aspects of the company where the research was conducted. The majority of the research
results was supported by the conclusions of previous empirical research studies and was
consistent with the reference material. It was clear that the study participants felt that the
experimentation efforts of the company were overwhelmingly positive even though non-uniform
results were achieved at the different phases of adoption of experimentation. More specifically,
some participants felt that the initial experimentation efforts lacked focus and were disorganized.
As such, poor experimentation results were not deemed to be a function of the experimentation
efforts but rather due to organizational maturity. When the experimentation practice was fully established, results improved markedly. In the majority of the cases, the qualitative data was fully confirmed by quantitative responses. The results of the research question relating experimentation efforts with the competitive standing of the company were inconclusive. It appears that participants of the study did not consider the profit
and revenue of the company as significant contributors to the competitive standing of the
company. More specifically, if the research question that deals with the competitive standing of the company had been dropped from the study and replaced by a question relating experimentation to the profit and revenue of the company, the corresponding hypothesis would have been overwhelmingly confirmed. In fact, 100% of the participants felt that experimentation positively impacted the profit and revenue of the company. This aspect of experimentation warrants further study and exploration. Another possibly confounding aspect is that study participants may have experienced “survey fatigue”, and the questions related to competitive standing, placed at the end of a very long survey, may not have obtained an accurate or honest response. It can be argued, however, that the results have a general applicability to the eCommerce and Interactive Marketing industry at large. If these results are confirmed in a broader study, then experimentation in the context of interactive marketing may serve as a disruptive technology that would drive almost universal adoption and eventual commoditization, which in turn may spur the next wave of innovation in the industry.
Recommendations
One of the key recommendations is to confirm the results of this qualitative study by conducting a broader study of the impact of experimentation on interactive marketing goals in a diverse set of interactive marketing companies. The results of the present study indicate that this impact is fairly significant. If these results are confirmed in a broad study, they can be generalized to the industry at large. There are a number of factors that could potentially impact the technological efforts of interactive marketing companies. Future studies may include such topics as: exploring the adoption of experimentation in interactive marketing companies; and exploring metrics related to the success of experimentation efforts.
Based on the fact that this study received an inconclusive result regarding the impact of
experimentation on the competitive standing of the company, it is recommended that this subject be explored in isolation. This researcher believes that such an impact is significant and needs to
be explored in more detail. Additional research may be centered on comparing and contrasting
interactive marketing companies that utilize experimentation versus those that do not. A specific
focus of the study should concentrate on the use of experimentation to achieve competitive
advantage.
There is strong anecdotal evidence to suggest that closed-loop experimentation, which ties together all facets of interactive marketing such as Search Marketing, Display Marketing, Affiliate Marketing and Email Marketing, is more effective than isolated experimentation that involves only a few interactive marketing components. However, to date no empirical study has examined this relationship.
Lastly, the results of this study point to a strong correlation between a well established
experimentation process and the success of the experimentation. Additional study could focus on researching the impact of a well established experimentation process and its contribution to the overall success of interactive marketing efforts.
Limitations
One of the key limitations of the study is the fact that it was conducted in a single organization; as such, the quantitative results cannot be considered statistically significant.
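The single-organization constraint can be made concrete with a quick calculation: at survey-sized samples, a 95% confidence interval on any observed proportion is too wide to generalize from. The respondent counts below are hypothetical, chosen only to show the interval's width; this is a sketch, not the study's figures.

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score interval for a proportion; stays inside [0, 1]
    and is noticeably wide when n is small."""
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - half, center + half

# Hypothetical example: 14 of 20 respondents report a positive impact.
low, high = wilson_interval(14, 20)
# The interval spans roughly 0.48 to 0.85 -- far too wide to support
# statistical claims beyond the surveyed organization.
```

A sample of this size cannot distinguish a modest majority from a near-consensus, which is why the quantitative results here can only corroborate the qualitative findings rather than stand on their own.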
The quantitative questions of the study served to verify the consistency of the paired qualitative questions. In addition, participants from the Data Analysis and Business Development groups were sparsely represented. The study also used terminology and concepts that were significant and specific to the organization where the study was conducted. Even though such terminology is prevalent in the industry at large, it is possible that some aspects are either interpreted differently or not used elsewhere.
Another significant limitation of the study was the fact that the Technology groups were excluded from participating in the survey. Since the researcher was responsible for all of the technology, quality assurance, and project management efforts of the company, the risk of bias and potential loss of validity was too great to allow the Technology groups to participate. As such, a number of research questions that dealt with the technological aspects of experimentation remained unanswered.
Lastly, the results of the study are based on the interpretations of the researcher. As such, credibility and bias are potential limiting factors. Attempts were made to mitigate threats to validity and bias by using a number of techniques and approaches; however, it is impossible to exclude them entirely.
APPENDIX A. SURVEY TEMPLATE
Survey Page 1
Survey Page 2
Survey Page 3
Survey Page 4
APPENDIX B. CONCEPTUAL FRAMEWORK
APPENDIX C. INTERACTIVE MARKETING
APPENDIX D. MARKETING
APPENDIX E. EXPERIMENTATION CYCLE
APPENDIX F. DATA ANALYSIS
Figure F3. Job function and experimentation experience cross-tabulation
Figure F5. Experimentation impact and department cross-tabulation
Figure F7. Experimentation levers impact breakdown
Figure F9. Experimentation levers impact and job function cross-tabulation
Figure F11. Experimentation conditions impact and experimentation experience cross-tabulation
Figure F13. Experimentation CPA impact and experimentation overall impact cross-tabulation
Figure F14. Experimentation CPI impact and experimentation overall impact cross-tabulation
Figure F15. Experimentation CPA impact and experimentation overall impact cross-tabulation
Figure F16. Experimentation upsell impact and experimentation overall impact cross-tabulation
Figure F17. Experimentation CPI impact and experimentation overall impact cross-tabulation
Figure F19. Experimentation impact on product lifecycle and job function cross-tabulation
Figure F20. Experimentation impact on product lifecycle and experimentation experience cross-tabulation
Figure F21. Experimentation product innovation impact breakdown
Figure F22. Experimentation product innovation impact and experimentation experience cross-tabulation
Figure F23. Experimentation product innovation impact and job function cross-tabulation
Figure F25. Experimentation product improvement impact and job function cross-tabulation
Figure F27. Experimentation product deployment risk impact breakdown
Figure F28. Experimentation product deployment risk impact and experimentation experience cross-tabulation
Figure F29. Experimentation product deployment risk and job function breakdown
Figure F31. Experimentation competitive standing of the company impact and job function cross-tabulation
Figure F32. Experimentation competitive standing of the company impact and experimentation experience cross-tabulation
Figure F33. Experimentation revenue impact
Figure F36. Experimentation market penetration impact
Figure F37. Experimentation marketing goals impact horizontalization
Figure F38. Experimentation marketing levers impact horizontalization
Figure F39. Experimentation conditions impact horizontalization
Figure F40. Experimentation product lifecycle impact horizontalization
Figure F41. Experimentation product innovation impact horizontalization
Figure F42. Experimentation product improvement impact horizontalization
Figure F43. Experimentation product deployment risk impact horizontalization
Figure F44. Experimentation competitive standing impact horizontalization