
THE IMPACT OF EXPERIMENTATION ON PRODUCT DEVELOPMENT IN

COMPANIES INVOLVED IN INTERACTIVE MARKETING

by

Sergey L. Sundukovskiy

H. PERRIN GARSOMBKE, Ph.D., Faculty Mentor and Chair

LARRY KLEIN, Ph.D., Committee Member

JOSE NIEVES, Ph.D., Committee Member

William A. Reed, Ph.D., Acting Dean, School of Business & Technology

A Dissertation Presented in Partial Fulfillment

Of the Requirements for the Degree

Doctor of Philosophy

Capella University

August, 2009
UMI Number: 3369490

Copyright 2009 by
Sundukovskiy, Sergey L.

All rights reserved


UMI Microform 3369490


Copyright 2009 by ProQuest LLC
© Sergey L. Sundukovskiy, 2009
Abstract

Since their emergence, the Internet and Internet-related technologies have permeated almost all aspects of modern life. The impact of the Internet on the day-to-day activities of its users has been dramatic, but its effect on the business community has been even more profound. Besides yielding additional opportunities for existing businesses, the Internet has facilitated a new era of companies that place the Internet at the center of their business model, businesses that simply could not exist without it. Both Internet-centric and, increasingly, traditional businesses now rely on interactive marketing as a source of revenue, differentiation and competitive advantage. Despite its obvious importance to many businesses, the efficiency of interactive marketing has largely stagnated, or at best improves at a slow pace. This study examined interactive marketing through the prism of experimentation as a way of propelling interactive marketing forward and enabling it to keep pace with technological advances.

Experimentation lies at the core of product development, improvement and innovation. Active experimentation has been utilized as a product strategy in numerous business fields, but it has been largely ignored in interactive marketing. This study examined a number of experimentation models and their applicability to interactive marketing. In addition, it focused on the elements of interactive marketing that are conducive to experimentation.


Dedication

I dedicate this work to my wife Galina for her commitment, dedication and unconditional love, and to my children, Aaron, Rebekah and Miriam, for giving meaning to my life. Your love and support gave me the strength to see this project through.

To my parents, Valentina and Leonid, for always believing in me and forever encouraging me to see physical, emotional and intellectual limits as imaginary lines that are meant to be crossed.

Last, but by no means least, to my brother, Aleksey, for seeing me the way I wish I could truly be.

Acknowledgements

Special thanks to my friend Boris Droutman for introducing me to the subject of experimentation and helping me generate the ideas that made this dissertation possible. Thanks for encouraging me, listening to the things I say, and hearing the things I do not say.

Table of Contents

Dedication
Acknowledgements
Table of Contents
List of Figures
CHAPTER 1: INTRODUCTION
Introduction to the Problem
Internet Marketing
Interactive Marketing
Product Development
Background of the Study
Purpose of the Study
Research Questions
Definition of Terms
Assumptions and Limitations
Organization of the Remainder of the Study
CHAPTER 2: LITERATURE REVIEW
Experimentation Introduction
Experimentation Strategies
Experimentation Models
Interactive Marketing
Experimentation
Product Innovation
CHAPTER 3: METHODOLOGY
Description of the Methodology
Design of the Study
Population and Sampling
Measurement Strategy
Instrumentation
Data Collection
Data Analysis Procedures
Qualitative Data Display for Describing the Phenomenon
Qualitative Data Display for Explaining the Phenomenon
CHAPTER 4: DATA COLLECTION AND ANALYSIS
Overall Response Analysis
Research Objective One
Research Objective Two
Research Objective Three
Research Objective Four
Research Objective Five
Research Objective Six
Validity and Bias
CHAPTER 5: RESULTS, CONCLUSIONS, AND RECOMMENDATIONS
Results
Conclusions
Recommendations
Limitations of the Study
REFERENCES
APPENDIX A. SURVEY TEMPLATE
APPENDIX B. CONCEPTUAL FRAMEWORK
APPENDIX C. INTERACTIVE MARKETING
APPENDIX D. MARKETING
APPENDIX E. EXPERIMENTATION CYCLE
APPENDIX F. DATA ANALYSIS

vi
List of Figures

Figure F1. Response summary
Figure F2. Job function breakdown
Figure F3. Job function and experimentation experience cross-tabulation
Figure F4. Experimentation impact breakdown
Figure F5. Experimentation impact and department cross-tabulation
Figure F6. Experimentation impact and experimentation experience cross-tabulation
Figure F7. Experimentation levers impact breakdown
Figure F8. Experimentation levers impact and experimentation experience cross-tabulation
Figure F9. Experimentation levers impact and job function cross-tabulation
Figure F10. Experimentation conditions impact breakdown
Figure F11. Experimentation conditions impact and experimentation experience cross-tabulation
Figure F12. Experimentation conditions impact and job function cross-tabulation
Figure F13. Experimentation CPA impact and experimentation overall impact cross-tabulation
Figure F14. Experimentation CPI impact and experimentation overall impact cross-tabulation
Figure F15. Experimentation CPA impact and experimentation overall impact cross-tabulation
Figure F16. Experimentation upsell impact and experimentation overall impact cross-tabulation
Figure F17. Experimentation CPI impact and experimentation overall impact cross-tabulation
Figure F18. Experimentation product lifecycle impact breakdown
Figure F19. Experimentation impact on product lifecycle and job function cross-tabulation
Figure F20. Experimentation impact on product lifecycle and experimentation experience cross-tabulation
Figure F21. Experimentation product innovation impact breakdown
Figure F22. Experimentation product innovation impact and experimentation experience cross-tabulation
Figure F23. Experimentation product innovation impact and job function cross-tabulation
Figure F24. Experimentation product improvement impact breakdown
Figure F25. Experimentation product improvement impact and job function cross-tabulation
Figure F26. Experimentation product improvement impact and experimentation experience cross-tabulation
Figure F27. Experimentation product deployment risk impact breakdown
Figure F28. Experimentation product deployment risk impact and experimentation experience cross-tabulation
Figure F29. Experimentation product deployment risk and job function breakdown
Figure F30. Experimentation competitive standing of the company impact breakdown
Figure F31. Experimentation competitive standing of the company impact and job function cross-tabulation
Figure F32. Experimentation competitive standing of the company impact and experimentation experience cross-tabulation
Figure F33. Experimentation revenue impact
Figure F34. Experimentation profit impact
Figure F35. Experimentation market size impact
Figure F36. Experimentation market penetration impact
Figure F37. Experimentation marketing goals impact horizontalization
Figure F38. Experimentation marketing levers impact horizontalization
Figure F39. Experimentation conditions impact horizontalization
Figure F40. Experimentation product lifecycle impact horizontalization
Figure F41. Experimentation product innovation impact horizontalization
Figure F42. Experimentation product improvement impact horizontalization
Figure F43. Experimentation product deployment risk impact horizontalization
Figure F44. Experimentation competitive standing impact horizontalization
CHAPTER 1: INTRODUCTION

Introduction to the Problem

Internet Marketing

The introduction of the World Wide Web, enabled by the Internet, brought explosive user growth (Newman, 2001). Consequently, both online-based and traditional companies with a significant Internet presence began to realize the full potential of the World Wide Web as a marketing medium. Since the popularization of the Internet and Internet-related technologies, several new marketing paradigms have emerged that take advantage of the Internet. Their emergence was not planned; it occurred spontaneously as an offshoot of traditional marketing (Mark, 2003). Internet marketing was formulated as one such form of non-traditional marketing. Due to early disappointments and a lack of experience with the new medium, Internet marketing was initially used only to supplement traditional marketing efforts. According to eMarketer Research (2007), US online marketing spending shrank by 15.8% in 2002. Companies did not see Internet marketing as a separate marketing channel with a unique set of characteristics and considered it the same as radio and television (Raman, 1996). Internet marketing was combined with traditional television and radio campaigns because, at first, it was not considered capable of carrying the full weight of marketing responsibilities on its own.

However, over time marketers began to realize that traditional marketing campaigns consisting of television and radio advertisements and print media, designed to attract users to their websites, had limited success, whereas the Internet itself proved quite promising in that regard. According to eMarketer Research (2007), online advertising spending had reached $19.5 billion in 2007 and was projected to reach $36.5 billion in 2011. Companies realized that Internet marketing was not only a strong and independent marketing medium but also had a number of advantages over traditional forms of marketing such as television, radio and print media (Mark, 2003). In order to earn its independence and be considered comparable to the traditional dominant marketing channels, Internet marketing had to be evaluated against a multitude of traditional marketing problems.

It successfully answered a number of questions historically posed by marketing science. Internet marketing showed its potential for consumer accessibility, defining corporate identity, promoting brand awareness, enabling market segmentation, and achieving market localization and internationalization (Jebakumari, 2002). In addition, Internet marketing capabilities such as personalization, interactivity and traceability were counted among its advantages over traditional dominant media channels such as television, radio and print. The Internet allowed synchronous and asynchronous analysis of user behavior, which in turn enabled previously unavailable personalization capabilities (Milley, 2000). It had the potential to reach a vast number of World Wide Web users who would otherwise be either difficult to reach or, in some cases, completely unreachable. According to Internet World Stats (2007), by the end of 2007 the Internet had reached 1,319,872,109 users, with the largest percentage growth accounted for by countries in the Middle East, Africa and Latin America.

Since its inception, the Internet has developed into one of the strongest and most mature marketing channels, segmented into several sub-channels. At present, Internet marketing can be considered a conglomerate marketing methodology that provides a common umbrella for a set of diverse marketing sub-channels. The following distinct Internet marketing sub-channels have now emerged: (a) search engine marketing (SEM); (b) e-mail marketing; (c) affiliate marketing; (d) display marketing; and (e) social media marketing (see Appendix C). These Internet marketing approaches, founded upon the sociological, economic and cultural aspects of marketing, are now interwoven into the Internet marketing fabric as a cohesive entity. At the same time, each of these sub-channels has a set of unique characteristics, based on its execution and marketing strategies, that distinguishes it from other tangential marketing approaches (Mark, 2003).

Interactive Marketing

In addition to being part of Internet marketing, the social media marketing, affiliate marketing, search engine marketing and e-mail marketing sub-channels have another aspect in common: their interactivity. Each of these sub-channels relies on the user to take an interactive step directly related to the advertisement stimulus. As such, these marketing sub-channels can be placed under another umbrella called “interactive marketing”. In its purest form, Deighton (1996) defines interactive marketing as the ability of a computer-based system to interact with the user for marketing purposes. In many cases, however, interactive marketing consists of a multi-transactional interaction based on the user’s purchase and visit history, declared or implied preferences, and demographic information such as age, sex, income level and even race.

Most interactive marketing accomplished through the World Wide Web is enabled by the Internet. However, it would be incorrect to consider Internet marketing synonymous with interactive marketing (Deighton, 1996). As the Venn diagram shows (see Appendix D), not all Internet marketing is interactive, and not all interactive marketing is conducted on the World Wide Web or is Internet based. For instance, display marketing, one of the Internet marketing sub-channels, is not interactive in nature. In the same vein, iPods, mobile devices, gaming devices, interactive screen devices and e-books are capable of carrying a marketing message by means of wireless or hard-wired communication. Each of these devices has its own marketing specifics related to its size and interactivity, tracing power, and display and communication capabilities.

However, the Venn diagram (see Appendix D) also makes it evident that a great amount of overlap exists between Internet and interactive marketing. Chulho (1998) defined the “Interactive Age” as an age of full interactivity between humans and machines and between technology-connected humans. Similar parallels can be drawn with the “Information Age” and the “Participation Age” coined by Schwartz (2005). Blogging and social networking are two recent examples of the Interaction or Participation Age that are utilized by social media marketing.

At the heart of interactive marketing are two main concepts: “personalization” and “customer engagement”. Interactive marketing itself can be fully defined in terms of these concepts. More specifically, this researcher proposes an alternative definition of interactive marketing, describing it as a process of customer engagement through personalization. In turn, Steinberg (2006) defines customer engagement as a process of engaging potential or existing customers in inter-client or inter-customer communication by means of the push or pull marketing model. According to Haag (2006), personalization is a process of generating a unique experience for an ever finer-grained segment of customers. Taken to its logical extreme, personalization targets a single individual with stimuli uniquely tailored to that individual and that individual alone. The logical conclusion of these definitions is the hypothesis that the most effective customer engagement can be achieved through granular personalization activities.

Companies involved in interactive Internet marketing endeavor to achieve personalization through data mining and information analysis. Data mining and analysis constitute the bulk of the personalization effort, in terms of both time spent and relative importance. Ozdemiz (2002) describes personalization as an effort consisting of three major steps: (a) data gathering; (b) data analysis; and (c) personalized content generation. According to comScore (2007), the data gathering portion of the personalization process undertaken by major interactive marketing companies such as Yahoo, Google and Facebook has reached a significant, previously unseen level: in a single month in 2007, Yahoo collected data an average of 2,520 times for every visitor to its site. Mark (2003) posited that Internet marketing companies collect as much data as possible even without a specific goal in mind; since it is not always clear which customer data will become useful in the future, all available data is collected. It usually consists of webpage visits, visit history, purchasing history, abandonment and completion history, self-declared or inferred preferences, computer information, web browser information, self-declared or inferred demographic information, self-declared or inferred financial information, geographic information, and age and sex information (Mark, 2003). The collected data is carefully cataloged and analyzed. The purpose of this data analysis is to determine the most appropriate marketing strategy for a particular user and for other users with similar characteristics. Milley (2000) describes this approach as a “Data Empowered” marketing strategy.
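The three personalization steps described above (gather, analyze, generate) can be sketched as a tiny pipeline. This is an illustrative sketch only: the event records, category names and content rules are invented for the example and do not come from the study.

```python
from collections import Counter

def gather(events, visitor_id):
    """Step (a): collect the raw events recorded for one visitor."""
    return [e for e in events if e["visitor"] == visitor_id]

def analyze(visitor_events):
    """Step (b): reduce the event stream to a simple interest profile."""
    return Counter(e["category"] for e in visitor_events)

def generate_content(profile):
    """Step (c): choose the content variant matching the dominant interest."""
    if not profile:
        return "generic-homepage"
    top_category, _ = profile.most_common(1)[0]
    return f"landing-page-{top_category}"

# Hypothetical clickstream data for two visitors.
events = [
    {"visitor": "v1", "category": "cameras"},
    {"visitor": "v1", "category": "cameras"},
    {"visitor": "v1", "category": "laptops"},
    {"visitor": "v2", "category": "books"},
]
print(generate_content(analyze(gather(events, "v1"))))  # landing-page-cameras
```

In practice each step would run against far richer data (visit history, purchases, inferred demographics), but the shape of the pipeline is the same: raw events are reduced to a profile, and the profile selects the stimulus.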

The intention of any marketing strategy in general, and of interactive Internet marketing in particular, is to achieve a desired goal. Interactive Internet marketing generally pursues the goal of conversion, where the user is synchronously driven to a purchasing decision. According to Macias (2000), even though conversion is certainly the dominant interactive marketing goal, other goals may include customer retention, customer acquisition, lifetime value maximization and product up-sell. Each of these goals can be considered a question that interactive Internet marketing is supposed to answer. Namely, interactive Internet marketing is required to provide answers to the following questions: (a) what is the most effective customer retention strategy? (b) what is the most effective way to drive a customer to a purchasing decision? (c) what is the most effective customer acquisition strategy? and (d) what is the most effective product up-sell strategy?

By analyzing collected user data, interactive marketing researchers try to answer these questions in a predictive manner. Predictive data analysis, which is at the core of the data empowered marketing strategy, is accomplished through diverse data modeling techniques. Dou (1999) described the use of catastrophe theory to model online store sales. Other techniques include the Recency-Frequency-Monetary Value (RFM) model, designed to assess the customer purchasing decision. However, these and other models have several fundamental problems. First, they are predictive and as such carry a fair amount of statistical error due to the need to make assumptions and rely on guesswork. Consequently, they are subject to Simpson’s Paradox, where a lack of understanding of data granularity can lead to incorrect conclusions. Second, the Internet is an open system in which not all of the variables and their effects are clearly defined (Dou, 1999); confounding variables therefore become a real issue. Third, the feedback loop between data collection, data analysis and interactive marketing changes is quite long and inefficient (Dou, 1999).
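The Simpson’s Paradox risk noted above can be shown with a small numeric example (the figures are invented for illustration): a campaign that wins within every visitor segment can still appear to lose in the aggregate, because the two campaigns reached very different mixes of segments.

```python
# Invented conversion data, split by visitor segment.
data = {
    # segment: (conversions_A, visitors_A, conversions_B, visitors_B)
    "new":       (50, 1000,   8,  200),
    "returning": (20,  100, 190, 1000),
}

def rate(conversions, visitors):
    return conversions / visitors

# Campaign A beats campaign B inside every segment...
for segment, (ca, na, cb, nb) in data.items():
    print(f"{segment}: A={rate(ca, na):.1%}  B={rate(cb, nb):.1%}")
    assert rate(ca, na) > rate(cb, nb)

# ...yet loses once the segments are pooled, because B's traffic is
# dominated by the high-converting "returning" segment.
total_a = rate(sum(v[0] for v in data.values()), sum(v[1] for v in data.values()))
total_b = rate(sum(v[2] for v in data.values()), sum(v[3] for v in data.values()))
print(f"overall: A={total_a:.1%}  B={total_b:.1%}")
assert total_a < total_b
```

An analyst who looks only at the pooled numbers would pick campaign B; one who understands the granularity of the data would pick A. This is precisely the failure mode that makes aggregate-level predictive models hazardous.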

In addition to the limitations mentioned above, predictive modeling strategies are fundamentally unsuitable for the set of marketing tasks related to product development and risk mitigation. The data empowered marketing strategy requires extensive amounts of data in order to produce a predictive marketing model; at the stage of new product development, however, such data does not exist or is very limited. As such, predictive modeling marketing strategies are by definition not suitable for new product development. In the past, marketers had to rely on focus groups for guidance on new and existing product development (Mark, 2003). However, the focus group method has proven extremely error prone due to the Survey Paradox, where people say one thing and do another. According to Mark (2003), in many cases companies would also fail to assemble a focus group of the required diversity, so it was not uncommon to see focus group results skewed toward a particular group of users.

In addition, the focus group approach to product development resulted in monolithic product development driven by corporate executives. In this model, product ideas were generated by company managers and corporate executives. Kohavi, Longbotham, Henne and Sommerfield (2007) described this model as the “Highest Paid Person’s Opinion” (HiPPO), where the most senior member of the team comes up with the product idea or is responsible for choosing among multiple product options. Once the product idea is selected, the resulting product is built in a monolithic fashion and subsequently demonstrated to the focus group. One possible name for the risk profile of this type of product development is “risk backloading”, where the bulk of the risk is borne at the end of the product development lifecycle. Consequently, HiPPO-initiated, focus-group-validated product development is extremely deficient in both risk mitigation and consumer-product fit.

The main goal of this study is to propose an alternative interactive Internet marketing product development strategy that mitigates the shortcomings of predictive data modeling and of HiPPO and focus group driven product development. It addresses data modeling shortcomings related to statistical uncertainty, the data feedback loop and confounding variables. In addition, it recommends a more advantageous interactive Internet marketing product development approach that places product initiative in the hands of end-users and allows for a more favorable risk profile for product innovation and product development. Even though this study treats Internet and interactive marketing synonymously, it specifically considers only the part of interactive marketing that is enabled by World Wide Web and Internet technologies. The findings of the study should, however, be applicable to the interactive devices mentioned above.

Product Development

As mentioned earlier, the use of experimentation to investigate the problem has been

described by several researchers. It is possible to consider product development as a particular

type of problem and so the process of experimentation should be fully applicable to product

development. According to Thomke (1997), experimentation as a method of product

development has been utilized by a number of industries including Automotive, Pharmaceutical

and Electronics. Furthermore von Hippel (1998) posited that experimentation facilitates product

development by end-users of the product. User-driven product development allows companies to

get away from the HiPPO principal, where product decisions are made by the people who are

furthest removed from the product itself. Product users themselves consciously or

subconsciously galvanize and propel product development forward. More importantly, they do so with their actions instead of their words, and in this way they eliminate the Survey Paradox.

Thomke, von Hippel, and Franke (1997) argued that employing experimentation in product

development allows companies to improve their competitive standing. Since products are shaped

by the end-users directly, the chances of product failure are greatly decreased. The

experimentation process allows controlled failures early and often when the cost of change is

minimal. By employing the experimentation approach companies reverse the risk profile from

back-loading to front-loading. Depending on the nature of the product, companies may utilize

several experimentation modes: (a) simulation; (b) prototype; and (c) sample. Experimentation

modes depend on such factors as experiment result applicability, experiment cost and experiment

efficiency. According to Thomke (1997), experimentation modes constitute a continuous process

optimized based on the efficiency vs. number of experimental cycles.

Based on the applicability of experimentation to product development in other areas it

would seem logical that the process of experimentation would be pertinent to Internet marketing

and interactive marketing as well. However, few if any empirical studies have been

conducted to explore this matter. Researchers such as Li (2003), Raman (1996), Mark (2003),

among others, have broached the subject from the predictive modeling point of view. Even when experimentation is mentioned, it lacks continuity with interactive marketing itself and is always

driven by reactive analysis.

Background of the Study

This study examines the applicability of experimentation to product development in the

confines of Internet and interactive marketing. Since empirical evidence of experimentation in the field of interactive marketing is sorely lacking, it draws on experimentation experience from

other industries and product categories. Its background is rooted in the fields of experimentation,

product development, product innovation, Internet development, Internet marketing and

interactive marketing. The narrative of the study is centered on the product development

strategies in interactive Internet marketing. It is designed to address what can be called the

Information Age Product Development Paradox, where companies that have a significant

Internet eCommerce presence tend to ignore technology-centric product development strategies

and instead favor low-tech solutions. The paradox consists of a seeming contradiction between relying on technology as the source of survival, yet disregarding it when it comes to product development. With regard to product development, companies involved in Internet and interactive marketing tend to ignore product development strategies that have been

developed in other fields. As such they face problems related to product risk, product

improvement and product innovation.

Purpose of the Study

The main purpose of the study is to examine the applicability of experimentation as the

product development strategy in the companies involved in interactive marketing. It examines

the use of experimentation in all parts of the interactive marketing product development

lifecycle: (a) idea generation; (b) idea screening; (c) concept development and testing; (d)

business analysis; (e) beta testing and market testing; (f) technical implementation; and (g)

commercialization. In addition this study examines the impact of experimentation on product

evolution as a function of user interactivity. It aims to contribute to the experimentation body of

knowledge, as well as to establish an empirical base for interactive marketing experimentation

practice that is quite limited at the present time. Furthermore, this study looks at the unique

elements of interactive marketing experimentation. It attempts to define the majority of the

exogenous and endogenous interactive marketing factors and their effect on interactive

marketing goals. The secondary purpose of the study is to point out the gaps and propose

improvements in the Design of Experiments Theory, as it pertains to interactive marketing.

Research Questions

This study aims to answer six main research questions. What is the impact of

experimentation on interactive marketing goals? What are the key experimentation levers

pertaining to interactive marketing? What is the impact of experimentation on product innovation

in interactive marketing? What is the impact of experimentation on interactive marketing product

improvement? What is the impact of experimentation on interactive marketing product

development and deployment risk? What is the impact of experimentation on the competitive

standing of the companies involved in interactive marketing?

Online conversion is one of the key goals of interactive marketing companies. Even

though conversion is not the only goal, the majority of interactive marketing companies are

striving to achieve higher conversion. Although the conversion events of interactive marketing companies are quite diverse, in essence a conversion event is equivalent to a revenue-generating event. It is paramount to determine if the use of experimentation has a positive effect

on online conversion. This study examines a number of exogenous and endogenous variables

pertaining to interactive marketing and analyzes their impact on the interactive marketing goals.

The aim of the study is not to define an exhaustive list of exogenous and endogenous factors

across all interactive marketing products and environments, but rather to determine an effective

subset usable across interactive marketing companies of different types.

A significant number of the research questions posed by this study are devoted to the

analysis of the impact of experimentation on product development lifecycle. More specifically

the study aims to examine the impact of experimentation on product innovation, product

improvement, product development and product deployment risk. Similar to the goals, the

interactive marketing companies’ products are equally complex and diverse. However, it is

important to understand the fundamental benefits offered by experimentation as it relates to

interactive marketing product development regardless of the product type.

Lastly, this study will examine the impact of experimentation on the competitive standing

of companies in the interactive marketing space. The aim of the study is to show that companies

in the interactive marketing space that abstain from experimentation, in relation to product

development, ignore it at their own peril and put their competitive standing and quite possibly

long term survival at risk.

Nature of the Study

Experimentation in the interactive marketing field is a social phenomenon since it is

designed to achieve the best results through social interaction. Despite the fact that the social

interaction in this context occurs between man and machine, the social context of the interaction

is preserved. As such the use of a qualitative design approach is consistent with social

constructivism and fully justified as the research approach for this dissertation. It would have been entirely acceptable to employ a quantitative or mixed-method design approach, since

experimentation relies heavily on statistical analysis and statistical data modeling. However, the

topic of this dissertation and the state of the empirical research in the experimentation field of

interactive marketing are more conducive to qualitative analysis. This study is theory-forming in nature and, as such, is consistent with one of the strong fundamentals of qualitative analysis.

Significance of the Study

The experimentation approach can give companies involved in interactive marketing the

means to test marketing ideas without putting the majority of their traffic at risk. In the past,

marketing ideas have been tested using surveys and focus groups. However, this approach is slow and inexact. It also gives rise to what can be called a “Survey Paradox”, where people say one

thing and do another (Sheffrin, 1996). The process of experimentation presents qualitative and

quantitative results and represents an innovative development in testing Internet marketing and

interactive marketing ideas. However, little to no empirical research confirms a positive effect of experimentation on online conversion when used as part of

interactive marketing strategies. Furthermore, due to a lack of solid theoretical research

companies involved in experimentation in the field of interactive and Internet marketing are

forced to develop a practical base for experimentation through trial and error. Such an approach results in limited success and might lead companies to abandon experimentation altogether. In addition, companies that do successfully pursue experimentation in the context of interactive marketing often limit their success through poor alignment between the Business and Information

Systems designed to conduct experimentation. As such experimentation is done haphazardly,

lacks continuity and does not result in a cohesive strategy.

The significance of this study is in establishing a theoretical base behind experimentation

in the interactive and Internet marketing fields. One of the key implications of the study is the

assertion that companies that are involved in interactive and Internet marketing, outside of the

experimentation framework, negatively impact their competitive standing, increase product

development risk and reduce their likelihood of success in achieving their marketing goals.

Another essential contribution of the study is an outline of the implementation process in the

context of the experimentation framework that allows cohesion to be achieved between the departmental units involved in interactive marketing experimentation. This study positions

experimentation as a product development and product improvement method designed to

optimize the marketing goals of the companies involved in interactive and Internet marketing.

Definition of Terms

Base Flow: the current web site execution path with the highest fitness value. The performance of all related Flows is compared to the Base Flow.

Challenger Flow: variation of the Base Flow designed to achieve a higher fitness value.

Confirmation Flow: the previous Base Flow, reintroduced to confirm its losing status against the current Base Flow. The introduction of the Confirmation Flow can be classified as confirmation testing, where the results of the previous Experiment are reconfirmed. The purpose of confirmation testing is to make sure that the previous Base Flow truly lost to the current Base Flow in a head-to-head comparison, and that the result was not due to statistical error or to the impact of an unaccounted-for or unknown confounding variable.

Experiment: manipulation of one or more of the endogenous variables in the context of

the exogenous variables with the purpose of achieving Silo Goals. The purpose of the experiment

is to reject or accept the null hypothesis.

Experimentation Levers: collection of exogenous variables that is intrinsic to the

Experiment.

Flow: single variation of the web site execution path. Typically the experiment consists

of several Flows that manipulate a single variable for a side-by-side comparison.

Multivariate Experiment: experiment consisting of manipulation of multiple endogenous

variables simultaneously.

Multiple Singlevariate Experiments: Multiple experiment sets affecting a single

endogenous variable per set.

Phantom Flow: flow that does not cause a variation in behavior. It represents a virtual

duplication of any physical flow for achieving statistical significance.

Silo: area under experimentation. The Silo usually consists of a collection of endogenous

variables that can be manipulated in a particular context. In typical interactive marketing, Silos consist of landing pages, upsell pages, offer pages, etc.

Silo Goal: purpose of the experimentation. In the majority of interactive marketing cases

the Silo Goal is represented by conversion, also known as a purchasing decision. Other Silo Goals may include link-offs, upsell actions, subscriptions, content contributions, etc. In general terms, the goal of the experimentation is to achieve the highest fitness value.

Singlevariate Experiment: experiment consisting of manipulation of a single endogenous

variable.

View: a single web page along the web site execution path. Typically a Flow consists

several Views that contain one or more endogenous variables that are being manipulated.

Imposed Flow: flow that allows predictive execution of the web site path without

distribution and analysis. Imposed Flow is often utilized for regulatory or testing purposes.

Silo Visit: represents a visit to one of the areas under experimentation, typically

represented by a collection of web pages.

Site Visit: represents a visit to the web site or any other Traffic Origin of the interactive

marketing company.

Traffic Origin: source where web traffic originates. Traffic Origin is often represented by

one of the interactive marketing channels such as email, search, display, affiliates, call center and

social media. Traffic Origin is one of the key exogenous variables.

Traffic Criteria: criteria that are intrinsic to the incoming web traffic, but extrinsic to the Experiment itself. Traffic Criteria are a collection of exogenous variables.

Traffic Distribution: allocation of the web traffic between Experiments and Flows

according to the distribution algorithm. Traffic Distribution between Experiments and Flows

must add up to 100%; otherwise, some of the incoming traffic will fail to be distributed.

Vertical: product-centric collection of interactive marketing assets.

View Visit: represents a display of a web page or any other interactive marketing asset, such as an email, banner, etc.

Visitor: represents a unique visitor to the web site.
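Taken together, these definitions imply a simple data model: an Experiment lives in a Silo and distributes traffic across Flows whose allocations must sum to 100%. The sketch below is purely illustrative and not part of the study; all class and field names are hypothetical.

```python
# Hypothetical sketch of the experimentation vocabulary defined above.
# The names (Silo, Flow, Experiment) follow the definitions; the code
# itself is an illustration, not part of the study.

from dataclasses import dataclass, field

@dataclass
class Flow:
    name: str          # e.g. "Base Flow", "Challenger Flow"
    allocation: float  # share of Silo traffic routed to this Flow, in percent

@dataclass
class Experiment:
    silo: str               # area under experimentation, e.g. "landing page"
    flows: list = field(default_factory=list)

    def validate_distribution(self) -> bool:
        # Traffic Distribution across Flows must add up to 100%,
        # otherwise some incoming traffic fails to be distributed.
        return abs(sum(f.allocation for f in self.flows) - 100.0) < 1e-9

exp = Experiment(silo="landing page",
                 flows=[Flow("Base Flow", 50.0),
                        Flow("Challenger Flow", 50.0)])
print(exp.validate_distribution())  # True: allocations sum to 100%
```

A distribution of, say, 60% and 30% would fail this check, signaling that 10% of incoming traffic would go undistributed.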

Assumptions and Limitations

In addition to paradigmatic and methodological assumptions this study has a number of

domain assumptions. First, it assumes the synonymous nature of the following terms: A/B Testing,

Split Testing, A/B/C Testing, Singlevariate Testing, Singlevariate Multi-Experiment Testing, and

Multivariate Testing. All of these terms are aimed at describing the same phenomenon and have

evolved under a common umbrella (see Appendix C). However, it needs to be noted that the term

multivariate testing refers to the experimentation technique related to univariate testing

strategies, yet it is not synonymous with them. However, for all intents and purposes, the practice of multivariate testing, as well as all univariate testing techniques, will be assumed to be describing a

common phenomenon. In cases where the study discusses the intricacies of a particular

experimentation approach, the name of the approach will be explicitly noted. In the same vein,

such terms as Internet marketing and interactive marketing are treated synonymously and

interchangeably. In cases where a distinction between these concepts needs to be made it is made

explicitly.

This study also assumes that all participants of the study are not aware of their active

participation in the study. This assumption is essential for eliminating participant bias. It is also

assumed that all participants of the study have at least cursory knowledge of interactive and

internet marketing as well as a practical understanding of experimentation as it applies to the

above mentioned concepts.

Besides the listed domain assumptions, this study has a number of paradigmatic and

methodological assumptions. In some respects it follows the basic assumptions of the systems

approach. It assumes an objective reality that is independent of the observer. Similar to the systems

approach the key aim of experimentation is to model the real world phenomenon. As such it

employs inductive and deductive reasoning as well as verification as the basis for its conclusions.

It assumes that the results of any given experiment may reflect chance rather than causal relationships and

must be verified by further experimentation.

The methodological assumptions of the study are related to phenomenological inquiry.

According to Becker (1992), the researcher of the phenomenological study is viewed as a co-

creator of knowledge alongside the participants of the study. Even though such researcher

involvement is beneficial, in this particular case the discrepancy between the participant’s and the researcher’s knowledge of the studied phenomenon is too great. As such, the level of threat from researcher bias to the validity, generalizability and repeatability of the study is too great. Consequently,

the researcher employed a combination of transcendental and experimental phenomenological

methodology that allowed the researcher to detach his experience from the experience of the

participants of the study.

Organization of the Remainder of the Study

The remainder of the study consists of four additional chapters. Chapter Two examines

literature related to interactive and Internet marketing, product development and

experimentation. The literature has been selected to describe each of the aspects of the study

independently yet leading to cohesion between experimentation, interactive marketing and

product development concepts. Chapter Three describes the design methodology that was chosen

to conduct the study. It defends the choice of research methodology by examining the

methodological fit to the research approach and the subject of the study. It also describes

elements of the study including the data collection procedures, data sampling, data collection

instruments and data coding. Chapter Four illustrates the findings of the study and undertakes a

detailed analysis of the results of the study. It consists of data analysis and data display for

describing and explaining the phenomenon of the study. Chapter Five summarizes the findings of the study, introduces alternative hypotheses for those findings, discusses validity and bias, and examines trends in the findings. It also proposes topics for future

research and the areas of inquiry in the subject area.

CHAPTER 2: LITERATURE REVIEW

Experimentation Introduction

The process of experimentation in general is at the center of scientific discovery. In fact

science or knowledge derived from experimentation is called “experimental science”. People

have utilized experimentation since ancient times with one of the classic examples of early

experimentation originating from Egypt around 2613-2589 BC. When Egyptians attempted to

build a smooth-sided pyramid they engaged in the process of experimentation. They initially

started building a pyramid at Meidum which collapsed due to angle acuteness. Based on the

result of this failed experiment Egyptians altered the angle of the Bent-Pyramid at Dahshur more

than half way through to save it from collapse, resulting in a bent shape. Subsequent smooth-

sided Egyptian pyramids have utilized the correct angle from the outset. By looking at the

experiment conducted by the Egyptians it is possible to extract several observations: (a)

experiments often fail (sometimes they are designed to fail); (b) full scale experiments can be

quite expensive; (c) conducting experiments consecutively may take a long time; and (d)

conducting experiments concurrently allows effective side-by-side comparison. Unbeknownst to them, this Egyptian experiment created the basis for the Design of Experiments theory. This early experiment and the subsequent application of its results contained the majority of the elements found in modern product experiments.

Thomke (1997) described experimentation as one of the forms of problem solving. In

short Thomke (1997) defined experimentation as a trial and error process. According to Thomke

(1998), the experimentation process consists of four major steps: (1) design (design consists of

coming up with an improved solution based on the previous experience from the preceding

experiment cycles); (2) build (the build step consists of modeling and constructing products to be

experimented upon); (3) run (the run step of the experimentation process consists of executing

the experiment in the real or simulated environment); and (4) analyze (analysis consists of mining and investigating the data collected during experiment execution). Thomke (1998) pointed

out that the experimentation process changes under pressure from exogenous elements. These

exogenous elements consist of uncertain requirements, environmental and technological changes.
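Thomke's design-build-run-analyze cycle can be sketched as an iterative loop. This is a minimal illustration rather than Thomke's own formalization; the toy objective function and all names below are hypothetical.

```python
# Hypothetical sketch of Thomke's design-build-run-analyze cycle. The
# "product" here is a single numeric parameter, and the "experiment"
# evaluates a toy objective standing in for a real development effort.

def run_experiment(parameter):
    # Run: execute the experiment in a (here, simulated) environment.
    return -(parameter - 7) ** 2  # toy objective, peaks at parameter = 7

def experimentation_cycle(initial, cycles=10, step=1.0):
    best, best_score = initial, run_experiment(initial)
    for _ in range(cycles):
        # Design: propose improved solutions based on preceding cycles.
        candidates = [best - step, best + step]
        # Build + Run: construct each candidate and execute the experiment.
        scored = [(run_experiment(c), c) for c in candidates]
        # Analyze: mine the collected results and keep the best candidate.
        score, candidate = max(scored)
        if score <= best_score:
            break  # no further improvement found
        best, best_score = candidate, score
    return best

print(experimentation_cycle(0.0))  # converges to 7.0
```

Each pass through the loop is one experiment cycle; the design step feeds on the analysis of the previous run, which is the learning Thomke describes.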

Experimentation Strategies

The process of conducting experimentation is not uniform. The choice of the experimentation

strategy depends on the problem it is trying to resolve. In order to understand experimentation

strategies it is important to look at what biologists define as the “fitness landscape”

(Beerenwinkel, Pachter & Sturmfels, 2007). Quite often the solution to a particular problem is

not singular. Fitness landscape consists of all possible solutions to a particular problem. A fitness

function defines the quality of the solution in relation to other solutions to the same problem. The

optimal solution to the problem is thought to have the highest fitness value.

A real-world demonstration of the fitness landscape can be observed by looking at a simple problem. For example, suppose the way between home and the office consists of n routes. The set

of all routes constitutes a fitness landscape. Among all of the possible routes one of the routes is

the “best”, where best = f(n). Since best is a relative term, it must be defined in the context of the

fitness landscape. It is likely that the majority of people would consider the best route to

constitute shortest time. However, it is possible that some would choose a scenic route to be the

best. It is also possible to have multiple optimal solutions where the best route changes based on

exogenous elements such as time of day, day of the week and weather conditions. In that case, best = f(n, t), taking the time of day into consideration.
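The route example can be expressed directly as a fitness landscape and a fitness function. A minimal sketch with invented route names and travel times, where higher fitness means shorter travel time:

```python
# Hypothetical fitness landscape for the home-to-office routing example.
# Route names and travel times are invented for illustration.

routes = {                    # the fitness landscape: all candidate solutions
    "highway": 22,            # minutes
    "surface streets": 31,
    "scenic coastal road": 48,
}

def fitness(route):
    # Fitness function: shorter travel time means higher fitness,
    # so we negate the duration.
    return -routes[route]

best = max(routes, key=fitness)
print(best)  # "highway" has the highest fitness (shortest time)
```

A driver who values scenery instead would simply swap in a different fitness function over the same landscape, which is exactly the sense in which "best" is relative.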

Thomke (1998) defines three experimentation strategies: (a) parallel experimentation; (b)

serial experimentation with minimal learning; and (c) serial experimentation with learning. It is

possible to demonstrate these experimentation strategies by continuing with the “driving from

home to office” problem. To identify the route with the shortest driving time between home and office as quickly as possible would require employing a parallel experimentation

strategy. However, it is impossible to accomplish this using a single driver, since it requires

taking all possible routes simultaneously. The driver of the car would need help from his friends.

If the driver of the car was determined to find the shortest route himself/herself, he/she could

employ serial experimentation with minimal learning or serial experimentation with learning. The

serial experimentation with minimal learning would require the driver to follow a predefined

plan of taking different routes until all possible routes were exhausted. After all routes have been tried, the driver would have to analyze the time each has taken to determine the one with the shortest time. Serial experimentation with learning would allow the driver to avoid trying all routes by analyzing the results of initial experiments and eliminating routes that certainly would not

yield good results.
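Serial experimentation with learning can be sketched as pruning: results of early trials eliminate untried routes that share a feature with a clearly poor result. The routes, the shared feature (a congested bridge), and the threshold below are invented for illustration.

```python
# Hypothetical sketch of serial experimentation with learning. The driver
# tries routes one at a time and eliminates untried routes that share a
# feature (here, a congested bridge) with a poor earlier result.

routes = {
    "A": {"time": 25, "uses_bridge": False},
    "B": {"time": 55, "uses_bridge": True},
    "C": {"time": 27, "uses_bridge": False},
    "D": {"time": 58, "uses_bridge": True},
}

def serial_with_learning(route_order, threshold=45):
    tried, eliminated = {}, set()
    for name in route_order:
        if name in eliminated:
            continue  # learning: skip routes ruled out by earlier results
        tried[name] = routes[name]["time"]
        if routes[name]["time"] > threshold and routes[name]["uses_bridge"]:
            # Learn from the failure: eliminate all untried bridge routes.
            eliminated |= {n for n in routes
                           if routes[n]["uses_bridge"] and n not in tried}
    best = min(tried, key=tried.get)
    return best, tried

best, tried = serial_with_learning(["A", "B", "C", "D"])
print(best, sorted(tried))  # route D is never driven
```

Serial experimentation with minimal learning would instead drive all four routes; the learning variant saves one full experiment here, and far more as the landscape grows.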

Experimentation Models

Experiments are further complicated by endogenous and exogenous elements. It

is certainly possible to affect the experimental results by changing the endogenous elements of

the system (see Appendix E). For instance, besides the different routes, changing drivers, car type, fuel type, and the number of drivers, among many other things, would possibly affect the

driving time. The experiments with multiple changing endogenous elements are called

multivariate. The experiments with a single endogenous variable are called univariate. It is

always possible to represent multivariate experiments as a series of univariate ones, by

temporarily freezing all but one variable. Ideally, in order to understand the impact of all

exogenous variables each of them requires a separate experiment. In other words in order to

understand the impact of the time of departure from the office on the overall travel time, the

driver needs to conduct experiments by leaving the office every hour on the hour. In the same vein, all weather conditions must be tested as well.

Based on the analysis of the experimental models, it is quite obvious that the complexity of experimentation grows combinatorially with the number of endogenous and exogenous elements, such that Number of Experiments (E) = k * (m^n)!, where k is the number of steps in the process, m is the number of endogenous variables and n is the number of exogenous

variables. Based on the formula above it is possible to arrive at the conclusion that the chance of

guessing the option with the highest fitness value, when the number of endogenous and exogenous elements is beyond trivial, is infinitesimally small.

At the same time, running experiments on all combinations of endogenous and exogenous factors in order to determine the best possible combination yielding the highest

fitness value would constitute a full factorial experiment (Xu & Wu, 2001). However, even with a

trivial number of exogenous and endogenous factors the number of resulting experiments that

satisfies full factorial design is truly staggering. By applying the formula listed above, a single-step process with 3 endogenous factors and 3 exogenous factors would result in approximately 1.0888 × 10^28 experiments. For all intents and purposes, running full factorial experiments beyond a trivial number of factors is simply not practical.
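The cited figure can be checked directly from the formula E = k * (m^n)! with k = 1, m = 3, n = 3, since 3^3 = 27 and 27! is on the order of 10^28:

```python
# Check of the experiment-count formula E = k * (m^n)! from the text:
# a single-step process (k = 1) with m = 3 endogenous and n = 3 exogenous
# factors yields (3^3)! = 27! experiments.

from math import factorial

def experiment_count(k, m, n):
    return k * factorial(m ** n)

e = experiment_count(k=1, m=3, n=3)
print(f"{e:.4e}")  # prints 1.0889e+28, i.e. roughly 1.0888... * 10^28
```

The exact value, 27! = 10,888,869,450,418,352,160,768,000,000, makes the impracticality of full factorial designs at even this modest scale concrete.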

In order to resolve the conundrum of no experimentation and full factorial

experimentation it is necessary to look at the nature of exogenous and endogenous factors. Not

all of the variables involved in the experiment affect the outcome of the experiment equally.

Depending on the nature of the experiment it is possible to find a smaller subset of variables that

have the greatest effect on the experiment. Anderson (1972) called this approach to

experimentation partial factorial. There are a number of statistical methods for determining which

variables truly matter to the outcome of the experimentation. Li (2003) identified the following

partial factorial reduction models: (a) Univariate Poisson (relies on the analysis of the variable of

all of the involved variables); (b) Univariate Tobit without Log Transformation; (c) Univariate

Tobit with Log Transformation; (d) Discretized Univariate Tobit with Log Transformation; (e)

Discretized Univariate Tobit with Heterogeneity; (f) Multivariate Count; (g) Multivariate Count with Mixture; and (h) Multivariate two-state hidden Markov Chain Tobit.

Even a cursory look at the listed models allows them to be separated into two categories:

(a) univariate; and (b) multivariate. Fundamentally univariate models, where ANOVA tests are

applied in succession, are designed to ascertain the effect of the independent variables on the dependent variables (Biskin, 1980). In the context of experimentation, univariate models are designed to highlight the exogenous and endogenous factors that have a significant impact on the

outcome. On the other hand multivariate models, where a MANOVA test is conducted, are

designed to come up with sets of independent variables that have an impact on the dependent variables (Huberty, 1986). Again, taking the experimentation context into account, the

multivariate models are designed to highlight sets of exogenous and endogenous factors that

have a significant impact on the outcome of the experimentation. According to Huberty and Morris (1989), the fundamental difference between multiple univariate ANOVA tests and multivariate MANOVA tests consists in the consideration of the effects of the independent variables on each other and their compound effect on the outcome. More specifically, univariate models tend to ignore the relationship between exogenous and endogenous factors and their compounding effect, whereas multivariate models take this relationship into account.

In practical terms, going back to the “home to the office” driving example, univariate

models consider the independent impact of the weather conditions and the time of departure on the driving time, whereas multivariate models would consider these two factors in conjunction

with each other.
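The distinction can be made concrete with the driving example. In the invented data below, travel time depends on weather and departure time jointly: each factor's marginal (univariate) means are identical, while the joint (multivariate) cell means reveal a strong interaction. All numbers are hypothetical.

```python
# Hypothetical driving times (minutes) under two factors. The numbers are
# constructed so that neither factor matters on its own (univariate view)
# while their combination matters a great deal (multivariate view).

times = {
    ("clear", "8am"): 20, ("clear", "5pm"): 40,
    ("rain",  "8am"): 40, ("rain",  "5pm"): 20,
}

def marginal_mean(factor_index, level):
    # Average over all cells sharing one level of one factor,
    # which is what a univariate analysis effectively examines.
    vals = [t for key, t in times.items() if key[factor_index] == level]
    return sum(vals) / len(vals)

# Univariate view: every level of each factor averages to 30 minutes,
# so neither weather nor departure time appears to matter alone.
print(marginal_mean(0, "clear"), marginal_mean(0, "rain"))  # 30.0 30.0
print(marginal_mean(1, "8am"), marginal_mean(1, "5pm"))     # 30.0 30.0

# Multivariate view: the cell means differ by a factor of two, so the
# weather x departure-time interaction drives the outcome.
print(times[("clear", "8am")], times[("rain", "8am")])      # 20 40
```

A series of univariate tests on this data would find no effect at all, while a joint analysis of the factor combinations would detect it immediately, which is the compounding effect Huberty and Morris describe.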

Even though interactive and Internet marketing are relatively new phenomena they have

generated a fair amount of popular and scientific ferment. It is even fair to say that the scientific

community has been lagging behind interactive and Internet marketing practitioners who have

pushed exploration boundaries. At the same time, in recent years, interactive and Internet

marketing, as a subject of scientific inquiry, have seen an increased rate of exciting empirical

research. These studies have focused on interactive and Internet marketing from social,

psychological and physiological perspectives (Jebakumari, 2002; Milley, 2000; Macias, 2000;

Liu, 2002; Newman, 2001; Mark, 2003; Bezjian-Avery, 1997; Raman, 1996).

Unlike Internet and interactive marketing, experimentation is not a new phenomenon.

There is abundant evidence that ancient Egyptians conducted experiments during pyramid

construction. In a more recent example of experimentation, James Lind, while serving aboard the Salisbury, conducted an experiment using citrus to cure scurvy. One of the notable

differences between early experimentation efforts and the experiments conducted by Lind was

the use of control and treatment groups. The results of the control group were compared with the

results of treatment groups in order to confirm or reject the hypothesis of the experiment. In the

early 20th century Ronald Fisher formulated a mathematical method for designing and analyzing

experiments. He introduced “factorial” as the term applicable to experiments involving several factors or variables at the same time (Fisher, 1935). Fisher (1926) initially used “complex

experimentation” as the term describing experimentation with multiple variables at the same

time. In more recent years researchers have focused on experimentation in the context of product

development.

Experimentation methods have been researched extensively in the context of Electrical and

Mechanical Engineering (Wang, 1999; Hansen, 1989; Donne, 1996). It is certainly not surprising

that experimentation practice has been widely employed in industrial manufacturing since

manufacturing product commitment is quite expensive and may result in significant losses and even, in some cases, affect the long-term survival of the company. The product development

lifecycle often requires experimentation to be part of the product development process. The

deceptive ease of change in the Internet and interactive marketing product development has

resulted in the situation where experimentation best practices found in the industrial product

engineering are ignored. There is certainly a glaring lack of empirical research into

experimentation in the Internet and interactive marketing fields as it pertains to product

development. There are a few empirical works that have broached this subject (Dou, 1999; Li

2003; Ozdemir, 2000); however, these studies are primarily dedicated to data mining and

predictive modeling rather than experimentation as a continuous practice.

It is important to note that in recent years several researchers have focused on

interactivity in general and interactive experimentation in particular as a key driver in product

innovation (Thomke, Von Hippel, & Franke, 1997; Thomke, 1998; Thomke, 2001; Von Hippel,

1998; Thomke, 1995). These studies have asserted that product innovation is driven by product

users themselves through interaction and experimentation. Product innovation was particularly

highlighted in these studies and was considered separately from the remaining phases of product

development lifecycle. By and large, this part of the dissertation builds on the mentioned studies,

applying their findings to both Internet and interactive

marketing.

Some of the mentioned research in the areas of interactive marketing, experimentation

and product innovation is examined in more detail below.

Interactive Marketing

If the Internet timeline could be separated into three decadal stages: (a) mid 1990s –

introduction stage; (b) early 2000s – development stage; and (c) late 2000s – maturity stage; then

the Jebakumari (2002) study could be classified as a study of the stages of Internet development. It

was during this time that Internet interactivity came into strong researcher and practitioner focus.

The overall purpose of the Jebakumari (2002) study was to describe interactivity in the context

of Internet marketing. Lyons (2007) offered several research questions: (a) what are the nature,

characteristics and components of interactivity? (b) what are the shortcomings of the traditional

marketing models in the context of the interactive medium? (c) how is interactivity related to

comprehension?

Jebakumari (2002) examined traditional marketing and its shortcomings to explain this

interactive phenomenon. Both traditional and interactive Internet marketing were compared and

contrasted. The conclusions reached by Jebakumari (2002) were reminiscent of a similar study

conducted by Mark (2003). Jebakumari (2002) found that a number of traditional marketing

techniques were inconsistent with the interactive media and did not adequately address the

interactive audience.

A study by Miley (2002) could be attributed to the late introductory and early

development stages of Internet marketing. Miley (2002) explored what he calls Web-enabled

consumer marketing, its intricacies and specifics. Miley (2002) tried to formulate a theoretical

model of interactive marketing on the basis of numerous case studies, presented and analyzed in

succession. An additional focus of the study was related to the operational recommendations of

running a consumer oriented interactive web site. Miley (2002) proposed the following research

questions: (a) what is the theoretical basis of Web-enabled consumer marketing? and (b) how

should the company align its operations to be congruent with the Web-enabled consumer

marketing model?

Miley (2002) reached the conclusion that Web-enabled consumer marketing requires

analysis of behavioral user data to guide future actions and marketing decisions. He also

concluded that in order to facilitate comprehensive data analysis, interactive user data must be

both extensive and complete. As such, Web-enabled consumer marketing or interactive

marketing companies must position their human and systems resources, as well as establish

operational practices conducive to data capture.

The Raman (1996) study could be attributed to the introductory period of interactive and

Internet marketing. Raman (1996) explored interactivity on the Web at a time when it was an

emerging phenomenon. In particular, Raman (1996) examined the desired customer exposure to

online banners. Similar to the later studies by Mark (2003) and Jebakumari (2002) that focused

on the comparison and contrast between traditional and interactive marketing, Raman (1996)

contrasted banner exposure in traditional and interactive marketing models. The Raman (1996)

study is similar to a parallel study by Bezjian-Avery (1997), which attempted to define interactive

marketing and its core concepts.

Raman (1996) proposed the following research questions: (a) what are the factors

affecting the desired interactive exposure? and (b) how do the levels of interactive exposure

affect the desired advertising outcome? Raman (1996) concluded that the dominant factor

affecting desired interactive exposure is interactive content richness.

Additionally, Raman (1996) concluded that an interactive advertisement that speaks to the

consumer on an individual level at the same time as being pertinent and engaging has a high

chance of achieving the designed interactive outcome.

Experimentation

As mentioned earlier, the body of knowledge regarding experimentation and experiment

design is heavily focused on major engineering disciplines. The empirical research on the subject

of experimentation in interactive and Internet marketing is scarce and tangential. This study

relies on several seminal works on Design of Experiments, data modeling and product

innovation. In the area of Design of Experiments this study examined several research papers

related to the Taguchi Method. Weng (2007) presented a detailed analysis of experiment

optimization methods. These methods were compared on the basis of (a) global optimization; (b)

discontinuous objective function; (c) non-differentiable function; and (d) convergence rate. Weng

(2007) found that the Taguchi Method scored extremely well in all of the compared categories.

Weng (2007) gave a detailed review of the Taguchi Method itself and its benefits over other

optimization methods. Weng (2007) also suggested several improvements to the Taguchi Method

that may be applicable to experimental design in interactive marketing.
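As a concrete illustration of the Taguchi Method's central calculation (a generic sketch, not Weng's (2007) procedure and not tied to any study data), the signal-to-noise ratio for a "larger-is-better" response such as conversion rate is S/N = -10 * log10(mean(1/y_i^2)); the factor setting with the higher S/N is preferred:

```python
import math

def sn_larger_is_better(responses):
    """Taguchi signal-to-noise ratio for a larger-is-better response:
    S/N = -10 * log10(mean(1 / y_i**2))."""
    return -10 * math.log10(sum(1 / y ** 2 for y in responses) / len(responses))

# Hypothetical conversion rates (%) for two factor settings, three replicates each.
setting_a = [2.1, 2.4, 2.2]
setting_b = [2.8, 2.6, 3.0]
print(sn_larger_is_better(setting_a) < sn_larger_is_better(setting_b))  # setting_b is preferred
```

The S/N ratio rewards both a high mean response and low variability across replicates, which is why Taguchi designs favor it over the raw mean.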

In addition to the design of experiments in Electrical and Mechanical Engineering this

dissertation is based on another tangential topic related to data mining and data modeling in

interactive and Internet marketing. It is important to note that it is impossible to conduct

experimentation without engaging in some form of data mining and data modeling; the

experimentation process is driven by data analysis. In his essays on interactive marketing Li (2003)

examined three cases of interactive marketing. In the first essay Li (2003) described the

functionality of cross-selling services on an interactive banking web site. Li (2003) analyzed the

behavioral reasoning behind online user actions as they pertain to the purchasing of products and

services offered by the interactive banking web site. In conducting behavioral analysis Li (2003)

utilized several multivariate probit models implemented in a hierarchical Bayes framework. This

dissertation examined the applicability of the models proposed by Li (2003) in conducting

interactive real-time online experiments. In his second essay Li (2003) analyzed the browsing

behavior of users on several interactive web sites. In order to predict future browsing paths Li

(2003) utilized several Poisson and discretized tobit models. These models were compared and

contrasted in the context of their ability to accurately predict user browsing behavior. This

dissertation utilizes the modeling technique findings presented by Li (2003). In his third essay Li

(2003) analyzed purchase and conversion data from several eCommerce web sites. He used this

data to build a predictive purchase model. Li (2003) concluded that his hierarchical Bayes

framework supplemented with hidden Markov model could accurately predict a path reflecting

user goals ultimately leading to a potential purchase. This dissertation capitalizes on the findings

of this essay during the set up and analysis of the effect of experimentation on reaching

interactive marketing goals.

Similar to the Li (2003) study, research by Dou (1999) utilized comparable statistical analysis

for modeling online sales. Dou (1999) examined the applicability of the Catastrophe Theory to

modeling actual behavior and predicting potential purchasing online decisions. Dou (1999)

explored what he termed the data empowered marketing strategy, where data was mined through

tracking users to guide the interactive marketing decisions of the company. Even though Dou

(1999) did not mention this concept as interactive marketing experimentation by name, he

hypothesized that interactive marketing data can be used to alter the interactive user experience

in real time as more of the user data was collected and analyzed. Dou (1999) called this approach

adaptive marketing communication, where consumer behavior is analyzed through continuous

observation. Dou (1999) proposed that interactive marketing data can be modeled using the

Catastrophe model. He hypothesized that Catastrophe Theory is eminently suitable for this type

of analysis and predictive modeling. Dou (1999) concluded that it was indeed possible to model

and ultimately predict the browsing and purchasing behavior of users on interactive marketing

web sites.

The significance of both the Li (2003) and Dou (1999) studies is the fact that interactive

marketing data has been actively analyzed using a multitude of statistical models in the context of

interactive marketing. However, it is important to note that use of the Taguchi Method for similar

analysis has not been empirically researched. Additional computing paradigms for predictive

data modeling such as Evolutionary Computing and Genetic Algorithms have been explored by

several researchers (Ozdemir, 2002). Ozdemir (2002) argued that Evolutionary Computing offers

real potential in deriving a best fitness value. As such it holds significant promise for online data

modeling and interactive marketing experimentation.

Product Innovation

A significant portion of this study is devoted to analyzing the impact of experimentation

on the product development lifecycle in the context of interactive marketing. Even though there

are few empirical studies that directly deal with experimentation in interactive marketing,

emphasizing the web site as an interactive marketing product, there is a significant body of

empirical work that is devoted to experimentation in the context of product development. This

dissertation capitalizes on several seminal works by Thomke and Von Hippel. Thomke

(1995) hypothesized that the mode of experimentation such as prototyping and simulation has a

significant impact on the economics of experimentation. More specifically, Thomke (1995)

proposed that the use of simulation experimentation is more economical and therefore far more

likely to be used in product development. Consequently simulation experimentation could be

viewed as a product enabler and innovation driver. Thomke (1995) presented two case studies

where experimentation was used in the design of new pharmaceutical drugs and integrated-

circuit based systems. Thomke (1995) proposed experimentation design cycles consisting of

designing, building, running, and analyzing activities performed in succession. Each

successive cycle was built taking into account the findings of the previous cycle. This study

hypothesized the applicability of this cycle in general, and the process in particular, to interactive

marketing product development (see Appendix E). Thomke (1995) found that switching between

prototyping and simulation experimentation modes significantly affected experimentation

economics, resulting in a substantial reduction in design cost.
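Thomke's (1995) design–build–run–analyze cycle maps naturally onto iterative online experiments. The sketch below is a loose adaptation under hypothetical assumptions (the page variants and their "true" conversion rates are invented for illustration), not Thomke's own procedure:

```python
import random

rng = random.Random(0)

# Hypothetical true conversion rates per page variant, unknown to the experimenter.
true_rates = {"variant_a": 0.020, "variant_b": 0.025, "variant_c": 0.032}

def run_variant(variant, visitors=5000):
    """Build + run: simulate how many visitors convert on a given variant."""
    return sum(rng.random() < true_rates[variant] for _ in range(visitors))

findings = {}
for variant in true_rates:                   # design: choose the next variant to try
    conversions = run_variant(variant)       # build and run the experiment
    findings[variant] = conversions / 5000   # analyze: record the observed rate

best = max(findings, key=findings.get)       # carry the winner into the next cycle
print(best, findings[best])
```

Each pass through the loop is one design cycle; in practice the "design" step would use the previous cycle's findings to propose new variants rather than iterate over a fixed list.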

In a seemingly unrelated study Von Hippel (1998) argued that product innovation should

be driven by the people who would benefit from the end product of innovation, end users of the

product themselves. Von Hippel (1998) described what could be called the Von Hippel paradox,

where product specialists should not be primarily responsible for product innovation, but rather

defer to product users as a source of ultimate innovation. Von Hippel (1998) described this

paradox as a shift in locus of problem-solving. In this dissertation Von Hippel’s ideas are

combined with the approach proposed by Thomke (1995), where interactive marketing product

development is driven by users through interactive experimentation.

CHAPTER 3: METHODOLOGY

Description of the Methodology

The study assessing the impact of experimentation on interactive product development

utilized a qualitative research paradigm. The choice of qualitative research methodology was

related to the nature of the topic and the innate characteristics of the field of the study.

Employing qualitative research methods makes the quality of the data of paramount importance.

Consequently, emphasis is placed on how and under what circumstances the data is collected

(Morgan & Smircich, 1980). In contrast to quantitative research methods, it is rare to see a

qualitative researcher working with large quantities of data. This is the case with the current

study as all analyzed data comes from a single organization.

Maxwell (1992) defined qualitative research methods as theory forming. These

methods are used to generate new theories or introduce new hypotheses. Maxwell (1992) called

qualitative research a paradigm that is concerned with a “breadth first” approach as opposed to a

“depth first” as is the case with quantitative research. More specifically, qualitative research is

preoccupied with describing a phenomenon as thoroughly as possible, and forming a theory

behind it. Based on the paradigmatic characteristics provided by Maxwell (1992), the use of

qualitative research methods was consistent with the goals of the study and the state of

knowledge in the field of experimentation in the context of interactive marketing.

This study utilized a phenomenological method as one of the research methodologies

under the qualitative paradigm umbrella. The phenomenological method was first formulated by

Husserl (1983). Creswell (2007) defined the phenomenological method as a description of the

meaning for several individuals of their direct experience of a concept or a phenomenon. In this

particular case the phenomenon is a process of experimentation in the context of interactive

marketing. Husserl (1983) described the application of this phenomenological approach as an

execution of three consecutive steps. The first step consisted of adopting a phenomenological

method that encouraged the researcher to infuse quantitative data with the qualitative context that

allowed the data to be meaningful. The second step consisted of seeking out an instance where

the phenomenon can be studied in its natural context in order to distill the essence of the

phenomenon. The third and final step consisted of

describing the discovered meaning of the phenomenon (Husserl, 1983).

The experimentation phenomenon in the context of interactive marketing lends itself well to

phenomenological inquiry. This researcher wanted to discover the meaning behind the

experimentation phenomenon in the context of interactive marketing by studying individuals

who have experienced the phenomenon first hand. Even though participants of the study have

experienced the phenomenon, they are not necessarily aware of its meaning (Giorgi, 2006). This

point of view is certainly consistent with the description of the experimentation phenomenon.

The participants of this study have certainly experienced the experimentation phenomenon in the

context of interactive marketing, but by and large they are not aware of its meaning and its

fundamental characteristics. Phenomenological tools such as bracketing,

horizontalization, clustering, delimiting, and imaginative variation allow

the researcher to extract meaning from the experiences of the participants of the study.

The phenomenological method allows the researcher of the study to incorporate his/her own

experience by supplementing the findings of the study with his/her own observations and

interpretations in the context of the experience. Creswell (2007) described this type of

phenomenological method as hermeneutical. Van Manen (1990) described the researcher of the

study as one of the participants of the study. This researcher has extensive experience with

experimentation in the context of Internet and interactive marketing. The phenomenological

method allows the researcher to understand his own experience while maintaining a strong

relationship to the topic (van Manen, 1990). However, in order to address generalization,

validation, validity and bias, the researcher must employ bracketing to distinguish his own

experience from the experience of the participants of the study. As such this researcher tried to

deemphasize his own experience. This researcher employed a combination of transcendental and

experimental phenomenology rather than hermeneutical phenomenology, where emphasis is

placed on the experience of the participants of the study and the experience of the researcher is

bracketed (Moustakas, 1994). Use of experimental phenomenology allowed the researcher to

focus on the practical application of the phenomenon rather than the philosophical side of it.

It is important to note the distinction between other qualitative methodologies such as

grounded theory or other narrative approaches and phenomenological methods. Creswell (2007)

made a distinction between narrative study and phenomenological study, where the former is

experienced by several study participants individually, as opposed to as a group in the latter case.

Even though participants of the study were selected from several groups participating in

interactive marketing experimentation, individual group representatives described an experience

of the group to which they belong. According to Creswell (2007) phenomenological methods

place emphasis on the shared experience of the phenomenon. It is critical to understand

experimentation in interactive marketing in the context of a particular group as well as the

organization as a whole.

Design of the Study

The qualitative research paradigm employs interviews as its predominant data collection

instrument (Barbour, 1998). When an interview is conducted in a purely qualitative manner, the

researcher takes active part in the interview process. In that case, the researcher

is considered to be an actual instrument of the study. Barbour (1998) pointed out that participants

in a study often receive major guidance from the researcher throughout the interview process.

The qualitative research paradigm thus encourages researcher participation in order to reduce

language ambiguity and supplement the possible lack of context associated with quantitative data

collection instruments. This study however, did not employ interviews as the means of data

collection. The major concerns of the study were related to credibility, validity and bias. Since

the researcher of the study is employed by the company where the research is being conducted,

actual or potential undue influence was a paramount concern. The researcher struggled to

maintain a balance between extricating himself from the data collection process on the one hand

and maintaining the qualitative nature of the study on the other.

In order to address validity, credibility, and bias concerns, as well as to maintain the

qualitative nature of the study, the researcher employed a research instrument used in mixed-

method research studies. More specifically this study utilized a mixed-method survey as a

research instrument. Johnson and Onwuegbuzie (2004) also described a mixed-method survey

that embodies both qualitative and quantitative aspects. Mixed-method surveys typically contain

questions found in fixed surveys. These types of questions are referred to as close-ended, where

the set of responses is limited. In addition to quantitative questions, these surveys contain

corresponding sections that allow freehand expression, lending a qualitative context to what

otherwise would be purely quantitative data. In contrast to close-ended questions these questions

are open-ended.

The resultant survey consisted of 15 open-ended questions and 17 close-ended questions.

In order to eliminate possible researcher influence the survey was administered over the Internet

in an anonymous fashion. In addition data was collected under false pretenses. The participants

of the study were not told that the data was being collected for the purposes of research due to

the possibility of participant bias. The survey was positioned as providing helpful feedback on

the experimentation efforts of the company in the context of interactive marketing.

Population and Sampling

The study included 23 human participants. The participants of the study work in the same

organization as the author of the study. A particular set of participants was chosen from all of the

groups involved in the interactive marketing experimentation. The technology group was

excluded from study participation, since the author of the study works for the technology group

and may exert undue influence on the participants of the study. The participants of the study

were randomly chosen from Interactive Marketing, Business Development, User Experience,

Data Analysis and Creative Design groups.

The participants from each of the mentioned business units provided information relevant

to the results of experimentation and its impact on various aspects of interactive marketing. They

were asked to elaborate on their experiences of experimentation in the context of the interactive

marketing. The participants of the study were asked about the perceived success of

experimentation relative to its goals such as improved conversion, product innovation, product

improvement, risk mitigation and improved competitive standing.

Assuming that the chosen sources were both valid and credible, further research

credibility and validity depended only on the researcher himself/herself. One of the ways of

selecting credible and valid sources is by selecting them at random. More specifically only a

single representative of each business unit was selected at random. This type of selection method

helped reduce personal bias.
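Random selection of one representative per business unit can be sketched as follows (the rosters and participant identifiers are hypothetical placeholders; only the unit names follow the study):

```python
import random

# Hypothetical rosters; the technology group is excluded, as in the study.
groups = {
    "Interactive Marketing": ["P01", "P02", "P03"],
    "Business Development": ["P04", "P05"],
    "User Experience": ["P06", "P07", "P08"],
    "Data Analysis": ["P09", "P10"],
    "Creative Design": ["P11", "P12", "P13"],
}

rng = random.Random(42)  # fixed seed makes the draw reproducible
representatives = {unit: rng.choice(members) for unit, members in groups.items()}
for unit, rep in representatives.items():
    print(unit, "->", rep)
```

Seeding the generator makes the draw auditable after the fact, which supports the bias-reduction rationale given above.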

Measurement Strategy

This study surveyed interactive marketing professionals in the confines of a single

company. The study participants were selected at random to represent their interactive marketing

channel. The respondents of the study were asked to complete a mixed-method survey consisting

of 40 questions related to their experience with experimentation in the context of interactive

marketing. The research questions of the survey were designed to understand the relationship

between the corresponding dependent and independent variables. Since the number of

independent variables was too great, they were grouped under common categories. For instance,

independent variables related to experimentation such as color, font, font size, and images were

grouped under a visual category. It is important to note that independent variable categories were

classified as either endogenous or exogenous. The resultant survey included five categories of

endogenous independent variables (see Appendix B): (a) visual; (b) functional; (c) positional; (d)

informational; and (e) behavioral, as well as four categories of exogenous independent variables

(see Appendix B): (a) temporal; (b) demographical; (c) seasonal; and (d) contextual.

In addition to the independent variables each of the research questions had a number of

dependent variables associated with it. These dependent variables were assigned as follows (see

Appendix B): (a) competitive standing (revenue, market size, profit, market share, and market

segmentation); (b) interactive marketing product development lifecycle (product risk, product

innovation, product improvement, product life cycle, and product targeting); and (c) interactive

marketing goals (cost per acquisition, cost per impression, cost per action, upsell, and click-

through rate).
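Several of the interactive marketing goal variables listed above are simple ratios. A minimal sketch with hypothetical campaign figures:

```python
def click_through_rate(clicks, impressions):
    """CTR: fraction of impressions that produced a click."""
    return clicks / impressions

def cost_per_acquisition(spend, acquisitions):
    """CPA: total advertising spend divided by the acquisitions it produced."""
    return spend / acquisitions

# Hypothetical campaign: 250 clicks from 10,000 impressions; $500 spend, 20 acquisitions.
print(click_through_rate(250, 10_000))   # 0.025
print(cost_per_acquisition(500.0, 20))   # 25.0
```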

Each research question was directly represented in the survey in the form of several

survey questions. In addition to asking participants of the study to answer research questions

directly, each of the dependent variables was investigated in isolation (see Appendix A). The

survey questions were formulated to draw a connection between independent variables in an

amortized form and the dependent variables associated with research questions. The amortized

independent variables were referred to as “Experimentation” (see Appendix A), where dependent

variables were called out in exactly the same way as they were specified in the “Conceptual

Framework” (see Appendix B).

Even though the quantitative questions were utilized alongside qualitative questions,

quantitative data was not used in drawing the conclusions of the study. The point of analysis

associated with the quantitative data was to ascertain consistency between qualitative and

quantitative answers. The quantitative survey questions used single and multiple choice scales.

The qualitative survey questions utilized a measurement strategy associated with the

phenomenological method, consisting of horizontalization, clustering, and textual and composite

description.

Instrumentation

The survey (see Appendix A) contained 40 questions designed to solicit information

related to the experimentation efforts of the company in the context of interactive marketing. The

company where the research was conducted utilized multifaceted interactive content. More

specifically the company used email, search, social, display, internet and affiliate interactive

marketing approaches. The participants of the study were asked to fill out the survey relating to

their experience of product development through experimentation in each of the respective

interactive marketing areas. The questions of the study were crafted to meet the objectives of the

study. The close-ended questions of the study were not used in the final analysis of the study, but

rather they were designed to ensure consistency of a corresponding open-ended question as well

as to guide the respondent to stay within the confines of the intended question. The survey was designed and

implemented using online survey software and conducted over the Internet. The participants of

the study were invited by the CEO of the company to complete the survey via. The email

contained the link to the online survey as well as an explanation of the purpose of the survey,

with an assurance that participation in the study was voluntary and anonymous.

The survey contained five major sections: (a) introduction (questions related to overall

experimentation experience); (b) interactive marketing goals (questions related to the impact of

experimentation process on the goals of various interactive marketing channels); (c) interactive

marketing product (questions related to the impact of the experimentation process on the various

aspects of product development lifecycle of various interactive marketing channels); (d)

competitive standing (questions related to the impact of experimentation process on the various

key competitive indicators); and (e) sustaining effects (questions related to the sustaining effects

of experimentation in the context of interactive marketing).

Data Collection

The data was collected via the SurveyMonkey.com web site. The initial survey was pre-tested

and modified according to the feedback from the pilot group and the mentors of the study.

The pilot group consisted of three members chosen from a pool of potential participants. The

participants of the study were given a week to complete the survey with multiple reminders sent

two days before and on the day before the survey expired. All questions in the study were

designated as mandatory, and the only two ways to exit the survey were either to complete it or to

abandon it. If a survey was abandoned, the participant had to start the survey over again in order

to proceed at a future date. According to the SurveyMonkey.com

statistics, none of the surveys were abandoned and the effective survey completion rate was 100%.

Because the survey was administered online, participant anonymity was preserved. After all surveys were

completed the survey results were downloaded onto the researcher’s computer and analyzed. At

all times the survey results were protected from inadvertent or intentional disclosure. The

surveys were conducted online over a secure protocol, and access to the survey results was

username and password protected. When the results of the survey were downloaded to the

researcher’s computer, access to the computer itself was username and password protected as

well.

Data Analysis Procedures

The data analysis procedures roughly consisted of the steps outlined by Creswell (1998)

with slight adaptation for the needs of this study. These steps consisted of (a) horizontalization

and bracketing; (b) clusters of meaning; (c) textual description; and (d) composite description. It is

important to note that the usual phenomenological step of transcription was omitted since the

data was collected via an open-ended survey administered over the Internet. As such the data

transcription consisted of downloading the results of the completed surveys. The data analysis

was conducted on two occasions. The initial data analysis consisted of interpretation of the

surveys submitted by the “pilot” group, comprised of a small population sample. The subsequent

data analysis was conducted during the analysis phase of the actual study.

Horizontalization

According to Moustakas (1994), the process of horizontalization consists of defining

common themes and significant statements from the responses of the participants. The researcher

read responses several times while trying to comprehend and interpret their meaning. The

qualitative data collected was correlated with similarly intentioned quantitative answers. For

instance, the quantitative survey question, “Please select experimentation lever types that you feel

are most instrumental in achieving interactive marketing goals? (a) visual; (b) functional; (c)

behavioral; (d) informational” was paired with the qualitative question, “Please describe in your

own words what experimentation levers were most useful in achieving interactive marketing

goals?” All discrepancies were recorded and analyzed. These discrepancies were resolved in the

final version of the survey and modifications were based on the analysis of the pilot sample.

During the second reading pass significant statements and common themes were underlined and

extracted. All similar statements were grouped under a common umbrella and labeled

accordingly. The process of horizontalization was performed several times. The resultant groups

were compared and analyzed for discrepancies.
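The grouping of similar statements under common labels can be approximated in code as a simple keyword-based coding pass. The theme labels, keywords, and responses below are hypothetical, not the study's actual codes; real horizontalization is an interpretive, manual process that this sketch only caricatures:

```python
# Hypothetical theme labels with keyword lists used to group responses.
themes = {
    "conversion": ["conversion", "purchase", "checkout"],
    "visual design": ["color", "layout", "font", "image"],
    "risk": ["risk", "failure", "loss"],
}

# Hypothetical open-ended survey responses.
responses = [
    "Changing the button color improved our conversion rate.",
    "Experiments helped us reduce the risk of a failed launch.",
    "New layouts and images were tested before each release.",
]

grouped = {label: [] for label in themes}
for response in responses:
    text = response.lower()
    for label, keywords in themes.items():
        if any(keyword in text for keyword in keywords):
            grouped[label].append(response)

for label, quotes in grouped.items():
    print(label, len(quotes))
```

Note that the first response lands in two groups, mirroring the observation that a single quote can carry several themes at once.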

Clusters of Meaning

According to Van Manen (1990), the process of clustering consists of extracting meaning

from the grouped quotes. The process of clustering capitalized on the previous horizontalization

step. The responses grouped under common labels were examined for the common clumps of

meaning. In order to make this process simpler, groups were attributed to the corresponding

research questions. Since some of the survey responses could have been attributed to one or

more research questions, it was imperative to keep them organized. In some cases, however,

quotes were attributed to multiple emerging themes, and some quotes

contained several themes or meanings at once.

Textual Description

The textual description step of the phenomenological method described by Alvesson and

Sköldberg (2000) consists of the reflection of the participant experience written by the

researcher. The textual description process logically followed a clustering step, where clusters of

meanings were extracted and recorded. After analyzing clusters of meaning, this researcher tried

to come up with the textual narrative of what survey participants were trying to convey. The

resultant narrative contained a number of expressions used by the study participants themselves

as well as the meaning phrases derived in the earlier step. Prior to coming up with the textual

description the researcher of the study prepared a written description of his own thoughts and

feelings about the experimentation phenomenon in the context of Internet and interactive

marketing. By utilizing bracketing techniques the researcher tried to separate his experience from

the experience of the participants of the study. Even though it was difficult for the researcher to

extricate himself from the studied phenomenon, a separation the transcendental

phenomenological method requires, preparing a written description prior to the textual description

step raised his awareness of bias and of threats to validity.

Composite Description

According to Giorgi (1985), composite description is designed to crystallize the meaning

of the research itself. The narrative produced in the textual description step of the

phenomenological method is wordy and lacks coherence. The researcher of the study tried to

distill the meaning of the earlier produced narrative into a single thought that could be easily

expressed. In most cases however, the composite description step of the data analysis related to

phenomenological research methodology was combined with the textual description step. One of

the key reasons for doing so was the fact that the qualitative responses were short and as such,

represented a concise summary that required no further reduction. This is often the case with

mixed-method research instrumentation, where the qualitative portion is filled out by the

participants of the study rather than the researcher. The composite description section of the data

analysis was used to discuss issues related to the analysis of a particular research question.

Qualitative Data Display for Describing the Phenomenon

The qualitative data display used to describe the experimentation phenomenon in interactive

marketing utilized a matrix display method. Since the study followed a phenomenological design approach, the

horizontalization step of extracting the most meaningful aspects of the survey answers was

followed by clustering. Horizontalized data was clustered based on the business units of the

participants. The clustered data was displayed using role-ordered display. Every research

question was cross-tabulated with self-assessed experimentation experiences indicated by the

research participants.

Qualitative Data Display for Explaining the Phenomenon

The qualitative data display for explaining the experimentation phenomenon in

interactive marketing utilized a matrix display method. Based on the problem type,

experimentation phenomenon is ideally suited for the explanatory effects matrix display type.

According to Miles and Huberman (1994), the explanatory effect matrix is used to answer

questions of the following type, “Why were these outcomes achieved?” and “What caused them

generally or specifically?” In the case of experimentation in interactive marketing, the study tried

to establish causality between experimentation and improved competitive standing, reduced

operational risk, improvements along the product development lifecycle, and improved online

conversion. The explanatory effects matrix consists of a cross-tabulation between the various

groups that participated in the study and the research question itself.

This researcher considers the explanatory effects matrix the most useful display

technique for explaining and predicting the dissertation topic. It is a very good fit for the

phenomenon of experimentation in interactive marketing for several key reasons. First of all,

the matrix format allows an analysis of the experimentation practice across multiple

departmental units involved in interactive marketing experimentation. Although plausible, it was

not expected that different departmental units would feel differently about the usefulness

of experimentation and its positive impact on the dependent variables of the study, such as

competitive standing, operational risks, and attainment of interactive marketing goals. Possible

differences of opinion between the departments may be explained by the proximity

principle described by Torre and Rallet (2005). The group closest to a particular aspect of the

phenomenon is more familiar with its nuances, when compared with other groups that have

experienced this aspect in a tangential manner.

CHAPTER 4: DATA COLLECTION AND ANALYSIS

Overall Response Analysis

The survey was sent out to 23 participants over several installments. Initial responses

were analyzed for consistency and clarity. In order to increase validity and decrease bias,

survey links were sent only to departments that did not fall under the direct supervision of the

researcher. In addition, survey participation was solicited under false pretenses. The participants

of the study were asked to fill out a survey in the context of their job function with an express

purpose stated as evaluating and improving experimentation efforts undertaken by the company.

All participants were asked to participate on a voluntary basis with strict assurances of

confidentiality. General survey settings were as follows: (a) allow only one response per computer;

(b) respondents can go back to previous pages in the survey and update existing responses until

the survey is finished or until they have exited the survey. After the survey is finished, the

respondent will not be able to re-enter the survey; (c) respondents can exit the survey and come

back at any time, unless the survey is finished; and (d) do not display a thank you page.

Based on the survey statistics, 11 out of the 23 participants completed the survey (see

Figure 1), which constitutes a 47.8% overall survey uptake. Under open-field study conditions

such a response rate would be considered extremely high, especially given the length of the

survey and its mixed nature. However, under controlled conditions consistent with an

organizational study, such a response rate cannot be considered unusual. Furthermore, the

response rate was most likely buoyed by the belief that participants were filling out the survey in the

context of their work, rather than being participants of the study.
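
The uptake figure reported above reduces to a one-line calculation. The helper function below is illustrative only and not part of the study's instrumentation.

```python
def response_rate(completed: int, invited: int) -> float:
    """Survey uptake as a percentage, rounded to one decimal place."""
    return round(100.0 * completed / invited, 1)

# 11 of the 23 invited participants completed the survey.
uptake = response_rate(11, 23)
print(uptake)  # 47.8
```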

Based on the statistical analysis (see Figure 2), the study participants fell into the

following self-identified groups: (a) Interactive Marketing – eight participants or 34.8%; (b)

Business Development – two participants or 8.7%; (c) User Experience – seven participants or

30.4%; (d) Creative Design – three participants or 13%; and (e) Data Analysis – three

participants or 13%. The self-identified groups were presented as a set of options without the

possibility of adding additional groups. The group names were specifically chosen to represent

job functions instead of an actual department name. One of the key reasons behind doing so was

a desire to tie experimentation responses to the job function of the participants instead of an

actual department, because the actual responsibilities of the participants vary greatly even within

the same department and could have led to wrong conclusions during analysis. In addition,

the actual job function served as a key determinant in assigning relative weight to the responses

of the study participants. In other words, more weight was given to the members of Business

Development group when questions were centered on the effects of experimentation on the

business development function of the company.

The second question of the survey asked the participants to identify their level of

expertise with respect to experimentation. Based on the statistical analysis of the responses to the

second question, self-identified experimentation expertise was distributed as follows: (a)

Novice – eight participants or 34.8%; (b) Intermediate – 13 participants or 56.5%; and (c) Expert

– two participants or 8.7%. The experience level data has been cross-tabulated with job function

data (see Figure 3). The distribution of participants that identified themselves in the Novice and

Intermediate categories was fairly uniform across all of the groups; however, self-identified

experts belonged only to the Interactive Marketing and User Experience groups. It is difficult to

draw any conclusions based on the distribution of the self-identified experts. It could most likely

be explained by the scarcity of the collected data. In a much larger survey it is expected that the

distribution of the self-identified experimentation experts would be uniform as well.
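
Cross-tabulations such as the one summarized in Figure 3 amount to counting pairs of categorical labels. The sketch below is illustrative only; the response records are hypothetical and do not reproduce the study's data.

```python
from collections import Counter

# Hypothetical (job function, experience) records; the study's actual
# cross-tabulation appears in Figure 3.
responses = [
    ("Interactive Marketing", "Expert"),
    ("User Experience", "Expert"),
    ("User Experience", "Intermediate"),
    ("Creative Design", "Novice"),
    ("Creative Design", "Intermediate"),
]

def crosstab(pairs):
    """Count how many records fall into each (row, column) cell."""
    return Counter(pairs)

table = crosstab(responses)
print(table[("User Experience", "Expert")])  # 1
```

Each cell of the resulting table holds the number of participants sharing a given job function and self-assessed experience level.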

Research Objective One

One of the main research questions was designed to ascertain the impact of

experimentation on the interactive marketing goals. It was represented by the following

qualitative and quantitative survey questions: (a) Please Rate the Impact of Experimentation on

the Overall Interactive Marketing Goals (Cost Per Acquisition, Cost Per Impression, Cost Per

Action, Upsell, and Click-Through Rate); (b) Please Describe in Your Own Words the Impact of

Experimentation on the Overall Interactive Marketing Goals (Cost Per Acquisition, Cost Per

Impression, Cost Per Action, Upsell, and Click-Through Rate); (c) Please Rate The Impact of

Experimentation on the Cost Per Acquisition (CPA); (d) Please Describe in Your Own Words

How Experimentation Has Reflected upon the Cost Per Acquisition (CPA); (e) Please Rate The

Impact of Experimentation on the Cost Per Impression (CPI); (f) Please Describe in Your Own

Words How Experimentation Has Reflected upon the Cost Per Impression (CPI); (g) Please Rate

The Impact of Experimentation on the Cost Per Action (CPA); (h) Please Describe in Your Own

Words How Experimentation Has Reflected upon the Cost Per Action (CPA); (i) Please Rate

The Impact of Experimentation on the Upsell; (j) Please Describe in Your Own Words How

Experimentation Has Reflected upon the Upsell; (k) Please Rate The Impact of Experimentation

on the Click-Through Rate (CTR); and (l) Please Describe in Your Own Words How

Experimentation Has Reflected upon the Click-Through Rate (CTR).

Horizontalization

The first step of qualitative data analysis consisted of the technique of horizontalization.

Horizontalization consisted of grouping like-intentioned statements, underlining significant

statements and checking for consistency between the qualitative and quantitative response.

According to the statistical analysis of quantitative question number three (see Figure 4), 13

participants, or 92.9% of those who completed this question, found that experimentation had a

positive impact on their overall interactive marketing goals. Only one participant, or 7.1%,

determined that the impact of the experimentation had been neutral. Based on the analysis of the

qualitative data (see Figure 37) it is clear that the qualitative responses were consistent with the

quantitative response breakdown. In order to gain additional insights into the data the researcher

produced two cross-tabulation tables. The first cross-tabulation analyzed participant responses by

job function (see Figure 5), whereas the other looked at the participant experience in relation to

given responses (see Figure 6). The majority of the survey participants that indicated a positive

impact of the experimentation on interactive marketing self-identified as having a user

experience function. The only participant who found the impact of experimentation on interactive

marketing goals to be neutral came from the Interactive Marketing group. In the same vein, the

majority of the survey participants that found the experimentation impact positive described their

experimentation experience as intermediate. The two self-identified experts were split between

positive and neutral impact. In order to gain more insight into the responses of the two self-

identified experts, corresponding qualitative data was also analyzed (see Figure 36). In the

qualitative portion of the answer the self-identified expert had indicated that, “Our initiation

experimentation efforts had mixed results. However, as experimentation practices matured the

overall impact was consistently positive”. As such it is possible to speculate that the other self-

identified expert was referring to early stage efforts that could not have been characterized as

successful. This supposition is further confirmed by statements contributed by self-identified

intermediate experimentation users. On several occasions survey participants mentioned that

experimentation efforts yielded mixed results, with the overall result being positive: “Initial

experimentation efforts had neutral impact on the business overall. After experimentation has

been coupled with the process improvements it had a positive impact on the Interactive

Marketing in general and listed goals in particular”. Even though the majority of participants felt

that experimentation had a positive impact on the marketing goals as a whole, when each

particular marketing goal was itemized some participant answers exhibited inconsistency. More

specifically several participants answered that experimentation had a positive impact on the Cost

Per Action (CPA), one of the goals of interactive marketing. However, when asked to describe

this positive effect in their own words, they answered, “Not sure.” This discrepancy could be

explained by the fact that these participants are not part of the group that keeps track of

individual interactive marketing metrics (see Figures 13 through 17). In contrast with these

responses, study participants who interacted

with the marketing data on a daily basis were able to articulate in detail the impact of

experimentation on individual interactive marketing goals, “Experimenting with the bidding

strategy had a positive impact on the CPA”, and “CPA was reduced though experimentation with

webpage content and execution workflow.” The complete extraction of the answers related to

research question one was grouped and analyzed for significant statements (see Figure 37).

Clusters of Meaning

The second step of qualitative data analysis consisted of clustering. The process of

extracting clusters of meaning consisted of grouping significant statements identified in the

horizontalization step. The responses grouped under common labels were examined for common

clusters of meaning. In some cases quotes were attributed to multiple emerging themes. At the

same time some of the quotes contained several themes or meanings at once. The quantitative

survey questions pertaining to individual marketing goals were cross tabulated with an

assessment of overall experimentation impact. Based on the analysis of the cross-tabulations, the

majority of the survey participants displayed consistency in responses. The participants of the

study felt that experimentation had an equally consistent impact on the interactive marketing

goals, as a whole, as well as the individual interactive marketing goals in particular. In other

words, if a participant felt that experimentation had a positive overall impact, he/she felt the

same way when the interactive marketing goals were itemized. However, in several cases

responses were inconsistent, where two participants felt that the overall impact was positive, but

were not sure about the impact of experimentation on individual interactive marketing goals.

This could have been considered an anomaly; however, after looking at several responses by the

same participants of the study, it became clear that the “not sure” response was given to all

itemized individual interactive marketing goals rather than only some. The most likely

explanation for the observed anomaly is that the participants who gave inconsistent responses

did not deal with individual interactive marketing goals directly, but rather were familiar

with them only in a combined fashion.

The analysis of the qualitative and quantitative data revealed several emerging themes:

(a) the overall impact of experimentation on interactive marketing goals was positive; (b) initial

experimentation efforts were less than successful until the company found a way to couple

experimentation with a mature process; (c) the impact of experimentation was viewed in an

amortized fashion, where individual marketing goals were not heavily emphasized.

Textual Description

The third step of qualitative data analysis involved textual description. Textual

description consisted of the amalgamation and summation of participant responses into a

cohesive narrative. In summary, the majority of the surveyed participants felt that

experimentation had a positive impact on their interactive marketing goals as a whole. However,

early efforts related to experimentation yielded mixed results due to the lack of an organizational

process and strong experimentation experience. When an experimentation process was fully

established its impact remained consistently positive. Assessment of the experimentation impact

on individual interactive marketing goals was largely consistent with the assessment of

experimentation as a whole. The majority of participants felt that individual marketing goals

such as Cost Per Action, Cost Per Impression, Cost Per Acquisition, Upsell and Click Through

Rate had individually benefited from the experimentation effort.

Composite Description

The composite description step of the data analysis related to phenomenological research

methodology was combined with the textual description step. One of the key reasons for doing

so was the fact that the qualitative responses were short and as such, represented a concise

summary that required no further reduction. This is often the case with mixed-method research

instrumentation, where the qualitative portion is filled out by the participants of the study rather

than the researcher. The composite description section of the data analysis was used to discuss

issues related to the analysis of a particular research question.

In some cases participants of the study confounded several issues related to

experimentation. It is expected that some experiments yield negative results. These do not

however have a negative connotation in the sense of being bad or poor. The nature of

experimentation is such that some experiments result in conversion improvements, whereas

others result in conversion decreases. Hence, it is important to talk about the impact of

experimentation as a process rather than the impact of the individual experiment. It is however

expected that the experimentation process should result in a positive outcome as a whole;

otherwise it would be considered unsuccessful.

Research Objective Two

Another key research question was designed to identify key experimentation levers that

impact upon the interactive marketing goals. The objective was represented by the following

qualitative and quantitative survey questions: (a) Please Select Experimentation Lever Types

That You Feel Are Most Instrumental In Achieving Interactive Marketing Goals (Cost Per

Acquisition, Cost Per Impression, Cost Per Action, Upsell, and Click-Through Rate); (b) Please

Describe in Your Own Words What Experimentation Levers Were Most Useful In Achieving

Interactive Marketing Goals (Cost Per Acquisition, Cost Per Impression, Cost Per Action,

Upsell, and Click-Through Rate); (c) Please Select Experimentation Conditions That You Feel

Had the Strongest Impact upon Interactive Marketing Goals (Cost Per Acquisition, Cost Per

Impression, Cost Per Action, Upsell, and Click-Through Rate); and (d) Please Describe in Your

Own Words What Experimentation Conditions Were Most Impactful upon Interactive Marketing

Goals (Cost Per Acquisition, Cost Per Impression, Cost Per Action, Upsell, and Click-Through

Rate).

Horizontalization

According to the statistical analysis of quantitative question five (see Figure 7), 11

participants, or 78.6% of those who completed this question, found that visual experimentation

levers had the strongest impact on their overall interactive marketing goals. Similarly, seven

participants or 50% felt the same way about the functional experimentation levers. Lastly, three

participants or 21.4% and four participants or 28.6% felt that behavioral and informational

experimentation levers respectively had the strongest impact on the interactive marketing goals.

With regards to external conditions related to experimentation, 11 participants or 71.4% felt that

seasonal variations had the strongest impact on the interactive marketing goals. The feelings

related to Temporal, Demographical, and Contextual external conditions were almost evenly

split, with a slight preference given to experimentation taking into account contextual conditions

(see Figure 10).
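
Because participants could select more than one lever type, the per-lever percentages quoted above are computed against the number of respondents and need not sum to 100%. The sketch below is illustrative only; the respondent count of 14 is inferred from the reported percentages (e.g., 11/14 = 78.6%).

```python
# Lever selection counts from the text; n = 14 respondents is an inference
# from the reported percentages, not a figure stated directly in the survey.
n_respondents = 14
selections = {"visual": 11, "functional": 7, "behavioral": 3, "informational": 4}

def selection_percentages(counts, n):
    """Percentage of respondents selecting each option, to one decimal place."""
    return {option: round(100.0 * c / n, 1) for option, c in counts.items()}

pcts = selection_percentages(selections, n_respondents)
print(pcts["visual"])  # 78.6
```

The computed values (78.6%, 50.0%, 21.4%, 28.6%) match the breakdown reported above, and their sum exceeds 100% because of the multi-select question format.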

Based on the analysis of the qualitative data (see Figure 38) it is clear that the qualitative

responses were consistent with the quantitative response breakdown. In order to gain additional

insights into the data, the researcher produced two cross-tabulation tables. The first cross-

tabulation analyzed participants’ responses by job function (see Figure 8), whereas the other

looked at the participants’ experience in relation to given responses (see Figure 9). It was found

that 100% of the survey participants from Creative Design, Data Analysis and User Experience

agreed that visual experimentation levers had the strongest impact on the interactive marketing

goals. In addition, 50% of participants from the Interactive Marketing group identified visual

experimentation levers as the most impactful. In contrast, the only survey participants who

identified informational experimentation levers as most impactful belonged to the Interactive

Marketing and User Experience groups. This result seems consistent with the fact that

informational levers are underrepresented as a percentage of the overall experiments performed

by the company. The fact that survey participants from the other groups did not identify the

informational experimentation lever as impactful could be attributed to their lack of awareness

that such levers even exist.

One of the key insights gained from the analysis of the cross-tabulation of the participant

experience levels and their thoughts related to the most impactful experimentation levers was the

fact that survey participants with more significant experimentation experience focused on fewer

experimentation levers. In other words, self-identified novice participants identified all levers as

equally impactful, self-identified intermediate participants focused more on the Visual and

Functional levers, and self-identified experts chose Visual and Functional experimentation levers

exclusively.

Additional cross-tabulations analyzed participant responses related to experimentation by

job function and experience on the basis of external conditions (see Figures 11 and 12).

Members of all groups, with the exception of the Interactive Marketing group, identified

experimentation with seasonal variations as most impactful. The cross-tabulation of

experimentation experience against the external experimental conditions, however, exhibited an

inverse pattern. All (100%) of the self-identified novice participants felt that seasonality had a

strong impact. This opinion held largely true for self-identified intermediate participants,

accounting for more than 70% of responses. Self-identified experts were split on whether

seasonality experimentation had the strongest impact.

The complete extraction of the answers related to research question two was grouped

and analyzed for significant statements (see Figure 38).

Clusters of Meaning

Based on the results of the horizontalized data it was evident that an overwhelming

majority of the participants felt that experimentation with visual elements yielded the most

impactful results on their interactive marketing goals. The following statement clearly expresses

a prevailing opinion, “Visual experimentation more than any other factors contributed to the

overall marketing goals.” In several instances visual experimentation was mentioned in

conjunction with functional levers, “Experimentation with the visual and functional elements of

our customer facing solutions had the most measurable effect on the interactive marketing KPIs.”

It is important to note that several participants indicated that experimentation with visual

elements was not only most impactful, but was the element that the company experimented with

the most, “More than 80% of the experiments have been conducted using visual elements. It is

not surprising that visual experimentation had the greatest impact on the listed marketing goals.”

In addition several participants of the study chose to identify all experimentation levers

indicating the importance of all factors, “I believe that all the lever types mentioned above are

necessary in a successful experimentation practice…”

In the same vein, participants’ answers related to experimentation in the context of

external conditions were heavily clustered in favor of seasonal factors (see Figure 39), “Our

business is highly seasonal. In addition achieving marketing goals is very dependent on the

demographics of the users. Experimenting in the context of user demographics and seasonal

changes had a significant impact on the listed marketing goals.” The most significant statement

is the following response which ties the results of the analysis to the company specifics, “The

economics of the company change dramatically on the micro and macro temporal bases. Taking

these internal factors into consideration during experimentation proved to be successful.”

Textual Description

Analysis of the clusters of meaning resulted in an amalgamated textual description that

had several prevailing themes and conclusions. The participants of the study felt that

experimentation with visual experimentation levers had a very strong positive impact on the

interactive marketing goals of the company. A significant number of participants also felt that, in

addition to visual experimentation elements, functional levers had a strong positive impact as

well. One of the prevailing themes clearly identifiable in many survey responses had to do with

the fact that the company chose to emphasize visual experimentation elements over other possible

options. At the same time several participants of the study emphasized that all experimentation

levers yielded positive results and therefore all need to be considered as part of the

experimentation process.

With regards to experimentation with external factors, the opinions of the participants

were similarly strongly centered on seasonal factors. Several responses highlighted the seasonal

nature of the business. As such, taking seasonality into account had a strong positive impact on

the experimentation results. Furthermore several participants looked at seasonality in a macro

context, where seasonality implied changes in the business patterns from month to month and

even from quarter to quarter. However, several participants also emphasized experimentation in

the context of the micro temporal level, where experimentation accounted for changing traffic

patterns throughout the day or several days.

Composite Description

Similar to research question one, the responses given by the participants of the study in the

qualitative portion of research question two were short enough not to require further reduction.

Therefore, the textual description has been combined with the composite description, and this

section notes possible anomalies and points of focus. One such point was related to causality. It was

not clear if the company chose to perform the majority of the experiments with visual elements

because they yielded most significant results, or experimentation with visual elements yielded

superior results because the company chose to concentrate on them. There are strong arguments

that could be made in support of both points. Visual elements are the easiest to experiment with

and they are most abundant. At the same time, the largest improvements in the interactive

marketing metrics were related to experimentation with visual elements. It is quite likely that

both statements are true, yielding bi-directional causality.

Research Objective Three

Another key research question was designed to ascertain the impact of experimentation

on product innovation. It was represented by the following qualitative and quantitative survey

questions: (a) Please Rate the Impact of Experimentation on Interactive Marketing Product

Innovation; and (b) Please Describe in Your Own Words How Experimentation Has Reflected

upon Interactive Marketing Product Innovation.

Horizontalization

Based on the statistical analysis of the quantitative results related to the impact of

experimentation on product innovation, nine participants or 81.8% felt that experimentation had a

positive impact on interactive marketing product innovation (see Figure 21). A total of two

participants, or 18.2%, indicated that they were not sure how to answer this question. These results

are strongly supported by qualitative data contributed by the participants (see Figure 41). Even

though the questions related to the interactive marketing product lifecycle did not directly address

product innovation, while answering them the participants of the study

offered several insights that were very helpful in understanding experiment-driven product

innovation (see Figures 19 and 20). Eight participants or 72.7% felt that experimentation

has a positive impact on product lifecycle, whereas three participants or 27.3% were not sure

how to answer this question (see Figure 18).

In order to gain further insights into the qualitative and quantitative answers given by the

participants, several cross-tabulations were produced (see Figure 22), (see Figure 23). The first

cross-tabulation correlated the participants’ views on the impact of experimentation on product

innovation with their job function. The majority of the participants of the study that indicated

that experimentation had a positive impact on product invocation were either directly involved in

product development or had a tangential job function. Two participants that were not sure about

the impact of experimentation on product innovation belonged to the Interactive Marketing and

Data Analysis groups.

The second cross-tabulation correlated participants’ views on the impact of

experimentation on product innovation with their experimentation experience. Two participants

of the study who were not sure about the impact of experimentation on product innovation self-

assessed their experience as intermediate and novice respectively. On a percentage basis, 100%

of the self-identified expert participants, 87.5% of the self-identified intermediate participants

and 50% of novice participants assessed the impact of the experimentation on product innovation

as positive.

The complete extraction of the answers related to research question three was grouped

and analyzed for significant statements (see Figure 41).

Clusters of Meaning

The following answer expressed the prevailing opinion of the survey participants,

“Experimentation affected product innovation in a positive manner. Several new products

emerged as a result.” Some qualitative statements were more specific in identifying the exact

impact of experimentation on product innovation, “Customer feedback collected through

experimentation yielded several innovative ideas.” Those participants that were not sure about

the impact of the experimentation on product innovation gave the following qualitative answers,

“I am not sure how to answer this question” and “No specific data was collected.”

The answers to the product innovation questions were cross-referenced with the questions

related to the impact of experimentation on product lifecycle. Several phases of the product

development lifecycle are closely related to product innovation. In particular, idea incubation is

almost directly synonymous with product innovation (see Figure 40). We find support for this

supposition in one answer regarding product development lifecycle, “Product development

experimentation impacted on various phases of product lifecycle, especially in the early stages of

product development.” In addition to the insights gained on product innovation, we find that the

same survey participants gave identical answers, “I am not sure how to answer this question” and

“No specific data was collected” to the questions related to product lifecycle.

Textual Description

Based on the analysis of the clustered data, a compound description emerged pointing to

the fact that the majority of survey participants felt that experimentation had a positive impact on

product innovation. More specifically, study participants stated that interactive marketing

experimentation helped product innovation by allowing the company to churn through several

product ideas on an expedited basis. In addition it allowed the company to gather customer

feedback before the product was fully crafted and the company had committed to the full product

development. Furthermore, customer feedback either directly or tangentially pointed the

company to several innovative ideas that were either not actively discussed or had previously not

been considered as promising by the decision makers of the company. As such these ideas came

to fruition in a true customer-driven fashion. In general the company observed a shift from

opinion-driven product development to a customer-experimentation model, where opinions of

the customers were given a greater weight than previously.

Composite Description

It is important to note that product-related questions are strongly correlated with the job function of the participants. In other words, participants of the study that did not focus on product

development or did not perform job duties that were at least tangential to product development

may have not been aware of the impact of the experimentation on product development or might

have given contradictory answers.

Research Objective Four

Another research question related to product development was designed to ascertain the

impact of experimentation on product improvement. It was represented by the following

qualitative and quantitative survey questions: (a) Please Rate the Impact of Experimentation on

Interactive Marketing Product Improvement; and (b) Please Describe in Your Own Words How

Experimentation Has Reflected upon Interactive Marketing Product Improvement.

Horizontalization

Based on the statistical analysis of the quantitative results related to the impact of experimentation on product improvement, nine participants or 81.8% felt that experimentation had a positive impact on interactive marketing product improvement (see Figure 24). Only two

participants or 18.2% indicated that they were not sure how to answer this question. These

results are strongly supported by qualitative data contributed by the participants (see Figure 42).

In order to gain further insights into the qualitative and quantitative answers given by the

participants, several cross-tabulations were produced (see Figures 25 and 26). The first

cross-tabulation correlated participants’ views on the impact of experimentation on product

improvement with their job function. The majority of participants of the study that indicated that

experimentation had a positive impact on product improvement were either directly involved in

product development or had a tangential job function. Two participants that were not sure about

the impact of experimentation on product improvement belonged to the Interactive Marketing

and Data Analysis groups. The complete extraction of the questions related to research question four was grouped and analyzed for significant statements (see Figure 42).

Clusters of Meaning

The following answer highlighted a prevailing opinion of the survey participants,

“Experimentation affected product improvement in a positive manner. It manifested itself in the

increased pace of product improvement.” Some qualitative statements were more specific in

identifying the exact impact of experimentation on product improvement, “Customer feedback

collected through experimentation allowed for continuous product improvement.” Those

participants of the study that were not sure about the impact of experimentation on product

improvement gave the following qualitative answers, “I am not sure how to answer this

question” and “No specific data was collected.”

Textual Description

Based on the analysis of the clustered data a compound description emerged pointing to

the fact that the majority of survey participants felt that experimentation had a positive impact on

product improvement. More specifically, study participants stated that interactive marketing experimentation helped product improvement by allowing the company to churn through several

feature ideas on an expedited basis. In addition experimentation allowed the company to gather

customer feedback on the newly planned features. Furthermore, customer feedback either

directly or tangentially pointed the company to several innovative feature ideas.

Composite Description

It is important to note that the survey participants gave almost identical answers to the

questions related to product innovation and product improvement. In fact some answers were

seemingly cut and pasted from the product innovation section. It is possible that participants of

the survey combined product innovation and product improvement concepts into one.

Research Objective Five

The remaining research question related to product development was designed to

ascertain the impact of experimentation on product deployment risk. It was represented by the

following qualitative and quantitative survey questions: (a) Please Rate the Impact of

Experimentation on Interactive Marketing Product Deployment Risk; and (b) Please Describe in

Your Own Words How Experimentation Has Reflected upon Interactive Marketing Product

Deployment Risk.

Horizontalization

Based on the statistical analysis of the quantitative results related to the impact of experimentation on product deployment risk, eight participants or 72.7% felt that experimentation had a positive impact on interactive marketing product deployment risk (see Figure 27). Only two

participants or 18.2% indicated that they were not sure how to answer this question and one

participant or 9.1% felt that the impact of experimentation on product deployment risk was

neutral. These results are strongly supported by qualitative data contributed by the participants

(see Figure 43).

In order to gain further insights into the qualitative and quantitative answers given by the

participants, several cross-tabulations were produced (see Figures 28 and 29). The first

cross-tabulation correlated participants’ views on the impact of experimentation on product

deployment risk with their job function. The majority of study participants that indicated

experimentation had a positive impact on product deployment were either directly involved in

product development or had a tangential job function. Two participants that were not sure about

the impact of experimentation on product deployment risk belonged to Interactive Marketing and

Data Analysis groups. One of the participants that indicated that the impact of the

experimentation on the product deployment risk was neutral belonged to a User Experience

group.

The second cross-tabulation cross-referenced participants’ answers with their self-

assessed experimentation experience. The participants that felt that experimentation had a

positive impact on product deployment risk were either self-assessed experts or intermediate

experimentation users. The participants of the study that indicated that they were not aware of

the specific impact or were not sure how to answer this question self-identified as either novice

or intermediate experimentation users (see Figure 29).

The complete extraction of the questions related to research question five was grouped

and analyzed for significant statements (see Figure 43).

Clusters of Meaning

The following answer expressed a prevailing opinion of the survey participants,

“Experimentation practice has reduced product deployment risk by allowing limited product

deployment.” Some qualitative statements were more specific in identifying the exact impact of

experimentation on product deployment risk, “Experimentation reduced product deployment risk

by allowing easy rollback in the case of errors.” Those participants that were not sure about the

impact of the experimentation on product deployment risk gave the following qualitative

answers, “I am not sure how to answer this question” and “No specific data was collected.” A

participant that indicated in the quantitative portion of the survey that he/she felt the impact of experimentation on product deployment had been neutral gave the following answer to the paired qualitative question, “I am not aware of any product deployment risks that were mitigated.”

Textual Description

Based on the analysis of the clustered data, a compound description emerged pointing to the fact that the majority of survey participants felt that experimentation had a

positive impact on product deployment risk. More specifically, study participants stated that

interactive marketing experimentation helped mitigate product deployment risk by allowing the

company to make a partial commitment to product deployment. It allowed the company to assert

a greater control over the product deployment cycle. Only partial traffic was directed to the

newly deployed products. Full traffic flow was only turned on when the company felt that the new product did not adversely affect key performance indicators of the company such as CTR, CPA,

CPI, Upsell Score and so on. If an adverse effect was discovered the company had an easy route

to either further reduce the flow of traffic to the newly deployed product in order to observe it in

a more controlled environment or reduce web traffic to zero, a decision that is tantamount to a

product rollback. Since the control of traffic is virtually instantaneous and the amount of

potential loss could be strictly controlled, companies are more willing to allow investigative

analysis while products are still only exposed to a limited number of users.
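The deployment pattern the participants describe, directing only a fraction of traffic to a newly deployed product and dialing that fraction down to zero as a rollback, can be sketched as a deterministic traffic splitter. The function and parameter names below are illustrative assumptions, not taken from the company's actual systems:

```python
import hashlib

def assign_to_new_product(user_id: str, ramp_fraction: float) -> bool:
    """Deterministically route a fraction of users to the new product.

    Hashing the user id keeps each user's assignment stable across visits,
    so observed KPI differences (CTR, CPA, and so on) can be attributed to
    the product variant rather than to users churning between variants.
    """
    digest = hashlib.sha256(user_id.encode("utf-8")).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64  # uniform in [0, 1)
    return bucket < ramp_fraction

# Limited deployment: roughly 10% of users see the new product.
exposed = [u for u in (f"user-{i}" for i in range(10_000))
           if assign_to_new_product(u, 0.10)]

# Rollback is just a configuration change: a ramp fraction of 0.0 exposes no one.
assert not any(assign_to_new_product(u, 0.0) for u in exposed)
print(f"{len(exposed)} of 10000 users exposed")
```

In this sketch the rollback the participants mention is a single configuration change rather than a redeployment, which is what makes the potential loss both bounded and, as the text puts it, virtually instantaneous to reverse.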

Composite Description

Similar to the other research questions, the responses given by the study participants to the qualitative portion of research question five were short enough not to require a separate textual description. Therefore, the textual description has been combined with the composite description. This

section highlighted possible anomalies and points of focus. It is important to note that research

questions three, four and five induced almost identical responses from the survey participants.

The answers to these questions were consistent among study participants. In other words, if a

participant felt that experimentation had a positive impact on product improvement he/she felt

the same way about product innovation and product deployment risk. However, if the participant

was not sure what impact experimentation had on product innovation, he/she was also unsure

about its impact on product improvement and product deployment. As such it is possible to

conclude that the answers to the qualitative and quantitative portions of the survey related to the

impact of various product aspects were strongly correlated to knowledge of the interactive

marketing product itself.

Research Objective Six

The remaining research question was designed to ascertain the impact of experimentation

on the competitive standing of the company. It was represented by the following qualitative and

quantitative survey questions: (a) Please Rate the Impact of Experimentation on the Competitive Standing of the Company; (b) Please Describe in Your Own Words How Experimentation Has Reflected upon the Competitive Standing of the Company; (c) Please Rate the Impact of Experimentation on the Revenue of the Company; (d) Please Describe in Your Own Words How Experimentation Has Reflected upon the Revenue of the Company; (e) Please Rate the Impact of Experimentation on the Profit of the Company; (f) Please Describe in Your Own Words How Experimentation Has Reflected upon the Profit of the Company; (g) Please Rate the Impact of Experimentation on the Market Size of the Company; (h) Please Describe in Your Own Words How Experimentation Has Reflected upon the Market Size of the Company; (i) Please Rate the Impact of Experimentation on the Market Penetration of the Company; (j) Please Describe in Your Own Words How Experimentation Has Reflected upon the Market Penetration of the Company; (k) Please Rate the Impact of Experimentation on the Market Segmentation of the Company; and (l) Please Describe in Your Own Words How Experimentation Has Reflected upon the Market Segmentation of the Company.

Horizontalization

According to the statistical analysis of the quantitative question 29 of the survey (see

Figure 30), five participants or 45.5% of those who completed this question found that

experimentation had a positive impact on the competitive standing of the company. Four participants or 36.4% of the

survey participants indicated that they were not sure how to answer this question. Lastly, two

participants or 18.2% felt that experimentation had neutral impact on the competitive standing of

the company. Other questions related to competitive standing of the company that touched upon

market penetration, market segmentation, market targeting and market size in the context of

experimentation, exhibited a similar percentage breakdown, with the notable exceptions of two

questions that inquired about the impact of experimentation on the revenue and profit of the

company. 100% of the participants of the survey felt that experimentation positively impacted

upon the revenue and profit of the company (see Figure 44).

In order to gain further insights into qualitative and quantitative answers given by the

participants, several cross-tabulations were produced (see Figures 31 and 32). The first

cross-tabulation correlated the participants’ views on the impact of experimentation on the

competitive standing of the company with their job function. Opinions of the participants of the

survey were uniformly distributed across all job functions represented in the survey.

The second cross-tabulation cross-referenced participants’ answers with their self-

assessed experimentation experience (see Figure 32). Similarly, the opinions of the self-

identified experts, intermediate and novice users were uniformly distributed and ranged from

positive, to neutral and not sure.

The complete extraction of the questions related to research question six was grouped

and analyzed for significant statements (see Figure 44).

Clusters of Meaning

The following answers expressed the prevailing opinion of the survey participants in each

category, “Experimentation made the company more competitive by remaining relevant to our

customers”, “I am not sure how to answer this question” and “Competitive impact is difficult to

ascertain.” Based on the divergent statements above, it was clear that study participants were

divided on the impact of experimentation on the competitive standing of the company. A

similar divergence of opinion was observed during the composite analysis of the question related

to the competitive standing of the company that touched upon market penetration, market

segmentation, market targeting and market size in the context of experimentation, even though a

slightly higher percentage of participants had a positive opinion about the impact of the

experimentation on these components of the competitive standing of the company.

A drastically different picture emerged during clustered analysis of the impact of

experimentation on the revenue and profits of the company. The following statements express

the prevailing opinions, “Experimentation had a direct impact on the bottom line of the

company” and “Experimentation allowed the company to expand our customer base, resulting in

higher revenues.” It was clear that study participants did not consider revenue and company

profits as part of the competitive standing of the company.

Textual Description

Based on the analysis of the clustered data, it was clear that study participants did not hold a uniform opinion on the impact of experimentation on the competitive standing of the company. While examining the impact of the various elements that make up the competitive standing of the company, their opinions remained mixed. At the same time, study participants were unhesitant in expressing their strong opinion in support of experimentation as one of the key contributors to the profits and revenue of the company. It is also clear that some participants had difficulty interpreting the exact nature of the questions related to the competitive standing of the company. Analysis of the qualitative data revealed that the highest percentage of participants were either unsure how to answer this question or gave the shortest answer possible. In contrast, the revenue and profit questions yielded expressive answers and indicated a strong degree of confidence and conviction in the given answers.

Composite Description

One of the biggest anomalies and the point of focus of this research question is the fact

that study participants felt drastically differently about the impact of experimentation on the

overall competitive standing of the company and the impact of experimentation on the profit and

revenue of the company. It is highly likely that study participants did not consider revenue and

profit to be part of the competitive standing of the company.

Validity and Bias

In order to ensure validity and eliminate bias, study participants were invited to complete the survey under false pretenses by the COO of the company where the research was conducted and the researcher is employed. The survey was presented to the participants as a means to

ascertain the effectiveness of Experimentation in Interactive Marketing in the context of their

work. Since all of the study participants work in the same organization as the researcher of the

study, deception was absolutely vital to avoid credibility and bias issues. Even if any possible

undue influence of the researcher could be somehow mitigated, knowledge of the true nature of the study could affect the results through attempts by the participants to be helpful to the researcher, thus contributing erroneous data that they felt would advance the aims of the research. Since the researcher of the study holds an executive position at the company, even the appearance of impropriety posed too great a risk for the true nature of the study to be revealed. If participants of the study had become aware of the true intentions of the research, they could have skewed their answers in ways they perceived to be helpful to the researcher, thus affecting the credibility of the study.

Since the study employed a phenomenological approach, it was essential for the

researcher to mitigate his personal bias on the subject while interpreting the results of the study.

In order to do that, the researcher utilized a Bracketing approach. Namely, prior to the analysis of

the survey results, the researcher expressed his own thoughts regarding each of the research

questions. While analyzing the results, the researcher compared his opinions with the opinions of

the participants to make sure that the latter were not misinterpreted. The full set of results is

presented in the next chapter. The issues of credibility, validity and bias are further explored in

the next chapter as well.

CHAPTER 5: RESULTS, CONCLUSIONS, AND RECOMMENDATIONS

The last chapter presented a detailed analysis of the collected data. The issues of

credibility, validity and bias were specifically outlined and discussed. This chapter centers on

presenting the study results, outlining potential gaps, as well as providing recommendations for

future research. The results of the study were summarized as a cohesive narrative that allowed

the final conclusions of the study to be formulated. In addition, the study results and conclusions

were examined in the light of existing research.

Results

One of the key aims of the study was to examine interactive marketing through the prism

of experimentation as a way of propelling interactive marketing forward and enabling it to keep

pace with technological advances. The following research questions were examined and

analyzed: (a) What is the impact of experimentation on interactive marketing goals? (b) What are

the key experimentation levers pertaining to interactive marketing? (c) What is the impact of

experimentation on product innovation in interactive marketing? (d) What is the impact of

experimentation on interactive marketing product improvement? (e) What is the impact of

experimentation on interactive marketing product development and deployment risk? (f) What is

the impact of experimentation on the competitive standing of the companies involved in

interactive marketing?

In addition to these research questions the study proposed the following research null

hypotheses: (a) experimentation has a positive impact on interactive marketing goals; (b)

experimentation has a positive impact on product innovation in interactive marketing; (c)

experimentation has a positive impact on product improvement in interactive marketing; (d)

experimentation has a mitigating impact on product development and deployment risk; and (e)

experimentation has a positive impact on the competitive standing of the companies involved in

interactive marketing.

In order to confirm or reject these null hypotheses, the study comprised a mixed-method

survey where participants of the study were asked to reflect on various aspects of

experimentation in interactive marketing in the context of their job. The study was conducted in

the organization where the researcher is employed. The participants of the study were invited to

participate in the study under false pretenses by the Chief Operating Officer (COO) of the

company. Only the researcher of the study and the COO were aware of the true intentions of the

survey. Keeping the true intentions of the study a secret was key to increasing validity and credibility and reducing bias. The study was conducted in a double-blind manner in order to preserve confidentiality and ensure impartiality. In other words, the study participants were

unaware of the true reasons for the survey and the researcher was unaware of the identities of the

survey participants.

The survey was sent out in two installments to 23 participants. Based on the application

of the phenomenological methodology consisting of horizontalization, clustering, textual

description and composite description, the analysis of the received data yielded the following results: (a) the null hypothesis asserting that experimentation has a positive impact on interactive marketing goals was strongly confirmed; 93.3% of the participants indicated that

experimentation had a positive impact on interactive marketing goals. This result was consistent

with the conclusions reached by Raman (1996), where user-tailored interactive advertising had a

high conversion outcome; (b) the null hypothesis asserting that experimentation has a positive

impact on product innovation in interactive marketing was confirmed as well; 81.8% of the

participants felt that experimentation strongly contributed to product innovation. The results of

this particular research question were consistent with the conclusions outlined by Von Hippel

(1998), which concluded that product innovation was successfully driven by product users; (c)

the null hypothesis asserting that experimentation had a positive impact on interactive marketing

product improvement was also strongly confirmed. 81.8% of the participants indicated that

experimentation had a positive and lasting impact on product improvement. Similar conclusions

were reached by Thomke (1995) who found that experimentation significantly contributed to

product improvement; (d) the hypothesis asserting that experimentation had a positive impact on

product development and product deployment risk was similarly confirmed with 72.7% of study

participants indicating that experimentation had mitigated product development and deployment

risk. Both Von Hippel (1998) and Thomke (1995) reached conclusions in support of these

findings. In particular, their studies asserted that the economics of product development were

significantly affected by experimentation practice. Experimentation resulted in reduced product

cost and consequently reduced product development and deployment risk; (e) the hypothesis

asserting that experimentation has a positive impact on the competitive standing of the company

was rejected by the participants of the study. Only 45.5% of the survey participants felt that the

experimentation impact on the competitive standing of the company had been positive. There

are currently no empirical studies relating the competitive standing of the company with

experimentation efforts. As such, it is impossible to make inferences related to consistency of the

research findings.

The only research question without an associated research hypothesis, which examined the impact of experimentation on interactive marketing levers, yielded the following results:

80% of the participants felt that visual experimentation levers had the strongest impact on

interactive marketing goals. This was closely followed by functional levers with 53.3% of the

vote. These findings are supported in part by the conclusions from Jebakumari (2002), Mark

(2003) and Milley (2002). In particular Milley (2002) concluded that interactive marketing

results were dependent on capturing and applying behavioral data.

Conclusions

The results of the study indicated that experimentation had a profound impact on almost

all aspects of the company where the research was conducted. The majority of the research

results were supported by the conclusions of previous empirical research studies and were

consistent with the reference material. It was clear that the study participants felt that the

experimentation efforts of the company were overwhelmingly positive even though non-uniform

results were achieved at the different phases of adoption of experimentation. More specifically,

some participants felt that the initial experimentation efforts lacked focus and were disorganized.

As such, poor experimentation results were not deemed to be a function of the experimentation

efforts but rather due to organizational maturity. When the experimentation practice was fully

established the experimentation results remained consistently positive. In an overwhelming

majority of the cases, the qualitative data was fully confirmed by quantitative responses. The

results of the research question relating experimentation efforts with the competitive standing of

the company were inconclusive. It appears that participants of the study did not consider the profit

and revenue of the company as significant contributors to the competitive standing of the

company. More specifically, if the research question dealing with the competitive standing of the company had been dropped from the study and replaced by a question relating experimentation to the profit and revenue of the company, it would have been overwhelmingly

confirmed. In fact 100% of the participants felt that experimentation positively impacted upon

the profit and revenue of the company. This aspect of experimentation warrants further study and

exploration. Another possibly confounding aspect is that study participants may have experienced “survey fatigue”: the questions related to competitive standing, placed at the end of a very long survey, may not have obtained accurate or honest responses. It can be argued

however, that the results have general applicability to the eCommerce and the Interactive

Marketing industry at large. If these results are confirmed in a broader study, then

experimentation in the context of interactive marketing may serve as a disruptive technology that

would drive almost universal adoption and eventual commoditization, which in turn may spur the

emergence of an entire experimentation industry with a diverse offering of experimentation

services and tools.

Recommendations

One of the key recommendations is to confirm the results of this qualitative study by

conducting a broad quantitative study on the impact of experimentation on interactive marketing

goals in a diverse set of interactive marketing companies. The results of the present study

indicate that this impact is fairly significant. If these results are confirmed in a broad study

involving a significantly diverse number of interactive marketing companies, it is possible that

experimentation in interactive marketing might lead to a paradigm shift in interactive marketing

practices. It is also recommended that technology departments be included in future studies.

There are a number of factors that could potentially affect the technological efforts of the

interactive marketing companies. Future studies may include such topics as: exploring the

experimentation process in interactive marketing companies; exploring tools used to facilitate

experimentation in interactive marketing companies; and exploring metrics related to the

experimentation efforts of interactive marketing companies.

Based on the fact that this study received an inconclusive result regarding the impact of

experimentation on the competitive standing of the company, it is recommended that this subject be explored in isolation. This researcher believes that such an impact is significant and needs to

be explored in more detail. Additional research may be centered on comparing and contrasting

interactive marketing companies that utilize experimentation versus those that do not. A specific

focus of the study should concentrate on the use of experimentation to achieve competitive

advantage.

Another topic that warrants close attention is closed-loop experimentation. There is

strong anecdotal evidence to suggest that closed-loop experimentation, which ties together

all facets of interactive marketing such as Search Marketing, Display Marketing, Affiliate

Marketing, and Email Marketing, is more effective than isolated experimentation involving

only a few interactive marketing components. To date, however, no empirical research on this

question has been conducted.
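The closed-loop idea can be made concrete with a toy simulation: an epsilon-greedy allocator that feeds each round's conversion results from all channels back into the next round's traffic split, compared against isolated per-channel tests with no feedback between channels. The channel names and conversion rates below are illustrative assumptions only, not data from the study.

```python
import random

# Hypothetical per-channel conversion rates (illustrative only).
TRUE_RATES = {"search": 0.05, "display": 0.02, "affiliate": 0.03, "email": 0.04}

def isolated(visitors=20000, seed=7):
    """Isolated testing: a fixed equal split of traffic per channel,
    with no feedback between channels."""
    rng = random.Random(seed)
    per_channel = visitors // len(TRUE_RATES)
    return sum(rng.random() < TRUE_RATES[channel]
               for channel in TRUE_RATES
               for _ in range(per_channel))

def closed_loop(visitors=20000, rounds=20, seed=7, epsilon=0.1):
    """Closed-loop testing: each round's results across all channels
    feed back into the next round's traffic allocation (epsilon-greedy)."""
    rng = random.Random(seed)
    stats = {c: {"visits": 0, "conv": 0} for c in TRUE_RATES}
    channels = list(TRUE_RATES)
    per_round = visitors // rounds
    total = 0
    for _ in range(rounds):
        # Re-estimate each channel's rate from all data observed so far;
        # unvisited channels default to 1.0 to force initial exploration.
        est = {c: s["conv"] / s["visits"] if s["visits"] else 1.0
               for c, s in stats.items()}
        best = max(est, key=est.get)
        for _ in range(per_round):
            channel = rng.choice(channels) if rng.random() < epsilon else best
            converted = rng.random() < TRUE_RATES[channel]
            stats[channel]["visits"] += 1
            stats[channel]["conv"] += converted
            total += converted
    return total
```

Under these toy assumptions the closed-loop allocator steers most traffic toward the best-performing channel and ends up with more total conversions than the isolated split; the simulation illustrates only the feedback idea, not the study's method.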

Lastly, the results of this study point to a strong correlation between a well-established

experimentation process and experimentation success. Additional study could focus on

quantifying the contribution of a well-established experimentation process to overall

experimentation success.

Limitations of the Study

One of the key limitations of the study is that it was conducted in a single

organization; as such, the quantitative results cannot be considered statistically significant.

The quantitative questions of the study verified the consistency of the paired qualitative

questions. In addition, participants from the Data Analysis and Business Development groups were

sparsely represented. The study used terminology and concepts that were significant and specific

to the organization where the study was conducted. Even though such terminology is prevalent in

the industry at large, it is possible that some aspects are either interpreted differently or are not

considered significant in similar companies.

Another significant limitation of the study was the fact that Technology groups were

excluded from participating in the survey. Since the study researcher was responsible for all of

the technology, quality assurance and project management efforts of the company, the risk of

bias and potential loss of validity was too great to allow Technology groups to participate. As

such, a number of research questions that dealt with the technological aspects of experimentation

in interactive marketing had to be excluded.

Lastly, the results of the study are based on the interpretations of the researcher. As such,

credibility and bias are potential limiting factors. Attempts were made to mitigate threats to

validity and bias by using a number of techniques and approaches; however, it is impossible to

totally exclude personal bias.

APPENDIX A. SURVEY TEMPLATE

Survey Page 1

Survey Page 2

Survey Page 3

Survey Page 4

APPENDIX B. CONCEPTUAL FRAMEWORK

APPENDIX C. INTERACTIVE MARKETING

APPENDIX D. MARKETING

APPENDIX E. EXPERIMENTATION CYCLE

APPENDIX F. DATA ANALYSIS

Figure F1. Response summary

Figure F2. Job function breakdown

Figure F3. Job function and experimentation experience cross-tabulation

Figure F4. Experimentation impact breakdown

Figure F5. Experimentation impact and department cross-tabulation

Figure F6. Experimentation impact and experimentation experience cross-tabulation

Figure F7. Experimentation levers impact breakdown

Figure F8. Experimentation levers impact and experimentation experience cross-tabulation

Figure F9. Experimentation levers impact and job function cross-tabulation

Figure F10. Experimentation conditions impact breakdown

Figure F11. Experimentation conditions impact and experimentation experience cross-tabulation

Figure F12. Experimentation conditions impact and job function cross-tabulation

Figure F13. Experimentation CPA impact and experimentation overall impact cross-tabulation

Figure F14. Experimentation CPI impact and experimentation overall impact cross-tabulation

Figure F15. Experimentation CPA impact and experimentation overall impact cross-tabulation

Figure F16. Experimentation upsell impact and experimentation overall impact cross-tabulation

Figure F17. Experimentation CPI impact and experimentation overall impact cross-tabulation

Figure F18. Experimentation product lifecycle impact breakdown

Figure F19. Experimentation impact on product lifecycle and job function cross-tabulation

Figure F20. Experimentation impact on product lifecycle and experimentation experience cross-tabulation

Figure F21. Experimentation product innovation impact breakdown

Figure F22. Experimentation product innovation impact and experimentation experience cross-tabulation

Figure F23. Experimentation product innovation impact and job function cross-tabulation

Figure F24. Experimentation product improvement impact breakdown

Figure F25. Experimentation product improvement impact and job function cross-tabulation

Figure F26. Experimentation product improvement impact and experimentation experience cross-tabulation

Figure F27. Experimentation product deployment risk impact breakdown

Figure F28. Experimentation product deployment risk impact and experimentation experience cross-tabulation

Figure F29. Experimentation product deployment risk and job function breakdown

Figure F30. Experimentation competitive standing of the company impact breakdown

Figure F31. Experimentation competitive standing of the company impact and job function cross-tabulation

Figure F32. Experimentation competitive standing of the company impact and experimentation experience cross-tabulation

Figure F33. Experimentation revenue impact

Figure F34. Experimentation profit impact

Figure F35. Experimentation market size impact

Figure F36. Experimentation market penetration impact

Figure F37. Experimentation marketing goals impact horizontalization

Figure F38. Experimentation marketing levers impact horizontalization

Figure F39. Experimentation conditions impact horizontalization

Figure F40. Experimentation product lifecycle impact horizontalization

Figure F41. Experimentation product innovation impact horizontalization

Figure F42. Experimentation product improvement impact horizontalization

Figure F43. Experimentation product deployment risk impact horizontalization

Figure F44. Experimentation competitive standing impact horizontalization
