
Chapter 1

Management research:
It is the systematic and objective identification, collection, analysis,
dissemination and use of information for the purpose of improving
decision making related to the identification and solution of problems
and opportunities in management.
Research is done for two reasons 1) to identify management
problems, 2) to solve management problems.

Management research involves the identification, collection, analysis,


dissemination, and use of information. It is a systematic and objective
process designed to identify and solve management problems. Thus
management research can be classified as problem identification
research and problem solving research.

The management research process:


It is a set of six steps that defines the task to be accomplished in
conducting a management research study.

These include problem definition, development of an approach to the


problem, research design formulation, field work, data preparation
and analysis, report preparation and presentation.

Problem identification research:


Research that is undertaken to help identify problems that are not
necessarily apparent on the surface and yet exist or are likely to arise
in the future.
Problem solving research:
Research undertaken to help solve specific management problems.

The management research process consists of six steps that must be


followed systematically. The role of management research is to
assess information needs and provide relevant information in order to
improve management decision making. However, the decision to
undertake management research is not an automatic one but must be
carefully considered.

Management research may be conducted internally or may be


purchased from external suppliers, referred to as the management
research industry. Full service suppliers provide the entire range of
management research services from problem definition to report
preparation and presentation. The services provided by these
suppliers can be classified as syndicated, standardized, customized,
or internet services. Limited service suppliers specialize in one or a
few phases of the management research project. Services offered by
these suppliers can be classified as field services, coding, and data
entry, data analysis, analytical services, or branded products.

Competitive intelligence: the process of enhancing marketplace
competitiveness through a greater understanding of a firm's
competitors and the competitive environment.

Due to the need for management research, attractive career
opportunities are available with management research firms,
business and non-business firms, agencies with management
research departments, and advertising agencies. Information
obtained using management research becomes an integral part of
the MIS and DSS. Management research contributes to the DSS by
providing research data to the database, management models and
analytical techniques to the model base, and specialized management
research programmes to the software base. International
management research is much more complex than domestic research
as the researcher must consider the environment prevailing in the
international markets that are being researched. The ethical issues in
management research involve four stakeholders:
1. Management Researcher

2. The Client

3. The Respondents
4. The Public

The internet can be used at every step of the management research
process. SPSS for Windows is an integrated package that can greatly
facilitate management research.

Chapter 2

Defining the marketing research problem is the most important step


in a research project. It is a difficult step, because frequently
management has not determined the actual problem or has only a
vague notion about it. The researcher’s role is to help management
identify and isolate the problem.
Problem definition: It is a broad statement of the general problem and
identification of the specific components of the marketing research
problems.

Problem Audit: A comprehensive examination of a marketing problem


to understand its origin and nature.

The tasks involved in formulating the marketing research problem


include discussions with management including the key decision
makers, interviews with industry experts, analysis of secondary data,
and qualitative research.

Secondary data: Data collected for some purpose other than the
problem at hand.

Primary data: Data originated by the researcher specifically to address
the research problem.

Qualitative research: An unstructured, exploratory research


methodology based on small samples intended to provide insight and
understanding of the problem setting.

Pilot surveys: Surveys that tend to be less structured than large-scale
surveys in that they generally contain more open-ended questions
and the sample size is much smaller.
Case studies: Case studies involve an intensive examination of a
few selected cases of the phenomenon of interest. Cases could be
customers, stores, or other units.

These tasks should lead to an understanding of the environmental


context of the problem. The environmental context of the problem
should be analyzed and certain essential factors evaluated. These
factors include past information and forecast about the industry and
the firm, objectives of the DM, buyer behavior, resources and
constraints of the firm, the legal and economic environment and
marketing and technological skills of the firm.

Analysis of the environmental context should assist in the


identification of the management decision problem, which should
then be translated into a marketing research problem. The
management decision problem asks what the DM needs to do,
whereas, marketing research problem asks what information is
needed and how it can be obtained effectively.

Management decision problem: The problem confronting the decision


maker. It asks what the decision maker needs to do.

Marketing research problem: A problem that entails determining what
information is needed and how it can be obtained in the most feasible
way.

The researcher should avoid defining the marketing research problem


either too broadly or too narrowly. An appropriate way of defining the
marketing research problem is to make a broad statement of the
problem and then identify its specific components.

Figure: The Process of Defining the Problem and Developing an Approach

Tasks involved: discussions with decision makers, interviews with
industry experts, secondary data analysis, and qualitative research.
These tasks feed into the environmental context of the problem.

Step 1: Problem Definition - the management decision problem is
translated into the marketing research problem.

Step 2: Approach to the Problem - objective/theoretical foundations;
analytical model (verbal, graphical, mathematical); research questions;
hypotheses; specification of the information needed.

Step 3: Research Design

Developing an approach to the problem is the second step in the
marketing research process. The components of an approach consist
of an objective/theoretical framework, analytical models, research
questions, hypotheses, and specification of the information needed. It
is necessary that the approach developed be based on objective or
empirical evidence and be grounded in theory.

The steps in developing research questions and hypotheses are:
1) Specify the components of the marketing research problem and set
the objective/theoretical framework.
2) Frame the research questions.
3) Develop the analytical model.
4) State the hypotheses.

The relevant variables and their inter-relationships may be neatly
summarized via an analytical model. The most common kinds of
model structures are verbal, graphical, and mathematical.

Analytical model: An explicit specification of a set of variables and


their interrelationships designed to represent some real system or
process in whole or in part. They can have three forms: 1) Verbal
models 2) Graphical models 3) Mathematical Models.

Research questions are refined statements of the specific
components of the problem that ask what specific information is
required with respect to the problem components. Research
questions may be further refined into hypotheses. Finally, given the
problem definition, research questions, and hypotheses, the
information needed should be specified.
Hypothesis: an unproven statement or proposition about a factor or a
phenomenon that is of interest to the researcher.

When defining the problem in international marketing research, the
researcher must isolate and examine the impact of the self-reference
criterion (SRC), or the unconscious reference to one's own cultural
values. Several ethical issues that have an impact on the client and
the researcher can arise at this stage but can be resolved by adhering
to the seven Cs: Communication, Cooperation, Confidence, Candor,
Closeness, Continuity, and Creativity.

Chapter 3

A research design is a framework or blueprint for conducting the


marketing research project. It specifies the details of how the project
should be conducted. Research designs may be broadly classified as
exploratory or conclusive.

Research design: a frame work or blue print for conducting the


marketing research projects. It specifies the details of the procedures
necessary for obtaining the information needed to structure and solve
marketing research problems.

Research designs could be classified as exploratory or conclusive.


Exploratory research: One type of research design, which has as its
primary objective the provision of insights into and comprehension of
the problem situation confronting the researcher.

Conclusive Research: research designed to assist the decision maker


in determining, evaluating and selecting the best course of action to
take in a given situation. Conclusive research is typically more formal
and structured than exploratory research.

Figure: Classification of Marketing Research Designs

Research Design
  - Exploratory Research Design
  - Conclusive Research Design
      - Descriptive Research
          - Cross-Sectional Design
              - Single Cross-Sectional Design
              - Multiple Cross-Sectional Design
          - Longitudinal Design
      - Causal Research

The primary purpose of exploratory research is to provide insights
into the problem. Conclusive research is conducted to test specific
hypotheses and examine specific relationships. The findings from
conclusive research are used as input into managerial decision
making. Conclusive research may be either descriptive or causal.

The major objective of descriptive research is to describe market
characteristics or functions. A descriptive design requires a clear
specification of the who, what, when, where, why, and way of the
research. Descriptive research can be further classified into cross-
sectional and longitudinal designs. Cross-sectional designs involve
the collection of information from a sample of population elements at
a single point in time. In contrast, in longitudinal designs repeated
measurements are taken on fixed samples. Causal research is
designed for the primary purpose of obtaining evidence about cause-
and-effect (causal) relationships.

Comparison of Basic Research Designs:

Exploratory
  Objective: Discover ideas and insights.
  Characteristics: Flexible, versatile; often the front end of the total
  research design.
  Methods: Expert surveys, pilot surveys, secondary data (analyzed
  qualitatively), qualitative research.

Descriptive
  Objective: Describe market characteristics or functions.
  Characteristics: Marked by the prior formulation of specific
  hypotheses; preplanned and structured design.
  Methods: Secondary data (analyzed quantitatively), surveys, panels,
  observational and other data.

Causal
  Objective: Determine cause-and-effect relationships.
  Characteristics: Manipulation of one or more independent variables;
  control of other mediating variables.
  Methods: Experiments.

Descriptive Research: A type of conclusive research that has as its


major objective the description of something like market
characteristics or functions. A descriptive design requires a clear
specification of the who, what, when, where, why and way (the six
Ws).

Cross-Sectional Design: A type of research design involving the


collection of information from any given sample of population
elements only once.

Single Cross-sectional Design: A cross-sectional design in which one


sample of respondents is drawn from the target population and
information is obtained from this sample once.

Multiple Cross-sectional Designs: A cross-sectional design in which


there are two or more samples of respondents, and information from
each sample is obtained only once.

Cohort Analysis: A multiple cross-sectional design consisting of a


series of surveys conducted at appropriate time intervals. The cohort
refers to the group of respondents who experience the same event
within the same time interval.

Longitudinal Design: A type of research design involving a fixed
sample of population elements that is measured repeatedly on the
same variables. The sample remains the same over time, thus
providing a series of pictures which, when viewed together, portray a
vivid illustration of the situation and the changes that are taking
place over time.

Panel: A sample of respondents who have agreed to provide


information at specified intervals over an extended period.

Causal Research: A type of conclusive research where the major
objective is to obtain evidence regarding cause-and-effect
relationships.

A research design consists of six components. Error can be
associated with any of these components. The total error is
composed of random sampling error and non-sampling error. Non-
sampling error consists of non-response and response errors.
Response errors encompass errors made by the researcher,
interviewers, and respondents. A written marketing research proposal
including all the elements of the marketing research process should
be prepared. In formulating a research design when conducting
international marketing research, considerable effort is required to
ensure the equivalence and comparability of secondary and primary
data obtained from different countries.

Total Error: The variation between the true mean value in the
population of the variable of interest and the observed mean value
obtained in the marketing research projects.
Random Sampling Error: The error due to the particular sample
selected being an imperfect representation of the population of
interest. It may be defined as the variation between the true mean
value for the sample and the true mean value of the population.

Non-sampling Error: Non-sampling errors are errors that can be


attributed to sources other than sampling, and they can be random or
nonrandom. They result from a variety of reasons, including errors in
problem definition, approach, scales, questionnaire design,
interviewing methods, and data preparation and analysis. These
errors consist of non-response errors and response errors.

Non-response Error: A type of non-sampling error that occurs when


some of the respondents included in the sample do not respond. This
error may be defined as the variation between the true mean value of
the variable in the original sample and the true mean value in the net
sample.

Response Error: A type of non-sampling error arising from
respondents who do respond but give incorrect answers, or whose
answers are mis-recorded or mis-analyzed. It may be defined as the
variation between the true mean value of the variable in the net
sample and the observed mean value obtained in the marketing
research project. Response errors fall into three categories:
1) Researcher Errors
2) Interviewer Errors
3) Respondent Errors
Researcher errors include the following sub-classifications:
a) Surrogate Information error : Variation between the information
needed for the research problem and the information sought by
the researcher. E.g. Consumer preferences are recorded instead
of consumer choice of a new brand.
b) Measurement Error: Variation between the information sought
and the information generated by the measurement process
employed by the researcher. e.g. employing a scale that
measures perceptions rather than preferences.
c) Population definition error: Variation between the actual
population relevant to the problem at hand and the population
as defined by the researcher.
d) Sampling Frame Error: Variation between the population
defined by the researcher and the population as implied by the
sampling frame used. E.g. a list of telephone numbers from
directory does not represent population of potential consumers
because of unlisted, disconnected and new numbers in service.
e) Data Analysis Error: It occurs when an inappropriate statistical
procedure is used, resulting in incorrect interpretation and
findings.

Interviewer errors include the following sub-classifications:
a) Respondent Selection Error: It occurs when interviewers select
respondents other than those specified by the sampling design
or in a manner inconsistent with the sampling design.
b) Questioning Error: Errors made in asking questions of the
respondents or in not probing when more information is needed.
E.g. the interviewer does not use the exact wording given in the
questionnaire.
c) Recording Error: Errors in hearing, interpreting, and recording
the answers given by the respondents. E.g. the interviewer takes a
neutral response as a positive response.
d) Cheating Error: It arises when the interviewer fabricates answers
to a part or all of the interview.

Respondent errors include the following sub-classifications:
a) Inability Error: The respondents' inability to provide accurate
answers due to fatigue, unfamiliarity, boredom, question format,
content, or other factors.
b) Unwillingness Error: The respondents' unwillingness to provide
accurate information because of a desire to provide socially
acceptable answers, avoid embarrassment, or please the
interviewer.

Marketing Research Proposal / Synopsis: The official layout of the
planned marketing research activity for management. It describes the
research problem, the approach, the research design, data collection
methods, data analysis methods, and reporting methods, covering all
steps of the marketing research process. The proposal contains the
following elements:
1) Executive Summary
2) Background
3) Problem Definition / Objectives of the Research
4) Approach to the problem
5) Research Design
6) Field Work / Data Collection
7) Data Analysis
8) Reporting
9) Cost & Time
10) Appendices

In terms of ethical issues, the researchers must ensure that the


research design utilized will provide the information sought, and that
the information sought is the information needed by the client. The
client should have the integrity not to mis-represent the project and
should describe the situation that the researcher must operate within
and not make unreasonable demands. Every precaution should be
taken to ensure the respondents’ or subjects’ right to safety, right to
privacy, or right to choose.

Chapter 8

Measurement is the assignment of numbers or other symbols to
characteristics of objects according to set rules. Scaling involves the
generation of a continuum upon which measured objects are located.

Measurement: The assignment of numbers or other symbols to


characteristics of objects according to certain pre-specified rules.
Scaling: The generation of a continuum upon which measured objects
are located.

The four primary scales of measurement are nominal, ordinal,


interval, and ratio. Of these, the nominal scale is most basic in that
the numbers are used only for identifying or classifying objects.

Nominal Scale: A scale whose numbers serve only as labels or as


tags for identifying and classifying objects with a strict one to one
correspondence between the numbers and the objects.

Ordinal Scale: A ranking scale in which numbers are assigned to


objects to indicate the relative extent to which some characteristic is
possessed. Thus, it is possible to determine whether an object has
more or less of a characteristic than some other object.

Interval Scale: A scale in which the numbers are used to rate objects
such that numerically equal distances on the scale represent equal
distances in the characteristic being measured.

Ratio Scale: It is the highest scale. It allows the researcher to identify
or classify objects, rank order the objects, and compare intervals or
differences. It is also meaningful to compute ratios of the scale
values.

In the ordinal scale, the next higher level scale, the numbers indicate
the relative position of the objects but not the magnitude of difference
between them. The interval scale permits a comparison of the
difference between the objects. However, as it has an arbitrary zero
point, it is not meaningful to calculate ratios of scale values on an
interval scale. The highest level of measurement is represented by the
ratio scale in which the zero point is fixed. The researcher can
compute ratios of scale values using this scale. The ratio scale
incorporates all the properties of the lower-level scales.

Table 8.1: Primary Scales of Measurement

Nominal
  Basic characteristics: Numbers identify and classify objects.
  Common examples: Social Security numbers, numbering of football players.
  Marketing examples: Brand numbers, store types, gender classification.
  Descriptive statistics: Percentages, mode.
  Inferential statistics: Chi-square, binomial test.

Ordinal
  Basic characteristics: Numbers indicate the relative positions of the
  objects but not the magnitude of differences between them.
  Common examples: Quality rankings, rankings of teams in a tournament.
  Marketing examples: Preference rankings, market position, social class.
  Descriptive statistics: Percentile, median.
  Inferential statistics: Rank-order correlation, Friedman ANOVA.

Interval
  Basic characteristics: Differences between objects can be compared;
  zero point is arbitrary.
  Common examples: Temperature (Fahrenheit, centigrade).
  Marketing examples: Attitudes, opinions, index numbers.
  Descriptive statistics: Range, mean, standard deviation.
  Inferential statistics: Product-moment correlations, t-tests, ANOVA,
  regression, factor analysis.

Ratio
  Basic characteristics: Zero point is fixed; ratios of scale values can
  be computed.
  Common examples: Length, weight.
  Marketing examples: Age, income, costs, sales, market shares.
  Descriptive statistics: Geometric mean, harmonic mean.
  Inferential statistics: Coefficient of variation.
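As a quick illustration of the permissible statistics listed above, the
following Python sketch computes a typical statistic for each scale
level; the small data arrays are invented purely for illustration, and
numpy and scipy are assumed to be available.

```python
import numpy as np
from scipy import stats

# Invented illustrative data for each level of measurement.
brand_codes = np.array([1, 3, 2, 1, 3, 3, 2])            # nominal: brand numbers
pref_ranks  = np.array([2, 1, 3, 2, 1, 3, 2])            # ordinal: preference ranks
attitude    = np.array([3.5, 4.0, 2.5, 5.0, 4.5, 3.0])   # interval: attitude ratings
income      = np.array([25_000, 40_000, 32_000, 58_000]) # ratio: income

print("nominal  -> mode:", np.bincount(brand_codes).argmax())
print("ordinal  -> median:", np.median(pref_ranks))
print("interval -> mean, std:", attitude.mean(), attitude.std(ddof=1))
print("ratio    -> geometric mean:", stats.gmean(income))
print("ratio    -> coefficient of variation:", income.std(ddof=1) / income.mean())
```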

Scaling techniques can be classified as comparative or non


comparative. Comparative scaling involves a direct comparison of
stimulus objects. Comparative scales include paired comparisons,
rank order, constant sum, and the Q-sort. The data obtained by these
procedures have only ordinal properties.

Comparative Scales: One of the two types of scaling techniques in


which there is direct comparison of stimulus objects with one
another.

Non-Comparative Scales: One of the two types of scaling techniques


in which each stimulus object is scaled independently of the other
objects in the stimulus set.

Figure: Classification of Scaling Techniques

Scaling Techniques
  - Comparative Scales: paired comparison, rank order, constant sum,
    Q-sort and other procedures.
  - Non-Comparative Scales:
      - Continuous rating scales
      - Itemized rating scales: Likert, semantic differential, Stapel.

Paired Comparison Scaling: A comparative scaling technique in


which a respondent is presented with two objects at a time and asked
to select one object in the pair according to some criterion. The data
obtained are ordinal in nature.

Transitivity of Preference: An assumption made in order to convert
paired comparison data to rank order data. It implies that if brand A is
preferred to brand B and brand B is preferred to brand C, then brand
A is preferred to brand C.
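A minimal sketch of how paired comparison data might be converted
into a rank order under the transitivity assumption: each cell of the
(hypothetical) matrix below counts how often the row brand was
preferred over the column brand, and brands are ranked by their total
number of wins.

```python
import numpy as np

brands = ["A", "B", "C"]
# prefer[i, j] = number of respondents preferring brand i over brand j
# (hypothetical counts from ten paired-comparison tasks).
prefer = np.array([
    [0, 7, 8],   # A vs A, A vs B, A vs C
    [3, 0, 6],   # B vs A, B vs B, B vs C
    [2, 4, 0],   # C vs A, C vs B, C vs C
])

wins = prefer.sum(axis=1)                       # times each brand was preferred
order = [brands[i] for i in np.argsort(-wins)]  # rank order, most preferred first
print(dict(zip(brands, wins)), "->", order)     # A > B > C if preferences are transitive
```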

Rank Order Scaling: A comparative scaling technique in which


respondents are presented with several objects simultaneously and
asked to order or rank them according to some criteria.

Constant Sum Scaling: A comparative scaling technique in which


respondents are required to allocate a constant sum of units such as
points, dollars, chits, stickers, or chips among a set of stimulus
objects with respect to some criteria.
Q-Sort Scaling: A comparative scaling technique that uses a rank
order procedure to sort objects based on similarity with respect to
some criteria.

Respondents in developed countries, due to higher education and
consumer sophistication levels, are quite used to providing
responses on interval and ratio scales. However, in developing
countries, preferences can best be measured by using ordinal scales.
Ethical considerations require that the appropriate type of scale be
used in order to get the data needed to answer the research
questions and test the hypotheses. The Internet, as well as several
specialized computer programmes, is available to implement
different types of scales.

Chapter 9

In non-comparative scaling, each object is scaled independently of
the other objects in the stimulus set. The resulting data are generally
assumed to be interval or ratio scaled. Non-comparative rating scales
can be either continuous or itemized. The itemized rating scales are
further classified as Likert, semantic differential, or Stapel scales.
When using non-comparative itemized rating scales, the researcher
must decide on the number of scale categories, balanced versus
unbalanced scales, odd or even number of categories, forced versus
non-forced scales, nature and degree of verbal description, and the
physical form or configuration.

Table 9.1: Basic Non-comparative Scales

Continuous Rating Scale
  Basic characteristics: Place a mark on a continuous line.
  Examples: Reaction to TV commercials.
  Advantages: Easy to construct.
  Disadvantages: Scoring can be cumbersome unless computerized.

Itemized Rating Scales:

Likert Scale
  Basic characteristics: Degree of agreement on a 1 (strongly disagree)
  to 5 (strongly agree) scale.
  Examples: Measurement of attitudes.
  Advantages: Easy to construct, administer, and understand.
  Disadvantages: More time consuming.

Semantic Differential
  Basic characteristics: Seven-point scale with bipolar labels.
  Examples: Brand, product, and company images.
  Advantages: Versatile.
  Disadvantages: Controversy as to whether the data are interval.

Stapel Scale
  Basic characteristics: Unipolar ten-point scale, -5 to +5, without a
  neutral point (zero).
  Examples: Measurement of attitudes and images.
  Advantages: Easy to construct, can be administered over the telephone.
  Disadvantages: Confusing and difficult to apply.

Continuous Rating Scale: It is a non-comparative scale. Also referred


to as graphic rating scale. This measurement scale has the
respondents rate the object by placing a mark at the appropriate
position on a line that runs from one extreme of the criteria variable to
the other.

Itemized Rating Scale: A measurement scale having numbers and / or


brief descriptions associated with each category. The categories are
ordered in terms of scale position. Commonly used itemized rating
scales are Likert, Semantic Differential and Stapel Scales.

Likert Scale: A measurement scale with five response categories


ranging from strongly disagree to strongly agree which requires the
respondents to indicate a degree of agreement or disagreement with
each of a series of statements related to the stimulus objects.

Semantic Differential Scale: A 7-point rating scale with end points


associated with bi-polar labels that have semantic meaning.

Stapel Scale: A scale for measuring attributes that consists of a


single adjective in the middle of an even numbered range of values
from -5 to +5, without a neutral point (zero).
The choice of particular scaling techniques in a given situation
should be based on theoretical and practical considerations. As a
general rule, the scaling technique used should be the one that will
yield the highest level of information feasible. Also, multiple
measures should be obtained.

Balanced Scale: A scale with an equal number of favorable and


unfavorable categories.

Forced Rating Scales: A rating scale that forces the respondents to


express an opinion because no opinion or no knowledge option is not
provided.

Number of categories: Although there is no single, optimal number,
traditional guidelines suggest that there should be between five and
nine categories.
Balanced versus unbalanced: In general, the scale should be balanced
to obtain objective data.
Odd or even number of categories: If a neutral or indifferent scale
response is possible from at least some of the respondents, an odd
number of categories should be used.
Forced versus non-forced: In situations where the respondents are
expected to have no opinion, the accuracy of data may be improved
by a non-forced scale.
Verbal description: An argument can be made for labeling all or many
scale categories. The category descriptions should be located as
close to the response categories as possible.
Physical form: A number of options should be tried and the best one
selected.
Constructs and commonly used scale descriptors:
Attitude: Very bad; Bad; Neither bad nor good; Good; Very good.
Importance: Not at all important; Not important; Neutral; Important;
Very important.
Satisfaction: Very dissatisfied; Dissatisfied; Neither dissatisfied nor
satisfied; Satisfied; Very satisfied.
Purchase Intent: Definitely will not buy; Probably will not buy; Might
or might not buy; Probably will buy; Definitely will buy.
Purchase Frequency: Never; Rarely; Sometimes; Often; Very often.

Multi-item scales consist of a number of rating scale items. These


scales should be evaluated in terms of reliability and validity.

Reliability refers to the extent to which a scale produces consistent


results if repeated measurements are made. Approaches to assessing
reliability include test-retest, alternative-forms, and internal
consistency. Validity, or accuracy of measurement, may be assessed
by evaluating content validity, criterion validity, and construct
validity.

Measurement Error: The variation between the information sought by
the researcher and the information generated by the measurement
process employed.
True Score Model: A mathematical model that provides a framework
for understanding the accuracy of measurement.

Systematic Error: Systematic error affects the measurement in a
constant way and represents stable factors that affect the observed
score in the same way each time the measurement is made.

Reliability: The extent to which a scale produces consistent results if


repeated measurements are made on the characteristic. Approaches
for assessing reliability include the test-retest, alternative forms and
internal consistency methods.

Figure: Scale Evaluation

Scale evaluation covers reliability, validity, and generalizability.
  - Reliability: test-retest, alternative forms, internal consistency.
  - Validity: content, criterion, construct (convergent, discriminant,
    nomological).

Test-retest Reliability: An approach for assessing reliability in which


respondents are administered identical sets of scale items at two
different times under as nearly equivalent conditions as possible.
Alternative Forms Reliability: An approach for assessing reliability
that requires two equivalent forms of the scale to be constructed and
then the same respondents are measured at two different times.

Internal Consistency Reliability: An approach for assessing the


internal consistency of the set of items when several items are
summated in order to form a total score for the scale. The two
measures for internal consistency reliability are Split-half reliability
and Coefficient-Alpha.

Split-Half Reliability: A form of internal consistency reliability in which


the items constituting the scale are divided into two halves and the
resulting half scores are correlated.

Coefficient Alpha: A measure of internal consistency reliability that is


the average of all possible split-half coefficients resulting from
different splitting of the scale items.
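A minimal sketch of how coefficient alpha could be computed for a
multi-item scale; the 6 x 4 matrix of item scores below is invented for
illustration, and only numpy is assumed.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Coefficient alpha for an (n_respondents x k_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summated scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses of six people to a four-item Likert scale (1-5).
scores = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 4],
    [1, 2, 1, 2],
    [4, 4, 5, 4],
])
print(round(cronbach_alpha(scores), 3))  # values closer to 1 indicate higher internal consistency
```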

Validity: The extent to which differences in observed scale scores
reflect true differences among objects on the characteristic being
measured, rather than systematic or random errors. Researchers may
assess content validity, criterion validity, or construct validity.

Content Validity: A type of validity, sometimes called face validity, that
consists of a subjective but systematic evaluation of the
representativeness of the content of a scale for the measuring task
at hand.
Criterion Validity: A type of validity that examines whether the
measurement scale performs as expected in relation to other
variables selected as meaningful criteria.

Construct Validity: A type of validity that addresses the question of


what construct or characteristic the scale is measuring. An attempt is
made to answer theoretical questions of why a scale works and what
deductions can be made concerning the theory underlying the scale.
Construct validity includes convergent, discriminant, and
Nomological validity.

Convergent Validity: A measure of construct validity that measures


the extent to which the scale correlates positively with other
measures of the same construct.

Discriminant Validity: A type of construct validity that assesses the


extent to which a measure does not correlate with other constructs
from which it is supposed to differ.

Nomological Validity: A type of validity that assesses the relationship


between theoretical constructs. It seeks to confirm significant
correlations between the constructs as predicted by theory.

Generalizability: The degree to which a study based on a sample
applies to a universe of generalization.

In international marketing research, special attention should be


devoted to determining equivalent verbal descriptors in different
languages and cultures. The researcher has a responsibility to both
the client and respondents to ensure the applicability and usefulness
of the scales. The Internet and computers are useful for developing
and testing continuous and itemized rating scales, particularly multi-
item scales.

Chapter 10

To collect quantitative primary data, a researcher must design a


questionnaire or an observation form. A questionnaire has three
objectives. It must translate the information needed into a set of
specific questions the respondents can and will answer. It must
motivate respondents to complete the interview. It must also minimize
response error.

Questionnaire: A structured technique for data collection that
consists of a series of questions, written or verbal, that a respondent
answers.

Designing a questionnaire is an art rather than a science. The process
begins by specifying:
1) The information needed
2) The type of interviewing method
3) The content of individual questions
4) Questions that overcome the respondents' inability and
unwillingness to answer
Double barreled question: A single question that attempts to cover
two issues. Such questions can be confusing to respondents and
result in ambiguous responses.

Respondents may be unable to answer if they are not informed,


cannot remember, or cannot articulate the response.

Filter Question: An initial question in a questionnaire that screens


potential respondents to ensure that they meet the requirements of
the sample.

Telescoping: A psychological phenomenon that takes place when an


individual telescopes or compresses time by remembering an event
as occurring more recently than it actually occurred.

The unwillingness of the respondents to answer must also be


overcome. Respondents may be unwilling to answer if the question
requires too much effort, is asked in a situation or context deemed
inappropriate, does not serve a legitimate purpose, or solicits
sensitive information.

Then comes the decision regarding the question structure (step 5).
Questions can be unstructured (open-ended) or structured to a
varying degree. Structured questions include multiple choice,
dichotomous questions and scales.

Un-structured Questions: Open-ended questions that respondents


answer in their own words.
Structured Questions: Questions that pre-specify the set of response
alternatives and the response format. A structured question could be
multiple-choice, dichotomous, or a scale.

Order or position bias: A respondent’s tendency to check an


alternative merely because it occupies a certain position or is listed in
a certain order.

Dichotomous Questions: A structured question with only two


response alternatives such as Yes or No.

Determining the wording of each question (step 6) involves defining


the issue, using ordinary words, using unambiguous words, and
using dual statements. The researcher should avoid leading
questions, implicit alternatives, implicit assumptions, and
generalizations and estimates.

Once the questions have been worded, the order in which they will
appear in the questionnaire must be decided (step 7). Special
consideration should be given to opening questions, type of
information, difficult questions, and the effect on subsequent
questions. The questions should be arranged in a logical order.

Leading Questions: A question that gives the respondent a clue as to


what answer is desired or leads the respondent to answer in a certain
way.
Implicit Alternative: An alternative that is not explicitly expressed.

Classification Information: Socio-economic and demographic
characteristics used to classify respondents.

Identification Information: A type of information obtained in a


questionnaire that includes name, address, e-mail, and phone
number.

Funnel Approach: A strategy for ordering questions in a
questionnaire in which the sequence starts with general questions
that are followed by progressively specific questions, in order to
prevent specific questions from biasing general questions.

Branching Questions: Questions used to guide an interviewer through
a survey by directing the interviewer to different spots on the
questionnaire depending on the answers given.

The stage is now set for determining the form and the layout of the
questions (step 8).

Pre-coding: In questionnaire design, assigning a code to every
conceivable response before data collection.

Several factors are important in reproducing the questionnaire (step


9). These include: appearance, use of booklets, fitting entire question
on a page, response category format, avoiding overcrowding,
placement of directions, color-coding, easy to read format, and cost.
Last but not the least is pre-testing (step 10).

Pre-testing: The testing of the questionnaire on a small sample of


respondents for the purpose of improving the questionnaire by
identifying and eliminating potential problems.

Important issues are the extent of pre-testing, nature of respondents,


type of interviewing method, type of interviewers, sample size,
protocol analysis and debriefing, and editing and analysis.

The design of observational forms requires explicit decisions about


what is to be observed and how that behavior is to be recorded. It is
useful to specify the who, what, when, where, why and way of the
behavior to be observed.

The questionnaire should be adapted to the specific cultural
environment and should not be biased in terms of any one culture.
Also, the questionnaire may have to be suitable for administration by
more than one method because different interviewing methods may
be used in different countries. Several ethical issues related to the
researcher-respondent relationship and researcher-client relationship
may have to be addressed. The Internet and computers can greatly
assist the researcher in designing sound questionnaires and
observational forms.

Chapter 11
Information about the characteristic of a population may be obtained
by conducting either a sample or a census. Budget and time limits,
large population size, and small variance in the characteristic of
interest favour the use of a sample. Sampling is also preferred when
the cost of sampling error is low, the cost of non-sampling error is
high, the nature of measurement is destructive, and attention must be
focused on individual cases. The opposite set of conditions favour
the use of a census.

Population: The aggregate of all the elements, sharing some common


set of characteristics that comprises the universe for the purpose of
the marketing research problem.

Census: A complete enumeration of the elements of population or


study objects.

Sample: A sub-group of the elements of the population selected for


participation in the study.

Sampling design begins by defining the target population in terms of


elements, sampling units, extent, and time.

Target Population: The collection of elements or objects that possess


information sought by the researcher and about which inferences are
to be made.
Element: An object that possesses the information sought by the
researcher and about which inferences are to be made.

Sampling Unit: The basic unit containing the elements of the


population to be sampled.

Then the sampling frame should be determined. A sampling frame is a
representation of the elements of the target population. It consists of
a list or set of directions for identifying the target population. At this
stage, it is important to recognize any sampling frame errors that may
exist. The next steps involve selecting a sampling technique and
determining the sample size. In addition to quantitative analysis,
several qualitative considerations should be taken into account in
determining the sample size. Finally, execution of the sampling
process requires detailed specification for each step in the sampling
process.

Sampling Frame: A representation of the elements of the target


population. It consists of a list or set of directions for identifying the
target population.

Bayesian Approach: A selection method in which elements are


selected sequentially. This approach explicitly incorporates prior
information about population parameters as well as the costs and
probability associated with making wrong decisions.

Sampling with replacement: A sampling technique in which an


element can be included in the sample more than once.
Sampling without replacement: A sampling technique in which an
element cannot be included in the sample more than once.

Sample Size: The number of elements to be included in a study.

Sampling techniques may be classified as non-probability and


probability techniques.

Non-probability Sampling: Sampling techniques that do not use


chance selection procedures. Rather they rely on the personal
judgment of the researcher.

Probability Sampling: A sampling procedure in which each element of


the population has a fixed probabilistic chance of being selected for
the sample.

Non-probability sampling techniques rely on the researchers’


judgment. Consequently, they do not permit an objective evaluation
of the precision of the sample results, and the estimates obtained are
not statistically projectable to the population. The commonly used
non-probability sampling techniques include convenience sampling,
judgmental sampling, quota sampling, and snowball sampling.

Convenience sampling: A non-probability sampling technique that


attempts to obtain a sample of convenient elements. The selection of
sampling units is left primarily to the interviewer.
Judgmental sampling: A form of convenience sampling in which the
population elements are purposely selected based on the judgment of
the researcher.

Quota sampling: A non-probability sampling technique that is a two-


stage restricted judgmental sampling. The first stage consists of
developing control categories or quotas of population elements. In
the second stage, sample elements are selected based on
convenience or judgment.

Snowball sampling: A non-probability sampling technique in which an


initial group of respondents are selected randomly. Subsequent
respondents are selected based on the referrals or information
provided by the initial respondents. This process may be carried out
in waves by obtaining referrals from referrals.

In probability sampling techniques, sampling units are selected by


chance. Each sampling unit has a non-zero chance of being selected
and the researcher can pre-specify every potential sample of a given
size that could be drawn from the population, as well as the
probability of selecting each sample. It is also possible to determine
the precision of the sample estimates and inferences and make
projections to the target population. Probability sampling techniques
include simple random sampling, systematic sampling, stratified
sampling, cluster sampling, sequential sampling, and double
sampling.
Simple random sampling (SRS): A probability sampling technique in
which each element in the population has a known and equal
probability of selection. Every element is selected independently of
every other element and the sample is drawn by a random procedure
from a sampling frame.

Systematic sampling: A probability sampling technique in which the


sample is chosen by selecting a random starting point and then
picking every ith element in succession from the sampling frame.

Stratified Sampling: A probability sampling technique that uses a two-


step process to partition the population into sub-populations, or
strata. Elements are selected from each stratum by a random
procedure.
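The sketch below illustrates, on an assumed frame of 1,000
hypothetical customer IDs split into two made-up strata, how simple
random, systematic, and stratified samples of size 50 could be drawn
with Python's standard library.

```python
import random

# Hypothetical sampling frame of 1,000 customer IDs.
frame = [f"cust_{i:04d}" for i in range(1000)]
n = 50

# Simple random sampling: every element has a known and equal chance.
srs = random.sample(frame, k=n)

# Systematic sampling: random starting point, then every ith element (i = N / n).
i = len(frame) // n
start = random.randrange(i)
systematic = frame[start::i]

# Stratified sampling: partition the frame into strata, then draw a random
# sample from each stratum (proportionate allocation shown here).
strata = {"north": frame[:400], "south": frame[400:]}
stratified = []
for members in strata.values():
    n_h = round(n * len(members) / len(frame))   # proportionate to stratum size
    stratified.extend(random.sample(members, n_h))

print(len(srs), len(systematic), len(stratified))   # 50 50 50
```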

Cluster sampling: First, the target population is divided into mutually


exclusive subpopulations called clusters. Then, a random sample of
clusters is selected based on a probability sampling technique such
as simple random sampling. For each selected cluster, either all the
elements are included in the sample or a sample of elements is drawn
probabilistically.

Area sampling: A common form of cluster sampling in which the
clusters consist of geographic areas such as counties, housing
tracts, blocks, or other area descriptions.

Probability proportionate to size sampling: A selection method in


which the clusters are selected with probability proportional to size
and the probability of selecting a sampling unit in a selected cluster
varies inversely with the size of the cluster.

Sequential Sampling: A probability sampling technique in which the


elements are sampled sequentially, data collection and analysis are
done at each stage, and a decision is made as to whether additional
population elements should be sampled.

Double Sampling: A sampling technique in which certain population


elements are sampled twice.
Table 11.3: Strengths and Weaknesses of Sampling Techniques

Non-probability Sampling

Convenience Sampling
  Strengths: Least expensive, least time consuming, most convenient.
  Weaknesses: Selection bias, sample not representative, not
  recommended for descriptive or causal research.

Judgmental Sampling
  Strengths: Low cost, not time consuming, convenient.
  Weaknesses: Does not allow generalization, subjective.

Quota Sampling
  Strengths: Sample can be controlled for certain characteristics.
  Weaknesses: Selection bias, no assurance of representativeness.

Snowball Sampling
  Strengths: Can estimate rare characteristics.
  Weaknesses: Time consuming.

Probability Sampling

Simple Random Sampling (SRS)

Systematic Sampling
  Strengths: Can increase representativeness, easier to implement than
  SRS, sampling frame not necessary.

Stratified Sampling
  Strengths: Includes all important subpopulations, precision.

Cluster Sampling
  Strengths: Easy to implement, cost effective.

The choice between probability and non-probability sampling should
be based on the nature of the research, the degree of error tolerance,
the relative magnitude of sampling and non-sampling errors, the
variability in the population, and statistical and operational
considerations.

When conducting international marketing research, it is desirable to


achieve comparability in sample composition and representativeness,
even though this may require the use of different sampling
techniques in different countries. It is unethical and misleading to
treat non-probability samples as probability samples and project the
results to a target population. The Internet and computers can be
used to make the sampling design process more effective and
efficient.

Chapter 15
Basic data analysis provides valuable insights and guides the rest of
the data analysis as well as the interpretation of the results. A
frequency distribution should be obtained for each variable in the
data.

Frequency Distribution: A mathematical distribution whose objective


is to obtain a count of the number of responses associated with
different values of one variable and to express these counts in
percentage terms.

This analysis produces a table of frequency counts, percentages, and


cumulative percentages for all the values associated with that
variable. It indicates the extent of out of range, missing, or extreme
values. The mean, mode, and median of a frequency distribution are
measures of central tendency.

Measures of location: A statistic that describes a location within a


data set. Measures of central tendency describe the center of the
distribution.

Mean: The average; that value obtained by summing all elements in a


set and dividing by the number of elements.

Mode: A measure of central tendency given as the value that occurs


the most in a sample distribution.

Median: A measure of central tendency given as the value above


which half of the values fall and below which half of the values fall.
The variability of the distribution is described by the range, the
variance, or standard deviation, coefficient of variation, and inter-
quartile range.

Measures of variability: A statistic that indicates the distribution’s


dispersion.

Range: The difference between the largest and smallest values of a


distribution.

Inter-quartile range: The range of a distribution encompassing the


middle 50 percent of the observations. It is the difference between the
75th and 25th percentile. For a set of data points arranged in order of
magnitude, the pth percentile is the value that has p percent of the
data points below it and (100-p) percent above it. If all the data points
are multiplied by a constant, the inter-quartile range is multiplied by
the same constant.

Value Label        Value   Frequency (N)   Percentage   Cumulative %
Very Unfamiliar      1          0              0.0            0.0
                     2          2              6.9            6.9
                     3          6             20.7           27.6
                     4          6             20.7           48.3
                     5          3             10.3           58.6
                     6          8             27.6           86.2
Very Familiar        7          4             13.8          100.0
Total                          30            100.0
(One of the 30 cases is missing; the percentages are based on the 29
valid responses.)

Range: 7 - 2 = 5
Inter-quartile Range: 6 - 3 = 3
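A short sketch, assuming pandas and numpy are available, that
rebuilds the familiarity ratings from the frequency table above and
reproduces the frequency distribution, range, and inter-quartile range.

```python
import numpy as np
import pandas as pd

# Familiarity ratings reconstructed from the frequency table
# (each value repeated by its frequency; 29 valid responses).
ratings = pd.Series([2]*2 + [3]*6 + [4]*6 + [5]*3 + [6]*8 + [7]*4)

freq = ratings.value_counts().sort_index()
table = pd.DataFrame({
    "Frequency": freq,
    "Percentage": (100 * freq / freq.sum()).round(1),
})
table["Cumulative %"] = table["Percentage"].cumsum().round(1)
print(table)

print("Range:", ratings.max() - ratings.min())       # 7 - 2 = 5
q3, q1 = np.percentile(ratings, [75, 25])
print("Inter-quartile range:", q3 - q1)               # 6 - 3 = 3
```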

The difference between the mean and an observed value is called the
deviation from the mean.
Variance: The mean squared deviation of all the values from the
mean. The variance can never be negative. When the data points are
clustered around the mean, the variance is small. When the data
points are scattered, the variance is large.

Standard Deviation: The square root of the variance. The standard
deviation of a sample, s, is calculated as:

s = sqrt[ Σ (Xi − X̄)² / (n − 1) ],   summing over i = 1 to n

By dividing by n − 1, instead of n, we compensate for the smaller
variability observed in the sample.
For the familiarity data above, the mean X̄ = (2 x 2 + 6 x 3 + … + 4 x 7) / 29
= 4.724

Variance, s² = 2.493
Standard Deviation, s = 1.579
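These figures can be checked with a minimal numpy sketch on the
same ratings; ddof=1 gives the n − 1 divisor discussed above.

```python
import numpy as np

# Same 29 familiarity ratings as in the frequency table.
x = np.array([2]*2 + [3]*6 + [4]*6 + [5]*3 + [6]*8 + [7]*4, dtype=float)

mean = x.mean()          # 4.724
var = x.var(ddof=1)      # 2.493 (divides by n - 1)
sd = x.std(ddof=1)       # 1.579
print(round(mean, 3), round(var, 3), round(sd, 3))
```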

Coefficient of variation: A useful expression in sampling theory for
the standard deviation as a percentage of the mean. It is the ratio of
the standard deviation to the mean and is a measure of relative
variability. It is meaningful only for variables measured on a ratio
scale. If a characteristic shows good variability, then perhaps the
market could be segmented based on that characteristic.

Skewness and kurtosis provide an idea or measure of the shape of


the distribution.

Skewness: A characteristic of a distribution that assesses its


symmetry about the mean. Skewness is the tendency of the
deviations from the mean to be larger in one direction than in the
other, i.e., one tail of the distribution is heavier than the other.

Kurtosis: A measure of the relative peaked-ness or flatness of the


curve defined by the frequency distribution. The kurtosis of a normal
distribution is zero. If the kurtosis is positive, then the distribution is
more peaked than a normal distribution. A negative value means a
flatter distribution. If distribution is highly peaked or flat, then
statistical procedures that assume normality should be used with
caution.
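A small sketch, assuming scipy is available, computes both measures
for the familiarity ratings used earlier; scipy's kurtosis() reports
excess kurtosis, so a normal distribution scores zero, matching the
convention described above.

```python
import numpy as np
from scipy import stats

x = np.array([2]*2 + [3]*6 + [4]*6 + [5]*3 + [6]*8 + [7]*4, dtype=float)

print("skewness:", round(stats.skew(x), 3))      # negative values mean a heavier left tail
print("kurtosis:", round(stats.kurtosis(x), 3))  # excess kurtosis (normal distribution = 0)
```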

INTRODUCTION TO HYPOTHESIS
• A hypothesis is simply an assumption or supposition to be proved
or disproved.
• For a researcher, a hypothesis is a formal question that he intends
to resolve. A research hypothesis is a predictive statement,
capable of being tested by scientific methods, that relates an
independent variable to some dependent variable. Its main
function is to suggest new experiments and observations.
• Example: "Students who receive counseling will show a greater
increase in creativity than students not receiving counseling"
Or
• "The automobile A is performing as well as automobile B".
• These are hypotheses capable of being objectively verified and
tested. Thus, we may conclude that a hypothesis states what we
are looking for and is a proposition which can be put to a test
to determine its validity.

General Procedure for Hypothesis Testing:

1. Formulate the null hypothesis H0 and the alternative
hypothesis H1.
2. Select an appropriate statistical technique and the
corresponding test statistic.
3. Choose the level of significance, α.
4. Determine the sample size, collect the data, and calculate the
value of the test statistic.
5. Determine the probability associated with the test statistic
under the null hypothesis, using the sampling distribution of the
test statistic. Alternatively, determine the critical values associated
with the test statistic that divide the rejection and non-rejection
regions.
6. Compare the probability associated with the test statistic with
the level of significance specified. Alternatively, determine whether
the test statistic has fallen into the rejection or the non-rejection
region.
7. Make the statistical decision to reject or not reject the null
hypothesis.
8. Express the statistical decision in terms of the marketing
research problem.

Figure: A General Procedure for Hypothesis Testing

Formulate the null and alternative hypotheses → select an appropriate
test → choose the level of significance → collect data and calculate
the test statistic → either determine the probability associated with
the test statistic or determine the critical value of the test statistic →
either compare the probability with the level of significance or
determine whether the test statistic falls into the rejection or
non-rejection region → reject or do not reject the null hypothesis →
draw a management research conclusion.

Basic concepts Concerning Testing of Hypothesis

Null hypothesis: A statement in which no difference or effect is


expected. If the null hypothesis is not rejected, no changes will be
made.
Alternative hypothesis: A statement that some difference or effect is
expected. Accepting the alternative hypothesis will lead to changes in
opinions or actions.

• Null Hypothesis: If we are to compare method A with method B
about its superiority, and we proceed on the assumption that
both methods are equally good, then this assumption is termed
the Null Hypothesis. As against this, if we think that method A is
superior or that method B is inferior, we are stating what is termed
the Alternative Hypothesis.

• Level of Significance: It is always expressed as a percentage
(usually 5%). Taking the significance level at 5% means that the
null hypothesis will be rejected only if the sample result has less
than a 0.05 probability of occurring when H0 is true, i.e. the
researcher accepts at most a 5% risk of a Type I error.

• One-tailed Test: A test of the null hypothesis where the
alternative hypothesis is expressed directionally. A one-tailed
test would be used when we are to test, say, whether the
population mean is either lower than or higher than some
hypothesized value. For example: The proportion of internet
users who use the internet for shopping is greater than 0.40.
• The hypotheses for this example would be
• H0 : π ≤ 0.40
• H1 : π > 0.40
• If the null hypothesis is rejected, then the alternative hypothesis
H1 will be accepted and the new Internet shopping service will be
introduced.
• Non-rejection region: Z ≤ 1.645
Rejection region: Z > 1.645 (right-tailed test at the 0.05 level)

• Two-Tailed Test:- A test of null hypothesis where the alternative


hypothesis is not expressed directionally. A two tailed test
rejects the null hypothesis if, say, the sample mean is
significantly higher or lower than the hypothesized value of the
mean of the population. If the significance level is 5% & the two
tailed test is to be applied, the probability of the rejection area
will be 0.05 & that of the acceptance region will be 0.95.
• For example: The proportion of internet users who use the
internet for shopping is different from 0.40.
• The Hypothesis for this example would be
• H0 : π = 0.40
• H1 : π ≠ 0.40
• The one-tailed test is used more often than a two-tailed test.

Test statistic: A measure of how close the sample has come to the
null hypothesis. It often follows a well-known distribution, such as the
normal, t, or chi-square distribution. For example, for a test of a
proportion, the z statistic, which follows the standard normal
distribution, would be appropriate:
Z = (p − π) / σp, where σp = √(π(1 − π)/n) is the standard error of the
sample proportion.
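A minimal sketch of the one-tailed test for a proportion described
above; the sample size and number of internet shoppers below are
hypothetical values chosen for illustration, and scipy is assumed for
the normal distribution.

```python
from math import sqrt
from scipy.stats import norm

# H0: pi <= 0.40 versus H1: pi > 0.40 (right-tailed test).
pi0 = 0.40
n = 30                     # hypothetical sample size
p = 17 / n                 # hypothetical observed sample proportion

sigma_p = sqrt(pi0 * (1 - pi0) / n)   # standard error of p under H0
z = (p - pi0) / sigma_p
p_value = 1 - norm.cdf(z)             # upper-tail probability

critical = norm.ppf(0.95)             # 1.645 at the 0.05 significance level
print(round(z, 3), round(p_value, 3), round(critical, 3))
print("Reject H0" if z > critical else "Do not reject H0")
```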

When we draw inferences about a population, there is a risk of an
incorrect conclusion. Two types of errors can occur.

Type I error: Also known as alpha error, it occurs when the sample
results lead to the rejection of a null hypothesis that is in fact true.
The probability of Type I error (α) is also known as the level of
significance.

Type II error: Also known as beta error, it occurs when the sample
results lead to the non-rejection of a null hypothesis that is in fact
false. The probability of type II error is denoted by β. The complement
(1- β) of the probability of a type II error is called the power of a
statistical test.

But with a fixed sample size, n, when we try to reduce Type I error, the
probability of committing Type II error increases. Both types of errors
cannot be reduced simultaneously.

Power of a Test: The probability (1- β) of rejecting the null hypothesis


when it is in fact false and should be rejected.
The risk of both α and β can be controlled by increasing the sample
size.
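To make the trade-off concrete, the following Python sketch computes β and the power of the one-tailed proportion test for an assumed true proportion; the values of π1 and n are hypothetical and only illustrate how α, β, and n interact.

# Type II error (beta) and power for the one-tailed proportion test.
# pi1 (the assumed true proportion) and n are hypothetical illustration values.
from scipy.stats import norm

pi0, pi1, n, alpha = 0.40, 0.45, 200, 0.05

se0 = (pi0 * (1 - pi0) / n) ** 0.5        # standard error under H0
se1 = (pi1 * (1 - pi1) / n) ** 0.5        # standard error under the true pi1

p_crit = pi0 + norm.ppf(1 - alpha) * se0  # smallest sample proportion that rejects H0
beta = norm.cdf((p_crit - pi1) / se1)     # P(do not reject H0 | pi = pi1)
power = 1 - beta
print(round(beta, 3), round(power, 3))    # raising n lowers beta and raises power for fixed alpha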

Cross tabulations are tables that reflect the joint distribution of two or more variables. In cross tabulation, the percentages can be computed either column-wise, based on column totals, or row-wise, based on row totals. The general rule is to compute the percentages in the direction of the independent variable, across the dependent variable. Often the introduction of a third variable can provide additional insights.
Cross Tabulation: A statistical technique that describes two or more
variables simultaneously and results in tables that reflect the joint
distribution of two or more variables that have a limited number of
categories or distinct values, e.g.
Internet Usage   Male   Female   Row Total
Light              5       10        15
Heavy             10        5        15
Column Total      15       15        30

Contingency Table: A cross-tabulation table. It contains a cell for


every combination of categories of the two variables.

The chi-square statistic provides a test of the statistical significance


of the observed association in cross-tabulation.

Chi-square statistic: The statistic used to test the statistical


significance of the observed association in a cross-tabulation. It
assists us in determining whether a systematic association exists
between the two variables.

Chi-square Distribution: A skewed distribution whose shape depends


solely on the number of degrees of freedom. As the number of
degrees of freedom increases, the chi-square distribution becomes
more symmetrical.

The null Hypothesis is that there is no association between the


variables. The expected cell frequencies, denoted by fe are compared
to the actual observed frequencies, fo, found in the cross-tabulation to
calculate the chi-square statistic.
For r rows and c columns and n observations, the expected cell
frequency for each cell is given by
fe = (nr nc) / n
where nr is the total number of observations in the row and nc the total number in the column containing that cell. After calculating the expected frequencies, the value of chi-square is computed as
χ² = Σ (fo − fe)² / fe

For the internet usage table above, χ² = 3.333.

In case of a chi-square statistic associated with a cross tabulation,


the number of degrees of freedom is given by:
Df = (r-1) x (c-1)

The null hypothesis (H0) of no association between the two variables


will be rejected only when the calculated value of the test statistic is
greater than the critical value of the chi-square distribution with the
appropriate degrees of freedom.
As the number of degrees of freedom increases, the chi-square
distribution becomes more symmetrical.
For one degree of freedom, the value for an upper-tail area of 0.05 is
3.841.
This indicates that for 1 degree of freedom, the probability of
exceeding a chi-square value of 3.841 is 0.05. i.e. at 0.05 level of
significance with 1 degree of freedom, the critical value of the chi-
square statistic is 3.841.
For a cross-tabulation of 2 x 2, df = (2-1) x (2-1) = 1. The calculated chi-square value of 3.333 is less than the critical value of 3.841. So, the null hypothesis of no association cannot be rejected, indicating that the association is not statistically significant at the 0.05 level. This lack of significance is mainly due to the small sample size of 30. If the sample size had been 300, with each entry in the table multiplied by 10, the chi-square statistic would also be multiplied by 10, giving 33.33, which is significant at the 0.05 level.
Chi-square analysis should not be conducted when the expected frequency in any of the cells is less than five. When the table is 2 x 2, the phi coefficient should be used to measure the strength of the association.
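For readers working outside SPSS, the chi-square test for the 2 x 2 internet usage table above can be reproduced with a short Python sketch; scipy is assumed to be available, and correction=False gives the plain Pearson chi-square used in the text.

# Chi-square test of association for the internet-usage-by-sex table.
import numpy as np
from scipy.stats import chi2, chi2_contingency

observed = np.array([[5, 10],    # light users: male, female
                     [10, 5]])   # heavy users: male, female

chi2_stat, p_value, df, expected = chi2_contingency(observed, correction=False)
critical = chi2.ppf(0.95, df)    # 3.841 for 1 degree of freedom
print(chi2_stat, df, critical, p_value)
# chi2_stat is about 3.33, below 3.841, so H0 (no association) is not rejected.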

The phi-coefficient, contingency coefficient, Cramer’s V, and the


lambda coefficient provide measures of the strength of association
between the variables.

Phi-coefficient: A measure of the strength of association in the


special case of a table with two rows and two columns.

φ = √(χ² / n)

Phi takes the value 0 when there is no association and the value 1 when the variables are perfectly associated; in the special case of a perfect negative association it takes the value −1 rather than +1. For the example above,
φ = √(3.333 / 30) = 0.333
so the association is not very strong.

Contingency coefficient (C): A measure of the strength of association


in a table of any size.

Cramer’s V: A measure of the strength of association used in tables


larger than 2 X 2.
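A minimal Python sketch of these strength-of-association measures, computed from the chi-square value obtained above (the lambda coefficient is not computed here):

# phi = sqrt(chi2/n); C = sqrt(chi2/(chi2 + n)); Cramer's V = sqrt(chi2/(n * min(r-1, c-1)))
import math

chi2_stat, n, r, c = 3.333, 30, 2, 2

phi = math.sqrt(chi2_stat / n)                      # about 0.333
C = math.sqrt(chi2_stat / (chi2_stat + n))          # contingency coefficient
V = math.sqrt(chi2_stat / (n * min(r - 1, c - 1)))  # equals phi for a 2 x 2 table
print(round(phi, 3), round(C, 3), round(V, 3))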
Lambda Coefficient: A measure of the percentage improvement in
predicting the value of the dependent variable, given the value of the
independent variable in contingency table analysis. Lambda also
varies between 0 and 1.

Symmetric lambda: The symmetric lambda does not make an


assumption about which variable is dependent. It measures the
overall improvement when prediction is done in both directions.

Tau b: Test statistic that measures the association between two


ordinal level variables. It makes an adjustment for ties and is most
appropriate when the table of variables is square.

Tau c: Test statistic that measures the association between two


ordinal level variables. It makes an adjustment for ties and is most
appropriate when the table of variables is not square but a rectangle.

Gamma: Test statistic that measures the association between two


ordinal level variables. It does not make an adjustment for ties.

Parametric and non-parametric tests are available for testing


hypothesis related to differences.

Parametric tests: Hypothesis-testing procedures that assume that the


variables of interest are measured on at least an interval scale.
Non-Parametric tests: Hypothesis-testing procedures that assume
that the variables of interest are measured on a nominal or ordinal
scale.

The samples are independent if they are drawn randomly from


different populations. For example, data pertaining to males and
females are treated as independent samples. The samples are paired
when the data for the two samples relate to the same group of
respondents.

Hypothesis tests related to differences can be classified as follows:

Parametric tests
One sample: t test, z test
Two independent samples: two-group t test, z test
Paired samples: paired t test

Non-parametric tests
One sample: chi-square, Kolmogorov-Smirnov (K-S), runs test, binomial test
Two independent samples: chi-square, Mann-Whitney U test, median test, K-S test
Paired samples: sign test, Wilcoxon matched-pairs signed-ranks test, McNemar test, chi-square

In the parametric case, the t-test is used to examine hypotheses related to the population mean. Different forms of the t-test are suitable for testing hypotheses based on one sample, two independent samples, or paired samples.

T test: A univariate hypothesis test using the t distribution, which is


used when the standard deviation is unknown and the sample size is
small.
T Statistic: A statistic that assumes that the variable has a symmetric
bell-shaped distribution, the mean is known and the population
variance is estimated from the sample.

T Distribution: A symmetric bell-shaped distribution that is useful for


small sample (n<30) testing.

Here, we assume that the random variable X is normally distributed, with mean μ and unknown population variance σ², which is estimated by the sample variance s². The standard deviation (standard error) of the sample mean, X, is estimated as sX = s / √n. Then,
t = (X – μ) / sX
is t distributed with n-1 degrees of freedom.
The t distribution is similar to the normal distribution in appearance,
as both distributions are bell shaped and symmetric. But t-
distribution has more area in tails and less in centre. Given the
uncertainty in the value of s2, the observed values of t are more
variable than those of z. As the number of degrees of freedom increases, the t distribution approaches the normal distribution. For large samples of 120 or more, the t distribution and the normal distribution are virtually indistinguishable.

The procedure for hypothesis testing, for the special case when t –
distribution is used, is as follows:-

1. Formulate the null (H0) and the alternative hypothesis (H1)


2. Select the appropriate formula for t – statistic.
3. Select a significance level, α, for testing H0. Typically the
0.05 level is selected.
4. Take 1 or 2 samples and compute the mean and standard
deviation for each sample.
5. Calculate the t – statistic assuming H0 is true.
6. Calculate the degrees of freedom and estimate the
probability of getting a more extreme value of the statistic from
Table. Calculate the critical value of the t statistic.
7. If the probability computed above is smaller than the significance level, reject H0; if it is larger, do not reject H0. Equivalently, if the calculated value of the t statistic is larger than the critical value, reject H0; if it is smaller, do not reject H0.
8. Express the conclusion reached by the t – test in terms of
the marketing research problem

Respondent Number   Sex   Familiarity   Internet Usage   Attitude Toward Internet   Attitude Toward Technology   Usage of Internet: Shopping   Usage of Internet: Banking
1 1 7 14 7 6 1 1
2 2 2 2 3 3 2 2
3 2 3 3 4 3 1 2
4 2 3 3 7 5 1 2
5 1 7 13 7 7 1 1
6 2 4 6 5 4 1 2
7 2 2 2 4 5 2 2
8 2 3 6 5 4 2 2
9 2 3 6 6 4 1 2
10 1 9 15 7 6 1 2
11 2 4 3 4 3 2 2
12 2 5 4 6 4 2 2
13 1 6 9 6 5 2 1
14 1 6 8 3 2 2 2
15 1 6 5 5 4 1 2
16 2 4 3 4 3 2 2
17 1 6 9 5 3 1 1
18 1 4 4 5 4 1 2
19 1 7 14 6 6 1 1
20 2 6 6 6 4 2 2
21 1 6 9 4 2 2 2
22 1 5 5 5 4 2 1
23 2 3 2 4 2 2 2
24 1 7 15 6 6 1 1
25 2 6 6 5 3 1 2
26 1 6 13 6 6 1 1
27 2 5 4 5 5 1 1
28 2 4 2 3 2 2 2
29 1 4 4 5 3 1 2
30 1 3 3 7 5 1 2

Some statements for a single variable are:


The market share for the new product will exceed 15 percent.
80 percent of dealers will prefer the new pricing policy.
These statements are converted to null hypothesis that can be tested
by one-sample test such as t-test or the z-test.
Suppose we want to test the hypothesis that the mean familiarity with
the internet rating exceeds 4.0, the neutral value on a 7-point scale
(1- very unfamiliar, 7- very familiar).
A significance level of α = 0.05 is selected. The hypotheses may be formulated as:
• H0 : μ ≤ 4.0
• H1 : μ > 4.0
The standard deviation of the sample mean, X, is estimated as sX = s / √n. Then,
t = (X – μ) / sX
is t distributed with n-1 degrees of freedom.
sX = 1.579 / √29 = 1.579 / 5.385 = 0.293
t = (4.724 − 4.0) / 0.293 = 0.724 / 0.293 = 2.471
Here, n-1 = 29-1 = 28
The probability of getting a value more extreme than 2.471 is less than 0.05. Equivalently, the critical t value for 28 degrees of freedom and a significance level of 0.05 is 1.7011, which is less than the calculated value. Hence the null hypothesis is rejected: the mean familiarity level does exceed 4.0.
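The same one-sample t test can be reproduced outside SPSS from the summary statistics quoted above (mean 4.724, s = 1.579, n = 29); the following Python sketch assumes scipy is available.

# One-sample t test for mean familiarity (H0: mu <= 4.0, H1: mu > 4.0).
from scipy.stats import t

mean, mu0, s, n = 4.724, 4.0, 1.579, 29

se = s / n ** 0.5                  # standard error of the mean, about 0.293
t_stat = (mean - mu0) / se         # about 2.47
df = n - 1

t_crit = t.ppf(0.95, df)           # one-tailed critical value, about 1.701
p_value = 1 - t.cdf(t_stat, df)    # probability of a more extreme value
print(round(t_stat, 3), round(t_crit, 3), round(p_value, 4))
# t_stat exceeds t_crit, so H0 is rejected: mean familiarity exceeds 4.0.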

z test: A univariate hypothesis test using the standard normal


distribution.

Independent samples: Two samples that are not experimentally


related. The measurement of one sample has no effect on the values
of the second sample.

F test: A statistical test of the equality of the variances of two


populations.

F statistic: The F statistic is computed as the ratio of two sample


variance.

F distribution: A frequency distribution that depends upon two sets of


degrees of freedom – the degrees of freedom in the numerator and
the degrees of freedom in the denominator.

Paired samples: In hypothesis testing, the observations are paired so


that the two sets of observations relate to the same respondents.
Paired samples t test: A test for differences in the means of paired
samples.

In the non-parametric case, popular one sample test include the


Kolmogorov-Smirnov, chi-square, runs test, and the binomial test. For
two independent non-parametric samples, the Mann-Whitney U test,
median test, and the Kolmogorov-Smirnov test can be used. For
paired samples the Wilcoxon matched-pairs signed-ranks test and the
sign test are useful for examining hypothesis related to measures of
location.

Kolmogorov-Smirnov one – sample test: A one sample non-


parametric goodness of fit test that compares the cumulative
distribution function for a variable with a specified distribution.

Runs test: A test of randomness for a dichotomous variable.

Binomial test: A goodness of fit statistical test for dichotomous


variables. It tests the goodness of fit of the observed number of
observations in each category to the number expected under a
specified binomial distribution.

Mann-Whitney U test: A statistical test for a variable measured on an


ordinal scale, comparing the differences in the location of two
populations based on observations from two independent samples.
Two-sample median test: Non-parametric test statistic that
determines whether two groups are drawn from populations with the
same median. This test is not as powerful as the Mann-Whitney U.

Kolmogorov-Smirnov two – sample test: Non-parametric test statistic


that determines whether two distributions are same. It takes into
account any differences in the two distributions including median,
dispersion, and skewness.

Wilcoxon matched pairs signed ranks test: A non-parametric test that


analyzes the differences between the paired observations, taking into
account the magnitude of the differences.

Sign test: A non-parametric test for examining differences in the


location of two populations based on paired observations that
compares only the signs of the differences between pairs of variables
without taking into account the magnitude of the differences.
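As an illustration, two of these non-parametric tests can be run with scipy as sketched below; the rating data are hypothetical and serve only to show the calls.

# Mann-Whitney U test (independent samples) and Wilcoxon matched-pairs test (paired samples).
from scipy.stats import mannwhitneyu, wilcoxon

group1 = [3, 5, 4, 6, 2, 5, 4]     # hypothetical ordinal ratings, group 1
group2 = [2, 3, 3, 4, 1, 3, 2]     # hypothetical ordinal ratings, group 2
u_stat, u_p = mannwhitneyu(group1, group2, alternative='two-sided')

before = [4, 5, 3, 6, 4, 5, 3]     # hypothetical paired ratings (same respondents)
after = [6, 7, 2, 8, 5, 9, 4]
w_stat, w_p = wilcoxon(before, after)

print(u_p, w_p)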


Chapter 16

In ANOVA and ANCOVA, the dependent variable is metric and the


independent variables are all categorical, or combination of
categorical and metric variables. One way ANOVA involves a single
independent categorical variable. Interest lies in testing the null
hypothesis that the category means are equal in the population.
Analysis of variance (ANOVA): A statistical technique for examining
the differences among means for two or more populations.

Factors: Categorical independent variables. The independent


variables must be all categorical (non-metric) to use ANOVA.

Treatment: In ANOVA, particular combination of factor levels or


categories.

One-way analysis of variance: An ANOVA technique in which there is


only one factor.

The total variation in the dependent variable is decomposed into two


components: variation related to the independent variable, and
variation related to an error. The variation is measured in terms of the
sum of squares corrected for the mean (SS). The mean square is
obtained by dividing the SS by the corresponding degree of freedom
(df). The null hypothesis of equal means is tested by an F statistic,
which is the ratio of the mean square relating the independent
variable to the mean square related to error.

For example: A major department store chain wanted to examine the


effect of the level of in-store promotion and a storewide coupon on
sales. In-store promotion was varied at three levels: high(1),
medium(2), and low(3). Couponing was manipulated at two levels.
Either a $20 storewide coupon was distributed to potential shoppers
(denoted by 1) or it was not (denoted by 2). In-store promotion and
couponing were crossed, resulting in a 3 x 2 design with six cells.
Thirty stores were randomly selected, and five stores were randomly
assigned to each treatment condition. The experiment was run for two
months. Sales in each store were measured, normalized to account
for extraneous factors (store size, traffic, etc) and converted to a 1-to-
10 scale. In addition a qualitative assessment was made of the relative
affluence of the clientele of each store, again using a 1-to-10 scale. In
these scales, higher numbers denote higher sales or more affluent
clientele.

Table 16.2 carries the data display.

Suppose that only one factor, namely in-store promotion, was


manipulated. The department store is attempting to determine the
effect of in-store promotion (X) on sales (Y).

The null hypothesis is that the category means are equal:


H0: μ1 = μ2 = μ3
Here μ represents average sales.

We calculate the F statistic as F = 17.944, with 2 and 27 degrees of freedom.

The critical value of F for (2, 27) degrees of freedom at α = 0.05 is 3.35.

Because the calculated value of F is greater than the critical value, we


reject the null hypothesis. We conclude that the population means for
the three levels of in-store promotion are indeed different. The relative
magnitudes of the means for the three categories indicate that a high
level of in-store promotion leads to significantly higher sales.
Coupon Level, In-store Promotion, Sales, and Clientele Rating

Store No.   Coupon Level   In-Store Promotion   Sales   Clientele Rating
1 1 1 10 9
2 1 1 9 10
3 1 1 10 8
4 1 1 8 4
5 1 1 9 6
6 1 2 8 8
7 1 2 8 4
8 1 2 7 10
9 1 2 9 6
10 1 2 6 9
11 1 3 5 8
12 1 3 7 9
13 1 3 6 6
14 1 3 4 10
15 1 3 5 4
16 2 1 8 10
17 2 1 9 6
18 2 1 7 8
19 2 1 7 4
20 2 1 6 9
21 2 2 4 6
22 2 2 5 8
23 2 2 5 10
24 2 2 6 4
25 2 2 4 9
26 2 3 2 4
27 2 3 3 6
28 2 3 2 10
29 2 3 1 9
30 2 3 2 8

Apply one-way ANOVA as:


1. Select ANALYZE from the SPSS menu bar.
2. Click COMPARE MEANS and then ONE-WAY ANOVA.
3. Move “Sales” into the DEPENDENT LIST box.
4. Move “In-Store Promotion” to the FACTOR box.
5. Click OPTIONS.
6. Click DESCRIPTIVE.
7. Click CONTINUE.
8. Click OK.

Running this analysis in SPSS gives sample means of 8.3, 6.2, and 3.7 for the high, medium, and low levels of in-store promotion, respectively. Stores with a high level of in-store promotion have the highest average sales (8.3) and stores with a low level of in-store promotion have the lowest average sales (3.7).
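The one-way ANOVA can also be reproduced outside SPSS. The Python sketch below groups the thirty sales figures from the table by in-store promotion level (pooling the two coupon levels) and assumes scipy is available.

# One-way ANOVA of sales on in-store promotion level.
from scipy.stats import f, f_oneway

high = [10, 9, 10, 8, 9, 8, 9, 7, 7, 6]      # promotion level 1
medium = [8, 8, 7, 9, 6, 4, 5, 5, 6, 4]      # promotion level 2
low = [5, 7, 6, 4, 5, 2, 3, 2, 1, 2]         # promotion level 3

F_stat, p_value = f_oneway(high, medium, low)   # F is about 17.94
F_crit = f.ppf(0.95, 2, 27)                     # critical F for df = (2, 27), about 3.35
print(round(F_stat, 3), round(F_crit, 2), p_value)
# F_stat exceeds F_crit, so the null hypothesis of equal category means is rejected.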

n-way analysis of variance involves the simultaneous examination of


two or more categorical independent variables. n-way analysis of
variance provides an ANOVA model where two or more factors are
involved.

A major advantage is that the interactions between the independent


variables can be examined. The significance of the overall effect,
interaction terms, and the main effects of individual factors are
examined by appropriate F tests. It is meaningful to test the
significance of main effects only if the corresponding interaction
terms are not significant.

ANCOVA includes at least one categorical independent variable and


at least one interval or metric independent variable. The metric
independent variable or covariate is commonly used to remove
extraneous variation from the dependent variable.

Analysis of covariance (ANCOVA): An advanced analysis of variance


procedure in which the effects of one or more metric-scaled
extraneous variables are removed from the dependent variable before
conducting the ANOVA.

COVARIATE: A metric independent variable used in ANCOVA.

When analysis of variance is conducted on two or more factors,


interactions can arise. An interaction occurs when the effect of an
independent variable on a dependent variable is different for different
categories or levels of another independent variable.

Decomposition of the total variation: In one way ANOVA, separation


of the variation observed in the dependent variable into the variation
due to the independent variables plus the variation due to error.

Interaction: When assessing the relationship between two variables,


an interaction occurs if the effect of X1 depends on the level of X2 and
vice-versa.

Multiple η² (eta squared): The strength of the joint effect of two (or more) factors, or the overall effect.

Significance of the overall effect: A test that some differences exist


between some of the treatment groups.

Significance of the interaction effect: A test of the significance of the


interaction between two or more independent variables.
Significance of the main effect: A test of the significance of the main
effect for each individual factor.

If the interaction is significant, it may be ordinal or disordinal.

Ordinal Interaction: An interaction where the rank order of the effects


attributable to one factor does not change across the levels of the
second factor.

Dis-ordinal Interaction: The change in the rank order of the effects of


one factor across the levels of another.

Dis-ordinal interaction may be of a non-crossover or crossover type.


In balanced designs, the relative importance of a factor in explaining the variation in the dependent variable is measured by omega squared (ω²).

Omega squared (ω2): A measure indicating the proportion of the


variation in the dependent variable explained by a particular
independent variable or factor.

Multiple comparisons in the form of a priori or a posteriori contrast


can be used for examining differences among specific means.

Contrasts: In ANOVA, a method of examining differences among two


or more means of the treatment groups.
A priori contrasts: Contrasts that are determined before conducting
the analysis, based on the researcher’s theoretical framework.

A posteriori contrasts: Contrasts made after the analysis. These are


generally multiple comparison tests.

Multiple comparison tests: A posteriori contrasts that enable the researcher to construct generalized confidence intervals that can be used to make pairwise comparisons of all treatment means.

In repeated measures analysis of variance, observations on each


subject are obtained under each treatment condition. This design is
useful for controlling for the difference in subjects that exists prior to
the experiment.

Repeated measures ANOVA: An ANOVA technique used when


respondents are exposed to more than one treatment condition and
repeated measurements are obtained.

Non-metric analysis of variance involves examining the differences in


the central tendencies of two or more groups when the dependent
variable is measured on an ordinal scale.

Non-metric ANOVA: An ANOVA technique for examining the


differences in the central tendencies of more than two groups when
the dependent variable is measured on an ordinal scale.
k-sample median test: A non-parametric test used to examine differences among groups when the dependent variable is measured on an ordinal scale.

Kruskal-Wallis one way analysis of variance: A non-metric ANOVA


test that uses the rank value of each case, not merely its location
relative to the median.

Multi-variate analysis of variance (MANOVA) involves two or more


metric dependent variables.

Multivariate Analysis of variance (MANOVA): An ANOVA technique


using two or more metric dependent variables.


Chapter 17

The product moment correlation coefficient r, measures the linear


association between two metric (interval or ratio scaled) variables. Its
square, r2, measures the proportion of variation in one variable
explained by the other.

Product moment correlation (r): A statistic summarizing the strength


of association between two metric variables (interval or ratio scaled).
It is an index used to determine whether a linear, or straight line
relationship exists between X and Y. It indicates the degree to which
the variation in one variable X is related to variation in another
variable Y. It is also known as Pearson Correlation coefficient, or
simple correlation, bivariate correlation or correlation coefficient.

Covariance: A systematic relationship between two variables in which


a change in one implies a corresponding change in the other (COVxy).

For example, a researcher wants to explain attitudes toward a


respondent’s city of residence in terms of duration of residence in the
city. The attitude is measured on an 11-point scale (1=do not like the
city, 11=very much like the city), and the duration of residence is
measured in terms of the number of years the respondents has lived
in the city.
Respondent No.   Attitude Toward the City   Duration of Residence   Importance Attached to Weather
1 6 10 3
2 9 12 11
3 8 12 4
4 3 4 1
5 10 12 11
6 4 6 1
7 5 8 7
8 2 2 4
9 11 18 8
10 9 9 10
11 10 17 8
12 2 2 5

The correlation coefficient is calculated as:


r = 0.9361
Here r is close to 1.0. This means that respondents’ duration of
residence in the city is strongly associated with their attitude toward
the city. The positive sign of r implies a positive relationship; the
longer the duration of residence, the more favorable the attitude and
vice versa.
When it is computed for a population rather than for a sample, the product moment correlation is denoted by ρ (rho). The coefficient r is an
estimator of ρ. The statistical significance of the relationship between
two variables measured by using r can be conveniently tested. The
hypotheses are:
H0: ρ = 0
H1: ρ ≠ 0
t-statistic would be applied with n-2 degrees of freedom.
Here t is calculated as 8.414 and the degrees of freedom are 12 - 2=10.
From t-distribution table, the critical value of t for a two-tailed test and
α = 0.05 is 2.228.

Hence null hypothesis of no relationship between X and Y is rejected.


This along with positive value of r indicates the attitude toward the
city is positively related to the duration of residence in the city. The
high value of r indicates that this relationship is strong.
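The correlation and its significance test can be reproduced with a short Python sketch using the twelve data pairs from the table; scipy is assumed to be available.

# Product moment correlation of attitude toward the city and duration of residence.
from scipy.stats import pearsonr

attitude = [6, 9, 8, 3, 10, 4, 5, 2, 11, 9, 10, 2]
duration = [10, 12, 12, 4, 12, 6, 8, 2, 18, 9, 17, 2]

r, p_value = pearsonr(attitude, duration)   # r is about 0.936
print(round(r, 4), p_value)
# The small p-value leads to rejection of H0: rho = 0.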

Steps in SPSS are:


1. Select ANALYZE from the SPSS menu bar.
2. Click CORRELATE and then BIVARIATE.
3. Move Attitude into the VARIABLES box. Then move
duration into the VARIABLES box.
4. Check PEARSON under CORRELATION COEFFICIENTS.
5. Check ONE-TAILED under TEST OF SIGNIFICANCE.
6. Check FLAG SIGNIFICANT CORRELATIONS.
7. Click OK.

The partial correlation coefficient measures the association between


two variables after controlling, or adjusting for, the effects of one or
more additional variables. The order of a partial correlation indicates
how many variables are being adjusted or controlled. Partial
correlation can be very helpful for detecting spurious relationships.

Partial correlation coefficient: A measure of the association between


two variables after controlling or adjusting for the effects of one or
more additional variables.

Part correlation coefficient: A measure of the correlation between Y


and X when the linear effects of the other independent variables have
been removed from X but not from Y.

Non-metric correlation: A correlation measure for two non-metric


variables that relies on rankings to compute the correlation.

Bi-variate regression derives a mathematical equation between a


single metric criterion variable and a single metric predictor
variable. The equation is derived in the form of a straight line by using
the least squares procedure. When the regression is run on
standardized data, the intercept assumes a value of 0, and the
regression coefficients are called beta weights. The strength of
association is measured by coefficient of determination, r2, which is
obtained by computing the ratio of SSreg to SSy. The standard error of
estimate is used to assess the accuracy of prediction and may be
interpreted as a kind of average error made in predicting Y from the
regression equation.
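A minimal Python sketch of such a bivariate regression, fitted by least squares to the attitude and duration-of-residence data used earlier (scipy assumed available):

# Bivariate regression: attitude = a + b * duration, fitted by least squares.
from scipy.stats import linregress

attitude = [6, 9, 8, 3, 10, 4, 5, 2, 11, 9, 10, 2]
duration = [10, 12, 12, 4, 12, 6, 8, 2, 18, 9, 17, 2]

fit = linregress(duration, attitude)
r_squared = fit.rvalue ** 2                  # coefficient of determination
print(round(fit.intercept, 3), round(fit.slope, 3), round(r_squared, 3))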

Regression Analysis: A statistical procedure for analyzing associative


relationships between a metric dependent variable and one or more
independent variables.

Bivariate regression: A procedure for deriving a mathematical


relationship, in the form of an equation, between a single metric
dependent variable and a single metric independent variable.

Least-square procedure: A technique for fitting a straight line to a


scattergram by minimizing the square of the vertical differences of all
the points from the line.

Multiple regression involves a single dependent variable and two or


more independent variables. The partial regression coefficient, b1
represents the expected change in Y, when X1 is changed by one unit
and X2 through XK are held constant. The strength of association is
measured by coefficient of multiple determination, R2. The
significance of the overall regression equation may be tested by the
overall F test. Individual partial regression coefficients may be tested
for significance using the t test or the incremental F test. Scattergrams of the residuals, in which the residuals are plotted against the predicted values Ŷi, time, or the predictor variables, are useful for examining the appropriateness of the underlying assumptions and of the regression model fitted.
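As a sketch of multiple regression on the same data set, attitude can be regressed on duration of residence and importance attached to weather (the last column of the earlier table); the statsmodels package is assumed to be available, and its output corresponds to the partial coefficients, R², overall F test, and t tests described above.

# Multiple regression of attitude on duration and importance attached to weather.
import numpy as np
import statsmodels.api as sm

attitude = np.array([6, 9, 8, 3, 10, 4, 5, 2, 11, 9, 10, 2])
duration = np.array([10, 12, 12, 4, 12, 6, 8, 2, 18, 9, 17, 2])
weather = np.array([3, 11, 4, 1, 11, 1, 7, 4, 8, 10, 8, 5])

X = sm.add_constant(np.column_stack([duration, weather]))
model = sm.OLS(attitude, X).fit()

print(model.params)      # intercept and partial regression coefficients b1, b2
print(model.rsquared)    # coefficient of multiple determination R^2
print(model.fvalue)      # overall F test of the regression
print(model.tvalues)     # t tests for the individual partial coefficients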

Multiple regression: A statistical technique that simultaneously


develops a mathematical relationship between two or more
independent variables and an interval-scaled dependent variable.

Multiple regression model: An equation used to explain the results of


multiple regression analysis.

Residual: The difference between the observed value of Yi and the value predicted by the regression equation, Ŷi.

In stepwise regression, the predictor variables are entered or


removed from the regression equation one at a time for the purpose
of selecting a smaller subset of predictors that account for most of
the variation in the criterion variable. Multi-collinearity, or very high
inter-correlations among the predictor variables, can result in several
problems. Because the predictors are correlated, regression analysis
provides no unambiguous measures of relative importance of the
predictors.

Stepwise regression: A regression procedure in which the predictor


variables enter or leave the regression equation one at a time.

Multi-collinearity: A state of very high inter-correlations among


independent variables.
Nominal or categorical variables may be used as predictors by coding
them as dummy variables. Multiple regressions with dummy variables
provide a general procedure for the analysis of variance and
covariance.

Cross validation examines whether the regression model continues to


hold true for comparable data not used in estimation. It is a useful
procedure for evaluating the regression model.

Cross-validation: A test of validity that examines whether a model


holds on comparable data not used in the original estimation.

Double Cross-validation: A special form of validation in which the


sample is split into halves. One half serves as the estimation sample
and the other as a validation sample. The roles of the estimation and
validation halves are then reversed and the cross-validation process
is repeated.
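A minimal sketch of double cross-validation on the earlier attitude/duration data; the random split and the simple bivariate fit are illustrative choices, not prescribed by the text.

# Double cross-validation: fit on one half, validate on the other, then reverse the roles.
import numpy as np
from scipy.stats import linregress

attitude = np.array([6, 9, 8, 3, 10, 4, 5, 2, 11, 9, 10, 2], dtype=float)
duration = np.array([10, 12, 12, 4, 12, 6, 8, 2, 18, 9, 17, 2], dtype=float)

rng = np.random.default_rng(0)              # fixed seed so the split is reproducible
idx = rng.permutation(len(attitude))
half_a, half_b = idx[:6], idx[6:]

def validation_r2(train, test):
    # Fit on the estimation half; report the squared correlation of predicted
    # and observed values on the validation half.
    fit = linregress(duration[train], attitude[train])
    predicted = fit.intercept + fit.slope * duration[test]
    return np.corrcoef(predicted, attitude[test])[0, 1] ** 2

print(validation_r2(half_a, half_b), validation_r2(half_b, half_a))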

Chapter 19

Factor analysis is a class of procedures used for reducing and


summarizing data. Each variable is expressed as a linear combination
of the underlying factors. Likewise the factors themselves can be
expressed as linear combinations of the observed variables. The
factors are extracted in such a way that the first factor accounts for
the highest variance in the data, the second the next highest and so
on. Additionally it is possible to extract the factors so that the factors
are uncorrelated, as in principal component analysis.

Factor analysis: A class of procedures primarily used for data


reduction and summarization.

Interdependence technique: Multivariate techniques in which the


whole set of interdependent relationships is examined.

Factors: An underlying dimension that explains the correlations


among a set of variables.

In formulating the factor analysis problem the variables to be included


in the analysis should be specified based on the past research, theory
and the judgment of the researcher. These variables should be
measured on an interval or ratio scale. Factor analysis is based on a
matrix of correlation between the variables.

For example, the researcher wants to determine the underlying


benefits consumers seek from the purchase of toothpaste. A sample
of 30 respondents was interviewed using mall-intercept interviewing.
The respondents were asked to indicate their degree of agreement
with the following statements using a 7-point scale (1 = strongly
disagree, 7 = strongly agree):

V1: It is important to buy a toothpaste that prevents cavities.


V2: I like a toothpaste that gives shiny teeth.
V3: A toothpaste should strengthen your gums.
V4: I prefer a toothpaste that freshens breath.
V5: Prevention of tooth decay is not an important benefit
offered by a toothpaste.
V6: The most important consideration in buying a toothpaste
is attractive teeth.

Respondent Number   V1   V2   V3   V4   V5   V6
1 7 3 6 4 2 4
2 1 3 2 4 5 4
3 6 2 7 4 1 3
4 4 5 4 6 2 5
5 1 2 2 3 6 2
6 6 3 6 4 2 4
7 5 3 6 3 4 3
8 6 4 7 4 1 4
9 3 4 2 3 6 3
10 2 6 2 6 7 6
11 6 4 7 3 2 3
12 2 3 1 4 5 4
13 7 2 6 4 1 3
14 4 6 4 5 3 6
15 1 3 2 2 6 4
16 6 4 6 3 3 4
17 5 3 6 3 3 4
18 7 3 7 4 1 4
19 2 4 3 3 6 3
20 3 5 3 6 4 6
21 1 3 2 3 5 3
22 5 4 5 4 2 4
23 2 2 1 5 4 4
24 4 6 4 6 4 7
25 6 5 4 2 1 4
26 3 5 4 6 4 7
27 4 4 7 2 2 5
28 3 7 2 6 4 3
29 4 6 3 7 2 7
30 2 3 2 4 7 2
The two basic approaches to factor analysis are principal component
analysis and common factor analysis. In principal component
analysis the total variance in the data is considered. Principal
component analysis is recommended when the researcher‘s primary
concern is to determine the minimum number of factors that will
account for maximum variance in the data for use in subsequent
multivariate analysis. In common factor analysis the factors are
estimated based only on the common variance. This method is appropriate when the primary concern is to identify the underlying dimensions and when the common variance is of interest. It is also known as principal axis factoring.

Principal Components Analysis: An approach to factor analysis that


considers the total variance in data.

Common factor analysis: An approach to factor analysis that


estimates the factors based only on the common variance.

Factor Analysis through SPSS is done as:


1. Select ANALYZE from the SPSS menu bar.
2. Click DATA REDUCTION and then FACTOR.
3. Move var1, var2, var3, var4, var5, var6 into the VARIABLES
box.
4. Click on DESCRIPTIVES. In the pop up window, in the
STATISTICS box check INITIAL SOLUTION. In the
CORRELATION MATRIX box check KMO AND BARTLETT’S
TEST OF SPHERICITY and also check REPRODUCED. Click
CONTINUE.
5. Click on EXTRACTION. In the pop-up window, for METHOD

select PRINCIPAL COMPONENTS. In the ANALYZE box, check


CORRELATION MATRIX. In the EXTRACT box, check
EIGENVALUE OVER 1. In the DISPLAY box, check UNROTATED
FACTOR SOLUTION. Click CONTINUE.
6. Click on ROTATION. In the METHOD box check VARIMAX. In the
DISPLAY box check ROTATED SOLUTION. Click CONTINUE.
7. Click on SCORES. In the pop-up window, check DISPLAY
FACTOR SCORE COEFFICIENT MATRIX. Click CONTINUE.
8. Click OK.

The number of factors that should be extracted can be determined a


priori or based on eigenvalues, scree plots, percentage of variance, split-half reliability, or significance tests. Although the initial or un-
rotated factor matrix indicates the relationship between the factors
and individual variables, it seldom results in factors that can be
interpreted, because the factors are correlated with many variables.
Therefore, rotation is used to transform the factor matrix into a
simpler one that is easier to interpret. The most commonly used
method of rotation is the varimax procedure, which results in
orthogonal factors. If the factors are highly correlated in the
population, oblique rotation can be utilized. The rotated factor matrix
forms the basis for interpreting the factors.
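As a sketch of the eigenvalue criterion, the correlation matrix of the six toothpaste variables can be computed directly and its eigenvalues inspected; this illustrates the eigenvalue-over-1 extraction rule, while varimax rotation itself would require an additional step (for example, a dedicated factor analysis package) that is not shown here.

# Eigenvalues of the correlation matrix of V1..V6 (rows are the 30 respondents).
import numpy as np

data = np.array([
    [7, 3, 6, 4, 2, 4], [1, 3, 2, 4, 5, 4], [6, 2, 7, 4, 1, 3],
    [4, 5, 4, 6, 2, 5], [1, 2, 2, 3, 6, 2], [6, 3, 6, 4, 2, 4],
    [5, 3, 6, 3, 4, 3], [6, 4, 7, 4, 1, 4], [3, 4, 2, 3, 6, 3],
    [2, 6, 2, 6, 7, 6], [6, 4, 7, 3, 2, 3], [2, 3, 1, 4, 5, 4],
    [7, 2, 6, 4, 1, 3], [4, 6, 4, 5, 3, 6], [1, 3, 2, 2, 6, 4],
    [6, 4, 6, 3, 3, 4], [5, 3, 6, 3, 3, 4], [7, 3, 7, 4, 1, 4],
    [2, 4, 3, 3, 6, 3], [3, 5, 3, 6, 4, 6], [1, 3, 2, 3, 5, 3],
    [5, 4, 5, 4, 2, 4], [2, 2, 1, 5, 4, 4], [4, 6, 4, 6, 4, 7],
    [6, 5, 4, 2, 1, 4], [3, 5, 4, 6, 4, 7], [4, 4, 7, 2, 2, 5],
    [3, 7, 2, 6, 4, 3], [4, 6, 3, 7, 2, 7], [2, 3, 2, 4, 7, 2],
])

corr = np.corrcoef(data, rowvar=False)           # 6 x 6 correlation matrix
eigenvalues = np.linalg.eigvalsh(corr)[::-1]     # eigenvalues, largest first
print(np.round(eigenvalues, 3))
n_factors = int((eigenvalues > 1.0).sum())       # number of factors with eigenvalue over 1
print(n_factors)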

Orthogonal Rotation: Rotation of factors in which the axes are


maintained at right angles.
Varimax procedure: An orthogonal method of factor rotation that
minimizes the number of variables with high loading on a factor,
thereby enhancing the interpretability of the factors.

Oblique rotation: Rotation of factors when the axes are not


maintained at right angles.

Factor scores can be computed for each respondent. Alternatively


surrogate variables may be selected by examining the factor matrix
and selecting for each factor, a variable with the highest or near
highest loading. The differences between the observed correlations
and the reproduced correlations, as estimated from the factor matrix, can be examined to determine model fit.

Factor scores: Composite scores estimated for each respondent on


the derived factors.

Chapter 22

Report preparation and presentation is the final step in the marketing research project. This process begins with the interpretation of the data analysis results and leads to conclusions and recommendations. Next, the formal report is written and an oral presentation made. After management has read the report, the researcher should conduct a follow-up, assisting management and undertaking a thorough evaluation of the marketing research project.
Pie chart: A round chart divided into sections.

Line chart: A chart that connects a series of data points using


continuous lines.

Stratum chart: A set of line charts in which the data are successively
aggregated over the series. Areas between the line charts display the
magnitude of the relevant variables.

Pictograph: A graphical depiction that makes use of small pictures or


symbols to display the data.

Bar chart: A chart that displays data in bars positioned horizontally or


vertically.

Histogram: A vertical bar chart in which the height of the bars


represents the relative or cumulative frequency of occurrence.

Tell ‘Em principle: An efficient guideline for structuring a


presentation. This principle states: (1) Tell ‘em what you are going to
tell ‘em, (2) tell ‘en and (3) tell ‘em what we have told ‘em.

KISS ‘Em principle: A principle of report presentation that states:


Keep It Simple and Straightforward.
In international marketing research, report preparation may be complicated by the need to prepare reports for management in different countries and in different languages. Several ethical issues are pertinent, particularly those related to the interpretation and reporting of the research process and findings to the client and the use of these results by the client. The use of microcomputers and mainframes can greatly facilitate report preparation and presentation.
