
CONSUMER BUYING BEHAVIOUR

REGARDING SHOES

SUBMITTED IN PARTIAL FULFILLMENT OF THE

REQUIREMENTS FOR THE DEGREE OF

MASTER OF BUSINESS ADMINISTRATION

TO
PROF. BIBHAS CHANDRA

DMS IIT(ISM) DHANBAD

SUBMITTED BY:
DEEPAK KUMAR

ADMISSION NO. – 17MB000103

ACKNOWLEDGEMENT

I express my sincere gratitude to DMS, IIT (ISM) Dhanbad for
providing me a platform where I could utilize the inputs provided
by my guide to enlighten me and bring out the best in me.
My sincere thanks to the faculty members and staff for offering
me all kinds of support and help in preparing this research-based
project. I am deeply indebted to my guide, Dr. Bibhas Chandra,
not only for his valuable and enlightened guidance but also for
the freedom he allowed me during this project work.
I am thankful to my friends and well-wishers who supported me
with a helping hand and made this a relatively easier affair.

Deepak Kumar
17MB000103
MBA (2017-2019)

TABLE OF CONTENTS

PREFACE

LITERATURE REVIEW

RESEARCH METHODOLOGY

FACTOR ANALYSIS

REGRESSION ANALYSIS

ANOVA

T-TEST

CONCLUSION

LIMITATIONS & REFERENCES

PREFACE
The research work deals with the process of adoption of innovation,
wherein there is a series of stages through which one goes from first
hearing about a product to finally adopting it. An individual's innovative
personality, predisposition and cognitive style towards innovation can
be applied to the consumption domain (the purchase of shoes) across
products. An individual is at first exposed to an innovation but lacks
information about it; he then becomes interested in the innovation,
accepts the concept of change and finalizes his decision. I have tried to
put in my best effort to complete this task on the basis of the skills
acquired during my studies in the institute. The research work will also
serve students like me as self-learning of SPSS, applying statistical
techniques to research problems. The research and data analysis (factor
analysis, regression, ANOVA) will help to understand consumer
behaviour regarding buying shoes.

LITERATURE REVIEW
Customer behaviour covers people's emotions, moods, affections and specific
feelings; in other words, it covers the environmental events through which they
exchange ideas and benefits. Peter and Olson (1993) mention that buying
behaviour concerns the interactions between such people, that is, people who
purchase products for personal use and not for business purposes.

Cyert (1956) may have been the first to observe that a number of managers, in
addition to the purchasing agents, are involved in the buying process; the concept
was labeled 'buying behaviour' and popularized by Robinson, Faris and Wind
(1967). Webster and Wind (1972) famously identified five buying roles: user,
influencer, buyer, decider and gatekeeper. Further categories have since been
suggested, such as the 'initiator' (Bonoma, 1981) and the 'analyst' and 'spectator'
(Wilson, 1998).

The product purchase decision is not always made by the user, and the buyer
does not necessarily use the product. Marketers must decide at whom to direct
their promotional efforts, the buyer or the user, and must identify the person who
is most likely to influence the decision. If marketers understand customer
behaviour, they are able to predict how customers are likely to react to various
informational and environmental cues, and are able to shape their marketing
strategies accordingly (Kotler, 1994).

Schiffman and Kanuk (2004) define customer behaviour as the behaviour that
customers display in searching for, purchasing, using, evaluating and disposing of
products and services that they expect will satisfy their needs. Customer buying
behaviour incorporates the acts of individuals directly involved in obtaining,
using and disposing of economic goods and services, including the decision
processes that precede and determine these acts.

RESEARCH METHODOLOGY
 Sampling methodology:
Sample Size – 70 respondents
Sample Unit – Students of graduation and post-graduation courses and working
people have been taken as the sample unit.
Sampling Area – Dhanbad, Patna, Delhi, Bhojpur, Chandigarh
Sampling Technique – Random sampling
 Data collection:
Primary data has been used, collected through a questionnaire and
observation, which are the two basic methods of collecting primary data.
 Distribution of respondents based on gender, age and income:

Gender

                 Frequency   Percent   Valid Percent   Cumulative Percent
Valid   male         45        64.9         64.9               64.9
        female       25        35.1         35.1              100.0
        Total        70       100.0        100.0
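For reference, a minimal sketch of the SPSS syntax behind such a frequency table
(Gender is a placeholder for the variable name used in our data file):

    * Frequency distribution of respondents by gender.
    FREQUENCIES VARIABLES=Gender
      /ORDER=ANALYSIS.

Adding the age and income variables (whatever they are named in the file) to the
VARIABLES list produces the remaining distributions.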

FACTOR ANALYSIS:
Factor analysis is a multivariate statistical technique used to identify the factors
underlying the variables by clubbing related variables into the same factor. It is a
dimension-reduction technique which reduces a large number of variables to a few
factors on the basis of their interrelations. The key concept of factor analysis is
that multiple observed variables have similar patterns of responses because they
are all associated with a latent (not directly measured) variable. In every factor
analysis, there are as many factors as there are variables. Each factor captures a
certain amount of the overall variance in the observed variables, and the factors
are always listed in order of how much variation they explain. Factor analysis is
related to principal component analysis (PCA), but the two are not identical, and
there has been significant controversy in the field over the differences between the
two techniques. PCA is a more basic version of exploratory factor analysis that
was developed in the early days, prior to the advent of high-speed computers.
From the point of view of exploratory analysis, the eigenvalues of PCA are
inflated component loadings, i.e. contaminated with error variance. The
widespread adoption of PCA can be attributed to it being the default extraction
method in both SPSS and SAS. Holding this default position has more than likely
led to PCA being used mistakenly when exploratory factor analysis is more
suitable. The goal of PCA is to reduce the measured variables to a small set of
composite components that capture as much information as possible in as few
components as possible. On the other hand, the goal of exploratory factor analysis
(EFA) is to find the latent structure of the dataset by uncovering common factors.
Therefore, exploratory factor analysis accounts for shared variance. This is an
important distinction from PCA, as it fundamentally means EFA is more suitable
when exploring underlying theoretical constructs.
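This distinction can be stated compactly with the common factor model that EFA
assumes:

    x = \Lambda f + \varepsilon

where x is the vector of observed variables, \Lambda is the matrix of factor
loadings, f holds the latent common factors, and \varepsilon holds the unique
(error) variances. PCA omits the \varepsilon term and simply re-expresses the
total variance, which is why its loadings are described above as contaminated
with error variance.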

Assumptions in factor analysis:
1. No outliers: it is assumed that there are no outliers in the data.

2. No perfect multicollinearity: factor analysis is an interdependency technique,
so there should be no perfect multicollinearity between the variables.

3. Homoscedasticity: homoscedasticity between variables means that the variance
around the regression line is the same for all values of the predictor variable (X).
Since factor analysis is a linear function of the measured variables, it does not
require homoscedasticity between the variables.

4. Linearity: factor analysis is also based on the linearity assumption. Non-linear
variables can also be used; after transformation, however, they become linear.

Steps to carry out:

 Compute descriptive statistics for all the variables (mean, standard
deviation, skewness, standard error, kurtosis, and coefficient of variability)
to understand the nature of the variables under study.

 Prepare a correlation matrix with all the variables taken in the study.

 Apply the KMO test to check the adequacy of the data for running factor
analysis. The value of KMO ranges from 0 to 1; the larger the value of KMO,
the more adequate the sample. As a convention, any value greater than 0.5
signifies the adequacy of the sample for running factor analysis.

 Apply Bartlett's test of sphericity to test the hypothesis that the correlation
matrix is not an identity matrix. If the correlation matrix is an identity matrix,
then factor analysis is inappropriate.

 Obtain the unrotated factor solution by using principal component analysis.
This provides the number of factors along with their eigenvalues, which can
be shown by the scree plot. This solution also provides the factor loadings of
the variables on the different factors and the percentage of variability
explained by each factor. (A syntax sketch of these preliminary steps is given
after this list.)
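As a sketch, the first two steps can be run with the following SPSS syntax, where
q1 TO q23 is a placeholder for the 23 consecutively stored Likert items in our file;
KMO and Bartlett's test are requested later through the FACTOR command itself:

    * Descriptive statistics for the items.
    DESCRIPTIVES VARIABLES=q1 TO q23
      /STATISTICS=MEAN STDDEV SEMEAN SKEWNESS KURTOSIS.

    * Correlation matrix of the items.
    CORRELATIONS
      /VARIABLES=q1 TO q23
      /PRINT=TWOTAIL NOSIG.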
FACTOR ANALYSIS PROCEDURE IN SPSS:
We are supposed to run factor analysis on the data we have collected through our
questionnaire, which contains 70 cases. The data to be taken into consideration are
the questions based on the Likert scale. The very first step is to copy all the
Likert-scale responses into SPSS. These data will appear in the "Data view".
After that we click into "Variable view", change the names of all the variables we
have selected, and change the measure to "Scale" for all the variables. (See fig. 1.)

Fig. 1

Once we have opened the file in SPSS, select Analyze > Dimension Reduction >
Factor. At this point, a window will open and you will see all your variables on
the left-hand side. Select the variables you wish to include in the analysis and
move them over to the Variables section on the right-hand side (see fig. 2). Here
we move 23 items, excluding the one we have considered as the dependent
variable. (A variable that depends upon or is a consequence of another variable is
termed the dependent variable, and the variable that is antecedent to the dependent
variable is termed the independent variable. Here we consider "Adopt innovation"
as the dependent variable and the rest as independent variables.)

Fig.2

Then select the Descriptives button and, in the section marked Correlation Matrix,
select Coefficients, Significance levels, Determinant, and KMO and Bartlett's test
of sphericity, then hit Continue. (See fig. 3.)

Fig.3

Now click on the Extraction button and in the Method section make sure Principal
components is selected from the dropdown box. In the Analyze section make sure
Correlation matrix is selected. Under Display, select Unrotated factor solution and
tick the checkbox beside Scree plot. In the Extract section you will see two
options with radio buttons beside each; the first is Eigenvalues greater than 1 and
this is the default. For now, leave this as it is. Click Continue.

Fig.4

Next click on the Rotation button and select Varimax, as this is an orthogonal
rotation technique which maximises the variances of the loadings on the new axes.
Under Display, select Rotated solution, then click Continue.

Fig.5

Next click on Scores and tick Save as variables, then click Continue (see fig. 6).

Fig.6

Now click on Options, select the radio button for Exclude cases listwise, then tick
Suppress small coefficients and enter the value 0.65 in the box beside Absolute
value below. By choosing this option, SPSS will hide coefficients less than 0.65.
This is a useful tool for a number of reasons. Firstly, it helps interpretation, as we
can see more clearly where particular items load. Secondly, it highlights items
with loadings less than 0.65 on all dimensions. When an item does not load on any
dimension (i.e. has loadings less than 0.65 on all dimensions), it may indicate the
item is unreliable and, as a result, may be a candidate for deletion. Finally, this
also shows whether any items cross-load, meaning an item loads on more than one
dimension, which would lead us to question its reliability.

(Fig.7)

Click Continue and OK. The factor analysis output will then open in a second
window, known as the Output window.
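Equivalently, the dialog choices above can be pasted as syntax. A minimal sketch,
again assuming the 23 items are stored consecutively as q1 to q23:

    * PCA extraction, eigenvalue > 1, varimax rotation, loadings below .65 hidden.
    * Factor scores saved as new variables; cases with missing data excluded listwise.
    FACTOR
      /VARIABLES q1 TO q23
      /MISSING LISTWISE
      /PRINT INITIAL CORRELATION SIG DET KMO EXTRACTION ROTATION
      /FORMAT BLANK(.65)
      /PLOT EIGEN
      /CRITERIA MINEIGEN(1)
      /EXTRACTION PC
      /ROTATION VARIMAX
      /SAVE REG(ALL).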

OUTPUT INTERPRETATION:
1. Test the assumptions:
The value of the Kaiser-Meyer-Olkin Measure of Sampling Adequacy
(KMO) was checked. This should be 0.6 or above; a high value close to 1
indicates that factor analysis is useful for our data. It indicates the
proportion of variance in our variables that might be caused by underlying
factors. For our example, KMO is .918, which is well within acceptable
limits (see Figure 8). Bartlett's Test of Sphericity should be significant
(less than 0.05), and we have met this criterion, as the test is significant
(p = .000).

KMO and Bartlett's Test

Kaiser-Meyer-Olkin Measure of Sampling Adequacy.            .918

Bartlett's Test of Sphericity    Approx. Chi-Square      828.175
                                 df                          105
                                 Sig.                       .000

(Fig. 8)

2. No. of factors to be extracted:

The first and most popular method for deciding on the retention of factors is
Kaiser's eigenvalue-greater-than-1 criterion. This rule specifies that all factors
with eigenvalues greater than one are retained for interpretation. This method
offers the advantage of being easy to understand and is also the default method
in most programs. However, this method may lead to arbitrary decisions; for
example, it does not make sense to retain a factor with an eigenvalue of 1.01
and then to regard a factor with an eigenvalue of 0.99 as irrelevant. A
technique which overcomes some of the deficiencies inherent in Kaiser's
approach is the scree test. The scree test graphically presents the eigenvalues
in descending order, linked with a line. This graph is then scrutinized to
determine where there is a noticeable change in its shape, known as the point
of inflexion. Once you have identified the point at which the last significant
break takes place, only factors above and excluding this point should be
retained.

(Fig.9)

If we apply Kaiser's eigenvalue-greater-than-1 criterion, we would extract three
factors from the dataset. This is determined by examining the Total Variance
Explained table (see Figure 9), wherein the eigenvalue for the first dimension is
8.268, which accounts for 55.120% of the variance extracted. Looking further
down the table, the fourth factor does not meet the eigenvalue-greater-than-1
criterion, as it has an eigenvalue of 0.728. Let us examine the scree plot (see
Figure 10) to find the point of inflexion (elbow). In our case the most obvious
break is at Factor 4. A scree plot displays the eigenvalues associated with each
component or factor in descending order versus the number of the component or
factor. We can use scree plots in principal component analysis and factor analysis
to visually assess which components or factors explain most of the variability in
the data. Hence we retain three factors, explaining 72.640% of the variance.

(Fig.10)
3. Factor Rotation and its Interpretation: The table we are most interested in is
the Rotated Component Matrix, which displays the rotated factor loadings and is
used to interpret the dimensions. However, before beginning interpretation, the
first thing we need to check for is cross-loadings. A cross-loading is an item with
coefficients greater than .65 on more than one dimension. To help with this, we
requested that all loadings less than .65 be suppressed in the output to aid
interpretation. As we can see, our example is free from cross-loadings, as all items
load on only one dimension. The second thing we need to check is whether there
are items that do not load on any of the factors, i.e. have loadings less than .65 on
all dimensions. If we found items cross-loading or not loading at all, this would
suggest they are poor/unreliable items and may need to be deleted from the
analysis. If this were to happen, you would need to re-run your analysis without
the offending item.

Now we are required to give a common name to each factor according to the
items in it. For the 1st dimension the suitable name is "Features", for the 2nd
dimension "Locality", and for the 3rd one "Discount".

See figure below.

Three factors were extracted, explaining 72.640% of the variance. This was
decided on the basis of eigenvalues, cumulative variance and inspection of the
scree plot. We now see the component score coefficients for each of the factors
according to the varimax rotation method, and then look into the component
transformation matrix below.

Thus, with the help of this statistical tool, we could identify which factors or
dimensions form the most important part of this survey.

REGRESSION ANALYSIS:

Regression analysis is a statistical process for estimating the relationships among
variables. More specifically, it helps one understand how the typical value of the
dependent variable (or 'criterion variable') changes when any one of the
independent variables (predictor variables) is varied. When estimation is done on
the basis of one independent variable, it is simple regression; when estimation is
carried out on more than one independent variable, it is multiple regression
analysis. Regression also gives us R squared, called the coefficient of
determination, which tells us how good our model is. Its value ranges from 0 to 1,
with 0 being a terrible model and 1 being a perfect model. It expresses the
percentage of variance in the dependent variable explained by the independent
variables. For example, if R = 0.8 then 64% of the variability in the dependent
variable is explained by the selected independent variables.
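In symbols, the coefficient of determination is

    R^2 = 1 - \frac{SS_{res}}{SS_{tot}}

where SS_{res} is the residual sum of squares and SS_{tot} is the total sum of
squares about the mean. With a single predictor, R^2 is simply the square of the
correlation coefficient, which is why R = 0.8 corresponds to 64% of variability
explained.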

Regression analysis using SPSS

Once we are done with the factor analysis and know the factors which were its
outcome, we can run the regression analysis. As we have given names to all the
factors, these factors are considered the independent variables, namely Features,
Locality and Discount.
In SPSS, select Analyze > Regression > Linear. At this point a window will open
and you will have to put your dependent variable in the Dependent box and all the
independent factors in the Independent(s) box, and select the method as Stepwise.
See the figure below.

Then click on Statistics and select Model fit, Confidence intervals (level 95%),
R squared change, Descriptives, Collinearity diagnostics and Durbin-Watson,
then click Continue and OK (see below). When you click OK, a separate output
will open in another tab showing the results.
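A sketch of the pasted syntax for this stepwise model, where AdoptInnovation,
Features, Locality and Discount are placeholder names for the dependent variable
and the saved factor-score variables in our file:

    * Stepwise regression of the dependent variable on the three factor scores.
    REGRESSION
      /DESCRIPTIVES MEAN STDDEV CORR SIG N
      /MISSING LISTWISE
      /STATISTICS COEFF OUTS CI(95) R ANOVA CHANGE COLLIN TOL
      /CRITERIA=PIN(.05) POUT(.10)
      /DEPENDENT AdoptInnovation
      /METHOD=STEPWISE Features Locality Discount
      /RESIDUALS DURBIN.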

Interpretation of output:
The first thing we need to see here is the Variables Entered/Removed table. This
helps us know which factors were taken into consideration for the regression
analysis. Here we can see that three variables were entered, namely Discount,
Locality and Features; since stepwise selection enters the strongest predictor
first, Discount emerges as the most important factor, followed by the others.

The resulting SPSS output tables are shown below. The model fit output consists
of a "Model Summary" table and an "ANOVA" table. The former includes the
multiple correlation coefficient R, its square R2, and an adjusted version of this
coefficient. The multiple correlation coefficient R = 0.651 and R square = 0.523,
or 52.3%. The adjusted R square was 0.493 for the last iteration, and the standard
error of the estimate was 0.62989. The change statistics show that the R square
change was 0.424, the F change was 16.214, and the significance of the change
was 0.000 (which should be less than 0.05).

If we look at the ANOVA table, the sum of squares at the final step is equal to
45.486 and the significance level is equal to 0.000.

Next we look at the Coefficients table. This table also includes the beta weights
(which express the relative importance of the independent variables) and the
collinearity statistics. Here we can see that the significance level for all the factors
is less than 0.05, which is acceptable for the regression analysis. The p-value for
each term tests the null hypothesis that its coefficient is zero; a low p-value
(< 0.05) indicates that you can reject the null hypothesis. We use the coefficient
p-values to determine which terms to keep in the regression model. If we go to the
last column of the Coefficients table, we can see in the collinearity statistics that
the tolerance and VIF are the same for all the factors, equal to 1.000; this is
expected, because orthogonally rotated factor scores are uncorrelated. Variance
inflation factors measure how much the variances of the estimated regression
coefficients are inflated compared to when the predictor variables are not linearly
related, and thus indicate how much multicollinearity (correlation between
predictors) exists in a regression analysis.
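Formally, for predictor j,

    \mathrm{VIF}_j = \frac{1}{1 - R_j^2}

where R_j^2 is the R squared obtained by regressing predictor j on all the other
predictors. Uncorrelated predictors give R_j^2 = 0 and hence VIF = 1, which is
exactly what we observe for our orthogonal factor scores.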

T-TEST
A t-test is an analysis of two population means through the use of statistical
examination. A two-sample t-test is commonly used with small sample sizes,
testing the difference between the samples when the variances of the two normal
distributions are not known.
A t-test looks at the t-statistic, the t-distribution and the degrees of freedom to
determine the probability of a difference between populations; the test statistic
used in the test is known as the t-statistic. To compare three or more groups, an
analysis of variance (ANOVA) must be used.

T-TEST PROCEDURE USING SPSS

In SPSS, go to Analyze > Compare Means > Independent-Samples T Test. A
window will appear on the screen. Move the dependent variable, i.e. Preference,
to the Test Variable(s) list and gender to the Grouping Variable box. Then we
need to define the groups by clicking on Define Groups; a small box will appear,
in which we enter 1 for male (group 1) and 2 for female (group 2).
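A sketch of the equivalent syntax, with Preference and Gender as placeholder
variable names:

    * Independent-samples t-test comparing group 1 (male) and group 2 (female).
    T-TEST GROUPS=Gender(1 2)
      /MISSING=ANALYSIS
      /VARIABLES=Preference
      /CRITERIA=CI(.95).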

INTERPRETATION OF OUTPUT
On the basis of gender, the factors Features, Locality and Discount show
significance values greater than 0.05. So we can say that the factors (Features,
Locality, Discount) do not show a significant difference across gender.

CONCLUSION
In this study, we examined the factors that impact consumers' buying of shoes. A
conceptual model was used in order to assess the effects of the variables on each
other using regression analysis.

Results indicated that Features, Locality and Discount were the main factors on
which the buying of shoes depends. Features was the most important factor
preferred by consumers.

According to the t-tests done, the findings were as follows:

 Gender had no significant effect on Features:
males and females have equal preferences regarding features.
 Gender had no significant effect on Locality:
males and females have equal preferences regarding locality.
 Gender had no significant effect on Discount:
males and females have equal preferences regarding discount.

LIMITATIONS

 Using a questionnaire as the data-gathering tool meant that respondents,
because of their hurried nature, may not have answered the questions
exactly according to what they think and how they behave.

 The sample of the study was limited to only 70 respondents, so the study
may lack generalizability.

 The inexperience of the researcher makes the analysis less precise when
compared to professional analysis.
REFERENCES:
 Verma, J.P., Data Analysis in Management with SPSS Software.

 Armstrong, J.S. and Soelberg, P. (1968). On the interpretation of factor
analysis. Retrieved from
http://repository.upenn.edu/cgi/viewcontent.cgi?article=1016&context=marketing_papers

 Googlescholar.com
