Multiple regression is a statistical technique that predicts someone's score on one variable on the basis of their scores on several other variables.
Dependent variable = criterion variable
Independent variable = predictor variable
Beta = standardised regression coefficient
R Square
R: the correlation between the observed value and the predicted value of the criterion variable.
R2: the proportion of the variance in the criterion variable which is accounted for by the proposed model.
Adjusted R2: corrects R2 for the number of variables in the model and the number of observations (a measure of the success of the model).
R2 x 100 gives the percentage of variance in the criterion variable explained by the model.
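These quantities can be computed directly from the observed and model-predicted scores. A minimal sketch in Python; the scores and the number of predictors are made up for illustration:

```python
# Hypothetical data: observed criterion scores and the values
# predicted for them by a fitted regression model.
observed  = [10.0, 12.0, 9.0, 15.0, 13.0, 11.0]
predicted = [10.5, 11.5, 9.5, 14.0, 13.5, 11.0]

n = len(observed)          # number of observations
m = 2                      # number of predictor variables (assumed)

mean_y = sum(observed) / n
ss_total    = sum((y - mean_y) ** 2 for y in observed)
ss_residual = sum((y - yhat) ** 2 for y, yhat in zip(observed, predicted))

# R^2: proportion of variance in the criterion accounted for by the model.
r_squared = 1 - ss_residual / ss_total
# Adjusted R^2 penalises extra predictors relative to sample size.
adj_r_squared = 1 - (1 - r_squared) * (n - 1) / (n - m - 1)

print(f"R^2 = {r_squared:.3f} ({r_squared * 100:.1f}% of variance explained)")
print(f"Adjusted R^2 = {adj_r_squared:.3f}")
```

Multiplying R2 by 100, as in the last print statement, expresses the explained variance as a percentage.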
To know how well a set of variables is able to predict a particular outcome. To know which variable is the best predictor of an outcome. To judge whether a particular predictor variable is still able to predict an outcome when the effects of other variables are controlled.
The dependent variable (criterion variable) should be continuous. The predictor variables should be measured on a ratio, interval or ordinal scale. A nominal predictor variable is legitimate, but only if it is dichotomous. The sample size must exceed the number of predictor variables (N > 50 + 8m*). *m = number of IVs
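The N > 50 + 8m rule of thumb is simple arithmetic; a quick sketch (the helper name is mine, not from the notes):

```python
def minimum_sample_size(m):
    """Rule of thumb from the notes: N must exceed 50 + 8m, m = number of IVs."""
    return 50 + 8 * m

# Required sample sizes for a few model sizes.
for m in (2, 3, 5):
    print(f"{m} predictors -> need more than {minimum_sample_size(m)} cases")
```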
The relationship between the predictor variables and the criterion variable is linear. The errors/residuals are distributed with equal variance (homoscedasticity, aka homogeneity of variance). The errors/residuals are independent (independent observations). The errors/residuals are normally distributed.
Check via descriptives (a histogram with a normal curve), Levene's statistic or a boxplot, and a scatterplot.
A representative sample and proper specification of the model (no omitted variables). No autocorrelation of the errors. No outlier distortion. No multicollinearity or singularity among the predictor variables.
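One common way to screen for multicollinearity is to inspect the correlations among the predictors themselves. A sketch with a hand-rolled Pearson correlation; the data and the .9 cut-off are illustrative conventions, not from the notes:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical predictor scores for the same participants.
x1 = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
x2 = [2.1, 3.9, 6.2, 7.8, 10.1, 12.2]   # nearly a multiple of x1
x3 = [5.0, 1.0, 4.0, 2.0, 6.0, 3.0]     # unrelated ordering

r12 = pearson_r(x1, x2)
r13 = pearson_r(x1, x3)
print(f"r(x1, x2) = {r12:.3f}")
print(f"r(x1, x3) = {r13:.3f}")

# A common screening rule of thumb (an assumption, not from the notes):
for name, r in (("x1-x2", r12), ("x1-x3", r13)):
    if abs(r) > 0.9:
        print(f"{name}: correlation too high, consider dropping one predictor")
```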
To assess the relative contribution of each variable:
Enter: all variables entered simultaneously (for a small number of predictors).
Forward: variables entered one at a time, in an order determined by the strength of their correlation with the criterion, with the effect of each addition assessed as it is made.
Backward: all variables entered, then the weakest predictor removed and the regression recalculated; this is repeated until only useful predictors remain.
Stepwise: variables entered one at a time and their contribution assessed; if the contribution is significant the variable is retained, and all other variables in the model are then retested and removed if they no longer contribute, ending with the smallest possible set of predictor variables (the most sophisticated method).
Remove: variables removed from the model in a block, with the result then assessed, instead of one by one.
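The forward method can be sketched as code: fit the model with each candidate predictor added in turn, keep the one that raises R2 the most, and stop when the improvement becomes negligible. A self-contained illustration with made-up data and an assumed stopping threshold (SPSS uses significance tests rather than a raw R2 cut-off):

```python
def solve(A, b):
    """Gauss-Jordan elimination for the normal equations (small systems only)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(n):
            if r != col and M[col][col] != 0:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def r_squared(y, X):
    """Fit OLS of y on the columns of X (plus an intercept), return R^2."""
    n = len(y)
    cols = [[1.0] * n] + X                     # prepend intercept column
    k = len(cols)
    A = [[sum(cols[i][t] * cols[j][t] for t in range(n)) for j in range(k)]
         for i in range(k)]
    b = [sum(cols[i][t] * y[t] for t in range(n)) for i in range(k)]
    beta = solve(A, b)
    pred = [sum(beta[i] * cols[i][t] for i in range(k)) for t in range(n)]
    my = sum(y) / n
    ss_tot = sum((v - my) ** 2 for v in y)
    ss_res = sum((v - p) ** 2 for v, p in zip(y, pred))
    return 1 - ss_res / ss_tot

# Hypothetical criterion and predictor scores.
y = [3.0, 5.0, 7.0, 10.0, 11.0, 14.0]
predictors = {
    "x1": [1.0, 2.0, 3.0, 4.0, 5.0, 6.0],      # strongly related to y
    "x2": [2.0, 1.0, 4.0, 3.0, 6.0, 5.0],
    "x3": [1.0, 1.0, 2.0, 2.0, 1.0, 2.0],
}

selected, remaining, best_r2 = [], dict(predictors), 0.0
while remaining:
    # R^2 of the model with each remaining candidate added to the selection.
    gains = {name: r_squared(y, [predictors[s] for s in selected] + [col])
             for name, col in remaining.items()}
    name = max(gains, key=gains.get)
    if gains[name] - best_r2 < 0.01:           # stopping threshold (assumed)
        break
    selected.append(name)
    best_r2 = gains[name]
    del remaining[name]
    print(f"added {name}, R^2 = {best_r2:.3f}")

print("final model:", selected)
```

With this data x1 alone accounts for almost all of the variance, so the procedure stops after one step; the backward and stepwise methods differ only in whether they start full and prune, or re-test already-entered variables at each step.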
Null hypothesis:
There is no relationship between the X variables and the Y variable; that is, the fit of the observed Y values to those predicted by the multiple regression equation is no better than we would expect by chance. Adding an X variable to the multiple regression does not improve the fit of the equation any more than expected by chance.
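This overall null hypothesis is conventionally tested with an F statistic computed from R2, the sample size n, and the number of predictors m. A sketch with made-up numbers:

```python
def f_statistic(r2, n, m):
    """F for H0: all regression coefficients are zero.
    F = (R^2 / m) / ((1 - R^2) / (n - m - 1)),
    with m and n - m - 1 degrees of freedom."""
    return (r2 / m) / ((1 - r2) / (n - m - 1))

# Hypothetical results: R^2 = .40 from 75 cases and 3 predictors.
r2, n, m = 0.40, 75, 3
F = f_statistic(r2, n, m)
print(f"F({m}, {n - m - 1}) = {F:.2f}")
# Compare F with the critical value for (m, n - m - 1) df from an F table;
# if it exceeds the critical value, reject the null hypothesis.
```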
SPSS for Psychologists, www.palgrave.com/pdfs/0333734718.pdf
Multiple Regression, http://ccnmtl.columbia.edu/projects/qmss/multiple_regression/hypothesis_testing.html
http://udel.edu/~mcdonald/statmultreg.html