
ΔY = b1·ΔX // ΔY = Σ bj·ΔXj
n = DF total + 1 // MOE = k * SEE // SE Mean = StDev / √N
SSE = Σ(Y − Ŷ)² // SEE (standard error of estimate) = square root of MSE // R² = 1 − SSE/SST.
T-ratio = (mean − μ) / SE Mean [T-ratio: sample mean is T standard errors away from μ.]
Correlation (use abs value): If |r| < .5, the correlation between X and Y is weak. If |r| ≥ .5, the correlation between X and Y is strong.
Confidence Interval (1 − α = C.I.): "We are 90% confident that the population mean lies between ..." // k=1 @ 68% // k=2 @ 95% // k=3 @ 99%. One-Sample T-test CI = [mean − SE Mean·k, mean + SE Mean·k]; k is the critical value. Sample coefficient is T standard deviations away from 0. Regression (confidence interval for ΔY): point estimator: ΔŶ = ΔX·coef; margin of error: plus or minus k·ΔX·SE Coef.
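A minimal Python sketch of the mean/SE/CI formulas above (the function name and sample data are my own, for illustration only):

```python
import math
import statistics

def summarize(sample, mu0, k=2):
    """SE Mean, t-ratio against a hypothesized mean mu0, and a
    k-standard-error confidence interval for the mean."""
    n = len(sample)
    mean = statistics.mean(sample)
    se_mean = statistics.stdev(sample) / math.sqrt(n)  # SE Mean = StDev / sqrt(N)
    t_ratio = (mean - mu0) / se_mean                   # sample mean is T SEs from mu0
    ci = (mean - k * se_mean, mean + k * se_mean)      # k = 2 -> roughly 95%
    return mean, se_mean, t_ratio, ci
```

For exact coverage, k would come from the t table rather than the 68/95/99 rule of thumb.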
Decision Rule: p-value < α: reject Ho; significant.. p-value > α: do not reject Ho; not significant.. Two-tail test p-value = 2 * (one-tail test p-value). Risk: Risk loving: α > 0.05 (large α). Risk averse: α < 0.05 (small α) (afraid of Type I error) // Type I error: rejecting Ho when it's true (false positive) // Type II error: not rejecting Ho when it's false (false negative) // Statistical inference: obtain information about a population from information contained in a sample
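The decision rule can be sketched as a tiny helper (names are my own):

```python
def decide(p_value, alpha=0.05, two_tail=False):
    """Apply the decision rule: reject Ho when p < alpha.
    If p_value came from a one-tail test but a two-tail decision
    is wanted, double it first (two-tail p = 2 * one-tail p)."""
    p = 2 * p_value if two_tail else p_value
    return "reject Ho (significant)" if p < alpha else "do not reject Ho (not significant)"
```

Note how the same one-tail p = 0.03 rejects at α = 0.05 one-tailed but not two-tailed.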
Inferential Statistics: Does — provide meaningful results with a sample size of 5, provide estimates of population parameters w/ MOE, is tied to distributional assumptions // Does Not — need 30% of the underlying pop to be true. Standard Error of Mean (k is C.I.) = k * SEE.. F-test Multiple Regression: Ho: β1 = β2 = ... = βk = 0; Ha: βi ≠ 0 for at least one i from 1 to k.. F-Test One-Way ANOVA: Ho: μ1 = μ2 = μ3 = ... = μk; Ha: at least one μ is different.. FIT-Test of a Pop Regression Model: Ho: ρ² = 0; Ha: ρ² > 0.... F-Distribution: one-tail to the right and defined only for nonnegative values of F. Nonparametric (Sign and Wilcoxon): [statistics rely on ordinal (rank/sign) measures rather than interval measures]. Two-sided sign test: Ho: M = Mo; Ha: M ≠ Mo // One-sided sign test: Ho: M = Mo; Ha: M < Mo OR Ho: M = Mo; Ha: M > Mo. Parametric (everything else): the problem with using it when inappropriate is that you may not get the proper results, may not be able to trust your results, decision makers may act on the basis of improper findings, and others may justifiably criticize your findings. Wilcoxon Signed-Rank One-Sample Test: pop has symmetric dist. & tests for the mean (like the t-test) & for the median (like the sign test). *Ex Post* forecasting involves forecasting the most recent observations after withholding them from the estimation period. Ex-Ante forecasting uses an estimation period that includes the most recent observations. Conditional Forecasting: one or more explanatory variables must be guessed b/c values are not known with certainty for the period forecast. Unconditional Forecasting occurs when the future values of all explanatory variables are known with certainty.
Contingency Forecasts: several forecasts, one for each alternative scenario that could arise. Time Trend Models can be forecast unconditionally. In simple regression // F & t tests test the same hypothesis & yield identical p-values ... Regression and ANOVA: do not both use categorical independent variables; Regression & AOV both report mean square ratios in an AOV table. AOV uses replication, balanced design, and randomized design to avoid confounded test results // Treatment = the indep var // Factor = what the treatment actually is // Response = what you're looking for. To test the balance of an AOV, analyze the N's: if they are close it is nearly balanced; if the same, balanced. One difference between the sign test and the t-test is that the sign test does not assume a normal distribution... Limited Estimation Period: annual is limited (any time constraint). Forecasting Calculations (Error, Bias, MAE, RMSE, MAPE):
Error: EF = YF − YA. Bias: AVG of (Forecast − Actual). Mean Absolute Error: calculate errors, take absolute values, get AVG. Root Mean Square Error: square errors, take AVG, take the square root. Mean Absolute Percent Error: take absolute errors, divide by the actual values, times 100, then take the average. Prediction Interval: wider interval because indep var are not near the sample means // Also, XX is issued when indep var lie outside the range of the sample data (min/max) // Also, P.I.'s in regression output are valid for forecasting only if, for the period being forecast: values of indep var are known with certainty, the model does not change, indep var are not outside their range during the estimation period. Sign Test (median): will ask for greater (or less) than some #; add up the number of observations that fall into that category from the list, find that # on the chart and add up the P-values from that # up (or down) respectively. Extrapolation: basically a continuing pattern seen in the past. Multicollinearity: a potential problem that prevents variables from testing significant. ANOVA: Similarities — both linear, quant dep var, analyzed w/ ANOVA tables, use the F-test (can also use the P-value Decision Rule), analyze variability of the dependent var. Differences — ANOVA (exper) factors are categorical, Regress has quant indep var, testing means in ANOVA and slopes in regress, ANOVA loses more DF..
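The four forecast-accuracy measures can be sketched in a few lines (function and variable names are my own; illustrative data only):

```python
import math

def forecast_errors(forecast, actual):
    """Bias, MAE, RMSE, and MAPE for paired forecast/actual series."""
    errors = [f - a for f, a in zip(forecast, actual)]        # EF = YF - YA
    n = len(errors)
    bias = sum(errors) / n                                    # avg of (forecast - actual)
    mae = sum(abs(e) for e in errors) / n                     # avg of |errors|
    rmse = math.sqrt(sum(e * e for e in errors) / n)          # sqrt of avg squared error
    mape = sum(abs(e) / abs(a)                                # |error| / actual, * 100
               for e, a in zip(errors, actual)) / n * 100
    return bias, mae, rmse, mape
```

RMSE penalizes occasional large misses more heavily than MAE, which is why the risk profile described below favors minimizing RMSE.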
We are justified to use the t-test because the sample size is large enough to apply the central limit theorem (n > 4). We are justified to use the sign test because ALWAYS. Regression forecast intervals = prediction intervals. Classical regression: parameters β constant; least-squares estimators are unbiased, efficient, and have low S.D. Variation in dependent variable y explained by the model = R-sq. Sample mean how many Std Err from Hypo mean (test stat = sample mean minus hypothesized mean, all divided by SE Mean). StDev known = normal distrib. Null hypo on a Matched-Pair T-test = the two means are identical. A person who OCCASIONALLY makes large forecast errors regardless of the quality of forecasts the rest of the time should choose the model with the minimum RMSE. In SIMPLE REGRESSION (1 indep variable) models, the F-test and the T-test both test statistically equivalent hypotheses and yield identical p-values for each test. Other things equal, a regression model is more likely to test significant if: sample size n is large, R² is large, the α used for the test is large, there are few indep var in the model. CI for predicting a particular outcome of the dep var given the values for each indep var: the prediction interval for Y. Forecast intervals in regress models are a type of prediction interval. Sample sizes APPROXIMATELY EQUAL, assumptions for AOV: moderate departures from normality don't invalidate AND moderate differences among treatment STDevs don't invalidate. Possible problems with constructing a completely randomized design: may not compel participation, unethical to force.., if volunteers do it then self-selection bias may result, participation may require expensive compensation. AOV not rejecting the null hypo says the staffing hours are not significantly different from one week to the next. Increased Forecasting Error occurs when the hypothetical scenario DOES alter the response. Source of random disturbance in regression due to modeling error: including only the most important var. ARE assumptions of the classical regression model: parameters are constant, E(ε) is zero, ε is uncorr w/ each of the indep var, each ε is uncorr w/ every other ε. If all classical regres model assumptions are valid, least-squares estimators: are unbiased, have min STDev among all unbiased estimators, are efficient. Central Limit Theorem: in many regress situations the assumption of normally distributed ε is approx true for large samples. Omitting important variables is NEVER justified. Policy sales increase doesn't change B1.
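A minimal least-squares sketch for the one-variable case (names and data are my own; under the classical assumptions listed above these estimators are unbiased and efficient):

```python
def least_squares(x, y):
    """Least-squares estimates b0, b1 for y = b0 + b1*x + eps."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # b1 = sum of cross-deviations / sum of squared x-deviations
    b1 = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
          / sum((xi - mx) ** 2 for xi in x))
    b0 = my - b1 * mx  # regression line passes through (mean x, mean y)
    return b0, b1
```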
F-Test (Regression):
Source | DF  | Sum of Sq | Mean Sq           | F
Reg    | k   | SSr       | MSr = SSr/k       | F = MSr/MSe

F-Test (Experiment / One-Way ANOVA):
Source | DF  | Sum of Sq | Mean Sq           | F
Treat  | k-1 | SStr      | MStr = SStr/(k-1) | F = MStr/MSe
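The regression F ratio from the table can be sketched as (assuming MSe = SSE/(n − k − 1) with k predictors and n observations; names are my own):

```python
def f_ratio(ss_reg, ss_err, k, n):
    """Regression F statistic: MSr = SSr/k, MSe = SSE/(n - k - 1),
    F = MSr/MSe. Defined only for nonnegative values, one right tail."""
    ms_reg = ss_reg / k            # mean square for regression
    ms_err = ss_err / (n - k - 1)  # mean square error
    return ms_reg / ms_err
```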
