
1. T-statistic
The t-statistic is computed by dividing the estimated value of the parameter by its standard error. This statistic is a measure of the likelihood that the true value of the parameter is not zero: the larger the absolute value of t, the less likely it is that the true value of the parameter could be zero.
If the model fit is satisfactory (i.e., the R-squared value is sufficiently high), how should the t-statistic be interpreted?
1. The t-statistic can be a measure of the relative strength of prediction (more reliable than the regression coefficient alone, because it takes error into account) and of the generalisability of the findings beyond the sample.
2. A t-statistic greater than 1.96 with a significance level below 0.05 indicates that the independent variable is a significant predictor of the dependent variable, both within and beyond the sample.
3. The greater the t-statistic, the greater the relative influence of the independent variable on the dependent variable.
4. A t-statistic less than 1.96 with a significance level above 0.05 indicates that the independent variable is NOT a significant predictor of the dependent variable BEYOND the sample. However, if the model is a good fit with the sample, this does not detract from its value within the sample; it only affects generalisability outside the sample.
Assuming you mean the t-statistic from least-squares regression: the t-statistic is the regression coefficient (of a given independent variable) divided by its standard error. The standard error is the estimated standard deviation of the coefficient's sampling distribution, so a very large t-statistic implies that the coefficient could be estimated with a fair amount of precision. If the t-statistic is greater than about 2 (the coefficient is at least twice as large as its standard error), you would generally conclude that the variable in question has a significant impact on the dependent variable. High t-statistics (over 2) mean the variable is significant. What if it's REALLY high? Then something may be wrong; for example, the data points might be serially correlated, which makes standard errors too small and t-statistics too large.
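The division described above can be sketched for a simple one-predictor regression. This is a minimal sketch with made-up toy data (the x and y values are hypothetical, chosen to lie close to a line), using only the standard library:

```python
import math

# Toy (hypothetical) data lying near a straight line.
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [2.1, 3.9, 6.2, 7.8, 10.1, 11.9]

n = len(x)
mean_x = sum(x) / n
mean_y = sum(y) / n

# Least-squares slope and intercept.
sxx = sum((xi - mean_x) ** 2 for xi in x)
sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
slope = sxy / sxx
intercept = mean_y - slope * mean_x

# Residual standard error (n - 2 degrees of freedom: slope and intercept).
residuals = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
s = math.sqrt(sum(r ** 2 for r in residuals) / (n - 2))

# Standard error of the slope, and the t-statistic = coefficient / SE.
se_slope = s / math.sqrt(sxx)
t_stat = slope / se_slope

print(f"slope = {slope:.3f}, SE = {se_slope:.4f}, t = {t_stat:.1f}")
```

Because the toy points sit almost exactly on a line, the standard error is tiny and the t-statistic comes out far above 2, illustrating the "precisely estimated coefficient" case described above.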

2. R-Square
R-Square is the proportion of variation in the response variable that is explained by, or due to, the predictor variable. Values fall between 0 and 1, and a low value indicates that the predictor variable explains very little of the variation in the response variable.
R-Square is often described as the most important number in the output. It tells how well the regression line approximates the real data: how much of the output variable's variance is explained by the input variables' variance. Ideally we would like to see this at least 0.6 (60%) or 0.7 (70%).

R-Square is the proportion of variance in the dependent variable which can be predicted from the independent variable.

Note that this is an overall measure of the strength of association, and does not reflect the extent to which any particular independent variable is associated with the dependent variable. (Many factors cause variance in the dependent variable, so R-Square tells what percentage of the variance in the dependent variable is due to the independent variables.)
Predicted R-squared is used in regression analysis to indicate how well the model predicts responses for new observations, whereas R-squared indicates how well the model fits your data. For example, suppose you work for a financial consulting firm and are developing a model to predict future market conditions. The model you settle on looks promising because it has an R-squared of 87%. However, when you calculate the predicted R-squared you see that it drops to 52%. This may indicate an overfitted model and suggests that your model will not predict new observations nearly as well as it fits your existing data.
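The "proportion of variance explained" definition above reduces to R² = 1 − SS_res / SS_tot. A minimal sketch with made-up toy data (the x and y values are hypothetical):

```python
# Toy (hypothetical) data for a simple one-predictor regression.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.0, 4.1, 5.9, 8.2, 9.8]

n = len(x)
mean_x = sum(x) / n
mean_y = sum(y) / n

# Least-squares slope and intercept.
slope = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) \
        / sum((xi - mean_x) ** 2 for xi in x)
intercept = mean_y - slope * mean_x

# Residual sum of squares vs. total sum of squares around the mean.
ss_res = sum((yi - (intercept + slope * xi)) ** 2 for xi, yi in zip(x, y))
ss_tot = sum((yi - mean_y) ** 2 for yi in y)

# R-squared: share of the variance in y accounted for by the line.
r_squared = 1 - ss_res / ss_tot

print(f"R-squared = {r_squared:.3f}")
```

With these near-linear toy points, R-squared lands close to 1; randomly scattered points would instead give a value near 0.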

3. Adjusted R-Square
As predictors are added to the model, each predictor will explain some of the variance in the dependent variable simply due to chance. One could continue to add predictors to the model, and R-square would continue to improve, although some of this increase would be due only to chance variation in that particular sample. The adjusted R-square attempts to yield a more honest estimate of the R-square for the population. In this example the value of R-square was .542, while the value of adjusted R-square was .532; there isn't much difference because we are dealing with only one variable. When the number of observations is small and the number of predictors is large, there will be a much greater difference between R-square and adjusted R-square. By contrast, when the number of observations is very large compared to the number of predictors, R-square and adjusted R-square will be much closer.
4. P-value
The p-value is a probability, usually expressed as a percentage. Strictly, it tells you how likely it would be to obtain a coefficient like the one observed if there were no real relationship, i.e., if the true coefficient were zero. A p-value of .05 means that there is only a 5% chance of such a coefficient emerging by chance alone, which is taken as evidence that the relationship is real. It is generally accepted practice to consider variables with a p-value of less than .1 as significant, though the only basis for this cutoff is convention. A p-value is also associated with F. These values are used to answer the question "Do the independent variables reliably predict the dependent variable?". The p-value is compared to your alpha level (typically 0.05) and, if smaller, you can conclude "Yes, the independent variables reliably predict the dependent variable".
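The penalty that adjusted R-square applies for extra predictors has a closed form: adjusted R² = 1 − (1 − R²)(n − 1)/(n − p − 1) for n observations and p predictors. A sketch (the helper name `adjusted_r2` and the example numbers are my own, not from the text):

```python
def adjusted_r2(r2, n, p):
    """Adjusted R-squared for n observations and p predictors:
    1 - (1 - R^2) * (n - 1) / (n - p - 1)."""
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

# Small sample, many predictors: a large penalty, as described above.
print(adjusted_r2(0.80, n=20, p=10))   # well below 0.80

# Large sample, few predictors: R-square barely changes.
print(adjusted_r2(0.80, n=1000, p=2))  # very close to 0.80
```

This mirrors the text: the gap between R-square and adjusted R-square is large when observations are few and predictors are many, and nearly vanishes when observations greatly outnumber predictors.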

If the p-value were greater than 0.05, you would say that the group of independent variables does not show a statistically significant relationship with the dependent variable, or that the group of independent variables does not reliably predict the dependent variable.
P-value of each coefficient and the Y-intercept: the p-values of each of these indicate how likely it is that the estimate could have arisen by chance alone. The lower the p-value, the stronger the evidence that that coefficient or Y-intercept reflects a real relationship. For example, a p-value of 0.016 for a regression coefficient indicates that there is only a 1.6% chance of obtaining such a result by chance alone.
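The 1.96 threshold quoted earlier and the 0.05 cutoff here are two views of the same calculation. Exact p-values use the t-distribution, but for large samples the normal approximation is close; a minimal sketch using only the standard library (the helper name `two_sided_p_normal` is my own):

```python
import math

def two_sided_p_normal(t):
    """Two-sided p-value for a t-statistic, using the normal
    approximation (adequate when degrees of freedom are large)."""
    phi = 0.5 * (1 + math.erf(abs(t) / math.sqrt(2)))  # standard normal CDF
    return 2 * (1 - phi)

print(two_sided_p_normal(1.96))  # approximately 0.05
print(two_sided_p_normal(2.58))  # approximately 0.01
```

This is why a t-statistic of 1.96 and a p-value of 0.05 appear together throughout the text: each implies the other (approximately, for large samples).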

5. F-Value
The F-value tests whether the model as a whole is significant. As with the p-value, the lower the Significance F value, the greater the chance that the relationships in the model are real.
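The overall F-statistic can be recovered from R-square alone: F = (R²/p) / ((1 − R²)/(n − p − 1)) for n observations and p predictors. A sketch (the helper name `f_statistic` and the example numbers are my own, not from the text):

```python
def f_statistic(r2, n, p):
    """Overall F-statistic of a regression, from its R-squared,
    n observations, and p predictors:
    F = (R^2 / p) / ((1 - R^2) / (n - p - 1))."""
    return (r2 / p) / ((1 - r2) / (n - p - 1))

# Hypothetical example: R-squared of 0.6, 50 observations, 3 predictors.
print(f_statistic(0.6, n=50, p=3))  # approximately 23
```

A large F (and hence a small Significance F) says the predictors jointly explain far more variance than chance would; this is the same question the F-associated p-value in the previous section answers.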
