
1) One-Sample T-Test:

The one-sample t-test takes a single sample and compares it with a reference value for the whole population. It is used when we want to know whether our sample comes from a particular population but we do not have full population information available to us. For instance, we may want to know if a particular sample of college students is similar to or different from college students in general. The one-sample t-test is used only for tests of the sample mean. Thus, our hypothesis tests whether the average of our sample (M) suggests that our students come from a population with a known mean (μ) or whether it comes from a different population.
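For illustration, here is a minimal sketch of this test in Python using scipy.stats.ttest_1samp; the sample scores and the hypothesized population mean (mu = 100) are invented values.

```python
# One-sample t-test: does this sample plausibly come from a population
# with mean mu? Sample values and mu are hypothetical.
import numpy as np
from scipy import stats

sample = np.array([102, 98, 110, 105, 95, 101, 99, 107])  # hypothetical scores
mu = 100                                                   # known population mean under H0

t_stat, p_value = stats.ttest_1samp(sample, popmean=mu)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
# A small p-value suggests the sample mean differs from the population mean mu.
```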
2) Independent-Samples T Test:
The Independent-Samples T Test procedure compares means for two groups of
cases. Ideally, for this test, the subjects should be randomly assigned to two groups,
so that any difference in response is due to the treatment (or lack of treatment) and
not to other factors. This is not the case if you compare average income for males and
females. A person is not randomly assigned to be a male or female. In such situations,
you should ensure that differences in other factors are not masking or enhancing a
significant difference in means. Differences in average income may be influenced by
factors such as education (and not by sex alone).
Example:
Patients with high blood pressure are randomly assigned to a placebo group and a
treatment group. The placebo subjects receive an inactive pill, and the treatment
subjects receive a new drug that is expected to lower blood pressure. After the
subjects are treated for two months, the two-sample t test is used to compare the
average blood pressures for the placebo group and the treatment group. Each patient
is measured once and belongs to one group.

Statistics:
For each variable: sample size, mean, standard deviation, and standard error of the
mean. For the difference in means: mean, standard error, and confidence interval (you
can specify the confidence level). Tests: Levene's test for equality of variances and
both pooled-variances and separate-variances t tests for equality of means.
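A minimal sketch of this procedure in Python follows, using scipy's Levene test to choose between the pooled-variances and separate-variances (Welch) t-tests; the blood-pressure readings are invented values.

```python
# Independent-samples t-test comparing a placebo group and a treatment group.
import numpy as np
from scipy import stats

placebo   = np.array([148, 152, 145, 150, 155, 149])  # hypothetical readings
treatment = np.array([138, 142, 135, 140, 137, 141])

# Levene's test for equality of variances
w, p_levene = stats.levene(placebo, treatment)

# If the variances look equal, use the pooled-variances t-test;
# otherwise use the separate-variances (Welch) t-test.
equal_var = p_levene > 0.05
t_stat, p_value = stats.ttest_ind(placebo, treatment, equal_var=equal_var)
print(f"Levene p = {p_levene:.3f}, t = {t_stat:.3f}, p = {p_value:.3f}")
```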
3) Paired-Samples T Test:
The Paired-Samples T Test procedure compares the means of two variables for a
single group. The procedure computes the differences between values of the two
variables for each case and tests whether the average differs from 0.
Example:
In a study on high blood pressure, all patients are measured at the beginning of the
study, given a treatment, and measured again. Thus, each subject has two measures,
often called before and after measures. An alternative design for which this test is
used is a matched-pairs or case-control study, in which each record in the data file
contains the response for the patient and also for his or her matched control subject.
In a blood pressure study, patients and controls might be matched by age (a 75-year-
old patient with a 75-year-old control group member).
Statistics:
For each variable: mean, sample size, standard deviation, and standard error of the
mean. For each pair of variables: correlation, average difference in means, t test, and
confidence interval for mean difference (you can specify the confidence level).
Standard deviation and standard error of the mean difference.
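As a sketch, the same before/after design can be tested in Python with scipy.stats.ttest_rel; the patients' readings below are invented values.

```python
# Paired-samples t-test: before and after blood-pressure readings
# for the same (hypothetical) patients.
import numpy as np
from scipy import stats

before = np.array([150, 160, 145, 155, 148, 162])  # invented values
after  = np.array([142, 155, 140, 150, 147, 150])

t_stat, p_value = stats.ttest_rel(before, after)
print(f"mean difference = {(before - after).mean():.2f}")
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
# ttest_rel works on the per-patient differences and tests whether
# their average differs from 0.
```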



4) One-Way ANOVA:
In statistics, one-way analysis of variance (abbreviated one-way ANOVA) is a
technique used to compare means of two or more samples (using the F distribution).
This technique can be used only for numerical data.
The ANOVA tests the null hypothesis that samples in two or more groups are drawn
from populations with the same mean values. To do this, two estimates are made of the
population variance. These estimates rely on various assumptions. The ANOVA
produces an F-statistic, the ratio of the variance calculated among the means to the
variance within the samples. If the group means are drawn from populations with the
same mean values, the variance between the group means should be lower than the
variance of the samples, following the central limit theorem. A higher ratio therefore
implies that the samples were drawn from populations with different mean values.
Typically, however, the one-way ANOVA is used to test for differences among at least
three groups, since the two-group case can be covered by a t-test (Gosset, 1908).
When there are only two means to compare, the t-test and the F-test are equivalent; the relation between ANOVA and t is given by F = t².
Statistics:
For each group: number of cases, mean, standard deviation, standard error of the
mean, minimum, maximum, and 95% confidence interval for the mean. Levene's test for
homogeneity of variance, analysis-of-variance table and robust tests of the equality of
means for each dependent variable, user-specified a priori contrasts, and post hoc
range tests and multiple comparisons.
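For illustration, a minimal one-way ANOVA across three hypothetical groups can be run in Python with scipy.stats.f_oneway, with Levene's test as the homogeneity-of-variance check; all values are invented.

```python
# One-way ANOVA: does at least one group mean differ from the others?
import numpy as np
from scipy import stats

group_a = np.array([23, 25, 21, 22, 24])
group_b = np.array([30, 28, 31, 29, 27])
group_c = np.array([26, 24, 27, 25, 28])

w, p_levene = stats.levene(group_a, group_b, group_c)   # homogeneity of variance
f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"Levene p = {p_levene:.3f}, F = {f_stat:.3f}, p = {p_value:.3f}")
# A large F (variance among the group means relative to variance within
# the groups) gives a small p-value, suggesting the population means differ.
```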
5) Repeated Measures ANOVA:
As with any ANOVA, repeated measures ANOVA tests the equality of means. However,
repeated measures ANOVA is used when all members of a random sample are
measured under a number of different conditions. As the sample is exposed to each
condition in turn, the measurement of the dependent variable is repeated. Using a
standard ANOVA in this case is not appropriate because it fails to model the correlation
between the repeated measures: the data violate the ANOVA assumption of
independence. Keep in mind that some ANOVA designs combine repeated measures
factors and non-repeated factors. If any repeated factor is present, then repeated
measures ANOVA should be used.
This approach is used for several reasons. First, some research hypotheses require
repeated measures. Longitudinal research, for example, measures each sample
member at each of several ages. In this case, age would be a repeated factor. Second,
in cases where there is a great deal of variation between sample members, error
variance estimates from standard ANOVAs are large. Repeated measures of each sample member provide a way of accounting for this variance, thus reducing error
variance. Third, when sample members are difficult to recruit, repeated measures
designs are economical because each member is measured under all conditions.
Repeated measures ANOVA can also be used when sample members have been
matched according to some important characteristic. Here, matched sets of sample
members are generated, with each set having the same number of members and each
member of a set being exposed to a different random level of a factor or set of factors.
When sample members are matched, measurements across conditions are treated like
repeated measures in repeated measures ANOVA.
For example, suppose that you select a group of depressed subjects, measure their
levels of depression, and then match subjects into pairs having similar depression
levels. One subject from each matching pair is then given a treatment for depression,
and afterwards the level of depression of the entire sample is measured again. ANOVA
comparisons between the two groups for this final measure would be most efficient
using repeated measures ANOVA. In this case, each matched pair would be treated as
a single sample member.
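A minimal sketch of a repeated measures ANOVA in Python is shown below, using statsmodels' AnovaRM on long-format data (one row per subject per condition); the subjects, conditions, and scores are invented.

```python
# Repeated measures ANOVA: 4 hypothetical subjects each measured under
# 3 conditions, so "condition" is a within-subjects (repeated) factor.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

data = pd.DataFrame({
    "subject":   [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
    "condition": ["a", "b", "c"] * 4,
    "score":     [5, 7, 9, 4, 6, 8, 6, 8, 9, 5, 6, 10],
})

# AnovaRM models the repeated factor within each subject, so the
# correlation between a subject's repeated measures is respected
# (unlike a standard between-subjects ANOVA).
result = AnovaRM(data, depvar="score", subject="subject",
                 within=["condition"]).fit()
print(result)
```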


T-Statistics:
A value computed from sample data and used to decide whether a hypothesis will be accepted or rejected. If the test statistic falls too far from the value expected under the original hypothesis, that hypothesis is rejected. Conversely, when the test statistic is close to the expected value, the hypothesis will likely be accepted.
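As a sketch of how the statistic itself is formed, the one-sample t statistic is the sample mean minus the hypothesized mean, divided by the standard error of the mean; the data and the hypothesized mean (mu0 = 50) below are invented.

```python
# t = (sample mean - hypothesized mean) / (standard error of the mean)
import numpy as np
from scipy import stats

x = np.array([52.1, 49.8, 53.4, 51.0, 50.6, 54.2])
mu0 = 50.0

t = (x.mean() - mu0) / (x.std(ddof=1) / np.sqrt(len(x)))
t_crit = stats.t.ppf(0.975, df=len(x) - 1)   # two-sided 5% critical value
print(f"t = {t:.3f}, critical value = {t_crit:.3f}")
# If |t| exceeds the critical value, the statistic is "too far" from the
# hypothesized mean and the hypothesis is rejected.
```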
F-Statistics:
For analyzing the quality of fits obtained with different parameter values, the variance of the fit (chi-square) is a very useful statistical quantity. The ratio of the chi-squares of two fits follows a Fisher (F) distribution. This distribution can therefore be used to judge whether a given variance increase (e.g., after a change of a parameter value) has a magnitude that could occur just by statistical error in the data (assuming normally distributed noise), or whether the increase is significant.
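A hedged sketch of this comparison follows: the two chi-square values and their degrees of freedom are invented, and the ratio of the reduced chi-squares is judged against an F distribution.

```python
# Compare two fits via the ratio of their reduced chi-squares.
from scipy import stats

chi2_a, df_a = 42.0, 30   # fit with the original parameter values (hypothetical)
chi2_b, df_b = 58.0, 30   # fit after changing a parameter (hypothetical)

F = (chi2_b / df_b) / (chi2_a / df_a)
p_value = stats.f.sf(F, df_b, df_a)   # chance of an increase this large from noise alone
print(f"F = {F:.3f}, p = {p_value:.3f}")
# A small p-value suggests the variance increase is significant rather
# than just statistical error in the data.
```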
R-squared:
In statistics, the coefficient of determination R² is the proportion of variability in a data set that is accounted for by a statistical model. In this definition, the term "variability" is defined as the sum of squares. There are equivalent expressions for R² based on an analysis of variance decomposition.
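A minimal sketch of R² computed from this sum-of-squares definition is given below, using a simple least-squares line fit; the x and y values are invented.

```python
# R^2 = 1 - (residual sum of squares) / (total sum of squares)
import numpy as np

x = np.array([1, 2, 3, 4, 5, 6], dtype=float)
y = np.array([2.1, 4.3, 5.9, 8.2, 9.8, 12.1])

slope, intercept = np.polyfit(x, y, 1)   # simple linear model
y_hat = slope * x + intercept

ss_res = np.sum((y - y_hat) ** 2)        # residual sum of squares
ss_tot = np.sum((y - y.mean()) ** 2)     # total sum of squares
r_squared = 1 - ss_res / ss_tot
print(f"R^2 = {r_squared:.4f}")
# R^2 is the share of the total variability in y accounted for by the model.
```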
