Research quality is heavily dependent on the individual skills of the researcher and is more
easily influenced by the researcher's personal biases and idiosyncrasies.
Rigor is more difficult to maintain, assess, and demonstrate.
The volume of data makes analysis and interpretation time consuming.
It is sometimes not as well understood and accepted as quantitative research within the
scientific community.
The researcher's presence during data gathering, which is often unavoidable in qualitative
research, can affect the subjects' responses.
Issues of anonymity and confidentiality can present problems when presenting findings.
Findings can be more difficult and time consuming to characterize in a visual way.
PMCID: PMC2987281
PMID: 21179252
they are useful to obtain detailed information about personal and group feelings,
perceptions and opinions
they can save time and money compared to individual interviews
they can provide a broader range of information
they offer the opportunity to seek clarification
they provide useful material, e.g. quotes, for public relations publications and presentations
there can be disagreements and irrelevant discussion which distract from the main focus
they can be hard to control and manage
they can be tricky to analyse
it can be difficult to encourage a range of people to participate
some participants may find a focus group situation intimidating or off-putting
participants may feel under pressure to agree with the dominant view
as they are self-selecting, they may not be representative of non-users.
McNicol, Sarah (2004), "The evalued toolkit: a framework for the qualitative evaluation of electronic
information services", VINE, The Journal of Information & Knowledge Management Systems 34(4), pp.
172-175.
Questions for student focus groups
https://www.dmu.ac.uk/documents/.../questions-for-student-focus-groups.pdf
It will be easy to get lost in what follows. Here is the list of steps that we will follow in the
example below:
1. Calculate the sample means for each of our samples as well as the mean for all of
the sample data.
2. Calculate the sum of squares of error. Here within each sample, we square the
deviation of each data value from the sample mean. The sum of all of the squared
deviations is the sum of squares of error, abbreviated SSE.
3. Calculate the sum of squares of treatment. We square the deviation of each
sample mean from the overall mean, multiply each of these squared deviations by
the number of data values in the corresponding sample, and add the results. This
number is the sum of squares of treatment, abbreviated SST.
4. Calculate the degrees of freedom. The overall number of degrees of freedom is
one less than the total number of data points in our sample, or n - 1. The number
of degrees of freedom of treatment is one less than the number of samples used, or
m - 1. The number of degrees of freedom of error is the total number of data
points, minus the number of samples, or n - m.
5. Calculate the mean square of error. This is denoted MSE = SSE/(n - m).
6. Calculate the mean square of treatment. This is denoted MST = SST/(m - 1).
7. Calculate the F statistic. This is the ratio of the two mean squares that we
calculated. So F = MST/MSE.
Software does all of this quite easily, but it is good to know what is happening behind the scenes.
In what follows we work out an example of ANOVA following the steps as listed above.
Sample from population #1: 12, 9, 12. This has a sample mean of 11.
Sample from population #2: 7, 10, 13. This has a sample mean of 10.
Sample from population #3: 5, 8, 11. This has a sample mean of 8.
Sample from population #4: 5, 8, 8. This has a sample mean of 7.
The mean of all of the data is 9.
Sum of Squares of Error
For the sample from population #1: (12 – 11)² + (9 – 11)² + (12 – 11)² = 6.
For the sample from population #2: (7 – 10)² + (10 – 10)² + (13 – 10)² = 18.
For the sample from population #3: (5 – 8)² + (8 – 8)² + (11 – 8)² = 18.
For the sample from population #4: (5 – 7)² + (8 – 7)² + (8 – 7)² = 6.
We then add all of these sums of squared deviations and obtain SSE = 6 + 18 + 18 + 6 = 48.
Sum of Squares of Treatment
Next we square the deviation of each sample mean from the overall mean of 9 and multiply each
by the sample size of 3: SST = 3[(11 – 9)² + (10 – 9)² + (8 – 9)² + (7 – 9)²] = 3(4 + 1 + 1 + 4) = 30.
Degrees of Freedom
Before proceeding to the next step, we need the degrees of freedom. There are 12 data values and
four samples. Thus the number of degrees of freedom of treatment is 4 – 1 = 3. The number of
degrees of freedom of error is 12 – 4 = 8.
Mean Squares
We now divide each sum of squares by the appropriate number of degrees of freedom to obtain
the mean squares: MSE = SSE/(n – m) = 48/8 = 6 and MST = SST/(m – 1) = 30/3 = 10. The F
statistic is then F = MST/MSE = 10/6 ≈ 1.67.
Tables of values or software can be used to determine how likely it is to obtain a value of the F-
statistic as extreme as this value by chance alone.
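The worked example above can be reproduced in a few lines of Python. This is a sketch using only the standard library; the sample data are taken directly from the text, and the variable names are our own.

```python
# Worked one-way ANOVA example, following the seven steps listed above.
samples = [
    [12, 9, 12],   # population 1, sample mean 11
    [7, 10, 13],   # population 2, sample mean 10
    [5, 8, 11],    # population 3, sample mean 8
    [5, 8, 8],     # population 4, sample mean 7
]

n = sum(len(s) for s in samples)                # total data points (12)
m = len(samples)                                # number of samples (4)
grand_mean = sum(sum(s) for s in samples) / n   # overall mean (9)

# Step 2: sum of squares of error (within each sample).
sse = sum(
    (x - sum(s) / len(s)) ** 2
    for s in samples
    for x in s
)

# Step 3: sum of squares of treatment (between sample means).
sst = sum(len(s) * (sum(s) / len(s) - grand_mean) ** 2 for s in samples)

# Steps 4-7: degrees of freedom, mean squares, F statistic.
mse = sse / (n - m)    # 48 / 8 = 6
mst = sst / (m - 1)    # 30 / 3 = 10
f_stat = mst / mse     # 10 / 6, about 1.67

print(f"SSE = {sse}, SST = {sst}, F = {f_stat:.3f}")
```

Running this reproduces every intermediate quantity in the worked example, which is a useful check on the hand calculations.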
The dependent t-test (also called the paired t-test or paired-samples t-test) compares
the means of two related groups to determine whether there is a statistically significant
difference between these means.
You need one dependent variable that is measured on an interval or ratio scale (see
our Types of Variable guide if you need clarification). You also need one categorical
variable that has only two related groups.
What is meant by "related groups"?
This is used to compare the means of two variables for a single group. The procedure computes
the differences between values of the two variables for each case and tests whether the average
differs from zero. For example, you may be interested in evaluating the effectiveness of a
mnemonic method on memory recall. Subjects are given a passage from a book to read; a few
days later, they are asked to reproduce the passage and the number of words recalled is noted.
Subjects then attend a mnemonic training session, after which they read and reproduce the
passage again, and the number of words is noted once more. Thus each subject has two
measures, often called before (or pre) and after (or post) measures.
An alternative design for which this test is used is a matched-pairs or case-control study. To
illustrate, consider a blood pressure study in which each treatment patient is matched with a
control subject by age: a 64-year-old patient is paired with a 64-year-old control group member.
Each record in the data file then contains the responses of the patient and of his or her matched
control subject.
Write down the null and alternative hypotheses for the Paired-samples t test (dependent t test):
Null Hypothesis (Ho): There is no difference in the average number of words recalled before and
after training.
T & P: The Tweedledee and Tweedledum of a T-test
T and P are inextricably linked. They go arm in arm, like Tweedledee and Tweedledum. Here's why.
When you perform a t-test, you're usually trying to find evidence of a significant difference between
population means (2-sample t) or between the population mean and a hypothesized value (1-sample
t). The t-value measures the size of the difference relative to the variation in your sample data. Put
another way, T is simply the calculated difference represented in units of standard error. The greater
the magnitude of T, the greater the evidence against the null hypothesis. This means there is greater
evidence that there is a significant difference. The closer T is to 0, the more likely there isn't a
significant difference.
Remember, the t-value in your output is calculated from only one sample from the entire population.
If you took repeated random samples of data from the same population, you'd get slightly different
t-values each time, due to random sampling error (which is not a mistake of any kind – it's just
the random variation expected in the data).
How different could you expect the t-values from many random samples from the same population to
be? And how does the t-value from your sample data compare to those expected t-values?
Alternative Hypothesis (H1): There is a difference in the average number of words recalled
before and after training.
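Under the hypotheses above, the paired test can be sketched in Python. The before/after word counts below are invented for illustration, and the 2.262 critical value is the standard two-tailed 5% cutoff for 9 degrees of freedom.

```python
import statistics

# Hypothetical data (for illustration only): words recalled by 10 subjects
# before and after mnemonic training.
before = [20, 18, 25, 22, 19, 24, 21, 17, 23, 20]
after = [24, 20, 27, 26, 20, 28, 22, 20, 25, 24]

# The paired t-test works on the per-subject differences.
diffs = [a - b for a, b in zip(after, before)]
n = len(diffs)
mean_d = statistics.fmean(diffs)
se_d = statistics.stdev(diffs) / n ** 0.5

t = mean_d / se_d    # mean difference in units of its standard error
df = n - 1           # degrees of freedom = 9

# Two-tailed critical value for df = 9 at the 5% level is about 2.262.
print(f"t = {t:.3f}, df = {df}, reject H0: {abs(t) > 2.262}")
```

Note that the test reduces a two-column data set to a single column of differences, which is why it only needs one set of degrees of freedom.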
I developed an Excel template that calculates the independent two-sample t test. It also writes a
summary report based on the p-value. This spreadsheet can handle up to 10,000 cases.
Meaning:
The independent t test allows researchers to evaluate the mean difference between two
populations using the data from two separate samples.
Purpose:
The general purpose of the independent t test is to determine whether the sample mean difference
obtained is a real difference between the two populations or simply the result of sampling error.
Examples:
For example, an independent t test could compare the exam scores of two separate groups of
students taught with different methods, where each student belongs to only one group.
Interpretation:
Check the p-value. It is the lowest level of significance at which we can reject the null hypothesis,
and it tells researchers how strong the evidence is against that hypothesis.
Suppose the p-value is 0.39, i.e. greater than 0.05 (5 percent). Then we fail to reject the null
hypothesis at the 5% significance level, which means there is no statistically significant mean
difference between the two groups.
Suppose the p-value is 0.029, i.e. less than 0.05 (5 percent). Then we reject the null hypothesis at
the 5% significance level, which means there is a statistically significant mean difference between
the two groups.
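The independent t test can be sketched the same way. The two groups of scores below are hypothetical, the calculation uses the pooled (equal-variance) form of the test, and the 2.145 critical value is the standard two-tailed 5% cutoff for 14 degrees of freedom.

```python
import statistics

# Hypothetical data (for illustration only): scores from two separate groups.
group1 = [23, 25, 28, 30, 22, 27, 26, 24]
group2 = [19, 21, 24, 20, 23, 18, 22, 21]

n1, n2 = len(group1), len(group2)
mean1, mean2 = statistics.fmean(group1), statistics.fmean(group2)

# Pooled variance assumes both populations have equal variance.
sp2 = ((n1 - 1) * statistics.variance(group1)
       + (n2 - 1) * statistics.variance(group2)) / (n1 + n2 - 2)

# t = difference between sample means in units of its standard error.
t = (mean1 - mean2) / (sp2 * (1 / n1 + 1 / n2)) ** 0.5
df = n1 + n2 - 2    # degrees of freedom = 14

# Two-tailed critical value for df = 14 at the 5% level is about 2.145.
print(f"t = {t:.3f}, df = {df}, significant at 5%: {abs(t) > 2.145}")
```

Unlike the paired test, each subject appears in only one group here, so the degrees of freedom come from both sample sizes.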