
Dr. Aqeel Khan
draqeelkhan@gmail.com
A VARIABLE is a measurable
characteristic that varies.
It may change from group to
group, person to person, or even
within one person over time.
1. Dependent Variable: the variable being affected by the independent variable.
2. Independent Variable: the "input" that is manipulated; the dependent variable is the "output".
3. Extraneous Variable: factors in the research environment which may have an effect on the dependent variable(s) but which are not controlled.
Extraneous variables are dangerous. They may damage a study's validity.
If they cannot be controlled, extraneous variables
must at least be taken into consideration when
interpreting results.
Supporting theory

A theoretical framework guides your research

Discuss the theories that your study is based on or the theories you will be testing.
Draw a simple diagram to show how the variables in your study are related.

A sample is a smaller (but hopefully
representative) collection of units from a
population used to determine truths
about that population (Field, 2005).
It is the process of obtaining information
about an entire population by examining
part of it.
A sample is a set of cases that is drawn from a larger pool and used to make generalizations about the population.
Using data to say something (make an inference), with confidence, about a whole (population) based on the study of only a few (a sample).
To whom do you want to generalize
your results?
Doctors
School children
Indians
Women aged 15-45 years
Is it permitted?
Is it practicable?
Is it a sensitive area (difficult and problematic)?
There are 2 types of sampling:
Probability sampling
Non-probability sampling
Probability sampling involves random selection.
A sample will be representative of the
population from which it is selected if each
member of the population has an equal
chance (probability) of being selected.
Probability samples are more accurate than
non-probability samples
Such designs are also referred to as 'self-
weighting' because all sampled units are
given the same weight.
Non-probability sampling does not involve random selection.
It is any sampling method where some elements of the population have no chance of selection, or where the probability of selection cannot be accurately determined.
Non-probability Sampling
Convenience
Purposive

Probability Sampling
Simple random
Stratified random
Systematic random
Cluster/area random
Multi-stage random
1. Convenience sampling
Selecting units because of their availability or
easy access.
Selection based on one's convenience, by accident, or in a haphazard way.
Common in popular surveys of public views or opinions (e.g. by-the-road-side interviews).
Serious bias: only one group is included.
2. Purposive sampling
Sampling with a purpose in mind.
Sampling involves pre-determined criteria, e.g. special education students.
Simple Random Sampling
Each individual is chosen at random and entirely by chance, such that each individual has the same probability of being chosen at any stage during the sampling process, without any rule.
Simplest, fastest, cheapest
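As a rough illustration (not from the original slides), a simple random sample can be drawn in Python with the standard library; the population list and sample size below are invented for the example.

import random

# Hypothetical sampling frame: every member has an equal chance of selection
population = [f"student_{i}" for i in range(1, 501)]

# Draw 50 units at random, without replacement
sample = random.sample(population, k=50)
print(sample[:5])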
Systematic Random Sampling
A person is chosen according to a fixed rule or interval.
If we have a population of 10 persons and we want to choose 3, then we list all 10 persons and choose every 3rd one.
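A minimal sketch of systematic selection, assuming an ordered list of the population; the names, sample size, and random starting point are made up for illustration.

import random

population = [f"person_{i}" for i in range(1, 101)]  # hypothetical list of 100 people
n = 20                                               # desired sample size
k = len(population) // n                             # sampling interval: every k-th person

start = random.randrange(k)    # random starting point within the first interval
sample = population[start::k]  # take every k-th person from that start
print(len(sample), sample[:3])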
Stratified Random Sampling
[Diagram: a population of university students divided into strata by Age, Ethnicity, Gender]
Break the population into meaningful strata and take a random sample from each stratum.
Can be proportionate or disproportionate within strata.
When:
sub-group inferences are needed
the sample must represent not only the overall population, but also key subgroups of the population
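A sketch of proportionate stratified sampling, assuming each unit in the frame carries a stratum label; the frame and the 10% sampling fraction below are invented.

import random
from collections import defaultdict

# Hypothetical frame: (unit id, stratum) pairs, e.g. stratified by ethnicity
frame = [(i, random.choice(["Malay", "Chinese", "Indian"])) for i in range(1000)]

# Group units by stratum
strata = defaultdict(list)
for unit, stratum in frame:
    strata[stratum].append(unit)

# Proportionate allocation: draw the same fraction (10%) from every stratum
sample = []
for stratum, units in strata.items():
    n_stratum = max(1, round(0.10 * len(units)))
    sample.extend(random.sample(units, n_stratum))

print(len(sample))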
Cluster/Area Random Sampling
A process to choose our sample according to certain sections (clusters).
For example, in Malaysia we have the northern, middle, eastern, and southern regions.
Used when sampling involves analysis of geographic units.
Used when sampling involves extensive travelling.
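A sketch of one-stage cluster sampling, assuming the population is already grouped into geographic clusters; the regions and counts below are invented for illustration.

import random

# Hypothetical clusters: region -> list of households
clusters = {
    "Northern": [f"N{i}" for i in range(200)],
    "Central":  [f"C{i}" for i in range(300)],
    "Eastern":  [f"E{i}" for i in range(150)],
    "Southern": [f"S{i}" for i in range(250)],
}

# Randomly select 2 whole clusters, then take every unit in the chosen clusters
chosen = random.sample(list(clusters), k=2)
sample = [unit for region in chosen for unit in clusters[region]]
print(chosen, len(sample))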
Multi-stage Random Sampling
Three-stage stratified: locality first, then ethnic group, then family status.
[Diagram: Stage 1: Locality (Urban, Suburban, Rural); Stage 2: Ethnic group (C, M, I); Stage 3: Family status (MD, UD)]
"consistency" or "repeatability" of your
measures.
Consistency over time (test-retest)
The extent to which an instrument yields the same
score when given to a respondent on two different
occasions.
When the same instrument is given to the same group at 2 different times, the results of time 1 are compared with the results of time 2 in order to determine how well the instrument consistently gets the same results.
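A minimal sketch of test-retest reliability as the correlation between the two administrations; the scores are fabricated, and statistics.correlation requires Python 3.10+.

from statistics import correlation  # Python 3.10+

# Hypothetical scores of the same respondents at time 1 and time 2
time1 = [12, 15, 9, 20, 17, 14, 11, 18]
time2 = [13, 14, 10, 19, 18, 15, 10, 17]

# Test-retest reliability: correlation of the two occasions
r = correlation(time1, time2)
print(f"test-retest r = {r:.2f}")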
Split-half reliability
Divide the instrument items into two halves; the scores are compared to determine the consistency of the instrument's scores.

(When a group of people complete an instrument, all of the items on the instrument that are measuring the same construct are split in half to form two sets of items; the two sets of items are then compared to each other to see how consistently they measure the construct.)
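A sketch of split-half reliability with invented item responses: items are split into odd and even halves, the half scores are correlated, and the Spearman-Brown formula adjusts the result to full length.

from statistics import correlation  # Python 3.10+

# Hypothetical item responses: one row per respondent, one column per item
responses = [
    [4, 3, 4, 5, 3, 4, 4, 5],
    [2, 2, 3, 2, 1, 2, 3, 2],
    [5, 4, 5, 4, 5, 5, 4, 4],
    [3, 3, 2, 3, 3, 2, 3, 3],
    [1, 2, 1, 2, 2, 1, 1, 2],
]

# Split the items into two halves (odd-numbered vs even-numbered items)
half1 = [sum(row[0::2]) for row in responses]
half2 = [sum(row[1::2]) for row in responses]

r_half = correlation(half1, half2)
# Spearman-Brown correction estimates reliability of the full-length instrument
r_full = 2 * r_half / (1 + r_half)
print(f"half-correlation = {r_half:.2f}, corrected = {r_full:.2f}")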
Inter-rater reliability
When the decisions from different raters (researchers/experts) are compared to each other to see how consistent the raters' decisions are.
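A sketch of the simplest inter-rater check, percentage agreement between two raters on the same cases; the ratings are invented, and Cohen's kappa would additionally correct for chance agreement.

# Hypothetical decisions from two raters on the same 10 cases
rater_a = ["yes", "no", "yes", "yes", "no", "yes", "no", "no", "yes", "yes"]
rater_b = ["yes", "no", "no", "yes", "no", "yes", "no", "yes", "yes", "yes"]

# Percentage agreement: proportion of cases on which the raters agree
agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
print(f"percent agreement = {agreement:.0%}")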
Average inter-item correlation
In the example, the average inter-item correlation is .90, with the individual correlations ranging from .84 (minimum) to .95 (maximum).
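A sketch of the average inter-item correlation with invented data: every pair of items is correlated and the coefficients are averaged.

from itertools import combinations
from statistics import correlation  # Python 3.10+

# Hypothetical responses: rows are respondents, columns are items
responses = [
    [4, 3, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 5, 4],
    [3, 3, 2, 3],
    [1, 2, 1, 2],
    [4, 4, 5, 5],
]
items = list(zip(*responses))  # one tuple of scores per item

# Correlate every distinct pair of items and average the coefficients
pair_rs = [correlation(a, b) for a, b in combinations(items, 2)]
avg_r = sum(pair_rs) / len(pair_rs)
print(f"average inter-item correlation = {avg_r:.2f}")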
Parallel/alternate forms reliability
Results from the two versions/forms are compared in order to determine the consistency of the results between the similar versions of an instrument.
Statistically, Cronbach's alpha is a coefficient of internal consistency.
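A sketch of Cronbach's alpha from its standard formula, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores); the item responses are fabricated for illustration.

from statistics import variance

# Hypothetical responses: rows are respondents, columns are items
responses = [
    [4, 3, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 5, 4],
    [3, 3, 2, 3],
    [1, 2, 1, 2],
    [4, 4, 5, 5],
]
k = len(responses[0])                     # number of items
items = list(zip(*responses))             # scores per item
totals = [sum(row) for row in responses]  # total score per respondent

# Cronbach's alpha: internal consistency of the k items
alpha = (k / (k - 1)) * (1 - sum(variance(it) for it in items) / variance(totals))
print(f"Cronbach's alpha = {alpha:.2f}")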
Validity: is the test measuring what it is supposed to?

Five types of validity:
1. Face
2. Content
3. Criterion
4. Predictive
5. Construct
Face validity
When researchers simply look at the items of the instrument and give their opinion on whether the items appear to accurately measure what they are trying to measure.
It is the least scientific form of validity.
Content validity
Check the relevant content domain for the construct.
This approach assumes that you have a good
detailed description of the content domain.
For example, does an IQ questionnaire have
items covering all areas of intelligence
discussed in the scientific literature?
Tests the instrument's ability to sample the content that is being measured.
Criterion validity
It compares the test with other measures or outcomes (the criteria) already held to be valid.
For example, employee selection tests are often validated against measures of job performance (the criterion), and IQ tests are often validated against measures of academic performance (the criterion).
Predictive validity: does the test predict future success?
Ex: Math ability should be able to predict
how well a person will do in an
engineering-based profession.
Concurrent validity: Instrument data and
criterion data are gathered at nearly the
same time and results are compared
Construct validity
The extent to which there is evidence that a
test measures a particular construct.
Discriminant validity
Measures of different constructs should not
correlate highly with each other.
Convergent validity
Measures of constructs that are related to
each other should be strongly correlated.
