STUDY GUIDE
Area of concern where there is a gap in the knowledge base needed for practice.
Research problem
Concise, clear statement of the specific goal or aim of the study that is generated
from the research problem
Purpose statement
A set of highly abstracted, related constructs that broadly explains the phenomena
of interest, expresses assumptions, and reflects a philosophical stance
Conceptual framework
Concise interrogative statements that are worded in the present tense and include
one or more variables
Research question
Formal statement of the expected relationship(s) between two or more variables in a
specified population
Hypothesis
An intervention or activity that is manipulated by the researcher to create an effect
on the dependent variable also referred to as the cause
Independent variable
The response, behavior, effect or outcome that is predicted and measured in
research
Dependent variable
A group of individuals who meet the sampling criteria and to which the study
findings will be generalized
Target population
Description of how variables or concepts will be measured or manipulated in a study
Operational definition
Statements that are taken for granted or are considered true, even though they have
not been scientifically tested
Assumption
Change in the measuring instruments between the pretest and posttest rather than an
actual effect of the treatment
Instrumentation (threat to validity)
Movement or regression of extreme scores toward the mean in studies with a
pre-/posttest design
Statistical regression (threat to validity)
Braham & Williams Generated 2015
Event not related to the study but occurs during the study
History
Individual differences that exist in the subjects before they are chosen to participate
in a study
Selection bias
The capacity of the study to detect differences, minimum power is 80%
Power
Represents the consistency of the measure obtained
Reliability
Focused on comparing two versions of the same instrument
equivalence
Addresses the correlation of various items within the instrument or internal
consistency; determined by split-half reliability
homogeneity
The degree to which an instrument measures what it is supposed to measure
validity (instrument)
Tests the relationship of two concepts making the study more specific
theoretical framework
Conducted to reduce, organize, and give meaning to data
data analysis
Narrative Research
Participatory Action Research
Mixing Worlds - Mixed Methods Research
Ethnography
Qualitative research methodology for investigating cultures - to 'learn from'
Underlying assumption of ethnography
Every human group eventually evolves a culture that guides the members' view of the
world
Culture with Ethnography
Culture is not tangible, it is inferred from the words, actions, and products of
members of a group
Emic vs etic
Emic= insider perspective
Etic=outsider perspective
Goal of Ethnography
To uncover tacit (unclear, unwritten, unspoken) knowledge
Phenomenological Research
Seeks to understand people's everyday life experiences and perceptions.
Why? Because truth about reality is found within these experiences
Very small participant group (~10) and in-depth conversations
Useful for poorly defined or understood human experiences
What does phenomenological research ask?
What is the essence (meaning) of this experience or phenomenon and what does it
mean?
What is essence in phenomenological research?
Essence is what makes a phenomenon what it is, the essential aspects
Grounded Theory
Inductive theory building research.
Seeks to understand the 'why' of people's actions by asking the people themselves,
then, they 'ask' the data:
Whole pie
Sampling
The process of selecting a portion of the population (subset) to represent the entire
population
Representative sample is one whose main characteristics most closely match those of
the population
A slice of the pie
Sampling Bias
Distortions that arise when a sample is not representative of the population from
which it was drawn
Human attributes are not homogeneous, so we need representatives of all the variety
that exists
Always think about who did NOT participate in any given study
Inclusion/exclusion Criteria
The criteria used by a researcher to designate the specific attributes of the target
population, and by which participants are selected (or not selected) for participation
in a study.
Exclusion example: the researcher only wants English-speaking participants
What is necessary to qualify for the study?
NONPROBABILITY SAMPLING
Nonrandom; less likely to produce a representative sample
Methods:
Convenience
Snowball
Quota
Consecutive
Purposive
Convenience Sampling
Selection of the most readily available people as participants in a study
Very common method
High risk of bias/weakest form
Why? The sample may be atypical
Example: Nurse distributing questionnaires about vitamin use to first 100 contacts
Quota Sampling
The nonrandom selection of participants in which the researcher pre-specifies
characteristics of the sample's subpopulations (or strata), to increase its
representativeness
Convenience sampling methods are then used, ensuring an appropriate number of cases
from each stratum
Improvement over strict convenience sampling, but still prone to bias
Consecutive sampling
'Rolling enrollment' of ALL people over a period of time (the longer the period, the
better)
Reduces risk of bias, but is not always practical or relevant to the study question
Purposive Sampling
Can be used for both qualitative and quantitative designs, but preferably qualitative
Hand-picking the sample: e.g., hand-picking professionals who have knowledge of what
you are sampling
also called judgment sampling
Ask questions such as who will be most knowledgeable, most typical?
Limited use in quantitative research
Valuable in qualitative research
Snowball Sampling
Has greater affinity for qualitative designs
The selection of participants by means of referrals from earlier participants; also
referred to as network sampling
Helps identify difficult to find participants
Has limitations... as do the others
Theoretical Sampling
Qualitative sampling method
Members are selected based on emerging findings/theory
Aim to discover untapped categories and properties, and to ensure adequate
representation and full variation of important themes
Non probability
PROBABILITY SAMPLING
Random selection of participants/elements
Different from random assignment into groups
Each element has an equal and independent chance of being selected into the study
Methods:
o Simple random sampling
o Stratified
o Cluster
o systematic
Random Sampling
The selection of a sample such that each member of a population has an equal
probability of being included
Free from researcher bias
Rarely feasible, especially when you have a large population
POWER ANALYSIS: procedure for estimating how large a sample should be. The
smaller the predicted differences between groups, the greater the sample size will
need to be
Done pre-sampling
Sampling plan is an important area to critique and evaluate
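The inverse relationship between predicted group differences and required sample size can be sketched with the standard normal-approximation formula for comparing two means (a hypothetical illustration; the z values below correspond to a two-sided alpha of .05 and 80% power, and may differ from whatever procedure a given study actually used):

```python
from math import ceil

def two_group_sample_size(sd, difference, z_alpha=1.96, z_beta=0.84):
    """Approximate per-group sample size for detecting a given mean
    difference between two groups, via the normal approximation."""
    return ceil(2 * ((z_alpha + z_beta) * sd / difference) ** 2)

# The smaller the predicted difference between groups, the larger the
# sample each group needs:
n_large_effect = two_group_sample_size(sd=10, difference=10)
n_small_effect = two_group_sample_size(sd=10, difference=5)
```

Halving the predicted difference roughly quadruples the required sample, which is why power analysis must be done before sampling.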
Interviews
o Unstructured
o Semi-structured
Focus groups
Life histories/diaries
Records or other forms of documents
Observation
Quantitative
o May miss important responses (can't ask about what you don't know about)
o Respondents have to pick from a fixed set of responses, which may not accurately
capture how they feel and may threaten the validity of the test
Scales (Quantitative)
Issue of validity
Is the topic at hand likely to tempt respondents to present themselves in the best
light?
Are they being asked to reveal potentially undesirable traits?
Should we trust that they actually feel/act the way they say they do?
Observation
Sometimes fits better than self-report, depending on the question and population
(e.g., behavior of autistic children)
Again, biases and other issues come into play:
o Observer bias leading to faulty inference or description (intra and inter-rater
reliability)
o Validity questions - am I just seeing what I want to see or what I thought beforehand I would see?
o Hasty decisions may result in incorrect classifications or ratings
o Observer 'drift'
When possible, these issues are mitigated through thorough observer training,
breaks, re-training, well-planned timing of observations
WHAT IS MEASUREMENT?
Measurement involves rules for assigning numbers to qualities of objects in order to
designate the quantity of the attribute
We're familiar with the 'rules' for measuring temp, weight, etc.
Rules are also developed for measuring variables/attributes for nursing studies
Advantage of Measurement
Enhances objectivity = that which can be independently verified (2 nurses weighing
the same baby using the same scale)
Fosters precision ("rather tall" versus 6'2") and allows for making fine distinctions
between people
Measurement is a language of communication that carries no connotations
Levels of measurement:
Nominal-scale
The lowest level of measurement that involves the assignment of characteristics into
categories
o Females - category 1
o Males - category 2
The number assigned to the category has no inherent meaning (the numbers are
interchangeable)
Useful for collecting frequencies
Levels of Measurement:
Ordinal-scale
A level of measurement that ranks, in 'order' (1, 2, 3, 4) the intensity or quality of a
variable along some dimension.
1 = is completely dependent
2 = needs another person's assistance
3 = needs mechanical assistance
4 = is completely independent
Does not define how much greater one rank is than another (no relative value given)
Level of Measurement:
Interval-scale
A level of measurement in which an attribute of a variable is rank ordered on a scale
that has equal distances between points on the scale
Exam scores: 100, 90, 80
Likert scales and most other questionnaires fall here
The differences between scores are meaningful
Amenable to sophisticated statistics
Ratio-scale measurement
A level of measurement in which there are equal distances between score units, and
that has a true meaningful zero point.
o Weight - 200 lbs is twice as much as 100 lbs.
o Visual analog scale - 'No pain' is a true zero
Higher levels of measurement are preferred because more powerful statistics can be
used to analyze the information
Errors of Measurement
Values and scores from even the best measuring instruments have a certain amount
of error
That which is random and varied
Obtained score = True score + Error
o "Obtained score" is the score for one participant on the scale/questionnaire
o "True score" is what the score would be IF the measure/instrument could be
infallible.
o "Error" can be both random/varied (we just have to deal with this) and systematic
(bad)
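The equation above can be illustrated with a small simulation (a hypothetical sketch; the true score and error spread are invented for illustration):

```python
import random

random.seed(1)

# Obtained score = True score + Error: each observed measurement is the
# unknowable true score plus a random error term.
true_score = 75.0
obtained_scores = [true_score + random.gauss(0, 2.0) for _ in range(1000)]

# Because purely random error averages out, the mean of many obtained
# scores converges on the true score; systematic error would not.
mean_obtained = sum(obtained_scores) / len(obtained_scores)
```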
Factors related to Errors of Measurement = THESE ARE BIASES
Situational contaminants
Response set biases
Transitory (changing) personal factors
Administration (different persons collecting information) variations
Item sampling (which items are on the test, and do they capture the attribute of
interest?)
Reliability of Measuring Instruments
Reliability: The consistency and accuracy with which an instrument measures the
attribute it is designed to measure
A reliable instrument is close to the true score; it minimizes error
Test-Retest Reliability:
Assesses the stability of an instrument by giving the same test to the same sample
twice, then comparing the scores
Gets at the question of time-related factors that may introduce error.
Only appropriate for those characteristics that don't change much over time
Internal Consistency:
The degree to which the subparts (each item) of an instrument are all measuring the
same attribute or trait
Cronbach's alpha is a reliability index that estimates the internal consistency or
homogeneity of an instrument (the closer to +1.00, the more internally consistent the
instrument)
Best means of assessing the sampling of items
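As a rough sketch of how Cronbach's alpha is computed from the standard formula (the item scores below are hypothetical):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha from per-item score lists: one inner list per
    item, each holding one score per respondent."""
    k = len(items)
    item_variance_sum = sum(pvariance(item) for item in items)
    # total scale score for each respondent across all items
    totals = [sum(scores) for scores in zip(*items)]
    return (k / (k - 1)) * (1 - item_variance_sum / pvariance(totals))
```

Items that rise and fall together across respondents yield an alpha close to +1.00, i.e., high internal consistency.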
Interrater Reliability:
The degree to which two raters or observers, operating independently, assign the
same values for an attribute being measured or observed. The more congruence,
the more accurate/reliable the instrument
Validity: The degree to which an instrument measures what it is intended to measure.
o You can have reliability without validity, but you can't have validity without
reliability
Content Validity
The degree to which the items in an instrument adequately cover the whole of the
content of interest
Usually evaluated by a panel of experts in the content area
Criterion-Related Validity
The degree to which scores on an instrument are correlated with an external
criterion
Is there a clearly established criterion?
Simulation = accurate reflection of nursing skill. If a written test attempts to
capture the info in a simulation, the simulation becomes the criterion by which
validity can be tested
Construct Validity
The degree to which an instrument measures the construct under investigation.
What exactly is being measured? Could it be something other than what it looks
like?
DESCRIPTIVE STATISTICS
Synthesize and describe the data set (information)
Example - what is the average weight loss of patients with cancer?
Provide foundational information and theory for inferential statistics
Helps you assess the representativeness of the sample
These are valuable
Inferential Statistics
Provide a means for drawing conclusions about a population, given the data from
a sample
Based on the laws of probability
Allows objective criteria for hypothesis testing
Research hypothesis:
Patients exposed to movie on breastfeeding will breastfeed longer than those who do
not see the movie
A tentative conclusion about the relationship between two variables
Null hypothesis:
There is no difference in breastfeeding length between the two groups: 1) seeing
movie 2) not seeing movie
Our goal: rejection of the null hypothesis, because we cannot directly demonstrate
that the research hypothesis is correct
p value (significance of a relationship)
p values tell you whether the results are likely to be real
Simply means that the results are not likely to be attributed to a chance occurrence
In a study, 'significance' refers to an investigator's hypothesis being supported
Effect size analysis:
Conveys the estimated magnitude of a relationship without making any statement
about
whether the apparent relationship in the data reflects a true relationship
DISCUSSION SECTION
What's here? Interpretation of study findings
Requires making multiple inferences
Inference: use of logical reasoning to draw conclusions based on limited
information
Can we really make valid inferences based on 'stand-ins'? Yes, if we use rigorous
design
Investigators are often indirect (at best) in addressing issues of validity - you must
be the judge
Assessing good research design
the main question:
To what degree did the investigators provide reliable and valid evidence?
Investigator's primary design goal - control confounding variables
Confounding variable:
An extraneous, often unknown variable, that correlates (positively or negatively)
with both the dependent variable and the independent variable.
Confounding is a major threat to the validity of inferences made about cause and
effect (internal validity)
Intrinsic (internal)
Come with the research subjects
These are factors that are simply characteristics of the individual subject
Example: Physical activity intervention to improve left sided movement in pts
with CVA
Age/Gender
Smoking HX/Physical activity HX
All are extraneous variables, and all likely related to the outcome variable
(dependent)
Associated with research subject
Extrinsic (external)
Are part of the research situation
Result in 'situational contaminants'
If not sufficiently addressed, these factors raise question about whether something
in the study context influenced the results
Associated with research situation or context, situational contaminants, instead of
the variable alone
Controlling extrinsic factors
Goal: create study and data-collection conditions that don't change from participant
to participant
What does this look like in a study?
o All data collected in the same setting
o All data collected at the same time of day
o Data collectors use a formal script, (interviews, observations) are trained in
delivery of any verbal communication
o Intervention protocols are very specific
Controlling Intrinsic Factors
1. Random assignment into groups:
goal is to have groups that are equal with respect to ALL confounding variables,
not just the ones we know about
Controlling Intrinsic Factors
2. Homogeneity:
limits confounders by including only people who are 'the same' on the confounding
variable
Events occurring at the same time as the study that may impact outcomes (flu
shot example)
When transcribing, is it accurate and valid?
o Researchers do their own transcribing; it is the researcher's responsibility to
transcribe and to make sure the transcription is accurate
o The overall goal is to get to know your data
"Immersing" oneself in the data: getting to know one's own data, even "drowning in data"
Maintaining files: computer systems versus manually cutting out hard copies, putting
them into files, and developing systems by hand
Identify commonalities within the data that bring meaning to the experience
under study
o You need to ask what the relationships within and among the identified themes are
Iterative process:
Initial themes are identified, then the analyst returns to the data with those themes
in mind, asking, "Does this fit?" It is a refining and clarifying process: a circular
process of looking at the data, stepping back, identifying themes, going back to
previously determined themes, checking themes with other participants, then going
back again
Seeing if it fits, whether it is accurate, and whether others find similar themes and
connections
ANALYTIC PROCEDURES - Validation of findings:
Aim is to minimize bias associated with analysis by only one researcher
Think about validity by rechecking the work to see whether the findings actually come
from the data and not from the researcher's biases. Researchers are good at
identifying their own biases; they acknowledge them and let them work with, rather
than against, the analysis. It is the consumer's responsibility to recognize those
biases and judge whether they got in the way
There is a risk of getting only the perspective of one person; you need to work to
mitigate that risk
ANALYTIC PROCEDURES - Integration:
Developing an overall structure - either a theory, conceptual map, or
overarching description
This integration piece is the 'so what?' piece
Look at the end of the research and ask "so what?" The researcher needs to do a good
job of linking the study to practice, and the "so what" needs to be focused so people
can see how to use the research; if they can't, the question may need to be reworked
or reworded
It is difficult to get funding for qualitative research, and this has a bearing on the
write-up and on why the authors think the study is important and relevant
Goal is to develop an overall perspective or idea that is useful at the bedside
The integration may take the form of a conceptual map, a concept analysis, or an
overarching description, moving in an inductive way from what these participants
experienced
ANOVA
Analysis of variance tests for differences in the means for three or more
groups.
Compares how much members of a group differ or vary among one another
with how much the members of the group differ or vary from the members of
other groups.
In other words, the test analyzes variance, comparing the variance within a
group with the variance between groups. For example, an ANOVA test of
respiratory complications in three groups of patients categorized by smoking
status calculates how much variation there is in respiratory complications
within the patient group that smokes, the patient group that never smoked, and
the patient group that formerly smoked. It then calculates the amount of
variation in respiratory complications between the smoking patients, the patients
who never smoked, and the former smokers
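The within-versus-between comparison above can be sketched as a minimal pure-Python F statistic (the complication counts for the three smoking-status groups are hypothetical):

```python
from statistics import mean

def one_way_anova_f(groups):
    """F statistic: variance between group means relative to variance
    within groups (groups is a list of score lists, one per group)."""
    all_scores = [score for group in groups for score in group]
    grand_mean = mean(all_scores)
    k, n = len(groups), len(all_scores)
    # variance of group means around the grand mean (between groups)
    ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups)
    # variance of members around their own group mean (within groups)
    ss_within = sum((score - mean(g)) ** 2 for g in groups for score in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

current = [4, 5, 6]   # hypothetical complication counts, current smokers
former = [3, 4, 5]    # former smokers
never = [1, 2, 3]     # never smoked
f_statistic = one_way_anova_f([current, former, never])
```

A large F means the groups differ among themselves far more than their members differ within each group.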
T-TEST
Computes a statistic that reflects the differences in the means of a variable for
two different groups or at two different times for one group.
The two groups being tested might consist of anything of interest to nursing,
such as men and women, single parent families and two-parent families, those
who quit smoking and those who did not, or hospitals with level-one trauma
centers and hospitals without them
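The two-group comparison can be sketched as a pooled-variance t statistic (a minimal sketch assuming independent groups with equal variances; the input lists are supplied by the caller):

```python
from math import sqrt
from statistics import mean, variance

def two_sample_t(a, b):
    """Pooled-variance t statistic for two independent groups."""
    na, nb = len(a), len(b)
    # weighted average of the two sample variances
    pooled = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    # difference in means, scaled by its standard error
    return (mean(a) - mean(b)) / sqrt(pooled * (1 / na + 1 / nb))
```

The further the statistic is from zero, the less likely the difference in means is due to chance alone.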
REGRESSION
The statistical procedure that we use to look at connections among three or
more variables
Measures how much two or more independent variables explain the variation
in a dependent variable.
The regression procedure allows us to predict future values for the dependent
variable based on values of the independent variables.
A regression analysis gives the information needed to know how much
different factors independently contribute or connect to a dependent variable.
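For a single independent variable, the least-squares calculation behind regression, and the prediction it enables, can be sketched as follows (the x and y values are hypothetical):

```python
from statistics import mean

def simple_regression(x, y):
    """Least-squares slope and intercept for one independent variable."""
    mx, my = mean(x), mean(y)
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    intercept = my - slope * mx
    return slope, intercept

# Once slope and intercept are estimated, future values of the dependent
# variable can be predicted from the independent variable.
slope, intercept = simple_regression([1, 2, 3, 4], [3, 5, 7, 9])
predicted = slope * 5 + intercept
```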
NULL HYPOTHESIS
HYPOTHESIS
Is stated in the positive and predicts the nature and strength of a relationship or
difference among variables.
It is the researcher's hope that the results of a study support the prediction.
BLINDING
The process of preventing those involved in a study (participants, intervention
agents, or data collectors) from having information that could lead to a bias,
e.g., knowledge of which treatment group a participant is in; also called
masking.
ANONYMITY
Protection of participants' confidentiality such that even the researcher cannot
link individuals with the data they provided.
CHI SQUARED
Is used to test hypotheses about differences in proportions.
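A minimal sketch of the chi-squared statistic for comparing proportions in a 2x2 table (the cell layout is illustrative):

```python
def chi_square_2x2(a, b, c, d):
    """Chi-squared statistic for a 2x2 table of observed counts:
    group 1 = (a, b), group 2 = (c, d)."""
    n = a + b + c + d
    observed = [a, b, c, d]
    # expected counts if both groups shared the same proportions
    expected = [(a + b) * (a + c) / n, (a + b) * (b + d) / n,
                (c + d) * (a + c) / n, (c + d) * (b + d) / n]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))
```

Identical proportions give a statistic of zero; the more the observed proportions differ, the larger the statistic.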
PEARSON
Establishes a linear relationship
The most often used correlation index is Pearson's r (the product-moment
correlation coefficient), which is computed with interval or ratio measures.
There are no fixed guidelines on what should be interpreted as strong or weak
relationships, because it depends on the variables.
If we measured patients' body temperature orally and rectally, an r of .70
between the two measurements would be low. For most psychosocial
variables (e.g., stress and depression), however, an r of .70 would be high.
Perfect correlations (+1.00 and -1.00) are rare.
Correlation coefficients describe the direction and magnitude of a relationship
between two variables, and range from -1.00 (perfect negative correlation)
through .00 to +1.00 (perfect positive correlation)
Pearson's r is used with interval- or ratio-level variables.
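Pearson's r can be computed directly from its definition; a minimal sketch (the input lists in the tests are hypothetical):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson's product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # co-variation of x and y around their means
    covariation = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return covariation / (sx * sy)
```

Two variables that rise together give r near +1.00; one rising as the other falls gives r near -1.00.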
INFERENTIAL STATISTICS
Based on laws of probability, allow researchers to make inferences about a
population based on data from a sample.
The sampling distribution of the mean is a theoretical distribution of the means
of an infinite number of same-sized samples drawn from a population.
Sampling distributions are the basis for inferential statistics.
COVARY/COVARIANCE
When two variables are connected in some way, they are said to covary.
Two variables covary when changes in one are connected to consistent changes
in the other.
For example, height and weight covary in healthy growing children.
As the height of a child increases, the weight usually increases as well.
PARAMETRIC STATISTICS
These are numbers that meet two key criteria: (1) the numbers must generally be
normally distributed; that is, the frequency distribution of the numbers is
roughly bell shaped, and
(2) the numbers must be interval or ratio numbers, such as age or intelligence
scores; that is, the numbers must have an order, and there must be an equal
distance between each value
NONPARAMETRIC STATISTICS
Used for numbers that do not have a bell-shaped distribution and are categoric
or ordinal.
Categoric or ordinal numbers represent variables for which there is no
established equal distance between each category, such as numbers used to
represent gender or rating of preference for car color. In the predictors of life
satisfaction study, gender would call for nonparametric statistics, whereas life
satisfaction scores would support parametric statistics.
CONFIDENTIALITY
Protection of study participants so that data provided are never publicly
divulged.
BASIC RESEARCH
Research designed to extend the base of knowledge in a discipline for the sake
of knowledge production or theory construction, rather than for solving an
immediate problem
APPLIED RESEARCH
Research designed to find a solution to an immediate practical or clinical
problem.
ATTRITION
The loss of participants over the course of a study, which can create bias by
changing the composition of the sample initially drawn.
CODING
The process of transforming raw data into standardized form for data processing
and analysis; in quantitative research, the process of attaching numbers to
categories; in qualitative research, the process of identifying recurring words,
themes, or concepts within the data.
COEFFICIENT ALPHA (CRONBACH'S ALPHA)
A reliability index that estimates the internal consistency of a measure
comprised of several items or subparts.
Integrative review
Focused review and synthesis of the literature on a specific area that follows
specific steps of literature integration and synthesis without statistical analysis.
Meta-analysis
Summarizes a number of studies focused on a topic using a specific statistical
methodology to synthesize the findings in order to draw conclusions about the area
of focus.
4 strategies for critical reading:
1. Preliminary
2. Comprehensive
3. Analysis
4. Synthesis
Preliminary
Familiarizing yourself with the content: skimming the content.
Comprehensive
Understanding the researcher's purpose or intent.
Analysis
Understanding the parts of the study.
Synthesis
Understanding the whole article and each step of the research process in a study.
Levels of evidence:
1-7 (greatest to least)
Level 1
Systematic review or meta-analysis of all relevant RCTs
Level 2
A well-designed RCT
Level 3
Quasi-experimental study
-Controlled trial WITHOUT randomization
Level 4
Single non-experimental study (case-control, correlational, cohort studies)
Level 5
Systematic reviews of descriptive and QUALITATIVE studies
Level 6
Single descriptive or QUALITATIVE studies
Level 7
Opinion of authorities and/or reports of expert committees.
THE RESEARCH QUESTION
Presents the idea that is to be examined in the study and is the foundation of
the research study
THE HYPOTHESIS
The aims or objectives the investigator hopes to achieve with the research, not the
question to be answered.
NULL HYPOTHESIS
INDEPENDENT VARIABLE
DEPENDENT VARIABLE
LITERATURE REVIEW
CONCEPT
THEORY
CONCEPTUAL DEFINITION
OPERATIONAL DEFINITION
This type of definition includes the method used to measure the concept.
Ex: Taking 4 steps without assistance
Cochrane review
What should be your first choice when looking for theoretical, clinical or research articles?
Structure of concepts and/or theories that provides the basis for development of research
questions or hypotheses
The overall purpose of the literature review is to present a strong knowledge base for the
conduct of the research study.
As students what should be our first choice when looking for theoretical, clinical, or
research articles?
Refereed or peer-reviewed journals have a panel of internal and external reviewers who
review submitted manuscripts for possible publication. The external reviewers are drawn
from a pool of nurse scholars, and possibly scholars from other related disciplines who
are experts in various specialties. In most cases, the reviews are "blind"; that is, the
manuscript to be reviewed does not include the name of the author(s).
In contrast to quantitative studies, the literature reviews of qualitative studies are usually
handled in a different manner. How is this so?
There is often little known about the topic under study. The literature review may be
conducted at the beginning of the study or after the data analysis is completed in
qualitative research.
What is the purpose of the World Health Organization (WHO)'s designated Collaborating
Centers throughout the United States?
To provide research and clinical training in nursing to colleagues around the globe
4 ways that qualitative findings can be used in EBP (4 modes of clinical application from
Kearney)
1. Insight or empathy
2. Assessment of status or progress
3. Anticipatory guidance
4. Coaching
Kearney's Categories of Qualitative Findings
1. Descriptive categories
2. Shared pathway or meaning
Phenomenon is vividly portrayed from a new perspective; provides a map into previously
uncharted territory in the human experience of health and illness.
Auditability
The researcher should include enough information in the report to allow the reader to
understand how the raw data lead to the interpretation.
Fittingness
The researcher provides enough detail in a qualitative research report for the reader to
evaluate the relevance of the data to nursing practice.
Means of control:
Homogeneous sample
Consistent data-collection procedures
Manipulation of IV
Randomization
Homogeneous sample
Participants in the study are homogeneous, i.e., similar on extraneous variables that
might affect the dependent variable.
Homogeneity of the sample limits generalizability or the potential to apply the results of a
study to other populations.
Constancy
Data collection procedures are the same for each subject; data collected in the same
manner and under the same conditions.
Randomization
Internal validity
Asks whether the IV really made the difference or the change in the dependent variable.
Established by ruling out other factors or threats as rival explanations.
Must be established before external validity can be established.
VALIDITY
INTERNAL VALIDITY
The confidence that an experimental treatment or condition made a difference and that
rival explanations were systematically ruled out through study design and control
EXTERNAL VALIDITY
The ability to generalize the findings from research study to other populations, places,
and situations
TRIANGULATION
Combined use of two or more theories, methods, data sources, investigators, or analytical
methods to study the phenomenon.
The researcher's use of multiple sources to confirm a finding. This can increase the
credibility of the results
Cross-checking conclusions using multiple data sources, methods or researchers to study
the phenomenon
BRACKETING
In nursing the descriptive design can be used to develop a theory, identify problems,
make decisions, or determine what others are doing so they can design effective nursing
interventions.
DESCRIPTIVE STUDIES
LONGITUDINAL STUDY
Expensive
Large samples are expensive
CORRELATION DESIGN
Involves the analysis of two variables and seeks to determine the strength of their
relationship
Asks, for example, "What is the relationship between patient satisfaction and the
timeliness and effectiveness of pain relief in a fast-track emergency unit?"
Benefits of correlation design
Uncomplicated
Flexibility in exploring relationships
Practical applications
No data manipulations
Limitations of correlation design
Lack control
Lack randomization
A suppressor variable may be the cause
Spurious relationships
Prediction (Regression) design
No assurance of causality
Requires relatively large sample size
CASE STUDIES
It is purely descriptive and relies on depth of detail to reveal the characteristics
and responses in a single case.
"What are the appropriate assessments and interventions for a patient experiencing
paraplegia after heart surgery? What was the course of the condition and expected
responses from single patient's perspective?"
No insight provided
No baseline measurement
Causation cannot be inferred
Results cannot be generalized
RANGE
Distance between the two most extreme values in a data set
MEAN
Average
MEDIAN
Middle number
MODE
Most common number
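The four descriptive statistics above can be computed directly with Python's standard library; the exam scores are invented for illustration:

```python
import statistics

scores = [72, 85, 85, 90, 68, 85, 77]  # hypothetical exam scores

value_range = max(scores) - min(scores)  # distance between extremes -> 22
mean = statistics.mean(scores)           # average
median = statistics.median(scores)       # middle number when sorted -> 85
mode = statistics.mode(scores)           # most common number -> 85
```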
What is the benefit of using inferential statistics?
They allow the researcher to determine the probability that random error is responsible for the
outcome, and they give the reader information about the size of the effect
Inferential statistics
Inferential analysis is the most common type of quantitative analysis used in research for
evidence.
Similar to experimental designs but using convenience samples or existing groups to test
interventions
Ex Post Facto
Case Study
Only descriptive
Only one person/unit/community
Paired T Test
ANOVA
Quasi Experimental
These individuals are studied at the same time as the experimental group
Retrospective
Cross-sectional
Longitudinal
Prospective
Case Study
Predictive
Mean
Mode
Standard Deviation
This unit of measurement expresses variability of the data in reference to the mean.
It provides a numerical estimate of how far on an average the separate observations are from the
mean
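The standard deviation described above is the square root of the average squared distance from the mean. A sketch with invented pain scores:

```python
import math

data = [4, 8, 6, 5, 7]  # hypothetical pain scores

mean = sum(data) / len(data)  # 6.0
# Average squared distance from the mean (population variance)
variance = sum((x - mean) ** 2 for x in data) / len(data)
sd = math.sqrt(variance)  # how far, on average, observations sit from the mean
```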
Descriptive Analysis
A priori
Quantitative Research
Research Design
A plan that outlines the overall approach to a study, grounded in a set of beliefs about
knowledge and linked to the nature of the research question
Focused on answering the research question with the greatest credibility
Based on the purpose of the study
Outlines one's study
Provides the details
Macro view
overall approach from specific paradigm or belief system
which questions can be answered best from which perspective
Research question
Researcher expertise
Purpose of the study
Resources
Previous research and use of instruments
Requirements for control (keep the extraneous variables in check)
Issues about internal and external validity (generalizability)
Phases of research
Researcher's expertise
Educational preparedness
Novice to expert
Previous experiences in related research
Resources
Time
How much money will this cost?
What resources do you require?
e.g., statistician, documents, computer, materials, instruments
Do you require or have personnel resources?
Do you have support from the institution?
Previous research
External validity
To what extent can you generalize to the larger population by controlling the sample
population?
How do you control this? Control who is in your sample; make sure it is generalizable to the
population
Internal validity
Trustworthiness of findings
assign subjects to groups equitably
document equivalence of the study group
Types of variables
dependent
independent - causes something to happen
extraneous or confounding - not part of the research question, but can still distort the results
all must be operationalized - so you can measure them
Design decisions
Descriptive research
Survey designs
Cross sectional- across cohorts
Longitudinal - over time
Case studies
Single subject design (response to event)
Phenomenology
ethnography - the woman who lived with apes, studying how the apes lived (Jane Goodall)
Cross sectional
Multiple cohorts at a single point in time
Longitudinal
Follows study participants over time
Case study
In depth analysis
Phenomenology
Qualitative looking at the life experience of one subject
Ethnography
A deep study of one culture
Designs that describe relationships
Random selection
Quasi experimental
Bias
The distortion of true findings by factors other than those being studied
Researcher
Measurements
Subjects
sampling procedures
Data
Statistical analyses
extraneous variables
Sampling Theory
Sampling theory was developed to mathematically determine the most effective way to
acquire a sample that would accurately reflect the population of study.
Why do we take samples instead of studying an entire population?
What happens if we test an infinite number of random samples?
A mathematical theorem that is the basis for the conclusion that larger samples will
represent a population more accurately than small ones
Population
Sample
Elements
Sampling criteria - characteristics of the participants
Sampling frame - the list of possible participants in the sample
Sampling plans
Representativeness - sampling criteria determine this
Randomization
Sampling factors
Sampling criteria
Sampling frame
Sampling Plan
Inferential statistics
Understanding a small part in order to infer or predict the whole truth about the whole
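The sampling-frame idea above can be made concrete: given a list of eligible, accessible people, a simple random sample gives every element an equal chance of selection. A sketch in Python; the patient IDs are invented:

```python
import random

# Hypothetical sampling frame: every patient ID on a unit census list
sampling_frame = [f"patient-{i:03d}" for i in range(1, 201)]

random.seed(42)  # fixed seed so the example is reproducible
# Simple random sample: each of the 200 elements is equally likely to be drawn
sample = random.sample(sampling_frame, k=20)
```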
Probability Sampling
P value
The probability that you have made a mistake and wrongly interpreted your findings, i.e., that the result is due to chance (the risk of a Type I error)
.05
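The .05 above is the conventional cutoff for that probability. One minimal way to see where a p-value comes from is a permutation test: shuffle the group labels many times and ask how often chance alone produces a difference as large as the one observed. A sketch; the pain scores and the `permutation_p_value` helper are invented for illustration:

```python
import random
from statistics import mean

def permutation_p_value(group_a, group_b, n_perm=10_000, seed=1):
    """Two-sided p-value for a difference in means, by random permutation."""
    rng = random.Random(seed)
    observed = abs(mean(group_a) - mean(group_b))
    pooled = group_a + group_b
    n_a = len(group_a)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # random relabeling of subjects
        diff = abs(mean(pooled[:n_a]) - mean(pooled[n_a:]))
        if diff >= observed:
            extreme += 1
    return extreme / n_perm

# Hypothetical pain scores under two treatments
control = [6, 7, 5, 8, 7, 6, 7, 8]
treatment = [4, 3, 5, 4, 2, 4, 3, 5]
p = permutation_p_value(control, treatment)
# p < .05 -> a difference this large is unlikely to be chance alone
```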
Cronbach's alpha
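Cronbach's alpha quantifies internal consistency: it rises when the items of an instrument correlate with one another. A sketch of the standard formula, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores); the three-item scale data are invented:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency.

    `items` is one list of scores per item, each the same length
    (one score per respondent).
    """
    k = len(items)
    item_vars = sum(pvariance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # total score per respondent
    return (k / (k - 1)) * (1 - item_vars / pvariance(totals))

# Hypothetical 3-item scale answered by 5 respondents
item_scores = [
    [4, 5, 3, 4, 2],  # item 1
    [4, 4, 3, 5, 2],  # item 2
    [5, 5, 2, 4, 1],  # item 3
]
alpha = cronbach_alpha(item_scores)  # values above ~0.7 are usually deemed acceptable
```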
Probability sampling
Cluster Sampling
Sampling proceeds through progressively smaller groupings
Systematic Sampling
Convenience sample
Quota sampling
Purposive Sample
Qualitative surveys
Extreme case
Expert case
Heterogeneity
Networking
Snowballing
Like characteristics
Sampling error
The difference between the findings expected from an infinite number of samples and the findings
from your sample; this is a mathematical calculation
MEASUREMENT STRATEGIES
Validity
Reliability
Consistency
Measurement strategy
Primary Data
Secondary data
Level of measurement
Nominal
Ordinal
Interval
Ratio
Nominal
The name 'Nominal' comes from the Latin nomen, meaning 'name' and nominal data are
items which are differentiated by a simple naming system.
The only thing a nominal scale does is to say that items being measured have something
in common, although this may not be described.
Nominal items may have numbers assigned to them. This may appear ordinal, but it is not - the numbers are used only to simplify capture and referencing.
Nominal items are usually categorical, in that they belong to a definable category, such as
'employees'.
Example
The number pinned on a sports person.
A set of countries.
Nominal
Ordinal
Ordinal
Items on an ordinal scale are set into some kind of order by their position on the scale.
This may indicate, for example, temporal position or superiority.
The order of items is often defined by assigning numbers to them to show their relative
position. Letters or other sequential symbols may also be used as appropriate.
Ordinal items are usually categorical, in that they belong to a definable category, such as
'1956 marathon runners'.
You cannot do arithmetic with ordinal numbers -- they show sequence only.
Example
The first, third and fifth person in a race.
Pay bands in an organization, as denoted by A, B, C and D.
Interval Measurement
Interval data (also sometimes called integer) is measured along a scale in which each
position is equidistant from one another.
This allows for the distance between two pairs to be equivalent in some way.
This is often used in psychological experiments that measure attributes along an arbitrary
scale between two extremes.
Interval data cannot be multiplied or divided.
Example
My level of happiness, rated from 1 to 10.
Temperature, in degrees Fahrenheit.
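The difference between interval and ratio scales can be shown with the Fahrenheit example itself: differences are meaningful, but ratios are not, because 0 °F is an arbitrary zero. A sketch (the temperatures are invented):

```python
# Interval scale: differences are meaningful, ratios are not.
f_morning, f_noon = 40.0, 80.0      # degrees Fahrenheit
difference = f_noon - f_morning     # a 40-degree rise: meaningful

# 80/40 == 2, but 80 F is not "twice as hot" as 40 F, because 0 F
# is an arbitrary zero, not a true absence of heat. Converting to
# Kelvin (a ratio scale with a true zero) shows why:
k_morning = (f_morning - 32) * 5 / 9 + 273.15
k_noon = (f_noon - 32) * 5 / 9 + 273.15
true_ratio = k_noon / k_morning     # about 1.08, nowhere near 2
```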
Ratio Measurement
Ratio data have all the properties of interval data plus a true zero point, so values can be
meaningfully multiplied and divided.
Example
Weight in kilograms; zero means a complete absence of the quantity.
Random error
Does not affect the mean score but does affect the variability and standard deviation.
Can be corrected by increasing the sample size
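The claim that larger samples reduce random error can be demonstrated by simulation: the means of large samples scatter far less around the true mean than the means of small samples (roughly shrinking by the square root of n). A sketch with an invented population of test scores:

```python
import random
from statistics import mean, stdev

rng = random.Random(0)
# Hypothetical population of test scores (mean ~100, SD ~15)
population = [rng.gauss(100, 15) for _ in range(100_000)]

def spread_of_sample_means(n, trials=200):
    """SD of the means of `trials` random samples of size n (random error)."""
    return stdev(mean(rng.sample(population, n)) for _ in range(trials))

small_n_spread = spread_of_sample_means(10)    # wide: lots of random error
large_n_spread = spread_of_sample_means(1000)  # narrow: error shrinks ~1/sqrt(n)
```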
Systematic error
Inappropriate sampling
Errors in measurement and procedures
Missing data
Threats to validity
Hawthorne effect
The extent to which subjects change their behavior simply because they know that their
behavior is being studied
External validity
The extent to which the findings can be generalized beyond the population in the study
Extraneous variables
Variables that are not part of the research question but can affect the dependent variable and
therefore the study
Influencing factors that you will need to include or disclose in your study
Factors that you will need to control; otherwise they will affect the outcome of your study
Population validity
Population
Sample
External Validity
the ability to generalize the findings from a study to other populations, places and
situations
Network sampling
Non-probability
Random selection
Random assignment
Sampling frame
the available population - the potential participants who meet the definition of the population
and are accessible to the researcher
Snowball sampling
Referral-based; violates randomness and independence because each subject is asked to recruit
other subjects.
Systematic sampling
the first subject is drawn randomly, and the remaining subjects are drawn at predetermined intervals
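The systematic-sampling definition translates directly into code: pick a random start within the first interval, then take every interval-th person. A sketch with a hypothetical frame of 100 subjects:

```python
import random

def systematic_sample(frame, k):
    """Draw k subjects: random start, then every (len(frame)//k)-th element."""
    interval = len(frame) // k
    start = random.randrange(interval)  # first subject drawn at random
    return [frame[start + i * interval] for i in range(k)]

frame = list(range(1, 101))            # hypothetical frame of 100 subjects
sample = systematic_sample(frame, 10)  # every 10th subject after a random start
```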
Target population
Data collection must be
Clear
Unbiased
Reliable
Valid
Designed to answer the research question
Physiological measures
Require calibration
Examples include
Vital signs
Weight, height, BMI
Scales, sphygmomanometers, otoscopes, thermometers, stethoscopes, etc.
Psychometric measures
Subjective
Usually self-report
Many have already been validated and shown to be reliable
Examples
Pain scales
Visual Analog Scales
Depression scales
Interviews
One-on-one
Researcher acts as the instrument
Focus groups
Written
Narratives
Journals
Observation
Participative
Surveys
Questionnaires
Likert Scales
Forced Response
Other
Review: Reliability
What is validity?
According to the APA, 5th edition, validity addresses the appropriateness, meaningfulness, and
usefulness of the specific inferences made from the instrument score. Or - the extent to which an
instrument measures what it was intended to measure.
Validity in Measures
Instrument Sensitivity (more options provide for greater sensitivity)
Yes / No; SA / A / DN / D / SD; Open ended
Ounces vs. Pounds
Visual analog scale
Dichotomous vs. visual analog
Validity in Qualitative Research
Researcher effects
Triangulation
Combined use of two or more theories, methods, data sources, investigators, or analytical
methods (may include corroborating evidence from different source)
Weighing evidence (sifting)
Contrast and comparisons
(between previous findings and reviewers' analyses)
Ruling out spurious findings
Those that are specious / not true
Replicating
Check rival explanations
Negative evidence
Feedback from informants
Review: threats to internal validity in quantitative research
Historical effects - events occur during the study that influence the outcome of the
study; control by random sampling to distribute effects across all groups.
Maturation effects - effects of the passage of time; control by matching subjects by age
and using ANCOVA to measure the effects of time.
Testing effects - subjects' reactions (rxns) that are due to the effect of being
observed; control by using unobtrusive measures and placebos.
Instrumentation effects - influence on the outcome from the measurement itself, not the
intervention; control by calibrating instruments and documenting reliability.
Placebo effects - subjects perform differently because they are aware they are in a study
or as a rxn to being treated.
Multiple treatment effects
Mortality - subject attrition due to dropouts, loss of contact, or death; control by
projecting expected attrition, oversampling, and carefully screening subjects.
Selection effects - subjects are assigned to groups in a way that does not distribute
characteristics evenly across groups; control by random selection, random
assignment, matching subjects, or stratified samples.
John Henry effect - compensatory rivalry.
Hawthorne effect - subjects behave differently not because of the intervention but
because they are in a study; affects both the treated subjects and the untreated ones.
Demoralization
Changes in the presumed cause MUST be related to the changes in the presumed effect
The presumed cause MUST occur before the presumed effect
There are no other plausible alternative explanations
What is important?
UNSTRUCTURED INTERVIEW
An interview in which the researcher asks respondents questions without having a
predetermined plan regarding the content or flow of information to be gathered.
UNSTRUCTURED OBSERVATION
The collection of descriptive data through direct observation that is not guided by a
formal, pre-specified plan for observing or recording the information.
SEMISTRUCTURED INTERVIEW
An open-ended interview in which the researcher is guided by a list of specific topics to
cover.
SCIENTIFIC METHOD
A set of orderly, systematic, controlled procedures for acquiring dependable, empirical,
and typically quantitative information; the methodologic approach associated with the
positivist paradigm.
RETROSPECTIVE DESIGN
A study design that begins with the manifestation of the outcome variable in the present
(e.g., lung cancer), followed by a search for a presumed cause occurring in the past (e.g.,
cigarette smoking).
PROPOSAL
A document communicating a research problem, proposed procedures for solving the
problem, and, when funding is sought, how much the study will cost.
PROSPECTIVE DESIGN
A study design that begins with an examination of a presumed cause
(e.g., cigarette smoking) and then goes forward in time to observe presumed effects
(e.g., lung cancer); also called a cohort design.
POSITIVELY SKEWED DISTRIBUTION
An asymmetric distribution of values with a disproportionately high number of cases at
the lower end; when displayed graphically, the tail points to the right.
PRETEST-POSTTEST DESIGN
An experimental design in which data are collected from research subjects both before
and after introducing an intervention; also called a before-after design.
PRIMARY SOURCE
First-hand reports of facts or findings; in research, the original report prepared by the
investigator who conducted the study.
OBJECTIVITY
The extent to which two independent researchers would arrive at similar judgments or
conclusions (i.e., judgments not biased by personal values or beliefs).
CODING
The process of transforming raw data into standardized form for data processing and
analysis; in quantitative research, the process of attaching numbers to categories; in
qualitative research, the process of identifying recurring words, themes, or concepts
within the data.
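In quantitative research, the coding step defined above often amounts to a codebook lookup that attaches a standardized number to each raw category. A sketch with invented survey responses and a hypothetical codebook:

```python
# Raw categorical responses collected from a questionnaire (invented data)
raw_responses = ["never", "sometimes", "often", "sometimes", "never"]

# Hypothetical codebook mapping each category to a standardized numeric code
codebook = {"never": 0, "sometimes": 1, "often": 2}

# Coding: transform raw data into standardized form for analysis
coded = [codebook[r] for r in raw_responses]  # -> [0, 1, 2, 1, 0]
```

Note the resulting codes here are nominal (or at best ordinal) labels; doing arithmetic on them beyond counting would violate the levels-of-measurement rules described earlier.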