Instruments in Quantitative Research: An Overview

Instruments are tools used to gather data for a particular research topic. Some of the common
instruments used in quantitative research are tests (performance-based or paper-and-pencil),
questionnaires, interviews, and observations. The last two instruments are used more often in
qualitative research. However, they can also be employed in quantitative studies as long as the
required responses or analyzed data are numerical in nature.

When using instruments that are prone to subjectivity (e.g., observation, interview, or
assessment of a performance task), you may consider having another coder or evaluator help you
gather and analyze data. This improves the validity and reliability of the data. Then, compute
the inter-coder or inter-rater agreement, which refers to the level of concurrence between the
scores given by two or more raters.
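To illustrate, here is a minimal sketch in Python of the simplest agreement measure, percent agreement. The rater scores are hypothetical, and a chance-corrected measure such as the kappa coefficient (discussed later) is usually preferred in practice.

```python
# Minimal sketch: percent agreement between two raters.
# The rater scores below are hypothetical illustrations.

rater_a = [3, 4, 2, 5, 4, 3, 2, 4]
rater_b = [3, 4, 3, 5, 4, 3, 2, 5]

matches = sum(a == b for a, b in zip(rater_a, rater_b))
agreement = matches / len(rater_a)
print(f"Percent agreement: {agreement:.0%}")  # 75%
```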

Typically, you consider a number of aspects in describing your instruments. These include the
following:

1. The actual instrument used

2. The purpose of the instrument

3. The developer of the instrument (an institution or other researchers)

4. The number of items or sections in the instrument

5. The response format used (e.g., multiple choice, yes or no)

6. The scoring for the responses

7. The reliability and validity of the instrument

There are three ways of developing an instrument for quantitative research. The first of these
is adopting an instrument. This means that you will utilize an instrument that has been used by
well-known institutions or in reputable studies and publications. Some of the popular sources of
instruments include professional journals and websites, such as Tests in Print and the IRIS
digital repository. Adopting an instrument means that you do not have to spend time
establishing its validity and reliability since these have already been tested by its developers
and other researchers.

Sometimes, however, the available test does not generate the exact data that you want to obtain.
In this case, you may either modify an existing instrument or create your own.

As you develop your instrument, be guided by the instruments used in studies similar to yours.
Make sure, however, that the items in your instrument are aligned with your research
questions or objectives. Remember that inadequacies in your research instrument will yield
inaccurate data, thereby making the results of your study questionable.

Instrument Validity
Whether your instrument is adopted, modified, or self-created, it is necessary to ensure its
validity and reliability. Validity refers to the degree to which an instrument measures what it is
supposed to measure. For example, in measuring the speaking proficiency of students, speaking
performances have greater validity than multiple-choice tests. This is because multiple-choice
tests do not necessarily require students to demonstrate their speaking skills. Speaking
performances, on the other hand, oblige students to show their oral communication skills. Thus,
there is a guarantee that this is the variable being measured.

Validity has several types, namely, face validity, content validity, construct validity, concurrent
validity, and predictive validity.

1. An instrument has face validity when it “appears” to measure the variable being studied.
Hence, checking for face validity is a subjective process. It does not ensure that the instrument
has actual validity.

2. Content validity refers to the degree to which an instrument covers a representative sample
(or specific elements) of the variable to be measured. Similar to face validity, assessing content
validity is a subjective process, usually done with the help of a list of specifications provided by
experts in your field of study.

3. The third type of validity is known as construct validity. It is the degree to which an instrument
measures the variable as a whole. Thus, the instrument is able to detect what should exist
theoretically. A construct is often an intangible or abstract variable such as personality,
intelligence, or mood. If your instrument cannot detect this intangible construct, it is considered
invalid.

4. The last two types of validity are categorized as criterion validity. This refers to the degree to
which an instrument predicts the characteristics of a variable in a certain way. This means that
the instrument produces results similar to those of another instrument measuring the same
variable. Therefore, a correlation between the results obtained through this instrument and
another is ensured. Hence, criterion validity is evaluated through statistical methods.

Criterion validity can be classified as concurrent or predictive. An instrument has concurrent
validity when it is able to produce results similar to those of a test already validated in the past.
In some instances, concurrent validity is said to be ensured when two instruments are employed
simultaneously. An example of testing concurrent validity is checking whether an admission test
produces results similar to those of the National Achievement Test. On the other hand, an
instrument has predictive validity when it produces results similar to those of another instrument
that will be employed in the future. An example of testing predictive validity is employing a
college admission test in mathematics to predict the future performance of students in
mathematics.
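Since criterion validity is evaluated statistically, the usual check is a simple correlation between the two sets of scores. Below is a hedged sketch in Python using SciPy; the scores and their pairing with the National Achievement Test are hypothetical illustrations, not real data.

```python
# Sketch: checking concurrent validity by correlating scores from an
# admission test with scores on an already-validated test (e.g., the
# National Achievement Test). All scores are hypothetical.
from scipy.stats import pearsonr

admission_scores = [78, 85, 62, 90, 70, 88, 75, 82]
nat_scores       = [75, 88, 60, 92, 68, 85, 72, 80]

r, p_value = pearsonr(admission_scores, nat_scores)
print(f"r = {r:.2f}, p = {p_value:.4f}")
# A high positive r suggests the two instruments rank students similarly.
```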

Let us use a scenario to better understand the different types of instrument validity. Suppose
you are conducting a study on the writing skills of students. You first decide to use a 10-item
true-or-false test. This may ensure face validity, as it appears to measure one’s knowledge of
writing; however, it lacks content, construct, and criterion validity. The instrument lacks
construct validity since it does not measure actual writing skills such as content knowledge,
cohesiveness and organization, and facility with the language. These variables are considered
constructs since they are highly abstract. You may rework the test to focus on the grammar skills
of students. Still, this test lacks content validity, since it fails to cover other components of
writing skills, such as sentence structure, organization, and content relevant to the topic. To be
considered, these components must be covered in an actual writing test, thus addressing the
instrument’s lack of content validity. The test must also be able to predict the academic
performance of students in their English subjects; this is a measure of predictive validity. In
addition, the instrument must produce results similar to those of an English writing test that has
been previously administered. This will prove that the most recently administered writing test
has concurrent validity.

Instrument Reliability
Another important factor you need to consider when preparing or selecting instruments is their
reliability. Reliability refers to the consistency of the measures of an instrument; it is an aspect
involved in the accuracy of measurement. There are four types of reliability: test-retest
reliability, equivalent forms reliability, internal consistency reliability, and inter-rater reliability.

1. Test-retest reliability is achieved by administering an instrument twice to the same group of
participants and then computing the consistency of scores. It is often ideal to conduct the
retest after a short period of time (e.g., two weeks) in order to record a higher correlation
between the two administrations.
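As a sketch, test-retest reliability can be estimated as the correlation between the two administrations; the scores below are hypothetical. The same correlation approach can also be applied to equivalent forms reliability, discussed next, by correlating scores on the two forms.

```python
# Sketch: test-retest reliability as the correlation between two
# administrations of the same instrument, two weeks apart.
# All scores are hypothetical.
import numpy as np

first_admin  = [14, 18, 11, 20, 16, 13, 17, 19]
second_admin = [15, 17, 12, 19, 16, 14, 18, 18]

r = np.corrcoef(first_admin, second_admin)[0, 1]
print(f"Test-retest reliability: r = {r:.2f}")
```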

2. Equivalent forms reliability is measured by administering two tests that are identical in all
aspects except the actual wording of the items. In short, the two tests have the same coverage,
difficulty level, test type, and format. An example of a procedure involving equivalent forms
reliability is administering a pretest and a posttest.

3. Internal consistency reliability is a measure of how well the items in an instrument measure
the same construct. There are three ways of measuring internal consistency reliability. The
split-half coefficient (or split-half reliability) is obtained by administering a single instrument
aimed at measuring only one construct; upon computing the results, the items of the
instrument are divided (or “split”) into two sets, and the results for the two sets are then
compared with each other. Cronbach’s alpha measures reliability with respect to each item and
the construct being examined by the instrument. Lastly, the Kuder-Richardson formula tests
reliability for instruments of a dichotomous nature, such as yes-or-no tests.
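As an illustration, Cronbach’s alpha can be computed directly from its standard formula. The sketch below assumes a small hypothetical set of responses; for dichotomous (0/1) items, the same computation reduces to the Kuder-Richardson formula (KR-20).

```python
# Sketch: Cronbach's alpha from its standard formula,
#   alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores)).
# Rows are respondents, columns are items; the data are hypothetical.
import numpy as np

scores = np.array([
    [4, 3, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 5, 4],
    [3, 3, 2, 3],
    [4, 5, 4, 4],
])

k = scores.shape[1]                         # number of items
item_vars = scores.var(axis=0, ddof=1)      # variance of each item
total_var = scores.sum(axis=1).var(ddof=1)  # variance of total scores
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")
```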

4. Finally, there is inter-rater reliability, which measures the consistency of scores assigned by
two or more raters on a certain set of results. The kappa coefficient is one of the most popular
statistical tools for measuring inter-rater reliability. The higher the value of the kappa
coefficient, the more reliable the instrument is. A coefficient value of at least 0.70 indicates that
the instrument is reliable.
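For illustration, here is a minimal sketch computing Cohen’s kappa, one common form of the kappa coefficient, using scikit-learn; the two raters’ labels are hypothetical.

```python
# Sketch: inter-rater reliability via Cohen's kappa (scikit-learn).
# The category labels assigned by the two raters are hypothetical.
from sklearn.metrics import cohen_kappa_score

rater_1 = ["pass", "fail", "pass", "pass", "fail", "pass", "pass", "fail"]
rater_2 = ["pass", "fail", "pass", "fail", "fail", "pass", "pass", "fail"]

kappa = cohen_kappa_score(rater_1, rater_2)
print(f"Kappa = {kappa:.2f}")  # at least 0.70 is often taken as reliable
```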

Planning the Data Collection Procedure


Now that you have learned the important factors to consider in developing your research
instrument, you can plan the steps you will take in your actual data gathering. These steps
are typically clustered into three phases: before, during, and after data collection.

Before

1. Develop your data collection instruments and materials.

2. Seek permission from the authorities and heads of the institutions or communities where
you will conduct your study.

3. Select and screen the population using appropriate sampling techniques.

4. Train the raters, observers, experimenters, assistants, and other research personnel who may
be involved in data gathering.

5. Obtain informed consent from the participants. An informed consent form is a document that
explains the objectives of the study and the extent of the participants’ involvement in the
research. It also ensures the confidentiality of certain information about the participants and
their responses.

6. Pilot-test the instruments to determine potential problems that may occur when they are
administered.

During

1. Provide instructions to the participants and explain how the data will be collected.

2. Administer the instruments, and implement the intervention or treatment, if applicable.

3. As much as possible, utilize triangulation in your method. Triangulation is a technique for
validating data using two or more sources and methods.

After

1. Immediately encode or transcribe and archive your data.

2. Safeguard the confidentiality of your data.

3. Later, examine and analyze your data using the appropriate statistical tools.

Treatment in Experimental and Quasi-Experimental Studies


In experimental and quasi-experimental studies, a section that details the intervention or
treatment used must be incorporated. In this section, you need to clearly describe and
distinguish the procedures you used for both the treatment group and control group. Here are
some of the steps that you can take in describing the intervention procedure.

1. Write an introductory paragraph that contains background information relevant to the
experiment. Establish the context in which the experiment has been conducted, and state
the duration of implementing the procedure for both the control and treatment groups. For
instance, if your study has been performed in connection with a specific school subject,
mention this in your methodology. You also need to indicate, for instance, that your data
collection procedure will take place for one semester. Describe the key differences and
similarities between the control and treatment groups. For example, suppose you are
conducting a study on the effectiveness of a curriculum you originally developed. You may
state that the control group uses the standard curriculum, while the treatment group uses
the curriculum you have developed.

2. Extensively describe the procedure that you will use in the treatment group. If your study
will use a pretest-posttest design, explain how you will administer the pretests, implement
the intervention, and administer the posttests. A pretest-posttest design measures the
skills of the participants before and after implementing an intervention.

3. Clearly explain the procedure that you will use in the control group. Describe how you will
control the variables for this group. For instance, if your study will use a pretest-posttest
design, describe how you will administer the pretests and the posttests for the control
group.

4. You need to explain the basis for undertaking a particular step in your intervention. For
instance, suppose you are conducting a study on the linguistic proficiency of students. You
would like to implement a pretest-posttest design and plan to prohibit the participants from
using a dictionary. You need to explain the reason behind this decision. For example, you
may say that you would like to further challenge the vocabulary proficiency of the
participants.

Aside from the preceding steps, you can also use a table to further differentiate the control and
treatment groups from each other. In addition, you can use flowcharts to explain a complex
data collection procedure. Your data collection procedure must be detailed enough to enable
other researchers to replicate your study. Through this, the validity of your findings can be
further ensured.

Designing the Data Analysis Procedure


After planning your data collection, you can now proceed to designing your data analysis
procedure. In quantitative research, data analysis involves the use of statistical tests to
address your research questions or objectives. These statistical tests will examine the
relationship between the dependent variable and the independent variable. The data involving
these two variables are also known as bivariate data.
One of the important considerations in data analysis is identifying which statistical
information is the most important in your analysis. This can be done by determining whether
you will use a parametric or a non-parametric test. A parametric test rests on a number of
assumptions about the distribution of the data (i.e., the frequency of their occurrence). On the
other hand, a non-parametric test rests on few or no assumptions regarding the distribution of
the data. Parametric tests are often mathematical in nature and thus utilize more statistical
information. This also means that parametric tests can detect whether the independent
variable significantly affects the dependent variable.
Here are some points to consider when choosing statistical tests for your study.
1. Use parametric tests if you are using interval or ratio scales. Use non-parametric tests if
your measurement scale is ordinal or nominal.

2. Use parametric tests if your sample size is 30 or more per group. Use non-parametric tests
if your sample size is fewer than 30.

3. Use parametric tests if the distribution of your data is normal. Use non-parametric tests if
the distribution of your data deviates markedly from normality.
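To make points 1 to 3 concrete, here is a hedged sketch of how the normality-based choice might be automated in Python. The data are hypothetical, and the Shapiro-Wilk test used here is only one common normality check among several.

```python
# Sketch: choosing between a parametric and a non-parametric test
# based on a normality check (Shapiro-Wilk). Data are hypothetical.
from scipy.stats import shapiro, ttest_ind, mannwhitneyu

group_a = [72, 75, 78, 80, 69, 74, 77, 73, 76, 71]
group_b = [68, 70, 65, 74, 66, 69, 72, 67, 71, 64]

normal = all(shapiro(g).pvalue > 0.05 for g in (group_a, group_b))
if normal:
    stat, p = ttest_ind(group_a, group_b)     # parametric: independent t-test
else:
    stat, p = mannwhitneyu(group_a, group_b)  # non-parametric alternative
print(f"p = {p:.4f}")
```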

To find out if your data are normally distributed, check the values of the following using
available statistical software such as SPSS Statistics or Statistica:

a. Kurtosis, or the measure of the heaviness of the tails of the distribution, which
indicates the presence of numerous outliers in your data.

b. Skewness, or the lack of symmetry in the distribution of the data.

Kurtosis and skewness both help describe certain aspects of the data distribution. The closer
their values are to zero, the closer the data are to a normal distribution. Knowing the kurtosis
and skewness of your data will help you determine the right statistical technique to use in your
research.
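As a final illustration, skewness and kurtosis can be computed with SciPy; the sample data are hypothetical, and note that SciPy reports excess kurtosis, for which a normal distribution scores zero.

```python
# Sketch: computing skewness and excess kurtosis with SciPy as a
# quick normality check; values near zero suggest the data are close
# to a normal distribution. The sample data are hypothetical.
from scipy.stats import skew, kurtosis

data = [72, 75, 78, 80, 69, 74, 77, 73, 76, 71, 70, 79]

print(f"Skewness: {skew(data):.2f}")
print(f"Excess kurtosis: {kurtosis(data):.2f}")  # Fisher definition: normal = 0
```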

Source: Barrot, J. (2018). Practical research 2 for senior high school. Manila: C & E Publishing.
