
Article Outline

I. Dissemination of pilot/feasibility/exploratory research
II. Tricks of the trade for publishing pilot/feasibility/exploratory research
III. Reference

Barbara Resnick, PhD, CRNP, FAAN, FAANP


Pilot research is generally defined as small studies that test research protocols, measures, recruitment strategies and experiences, interventions or components of an intervention, and data analysis techniques. Pilot work is also referred to as feasibility research. The advantages of pilot or feasibility work are well known and focus on assuring that study-related activities are worthwhile, addressing a question or area of inquiry that affects enough individuals to make it valuable; doable or practical in real-world settings so that translation can occur; measurable with regard to the concepts proposed and the instruments being used; and likely to be successful in terms of recruitment, implementation of the intervention, and acceptance of and participation in study activities by those recruited. Essentially, pilots are done for process-related issues in research, such as estimating recruitment and retention rates for larger trials; for determining the need for resources, such as the staff hours required to complete the consent process; for management-related issues, such as obtaining and managing data; and for specific scientific issues, such as measurement or intervention feasibility and safety.

Pilot studies allow the researcher to obtain accurate information about how long it takes to recruit a participant. This is particularly important in research involving older adults with cognitive impairment. It is critical to anticipate the time required to work with a cognitively impaired individual to explain the study, complete an evaluation of the capacity to sign consent, and, if the individual cannot self-consent, obtain assent and contact a proxy to go through the consent process. Otherwise the research team will be left with insufficient resources to complete recruitment activities.

Likewise, pilot work allows researchers to gain an understanding of the challenges that may be encountered in data collection, such as the amount of time required, the participant's ability to understand and answer the questions posed, and/or the ability of evaluators to reliably observe participants when completing more objective measures. With the increased use of technology-driven measures for data collection and data entry (e.g., actigraphy, Fitbit, wireless electronic monitoring of medications), it is important to test how these devices will work in the field, particularly when they require the engagement of the study participant (e.g., wearing a device and returning it to the investigator).

Another way to conceptualize pilot work is as exploratory work to determine future areas of inquiry. This can be done, for example, through qualitative studies identifying the challenges individuals face in adhering to health behaviors, or describing the timing and methods of educational opportunities. Pilot work might also draw on data already being collected in clinical settings, as noted in the work of Brotemarkle et al in this supplement, and use those data to drive future research. Brotemarkle and her team noted that little was being done, or documented, related to pain management among older trauma patients before, during and after therapy sessions. Future work will focus on meeting that clinical need and implementing pain management techniques that expand beyond pharmaceutical interventions. Given the increased use of electronic medical records across all settings, the opportunity to use routine clinical data to identify areas for further inquiry or intervention is an exciting way to think about pilot/exploratory work.

From a funding perspective, pilot work certainly helps assure the funder that the study can be done and is worth the investment. Moreover, completion of a pilot shows that the research team can work together to successfully complete a project, and has done so. Publication of the pilot findings provides further support for the value of the work.

Unfortunately, there are some risks to completing pilot work. The first is that, without substantial resources, the full intervention may not be implemented, or may not be implemented in the way fully intended. Thus there is a risk that the intervention appears ineffective or that the researcher is unable to recruit the proposed number of individuals. Conversely, it is possible that a small, focused pilot creates a false sense of confidence that the research team can easily recruit participants because the five needed for the pilot were quickly obtained. This can have implications for a larger study and ultimately result in not having sufficient resources to get the work done. Pilot testing also may use up some of the potential study sites or participants. It is generally recommended that pilot participants and settings NOT be used in subsequent larger trials, as this can create bias: these individuals have already been exposed to the intervention, the measures, or whatever aspect of the study is being pilot tested.

Dissemination of pilot/feasibility/exploratory research



So, given the value of pilot work, why is it so darn hard to get this work published? Much of the reason is that researchers are so focused on publishing effective, positive intervention results that they tend to write up a small pilot study as if it were an efficacy trial. Given the purpose and focus of pilot work, these studies are not designed to test efficacy and are certainly not powered to detect differences between groups, or even differences over time. Another major challenge in getting this work published is that the researcher tries to justify the study by indicating that a pilot was done because of a lack of resources (i.e., funding) or a lack of time to fully test the intervention in a true randomized efficacy trial. In addition, some researchers try to support a small study by referencing other work that used similarly small samples and demonstrated efficacy. Science, not convenience or resources, should drive the rationale for the work being done.

Tricks of the trade for publishing pilot/feasibility/exploratory research

There is no bad study, and no study that should not be disseminated for others to learn from. The study, however, needs to be written in a way that optimizes the findings. First, it should be clearly stated in the title and throughout the paper that this is a pilot/feasibility study and is not being done to establish efficacy. The research team must be cautious not to over-interpret their findings. The introduction should focus on the rationale for the study, building off prior work in the area, and on why a pilot will add to what is currently known. If the study is testing an intervention, it should be clear that the focus is on the feasibility of the intervention, and it should be clearly articulated how feasibility will be tested. The Treatment Fidelity Model established by the Behavior Change Consortium1 provides a useful framework to guide the researcher in a comprehensive evaluation of treatment fidelity by evaluating its five components (Table 1). Hypotheses around treatment fidelity should be stated, or the researcher should describe how each aspect of treatment fidelity will be evaluated.

Table 1. Treatment fidelity plan (treatment fidelity focus and description of treatment fidelity indicators).

Design: Focuses on whether the study is consistent with the underlying theory and whether those in the treatment group received the treatment while those in the control group were exposed only to the control intervention/routine care.

Training: Addresses skill acquisition and maintenance in those providing the intervention. This can be guided by a training manual and by ongoing observation of the interventionists.

Delivery: Assesses the interventionist's ability to present the intervention as intended.

Receipt: Focuses on whether or not the participant received and understood the intervention as intended. This can be evaluated by observing the participant perform a given behavior or through paper-and-pencil testing.

Enactment: Establishes whether participants are able to carry out the activities taught in their own environment. For example, if the intervention was implementation of a pill box as a medication reminder, this might be evidenced by the individual using the pill box daily.


As in all research, the manuscript describing the study should include a methods section that provides information on the study design, sample, procedure/intervention, measures, and data analysis, followed by the results and discussion. The discussion should focus on feasibility-related issues and not on efficacy. If by chance the intervention is effective, careful consideration should be given to the limitations of the work and to possible bias. For example, the intervention may have been done with a very small and select group of highly motivated volunteers, or there may not be a control group, in which case the improvement over time may be a testing effect. Worry less about selling the intervention or idea and more about learning from the experiences gained during study implementation.

This supplement is focused on pilot work across a variety of areas, including measurement development and testing, piloting of interventions, and simply trying out innovative approaches such as the use of robots to provide wellness visits or to manage congestive heart failure in the community. These papers are not intended to demonstrate efficacy. Rather, the goal is to share these approaches and attempts, learn from the challenges experienced by others, identify next steps, and work towards new ways in which to provide care to older adults. I applaud these researchers and hope that you too will find their work helpful and that you can appreciate it from the perspective of process rather than outcomes.

Reference

1. Bellg, A., Borrelli, B., Resnick, B., et al. Enhancing treatment fidelity in health behavior change studies: best practices and recommendations from the Behavior Change Consortium. Health Psychol. 2004; 23: 452-456.
