
CLES + T reported the highest Cronbach's α, of 0.82–0.93 with excellent quality in Watson et al. (2014) and 0.95 (0.80–0.96) with good quality in Tomietto et al. (2012). CLES and SECEE have demonstrated similar internal consistency coefficients, but with poor methodological quality (De Witte et al., 2011; Saarikoski and Leino-Kilpi, 2002).

4.3.3. Reliability

Taking reliability as the proportion of the total variance in the data that is due to true differences among learning environments, as well as the extent to which scores are the same for repeated measurements (Mokkink et al., 2010), only three studies (Hosoda, 2006; Gustafsson et al., 2015; Tomietto et al., 2009) performed a test-retest evaluation, as reported in Table 2, with poor or fair methodological quality.

4.3.4. Measurement error

Measurement error, which is the systematic and random error in a respondent's score not attributable to true changes in the construct under measurement (Mokkink et al., 2010), was reported only by Gustafsson et al. (2015) in validating the CLES + T, as reported in Table 2.

4.3.5. Structural validity

The majority of studies (21 out of 26) assessed structural validity, as required by the COSMIN procedures. More precisely, structural validity is the degree to which the scores of an instrument are an adequate reflection of the dimensionality of the construct to be measured (Mokkink et al., 2010). Structural validity findings, when reported, were concordant with the construct (dimensions) of the instrument, but the methodological quality that emerged was poor or fair in 14 of 21 studies, due to insufficient sample size, no explanation of the treatment of missing items, and lack of precision in reporting the analysis performed. The highest explained variance estimated by studies included in this systematic review was 76.9% (Burrai et al., 2012) and 71.2% (De Witte et al., 2011) for the CLES tool, and 72.8% (Bergjan and Hertel, 2013) for the CLES + T tool. Moreover, when considering only the original versions of the tools, the majority reported good explained variance (from 50% to 60%), but the methodological quality was poor or fair; in fact, only six studies estimated this psychometric property by adopting good methodology: Newton et al. (2010) reported an explained variance of 51% in validating the CLEI; Salamonson et al. (2011) reported an explained variance of 63.3% for the CLEI-19; Saarikoski and Leino-Kilpi (2002) and Saarikoski et al. (2005) reported 64% for the CLES instrument, while Tomietto et al. (2012) and Papastavrou et al. (2015) reported 67.2% and 67.4%, respectively, in validating the CLES + T. Finally, only one study estimated structural validity using an excellent-quality methodological approach, achieving 58.2% of variance in validating the CLES + T (Watson et al., 2014). As reported in Table 2, some authors (eg, Hosoda, 2006; Papastavrou et al., 2010, 2015; Saarikoski and Leino-Kilpi, 2002) used Exploratory Factor Analysis (EFA); others used Principal Component Analysis (PCA) or both (eg, Newton et al., 2010; Salamonson et al., 2011), whilst still others (eg, Bos et al., 2012; Tomietto et al., 2012; Vizcaya-Moreno et al., 2015) used Confirmatory Factor Analysis in addition to EFA and PCA. More recently, D'Souza et al. (2015) used Structural Equation Modeling. Moreover, Hosoda (2006) considered only students' questionnaires when performing the EFA, whereas students' and staff nurses' data were considered together by Dunn and Burnett (1995). Differently, Chuan and Barnett (2012) did not specify whether they had considered the data collected from students and from educators differently.
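The Cronbach's α coefficients reported throughout this section summarise internal consistency as a function of the item variances and the total-score variance. A minimal sketch of the computation, on synthetic ratings rather than data from any included study:

```python
def cronbach_alpha(scores):
    """Cronbach's alpha for rows of item ratings (one row per respondent).

    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    """
    k = len(scores[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = [var([row[i] for row in scores]) for i in range(k)]
    total_var = var([sum(row) for row in scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Hypothetical 3-item scale: each respondent rates all items identically,
# so the items are perfectly consistent and alpha reaches its maximum of 1.0.
ratings = [[5, 5, 5], [3, 3, 3], [1, 1, 1], [4, 4, 4]]
print(round(cronbach_alpha(ratings), 2))  # 1.0
```

Ranges such as the 0.82–0.93 reported for the CLES + T arise from applying this statistic factor by factor as well as to the total score.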
Thus, findings regarding the structural validity estimations are not comparable, given the differences in the evaluation analyses performed and in the quality of the methods used.

4.3.6. Hypotheses testing

Hypotheses testing, assessed as expected mean differences between groups or as expected correlations between instrument scores and other variables, such as the scores of other instruments (Mokkink et al., 2010), was estimated in eight studies out of 26, with poor or fair methodological quality (Table 2). According to the findings, three levels of variables have been considered in hypotheses testing to date:

- Individual variables (CLEI-19): as differences between worker and non-worker students (Salamonson et al., 2011);
- Educational variables (CLEDI; CLEI-19; CLE instrument; CLES; CLES + T; SECEE): as differences with regard to the academic year attended, placement duration, types of shifts, types of supervisory relationships, number of briefing and debriefing meetings with the nurse teacher, and differences between students' and clinical tutors' scores (Chuan and Barnett, 2012; Hosoda, 2006; Papastavrou et al., 2010; Saarikoski et al., 2002; Salamonson et al., 2011; Sand-Jecklin, 2009; Warne et al., 2010);
- Macro-variables (CLES; CLES + T): as differences in the perceptions of students in different European countries, in types of higher educational institutions (university colleges vs. polytechnics), or in higher educational institutions established for more or less than 20 years, thus with a different experience in nursing education (Saarikoski et al., 2002; Warne et al., 2010).

4.3.7. Convergent validity

Convergent validity, defined by the COSMIN tool as hypotheses testing measured with regard to the expected relations with other instruments (Mokkink et al., 2010), was estimated only by Chan (2001, 2003), who did not specify which comparative instruments were considered and therefore applied poor methodological quality (Table 2). Moreover, the correlation values that emerged were poor, from 0.39 to 0.45 (Chan, 2001, 2003).

4.3.8. Criterion validity

Criterion validity, a comparison of the tool under validation with an acknowledged gold-standard instrument (Mokkink et al., 2010), was estimated in only two studies, which reported good correlations: 0.93 between the CLES and the CLE scale (Saarikoski et al., 2005) and 0.76 between the CLEDI and the CLES (Hosoda, 2006), both applying fair methodological quality (Table 2).

4.3.9. Cross-cultural validity

Although 14 translated instruments were used, only seven studies (Bergjan and Hertel, 2013; De Witte et al., 2011; Henriksen et al., 2012; Johansson et al., 2010; Tomietto et al., 2009, 2012; Vizcaya-Moreno et al., 2015) assessed cross-cultural validity, adopting methodological quality from poor to fair (Table 2). In all studies, tools were forward-backward translated only once, and only De Witte et al. (2011), Henriksen et al. (2012) and Vizcaya-Moreno et al. (2015) performed a pre-test with the translated instrument.

Table 2 (Continued)

Instrument | Authors, year | Internal consistency: Cronbach's α total score (range across factors), quality | Structural validity: explained %, method(a), quality | Hypotheses testing, quality
SECEE | Sand-Jecklin (2009) | 0.94 (0.82–0.94), ++ | 59, EFA, CFA, ++ | Yes, ++
CLE instrument | Chuan and Barnett (2012) | 0.867 (0.658–0.875), + | 54, PCA, + | Yes, ++
Modified CLES + T | D'Souza et al. (2015) | 0.84, + | SEqM, ++ | Yes, +

Legend. + poor; ++ fair; +++ good; ++++ excellent; CFA Confirmatory Factor Analysis; EFA Explorative Factor Analysis; ICC Intraclass Correlation Coefficient; PCA Principal Component Analysis; SDC Smallest Detectable Change; SEM Standard Error of Measurement; SEqM Structural Equation Model.
a When CFA was used, the data have not been reported here in the interest of summarisation; however, the data are available in the included studies or from the authors of this review.
b Data not reported in the study.
c In the content validity evaluation, nursing students were also involved.

I. Mansutti et al. / International Journal of Nursing Studies 68 (2017) 60–72

5. Discussion

5.1. Clinical learning environment instruments
To the best of our knowledge, this is the first psychometric systematic review of instruments evaluating clinical learning environment quality in nursing education. In our systematic review, a total of 26 studies emerged that estimated the reliability and validity of eight instruments in 16 different countries, mainly across Europe. The first instrument underwent the validation process with data collected in 1993 (Dunn and Burnett, 1995), whereas the latest was based on data collected from 2011 to 2012 (Vizcaya-Moreno et al., 2015), indicating that this research field spans over 20 years, a period during which there has been a tremendous amount of change in nursing programmes, hospital environments and student profiles (Anderson, 2010).

Two different strategies of tool development, and thus first- and second-generation instruments, can be identified. The first were conceptually based (CLE scale, CLEDI, CLES, CLES + T, SECEE) and developed from prominent learning theories mainly established in the 1980s and 1990s. The second-generation instruments were developed from previously well-established instruments in clinical environments (eg, the modified CLES + T based on the CLE and CLES + T, D'Souza et al., 2015) or in other learning environments (eg, the CLEI based on the University Classroom Environment Inventory). In addition, assessing the validity and reliability of well-established instruments in different countries, as occurred for the CLES + T scale, which was validated in >10 countries (Bos et al., 2012; Warne et al., 2010), has emerged as a trend in recent years, thus developing an international framework capable of accumulating evidence on instrument validity and of comparing data.

The instruments that emerged are composed of two (Salamonson et al., 2011) to 11 factors (D'Souza et al., 2015) and of 19 (Salamonson et al., 2011) to 57 items (D'Souza et al., 2015). Some factors are similar across instruments, such as 'Supervisory relationship' and 'Ward atmosphere', whereas the 'Hierarchy/ritual' factor has appeared only in the recently modified CLES + T, thus reflecting cultural commonalities and differences in health-care settings that may affect the perceptions of students (D'Souza et al., 2015).

The shortest instrument that emerged is the CLEI-19 (Salamonson et al., 2011), composed of two factors ('Satisfaction' and 'Personalisation') including 19 items, whilst the modified CLES + T is the most complex, composed of 57 items and 11 factors (D'Souza et al., 2015). In general, instruments have increased their number of factors and items over the years, possibly due to the increased complexity of clinical learning environments (Palese et al., 2016).

Furthermore, homogeneity has emerged in the metrics: the majority have used a 5-point Likert scale to express the evaluation, from strongly disagree to strongly agree (eg, Hosoda, 2006;
D'Souza et al., 2015; Dunn and Burnett, 1995) or from totally disagree to totally agree (De Witte et al., 2011). However, Likert scales with a mid-point may introduce a central tendency bias, in that participants may avoid extreme response categories. Only Chan (2001, 2003) and Newton et al. (2010) used a 4-point Likert scale, whereas Burrai et al. (2012) used a 6-point Likert scale. In addition, agree/disagree Likert scales may introduce an acquiescence bias (participants may agree with statements as presented), social desirability bias, and lack of reproducibility (Jamieson, 2004; Nadler et al., 2015).

5.2. Population and settings

The studies involved from 42 (Gustafsson et al., 2015) to 1903 (Warne et al., 2010) students; participants were recruited from a single nursing programme (eg, Hosoda, 2006) or from different programmes located in different countries (eight in Warne et al., 2010). The largest study involved 2768 participants (Sand-Jecklin, 2009), but the amount of the sample composed exclusively of nursing students was not declared. Participants were mainly female, and this may have introduced a gender bias that should be addressed in the future, as recent changes documented in several countries show an increased proportion of males among nursing students (Loughrey, 2008). The majority of students were in the 2nd or 3rd year of their programme, and no studies involved 4th-year students, who have an intense experience in clinical practice, as occurs in Spain (Zabalegui and Cabrera, 2009). There is a need for future research to include entire cohorts of students, who may have different expectations and perceptions, and also nursing programmes based on four years of education.

The sampling method used in the studies was not always reported, and the response rate varied from 41.6% to 100% when students were volunteers. Although greater accuracy in the sampling methods is suggested, the low response rates may reflect dissatisfaction among students and a lack of desire to participate, due to fear of the consequences (eg, impact on the clinical competences evaluation). Given that this may affect perceptions, future studies should also specify when students completed the instrument, before or after their clinical competence evaluation.

In addition, current instruments have mostly been subjected to validation processes in public hospitals, in specific wards such as medical units and surgery. With the transition in the focus of nursing education from hospitals to communities and primary health care settings, more emphasis should be given to validating instruments that are capable of measuring clinical learning
environments across different settings with different missions (private/public, academic or not) and different patient profiles (eg, Accident and Emergency departments vs. nursing homes).

5.3. Methodological quality evaluation and comparison of the psychometric properties

A varying number of psychometric properties have been estimated in the included studies, from one to six. Furthermore, the methodological quality of these estimations was heterogeneous, with the majority from poor to fair. Therefore, limited comparison is possible across the estimated properties of the available instruments, threatening the identification of the most reliable and valid tool for evaluating clinical learning environments.

With regard to content validity, concepts and constructs were rarely assessed for their significance (eg, only two studies calculated the Content Validity Index: De Witte et al., 2011; D'Souza et al., 2015) and were rarely judged for their relevance to, and comprehensiveness for, the target population (Mokkink et al., 2010). Nursing students were not involved in the majority of the studies, thus resulting in a fundamental flaw. In the process of tool development, all authors took account of expert opinions (eg, nurse educators), thus failing to consider that the clinical learning experience is subjective and that it is important to elicit the elements that influence the quality of the experience as perceived by students. This gap should be addressed in future studies.

A few studies estimated reliability, although test-retest procedures may be easier with nursing students given their availability. However, the duration of the clinical rotations, as well as their frequency, may have threatened the potential for undertaking a second evaluation in the same unit after one or two weeks, when students have already moved on to their next learning experience. Furthermore, measurement error was estimated in only one study (Gustafsson et al., 2015); as a consequence, comparisons of reliability and measurement error across different instruments are limited.

Internal consistency and structural validity have been estimated for the majority of the tools, but with different qualities of methodological approach, compromising comparisons across instruments in this case as well. Structural validity was evaluated using different statistical analyses. Specifically, Bergjan and Hertel (2013), Burrai et al. (2012) and De Witte et al. (2011) obtained the highest proportions of explained variance when validating the CLES and the CLES + T. Nevertheless, they all used modified instruments, changing, removing or adding some items and using a different Likert scale, thus threatening the ability to make comparisons with the original tools. Moreover, their structural validity values were also affected by the poor methodological quality adopted.
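Test-retest reliability of the kind discussed here is typically summarised with an intraclass correlation coefficient (the ICC of Table 2's legend). A minimal one-way random-effects sketch, using made-up scores rather than data from the included studies:

```python
def icc_oneway(ratings):
    """One-way random-effects ICC for test-retest data.

    ratings: one row per subject, each row holding k repeated measurements.
    ICC(1) = (MSB - MSW) / (MSB + (k - 1) * MSW), from one-way ANOVA.
    """
    n = len(ratings)
    k = len(ratings[0])
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    # between-subjects and within-subjects mean squares
    ms_between = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    ms_within = sum((x - m) ** 2
                    for row, m in zip(ratings, row_means)
                    for x in row) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Hypothetical test-retest scores for three students: identical scores on
# both occasions give perfect reliability.
print(icc_oneway([[4, 4], [2, 2], [5, 5]]))  # 1.0
```

The short retest window this statistic assumes, typically one to two weeks, is exactly what the clinical-rotation schedules described above make difficult to arrange.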
Finally, convergent and criterion validity have rarely been assessed. Whereas, in the case of the first generation of instruments (eg, CLE scale: Dunn and Burnett, 1995), the lack of available knowledge in the field possibly threatened comparison with gold standards, since valid and reliable tools have now been documented, an increased tendency to evaluate convergent and criterion validity is expected. Without criterion validity evaluation, it is not certain that instruments evaluate the same constructs as other tools (McDowell, 2006).

5.4. Limitations

Several limitations affect this systematic review. Aiming to develop a focused search strategy, only two databases were searched (MedLine, CINAHL), in accordance with their relevance to the nursing literature; only those MeSH terms accepted in the dictionaries of the above-mentioned databases were considered, and thus relevant text words such as scale, tool and measurement were not considered; in addition, only studies published in English or Italian were included. Furthermore, Boolean operators OR/AND were not used within each element of the Population/Intervention and Outcome elements. Thus, other instruments may have been developed and circulated as grey literature, as well as in different languages, therefore introducing a potential publication bias.

Second, the assessment of the studies was based on COSMIN guidelines (Mokkink et al., 2010), which were developed for health status measures and not specifically for nursing education instruments. In addition, the guidelines have only recently been established, after the majority of the tools had been validated; thus, in reporting their findings, authors may not have been supported by the methodological quality recommendations included in these guidelines. However, multiple contacts with authors, aiming to collect unpublished data with regard to some properties, were made. Moreover, the COSMIN guidelines apply the worst-score-counts method (Mokkink et al., 2010); thus, instead of an average evaluation of the trends, the approach emphasises problems in the measurement of psychometric properties. In addition, responsiveness, or the ability of an instrument to detect change in the measured construct over time (as required by the COSMIN procedure), was not evaluated in this review due to the absence of longitudinal studies among those included.

6. Conclusions

Eight instruments evaluating the quality of clinical learning environments in nursing education have been exposed to a validation process to date. First-generation instruments have been developed from different learning theories, whereas second-generation instruments have been developed from the first generation, mixing, revising and integrating different
instruments already validated. In the studies included in this review, not all relevant psychometric properties have been estimated, and the methodological approaches used are often poor or fair. In addition, a lack of homogeneity in reporting participant and setting data, with a large amount of missing data within the studies, has emerged, thus threatening the external validity of the instruments.

There is a need to address future research in the field by completing the processes of validation undertaken to date for the available instruments, using higher-quality methods. New instruments developed should also estimate all psychometric properties with increasing quality of methodology. A minimum data set regarding students (eg, duration of the clinical placement; tutorial models, for example one-to-one or peer education with other students; their status, supernumerary or not, paid or not, alone or with other students) and settings (private, public, hospital, community units) is also strongly recommended in future studies, aiming to increase the external validity of the findings.

Conflict of interest

None.

Funding

None.

Ethical approval

None.

Acknowledgements

We thank all corresponding authors of the included studies (Bos E., Burrai F., De Witte N., D'Souza M.S., Gustafsson M., Henriksen N., Johansson U.B., Newton J.M., Papastavrou E., Tomietto M., Vizcaya-Moreno M.F., Saarikoski M., Salamonson Y. and Seaton P.) for their cooperation. We are grateful for their generosity and the time devoted to improving the quality and consistency of the assessment performed.

References

Anderson, B., 2010. A perspective on changing dynamics in nursing over the past 20 years. Br. J. Nurs. 19 (October (18)), 1190–1191.
Benner, P., 2003. From Novice to Expert: Excellence and Power in Clinical Nursing Practice. McGraw-Hill, Milan.
Bergjan, M., Hertel, F., 2013. Evaluating students' perception of their clinical placements: testing the clinical learning environment and supervision and nurse teacher scale (CLES + T scale) in Germany. Nurse Educ. Today 33 (November (11)), 1393–1398.
Bloom, B.S., 1964. Stability and Change in Human Characteristics. John Wiley & Sons, New York.
Bos, E., Alinaghizadeh, H., Saarikoski, M., Kaila, P., 2012. Validating the 'clinical learning environment, supervision and nurse teacher' CLES + T instrument in primary healthcare settings using confirmatory factor analysis. J. Clin. Nurs. 21 (June (11–12)), 1785–1788.
Brown, J.S., Collins, A., Duguid, P., 1989. Situated cognition and the culture of learning. Educ. Res. 18, 32–42.
Burrai, F., Cenerelli, D., Sebastiani, S., Arcoleo, F., 2012. Reliability analysis and structure factorial exploration of Clinical Learning Environment of Supervision (CLES). Scenario 29 (December (4)), 41–47.
Chan, D., 2001. Combining qualitative and quantitative methods in assessing hospital learning environments. Int. J. Nurs. Stud. 38 (August (4)), 447–459.
Chan, D., 2003. Validation of the clinical learning environment inventory. West J. Nurs. Res. 25 (August (5)), 519–532.
Chuan, O.L., Barnett, T., 2012. Student, tutor and staff nurse perceptions of the clinical learning environment. Nurse Educ. Pract. 12 (July (4)), 192–197.
D'Souza, M.S., Karkada, S.N., Parahoo, K., Venkatesaperumal, R., 2015. Perception of and satisfaction with the clinical learning environment among nursing students. Nurse Educ. Today 35 (June (6)), 833–840.
De Witte, N., Labeau, S., De Keyzer, W., 2011. The clinical learning environment and supervision instrument (CLES): validity and reliability of the Dutch version (CLES + NL). Int. J. Nurs. Stud. 48 (May (5)), 568–572.
Dewey, J., 1933. How We Think. D.C. Heath and Company, Lexington, MA.
Dunn, S.V., Burnett, P., 1995. The development of a clinical learning environment scale. J. Adv. Nurs. 22 (December (6)), 1166–1173.
Flott, E.A., Linden, L., 2016. The clinical learning environment in nursing education: a concept analysis. J. Adv. Nurs. 72 (March (3)), 501–513.
Fraser, B.J., Treagust, D.F., Dennis, N.C., 1986. Development of an instrument for assessing classroom psychosocial environment at universities and colleges. Stud. Higher Educ. 11 (1), 43–54.
Gustafsson, M., Blomberg, K., Holmefur, M., 2015. Test-retest reliability of the Clinical Learning Environment, Supervision and Nurse Teacher (CLES + T) scale. Nurse Educ. Pract. 15 (4), 253–257.
Henderson, A., Briggs, J., Schoonbeek, S., Paterson, K., 2011. A framework to develop a clinical learning culture in health facilities: ideas from the literature. Int. Nurs. Rev. 58, 196–202.
Henriksen, N., Normann, H.K., Skaalvik, M.W., 2012. Development and testing of the Norwegian version of the Clinical Learning Environment, Supervision and Nurse Teacher (CLES + T) evaluation scale. Int. J. Nurs. Educ. Scholarsh. 9 (September (18)).
Hooven, K., 2014. Evaluation of instruments developed to measure the clinical learning environment: an integrative review. Nurse Educ. 39 (November-December (6)), 316–320.
Hosoda, Y., 2006. Development and testing of a Clinical Learning Environment Diagnostic Inventory for baccalaureate nursing students. J. Adv. Nurs. 56 (December (5)), 480–490.
Jamieson, S., 2004. Likert scales: how to (ab)use them. Med. Educ. 38, 1217–1218.
Johansson, U.B., Kaila, P., Ahlner-Elmqvist, M., Leksell, J., Isoaho, H., Saarikoski, M., 2010. Clinical learning environment, supervision and nurse teacher evaluation scale: psychometric evaluation of the Swedish version. J. Adv. Nurs. 66 (September (9)), 2085–2093.
Knowles, M., 1990. The Adult Learner: A Neglected Species, 4th ed. Gulf Publishing, Houston.
Kolb, D.A., Fry, R., 1975. Towards an applied theory of experiential learning. In: Cooper, C.L. (Ed.), Theories of Group Processes. John Wiley & Sons, London, pp. 33–57.
Levett-Jones, T., Lathlean, J., 2009. The ascent to competence conceptual framework: an outcome of a study of belongingness. J. Clin. Nurs. 18 (October (20)), 2870–2879.
Loughrey, M., 2008. Just how male are male nurses? J. Clin. Nurs. 17, 1327–1334.
McDowell, I., 2006. Measuring Health. A Guide to Rating Scales and Questionnaires, 3rd ed. Oxford University Press, New York.
Moher, D., Liberati, A., Tetzlaff, J., Altman, D.G., PRISMA Group, 2009. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. BMJ 21 (July (339)), b2535.
Mokkink, L.B., Terwee, C.B., Patrick, D.L., Alonso, J., Stratford, P.W., Knol, D.L., Bouter, L.M., de Vet, H.C., 2010. The COSMIN checklist for assessing the methodological quality of studies on measurement properties of health status measurement instruments: an international Delphi study. Qual. Life Res. 19 (May (4)), 539–549.
Moos, R.H., Trickett, E.J., 1974. Classroom Environment Scale Manual, 1st ed. Consulting Psychologists Press, Palo Alto, CA.
Moss, R., Rowles, C.J., 1997. Staff nurse job satisfaction and management style. Nurs. Manage. 28 (January (1)), 32–34.
Nadler, J.T., Weston, R., Voyles, E.C., 2015. Stuck in the middle: the use and interpretation of mid-points in items on questionnaires. J. Gen. Psychol. 142 (2), 71–89.
Newton, J.M., Jolly, B.C., Ockerby, C.M., Cross, W.M., 2010. Clinical learning environment inventory: factor analysis. J. Adv. Nurs. 66 (June (6)), 1371–1381.
Oliver, R., Endersby, C., 1994. Teaching and Assessing Nurses: A Handbook for Preceptors. Bailliere Tindall, London.
Orton, H.D., 1981. Ward Learning Climate. Royal College of Nursing, London.
Orton, H.D., 1983. Ward learning climate and student nurse response. In: Davis, B.D. (Ed.), Research into Nurse Education. Croom Helm, London.
Palese, A., Destrebecq, A., Terzoni, S., Grassetti, L., Altini, P., Bevilacqua, A., Brugnolli, A., Benaglio, C., Dalponte, A., De Biasio, L., Dimonte, V., Gambacorti, B., Fasci, A., Grosso, S., Mansutti, I., Mantovan, F., Marognolli, O., Montalti, S., Nicotera, R., Perli, S., Randon, G., Stampfl, B., Tollini, M., Canzan, F., Zannini, L., Saiani, L., 2016. Validation of the Italian Clinical Learning Environment Instrument (SVIAT). Assist. Inferm. Ric. 35 (January-March (1)), 29–35.
Papastavrou, E., Lambrinou, E., Tsangari, H., Saarikoski, M., Leino-Kilpi, H., 2010. Student nurses' experience of learning in the clinical environment. Nurse Educ. Pract. 10 (May (3)), 176–182.
Papastavrou, E., Dimitriadou, M., Tsangari, H., 2015. Psychometric testing of the Greek version of the Clinical Learning Environment-Teacher (CLES + T). Glob. J. Health Sci. 8 (September (5)), 49–57.
Papathanasiou, I.V., Tsaras, K., Sarafis, P., 2014. Views and perceptions of nursing students on their clinical learning environment: teaching and learning. Nurse Educ. Today 34 (January (1)), 57–60.
Quinn, F.M., 1995. The Principles and Practice of Nurse Education, 3rd ed. Chapman and Hall, London.
Saarikoski, M., Leino-Kilpi, H., 2002. The clinical learning environment and supervision by staff nurses: developing the instrument. Int. J. Nurs. Stud. 39 (March (3)), 259–267.
Saarikoski, M., Leino-Kilpi, H., Warne, T., 2002. Clinical learning environment and supervision: testing a research instrument in an international comparative study. Nurse Educ. Today 22 (May (4)), 340–349.
Saarikoski, M., Isoaho, H., Leino-Kilpi, H., Warne, T., 2005. Validation of the clinical learning environment and supervision scale. Int. J. Nurs. Educ. Scholarsh. 2 (Article 9).
Saarikoski, M., Isoaho, H., Warne, T., Leino-Kilpi, H., 2008. The nurse teacher in clinical practice: developing the new sub-dimension to the Clinical Learning Environment and Supervision (CLES) Scale. Int. J. Nurs. Stud. 45 (August (8)), 1233–1237.
Salamonson, Y., Bourgeois, S., Everett, B., Weaver, R., Peters, K., Jackson, D., 2011. Psychometric testing of the abbreviated Clinical Learning Environment Inventory (CLEI-19). J. Adv. Nurs. 67 (December (12)), 2668–2676.
Sand-Jecklin, K., 2000. Evaluating the student clinical learning environment: development and validation of the SECEE inventory. Southern Online J. Nurs. Res. 1 (4).
Sand-Jecklin, K., 2009. Assessing nursing student perceptions of the clinical learning environment: refinement and testing of the SECEE inventory. J. Nurs. Meas. 17 (3), 232–246.
Schön, D.A., 1983. The Reflective Practitioner: How Professionals Think in Action. Basic Books, New York.
Soemantri, D., Herrera, C., Riquelme, A., 2010. Measuring the educational environment in health professions studies: a systematic review. Med. Teach. 32 (12), 947–952.
Tomietto, M., Saiani, L., Saarikoski, M., Fabris, S., Cunico, L., Campagna, V., Palese, A., 2009. Assessing quality in clinical educational settings: Italian validation of the clinical learning environment and supervision (CLES) scale. G. Ital. Med. Lav. Ergon. 31 (Suppl. 3 B), B49–55 (Jul-Sep).
Tomietto, M., Saiani, L., Palese, A., Cunico, L., Cicolini, G., Watson, P., Saarikoski, M., 2012. Clinical learning environment and supervision plus nurse teacher (CLES + T) scale: testing the psychometric characteristics of the Italian version. G. Ital. Med. Lav. Ergon. 34 (Suppl. 2 B), B72–80 (Apr-Jun).
Vizcaya-Moreno, M.F., Pérez-Cañaveras, R.M., De Juan, J., Saarikoski, M., 2015. Development and psychometric testing of the Clinical Learning Environment, Supervision and Nurse Teacher evaluation scale (CLES + T): the Spanish version. Int. J. Nurs. Stud. 52 (January (1)), 361–367.
Warne, T., Johansson, U.B., Papastavrou, E., Tichelaar, E., Tomietto, M., Van den Bossche, K., Moreno, M.F., Saarikoski, M., 2010. An exploration of the clinical learning experience of nursing students in nine European countries. Nurse Educ. Today 30 (November (8)), 809–815.
Watson, P.B., Seaton, P., Sims, D., Jamieson, I., Mountier, J., Whittle, R., Saarikoski, M., 2014. Exploratory factor analysis of the Clinical Learning Environment, Supervision and Nurse Teacher Scale (CLES + T). J. Nurs. Meas. 22 (1), 164–180.
Wilson-Barnett, J., Butterworth, T., White, E., Twinn, S., Davies, S., Riley, L., 1995. Clinical support and the Project 2000 nursing student: factors influencing this process. J. Adv. Nurs. 21, 1152–1158.
Zabalegui, A., Cabrera, E., 2009. New nursing education structure in Spain. Nurse Educ. Today 29 (July (5)), 500–504.