
Cogn Process (2014) 15:143–157

DOI 10.1007/s10339-013-0586-9

RESEARCH REPORT

The representation of conceptual knowledge: visual, auditory, and olfactory imagery compared with semantic processing

Massimiliano Palmiero · Rosalia Di Matteo · Marta Olivetti Belardinelli

Received: 6 December 2012 / Accepted: 11 October 2013 / Published online: 12 December 2013
© Marta Olivetti Belardinelli and Springer-Verlag Berlin Heidelberg 2013

Abstract Two experiments comparing imaginative processing in different modalities with semantic processing were carried out to investigate whether conceptual knowledge can be represented in different formats. Participants were asked to judge the similarity between visual images, auditory images, and olfactory images in the imaginative block, and whether two items belonged to the same category in the semantic block. Items were verbally cued in both experiments. The degree of similarity between the imaginative and semantic items was changed across experiments. Experiment 1 showed that the semantic processing was faster than the visual and the auditory imaginative processing, whereas no differentiation was possible between the semantic processing and the olfactory imaginative processing. Experiment 2 revealed that only the visual imaginative processing could be differentiated from the semantic processing in terms of accuracy. These results show that the visual and auditory imaginative processing can be differentiated from the semantic processing, although both visual and auditory images strongly rely on semantic representations. On the contrary, no differentiation is possible within the olfactory domain. Results are discussed in the frame of the imagery debate.

Keywords Imagery · Semantic representation · Sensory modality

M. Palmiero (✉)
Department of Life, Health and Environmental Sciences, University of L'Aquila, L'Aquila, Italy
e-mail: massimiliano.palmiero@univaq.it

R. Di Matteo · M. O. Belardinelli
ECONA, Interuniversity Centre for Research on Cognitive Processing in Natural and Artificial Systems, Rome, Italy

R. Di Matteo
Department of Neuroscience and Imaging, G. d'Annunzio University, Chieti, Italy

M. O. Belardinelli
Department of Psychology, Sapienza University of Rome, Rome, Italy

Introduction
How people represent conceptual knowledge is a long-debated issue. One important approach posits that conceptual knowledge is distributed across different attribute domains, such as vision, touch, and olfaction (e.g., Allport 1985; Barsalou 1999). Thus, a distinction between visual knowledge and non-visual knowledge has been drawn (see Thompson-Schill 2003, for a review), leading to the claim that visual knowledge is represented differently from non-visual knowledge. This claim is tied to the question of which format is used to store and represent information. In this direction, the investigation into the nature of mental imagery can shed light on the representation of conceptual knowledge. Indeed, it is assumed that access to conceptual knowledge is necessary in order to create a mental image (e.g., Kan et al. 2003). Thus, the central question is the extent to which mental imagery relies on perceptual representations, as opposed to propositional representations.
According to perceptual theories (e.g., Kosslyn et al. 2006), imagery and perception share common mechanisms and processes. Different studies revealed imagery–perception interference (e.g., Craver-Lemley and Arterberry 2001; Perky 1910; Segal and Fusella 1970) and imagery–perception facilitation (e.g., McDermott and Roediger 1994) in the visual modality, suggesting that visual images and visual percepts share the same system of information processing. An image–percept equivalence was also found in producing visual-motor adaptation (Finke 1979), orientation-specific color adaptation (Finke and Schmidt 1977, 1978), and in terms of the functions describing the relation between resolution and eccentricity in the visual field (Finke and Kosslyn 1980; Finke and Kurtzman 1981).
Recently, central and larger stimuli were found to yield
faster RTs than peripheral stimuli during both visual perception and visual imagery, indicating an overlap between
perception and imagery for retinotopic areas (Marzi et al.
2006). In this direction, the presence of a visual defect due
to myopia or astigmatism was found to correspond to an
analogous defect in visual imagery, that is, the blurring of
perception resulted in less vivid mental images (Palermo
et al. 2013). Yet, visual images can be rotated (e.g., Campos 2012; Shepard and Metzler 1971), scaled (e.g., D'Angiulli and Reeves 2007), scanned (e.g., Kosslyn et al. 1978; Borst and Kosslyn 2008), transformed in shape and color (e.g., Dixon and Just 1978), and inspected (e.g., Thompson et al. 2008), all processes that draw upon the physical laws of the external world.
However, propositional theories (Pylyshyn 1973, 1981, 2002, 2003) assume an amodal approach to imagery: propositional representations underlie imagery, and information is symbolically represented in a language-like system. Pylyshyn (2002) argues that mental images are neither spatial nor depictive in nature, for different empirical reasons, such as the use of tacit knowledge to simulate the relevant phenomenon (e.g., the scanning process). This view is consistent with the difficulty of replicating findings of visual image–percept equivalence (Broerse and Crassini 1980, 1981, 1984; Intons-Peterson 1983; Intons-Peterson and White 1981). Yet, people cannot find more than one interpretation of the duck–rabbit ambiguous figure in their mental images (Chambers and Reisberg 1985), although it is easy to find the alternative interpretation of such a figure under perceptual conditions. Studies that reported dissociations in neuropsychological patients between visual imagery and visual perception (e.g., Behrmann et al. 1992; Guariglia et al. 1993) also seem to support propositional theories.
The advent of neuroimaging techniques offered the scientific community new insights to resolve the imagery debate. However, the cortical overlap between imagery and perception in different modalities was found to vary according to different factors (see Olivetti Belardinelli et al. 2011, for a review). In an attempt to overcome the debate, Paivio's (1986) dual coding theory proposed that two main systems are available to represent information: language, conveying symbolic representations, and visual imagery, allowing analogical representations. Interactions would exist between these systems, since a verbal description can be generated from a mental image, or a mental image can be generated from a verbal description. Similarly, Kosslyn (1980, 1994) suggested that a visual image is generated from a semantic representation that accesses stored visual information about the object; visual information is then loaded into the visual buffer, which functions as a coordinate space that temporarily maintains and manipulates information. Lloyd-Jones and Vernon (2003) also proposed that visual images are generated from semantic representations, since they found semantic interference from visual object recognition on visual imagery. In addition, activation of brain areas involved in the storage of semantic knowledge (Gardini et al. 2005; Mazard et al. 2005; Mellet et al. 1996) was found during the generation of visual imagery.
Nevertheless, considering non-visual imagery modalities (Olivetti Belardinelli et al. 2004, 2009, 2011; Palmiero
et al. 2009), the issue of how people represent conceptual
knowledge is even harder to address. Within the auditory
domain¹ (see Hubbard 2010, for a review), auditory imagery also seems to rely on semantic representations. Indeed,
auditory images can prime subsequently perceived environmental sounds from the same category (Stuart and Jones
1996). In this regard, Schneider et al. (2008) demonstrated
that the congruency between sounds and pictures was
judged faster when the sound was appropriate to the picture
than when the sound was not appropriate to the picture.
Following Hubbard (2010), this result shows that an
auditory image evoked by an object belonging to the same
category can prime a subsequent auditory percept. In
addition, auditory images of environmental sounds preserve structural properties of auditory stimuli, such as
loudness information (Intons-Peterson 1980) and pitch
information (Intons-Peterson et al. 1992).
Within the olfactory domain, the issue of olfactory
imagery is even more controversial (see Stevenson and
Case 2005 for a review). Zucco (2003) questioned the
existence of conscious olfactory images. Indeed, the
instruction to imagine odors or to remember odors from
words did not improve performance in a subsequent odor
recognition test (Crowder and Schab 1995; Herz 2000).
However, Lyman and McDaniel (1986, 1990) found that
verbal encoding of odors enhanced the olfactory long-term
retention. Moreover, olfactory imagery and olfactory perception were found to share common properties, such as
intensity (Algom and Cain 1991; Carrasco and Ridout 1993), and fruitiness and familiarity (Carrasco and Ridout
1993). Yet, as in real perception, imagery of pleasant odors
generally involves larger sniffs than imagery of unpleasant
odors (Bensafi et al. 2003), especially in good imagers
(Bensafi et al. 2005), who sniff odors longer and judge them as more edible and familiar (Bensafi and Rouby 2007). Olfactomotor activity during odor imagery and during odor perception was also found to be similar, both in terms of respiratory volume and temporal characteristics (Kleemann et al. 2008).

¹ Given that the present paper explores auditory imagery of environmental sounds, musical and verbal imagery are not considered.
Thus, the present study aimed at exploring the extent to which visual, auditory, and olfactory mental images rely on perceptual representations and semantic representations. Moving from Kosslyn's (1976) study, which distinguished between visual imagery and semantic representations on the basis of information retrieval times, two experiments were carried out. In both experiments, participants were asked to judge the similarity between mental images in the visual, auditory, and olfactory modalities (imaginative block), as well as the similarity between semantic representations, that is, whether two items belonged to the same category (semantic block). Imaginative similarities and semantic similarities between the items were tested by means of two pre-experiments. Pre-experiment 1-A allowed us to select images generated from different categories, and semantic representations with no imaginative similarity. Pre-experiment 2-A allowed us to select images generated from the same categories, and semantic representations with imaginative similarity. All items were checked in terms of familiarity.
Two different hypotheses were formulated: (1) knowledge is processed according to imagery modality-specific principles (perceptual approach), and mental image-based representations are differentiated from semantic-based representations in terms of response times or accuracy; (2) knowledge relies mostly on the semantic system, being organized according to more general principles (propositional approach), and mental image-based representations are not differentiated from semantic-based representations in terms of either response times or accuracy.

Experiment 1
Method
Participants
Eighty-seven participants were recruited (mean age = 26.9; SD = 8.87); forty-seven of them were females and forty were males. Participants were recruited on a voluntary basis, mostly from the Department of Psychology at Sapienza University of Rome.
Design
The experiment was designed to evaluate the effects of
processing (imaginative and semantic) and modality

145

(visual, auditory, and olfactory) on RTs and accuracy. A 2 × 3 within-subject design was used.
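As an illustrative aside (not part of the original article), a 2 × 3 repeated-measures design of this kind could be analyzed as follows; the data are simulated and all variable names are hypothetical:

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)

# Simulated long-format data: 87 subjects x 2 types of processing x 3
# modalities, one mean RT (in seconds) per cell, mirroring the 2 x 3
# within-subject design (a sketch; the real data are not available).
rows = []
for subj in range(87):
    for proc in ("imaginative", "semantic"):
        for mod in ("visual", "auditory", "olfactory"):
            rt = 2.5 + (0.3 if proc == "imaginative" else 0.0) + rng.normal(0, 0.2)
            rows.append({"subject": subj, "processing": proc,
                         "modality": mod, "rt": rt})
df = pd.DataFrame(rows)

# Repeated-measures ANOVA with two within-subject factors
res = AnovaRM(df, depvar="rt", subject="subject",
              within=["processing", "modality"]).fit()
print(res.anova_table)
```

AnovaRM expects exactly one observation per subject and cell, which is why a per-condition mean would be computed first in practice.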
Material
Forty-eight triplets of items were used, eight triplets for
each condition. Triplets were formed according to these
criteria: first, the target item shares sensory properties with
one of the two comparison items, and semantic properties
with the other; second, the items are familiar. Therefore,
triplets encompassed both imaginative comparisons and
semantic comparisons. Two pre-experiments were carried
out in order to assess the degree of the imaginative and semantic similarities between the items (pre-experiment 1-A), and the degree of familiarity with the items (pre-experiment 1-B) (see Appendix 1 for details).
Procedure
Each participant took part in an individual short training session with practice trials. Afterward, participants carried out the experimental testing session. In order to avoid confusion, the type of processing was blocked and instructions were presented at the beginning of each block. Blocks were presented in random fashion. In the imaginative processing block, participants were asked to judge the similarity of mental images in terms of sensory characteristics (shape, acoustical, or olfactory similarity). In order to form mental images, participants were instructed to: visualize in the mind's eye the objects, focusing on the shape (e.g., umbrella); hear in the mind's ear the sounds produced by the objects (e.g., clicking clock); smell in the mind's nose the odors referred to the objects (e.g., retouching bleach). For the auditory and olfactory images, participants were urged to concentrate on the sounds and smells produced by the objects. In the semantic processing block, participants were asked to judge the similarity of semantic representations in terms of the conceptual category, considering possible inferences (e.g., that something is a cat entails that it is a mammal, and suggests that it has a very good sense of balance). In order to form semantic representations, participants were instructed to refer to categories (e.g., pear–fruit). For both types of processing, participants were given a concrete example.
For each trial, a triad of words describing common objects, environmental sounds, or smells was presented on the computer screen. The target word was displayed first at the center of the computer screen: participants formed an image or a semantic representation (category) and then pressed the space bar, and two additional words were presented simultaneously. Participants were instructed to associate the target word with one of the two comparison words according to the imaginative similarity or the semantic


similarity, by pressing the rightward or the leftward arrow key as quickly and accurately as possible (the side of the correct answer was randomized). No triplet was repeated across conditions. The experiment lasted about 20 min.
Three parameters were estimated: generation times (GTs), response times (RTs), and accuracy of comparisons. Both GTs and RTs were calculated only on correct responses; accuracy was determined on the basis of the correct associations between mental images and the correct associations between semantic representations, as assessed by pre-experiment 1-A. GT was measured from target item onset until the space bar was pressed. This parameter was calculated in order to better understand possible differences in participants' ability to generate mental images or semantic categories with respect to the Type of Processing. RT was measured from comparison item onset until the rightward or leftward arrow key was pressed.

Results

The 2 × 3 ANOVA carried out on GTs showed a main effect for Modality [F(2,172) = 5.51, MSE = .18, p < .005, η² = .06]. However, the post hoc analysis (Scheffé, p > .05) revealed no significant differences among the visual, auditory, and olfactory modalities. The main effect for Type of Processing [F(1,86) = 1.97, MSE = .29, p = .16] and the interaction effect for Type of Processing by Modality [F(2,172) = 1.80, MSE = .12, p = .16] were not significant (see Fig. 1).

Fig. 1 Experiment 1: GTs for the imaginative and semantic processing conditions in the visual, auditory, and olfactory modalities. The error bars represent the standard errors of the means

Fig. 2 Experiment 1: RTs for the imaginative and semantic processing conditions in the visual, auditory, and olfactory modalities. The error bars represent the standard errors of the means
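As a minimal sketch (not the authors' code; the trial-log fields below are hypothetical), the three estimated parameters (GTs, RTs, accuracy) could be computed from timestamped trials as:

```python
from dataclasses import dataclass

@dataclass
class Trial:
    # Timestamps in seconds for one trial of the comparison task
    target_onset: float      # target word appears
    space_press: float       # participant signals the image/category is formed
    comparison_onset: float  # the two comparison words appear
    key_press: float         # rightward/leftward arrow-key response
    correct: bool            # response matches the association from pre-experiment 1-A

def generation_time(t: Trial) -> float:
    # GT: from target item onset until the space bar was pressed
    return t.space_press - t.target_onset

def response_time(t: Trial) -> float:
    # RT: from comparison item onset until the arrow key was pressed
    return t.key_press - t.comparison_onset

def condition_scores(trials: list[Trial]) -> tuple[float, float, float]:
    # GTs and RTs are averaged over correct responses only;
    # accuracy is the proportion of correct responses over all trials.
    ok = [t for t in trials if t.correct]
    gt = sum(generation_time(t) for t in ok) / len(ok)
    rt = sum(response_time(t) for t in ok) / len(ok)
    return gt, rt, len(ok) / len(trials)
```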

The 2 × 3 ANOVA carried out on RTs yielded a main effect for Type of Processing [F(1,86) = 27.18, MSE = .24, p < .001, η² = .24]: the semantic processing was performed faster than the imaginative processing. The ANOVA also showed a main effect for Modality [F(2,172) = 17.14, MSE = .18, p < .001, η² = .16]: the post hoc comparisons (Scheffé, p < .05) revealed that the visual and olfactory modalities provided faster RTs than the auditory modality; no significant difference was found between the visual and olfactory modalities. Finally, the ANOVA also revealed a significant interaction effect for Type of Processing by Modality [F(2,172) = 8.06, MSE = .18, p < .0005, η² = .08]. The post hoc comparisons (Scheffé, p < .05) demonstrated that the visual and auditory modalities provided faster RTs for the semantic processing than for the imaginative processing; no significant difference was found between the imaginative processing and the semantic processing within the olfactory modality; no difference was found between the visual and the olfactory imaginative processing, the latter proving shorter RTs than the auditory imaginative processing; finally, the semantic processing was performed faster in the visual modality than in the olfactory and auditory modalities (see Fig. 2).
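As a quick illustrative check (assuming the reported η² values are partial eta squared, which is not stated in the original), the effect sizes can be recovered from the F ratios and degrees of freedom:

```python
def partial_eta_squared(f_value: float, df_effect: int, df_error: int) -> float:
    # Partial eta squared, SS_effect / (SS_effect + SS_error), rewritten
    # in terms of the F ratio: (F * df_effect) / (F * df_effect + df_error).
    return (f_value * df_effect) / (f_value * df_effect + df_error)

# Experiment 1 RT effects reported above:
print(round(partial_eta_squared(27.18, 1, 86), 2))   # Type of Processing -> 0.24
print(round(partial_eta_squared(17.14, 2, 172), 3))  # Modality -> 0.166 (reported .16)
print(round(partial_eta_squared(8.06, 2, 172), 3))   # Interaction -> 0.086 (reported .08)
```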
The 2 × 3 ANOVA conducted on accuracy scores revealed a main effect for Modality [F(2,172) = 50.15, MSE = .01, p < .0000001, η² = .36]: the post hoc comparisons (Scheffé, p < .05) showed that the visual modality provided higher accuracy than the olfactory and auditory modalities. The ANOVA also revealed a significant interaction effect for Type of Processing by Modality [F(2,172) = 6.36, MSE = .01, p < .005, η² = .06]. The post hoc comparisons (Scheffé, p < .00005) showed that the visual modality was associated with better accuracy than the auditory and olfactory modalities in both the imaginative and the semantic processing. Within the auditory modality, there was a significant difference between the imaginative accuracy and the semantic accuracy: the latter was higher than the former. Within the visual and olfactory modalities, no significant difference was found between the imaginative processing and the semantic processing. The auditory modality provided higher accuracy than the olfactory modality in the semantic processing. The main effect for Type of Processing was not significant [F(1,86) = 3.53, MSE = .02, p = .06] (see Fig. 3).

Fig. 3 Experiment 1: mean accuracy for the imaginative and semantic processing conditions in the visual, auditory, and olfactory modalities. The error bars represent the standard errors of the means
Discussion

The analysis on GTs shows that participants generated different mental representations at the same speed, regardless of the type of processing. Therefore, there was no bias due to task difficulty in the subsequent comparison task. The analysis on RTs shows that within the visual and auditory modalities, mental images were processed more slowly than semantic representations. No difference was found within the olfactory modality in terms of either RTs or accuracy.
In particular, participants performed the semantic processing faster than the visual imaginative processing. Longer times for visual images than for semantic representations are in line with Kosslyn's model (1980, 1994). Assuming that mental images of known objects are generated from semantic

representations that access visual traces, which are subsequently loaded into the visual buffer, it is reasonable that the
visual imaginative processing is slower than the semantic
processing. In addition, the fact that visual images to be
compared were generated from different categories also
suggests that both semantic and imagery search strategies
occurred in parallel. However, no difference between the
imaginative processing and the semantic processing was
found in terms of accuracy. This result shows that despite
additional processes, participants compared visual mental
images as accurately as semantic representations.
With respect to the auditory modality, the auditory imaginative processing yielded longer RTs than the semantic processing, suggesting that additional processes also occurred for auditory imagery. In addition, it is possible that the auditory imaginative processing was also affected by the temporal extension of environmental sounds. That is, participants had to imagine the complete sounds before deciding on the similarity of the auditory images. This operation probably also affected accuracy, suggesting that after generation, images of environmental sounds are difficult to sustain in the auditory buffer.
Regarding the olfactory modality, no difference in RTs or accuracy was found between the olfactory imaginative processing and the semantic processing. This suggests that the olfactory imagery system involves some kind of semantic knowledge, which does not access the olfactory sensory trace. Indeed, if verbal information is available, the process switches from a sensory olfactory code to a semantic code (Jönsson et al. 2011). In this regard, several studies showed that verbal coding plays a key role in olfactory processing. Comparing verbal and visual suppression tasks on olfactory short-term memory, Perkins and McLaughlin Cook (1990) found that only tasks involving verbal suppression interfered with the short-term recognition of odors. Subsequently, Annett et al. (1995) and Annett and Leslie (1996) showed that both olfactory recognition and free recall of odor names were impaired by visual and verbal suppression tasks.
Overall, these results suggest that the imaginative processing can be differentiated from the semantic processing
within the visual and auditory modalities. However, in the
present experiment, imaginative comparisons were made
by using items belonging to different categories, whereas
semantic comparisons were made by using items belonging
to the same categories. That is, the imaginative items were
systematically less similar on a semantic level than the
semantic items. In order to clarify this issue, a new
experiment was carried out, using stimuli in which both
semantic relationships and imaginative relationships
between the target and the comparison words were at the
same level.


Experiment 2

Method

Participants

Sixty-five participants were newly recruited on a voluntary basis at the University of L'Aquila (mean age = 26; SD = 4.75). Forty of them were females and twenty-five were males.

Design, material, and procedure

Experiment 2 was designed like Experiment 1, in order to evaluate the effects of processing (imaginative and semantic) and modality (visual, auditory, and olfactory) on response times and accuracy. Stimuli were composed of pairs of items instead of triplets. Pairs were used because of the scarcity of comparable items, especially in the auditory and olfactory imagery modalities. However, the logic of the imaginative and semantic comparisons did not change at all. Two blocks of item pairs were formed and presented in random fashion. The first block included pairs of items (selected from the same category) encompassing imaginative comparisons in the visual, auditory, and olfactory imagery modalities. The second block included pairs of items encompassing semantic comparisons. The true pairs included items from the same category with no similarity in terms of sensory properties, whereas the false pairs included items from different categories with similarity in terms of sensory properties. Two pre-experiments were carried out in order to assess the degree of the imaginative and semantic similarities between the items (pre-experiment 2-A) and the degree of familiarity with the items (pre-experiment 2-B) (see Appendix 2 for details). The procedure was also similar to that of Experiment 1. In its entirety, the experiment lasted about 15 min. Generation times (GTs), response times (RTs), and accuracy of comparisons were estimated as in Experiment 1.

Results

The 2 × 3 ANOVA carried out on GTs showed that the main effect for Type of Processing [F(1,64) = .4, MSE = .26, p = .53], the main effect for Modality [F(2,128) = 1.83, MSE = .1, p = .06], and the interaction effect for Type of Processing by Modality [F(2,128) = 2.55, MSE = .12, p = .08] were not significant (see Fig. 4).

Fig. 4 Experiment 2: GTs for the imaginative and semantic processing conditions in the visual, auditory, and olfactory modalities. The error bars represent the standard errors of the means

The 2 × 3 ANOVA carried out on RTs yielded a main effect for Modality [F(2,128) = 9.74, MSE = .13, p < .0005, η² = .13]. The post hoc comparisons (Scheffé, p < .05) revealed that the olfactory modality provided

faster RTs than the visual and auditory modalities; no significant difference was found between the visual and auditory modalities. The ANOVA also revealed an interaction effect for Type of Processing by Modality [F(2,128) = 9.75, MSE = .13, p < .0005, η² = .13]. The post hoc comparisons (Scheffé, p < .05) only revealed significant differences between semantic conditions. The main effect for Type of Processing [F(1,64) = .12, p = .7] was not significant (see Fig. 5).

Fig. 5 Experiment 2: RTs for the imaginative and semantic processing conditions in the visual, auditory, and olfactory modalities. The error bars represent the standard errors of the means

The 2 × 3 ANOVA conducted on accuracy scores revealed a main effect for Type of Processing [F(1,64) = 4.29, MSE = .03, p < .05, η² = .06]: the imaginative processing was performed with better accuracy than the semantic processing. The ANOVA also revealed a main effect for Modality [F(2,128) = 5.84, MSE = .02, p < .005, η² = .08]. The post hoc comparisons (Scheffé, p < .05) revealed that the olfactory modality yielded higher accuracy than the auditory modality. No difference was found between the visual and olfactory modalities, or between the visual and auditory modalities. Finally, the analysis showed a significant interaction effect for Type of Processing by Modality [F(2,128) = 14.75, MSE = .02, p < .000005, η² = .19]. The post hoc comparisons (Scheffé, p < .05) showed that within the visual modality, there was a significant difference between the imaginative accuracy and the semantic accuracy: the former was higher than the latter. Within the auditory and olfactory modalities, no significant difference was found between the imaginative and the semantic accuracy. The visual and olfactory modalities provided better accuracy than the auditory modality in the imaginative processing. No difference was found between the visual and olfactory modalities in the imaginative processing (see Fig. 6).

Fig. 6 Experiment 2: mean accuracy for the imaginative and semantic processing conditions in the visual, auditory, and olfactory modalities. The error bars represent the standard errors of the means

Discussion

The analysis on GTs confirms that participants generated different mental representations at the same speed, regardless of the type of processing. The analysis on RTs shows that within all modalities, mental images were processed at the same speed as semantic representations. In addition, visual images were processed with higher accuracy than semantic representations, whereas the auditory and olfactory images were processed as accurately as semantic representations.
These results seem to contrast with the previous results (with the exception of the olfactory modality). However, they clarify that when images to be compared were generated from the same categories, the visual and the auditory imaginative processing were performed at the same speed as the semantic processing, because the semantic search strategy no longer occurred. This interpretation is consistent with the evidence that, in Experiment 2 as opposed to Experiment 1, the accuracy of the imaginative processing also improved. Indeed, on the one hand, the visual imaginative processing was performed with higher accuracy than the semantic processing, confirming that visual imagery can be differentiated from semantic representations in terms of accuracy. On the other hand, the auditory imaginative processing was performed with the same accuracy as the semantic processing. Although this result does not allow us to differentiate between the auditory imaginative processing and the semantic processing, comparing the two experiments suggests that images of environmental sounds can also be generated.
Regarding olfactory imagery, no differentiation between the olfactory imaginative processing and the semantic processing was possible. In particular, the advantage that occurred for the visual and the auditory imaginative processing in Experiment 2, when images to be compared were generated from the same category, as opposed to Experiment 1, did not occur for the olfactory imaginative processing.

General discussion

The present study was aimed at investigating how people represent conceptual knowledge, comparing visual, auditory, and olfactory imagery with semantic-based representations. The idea was to explore the extent to which mental imagery in different sensory modalities can be differentiated from semantic representations. By means of two experiments, we demonstrated for the first time that mental images and semantic representations cannot be equally differentiated within all sensory modalities.
By comparing the two experiments, results revealed that visual and auditory imagery can be differentiated from semantic representations. This finding would support
Kosslyn's approach. Indeed, when images to be compared were generated from different categories (Experiment 1), differences between the imaginative processing and the semantic processing were found within the visual modality


in terms of RTs, and within the auditory modality in terms of RTs and accuracy. However, when images to be compared were generated from the same categories (Experiment 2), visual imagery was performed at the same speed and even with better accuracy than semantic representations, whereas no difference was found between auditory imagery and the semantic processing in terms of RTs and accuracy. In other words, when images to be compared were based on different semantic representations, representations relying on the comparison words were probably not easily matched with representations relying on the target words, giving rise to a disadvantage. On the contrary, the match between semantic representations of the comparison words and semantic representations of the target words improved the imaginative processing.
Vice versa, results do not allow to differentiate between
the imaginative processing and the semantic processing
within the olfactory modality, regardless the degree of the
imaginative and semantic similarities between the items.
We can infer that, in line with Pylyshyn's (1973, 2002) approach, participants used their tacit knowledge of olfactory associations to carry out the olfactory imaginative processing. Alternatively, as explained above, if verbal information is available, processing switches from a sensory olfactory code to a semantic code (Jonsson et al. 2011); that is, after accessing the semantic representation, no olfactory trace was used to generate a proper olfactory image. Indeed, in both experiments, the names of the visual sources of odors were actually used to evoke olfactory images. These verbal cues contributed to priming only semantic representations rather than olfactory images. When participants tried to form an odor image from the verbal cue, they could not access the olfactory sensory short-term memory (Andrade and Donaldson 2007) in order to generate a conscious olfactory image. This would suggest that odor-related information is organized within verbal networks in which the names of visual sources are very important. In this direction, Jehl et al. (1997) found that olfactory recognition memory was enhanced when veridical (source of odor) or personally generated names for odors, rather than chemical names, were used during the learning session.
To conclude, these outcomes interestingly show that mental images and semantic representations cannot be differentiated within all sensory modalities. In particular, they highlight that images of odors are hardly generated when cued by semantic information related to the visual source of the odor. However, future studies should consider stimuli in perceptual format. Indeed, when semantic information is reduced, encoding and retention are driven mainly by perceptual processes. According to Stevenson et al. (2007), odor naming does not facilitate the generation of olfactory images. However, a recent study demonstrated that the ability to maintain and compare odors online varied according to the degree of verbalization of the odors (Jonsson et al. 2011). Thus, in the present study, the degree of verbalization of the images of odors (cued by names of visual sources of odors) might have masked the effect of the perceptual information preserved in olfactory images (Rinck et al. 2009), especially in the second experiment, with the result that both kinds of processing (semantic and imaginative) led to similar performance in terms of RTs and accuracy.
Finally, everyday experience also suggests that it is much easier for people with no olfactory expertise to form visual and auditory mental images than olfactory images. According to Royet et al. (2013), even the activation of the primary olfactory cortex in non-experts does not definitively indicate that an image of an odor has been generated, given that the primary olfactory cortex may be incidentally reactivated during sniffing, odor expectation, cross-modal recall of information, or the ignition of semantic words. In this direction, activity in the primary olfactory cortex was found to be modulated by expertise in generating or using olfactory images (Bensafi et al. 2007), as well as in a group of perfumers, during both odor perception and odor imagination (Plailly et al. 2012). Therefore, future research should consider participants with the same training and familiarity in vision, audition, and olfaction in order to test differences in the imaginative processing across modalities.

Appendix 1: Pre-experiment 1-A was aimed at assessing the degree of imaginative and semantic similarities between the items composing the triplets of Experiment 1
Method
Eighty-four participants recruited at the University of
Chieti filled out a questionnaire designed to assess the
degree of imaginative and semantic similarities between
the items. For each triplet, three pairs of items were
formed. For example, given the visual triplet
(A) umbrella, (B) leather gloves, and (C) mushroom, the
pairs of items were AB (semantic similarity), BC (null
comparison), and CA (imaginative similarity by shape).
Participants were asked to score these pairs of items on a 7-point scale, ranging from 1 (no similarity) to 7 (very similar). For each triplet, each pair of items was scored on perceptual and categorical dimensions. Therefore, participants expressed six judgments for each triplet, according to the following criteria: two types of comparison (perceptual/categorical) and three types of relationship (imaginative/semantic/null). This method allowed us to check whether the type of comparison affected the type of relationship.

Results and discussion

The analysis was performed by item; that is, individual items instead of subjects were used as observational units. Therefore, the factors comparison, modality, and relationship were treated as between-items variables. The univariate ANOVA comparison by modality by relationship revealed a main effect of comparison [F(1,270) = 5.27, MSE = 0.50, p < .05, η² = .01]: categorical comparisons were rated as more similar than perceptual comparisons. It also revealed a main effect of relationship [F(2,270) = 173.72, MSE = 0.50, p < .0001, η² = .56]: the post hoc comparisons (Scheffé, p < .05) showed that both the imaginative and semantic relationships were rated higher than the null relationship and that there was no difference between the imaginative and semantic relationships. The interaction of comparison by relationship was also significant [F(2,270) = 248.64, MSE = 0.50, p < .0001, η² = .64]: the post hoc comparisons (Scheffé, p < .05) demonstrated that perceptual comparisons were rated higher than categorical comparisons for the imaginative relationship, whereas categorical comparisons were rated higher than perceptual comparisons for the semantic relationship; no difference was found between the perceptual and categorical comparisons for the null relationship, and both the imaginative and semantic relationships were rated higher than the null relationship; most important, no difference between the imaginative relationship and the semantic relationship was found. Finally, the analysis also revealed an interaction of comparison by modality by relationship [F(4,270) = 8.89, MSE = 0.50, p < .0001, η² = .11]: the post hoc comparisons (Scheffé, p < .05) clarified that perceptual comparisons were rated higher than categorical comparisons in all modalities for the imaginative relationship; categorical comparisons were rated higher than perceptual comparisons in all modalities for the semantic relationship; no difference between the categorical and perceptual comparisons was found in any modality for the null relationship; and both the perceptual and categorical comparisons were rated higher in all modalities for the imaginative and semantic relationships than for the null relationship. In addition, no significant difference was found across modalities, either in terms of the categorical and perceptual comparisons or in terms of the imaginative, semantic, and null relationships. The main effect of modality [F(2,270) = .48, MSE = 0.50, p = .61], the interaction of comparison by modality [F(2,270) = 1.09, MSE = 0.50, p = .33], and the interaction of relationship by modality [F(4,270) = 1.68, MSE = 0.50, p = .15] were not significant. Overall, these results showed that all triplets were balanced in terms of imaginative and semantic similarities across modalities (see Fig. 7).

Fig. 7 Pre-experiment 1-A: rating of similarity for the imaginative, null, and semantic relationships in the visual, auditory, and olfactory modalities, with respect to both the categorical and the perceptual comparisons. The error bars represent the standard errors of the means

Pre-experiment 1-B was aimed at assessing the degree of familiarity with the items composing the triplets of Experiment 1
Method
Ninety-six participants recruited at the University of Chieti rated the familiarity of each item (48 per modality, 144 in total) on a 7-point scale, ranging from 1 (not familiar at all) to 7 (very familiar). Participants were instructed to rate the familiarity of the items by referring to the specific modality they belonged to. Thus, assuming that familiarity with items may affect the strength of the relationships (imaginative or semantic), for each participant the familiarity score of the critical relationships within each modality was computed. For example, given the familiarity scores of the visual triplet umbrella (6.14), leather gloves (4.11), and champignon (3.82), the familiarity score of the semantic relationship was formed by umbrella and leather gloves (6.14 + 4.11), whereas the familiarity score of the imaginative relationship was formed by umbrella and champignon (6.14 + 3.82). Finally, these sums were averaged across participants, and the relevant means for each type of relationship within the three modalities under investigation were used for the subsequent analysis.

Fig. 8 Pre-experiment 1-B: familiarity scores for the imaginative and semantic relationships in the visual, auditory, and olfactory modalities. The error bars represent the standard errors of the means

Results and discussion

The analysis was performed by item, as in Pre-experiment 1-A. The univariate ANOVA modality by relationship revealed that only the main effect of modality was significant [F(2,90) = 7.60, MSE = 3.51, p < .001, η² = .14]: the post hoc comparisons (Scheffé, p < .05) showed that the olfactory triplets were generally less familiar than the auditory and visual triplets. Although this result could lead to the conclusion that the triplets were unbalanced in terms of familiarity, given that the main effect of relationship [F(1,90) = .46, MSE = 3.51, p = .49] and the interaction of modality by relationship [F(2,90) = .17, MSE = 3.51, p = .84] were not significant, it is possible to conclude that the imaginative and semantic relationships of each triplet were not affected by familiarity in any modality (see Fig. 8).
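The familiarity-score computation described in the Method above amounts to summing the ratings of the two items that form each relationship, then averaging across participants. A minimal Python sketch (the function name is ours; the ratings are those of the worked umbrella example):

```python
# Familiarity score of a relationship = sum of the familiarity ratings of the
# two items forming it (computed per participant, later averaged across
# participants).
def relationship_scores(target, imaginative, semantic):
    return {
        "imaginative": target + imaginative,  # target + imaginative item
        "semantic": target + semantic,        # target + semantic item
    }

# Worked example from the text: umbrella (6.14), champignon (3.82),
# leather gloves (4.11)
scores = relationship_scores(6.14, 3.82, 4.11)
print(round(scores["imaginative"], 2))  # 9.96
print(round(scores["semantic"], 2))     # 10.25
```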
List of stimuli used in Experiment 1. For each triplet, the target item and the comparison items are shown. Three stimuli could not be translated properly into English (marked by an asterisk); in these cases, the Italian form is indicated. Note: PI = Perceptual–Imaginative relationship; PS = Perceptual–Semantic relationship; PN = Perceptual–Null relationship; CI = Categorical–Imaginative relationship; CS = Categorical–Semantic relationship; CN = Categorical–Null relationship.

Target item | Imaginative item | Semantic item | PI | PS | PN | CI | CS | CN

Visual triplets
Umbrella | Mushroom | Leather gloves | 5.25 | 1.43 | 1.30 | 1.88 | 4.08 | 1.60
Ripe peach | Tennis ball | Pear | 5.14 | 2.30 | 1.73 | 1.75 | 5.52 | 1.68
Drawing ruler | Hair comb | Drawing pencil | 4.69 | 2.29 | 1.58 | 1.87 | 5.88 | 1.73
Engagement ring | Wagon wheel | Silver brooch | 4.44 | 2.26 | 1.58 | 1.54 | 5.21 | 1.54
Doughnut ring | Rolled python | Birthday cake | 4.98 | 3.58 | 2.92 | 1.69 | 5.85 | 1.45
Slithering snake | Winding road | Lizard lying in the sun | 4.57 | 3.19 | 2.05 | 2.10 | 5.57 | 1.87
TV remote control | Mobile phone | Video-recorder | 5.68 | 1.83 | 1.75 | 3.33 | 5.39 | 3.27
Reaping hook | Wooden boomerang | Sharp knife | 4.89 | 2.83 | 1.98 | 2.12 | 4.93 | 1.90
Videotape | Closed book | Musical CD | 5.18 | 1.85 | 1.64 | 2.00 | 5.27 | 1.93
Cowboy boot | Shape of Italy | Leather belt | 6.38 | 1.44 | 1.26 | 2.02 | 5.08 | 1.31
Bank card | Telephone card | Metal coin | 6.13 | 1.49 | 1.33 | 3.50 | 4.17 | 2.42
Dessert fork | Pitch-fork | Glass | 5.44 | 1.24 | 1.20 | 2.57 | 4.96 | 1.60
Aerial ladder | Railway lines | Winding staircase | 4.05 | 3.31 | 1.81 | 1.77 | 5.63 | 1.99
Trouser jeans | Reversed V | Suede jacket | 5.05 | 1.42 | 1.44 | 1.49 | 5.58 | 1.65
Big cooking pot | Top hat | Blackened pan | 5.80 | 2.63 | 2.24 | 1.60 | 5.83 | 1.37
Electric guitar | Tennis racket | Grand piano | 5.06 | 1.44 | 1.36 | 1.73 | 5.80 | 1.48

Auditory triplets
Crying baby | Meowing cat | Laughing woman | 4.82 | 2.30 | 2.12 | 3.08 | 4.23 | 2.26
Clapping public | Trampling on cement | Roaring crowd | 3.49 | 3.40 | 1.64 | 2.10 | 5.08 | 2.06
Ripping paper | Crackling wood | Crackling polystyrene | 2.96 | 3.25 | 4.20 | 3.18 | 4.51 | 2.40
Creaking eagle | Screaming woman | Barking dog | 4.31 | 2.38 | 2.42 | 3.26 | 4.74 | 2.85
Snapping fingers | Clicking clock | Thumping the wood | 4.43 | 2.13 | 2.40 | 2.26 | 3.25 | 2.26
Drilling dentist | Humming hornet | Drilling blacksmith | 4.42 | 3.49 | 2.49 | 1.94 | 5.36 | 1.63
Spitting cat | Puffing vapor | Purring cat | 3.74 | 2.85 | 3.10 | 2.57 | 5.18 | 1.70
Welding the metal | Electric discharge | Hammering | 4.14 | 2.50 | 1.92 | 3.04 | 4.32 | 2.08
Roaring lion | Rumbling motor | Trumpeting elephant | 4.49 | 2.18 | 2.46 | 1.96 | 5.19 | 1.83
Mouse click | Ticking clock | Modem noise | 5.55 | 1.80 | 1.86 | 2.73 | 5.05 | 2.02
Playing with trombone | Ship siren | Playing with piano | 5.07 | 1.71 | 1.93 | 2.92 | 5.49 | 2.02
Opening the can | Lighting the lighter | Uncorking Champagne | 4.18 | 2.56 | 1.98 | 2.20 | 5.30 | 2.15
Snapping the bones | Creaking the beams | Clapping the hands | 3.79 | 2.80 | 1.64 | 2.35 | 4.48 | 1.57
Ringing the bells | Tinkling the keys | Striking the gong | 3.58 | 2.93 | 1.83 | 3.73 | 4.25 | 2.21
Clock stroke | Bell stroke | Trilling alarm | 4.08 | 2.74 | 2.25 | 3.86 | 4.80 | 3.29
Shooting with the gun | Tamer thwarting | Cannon roaring | 3.79 | 3.82 | 1.90 | 2.42 | 5.37 | 2.26

Olfactory triplets
Retouching bleach | Varnish jar | Eraser | 5.71 | 1.67 | 1.27 | 4.12 | 4.48 | 2.37
Vanilla candle | Cream | Incense large candle | 4.00 | 3.05 | 1.46 | 2.33 | 5.42 | 1.46
Fresh mussel | Sea port | Fish oil | 5.74 | 3.32 | 3.58 | 5.19 | 4.30 | 3.50
Gorgonzola (cheese)* | Sweet feet | Stracchino (cheese)* | 5.80 | 2.87 | 2.32 | 1.99 | 6.01 | 1.45
Sulfurous water | Rotten eggs | Sea | 5.25 | 2.06 | 1.54 | 2.00 | 5.07 | 1.30
Nursery school | Vegetable soup | Bakery | 3.40 | 1.82 | 1.38 | 2.21 | 2.04 | 3.14
Nail polish | Turpentine can | Mascara | 4.43 | 2.10 | 1.50 | 2.45 | 5.48 | 1.64
Cinnamon doughnut | Liquore Sambuca* | Croissant with cream | 4.55 | 2.21 | 1.83 | 3.13 | 5.44 | 2.04
Rotten potatoes | Rotting fish | Fried potatoes | 4.65 | 1.46 | 1.32 | 3.74 | 5.17 | 2.01
Lemon juice | Effervescent magnesia | Fresh orange juice | 3.61 | 3.45 | 3.00 | 2.76 | 5.71 | 2.65
Glass of grappa | Ethyl alcohol | Beer mug | 5.20 | 2.04 | 1.85 | 3.98 | 5.24 | 3.12
Petrol can | Nail solvent | Car oil | 3.45 | 3.51 | 2.11 | 2.15 | 4.76 | 1.69
Pure ammonia | Cat urine | Washing-up liquid | 3.32 | 2.02 | 1.36 | 2.15 | 3.77 | 1.56
Oil refinery | Bug plant | Olive tree | 2.08 | 4.33 | 1.94 | 1.86 | 4.89 | 3.12
Black truffle | Lighter gas | Caviar canape | 2.46 | 3.17 | 1.63 | 1.54 | 4.85 | 1.55
Onion soup | Sweaty armpit | Spelt broth | 4.80 | 2.00 | 1.50 | 1.89 | 5.69 | 1.33

Appendix 2: Pre-experiment 2-A was aimed at assessing the degree of the imaginative and semantic similarities between the items composing the couples of Experiment 2
Method
Seventeen participants enrolled at the University of L'Aquila filled out a questionnaire, which asked them to score the imaginative similarity (in the specific imagery modality tested) and the semantic similarity between couples of items, as well as the familiarity with each item, on a 7-point scale ranging from 1 (no similarity/not familiar) to 7 (very similar/very familiar). Two blocks of couples of items were formed. The first block included couples of items selected from the same category (e.g., musical instruments). For each category, one true trial (e.g., pen–pencil) and one false trial (e.g., castanets–violoncello) were included. The second block included couples of items encompassing the semantic comparisons. The true trials were selected from the same category and were dissimilar in terms of sensory properties (e.g., pear–banana). The false trials were selected from different categories and were similar in terms of sensory properties (e.g., plum–ball rice). Trials were formed according to these criteria in order to keep the imaginative and semantic comparisons balanced in terms of the relationships between items.
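The trial-construction criteria above can be summarized as two simple predicates; the following is a toy sketch under our own naming, not part of the published materials (the actual stimuli are listed at the end of this appendix):

```python
# Imaginative block: both items always share a category; a trial is "true"
# when the pair is also perceptually similar (e.g., pen-pencil).
def is_true_imaginative_trial(same_category, perceptually_similar):
    return same_category and perceptually_similar

# Conceptual block: a trial is "true" when the items share a category but
# differ in sensory properties (e.g., pear-banana); false trials pair
# perceptually similar items from different categories (e.g., plum-ball rice).
def is_true_conceptual_trial(same_category, perceptually_similar):
    return same_category and not perceptually_similar

print(is_true_imaginative_trial(True, True))   # True
print(is_true_conceptual_trial(True, False))   # True
print(is_true_conceptual_trial(False, True))   # False
```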
Results and discussion

The analysis designed to assess the degree of the imaginative and semantic similarities between the items was performed as in Pre-experiment 1-A, that is, including the factors comparison, modality, and relationship as between-items variables. The univariate ANOVA revealed a main effect of comparison [F(1,84) = 8.60, MSE = 2.33, p < .005, η² = .09]: perceptual comparisons were rated as more similar than categorical comparisons. It also revealed a main effect of relationship [F(1,84) = 18.49, MSE = 2.33, p < .0001, η² = .18]: semantic relationships were rated higher than imaginative relationships. The interaction of comparison by relationship was also significant [F(1,84) = 7.36, MSE = 2.33, p < .01, η² = .08]: the post hoc comparisons (Scheffé, p < .05) demonstrated that perceptual comparisons were rated higher than categorical comparisons for the semantic relationship only; no difference was found between the perceptual and categorical comparisons for the imaginative relationship; and the semantic relationship was rated higher than the imaginative relationship only for the perceptual comparisons. Finally, the main effect of modality [F(2,84) = 1.80, MSE = 2.33, p = .17], the interaction of comparison by modality [F(2,84) = .26, MSE = 2.33, p = .77], the interaction of relationship by modality [F(2,84) = .25, MSE = 2.33, p = .77], and the interaction of comparison by relationship by modality [F(2,84) = .90, MSE = 2.33, p = .90] were not significant. Overall, these results showed that the trials were balanced in terms of imaginative and semantic similarities across modalities. Indeed, the finding that the semantic relationship was rated higher than the imaginative relationship only in the perceptual comparisons demonstrates that participants gave higher scores to the semantic comparisons because those trials were formed by coupling items from the same categories, regardless of the type of trial (false or true), and gave lower scores to the false trials. Interestingly, the lack of difference between the perceptual and categorical comparisons for the imaginative relationship, and between the two types of relationship in the categorical comparisons, confirms the reliability of the material as regards the degree of the imaginative and semantic similarities (see Fig. 9).

Fig. 9 Pre-experiment 2: rating of similarity for the imaginative and semantic relationships in the visual, auditory, and olfactory modalities, with respect to both the categorical and the perceptual comparisons. The error bars represent the standard errors of the means

Pre-experiment 2-B was aimed at assessing the degree of familiarity with the items composing the couples of Experiment 2

Method, results, and discussion

Concerning the familiarity issue, data were analyzed as in Pre-experiment 1-B: the factors modality and relationship were treated as between-items variables. The univariate ANOVA revealed that only the main effect of modality was significant [F(2,42) = 3.66, MSE = 2.25, p < .05, η² = .14]; however, the post hoc comparisons (Scheffé, p > .05) showed no difference between the modalities. In addition, the main effect of relationship [F(2,42) = .93, MSE = 2.25, p = .33] and the interaction of modality by relationship [F(2,42) = .45, MSE = 2.25, p = .64] were not significant. These results showed that the trials were formed by items that are equally familiar and clarify that the imaginative and semantic relationships were not affected by familiarity in any modality (see Fig. 10).

Fig. 10 Pre-experiment 2: familiarity scores for the imaginative and semantic relationships in the visual, auditory, and olfactory modalities. The error bars represent the standard errors of the means

List of stimuli used in Experiment 2. For each couple of items, the target item and the comparison item are shown. Three stimuli could not be translated properly into English (marked by an asterisk); in these cases, the Italian form is indicated. Note: PI = Perceptual–Imaginative relationship; PS = Perceptual–Semantic relationship; CI = Categorical–Imaginative relationship; CS = Categorical–Semantic relationship.

Target (imaginative) | Comparison item | True trial | Target (conceptual) | Comparison item | True trial | PI | PS | CI | CS

Visual modality
Pen | Pencil | Yes | Drawing ruler | Stapler | Yes | 6.352941 | 6.882353 | 1.882353 | 6.705882
Highlighting pen | Staple puller | No | Marking pen | Mascara | No | 1.764706 | 4.647059 | 4.058824 | 2.411765
Tennis ball | Baseball | Yes | Pear | Banana | Yes | 4.882353 | 6.411765 | 1.588235 | 5.470588
Javelin | Rugby ball | No | Plum | Ball rice | No | 1.588235 | 4.235294 | 1.823529 |
Check | Banknote | Yes | Radio | Television | Yes | 5.294118 | 6.647059 | 2.588235 | 5.882353
Credit card | Coin | No | Remote control | Mobile phone | No | 1.352941 | 6.352941 | 4.588235 | 1.647059
Wild boar | Pig | Yes | Cow | Sheep | Yes | 6.058824 | 6.647059 | 2.823529 | 6.058824
Lamb | Hen | No | Slithering snake | Winding road | No | 1.294118 | 5.411765 | 4.588235 | 3.647059

Auditory modality
Cracking crow | Croaking frog | Yes | Neighing horse | Barking dog | Yes | 4.235294 | 5.117647 | 2.352941 | 5.882353
Trumpeting elephant | Roaring panther | No | Roaring lion | Rumbling motor | No | 1.941176 | 6.117647 | 4.294118 | 1.941176
Heavy rain | Light hailstone | Yes | Drilling blacksmith | Cord wrap | Yes | 5.058824 | 6.647059 | 1.647059 | 5.588235
Stormy sea | Lightning | No | Pulley | Creaking door | No | 1.764706 | 4.235294 | 4.352941 | 1.411765
Shooting with the gun | Shooting with the rifle | Yes | Saxophone | Clarinet | Yes | 5.529412 | 6.882353 | 2.058824 | 4.058824
Hand grenade | Flame thrower | No | Trombone | Ship siren | No | 2.117647 | 5.941176 | 4.882353 | 2.117647
Growling dog | Growling wolf | Yes | Harp | Flute | Yes | 6.411765 | 6.882353 | 3.235294 | 6.470588
Cockcrow | Meowing cat | No | Triangle | Tinkling the keys | No | 1.823529 | 5.588235 | 5.176471 | 2.647059

Olfactory modality
Petrol | Naphtha | Yes | Grapefruit | Mandarin | Yes | 6.588235 | 3.117647 | 6.529412 |
Coal | Methane | No | Clementine | Orange jam | No | 3.176471 | 4.470588 | 4.647059 | 5.176471
Caramel | Candy floss | Yes | Pudding | Sweetened cream | Yes | 5.117647 | 6.588235 | 3.705882 |
Chocolate | Apple pie | No | Anise donut | Liquore Sambuca* | No | 1.823529 | 5.352941 | 5.470588 | 2.882353
Fresh mussel | Oysters | Yes | Smoked salmon | Codfish | Yes | 5.941176 | 6.823529 | 3.294118 | 5.058824
Anchovies | Cuttlefish | No | Fresh tuna fish | Sea port | No | 3.529412 | 6.647059 | 4.882353 | 3.235294
Grappa* | Ethyl alcohol | Yes | Softening agent | Degreaser | Yes | 5.176471 | 6.352941 | 4.117647 | 5.823529
Bitter Campari* | Rum | No | Pure ammonia | Cat urine | No | 2.764706 | 6.058824 | 3.764706 |

References
Algom D, Cain WS (1991) Remembered odors and mental mixtures: tapping reservoirs of olfactory knowledge. J Exp Psychol Hum Percept Perform 17:1104–1119
Allport DA (1985) Distributed memory, modular subsystems and dysphasia. In: Newman SK, Epstein R (eds) Current perspectives in dysphasia. Churchill Livingstone, Edinburgh, pp 207–244
Andrade J, Donaldson L (2007) Evidence for an olfactory store in working memory. Psychologia 50:76–89
Annett JM, Leslie JC (1996) Effect of visual and verbal interference tasks on olfactory memory: the role of task complexity. Br J Psychol 87:447–460
Annett JM, McLaughlin Cook N, Leslie J (1995) Interference with olfactory memory by visual and verbal tasks. Percept Mot Skills 80:1307–1317
Barsalou LW (1999) Perceptual symbol systems. Behav Brain Sci 22:577–660

Behrmann M, Winocur G, Moscovitch M (1992) Dissociation between mental imagery and object recognition in a brain-damaged patient. Nature 359:636–637
Bensafi M, Rouby C (2007) Individual differences in odor imaging ability reflect differences in olfactory and emotional perception. Chem Senses 32:237–244
Bensafi M, Porter J, Pouliot S, Mainland J, Johnson B, Zelano C, Young N, Bremner E, Aframian D, Khan R, Sobel N (2003) Olfactomotor activity during imagery mimics that during perception. Nat Neurosci 6:1142–1144
Bensafi M, Pouliot S, Sobel N (2005) Odorant-specific patterns of sniffing during imagery distinguish bad and good olfactory imagers. Chem Senses 30:521–529
Bensafi M, Sobel N, Khan RM (2007) Hedonic-specific activity in piriform cortex during odor imagery mimics that during odor perception. J Neurophysiol 98:3254–3262
Borst G, Kosslyn SM (2008) Visual mental imagery and visual perception: structural equivalence revealed by scanning processes. Mem Cogn 36:849–862

Broerse J, Crassini B (1980) The influence of imagery ability on color aftereffects produced by physically present and imagined induction stimuli. Percept Psychophys 28:560–568
Broerse J, Crassini B (1981) Misinterpretations of imagery-induced McCollough effects: a reply to Finke. Percept Psychophys 30:96–98
Broerse J, Crassini B (1984) Investigations of perception and imagery using CAEs: the role of experimental design and psychophysical method. Percept Psychophys 35:153–164
Campos A (2012) Measure of the ability to rotate mental images. Psicothema 24:431–434
Carrasco M, Ridout J (1993) Olfactory perception and olfactory imagery: a multidimensional analysis. J Exp Psychol Hum Percept Perform 19:287–301
Chambers D, Reisberg D (1985) Can mental images be ambiguous? J Exp Psychol Hum Percept Perform 11:317–328
Craver-Lemley C, Arterberry ME (2001) Imagery-induced interference on a visual detection task. Spat Vis 14:101–119
Crowder RG, Schab FR (1995) Imagery for odors. In: Schab FR, Crowder RG (eds) Memory for odors. Erlbaum, Mahwah, NJ, pp 93–107
D'Angiulli A, Reeves A (2007) The relationship between self-reported vividness and latency during mental size scaling of everyday items: phenomenological evidence of different types of imagery. Am J Psychol 120:521–551
Dixon P, Just MA (1978) Normalization of irrelevant dimensions in stimulus comparisons. J Exp Psychol Hum Percept Perform 4:36–46
Finke RA (1979) The functional equivalence of mental images and errors of movement. Cogn Psychol 11:235–264
Finke RA, Kosslyn SM (1980) Mental imagery acuity in the peripheral visual field. J Exp Psychol Hum Percept Perform 6:244–264
Finke RA, Kurtzman HS (1981) Mapping the visual field in mental imagery. J Exp Psychol Gen 110:501–517
Finke RA, Schmidt MJ (1977) Orientation-specific color aftereffects following imagination. J Exp Psychol Hum Percept Perform 3:599–606
Finke RA, Schmidt MJ (1978) The quantitative measure of pattern representation in images using orientation-specific color aftereffects. Percept Psychophys 23:515–520
Gardini S, De Beni R, Cornoldi C, Bromiley A, Venneri A (2005) Different neuronal pathways support the generation of general and specific mental images. NeuroImage 27:544–552
Guariglia C, Padovani A, Pantano P, Pizzamiglio L (1993) Unilateral neglect restricted to visual imagery. Nature 364:235–237
Herz RS (2000) Verbal coding in olfactory versus non-olfactory cognition. Mem Cogn 28:957–964
Hubbard TL (2010) Auditory imagery: empirical findings. Psychol Bull 136:302–329
Intons-Peterson MJ (1980) The role of loudness in auditory imagery. Mem Cogn 8:385–393
Intons-Peterson MJ (1983) Imagery paradigms: how vulnerable are they to experimenters' expectations? J Exp Psychol Hum Percept Perform 9:394–412
Intons-Peterson MJ, White AR (1981) Experimenter naivete and imaginal judgments. J Exp Psychol Hum Percept Perform 7:833–843
Intons-Peterson MJ, Russell W, Dressel S (1992) The role of pitch in auditory imagery. J Exp Psychol Hum Percept Perform 18:233–240
Jehl C, Royet JP, Holley A (1997) Role of verbal encoding in short- and long-term odor recognition. Percept Psychophys 59:100–110
Jönsson FU, Møller P, Olsson MJ (2011) Olfactory working memory: effects of verbalization on the 2-back task. Mem Cogn 39:1023–1032
Kan IP, Barsalou LW, Solomon KO, Minor JK, Thompson-Schill SL (2003) Role of mental imagery in a property verification task: fMRI evidence for perceptual representations of conceptual knowledge. Cogn Neuropsychol 20:525–540
Kleemann AM, Kopietz R, Albrecht J, Schoepf V, Pollatos O,
Schreder T, May J, Linn J, Brackmann H, Wiesmann M (2008)
Investigation of breathing parameters during odor perception and
olfactory imagery. Chem Senses 34:1113
Kosslyn SM (1976) Can imagery be distinguished from other forms of
internal representation? Evidence from studies of information
retrieval time. Mem Cogn 4:291297
Kosslyn SM (1980) Image and mind. Harvard University Press,
Cambridge, MA
Kosslyn SM (1994) Image and brain: the resolution of the imagery
debate. MIT Press, Cambridge, MA
Kosslyn SM, Ball TM, Reiser BJ (1978) Visual images preserve
metric spatial information: evidence from studies of image
scanning. J Exp Psychol Hum Percept Perform 4:4760
Kosslyn SM, Thompson WL, Ganis G (2006) The case for mental
imagery. Oxford University Press, New York
Lloyd-Jones TJ, Vernon D (2003) Semantic interference from visual
object recognition on visual imagery. J Exp Psychol Learn Mem
Cogn 29:563580
Lyman B, McDaniel M (1986) Effect of encoding strategies on longterm memory for odors. Q J Exp Psychol A 38:753765
Lyman B, McDaniel M (1990) Memory for odors and odor names:
modalities of elaboration and imagery. J Exp Psychol Learn
Mem Cogn 16:656664
Marzi CA, Mancini F, Metitieri T, Savazzi S (2006) Retinal
eccentricity effects on reaction time to imagined stimuli.
Neuropsychologia 44:14891495
Mazard A, Laou L, Joliot M, Mellet E (2005) Neural impact of the
semantic content of visual mental images and visual percepts.
Cogn Brain Res 24:423435
McDermott K, Roediger H (1994) Effects of imagery on perceptual
implicit memory tests. J Exp Psychol Learn Mem Cogn
20:13791390
Mellet E, Tzourio N, Crivello F, Joliot M, Denis M, Mazoyer B
(1996) Functional anatomy of spatial mental imagery generated
from verbal instructions. J Neurosci 16:65046512
Olivetti Belardinelli M, Di Matteo R, Del Gratta C, De Nicola A,
Ferretti A, Tartaro A, Bonomo L, Romani GL (2004) Intermodal sensory image generation: an fMRI analysis. Eur J Cogn
Psychol 16:729752
Olivetti Belardinelli M, Palmiero M, Sestieri C, Nardo D, Di Matteo
R, Londei A, DAusilio A, Ferretti A, Del Gratta C, Romani GL
(2009) An fMRI investigation on image generation in different
sensory modalities: the influence of vividness. Acta Psychol
132:190200
Olivetti Belardinelli M, Palmiero M, Di Matteo R (2011) How fMRI
technology contributes to the advancement of research in mental
imagery: a review. In: Peres JFP (ed) Neuroimaging for
cliniciancombining research and practice. InTech Open
Access Publisher, Rijeka (Croatia), pp 329346
Paivio A (1986) Mental representations. New York, Oxford University Press, A dual coding approach
Palermo L, Nori R, Piccardi L, Zeri F, Babino A, Giusberti F,
Guariglia C (2013) Refractive errors affect the vividness of
visual mental images. PLoS ONE 8:e65161
Palmiero M, Olivetti Belardinelli M, Nardo D, Sestieri C, Di Matteo
R, D'Ausilio A, Romani GL (2009) Mental imagery generation
in different modalities activates sensory-motor areas. Cogn
Process 10:268–271
Perkins J, McLaughlin Cook N (1990) Recognition and recall of
odors: the effects of suppressing visual and verbal encoding
processes. Br J Psychol 81:221–226
Perky CW (1910) An experimental study of imagination. Am J
Psychol 21:422–452
Plailly J, Delon-Martin C, Royet JP (2012) Experience induces
functional reorganization in brain regions involved in odor
imagery in perfumers. Hum Brain Mapp 33:224–234
Pylyshyn ZW (1973) What the mind's eye tells the mind's brain: a
critique of mental imagery. Psychol Bull 80:1–24
Pylyshyn ZW (1981) The imagery debate: analogue media versus
tacit knowledge. Psychol Rev 87:16–45
Pylyshyn ZW (2002) Mental imagery. In search of a theory. Behav
Brain Sci 25:157–182
Pylyshyn ZW (2003) Return of the mental image: are there really
pictures in the brain? Trends Cogn Sci 7:113–118
Rinck F, Rouby C, Bensafi M (2009) Which format for odor images?
Chem Senses 34:11–13
Royet JP, Delon-Martin C, Plailly J (2013) Odor mental imagery in
non-experts in odors: a paradox? Front Hum Neurosci 7:87
Schneider TR, Engel AK, Debener S (2008) Multisensory identification
of natural objects in a two-way crossmodal priming
paradigm. Exp Psychol 55:121–132
Segal SJ, Fusella V (1970) Influence of imaged pictures and sounds
on detection of visual and auditory signals. J Exp Psychol
83:458–464
Shepard RN, Metzler J (1971) Mental rotation of three-dimensional
objects. Science 171:701–703
Stevenson RJ, Case TI (2005) Olfactory imagery: a review. Psychon
Bull Rev 12:244–264
Stevenson RJ, Case TI, Mahmut M (2007) Difficulty in evoking odor
images: the role of odor naming. Mem Cogn 35:578–589
Stuart GP, Jones DM (1996) From auditory image to auditory percept:
facilitation through common processes? Mem Cogn 24:296–304
Thompson WL, Kosslyn SM, Hoffman MS, van der Kooij K (2008)
Inspecting visual mental images: can people see implicit
properties as easily in imagery and perception? Mem Cogn
36:1024–1032
Thompson-Schill SL (2003) Neuroimaging studies of semantic
memory: inferring how from where. Neuropsychologia
43:280–292
Zucco MG (2003) Anomalies in cognition: olfactory memory. Eur
Psychol 8:77–86