of Predictable Speech
Carine Signoret and Mary Rudner
Objectives: The precision of stimulus-driven information is less critical for comprehension when accurate knowledge-based predictions of the upcoming stimulus can be generated. A recent study in listeners without hearing impairment (HI) has shown that form- and meaning-based predictability independently and cumulatively enhance perceived clarity of degraded speech. In the present study, we investigated whether form-

the perceived clarity of degraded speech is enhanced by both predictability based on form (i.e., phonological/lexical) at the word level and on meaning (i.e., semantic) at the sentence level (Signoret et al. 2018). For individuals with hearing impairment (HI) who may struggle particularly hard to understand degraded
0196/0202/2018/XXXX-00/0 • Ear & Hearing • © 2019 The Authors. Ear & Hearing is published on behalf of the American Auditory Society, by Wolters Kluwer Health, Inc. • Printed in the U.S.A.
doi: 10.1097/AUD.0000000000000689
2005; Experiment 1 in Sheldon et al. 2008a; Thiel et al. 2016). Only one study to our knowledge has previously investigated NVS perception for individuals with HI (Souza & Boike 2006) and reported lower speech recognition performance for older individuals with HI than younger adults without HI at NVS 4 and 8 bands (as well as clear speech), but similar speech recognition performance at NVS 1 and 2 bands. However, age, but not the degree of hearing loss (mild to severe), was a significant predictor of NVS recognition performance, possibly due to greater lexical knowledge associated with age (Wingfield et al. 2005; Sheldon et al. 2008a) or slower processing. In the study of Neger et al. (2014) comparing younger and older adults (of whom one-third had untreated HI), processing speed was a significant predictor of NVS perception performance for older adults only. To our knowledge, no study has yet investigated NVS perceptual learning for individuals with HI. However, as the envelope of the speech signal is preserved in both sensorineural HI and NVS, we assume that individuals with HI and hearing aids will be able to perceive NVS in much the same way as individuals with normal hearing. Because no difference in NVS perceptual learning has been reported between younger and older adults without HI, NVS perception of individuals with HI with hearing aids should be similar to NVS perception of older adults without HI.

NVS is a key tool for investigating the influence of knowledge-based support during degraded speech perception. For example, giving access to the content of the upcoming auditory sentence before hearing the speech signal leads to a clearer perception of the sentence (Davis et al. 2005). For individuals without HI, this effect, known as the “pop-out effect,” has been investigated extensively (Davis et al. 2005; Davis & Johnsrude 2007; Hervais-Adelman et al. 2011; Wild et al. 2012), whereas it has not, to our knowledge, been reported whether individuals with HI experience the pop-out effect. For individuals without HI, presenting matching text only 200 msec before presenting a degraded spoken sentence improves the perceived clarity of the sentence more than meaningless consonant strings do (Wild et al. 2012; Signoret et al. 2018). This improvement in perceptual clarity is thought to be related to knowledge-based predictions about the phonological/lexical form of the upcoming word influencing upcoming speech processing (Sohoglu et al. 2014; Signoret et al. 2018). Perceptual clarity of degraded spoken sentences is also greater for sentences with high compared with low semantic coherence (Signoret et al. 2018). Such an improvement may be explained by the fact that knowledge-based predictions about the semantic meaning of the upcoming sentence facilitate processing by reducing the number of lexical candidates (Federmeier 2007). The facilitative effects of phonological/lexical form- and semantic meaning-based predictions are independent but also additive, suggesting that several sources of knowledge-based predictions (i.e., at different linguistic levels; see Pickering & Garrod 2013) could combine to improve the perceptual clarity of degraded speech (Signoret et al. 2018). Although the independent and additive facilitative effects of form- and meaning-based predictions have been highlighted for individuals without HI (Signoret et al. 2018), it is unclear whether the pattern of effects would be similar for individuals with HI.

The ability to understand degraded speech is associated with working memory (Rudner et al. 2011), especially for individuals with HI. For individuals without HI, the perceptual clarity benefit associated with form- and meaning-based predictability is also related to working memory (Signoret et al. 2018). The Ease of Language Understanding model (Rönnberg et al. 2013) proposes two particular roles of working memory in speech processing: predictive and postdictive. Predictively, working memory is likely to be involved in the storage of knowledge-based predictions, while postdictively, working memory is likely to be involved during the explicit processing of degraded speech when a mismatch occurs between knowledge-based predictions and the auditory signal. As the perceived auditory signal is always degraded for individuals with HI, it is likely that they use linguistic and cognitive abilities differently than individuals without HI (Wingfield et al. 2005; Rogers et al. 2012). In particular, individuals with HI are probably used to utilizing working memory capacity in its predictive function to minimize slow and effortful explicit processing (Rudner et al. 2011). If the speech signal matches knowledge-based predictions, lexical access will be faster. For individuals with HI using hearing aids, there is evidence that degraded speech recognition is more dependent on working memory when speech is more highly degraded and no meaning-based predictions are available (Rudner et al. 2011). Because knowledge-based predictions are dependent on the lexicon, it could be argued that the larger the lexicon, the better the quality of the meaning-based predictions, and the higher the probability of implicit processing of the incoming signal (Benard et al. 2014). It is important to evaluate whether the benefit of form- and meaning-based predictions during perception of degraded speech is associated with these cognitive and linguistic factors. This knowledge could be used in the design of technological supports such as written text assistance for individuals with HI and consequently improve their experience of everyday life in social situations.

In the present study, we investigated how form- and meaning-based predictions influence the perceptual clarity of degraded speech for individuals with HI. Based on our previous study investigating this question for individuals without HI (Signoret et al. 2018), we compared perceptual clarity ratings of spoken sentences with high or low semantic coherence that were degraded with noise-vocoding and preceded by matching or nonmatching (nM) text primes. Matching text primes allowed generation of form-based predictions, while coherence allowed generation of meaning-based predictions (Signoret et al. 2018). Individuals with moderate to severe sensorineural hearing loss who wear hearing aids were recruited to participate in the present study. Because NVS seems to be perceived similarly by individuals with and without HI, we expected to find main effects of both form- and meaning-based predictions, as well as a significant interaction showing that meaning-based predictions enhance perceptual clarity even when form-based predictions can be made based on the text primes provided. We also investigated how linguistic and cognitive factors are associated with the enhancement of the perceptual clarity of degraded speech by form- and meaning-based predictions. We hypothesized that (1) greater working memory capacity will correlate with higher clarity ratings for heavily degraded spoken sentences with low semantic coherence (Rudner et al. 2011) and (2) greater verbal fluency (indexing a larger mental lexicon) will correlate with higher clarity ratings for slightly degraded spoken sentences with high semantic coherence (Benard et al. 2014).
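The noise-vocoding used to degrade the sentences can be illustrated in a few lines: split the waveform into frequency bands, extract each band's amplitude envelope (the cue preserved in both NVS and sensorineural HI, as noted above), and use that envelope to modulate band-limited noise; fewer bands mean stronger degradation (cf. NV12, NV6, NV3, NV1). The sketch below is our own minimal illustration, not the authors' stimulus-generation code; the band range (100 to 4000 Hz), logarithmic spacing, and 16-msec envelope smoothing are assumptions.

```python
import numpy as np

def noise_vocode(signal, fs, n_bands, f_lo=100.0, f_hi=4000.0, seed=0):
    """Replace the fine structure of `signal` with band-limited noise,
    keeping only each band's amplitude envelope (noise-vocoded speech)."""
    rng = np.random.default_rng(seed)
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)        # log-spaced band edges
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    noise_spec = np.fft.rfft(rng.standard_normal(len(signal)))  # broadband carrier
    win = max(1, int(0.016 * fs))                        # ~16 ms smoothing window
    kernel = np.ones(win) / win
    out = np.zeros(len(signal))
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (freqs >= lo) & (freqs < hi)
        band = np.fft.irfft(spectrum * mask, n=len(signal))
        env = np.convolve(np.abs(band), kernel, mode="same")    # amplitude envelope
        carrier = np.fft.irfft(noise_spec * mask, n=len(signal))
        out += env * carrier                             # envelope modulates noise band
    return out
```

With `n_bands=1`, only the overall envelope survives and the signal is essentially unintelligible; increasing the band count restores spectral detail while the fine structure stays noise.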
(Wechsler 1981). The test was administered following the standard procedure, and scoring was calculated with time bonus.

Size-Comparison Span test: The Size-Comparison Span test (SiCSpan; Sörqvist et al. 2010) is sensitive to working memory capacity. In the SiCSpan test, questions in Swedish about the relative size of two objects in the same category were presented on a screen (e.g., is a cow larger than a cat?). After a yes/no button press response (right hand for yes/left hand for no), another word designating an object belonging to the same object category was presented (e.g., crocodile). The participants were instructed to maintain this intermediate word in memory for later recall. Participants began the test with two questions (i.e., two intermediate words to recall), and the difficulty increased progressively up to six questions (i.e., six intermediate words to recall). Immediately after the final intermediate word had been presented, the participants were cued to recall. To do this successfully, the participants had to recall all the intermediate words while inhibiting report of the words included in the size-comparison questions. The score was calculated as the number of correct responses (CRs) during the recall phase. The maximum score is 40.

Linguistic Abilities • Verbal Fluency test: Verbal fluency was tested using an adaptation of the first part of the FAS test (Spreen & Strauss 1998). The participants were given 90 sec to write down as many Swedish words as possible beginning with the letter F. They were instructed to avoid repetitions, nonexistent words, variations of the same word, and proper nouns, and not to worry about orthography. The participants were informed that after the 90 sec were up, they would be given the opportunity to correct their spelling if they wished, so that they would not spend time attending to orthography during the test. The score corresponded to the number of correct words given by the participant.

Verbal Information Processing Speed test: Verbal abilities were tested using the Verbal Information Processing Speed (VIPS) test (Hällgren et al. 2001). Participants responded by pressing the yes button with the right hand and the no button with the left hand. Scores were calculated by averaging reaction times (RTs) for CRs.

Warm-up task: This test was designed to obtain a baseline RT. The participants were asked to respond by pressing the appropriate button as fast as possible when “yes” or “no” (i.e., “ja” or “nej” in Swedish) appeared on the screen. Ten trials were presented in total to train the participants to use the response buttons. A success rate of 60% was mandatory to allow the participant to perform the other tasks of the VIPS test. The RTs to this task were subtracted from the RTs on the other tasks in order to obtain an unbiased measure of access speed (i.e., excluding motor reaction in pressing a button or general processing speed; see Carroll et al. 2016).

Lexical decision task: Participants were asked to judge whether three-letter strings were legitimate Swedish words. Forty items were used in the test: 20 real familiar Swedish words, 10 pseudowords (i.e., pronounceable letter strings), and 10 nonwords (i.e., unpronounceable consonant strings).

Phonological decision task: Participants were asked to decide whether two visually presented words rhymed or not. Thirty-two pairs of words were presented; half of the pairs rhymed, half did not. In each case, half of the pairs were orthographically similar. For example, rhyming pairs included orthographically similar items, e.g., KATT/HATT (i.e., cat/hat), and orthographically dissimilar items, e.g., DAGS/LAX (i.e., day/salmon, pronounced [daks/laks]). Nonrhyming pairs included orthographically similar items, e.g., STÅL/STEL (i.e., steel/stiff), and orthographically dissimilar items, e.g., CYKEL/SPADE (i.e., bike/shovel). Thirty-two pairs, eight of each type, were presented.

Semantic decision task: Participants were asked to decide whether the presented word belonged to a predefined semantic category. Four categories (animals, lands, sports, and vegetables) were used. Within each of 4 blocks (1 per category), 24 word items were presented, of which 16 belonged to the category and 8 did not.

Overall Procedure

After informed consent was obtained, PTA and a handedness questionnaire (Oldfield 1971) were administered. The cognitive/linguistic test battery was administered in the following order: BD, FAS, VIPS (warm-up, lexical decision task, phonological decision task, and semantic decision task), and SiCSpan. The perceptual clarity experiment was administered last. The total duration of the test session was approximately 75 min.

Statistical Analysis

As variance was expected to be too low at the NV1 and clear sound quality levels, clarity ratings for these two levels were not planned to be included in the analysis of variance (ANOVA). Mean clarity ratings were then entered in a 3*Sound Quality (NV12, NV6, NV3) × 2*Prime (M, nM) × 2*Coherence (HC, LC) repeated-measures ANOVA in Statistica 13 (Hill & Lewicki 2005). The Student–Newman–Keuls test was applied for post hoc comparisons (Howell 1998). An alpha level of 0.05 was used, after Greenhouse-Geisser correction where appropriate. Due to the small sample size and the comparison between different measurement scale units, Spearman rank correlations were chosen for calculating correlations between clarity ratings (Likert scale) and scores (CRs or RTs) on the cognitive/linguistic test battery, to confirm (1) the involvement of working memory capacity in the most degraded condition (NV3) of the experiment and (2) the involvement of linguistic abilities at low levels of sound quality degradation. Spearman rank correlations were also calculated to determine the association between cognitive/linguistic abilities (working memory and verbal fluency) and the perceptual clarity benefits contingent on matching text primes and semantic coherence. Bonferroni correction for multiple comparisons was applied where appropriate.

RESULTS

Perceptual Clarity Experiment

The clarity ratings at the five sound quality levels (clear, NV12, NV6, NV3, and NV1) with and without text primes (M and nM, respectively) and for sentences of high and low semantic coherence (HC and LC, respectively) are shown in Table 1.

TABLE 1. The mean clarity ratings and the associated SD are reported for the five sound quality levels (clear, NV12, NV6, NV3, and NV1), as well as for the M and nM primes, and for the sentences of HC and LC
HC, high semantic coherence; LC, low semantic coherence; M, matching; nM, nonmatching.

The repeated-measures ANOVA conducted on the 3 × 2 × 2 conditions revealed a main effect of Sound Quality [F(1.68,35.37) = 83.31; p < 0.001; ηp² = 0.799]. Post hoc comparisons (Student–Newman–Keuls) demonstrated that clarity was rated higher at NV12 (M = 3.76; SE = 0.48) than at NV6 (M = 3.04; SE = 0.46), and higher at NV6 than at NV3 (M = 2.41; SE = 0.35; all ps < 0.001).
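The Statistical Analysis above combines two simple computations: a benefit score (the difference between condition means, collapsed over the other factors) and a Spearman rank correlation between that score and a cognitive measure. A self-contained sketch of both follows; the ranking routine is a textbook implementation of Spearman's rho, and the per-participant values are invented for illustration, not the study's data.

```python
def _ranks(xs):
    """1-based average ranks; tied values share the mean of their positions."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1          # mean of positions i..j, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation computed on ranks."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    var_x = sum((a - mx) ** 2 for a in rx)
    var_y = sum((b - my) ** 2 for b in ry)
    return cov / (var_x * var_y) ** 0.5

# Invented per-participant values (NOT the study's data): mean clarity rating
# in the M and nM conditions, and a verbal fluency score.
m_ratings = [4.1, 3.2, 4.5, 3.0, 3.9, 3.6]
nm_ratings = [2.7, 2.3, 2.9, 2.3, 2.7, 2.5]
fluency = [15, 9, 18, 8, 13, 12]

# Benefit of matching text primes: M minus nM, per participant.
benefits = [m - nm for m, nm in zip(m_ratings, nm_ratings)]
rho = spearman_rho(benefits, fluency)   # rank-based, suited to Likert data
```

Working on ranks rather than raw values is what makes the statistic appropriate for ordinal Likert-scale ratings, as the text argues.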
Clarity was rated higher in the M (M = 3.69; SE = 0.58) than in the nM (M = 2.45; SE = 0.51) condition, as revealed by a main effect of Prime [F(1,21) = 61.69; p < 0.001; ηp² = 0.746], demonstrating that matching text primes enhance perceptual clarity. Clarity was rated higher for HC (M = 3.32; SE = 0.56) than for LC (M = 2.82; SE = 0.49) sentences, as revealed by a main effect of Coherence [F(1,21) = 18.68; p < 0.001; ηp² = 0.471], demonstrating that semantic coherence enhances perceptual clarity.

There was a significant interaction between Prime and Coherence [F(1,21) = 9.02; p = 0.007; ηp² = 0.301; see Fig. 2]: the difference in perceptual clarity between HC and LC sentences, i.e., the effect of coherence, was significant in the M condition (M = 4.06 and 3.32, respectively; p < 0.001) but not in the nM condition (M = 2.57 and 2.33, respectively; p > 0.05, n.s.). This finding indicates that semantic coherence facilitates perceptual clarity only when full phonological/lexical information is provided up front.

The two-way interaction between Prime and Coherence was tempered by a three-way interaction among Sound Quality, Prime, and Coherence [F(2,42) = 3.37; p < 0.05; ηp² = 0.138]. This suggests that the effects of prime and coherence both depend on the sound quality level (Fig. 3). The effect of prime (M versus nM) was observed at all sound quality levels for all sentences. In other words, for individuals with HI, matching text primes increase the perceived clarity of speech irrespective of the sound quality level and of the semantic coherence of the sentences. The effect of coherence (HC versus LC) was observed at all degraded sound quality levels in the M condition (all ps < 0.001) but only at the NV12 sound quality level in the nM condition (p < 0.001). In other words, the perceived clarity of degraded sentences increases with semantic coherence if matching text primes are available, but semantic coherence gives no benefit in perceived clarity when text primes are nM, except at the NV12 sound quality level.

Similarly, a significant interaction between Sound Quality and Prime [F(1.27,26.61) = 3.89; p = 0.050] was due to the difference in perceptual clarity between the M and nM conditions (i.e., the effect of prime) being smaller at NV12 (M = 4.24 and 3.28, respectively) than at NV6 (M = 3.73 and 2.35, respectively) and NV3 (M = 3.09 and 1.73, respectively), although all differences were statistically significant (all ps < 0.006), suggesting an increasing effect of prime with decreasing sound quality. The interaction between Sound Quality and Coherence was not significant.

Little variance was expected at the NV1 and clear sound quality levels, but inspection of the results indicated otherwise. Detailed results for these two sound quality levels indicated that, at NV1, 3 participants rated perceptual clarity higher for M than nM sentences, suggesting that they perceived greater clarity with matching text primes even when speech was unintelligible. One participant gave low ratings overall and rated clear sentences as one. This participant had the worst hearing thresholds at intermediate frequencies (0.5 to 2 kHz). When we removed these participants from the analysis (N = 18, 9 women; Mage = 69.22, SDage = 6.13), variance decreased at the NV1 sound quality level (M = 1.05; SD = 0.155), but we still observed substantial variance at the clear sound quality level (M = 6.50; SD = 0.767). This could be explained by the fact that even at the clear sound quality level, individuals with HI still do not perceive clear speech as clear. Indeed, at this sound quality level, the benefit of prime (0.328) but not of coherence (0.092) is still observable. A supplementary analysis including the clear and NV1 sound quality levels, in a 5*Sound Quality (clear, NV12, NV6, NV3, NV1) × 2*Prime (M, nM) × 2*Coherence (HC, LC) repeated-measures ANOVA, confirmed that the effect of coherence was not significant at the clear sound quality level (p = 0.31). However, excluding these participants from the analysis had a negligible impact on the results, and thus we retained them in the analysis.

Comparison of Individuals With or Without HI

To examine differences in perceptual clarity ratings between listeners with and without HI (Signoret et al. 2018), a repeated-measures ANCOVA was conducted. As no variance was observed at the clear and NV1 sound quality levels for listeners without HI, mean clarity ratings were entered in a 3*Sound Quality (NV12, NV6, NV3) × 2*Prime (M, nM) × 2*Coherence (HC, LC) design. As the two groups were not matched on age (M = 69.41, SD = 5.62 and M = 31.34, SD = 10.32, respectively, for listeners with and without HI; t(40) = 14.81, p < 0.001), this variable was entered as a covariate. Results revealed a main effect of Sound Quality [F(2,78) = 21.23; p < 0.001], showing better perceptual clarity ratings with higher sound quality levels, as well as a main effect of Prime [F(1,39) = 9.58; p < 0.004] and a marginal interaction between Prime and Hearing status [F(1,39) = 3.97; p = 0.053]. Posthoc analysis (unequal N Tukey HSD) suggested that the benefit in perceptual clarity from matching text primes was larger for listeners with HI than for listeners without HI (mean benefit = 1.24 and 0.83, respectively), while no significant difference in perceptual clarity ratings was observed between the two groups when matching text primes were presented (p = 0.458).

Cognitive/Linguistic Test Battery

Results obtained in the different tests of the cognitive/linguistic battery are displayed in Table 2 and are comparable with performances observed in previous literature with similar age groups (for the BD test, see Troyer et al. 1994; for the FAS test, see Tombaugh et al. 1999; and for the VIPS test, see Hällgren et al. 2001), except for the SiCSpan scores, which are lower than previously reported (M = 29.0, SD = 5.01) for a group of hearing aid users of 67.4 years old with mild to moderate hearing loss (Heinrich et al. 2016). Three participants elected not to complete the VIPS tests and one elected not to complete the SiCSpan test.

TABLE 2. Cognitive/linguistic test battery results

              CRs (%)   SD (%)    RTs (msec)   SD (msec)
BD            35.32     ±10.83
FAS           13.55     ±5.54
SiCSpan       14.76     ±6.42
VIPS          91.25     ±6.16     1156.54      ±261.26
Warm-up       86.84     ±12.5     825.83       ±244.42
Lexical       96.05     ±6.49     1631.14      ±466.74
Phonological  81.32     ±15.44    961.72       ±216.45
Semantic      96.37     ±2.53     834.19       ±124.71

Mean CRs and SD are reported for the BD, SiCSpan, and verbal fluency (FAS) tests. Mean RTs and SD for CRs are reported for the tasks (warm-up, lexical decision task, phonological decision task, and semantic decision task) included in the VIPS test.
BD, block design; CR, correct responses; RT, reaction times; SiCSpan, size-comparison span; VIPS, verbal information processing speed.

Intercorrelations between tests of the cognitive/linguistic battery are shown in Table 3. VIPS RTs were negatively associated with working memory abilities (i.e., SiCSpan scores), but this correlation did not survive Bonferroni correction. SiCSpan scores were, however, positively associated with BD scores, which is not surprising as BD is used as a general measure of nonverbal intelligence.

TABLE 3. Spearman’s rho correlation coefficients between the tests in the cognitive/linguistic test battery: mean correct responses for nonverbal ability with the BD test, working memory with the SiCSpan test, and verbal fluency with the FAS test, and mean reaction times for the decision tasks (lexical, phonological, and semantic) included in the VIPS test

Spearman’s Rho    BD       FAS      SiCSpan
FAS               0.205
SiCSpan           0.670*   0.018
VIPS             −0.316   −0.336   −0.546
Phonological     −0.272   −0.300   −0.518
Lexical          −0.453   −0.238   −0.530
Semantic         −0.324   −0.308   −0.551

*p < 0.001.
BD, block design; SiCSpan, size-comparison span; VIPS, verbal information processing speed.

Spearman’s rho correlations were calculated to examine the relations between linguistic and cognitive skills and the perceptual clarity ratings. Confirming our hypothesis, working memory capacity was associated with perceptual clarity ratings in the most difficult perception condition, i.e., with LC sentences preceded by nM primes at the NV3 sound quality level (r = 0.512; p = 0.018), possibly reflecting explicit processing. No other significant correlation was found to confirm our hypothesis.

Spearman’s rho correlations were also calculated to explore the relations between linguistic and cognitive skills and the benefit obtained through the provision of matching text primes and semantic coherence. The benefit of matching text primes was calculated as the perceptual clarity difference between the M and nM conditions collapsed over other factors, and the benefit of semantic coherence was calculated as the perceptual clarity difference between HC and LC sentences collapsed over other factors. A significant correlation after Bonferroni correction was found between the benefit of semantic coherence and verbal fluency (r = 0.540; p = 0.009). This was especially true when matching text primes were available (r = 0.56; p = 0.007), but not when nM text primes were presented (r = 0.42; p > 0.05, n.s.), indicating that the benefit of semantic coherence is associated with better verbal fluency when form-based predictions could be made.

Auditory Skills

We also used Spearman’s rho correlations to examine the relation between auditory skills (i.e., PTA thresholds) and age. The results showed that, in the sample studied, age was not significantly associated with HI at low-frequency thresholds (averaged across 125, 250, and 500 Hz; r = 0.35, p = 0.11), speech-frequency thresholds (averaged across 1000, 2000, and 4000 Hz; r = 0.12, p = 0.61), or high-frequency thresholds (averaged across 6000 and 8000 Hz; r = −0.05, p = 0.84). This is likely due to the fact that the studied sample was recruited from a clinical population.

DISCUSSION

The present study aimed to investigate whether meaning-based predictions (driven by semantic coherence) and
form-based predictions (driven by the phonological/lexical the benefit of meaning-based predictability is consistent what-
form of words) could enhance perceived clarity of degraded ever the amount of speech degradation. This suggests that for
speech for individuals with HI and how this knowledge-based individuals with HI, form-based predictability is always valu-
predictability is associated with linguistic and cognitive factors. able, while meaning-based expectations can only be generated
The results showed that form-based predictability improved after a certain amount of information related either to the form-
the perceived clarity of degraded speech for individuals with based predictability or to the quality of the speech signal. Future
HI, irrespective of meaning-based predictability. However, in studies should investigate if other types of predictability, such
contrast to individuals without HI (Signoret et al. 2018), in the as emotion-based predictability, would affect degraded speech
absence of form-based predictability, meaning-based predic- perception in a similar manner. Emotion-based predictability
tions seem to be only possible when speech is only slightly de- could be manipulated for example with the congruency between
graded (NV speech 12 bands), i.e., where the participants could the prosody and the content of a sentence, or by comparing neu-
probably hear well enough without the matching text primes tral versus emotional words or sentences. The present findings
to benefit from sentence coherence. When it is more severely demonstrated that meaning- and form-based predictability have
degraded (NV speech 6 and 3 bands), the benefit of meaning- cumulative, but not independent, facilitative effects on de-
based prediction was only seen when form-based predictions graded speech perception for individuals with HI. This supports
could also be made. The benefit in terms of perceptual clarity the idea that more precise predictions (at different linguistic
of meaning-based predictions was positively related to verbal levels) about upcoming spoken words can help compensate for
fluency, as measured by the FAS test (Spreen & Strauss 1998) sensory degradation. It could be the case that emotion-based
in an adapted written procedure, but was not related to work- predictability has also an additive facilitative effect on degraded
ing memory capacity as measured with SiCSpan test (Sörqvist speech perception.
et al. 2010). The effect of form-based predictability was also observed,
The present study is the first to our knowledge to investi- although smaller, in clear sentences. The clear level of sound
gate both form- and meaning-based predictability on the per- quality was originally included as a control of good task com-
ceptual clarity of speech for individuals with HI. The results pliance. However, it is not surprising to observe a facilitative
demonstrated that the presentation of matching text primes effect of form-based predictability at this clear sound quality
before the degraded spoken sentences improved the perceived level. Indeed, participants included in this study are individu-
clarity of NV speech clarity at all levels of sound quality for als with moderate to severe symmetrical sensorineural HI.
individuals with HI by allowing form-based predictions. This This means that even if the signal is perfectly clear in the
result demonstrates that predictions based on the phonological/ environment, it is still perceived with some internal degrada-
lexical form at a word level improve the perceived clarity of tions that are not entirely compensated by hearing aid ampli-
degraded speech for individuals with HI. Because perceived fication. At a sound quality level that is too low to provide
clarity and intelligibility of NV speech are highly correlated for intelligibility even for persons with no HI (i.e., NVS with one
individuals without HI (Obleser et al. 2008), we speculate that band), 3 participants showed an effect of form-based predict-
matching text primes will also enhance the intelligibility of de- ability, by reporting better perceptual clarity ratings for sen-
graded speech for individuals with HI. The results of the present study extend previous results observed for younger individuals without HI (Freyman et al. 2017; Signoret et al. 2018) to older individuals with moderate to severe sensorineural HI.

Unsurprisingly, the results also demonstrated that meaning-based predictability improved perceptual clarity, confirming that semantic coherence influences degraded speech perception for individuals with HI (Benichov et al. 2012) and extending this finding to perceptual clarity. This effect was observed consistently when form-based predictions could be made at the same time (i.e., when matching text primes were available before the presentation of the degraded spoken sentences). Statistical comparison with previous results showed that the benefit of form-based predictability was larger for listeners with HI than for listeners without HI. Further, listeners with and without HI rated the perceptual clarity of degraded sentences at similar levels when form-based predictability was available. This suggests that listeners with HI not only use lexical knowledge to make form-based predictions when listening to NVS but can actually use those predictions to achieve as good a sense of clarity as younger listeners without hearing difficulties.

Meaning-based predictability, however, appeared to be dependent on form-based predictability; that is, for individuals with HI, there was no significant benefit of meaning-based predictability if no form-based predictions could be made. It is also important to note that the effect of meaning-based predictability was not dependent on the sound quality level, suggesting that

tences preceded by matching text primes than nM text primes. This may indicate that these participants were particularly influenced by the presentation of prior matching text primes. It is possible that these participants perceived auditory illusions due to the presence of matching text primes (Rogers et al. 2012) and may have relied more on form-based predictions as the stimulus-driven information decreased. Future studies should further investigate the importance of form-based prediction for individuals with HI, as has been done for individuals without HI (Sohoglu et al. 2014). Typical questions to be answered relate to the duration and optimization of this facilitative effect. Using this knowledge, we can improve the design of social environments, especially where important information must reach the listeners (as in airports or train stations, for example).

In the present study, the benefit in terms of perceptual clarity of meaning-based predictions was positively related to verbal fluency, suggesting that for individuals with HI, the ability to mobilize the lexicon contributes to the strength of meaning-based predictions. This is in line with previous results showing that the lexicon is related to knowledge-based predictions (Benard et al. 2014). This benefit of meaning-based predictability was particularly related to verbal fluency when matching text primes were presented beforehand, suggesting that individuals with HI use their lexical knowledge to benefit from meaning-based predictability particularly when form-based predictability is available. As we have already discussed, meaning-based prediction
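The noise-vocoded speech (NVS) discussed above degrades the spectral detail of speech while preserving its temporal envelope in a small number of frequency bands (Shannon et al. 1995); fewer bands yield less intelligible speech. As an illustration only (not the authors' exact processing chain), the following minimal Python sketch shows the standard vocoding steps, assuming NumPy/SciPy; the band edges, filter order, and frequency range are illustrative choices, not the study's parameters:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(signal, fs, n_bands=4, lo=100.0, hi=4000.0):
    """Noise-vocode a speech signal: split it into n_bands
    log-spaced frequency bands, extract each band's amplitude
    envelope, and use that envelope to modulate band-limited
    noise. Summing the bands gives the vocoded output."""
    edges = np.geomspace(lo, hi, n_bands + 1)  # logarithmic band edges
    rng = np.random.default_rng(0)
    noise = rng.standard_normal(len(signal))   # broadband noise carrier
    out = np.zeros(len(signal))
    for low, high in zip(edges[:-1], edges[1:]):
        sos = butter(4, [low, high], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, signal)        # band-limited speech
        envelope = np.abs(hilbert(band))       # amplitude envelope
        carrier = sosfiltfilt(sos, noise)      # band-limited noise
        out += envelope * carrier              # envelope modulates noise
    return out
```

Comparing, for example, `n_bands=4` with `n_bands=8` reproduces the kind of sound-quality manipulation referred to in the text: more bands restore more spectral detail and hence more stimulus-driven information.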
Rogers, C. S., Jacoby, L. L., Sommers, M. S. (2012). Frequent false hearing by older adults: The role of age differences in metacognition. Psychol Aging, 27, 33–45.
Rönnberg, J., Lunner, T., Zekveld, A., et al. (2013). The Ease of Language Understanding (ELU) model: Theoretical, empirical, and clinical advances. Front Syst Neurosci, 7, 31.
Rosemann, S., Gießing, C., Özyurt, J., et al. (2017). The contribution of cognitive factors to individual differences in understanding noise-vocoded speech in young and older adults. Front Hum Neurosci, 11, 11.
Roth, T. N., Hanebuth, D., Probst, R. (2011). Prevalence of age-related hearing loss in Europe: A review. Eur Arch Otorhinolaryngol, 268, 1101–1107.
Rudner, M., Rönnberg, J., Lunner, T. (2011). Working memory supports listening in noise for persons with hearing impairment. J Am Acad Audiol, 22, 156–167.
Schneider, B. A., Avivi-Reich, M., Daneman, M. (2016). How spoken language comprehension is achieved by older listeners in difficult listening situations. Exp Aging Res, 42, 31–49.
Shannon, R. V., Fu, Q.-J., Galvin, J. (2004). The number of spectral channels required for speech recognition depends on the difficulty of the listening situation. Acta Oto-Laryngol Suppl, 50–54.
Shannon, R. V., Zeng, F. G., Kamath, V., et al. (1995). Speech recognition with primarily temporal cues. Science, 270, 303–304.
Sheldon, S., Pichora-Fuller, M. K., Schneider, B. A. (2008a). Effect of age, presentation method, and learning on identification of noise-vocoded words. J Acoust Soc Am, 123, 476–488.
Sheldon, S., Pichora-Fuller, M. K., Schneider, B. A. (2008b). Priming and sentence context support listening to noise-vocoded speech by younger and older adults. J Acoust Soc Am, 123, 489–499.
Signoret, C., Johnsrude, I., Classon, E., et al. (2018). Combined effects of form- and meaning-based predictability on perceived clarity of speech. J Exp Psychol Hum Percept Perform, 44, 277–285.
Sjölander, K., Heldner, M. (2004). Word level precision of the NALIGN automatic segmentation algorithm. Proceedings of the XVIIth Swedish Phonetics Conference, Fonetik 2004, Stockholm University, pp. 116–119.
Sohoglu, E., Peelle, J. E., Carlyon, R. P., et al. (2014). Top-down influences of written text on perceived clarity of degraded speech. J Exp Psychol Hum Percept Perform, 40, 186–199.
Sörqvist, P., Ljungberg, J. K., Ljung, R. (2010). A sub-process view of working memory capacity: Evidence from effects of speech on prose memory. Memory, 18, 310–326.
Souza, P. E., & Boike, K. T. (2006). Combining temporal-envelope cues across channels: Effects of age and hearing loss. J Speech Lang Hear Res, 49, 138–149.
Spreen, O., & Strauss, E. (1998). A Compendium of Neuropsychological Tests: Administration, Norms, and Commentary (2nd ed.). New York, NY: Oxford University Press.
Thiel, C. M., Özyurt, J., Nogueira, W., et al. (2016). Effects of age on long term memory for degraded speech. Front Hum Neurosci, 10, 473.
Tombaugh, T. N., Kozak, J., Rees, L. (1999). Normative data stratified by age and education for two measures of verbal fluency: FAS and animal naming. Arch Clin Neuropsychol, 14, 167–177.
Troyer, A. K., Cullum, C. M., Smernoff, E. N., et al. (1994). Age effects on block design: Qualitative performance features and extended-time effects. Neuropsychology, 8, 95–99.
Wechsler, D. (1981). The psychometric tradition: Developing the Wechsler adult intelligence scale. Contemp Educ Psychol, 6, 82–85.
Wild, C. J., Davis, M. H., Johnsrude, I. S. (2012). Human auditory cortex is sensitive to the perceived clarity of speech. Neuroimage, 60, 1490–1502.
Wingfield, A., Tun, P. A., McCoy, S. L. (2005). Hearing loss in older adulthood: What it is and how it interacts with cognitive performance. Curr Dir Psychol Sci, 14, 144–148.
World Health Organization. (2018). WHO global estimates on prevalence of hearing loss. Retrieved October 8, 2018, from http://www.who.int/deafness/estimates/en/.