
This article was downloaded by: [b-on: Biblioteca do conhecimento online UA] On: 23 April 2013, At: 07:49

Publisher: Routledge Informa Ltd Registered in England and Wales Registered Number: 1072954 Registered office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH, UK

Journal of New Music Research


Methodologies of Brain Research in Cognitive Musicology

Mari Tervaniemi & Titia L. van Zuijen Version of record first published: 09 Aug 2010.

To cite this article: Mari Tervaniemi & Titia L. van Zuijen (1999): Methodologies of Brain Research in Cognitive Musicology, Journal of New Music Research, 28:3, 200-208 To link to this article:


Journal of New Music Research, 28 (1999), No. 3, pp. 200–208

0929-8215/99/2803-200$15.00 © Swets & Zeitlinger

Methodologies of Brain Research in Cognitive Musicology

Mari Tervaniemi and Titia L. van Zuijen
Cognitive Brain Research Unit, Department of Psychology, University of Helsinki, Finland


ABSTRACT
This tutorial introduces the principles of the most common non-invasive methods of modern brain research which have been used in determining the neural functions underlying music perception. The methods described are electroencephalography (EEG), magnetoencephalography (MEG), positron emission tomography (PET), and functional magnetic resonance imaging (fMRI). In addition, evidence from brain-lesioned patients is briefly reviewed.

INTRODUCTION
In the following, the most common non-invasive methods of modern brain research for investigating the functional changes in the human brain during music perception will be introduced. In addition, some landmark observations based on brain-lesioned patients will be given. The main focus of this paper is on the macro-level functions of the human brain since, for ethical reasons, no single-cell recordings can be conducted with human subjects (the only exceptions being recordings during surgical operations or via depth electrodes implanted for diagnostic reasons). This paper will focus on introducing and comparing the functional principles of these methods in order to give a general framework for the paradigms and results reviewed elsewhere in the present issue. Throughout the text the reader is reminded of the historical perspectives of neuromusicology: although neuromusicology has its roots in clinical observations dating back to the 19th century, it is only during the last one to two decades that more concentrated efforts have been made to reveal the complex but intriguing relationship between musical abilities and their neural substrate(s). In this development, of crucial importance have been the parallel steps taken in cognitive science in general and in cognitive musicology specifically, in the methodological and theoretical aspects of brain research, as well as in digital sound processing, which enables controlled sound stimulation during brain recordings.

BRAIN-LESIONED PATIENTS
In the auditory modality, the first observations linking the functions of a specific brain area to auditory perception were made as early as the second half of the 19th century by Wernicke (1874). He demonstrated that a patient who had suffered from a severe deficit in speech understanding exhibited a broad lesion in the temporal lobe




of the left hemisphere at autopsy. During the following decades, several researchers confirmed this importance of the left hemisphere in speech perception as well as in production. The earliest observations of a corresponding role of the right hemisphere in music perception were made at the beginning of the 20th century. It was shown that a lesion or lesions in the right temporal lobe may selectively impair any of the subskills important in musical performance or in music recognition (for reviews, see Benton, 1977; Marin, 1982; Basso, 1993; Dalla Bella & Peretz, 1999; Samson, 1999). Consequently, the ability to sing, play, and/or read musical scores can deteriorate as a result of a right-hemispheric lesion. At the same time, however, researchers have cautioned against drawing firm conclusions about the mechanisms of the healthy human brain on the basis of brain-lesioned patients. This caution stems from the variety of neural connections in the brain: from the primary auditory cortex, for example, there are ascending connections at least to the parietal, occipital, and frontal cortices, and descending connections through the midbrain to the cochlea (Pickles, 1988; Buser & Imbert, 1992). Thus, a lesion in the auditory cortex within the temporal lobe unavoidably also prevents the functioning of subsequent neural networks, making it difficult or even impossible to conclude which of these networks played the most crucial role in the perceptual/motor skill of interest. This pitfall is overcome if a double dissociation between two skills can be established, for instance between language and musical skills, or between pitch and timbre perception (see Dalla Bella & Peretz, 1999; Samson, 1999).

Fig. 1. An electrode cap with 64 EEG electrodes (Lectron Ltd., Finland).

ELECTROENCEPHALOGRAPHY (EEG)
The first successful recordings of the electroencephalogram (EEG) were conducted as early as the 1920s (Berger, 1929; Davis et al., 1939). Since then, the method has improved remarkably, especially once digital technology could be used to acquire and analyse data from the 1970s onwards. Despite these technical developments, however, the basis of the EEG method has remained unchanged:

the electric field generated by neural activity is recorded over time with electrodes attached to the scalp (see Fig. 1). The strength of this electric field at each electrode is approximately proportional to the potential difference between that electrode and a reference electrode. The reference electrode can be placed at different locations assumed to lack brain activity, for instance on an earlobe or the nose. It is also possible to calculate off-line the average strength over all recording electrodes and subtract this from the strengths measured at the individual electrodes, a technique called the average reference. The EEG is a by-product of the brain cells' information transfer, in which intra- and extracellular current flows are modulated by specific membrane mechanisms. When these current flows synchronize, the potential differences summate and become strong enough to be recorded with EEG. The post-synaptic activity of pyramidal dendrites in the cortex (rather than action potentials) particularly possesses these characteristics and is therefore regarded as the main source of the EEG (e.g., Schaul, 1998). Thus, in EEG, the coherent activity of numerous cortical neurons (on the order of 10,000) is recorded. EEG can be used to measure spontaneous brain activity, which is then analyzed in the frequency or time domain. EEG signals can also be investigated after averaging tens or hundreds of presentations of a stimulus, for instance a sound or a melody. In this way, the brain activity specific to processing this stimulus is isolated from other parallel neural activity. These responses, too, can be analyzed in the frequency or time domain. All these techniques will be briefly introduced in the following.
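The average-reference computation mentioned above is a simple linear operation: the mean over all recording electrodes is subtracted from each electrode at every sample. A minimal sketch with NumPy, using hypothetical toy data rather than a real recording:

```python
import numpy as np

# Toy EEG segment: 4 channels x 1000 samples (hypothetical data,
# originally referenced to, say, an earlobe electrode).
rng = np.random.default_rng(0)
eeg = rng.normal(size=(4, 1000))

# Average reference: subtract the mean across all recording electrodes
# from each individual electrode, sample by sample.
avg_ref = eeg - eeg.mean(axis=0, keepdims=True)

# After re-referencing, the channels sum to zero at every time point.
print(np.allclose(avg_ref.sum(axis=0), 0.0))  # True
```

The zero-sum property is a direct consequence of subtracting the channel mean; it is one reason the average reference is popular as an approximately "neutral" reference when many electrodes cover the head evenly.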




EEG analysis in the frequency domain
With techniques based on the Fourier transform, the frequency spectrum of the spontaneous EEG can be investigated. The main frequency bands are alpha (8–13 Hz), beta (13–30 Hz), delta (0.5–4 Hz), theta (4–7 Hz), and gamma (≈40 Hz) (Steriade, 1993; Hari & Salmelin, 1997). Each of these frequency bands reflects a specific state of the brain: alpha waves are generally taken as a sign of relaxed wakefulness, beta waves of mental activity, while delta and theta waves reflect sleep. Gamma-band activity has been associated with a so-called binding process in which multimodal (auditory, visual, tactile) sensory information is combined to form a unified percept of a single object. EEG is commonly used in the clinical diagnosis of, for example, epilepsy, since the occurrence of spikes with slowed or accelerated rhythm reveals abnormal brain activity (Niedermeyer & Lopes da Silva, 1993). In cognitive neuroscience, spectral changes in EEG are quantified while investigating, for example, increased coherence over certain brain areas during a mental task such as listening to a musical piece compared with imagining playing

the same piece (Petsche et al., 1996). These spectral changes in EEG can be studied either across a relatively long time span (1–5 minutes) or time-locked to specific stimuli (see below).

Event-related potentials (ERPs)
By averaging several tens or hundreds of EEG signals following repeated presentations of a stimulus, event-related potentials (ERPs) can be separated from other ongoing brain activity. Assuming that the intrinsic noise of the recordings, and the activity related to other mental processes not time-locked to the stimulus processing of interest, are canceled out by this averaging, the remaining activity can be attributed to the neural processing of the stimulus (see Fig. 2). Thus, following a stimulus (e.g., a sound), several ERP deflections can be observed which reflect different stages of information processing with millisecond accuracy (equal to the sampling rate employed). The earliest of these deflections occur only 1–15 ms after stimulus presentation. These responses are generated in the brainstem and are labeled after their temporal order (I–VII) (Picton,

Fig. 2. The N1 response elicited by a chord, illustrated as a function of an increasing number of averaged EEG epochs (the number of epochs is indicated below each curve). A single EEG signal is displayed in the upper right corner. The recording site is illustrated on the schematic head (middle). In the lower right corner is the response with 100 averages, filtered with a commonly used band-pass of 1–30 Hz.




1980). The subsequent middle-latency responses N0, P0, Na, Pa, and Nb, which can be observed 15–40 ms after stimulus onset, are generated in the auditory cortex (e.g., Celesia, 1976). These deflections are labeled according to their polarity (N for negative, P for positive) and their temporal order. For example, the P1, N1, P2, and N2 waves peak between 50 and 200 ms, and these early deflections are mainly determined by the stimulus features. Some other ERP deflections with longer latencies reflect memory- and attention-related stimulus processing (Näätänen, 1992). These longer-latency deflections and components are referred to by their polarity and latency or, alternatively, by their specific function (Johnson & Baron, 1995; Regan, 1989). For instance, the P300 (P3) wave is a positive wave peaking 300 ms after the presentation of a target tone in a discrimination task (Donchin, 1981); the N400 is a negative component peaking around 400 ms after the onset of a visually or auditorily presented word which violates the semantic context created by the preceding words (Kutas & Hillyard, 1980); the Late Positive Component (LPC) is a positive component peaking 500–800 ms after the onset of a context-violating musical tone (Besson et al., 1995; Besson, in the present issue); and the mismatch negativity (MMN) is a negative component elicited by a tone which is discrepant with the present content of the neural representation formed by the majority of the stimuli (Näätänen et al., 1978; Tervaniemi, in the present issue). In sum, these late ERP components can be regarded as the neural reflections of psychological functions postulated by psychological theories.
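The averaging logic behind ERPs (and Fig. 2) can be illustrated with synthetic data: a fixed stimulus-locked deflection buried in much larger "background EEG" emerges once enough epochs are averaged, since uncorrelated noise shrinks roughly as the square root of the number of epochs. The waveform, latencies, and amplitudes below are invented for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 1000                      # sampling rate in Hz, i.e. 1-ms accuracy
t = np.arange(0, 0.5, 1 / fs)  # a 500-ms post-stimulus epoch

# Synthetic "N1"-like response: a negative deflection peaking ~100 ms
# after stimulus onset (hypothetical shape, not real data).
erp = -2.0 * np.exp(-((t - 0.1) ** 2) / (2 * 0.02 ** 2))

# 100 epochs of the same response buried in background "EEG" noise
# whose amplitude is several times larger than the response itself.
epochs = erp + rng.normal(scale=5.0, size=(100, t.size))

# Averaging cancels activity not time-locked to the stimulus; the
# residual noise standard deviation is scale / sqrt(n_epochs) = 0.5.
average = epochs.mean(axis=0)

peak_latency_ms = t[np.argmin(average)] * 1000
print(round(peak_latency_ms))  # close to 100
```

With a single epoch the deflection is invisible against the noise; with 100 averages the peak latency can be read off reliably, which is exactly the progression Fig. 2 depicts.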
Event-related desynchronization and synchronization (ERD and ERS)
Originally, the technique of combining time-locked and spectral EEG analysis was developed in the field of visual neurocognition: it was found that alpha activity, time-locked to visual stimuli, is selectively suppressed or enhanced as a function of mental activity (Pfurtscheller, 1977). Intrinsically, this technique measures the within-subject changes in cortical activation which are time-locked to the employed stimuli. Thus, event-related desynchronization (ERD) refers to a decrease of alpha-band power, indexing a more active (``conscious'') neural state, whereas event-related synchronization (ERS) refers to an increase of alpha-band power, indexing a more relaxed neural state. In the auditory modality, corresponding phenomena were demonstrated only a few years ago (Krause et al., 1995; Krause, in the present issue).
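A common way to quantify ERD/ERS is to compare band power in a post-stimulus interval with band power in a pre-stimulus baseline. The following sketch fabricates a single channel whose 10-Hz alpha rhythm is suppressed after a "stimulus" at 1 s; all signal parameters are hypothetical, and the simple FFT band-power estimate stands in for the time-frequency methods used in practice:

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 250
t = np.arange(0, 2.0, 1 / fs)   # 2-s epoch: 1-s baseline, 1-s post-stimulus

# Toy EEG: a 10-Hz alpha rhythm whose amplitude halves after the
# "stimulus" at t = 1 s (alpha suppression), plus broadband noise.
alpha_amp = np.where(t < 1.0, 2.0, 1.0)
eeg = alpha_amp * np.sin(2 * np.pi * 10 * t) + 0.3 * rng.normal(size=t.size)

def band_power(x, fs, lo, hi):
    """Mean power in the [lo, hi] Hz band of an FFT power spectrum."""
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / x.size
    sel = (freqs >= lo) & (freqs <= hi)
    return psd[sel].mean()

baseline = band_power(eeg[t < 1.0], fs, 8, 13)   # alpha band, pre-stimulus
event = band_power(eeg[t >= 1.0], fs, 8, 13)     # alpha band, post-stimulus

# Positive ERD% = alpha power decrease relative to baseline
# (desynchronization); a negative value would indicate ERS.
erd_percent = 100 * (baseline - event) / baseline
print(erd_percent > 0)  # True: alpha power dropped after the stimulus
```

In real ERD/ERS studies this computation is done on band-pass filtered, averaged epochs across many trials; the baseline-relative percentage is the quantity usually reported.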

MAGNETOENCEPHALOGRAPHY (MEG)
Magnetoencephalography records the magnetic fields generated by neural activity with ultrasensitive superconducting sensors (SQUIDs) placed inside a helmet-shaped device. Any current creates a magnetic field, and the measurable magnetic fields originate, as in EEG, from the coherent activity of the dendrites of pyramidal cells. The methods for analyzing EEG and MEG are the same; both frequency- and time-domain analysis can be applied to spontaneous or averaged signals (see, for instance, Näätänen et al., 1994). The stimulus-locked MEG epochs are called event-related fields (ERFs), and the magnetic counterparts of ERP components are usually indexed with an `m'. For example, N1m and MMNm are the magnetic counterparts of the N1 and the MMN, respectively (see Fig. 3).

Source localization with EEG and MEG
So far we have described EEG and MEG as two related methods for revealing the temporal aspects of cognitive brain functions. ERPs and ERFs can also be used for localizing the brain sources which are active at a certain point in time, for instance at the peak of an evoked brain response. In this respect there are two important differences between the localization performance of EEG and MEG: one is an advantage of EEG over MEG, and the other an advantage of MEG over EEG. First, the head consists of different tissues, and the electric field has to pass through these tissues before it can be measured on the scalp. The different conductivities of these tissues, and especially the lower conductance of the skull with respect to the underlying tissue, distort the electric field pattern. The head model used for localizing the brain activity needs to take these distortions into account, which introduces uncertainties that affect the final source locations. This is not a problem in MEG, since the measured magnetic fields are not affected by conductivity boundaries. On the other hand, in MEG only activation from sources with a tangential orientation with respect to the head surface can be modelled, sources with radial orientations being undetectable (see Hämäläinen et al., 1993). The advantage of EEG compared with MEG is that neural sources of all orientations can be localized, although the accuracy is lower than in MEG. Source localization techniques are based on estimation procedures which do not have a unique solution. The accuracy of these location estimates depends on, for instance, the number of simultaneously active sources and the number of channels used to measure the EEG or MEG. With favorable source configurations and the most modern MEG devices, consisting of 122 to over 300 channels, a spatial accuracy of about 5 mm can be achieved. In a similar situation for EEG, with electrode arrays of 64–100 channels, the accuracy is around 1 cm.

Fig. 3. Whole-head magnetometer with 306 recording channels placed inside the helmet (Neuromag Ltd., Finland).

POSITRON EMISSION TOMOGRAPHY (PET)
About 20 years ago, the first investigations of the magnitude of cerebral blood flow and metabolic changes in the brain during mental tasks were conducted with positron emission tomography (PET) (e.g., Mazziotta et al., 1982). The rationale underlying PET methodology is that changes in neuronal activity in a given brain area are paralleled by changes in, for example, oxygen or glucose consumption. This area is identified by utilizing radioactive labels which are injected into the blood. The locus of maximal activation change in brain metabolism can be localized by determining the locus of the emitted gamma rays with a PET scanner (see Elliott, 1994; Frackowiak et al., 1997). The intensity of the radiation emitted by the labels decays exponentially, which limits the duration of a PET investigation to between tens of seconds and tens of minutes, depending on the half-life of the label (see Fig. 4). PET investigations always need at least two experimental conditions, so that their metabolic states can be compared by a subtraction operation and thus the brain activity specific to a given mental operation can be resolved. For instance, a comparison like ``listen to melodies with lyrics vs. listen to melodies without lyrics'' could be used to reveal the neural circuits involved in processing lyrics, and ``count the number of tones within a melody vs. count the number of adjectives sung in a melody'' the effect of subjects' attention to verbal vs. non-verbal aspects of songs (for actual comparisons, see Zatorre, the present issue). In the data analysis, the PET data of each individual subject is first transformed into voxel matrices (one voxel denoting, for instance, a 2 × 2 × 2 mm area) and then into a normalized anatomical space (``an average brain'') to allow appropriate comparisons across subjects (Talairach & Tournoux, 1988). Thereafter, the statistical significance of the metabolic change within each voxel is evaluated (Frackowiak et al., 1997). The resulting





Fig. 4. Schematic illustration of the PET scanner. The sensor array detects the locus of radiation by localizing the focus of gamma-ray emission inside the helmet.

data enable one to determine the activated area with an accuracy of better than 1 cm. This accuracy is particularly valuable for localizing activity in deep (subcortical) structures, where the spatial accuracy of MEG is relatively poor. In addition, it is important to note that with PET the activated brain area can be directly determined, whereas in EEG and MEG the activated area is inferred by modeling the generator locus of the recorded activity. On the other hand, since metabolic and blood-flow changes are slow, and the half-lives of the radioactive labels are in the second-to-minute range, only overall changes in brain activity as a function of the subject's task can be determined (comparable to spectral EEG changes).
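The subtraction logic of PET analysis (condition A minus condition B, followed by voxel-wise statistics in normalized space) can be sketched on toy data. Everything below is fabricated for illustration: the grid size, the subject count, the "lyrics" conditions, and the injected activation are hypothetical stand-ins for a real normalized PET data set:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy normalized PET data: 12 subjects x (4 x 4 x 4) voxel grid,
# one volume per condition ("with lyrics" vs. "without lyrics").
n_subjects, shape = 12, (4, 4, 4)
without = rng.normal(loc=100.0, scale=5.0, size=(n_subjects, *shape))
with_lyrics = without + rng.normal(scale=2.0, size=(n_subjects, *shape))
with_lyrics[:, 1, 2, 3] += 15.0   # one voxel "activated" by the lyrics

# Subtraction image per subject, then a paired t statistic per voxel.
diff = with_lyrics - without
t_map = diff.mean(axis=0) / (diff.std(axis=0, ddof=1) / np.sqrt(n_subjects))

# The voxel with the largest t value is the simulated activation site.
print(np.unravel_index(np.argmax(t_map), shape))  # (1, 2, 3)
```

Real analyses (e.g., in the SPM framework cited via Frackowiak et al., 1997) additionally smooth the images and correct the voxel-wise statistics for multiple comparisons, but the subtraction-then-test structure is the same.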

FUNCTIONAL MAGNETIC RESONANCE IMAGING (fMRI)
In principle, functional magnetic resonance imaging is comparable to PET: it records changes in the amount of oxygen present in the brain rather than changes in electric currents as EEG and MEG do (Elliott, 1994; Moonen et al., 1990). However, since no radioactive labels are needed, the temporal resolution of fMRI recordings does not depend on the half-lives of such labels; instead, the changes in oxygen content, and thus the increase in cerebral blood flow, are measured directly. As a result, fMRI reaches a time resolution of less than 1 second. Since its spatial resolution is also relatively good (below 1 cm), it will probably become one of the most widely employed methods in cognitive neuroscience. At present, relatively few fMRI studies employing sound stimulation have been conducted, partially due to the device noise present during the recordings (see, however, Binder et al., 1993, 1994; Wessinger et al., 1997).

CONCLUSIONS
The present tutorial aims at summarizing the methods most relevant to cognitive musicology for recording the neural mechanisms underlying music perception and performance (see Table 1). Obviously, each of these methods has its intrinsic pros and cons, especially when brain activity related to any specific cognitive process is to be iso-



Table 1. Summary of the above-reviewed non-invasive brain research methods.

Method | Activity recorded | Temporal resolution | Spatial resolution | Deficit(s) | Strength(s)
EEG | Electric | 1 msec | ≈1 cm | Poor spatial resolution | Good temporal resolution; very common in hospitals; cheap to buy and to maintain
MEG | Magnetic | 1 msec | 5–8 mm | Source localization reliable only with cortical tangential sources; relatively expensive to buy and to maintain | Good spatial and temporal resolution
PET | Cerebral blood flow/metabolism | 1 min | 1 cm | Need of radioactive labels, so the same subjects can be studied only at ≈1-year intervals; expensive to buy and to maintain | Relatively common in hospitals; direct measure of activated areas; detects subcortical sources
fMRI | Cerebral blood flow/metabolism | 0.5 sec | <1 cm | Device noise during the measurements | Direct measure of activated areas; detects subcortical sources

lated. The optimal solution is to conduct experiments employing multiple methods, either in parallel or serially. Such experiments would be most informative if they combined electric and metabolic methods. So far, such studies have only been conducted with relatively simple experimental questions, inherited from the early days of each method taken separately. However, in the near future we hope to see such steps taken in the field of cognitive neuromusicology as well. As the content of the present Special Issue indicates, the knowledge available among neuromusicologists mainly concerns ``cognitively perceiving human beings'' rather than ``emotionally performing musicians''. There are two main reasons for this current emphasis. First, the brain research methods currently in use were developed with a ``passive'' subject in mind: a subject making gross muscle movements, such as finger movements to mimic (or perform) instrumental playing during the experiment, would cause tremendous problems for data analysis, either because gross muscle movements have additional effects on the recorded electrical brain activity (EEG, MEG) or because even very slight changes in head position (caused by intentional finger movements) destroy the stability needed for reliable localization of evoked brain activity (MEG, PET, fMRI). Second, the present theories about the emotional changes caused by music, and especially their connections to underlying neural functions, are not yet well enough established to have been transformed into experimental designs (for recent progress in clinical studies, see Peretz & Gagnon, in press; Peretz, Gagnon, & Bouchard, in press). Fortunately, since brain research methods as well as theories about emotion are under continuous development, we can hope that neuromusicology will soon also cover playing and feeling musicians.

ACKNOWLEDGEMENTS
The authors thank Dr. Maria Jaramillo and Kalevi Reinikainen, M.Sc., for their comments on an earlier version of the manuscript.



REFERENCES
Basso, A. (1993). Amusia. In F. Boller & J. Grafman (Eds.), Handbook of Neuropsychology, Vol. 8 (pp. 373–390). Oxford: Oxford University Press.
Benton, A.L. (1977). The amusias. In M. Critchley & R.A. Henson (Eds.), Music and the Brain (pp. 378–397). Southampton: The Camelot Press Ltd.
Berger, H. (1929). Über das Elektrenkephalogramm des Menschen. 1st report. Archiv für Psychiatrie und Nervenkrankheiten, 87, 527–570.
Besson, M. & Faïta, F. (1995). An event-related potential (ERP) study of musical expectancy: Comparison between musicians and non-musicians. Journal of Experimental Psychology: Human Perception and Performance, 21, 1278–1296.
Besson, M. (1999). The musical brain: The neural substrate of music perception. Journal of New Music Research, 28, 246–256.
Binder, J.R., Rao, S.M., Hammeke, T.A., Frost, J.A., Bandettini, P.A., & Hyde, J.S. (1994). Effects of stimulus rate on signal response during functional magnetic resonance imaging of auditory cortex. Cognitive Brain Research, 2, 31–38.
Binder, J.R., Rao, S.M., Hammeke, T.A., Yetkin, F.Z., Jesmanowicz, A., Bandettini, P.A., Wong, E.C., Estkowski, L.D., Goldstein, M.D., Haughton, V.M., & Hyde, J.S. (1993). Functional magnetic resonance imaging of human auditory cortex. Annals of Neurology, 35, 662–672.
Buser, P. & Imbert, M. (1992). Audition. Cambridge, MA: The MIT Press.
Celesia, G.G. (1976). Organization of auditory cortical areas in man. Brain, 99, 403–414.
Dalla Bella, S. & Peretz, I. (1999). Music agnosias: Selective impairments of music recognition after brain damage. Journal of New Music Research, 28, 209–216.
Davis, H., Davis, P.A., Loomis, A.L., Harvey, R.N., & Hobart, G. (1939). Electrical reactions of the human brain to auditory stimulation during sleep. Journal of Neurophysiology, 2, 500–514.
Donchin, E. (1981). Surprise!... Surprise? Psychophysiology, 18, 493–513.
Elliott, L.L. (1994). Functional brain imaging and hearing. Journal of the Acoustical Society of America, 96, 1397–1408.
Frackowiak, R.S.J., Friston, K.J., Frith, C.D., Dolan, R.J., & Mazziotta, J.C. (1997). Human Brain Function. San Diego: Academic Press.
Hämäläinen, M., Hari, R., Ilmoniemi, R.J., Knuutila, J., & Lounasmaa, O.V. (1993). Magnetoencephalography: Theory, instrumentation, and applications to noninvasive studies of the working human brain. Reviews of Modern Physics, 65, 413–497.
Hari, R. & Salmelin, R. (1997). Human cortical oscillations: A neuromagnetic view through the skull. Trends in Neurosciences, 20, 44–49.
Johnson, R., Jr. & Baron, J.C. (Eds.) (1995). Handbook of Neuropsychology. Amsterdam: Elsevier.
Krause, C.M. (1999). Event-related desynchronization (ERD) and synchronization (ERS) during auditory information processing. Journal of New Music Research, 28, 257–265.
Krause, C.M., Lang, H.A., Laine, M., Kuusisto, M.J., & Pörn, B. (1995). Cortical processing of vowels and tones as measured by event-related desynchronization. Brain Topography, 8, 47–56.
Kutas, M. & Hillyard, S.A. (1980). Reading senseless sentences: Brain potentials reflect semantic incongruity. Science, 207, 203–205.
Marin, O.S.M. (1982). Neurological aspects of music perception and performance. In D. Deutsch (Ed.), The Psychology of Music (pp. 453–477). New York: Academic Press.
Mazziotta, J.C., Phelps, M.E., Carson, R.E., & Kuhl, D.E. (1982). Tomographic mapping of human cerebral metabolism: Auditory stimulation. Neurology, 32, 921–937.
Moonen, C.T.W., van Zijl, P.C.M., Frank, J.A., Le Bihan, D., & Becker, E.D. (1990). Functional magnetic resonance imaging in medicine and physiology. Science, 250, 53–61.
Näätänen, R., Gaillard, A.W.K., & Mäntysalo, S. (1978). Early selective-attention effect on evoked potential reinterpreted. Acta Psychologica, 42, 313–329.
Näätänen, R. (1992). Attention and Brain Function. Hillsdale, NJ: Lawrence Erlbaum Associates.
Näätänen, R., Ilmoniemi, R., & Alho, K. (1994). Magnetoencephalography in studies of human cognitive brain function. Trends in Neurosciences, 17, 389–395.
Niedermeyer, E. & Lopes da Silva, F.H. (Eds.) (1993). Electroencephalography: Basic Principles, Clinical Applications, and Related Fields (3rd ed.). Baltimore: Williams & Wilkins.
Peretz, I. & Gagnon, L. (in press). Dissociation between recognition and emotion for melodies. Neurocase.
Peretz, I., Gagnon, L., & Bouchard, B. (in press). Music and emotion: Perceptual determinants, immediacy and isolation after brain damage. Cognition.
Petsche, H., von Stein, A., & Filz, O. (1996). EEG aspects of mentally playing an instrument. Cognitive Brain Research, 3, 115–123.
Pfurtscheller, G. (1977). Graphical display and statistical evaluation of event-related desynchronization. Electroencephalography and Clinical Neurophysiology, 42, 817–826.
Pickles, J.O. (1988). An Introduction to the Physiology of Hearing. London: Academic Press.
Picton, T.W. (1980). The use of human event-related potentials in psychology. In I. Martin & P.H. Venables (Eds.), Techniques in Psychophysiology (pp. 357–395). New York: Wiley.
Regan, D. (1989). Human Brain Electrophysiology: Evoked Potentials and Evoked Magnetic Fields in Science and Medicine. New York: Elsevier.
Samson, S. (1999). Musical function and temporal lobe structures: A review of brain lesion studies. Journal of New Music Research, 28, 217–228.
Schaul, N. (1998). The fundamental neural mechanisms of electroencephalography. Electroencephalography and Clinical Neurophysiology, 106, 101–107.
Steriade, M. (1993). Cellular substrates of brain rhythms. In E. Niedermeyer & F.H. Lopes da Silva (Eds.), Electroencephalography: Basic Principles, Clinical Applications, and Related Fields (3rd ed., pp. 27–62). Baltimore: Williams & Wilkins.





Talairach, J. & Tournoux, P. (1988). Co-planar Stereotaxic Atlas of the Human Brain: 3-Dimensional Proportional System: An Approach to Cerebral Imaging. Stuttgart: Thieme.
Tervaniemi, M. (1999). Pre-attentive processing of musical information in the human brain. Journal of New Music Research, 28, 237–245.
Wernicke, C. (1874). Der aphasische Symptomencomplex: Eine psychologische Studie auf anatomischer Basis. English translation in G.H. Eggert (1977), Wernicke's Works on Aphasia: A Sourcebook and Review. The Hague: Mouton Publishers.
Wessinger, C.M., Buonocore, M.H., Kussmaul, C.L., & Mangun, G.R. (1997). Tonotopy in human auditory cortex examined with functional magnetic resonance imaging. Human Brain Mapping, 5, 18–25.
Zatorre, R.J. (1999). Brain imaging studies of musical perception and musical imagery. Journal of New Music Research, 28, 229–236.

Titia L. van Zuijen
Department of Psychology
P.O. Box 13, FIN-00014 University of Helsinki
Finland
Tel.: +358-9-191-23408
Fax: +358-9-191-22924
E-mail:

Titia L. van Zuijen completed her degree in psychology in 1997 at the University of Amsterdam. She specialised in psychophysiology and investigated both empirical and methodological issues of event-related MEG and EEG research and source modelling. She is currently preparing her doctoral thesis at the Cognitive Brain Research Unit of the University of Helsinki on the auditory perception of temporally complex (musical) sounds.

Mari Tervaniemi
Department of Psychology
P.O. Box 13, FIN-00014 University of Helsinki
Finland
Tel.: +358-9-191-23408
Fax: +358-9-191-22924
E-mail:

Mari Tervaniemi received her doctoral degree in psychology in 1997 at the University of Helsinki, Finland. In her thesis she used EEG and MEG recordings to clarify the neuronal mechanisms that are automatically activated during the presentation of musical sound information. Since then she has investigated the usefulness of ERPs in clinical settings and has further clarified the neural basis of music perception with EEG, MEG, and PET recordings.