
Chapter 4
Hearing


The Nature of Consciousness
A Hypothesis

Susan Pockett
Hearing
    4.1  Spatial electromagnetic patterns and auditory experience
    4.2  Temporal electromagnetic patterns and auditory experience
    4.3  Evidence that middle latency waves of the auditory evoked potential correlate with simple sensation
    4.4  Relationship of global 40 Hz waves to auditory evoked potentials
    4.5  Summary

     This chapter asks the question "does the brain generate patterns in the electromagnetic field that correlate with auditory sensations?". The answer is again "yes."

4.1  Spatial electromagnetic patterns and auditory experience

     Again, work from Walter Freeman's lab leads the way in answering the question addressed in this chapter (Barrie, Freeman, & Lenhart, 1996 [23]). The experiments in this case were structured as follows.

     In each of 5 rabbits, an array of 64 electrodes spaced 0.79 mm apart was implanted over the left auditory cortex, under surgical anesthesia. After a week's convalescence, each rabbit was classically conditioned to discriminate between two kinds of 100 ms tones, one of 500 Hz and the other of 5,000 Hz. The conditioning paradigm used a brief electric shock delivered to the cheek 3 seconds after one or the other of the two kinds of tone. The tone that was paired with the shock was called the conditioned stimulus (CS+) and the tone that was not paired with the shock was called the unreinforced conditioned stimulus (CS-). Rabbits learned to associate the CS+ with the shock within three or four trials, as evidenced by a change in their breathing as soon as they heard the tone that was going to be followed by a shock. Three weeks after the start of the experiment, the shock was switched to association with the other kind of tone. As before, the rationale for this conditioning procedure was that it provided unequivocal behavioral evidence that the rabbits were hearing and discriminating particular stimuli.

     When a rabbit had learned a particular discrimination, each tone was played into its headphones 20 times, randomly sequenced in inter-trial interval and stimulus order, to yield a 40-record experiment. Each time a tone was delivered, three seconds' worth of EEG was recorded before and three seconds after the stimulus. Then an impressive panoply of pattern recognition procedures was applied to the data, off-line.

     The results were similar to those found in the olfactory system, with two major differences. First, no breathing-related EEG activity was found in the auditory system, as there was in the olfactory system. Second, a temporal EEG pattern time-locked to the stimulus was detectable in the auditory system. This was not the case in the olfactory system, where responses were time-locked to inhalations rather than to stimulus onset.

     The results with regard to pattern classification did mimic those seen in the olfactory system. After normalization and suitable spatial and temporal filtering, the spatial patterns of EEG amplitude recorded across the array of electrodes could be classified at greater than 99% accuracy as being associated with either the negatively reinforced tone (CS+) or the unreinforced tone (CS-). As with the olfactory results, these spatial patterns did not correlate strictly with the kind of stimulus, but with the meaning the stimulus had for the animal. For example, the spatial pattern evoked by a 500 Hz tone was constant between week 1 and week 2 of the experiment, but changed in week 3 when this tone changed from being the CS- to being the CS+ (or vice versa). Unfortunately the question of whether the spatial pattern evoked by a 500 Hz CS+ tone was different from the spatial pattern evoked by a 5,000 Hz CS+ tone is not explicitly answered in this paper, although the data were presumably available to conduct that analysis. The answer would be very interesting, because it would show whether the patterns simply reflected an emotional response to a conditioned stimulus, or whether they correlated with a particular stimulus plus the emotional response to it.
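
     To make the form of this classification concrete, the following sketch (in Python, with entirely synthetic data and a simple nearest-centroid rule standing in for the considerably more sophisticated procedures actually used by Freeman's group) shows how normalized 64-channel spatial amplitude patterns can be assigned to the CS+ or CS- condition; all names and numbers in it are illustrative assumptions, not values from the published study.

    # Minimal sketch (not the published pipeline): classify 64-channel spatial
    # amplitude patterns as CS+ or CS- with a nearest-centroid rule and
    # leave-one-out cross-validation. The data are synthetic and the shapes,
    # noise level and classifier choice are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    n_trials, n_channels = 40, 64            # 20 CS+ and 20 CS- trials, 8 x 8 array
    labels = np.array([0] * 20 + [1] * 20)   # 0 = CS-, 1 = CS+

    # Fake "amplitude per channel" patterns: two classes with slightly different
    # spatial means plus noise, standing in for filtered and normalized EEG.
    class_means = rng.normal(size=(2, n_channels))
    patterns = class_means[labels] + 0.5 * rng.normal(size=(n_trials, n_channels))

    # Normalize each trial's spatial pattern (discard overall amplitude, keep shape).
    patterns -= patterns.mean(axis=1, keepdims=True)
    patterns /= patterns.std(axis=1, keepdims=True)

    def classify_loo(x, y):
        """Leave-one-out nearest-centroid classification accuracy."""
        hits = 0
        for i in range(len(y)):
            train = np.ones(len(y), bool)
            train[i] = False
            centroids = np.array([x[train & (y == c)].mean(axis=0) for c in (0, 1)])
            hits += np.argmin(((x[i] - centroids) ** 2).sum(axis=1)) == y[i]
        return hits / len(y)

    print(f"classification accuracy over 40 trials: {classify_loo(patterns, labels):.2f}")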

     Again, as in the olfactory system, the pattern information in these auditory experiments was not localized to any particular set of recording electrodes. The whole array consisted of 64 electrodes, but as few as 16 randomly selected channels were all that were necessary to correlate the electromagnetic patterns with the conditions under which they were recorded. Also, no "edges" were detected for the spatial patterns, in the sense of channels near the margins of the arrays having less information for the classification of the patterns than those in the center. The authors of the study interpret these findings as evidence that the patterns resembled the distributed representation in a hologram, in that the same information seemed to be present on every channel, with the resolution depending on the number of channels available but not on their location. In my opinion a more likely scenario is simply the interpretation given at the end of Chapter 3: that the pattern as a whole is the important thing, and the area occupied by the fluctuations defining the pattern was large enough that the density of channels could be reduced from a total of 64 in the given area to as few as 16 before spatial aliasing occurred. The physical size of the electrode arrays in these experiments was of the order of 6 mm x 6 mm, so the lack of edge effect suggests that the overall size of the relevant pattern was probably larger than 36 mm². This insight that the pattern as a whole is the factor defining a conscious experience fits with the postulated need for patterns correlating with consciousness to have a global quality.
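
     The spatial-aliasing argument can be illustrated with a back-of-envelope calculation (the 0.79 mm spacing and the 8 x 8 geometry come from the experiments described above; the rest is elementary sampling theory, and treating 16 channels as a doubling of the spacing is an approximation):

    # Illustrative sampling-theory check: the shortest spatial wavelength that can
    # be represented without aliasing is twice the spacing between recording sites.
    full_spacing_mm = 0.79                       # 8 x 8 array of 64 electrodes
    reduced_spacing_mm = 2 * full_spacing_mm     # roughly 16 channels over the same area

    print(f"Nyquist wavelength with 64 channels: {2 * full_spacing_mm:.1f} mm")
    print(f"Nyquist wavelength with 16 channels: {2 * reduced_spacing_mm:.1f} mm")

    # If classification still works with ~16 channels, the informative spatial
    # fluctuations must be coarser than about 3 mm, i.e. the pattern is a property
    # of the whole ~6 mm x 6 mm area rather than of any single recording site.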

     Methodologically, these experiments stack up against the suggested requirements in Chapter 1 in much the same way as do the olfactory experiments described in Chapter 3. The experiments are well controlled in terms of artefact rejection, and in terms of keeping general physiological variables (with the important exception of arousal) constant. However they do deliberately conflate both memory processes and the emotional effects of the conditioned stimulus (i.e. expectation of an electric shock) with the subjects' simple conscious experience of auditory sensations. The experimenters adopted the procedures that resulted in this latter feature for good reasons: (a) it was necessary to ask the subjects whether or not they had experienced a certain sensation, and since rabbits can't talk they had to be asked this question by a more circuitous method, and (b) human subjects can talk, but it would have been much more difficult both ethically and practically to implant an array of electrodes on the pial surface of a human subject. The reason it was better to implant electrodes on the pial surface than on the scalp was that electromagnetic patterns are significantly blurred by the time they reach the scalp because of volume conduction through the skull, and this blurring could be expected to impede spatial pattern recognition. However, whatever the excellence of the reasons, the outcome was that significantly more than simple sensations were measured in these experiments.

     Let us, then, examine some instances of spatiotemporal electromagnetic patterns which correlate with auditory sensations in human subjects, who can be asked directly about their experience. As mentioned, spatial electric field patterns are seriously blurred when recorded non-invasively at the scalp and in any case the time after a discrete stimulus at which the corresponding sensation is experienced is likely to be quite important. So to start with, we will switch our attention from the spatial patterns that are correlated with particular experiences to temporal patterns.

4.2  Temporal electromagnetic patterns and auditory experience

     In all sensory modalities, it is possible to record by electroencephalography (EEG) or magnetoencephalography (MEG) a stereotyped series of waves that are time-locked to a sensory stimulus. These are called evoked potentials, because they are evoked by a stimulus. When the stimulus is auditory, they are called auditory evoked potentials (AEPs).

     Since the amplitude of evoked potentials is very small compared with the amplitude of the total EEG or MEG, it is usually necessary to average the responses to a number of successive stimuli in order to pull the waves that are evoked by the stimulus out of the general EEG "noise". This stimulus-locked averaging procedure is a commonly used trick for extracting signal from noise, which basically works because at any given time point after a stimulus, random noise is equally likely to be positive or negative and so over a few hundred trials it averages out to be zero. The signal, on the other hand, is always the same (always positive or always negative) at any given time post-stimulus and therefore it is not affected by averaging. Thus the averaging procedure suppresses noise (or at least all those signals that are random with respect to the stimulus) and allows identification of the EEG waves which are evoked by the stimulus.
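
     A minimal numerical illustration of this averaging procedure (in Python, with an entirely synthetic evoked wave and noise whose sizes are arbitrary assumptions) is the following:

    # Minimal sketch of stimulus-locked averaging (synthetic data; the evoked
    # waveform, noise level and trial count are arbitrary assumptions).
    import numpy as np

    fs = 1000                      # sampling rate, Hz
    t = np.arange(0, 0.3, 1 / fs)  # 300 ms epoch after each stimulus
    n_trials = 300

    # A small "evoked" wave (a damped oscillation peaking in the first ~50 ms)
    # buried in ongoing EEG, modelled here as much larger random noise.
    evoked = 1.0 * np.exp(-t / 0.05) * np.sin(2 * np.pi * 40 * t)
    rng = np.random.default_rng(1)
    epochs = evoked + 20.0 * rng.normal(size=(n_trials, t.size))

    average = epochs.mean(axis=0)

    # Noise shrinks roughly as 1/sqrt(N), so with 300 trials the evoked wave
    # emerges from noise about 17 times larger than it in any single trial.
    print("single-trial noise SD :", epochs[0].std().round(2))
    print("residual after average:", (average - evoked).std().round(2))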

     Averaging produces a series of peaks and troughs which are more or less constant across all normally hearing human adults, with the proviso that in the middle latency section of the response only the negative/positive Na/Pa complex is consistently observed in normal subjects; all the other waves vary in latency, morphology and frequency of occurrence across subjects (Mendel & Goldstein, 1971 [189]; Ozdamar & Kraus, 1983 [214]; Picton, Hillyard, Krausz, & Galambos, 1974 [223]; Streletz, Katz, Hohenberger, & Cracco, 1977 [282]). These peaks and troughs of the evoked potential are given names not because they are necessarily generated by particular discrete neural events (for example, the N1 a.k.a. N100 wave is thought to have at least six different neural generators (Naatanen & Picton, 1987 [205])) but because they are convenient points for measurement. Notwithstanding this, however, it is possible to make hypotheses about the functions of whatever neural activity generates the various waves.

     Data given in the following sections support the proposal that the parts of the AEP that correlate with the simple conscious experience of an auditory stimulus are the waves of the middle latency response, particularly those occurring 30 to 50 ms after the stimulus. Waves earlier than this are proposed to be the correlates of preconscious processing and later waves are proposed to correlate with various processes underlying perceptual functions such as working memory, comparisons with previous stimuli and orienting to novel events. While all of these latter functions may be regarded as vital to the generation of consciousness in the broader sense, it is argued that simple "raw" conscious sensation corresponds to the neural activity underlying the middle latency waves of the evoked potential.

     The following sections providing evidence for this contention will be numbered in the same way as the requirements set out in Chapter 1 for identifying an electromagnetic pattern with a specific subjective experience.

4.3  Evidence that middle latency waves of the auditory evoked potential correlate with simple sensation

     1.1 (i) (a) Auditory evoked potentials and removal of the physical stimulus

     By definition, auditory evoked potentials do not occur when there is no physical auditory stimulus with which to time-lock the averaging process. In a test of inter- and intra-judge reliability of visual scoring of the auditory middle latency response, it became increasingly difficult to detect middle latency responses as the intensity of the stimulus decreased, but silent controls were always identified as response absent (Mendel, Saraca, & Gerber, 1984 [190]). Thus this requirement is certainly fulfilled.

     1.1 (i) (b) Auditory evoked potentials and blocking of the ears

     To my knowledge there is no published evidence specifically saying that the requirement is fulfilled. It seems very likely that it would be.

     1.1 (i) (c) Auditory evoked potentials and sensation threshold

     Both the auditory brainstem response (ABR) and the middle latency response (MLR) have been used clinically for predicting subjective auditory threshold. In other words, when the intensity of the physical stimulus is turned right down, both these waveforms have very similar thresholds to the subjective threshold for hearing an auditory sensation. Of the two waveforms, middle latency responses are the better predictors of sensation threshold (Picton & Durieux-Smith, 1988 [221]). Prediction of pure tone subjective thresholds from tone-pip MLR thresholds is more accurate than that from click ABR thresholds (Xu, Vel, Vinck, & Cauwenberge, 1996 [316]). MLR thresholds are thus reported as being useful for objectively assessing the hearing of compensation claimants for occupational noise-induced hearing loss.

     The steady state form of the middle latency response (i.e. the response elicited by presentation of stimuli at 40 Hz) has also been shown in a number of studies to have a threshold similar to the sensation threshold (Galambos, Makeig, & Talmachoff, 1981 [107]; Stapells, Linden, Suffield, Hamel, & Picton, 1984 [271]; Stapells, Makeig, & Galambos, 1987 [272]).

     Later waves of the AEP such as N1 and P2 do not correlate with sensation threshold. Breathing concentrations of nitrous oxide sufficient to increase the threshold for eliciting N1 and P2 and to decrease their amplitude by an amount equivalent to a 30-dB reduction in stimulus intensity was observed to have no effect whatsoever on subjective auditory pure-tone threshold (Houston, McClelland, & Fenwick, 1988). Thus the subjective act of hearing appears to be unrelated to the amplitude or latency of either N1 or P2.

     1.1 (i) (d) Auditory evoked potentials and habituation to repetitive stimuli

     When the frequency of discrete auditory stimuli such as clicks is turned up, some waves of the auditory evoked potential "habituate" or disappear after a certain repetition frequency is reached, even though the subject still reports accurately hearing the discrete sounds. These waves clearly do not represent conscious sensations per se.

     Waves N1 (around 100 ms) and P1 or Pb (around 50-80 ms) fall into this category. The amplitude of P1 has been reported to decline when the stimulus rate exceeds 1 Hz, while subjective detectability of clicks presented at more than 1 per second clearly does not (Erwin & Buchwald, 1986 [82]). In this study the filter used was 10-300 Hz, a bandpass which has been shown to introduce artefacts in a similar study (Jones & Baxter, 1988 [133]). However a second study (Freedman, Adler, Waldo, Pachtman, & Franks, 1983 [90]), which used a much wider pass band (1-1000 Hz), also found that the amplitude of P1 decreased by 80% in response to a second stimulus 0.5 s after the first (in normal subjects, but not in schizophrenics). It thus seems likely that P1 correlates with some sort of non-conscious processing rather than with raw auditory sensation.

     Wave N1 likewise declines or disappears completely at stimulus rates which are clearly detectable subjectively. Many studies, reviewed by Naatanen and Picton (Naatanen & Picton, 1987 [205]), show that auditory N1 waves are exceptionally sensitive to the rate at which stimuli are presented. N1 habituates quickly in response to stimuli more frequent than about 0.1 Hz, and at this frequency stimuli are quite clearly audible as separate events.

     Stimulus rate does not significantly affect the amplitude of Pa, however (Tucker & Ruth, 1996 [295]). There is no change in Pa amplitude at least up to a stimulus repetition rate of 11 Hz. This is consistent with the hypothesis that Pa is correlated with subjective sound sensation, while N1 is not.

     1.1 (i) (e) Auditory evoked potentials and anesthesia

     It has been suggested (Pockett, 1999 [227]) that the response of the various waves of the auditory evoked potential to the induction of and recovery from general anesthesia shows that simple auditory sensation corresponds to the middle latency waves of the AEP. The argument for this is a multipartite one, which runs as follows.

     There are two main classes of general anesthetics: the non-specific and the receptor-specific anesthetics. Commonly used non-specific agents include the anesthetic gases halothane, enflurane and isoflurane, and the injectable agent propofol. These agents probably act in a non-specific way by changing the fluidity of neuronal cell membranes. Commonly used receptor-specific agents include the analgesic opioids (e.g. fentanyl, sufentanil and alfentanil), the anxiolytic benzodiazepines (e.g. diazepam a.k.a. Valium, flunitrazepam and midazolam), the sleep-inducing barbiturates (e.g. sodium pentothal, methohexital and pentobarbital) and the dissociative anesthetic ketamine. Each of these classes of receptor-specific agents is known to act on a particular class of neurotransmitter receptors in the brain. In general, non-specific and receptor-specific anesthetic agents induce different states of consciousness and have concomitantly different effects on auditory evoked potentials.

     The waves of the auditory evoked potential that occur earlier than 10 ms post-stimulus are called the auditory brainstem response (ABR), because they originate in the brainstem. The ABR is at least grossly unaffected by clinical anesthesia due to a wide variety of general anesthetics (Hall, 1990), so it is unlikely to represent the neural correlates of auditory consciousness.

     The waves of the middle latency response (MLR), however, are significantly affected when consciousness is lost during general anesthesia of the sort that is induced by non-specific anesthetics. Halothane, enflurane, isoflurane and propofol all cause a progressive reduction of the waves Na, Pa and Nb as concentration of the anesthetic increases. This decrease in Na, Pa and Nb is well correlated with a reduction in awareness, as measured by response to a verbal command to raise a finger or squeeze the experimenter's hand (Davies, Mantzaridis, Kenny, & Fisher, 1996 [71]; Newton, et al., 1992; Thornton, Heneghan, James, & Jones, 1984 [288]).

     In contrast, receptor specific anesthetics from the benzodiazepine and opioid classes do not necessarily affect the auditory middle latency response, even in doses which do abolish both responsiveness to verbal command and the formation of explicit memories (Schwender, Faber-Zullig, Klasing, Poppel, & Peter, 1994 [252]; Schwender, Kaiser, Klasing, Peter, & Poppel, 1994 [250]; Schwender, Klasing, Madler, Poppel, & Peter, 1993a [248]; Schwender, et al., 1993 [250]). Unlike the non-specific anesthetics studied however, the receptor specific anesthetics in these experiments did preserve significant motor signs of wakefulness, such as coughing, blinking and purposeful movements of the limbs. Also, it was clearly shown in these experiments that the formation of implicit memories for auditory material was correlated with preservation of auditory mid-latency responses. There is a case to be made that the only things that are abolished by the combination of opioids and benzodiazepines that is usually administered when light anesthesia is desirable clinically are pain, explicit memory formation and any sense of things mattering, such as would cause one to bother responding to a command to squeeze someone's hand (which is the general test used to assess awareness). This accords with the generally expressed opinion of working anesthetists that the state induced by opioid/benzodiazepine treatment is not "anesthesia" per se, although it does allow surgical interventions to be performed without causing the patient distress. Thus it is highly likely that light opioid/benzodiazepine "anesthesia" allows a level of awareness of auditory sensations which is not present in the deeper anesthesia induced by non-specific anesthetics.

     A third anesthetic state, different from either of the two described above, is induced by the so-called "dissociative" receptor-specific anesthetic ketamine. Ketamine is known to cause unpleasant dreams or hallucinations of a sort that are not reported with the other varieties of anesthetic (White, Way, & Trevor, 1982 [307]) and there is some justification for regarding the ketamine-induced state as more akin to a dream-ridden sleep from which one cannot awaken, than to anesthesia. As might be predicted if this were the case (see the section on auditory evoked potentials and sleep) ketamine has little obvious effect on the transient form of the auditory middle latency response (Schwender, et al., 1994 [251]; Schwender, Klasing, Madler, Poppel, & Peter, 1993b [249]) and is reported to actually increase the amplitude of the auditory steady state response evoked by presentation of auditory stimuli at 40Hz (Plourde, Baribeau, & Bonhomme, 1997 [225]). It may well be that externally driven auditory experience still occurs during ketamine intoxication (as it does during REM sleep), but that the auditory sensations are not processed in quite the same way as they are in the normal waking state, being instead incorporated into ongoing dreams.

     So all of the above evidence fits the proposal that simple sensory awareness covaries with the amplitude of middle latency auditory evoked potentials. Less work has been done on the effects of anesthetics on the later waves in the auditory evoked potential. In general, later components of the AEP are more sensitive to anesthetics than the middle latency components (deBeer, et al., 1996 [73]), but it appears that their disappearance on induction of anesthesia and reappearance on recovery does not correlate at all well with the disappearance and reappearance of auditory sensation. One study (Houston, McClelland, & Fenwick, 1988) found that concentrations of nitrous oxide which did affect the threshold for the production of N1 and P2 waves of the auditory evoked potential did not affect the subjective threshold for detection of a soft sound. Another study (Plourde & Picton, 1991 [224]) found that when patients were asked to press a button on hearing a particular sound during induction of anesthesia, correctly detected sounds (hits) did evoke N1 and P3 waves, whereas undetected sounds (misses) did not. This suggests that N1 and P3 do correlate with auditory sensation. However during emergence from anesthesia in this study (a) there was generally a small N1 and occasionally also a small P3 for misses and (b) neither N1 nor P3 were necessarily present when a correct hit was scored. Thus these later waves of the auditory evoked potential were sometimes present when there was no awareness of the sound and sometimes absent when there was awareness of the sound. This evidence suggests that N1 and P3 waves represent some feature of auditory processing other than raw sensory awareness of the sound.

     1.1 (i) (f) Auditory evoked potentials and sleep

     The story of what various investigators have reported as happening to middle latency auditory evoked potentials during sleep boils down to a cautionary tale about the artefacts that can be introduced by over-zealous band-pass filtering of the EEG. Early studies (e.g. that of Erwin and Buchwald, 1986 [82]) used filters with a rather restricted pass band (10-300 Hz) and reported no change in the middle latency waves Pa (30-40 ms) or Nb (45-55 ms) during any stage of sleep. Only the Pb component occurring between 55 and 80 ms was reported to disappear during slow wave sleep. However a slightly later study (Jones & Baxter, 1988 [133]) using a less restrictive pass band (0.3-3000 Hz) reported significant increases in the latency of Pa and sometimes complete disappearance of Nb during stages 2 and 3/4 of sleep. This latter study shows a sobering comparison between data recorded with a wide-bandpass filter and the same data after digital filtering with the more restrictive band-pass used in the earlier study; it demonstrates how an artefact introduced by the tighter filter gives the erroneous impression that little change occurs in the amplitudes of Pa and Nb during slow wave sleep. A second study using a wide filter (Deiber, Ibanez, Bastuji, Fischer, & Mauguiere, 1989 [74]) again shows changes in the Pa, Nb and Pb components of the auditory evoked potential in sleep. During stage 2 sleep, both the latency and amplitude of Pa increased and neither Nb nor Pb could be identified at all. From stage 2 to stage 4 there was a slight further increase in Na and Pa latencies, with a more rounded Pa waveform. In deep, stage 4 sleep there was a dramatic reduction in the amplitude of both Na and Pa.
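
     The filtering caveat can be illustrated with a small simulation (synthetic waveform and arbitrary filter order; this is not a reconstruction of either study's data): digitally re-filtering a wide-band response through a 10-300 Hz pass band removes slow components and can reshape the region around Pa.

    # Illustrative sketch: how a restrictive 10-300 Hz band-pass can reshape a
    # middle-latency-like response compared with the wide-band recording.
    import numpy as np
    from scipy.signal import butter, filtfilt

    fs = 2000
    t = np.arange(0, 0.1, 1 / fs)          # 0-100 ms post-stimulus

    # Toy "wide-band" response: a slow shift plus a Pa-like deflection near 30 ms.
    slow_shift = 2.0 * (1 - np.exp(-t / 0.03))
    pa_wave = 1.0 * np.exp(-((t - 0.030) / 0.008) ** 2)
    wideband = slow_shift + pa_wave

    # Digitally re-filter with the restrictive band used in the early studies.
    b, a = butter(2, [10, 300], btype="bandpass", fs=fs)
    narrowband = filtfilt(b, a, wideband)

    # The 10 Hz high-pass removes the slow component and redistributes energy
    # around the Pa peak, so amplitude and latency changes can be masked.
    pa_window = (t > 0.025) & (t < 0.035)
    print("Pa-region amplitude, wide band :", wideband[pa_window].max().round(2))
    print("Pa-region amplitude, 10-300 Hz :", narrowband[pa_window].max().round(2))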

     Interestingly, in both of the wide-bandpass studies there was essentially no difference between the middle latency auditory evoked potentials recorded in REM sleep and those recorded during wakefulness. On the hypothesis that middle latency waves of the auditory evoked potential represent conscious awareness of the sound stimulus, this finding suggests that during REM sleep the subject is in some sense conscious of external sound stimuli, but that these conscious experiences may be incorporated into ongoing dreams rather than being interpreted in the normal waking context.

     1.1 (i) (g) Auditory evoked potentials and selective attention

     As discussed in Chapter 1, although the concept of attention is theoretically distinct from the concept of consciousness, in practical terms it is very difficult to separate the two. Some degree of attention to a stimulus is probably required for that stimulus to be consciously perceived. Thus if paying attention to a stimulus enhances a particular brain activity that is time-locked to the stimulus, this supports the hypothesis that the brain activity represents consciousness of the stimulus, but such support cannot be regarded as unequivocal or conclusive, because the brain activity could also be a correlate of (a) the act of paying attention, (b) a pre-conscious processing step, or (c) processing at a higher level than simple sensory awareness (e.g. detection of difference or novelty, or activation of associations with some previous experience). Nevertheless, it is instructive to examine the brain-generated electromagnetic patterns that are affected by selective attention.

     The effects of selective listening on scalp-recorded auditory evoked potentials have been studied extensively. Pioneering work by Hillyard's group (Hillyard, Hink, Schwent, & Picton, 1973 [125]) reported that the earliest segment of the auditory evoked potential that was increased when it was evoked by attended rather than unattended tones was the N1 wave. This difference between attended and unattended waveforms is variously known as the "N1 effect" (Hillyard, Hink, Schwent, & Picton, 1973 [125]), the "negative difference wave" or "Nd" (Hansen & Hillyard, 1980 [119]) and the "processing negativity" (Naatanen, Gaillard, & Mantysalo, 1978 [203]). It consists of at least two overlapping phases, the first of which has a scalp distribution similar to N1 itself and the second of which has a more frontal distribution and may last for hundreds of milliseconds (Giard, Perrin, Pernier, & Peronnet, 1988 [113]; Hansen & Hillyard, 1980 [119]). The "processing negativity", which has a partly frontal distribution, has been proposed as reflecting the degree of matching of a stimulus to an attentionally controlled template (Alho, Paavilainen, Reinikainen, Sams, & Naatanen, 1986 [8]; Alho, Sams, Paavilainen, & Naatanen, 1986 [9]; Naatanen, 1982 [204]). In terms of the model of attention sketched in Chapter 1, at least the frontal portion of this wave probably originates with the activity of neural generators of the process(es) of attention, rather than representing the effects of attention on conscious experience.

     However, the conclusion that waves earlier than N1 were not affected by attention was revised when later work from the Hillyard group, using a task which required more highly focussed selective attention, showed that the middle latency waves between 20 and 50 ms after the stimulus were also altered by selective attention (Woldorff & Hillyard, 1991 [310]). These effects were paralleled by variations in target discrimination performance. MEG studies have also shown that neuromagnetic brain responses in the ranges 20-80 ms and 80-130 ms were increased in amplitude by selective attention (Woldorff, et al., 1993 [311]). The sources of these responses were reported to be in the auditory cortex, above the ears. This M20-50 effect was too small to be seen in individual subjects, but a grand-average approach allowed its generator to be localized 2-14 mm medial to the grand-average M100s and the individual M100s, on the supratemporal plane in, or slightly lateral to, Heschl's gyrus (i.e. in the primary auditory cortex). This location suggests that what we are seeing here may be the effect of an attentional process on the generator of sensation. Thus whatever attentional process is operating in this situation must itself be activated before the neural information reaches the primary sensory cortex. The scheme in Chapter 1 would suggest that such an attentional process may be mediated by the thalamic spotlight, which probably has the capacity to pre-regulate the responsiveness (i.e. the resting membrane potential) of whatever well-defined region of the primary auditory cortex responds to auditory stimuli of the frequency of interest. Thalamic activity itself is too deep in the brain to evoke a noticeable electric or magnetic field potential on the scalp, so all we see in EEGs or MEGs is its cortical effect.

     In contrast to the situation just described, the frontal attention system is cortical and thus does evoke noticeable electric and magnetic evoked potentials. The mismatch negativity or MMN is a wave in the auditory evoked potential which peaks at 150-250 ms after a deviant sound in a repetitive sequence. Both MEG and EEG recordings show that this wave originates from a primary source in the auditory cortex and a secondary source in frontal cortex (Alho, 1995 [10]). Such a dual origin of the MMN in auditory and frontal cortices suggests that a template active in the prefrontal working memory area may be involved in deciding whether or not any given stimulus is different from those preceding it, and thus in generating the main MMN component in the auditory cortex. Early studies using low load selective attention tasks all showed that the MMN was not affected by selective attention. However again, a more recent study using a dichotic listening task which requires highly focussed attention to another sound source clearly demonstrates attentional modulation of the MMN generated in the auditory cortex (Woldorff, Hillyard, Gallen, Hampson, & Bloom, 1998 [312]).

     In summary, a supportable hypothesis explaining the data above is that they reflect the actions of (at least) two attention systems, which have separate effects on the components of the auditory evoked potential. The proposed scenario is that the thalamic attention system primes certain areas of the primary auditory cortex (those which are activated by stimuli with the desired frequency characteristics) to be especially sensitive, thus generating a larger evoked response in the 20-80 ms post-stimulus range to attended stimuli. This represents an enhanced "raw" sensory experience of the stimuli to which attention is being paid. Then a booster system in the form of a frontal attention module cuts in at around 100 ms. This generates enhanced responses in the later auditory evoked potential range to stimuli which fit a certain expected template. The enhancement allows evoked responses to attended stimuli to be further processed more effectively than responses to non-attended stimuli, so that they can better be associated with past events or used as a trigger for future actions.

     1.1 (ii) (a) Auditory evoked potentials and deafness

     The results concerning auditory evoked potentials and partial deafness are essentially the same as those previously described for sensation threshold. Both ABRs and MLRs have been used as predictors of sensation threshold, but MLRs give the more accurate predictions. Later waves are not good predictors of sensation threshold.

     Total deafness is associated with a total lack of auditory evoked potentials. No results have been reported on the measurement of auditory evoked potentials for any "deaf-hearing" syndrome equivalent to "blindsight" (see Chapter 5).

     1.2 Auditory evoked potentials covarying with subjective experience rather than properties of the physical stimulus

     (i) Auditory evoked potentials and perceptual illusions

     (a) Pitch

     The human primary auditory cortex consists of Brodmann areas 41 and 42, which are located within and around the lateral or Sylvian fissure, on the supratemporal plane of the transverse (Heschl) gyrus, just above the ear. In humans, as in all other species studied down to fish, it has been well demonstrated that the primary auditory cortex is tonotopically organized. This means that the spatial location of the neurons which best respond to particular sounds varies in a regular fashion according to the frequency of the sound.

     Middle latency responses show such a tonotopic organisation in the primary auditory cortex, with the source of the Pam/Pa wave being progressively more superficial with higher stimulus frequencies (Pantev, et al., 1995 [219]). The site of generation of the brain electromagnetic event that occurs 100 ms after an auditory tone also varies in the auditory cortex as a function of the frequency of the tone (Pantev, et al., 1988 [216]; Romani, Williamson, & Kaufman, 1982 [238]), but the N1 tonotopic map is a mirror image of the middle latency map. The depth of the source of the N1m or N1 wave from the surface of the brain increases linearly with the logarithm of stimulus frequency, with higher frequency stimuli being associated with deeper N1m sources. This N1/N1m map lies several mm posterior to the Pa/Pam mirror map and it is controversial whether the N1m/N1 wave is due to activation of secondary auditory areas (Pantev, et al., 1995 [219]) or to a region of the primary auditory cortex (Yamamoto, Uemura, & Llinas, 1992 [318]). The post-stimulus latency of the maximal N1m also varies with stimulus frequency and intensity (Stufflebeam, Poeppel, Rowley, & Roberts, 1998 [283]).

     N1 is associated with the detection of differences between the current stimulus and immediately preceding ones (Naatanen & Picton, 1987 [205]). The mismatch negativity component of N1 arises from a neural change-detection system that compares the current stimulus to some kind of representation of whatever stimuli occurred within the previous 160 to 170 ms (Yabe, et al., 1998 [317]). So from the point of view of our argument, the logarithmic nature of the transform from tone frequencies to the spatial location of N1 maxima fits perfectly with the logarithmic nature of Fechner's Law, the psychophysical law concerning the ability of a subject to detect differences in pitch (see Appendix A).
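
     In schematic form (the symbols and constants here are purely illustrative, not fitted values from the cited studies), the two logarithmic relationships being compared are

    d(f) \;\approx\; d_0 + k\,\ln\!\left(\frac{f}{f_0}\right)
    \qquad\text{and}\qquad
    S \;=\; c\,\ln\!\left(\frac{I}{I_0}\right),

     where d(f) is the depth of the N1m source evoked by a tone of frequency f, and S is the magnitude of sensation that Fechner's Law assigns to a stimulus of physical magnitude I. In both cases equal ratios on the physical side map onto equal increments on the measured or experienced side.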

     The idea that this depth-related electromagnetic pattern is correlated with pitch (which is purely a subjective phenomenon) rather than stimulus frequency (which is an external physical property of the stimulus) is further reinforced by the results of another set of experiments, on a phenomenon known as the missing fundamental frequency (Pantev, Hoke, Lutkenhoner, & Lehnertz, 1991 [217]). If the auditory stimulus is a particular kind of complex tone in which the so-called fundamental frequency is missing, the subjective perception of a listener is that the pitch of the sound is that of the fundamental frequency (which is not actually present in the stimulus at all). In such a case, it turns out that perceived pitch is what determines the location of the MEG-detected dipole source, and not the actual spectral contents of the stimulus.

     (b) Sound localization

     To localize the position of a sound source, animals use the cues of interaural differences in the intensity and time of arrival of sound waves in the air. Most of the detailed computation to allow this localization is done at sub-cortical levels, but the conscious perception of sound locality has been related to a particular auditory evoked potential (McEvoy, Picton, Champagne, Kellett, & Kelly, 1990 [186]). In order to elicit this potential, a continuous noise was generated by running a sequence of random numbers through a digital to analogue converter and playing the result through earphones. When two channels were programmed to generate identical noise stimuli but one channel was delayed relative to the other, the subject localized the sound to the side receiving the leading stimulus. Changes in the relative timing of the two stimuli caused the noise to shift in its lateralization. Since no difference could be detected in the ongoing monaural noise, any potentials evoked by these shifts were postulated to be specifically related to the binaural interaction. In this situation the scalp response recorded from the midline at the top of the head contained a positive-negative-positive complex with peak latencies of 75, 136 and 220 ms. This response was reported as being similar to that evoked by the onset of a monaural stimulus but smaller and significantly later.
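
     A sketch of how such a dichotic stimulus can be constructed (in Python; the sampling rate and the 500 microsecond interaural delay are illustrative values, not those used by McEvoy et al.) is given below. Identical noise is sent to both ears, with one channel delayed so that the noise is lateralized toward the leading ear; changing the delay shifts the perceived location.

    # Sketch of a dichotic noise stimulus with an interaural time difference (ITD).
    import numpy as np

    fs = 44100                                   # audio sampling rate, Hz
    rng = np.random.default_rng(2)
    noise = rng.normal(size=2 * fs)              # 2 s of continuous noise

    itd_us = 500                                 # interaural time difference, microseconds
    delay = int(round(fs * itd_us / 1e6))        # ~22 samples at 44.1 kHz

    left = noise                                 # leading channel
    right = np.concatenate([np.zeros(delay), noise[:-delay]])   # lagging channel

    # Left leads -> the noise is heard toward the left ear; what would be sent
    # to the earphones is the two-channel array below.
    stereo = np.stack([left, right], axis=1)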

     In the early experiments referred to above no earlier evoked potential components could be elicited that were related to perceived lateralization shifts. However later experiments along similar lines but using MEG again implicated the neural events at around 30 ms after stimulus onset. P30m showed a tendency, similar to but smaller than that of N100m, to be larger to stimuli with contralaterally-leading inter-aural time differences (McEvoy, Makela, Hamalainen, & Hari, 1994 [188]). So far it has not been possible to resolve any spatial patterns relating to conscious localization of sounds. Attempts to do so resulted in the statement that neurons in the human auditory cortex that were sensitive to interaural time differences were "not organized into a large-scale, orderly representation, which could be resolved by MEG" (McEvoy, Hari, Imada, & Sams, 1993 [187]).

     (ii) Auditory evoked potentials and loudness

     The amplitude of the MLR wave Pa increases as the stimulus intensity is turned up from 40 dB nHL to 70 dB nHL (Tucker & Ruth, 1996 [295]). This is consistent with the hypothesis that Pa is correlated with subjective sound sensation.

     No clear correlation has been reported between the electrically measured N1 and perceived loudness: some subjects have an N1 that increases in amplitude with the stimulus intensity, whereas others have an N1 that saturates or becomes smaller at high stimulus intensity (Adler & Adler, 1989 [5]; Picton, Goodman, & Bryce, 1970 [222]). However, the N1-amplitude/stimulus-intensity slope is related to the amount of frontal midline theta activity in the general EEG, which in turn correlates with the degree of attention (Bruneau, Roux, Guerin, Garreau, & Lelord, 1993 [41]), and this does hint at a correlation between N1 amplitude and perceived loudness. The depth from the surface of the brain of the MEG-measured source of N1m decreases with increasing stimulus intensity (the opposite of the case with increasing frequency) (Pantev, Hoke, Lutkenhoner, & Lehnertz, 1991 [217]), so there is some evidence for a pattern of correlation between the spatial features of N1 and loudness.

     (iii) Auditory evoked potentials and perceptual changes due to learning

     Tonotopic patterns in the primary auditory cortex do change with learning (Recanzone, Schreiner, & Merzenich, 1993 [233]). When owl monkeys were taught a frequency discrimination task, the size of the area in the primary auditory cortex that responded to the restricted frequency range in which the monkeys had learned to discriminate small differences was increased.

4.4  Relationship of global 40 Hz waves to auditory evoked potentials

     It has long been known that either steady state (Galambos, 1982 [108]; Galambos, Makeig, & Talmachoff, 1981 [107]) or transient (Pantev, et al., 1991 [218]) auditory stimulation elicits an oscillatory response at about 40 Hz. For some time there was an ongoing controversy about whether the 40 Hz steady state response that is generated by repetitive delivery of click stimuli at a frequency of 40 Hz was simply an algebraic addition of overlapping transient responses, or whether it represented the driving of an endogenous 40 Hz rhythm. Recently the evidence has been coming down on the side of the idea that there are endogenous 40 Hz rhythms in the brain, which can be reset or driven by sensory stimuli.

     First, a new method of analysing MEG recordings called magnetic field tomography (Ribary, et al., 1991 [236]) showed that a 40 Hz oscillation is probably continuously generated by the brain, sweeping along the cortex from front to back once every 12.5 ms. The authors suggest that this oscillation can best be observed and averaged when it is reset and enhanced by sensory input. Secondly, convolution of the auditory stimulus-evoked MEG response by a Gabor wavelet (which gives a continuous measure of frequency-specific power over time) showed that there exist both a 40 Hz response which is not phase-locked to the discrete auditory stimulus provided and also one which is phase-locked to the stimulus (Tiitinen, Sinkkonen, May, & Naatanen, 1994 [292]). In these experiments the amplitude of the non-phase-locked 40-Hz waves was twice as high over the vision-related occipital cortex as it was over the auditory cortex (where the phase-locked activity was highest). This again suggests that there exists an endogenous 40-Hz oscillation, which may be "captured" locally by a particular sensory stimulus and become phase-locked to that stimulus.
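
     The wavelet analysis referred to here can be sketched as follows (synthetic signal and arbitrary wavelet parameters; this is an illustration of the general technique, not of the cited study's processing). Convolving the signal with a complex 40 Hz Gabor wavelet yields a continuous estimate of 40 Hz amplitude over time; the temporal width of the wavelet is also the source of the smearing discussed below.

    # Sketch of Gabor-wavelet estimation of 40 Hz power over time.
    import numpy as np

    fs = 1000
    t = np.arange(-0.2, 0.5, 1 / fs)             # 200 ms pre- to 500 ms post-stimulus

    # Toy signal: a burst of 40 Hz activity starting at stimulus onset (t = 0)
    # embedded in broadband noise.
    rng = np.random.default_rng(3)
    signal = np.where(t > 0, np.sin(2 * np.pi * 40 * t) * np.exp(-t / 0.15), 0.0)
    signal = signal + 0.5 * rng.normal(size=t.size)

    # Complex Gabor wavelet centred on 40 Hz, about 4 cycles wide.
    f0, n_cycles = 40.0, 4.0
    sigma_t = n_cycles / (2 * np.pi * f0)
    tw = np.arange(-4 * sigma_t, 4 * sigma_t, 1 / fs)
    wavelet = np.exp(2j * np.pi * f0 * tw) * np.exp(-tw ** 2 / (2 * sigma_t ** 2))
    wavelet /= np.abs(wavelet).sum()

    # Magnitude of the convolution = instantaneous 40 Hz amplitude envelope;
    # squaring gives a continuous measure of 40 Hz power over time.
    power_40hz = np.abs(np.convolve(signal, wavelet, mode="same")) ** 2

    print("mean 40 Hz power before stimulus:", power_40hz[t < 0].mean().round(4))
    print("mean 40 Hz power after stimulus :", power_40hz[t > 0].mean().round(4))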

     The 40-Hz auditory evoked EEG response to transient stimulation is significantly larger when subjects pay attention to the stimulus than when they ignore it (Tiitinen, et al., 1993 [291]), which fits with the idea that it is the correlate of conscious awareness of the sound. However this response is insensitive to changes in qualitative stimulus features such as frequency (Tiitinen, Sinkkonen, May, & Naatanen, 1994 [292]). The latter observation can be explained in a number of ways. First, the 40-Hz response per se may have nothing to do with the contents of consciousness. I do not favor this explanation. Secondly, the methods so far used to measure and/or process the 40-Hz oscillation may not be sensitive enough to pick up stimulus-related frequency or amplitude modulations that do exist. This seems quite likely, considering that the temporal spreading introduced by the measurement method is enough to produce evoked waves that apparently begin before the onset of the stimulus (Tiitinen, et al., 1993 [291]). Finally, pitch information may be represented by a spatial feature of the spatiotemporal electromagnetic pattern, rather than by a temporal feature. Evidence supporting the last possibility is given above, in the section on spatial patterns correlating with pitch perception.

4.5  Summary

     It is clear that the 4-dimensional electromagnetic patterns we originally postulated to exist whenever auditory subjective experience exists have not yet been described in any detail. However there are significant indications that such patterns do exist, and we already have a great many useful pointers as to where and when to look for them. From the available evidence, we might predict that the spatiotemporal electromagnetic pattern which covaries with the raw sensation of sound is likely to be found covering the extent of the primary auditory cortex, at a time approximately 30 to 50 ms after a simple auditory stimulus. The pattern correlating with the subjective recognition that this sound is different from the sounds preceding it should be found covering the extent of the primary and possibly the secondary auditory cortices, from about 40 ms to about 250 ms after the stimulus. The pattern covarying with the subjective experience of any associations the sound evokes and consequent decisions as to what to do about the situation that gave rise to the sound is likely to be found about 300 ms after the stimulus, extending over a much wider extent of the cortex, taking in various association areas.

     Thus, inasmuch as it is possible to isolate one auditory percept from the continuing stream of auditory consciousness, any particular percept must be considered as a temporally smeared event which probably continues to develop for up to about half a second after the stimulus. The spatial patterns in the electromagnetic field associated with the percept almost certainly change in time, spreading outwards from the primary auditory cortex until they eventually occupy virtually the entire cerebral cortex.

Bibliography

[5]
Adler, G., & Adler, J. (1989). Influence of stimulus intensity on AEP components in the 80- to 200-millisecond range. Audiology, 28, 316-324.
[8]
Alho, K., Paavilainen, P., Reinikainen, K., Sams, M., & Naatanen, R. (1986). Separability of different negative components of the event-related potential associated with auditory stimulus processing. Psychophysiology, 23(6), 613-23.
[9]
Alho, K., Sams, M., Paavilainen, P., & Naatanen, R. (1986). Small pitch separation and the selective-attention effect on the ERP. Psychophysiology, 23(2), 189-97.
[10]
Alho, K. (1995). Cerebral generators of mismatch negativity (MMN) and its magnetic counterpart (MMNm) elicited by sound changes. Ear and Hearing, 16, 38-51.
[23]
Barrie, J.M., Freeman, W.J., & Lenhart, M.D. (1996). Spatiotemporal analysis of prepyriform, visual, auditory, and somesthetic surface EEGs in trained rabbits. Journal of Neurophysiology, 76(1), 520-39.
[41]
Bruneau, N., Roux, S., Guerin, P., Garreau, B., & Lelord, G. (1993). Auditory stimulus intensity responses and frontal midline theta rhythm. Electroencephalography and Clinical Neurophysiology, 86, 213-216.
[71]
Davies, F.W., Mantzaridis, H., Kenny, G.N.C., & Fisher, A.C. (1996). Middle latency auditory evoked potentials during repeated transitions from consciousness to unconsciousness. Anaesthesia, 51, 107-113.
[73]
deBeer, N.A., Hooff, J.C.v., Brunia, C.H., Cluitmans, P.J., Korsten, H.H., & Beneken, J.E. (1996). Midlatency auditory evoked potentials as indicators of perceptual processing during general anaesthesia. British Journal of Anaesthesia, 77, 617-624.
[74]
Deiber, M.P., Ibanez, V., Bastuji, H., Fischer, C., & Mauguiere, F. (1989). Changes of middle latency auditory evoked potentials during natural sleep in humans. Neurology, 39, 806-813.
[82]
Erwin, R.J., & Buchwald, J.S. (1986). Midlatency auditory evoked responses: differential effects of sleep in the cat. Electroencephalography and Clinical Neurophysiology, 65, 373-382.
[90]
Freedman, R., Adler, L.E., Waldo, M.C., Pachtman, E., & Franks, R.D. (1983). Neurophysiological evidence for a defect in inhibitory pathways in schizophrenia: a comparison of medicated and drug-free patients. Biological Psychiatry, 18, 537-551.
[107]
Galambos, R., Makeig, S., & Talmachoff, P.J. (1981). A 40-Hz auditory potential recorded from the human scalp. Proceedings of the National Academy of Sciences of the United States of America, 78(4), 2643-7.
[108]
Galambos, R. (1982). Tactile and auditory stimuli repeated at high rates (30-50 per sec) produce similar event related potentials. Annals of the New York Academy of Sciences, 388, 722-8.
[113]
Giard, M.H., Perrin, F., Pernier, J., & Peronnet, F. (1988). Several attention-related wave forms in auditory areas: a topographic study. Electroencephalography and Clinical Neurophysiology, 69, 371-384.
[119]
Hansen, J.C., & Hillyard, S.A. (1980). Endogenous brain potentials associated with selective auditory attention. Electroencephalography and Clinical Neurophysiology, 49(3-4), 277-90.
[125]
Hillyard, S.A., Hink, R.F., Schwent, V.L., & Picton, T.W. (1973). Electrical signs of selective attention in the human brain. Science, 182(108), 177-80.
[133]
Jones, L.A., & Baxter, R.J. (1988). Changes in the auditory middle latency responses during all-night sleep recording. British Journal of Audiology, 22, 279-285.
[186]
McEvoy, L.K., Picton, T.W., Champagne, S.C., Kellett, A.J., & Kelly, J.B. (1990). Human evoked potentials to shifts in the lateralisation of a noise. Audiology, 29, 163-180.
[187]
McEvoy, L., Hari, R., Imada, T., & Sams, M. (1993). Human auditory cortical mechanisms of sound lateralization: II. Interaural time differences at sound onset. Hearing Research, 67(1-2), 98-109.
[188]
McEvoy, L., Makela, J.P., Hamalainen, M., & Hari, R. (1994). Effect of interaural time differences on middle-latency and late auditory evoked magnetic fields. Hearing Research, 78(2), 249-57.
[189]
Mendel, M.I., & Goldstein, R. (1971). Early components of the averaged electroencephalographic response to constant clicks during all-night sleep. Journal of Speech and Hearing Research, 14, 829-840.
[190]
Mendel, M.I., Saraca, P.A., & Gerber, S.E. (1984). Visual scoring of the middle latency response. Ear and Hearing, 5, 160-165.
[203]
Naatanen, R., Gaillard, A.W., & Mantysalo, S. (1978). Early selective-attention effect on evoked potential reinterpreted. Acta Psychologica, 42(4), 313-29.
[204]
Naatanen, R. (1982). Processing negativity: an evoked-potential reflection of selective attention. Psychological Bulletin, 92(3), 605-40.
[205]
Naatanen, R., & Picton, T. (1987). The N1 wave of the human electric and magnetic response to sound: a review and an analysis of the component structure. Psychophysiology, 24(4), 375-425.
[214]
Ozdamar, O., & Kraus, N. (1983). Auditory middle latency responses in humans. Audiology, 22, 34-49.
[216]
Pantev, C., Hoke, M., Lehnertz, K., Lutkenhoner, B., Anogianakis, G., & Wittkowski, W. (1988). Tonotopic organization of the human auditory cortex revealed by transient auditory evoked magnetic fields. Electroencephalography and Clinical Neurophysiology, 69, 160-170.
[217]
Pantev, C., Hoke, M., Lutkenhoner, B., & Lehnertz, K. (1991). Neuromagnetic evidence of functional organization of the auditory cortex in humans. Acta Oto-Laryngologica - Supplement, 491, 106-115.
[218]
Pantev, C., Makeig, S., Hoke, M., Galambos, R., Hampson, S., & Gallen, C. (1991). Human auditory evoked gamma-band magnetic fields. Proceedings of the National Academy of Sciences of the United States of America, 88(20), 8996-9000.
[219]
Pantev, C., Bertrand, O., Eulitz, C., Verkindt, C., Hampson, S., Schuierer, G., & Elbert, T. (1995). Specific tonotopic organizations of different areas of the human auditory cortex revealed by simultaneous magnetic and electric recordings. Electroencephalography and Clinical Neurophysiology, 94, 26-40.
[221]
Picton, T.W., & Durieux-Smith, A. (1988). Auditory evoked potentials in the assessment of hearing. Neurologic Clinics, 6, 791-808.
[222]
Picton, T.W., Goodman, W.S., & Bryce, D.P. (1970). Amplitude of evoked responses to tones of high intensity. Acta Otolaryngologica (Stockholm), 70, 77-82.
[223]
Picton, T.W., Hillyard, S.A., Krausz, H.I., & Galambos, R. (1974). Human auditory evoked potentials. I. Evaluation of components. Electroencephalography and Clinical Neurophysiology, 36(2), 179-90.
[224]
Plourde, G., & Picton, T.W. (1991). Long-latency auditory evoked potentials during general anesthesia: N1 and P3 components. Anesthesia and Analgesia, 72, 342-350.
[225]
Plourde, G., Baribeau, J., & Bonhomme, V. (1997). Ketamine increases the amplitude of the 40-Hz auditory steady state response in humans. British Journal of Anaesthesia, 78, 524-529.
[227]
Pockett, S. (1999). Anesthesia and the electrophysiology of auditory consciousness. Consciousness and Cognition, 8, 45-61.
[233]
Recanzone, G.H., Schreiner, C.E., & Merzenich, M.M. (1993). Plasticity in the frequency representation of primary auditory cortex following discrimination training in adult owl monkeys. The Journal of Neuroscience, 13, 87-103.
[236]
Ribary, U., Ioannides, A.A., Singh, K.D., Hasson, R., Bolton, J.P., Lado, F., Mogilner, A., & Llinas, R. (1991). Magnetic field tomography of coherent thalamocortical 40-Hz oscillations in humans. Proceedings of the National Academy of Sciences of the United States of America, 88(24), 11037-41.
[238]
Romani, G.L., Williamson, S.L., & Kaufman, L. (1982). Tonotopic organization of the human auditory cortex. Science, 216, 1339-1340.
[248]
Schwender, D., Klasing, S., Madler, C., Poppel, E., & Peter, K. (1993a). Effects of benzodiazepines on mid-latency auditory evoked potentials. Canadian Journal of Anaesthesia, 40, 1148-1154.
[249]
Schwender, D., Klasing, S., Madler, C., Poppel, E., & Peter, K. (1993b). Mid-latency auditory evoked potentials during ketamine anaesthesia in humans. British Journal of Anaesthesia, 71, 629-632.
[250]
Schwender, D., Rimkus, T., Haessler, R., Klasing, S., Poppel, E., & Peter, K. (1993). Effects of increasing doses of alfentanil, fentanyl and morphine on mid-latency auditory evoked potentials. British Journal of Anaesthesia, 71, 622-628.
[251]
Schwender, D., Faber-Zullig, E., Fett, W., Klasing, S., Finsterer, U., Poppel, E., & Peter, K. (1994). Mid-latency auditory evoked potentials in humans during anesthesia with S(+) ketamine - a double-blind, randomized comparison with racemic ketamine. Anesthesia and Analgesia, 78, 267-274.
[252]
Schwender, D., Faber-Zullig, E., Klasing, S., Poppel, E., & Peter, K. (1994). Motor signs of wakefulness during general anaesthesia with propofol, isoflurane and flunitrazepam/fentanyl and midlatency auditory evoked potentials. Anaesthesia, 49, 476-484.
[271]
Stapells, D.R., Linden, D., Suffield, J.B., Hamel, G., & Picton, T.W. (1984). Human auditory steady state potentials. Ear and Hearing, 5, 105-113.
[272]
Stapells, D.R., Makeig, S., & Galambos, R. (1987). Auditory steady-state responses: threshold prediction using phase coherence. Electroencephalography and Clinical Neurophysiology, 67(3), 260-70.
[282]
Streletz, L.J., Katz, L., Hohenberger, M., & Cracco, R.Q. (1977). Scalp recorded auditory evoked potentials and sonomotor responses: an evaluation of components and recording techniques. Electroencephalography and Clinical Neurophysiology, 43, 196-206.
[283]
Stufflebeam, S.M., Poeppel, D., Rowley, H.A., & Roberts, T.P. (1998). Peri-threshold encoding of stimulus frequency and intensity in the M100 latency. Neuroreport, 9, 91-94.
[288]
Thornton, C., Heneghan, C.P., James, C.P.H., & Jones, J.G. (1984). Effects of halothane or enflurane with controlled ventilation on auditory evoked potentials. British Journal of Anaesthesia, 56, 315-322.
[291]
Tiitinen, H., Sinkkonen, J., Reinikainen, K., Alho, K., Lavikainen, J., & Naatanen, R. (1993). Selective attention enhances the auditory 40-Hz transient response in humans. Nature, 364(6432), 59-60.
[292]
Tiitinen, H., Sinkkonen, J., May, P., & Naatanen, R. (1994). The auditory transient 40-Hz response is insensitive to changes in stimulus features. Neuroreport, 6(1), 190-2.
[295]
Tucker, D.A., & Ruth, R.A. (1996). Effects of age, signal level and signal rate on the auditory middle latency response. Journal of the American Academy of Audiology, 7, 83-91.
[307]
White, P.F., Way, W.L., & Trevor, A.J. (1982). Ketamine - its pharmacology and therapeutic uses. Anesthesiology, 56, 119-136.
[310]
Woldorff, M.G., & Hillyard, S.A. (1991). Modulation of early auditory processing during selective listening to rapidly presented tones. Electroencephalography and Clinical Neurophysiology, 79(3), 170-91.
[311]
Woldorff, M.G., Gallen, C.C., Hampson, S.A., Hillyard, S.A., Pantev, C., Sobel, D., & Bloom, F.E. (1993). Modulation of early sensory processing in human auditory cortex during auditory selective attention. Proceedings of the National Academy of Sciences of the United States of America, 90(18), 8722-6.
[312]
Woldorff, M.G., Hillyard, S.A., Gallen, C.C., Hampson, S.R., & Bloom, F.E. (1998). Magnetoencephalographic recordings demonstrate attentional modulation of mismatch-related neural activity in human auditory cortex. Psychophysiology, 35(3), 283-92.
[316]
Xu, Z.M., Vel, E.D., Vinck, B., & Cauwenberge, P.V. (1996). Middle-latency responses to assess objective thresholds in patients with noise-induced hearing losses and Meniere's disease. European Archives of Oto-Rhino-Laryngology, 253, 222-226.
[317]
Yabe, H., Tervaniemi, M., Sinkkonen, J., Huotilainen, M., Ilmoniemi, R.J., & Naatanen, R. (1998). Temporal window of integration of auditory information in the human brain. Psychophysiology, 35(5), 615-9.
[318]
Yamamoto, T., Uemura, T., & Llinas, R. (1992). Tonotopic organization of human auditory cortex revealed by multi-channel SQUID system. Acta Oto-Laryngologica, 112(2), 201-4.