Primary auditory cortex representation of fear‐conditioned musical sounds
Main Authors: 
Format: Online Article Text
Language: English
Published: John Wiley & Sons, Inc., 2019
Subjects: 
Online Access:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7268068/
https://www.ncbi.nlm.nih.gov/pubmed/31663229
http://dx.doi.org/10.1002/hbm.24846
Abstract: Auditory cortex is required for discriminative fear conditioning beyond the classical amygdala microcircuit, but its precise role is unknown. It has previously been suggested that Heschl's gyrus, which includes primary auditory cortex (A1) as well as other auditory areas, encodes threat predictions during presentation of conditioned stimuli (CS) consisting of monophones or frequency sweeps. The latter resemble natural prosody and contain discriminative spectro‐temporal information. Here, we use functional magnetic resonance imaging (fMRI) in humans to address CS encoding in A1 for stimuli that contain only spectral but no temporal discriminative information. Two musical chords (complex) or two monophone tones (simple) were presented in a signaled reinforcement context (reinforced CS+ and nonreinforced CS−), or in a different context without reinforcement (neutral sounds, NS1 and NS2), with an incidental sound detection task. CS/US association encoding was quantified as the increased discriminability of BOLD patterns evoked by CS+/CS−, compared to NS pairs with similar physical stimulus differences and task demands. A1 was defined at the single‐participant level, based on individual anatomy. We found that in A1, discriminability of CS+/CS− was higher than for NS1/NS2. This representation of unconditioned stimulus (US) prediction was of comparable magnitude for both types of sounds. We did not observe such encoding outside A1. Unlike the frequency sweeps investigated previously, musical chords did not share representations of US prediction with monophone sounds. In summary, our findings suggest a decodable representation of US predictions in A1 for various types of CS, including musical chords that contain no temporal discriminative information.
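The abstract operationalizes CS/US association encoding as the discriminability of BOLD patterns evoked by CS+ versus CS−, over and above the discriminability of the neutral-sound pair. The sketch below illustrates one common way to compute such a comparison, assuming cross-validated linear classification as the discriminability measure; the abstract does not name the exact metric, and the function, array names, shapes, and placeholder data are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch (not the authors' pipeline): quantify "discriminability" of
# BOLD patterns as cross-validated classification accuracy, then compare the
# CS+/CS- contrast against the NS1/NS2 contrast. Array shapes
# (n_trials x n_voxels within an individually defined A1 mask) are assumptions.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def discriminability(patterns_a, patterns_b, n_folds=5):
    """Cross-validated accuracy for separating two sets of trial-wise BOLD patterns."""
    X = np.vstack([patterns_a, patterns_b])
    y = np.concatenate([np.zeros(len(patterns_a)), np.ones(len(patterns_b))])
    clf = LinearSVC(C=1.0, max_iter=10000)
    return cross_val_score(clf, X, y, cv=n_folds).mean()

# Placeholder data standing in for single-trial response estimates in A1
# (e.g., 40 trials per condition, 200 voxels); replace with real estimates.
n_trials, n_voxels = 40, 200
cs_plus  = rng.normal(0.3, 1.0, size=(n_trials, n_voxels))   # reinforced CS+
cs_minus = rng.normal(0.0, 1.0, size=(n_trials, n_voxels))   # nonreinforced CS-
ns1      = rng.normal(0.1, 1.0, size=(n_trials, n_voxels))   # neutral sound 1
ns2      = rng.normal(0.0, 1.0, size=(n_trials, n_voxels))   # neutral sound 2

acc_cs = discriminability(cs_plus, cs_minus)
acc_ns = discriminability(ns1, ns2)

# The key quantity from the abstract: CS+/CS- discriminability exceeding
# NS1/NS2 discriminability despite comparable physical stimulus differences.
print(f"CS+/CS- accuracy: {acc_cs:.2f}, NS1/NS2 accuracy: {acc_ns:.2f}, "
      f"association-encoding index: {acc_cs - acc_ns:+.2f}")
```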