The hypothesis that spoken presentation of information might better enable relational recall than written presentation stems from evidence suggesting that people are better able to rapidly form complex interpretations of spoken text than of written text. For example, Jakimik and Glenberg (1990) found that people are better able to resolve anaphors (which require them to resolve the temporal structure of the text) for spoken text than for written text. Carroll and Korukina (1999) found better memory for elements of complex spoken text than for written text. These results suggest that comprehension and memory for relational items may be better for spoken presentations than for written presentations. Of course, the possibility that people can more readily derive complex representations of spoken text than of written text is somewhat counterintuitive. One might expect that the written modality would better support careful, deliberative processing and would thus foster deeper encodings than would spoken text.
In addition to the features of the stimulus, the participants' direction of attention also affects ear advantages. Hugdahl and Andersson (1986) investigated the impact of attention on the right ear advantage for verbal stimuli. Participants listened to pairs of competing CV syllables presented dichotically under three conditions: a non-forced, free recall condition (i.e. focus on either ear); a forced-right ear condition (i.e. focus on the right ear); and a forced-left ear condition (i.e. focus on the left ear). In the non-forced condition, participants recalled more CV syllables presented to the right ear than to the left. In the forced-right ear condition, significantly more CV syllables were again recalled from the right ear than from the left, and the opposite effect occurred in the forced-left ear condition. The authors concluded that, in addition to linguistic processing, these ear advantages were due to the participants' direction of attention. Similarly, Jancke, Buchanan, Lutz, and Shah (2001) investigated the impact of attention on hemispheric activation from linguistic and prosodic features of words. One dichotic listening condition involved word detection, while the other required emotional prosody detection. Irrespective of the task, fMRI indicated that focusing attention on the left ear increased activation in the right auditory cortex, whereas focusing attention on the right ear increased activation in the left auditory cortex. Thus, hemispheric activity was enhanced in the auditory cortex contralateral to the attended ear, and this activity did not depend on the feature of the stimulus being processed. Hugdahl et al. (2009) argue that the three forced
Participants were asked to state whether they felt in control of the vehicle at any point during each scenario (Q4, Table 4). Comparing the autonomous and manual driving scenarios shows that participants felt more in control during the manual driving scenarios than during the autonomous driving scenarios (F(1,14) = 80.66, p < 0.001). The auditory presentation methods show a significant difference between scenarios (F(4,56) = 14.004, p < 0.001). Pairwise comparison with a Bonferroni-adjusted alpha again showed significant differences between no sound and all other scenarios (standard sound (p = 0.001), user inferred (p = 0.002), static spatialised (p < 0.001), and dynamic spatialised (p = 0.002)). We asked a number of other questions during the Likert-scale stage of our evaluation. These questions asked participants whether the sound parameters Pitch (Q5), Repetition (Q6), Duration (Q7), Timbre (Q8), and Volume (Q9), respectively, played a role in enhancing their awareness of the intended actions of the vehicle. Results show significant differences between driving scenarios (manual and autonomous) for Pitch (F(1,14) = 8.007, p = 0.013) and Timbre (F(1,14) = 5.237, p = 0.038). This finding raises the question of why these particular sonic attributes differed between driving scenarios while Repetition, Duration, and Volume did not. All of the auditory presentation methods were significantly different from the no sound presentation method. This was expected, as none of these sonic parameters were present during this presentation method (p < 0.001 for all). Three final questions were asked after the manual driving scenarios only. These questions related to a participant's own actions and the impact the different auditory presentation methods had. All auditory presentation methods containing sound were significantly different when compared in a pairwise manner to the no sound presentation method (p < 0.001 for all).
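The Bonferroni adjustment used in the pairwise comparisons above can be sketched in a few lines. The p-values below are hypothetical placeholders, not the study's data; the sketch only shows how a family-wise alpha of 0.05 is divided across four comparisons.

```python
# Minimal sketch of a Bonferroni-adjusted pairwise comparison.
# The p-values are illustrative placeholders, not the study's results.
alpha = 0.05
p_values = {
    "no sound vs. standard sound": 0.001,
    "no sound vs. user inferred": 0.002,
    "no sound vs. static spatialised": 0.0005,
    "no sound vs. dynamic spatialised": 0.002,
}

# Divide the family-wise alpha by the number of comparisons.
adjusted_alpha = alpha / len(p_values)  # 0.05 / 4 = 0.0125

for pair, p in p_values.items():
    verdict = "significant" if p < adjusted_alpha else "not significant"
    print(f"{pair}: p = {p} -> {verdict}")
```

An equivalent convention multiplies each p-value by the number of comparisons and compares against the unadjusted alpha; the decisions are identical.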
Our approach to the question of modality in short-term memory derives from a radically different approach to short-term memory itself. Bluntly, we regard short-term memory as a perceptual-motor task setting, in the same way that, for example, the goal-directed manual apprehension and manipulation of a solid object may be regarded as a perceptual-motor task setting. In the latter case, the task involves processes that render object-oriented visual perceptual representations that may provide control programs for the manual interaction with the object in order to accomplish the task-specific goals. In the case of short-term serial recall, the object of concern is the sequence of verbal material presented for reproduction, and the motor system adopted for manipulation of this object is the articulatory control system involved in the production of speech. Overall performance in the setting is an outcome of perceptual and motor processes and the interactions between them. From this perspective, modality of presentation comes into play with respect to object formation processes as they operate in visual and auditory presentation, and how the perceptual representations so formed afford, to greater or lesser degrees, facile manipulation of those objects and their constituents in the speech motor system. Key to this account are both the nature and the consequences of perceptual object formation. While objects are fundamental functional units for both vision and audition, there are important differences between modalities in how they are formed. As a generalization, processes of auditory object formation play out with respect to the temporal dimension, where extended acoustic events are grouped together over time, on the basis of gestalt-like properties of similarity and continuity of frequency, timbre, rhythm, and so on (e.g., Bregman, 1990). Visual object formation can also be characterized in terms of gestalt grouping cues, but
These findings suggest that there are several identifiable rhythmic skills that at least partially rely on distinct neural resources. As a result, each rhythmic skill may relate to language skills in different ways. Tracking the rhythmic structure of music, for example, may rely on phase-locking of ongoing slow oscillations (1.5–7 Hz) in auditory cortex to the rhythmic structure of music (Large, 2008); this same neural mechanism has also been proposed for the tracking of the amplitude envelope of speech (Goswami, 2011). Alternately, tracking the rhythmic structure of music may call upon motor planning regions, which could work in concert with the auditory system to predict when future beats will arrive (Patel and Iversen, 2014). This procedure could also underlie the tracking of rhythmic regularities in speech, such as the tendency for speakers to slow as they near the ends of phrases or sentences. Thus, although both entrainment and rhythmic discrimination (Strait et al., 2011) relate to language skills, the mechanisms underlying these two relationships may be different. Here we propose that auditory-motor entrainment and phonological skills relate due to a shared
The findings cited here show that alterations in MOC activity can influence hearing sensitivity, but there is some doubt that these effects serve any purpose at the behavioural level, as suggested by Scharf et al. (1994), who found that a patient who had undergone vestibular neurectomy (which includes OCB section) preferred to use his operated ear for telephone listening, and that a series of other psychoacoustic functions were unimpaired following surgery. Before the role of MOC efferents in hearing can be understood, the limits of their operating range at the cochlear level must be fully elucidated. As cited above, there are limits to the dynamic range of the effects, in that MOC activity becomes less effective as cochlear stimulation increases. There are also limits to the range of MOC effects within both the frequency and time domains. It seems reasonable that efferent activation should be associated with perstimulatory response suppression that is maximal near stimulus onset, since sound-induced increases in the firing rates of auditory fibres are maximal at stimulus onset and MOC inhibition of afferent activity
connections which make vocal learning possible and those crucial for synchronization. Speculation to date regarding the necessary preconditions for synchronization has largely focused on connections between auditory cortical areas, pre-motor regions, and the basal ganglia (Patel et al. 2009, Merchant and Honing 2014, Patel and Iversen 2014). However, our finding that high-frequency auditory neural precision (> 100 Hz, corresponding to time scales of 10 ms and shorter) is linked to synchronization skill suggests that it may be fruitful to examine structural and functional interactions between subcortical auditory regions and motor areas as well. For example, strengthening of the direct connection between the auditory midbrain and cerebellum could enable the rapid, precise auditory-motor integration necessary for both vocal learning and synchronization. Conversely, a lack of a strong connection between subcortical auditory and motor areas in species that cannot perform vocal learning could explain the lack of a benefit for auditory versus visual stimuli for rhesus monkeys performing perceptual-motor synchronization tasks (Zarco et al. 2009, Kraus and White-Schwoch in press).
A number of recent studies have used a new magnetic resonance-based method, diffusion tensor imaging (DTI), to examine auditory and receptive language regions in ASD. This technique provides a measure of the integrity of white matter tracts and thus an indication of neuronal connectivity. The first DTI study conducted in individuals with ASD was performed by Barnea-Goraly and colleagues in 2004. In this study, DTI was used to investigate white matter structure in 7 children with high-functioning autism and 9 typically-developing age-matched controls. Results provided evidence of reduced white matter integrity in a number of brain regions in the ASD group, including the superior temporal sulcus and medial temporal gyrus, both of which have been implicated in processing language (Barnea-Goraly et al., 2004). Further DTI research conducted by Lee and colleagues (2007) found evidence for aberrant white matter microstructure in the bilateral STG of 43 adolescents and adults with autism relative to age-matched controls. Similar findings were obtained in the right STG of 25 adolescents with autism by Cheng and colleagues (2010), while another recent DTI study observed evidence for hemispheric asymmetry in the STG of 30 adolescents and adults with autism relative to typically-developing controls (Lange et al., 2010). Several DTI studies have also identified white matter abnormalities in the corpus callosum, including the body, which has extensive connections between auditory cortices and functions in the interhemispheric transfer of auditory information (Alexander et al., 2007; Barnea-Goraly et al., 2004; Keller et al., 2007). Moreover, a recent study found evidence for aberrant white matter connectivity in the arcuate fasciculus, a white matter fibre tract that connects the posterior STG and planum temporale (Wernicke's area) to premotor language regions involved in the planning of speech production (Fletcher et al., 2010).
Sensory processing changes, especially in the auditory sense, have a wide range in autism spectrum disorder (ASD), and based on neurobiologically-based theories, abnormalities in the processing of the temporal features of sensory inputs can contribute to the main complaints in this disorder [4-6]. Abnormalities in the processing and integration of sensory inputs cause unusual sensory responses and socio-cognitive impairment in patients with autism [3,7-9]. Sensory symptoms in patients with ASD include atypical sensory sensitization (hypersensitivity or hyposensitivity), which seems to be more prevalent in the auditory domain. It should also be noted that in patients with autism, simple sensory stimulus processing is usually normal or sometimes enhanced, but the processing of complex stimuli is often significantly abnormal. For example, auditory tasks involving simple stimuli (pure tones) and low-level functions (such as detection and labeling) that are processed in the primary auditory cortex can be performed well in ASD. But functions requiring higher levels of auditory processing (evaluation, attention), including comprehension of spatio-temporal compound stimuli like speech, are typically not well executed in ASD [6,8,10,11,15]. Such a finding supports the major theories related to this disorder, including weak central coherence, meaning the inability to combine information into an integrated percept [13,16,17]. Various interventions and therapies have been designed and recommended because of the heterogeneity of patients with ASD. Some methods of auditory rehabilitation in this field use music to improve the complaints of these patients [1,3]. Also, given that children with autism have difficulty understanding speech in noise, auditory interventions that can minimize the stress from mishearing are important for the health of children with autism and seem desirable to take into account at school [9,13,17].
Other methods include dichotic listening training, neurofeedback, and cognitive
op positive attitudes towards auditory training therapy for children with perceptual disorders. Also, governments at all levels should make adequate provision for identification and screening materials for special needs individuals in various schools and employ competent personnel to use them. The government should organise seminars and workshops on auditory perception intervention programmes for teachers of children with perceptual disorders. Lastly, the government should, as a matter of urgency, establish an early intervention programme in line with those in advanced countries of the world. This programme should include professionals, parents and caregivers. The programme should be designed to ensure the participation of interdisciplinary teams to tackle various problems emanating from perceptual and other related disorders.
The present study correlated the electrophysiology of auditory processing as recorded by MEG with the neurochemistry of the auditory cortex as assessed by MRS. Our study, however, has several limitations. First, the small number of participants limited the power of the statistical analyses. In addition, we acquired data from only the left hemisphere. The 37-channel MEG system used here could be positioned close to the superior temporal plane, resulting in a good signal-to-noise ratio, but could only record the neural activity of one hemisphere at a time. The processing of acoustic stimuli can differ between hemispheres, especially when speech stimuli are used. Similarly, the concentrations of neurochemicals, as assessed by MRS, may display interhemispheric differences. Therefore, we cannot rule out that MEG-MRS associations for the right auditory cortex differ from those reported here for the left auditory cortex. For future studies, a larger sample size and data collection from both hemispheres (using a whole-head MEG system) are desirable. Moreover, we performed MEG recordings while participants were watching a silent movie (i.e., in a passive listening condition). The associations between MEG and MRS parameters reported here might change when MEG is recorded under top-down attentional modulation, as attention is known to influence auditory processing. In addition to the concentration of NAA and Cho, other factors have been shown to influence auditory cortical processing, such as serotonergic neurotransmission (see Background). In a study combining electrophysiology and genetics, the amplitude increase of the N1/P2 component in response to increasing stimulus intensities (termed the loudness dependence) was significantly different between individuals with different variants of the serotonin transporter gene. We expect that genetic studies will considerably contribute to future research on individual auditory processing.
Some support for the attentional aspect of music was cited in earlier sections (see “Cognitions” section in Music and Psychology, and “Attention-grabbing qualities of music” in Music and Marketing), and is further corroborated by work on cross-modal responses (Anand, Holbrook and Stephens 1988) and disfluency (Mehta, Zhu and Cheema 2012). For example, Anand et al. (1988) showed that affective responses to a visual stimulus are mediated by cognitions when both verbal and instrumental auditory stimuli are heard simultaneously. That is, appraisal of the visual stimuli is influenced by attention to the auditory cue. In a more recent example, Mehta et al. (2012) support the attentional aspect of music by showing that a moderate level of noise (70 dB) increases processing difficulty, which in turn induces greater abstract thinking and more creativity. They suggest that this processing disfluency is a result of increased difficulty in focusing on a focal task, as the ambient sound attracts attention and subsequently detracts from the capacity to think concretely. Most famously, Kellaris and Kent (1992) found that
Various experiences, including auditory, visual, and tactile ones, can affect memory system functions [14,15]. A factor that leaves an impression can become an experience. The memory system is engaged by frequent triggers; therefore, the durability of a gained experience and its consolidation depend on the amount of repetition. With auditory stimuli, a single encounter with, or repeated listening to, a stimulus creates an auditory experience and lays down a lasting pathway in auditory memory. This phenomenon has been colloquially called the mere exposure effect, which improves the perception of a repeated stimulus. Gilliland and Moore, relying on the mere exposure effect and using 25 repetitions, showed that people come to prefer listening to a classical piece of music that was not initially popular among the participants. The results also demonstrated improvements in participants' reaction time and comprehension of the track. This is because of the reorganization of neural circuits and the new plasticity produced by repeatedly experiencing the stimuli.
Participants were diagnosed by a team of experienced developmental pediatricians. All children in the experimental group fulfilled the DSM-IV criteria for autism. Children with developmental language delay were included if they failed to express words by 24 months or phrases by 3 years of age. We used the Childhood Autism Rating Scale (CARS, Chinese Version) to evaluate all the children, and also used the CARS to assess the presentation of autism symptoms. According to the CARS, children who scored 30-36 points showed mild to moderate autism (n = 92), and those whose scores ranged from 37 to 60 points manifested severe autism (n = 64). Exclusion criteria for both groups were the presence of hearing impairment, chromosomal abnormalities (e.g. Fragile X syndrome), or neurological disorders (e.g. epilepsy, seizure disorders). Whether a child had hearing impairment was determined by the auditory brainstem response. Those who did not complete all the evaluations were excluded as well. Informed consent was obtained from the caregivers of the subjects.
Finally, DCN pyramidal cells resemble hippocampal, cortical, and cerebellar neurons in that their synapses can be strengthened and weakened by certain patterns of input. Kv4 channels play an important role by enabling the cells to potentiate synapses based on the patterns of input and output. In hippocampal pyramidal cells, Hebbian plasticity is altered when Kv4.2 channel voltage sensitivity is shifted (Watanabe et al. 2002). This is due to Kv4.2's ability to attenuate back-propagating action potentials that allow Ca2+ to enter dendrites and initiate signaling cascades that result in altered synaptic strength. When preceding depolarizing input inactivates Kv4.2, it will not be available to oppose the inward current resulting from back-propagating action potentials, and more Ca2+ will enter the dendrites. If the depolarizing input is too far removed in time from the back-propagating action potential, Kv4.2 can activate and open to attenuate Ca2+ entry. Parallel fiber synapses on the apical dendrites of pyramidal cells potentiate according to Hebbian rules, like hippocampal pyramidal cells (Tzounopoulos et al. 2004). So far, this same type of synaptic plasticity has not been reported for other auditory brainstem neurons. Therefore, Kv4.3 is more than likely an important component of spike-timing-dependent plasticity in pyramidal neurons. Since this channel can be modulated, pyramidal cells could alter the conditions under which synapses are potentiated or depressed. This ability would be very useful to neurons that integrate multiple types of sensory information.
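The timing dependence described above, where closely paired input and back-propagating spikes change synaptic strength while widely separated pairs do not, can be caricatured with a generic spike-timing-dependent plasticity (STDP) window. This is an illustrative textbook rule, not a biophysical model of DCN pyramidal cells; the function name and the parameters (a_plus, a_minus, tau) are assumptions chosen for the sketch.

```python
import math

def stdp_weight_change(dt_ms, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Synaptic weight change as a function of dt = t_post - t_pre (ms).

    Generic exponential STDP window: presynaptic input shortly before the
    postsynaptic (back-propagating) spike potentiates, input shortly after
    depresses, and widely separated pairings produce almost no change,
    loosely mirroring the Kv4.2 recovery described in the text.
    """
    if dt_ms > 0:    # pre before post -> potentiation
        return a_plus * math.exp(-dt_ms / tau)
    elif dt_ms < 0:  # post before pre -> depression
        return -a_minus * math.exp(dt_ms / tau)
    return 0.0

# A pairing 5 ms apart changes the synapse far more than one 100 ms apart.
print(stdp_weight_change(5.0), stdp_weight_change(100.0))
```

The exponential decay with time constant tau plays the role of the channel's recovery: once the interval exceeds a few tens of milliseconds, the plasticity window has effectively closed.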
The auditory system is an acoustic and cognitive wonder. It has a remarkable ability to decompose a soundscape, even a noisy one, into its constituent sources and to instantly make sense of the entire acoustic environment that reaches our eardrums. In addition, when several speakers are talking simultaneously, we are able to follow the speaker of interest with ease. For digital signal processing, however, this remains a highly complex problem: the task at hand is the estimation of superposed signals in a real environment. To this end, several techniques have been developed to separate composite speech signals.
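The superposition problem can be made concrete with the instantaneous linear mixing model that underlies many separation techniques: observations x are a mixing matrix A applied to sources s. In this sketch the mixing matrix is assumed known, so the sources can be recovered exactly with its inverse; real blind source separation (e.g. independent component analysis) must estimate the unmixing from the observations alone.

```python
import numpy as np

# Two toy "speakers": a sine wave and a square wave.
t = np.linspace(0.0, 1.0, 1000)
s = np.vstack([
    np.sin(2 * np.pi * 5 * t),            # source 1
    np.sign(np.sin(2 * np.pi * 3 * t)),   # source 2
])

# Hypothetical mixing matrix: each "microphone" hears both speakers.
A = np.array([[1.0, 0.5],
              [0.4, 1.0]])

x = A @ s                      # superposed observations (the mixtures)
s_hat = np.linalg.inv(A) @ x   # recovery, possible here because A is known

print(np.allclose(s_hat, s))   # True
```

The hard part that separation algorithms address is precisely that A is unknown in practice; ICA, for instance, estimates an unmixing matrix by maximizing the statistical independence of the recovered signals.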
Audio Code (AC). This task was developed to be an auditory analogue of the symbol digit task described above. A code table is displayed at the top of the computer screen for the duration of the task, comprising pictures of eight musical instruments arranged horizontally, each paired with one of the numbers one through eight. The instruments include a snare drum, trumpet, guitar, cymbals, piano, bell, harp, and violin. For each item, the sound of one of the instruments was presented via headphones at an intensity of 65 dB. Participants responded by left-clicking the mouse on the corresponding digit in a 2 x 4 numerical response grid positioned at the bottom of the screen. Subsequent items commenced after a response was registered. Participants completed two familiarization phases: in the first, instrument names were presented and participants clicked on the corresponding instrument (2 trials each); in the second, instrument sounds were presented instead of text (2 trials each). Following this, participants were required to complete four test trials for each instrument correctly before they could proceed to the test phase. The outcome measure was the number of items correctly completed in 2 minutes.