Audio-visual emotion recognition

Top PDF Audio-visual emotion recognition:

Construction of Japanese Audio Visual Emotion Database and Its Application in Emotion Recognition

Despite these new approaches, acted emotion remains one of the most promising bases for emotion studies. The controls imposed on the corpus allow for comparison and analysis across different emotions and speakers, and the prominent emotion content is advantageous for preliminary study. Furthermore, a previous study argued that careful research design can balance the flaws of emotion portrayals (Bänziger and Scherer, 2007). In this paper, we collect data from portrayals of various emotion occurrences by 14 native Japanese speakers. The novelty of our approach lies in the design of the portrayed emotion. In addition to classic monologue reading, we include a number of situated dialogues. With the monologue, we gather prominent, simple, and isolated emotional speech, useful for preliminary multimodal emotion research. The dialogues, on the other hand, provide less stereotypical, more subtle, and contextualized emotion, counteracting the shortcomings of typical emotion portrayal. Details of the corpus design and the construction procedure are described in the following sections.

Naturalistic Audio Visual Emotion Database

Progress in research areas such as emotion recognition, identification, and synthesis relies heavily on the development and structure of databases. This paper addresses some of the key issues in the development of emotion databases. A new audio-visual emotion (AVE) database is developed, consisting of audio, video, and audio-visual clips sourced from English-language TV broadcasts such as movies and soap operas. The data clips are manually segregated in an emotion- and speaker-specific way. The database is developed to address emotion recognition in actual human interaction, and it is structured so that it may be useful in a variety of applications, such as speaker- or gender-based emotion analysis and emotion identification in multiple emotive dialogue scenarios.

Speech emotion recognition with early visual cross modal enhancement using spiking neural networks

All experiments are implemented using the spiking neural network simulator BRIAN [50]. We use the same parameters as [34] in terms of input firing rates, membrane threshold, and resting-phase duration. The input layer of the network architecture consists of two groups of neurons, each representing one modality. The number of neurons in each input group is proportional to the size of the input, that is, the size of the audio features and the video frame features: we use 40*388 and 100*100 input neurons for the auditory and visual input, respectively. The input layer is connected to a convolutional excitatory layer, which is in turn connected to an inhibitory layer with lateral inhibition, where each inhibitory neuron is connected to all neurons in the excitatory layer apart from the one it receives information from. Each input is divided into convolutional features by a stride window that moves over the input; in the audio modality, the convolution window moves along the temporal axis. Convolutional windows are applied separately to each modality, so the visual and audio pathways have different configurations in terms of convolution window, number of features, and total number of excitatory neurons. We experimented with various configurations and chose the best-performing one, which uses a window size and stride of 10 for the auditory input and 10 for the visual input. The number of features is set to 60 for the auditory modality and 60 for the visual modality.
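As an illustration, the sketch below wires up a comparable two-modality pathway with the Brian 2 simulator API: Poisson-encoded audio and visual inputs feed separate excitatory feature layers, each paired with an inhibitory layer that provides lateral inhibition. The neuron counts follow the figures quoted above, but the firing rates, synaptic weights, connection probability, and neuron constants are illustrative assumptions rather than the paper's settings.

```python
# Hypothetical two-modality spiking pathway, loosely following the description
# above; not the authors' implementation.
from brian2 import NeuronGroup, PoissonGroup, Synapses, Network, Hz, mV, ms

n_audio_in, n_video_in = 40 * 388, 100 * 100   # input sizes quoted in the text
n_features = 60                                # features per modality, as above

lif_eqs = 'dv/dt = -v / (20*ms) : volt'        # simple leaky integrate-and-fire

def modality_pathway(n_inputs, n_features):
    """Input -> excitatory feature layer -> inhibitory layer with lateral inhibition."""
    inputs = PoissonGroup(n_inputs, rates=20 * Hz)             # placeholder rates
    exc = NeuronGroup(n_features, lif_eqs, threshold='v > 20*mV',
                      reset='v = 0*mV', method='exact')
    inh = NeuronGroup(n_features, lif_eqs, threshold='v > 20*mV',
                      reset='v = 0*mV', method='exact')
    s_in = Synapses(inputs, exc, on_pre='v_post += 0.5*mV')    # learned in the real model
    s_in.connect(p=0.05)                                       # stand-in for conv windows
    s_ei = Synapses(exc, inh, on_pre='v_post += 2*mV')
    s_ei.connect(j='i')                                        # one-to-one excitatory drive
    s_ie = Synapses(inh, exc, on_pre='v_post += -1*mV')
    s_ie.connect(condition='i != j')                           # inhibit all but the driver
    return [inputs, exc, inh, s_in, s_ei, s_ie]

audio_path = modality_pathway(n_audio_in, n_features)
video_path = modality_pathway(n_video_in, n_features)
net = Network(*audio_path, *video_path)
net.run(100 * ms)
```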

Audio Visual Speech Synthesis and Speech Recognition for Hindi Language

The synthesis of expressive speech is a target application with potential relevance in several areas, including the dynamic generation of multimodal media content and naturalistic human-machine interaction [8]. Expression is defined as the indicator of various emotional states reflected in the speech waveform [15]. Expressive speech synthesis deals with synthesizing speech and adding expressions related to different emotions and speaking styles to the synthesized speech [9]. The quality of a speech synthesizer is judged by its similarity to the human voice and by its intelligibility. Emotion is a very important element in expressive speech synthesis; expressions such as happiness, sadness, fear, and surprise are essential in storytelling applications, and expressions play a very important role in communication. In our system we detect emotions according to these expressions. Our approach primarily consists of three steps: first, we take the text and the target PAD values as input and employ a text-to-speech (TTS) engine to generate neutral speech. A Boosting-GMM is then used to convert the neutral speech into emotional speech; the GMM method is well suited to a small training set [6]. The TD-PSOLA algorithm is used to modify pitch and duration, and the acoustic features of the emotional speech are calculated using MFCCs.
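As a small illustration of the final feature step, the snippet below extracts MFCCs with librosa; the file path, sampling rate, and number of coefficients are assumptions for the example, not values from the paper.

```python
# Minimal MFCC extraction sketch (illustrative settings, not the paper's).
import librosa

y, sr = librosa.load("emotional_utterance.wav", sr=16000)   # hypothetical file
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)          # (13, n_frames) matrix
print(mfcc.shape)
```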

An investigation of emotion recognition ability and metacognitive judgements of emotion recognition performance in trait narcissism

A previous study examined the recognition of emotional expressions displayed at 100% intensity in patients with NPD (Marissen et al., 2012). Participants were asked to identify static pictures of five emotional expressions (sadness, anger, fear, happiness, and disgust). The researchers concluded that NPD patients had a general deficit in recognising emotions, particularly for the negatively valenced emotions of fear and disgust. Furthermore, despite performing worse than controls on the emotion recognition task, NPD patients reported that they would be able to identify the feelings of others as well as controls could. Wai and Tiliopoulos (2012) assessed emotion identification ability across the dark triad traits by presenting static black-and-white images of basic emotional expressions (fear, happiness, sadness, and anger) and found no significant emotion perception deficits associated with trait narcissism. Prior research has often examined emotion perception using static photos at 100% intensity (e.g., Ekman & Friesen, 1976). However, this is not reflective of everyday life, in which emotional expressions are dynamic. New tasks have been developed to illuminate individual differences in the ability to perceive emotions displayed at varying intensity levels (Montagne et al., 2007). Such tasks may be capable of detecting more subtle impairments in emotion perception in individuals high in trait narcissism.

Intelligent Facial Emotion Recognition

This work proposed a facial expression recognition framework with hvnLBP-based feature extraction, mGA-embedded PSO-based feature optimization, and diverse classifier-based expression recognition. The proposed hvnLBP operator performs horizontal and vertical neighbourhood pixel comparison to retrieve the initial discriminative facial features, and it outperforms state-of-the-art LBP variants, LPQ, and conventional LBP for texture analysis. Moreover, a new PSO algorithm, the mGA-embedded PSO, is proposed to overcome the premature convergence problem of conventional PSO in feature optimization. The mGA-embedded PSO algorithm incorporates personal average experience and Gaussian mutation into the velocity update, and it employs the diversity maintenance strategy of mGA by keeping the original swarm in a non-replaceable memory, which remains intact during the lifecycle of the algorithm to increase swarm diversity. Furthermore, it maintains a secondary swarm with a small population size of 5, hosting the swarm leader and the four follower particles from the non-replaceable memory with the highest/lowest correlation with the leader, to increase local and global search capabilities. The algorithm subsequently partitions the facial features into dedicated regions for in-depth, local sub-dimension-based search. Overall, the local exploitation and global exploration mechanisms of the algorithm work cooperatively to guide the search toward globally optimal solutions. The empirical results indicate that this PSO algorithm outperforms other state-of-the-art PSO variants as well as conventional PSO and GA for optimal feature selection.
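The sketch below gives a minimal, hypothetical flavour of PSO-based feature selection with a Gaussian mutation term in the velocity update; it does not reproduce the mGA-embedded mechanisms (non-replaceable memory, secondary swarm), and the fitness function, data, and constants are purely illustrative.

```python
# Toy binary PSO feature selection with a Gaussian mutation term; not the
# mGA-embedded PSO itself, just the general idea.
import numpy as np

rng = np.random.default_rng(0)

def fitness(mask, X, y):
    """Toy fitness: reward class separation on selected features, penalise size."""
    if mask.sum() == 0:
        return -np.inf
    Xs = X[:, mask.astype(bool)]
    mu0, mu1 = Xs[y == 0].mean(0), Xs[y == 1].mean(0)
    return np.linalg.norm(mu0 - mu1) - 0.01 * mask.sum()

def pso_feature_selection(X, y, n_particles=20, n_iter=50, w=0.7, c1=1.5, c2=1.5):
    n_feat = X.shape[1]
    pos = (rng.random((n_particles, n_feat)) > 0.5).astype(float)  # binary masks
    vel = rng.normal(0, 0.1, (n_particles, n_feat))
    pbest = pos.copy()
    pbest_fit = np.array([fitness(p, X, y) for p in pos])
    gbest = pbest[pbest_fit.argmax()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, n_feat))
        vel = (w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
               + rng.normal(0, 0.05, vel.shape))        # Gaussian mutation term
        prob = 1.0 / (1.0 + np.exp(-vel))               # sigmoid transfer function
        pos = (rng.random(pos.shape) < prob).astype(float)
        fit = np.array([fitness(p, X, y) for p in pos])
        improved = fit > pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        gbest = pbest[pbest_fit.argmax()].copy()
    return gbest

# Usage on synthetic data: two classes differing on the first 5 of 40 features.
X = rng.normal(size=(200, 40))
y = (rng.random(200) > 0.5).astype(int)
X[y == 1, :5] += 2.0
selected = pso_feature_selection(X, y)
print("selected features:", np.flatnonzero(selected))
```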

5 Policy Options: Audio and Audio-visual Content Services

MultiChoice and M-Net said that historically the rationale for horizontal and cross-media limitations on control was to ensure plurality of voices and a diversity of content, particularly as regards news and current affairs programming. They argued that developments in recent years and the abundance of sources of every kind of content, including news and current affairs (originating from local and international sources), meant there is no longer any basis for retaining the existing cross media limits and that therefore the limitations should be reviewed. They said that these limits were introduced for a single channel analogue terrestrial commercial free-to-air environment and that traditional linear broadcasting services in South Africa are "facing increasing competition for the provision of audio-visual content from the Internet/over-the-top players, as well as from telecommunications operators (both fixed-line and mobile)". They noted that many new content providers are multinational companies, not subject to South African regulation.

Facial Emotion Recognition: A Survey

The facial modality holds the core position in emotion recognition; however, audio, text, physiological signals, and body posture can also play an important role. Much progress has been made in facial emotion recognition, but more work is still necessary to arrive at a satisfactory framework. This survey describes the background of facial emotion recognition and presents the related work; some of the publicly available datasets for researchers are also covered. A summary of papers from 2013 to 2018 shows that many different techniques are used for feature extraction and classification: some researchers use them individually, while others combine several techniques to benefit from more than one of them. There are no unified methods defined in this field. The trend in recent research is towards the use of deep learning, especially CNNs, and the results reached in these experiments are encouraging.

Speech Emotion Recognition based on Voiced Emotion Unit

This section explains the proposed method for segmenting a speech utterance into its emotion units. This study assumes that an emotion unit should be sought within the voiced segments. These segments carry the F0 information that is generally used to represent the emotional state of the speaker. The most significant segments of an emotional utterance are the voiced segments that include vowels, which are very important for SER, since vowels are the part richest in emotional information [11, 12, 13]. Vowel segmentation is a very challenging task and requires either prior knowledge, such as the phoneme boundaries, or an ASR system to determine those boundaries. On the other hand, segmentation into voiced segments can easily be done using voice activity detection (VAD) with very high performance [14, 15]. As a result, to keep the rich emotional information contained in the vowel parts while avoiding the limitations of vowel segmentation, voiced segments are the best candidates for the emotion unit.
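As a rough illustration (not the paper's VAD), the snippet below uses librosa's pYIN voicing decision to split an utterance into voiced segments that could serve as candidate emotion units; the file name and parameters are assumptions.

```python
# Split an utterance into voiced segments using pYIN's voicing flag (sketch).
import librosa

y, sr = librosa.load("utterance.wav", sr=16000)              # hypothetical file
f0, voiced_flag, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                                  fmax=librosa.note_to_hz("C6"), sr=sr)

hop = 512  # librosa's default hop length for pyin (frame_length // 4)
segments, start = [], None
for i, v in enumerate(voiced_flag):
    if v and start is None:
        start = i
    elif not v and start is not None:
        segments.append((start * hop / sr, i * hop / sr))    # times in seconds
        start = None
if start is not None:
    segments.append((start * hop / sr, len(voiced_flag) * hop / sr))

print(f"{len(segments)} voiced segments:", segments[:5])
```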

An Iterative Emotion Interaction Network for Emotion Recognition in Conversations

Based on the above idea, we propose an iterative emotion interaction network for emotion recognition in conversations. The network explicitly models the emotion interaction between utterances and, at the same time, addresses the absence of gold labels at inference time through an iterative improvement mechanism. Specifically, we first adopt an utterance encoder to obtain the representations of the utterances and make an initial prediction of the emotions of all utterances. Next, we integrate the initial prediction and the utterances through an emotion-interaction-based context encoder to make an updated prediction of the emotions. Finally, we use the iterative improvement mechanism to iteratively update the emotions, employing a loss function to constrain the prediction of each iteration and the correction behaviour between two adjacent iterations.
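The PyTorch sketch below illustrates the iterative idea in miniature: an utterance encoder produces an initial prediction, a context encoder re-reads the utterances together with the current emotion estimates, and every iteration's logits are supervised. Module choices, sizes, and the loss weighting are assumptions, not the authors' implementation.

```python
# Minimal iterative emotion-interaction sketch (hypothetical architecture).
import torch
import torch.nn as nn

class IterativeEmotionNet(nn.Module):
    def __init__(self, d_model=256, n_emotions=6, n_iters=3):
        super().__init__()
        self.n_iters = n_iters
        self.utt_encoder = nn.GRU(d_model, d_model, batch_first=True,
                                  bidirectional=True)
        self.init_clf = nn.Linear(2 * d_model, n_emotions)
        # Context encoder mixing utterance states with current emotion estimates.
        self.ctx_encoder = nn.GRU(2 * d_model + n_emotions, d_model,
                                  batch_first=True, bidirectional=True)
        self.update_clf = nn.Linear(2 * d_model, n_emotions)

    def forward(self, utt_feats):
        # utt_feats: (batch, n_utterances, d_model) pre-extracted utterance features
        h, _ = self.utt_encoder(utt_feats)
        logits = self.init_clf(h)                  # initial emotion prediction
        all_logits = [logits]
        for _ in range(self.n_iters):              # iterative improvement
            emo = logits.softmax(-1)
            ctx, _ = self.ctx_encoder(torch.cat([h, emo], dim=-1))
            logits = self.update_clf(ctx)          # updated prediction
            all_logits.append(logits)
        return all_logits                          # supervise every iteration

# Sum cross-entropy over all iterations so each refinement step is constrained.
model = IterativeEmotionNet()
feats = torch.randn(2, 10, 256)                    # toy batch of 10 utterances
labels = torch.randint(0, 6, (2, 10))
loss = sum(nn.functional.cross_entropy(l.reshape(-1, 6), labels.reshape(-1))
           for l in model(feats))
loss.backward()
```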

Audio-Visual Speech Recognition Using Lip Information Extracted from Side-Face Images

This paper has proposed audio-visual speech recognition methods using lip information extracted from side-face images, focusing on mobile environments. The methods individually or jointly use lip-contour geometric features (LCGFs) and lip-motion velocity features (LMVFs) as visual information. This paper makes the first proposal to use LCGFs based on an angle measure between the upper and lower lips in order to characterize side-face images. Experimental results for small-vocabulary speech recognition show that noise robustness is increased by combining this information with audio information. The improvement was maintained even when MLLR-based noise adaptation was applied to the audio HMM. Through the analysis of onset detection, it was found that LMVFs are effective for onset prediction and LCGFs are effective for increasing phoneme discrimination capacity. Noise robustness may be further increased by combining these two disparate features.

The Effect of Reliability Measure on Integration Weight Estimation in Audio-Visual Speech Recognition R. RAJAVEL

Potamianos et al. demonstrated that using mouth videos captured from cameras attached to wearable headsets produces better results than full-face videos [29]. With reference to the above, and to make the system more practical for real mobile applications, around 70 commonly used mobile functions (isolated words) were each recorded 30 times by a microphone and web camera located approximately 5-10 cm away from the speaker's right-cheek mouth region. Samples of the recorded side-face videos are shown in figure 1. The advantage of this arrangement is that face detection, mouth location estimation, identification of the region of interest, etc. are no longer required, thereby reducing the computational complexity [9]. Most available audio-visual speech databases are recorded in an ideal studio environment with controlled lighting, or keep factors such as background, illumination, distance between camera and speaker's mouth, and camera view angle constant. In this work, by contrast, the recording was done in an office environment on different days with different values for the above factors, to make the database suitable for real-life applications. The database also includes natural environment noises such as fan noise, bird sounds, and, at times, other people speaking and shouting.

Emotion Recognition-A Selected Review

The analyses included event-related potential (ERP) analysis and dipole source analysis (BESA), with the data low-pass filtered at 30 Hz. Electrode groups selected for ERP analysis measured the P1, N170, and P2 components from scalp regions corresponding to sites used in high-density face-elicited ERP studies, and the scalp regions of interest were identified from individual data and visual inspection of the grand averages. The ERP scores were analysed for the effect of age, and individual peak amplitudes were determined from each stimulus condition and the grand-average ERPs across groups and latencies. Whenever the sphericity assumption was violated, Greenhouse-Geisser-corrected degrees of freedom were used. Dipole source analysis, also known as Brain Electrical Source Analysis (BESA), was used to localize the cortical sources of the scalp ERPs and to model their equivalent spatiotemporal current dipoles, each with a given orientation and a time-varying dipole moment. Principal Component Analysis (PCA) showed that three dipole source pairs explained the majority of the variance in the ERP waveforms. Finally, dipole source analysis (BESA) of the high-density ERPs examined both the spatial location and the temporal profile of early electrical brain source activity in response to emotionally salient facial stimuli.

Emotion Recognition from Speech

A Gaussian Mixture Model is used as the classification model in this work. Gaussian Mixture Models (GMMs) are considered good for density estimation and for clustering [18], and the expectation-maximization (EM) algorithm is used to train them. GMMs are composed of component density functions (Gaussians), and the number of Gaussians in the mixture is referred to as the number of components. The total number of components can be chosen based on the number of training data points; however, the model becomes more complex as the number of components increases. In this work, a GMM with 16 components is created for each emotion and trained for 30 EM iterations. The error decreases with each iteration and finally reaches a constant value; Fig. 2 shows the relationship between the error and the number of iterations. The model uses diagonal covariance matrices.
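A minimal scikit-learn sketch of this per-emotion GMM setup is shown below: one 16-component, diagonal-covariance mixture per emotion, trained with EM and scored by average log-likelihood at test time. The feature arrays are synthetic placeholders for real MFCC-style features.

```python
# Per-emotion GMM classifier sketch (synthetic features, illustrative only).
import numpy as np
from sklearn.mixture import GaussianMixture

def train_emotion_gmms(features_by_emotion, n_components=16, max_iter=30):
    """features_by_emotion: dict mapping emotion label -> (n_frames, n_dims) array."""
    models = {}
    for emotion, feats in features_by_emotion.items():
        gmm = GaussianMixture(n_components=n_components,
                              covariance_type="diag",   # diagonal covariances
                              max_iter=max_iter)
        models[emotion] = gmm.fit(feats)
    return models

def classify(models, feats):
    """Pick the emotion whose GMM gives the highest average log-likelihood."""
    return max(models, key=lambda e: models[e].score(feats))

# Toy usage with random "MFCC-like" features.
rng = np.random.default_rng(0)
data = {"angry": rng.normal(1.0, 1, (500, 13)),
        "neutral": rng.normal(-1.0, 1, (500, 13))}
models = train_emotion_gmms(data)
print(classify(models, rng.normal(1.0, 1, (50, 13))))   # likely "angry"
```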

Fusion for Audio Visual Laughter Detection

Our goal was to perform automatic laughter detection by fusing audio and video signals at the decision level. We built audio and video classifiers and demonstrated that the fusion of the best classifiers significantly outperformed both single-modality classifiers. The best classifiers were the following. For audio, the GMM classifier trained on RASTA-PLP features performed best, resulting in an AUC-ROC of 0.825; on average, 16.9 Gaussian mixtures were used to model laughter and 35.6 Gaussian mixtures to model non-laughter. The best video classifier was an SVM classifier with an AUC-ROC of 0.916, trained on windows of 1.20 seconds using C = 2.46 and γ = 3.8 × 10⁻⁶. The best audio-visual classifier was constructed by training an SVM on the outputs of these two classifiers, resulting in an AUC-ROC of 0.928. During the fusion we evaluated different feature sets. For laughter detection in audio we obtained significantly better results with RASTA-PLP features than with PLP features; as far as we know, RASTA-PLP features have not been used before for laughter detection. For laughter detection in video we successfully used features based on the PCA of 20 tracked facial points. The performance of the video classifiers was very close to that of the fused classifiers, which is a promising result for laughter detection in video. However, during this research we excluded instances that contain smiles, and it is likely that our video classifier also classifies smiles as laughter.
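The sketch below illustrates decision-level fusion in the same spirit, assuming the single-modality scores are already available: an SVM is trained on the paired audio and video scores rather than on raw features, and the AUC-ROC is compared against the individual modalities. All data and hyperparameters are synthetic stand-ins, not the values reported above.

```python
# Decision-level fusion sketch: SVM on top of single-modality classifier scores.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 400
labels = rng.integers(0, 2, n)                    # 1 = laughter, 0 = non-laughter
# Stand-ins for the outputs of the single-modality classifiers (e.g. an audio
# GMM log-likelihood ratio and a video SVM decision value).
audio_scores = labels + rng.normal(0, 0.8, n)
video_scores = labels + rng.normal(0, 0.5, n)

X = np.column_stack([audio_scores, video_scores])
train, test = slice(0, 300), slice(300, None)

fusion_svm = SVC(kernel="rbf").fit(X[train], labels[train])   # default C, gamma
fused = fusion_svm.decision_function(X[test])
print("fused AUC-ROC:", roc_auc_score(labels[test], fused))
print("audio-only AUC:", roc_auc_score(labels[test], audio_scores[test]))
print("video-only AUC:", roc_auc_score(labels[test], video_scores[test]))
```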

The Emotional Impact of Audio-Visual Stimuli

Induced affect is the emotional effect of an object on an individual. It can be quantified through two metrics: valence and arousal. Valence quantifies how positive or negative something is, while arousal quantifies its intensity, from calm to exciting. These metrics enable researchers to study how people respond to various topics. Affective content analysis of visual media is a challenging problem because of differences in perceived reactions. Industry-standard machine learning classifiers such as Support Vector Machines can be used to help determine user affect. The best affect-annotated video datasets are often analyzed by feeding large numbers of visual and audio features through machine-learning algorithms. The goal is to maximize accuracy, in the hope that each feature will bring useful information to the table.

Is audio visual perception 'amodal' or 'crossmodal'?

the auditory scene; distinct sounds are grouped together in virtue of having the same source, so auditory perception is able to track sound sources over time and through changes; auditory perception is sensitive to whether a sound source maintains its cohesiveness over time – think of the difference between hearing a bottle drop to the floor and bounce, and hearing it break – and, since many sounds are such that their nature is partly determined by the structure – volume, shape, and material construction – of the object that produced them, many sounds are such that they could normally only have been produced by a single, cohesive, object. It would seem, then, that auditory perception can track the kinds of properties that are constitutive of being an object. So it is not implausible to suggest that auditory perception represents the sources of sounds as having properties that are constitutive of being an object. Furthermore, there is some evidence that information about sound sources is bound together into a representation – an 'auditory object' file – that functions in auditory perception in a way that is similar to the way object files function in visual perception.

Extraction of Information of Audio-Visual Contents

The understanding of an audio-visual work, the extraction of its informational content, requires defining the space the agent has to model in his representation. We believe that this space transcends the standard notion of the rectangular frame, the window an agent observes when looking at the representation of the audio-visual product through a playback system. The cognitive experience, and therefore the element to be considered, should not be restricted to the 720x576 pixels of the player or the movie screen. One must consider, at least, the three-dimensional space associated with the situation depicted in the work at a given time (in one frame), and must include all items the agent is able to identify in this three-dimensional cube, including, of course, the experience of sound [Caballero, 2009]. Diverse image-recognition systems oriented towards extracting information from the three-dimensional reconstruction of the space captured in the image have been developed along the same lines (Sminchisescu, 2006).

Improving audio-visual speech recognition using deep neural networks with dynamic stream reliability estimates

We compare the performance of the different models, i.e., the GMM-based system and the DNN-based systems at different training stages, when using MFCC features in Fig. 1. The DNN-based systems outperform the GMM-based system at every SNR, with the largest relative improvements seen in very low SNR conditions. The recognition accuracy further increases when the sMBR criterion is used for training, where a higher number of iterations (i.e., sMBR-5 vs. sMBR-1) yields slight performance improvements on average over all SNRs. For the sake of compactness, we thus limit the ensuing evaluation to the strongest DNN system (DNN sMBR-5).

Separation of Audio Visual Speech Sources: A New Approach Exploiting the Audio Visual Coherence of Speech Stimuli

It appears that the principle of an audio-visual algorithm for speech source separation is theoretically sound and technically viable. Of course, we are far from the end, and a number of problems remain to be solved; we mention three main ones. Firstly, the optimisation procedure used here is very rough, and we are currently studying more powerful gradient-based techniques, which should be very important for speeding up the algorithm, which is presently rather slow. Secondly, the statistical models of the joint audio-visual probability could be based on more sophisticated functions; in particular, the assumption of temporal independence between consecutive frames could be replaced by more general assumptions, possibly involving hidden Markov models. Last but not least, the speech source used here is very simple, with only plosives and vowels uttered by one speaker, and moving to more complex stimuli will considerably increase the difficulty.
