ABSTRACT: Music Information Retrieval (MIR) is the task of retrieving information from music, and it is one of the fastest-growing areas of the music industry. MIR deals with the problems of querying and retrieving certain types of information from audio files with the help of large data sets. Digital music is widely available in many digital formats due to the explosive growth of information and multimedia technologies, so managing and retrieving music according to the meaning of each song is necessary. A great deal of research has been conducted on music genre classification and music mood detection in recent years. The basic approach of the work presented in this paper is the automatic identification of genre and detection of mood of an underlying audio file by mining different audio features. The bag-of-frames (BoF) representation is a way to represent complex, high-dimensional audio data: the signal is represented by the long-term distribution of its 'local' (frame-based) acoustic features. Early and late temporal pooling of sparse codes is used to perform the classification tasks more efficiently and improve the accuracy of the system.
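The bag-of-frames idea described above can be sketched in a few lines: frame-level features are pooled into a long-term distribution summary. The feature used here (zero-crossing rate) and the frame sizes are illustrative choices, not the features used in the paper:

```python
# Sketch of the bag-of-frames (BoF) idea: frame-level ("local") features
# are pooled into a long-term distribution summary for the whole signal.
# The zero-crossing-rate feature and frame sizes are illustrative only.
import math

def frame_signal(signal, frame_len, hop):
    """Split a 1-D signal into overlapping frames."""
    return [signal[i:i + frame_len]
            for i in range(0, len(signal) - frame_len + 1, hop)]

def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs that change sign."""
    crossings = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0)
    return crossings / (len(frame) - 1)

def bag_of_frames(signal, frame_len=256, hop=128):
    """Summarize the per-frame feature distribution by mean and std."""
    feats = [zero_crossing_rate(f) for f in frame_signal(signal, frame_len, hop)]
    mean = sum(feats) / len(feats)
    var = sum((x - mean) ** 2 for x in feats) / len(feats)
    return {"mean": mean, "std": math.sqrt(var)}

# A low-frequency sinusoid as a stand-in for real audio samples.
toy = [math.sin(2 * math.pi * 0.1 * n) for n in range(2048)]
summary = bag_of_frames(toy)
```

A real BoF system would pool richer frame features (e.g., MFCCs) and often model the distribution with more than two moments, but the pooling structure is the same.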
Purpose: To investigate the effects of music therapy on depressive mood and anxiety in post-stroke patients and to evaluate satisfaction levels of patients and caregivers. Materials and Methods: Eighteen post-stroke patients, within six months of onset and with a Mini-Mental State Examination score over 20, participated in this study. Patients were divided into a music group and a control group. The experimental group participated in the music therapy program for four weeks. Psychological status was evaluated with the Beck Anxiety Inventory (BAI) and Beck Depression Inventory (BDI) before and after music therapy, and satisfaction with music therapy was evaluated by a questionnaire. Results: BAI and BDI scores showed a greater decrease in the music group than in the control group after music therapy, but only the decrease in BDI scores was statistically significant (p=0.048). Music therapy satisfaction among patients and caregivers was affirmative. Conclusion: Music therapy has a positive effect on mood in post-stroke patients and may be beneficial for mood improvement after stroke. These results are encouraging, but further studies are needed in this field.
Music recommendation and retrieval is of interest due to the increasing amount of audio data available to the average consumer. Experimental data on similarity in mood of different songs can be instrumental in defining musical distance measures [1,2] and would enable the definition of prototypical songs (or song features) for various moods. The latter can then be used as so-called mood presets in music recommendation systems. With this in mind, we defined an experiment to collect the relevant data. In view of the mentioned applications, we are interested in the perceived song mood (not the induced mood), annotation per song (not per part of a song), and annotation by average users (as opposed to expert annotators). Furthermore, the test should be executed with a sufficient number of participants as well as a good cross-section of music with clear moods covering the full range and, obviously, a proper set of mood labels (easy to use and discriminative). The data collected in earlier studies on music mood [3-12] only partially meet these requirements.
In this survey, different affective aspects of music related to human emotions and mood are discussed, and music psychology is also studied. First, music mood is discussed in terms of psychological theories. Second, emotional reactions of the human body can be analyzed using parameters such as tempo, skin conductance, electrodermal activity (EDA), and heart rate (HR) signals; a heart rate monitoring system based on a microcontroller offers the advantage of portability over tape-based recording systems. Finally, wearable sensors are considered for detecting mental stress, and music therapy is also considered for dysmenorrhea.
In our present work, we have developed an automatic mood classifier for Hindi music. Hindi is an official language of India, and Hindi songs are one of the popular categories of Indian songs, featuring prominently in Bollywood movies; they make up 72% of music sales in India 1 . We first concentrated on collecting Hindi music data annotated with five mood classes 2 . A computational model was then developed to identify the moods of songs using several high- and low-level audio features. We employed the J48 decision tree classifier and achieved a reasonable accuracy of 51.56% on a data set of 230 songs across five mood clusters.
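A trained J48 tree is, at prediction time, a nested set of threshold tests over feature values. The hand-written stump below is a hypothetical illustration of what such a tree might look like after training; the features (tempo, RMS energy), thresholds, and mood labels are all invented for the sketch and are not the model from the paper:

```python
# Hypothetical hand-rolled decision-tree sketch for mood classification,
# illustrating the kind of nested threshold tests a trained J48 model
# performs. Features, thresholds, and labels are invented for the sketch.
def classify_mood(features):
    """Classify a song into one of five illustrative mood clusters."""
    tempo = features["tempo_bpm"]     # hypothetical high-level feature
    energy = features["rms_energy"]   # hypothetical low-level feature
    if tempo > 120:
        return "excited" if energy > 0.6 else "happy"
    if tempo > 80:
        return "calm" if energy < 0.4 else "romantic"
    return "sad"

# Fast, loud song vs. slow, quiet song.
fast_loud = classify_mood({"tempo_bpm": 140, "rms_energy": 0.7})
slow_quiet = classify_mood({"tempo_bpm": 60, "rms_energy": 0.2})
```

In practice the tree structure and thresholds would be learned from annotated data (as in Weka's J48) rather than written by hand.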
Music is an art whose field of performance is composition and sound-making to create beauty in form and expression. Today, more than ever, technology has brought music to mankind, so that its role can be seen every day in social and emotional life. 1 Music captures attention, raises morale, feelings, and arousal, and changes mood; by promoting mobility and activity, it also increases work efficiency. 2,3 One of the goals of ergonomics is to improve individual performance on a certain task by adapting it to the person's physical and mental characteristics. Music is a positive intervention used to improve performance, and today some industries use music to enhance the performance of individuals and increase work efficiency. 4 The effects of music on physical activity are investigated in three areas: ergogenic, psychological, and physiological. Ergogenic effects of music
The purpose of this study was to explore whether group music therapy can reduce college students' depression and improve their mental health. We found no significant difference in the depression scores of the participants in the two groups before the intervention. After the music therapy, however, the depression scores of the experimental group participants decreased significantly, while there was no significant difference between the pre-test and post-test for the control group. This finding is in line with previous studies, which found that group music therapy has significant effects on depressive mood [16-18]. It indicates that music therapy is effective in ameliorating depressive mood among college students, and our first hypothesis is supported.
In the past few years, research in Music Information Retrieval has been very active and has produced automatic classification methods to deal with the amount of digital music available. A relatively recent problem is the automatic mood classification of music, in which a system takes the waveform of a musical piece and outputs text labels describing the mood of the music (e.g., happy, sad). It has already been demonstrated that audio-based techniques can achieve satisfying results to a certain extent: using a few simple mood categories and carefully checking for reliable agreement between people, automatic classification based on audio features gives promising results. Psychological studies, initially at the Music Technology Group, have shown that part of the semantic information of songs resides exclusively in the lyrics. This means that lyrics can contain relevant emotional information that is not included in the audio.
In this paper, instead of creating a novel database and using one of the previous approaches to label it with emotion labels, we decided to apply emotion/mood labels to an already existing database, namely the Latin Music Database (LMD). The LMD was originally developed for the task of automatic music genre classification and contains 3136 songs from ten different Latin music genres. One of the main differences between the LMD and other databases is that the genre labels were assigned to each song in the database by two teachers of ballroom and Brazilian cultural dances with over ten years of experience. The main contribution of this paper is to present the Latin Music Mood Database, an extension of the LMD in which each song has one mood label associated with it. The process of assigning mood labels is presented in Section 2. In Section 3 we present a data analysis of this novel database. In Section 4 we present the related work, and in Section 5 we present the conclusions of this work.
Individual “mood” has recently received growing consideration as a useful basis for organizing and accessing music. Stress, which changes a person's attitude, is a major physical and psychological problem of individuals today. Much research based on the study of mood has been carried out, particularly in the U.S.A., Canada, Europe, and parts of Asia. While these studies are relevant and help address the problem of mood change, researchers have not yet examined this important aspect in one of the 25 rapid-growth markets in the world: Malaysia. This study used music genre as an influence mechanism to predict individual mood, and further identified which classified music genres predict personal mood. The study adapts a model of Russell and Thayer to categorize the selected attitudes. A quantitative survey method was used, with a questionnaire designed as the instrument for data collection. Data were collected from 245 respondents among Universiti Utara Malaysia (UUM) students and analyzed using SPSS version 20. Results are presented in words, bar charts, and tables. The study found that the use of music to predict individual mood is positively related to the aim and problem of the investigation: results in Part A indicate that music can be used to influence a particular mood, while findings in Part B show that the classified music genres were helpful in predicting individual mood.
Abstract - Listening to music in spare or free time is one of the best choices for most people. Music and mood are closely linked: music shares a very special relation with human emotions and feelings, and people tend to listen to songs that match their mood. This paper proposes a mood-based classification system for Hindi songs using the MFCC values of audio clips. MFCC values mainly capture the spectral power of a song; based on these values, a classification algorithm is applied to determine which mood category (e.g., happy, sad, romantic) a song from the dataset belongs to.
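The defining step of MFCC extraction, a type-II DCT applied to log mel-band energies, can be sketched as follows; the filterbank energies below are made-up numbers standing in for one real analysis frame:

```python
# Sketch of the defining MFCC step: a DCT-II over log mel-band energies
# decorrelates the bands into cepstral coefficients. The filterbank
# energies below are made-up numbers standing in for a real audio frame.
import math

def dct2(x, n_coeffs):
    """Type-II discrete cosine transform (unnormalized)."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi * k * (n + 0.5) / N) for n in range(N))
            for k in range(n_coeffs)]

def mfcc_from_energies(mel_energies, n_coeffs=13):
    """Log-compress mel-band energies, then DCT to cepstral coefficients."""
    log_e = [math.log(e + 1e-10) for e in mel_energies]  # avoid log(0)
    return dct2(log_e, n_coeffs)

# 26 hypothetical mel filterbank energies for a single frame.
energies = [1.0 + 0.5 * math.sin(i / 3.0) for i in range(26)]
coeffs = mfcc_from_energies(energies)
```

The earlier stages (windowing, FFT, mel filterbank) are omitted for brevity; a full pipeline would typically use an audio library rather than hand-rolled transforms.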
This study proposes a music-aided framework for affective interaction of service robots with humans. The framework consists of three systems, for perception, memory, and expression, modeled on the human brain mechanism. We propose a novel approach to identify human emotions in the perception system. Conventional approaches use speech and facial expressions as the representative bimodal indicators for emotion recognition; our approach additionally uses the mood of music as a supplementary indicator to determine emotions more correctly along with speech and facial expressions. For multimodal emotion recognition, we propose an effective decision criterion using records of bimodal recognition results relevant to the musical mood. The memory and expression systems also utilize musical data to provide natural and affective reactions to human emotions. To evaluate our approach, we simulated the proposed human-robot interaction with a service robot, iRobiQ. Our perception system exhibited superior performance over the conventional approach, and most human participants reacted favorably toward the music-aided affective interaction.
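The abstract does not give the decision criterion in detail, but the general idea of using musical mood as a supplementary indicator alongside bimodal scores can be sketched as a simple score fusion. The label set, weights, and fusion rule below are assumptions for illustration only, not the paper's method:

```python
# Hypothetical sketch of music-aided bimodal fusion: speech and face
# emotion scores are averaged, and the mood of the currently playing
# music nudges the decision. Labels, weights, and the fusion rule are
# assumptions for illustration, not the criterion from the paper.
def fuse_emotion(speech_scores, face_scores, music_mood, music_weight=0.2):
    """Average bimodal scores, then boost the emotion matching the music mood."""
    fused = {label: 0.5 * speech_scores[label] + 0.5 * face_scores[label]
             for label in speech_scores}
    if music_mood in fused:
        fused[music_mood] += music_weight
    return max(fused, key=fused.get)

speech = {"happy": 0.4, "sad": 0.35, "neutral": 0.25}
face = {"happy": 0.3, "sad": 0.45, "neutral": 0.25}
with_music = fuse_emotion(speech, face, music_mood="happy")
without_match = fuse_emotion(speech, face, music_mood="calm")
```

With ambiguous bimodal evidence, the musical-mood prior tips the decision; when the music mood matches no candidate label, the fusion falls back to the plain bimodal average.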
Dealing with “every music that comes in”, we had proposed using the (rounded) median to provide a label even in the case of complete rater disagreement. This better fits the paradigm of a dimensional approach, as introducing a garbage class would disrupt the ordinal structure. Alternatively, we had reduced the test instances by removing those that lack such agreement. As expected, more prototypical instances led to higher performance: the overall accuracies and mean recall rates were around 60% when processing all instances, and around 70% for prototypical representatives, for the two three-class tasks of valence and arousal determination. In these constellations, confusions were observed mainly with neighbouring classes, which speaks for practicability. Yet, clearly, future efforts will be needed before systems can fully automatically judge musical mood no matter what music is provided. In addition, high variances between the labellings of the four raters were observed, which also led to significantly differing performances when the system was trained per rater. This shows that mood perception is indeed rather subjective, and that it will be challenging at different levels to follow each user's perception, should a user be willing to train or personalize such a system.
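The rounded-median aggregation described above can be written down directly. The ordinal rating scale and the rater values below are illustrative; note that Python's built-in round uses ties-to-even:

```python
# Rounded-median label aggregation for ordinal mood ratings, as used to
# keep a label even under complete rater disagreement. The -1/0/+1
# valence scale and the rater values are illustrative. Note that
# Python's round() breaks .5 ties toward the even integer.
def rounded_median(ratings):
    """Median of ordinal ratings; for an even number of raters the
    midpoint of the two middle values is rounded, so a label always
    exists and no 'garbage class' is needed."""
    s = sorted(ratings)
    n = len(s)
    if n % 2 == 1:
        return s[n // 2]
    return round((s[n // 2 - 1] + s[n // 2]) / 2)

# Four raters in complete disagreement still yield a usable ordinal label.
label = rounded_median([-1, 0, 1, 1])
```

This keeps the ordinal structure intact, at the cost of assigning a possibly arbitrary middle label to instances the raters genuinely disagree on.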
ABSTRACT: The goal of this work is to build an emotion-detection system that can analyze basic facial expressions of humans. A method is presented for mood detection based on facial emotions: the proposed method uses the human face to identify the person's mood and, based on the result, plays an audio file related to that emotion. The system first takes a human face as input; face detection and eye detection are then carried out, and feature extraction is used to recognize the face. Feature points are located on the lips, mouth, eyes, and eyebrows. If the input face matches a face in the emotion-labeled dataset, the person's exact emotion is identified, the emotion-related audio file is played, and news data is fetched based on user preferences using an API. Recognition under different environmental conditions can be achieved by training on a limited number of characteristic faces. The proposed approach is simple, efficient, and accurate, and the system plays an important role in the recognition and detection field.
Wu and Jeng use a complex mixture of various features: Rhythmic Content, Pitch Content, Power Spectrum Centroid, Inter-channel Cross Correlation, Tonality, Spectral Contrast, and Daubechies Wavelet Coefficient Histograms. For the classification step in the music domain, Support Vector Machines (SVM) and Gaussian Mixture Models (GMM) are typically applied; Liu et al. utilize a nearest-mean classifier. Comparing the classification results of different algorithms is difficult because every publication uses an individual test set or ground truth. For example, the algorithm of Wu and Jeng [17] reaches an average classification rate of 74.35% for 8 different moods, with the additional difficulty that the results of the system and the ground truth contain mood histograms, which are compared by a quadratic cross-similarity. Jadon et al. [21,22] have extracted time-domain, pitch, frequency-domain, sub-band energy, and MFCC-based audio features.
Despite the existence of many well-performing music classification methods, it is still unclear which music representation (i.e., which audio features) and which machine learning algorithm are appropriate for a specific music classification task. A possible explanation for this open question is that the classes (e.g., genre, mood, or other semantic classes) in music classification problems are related to and built on common unknown latent variables, which differ in each problem. For instance, many different songs, although they share instrumentation (i.e., have similar timbral characteristics), convey different emotions and belong to different genres. Furthermore, cover songs, which have the same harmonic content as the originals, may differ in instrumentation and possibly evoke a different mood, so they are classified into different genres. The challenge, therefore, is to reveal the common latent features based on given music representations, such as timbral or auditory features, and to simultaneously learn the models that are appropriate for each specific classification task.
One of the main reasons for developing such a taxonomy was to collect similar songs and cluster them into a single mood class. Preliminary observations showed that the audio features of the subclasses vary little with respect to their corresponding main (coarse) class. These preliminary annotation observations are related to the psychological factors that influence the annotation process when a piece of music is annotated after listening. For example, a happy and a delighted song both have high valence, whereas an aroused and an excited song both have high arousal. The final mood taxonomy used in our experiment is shown in Table 1.
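The valence-arousal observation above can be illustrated by placing subclasses in the valence-arousal plane and grouping them into a coarse class by their dominant dimension; the coordinates below are hypothetical:

```python
# Illustration of the taxonomy idea above: mood subclasses are placed in
# the valence-arousal plane and grouped into a coarse class by their
# dominant affect dimension. The (valence, arousal) coordinates are
# hypothetical, not annotation data from the paper.
SUBCLASS_VA = {
    "happy":     (0.8, 0.3),
    "delighted": (0.7, 0.5),
    "excited":   (0.5, 0.9),
    "aroused":   (0.2, 0.8),
}

def dominant_dimension(valence, arousal):
    """Group a subclass by whichever affect dimension dominates."""
    return "high-valence" if valence >= arousal else "high-arousal"

groups = {mood: dominant_dimension(v, a)
          for mood, (v, a) in SUBCLASS_VA.items()}
```

Under these made-up coordinates, happy and delighted fall into the high-valence coarse class while aroused and excited fall into the high-arousal one, mirroring the grouping described in the text.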
ABSTRACT: This paper highlights methods for two tasks of Music Information Retrieval (MIR): genre classification and mood estimation. Genres are the categories into which music is generally classified, and understanding genre categorization is important for an efficient music retrieval system; techniques for genre classification automate music retrieval systems for their users. On the other hand, humans generally categorize music in terms of its emotional associations, so this paper also reviews methods that have been proposed for music emotion recognition. Both MIR tasks need feature extraction. Here, we focus on Gaussian Processes (GP) and their models, Gaussian Process Classification and Gaussian Process Regression, for the two tasks respectively.
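As a concrete illustration of Gaussian Process Regression, a minimal sketch with an RBF kernel and two training points fits in a few lines; the kernel, hyperparameters, and data are illustrative, and a real MIR system would use a GP library and many training songs:

```python
# Minimal Gaussian Process regression sketch: a squared-exponential
# (RBF) kernel, two training points, and the analytic 2x2 inverse for
# the posterior mean. Hyperparameters and data are illustrative only.
import math

def rbf(x1, x2, length=1.0):
    """Squared-exponential covariance between two scalar inputs."""
    return math.exp(-((x1 - x2) ** 2) / (2 * length ** 2))

def gp_predict(x_train, y_train, x_star, noise=1e-6):
    """Posterior mean at x_star for exactly two training points."""
    a = rbf(x_train[0], x_train[0]) + noise
    b = rbf(x_train[0], x_train[1])
    d = rbf(x_train[1], x_train[1]) + noise
    det = a * d - b * b
    # alpha = K^{-1} y via the analytic 2x2 matrix inverse
    alpha0 = (d * y_train[0] - b * y_train[1]) / det
    alpha1 = (-b * y_train[0] + a * y_train[1]) / det
    return rbf(x_star, x_train[0]) * alpha0 + rbf(x_star, x_train[1]) * alpha1

# With near-zero noise, predicting at a training input recovers its target.
mean = gp_predict([0.0, 2.0], [1.0, -1.0], x_star=0.0)
```

In mood estimation, x would be a vector of audio features and y a continuous mood dimension such as arousal; GP classification replaces the Gaussian likelihood with a discrete one and requires approximate inference.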
membership, decrease anxiety, improve mood, and induce strong physical reactions such as thrills and chills. My dissertation research looks at closing this gap by investigating how music can offer more to its consumer than is currently understood. Using a mixed-method approach, I first explore the phenomenon of experiencing a favourite song. Following that, I experimentally investigate: 1) how and whether different modes of music can induce an emotive, cognitive, and imagery-filled experience (auditory transportation), 2) whether this transportation experience results in differences across songs that have happy versus sad personal connotations, and 3) whether manipulating varying levels of auditory transportation can in turn influence other consumer-related downstream behaviors. The contribution this research stands to make includes a theoretical one: I introduce a theory (transportation) previously limited to the visual domain into the auditory domain, while also systematically investigating whether it can predict psychological changes, and whether these changes in turn influence marketplace interactions. Practically, I involve participants in choosing the music stimuli for experimentation, thereby not only increasing validity but also addressing an important gap in the study of music consumption as a form of experiential consumption.
Music has a vital role in the world of entertainment. Depending on the listener's interest, music can be listened to in many day-to-day situations, such as doing sports, relaxing, studying, or travelling. The features and structure of music are used to select appropriate music according to the emotional interest of its listeners, and the relationship between specific musical structures and emotional responses is a challenging issue for researchers. Automatic emotion or mood detection emerged early on in association with various fields such as music information retrieval and the psychology of music and emotion, and more recently affective computing, which evaluates the emotion or mood of music based on the music structure heard. A great deal of research has been carried out on understanding the vital role of various music features and music structure in inducing emotion.