Speech emotion recognition in emotional feedback for Human-Robot Interaction

Abstract

Recognizing human emotions is crucial for robots that plan their actions autonomously and interact with people. For most humans, nonverbal cues such as pitch, loudness, spectrum, and speech rate are efficient carriers of emotion. Because the acoustic properties of a spoken voice likely carry crucial information about the speaker's emotional state, a machine might use such properties of sound to recognize emotions. This work evaluated six different kinds of classifiers for predicting six basic universal emotions from non-verbal features of human speech. The classification techniques used information from six audio files extracted from the eNTERFACE05 audio-visual emotion database. The information gain from a decision tree was also used to choose the most significant speech features from a set of acoustic features commonly extracted in emotion analysis. The classifiers were evaluated both with the full proposed feature set and with the features selected by the decision tree. With this feature selection, each of the compared classifiers improved in global accuracy and recall. The best performance was obtained with Support Vector Machine and BayesNet.
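The pipeline the abstract describes — rank acoustic features by decision-tree information gain, keep the most informative ones, then train a classifier on the reduced set — can be sketched as follows. This is a minimal illustration with synthetic data; the feature names, the number of retained features, and the SVM settings are assumptions for the example, not details taken from the paper.

```python
# Sketch: information-gain feature selection via a decision tree, then SVM.
# Feature names and data are synthetic placeholders, not the paper's dataset.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
feature_names = ["pitch_mean", "pitch_std", "loudness", "speech_rate",
                 "spectral_centroid", "mfcc_1", "mfcc_2", "jitter"]
X = rng.normal(size=(120, len(feature_names)))   # stand-in acoustic features
y = rng.integers(0, 6, size=120)                 # six basic emotions as labels

# With criterion="entropy", the tree's importance scores reflect information gain.
tree = DecisionTreeClassifier(criterion="entropy", random_state=0).fit(X, y)
ranking = np.argsort(tree.feature_importances_)[::-1]
top = ranking[:4]                                # keep the 4 most informative
print("selected:", [feature_names[i] for i in top])

# Evaluate an SVM on the reduced feature set, mirroring the paper's comparison.
scores = cross_val_score(SVC(kernel="rbf"), X[:, top], y, cv=5)
print("mean cross-validated accuracy: %.2f" % scores.mean())
```

On real data one would replace the synthetic matrix with acoustic features extracted from the eNTERFACE05 recordings and repeat the evaluation for each of the six classifiers compared in the paper.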

Authors and Affiliations

Javier Rázuri, David Sundgren, Rahim Rahmani, Aron Larsson, Antonio Cardenas, Isis Bonet

Related Articles

A Model for Facial Emotion Inference Based on Planar Dynamic Emotional Surfaces

Emotions have direct influence on the human life and are of great importance in relationships and in the way interactions between individuals develop. Because of this, they are also important for the development of...

The Classification of the Real-Time Interaction-Based Behavior of Online Game Addiction in Children and Early Adolescents in Thailand

This paper aims to study actual behaviors of Thai children and early adolescents with different levels of game addiction while playing online games from an angle of the interaction between a user and computer. Real-time...

Application of distributed lighting control architecture in dementia-friendly smart homes

Dementia is a growing problem in societies with aging populations, not only for patients, but also for family members and for the society in terms of the associated costs of providing health care. Helping patients...

Effect of Driver Scope Awareness in the Lane Changing Maneuvers Using Cellular Automaton Model

This paper investigated the effect of drivers’ visibility and their perception (e.g., to estimate the speed and arrival time of another vehicle) on the lane changing maneuver. The term of scope awareness was used to desc...

Relation Between Chlorophyll-A Concentration and Red Tide in the Intensive Study Area of the Ariake Sea, Japan in Winter Seasons by using MODIS Data

Relation between chlorophyll-a concentration and red tide in the intensive study area of the back of Ariake Sea, Japan in the recent winter seasons is investigated by using MODIS data. Mechanism of red tide appeara...

  • EP ID EP147949
  • DOI 10.14569/IJARAI.2015.040204

How To Cite

Javier Rázuri, David Sundgren, Rahim Rahmani, Aron Larsson, Antonio Cardenas, Isis Bonet (2015). Speech emotion recognition in emotional feedback for Human-Robot Interaction. International Journal of Advanced Research in Artificial Intelligence (IJARAI), 4(2), 20-27. https://europub.co.uk/articles/-A-147949