Mood Extraction Using Facial Features to Improve Learning Curves of Students in E-Learning Systems

Abstract

Students’ interest and involvement during class lectures are imperative for grasping concepts and significantly improve academic performance. Direct supervision by instructors is the main reason students remain attentive in class, yet a considerable percentage of students lose concentration even under direct supervision. In an e-learning environment this problem is aggravated by the absence of any human supervision, which calls for an approach to assess and identify lapses in a student’s attention during an e-learning session. This study was carried out to improve student involvement in e-learning platforms by using facial features to extract mood patterns. Analyzing moods, based on a student’s emotional states during an online lecture, can provide results that can be readily used to improve the efficacy of content delivery in an e-learning platform. A survey was carried out among instructors involved in e-learning to identify the facial features that most probably represent a student’s facial expressions or mood patterns. A neural network approach was used to train the system on facial feature sets to predict specific facial expressions. Moreover, a data-association-based algorithm is also proposed for extracting information on emotional states by correlating multiple sets of facial features. This framework showed promising results in stimulating student interest by varying the content being delivered. Different combinations of inter-related facial expressions over specific time frames were used to estimate mood patterns and, subsequently, the level of involvement of a student in an e-learning environment. The results achieved during the course of the research showed that a student’s mood patterns correlate well with his or her interest or involvement during online lectures and can be used to vary the content to improve student involvement in the e-learning system. More facial expressions and mood categories can be included to diversify the application of the proposed method.
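To make the described pipeline concrete, the following is a minimal sketch in Python (using scikit-learn and synthetic data) of one possible reading of the abstract: a neural network maps facial-feature vectors to expression labels, per-frame predictions over a time window are aggregated into a mood estimate, and that estimate drives a content-adaptation decision. The expression label set, feature dimensionality, window size, and adaptation rule are illustrative assumptions, not the paper's actual configuration or code.

# Hypothetical sketch of the pipeline outlined in the abstract:
# (1) neural network predicts a facial expression from a facial-feature vector,
# (2) predictions over a time window are aggregated into a mood estimate,
# (3) the mood estimate is used to decide whether to vary the lecture content.
# All names, labels, and thresholds below are illustrative placeholders.

from collections import Counter
import numpy as np
from sklearn.neural_network import MLPClassifier

EXPRESSIONS = ["neutral", "happy", "confused", "bored"]  # assumed label set

# Synthetic stand-in for extracted facial-feature vectors
# (e.g., normalized distances between facial landmarks).
rng = np.random.default_rng(0)
X_train = rng.normal(size=(400, 10))              # 400 samples, 10 features each
y_train = rng.integers(0, len(EXPRESSIONS), 400)  # random expression labels

# Small feedforward network for expression prediction.
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X_train, y_train)

def mood_for_window(feature_frames: np.ndarray) -> str:
    """Aggregate per-frame expression predictions over a time window
    into a single mood label by majority vote."""
    predictions = clf.predict(feature_frames)
    most_common = Counter(predictions).most_common(1)[0][0]
    return EXPRESSIONS[most_common]

def adapt_content(mood: str) -> str:
    """Toy content-adaptation rule keyed on the estimated mood."""
    if mood in ("bored", "confused"):
        return "switch to more interactive content (quiz, video, example)"
    return "continue with current lecture material"

# Example: estimate the mood for a 30-frame window of (synthetic) features.
window = rng.normal(size=(30, 10))
mood = mood_for_window(window)
print(f"estimated mood: {mood} -> action: {adapt_content(mood)}")

In practice the feature vectors would come from a face-landmark extractor and the classifier would be trained on labeled expression data; the majority-vote aggregation is only one simple way to correlate multiple feature sets over time, in the spirit of the data-association step the abstract describes.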

Authors and Affiliations

Abdulkareem Al-Alwani

Keywords


Download PDF file
  • EP ID EP397107
  • DOI 10.14569/IJACSA.2016.071157

How To Cite

Abdulkareem Al-Alwani (2016). Mood Extraction Using Facial Features to Improve Learning Curves of Students in E-Learning Systems. International Journal of Advanced Computer Science and Applications, 7(11), 444-453. https://europub.co.uk/articles/-A-397107