A Multimodal Interaction Framework for Blended Learning

Journal Title: EAI Endorsed Transactions on Creative Technologies - Year 2017, Vol 4, Issue 10

Abstract

Humans interact with each other using the five basic senses as input modalities, while sounds, gestures, facial expressions, and the like serve as output modalities. Multimodal interaction also occurs between humans and their surrounding environment, enhanced by further senses such as equilibrioception, the sense of balance. Computer interfaces, which can be regarded as another environment that humans interact with, lack the amalgamation of input and output modalities needed to provide close-to-natural interaction. Multimodal human-computer interaction has sought to provide alternative means of communicating with an application that are more natural than the traditional “windows, icons, menus, pointer” (WIMP) style. Despite the great number of devices in existence, most applications make use of a very limited set of modalities, most notably speech and touch. This paper describes a multimodal framework that enables the deployment of a wide variety of modalities, tailored for use in a blended learning environment, and introduces a unified and effective framework for multimodal interaction called COALS.

Authors and Affiliations

N. Vidakis

  • EP ID EP45867
  • DOI http://dx.doi.org/10.4108/eai.4-9-2017.153057

How To Cite

N. Vidakis (2017). A Multimodal Interaction Framework for Blended Learning. EAI Endorsed Transactions on Creative Technologies, 4(10), -. https://europub.co.uk/articles/-A-45867