Creating a Multimodal Translation Tool and Testing Machine Translation Integration Using Touch and Voice

Journal Title: Informatics - Year 2019, Vol 6, Issue 1

Abstract

Commercial software tools for translation have, until now, been based on the traditional input modes of keyboard and mouse, latterly with a small amount of speech recognition input becoming popular. In order to test whether a greater variety of input modes might aid translation from scratch, translation using translation memories, or machine translation post-editing, we developed a web-based translation editing interface that permits multimodal input via touch-enabled screens and speech recognition in addition to keyboard and mouse. The tool also conforms to web accessibility standards. This article describes the tool and its development process over several iterations. Between these iterations we carried out two usability studies, also reported here. Findings were promising, albeit somewhat inconclusive. Participants liked the tool and the speech recognition functionality. Feedback on the touchscreen was mixed, and we consider that further research may be required to incorporate touch into a translation interface in a usable way.
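
The abstract does not detail how the browser-side speech and touch input are captured, so the sketch below (in TypeScript) shows only one plausible way a web editor could combine the standard Web Speech API with touch events. The element ID, the .mt-suggestion class, and the insertAtCursor helper are hypothetical illustrations and are not taken from the tool described above.

// Hypothetical sketch: wiring browser speech recognition and touch input
// into a web-based translation editor. All names here are illustrative.
const targetSegment = document.getElementById('target-segment') as HTMLTextAreaElement;

// Insert dictated text at the caret position of the target segment.
function insertAtCursor(field: HTMLTextAreaElement, text: string): void {
  const start = field.selectionStart ?? field.value.length;
  const end = field.selectionEnd ?? field.value.length;
  field.value = field.value.slice(0, start) + text + field.value.slice(end);
  field.selectionStart = field.selectionEnd = start + text.length;
}

// Speech input via the Web Speech API (vendor-prefixed in some browsers).
const SpeechRecognitionImpl =
  (window as any).SpeechRecognition || (window as any).webkitSpeechRecognition;

if (SpeechRecognitionImpl) {
  const recognition = new SpeechRecognitionImpl();
  recognition.lang = 'en-US';
  recognition.continuous = true;
  recognition.interimResults = false;

  // Append each finalised utterance to the target segment being edited.
  recognition.onresult = (event: any) => {
    const result = event.results[event.results.length - 1];
    if (result.isFinal) {
      insertAtCursor(targetSegment, result[0].transcript);
    }
  };

  recognition.start();
}

// Touch input: a double tap on a machine translation suggestion copies it
// into the target segment (one possible gesture, not necessarily the authors').
document.querySelectorAll<HTMLElement>('.mt-suggestion').forEach((el) => {
  let lastTap = 0;
  el.addEventListener('touchend', () => {
    const now = Date.now();
    if (now - lastTap < 300) {
      targetSegment.value = el.textContent ?? '';
    }
    lastTap = now;
  });
});

In practice an editor like this would also need fallbacks for browsers without speech recognition support and for keyboard-only users, which is where the web accessibility conformance mentioned in the abstract becomes relevant.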

Authors and Affiliations

Carlos S. C. Teixeira, Joss Moorkens, Daniel Turner, Joris Vreeke and Andy Way

  • EP ID EP44163
  • DOI https://doi.org/10.3390/informatics6010013

How To Cite

Carlos S. C. Teixeira, Joss Moorkens, Daniel Turner, Joris Vreeke and Andy Way (2019). Creating a Multimodal Translation Tool and Testing Machine Translation Integration Using Touch and Voice. Informatics, 6(1), -. https://europub.co.uk/articles/-A-44163