Enhancing Sign Language Understanding through Machine Learning at the Sentence Level

Journal Title: International Journal of Experimental Research and Review - Year 2024, Vol 41, Issue 5

Abstract

Sign language is a visual language based on nonverbal communication, including hand and body gestures. It is the primary means of communication for deaf and hard-of-hearing people around the globe. A system that translates sign language into sentences in real time is useful to both hearing and deaf people and helps those who are hard of hearing communicate with others. This work develops a sentence-level sign language detection system built on a custom dataset and a Random Forest model, with tools such as MediaPipe and TensorFlow supporting gesture detection. Through continuous detection of gestures, the system generates a list of corresponding labels, which are then used to construct sentences automatically. The system integrates with ChatGPT, allowing sentences to be generated directly from the detected gestures. The custom dataset ensures that the model can accurately interpret a wide range of sign language gestures. By merging machine learning with large language models, our method achieves an accuracy of 80% and helps close the communication gap between people who use sign language and those who do not.
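To make the described pipeline concrete, the following is a minimal Python sketch of how such a system could be assembled: MediaPipe extracts hand landmarks from each video frame, a Random Forest classifies the gesture, and the stream of predicted labels is collected for later sentence construction. This is not the authors' code; the helper functions (extract_landmarks, detect_labels), the single-hand assumption, and the training variables X_train and y_train are illustrative assumptions.

    # Minimal sketch of the gesture-detection pipeline (assumptions noted above).
    import cv2
    import mediapipe as mp
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    mp_hands = mp.solutions.hands

    def extract_landmarks(frame_bgr, hands):
        # Convert the frame to RGB for MediaPipe and return a flat (x, y) vector
        # for the first detected hand, or None if no hand is found.
        rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
        result = hands.process(rgb)
        if not result.multi_hand_landmarks:
            return None
        lm = result.multi_hand_landmarks[0].landmark
        return np.array([[p.x, p.y] for p in lm]).flatten()  # 21 landmarks -> 42 features

    # Training on the custom gesture dataset (X_train, y_train assumed to exist):
    # clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)

    def detect_labels(video_source, clf):
        # Continuously detect gestures in a video stream and collect predicted labels.
        labels = []
        cap = cv2.VideoCapture(video_source)
        with mp_hands.Hands(static_image_mode=False, max_num_hands=1) as hands:
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                features = extract_landmarks(frame, hands)
                if features is not None:
                    labels.append(clf.predict([features])[0])
        cap.release()
        return labels  # e.g. ["I", "go", "school"], later expanded into a sentence

The collected label list would then be passed to a language model such as ChatGPT with a prompt asking it to compose a grammatical sentence from the detected gesture labels, mirroring the ChatGPT integration described in the abstract.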

Authors and Affiliations

Ch Sekhar, Juthuka Aruna Devi, Mirtipati Satish Kumar, Kinthali Swathi, Pala Pooja Ratnam, Marada Srinivasa Rao

  • EP ID EP741625
  • DOI 10.52756/ijerr.2024.v41spl.002

How To Cite

Ch Sekhar, Juthuka Aruna Devi, Mirtipati Satish Kumar, Kinthali Swathi, Pala Pooja Ratnam, Marada Srinivasa Rao (2024). Enhancing Sign Language Understanding through Machine Learning at the Sentence Level. International Journal of Experimental Research and Review, 41(5). https://europub.co.uk/articles/-A-741625