Enhancing Sign Language Understanding through Machine Learning at the Sentence Level

Journal Title: International Journal of Experimental Research and Review - Year 2024, Vol 41, Issue 5

Abstract

Sign language is a visual language based on nonverbal communication, including hand and body gestures. It is the primary means of communication for deaf and hard-of-hearing people around the world. A system that translates sign language into sentences in real time is useful for both hearing and deaf people, helping those who are hard of hearing communicate with others. This work develops a sentence-level sign language detection system using a custom dataset and a Random Forest model. Leveraging tools such as MediaPipe and TensorFlow, we perform gesture detection. Through continuous detection of gestures, we generate a list of corresponding labels, which are then used to construct sentences automatically. The system integrates with ChatGPT, allowing it to generate sentences directly from the detected gestures. Our custom dataset ensures that the model can accurately interpret a wide range of sign language gestures. By combining machine learning with large language models, our method helps close the communication gap between sign language users and others, achieving an accuracy of 80%.
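
The pipeline described above can be illustrated with a minimal sketch: MediaPipe extracts hand landmarks from webcam frames, a Random Forest classifies each landmark vector into a gesture label, consecutive labels are accumulated into a word list, and that list is handed to a language model to form a sentence. The gesture vocabulary, dataset loader, and prompt shown here are hypothetical placeholders, not the authors' exact implementation.

```python
# Minimal sketch of the abstract's pipeline, assuming MediaPipe hand landmarks
# are flattened into feature vectors and classified with scikit-learn's
# RandomForestClassifier. Labels, dataset loading and the prompt are illustrative.
import cv2
import mediapipe as mp
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical gesture vocabulary for the custom dataset.
GESTURE_LABELS = ["I", "want", "water", "thank_you", "help"]

def extract_landmarks(frame, hands):
    """Return a flat (x, y) landmark vector for the first detected hand, or None."""
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if not results.multi_hand_landmarks:
        return None
    hand = results.multi_hand_landmarks[0]
    return np.array([[lm.x, lm.y] for lm in hand.landmark]).flatten()

def detect_gestures(model, num_frames=200):
    """Continuously classify webcam frames and accumulate non-repeating labels."""
    hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.7)
    cap = cv2.VideoCapture(0)
    detected = []
    for _ in range(num_frames):
        ok, frame = cap.read()
        if not ok:
            break
        features = extract_landmarks(frame, hands)
        if features is None:
            continue
        label = GESTURE_LABELS[int(model.predict([features])[0])]
        if not detected or detected[-1] != label:  # skip repeated detections
            detected.append(label)
    cap.release()
    return detected

# Training on a pre-extracted landmark dataset (X: landmark vectors, y: label indices).
# X, y = load_custom_dataset()            # assumed helper, not shown here
# clf = RandomForestClassifier(n_estimators=100).fit(X, y)
# words = detect_gestures(clf)
# prompt = "Form a grammatical sentence from these signed words: " + " ".join(words)
# The prompt can then be sent to the ChatGPT API to produce the final sentence.
```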

Authors and Affiliations

Ch Sekhar, Juthuka Aruna Devi, Mirtipati Satish Kumar, Kinthali Swathi, Pala Pooja Ratnam, Marada Srinivasa Rao


DOI: 10.52756/ijerr.2024.v41spl.002

How To Cite

Ch Sekhar, Juthuka Aruna Devi, Mirtipati Satish Kumar, Kinthali Swathi, Pala Pooja Ratnam, Marada Srinivasa Rao (2024). Enhancing Sign Language Understanding through Machine Learning at the Sentence Level. International Journal of Experimental Research and Review, 41(5), -. https://europub.co.uk/articles/-A-741625