Article

Sign Language Recognition using CNN Algorithm with Machine Learning Techniques

Author: M Akilana and M F Akila Lourdes

DOI: 10.5072/FK26H4PV9J.2024.01.012.008

Deaf and hard-of-hearing people use sign language, a visual language, to communicate with one another and with people who do not know sign language. However, accessibility gaps and communication barriers have created a growing demand for technologies that help sign language users and the hearing community communicate. A sign language recognition system with text and audio output aims to bridge this gap by automatically translating sign language gestures into written or spoken words. The process involves several stages, including image preprocessing, feature extraction, gesture detection, and translation into text or speech. The intricacy and variety of sign language gestures is one of the major obstacles to text- and audio-based sign language recognition. As a result, building accurate and reliable recognition systems requires both a large and varied collection of sign language gestures and powerful machine learning methods such as Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks. Recognising sign language and rendering it as text and audio has the potential to significantly improve accessibility and communication for the deaf and hard-of-hearing community, allowing them to interact more freely with the hearing community and participate more fully in society. It also has applications in entertainment, healthcare, and education, among other fields. As the technology develops and improves, it can transform interpersonal communication and close the gap between communities that use different languages.
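To illustrate the CNN-plus-LSTM pipeline named in the abstract, the following is a minimal sketch in PyTorch: a small CNN extracts features from each video frame and an LSTM models the temporal order of the gesture before a final classification layer. The class name `SignGestureNet`, the 64x64 grayscale frame size, the 26-class output, and all layer sizes are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class SignGestureNet(nn.Module):
    """Hypothetical CNN+LSTM gesture classifier (illustrative sketch only)."""
    def __init__(self, num_classes=26, hidden_size=128):
        super().__init__()
        # Per-frame CNN feature extractor; assumes 1-channel 64x64 frames.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),  # -> 32 * 16 * 16 features per frame
        )
        # LSTM models the temporal sequence of per-frame features.
        self.lstm = nn.LSTM(input_size=32 * 16 * 16, hidden_size=hidden_size,
                            batch_first=True)
        self.fc = nn.Linear(hidden_size, num_classes)

    def forward(self, clips):
        # clips: (batch, time, 1, 64, 64) video clips of signed gestures.
        b, t, c, h, w = clips.shape
        frame_feats = self.cnn(clips.view(b * t, c, h, w)).view(b, t, -1)
        _, (h_n, _) = self.lstm(frame_feats)  # last hidden state summarises the clip
        return self.fc(h_n[-1])               # class logits per clip

# Usage: classify a batch of 2 clips, each 8 frames long.
logits = SignGestureNet()(torch.randn(2, 8, 1, 64, 64))
print(logits.shape)  # torch.Size([2, 26])
```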
