Rosi, A. and Remya Rose, S. and Murugan, C. Arul and Balamurugan, E. and Priya, M. Sangeetha and Lalitha, K. S. (2024) Automated Gesture Recognition using Deep Learning Model for Visually Challenged People. In: UNSPECIFIED.
Full text not available from this repository.

Abstract
Individuals with visual impairments face challenges in tasks involving their surroundings, social interactions, and technology, and in remaining self-reliant and secure in their everyday activities. With reliable recognition, blind users can accurately perceive and respond to the emotions of those around them; such an application requires the integration of face detection and facial expression detection. Assistive technologies are now far more sophisticated than in the past: the communication of a deaf and visually challenged individual can be identified by recording their speech and comparing it against existing datasets, thereby determining their intentions. This research presents a system for recognizing hand gestures and faces from captured images. The hand gesture method detects skin color and convexity defects of the hand contour, while the face recognition system uses Haar Cascade Classifiers for detection and the LBPH recognizer for identification and authentication. The system is implemented with OpenCV, is fully automated, and runs on an artificial intelligence server. The study achieved an accuracy rate of 96.3% in identifying hand gestures and facial features. © 2024 Elsevier B.V., All rights reserved.
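The abstract names two standard OpenCV techniques: skin-color segmentation with convexity-defect analysis for gestures, and Haar Cascade detection with the LBPH recognizer for faces. The Python sketches below illustrate these general techniques, not the authors' exact implementation; the file names, HSV thresholds, and depth cutoff are hypothetical placeholders.

```python
import cv2
import numpy as np

# Hypothetical input; any BGR image of a hand against a plain background works.
frame = cv2.imread("hand.jpg")

# Segment skin tones in HSV space (threshold values are illustrative only).
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, np.array([0, 30, 60]), np.array([20, 150, 255]))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

# Take the largest skin-colored contour as the hand and count convexity
# defects; each deep defect roughly marks a gap between extended fingers.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    hand = max(contours, key=cv2.contourArea)
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    gaps = 0
    if defects is not None:
        for i in range(defects.shape[0]):
            start, end, farthest, depth = defects[i, 0]
            if depth > 10000:  # depth is in 1/256-pixel units; cutoff is a heuristic
                gaps += 1
    print("estimated extended fingers:", gaps + 1 if gaps else 0)
```

A similar sketch for the face pipeline, assuming a pre-trained LBPH model saved to a hypothetical `lbph_model.yml` (the LBPH recognizer requires the opencv-contrib-python package):

```python
import cv2

# Detect faces with a Haar Cascade shipped with OpenCV, then identify
# each face crop with the LBPH recognizer.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.read("lbph_model.yml")  # hypothetical pre-trained model file

gray = cv2.cvtColor(cv2.imread("person.jpg"), cv2.COLOR_BGR2GRAY)
for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
    # predict() returns a label and a distance; lower distance = closer match.
    label, distance = recognizer.predict(gray[y:y + h, x:x + w])
    print(f"face at ({x},{y}): label={label}, distance={distance:.1f}")
```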
| Item Type: | Conference or Workshop Item (Paper) |
|---|---|
| Subjects: | Computer Science > Computer Networks and Communications |
| Divisions: | Homoeopathy > Vinayaka Mission's Homoeopathic Medical College & Hospital, Salem > Psychiatry |
| Depositing User: | Unnamed user with email techsupport@mosys.org |
| Last Modified: | 27 Nov 2025 06:54 |
| URI: | https://vmuir.mosys.org/id/eprint/1891 |