Project Description: A machine learning model with a text-to-speech extension was built to help people with special needs communicate through hand gestures. The model was trained on eight classes of hand positions collected with a camera; unclear samples were removed during data cleaning. It achieved 90% accuracy on the test set. The pipeline uses simple blocks for image analysis, hand-position detection, classification, caption generation, and speech synthesis: given a hand position, it outputs the corresponding word as both a caption and spoken audio. This project showcases the potential of technology to enhance the communication abilities of people with special needs.
Project Theme: Strengthen the Health Infrastructure
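The caption-and-speech flow described above can be sketched in a few lines. The gesture labels, the 2-D feature space, and the nearest-centroid classifier are all illustrative assumptions, since the original model's architecture and dataset are not specified; the speech step is stubbed out to return the text a TTS engine would speak.

```python
# Minimal sketch of the caption-and-speech pipeline: classify a hand-position
# feature vector into one of eight classes, then emit a caption and the word
# to be spoken. Class names and centroids are hypothetical placeholders.
import math

# Hypothetical labels for the eight gesture classes
GESTURE_CLASSES = ["hello", "yes", "no", "thanks", "help", "stop", "please", "water"]

# Hypothetical per-class centroids in a toy 2-D feature space
CENTROIDS = {name: (math.cos(i), math.sin(i)) for i, name in enumerate(GESTURE_CLASSES)}

def classify(features):
    """Nearest-centroid stand-in for the trained classifier block."""
    return min(CENTROIDS, key=lambda name: math.dist(features, CENTROIDS[name]))

def caption_and_speech(features):
    """Run the classification, captioning, and (stubbed) speech blocks."""
    word = classify(features)
    caption = f"Detected gesture: {word}"
    # A real deployment would hand `word` to a TTS engine here;
    # this sketch just returns the text that would be spoken.
    return caption, word

caption, spoken = caption_and_speech((1.0, 0.0))
print(caption)  # (1, 0) is closest to the "hello" centroid at (cos 0, sin 0)
```

In a real system the feature vector would come from the detection block (e.g. hand keypoints extracted from camera frames), and the final block would synthesize audio instead of returning text.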