Before the present study, no sign language recognition system had been developed for Nigeria's indigenous sign languages, particularly Yoruba. This research therefore introduces a Yoruba Sign Language Recognition System (YSLRS) using image processing and an Artificial Neural Network (ANN). The proposed system was implemented and tested. 600 images were gathered from 60 different signers using a vision-based method: each signer stood in front of a laptop camera and signed the numbers one to ten with their fingers, three separate times, and the images were stored in a folder. The image dataset was pre-processed through de-noising, segmentation and feature extraction. Thereafter, pattern recognition was performed using a feed-forward back-propagation ANN. The study revealed that the median filter, with a higher PSNR of 47.7 and a lower MSE of 1.11, performed better than the Gaussian filter. Furthermore, the efficiency of the developed system was assessed using mean square error: the best validation performance occurred at 25 epochs with an MSE of 0.004052, implying that the ANN was able to adequately recognize the patterns of the Yoruba signs. An error histogram was also used to assess the system; the training, testing and validation error bars were close to zero error, implying a good fit. The Receiver Operating Characteristic (ROC) curve was used to evaluate the ANN's performance in matching the features of the Yoruba signs, showing that the ANN performed efficiently, with a high true positive rate and a minimal false positive rate. Finally, the YSLRS developed in this study could reduce the victimization suffered by hearing-impaired individuals by bridging the communication gap for Nigerian persons with disabilities (PWD) with hearing impairment.
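The de-noising comparison reported in the abstract (median vs. Gaussian filtering, scored by PSNR and MSE) can be sketched as below. This is a minimal illustration, not the paper's code: the synthetic gradient image, the impulse-noise level, and the filter sizes are all assumptions made for the example.

```python
# Sketch: compare median vs. Gaussian de-noising by MSE and PSNR.
# The test image and noise model here are illustrative assumptions.
import numpy as np
from scipy.ndimage import median_filter, gaussian_filter

def mse(a, b):
    """Mean squared error between two images."""
    return float(np.mean((a.astype(float) - b.astype(float)) ** 2))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB (higher is better)."""
    e = mse(a, b)
    return float("inf") if e == 0 else 10.0 * np.log10(peak ** 2 / e)

rng = np.random.default_rng(0)

# Smooth synthetic "clean" image (stand-in for a hand image).
clean = np.tile(np.linspace(0.0, 255.0, 64), (64, 1))

# Add ~5% salt-and-pepper (impulse) noise.
noisy = clean.copy()
mask = rng.random(clean.shape) < 0.05
noisy[mask] = rng.choice([0.0, 255.0], size=int(mask.sum()))

med = median_filter(noisy, size=3)       # median de-noising
gau = gaussian_filter(noisy, sigma=1.0)  # Gaussian de-noising

print(f"median  : PSNR={psnr(clean, med):.1f} dB, MSE={mse(clean, med):.2f}")
print(f"gaussian: PSNR={psnr(clean, gau):.1f} dB, MSE={mse(clean, gau):.2f}")
```

On impulse noise of this kind, the median filter preserves edges while discarding outlier pixels, which is consistent with the paper's finding that it outperformed the Gaussian filter.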
Published in | International Journal of Information and Communication Sciences (Volume 3, Issue 1) |
DOI | 10.11648/j.ijics.20180301.12 |
Page(s) | 11-18 |
Creative Commons |
This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited. |
Copyright |
Copyright © The Author(s), 2018. Published by Science Publishing Group |
Keywords | Image Processing, Machine Learning Techniques, Recognition System, Yoruba Sign Language
APA Style
Ogunsanwo Gbenga Oyewole, Goga Nicholas, Awodele Oludele, Okolie Samuel. (2018). Bridging Communication Gap Among People with Hearing Impairment: An Application of Image Processing and Artificial Neural Network. International Journal of Information and Communication Sciences, 3(1), 11-18. https://doi.org/10.11648/j.ijics.20180301.12