[1] Global Data on Visual Impairments 2010. Available online: http://www.who.int/blindness/GLOBALDATAFINALforweb.pdf (accessed on 23 April 2017).
[2] A. Nada, M. Fakhr and A. Seddik, Assistive Infrared Sensor Based Smart Stick for Blind People, in Proceedings of the IEEE Technically Sponsored Science and Information Conference, London, UK, (2015).
[3] A. Nada, S. Mashaly, M. Fakhr and A. Seddik, Effective Fast Response Smart Stick for Blind People, in Proceedings of the Second International Conference on Advances in Bio-Informatics and Environmental Engineering (ICABEE), April, (2015).
[4] P. B. L. Meijer, A Modular Synthetic Vision and Navigation System for the Totally Blind, World Congress Proposals, (2005).
[5] N. Bourbakis and D. Kavraki, Intelligent Assistants for handicapped people's independence: case study, IEEE International Joint Symposia on Intelligence and Systems, Rockville, MD, (1996) 337–344.
[6] N. Bourbakis and P. Kakumanu, Skin-based Face Detection-Extraction and Recognition of Facial Expressions, in Applied Pattern Recognition, 91 (2008).
[7] D. Dakopoulos and N. Bourbakis, Preserving Visual Information in Low Resolution Images During Navigation of Blind, Proceedings of the 1st International Conference on Pervasive Technologies Related to Assistive Environments, Athens, Greece, July, (2008).
[8] N. Bourbakis, Sensing surrounding 3-D space for navigation of the blind - A prototype system featuring vibration arrays and data fusion provides a near real-time feedback, IEEE Engineering in Medicine and Biology Magazine, 27 (2008) 49–55.
[9] V. Santiago Praderas, N. Ortigosa, L. Dunai and G. Peris Fajarnes, Cognitive Aid System for Blind People (CASBliP), Proceedings of XXI Ingehraf-XVII ADM Congress, p. 31, June, (2009).
[10] S. Sivaraman and M. M. Trivedi, Looking at vehicles on the road: a survey of vision-based vehicle detection, tracking, and behavior analysis, IEEE Trans. Intell. Transport. Syst. 14 (2013) 1773–1795.
[11] L. Shao, X. Zhen, D. Tao and X. Li, Spatio-temporal Laplacian pyramid coding for action recognition, IEEE Trans. Cybernet., (2014) 817–827.
[12] W. Liu, X. Z. Wen, B. B. Duan et al., Rear vehicle detection and tracking for lane change assist, Proceedings of the IEEE Intelligent Vehicles Symposium, Istanbul, Turkey, (2007) 252–257.
[13] T. Liu, N. Zheng, L. Zhao and H. Cheng, Learning based symmetric features selection for vehicle detection, Proceedings of the IEEE Intelligent Vehicles Symposium, (2005) 124–129.
[14] J. Cui, F. Liu, Z. Li and Z. Jia, Vehicle localization using a single camera, Proceedings of the IEEE Intelligent Vehicles Symposium (IV), (2010) 871–876.
[15] G. E. Hinton, O. Vinyals and J. Dean, Distilling the Knowledge in a Neural Network, arXiv:1503.02531, (2015).
[16] J. Redmon, S. Divvala, R. Girshick and A. Farhadi, You Only Look Once: Unified, Real-Time Object Detection, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (2016).
[17] S. Ren, K. He, R. B. Girshick, X. Zhang and J. Sun, Object Detection Networks on Convolutional Feature Maps, IEEE Trans. Pattern Anal. Mach. Intell. (2017) 1476–1481.
[19] Available online: https://arxiv.org/pdf/1506.02640v5
[20] J. Redmon and A. Farhadi, YOLOv3: An Incremental Improvement, arXiv:1804.02767, (2018).
[21] V. Ordonez, G. Kulkarni and T. Berg, Im2Text: Describing Images Using 1 Million Captioned Photographs, in NIPS, (2011).
[22] T. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollar and C. L. Zitnick, Microsoft COCO: Common Objects in Context, in ECCV, (2014).
[23] Available online: https://pjreddie.com/darknet/yolo/ (accessed on 14 July 2018).
[24] C. Arteta, V. Lempitsky and A. Zisserman, Counting in the Wild, in ECCV, (2016).