Augmented reality using artificial neural networks – a review
2019-12-15 https://doi.org/10.14419/ijet.v8i4.29981
Augmented Reality, Artificial Neural Networks, Tracking Technologies, Gesture Recognition.
Abstract
The present paper reviews the areas where Artificial Neural Networks (ANN) have been applied in Augmented Reality (AR). Work on AR systems has largely focused on enhancing technologies in diverse application areas such as defense, robotics, medicine, manufacturing, education, entertainment, assisted driving, maintenance and mobile assistance. However, ANN are now finding much usage in AR. The research followed a review-based methodology in which most studies conducted in the past on AR and ANN were examined. AR combined with ANN has profound applications in various sectors and has developed considerably, but it still has some distance to go before industry, the military and the general public accept it as a familiar user interface. Effectively utilized, AR would modernize the way people live and the way industries operate. There is incredible potential in fields such as construction, art, architecture, repair and manufacturing for mediated reality and well-organized visualization through AR.
How to Cite
Narayanan, S., & Doss, S. (2019). Augmented reality using artificial neural networks – a review. International Journal of Engineering & Technology, 8(4), 603-610. https://doi.org/10.14419/ijet.v8i4.29981
Received date: 2019-10-16
Accepted date: 2019-10-24
Published date: 2019-12-15