ABSTRACT
Freehand gesture is an essential input modality for modern Augmented Reality (AR) user experiences. However, developing AR applications with customized hand interactions remains challenging for end-users. We therefore propose GesturAR, an end-to-end authoring tool that enables users to create in-situ freehand AR applications through embodied demonstration and visual programming. During authoring, users intuitively demonstrate customized gesture inputs while referring to the spatial and temporal context. Based on a taxonomy of gestures in AR, we propose a hand interaction model that maps gesture inputs to the reactions of AR contents. Users can thus author comprehensive freehand applications using trigger-action visual programming and instantly experience the results in AR. Further, we demonstrate multiple application scenarios enabled by GesturAR, such as interactive virtual objects, robots, and avatars; room-level interactive AR spaces; and embodied AR presentations. Finally, we evaluate the performance and usability of GesturAR through a user study.
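The trigger-action hand interaction model described above can be sketched in miniature: a demonstrated gesture (the trigger) is bound to a reaction (the action) on an AR content item, and recognizing that gesture at run time fires the bound reactions. This is an illustrative Python sketch only; all class and method names are hypothetical, and the actual GesturAR system is an AR authoring tool, not this code.

```python
# Minimal sketch of a trigger-action mapping from gestures to AR content
# reactions. All names are illustrative, not GesturAR's implementation.

from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class ARObject:
    """Stand-in for an AR content item (virtual object, robot, avatar)."""
    name: str
    log: List[str] = field(default_factory=list)

    def react(self, reaction: str) -> None:
        # Record the reaction; a real system would animate the content.
        self.log.append(reaction)


class HandInteractionModel:
    """Maps recognized gesture labels to reactions on AR objects."""

    def __init__(self) -> None:
        self._rules: Dict[str, List[Callable[[], None]]] = {}

    def add_rule(self, gesture: str, target: ARObject, reaction: str) -> None:
        # Author-time: bind a demonstrated gesture to a content reaction.
        self._rules.setdefault(gesture, []).append(
            lambda: target.react(reaction)
        )

    def on_gesture(self, gesture: str) -> None:
        # Run-time: a recognized gesture triggers every bound reaction.
        for action in self._rules.get(gesture, []):
            action()


model = HandInteractionModel()
door = ARObject("virtual_door")
model.add_rule("swipe_left", door, "open")
model.on_gesture("swipe_left")
print(door.log)  # → ['open']
```

The one-to-many rule table mirrors the paper's visual-programming view, where an author can wire a single demonstrated gesture to several content reactions.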