
Designing and Prototyping Applications Using Acoustophoretic Interfaces

Published: 11 May 2024

Abstract

The acoustophoretic interface, which uses acoustic levitation to manipulate objects in mid-air with ultrasound waves, has recently become a notable advancement in Human-Computer Interaction (HCI). This innovative interface provides contactless haptic feedback and audio delivery simultaneously through a single technical approach. Its versatility is evident in a wide range of applications, including physical displays, mid-air haptic interactions, and contactless object manipulation. Despite its potential, the interface remains underutilized, partly due to its novelty and the complexity of implementing advanced interaction tasks. My PhD research is dedicated to addressing these challenges by developing effective design and implementation strategies tailored to real-world application scenarios. By advancing the technical capabilities and application possibilities of acoustophoretic interfaces, my work strives to unlock their potential, paving the way for innovative, practical solutions that enable designers, creators, and researchers to build interactive, engaging, and effective applications across various domains.


1 INTRODUCTION

Acoustic levitation, or acoustophoresis, is an emerging technique that uses ultrasound waves to create acoustic traps that suspend objects in mid-air. Phased arrays of transducers (PATs) are a fundamental technology in this field: they enable dynamic control over dense arrays of sound sources, such as N×N grids of ultrasound transducers, by constantly updating each transducer's acoustic signal (i.e., amplitude and phase). With these control principles, it is possible not only to levitate objects in acoustic traps but also to generate contactless haptic sensations on the skin and produce audible sounds simultaneously [10].
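
To make this control principle concrete, the minimal sketch below computes focusing phases for a single trap: each transducer's phase compensates its propagation delay to the target point, and a π phase shift applied to one half of the array (a common "twin-trap" signature) splits the focus into two lobes with a low-pressure trapping point between them. The array geometry (a flat 16×16 grid at 10 mm pitch) and all numeric values here are illustrative assumptions, not a specific published system.

```python
import numpy as np

SPEED_OF_SOUND = 343.0          # m/s, air at ~20 degrees C
FREQ = 40e3                     # typical levitation transducer frequency (Hz)
WAVELENGTH = SPEED_OF_SOUND / FREQ
K = 2 * np.pi / WAVELENGTH      # wavenumber

# Hypothetical 16x16 flat array, 10 mm pitch, centred at the origin.
pitch = 0.01
coords = (np.arange(16) - 7.5) * pitch
tx, ty = np.meshgrid(coords, coords)
transducers = np.stack([tx.ravel(), ty.ravel(), np.zeros(tx.size)], axis=1)

def trap_phases(focus):
    """Phases that focus the array at `focus`, plus a twin-trap signature."""
    focus = np.asarray(focus, dtype=float)
    dist = np.linalg.norm(transducers - focus, axis=1)
    phases = -K * dist                      # compensate each propagation delay
    # Twin-trap signature: a pi shift on one half of the array splits the
    # focus into two lobes with a trapping point between them.
    phases[transducers[:, 0] > 0] += np.pi
    return np.mod(phases, 2 * np.pi)

# Example: a trap 12 cm above the array centre.
phi = trap_phases([0.0, 0.0, 0.12])
print(phi.shape)   # (256,) -- one phase per transducer
```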

Using sound energy to manipulate objects against gravity has demonstrated significant potential and benefits: it enables natural, contactless sensation and delivery without risk of contamination. Over the past decade, the technique has been actively explored in Human-Computer Interaction (HCI), as well as in physics and engineering, producing seemingly magical experiences. It is envisioned for a range of areas, including levitation-based volumetric displays [10, 18], data physicalizations [8, 16], mid-air haptics [13], physical interaction [5, 11], and contactless printing and fabrication [3].

The acoustophoretic interface, programmed using phase retrieval algorithms [18], enables a multi-modal display as well as the mid-air manipulation of multiple materials (such as solid particles, liquid droplets, and food). In contrast to more established techniques and interfaces, such as VR/AR and Tangible User Interfaces (TUIs), the acoustophoretic interface is a relatively new and still evolving field. While its high potential has been recognized, it remains in the early stages of development, particularly for real-world application scenarios.

First, a significant challenge lies in conceptualizing how acoustic levitation can be integrated into HCI applications: the technique must be transformed into an experience that is not only meaningful and engaging for users but also intuitive and easy to use. Achieving this requires a blend of technical expertise in acoustic physics, creative design insights, and practical user experience principles.

Second, few tools or platforms currently exist to support the development and use of acoustophoretic interfaces. As a nascent interface, achieving basic interaction functionalities not only presents significant challenges but also opens exciting avenues for exploration and innovation (e.g., interacting with mid-air content and making dynamic, reconfigurable manipulations). As demand for these interfaces grows, it is crucial to enhance complex and sophisticated capabilities, such as the stability and robustness of the levitation system and the ability to manipulate delicate materials in levitation. Concurrently, developing the essential foundational software and hardware becomes increasingly vital.

To unleash the potential of the acoustophoretic interface, my doctoral research addresses the challenges in designing and implementing prototypes enabled by acoustophoresis, offering practical solutions for designers and developers to create innovative and effective applications in HCI. This thesis investigates distinct application domains to leverage the advantages of acoustic levitation and proposes design principles and building platforms for implementing prototypes. In a broader context, this work also explores performance improvements to the general levitation system, paving the way for a wider range of applications and providing insights for future research. My progress to date demonstrates novel interactive examples built on the acoustophoretic interface, along with enhancements to levitation stability. I envision that acoustophoresis will revolutionize how we interact with the physical world and offer improved reliability across a wide range of applications for a broader audience.


2 RELATED WORKS

2.1 Acoustic Levitation Systems and Advances

Acoustic levitation was first demonstrated with a single-axis levitator consisting of one Langevin transducer and an opposed reflector [23]. In the resulting standing-wave acoustic field, small particles are levitated at the pressure nodes, the points of lowest acoustic pressure; the acoustic radiation force is zero at these points, and the restoring forces around them trap particles in mid-air. More recently, phased arrays of transducers (PATs) have been widely adopted in acoustic levitation systems, allowing acoustic traps at arbitrary 3D positions and multiple simultaneous trap points [14]. Phase retrieval algorithms are the key to optimizing the transducers' phase signals, which shape the acoustic field and create exact traps at target positions.
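
The sketch below illustrates the phase-retrieval idea with a naive Gerchberg-Saxton-style iteration over an idealised monopole propagation model. Production solvers such as GS-PAT [18] use a full piston-source model and run on the GPU, so this should be read only as a conceptual approximation; the array geometry and target positions are assumed values.

```python
import numpy as np

SPEED_OF_SOUND, FREQ = 343.0, 40e3
K = 2 * np.pi * FREQ / SPEED_OF_SOUND   # wavenumber at 40 kHz

def propagator(transducers, points, k):
    """Idealised monopole propagation matrix (no piston directivity)."""
    d = np.linalg.norm(points[:, None, :] - transducers[None, :, :], axis=2)
    return np.exp(1j * k * d) / d

def gs_phases(transducers, targets, k, iters=100):
    """Naive Gerchberg-Saxton iteration for multi-point focusing."""
    F = propagator(transducers, targets, k)        # targets x transducers
    field = np.ones(len(targets), dtype=complex)   # desired unit amplitudes
    for _ in range(iters):
        # Back-propagate to the array and keep only the phase (unit drive).
        drive = np.exp(1j * np.angle(F.conj().T @ field))
        # Forward-propagate, then re-impose unit amplitude at each target.
        field = np.exp(1j * np.angle(F @ drive))
    return np.angle(drive)

# Example: hypothetical 16x16 array, two traps 12 cm above it.
pitch = 0.01
c = (np.arange(16) - 7.5) * pitch
tx, ty = np.meshgrid(c, c)
array = np.stack([tx.ravel(), ty.ravel(), np.zeros(tx.size)], axis=1)
targets = np.array([[-0.02, 0.0, 0.12], [0.02, 0.0, 0.12]])
phases = gs_phases(array, targets, K)
print(phases.shape)   # (256,)
```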

High-performance computing allows fast solvers that update trap positions at speeds of up to 8.75 m/s [10]. For real-time, high-speed control, GS-PAT [18], a GPU-based multi-point phase retrieval algorithm, enables acoustic field updates at 10 kHz. Recent extensions of the boundary element method [9] have enabled high-speed acoustic levitation even in the presence of sound-scattering objects, overcoming the previous requirement of empty space between PATs. Following the same operating principle, acoustic levitation systems can generate mid-air haptics, levitation, and parametric audio.
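
A quick back-of-the-envelope check shows why these update rates matter. At the 10 kHz solver rate of GS-PAT [18], a trap moving at the reported 8.75 m/s ceiling [10] is displaced well under a millimetre per update, a small fraction of the roughly 8.6 mm wavelength at 40 kHz, so consecutive trap positions overlap and the particle can follow the moving trap:

```python
SPEED_OF_SOUND = 343.0
WAVELENGTH = SPEED_OF_SOUND / 40e3       # ~8.6 mm at 40 kHz

update_rate = 10_000                     # GS-PAT solver rate (Hz) [18]
max_speed = 8.75                         # reported trap speed (m/s) [10]

step = max_speed / update_rate           # trap displacement per update
print(f"per-update step: {step * 1e3:.3f} mm "
      f"({step / WAVELENGTH:.1%} of a wavelength)")
# -> 0.875 mm per update, ~10% of a wavelength.
```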

Meanwhile, the levitation system has some limitations. For instance, levitating large objects has not been fully explored, and direct hand manipulation is generally not possible in acoustophoretic systems. Moreover, recent findings have shown that abrupt phase changes on PAT boards lead to amplitude fluctuations in the transducers' emissions [20]; the actual delivered acoustic energy is therefore lower than theoretical simulations predict. These observations provide a crucial basis for guiding further research and technological advancement.
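
The sketch below illustrates the mitigation idea behind the gradual phase shift reported in [20]: rather than jumping each transducer directly to its new phase, the per-update change is clamped to a small step along the shortest path around the phase circle. The step size of π/16 is an illustrative assumption, not a value from the paper.

```python
import numpy as np

def gradual_phase_update(current, target, max_step=np.pi / 16):
    """Move each transducer phase toward its target by at most `max_step`
    per update, taking the shortest path around the circle (idea of [20])."""
    delta = np.angle(np.exp(1j * (target - current)))   # wrap to (-pi, pi]
    delta = np.clip(delta, -max_step, max_step)
    return np.mod(current + delta, 2 * np.pi)

# Example: a worst-case pi jump is spread over 16 gentle updates.
cur, tgt = np.zeros(256), np.full(256, np.pi)
for _ in range(16):
    cur = gradual_phase_update(cur, tgt)
print(np.allclose(cur, tgt))   # True
```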

2.2 Applications with Acoustic Levitation

Mid-air haptics. Acoustophoresis is commonly used to create mid-air tactile feedback by focusing ultrasound waves on the skin without physical contact [13]. By employing different modulation techniques (e.g., amplitude, lateral, and spatiotemporal modulation), this approach not only generates diverse haptic sensations but also enables shape recognition and can even induce emotional responses [19]. Mid-air haptics has been explored in application fields such as medical interfaces, automotive interfaces, digital advertising, and mixed reality.
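
As a concrete illustration of amplitude modulation, the sketch below builds a 40 kHz carrier with a 200 Hz envelope: the carrier itself is far above the skin's temporal sensitivity, so it is the low-frequency envelope that is felt at the focal point. The sample rate, modulation frequency, and duration are illustrative assumptions.

```python
import numpy as np

fs = 1_000_000            # simulation sample rate (Hz)
t = np.arange(0, 0.02, 1 / fs)

carrier_hz = 40e3         # ultrasound carrier (above skin sensitivity)
mod_hz = 200              # envelope in the mechanoreceptor-sensitive range

envelope = 0.5 * (1 + np.sin(2 * np.pi * mod_hz * t))   # 0..1 AM envelope
signal = envelope * np.sin(2 * np.pi * carrier_hz * t)
# The skin responds to the 200 Hz envelope, not the 40 kHz carrier,
# which is how a focused ultrasound point becomes a tactile sensation.
```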

Mid-air interactions. Concurrently, advancements in the acoustophoretic interface for levitating objects have expanded the range of applications and interactive possibilities. For interaction with levitated primitives, Point-and-Shake [5] designed a selection method for single or occluded particles using a finger and proposed a selection feedback mechanism based on side-to-side movement (shake). LeviCursor [1] presented a technique for controlling and stabilizing a floating particle: interactions based on the distance between a finger and the particle enable indirect selection and manipulation of levitated particles. TipTrap [11] enhanced this interaction by allowing the finger to come closer to the levitated particle; it uses sound scattering from the finger to form a levitation trap, enabling direct, co-located interactions.

Physical displays. Acoustophoresis has been used innovatively in volumetric displays: updating trapping positions at a high rate produces high-speed movement and persistence-of-vision (POV) content. A single levitated particle scans the content within 0.1 s, revealing simple 3D POV shapes such as circles and ovals [10]. By accounting for the timing and position of trap-particle dynamics, optimized trap trajectories make the particle more controllable and can render larger, more complex POV shapes [17]. Beyond particle-based displays, research in acoustophoresis has explored other primitives for mid-air displays. A notable example is the manipulation of levitated threads and fabrics, where multiple levitated particles serve as anchor points. LeviProps [15] computes the optimal anchor positions that maximize trapping stiffness, enabling stable manipulation of levitated fabric through precise millimetric adjustments of the anchors.
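
The sketch below samples a circular POV path at a 10 kHz trap-update rate and checks the implied trap speed against the reported ceiling. It deliberately ignores trap-particle dynamics, which is exactly the gap that OptiTrap [17] addresses with optimized timing; radius, period, and height are assumed values.

```python
import numpy as np

def circle_path(radius, period, update_rate=10_000, height=0.12):
    """Sample one revolution of a circular POV path at the solver rate."""
    n = int(period * update_rate)
    theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
    pts = np.stack([radius * np.cos(theta),
                    radius * np.sin(theta),
                    np.full(n, height)], axis=1)
    speed = 2 * np.pi * radius / period   # constant tangential trap speed
    return pts, speed

pts, speed = circle_path(radius=0.02, period=0.1)   # 2 cm circle in 0.1 s
print(f"{len(pts)} trap updates, trap speed {speed:.2f} m/s")
# ~1.26 m/s, well under the ~8.75 m/s reported ceiling [10].
```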

Delivery, assembly, and fabrication. Exploiting the advantages of contact-free manipulation, acoustic levitation has paved the way for groundbreaking uses in automated processes. For example, a food delivery system [21] employs the technology to float and transport items such as miniature burgers and drinks like wine and coffee; this technique is set to enrich culinary experiences with smell, visual, thermal, and auditory sensations [22]. ArticuLev [4] proposed an automatic physical assembly pipeline that detects different types of primitives (e.g., particles, threads, fabrics) and assembles them through lifting, joining, and posing steps. In 3D printing, acoustic levitation assists in handling UV resin and sticks, enabling new design opportunities and facilitating structures that were once difficult or impossible to produce [3].


3 PROPOSED AND COMPLETED WORKS

Recent advances in acoustic levitation have shown that the acoustophoretic interface holds promise for building novel, programmable, multi-modal interactive artifacts. However, conceptual frameworks and building tools for specific application contexts remain lacking. My research addresses the design and implementation challenges of applying an acoustophoretic interface in real application scenarios, fully utilizing the advantages of acoustic levitation while overcoming technical hurdles.

In this thesis, I select two application areas, data physicalization and fabrication, that can immediately benefit from the acoustophoretic interface and reach a larger audience. Meanwhile, I address the emerging challenges in the levitation system itself, laying the groundwork for future designers, creators, and developers.

3.1 Data Physicalizations with Acoustophoretic Interfaces

Data physicalization [12] is the construction of physical artifacts whose geometry and material properties encode data, enabling data analysis and communication in a tangible way. Interacting with physical data artifacts allows users to perceive information through multiple senses, including vision and touch, enhancing their understanding of depth and detail [2]. The acoustophoretic interface is one of the most promising routes to data physicalization because it can flexibly manipulate various objects through levitation and offer multi-sensory feedback. Despite this natural conceptual fit, a mapping for encoding data through acoustophoretic displays is lacking, and it remains challenging to dynamically reconfigure and manipulate physical artifacts so that they are interactive, engaging, and informative in an acoustic levitation context. I built DataLev [7, 8] (CHI '23 and UIST '22 work) to tackle these issues. I outlined a general design space comprising five dimensions (i.e., embodiments, materials, multi-modal support, mixed-reality components, and animations) that leverages the strengths of an acoustophoretic interface and provides a framework for exploring a wider range of mid-air physicalizations. Guided by these design principles, I proposed a building platform incorporating a levitation engine, hybrid imaging, and path planning techniques to support practical solutions along each design dimension. I demonstrated the design space and platform with eight novel physicalization examples, including scatter plots, map charts, arc diagrams, and network diagrams. The acoustophoretic interface significantly augments data physicalizations with mixed-reality animation, multi-modal interactions, and enriched materiality.
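
As a simplified illustration of the embodiment step, the sketch below maps a 2D dataset into trap positions inside a levitator's working volume. This is a stand-in for DataLev's actual pipeline, which additionally performs path planning and hybrid imaging; the function, volume bounds, and example data are all hypothetical.

```python
import numpy as np

def scatter_to_traps(data_xy, volume_min, volume_max):
    """Map 2D data points to 3D trap positions inside the working volume
    (a simplified stand-in for DataLev's embodiment step)."""
    data = np.asarray(data_xy, dtype=float)
    lo, hi = data.min(axis=0), data.max(axis=0)
    unit = (data - lo) / np.where(hi > lo, hi - lo, 1.0)       # -> [0, 1]^2
    unit3 = np.column_stack([unit, np.full(len(data), 0.5)])   # mid-height z
    vmin, vmax = np.asarray(volume_min), np.asarray(volume_max)
    return vmin + unit3 * (vmax - vmin)                        # one trap per datum

# Example: five data points into a hypothetical 8x8x8 cm volume above the array.
traps = scatter_to_traps([[1, 2], [3, 5], [4, 1], [6, 6], [2, 4]],
                         volume_min=[-0.04, -0.04, 0.08],
                         volume_max=[0.04, 0.04, 0.16])
print(traps)
```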

3.2 System Improvement of Acoustophoretic Interfaces

As the technique and interface mature, they are increasingly expected to achieve complex levitation, display, and interaction tasks, such as creating more levitation points and rendering complex display content in real applications. However, acoustic levitation has been found to fail unpredictably in some dynamic levitation experiments [4, 8]. Such failures disrupt the delicate balance needed to maintain stable levitation, leading to inconsistent results between simulation and experiment. To mitigate these challenges, I proposed a data-driven method that analyzes and detects problematic levitation patterns and identifies enhancement strategies based on historical data. To my knowledge, no existing dataset supports rigorous analysis and algorithm development in this space, so I created the first hybrid levitation dataset of its kind, combining simulation features with experimental levitation outcomes. Building on this dataset, I developed StableLev [6] (CHI '24 work), a data-driven methodology that detects anomalies with an AutoEncoder-based deep neural network and effectively pinpoints and rectifies unstable levitation trajectories within a time series. This is the first attempt to apply a data-driven method in the context of acoustic levitation; it offers fresh perspectives on multi-particle levitation and improves overall stability with promising results. I envision that this work will open up further data-driven, AI-based explorations enabling more discoveries in acoustic levitation.
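
The sketch below shows the general shape of autoencoder-based anomaly detection on per-timestep trajectory features: train on (mostly stable) data, then flag time steps whose reconstruction error exceeds a threshold. The architecture, feature count, training data, and threshold are illustrative and do not reproduce StableLev's actual model.

```python
import torch
from torch import nn

class TrajectoryAutoEncoder(nn.Module):
    """Compress per-timestep levitation features and reconstruct them;
    poorly reconstructed steps are flagged as likely unstable."""
    def __init__(self, n_features, latent=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 32), nn.ReLU(), nn.Linear(32, latent))
        self.decoder = nn.Sequential(
            nn.Linear(latent, 32), nn.ReLU(), nn.Linear(32, n_features))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def flag_anomalies(model, features, threshold):
    """Per-timestep reconstruction error; high error marks instability."""
    with torch.no_grad():
        err = ((model(features) - features) ** 2).mean(dim=-1)
    return err > threshold

# Minimal training loop on stable trajectories (placeholder random data).
model = TrajectoryAutoEncoder(n_features=6)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
stable = torch.randn(1000, 6)            # stand-in for the real dataset
for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(model(stable), stable)
    loss.backward()
    opt.step()

flags = flag_anomalies(model, stable, threshold=1.0)
print(flags.sum().item(), "time steps flagged")
```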


4 NEXT STEPS

In my prior explorations, I provided practical solutions for applying an acoustophoretic interface to data physicalization and enhanced the stability of the levitation system. Building on those results, I plan next to explore the capabilities and usage of acoustic levitation in a more complex and exciting area: fabrication. Since the acoustophoretic interface is distinguished by its ability to manipulate objects of diverse materials (both solid and liquid), it can play an important role in fabrication processes. For example, the use of acoustic levitation to deliver food [21] and to assemble elements such as levitated sticks and resin droplets in 3D printing [3] marks revolutionary steps toward contactless, programmable fabrication.

Here, I will consider different fabrication methods, flows, and materials, and explore how acoustophoresis can be embedded into suitable fabrication processes. The ideal experimental scenario is food fabrication and processing, which spans diverse fabrication steps and appealing scenarios. In this context, I intend to first examine the acoustic properties of different food materials and their responses to ultrasound processing in order to select suitable cooking materials. Next, I plan to achieve precise control of the food (e.g., placement and assembly) through computational models. Drawing on other ultrasound characteristics, I will also explore novel processing methods that are impossible on conventional cooking platforms. The final research output will be a set of design guidelines and implementation procedures, demonstrated through various food prototypes built with the acoustophoretic interface. As an extension, this research will discuss how an acoustophoretic interface can enhance computational food-making and bring novel insights to human-food interactions.


5 EXPECTED CONTRIBUTIONS AND CHALLENGES

The acoustophoretic interface, as a burgeoning field of research, is poised for significant advancements, particularly in physical display and contactless fabrication. In my doctoral research, I delve into specific application scenarios (i.e., data physicalization and fabrication) to find opportunities for applying an acoustophoretic interface and to fill technical gaps in the acoustic levitation technique. The expected contributions of my thesis include theoretical considerations from a design perspective and the development of tools and platforms from a technical standpoint. This approach aims to integrate design and computational insights, offering significant implications for a broad spectrum of future applications with acoustophoretic interfaces. Specifically, the design space and platform for data physicalization will inspire future research on building novel and engaging physicalization prototypes, adding value to storytelling and communication; the design framework and system will support computational food fabrication and bring novel culinary and multi-sensory experiences; and the data-driven strategy models levitation patterns and presents analytical solutions that improve the robustness of the levitation system, promising a foundation for discovering increasingly complex acoustophoretic behaviors.

Meanwhile, a potential challenge lies in validating the discovered models of the levitation system in real systems, as translating computational models into functioning real-world applications often reveals unforeseen complexities. More refined models could therefore be developed to better approximate the real world and reach conclusive findings. Differences and inconsistencies in hardware setups pose a further challenge, as they may lead to varied results and affect the reproducibility and reliability of our findings. In the current phase, we are initiating the design and implementation steps. These foundational efforts will set the stage for future user studies, providing a robust evaluation framework for exploring how users interact with the prototypes, including direct interaction techniques and methods. In conclusion, the journey ahead for acoustophoretic interfaces is filled with exciting possibilities and significant challenges.


6 DISSERTATION STATUS AND LONG-TERM GOALS

I am currently a fourth-year PhD student in the Department of Computer Science at University College London, under the supervision of Professor Sriram Subramanian. I have never attended a doctoral consortium at previous SIGCHI conferences. I have published three papers on my research topic in the HCI community. I expect to complete my PhD around October 2024.

In the long term, integrating the acoustophoretic interface with other user interface technologies could unlock a multitude of new possibilities; this synergy could lead to more intuitive, immersive, and interactive systems across various domains. To fully realize the potential of acoustophoretic interfaces, human-centered exploration is crucial: extensive user studies and evaluations should focus not just on technological feasibility but also on cognitive and sensory aspects. Understanding how users interact with and perceive acoustophoretic interfaces is key to designing systems that are not only innovative but also intuitive and user-friendly.


ACKNOWLEDGMENTS

This doctoral research is supported by the EU-H2020 through their ERC Advanced Grant (number 787413) and the Royal Academy of Engineering through their Chairs in Emerging Technology Program (CIET18/19). I am very grateful to my supervisor, Professor Sriram Subramanian, for his consistent encouragement and generous guidance. I also thank Dr. Ryuji Hirayama and Dr. Diego Martinez Plasencia, as well as all my research collaborators, who have given me great support and help.

References

1. Myroslav Bachynskyi, Viktorija Paneva, and Jörg Müller. 2018. LeviCursor: Dexterous interaction with a levitating object. In Proceedings of the 2018 ACM International Conference on Interactive Surfaces and Spaces. Association for Computing Machinery, New York, NY, USA, 253–262. https://doi.org/10.1145/3279778.3279802
2. Kurtis Danyluk, Teoman Ulusoy, Wei Wei, and Wesley Willett. 2020. Touch and beyond: Comparing physical and virtual reality visualizations. IEEE Transactions on Visualization and Computer Graphics 28, 4 (2020), 1930–1940. https://doi.org/10.1109/TVCG.2020.3023336
3. Iñigo Ezcurdia, Rafael Morales, Marco A. B. Andrade, and Asier Marzo. 2022. LeviPrint: Contactless fabrication using full acoustic trapping of elongated parts. In ACM SIGGRAPH 2022 Conference Proceedings. Association for Computing Machinery, New York, NY, USA, 1–9. https://doi.org/10.1145/3528233.3530752
4. Andreas Rene Fender, Diego Martinez Plasencia, and Sriram Subramanian. 2021. ArticuLev: An integrated self-assembly pipeline for articulated multi-bead levitation primitives. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. Association for Computing Machinery, New York, NY, USA, 1–12. https://doi.org/10.1145/3411764.3445342
5. Euan Freeman, Julie Williamson, Sriram Subramanian, and Stephen Brewster. 2018. Point-and-shake: Selecting from levitating object displays. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. Association for Computing Machinery, New York, NY, USA, 1–10. https://doi.org/10.1145/3173574.3173592
6. Lei Gao, Giorgos Christopoulos, Prateek Mittal, Ryuji Hirayama, and Sriram Subramanian. 2024. StableLev: Data-driven stability enhancement for multi-particle acoustic levitation. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems. Association for Computing Machinery, New York, NY, USA, 1–11. https://doi.org/10.1145/3613904.3642286
7. Lei Gao, James Hardwick, Diego Martinez Plasencia, Sriram Subramanian, and Ryuji Hirayama. 2022. DataLev: Acoustophoretic data physicalisation. In Adjunct Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology. Association for Computing Machinery, New York, NY, USA, 1–3. https://doi.org/10.1145/3526114.3558638
8. Lei Gao, Pourang Irani, Sriram Subramanian, Gowdham Prabhakar, Diego Martinez Plasencia, and Ryuji Hirayama. 2023. DataLev: Mid-air data physicalisation using acoustic levitation. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems. Association for Computing Machinery, New York, NY, USA, 1–14. https://doi.org/10.1145/3544548.3581016
9. Ryuji Hirayama, Giorgos Christopoulos, Diego Martinez Plasencia, and Sriram Subramanian. 2022. High-speed acoustic holography with arbitrary scattering objects. Science Advances 8, 24 (2022), eabn7614. https://doi.org/10.1126/sciadv.abn7614
10. Ryuji Hirayama, Diego Martinez Plasencia, Nobuyuki Masuda, and Sriram Subramanian. 2019. A volumetric display for visual, tactile and audio presentation using acoustic trapping. Nature 575, 7782 (2019), 320–323. https://doi.org/10.1038/s41586-019-1739-5
11. Eimontas Jankauskis, Sonia Elizondo, Roberto Montano Murillo, Asier Marzo, and Diego Martinez Plasencia. 2022. TipTrap: A co-located direct manipulation technique for acoustically levitated content. In Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology. Association for Computing Machinery, New York, NY, USA, 1–11. https://doi.org/10.1145/3526113.3545675
12. Yvonne Jansen, Pierre Dragicevic, Petra Isenberg, Jason Alexander, Abhijit Karnik, Johan Kildal, Sriram Subramanian, and Kasper Hornbæk. 2015. Opportunities and challenges for data physicalization. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems. Association for Computing Machinery, New York, NY, USA, 3227–3236. https://doi.org/10.1145/2702123.2702180
13. Benjamin Long, Sue Ann Seah, Tom Carter, and Sriram Subramanian. 2014. Rendering volumetric haptic shapes in mid-air using ultrasound. ACM Transactions on Graphics 33, 6 (2014), 1–10. https://doi.org/10.1145/2661229.2661257
14. Asier Marzo and Bruce W. Drinkwater. 2019. Holographic acoustic tweezers. Proceedings of the National Academy of Sciences 116, 1 (2019), 84–89. https://doi.org/10.1073/pnas.1813047115
15. Rafael Morales, Asier Marzo, Sriram Subramanian, and Diego Martínez. 2019. LeviProps: Animating levitated optimized fabric structures using holographic acoustic tweezers. In Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology. Association for Computing Machinery, New York, NY, USA, 651–661. https://doi.org/10.1145/3332165.3347882
16. Themis Omirou, Asier Marzo Perez, Sriram Subramanian, and Anne Roudaut. 2016. Floating charts: Data plotting using free-floating acoustically levitated representations. In 2016 IEEE Symposium on 3D User Interfaces (3DUI). IEEE, Greenville, SC, USA, 187–190. https://doi.org/10.1109/3DUI.2016.7460051
17. Viktorija Paneva, Arthur Fleig, Diego Martínez Plasencia, Timm Faulwasser, and Jörg Müller. 2022. OptiTrap: Optimal trap trajectories for acoustic levitation displays. ACM Transactions on Graphics 41, 5 (2022), 1–14. https://doi.org/10.1145/3517746
18. Diego Martinez Plasencia, Ryuji Hirayama, Roberto Montano-Murillo, and Sriram Subramanian. 2020. GS-PAT: High-speed multi-point sound-fields for phased arrays of transducers. ACM Transactions on Graphics 39, 4 (2020), Article 138. https://doi.org/10.1145/3386569.3392492
19. Ismo Rakkolainen, Euan Freeman, Antti Sand, Roope Raisamo, and Stephen Brewster. 2020. A survey of mid-air ultrasound haptics and its applications. IEEE Transactions on Haptics 14, 1 (2020), 2–19. https://doi.org/10.1109/TOH.2020.3018754
20. Shun Suzuki, Masahiro Fujiwara, Yasutoshi Makino, and Hiroyuki Shinoda. 2020. Reducing amplitude fluctuation by gradual phase shift in midair ultrasound haptics. IEEE Transactions on Haptics 13, 1 (2020), 87–93. https://doi.org/10.1109/TOH.2020.2965946
21. Chi Thanh Vi, Asier Marzo, Damien Ablart, Gianluca Memoli, Sriram Subramanian, Bruce Drinkwater, and Marianna Obrist. 2017. TastyFloats: A contactless food delivery system. In Proceedings of the 2017 ACM International Conference on Interactive Surfaces and Spaces. Association for Computing Machinery, New York, NY, USA, 161–170. https://doi.org/10.1145/3132272.3134123
22. Chi Thanh Vi, Asier Marzo, Gianluca Memoli, Emanuela Maggioni, Damien Ablart, Martin Yeomans, and Marianna Obrist. 2020. LeviSense: A platform for the multisensory integration in levitating food and insights into its effect on flavour perception. International Journal of Human-Computer Studies 139 (2020), 102428. https://doi.org/10.1016/j.ijhcs.2020.102428
23. R. R. Whymark. 1975. Acoustic field positioning for containerless processing. Ultrasonics 13, 6 (1975), 251–261. https://doi.org/10.1016/0041-624X(75)90072-4
