Towards an Evaluation Framework for Extended Reality Authentication Schemes

Published: 11 May 2024

Abstract

Over the years, one of the biggest challenges for researchers has been to propose authentication schemes that are both usable and secure and that ensure users’ privacy. The advent of Extended Reality (XR), as an emerging technology, has introduced further challenges and opportunities for authentication. Researchers have proposed several methods for user authentication; however, a major challenge lies in comparing the various methods. This comparison is crucial for XR stakeholders to understand the strengths and weaknesses of each approach, aiding informed decision-making. In addressing this challenge, this late-breaking work reports the results of a small-scale user study with experts in the field, moving towards a framework for evaluating user authentication schemes in XR.


1 INTRODUCTION

In the rapidly evolving landscape of technology, the integration of Extended Reality (XR)1 has introduced new challenges and opportunities, particularly in user authentication. The quest for user authentication schemes that seamlessly balance usability, security, and user privacy has long been a central concern for researchers, and it persists in XR. Several user authentication schemes have been proposed to address this concern, based on diverse authentication factors, modalities, devices, etc. An essential step in this direction is the evaluation of user authentication schemes, with research works in the Usable Security and Privacy (USP) domain presenting user studies that evaluate the proposed schemes. However, researchers follow different evaluation approaches, and the applied evaluation protocols, methods, and measures vary. This variation introduces several problems, such as limited consistency, comparability, and reproducibility; these problems become even more pressing in XR with the introduction of new affordances. In addition, the immersive nature of XR can create unique security challenges, as users may be more susceptible to phishing attacks or social engineering techniques within these environments. XR experiences often involve multi-modal interaction, including gestures, voice commands, and biometric inputs. Designing effective user authentication mechanisms that accommodate these diverse interaction modalities while maintaining security is challenging but essential for a seamless user experience. The settings in which XR experiences are delivered can also vary in terms of physical location, lighting conditions, noise levels, and the presence of other individuals. These environmental factors can impact the effectiveness of authentication methods and necessitate adaptive authentication approaches tailored to the specific context. The unique characteristics of XR environments underscore the importance of customized authentication solutions.

Aiming to assist researchers in the USP domain in making informed decisions about how to evaluate user authentication schemes in XR, this late-breaking work (LBW) presents the first steps towards a universal and standardized evaluation framework for user authentication schemes in XR. To shape the framework, we conducted a user study with field experts, whose insights and experience ensure the adaptability and effectiveness of such a universal approach. The remainder of the LBW is structured as follows: we briefly present the background, related works, and the research objective; we discuss the user study (methods and results); we take the first steps toward the definition of the evaluation framework; and we present the limitations and discuss the next steps.


2 BACKGROUND, RELATED WORKS, AND RESEARCH OBJECTIVE

2.1 Background

When proposing user authentication schemes, researchers aim to deliver schemes that are secure and that protect users through user-friendly and intuitive interfaces that do not burden them. This is a major concern in the USP community, and it persists in XR, as XR amplifies security concerns [1], introduces privacy challenges [1], and benefits from new interaction elements and paradigms that could influence usability and other relevant user experience aspects [16, 20]. In XR, several schemes have been proposed [19], aiming to support secure and usable user authentication. An important step in this direction is the evaluation of these schemes with respect to security and usability; thus, research works in the USP domain typically conduct user studies. However, the evaluation approaches reported in these studies differ, meaning that the applied methodologies, tools, and measures vary. This variation introduces several challenges: i) it is difficult to compare and benchmark different user authentication schemes; ii) inconsistent evaluation methods hinder the reproducibility of research findings; iii) it can lead to an imbalance between usability and security evaluation; and iv) new USP researchers may struggle to navigate the diverse evaluation methods, as there is no clear guidance. An evaluation framework would address such challenges, moving towards a more universal and standardized approach.

2.2 Related Works

In HCI, a framework can be a structured and standardized set of concepts, practices, criteria, guidelines, etc., providing a foundation for designing, developing, or evaluating interactive experiences and interfaces. Focusing on evaluation, frameworks targeting assessment dimensions (e.g., performance, usability) have been proposed in diverse domains, such as the behavioral evaluation of artificial intelligence models [4], the evaluation of user experiences based on brain signals [7], the evaluation of mobile behaviour change applications [13], and the evaluation of sense of agency in control tasks [9]. The rise of XR in recent years emphasizes the need for XR-specific evaluation frameworks. However, only a limited body of research focuses on such frameworks, such as evaluating XR applications for assistive robotics [15], evaluating the maturity of BIM-based AR/VR systems [14], and evaluating VR applications for industrial training [18].

In security, several evaluation frameworks have been proposed, spanning different domains of the Internet-of-Things (IoT) ecosystem, such as military [5], vehicles [21], and cloud and network communications [8, 17]. However, they focus on technical security properties and system components (e.g., evaluation of authentication schemes based on features like mutual authentication, non-repudiation, and scalability [2], evaluation of trust between IoT components [10], and evaluation of mutual authentication protocols [12]) without considering human factors or assessing user studies. Focusing on user authentication, Dube et al. [6] recently proposed an evaluation framework for biometric-based user authentication schemes; however, it is constrained to technical aspects (e.g., performance and irreversibility) and does not consider usability. Korać and Simić [11] proposed the Fishbone model for evaluating mobile user authentication schemes; while it discusses several factors, its output is limited to quantitative scores, enabling only a numerical assessment of mobile authentication schemes.

2.3 Research Objective

Considering the benefits of evaluation frameworks, the importance of delivering user authentication schemes that are both secure and usable, and the rise of XR, there emerges a necessity to assist USP researchers in effectively assessing their authentication schemes. In this direction, the research objective of this LBW is to move towards a holistic evaluation framework that offers guidance to USP researchers when assessing their authentication schemes.


3 STUDY METHODOLOGY

To meet our research objective, we conducted a user study with USP experts: semi-structured interviews with people who have conducted and published research on user authentication in XR. We provided them with a conceptual evaluation model and asked them to share their insights towards a holistic evaluation approach that ensures usability, security, and privacy in XR user authentication, aiming to shape an evaluation framework. The study was approved by the Ethics Review Board of the University of Warwick (Reference No. BSREC 37/23-24). Details about the study methodology follow.

3.1 Participants and Recruitment Process

We recruited five individuals, comprising two self-described women and three self-described men, with a strong background (5+ years; 74 publications on average) in USP, user authentication, and XR research. Our recruitment strategy involved sending email invitations to individuals who had (co)authored peer-reviewed papers on user authentication in XR. Before joining the study, participants were briefed on the context and the anticipated duration of the study (30-40 minutes) through the Participant Information Leaflet (PIL). We provided information on how their data would be anonymized, stored, and handled, and obtained their informed consent. Participation was voluntary, and participants retained the freedom to withdraw from the study at any point without any obligation.

3.2 Methods and Tools

3.2.1 Interview Protocol.

We followed a semi-structured approach. We used a predefined set of questions as a foundation for our interviews, but we also explored emergent themes and noteworthy aspects that surfaced during the discussions. The interview consisted of two parts. In Part I, we aimed to gain insights into the participants’ experience in formulating evaluation approaches for XR user authentication schemes. In Part II, we aimed to trigger a discussion about the participants’ perspectives on a holistic evaluation framework; to do so, we provided them with a conceptual evaluation model (discussed in Section 3.2.3), which served as a starting discussion point.

3.2.2 Interview and Data Analysis.

The interviews took place online via Microsoft Teams. We used its recording feature to document the discussions and its live transcription feature to transcribe the recordings; a researcher inspected and, when needed, adjusted the generated transcriptions. The interview recordings and transcriptions were anonymized and securely stored on the University of Warwick servers. We used thematic analysis following an open coding approach [3] to identify patterns and themes within the interview data; this involved systematic coding, analysis, and interpretation of the data to extract key themes and insights, performed using the NVivo qualitative analysis software.

3.2.3 Conceptual Evaluation Model.

Aiming to provide the study participants with a starting point and trigger discussion regarding the shaping of a holistic evaluation framework for user authentication in XR, we developed a conceptual evaluation model. This model was derived from our experience and expertise in the field, merged with the insights and knowledge gained by examining research works in USP, user authentication, and XR (e.g., [19]). Figure 1 offers a high-level overview, illustrating its key components. The model (CEM) implements a mapping function, defined as CEM: X → Y, where X = {X1, X2, ..., XN} represents the set of input parameters and Y = {Y1, Y2, ..., YM} represents the multi-dimensional output. The input parameters are related to the dimensions of the user authentication scheme: authentication factor (knowledge, biometrics, possession, multi-factor), modalities (hand, eye, head, body, brain, voice, multi-modality), and context of use (private/public, device ownership, task). The output includes the evaluation dimensions: metrics and measures for efficient evaluation (towards security, usability, resource management, and privacy), evaluation protocol (protocol definition, equipment required, user study design), and comparison insights (results from related user studies).

Figure 1: High-level overview of the conceptual evaluation model.
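To make the mapping concrete, below is a minimal Python sketch of the CEM. The dimension names come from the description above; the function signature, the single illustrative rule, and the return structure are our own assumptions for illustration, not the model’s actual implementation.

# A minimal sketch of the conceptual evaluation model (CEM) as a mapping
# CEM: X -> Y. Dimension names follow the paper; the rule and return
# structure are hypothetical placeholders.
def cem(factor: str, modalities: list[str], context: dict) -> dict:
    """Map the input dimensions X to the multi-dimensional output Y."""
    output = {
        "metrics_and_measures": [],   # security, usability, resource management, privacy
        "evaluation_protocol": {},    # protocol definition, equipment required, study design
        "comparison_insights": [],    # results from related user studies
    }
    # Hypothetical rule: knowledge-based schemes call for memorability
    # measures, which do not apply to biometric or possession-based ones.
    if factor == "knowledge":
        output["metrics_and_measures"].append("memorability")
    return output

guidance = cem("knowledge", ["eye"], {"space": "private", "ownership": "personal", "task": "login"})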

3.3 Procedure

We followed a three-phase approach:

Before the interviews. After recruiting the study participants (i.e., those who answered positively to our email invitations and provided consent, see Section 3.1), we scheduled interviews on Microsoft Teams (using the Meeting feature). We mutually agreed upon the date and time of the interviews. We sent a calendar invitation to participants, including the joining link.

During the interview. The interviewer introduced the topic and conducted the two parts of the interview (Section 3.2.1). Concluding the session, the interviewer thanked the participant for their contributions. The duration of each interview was 30-40 minutes; we recorded the sessions for transcription purposes (Section 3.2.2).

After the interview. A researcher inspected and adjusted (when needed) the generated transcriptions. Two researchers read through the transcribed data, independently marked the data relevant to the research objective, and turned them into labeled statements. Next, they categorized the statements using affinity diagrams to identify key themes. The co-authors reviewed, discussed, and revised the identified themes and clusters to validate the qualitative analysis.

Skip 4RESULTS Section

4 RESULTS

4.1 Insights from Experts

Evaluation Challenges. The participants highlighted the challenges in selecting and implementing an evaluation approach for XR authentication. These challenges underscore the complexity of such evaluations, necessitating a holistic and adaptable approach. The main challenges can be summarized as follows:

Variance Between Different Schemes and Determining Authentication Parameters.

The diverse characteristics of authentication schemes, and the need to decide on specific parameters of the authentication process (e.g., number of repetitions, time before blocking a user), make it difficult to standardize evaluations or directly compare different schemes. Ensuring clarity and consistency in evaluation methods is crucial for accurate assessments and comparisons.

Considering Different Scenarios and Contexts in XR.

Different scenarios and contexts (e.g., varying times of day or durations of XR immersion, type of XR devices, physical constraints of the environment) can affect the evaluation output, along with more technical aspects (e.g., field of view and motion sickness), highlighting the need for flexible evaluation frameworks that can adapt to diverse user contexts.

Formulating a Relevant Threat Model.

Creating a threat model that is both generalizable and relevant to specific contexts is challenging due to the need to ensure adequate coverage of real-world scenarios.

“it was really difficult to motivate why this threat model was important to be looked at” — P2

“It’s always tricky to frame the threat model in a way where you’re like, OK, this is really important” — P3

Adaptability and Iteration in Study Design.

A major challenge in evaluating user authentication in XR is adapting or rerunning studies to incorporate new factors or data analysis techniques that were initially overlooked.

“Because sometimes I actually forget to add something and then it’s either me needing to rerun the study to add this data analysis or I just came across a paper that told me this [factor] exists” — P3

“At first... we didn’t consider this [a metric], but... the reviewers pointed out that this [the metric] might be one of the factors... so, we re-ran the study” — P4

Need for Standardization.

To address such challenges, a more standardized and universal way of recommending evaluation approaches (e.g., through a holistic evaluation framework) would benefit USP researchers.

“I think a standardization and the way people report and measure things, making outputs publicly available... would make the comparison easier” — P2

“Conceptualize the [XR evaluation] space somehow, because it’s not standardized to a certain extent” — P1

Parametrization of Framework Input. Participants expressed a desire to parameterize a set of framework input factors, aiming to enhance the granularity with which they can describe the authentication schemes they want to evaluate.

Description of Scheme Characteristics.

Two main dimensions need to be identified to describe the authentication scheme: authentication factor and input modality. Regarding the authentication factor, it is essential to know if the scheme is knowledge-based (i.e., something the user knows), biometric-based (i.e., something the user is), possession-based (i.e., something the user has), or if it implements a multi-factor approach. These factors introduce different vulnerabilities and limitations to the authentication process; thus, different measures and metrics are often more suitable in each case. For example, memorability is important in knowledge-based schemes; however, it is not applicable to biometrics or token-based schemes. Input modalities, such as hand, body, eye, brain, and voice (or a multi-modal approach), can also influence the evaluation process both in terms of security (e.g., hand modalities are more vulnerable to shoulder-surfing attacks) and usability.

XR Properties.

XR properties can significantly influence the evaluation of user authentication schemes. The main properties include the XR dimension (i.e., AR, VR, or MR), the device type through which the XR experience is delivered (e.g., HMD or mobile, tethered or not), and the interaction mechanisms (e.g., eye-tracking, controllers). These properties introduce diverse security, usability, and privacy concerns that must be addressed in the evaluation process. For example, when interacting in an AR environment through an HMD, environmental factors (e.g., lighting conditions) could influence the experience and impact usability.

“I think it’s indeed relevant, especially because different headsets have different sensors which could be used in different ways” — P2

“...the device type. For example, if it’s tethered VR, where did the VR devices only display of the computer or the standalone VR which has the separate CPU? Or maybe just smartphone VR” — P5

Scenarios and Context of Use.

The scenarios that the authentication scheme implements and the context in which it is used include authentication conditions and environmental factors that influence the security, usability, and privacy aspects. These factors include the space (private, public, or the continuum between them) where the authentication process takes place, the frequency of authentication, the sensitivity of the content the user is granted access to after authenticating, the physical activity during authentication, etc. Such factors influence the adopted threat models and should be parameterized for an effective evaluation.

“I added the importance of the account I am logging into. How important is it? Because that... I was working with behavior and that is affecting their behavior” — P3

“I think in terms of the context of use, I would also add things like how often is the user expected to authenticate? Because... if they’re authenticating multiple times a day, you may want to use a faster scheme, even if it’s not so secure” — P2

Framework Output Guide. Based on the interview excerpts, the possible outputs of an evaluation framework for XR authentication can be summarized as follows:

Metrics, Measures, and Tools for Usability, Security, Privacy and Deployability.

A desirable output of the framework, one that would streamline the evaluation process, is a set of detailed metrics that help assess the usability, security, privacy, and deployability of authentication schemes. This includes measurements like authentication time, preparation time, accuracy, and speed trade-offs; recommendations for standardized questionnaires to assess usability, privacy, and other relevant factors, along with rationales for choosing specific questionnaires; and advice on which threat models to consider during the evaluation, including evaluation matrices and metrics relevant to each threat model and authentication scheme. Depending on the device type, aspects like power consumption (e.g., battery level in standalone HMDs) and processing requirements may also be important for evaluating the proposed scheme.

“so I would go into as low level as possible of saying what’s the time taken to authenticate what’s also the preparation time, like does it take too long to prepare yourself to authenticate” — P5

“I would go to fine grained levels and I like that you included privacy and resource management” — P2
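As an illustration of the low-level measures the experts mention, the following sketch computes preparation time, time to authenticate, and a simple success rate from hypothetical trial records. The record fields (prep_s, auth_s, success) and the aggregation are assumptions for illustration, not a standardized instrument.

from statistics import mean

# Hypothetical per-trial records from an authentication user study;
# the field names are illustrative assumptions.
trials = [
    {"prep_s": 4.1, "auth_s": 6.2, "success": True},
    {"prep_s": 3.8, "auth_s": 7.0, "success": True},
    {"prep_s": 5.0, "auth_s": 9.4, "success": False},
]

mean_prep = mean(t["prep_s"] for t in trials)                   # preparation time
mean_auth = mean(t["auth_s"] for t in trials)                   # time to authenticate
success_rate = sum(t["success"] for t in trials) / len(trials)  # accuracy

print(f"prep: {mean_prep:.1f}s, auth: {mean_auth:.1f}s, success: {success_rate:.0%}")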

Precautions.

Various factors may affect the evaluation outcomes, such as spatial awareness and lighting conditions. Apart from the environmental conditions, there are device-related considerations, such as the battery level of controllers or the XR space, that should be taken care of for a smooth evaluation process. This would be particularly important for new researchers, but also for conducting remote studies. Factors that may affect the user experience, such as motion sickness and headaches, should also be provided as a checklist or reported as limitations of the study.

“I will add motion sickness as a factor there just to make sure that people are not getting motion tip especially if they are like doing spatial authentication where they need to look around” — P4

Standardized reporting.

The interview excerpts underscore the critical need for standardized reporting and comparisons among XR authentication schemes. Standardization not only aids in making study results transparent and comparable but also streamlines the research process by providing researchers with clear frameworks for reporting and analysis. Moreover, standardized reporting would facilitate the evaluation of different authentication schemes, allowing for meaningful comparisons and informed decision-making for the selection and implementation of authentication methods.

Figure 2: Towards an evaluation framework for user authentication schemes in XR.

“If we have a standard way of reporting, then we can do better comparisons and I think we don’t need to re-implement everything in our studies just to compare them” — P1

4.2 Towards an Evaluation Framework

The input dimensions discussed previously are important for the framework, as they define the evaluation output and should be provided by the user in a simple and standardized way. The output of the framework is delivered through an evaluation guide, which includes practical guidelines in terms of measures and metrics. Apart from the guidelines, checklists for setting up and conducting authentication studies in XR shall be provided, covering environmental conditions, device setup, and user comfort and health considerations. The framework will also feature a standardized way of reporting, which will make comparisons easier, as well as references and links to relevant literature that help USP researchers gain a deeper understanding of, and justification for, the chosen methods and measures.

Putting it all together, the framework (Figure 2) implements a mapping function that can be represented as MF: X → Y, where X = {ASC, XRP, SCU} represents the set of input parameters, and Y = {MMT, P, SR} represents the multi-dimensional output. Regarding the input parameters, ASC = {AuthenticationFactors, InputModalities} represents the set of authentication scheme characteristics, XRP = {XRDimension, DeviceType, InteractionMechanisms} represents the set of XR properties, and SCU = {AuthenticationConditions, EnvironmentalFactors} represents the scenarios and contexts of use. Regarding the output dimensions, MMT = {Security, Usability, Privacy, Deployability} represents the set of metrics, measures, and tools, P = {EnvironmentalConditions, XRDeviceSetup, UserComfortAndHealth} represents the set of precautions, and SR = {EvaluationProtocols, ComparisonInsights} represents the standardized reporting aspects.
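The following sketch shows how the input space X = {ASC, XRP, SCU} of MF could be expressed as Python types. The set and member names mirror the definition above; the class layout and the stubbed mapping body are our assumptions, not a reference implementation.

from dataclasses import dataclass
from enum import Enum

class AuthenticationFactor(Enum):
    KNOWLEDGE = "knowledge"
    BIOMETRIC = "biometric"
    POSSESSION = "possession"
    MULTI_FACTOR = "multi-factor"

@dataclass
class ASC:  # authentication scheme characteristics
    authentication_factors: list[AuthenticationFactor]
    input_modalities: list[str]        # e.g., "hand", "eye", "voice"

@dataclass
class XRP:  # XR properties
    xr_dimension: str                  # "AR", "VR", or "MR"
    device_type: str                   # e.g., "standalone HMD", "tethered HMD"
    interaction_mechanisms: list[str]  # e.g., "eye-tracking", "controllers"

@dataclass
class SCU:  # scenarios and contexts of use
    authentication_conditions: dict    # e.g., frequency, content sensitivity
    environmental_factors: dict        # e.g., space (private/public), lighting

def mf(asc: ASC, xrp: XRP, scu: SCU) -> dict:
    """MF: X -> Y, returning the three output dimensions as empty stubs."""
    return {
        "MMT": {"security": [], "usability": [], "privacy": [], "deployability": []},
        "P": {"environmental_conditions": [], "xr_device_setup": [], "user_comfort_and_health": []},
        "SR": {"evaluation_protocols": [], "comparison_insights": []},
    }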

For instance, when a researcher wants to evaluate a knowledge-based authentication scheme leveraging eye tracking in a workplace setting, they input the dimensions that describe it: the authentication factor of the scheme (i.e., knowledge), the input modality (i.e., eye tracking), and the scenario and context of use (i.e., authenticating for a training session in a work environment where others may be around). The framework’s output dimensions then encompass metrics for security (e.g., use a threat model based on observation attacks and measure how often the authentication secret is successfully guessed), for usability (e.g., measure time to authenticate as a metric for efficiency), for privacy (e.g., consider whether data from the authentication process is stored, and how and where it is processed), and for deployability (e.g., estimate eye tracking from head pose and measure its accuracy), alongside precautions for optimizing environmental conditions (e.g., ensure that lighting conditions will not impact the experiment), for XR device setup (e.g., calibration is required), and for user comfort and health (e.g., if the user wears glasses, ensure eye tracking is not hindered). Standardized reporting protocols are suggested based on good practice and on other works in the field, facilitating comparison and insights into authentication scheme efficacy.
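Under the same assumptions, this worked example could be captured as a single self-contained input/output record; the values restate the example from the text, while the key layout (and the assumed device type) are hypothetical.

# The worked example restated as one record. Values come from the text
# above; the dict layout and the device type are illustrative assumptions.
example = {
    "input": {
        "ASC": {"factor": "knowledge", "modality": "eye tracking"},
        "XRP": {"device_type": "HMD with eye tracker (assumed)"},
        "SCU": {"scenario": "training session at work, bystanders may be around"},
    },
    "output": {
        "MMT": {
            "security": "observation-attack threat model; count successful guesses of the secret",
            "usability": "time to authenticate (efficiency)",
            "privacy": "whether, how, and where authentication data is stored and processed",
            "deployability": "estimate eye tracking from head pose; measure accuracy",
        },
        "P": {
            "environmental_conditions": "ensure lighting does not impact the experiment",
            "xr_device_setup": "eye-tracker calibration required",
            "user_comfort_and_health": "check eye tracking is not hindered by glasses",
        },
        "SR": "report per suggested protocol; compare with related studies",
    },
}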


5 DISCUSSION, LIMITATIONS, AND FUTURE WORK

In this LBW, we presented the first steps towards a universal and standardized evaluation framework for user authentication schemes in XR. To define the framework’s dimensions, we conducted interviews with USP researchers and extracted the main themes, which were then used to feed into the shaping of the framework. By proposing a structured framework, we offer a systematic approach to assessing user authentication schemes, which is essential given the diverse modalities and contexts inherent in XR applications. By streamlining input dimensions, output guides, and metrics, the framework ensures clarity and consistency across evaluations, facilitating comparisons and enhancing the reproducibility of research findings. This aspect is crucial for advancing knowledge and fostering collaboration within the research community. We envision USP researchers of diverse experience and needs using the guidelines and checklists provided by such a framework to perform effective evaluation studies. In the same direction, adopting standardized reporting protocols and comparing evaluation results with similar studies enhance the transparency and rigor of evaluations, enabling researchers to communicate their findings and facilitate knowledge exchange.

Our approach has limitations, a main one being the small number of participants; however, the participants were experts and provided detailed insights into complex issues within user authentication in XR. We should also note that the experts’ perspectives might differ from those of newcomers to the USP domain; in future research we need to engage such participants too, to mitigate potential expert bias. The immediate next steps include the refinement and validation of the framework following a three-step approach: i) increase the sample size of our study and include participants with diverse expertise (e.g., newcomers and experienced USP researchers); ii) use the framework to evaluate schemes that have been developed and works that have been published, assessing the effectiveness of the framework; and iii) use the framework to evaluate new authentication schemes in XR.


6 CONCLUSION

In conclusion, this LBW is a first step towards addressing the need for an evaluation framework for user authentication schemes in XR. Through a small-scale user study involving USP experts, we aimed to gain insights and perspectives to shape a framework that balances usability, security, privacy, and deployability considerations in XR authentication. By fostering standardization, transparency, and reproducibility, the framework serves as a valuable tool for researchers and practitioners seeking to design, evaluate, and deploy user authentication schemes in XR environments, thereby advancing the state of the art in immersive technology security and usability. Future research will focus on refining the framework based on empirical insights and feedback from various stakeholders.


ACKNOWLEDGMENTS

The work was supported by the University of Warwick Chancellor’s International Scholarship. We gratefully acknowledge the experts who participated in this study for their valuable contributions and cooperation.

Footnotes

1. By Extended Reality (XR), we refer to the umbrella term that covers Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR).
Supplemental Material

Talk video: 3613905.3651021-talk-video.mp4 (26.1 MB)

References

  1. Melvin Abraham, Pejman Saeghe, Mark Mcgill, and Mohamed Khamis. 2022. Implications of XR on Privacy, Security and Behaviour: Insights from Experts. In Nordic Human-Computer Interaction Conference (Aarhus, Denmark) (NordiCHI ’22). Association for Computing Machinery, New York, NY, USA, Article 30, 12 pages. https://doi.org/10.1145/3546155.3546691
  2. Yasir Ali and Habib Ullah Khan. 2022. GTM Approach towards Engineering a Features-oriented Evaluation Framework for Secure Authentication in IIoT Environment. Computers & Industrial Engineering 168 (2022), 108119. https://doi.org/10.1016/j.cie.2022.108119
  3. Jodi Aronson. 1994. A Pragmatic View of Thematic Analysis. The Qualitative Report 2, 1 (1994), 1–3.
  4. Ángel Alexander Cabrera, Erica Fu, Donald Bertucci, Kenneth Holstein, Ameet Talwalkar, Jason I. Hong, and Adam Perer. 2023. Zeno: An Interactive Framework for Behavioral Evaluation of Machine Learning. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (Hamburg, Germany) (CHI ’23). Association for Computing Machinery, New York, NY, USA, Article 419, 14 pages. https://doi.org/10.1145/3544548.3581268
  5. Sungyong Cha, Seungsoo Baek, Sooyoung Kang, and Seungjoo Kim. 2018. Security Evaluation Framework for Military IoT Devices. Security and Communication Networks 2018 (July 2018), 1–12. https://doi.org/10.1155/2018/6135845
  6. Abhinav Dube, Dhruwaman Singh, Rajesh Kumar Asthana, and Gurjit Singh Walia. 2020. A Framework for Evaluation of Biometric Based Authentication System. In 2020 3rd International Conference on Intelligent Sustainable Systems (ICISS). IEEE, 925–932.
  7. Jérémy Frey, Maxime Daniel, Julien Castet, Martin Hachet, and Fabien Lotte. 2016. Framework for Electroencephalography-based Evaluation of User Experience. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (San Jose, California, USA) (CHI ’16). Association for Computing Machinery, New York, NY, USA, 2283–2294. https://doi.org/10.1145/2858036.2858525
  8. Frank Hessel, Lars Almon, and Flor Álvarez. 2020. ChirpOTLE: A Framework for Practical LoRaWAN Security Evaluation. In Proceedings of the 13th ACM Conference on Security and Privacy in Wireless and Mobile Networks (WiSec ’20). ACM. https://doi.org/10.1145/3395351.3399423
  9. Axel Hoesl, Phuong Anh Vu, Christina Rosenmöller, Florian Lehmann, and Andreas Butz. 2017. Towards an Evaluation Framework: Implicit Evaluation of Sense of Agency in a Creative Continuous Control Task. In Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems (Denver, Colorado, USA) (CHI EA ’17). Association for Computing Machinery, New York, NY, USA, 1686–1693. https://doi.org/10.1145/3027063.3053072
  10. N. Jyothi and Rekha Patil. 2022. A Fuzzy-based Trust Evaluation Framework for Efficient Privacy Preservation and Secure Authentication in VANET. Journal of Information and Telecommunication 6, 3 (March 2022), 270–288. https://doi.org/10.1080/24751839.2022.2040898
  11. Dragan Korać and Dejan Simić. 2019. Fishbone Model and Universal Authentication Framework for Evaluation of Multifactor Authentication in Mobile Environment. Computers & Security 85 (Aug. 2019), 313–332. https://doi.org/10.1016/j.cose.2019.05.011
  12. Pawan Kumar and Dinesh Kumar. 2022. DoMT: An Evaluation Framework for WLAN Mutual Authentication Methods. In Mobile Radio Communications and 5G Networks, Nikhil Marriwala, C. C. Tripathi, Shruti Jain, and Dinesh Kumar (Eds.). Springer Nature Singapore, Singapore, 345–363. https://doi.org/10.1007/978-981-16-7018-3_26
  13. Claire McCallum, John Rooksby, Parvin Asadzadeh, Cindy M. Gray, and Matthew Chalmers. 2019. An N-of-1 Evaluation Framework for Behaviour Change Applications. In Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems (Glasgow, Scotland, UK) (CHI EA ’19). Association for Computing Machinery, New York, NY, USA, 1–6. https://doi.org/10.1145/3290607.3312923
  14. Ziad Monla, Ahlem Assila, Djaoued Beladjine, and Mourad Zghal. 2023. A Conceptual Framework for Maturity Evaluation of BIM-Based AR/VR Systems Based on ISO Standards. In Extended Reality, Lucio Tommaso De Paolis, Pasquale Arpaia, and Marco Sacco (Eds.). Springer Nature Switzerland, Cham, 139–156. https://doi.org/10.1007/978-3-031-43401-3_9
  15. Max Pascher, Felix Ferdinand Goldau, Kirill Kronhardt, Udo Frese, and Jens Gerken. 2023. AdaptiX – A Transitional XR Framework for Development and Evaluation of Shared Control Applications in Assistive Robotics. arXiv:2310.15887 [cs.HC]
  16. Elaine M. Raybourn, William A. Stubblefield, Michael Trumbo, Aaron Jones, Jon Whetzel, and Nathan Fabian. 2019. Information Design for XR Immersive Environments: Challenges and Opportunities. Springer International Publishing, 153–164. https://doi.org/10.1007/978-3-030-21607-8_12
  17. Syed Rizvi, Jungwoo Ryoo, John Kissell, William Aiken, and Yuhong Liu. 2017. A Security Evaluation Framework for Cloud Security Auditing. The Journal of Supercomputing 74, 11 (May 2017), 5774–5796. https://doi.org/10.1007/s11227-017-2055-1
  18. Nattamon Srithammee and Prajaks Jitngernmadan. 2023. Holistic Evaluation Framework for VR Industrial Training. In Proceedings of the 19th International Conference on Computing and Information Technology (IC2IT 2023), Phayung Meesad, Sunantha Sodsee, Watchareewan Jitsakul, and Sakchai Tangwannawit (Eds.). Springer Nature Switzerland, Cham, 171–182. https://doi.org/10.1007/978-3-031-30474-3_15
  19. Sophie Stephenson, Bijeeta Pal, Stephen Fan, Earlence Fernandes, Yuhang Zhao, and Rahul Chatterjee. 2022. SoK: Authentication in Augmented and Virtual Reality. In 2022 IEEE Symposium on Security and Privacy (SP). IEEE, 267–284.
  20. Chris Warin and Delphine Reinhardt. 2022. Vision: Usable Privacy for XR in the Era of the Metaverse. In Proceedings of the 2022 European Symposium on Usable Security (Karlsruhe, Germany) (EuroUSEC ’22). Association for Computing Machinery, New York, NY, USA, 111–116. https://doi.org/10.1145/3549015.3554212
  21. Haichun Zhang, Yuqian Pan, Zhaojun Lu, Jie Wang, and Zhenglin Liu. 2021. A Cyber Security Evaluation Framework for In-Vehicle Electrical Control Units. IEEE Access 9 (2021), 149690–149706. https://doi.org/10.1109/access.2021.3124565
