Literature Review: Evaluating the Impact of Adaptive Multi-Factor Authentication on Usability and Perceived Security in Enterprise Environments

This chapter critically reviews existing research related to adaptive multi-factor authentication, usability, perceived security, and enterprise security practices. The purpose of the review is to establish a clear theoretical and empirical foundation for my project titled “Evaluating the Impact of Adaptive Multi-Factor Authentication on Usability and Perceived Security in Enterprise Environments.” In doing so, the chapter synthesises past work, identifies research gaps, and formulates a coherent justification for the study. The discussion is organised into thematic areas that reflect the key constructs of the research: authentication systems, adaptive and context-aware authentication, usability and user experience, perceived security and trust, and enterprise-focused security behaviour.

Introduction

The escalating landscape of cyber threats, characterised by increasingly sophisticated attacks such as phishing, ransomware, and identity theft, underscores the critical need for robust identity verification mechanisms in both personal and professional spheres. While traditional authentication paradigms, primarily reliant on static passwords, have long formed the bedrock of digital security, their inherent vulnerabilities have become glaringly apparent. The challenge lies not merely in fortifying defences but in doing so without inadvertently creating barriers that impede legitimate user access and productivity. This ongoing tension between security efficacy and user experience is particularly pronounced within complex enterprise environments, where employees navigate a multitude of systems daily. This literature review delves into the evolution of authentication from basic passwords to advanced multi-factor and adaptive systems, critically examining their impact on usability and perceived security within organisational contexts. The objective is to map the current state of research, highlight prevalent themes, and articulate specific gaps that necessitate further investigation, thereby establishing a strong theoretical and empirical rationale for the proposed research.

Background to Authentication Systems and MFA

Authentication is a fundamental component of information security, designed to verify identities and protect systems from unauthorised access. The history of digital authentication is largely characterised by a continuous effort to overcome the limitations of its most ubiquitous form: the password. Traditional authentication methods—such as passwords—have long been associated with weaknesses including memorability issues, susceptibility to phishing, brute-force attacks, and poor end-user practices (Das et al., 2020). Users often opt for weak, easily guessable passwords or reuse them across multiple services, driven by the cognitive burden of remembering numerous complex strings (Das et al., 2020). Such practices significantly undermine security, making systems vulnerable to a wide range of cyber threats.

The recognition of these inherent flaws led to the widespread adoption of Multi-Factor Authentication (MFA). MFA emerged as a stronger approach by requiring two or more independent authentication factors, typically categorised as:

  • Knowledge: Something the user knows (e.g., password, PIN).
  • Possession: Something the user has (e.g., a physical token, smartphone with an authenticator app).
  • Inherence: Something the user is (e.g., biometric data like fingerprint, facial recognition).

This layered security model significantly enhances protection, as an attacker would need to compromise multiple, distinct factors to gain unauthorised access. While MFA markedly improves security, its implementation often introduces increased user friction, inconvenience, and workflow disruption. Research consistently shows that users perceive MFA as time-consuming, cognitively burdensome, and intrusive, especially in high-frequency enterprise environments where employees authenticate multiple times a day across various applications (Das et al., 2020). The perceived added effort leads many users to view MFA as a “chore” rather than a beneficial security measure, hindering adoption and encouraging circumvention (Das et al., 2020). A systematic literature review on user perception of MFA technologies confirms these concerns, noting low adoption rates and widespread avoidance even among users for whom MFA is mandatory (Das et al., 2022).

The tension between robust security and acceptable usability is a significant challenge for organisations. Quantifying the costs of enhanced security, such as MFA, involves considering factors beyond direct implementation expenses. Studies have shown that MFA introduces quantifiable costs, including increased login failures and the subsequent time legitimate users spend away from IT applications before successfully re-authenticating (Hastings et al., 2023, 2025). Empirical measurements of mandatory Two-Factor Authentication (2FA) implementations in large organisations, such as universities, revealed multiplicative effects of device remembrance, fragmented login services, and authentication timeouts on user burden (Reynolds et al., 2020). Although this burden is broadly comparable to other compliance and risk-management time requirements, the user experience varies considerably across individuals (Reynolds et al., 2020). This intricate balance between enhancing security and ensuring a seamless, productive user experience laid the groundwork for innovations in adaptive and context-aware authentication. The core idea was to find methods that could offer strong security without imposing a uniform, high-friction experience on all users in all situations.

Adaptive and Context-Aware Authentication

Adaptive or risk-based authentication represents a significant evolution in authentication paradigms, dynamically adjusting authentication requirements based on contextual factors. Instead of applying uniform authentication requirements, adaptive MFA tailors security levels to the risk profile of each login attempt (Wiefling et al., 2020). This approach leverages a wide array of contextual information, including user location, device characteristics, behavioural patterns, network conditions, and time of access, to assess the risk associated with a particular login attempt. For example, a login from an unfamiliar device in a foreign country might trigger additional authentication steps, whereas a login from a known device within the corporate network might require only a single factor. This intelligent approach moves beyond a one-size-fits-all model to a more nuanced, risk-sensitive one.
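
To make this mechanism concrete, the following minimal sketch (in Python) shows how contextual signals might be combined into a risk score that determines the authentication requirement for a single login attempt. The signal names, weights, and thresholds are illustrative assumptions introduced for this example; they do not reproduce any specific vendor implementation or the systems evaluated in the studies cited here.

```python
from dataclasses import dataclass

@dataclass
class LoginContext:
    """Contextual signals observed for one login attempt (illustrative)."""
    known_device: bool       # device previously seen for this account
    usual_country: bool      # geolocation matches the user's typical country
    corporate_network: bool  # request originates from the corporate network
    typical_hours: bool      # login falls within the user's usual working hours

# Illustrative weights; a production system would derive these from historical
# data or a trained risk model rather than hard-coded constants.
RISK_WEIGHTS = {
    "unknown_device": 0.40,
    "unusual_country": 0.30,
    "external_network": 0.20,
    "unusual_hours": 0.10,
}

def risk_score(ctx: LoginContext) -> float:
    """Combine contextual signals into a score between 0 (low risk) and 1 (high risk)."""
    score = 0.0
    if not ctx.known_device:
        score += RISK_WEIGHTS["unknown_device"]
    if not ctx.usual_country:
        score += RISK_WEIGHTS["unusual_country"]
    if not ctx.corporate_network:
        score += RISK_WEIGHTS["external_network"]
    if not ctx.typical_hours:
        score += RISK_WEIGHTS["unusual_hours"]
    return score

def required_authentication(ctx: LoginContext) -> str:
    """Map the risk score to an authentication requirement (thresholds are illustrative)."""
    score = risk_score(ctx)
    if score < 0.2:
        return "password_only"      # low risk: no additional friction
    if score < 0.6:
        return "password_plus_otp"  # medium risk: step up to a second factor
    return "deny_and_notify"        # high risk: block and alert the security team

# Known device, corporate network, usual hours: no step-up required.
print(required_authentication(LoginContext(True, True, True, True)))
# Unknown device from an unusual country, outside working hours: blocked.
print(required_authentication(LoginContext(False, False, False, False)))
```

The point of the sketch is the policy structure rather than the particular numbers: the same credentials can yield anything from a frictionless login to an outright block depending on context, which is precisely the risk-sensitive behaviour described above.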

Existing studies highlight several key benefits of adaptive authentication:

  • Improved Security through Dynamic Response: Adaptive systems enhance security by continuously monitoring events, user behaviour, and network traffic in real time. They can detect anomalies and respond dynamically by escalating authentication requirements when risk signals are high. This continuous monitoring and dynamic response mechanism offers proactive protection against evolving threats (Borah, 2025).
  • Reduced User Friction in Low-Risk Scenarios: A primary advantage of adaptive MFA is its potential to significantly reduce user friction. Relaxing authentication requirements in low-risk scenarios streamlines the user experience without compromising overall security (Wiefling et al., 2020). Risk-based authentication has been found to be perceived as more usable than traditional 2FA variants, offering a more palatable security experience for users in various use cases (Wiefling et al., 2020, 2022). This is achieved by adjusting the rigour of MFA based on live risk evaluations that consider factors such as user behaviour, device status, and geographical location (Kandula et al., 2024).
  • Enhanced Organisational Security Posture: Adaptive security frameworks can dynamically adjust security measures to the current threat landscape, thereby contributing to a more robust organisational security posture (Dorairajan, 2024). Implementations using adaptive architectures based on machine learning algorithms have been shown to detect probable security threats significantly faster and to reduce false-positive alarms by a considerable margin compared to static authentication systems (Koshiy, 2025). Such systems also demonstrate a reduction in successful account compromise events while simultaneously increasing user satisfaction rates (Koshiy, 2025). This makes adaptive MFA a scalable solution recommended by government agencies to strengthen password-based authentication (Wiefling et al., 2022).

Nevertheless, the literature reveals several challenges associated with the implementation and acceptance of adaptive authentication:

  • Lack of Transparency and User Distrust: The dynamic, often opaque nature of adaptive systems can foster user distrust. When authentication requirements change without a clear explanation, users may not understand the rationale behind the prompts, potentially leading to feelings of inconsistency and a lack of control (Wiefling et al., 2020). This lack of transparency can hinder user acceptance and lead to questions about the system’s fairness.
  • Privacy Concerns: Adaptive MFA relies heavily on collecting and analysing contextual and behavioural data, such as user location, device information, and interaction patterns (Wiefling et al., 2021). This extensive data collection raises significant privacy concerns, as users may be uncomfortable with the amount of personal information being tracked and processed. Balancing the need for data to inform risk assessments with user privacy expectations is a critical challenge for system designers (Wiefling et al., 2021). The influence of people present in a context can also affect the perceived risk of device misuse, further complicating privacy in shared environments (Miettinen et al., 2014).
  • Behavioural Variability and False Positives: A significant challenge lies in managing the variability of legitimate user behaviour. Natural changes in user habits due to role changes, health, or environmental modifications can be misinterpreted by adaptive systems as anomalies, leading to false positives (Borah, 2025). Frequent false positives can frustrate users, diminish trust, and encourage them to bypass the system. Therefore, effective drift management and sophisticated adaptation mechanisms are required to distinguish between legitimate behavioural evolution and potential security threats (Borah, 2025); a minimal sketch of such drift-tolerant adaptation is given after this list. Clear communication about system behaviour and transparent appeal processes are crucial for maintaining user trust and cooperation (Borah, 2025).
  • Complexity of Implementation: Implementing adaptive authentication systems, especially in complex enterprise environments, can be challenging. It requires sophisticated infrastructure for data collection, real-time analysis, and policy enforcement, often leveraging advanced technologies like machine learning (Kandula et al., 2024). Edge cases, system failure scenarios, and the need for robust contingency planning with backup authentication mechanisms are critical considerations to ensure continuous operation and appropriate security levels (Borah, 2025).
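
As a minimal sketch of the drift-tolerant adaptation referred to above, the Python example below maintains an exponentially weighted baseline for a single behavioural feature (here, the hour at which a user typically logs in) and flags only sharp deviations as anomalous. All parameter values are illustrative assumptions and are not drawn from the systems described in the cited work.

```python
class BehaviouralBaseline:
    """Track one behavioural feature; flag sharp deviations while absorbing gradual drift."""

    def __init__(self, initial: float, alpha: float = 0.1, threshold: float = 2.5):
        self.mean = initial         # running estimate of the user's typical value
        self.var = 1.0              # running estimate of its variability
        self.alpha = alpha          # adaptation rate: higher values track drift faster
        self.threshold = threshold  # deviations beyond this many standard deviations are anomalous

    def observe(self, value: float) -> bool:
        """Return True if the observation looks anomalous, then adapt the baseline."""
        std = max(self.var ** 0.5, 1e-6)
        anomalous = abs(value - self.mean) / std > self.threshold
        # Exponentially weighted update: gradual, legitimate changes in behaviour
        # are absorbed over time instead of being repeatedly flagged as anomalies.
        self.mean = (1 - self.alpha) * self.mean + self.alpha * value
        self.var = (1 - self.alpha) * self.var + self.alpha * (value - self.mean) ** 2
        return anomalous

baseline = BehaviouralBaseline(initial=9.0)  # the user normally logs in around 09:00
print(baseline.observe(9.5))   # small shift: False, baseline adapts quietly
print(baseline.observe(3.0))   # sudden 03:00 login: True, trigger step-up or review
```

In practice such logic would span many features and feed into a risk score rather than a binary flag, but the sketch captures the distinction the literature draws between legitimate behavioural evolution and genuinely anomalous activity.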

Overall, adaptive MFA represents a promising yet under-examined advancement, particularly in its comprehensive impact on user experience and employee trust across diverse enterprise settings. It holds considerable potential to balance security and usability, but realising that potential requires the challenges above to be addressed through thoughtful design and transparent communication.

Usability and User Experience in Authentication Systems

Usability is a critical component of authentication effectiveness, as even the most technically secure systems will fail if employees find them too inconvenient or difficult to use, leading to avoidance or circumvention (Sasse et al., 2014). Research across Human-Computer Interaction, security usability, and human-centred design consistently emphasises that secure systems must also be usable. The principle of “usable security” dictates that security measures should integrate seamlessly into workflows rather than imposing significant cognitive load or disruption. The field of usable security is still maturing, with studies often focusing on system comparisons rather than establishing robust design guidelines rooted in thorough analyses of user behaviour (Nocera et al., 2023).

Studies consistently demonstrate that poor usability in authentication systems leads to a cascade of negative consequences:

  • Reduced compliance with security policies: Employees may bypass security protocols if they are too cumbersome, opting for insecure workarounds to complete their tasks (Mayer et al., 2017).
  • Increased human errors and insecure workarounds: Frustration with complex authentication processes can lead to errors and the adoption of less secure practices out of expediency (Sasse et al., 2014).
  • Lower productivity and workflow disruption: Authentication tasks, when poorly designed, can place a significant burden on users and disrupt primary tasks, leading to decreased efficiency and increased frustration (Sasse et al., 2014). This impact on productivity is a major concern for organisations, as security measures can inadvertently hinder employees’ ability to perform their core duties (Post & Kagan, 2006).

Authentication usability is commonly evaluated using various metrics, including efficiency (time required to authenticate), effectiveness (authentication success rate), error rates, cognitive load, and user satisfaction. While extensive usability studies exist for traditional authentication methods like passwords, biometrics, and two-factor authentication, there is limited empirical research specifically examining adaptive MFA (Das et al., 2022). A systematic review found that a significant proportion of academic papers on MFA technologies proposed new tools, yet only a small fraction included user evaluation research, highlighting a considerable gap in understanding users’ experience with these advanced systems (Das et al., 2022).
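
As an illustration of how the first three of these metrics could be derived from authentication event logs, the short Python sketch below computes the success rate, the median time to authenticate, and the share of failed or abandoned ceremonies. The log schema and sample records are assumptions made for this example rather than the data format used in the cited studies; cognitive load and satisfaction are typically captured with self-report instruments rather than logs.

```python
from statistics import median

# Hypothetical authentication events; field names are illustrative only.
events = [
    {"user": "u1", "outcome": "success", "duration_s": 6.2},
    {"user": "u1", "outcome": "success", "duration_s": 5.8},
    {"user": "u2", "outcome": "failure", "duration_s": 14.9},
    {"user": "u2", "outcome": "abandoned", "duration_s": 31.0},
    {"user": "u3", "outcome": "success", "duration_s": 7.1},
]

total = len(events)
successes = [e for e in events if e["outcome"] == "success"]

effectiveness = len(successes) / total                              # authentication success rate
efficiency = median(e["duration_s"] for e in successes)             # typical time to authenticate
friction = sum(e["outcome"] != "success" for e in events) / total   # failed or abandoned share

print(f"Effectiveness: {effectiveness:.0%}")
print(f"Median time to authenticate: {efficiency:.1f}s")
print(f"Failed or abandoned ceremonies: {friction:.0%}")
```

Metrics of this kind underpin findings such as the abort and failure rates reported for mandatory 2FA deployments discussed below.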

The “chore” perception of MFA, as identified in qualitative studies, underscores the deep-seated usability issues (Das et al., 2020). Both experts and non-experts express frustration, often because they perceive a lack of additional benefits compared to the added effort. Furthermore, mandatory 2FA implementations have revealed systemic usability challenges. Empirical measurements using tens of millions of operational logs from universities showed that factors such as device remembrance, fragmented login services, and authentication timeouts significantly increase user burden (Reynolds et al., 2020). More than one in twenty 2FA ceremonies were aborted or failed, indicating substantial friction and a varied user experience (Reynolds et al., 2020). This suggests that the current state of many MFA implementations is far from ideal in terms of usability.

However, emerging research on Risk-based Authentication (RBA) suggests a potential path forward. Studies indicate that RBA is generally considered more usable than traditional 2FA variants (Wiefling et al., 2020). This is attributed to its ability to relax authentication requirements in low-risk contexts, thereby reducing unnecessary friction. Despite this, specific usability problems in RBA implementations have been identified, prompting recommendations to mitigate them and achieve broader user acceptance (Wiefling et al., 2020). These problems might relate to the complexities of account recovery, as highlighted by a usability study on passwordless FIDO2 in enterprise settings, where account recovery was a significant concern for professionals (Kepkowski et al., 2023). Long-term user studies comparing fallback authentication schemes also shed light on the usability of different recovery methods, indicating that the choice of recovery mechanism can affect the overall user experience and perceived usability (Lassak et al., 2024). The challenge remains to design adaptive systems that are not only technically sound but also intuitively usable, minimising cognitive load and integrating smoothly into diverse enterprise workflows.

Perceived Security and Trust in Authentication

Beyond technical efficacy, perceived security—users’ subjective belief in the system’s ability to protect them—is increasingly recognised as a critical determinant of adoption, compliance, and overall system effectiveness. Employees may resist, bypass, or actively undermine authentication systems if they do not trust them or perceive them as unnecessarily invasive, regardless of their objective security strength (Sasse et al., 2014). This subjective dimension is heavily influenced by psychological factors and mental models users develop about security mechanisms (Das et al., 2020; Wolf et al., 2018).

Studies show several important patterns in how users perceive security and build trust:

  • Misjudgment of Security Mechanisms: Users often misjudge the actual security of mechanisms, relying more on mental models shaped by personal experience and intuition rather than technical facts (Das et al., 2020). This can lead to a paradox in which technically stronger authentication systems are perceived as less secure when they introduce high friction or confusion. For example, some non-experts do not perceive any additional benefits from MFA, viewing it primarily as an added chore (Das et al., 2020).
  • Influence of Transparency, Control, and Fairness: Trust in any automated system, including authentication, is profoundly influenced by its transparency, the perceived control users have over their data and interactions, and the system’s perceived fairness (Abbass et al., 2015; Linsner et al., 2024). When systems are opaque or their behaviour appears arbitrary, user trust can erode. Key predictors of trust and distrust in information systems at work are reliability (system quality) and credibility (information quality) (Thielsch et al., 2018). Trust in automation is a central variable explaining both resistance to (disuse) and overreliance on (misuse) automated systems (Wischnewski et al., 2023).
  • The Paradox of Complexity: Overly complex authentication processes can paradoxically reduce perceived security, even when they objectively increase actual security (Nocera et al., 2023). This is because complexity often correlates with difficulty of use, leading to user frustration and a belief that the system is flawed or untrustworthy. A comparative usability study of two-factor authentication, however, found that users’ perception of trustworthiness is not necessarily negatively correlated with ease of use, suggesting that perceived security and usability can coexist (Cristofaro et al., 2014).

Perceived security in adaptive MFA remains underexplored. Because authentication challenges vary dynamically based on contextual factors, employees may receive inconsistent authentication prompts. This variability can lead to feelings of inconsistent protection or confusion about why certain prompts are triggered, potentially weakening trust unless the system’s behaviour aligns with user expectations and is clearly communicated. User perceptions of authentication schemes also vary significantly across different contexts of use (e.g., email, online banking, and smart homes), indicating that perceived security is highly situational (Zimmermann et al., 2022). The presence of unfamiliar people in a context, for instance, can increase the perceived risk of device misuse, highlighting that simple location data alone is insufficient for assessing perceived security (Miettinen et al., 2014).

While Risk-based Authentication is considered more usable than 2FA, it is also perceived as comparably secure to 2FA and more secure than password-only authentication (Wiefling et al., 2020). However, the privacy implications of RBA, particularly regarding the collection of potentially sensitive personal data, need careful consideration, as privacy concerns can significantly impact trust (Baig & Eskeland, 2021; Wiefling et al., 2021). The method of re-authentication within RBA also plays a role; while some methods can speed up the process without reducing security perception, novel methods like “magic links” can initially make users significantly more anxious, impacting perceived security and trust (Wiefling et al., 2020). Therefore, understanding how to build and maintain trust in such dynamic and data-intensive systems, especially in enterprise environments where the stakes are high, is paramount. This includes addressing user mental models, ensuring transparency in data use, and managing expectations regarding the adaptive nature of the authentication process.

Enterprise Security Behaviour and Organisational Context

The enterprise environment introduces unique and complex challenges for authentication systems compared with consumer-oriented contexts. Employees frequently log in, access a wide array of sensitive corporate systems, and operate within structured organisational policies and cultural norms. A critical phenomenon observed in this setting is security fatigue, in which repeated, often burdensome authentication demands lead to frustration, shortcuts, disengagement, and ultimately a decline in security compliance (Parkin et al., 2016; Sasse et al., 2014). Security fatigue can cause employees to actively avoid security measures, find ways to circumvent them, or even forgo pursuing innovative ideas due to the perceived “battle with security” (Sasse et al., 2014). This has direct negative impacts on both productivity and morale.

Existing research on enterprise security practices reveals several important findings:

  • Balancing Security Demands and Productivity: Employees constantly balance security demands against their primary task completion and productivity goals (Mayer et al., 2017). When security measures hinder efficiency, individuals may prioritise job performance over strict adherence to security protocols, leading to non-compliance. This tension means that tightening security without considering user workflow can inadvertently reduce overall security as employees seek workarounds (Post & Kagan, 2006). Rewards for productivity goal achievement have been strongly associated with decreased security compliance, underscoring the conflict between these two objectives within organisations (Mayer et al., 2017).
  • Impact of Unusable Authentication on Compliance: Unusable authentication systems directly contribute to non-compliance and policy violations. When authentication processes are too complex, time-consuming, or disruptive, employees are more likely to engage in insecure practices, such as writing down passwords, sharing credentials, or using unapproved workarounds (Sasse et al., 2014). The multiplicative effects of various usability challenges in mandatory 2FA implementations on user burden further highlight how systemic friction can reduce compliance (Reynolds et al., 2020).
  • Influence of Organisational Communication, Culture, and Training: The effectiveness of security systems is heavily mediated by organisational communication, security culture, and training programs. A strong security culture, clear communication about the why behind security policies, and effective training can significantly influence employee trust and acceptance of authentication systems (Borah, 2025). Conversely, a lack of understanding or perceived arbitrary policies can foster distrust and resistance.
  • Zero Trust Architecture and Adaptive Controls: Modern enterprise security increasingly adopts Zero Trust (ZT) principles, which emphasise “never trust, always verify.” Within ZT frameworks, identity-centric policies, strong authentication, risk-adaptive access, and device posture checks are central (Youssef, 2025). Adaptive security frameworks are crucial here, as they enable dynamic adjustment of security measures to the current threat landscape, thereby enhancing organisational performance (Dorairajan, 2024). Adaptive authentication provides real-time monitoring and autonomous resolutions, reduces the attack surface, and shortens resolution times, offering significant benefits over traditional static approaches in large, complex systems.
  • Emerging Technologies and Productivity: The integration of Generative AI (GAI) tools into cybersecurity operations also points towards future trends in enhancing productivity within enterprise security. Studies on Microsoft’s Security Copilot, for instance, demonstrate significant improvements in IT administrators’ accuracy and speed for tasks like sign-in troubleshooting and device policy management (Bono & Xu, 2024). GAI adoption has also been associated with robust productivity gains in Security Operations Centres, including reductions in mean time to resolution and alert processing times (Bono et al., 2024, 2025). While not directly adaptive MFA, these developments suggest that technology can play a significant role in mitigating operational friction in broader security contexts, potentially influencing the design and acceptance of future adaptive authentication systems.

Despite these insights into general enterprise security behaviour, there is little focused research assessing how adaptive MFA, specifically, shapes employee workflows, trust, and perceived security within the unique constraints and cultures of various organisational settings. The ability of adaptive MFA to reduce friction in low-risk scenarios has significant implications for mitigating security fatigue and improving compliance in enterprises, but the extent to which these benefits are realised and the challenges (e.g., privacy concerns, lack of transparency) are managed remains an area requiring dedicated investigation.

Identified Research Gaps

A comprehensive synthesis of the current literature reveals several critical research gaps concerning the impact of adaptive multi-factor authentication on usability and perceived security within enterprise environments, thereby providing a clear justification for the proposed study.

Firstly, there are few empirical studies specifically examining adaptive MFA in real-world enterprise environments. While general MFA and RBA have received attention, much of the existing research focuses on technical performance metrics or theoretical models rather than in-depth user experience evaluations within diverse organisational contexts (Das et al., 2022). The systematic literature review indicated a significant dearth of user evaluation research for newly proposed MFA tools (Das et al., 2022). The unique dynamics of enterprise settings—characterised by high-frequency authentication, varying user roles, existing IT infrastructures, and organisational policies—mean that findings from consumer-oriented studies or general MFA research may not be directly transferable. More longitudinal studies are needed to understand the long-term impact of adaptive MFA on user behaviour, compliance, and experience over extended periods, moving beyond short-term lab studies or surveys (Lassak et al., 2024).

Secondly, there is an insufficient examination of perceived security and trust in adaptive authentication systems within enterprises. The dynamic nature of adaptive MFA, where authentication requirements change based on context, can be confusing for users and may lead to inconsistent perceptions of security (Wiefling et al., 2020). Research has highlighted that users often misjudge security based on mental models and perceived transparency rather than technical facts (Das et al., 2020). For adaptive systems, where the “why” behind an authentication challenge might not be immediately apparent, this opacity can erode trust. The balance between perceived trustworthiness and ease of use in adaptive systems needs further exploration, particularly how transparency in the adaptive logic influences user confidence and feelings of control (Abbass et al., 2015; Cristofaro et al., 2014). Furthermore, the privacy implications arising from the extensive data collection required by adaptive systems, such as location and behavioural patterns, introduce a complex interplay with perceived security and trust that remains underexplored in an enterprise context (Wiefling et al., 2021). How employees within a specific organisational culture weigh the benefits of reduced friction against potential privacy invasions is a critical area for investigation.

Thirdly, there is a lack of user-centred research evaluating how dynamic authentication specifically affects employee trust and security behaviour. While the concept of security fatigue and its negative impact on compliance is well documented for static MFA, its interaction with adaptive MFA remains less clear (Sasse et al., 2014). Adaptive MFA promises to reduce friction, thereby potentially mitigating security fatigue, but the mechanisms through which this occurs and the extent of its impact on actual security behaviour (e.g., reduced circumvention, improved policy adherence) require empirical validation. Understanding how employees form trust in a system that constantly “learns” and adapts, and how this trust influences their day-to-day security practices, is vital (Wischnewski et al., 2023). This gap extends to the exploration of psychological factors (e.g., cognitive load, perceived fairness, control) that mediate the relationship between adaptive MFA design and employee security behaviour and compliance (Hancock et al., 2023; Jones & Moncur, 2018).

Finally, there is minimal exploration of the nuanced trade-off between usability and security with adaptive MFA, particularly in enterprise-specific contexts. While the general trade-off is well known, adaptive MFA introduces a new dimension by dynamically managing this balance. Research needs to investigate whether adaptive systems genuinely optimise this trade-off in practice, given the complexities of enterprise environments, accounting for factors such as varying departmental needs, legacy systems, and remote work challenges (Kepkowski et al., 2023). Quantifying this balance, perhaps through metrics such as the Security Friction Quotient (Youssef, 2025), for adaptive MFA in enterprise settings would provide valuable insights into how effectively these systems reduce operational friction while maintaining or enhancing security.
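
To indicate how such quantification might look in practice, the sketch below combines a friction term and a security term into a single index and compares a static policy with an adaptive one. This is emphatically not the Security Friction Quotient defined by Youssef (2025); the inputs, weights, and formula are hypothetical and serve only to illustrate the kind of trade-off measurement this gap calls for.

```python
def friction_security_index(
    challenge_rate: float,        # share of logins that required a step-up challenge
    median_added_seconds: float,  # median extra time per challenged login
    risky_logins_stopped: float,  # share of known-risky logins challenged or blocked
    w_friction: float = 0.5,
    w_security: float = 0.5,
) -> float:
    """Higher is better: more risky logins stopped for less added user friction.

    Hypothetical index for illustration; not the metric from Youssef (2025).
    """
    # Normalise added time against an assumed 30-second tolerance ceiling.
    time_penalty = min(median_added_seconds / 30.0, 1.0)
    friction_cost = challenge_rate * time_penalty
    return w_security * risky_logins_stopped - w_friction * friction_cost

# Static MFA: every login is challenged and all risky logins are stopped.
print(friction_security_index(challenge_rate=1.0, median_added_seconds=12, risky_logins_stopped=1.0))
# Adaptive MFA: only 15% of logins are challenged, still stopping 95% of risky logins.
print(friction_security_index(challenge_rate=0.15, median_added_seconds=12, risky_logins_stopped=0.95))
```

Under these assumed numbers the adaptive policy scores higher, but whether real deployments achieve comparable figures, and how employees experience them, is exactly the empirical question the proposed study addresses.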

These identified gaps collectively highlight the pressing need for a focused, human-centred evaluation of adaptive MFA within organisational settings. Such research would provide crucial empirical evidence to address current gaps in the literature, inform the design and implementation of more effective, user-centric enterprise security solutions, and ultimately strengthen security posture and improve the employee experience.

References

  • Baig, A.F. and Eskeland, S. (2021) “Security, Privacy, and Usability in Continuous Authentication: A Survey,” Available at: https://www.mdpi.com/1424-8220/21/17/5967 and accessed on the 29th of November 2025.
  • Bono, J. et al. (2025) “Generative AI in Live Operations: Evidence of Productivity Gains in Cybersecurity and Endpoint Management,” Available at: https://arxiv.org/abs/2504.08805 and accessed on the 19th of November 2025.
  • Bono, J., Grana, J. and Xu, A. (2024) “Generative AI and Security Operations Centre Productivity: Evidence from Live Operations,” Available at: https://arxiv.org/abs/2411.03116 and accessed on the 27th of November 2025.
  • Bono, J.V. and Xu, A. (2024) “Randomized Controlled Trials for Security Copilot for IT Administrators,” Available at: https://arxiv.org/abs/2411.01067 and accessed on the 20th of November 2025.
  • Das, S. et al. (2022) “Evaluating User Perception of Multi-Factor Authentication: A Systematic Review,” Available at: https://arxiv.org/abs/1908.05901 and accessed on the 29th of November 2025.
  • Jones, H.S. and Moncur, W. (2018) “The Role of Psychology in Understanding Online Trust,” in Advances in digital crime, forensics, and cyber terrorism book series. Available at: https://www.igi-global.com/gateway/chapter/199885 and accessed on the 19th of November 2025.
  • Miettinen, M. et al. (2014) “ConXsense – Automated Context Classification for Context-Aware Access Control,” Available at: https://arxiv.org/abs/1308.2903 and accessed on the 20th of November 2025.
  • Nocera, F.D., Tempestini, G. and Orsini, M. (2023) “Usable Security: A Systematic Literature Review,” Available at: https://www.mdpi.com/2078-2489/14/12/641 and accessed on the 20th of November 2025.
  • Thielsch, M.T., Meeßen, S.M. and Hertel, G. (2018) “Trust and distrust in information systems at the workplace,” Available at: https://peerj.com/articles/5483/ and accessed on the 23rd of November 2025.
  • Wiefling, S. et al. (2020) “Evaluation of Risk-Based Re-Authentication Methods,” Available at: https://arxiv.org/abs/2008.07795 and accessed on the 23rd of November 2025.
  • Wiefling, S. et al. (2022) “Pump Up Password Security! Evaluating and Enhancing Risk-Based Authentication on a Real-World Large-Scale Online Service,” Available at: https://arxiv.org/abs/2206.15139 and accessed on the 24th of November 2024.
  • Wiefling, S., Dürmuth, M. and Iacono, L.L. (2020) “More Than Just Good Passwords? A Study on Usability and Security Perceptions of Risk-based Authentication,” Available at: https://arxiv.org/abs/2010.00339 and accessed on the 1st of December 2025.
  • Wiefling, S., Tolsdorf, J. and Iacono, L.L. (2021) “Privacy Considerations for Risk-Based Authentication Systems,” Available at: https://arxiv.org/abs/2301.01505 and accessed on the 30th of November 2025.
  • Wischnewski, M., Krämer, N.C. and Müller, E. (2023) “Measuring and Understanding Trust Calibrations for Automated Systems: A Survey of the State-Of-The-Art and Future Directions,” Available at: https://dl.acm.org/doi/full/10.1145/3544548.3581197 and accessed on the 18th of November 2025.
  • Youssef, M. (2025) “Security Friction Quotient for Zero Trust Identity Policy with Empirical Validation,” Available at: https://arxiv.org/abs/2509.22663 and accessed on the 20th of November 2025.
  • Zimmermann, V., Gerber, P.J. and Stöver, A. (2022) “That Depends — Assessing User Perceptions of Authentication Schemes across Contexts of Use,” Available at: https://arxiv.org/abs/2209.13958 and accessed on the 19th of November 2025.