Making Security Less Intrusive

Normally we think of, e.g., PIN code locks as something needed every time you perform a certain step. By using other sources of data, some researchers have proposed requiring things like PIN codes only when there’s a chance the user is no longer the legitimate one.

Granted, you lose a little security up front. But from what I’ve seen, users turning off or intentionally bypassing security measures is a major source of real-world vulnerabilities… we all know about tailgating and propped doors, for example. Some building doors only require access cards at night for good reason: if they needed them all day, people would prop the doors and they’d stay propped all night.

There’s an even bigger benefit, though. Thinking like this adds feedback to the design process in a way that’s fairly new. It’s now just as much about what the user has to do as it is about whether the system is secure.

We see counter-examples of PITA security measures all over the place. There’s a whole field of study, security and usability, intended to avoid them. But intelligent feedback is rare.

Why is feedback so critical? Without it, errors accumulate. Think about human nature. There’s both positive (“you did good”) and negative (“that was a bad idea”) feedback. Anyone who’s ever read of Roman emperors or seen Apocalypse Now will understand how the human mind distorts given an absence of negative feedback.

By comparison, keeping a daily journal improves your productivity and effectiveness far more than you’d expect, precisely because you’ve created your own feedback.

(On an organizational scale, of course, we call this accountability. It corrects out the errors while they’re still minor. Interestingly, this applies even when no public accountability is possible. The conspiracy buffs and shadowy characters here will appreciate that a culture of scary strong internal accountability was the [arguably] key innovation of the Bavarian Illuminati that allowed them to dominate their sphere of influence. [yes, they did exist, in the 1770s. No, I don’t endorse that mindset. Stick to keeping a diary.])

Coming full circle, we’ve now got at least one way of designing security measures that considers the user’s needs at the same time as it considers possible vulnerabilities. Feedback used to be something that was bolted on afterwards (“measure how many people use it and revise the design”); now it’s a core part of the algorithm.

“The findings from a recent study carried out by Microsoft Research and the University of South Carolina suggest that we should be asking ourselves when to require authentication rather than whether to require authentication.

The research puts forth the idea of tailoring authentication requirements on mobile devices, by application or otherwise, so that users are only prompted for a password or other authentication method when it’s necessary. In this way, the study’s authors believe, users would be required to authenticate themselves less often, therefore, lowering the barrier of entry for those who currently use no authentication methods at all.

For example, the authors argue that if a user hangs up after a call and places the device in their pocket, where it remains, then they should not be required to re-authenticate the next time they attempt to access the device, because that person hasn’t truly lost contact with it. On the other hand, if a user puts the phone down somewhere, actually losing contact with it, then they should be required to re-authenticate before accessing the device again. Furthermore, the authors propose valuing the data within a given device. If a system has strong confidence in a user’s authenticity, then the system should allow that user to access any content without authentication. If the system has little confidence, then that user should still be able to access low-value content (news, weather, etc.) without authentication but be required to authenticate in order to view high-value content (email, banking, etc.).[…]

The actual experiment upon which the research was based demonstrated a 42 percent decrease in authentication prompts and no unauthorized accesses. Beyond that, users’ authenticity was over-valued in just eight percent of the trials.

On a more human level, the study’s respondents who already use authentication methods indicated that they would prefer that about half of their applications be accessible without a lock. Among those without authentication, they responded to the contrary, saying that they would prefer locks for about half of their applications. In the end though, most participants preferred a three-tiered authentication system, a highest-value tier, requiring complicated authentication, a mid-level tier, requiring simple authentication, and a bottom-tier, requiring no authentication.”
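The policy described in the quoted summary, sketched as an added example (not code from the study or from this post): confidence survives while the user stays in contact with the device, is discarded when contact is lost, and is checked against a three-tier app classification. The app-to-tier mapping, the event names and the conventional-lock baseline used for comparison are all constructed assumptions.

```python
from enum import IntEnum

class Auth(IntEnum):
    NONE = 0    # bottom tier: no authentication
    SIMPLE = 1  # mid tier: simple authentication
    STRONG = 2  # highest-value tier: complicated authentication

# Constructed app-to-tier mapping (assumption, not taken from the study).
APP_TIER = {"weather": Auth.NONE, "news": Auth.NONE,
            "messages": Auth.SIMPLE, "email": Auth.STRONG, "banking": Auth.STRONG}
# Hypothetical sensor events after which the user is still in contact with the device.
CONTACT_KEPT = {"pocketed", "taken_from_pocket"}

class ProgressiveLock:
    def __init__(self):
        self.trusted = Auth.NONE  # strongest authentication still believed valid

    def on_event(self, event):
        # Fail closed: anything other than a known "contact kept" event
        # (e.g. "put_down", or an unrecognised event) discards all confidence.
        if event not in CONTACT_KEPT:
            self.trusted = Auth.NONE

    def required(self, app):
        """Authentication to prompt for before opening app (Auth.NONE = no prompt)."""
        need = APP_TIER.get(app, Auth.STRONG)  # unknown apps get the highest tier
        return Auth.NONE if self.trusted >= need else need

    def authenticated(self, method):
        self.trusted = max(self.trusted, method)

def replay(trace):
    """Count prompts for (progressive policy, conventional lock) over a trace."""
    lock, progressive, conventional, locked = ProgressiveLock(), 0, 0, True
    for kind, name in trace:
        if kind == "event":
            lock.on_event(name)
            locked = True          # a conventional lock re-locks on every event
        else:                      # kind == "open"
            need = lock.required(name)
            if need != Auth.NONE:
                progressive += 1
                lock.authenticated(need)  # assume the legitimate user succeeds
            if locked:
                conventional += 1
                locked = False
    return progressive, conventional

# Constructed usage trace.
TRACE = [("open", "email"), ("event", "pocketed"), ("open", "messages"),
         ("event", "put_down"), ("open", "weather"), ("open", "banking"),
         ("event", "pocketed"), ("open", "email")]
```

On this constructed trace `replay(TRACE)` gives two prompts under the progressive policy against four under a lock that re-arms on every event, and banking still prompts after the device has been put down.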

