The Illusion of Security: How Perception is More Powerful Than Reality

An old friend and colleague of mine prepared a course on the human factors of cybersecurity at the university where he and I teach. I had the option of preparing this course myself, but I had a hard time deciding what I would cover in a semester course. There are certainly human factors issues in the design of security monitoring systems, but security has more to do with how people think and what they notice. We know that the vast majority of security breaches (at least the ones we have detected) come from acts of deception that people fail to detect, like an email that looks official but is actually a phishing expedition.

Security is often more perception than reality. For example, many people will refrain from setting up an account with a website because they “don’t want them to have [their] credit card information,” but will then check out as a guest and enter their credit card information anyway. They trust there is a difference (and there usually is: a guest checkout typically doesn’t store the card for later use).

But security is often pure perception. Take the case of a neighbor who brought me their Mac to look at. As part of an update, the system asked them for their password. They entered every password they had ever used, but none of them worked, so they cancelled the update. And because they were worried about what would happen if they shut the computer down without knowing the password, they decided to leave it running. For months. Finally, they asked me if I could help them recover or reset the password. Knowing something about human behavior around setting passwords, I forced the system to prompt for the password and then just pressed Enter. It let me in. They had never set a password. But because the system prompted them for one rather than just letting them in, they assumed a password existed and that they could not get in without it.

[Image: a laptop computer with a dial lock in the center of the screen]

Another neighbor recently brought me two devices that they, too, could not recall the passwords for. One was an iPad and one was a Kindle. Apple takes privacy and the potential theft of its devices very seriously, and makes resetting passwords very difficult. If you enter an incorrect password too many times, the device locks you out. And you cannot simply reset the device to factory settings and set it up as new (thus clearing the password) if “Find My iPhone” is running. The device does, however, recognize a computer it was once associated with. Luckily for them, I was able to get hold of that computer and reset the iPad to factory settings, though they lost all their data in the process. Without that computer, they would have had to contact Apple, and I’m not sure Apple could have helped them bypass the security, or would have been willing to. The Kindle was a totally different story.

When prompted for a password on a Kindle, you can enter a special master security code to reset the password. A master code you can find with a quick search of the internet. Provided to you by Kindle. (My son referred to this as the child lock of security.)

All of these events remind me of my first experience with the perception of security, and of my favorite story to tell in class.

Back in the 1980s, I was working as a subcontractor on a project at IBM. We were using IBM PCs running DOS. DOS was a command-line operating system with a single system prompt that told you it was waiting for a command. The user could set this prompt (though few people ever did) to any number of things: the date, the time, the current directory, and so on. The user could even make it display a message of their choosing. As an inside joke, I made my prompt read “Greetings, Professor Falken. Would you like to play a game?” (For anyone unfortunate enough to have missed it, this is a line from the movie WarGames. I highly recommend it, but do see the original with Matthew Broderick.)
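For anyone who never used DOS, the mechanism was the PROMPT command, usually placed in AUTOEXEC.BAT so it took effect at every boot. Something along these lines (the syntax is from memory, so treat it as a sketch):

    REM Show the current drive and directory followed by ">", e.g. C:\PROJECT>
    PROMPT $P$G

    REM Show the date and the time instead
    PROMPT $D $T

    REM Or display any literal text at all; this is roughly what I set
    PROMPT Greetings, Professor Falken. Would you like to play a game?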

After several weeks, I had essentially forgotten all about this prompt. Then, one Monday morning, someone from IBM stopped by my office, a bit upset. He had tried to get onto my machine over the weekend and told me he couldn’t get past my “game program.” “I typed yes. I typed no. I tried Ctrl-C, Ctrl-X, Ctrl-Y,” he told me. “I started entering the names of games. But all it would do was say ‘bad command or filename.’” (“Bad command or filename” was DOS’s standard response when you entered something it didn’t recognize.) Not realizing that he was simply typing commands at the system prompt rather than playing a game, he finally gave up and went home.

One of my colleagues, also a subcontractor, asked why someone from IBM was trying to use my computer. I told him that IBM routinely checked all of the subcontractor computers to make sure we weren’t doing anything improper. “I don’t want them on my computer,” he said and left my office.

A few minutes later he called and told me to come look at his computer. When I arrived, it was powered off. I booted it, and when the screen flickered to life, it read: “Press any key to begin formatting hard drive.”

“There,” he said. “They won’t even try to get into my machine.”

I suspect he was right.