Ep. 2: Social Cybersecurity

Thursday, October 4, 2018

Featuring Dr. Sauvik Das

What does a prohibition-era speakeasy have in common with modern-day cybersecurity? How can ancient biblical tales inform our development of such systems? To finally convince mainstream society to adopt good security behaviors in the future, is it imperative that we look, instead, to our past? School of Interactive Computing Assistant Professor Sauvik Das thinks so.

Transcript:

Ayanna Howard: What does a prohibition-era speakeasy have in common with modern-day cybersecurity? How can ancient biblical tales inform our development of such systems? To finally convince mainstream society to adopt good security behaviors in the future, is it imperative that we look instead to our past? Assistant Professor Sauvik Das thinks so.

His new research points to potentially revolutionary ways to view cybersecurity through the context of centuries of social evolution. We'll chat with Sauvik today to discover the origins of some of these ideas. Why haven't they been adopted in the past? And what can they mean for the future of cybersecurity?

(Instrumental)

I'm School of Interactive Computing Chair Ayanna Howard, and this is the Interaction Hour.

(Instrumental stops)

Sauvik is a recent addition to the faculty here at the School, though he isn't exactly new to Georgia Tech. He completed his undergraduate degree here at Georgia Tech before attending Carnegie Mellon University for his master's and Ph.D. in Human-Computer Interaction. He has published extensively at academic conferences, earning a best paper designation at Ubicomp 2013. He holds two patents -- so he's an academic and he understands commercialization -- and his patents utilize this concept of social cybersecurity, which he's going to tell us about today.

Thank you for joining us, Sauvik.

Sauvik Das: Thank you for having me.

Ayanna: I think this conversation is going to be awesome. So, you know, cybersecurity in general is a nuisance. It's a blessing because it keeps us safe, but it's also just -- oh, it's so irritating sometimes. You go through all these levels of security and you have to remember your pet's name and birthdate and all of it together, because of course you can't make it easy like "Password." So, this is really hard and, I mean, I don't really understand why it's so hard. Why is cybersecurity so difficult for the everyday user?

Sauvik: Well, that's a really great question. It's one that I constantly struggle with myself and it's a question that I want to sort of make strides towards through my own research. But, really what it comes down to is that security is something that is ultimately a secondary concern to users like even me. You know, we don't use our computer to be secure. We use our computer to check our email or to browse Facebook or to play games. And security is the sort of necessary nuisance that all of us have to sort of deal with so that we can do those other things in a way that doesn't compromise our safety or our personal data or the files that we have on our computer, things of that nature.

Ayanna: So, why -- Ok, so I think about I get into my car, I open it up, I have my key, and I unlock it and I drive. I don't have to think about, you know, who created the lock and does my key fit in the lock. It just works.

Sauvik: It's great that you mention that, because in the physical world we have these sort of mechanisms of safety that sort of make sense to us viscerally or emotionally in some way that is really just not evident in the cyber world. For example, what does it mean to add two characters to the end of your password? How is that making you more safe, right? Whereas, when you lock your front door or when you use a key to open your car, you understand at some level -- even if you don't know how to make a lock yourself -- you understand at some level viscerally, emotionally what that's doing for you. In the cyber world, however, end users don't really get that same sort of visceral, tangible, emotional connection to their actions. You do something that's really annoying like enable two-factor authentication and you know intellectually that smart people tell me this is good for me. But what is it actually doing? It's pretty difficult for a lot of people to really understand that.

Ayanna: It is. I remember security, I think about in terms of the home and I put the sign in my yard, and that's a deterrent. My neighbors, they look out, so I tell them, you know, I'm going away, and they will watch my house. Is there an equivalent to this in the cybersecurity world?

Sauvik: In the cybersecurity world right now, we're sort of in a fog of war. We can't really see the threats that are relevant to us and we can't necessarily see what other people are doing to protect themselves. And we don't have outward social cues so much that sort of make it evident that we are people who care about security, and so it's harder for behaviors regarding security to spread. As you mentioned with the sign you put out in the yard, that's not only a cue to potential attackers but it's a cue to your neighbors. It says Ayanna cares about security and this is how she's protecting her house, so maybe we should do something like this too. But in the cybersecurity world, we're all sort of isolated in our own little digital bubbles. We can't really peer into anyone else's bubble to see how they're protecting themselves.

So, human beings are fundamentally social creatures. We look to others for cues on how to act when we're uncertain. Like if we don't know if we should take a particular road, we try to see how many other cars are going down that road. Or if we don't know whether or not we like a particular university, we look for reviews online to see what other students there think about that university.

But in the security and privacy world, it's sort of a thing you have to figure out by yourself and it's a thing that you need to do absent of cues from other people. So, it's really easy to see -- given that it's a secondary concern -- why would anybody sort of go out of their way to figure it out?

Ayanna: So, are there ways then, given that we are inherently social creatures, are there ways moving forward or can we look at ways in the past that people have done this really well, and put it into this cybersecurity domain?

Sauvik: Yeah, you know, I really like that you mention that, because I do think that there's a lot we can learn from our past. Fundamentally, if you think about what cybersecurity is trying to do, it's trying to provide us with some peace of mind so that we can sort of engage in the digital world without having to worry about our personal information leaking and things like that. That's not that new of a problem to humanity, right?

We have from the early get-goes of civilization tried to sort of collectively come up with security protocols in the physical world that can give us the same sort of peace of mind when we're interacting with other people in the physical world. Right, like, we might have locks on our front doors. Or, if we're looking at groups of people who are using slightly more innovative security solutions to identify themselves to each other and things like that.

We can certainly look to past societies and groups like the Freemason society. They famously would apply selective pressure when they would shake somebody else's hand in order to identify themselves as Freemasons. So, if the other person did the same thing, the two of them now knew that they were both of the same society without having to worry about whether or not that identified them to other people. Similarly, early Christians drew a fish in the sand to identify themselves as being Christians in a time when Christianity wasn't particularly widespread and it was potentially dangerous to identify yourself as such. In the Prohibition era of the United States, alcohol was famously banned, of course, and if you as somebody who sought alcohol wanted to go out and drink with your friends, you would have to know of these secret bars that existed as little pop-ups throughout the city. And the way you would identify yourself to those bars -- so that the bar owner would know that you are somebody who's in the know and who is ready to partake in this illegal activity at the time -- you would enter a secret knock. Something like (knocking). And if you did that, then the bouncer would know that, hey, this person knows that we're a bar, so he's safe. But if you didn't do that, and you just knocked, then they would know that, "I don't know who this person is, so maybe we should be wary and take down the operation."

Ayanna: So, I understand secret handshakes and secret knocks. Those were like vibrant times. Now we're in the digital -- I mean, I'm at my computer. How am I supposed to knock on my computer so my friend will let me in or receive my email?

Sauvik: Right, so that's a great question. The beauty of where we're going in the future, and how that connects to our past, is that computing is becoming more and more physicalized. It's no longer just entering stuff on a keyboard on your computer. Pretty soon, with the Internet of Things and ubiquitous computing, we're bridging this divide between what's cyber and what's physical. So, in one of my projects, for example, I used the accelerometer and the gyroscope on a regular smart phone and, borrowing from this analogy of the prohibition-era secret knocks, we created this system that allowed end users and groups to enter a secret knock on a smart phone, which we would capture through the accelerometer information and the microphone information from that smart phone, in order to both authenticate users -- so, knowing when someone in the group is entering that secret knock -- but also to identify individuals in that group. I could tell that not only does this person know the secret knock, but this is Ayanna entering the secret knock, or something like that.

Ayanna: So, I can be on my whatever -- Twitter feed at like @ICatGT and I can do my little secret tap, and that means I can get into my account?

Sauvik: Right, right. And the real beauty of that is that -- let's say you had a family Twitter account or something like that and you wanted parental controls set on it so that your 5-year-old couldn't see certain things on Twitter. You don't have to have your own password and have the 5-year-old have her own password. You guys can all have the same shared family secret knock, but the system would still know who's entering it.

Ayanna: This is cool, and so can you have different levels of security for your family? So, I think about my teenage person versus maybe my spouse or my partner. I want them to have different types of access. Is there a way then to apply security even within my own domain?

Sauvik: Oh, absolutely. This is what's typically called access control in standard security lexicon. So, you can imagine having something like this secret knock authenticator, which I called Thumprint, and using it to identify who the individual in the family is without requiring those individuals to have their own secrets. Then, once that person is identified, you could, for example, as the mother of the family, set access control policies such that once the system knows who it is, you can restrict access to certain things or you can grant access to other things.

Ayanna: So, this sounds easy. This is social in nature. I mean, so what about companies? Facebook -- is this something that Facebook would be interested in?

Sauvik: Right, you know, I would hope so. So, I was in the fortunate position a few years ago that I was able to intern at Facebook. At Facebook, I did a couple of formative studies that really sort of alerted me to this trend of why social is so important. In the first study I did, I basically analyzed how social influence affects the diffusion of the optional security tools that Facebook offered at the time. There were these three security tools that Facebook offered at the time. One was two-factor authentication, which is what they call log-in approvals. One was a notification that was sent to you whenever there was a suspicious log-in in your account -- they call that log-in notifications. And then there was this third security tool, which was a little bit different than the other two called trusted contacts, which allowed you to specify 3-5 of your friends who could vouch for you in case you ever lost access to your account for some reason.

So, I analyzed how does a friend's usage of that security tool, each of those three, affect one's own likelihood to use that tool. Traditionally, if you look at typical social psychology, what you would expect is this nice curve that goes up and to the right. As more of my friends use this thing, I am more likely to use this thing myself. That's what you would expect.

Ayanna: Ok, that makes sense.

Sauvik: But with standard security tools like log-in approvals -- two-factor authentication -- and log-in notifications, we didn't see that nice up-and-to-the-right curve. Instead, we saw a little bit of a check mark, which meant that at that early phase of adoption where only a handful of your friends use log-in approvals or log-in notifications, you are actually less likely to use those same tools yourself. That was a really weird result because that kind of flies in the face of normal intuition. Why would, if you have five friends who use a thing, why are you less likely to use it than if you had no friends who used the thing?

One of the intuitions that we came up with was that -- I want you to close your eyes and think of the first person who would come to mind that would use two-factor authentication on Facebook.

Ayanna: Oh, one of my crazy friends (laughing).

Sauvik: So, it's typically somebody who others perceive as a little bit nutty, a little bit paranoid, a little bit overzealous about security perhaps. And that can create the social stigma around the use of security tools. Because the early adopters are typically people who others perceive as being a little paranoid.

Interestingly, that effect was not present for trusted contacts, which is more social in nature, as you know, because you specify 3-5 of your friends who could help you with your security instead. For that tool, we saw that nice up-and-to-the-right curve that we were expecting, which suggests that if we can design security tools to be more social in nature, then perhaps we won't get this stigma around using cybersecurity tools. So, that was the formative study I did that put me in this direction. Then, later on with Facebook, I was able to experimentally test this hypothesis. Can we make people more likely to use security tools if we make things a little bit more social?

Ayanna: Ok, so if you think about this social stuff -- this was an experiment. So, you had a couple of studies -- how do we really know that it works? Like if I'm thinking about long-term vision, how do we know that the social stuff will work versus, you know, at some point four-stage factor authentication with my cell phone and my computer and all that stuff?

Sauvik: That's a great question, and you know part of answering that question is the nature of my research now. But I did do an experiment with Facebook with 50,000 Facebook users where I showed some subset of them "X of your friends use optional security tools, do you want to use this too?" versus another control condition where we just said that you had the option to use these additional security tools, which is typically how these things are presented to us. There's no social cue associated with them. We found that the social conditions vastly outperformed the control conditions with no social cues. As soon as people were alerted to the fact that their friends used these optional security tools, even two-factor authentication and log-in notifications that are prone to this social stigma effect, we found that people were much more likely to use those things. That gave us some concrete, experimentally derived results suggesting that social influence -- incorporating more social cues into the design of these security systems -- has big potential to overcome this barrier that people have with using security tools in the first place.

Ayanna: Ok, so this has potential. I'm sitting there, I'm listening to you, I'm thinking, "Oh, I really like this social cybersecurity thing." So, what can I do now as a user?

Sauvik: As an end user? Well, one important thing that I would suggest regular end users do right now is to not be so bashful about their security behaviors. Maybe have more conversations about these things with your friends even if it's not necessarily that interesting to you. I have noted that people who talk to their friends and their loved ones about their cybersecurity behaviors are much more likely to, one, learn about the other people's strategies for it and be able to incorporate things that are potentially easier for their own life into it, and, two, they're also influencing others. If you're somebody who cares about security and you don't want your friend to get hacked or your dad to get hacked or something like that, one concrete step you can take that doesn't necessarily require a ton of effort on your part is to just talk to them about what you do and why you do it.

Ayanna: So, do you believe that will then change our behavior with respect to security?

Sauvik: I think so, yeah. The evidence is there that adding social cues to our day-to-day conversations about security will certainly sort of move us as a population generally towards more good security behaviors.

Ayanna: Then it won't just be the crazy folks that are early adopters. It'll be everyone.

Sauvik: It'll be everyone, right. And, you know, one thing that particularly comes out from the research is that perhaps one of the strategies to use is to find the next of kin of the crazies. So, if the only people who are currently using security tools right now are the people who are sort of hardcore, overzealous about security some would say but others would say just very security-conscious people -- those people have friends. So, what if you can target the immediate next step in the social network from them and build outward from there? You have the core set of early adopters, then perhaps you can use the social cues of them using those security tools to influence the one degree of separation away from them and then build outward from there so you can get to just the casual people who don't care about security that much on a day-to-day basis, but might be interested in knowing how to protect themselves if it ever becomes relevant to them.

Ayanna: This is wonderful. This has been a wonderful conversation. What I've learned is just like everyone asks you, "Have you had your flu shot today?" We should start asking each other, "Have you had your security shot today?" To change behaviors so that we can be safer online.

Sauvik: Right (laughing).

Ayanna: I thank the audience for listening in today. And, of course, our guest Sauvik Das. For our next episode, it's going to be an exciting one about mental health and computing. I want to give a shoutout lastly to our social channels -- please follow us at @ICatGT and find us online at ic.gatech.edu.

Podcast Information

About the Podcast

The Interaction Hour is a production of the School of Interactive Computing, where we aim to redefine the human experience of computing. Hosted by School Chair Ayanna Howard, our podcast will investigate the impacts of computation on life’s big issues, from health care and national security to ethics, education, and more.

In accordance with our theme -- interaction -- we want to engage with you, the listener, to learn what topics you are interested in. Reach out to us on social media or via email with your questions about computing's role in society. Each episode, one of our world-class researchers will offer expert analysis on these topics -- a podcast for the listener, written by the listener.

Your Host

Dr. Ayanna Howard is an educator, researcher, and innovator. Her academic career is highlighted by her focus on technology development for intelligent agents that must interact with and in a human-centered world, as well as on the education and mentoring of students in the engineering and computing fields. A world-renowned roboticist, Dr. Howard has over 200 peer-reviewed publications that have been widely disseminated in international journals, conference proceedings, and national media. Her professional career spans work at NASA, the Georgia Institute of Technology, and Zyrobotics, for which she is the founder and Chief Technology Officer. Currently, Dr. Howard is the Linda J. and Mark C. Smith Professor and Chair of the School of Interactive Computing in Georgia Tech's College of Computing.