CHI 2019 Paper On Digital Technology Use in Elementary Schools Accepted

Our paper, with fantastic collaborators Priya Kumar, Tammy Clegg, and Jessica Vitak, has been accepted to CHI 2019!  This paper investigates how elementary school teachers consider privacy and security for digital technology use.  A summary blog post will be posted closer to the conference!

Leave a comment

Contextual Integrity Symposium 2018 Report Available

We recently held the Symposium for Applications of Contextual Integrity in September 2018. The full report for the symposium is now available within a related blog post here.

Leave a comment

Participatory Design with Children, Tweens, and Teens at SOUPS 2018

Recently, we held a workshop at the SOUPS 2018 conference on Participatory Design (PD) with children, tweens, and teens. To read more about the workshop, please check out this related post.

Leave a comment

User Perceptions of Smart Home Internet of Things (IoT) Privacy

Posted on behalf of Serena Zheng, Noah Apthorpe, Marshini Chetty, and Nick Feamster.

Our work on “User Perceptions of Smart Home Internet of Things (IoT) Privacy” will be presented at the ACM Conference on Computer-Supported Cooperative Work and Social Computing (CSCW) on November 6th, 2018. We briefly summarize our findings below.

What did we do? Smart home Internet of Things (IoT) devices are rapidly increasing in popularity, with more households including Internet-connected appliances that continuously monitor user activities. We wanted to investigate how users perceive the privacy implications of smart home technology and what role privacy considerations play in device purchasing and use decisions.

How did we do it? We conducted 11 interviews of early adopters of smart home technology in the United States, investigating their reasons for purchasing IoT devices, perceptions of smart home privacy risks, and actions taken to protect their privacy from entities external to the home who create, manage, track, or regulate IoT devices and/or their data.

What did we find? We identified four common themes across interview responses:

  1. Convenience and connectedness are priorities for smart home device users. These values often outweigh other concerns about IoT devices, including obsolescence, security, and privacy.
  2. User opinions about who should have access to their smart home data (e.g., manufacturers, Internet service providers, and governments) depend on perceived benefit to the user.  
  3. User assumptions about privacy protections are contingent on their trust of IoT device manufacturers, although they do not know whether these companies actually perform data encryption or anonymization.
  4. Users are less concerned about privacy risks from devices, such as lightbulbs and thermostats, that do not record audio or video, despite research showing that metadata from such devices can be used to infer home occupancy, work routines, sleeping patterns, and other user activities.

What are the implications of this work? These themes motivate recommendations for smart home device designers, researchers, regulators, and industry standards bodies. Participants’ desires for convenience and trust in IoT device manufacturers limit their willingness to take action to verify or enforce smart home data privacy. This means that privacy notifications and settings must be exceptionally clear and convenient, especially for smart home devices without screens. Improved cybersecurity and privacy regulation, combined with industry standards outlining best privacy practices, would also reduce the burden on users to manage their own privacy. We encourage follow-up studies examining the effects of smart home devices on privacy between individuals within a household and comparing perceptions of smart home privacy in different countries.

For more details about our interview findings and corresponding recommendations, please read this related blog post or the full paper.

Posted in conferences, presentations, publication | Leave a comment

Upcoming Creativity/HCI/Design Speakers at Princeton

Interested in learning about creativity in design? Don’t forget to attend Keith Sawyer’s lecture tomorrow, Tuesday, October 9, at 4:30pm in the Friend Center, Convocation Room. Keith is hosted by the Keller Center and is one of many speakers we endeavor to bring to campus.

On that note, save the date: on November 8, 2018, Elizabeth Churchill, director of User Experience (UX) at Google, will give a talk at Princeton as part of a celebration of World Usability Day.

The talk will be held at the Arthur Lewis Auditorium, Robertson Hall from 5:00 PM – 6:00 PM. In the morning, there will also be a panel on careers in User Experience (UX) design composed of Princeton graduates who now work in UX careers. The panel will be held at the Convocation Room, Friend Center from 12:00 PM – 1:30 PM.

Leave a comment

CSCW 2018 Best Paper Award and Upcoming CSCW Papers

Congratulations to Princeton HCI’s Arunesh Mathur on a CSCW 2018 Best Paper Award for his work measuring the prevalence of affiliate marketing on YouTube and Pinterest and user perceptions of endorsement disclosures on these platforms! The lab has two additional exciting papers to be presented at the conference, including one on kids and Internet safety and one on smart home Internet of Things privacy perceptions. Blog post summaries of these papers will follow in the upcoming weeks!

Leave a comment

Use of Blocking Extensions at SOUPS 2018

We describe a few highlights from our recent paper on people’s use of browser-based blocking extensions, which will be presented at the 2018 USENIX Symposium on Usable Privacy and Security (SOUPS).

What did we do?: One of the ways in which people can block online tracking on the Internet is by using browser-based blocking extensions such as Ad blockers, Content blockers, and Tracker blockers. In our study, we asked why people use these extensions, what they know about online tracking, and what they do when these extensions fail to function correctly.

How did we do it?: We conducted two surveys using Amazon Mechanical Turk and measured what extensions survey-takers were using, if any. In the first survey, participants reported details about the extensions they used and how they thought online tracking worked. We then asked them why they used the extensions they reported, how they learned about them, and how long they had been using these blocking extensions. We also conducted measurements to check whether participants were using the extensions they mentioned. In the second survey, which we administered only to the subset of participants who reported using these extensions, we asked participants about their experiences when their extensions “break” websites they are trying to access.

What did we find?: We have three main findings. First, our results show that blocking extension usage is only weakly related to an advanced understanding of how online tracking works in the real world. Second, we find that each extension type has a primary reason for adoption that is in line with expectations: users adopt Ad blockers and Content blockers primarily for user experience gains and rarely take full advantage of the privacy benefits of these blockers, whereas users adopt Tracker blockers primarily for privacy reasons. Finally, our results show that current users rarely experience website breakage because of their blocking extensions. However, when users are faced with a choice to disable their extensions in order to access content, they base their decisions on how much they trust the website and how much they value the content they are trying to reach.

What are the implications of the work?: Based on our findings, we make two suggestions. First, given that both blocking extension users and non-users do not fully understand the landscape of online tracking, we suggest that system designers should focus their efforts on building systems that automatically enforce tracking protection as opposed to having users take action to protect themselves (such as by installing an extension). We argue that browser vendors can play an important role in facilitating this type of default privacy protection. Second, we suggest that blocking extensions can be further improved by better understanding how website developers embed third-party trackers and deliver content through their websites so that non-use (disabling) is not forced upon users.

Read the SOUPS 2018 paper for more details, and also follow related coverage on the Princeton Engineering website!

Posted in publication | Leave a comment

How Do Tor Users Navigate Onion Services?

Posted on behalf of Philipp Winter, Annie Edmundson, Laura Roberts, Agnieskza Dutkowska-Żuk, Marshini Chetty, and Nick Feamster

Our work on “How Tor Users Interact With Onion Services” will be presented at the upcoming USENIX Security conference in Baltimore in August. Below, we briefly summarize our findings.

What are onion services?: Onion services were created by the Tor Project in 2004.  They offer privacy protection for individuals browsing the web and also allow web servers, and thus websites themselves, to be anonymous. This means that any “onion site” or dark web site cannot be physically traced to identify who runs the site or where it is hosted. Unlike traditional URLs, onion domains consist of a string of letters and numbers because they are derived from a hash of the site’s public key.
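As a concrete illustration (a minimal sketch of the legacy version-2 address scheme, not code from the paper), a v2 onion domain is produced by base32-encoding the first 80 bits of the SHA-1 hash of the service’s DER-encoded RSA public key; the placeholder key bytes below stand in for a real key:

```python
import base64
import hashlib

def v2_onion_address(public_key_der: bytes) -> str:
    # Version-2 scheme: SHA-1 the DER-encoded RSA public key,
    # keep the first 10 bytes (80 bits), and base32-encode them.
    digest = hashlib.sha1(public_key_der).digest()
    return base64.b32encode(digest[:10]).decode("ascii").lower() + ".onion"

# Placeholder bytes stand in for a real DER-encoded public key.
print(v2_onion_address(b"example-public-key-bytes"))
```

This is why v2 addresses are exactly 16 seemingly random base32 characters; the newer v3 scheme instead embeds the service’s full ed25519 public key, yielding longer 56-character addresses.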

What did we do? We wanted to investigate how users perceive, manage, and use Tor’s onion services and onion domains. We also wanted to understand what challenges exist for current onion service users and what privacy and security enhancements are needed to help users better navigate these services.

How did we do it? We conducted a survey of 517 Tor users and interviewed 17 Tor users in depth to determine how users perceive, use, and manage onion services and what challenges they face in using these services. To complement our qualitative data, we analyzed “leaked” DNS lookups to onion domains, as seen from a DNS root server. This data gave us insights into actual usage patterns to corroborate some of the findings from the interviews and surveys.

What did we find? We found that users have an incomplete mental model of onion services, use these services for anonymity and have varying trust in onion services in general. Users also have difficulty discovering and tracking onion sites and authenticating them. Finally, users want technical improvements to onion services and better information on how to use them.

What are the implications of this work? Our findings suggest various improvements for the security and usability of Tor onion services, including ways to automatically detect phishing of onion services, clearer security indicators, and ways to manage onion domain names that are difficult to remember.

Read more in a related blog post or in the full paper here.

Leave a comment

Developing Online Safety Resources for Elementary School Children at IDC 2018

Posted on behalf of Priya Kumar, Elizabeth Bonsignore, Marshini Chetty, Tammy Clegg, Brenna McNally, Jonathan Yang, and Jessica Vitak

Our paper on “Co-Designing Online Privacy-Related Games and Stories with Children” will be presented at the Interaction Design and Children (IDC) conference in June. Below we summarize our findings.

What did we do? Children spend hours going online at home and school, but they receive little to no education about how going online affects their privacy. We explored the power of games and storytelling as two mechanisms for teaching children about privacy online.

How did we do it? We held three co-design sessions with Kidsteam, an intergenerational design team at the University of Maryland, College Park that designs technologies for children by working with children throughout the technology design process. During these design sessions, which included eight children ages 8-11, we reviewed existing privacy resources with children and elicited design ideas for new resources.  In session 1, the team examined currently available resources such as Google’s Mindful Mountain game. In session 2, we improved the design of a conceptual prototype of an app inspired by the popular game Doodle Jump. Our version, which we called Privacy Doodle Jump, incorporated quiz questions related to privacy and security online. In session 3, children developed their own interactive narratives, similar to Choose Your Own Adventure stories, related to privacy online.

What did we find? All three co-design sessions emphasized that, when presented with educational resources related to privacy online, children want to understand the purpose of these resources and what takeaways they offer for everyday life. If resources rely on abstract or unfamiliar scenarios, children might have a harder time relating to them or understanding what they are supposed to learn from them. For example, a child might more easily absorb a privacy lesson from a story about another child who uses Instagram than from a game that uses a fictional character in an imaginary world. Additionally, we found that materials designed to teach children about privacy online often instruct children on “do’s and don’ts” rather than helping them develop the skills to navigate privacy online. Such straightforward guidelines can be useful when introducing children to complex subjects like privacy, or when working with younger children. However, focusing on lists of rules does little to equip children with the skills necessary to make complex, privacy-related decisions online. Finally, we found that both gaming and interactive narratives can be powerful tools to help teach children about online safety in an engaging manner.

What are the implications of this work? First, educational resources related to privacy should use scenarios that relate to children’s everyday lives. For instance, our Privacy Doodle Jump game included a question that asked a child what they would do if they were playing Xbox and saw an advertisement pop up that asked them to buy something. Second, educational resources should go beyond listing do’s and don’ts for online behavior and help children develop strategies for dealing with new and unexpected scenarios they may encounter. Because context is such an important part of privacy-related decision making, resources should facilitate discussion between parents or teachers and children rather than simply tell children how to behave. Third, educational resources should showcase a variety of outcomes of different online behaviors instead of framing privacy as a black and white issue. For instance, privacy guidelines may instruct children to never turn on location services, but this decision might differ based on the app that is requesting it. Turning on location services in Snapchat may pinpoint your house to others, a potential negative, while turning on location services in Google Maps may yield real-time navigation, a potential positive. Exposing children to a variety of positive and negative consequences of privacy-related decision making will help them develop the skills they need to navigate uncharted situations online.

Read more in the full paper here.

Leave a comment

Princeton HCI at CHI 2018

Nathan Matias, Sam Jaroszewski, Janet Vertesi, and Marshini Chetty

Leave a comment