SOUPS and USENIX Security 2022

Many members of the UW Security and Privacy Research Lab were thrilled last week to finally re-join our broader research community in person in Boston, at SOUPS and USENIX Security 2022. It was fantastic to see some of our alumni, talk in person with current and future collaborators, meet new members of the community, catch up with old and new friends, and more!

UW Security Lab members and alumni at USENIX Security 2022: Yoshi Kohno, Ada Lerner, Kentrell Owens, Kimberly Ruth, Earlence Fernandes, Eric Zeng, Umar Iqbal, Kaiming Cheng, Miranda Wei, and Franzi Roesner

Our members presented a great set of talks across both conferences:

Designing beyond the default: Allen School researchers receive NSF award to address privacy and security needs of marginalized and vulnerable populations

(Cross-posted from Allen School News, by Kristin Osborne)

For people around the world, technology eases the friction of everyday life: bills paid with a few clicks online, plans made and sometimes broken with the tap of a few keys, professional and social relationships initiated and sustained from anywhere at the touch of a button. But not everyone experiences technology in a positive way, because technology — including built-in safeguards for protecting privacy and security — isn’t designed with everyone in mind. In some cases, the technology community’s tendency to develop for a “default persona” can lead to harm. This is especially true for people who, whether due to age, ability, identity, socioeconomic status, power dynamics or some combination thereof, are vulnerable to exploitation and/or marginalized in society.

Researchers in the Allen School’s Security & Privacy Research Lab have partnered with colleagues at the University of Florida and Indiana University to provide a framework for moving technology design beyond the default when it comes to user security and privacy. With a $7.5 million grant from the National Science Foundation through its Secure and Trustworthy Cyberspace (SaTC) Frontiers program, the team will blend computing and the social sciences to develop a holistic and equitable approach to technology design that addresses the unique needs of users who are underserved by current security and privacy practices.

“Technology is an essential tool, sometimes even a lifeline, for individuals and communities. But too often the needs of marginalized and vulnerable people are excluded from conversations around how to design technology for safety and security,” said Allen School professor and co-principal investigator Franziska Roesner. “Our goal is to fundamentally change how our field approaches this question to center the voices of marginalized and vulnerable people, and the unique security and privacy threats that they face, and to make this the norm in future technology design.”

To this end, Roesner and her collaborators — including Allen School colleague and co-PI Tadayoshi Kohno — will develop new security and privacy design principles that focus on mitigating harm while enhancing the benefits of technology for marginalized and vulnerable populations. These populations are particularly susceptible to threats to their privacy, security and even physical safety through their use of technology: children and teenagers, LGBTQ+ people, gig and persona workers, people with sensory impairments, people who are incarcerated or under community supervision, and people with low socioeconomic status. The team will tackle the problem using a three-pronged approach, starting with an evaluation of how these users have been underserved by security and privacy solutions in the past. They will then examine how these users interact with technology, identifying both threats and benefits. Finally, the researchers will synthesize what they learned to systematize design principles that can be applied to the development of emerging technologies, such as mixed reality and smart city technologies, to ensure they meet the privacy and security needs of such users.

The researchers have no intention of imposing solutions on marginalized and vulnerable communities; a core tenet of their proposal is direct consultation and collaboration with affected people throughout the duration of the project. They will accomplish this through both quantitative and qualitative research that directly engages communities in identifying their unique challenges and needs and evaluating proposed solutions. The team will apply these insights as it explores how to leverage or even reimagine technologies to address those challenges and needs while adhering to overarching security and privacy goals around the protection of people, systems, and data.

The team’s approach is geared to ensuring that the outcomes are relevant as well as grounded in rigorous scientific theory. It’s a methodology that Roesner, Kohno, and their colleagues hope will become ingrained in the privacy and security community’s approach to new technologies — but they anticipate the impact will extend far beyond their field.

Portraits of Tadayoshi Kohno and Franziska Roesner separated by diagonal white line
Tadayoshi Kohno (left) and Franziska Roesner. Dennis Wise

“In addition to what this will mean in terms of a more inclusive approach to designing for security and privacy, one of the aspects that I’m particularly excited about is the potential to build a community of researchers and practitioners who will ensure that the needs of marginalized and vulnerable users will be met over the long term,” said Kohno. “Our work will not only inform technology design, but also education and government policy. The impact will be felt not only in the research and development community but also society at large.”

Kohno and Roesner are joined in this work by PI Kevin Butler and co-PIs Eakta Jain and Patrick Traynor at the University of Florida, co-PIs Kurt Hugenberg and Apu Kapadia at Indiana University, and Elissa Redmiles, CEO & Principal Researcher at Human Computing Associates. The team’s proposal, “Securing the Future of Computing for Marginalized and Vulnerable Populations,” is one of three projects selected by NSF in its latest round of SaTC Frontiers awards worth a combined $24.5 million. The other projects focus on securing the open-source software supply chain and extending the “trusted execution environment” principle to secure computation in the cloud.

Read the NSF announcement here and the University of Florida announcement here.

Congratulations to our 2022 Graduates!!!

The UW Security and Privacy Research Lab is incredibly excited to congratulate our many graduates this year! It has been a tough couple of years for everyone, and our BS, MS, PhD, and Postdoc graduates have nevertheless conducted incredible research and contributed to a great lab community. We will miss you all, and we can’t wait to see where your careers take you!

This year’s graduates include:

  • Prof. Dr. Pardis Emami-Naeini is completing her postdoc in our lab. Pardis has accepted a position as Assistant Professor of Computer Science at Duke University.
  • Dr. Chris Geeng completed their PhD, with a dissertation entitled “Analyzing Usable Security, Privacy, and Safety Through Identity-Based Power Relations”. Chris will begin a postdoc appointment at NYU, working with Damon McCoy.
  • Dr. Lucy Simko completed her PhD, with a dissertation entitled “Humans and Vulnerability During Times of Change: Computer Security Needs, Practices, Challenges, and Opportunities”. Lucy will begin a postdoc appointment (details TBA).
  • Dr. Eric Zeng completed his PhD, with a dissertation entitled “Characterizing and Measuring ‘Bad Ads’ on the Web”. Eric has already begun a postdoc appointment at CMU, working with Lujo Bauer.
  • Michelle Lin and Savanna Yee graduated from our BS/MS program with their MS degrees.
  • Rachel McAmis and Jeffery Tian graduated with their BS degrees. We’re excited that Rachel will remain at UW, joining our PhD program.
  • Though they are not yet leaving us, Kaiming Cheng, Michael Flanders, Kentrell Owens, and Miranda Wei all passed their Qualifying Exams this year, earning their MS degrees and completing the first of three major milestones on the path to a PhD.

Congratulations, everyone!!! We are so proud of you!

Photo caption: Lucy, Eric, Chris, Savanna, Michelle, and Pardis at our lab graduation celebration
Photo caption: Lucy, Eric, Franzi, Yoshi, and Chris before the official graduation

Announcing the Hertzbleed Attack

Security lab faculty member David Kohlbrenner and collaborators announced the Hertzbleed Attack today. The team found a way to mount remote timing attacks on constant-time cryptographic code running on modern x86 processors (see Twitter thread). From the website: “Hertzbleed is a new family of side-channel attacks: frequency side channels. In the worst case, these attacks can allow an attacker to extract cryptographic keys from remote servers that were previously believed to be secure.” The Hertzbleed paper will appear in the 31st USENIX Security Symposium. Congratulations to the team!

Political ads during the 2020 presidential election cycle collected personal information and spread misleading information

(Cross-posted from UW News, by Sarah McQuate and Rebecca Gourley)

UW researchers found that political ads during the 2020 election season used multiple concerning tactics, including posing as a poll to collect people’s personal information or having headlines that might affect web surfers’ views of candidates. University of Washington

Online advertisements are frequently splashed across news websites. Clicking on these banners or links provides the news site with revenue. But these ads also often use manipulative techniques, researchers say.

University of Washington researchers were curious about what types of political ads people saw during the 2020 presidential election. The team looked at more than 1 million ads from almost 750 news sites between September 2020 and January 2021. Of those ads, almost 56,000 had political content.

Political ads used multiple tactics that concerned the researchers, including posing as a poll to collect people’s personal information or having headlines that might affect web surfers’ views of candidates.

The researchers presented these findings Nov. 3 at the ACM Internet Measurement Conference 2021.

“The election is a time when people are getting a lot of information, and our hope is that they are processing it to make informed decisions toward the democratic process. These ads make up part of the information ecosystem that is reaching people, so problematic ads could be especially dangerous during the election season,” said senior author Franziska Roesner, UW associate professor in the Paul G. Allen School of Computer Science & Engineering.

The team wondered if or how ads would take advantage of the political climate to prey on people’s emotions and get people to click.

“We were well positioned to study this phenomenon because of our previous research on misleading information and manipulative techniques in online ads,” said Tadayoshi Kohno, UW professor in the Allen School. “Six weeks leading up to the election, we said, ‘There are going to be interesting ads, and we have the infrastructure to capture them. Let’s go get them. This is a unique and historic opportunity.’”

The researchers created a list of news websites that spanned the political spectrum and then used a web crawler to visit each site every day. The crawler scrolled through the sites and took screenshots of each ad before clicking on the ad to collect the URL and the content of the landing page.

The team wanted to make sure to get a broad range of ads, because someone based at the UW might see a different set of ads than someone in a different location.

“We know that political ads are targeted by location. For example, ads for Washington candidates will only be featured to viewers browsing from the state of Washington. Or maybe a presidential campaign will have more ads featured in a swing state,” said lead author Eric Zeng, UW doctoral student in the Allen School.

“We set up our crawlers to crawl from different locations in the U.S. Because we didn’t actually have computers set up across the country, we used a virtual private network to make it look like our crawlers were loading the sites from those locations.”

The researchers initially set up the crawlers to search news sites as if they were based in Miami, Seattle, Salt Lake City and Raleigh, North Carolina. After the election, the team also wanted to capture any ads related to the Georgia special election and the Arizona recount, so two crawlers started searching as if they were based in Atlanta and Phoenix.

The team continued crawling sites throughout January 2021 to capture any ads related to the Capitol insurrection.
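To make the crawling methodology above concrete, the daily crawl might be organized along the following lines. This is a hypothetical sketch: the site names, vantage points, and function names are our own illustration, not the team's actual infrastructure.

```python
# Sketch of daily crawl planning: pair every news site with each
# VPN vantage point, so that location-targeted ads (e.g., ads for a
# swing-state campaign) have a chance of being captured.
from dataclasses import dataclass

@dataclass
class CrawlTask:
    site: str
    vantage: str  # VPN exit location the crawler appears to browse from

def plan_daily_crawl(sites, vantages):
    """Return one crawl task per (vantage, site) pair for a day's run."""
    return [CrawlTask(site, v) for v in vantages for site in sites]

# Illustrative inputs; the study used ~750 news sites and, initially,
# four U.S. vantage points (later adding Atlanta and Phoenix).
sites = ["example-news-left.com", "example-news-right.com"]
vantages = ["Miami", "Seattle", "Salt Lake City", "Raleigh"]
tasks = plan_daily_crawl(sites, vantages)

# Each day's plan visits every site from every location.
assert len(tasks) == len(sites) * len(vantages)
```

In the real system, each task would drive a browser to the site through the corresponding VPN, scroll the page, screenshot each ad, and click through to record the landing-page URL and content.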

Four screenshots of example poll ads in a square. Starting in the top left is a poll asking if Trump should concede. In the top right is an ad asking people to sign a thank you card for Dr. Fauci, in the bottom right is an ad that says "Sign the petition that Nancy Pelosi hates," and in the bottom left is a poll about whether illegal immigrants should get unemployment benefits

Some political ads posed as a poll to collect people’s personal information. University of Washington

The researchers used natural language processing to classify ads as political or non-political. Then the team went through the political ads manually to further categorize them, such as by party affiliation, who paid for the ad or what types of tactics the ad used.
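As a rough illustration of that two-stage pipeline, an automated first pass can flag likely-political ads for manual categorization. The keyword heuristic below is our own simplified stand-in for the paper's actual NLP classifier; the term list and function names are illustrative.

```python
# Stand-in for the automated political/non-political filter: flag an ad
# for manual review if its text mentions common political terms.
import re

POLITICAL_TERMS = {"biden", "trump", "election", "vote", "congress",
                   "senate", "pelosi", "campaign"}

def looks_political(ad_text: str) -> bool:
    """Return True if the ad text contains any political keyword."""
    tokens = set(re.findall(r"[a-z]+", ad_text.lower()))
    return bool(tokens & POLITICAL_TERMS)

ads = [
    "There's something fishy in Biden's speeches",
    "One weird trick to lower your car insurance",
    "Sign the petition that Nancy Pelosi hates",
]
# Only the flagged ads go on to manual coding (party, funder, tactics).
flagged = [ad for ad in ads if looks_political(ad)]
```

A production classifier would use a trained model rather than keywords, but the shape of the pipeline is the same: cheap automated triage over a million ads, then human labeling of the much smaller political subset.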

“We saw these fake poll ads that were harvesting personal information, like email addresses, and trying to prey on people who wanted to be politically involved. These ads would then use that information to send spam, malware or just general email newsletters,” said co-author Miranda Wei, UW doctoral student in the Allen School. “There were so many fake buttons in these ads, asking people to accept or decline, or vote yes or no. These things are clearly intended to lead you to give up your personal data.”

Ads that appeared to be polls were more likely to be used by conservative-leaning groups, such as conservative news outlets and nonprofit political organizations. These ads were also more likely to be featured on conservative-leaning websites.

The most popular type of political ad was click-bait news articles that often mentioned top politicians in sensationalist headlines, but the articles themselves contained little substantial information. The team observed more than 29,000 of these ads, and the crawlers often encountered the same ad multiple times. Similar to the fake poll ads, these were also more likely to appear on right-leaning sites.

“One example was a headline that said, ‘There’s something fishy in Biden’s speeches,’” said Roesner, who is also the co-director of the UW Security and Privacy Research Lab. “I worry that these articles are contributing to a set of evidence that people have amassed in their minds. People probably won’t remember later where they saw this information. They probably didn’t even click on it, but it’s still shaping their view of a candidate.”

Three screenshots of example clickbait ads. The first shows Pence making an "eyebrow raising declaration after DC siege." The second says "Joe Biden goes on head-turning rant, fires off at reporter." The third shows Ted Cruz making a "head turning statement to Trump about the riot"

Click-bait news articles often mentioned top politicians in sensationalist headlines, but the articles themselves contained little substantial information. University of Washington

The researchers were surprised and relieved, however, to find a lack of ads containing explicit misinformation about how and where to vote, or who won the election.

“To their credit, I think the ad platforms are catching some misinformation,” Zeng said. “What’s getting through are ads that are exploiting the gray areas in content and moderation policies, things that seem deceptive but play to the letter of the law.”

The world of online ads is so complicated, the researchers said, that it’s hard to pinpoint exactly why or how certain ads appear on specific sites or are viewed by specific viewers.

“Certain ads get shown in certain places because the system decided that those would be the most lucrative ads in those spots,” Roesner said. “It’s not necessarily that someone is sitting there doing this on purpose, but the impact is still the same — people who are the most vulnerable to certain techniques and certain content are the ones who will see it more.”

To protect computer users from problematic ads, the researchers suggest web surfers should be careful about taking content at face value, especially if it seems sensational. People can also limit how many ads they see by getting an ad blocker.

Theo Gregersen, a UW undergraduate student studying computer science, is also a co-author on this paper. This research was funded by the National Science Foundation, the UW Center for an Informed Public, and the John S. and James L. Knight Foundation.

For more information, contact badads@cs.washington.edu.

Runner-Up for Best Paper Award at IMC ’21

A paper from UW Security and Privacy Lab researchers Eric Zeng, Miranda Wei, Theo Gregersen, Yoshi Kohno, and Franzi Roesner was a runner-up for the Best Paper Award at this year’s Internet Measurement Conference (IMC)! Read the paper “Polls, Clickbait, and Commemorative $2 Bills: Problematic Political Advertising on News and Media Websites Around the 2020 U.S. Elections” and find the dataset here: https://badads.cs.washington.edu/political.html.

Welcome, Tina and Umar!

With the start of the new academic year, we’re thrilled to be spending less time on Zoom and more time on our beautiful campus.

We’re also excited to welcome two new members to the lab: Tina Yeung, a new PhD student who joins us from the FTC, and Umar Iqbal, a new postdoc. Dr. Iqbal received his PhD from the University of Iowa, where he studied privacy and tracking on the web, and was selected as a member of the 2021 cohort of CIFellows. Welcome, Tina and Umar!

Professor Franziska Roesner earns Consumer Reports Digital Lab Fellowship to support research into problematic content in online ads

(Cross-posted from Allen School News.)

Franziska Roesner smiling and leaning against a wood and metal railing
Credit: Dennis Wise/University of Washington

As anyone who has visited a website knows, online ads are taking up an increasing amount of page real estate. Depending on the ad, the content might veer from mildly annoying to downright dangerous; sometimes, it can be difficult to distinguish between ads that are deceptive or manipulative by design and legitimate content on a site. Now, Allen School professor Franziska Roesner (Ph.D., ‘14), co-director of the University of Washington’s Security and Privacy Research Lab, wants to shed light on problematic content in the online advertising ecosystem to support public-interest transparency and research.

Consumer Reports selected Roesner as a 2021-2022 Digital Lab Fellow to advance her efforts to create a public-interest online ads archive to document and investigate problematic ads and their impacts on users. With this infrastructure in place, Roesner hopes to support her team and others in developing new user-facing tools to combat the spread of misleading and potentially harmful ad content online. She is one of three public interest technology researchers to be named in the latest cohort of Digital Lab Fellows focused on developing practical solutions for addressing emerging consumer harms in the digital realm. 

This is not a new area of inquiry for Roesner, who has previously investigated online advertising from the perspective of user privacy, such as the use of third-party trackers to collect information from users across multiple websites. Lately, she has expanded her focus to examining the actual content of those ads. Last year, amidst the lead-up to the U.S. presidential election and the pandemic’s growing human and economic toll — and against the backdrop of simmering arguments over the origins of SARS-CoV-2, lockdowns and mask mandates, and potential medical interventions — Roesner and a team of researchers unveiled the findings of a study examining the quality, or lack thereof, of ads that appear on news and media sites. They found that problematic online ads take many forms, and that they appeared equally on both trusted mainstream news sites and low-quality sites devoted to peddling misinformation. In follow-up work, Roesner and her collaborators further studied people’s — not just researchers’ — perceptions of problematic ad content, and in forthcoming work, problematic political ads surrounding the 2020 U.S. elections.

“Right now, the web is the wild west of advertising. There is a lot of content that is misleading and potentially harmful, and it can be really difficult for users to tell the difference,” explained Roesner. “For example, ads may take the form of product ‘advertorials,’ in which their similarity to actual news articles lends them an appearance of legitimacy and objectivity. Or they might rely on manipulative or click-baity headlines that contain or imply disinformation. Sometimes, they are disguised as political opinion polls with provocative statements that, when you click on them, ask for your email address and sign you up for a mailing list that delivers you even more manipulative content.”

Roesner is keen to build on her previous work to improve our understanding of how these tactics enable problematic ads to proliferate — and the human toll that they generate in terms of the time and attention wasted and the emotional impact of consuming misinformation. Building out the team’s existing ad collection infrastructure, the ad archive will provide a structured, longitudinal, and (crucially) public look into the ads that people see on the web. These insights will support additional research from Roesner’s team as well as other researchers investigating how misinformation spreads online. Roesner and her collaborators ultimately aim to help “draw the line” between legitimate online advertising content and practices, and problematic content that is harmful to users, content creators, websites, and ad platforms.

But Roesner doesn’t think we should wait for the regulatory framework to catch up. One of her priorities is to protect users from problematic ads, such as by developing tools that automatically block certain ads or empower users to recognize and flag them. While acknowledging that online advertising is here to stay — it funds the economic model of the web, after all — Roesner believes that there is a better balance to be struck between revenue and the quality of content that people consume on a daily basis as they point and click.

“Even the most respected websites may be inadvertently hosting and assisting the spread of bogus content — which, as things stand, puts the onus on users to assess the veracity of what they are seeing,” said Roesner. “My hope is that this collaboration with Consumer Reports will support efforts to analyze ad content and its impact on users — and generate regulatory and technical solutions that will lead to more positive digital experiences for everyone.”

Consumer Reports created the Digital Lab Fellowship program with support from the Alfred P. Sloan Foundation and welcomed its first cohort last year. 

“People should feel safe with the products and services that fill our lives and homes. That depends on dedicated public interest technologists keeping up with the pace of innovation to effectively monitor the digital marketplace,” Ben Moskowitz, director of the Digital Lab at Consumer Reports, said in a press release. “We are proud to support and work alongside these three Fellows, whose work will increase fairness and trust in the products and services we use every day.”

Read the Consumer Reports announcement here, and learn more about the Digital Lab Fellowship program here.

Congratulations, Franzi!

Allen School’s Amy Zhang and Franziska Roesner win NSF Convergence Accelerator for their work to limit the spread of misinformation online

(Cross-posted from Allen School News.)

Amy Zhang (left) and Franziska Roesner

The National Science Foundation (NSF) has selected Allen School professors Amy Zhang, who directs the Social Futures Lab, and Franziska Roesner, who co-directs the Security and Privacy Research Lab, to receive Convergence Accelerator funding for their work with collaborators at the University of Washington and the grassroots journalism organization Hacks/Hackers on tools to detect and help stop misinformation online. The NSF’s Convergence Accelerator program is unique in that its structure offers researchers the opportunity to accelerate their work over the course of a year to find tangible solutions. The curriculum is designed to strengthen each team’s convergence approach and further develop their solution to move on to a second phase with the potential for additional funding.

In their proposal, “Analysis and Response for Trust Tool (ARTT): Expert-Informed resources for Individuals and Online Communities to Address Vaccine Hesitancy and Misinformation,” Zhang, Roesner, Human Centered Design & Engineering professor and Allen School adjunct professor Kate Starbird, Information School professor and director of the Center for an Informed Public Jevin West, and Connie Moon Sehat, researcher at large at Hacks/Hackers and principal investigator of the project, aim to develop a software tool — ARTT — that helps people identify and prevent misinformation. Today, this work happens on a small scale, carried out by individuals and community moderators with few resources and little expert guidance on combating false information. The team, made up of experts in fields such as computer science, social science, media literacy, conflict resolution and psychology, will develop a software program that helps moderators analyze information online and present practical information that builds trust.

“In our previous research, we learned that rather than platform interventions like ‘fake news’ labels, people often learn that something they see or post on social media is false or untrustworthy from comment threads or other community members,” said Roesner, who serves as co-principal investigator on the ARTT project alongside Zhang. “With the ARTT research, we are hoping to support these kinds of interactions in productive and respectful ways.”

While ARTT will help prevent the spread of any misinformation, the team’s focus right now is on combating false information on vaccines — vaccine hesitancy has been identified by the World Health Organization as one of the top 10 threats to global health.

In addition to her participation in the ARTT enterprise, Zhang has another Convergence Accelerator project focused on creating a “golden set” of guidelines to help prevent the spread of false information. That proposal, “Misinformation Judgments with Public Legitimacy,” aims to use public juries to render judgments on socially contested issues. The jurors will continue to build these choices to create a “golden set” that social media platforms can use to evaluate information posted on social media. Besides Zhang, the project team includes the University of Michigan’s Paul Resnick, associate dean for research and faculty affairs and professor at the School of Information, and David Jurgens, professor at the School of Information and in the Department of Electrical Engineering & Computer Science; and the Massachusetts Institute of Technology’s David Rand, professor of management science and brain and cognitive sciences, and Adam Berinsky, professor of political science.

Online platforms have been increasingly called on to reduce the spread of false information. There is little agreement on what process should be used to do so, and many social media sites are not fully transparent about their policies and procedures when it comes to combating misinformation. Zhang’s group will develop a forecasting service to be used as external auditing for platforms to reduce false claims online. The “golden sets” created from the jury’s work will serve as training data to improve the forecasting service over time. Platforms that use this service will also be more transparent about their judgments regarding false information posted on their platform. 

“The goal of this project is to determine a process for collecting judgments on content moderation cases related to misinformation that has broad public legitimacy,” Zhang said. “Once we’ve established such a process, we aim to implement it and gather judgments for a large set of cases. These judgments can be used to train automated approaches that can be used to audit the performance of platforms.”

Participation in the Convergence Accelerator program includes a $749,000 award for each team to develop their work. Learn more about the latest round of awards here and read about all of the UW teams that earned a Convergence Accelerator award here.

Security Lab Holds First Annual Industry Affiliates Workshop

The UW Security and Privacy Research Lab recently launched an Industry Affiliates Program, which supports our ongoing research and strengthens collaborations between the lab and industry partners. On September 28, we held our first annual workshop for existing and prospective affiliate company members. The workshop gave industry affiliates (and potential future affiliates) an opportunity to learn about UW Security and Privacy research, and gave lab members an opportunity to learn about industry needs and opportunities. Speaking for the UW side, we learned a lot!

Many thanks to all of the attendees, and huge thanks especially to our current (named) affiliate companies: Google, Woven Planet, and Qualcomm! And many thanks to our unnamed affiliate companies as well! We so appreciate your support and look forward to further connection!
