With the start of the new academic year, we’re thrilled to be spending less time on Zoom and more time on our beautiful campus.
We’re also excited to welcome two new members to the lab: Tina Yeung, a PhD student who joins us from the FTC, and Umar Iqbal, a postdoc. Dr. Iqbal received his PhD from the University of Iowa, where he studied privacy and tracking on the web, and was selected as a member of the 2021 cohort of CIFellows. Welcome, Tina and Umar!
As anyone who has visited a website knows, online ads are taking up an increasing amount of page real estate. Depending on the ad, the content might veer from mildly annoying to downright dangerous; sometimes, it can be difficult to distinguish between ads that are deceptive or manipulative by design and legitimate content on a site. Now, Allen School professor Franziska Roesner (Ph.D., ‘14), co-director of the University of Washington’s Security and Privacy Research Lab, wants to shed light on problematic content in the online advertising ecosystem to support public-interest transparency and research.
Consumer Reports selected Roesner as a 2021-2022 Digital Lab Fellow to advance her efforts to create a public-interest online ads archive to document and investigate problematic ads and their impacts on users. With this infrastructure in place, Roesner hopes to support her team and others in developing new user-facing tools to combat the spread of misleading and potentially harmful ad content online. She is one of three public interest technology researchers named to the latest cohort of Digital Lab Fellows, all focused on developing practical solutions to emerging consumer harms in the digital realm.
This is not a new area of inquiry for Roesner, who has previously investigated online advertising from the perspective of user privacy, such as the use of third-party trackers to collect information from users across multiple websites. Lately, she has expanded her focus to the actual content of those ads. Last year, amidst the lead-up to the U.S. presidential election and the pandemic’s growing human and economic toll, and against the backdrop of simmering arguments over the origins of SARS-CoV-2, lockdowns and mask mandates, and potential medical interventions, Roesner and a team of researchers unveiled the findings of a study examining the quality, or lack thereof, of ads that appear on news and media sites. They found that problematic online ads take many forms, and that they appear equally on trusted mainstream news sites and on low-quality sites devoted to peddling misinformation. In follow-up work, Roesner and her collaborators studied how people, not just researchers, perceive problematic ad content; forthcoming work examines problematic political ads surrounding the 2020 U.S. elections.
“Right now, the web is the wild west of advertising. There is a lot of content that is misleading and potentially harmful, and it can be really difficult for users to tell the difference,” explained Roesner. “For example, ads may take the form of product ‘advertorials,’ in which their similarity to actual news articles lends them an appearance of legitimacy and objectivity. Or they might rely on manipulative or click-baity headlines that contain or imply disinformation. Sometimes, they are disguised as political opinion polls with provocative statements that, when you click on them, ask for your email address and sign you up for a mailing list that delivers you even more manipulative content.”
Roesner is keen to build on her previous work to improve our understanding of how these tactics enable problematic ads to proliferate, as well as the human toll they take in wasted time and attention and in the emotional impact of consuming misinformation. Built on the team’s existing ad collection infrastructure, the ad archive will provide a structured, longitudinal, and (crucially) public look into the ads that people see on the web. These insights will support additional research from Roesner’s team as well as other researchers investigating how misinformation spreads online. Roesner and her collaborators ultimately aim to help “draw the line” between legitimate online advertising content and practices and problematic content that is harmful to users, content creators, websites, and ad platforms.
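For readers curious what a “structured, longitudinal, and public” archive might look like in practice, here is a minimal sketch in Python of the kind of per-sighting record such an archive could store. The field names, labels, and helper function are hypothetical illustrations, not the team’s actual schema.

```python
# Hypothetical sketch of a single record in a public ad archive.
# Field names and labels are illustrative assumptions, not the team's schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List, Optional

@dataclass
class AdObservation:
    """One sighting of an ad creative on a page, as a crawler might record it."""
    ad_id: str                  # stable hash of the ad creative
    site: str                   # domain where the ad appeared
    page_url: str               # full URL of the hosting page
    advertiser: Optional[str]   # disclosed advertiser, if any
    landing_url: Optional[str]  # where clicking the ad leads
    screenshot_path: str        # saved image of the rendered creative
    labels: List[str] = field(default_factory=list)  # e.g. ["advertorial"]
    observed_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def ads_with_label(archive: List[AdObservation], label: str) -> List[AdObservation]:
    """Longitudinal questions reduce to filters over observations, e.g.
    all sightings carrying a given researcher-applied label."""
    return [obs for obs in archive if label in obs.labels]
```

Because each record ties a creative to a site, a timestamp, and a screenshot, researchers could ask questions like how often a given advertorial appeared on mainstream news sites over a particular month.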
But Roesner doesn’t think users should have to wait for the regulatory framework to catch up. One of her priorities is to protect users from problematic ads, for example by developing tools that automatically block certain ads or that empower users to recognize and flag them. While acknowledging that online advertising is here to stay (it underwrites the economic model of the web, after all), Roesner believes there is a better balance to be struck between revenue and the quality of the content people consume each day as they point and click.
“Even the most respected websites may be inadvertently hosting and assisting the spread of bogus content — which, as things stand, puts the onus on users to assess the veracity of what they are seeing,” said Roesner. “My hope is that this collaboration with Consumer Reports will support efforts to analyze ad content and its impact on users — and generate regulatory and technical solutions that will lead to more positive digital experiences for everyone.”
Consumer Reports created the Digital Lab Fellowship program with support from the Alfred P. Sloan Foundation and welcomed its first cohort last year.
“People should feel safe with the products and services that fill our lives and homes. That depends on dedicated public interest technologists keeping up with the pace of innovation to effectively monitor the digital marketplace,” Ben Moskowitz, director of the Digital Lab at Consumer Reports, said in a press release. “We are proud to support and work alongside these three Fellows, whose work will increase fairness and trust in the products and services we use every day.”
Allen School’s Amy Zhang and Franziska Roesner win NSF Convergence Accelerator for their work to limit the spread of misinformation online
The National Science Foundation (NSF) has selected Allen School professors Amy Zhang, who directs the Social Futures Lab, and Franziska Roesner, who co-directs the Security and Privacy Research Lab, to receive Convergence Accelerator funding for their work with collaborators at the University of Washington and the grassroots journalism organization Hacks/Hackers on tools to detect and help stop misinformation online. The NSF’s Convergence Accelerator program is unique in offering researchers a structured, year-long curriculum to accelerate their work toward tangible solutions. That curriculum is designed to strengthen each team’s convergence approach and develop their solution to the point where it can move on to a second phase with the potential for additional funding.
In their proposal, “Analysis and Response for Trust Tool (ARTT): Expert-Informed Resources for Individuals and Online Communities to Address Vaccine Hesitancy and Misinformation,” Zhang, Roesner, Human Centered Design & Engineering professor and Allen School adjunct professor Kate Starbird, Information School professor and director of the Center for an Informed Public Jevin West, and Connie Moon Sehat, an internet researcher at large with Hacks/Hackers who serves as principal investigator of the project, aim to develop a software tool, ARTT, that helps people identify and prevent misinformation. Today, that work is done on a smaller scale by individuals and community moderators, who have few resources and little expert guidance for combating false information. The team, made up of experts in fields such as computer science, social science, media literacy, conflict resolution and psychology, will develop a software program that helps moderators analyze information online and present practical information that builds trust.
“In our previous research, we learned that rather than platform interventions like ‘fake news’ labels, people often learn that something they see or post on social media is false or untrustworthy from comment threads or other community members,” said Roesner, who serves as co-principal investigator on the ARTT project alongside Zhang. “With the ARTT research, we are hoping to support these kinds of interactions in productive and respectful ways.”
While ARTT is designed to help prevent the spread of misinformation of any kind, the team’s current focus is on combating false information about vaccines; the World Health Organization has identified vaccine hesitancy as one of the top 10 threats to global health.
In addition to her participation in the ARTT project, Zhang has another Convergence Accelerator project focused on creating a “golden set” of guidelines to help prevent the spread of false information. That proposal, “Misinformation Judgments with Public Legitimacy,” aims to use public juries to render judgments on socially contested issues. Over time, the jurors’ judgments will accumulate into a “golden set” that social media platforms can use to evaluate information posted on their services. Besides Zhang, the project team includes the University of Michigan’s Paul Resnick, associate dean for research and faculty affairs and professor at the School of Information, and David Jurgens, professor at the School of Information and in the Department of Electrical Engineering & Computer Science, as well as the Massachusetts Institute of Technology’s David Rand, professor of management science and brain and cognitive sciences, and Adam Berinsky, professor of political science.
Online platforms have been increasingly called on to reduce the spread of false information, but there is little agreement on what process should be used to do so, and many social media sites are not fully transparent about their policies and procedures for combating misinformation. Zhang’s group will develop a forecasting service that can be used to externally audit platforms’ efforts to reduce false claims online. The “golden sets” created from the juries’ work will serve as training data to improve the forecasting service over time. Platforms that use the service can also be more transparent about their judgments regarding false information posted on their platforms.
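As a rough illustration of that pipeline (not the project’s actual design), the sketch below trains a toy forecasting model on jury-labeled cases and flags platform decisions that diverge from the predicted jury verdict. The example cases, text features, and audit rule are all assumptions made for the sake of illustration.

```python
# Hypothetical sketch of the audit loop described above: jury judgments
# (the "golden set") train a forecaster, and the forecaster flags platform
# decisions that diverge from the predicted jury verdict.
# Data, features, and the audit rule are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Golden set: moderation cases with jury verdicts (1 = misinformation).
golden_set = [
    ("miracle cure reverses illness overnight, doctors stunned", 1),
    ("health agency updates booster guidance for adults over 65", 0),
    ("leaked memo proves vote totals were fabricated nationwide", 1),
    ("county posts certified election results after routine audit", 0),
]
texts, verdicts = zip(*golden_set)

# Forecasting service: predict the jury's likely judgment for new cases.
vectorizer = TfidfVectorizer()
forecaster = LogisticRegression().fit(vectorizer.fit_transform(texts), verdicts)

def audit(platform_decisions: dict) -> list:
    """Return cases where the platform's call differs from the forecast."""
    flagged = []
    for text, platform_call in platform_decisions.items():
        forecast = int(forecaster.predict(vectorizer.transform([text]))[0])
        if forecast != platform_call:
            flagged.append(text)
    return flagged

# Example external audit: the platform took no action (0) on a new claim.
print(audit({"secret cure suppressed by doctors, share before it is deleted": 0}))
```

In a real deployment the golden set would contain far more cases, and the forecaster would be retrained as new jury judgments arrive, which is the role the training data plays in the description above.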
“The goal of this project is to determine a process for collecting judgments on content moderation cases related to misinformation that has broad public legitimacy,” Zhang said. “Once we’ve established such a process, we aim to implement it and gather judgments for a large set of cases. These judgments can be used to train automated approaches that can be used to audit the performance of platforms.”
Participation in the Convergence Accelerator program includes a $749,000 award for each team to develop their work. Learn more about the latest round of awards here and read about all of the UW teams that earned a Convergence Accelerator award here.