Professor Franziska Roesner earns Consumer Reports Digital Lab Fellowship to support research into problematic content in online ads

(Cross-posted from Allen School News.)

Franziska Roesner smiling and leaning against a wood and metal railing
Credit: Dennis Wise/University of Washington

As anyone who has visited a website knows, online ads are taking up an increasing amount of page real estate. Depending on the ad, the content might veer from mildly annoying to downright dangerous; sometimes, it can be difficult to distinguish between ads that are deceptive or manipulative by design and legitimate content on a site. Now, Allen School professor Franziska Roesner (Ph.D., ‘14), co-director of the University of Washington’s Security and Privacy Research Lab, wants to shed light on problematic content in the online advertising ecosystem to support public-interest transparency and research.

Consumer Reports selected Roesner as a 2021-2022 Digital Lab Fellow to advance her efforts to create a public-interest online ads archive to document and investigate problematic ads and their impacts on users. With this infrastructure in place, Roesner hopes to support her team and others in developing new user-facing tools to combat the spread of misleading and potentially harmful ad content online. She is one of three public interest technology researchers named to the latest cohort of Digital Lab Fellows, which focuses on developing practical solutions to emerging consumer harms in the digital realm.

This is not a new area of inquiry for Roesner, who has previously investigated online advertising from the perspective of user privacy, such as the use of third-party trackers to collect information from users across multiple websites. Lately, she has expanded her focus to the actual content of those ads. Last year, amidst the lead-up to the U.S. presidential election and the pandemic’s growing human and economic toll — and against the backdrop of simmering arguments over the origins of SARS-CoV-2, lockdowns and mask mandates, and potential medical interventions — Roesner and a team of researchers unveiled the findings of a study examining the quality, or lack thereof, of ads that appear on news and media sites. They found that problematic online ads take many forms and that they appear equally on trusted mainstream news sites and on low-quality sites devoted to peddling misinformation. In follow-up work, Roesner and her collaborators studied how people — not just researchers — perceive problematic ad content, and in forthcoming work they examine problematic political ads surrounding the 2020 U.S. elections.

“Right now, the web is the wild west of advertising. There is a lot of content that is misleading and potentially harmful, and it can be really difficult for users to tell the difference,” explained Roesner. “For example, ads may take the form of product ‘advertorials,’ in which their similarity to actual news articles lends them an appearance of legitimacy and objectivity. Or they might rely on manipulative or click-baity headlines that contain or imply disinformation. Sometimes, they are disguised as political opinion polls with provocative statements that, when you click on them, ask for your email address and sign you up for a mailing list that delivers you even more manipulative content.”

Roesner is keen to build on her previous work to improve our understanding of how these tactics enable problematic ads to proliferate — and the human toll they take in wasted time and attention and in the emotional impact of consuming misinformation. Built on the team’s existing ad collection infrastructure, the ad archive will provide a structured, longitudinal, and (crucially) public look into the ads that people see on the web. These insights will support additional research by Roesner’s team as well as by other researchers investigating how misinformation spreads online. Roesner and her collaborators ultimately aim to help “draw the line” between legitimate online advertising content and practices and problematic content that is harmful to users, content creators, websites, and ad platforms.

But Roesner doesn’t think we should wait for the regulatory framework to catch up. One of her priorities is to protect users from problematic ads, for example by developing tools that automatically block certain ads or empower users to recognize and flag them. While acknowledging that online advertising is here to stay — it underpins the economic model of the web, after all — Roesner believes there is a better balance to be struck between revenue and the quality of the content that people consume on a daily basis as they point and click.

“Even the most respected websites may be inadvertently hosting and assisting the spread of bogus content — which, as things stand, puts the onus on users to assess the veracity of what they are seeing,” said Roesner. “My hope is that this collaboration with Consumer Reports will support efforts to analyze ad content and its impact on users — and generate regulatory and technical solutions that will lead to more positive digital experiences for everyone.”

Consumer Reports created the Digital Lab Fellowship program with support from the Alfred P. Sloan Foundation and welcomed its first cohort last year. 

“People should feel safe with the products and services that fill our lives and homes. That depends on dedicated public interest technologists keeping up with the pace of innovation to effectively monitor the digital marketplace,” Ben Moskowitz, director of the Digital Lab at Consumer Reports, said in a press release. “We are proud to support and work alongside these three Fellows, whose work will increase fairness and trust in the products and services we use every day.”

Read the Consumer Reports announcement here, and learn more about the Digital Lab Fellowship program here.

Congratulations, Franzi!

Allen School’s Amy Zhang and Franziska Roesner win NSF Convergence Accelerator for their work to limit the spread of misinformation online

(Cross-posted from Allen School News.)

Amy Zhang (left) and Franziska Roesner

The National Science Foundation (NSF) has selected Allen School professors Amy Zhang, who directs the Social Futures Lab, and Franziska Roesner, who co-directs the Security and Privacy Research Lab, to receive Convergence Accelerator funding for their work with collaborators at the University of Washington and the grassroots journalism organization Hacks/Hackers on tools to detect and help stop misinformation online. The NSF’s Convergence Accelerator program is unique in that its structure offers researchers the opportunity to accelerate their work over the course of a year toward tangible solutions. The curriculum is designed to strengthen each team’s convergence approach and further develop its solution in preparation for a second phase with the potential for additional funding.

In their proposal, “Analysis and Response for Trust Tool (ARTT): Expert-Informed Resources for Individuals and Online Communities to Address Vaccine Hesitancy and Misinformation,” Zhang, Roesner, Human Centered Design & Engineering professor and Allen School adjunct professor Kate Starbird, Information School professor and director of the Center for an Informed Public Jevin West, and Hacks/Hackers researcher at large Connie Moon Sehat, who serves as principal investigator of the project, aim to develop a software tool — ARTT — that helps people identify and prevent misinformation. Today, that work happens on a much smaller scale, carried out by individuals and community moderators with few resources and little expert guidance on combating false information. The team, made up of experts in fields such as computer science, social science, media literacy, conflict resolution, and psychology, will develop a software program that helps moderators analyze information online and present practical information that builds trust.

“In our previous research, we learned that rather than platform interventions like ‘fake news’ labels, people often learn that something they see or post on social media is false or untrustworthy from comment threads or other community members,” said Roesner, who serves as co-principal investigator on the ARTT project alongside Zhang. “With the ARTT research, we are hoping to support these kinds of interactions in productive and respectful ways.”

While ARTT is designed to help prevent the spread of misinformation of any kind, the team’s focus right now is on combating false information about vaccines — vaccine hesitancy has been identified by the World Health Organization as one of the top 10 threats to global health.

In addition to her participation in the ARTT project, Zhang has another Convergence Accelerator project focused on creating a “golden set” of guidelines to help prevent the spread of false information. That proposal, “Misinformation Judgments with Public Legitimacy,” aims to use public juries to render judgments on socially contested issues. Over time, the jurors’ decisions will accumulate into a “golden set” that social media platforms can use to evaluate information posted on social media. Besides Zhang, the project team includes the University of Michigan’s Paul Resnick, associate dean for research and faculty affairs and professor at the School of Information, and David Jurgens, professor at the School of Information and in the Department of Electrical Engineering & Computer Science, as well as the Massachusetts Institute of Technology’s David Rand, professor of management science and brain and cognitive sciences, and Adam Berinsky, professor of political science.

Online platforms have increasingly been called on to reduce the spread of false information, but there is little agreement on what process should be used to do so, and many social media sites are not fully transparent about their policies and procedures for combating misinformation. Zhang’s group will develop a forecasting service that can serve as an external audit of how well platforms reduce false claims online. The “golden sets” created from the juries’ work will serve as training data to improve the forecasting service over time. Platforms that use the service will also be more transparent about their judgments regarding false information posted on their sites.

“The goal of this project is to determine a process for collecting judgments on content moderation cases related to misinformation that has broad public legitimacy,” Zhang said. “Once we’ve established such a process, we aim to implement it and gather judgments for a large set of cases. These judgments can be used to train automated approaches that can be used to audit the performance of platforms.”

Participation in the Convergence Accelerator program includes a $749,000 award for each team to develop their work. Learn more about the latest round of awards here and read about all of the UW teams that earned a Convergence Accelerator award here.

Security Lab Holds First Annual Industry Affiliates Workshop

The UW Security and Privacy Research Lab recently launched an Industry Affiliates Program, which supports our ongoing research and strengthens collaborations between the lab and industry partners. On September 28, we held our first annual workshop for existing and prospective affiliate companies. The workshop gave industry affiliates (and potential future affiliates) a chance to learn about UW Security and Privacy research, and it gave lab members a chance to learn about industry needs and opportunities. Speaking for the UW side, we learned a lot!

Many thanks to all of the attendees, and huge thanks especially to our current (named) affiliate companies: Google, Woven Planet, and Qualcomm! And many thanks to our unnamed affiliate companies as well! We so appreciate your support and look forward to further connection!

UW Security Lab at FTC PrivacyCon 2021

Three members of the UW Security & Privacy Lab were invited to speak at the Federal Trade Commission’s PrivacyCon this year! Postdoc Pardis Emami-Naeini and PhD student Miranda Wei spoke about their pre-UW work, and PhD student Christine Geeng spoke about her study of COVID-19 misinformation interventions on social media (a collaboration with Security Lab faculty member Franzi Roesner and iSchool faculty member Jevin West). What’s more, Christine’s session was moderated by FTC technologist Christina Yeung, who we are excited will be joining the Security Lab as a PhD student this fall! You can see the PrivacyCon talk videos as well as check out the agenda, including links to the associated research papers, here: https://www.ftc.gov/news-events/events-calendar/privacycon-2021.

Q&A with Tadayoshi Kohno: In his new novella ‘Our Reality,’ Allen School professor invites readers to consider who benefits (and who doesn’t) from technology

(Cross-posted from Allen School News.)

Book cover art for “Our Reality: A Novella” by Tadayoshi Kohno: water and a horizon in a grid, with silhouettes of people

What if you could engage with the world — go to class, do your job, meet up with friends, get a workout in — without leaving the comfort of your bedroom thanks to mixed-reality technology? 

What if you could customize your avatar in many ways, but you couldn’t make it reflect your true racial or cultural identity? What if getting a high-paying job depended on your access to this technology, but you are blind and the technology was not designed for you?

And what if notions of equity and justice manufactured in that virtual world — at least, for some users — still don’t carry over into the real one?

These and many other questions come to mind as one reads “Our Reality,” a new science-fiction novella authored by Tadayoshi (Yoshi) Kohno, a professor in the University of Washington’s Paul G. Allen School of Computer Science & Engineering. Kohno, who co-directs the UW’s Security and Privacy Research Lab and Tech Policy Lab, has long regarded science fiction as a powerful means through which to critically examine the relationship between technology and society. He previously explored the theme in a short piece he contributed to the Tech Policy Lab’s recent anthology, “Telling Stories,” which examines culturally responsive artificial intelligence (AI).

“Our Reality,” so named for the mixed-reality technology he imagines taking hold in the post-pandemic world circa 2034, is Kohno’s first foray into long-form fiction. The Allen School recently sat down with Kohno with the help of another virtual technology — Zoom — to discuss his inspiration for the story, his personal as well as professional journey to examining matters of racism, equity and justice, and how technology meant to help may also do harm.

First of all, congratulations on the publication of your first novella! As a professor of computer science and a well-known cybersecurity researcher, did you ever expect to be writing a work of science fiction?

Yoshi Kohno: I’ve always been interested in science fiction, ever since I was a kid. I’ve carried that fascination into my career as a computer science educator and researcher. Around 10 years ago, I published a paper on science fiction prototyping and computer security education that talked about my experiences asking students in my undergraduate computer security course to write science fiction as part of the syllabus. Writing science fiction means creating a fictional world and then putting technology inside that world. Doing so forces the writer — the students, in my course — to explore the relationship between society and technology. It has been a dream of mine to write science fiction of my own. In fact, when conducting my research — on automotive security, or cell phone surveillance, or anything else — I have often discussed with my students and colleagues how our results would make good plot elements for a science fiction story! The teenage me would be excited that I finally did it. 

What was the inspiration for “Our Reality,” and what made you decide now was the right time to realize that childhood dream?

YK: Recently, many people experienced a significant awakening to the harms that technology can bring and to the injustices in society. We, as a society and as computer scientists, have a responsibility to address these injustices. I have written a number of scientific, scholarly papers. But I realized that some of the most important works in history are fiction. Think about the book “1984,” for example, and how often people quote from it. Now, I know my book will not be nearly as transformative as “1984,” but the impact of that book and others inspired me.

I was particularly disturbed by how racism can manifest in technologies. As an educator, I believe that it is our responsibility to help students understand how to create technologies that are just, and certainly that are not racist. I wrote my story with the hope that it would inform the reader’s understanding of racism and racism in technology. At the same time, I also want to explicitly acknowledge that there are already amazing books on the topic of racism and technology. Consider, for example, Dr. Ruha Benjamin’s book “Race After Technology,” or Dr. Safiya Umoja Noble’s book “Algorithms of Oppression.” I hope that readers committed to addressing injustices in society and in technology read these books, as well as the other books I cite in the “Suggested Readings” section of my novella. 

In addition to racism in technology, my story also tries to surface a number of other important issues. I sometimes intentionally added flaws to the technologies featured in “Our Reality” — flaws that can serve as starting points for conversations.

Yoshi Kohno (Dennis Wise/University of Washington)

How did your role as the Allen School’s Associate Director for Diversity, Equity, Inclusion and Access inform your approach to the story?

YK: I view diversity, equity, inclusion, and access as important to me as an individual, important for our school, and important for society. I’ve thought about the connection between society and technology throughout my career. As a security researcher and educator, I have to think about the harms that technologies could bring. But when I took on the role of Associate Director, I realized that I had a lot more learning to do. I devoured as many resources as I could, I talked with many other people with expertise far greater than my own, I read many books, I enrolled in the Cultural Competence in Computing (3C) Fellows Program, and so on. This is one of the things that we as a computer science community need to always be doing. We should always be open to learning more, to questioning our assumptions, and to realizing that we are not the experts.

One of the things that I’ve always cared about as an educator is helping students understand not only the relationship between society and technology, but also how technologies can do harm. I’m motivated to get people thinking about these issues and proactively working to mitigate those harmful impacts. If people are not already thinking about the injustices that technologies can create, then I hope that “Our Reality” can help them start.

One of the main characters in “Our Reality,” Emma, is a teenage Black girl. How did you approach writing a character whose identity and experiences would be so different from your own?

YK: Your question is very good. I have several answers that I want to give. First, this question connects to one of the teaching goals of “Our Reality” and, in particular, to Question 6 in the “Questions for Readers” section of the novella. Essentially, how should those who design technologies approach their work when the users or affected stakeholders might have identities very different from their own? And how do they ensure that they don’t rely on stereotypes or somehow create an injustice for those users or affected stakeholders? Similarly, in writing “Our Reality,” and with my goal of contributing to a discussion about racism in technology, I found myself in the position of centering Emma, a teenage girl who is Black. This was a huge responsibility, and I knew that I needed to approach this responsibility respectfully and mindfully. My own identity is that of a cisgender man who is Japanese and white. 

The first thing I did toward writing Emma was to buy the book “Writing the Other” by Nisi Shawl and Cynthia Ward. It is an amazing book, and I’d recommend it to anyone who wishes to write stories that include characters with identities other than their own. I read their book last summer. I then discovered that Shawl was teaching a year-long course in science fiction writing through the Hugo House here in Seattle, so I enrolled. That course taught me a significant amount about writing science fiction and about writing diverse characters.

I should acknowledge that while I tried the best I could, I might have made mistakes. While I have written many versions of this story, and while I’ve received amazingly useful feedback along the way, any mistakes that remain are mine and mine alone.

In your story, Our Reality is also the name of a technology that allows users to experience a virtual, mixed-reality world. One of the other themes that stood out is who has access to technologies like Our Reality — the name implies a shared reality, but it’s only really shared by those who can afford it. There are the “haves” like Emma and her family, and the “have nots” like Liam and his family. How worried are you that unequal access to technology will exacerbate societal divides?

YK: I’m quite worried that technology will exacerbate inequity along many dimensions. In the context of mixed-reality technology like Our Reality, I think there are three reasons that a group of people might not access it. One is financial; people may not be able to afford the mixed-reality Goggles, the subscription, the in-app purchases, and so on. They either can’t access it at all or will have unequal access to its features, like Liam discovered when he tried to customize his avatar beyond the basic free settings. The second reason is because the technology was not designed for them. I alluded to this in the story when I brought up Liam’s classmate Mathias, who is blind. Finally, some people will elect not to access the technology for ideological reasons. When I think about the future of mixed-reality technologies, like Our Reality, I worry that society will further fracture into different groups, the “haves” or “designed fors” and the “have nots” or “not designed fors.”

Emma’s mother is a Black woman who holds a leadership position in the company that makes Our Reality, but her influence is still limited. For example, Emma objects to the company’s decision to make avatars generic and “raceless,” which means she can’t fully be herself in the virtual world. What did you hope people would take away from that aspect of the story?

YK: First, this is an example of one of the faults that I intentionally included in the technologies in “Our Reality.” I also want to point the reader to the companion document that I prepared, which describes in more detail some of the educational content that I tried to include in Our Reality. Your question connects to so many important topics, such as the notion of “the unmarked state” — the default persona that one envisions if they are not provided with additional information — as well as colorblind racism. This also connects to something that Dr. Noble discusses in the book “Algorithms of Oppression” and which I tried to surface in “Our Reality” — that not only do we need to increase the diversity within the field, but we need to overcome systemic issues that stand in the way of fully considering the needs of all people, and systemic inequities, in the design of technologies.

Stepping back, I am hoping that readers start to think about some of these issues as they read “Our Reality.” I hope that they realize that the situation described in “Our Reality” is unjust and inequitable. I hope they read the companion document, to understand the educational content that I incorporated into “Our Reality.” And then I hope that readers are inspired to read more authoritative works, like “Algorithms of Oppression” and the other books that I reference in the novella and in the companion document.

You and professor Franziska Roesner, also in the Allen School, have done some very interesting research with your students in the UW Security and Privacy Research Lab. Your novella incorporates several references to issues raised in that research, such as tracking people via online advertising and how to safeguard users of mixed-reality technologies from undesirable or dangerous content. It almost feels uncomfortably close to your version of 2034 already. So how can we as a society, along with our regulatory frameworks, catch up and keep up with the pace of innovation?

YK: Rather than having society and our regulatory frameworks catch up to the pace of innovation, we should consider slowing the pace of innovation. Often there is a tendency to think that technology will solve our problems; if we just build the next technology, things will be great, or so it seems like people often think. Instead of perpetuating that mentality, maybe we should slow down and be more thoughtful about the long-term implications of technologies before we build them — and before we need to put any regulatory framework in place. 

Kohno and colleague Franziska Roesner, wearing augmented-reality goggles, have explored the privacy and security of mixed-reality technologies with students in the Security and Privacy Research Lab

As part of changing the pace of innovation, we need to make sure that the innovators of technology understand the broader societal and global context in which technologies exist. This is one of the reasons why I appreciate the Cultural Competence in Computing (3C) Fellows Program coming out of Duke so much, and why I encourage other educators to apply. The program was created by Dr. Nicki Washington, Dr. Shaundra B. Daily, and graduate assistant Cecilé Sadler at Duke University. Its goal is to empower computer science educators throughout the world with the knowledge and skills necessary to help students understand the broader societal context in which technologies sit.

As an aside, one of the reasons that my colleagues Ryan Calo in the School of Law and Batya Friedman in the iSchool and I co-founded the Tech Policy Lab at the University of Washington is that we understood the need for policymakers and technologists to also come together and explore issues at the intersection between society, technology, and policy.

Speaking of understanding context, in the companion document to “Our Reality” you note “Computing systems can make the world a far better place for some, but a far worse one for others.” Can you elaborate? 

YK: There are numerous examples of how technologies, when one looks at them from a 50,000-foot perspective, might seem to be beneficial to individuals or society. But when one looks more closely at the specific case of specific individuals, you find that they’re not providing a benefit; in fact, they have the potential to actively cause harm. Consider, for example, an app that helps a user find the location of their family or friends. Such an app might seem generally beneficial — it could help a parent or guardian find their child if they get separated at a park. But now consider situations of domestic abuse. Someone could use that same technology to track and harm their victim.

Another example, which I explored in “Our Reality” through Emma’s encounter with the police drones, is inequity across different races. Face detection and recognition systems are now widely understood to be inequitable because they are less accurate for Black people than for white people. This is incredibly inequitable and unjust. I encourage readers to learn more about the inequities of face detection and face recognition. One great place to start is the film “Coded Bias,” directed and produced by Shalini Kantayya, which centers MIT Media Lab researcher Joy Buolamwini.

At one point, Emma admonishes her mother, a technologist, that she can’t solve everything with technology. How do we determine what is the best use of technology, and what is the responsibility of your colleagues who are the ones inventing it?

YK: I think that it is absolutely critical for those who are driving innovation to understand how the technology that they create sits within a broader society and interacts with people, many of whom are different from themselves. I referred earlier to this notion of a default persona, also called the “unmarked state.” Drawing from Nisi Shawl and Cynthia Ward’s book “Writing the Other,” this is more often than not someone who is white, male, heterosexual, single, young, and with no disabilities. Not only should one be thinking about how a technology would fit in the context of society, but also consider it in the context of the many people who do not identify with this default persona. 

On top of that, when designing technologies for someone “not like me,” people need to be sure they are not invoking stereotypes or false assumptions about those who are not like themselves. There’s a book called “Design Justice” by Dr. Sasha Costanza-Chock about centering the communities for whom we are designing. As technologists, we ought to be working with those stakeholders to understand what technologies they need. And we shouldn’t presume that any specific technology is needed. It could be that a new technology is not needed.

In exploring the potential pitfalls of the technology in “Our Reality,” Kohno drew inspiration from an academic-industry summit on mixed reality that he and Roesner co-organized

Some aspects of Our Reality sound like fun — for example, when Emma and Liam played around with zero gravity in the science lab. If you had the opportunity and the means to use Our Reality, would you?

YK: I think it is an open research question about what augmented reality and mixed-reality technologies will be like in the next 15 years. I do think that technologies like Our Reality will exist in the not-too-distant future. But I hope that the people developing these technologies will have addressed the access questions and societal implications that I raised in the story. As written, I think I would enjoy aspects of the technology, but I would not feel comfortable using it if the equity issues surrounding Our Reality aren’t addressed.

Stepping even further back, there is a whole class of risks with mixed-reality technologies that is not deeply surfaced in this story: computer security risks. This is a topic that Franziska Roesner and I have been studying at UW for about 10 years, along with our colleagues and students. There are a lot of challenges to securing future mixed-reality platforms and applications.

So you would be one of those ideological objectors you mentioned earlier.

YK: I would, yes. And, in addition to issues of access and equity and the various security risks, I used to also be a yoga instructor. I like to see and experience the world through my real senses. I fear that mixed-reality technologies are coming. But for me, personally, I don’t want to lose the ability to experience the world for real, rather than through Goggles.

Who did you have in mind as the primary audience for “Our Reality”?

YK: I had several primary audiences, actually. In a dream world, I would love to see middle school students reading and discussing “Our Reality” in their social studies classes. I would love for the next generation to start discussing issues at the intersection of society and technology before they become technologists. If students discuss these issues in middle school, then maybe it will become second nature for them to always consider the relationship between society and technology, and how technologies can create or perpetuate injustices and inequities.

I would also love for high school and first- and second-year college students to read this story. And, of course, I would love for more senior computer scientists — advanced undergraduate students and people in industry — to read this story, too. I also hope that people read the books that I reference in the Suggested Readings section of my novella and the companion document. Those references are excellent. My novella scratches the surface of important issues, and provides a starting point for deeper considerations; the books that I reference provide much greater detail and depth.

As an educator, I wanted the story to be as accessible as possible, to the broadest audience possible. That’s why I put a free PDF of the novella on my website. I also put a PDF of the companion document on my web page. I wrote the companion document in such a way that I hope it will be useful and educational to people even if they never read the “Our Reality” novella.

What are the main lessons you hope readers will take away from “Our Reality”?

YK: I hope that readers will understand the importance of considering the relationship between society and technology. I hope that readers will understand that it is not inevitable that technologies be created. I hope that readers realize that when we do create a technology, we should do so in a responsible way that fully acknowledges and considers the full range of stakeholders and the present and future relationships between that technology and society.

Also, I tried to be somewhat overt about the flaws in the technologies featured in “Our Reality.” As I said earlier, I intentionally included flaws in the technologies in “Our Reality,” for educational purposes. But when one interacts with a brand new technology in the real world, sometimes there are flaws, but those flaws are not as obvious. I would like to encourage both users and designers of technology to be critical in their consideration of new technologies, so that they can proactively spot those flaws from an equity and justice perspective. 

If my story reaches people who have not been thinking about the relationship between society, racism, and technology already, I hope “Our Reality” starts them down the path of learning more. I encourage these readers to look at the “Our Reality” companion document, and explore some of the other resources that I reference. I would like to also thank these readers for caring about such an important topic.

Readers may purchase the paperback or Kindle version of “Our Reality” on Amazon.com, and access a free downloadable PDF of the novella, the companion document, and a full list of resources on the “Our Reality” webpage.

Kentrell Owens Receives Honorable Mention at CHI

Congratulations to Security Lab PhD student Kentrell Owens for receiving an Honorable Mention for his paper “‘You Gotta Watch What You Say’: Surveillance of Communication with Incarcerated People,” which will appear at the CHI Conference on Human Factors in Computing Systems in May. Big congratulations to Kentrell and to his co-authors at CMU, Camille Cobb (a UW Security Lab alum) and Lorrie Cranor!!

Also appearing at CHI this year: Security Lab PhD student Eric Zeng’s paper on “What Makes a ‘Bad’ Ad? User Perceptions of Problematic Online Advertising,” as well as an impressively long list of contributions from others at the University of Washington. Congrats to everyone!

Coverage of Security Lab Work Studying Misleading and Other Problematic Online Ads

Recent coverage of Security Lab PhD candidate Eric Zeng’s work studying problematic online ads (with Yoshi Kohno and Franzi Roesner, published at ConPro ’20 — “Bad News: Clickbait and Deceptive Ads on News and Misinformation Websites” — and under ongoing investigation):
