Franzi On KUOW’s “Primed” About Smart Homes

Security and Privacy Lab co-director Professor Franzi Roesner was interviewed on KUOW’s “Primed” Podcast about how smart home technologies can exacerbate existing power dynamics or tensions among home occupants or visitors. Listen to the interview here. Read more about the Security Lab’s work on this topic in several papers:

Uncle Phil, is that really you? Allen School researchers decode vulnerabilities in online genetic genealogy services

(Cross-posted from Allen School News.)

Hand holding saliva collection tube
Marco Verch/Flickr

Genetic genealogy websites enable people to upload their results from consumer DNA testing services like Ancestry.com and 23andMe to explore their genetic makeup, familial relationships, and even discover new relatives they didn’t know they had. But how can you be sure that the person who emails you claiming to be your Uncle Phil really is a long-lost relation?

Based on what a team of Allen School researchers discovered when interacting with the largest third-party genetic genealogy service, you may want to approach plans for a reunion with caution. In their paper “Genotype Extraction and False Relative Attacks: Security Risks to Third-Party Genetic Genealogy Services Beyond Identity Inference,” they analyze how security vulnerabilities built into the GEDmatch website could allow someone to construct an imaginary relative or obtain sensitive information about people who have uploaded their personal genetic data. 

Through a series of highly-controlled experiments using information from the GEDmatch online database, Allen School alumnus and current postdoctoral researcher Peter Ney (Ph.D., ‘19) and professors Tadayoshi Kohno and Luis Ceze determined that it would be relatively straightforward for an adversary to exploit vulnerabilities in the site’s application programming interface (API) that compromise users’ privacy and expose them to potential fraud. The team demonstrated multiple ways in which they could extract highly personal, potentially sensitive genetic information about individuals on the site — and use existing familial relationships to create false new ones by uploading fake profiles that indicate a genetic match where none exists.

Part of GEDmatch’s attraction is its user-friendly graphical interface, which relies on bars and color-coding to visualize specific genetic markers and similarities between two profiles. For example, the “chromosome paintings” illustrate the differences between two profiles on each chromosome, accompanied by “segment coordinates” that indicate the precise genetic markers that the profiles share. These one-to-one comparisons, however, can be used to reveal more information than intended. It was this aspect of the service that the researchers were able to exploit in their attacks. To their surprise, they were not only able to determine the presence or absence of various genetic markers at certain segments of a hypothetical user’s profile, but also to reconstruct 92% of the entire profile with 98% accuracy.

As a first step, Ney and his colleagues created a research account on GEDmatch, to which they uploaded artificial genetic profiles generated from data contained in anonymous profiles from multiple, publicly available datasets designated for research use. By assigning each of their profiles a privacy setting of “research,” the team ensured that their artificial profiles would not appear in public matching results. Once the profiles were uploaded, GEDmatch automatically assigned each one a unique ID, which enabled the team to perform comparisons between a specific profile and others in the database — in this case, a set of “extraction profiles” created for this purpose. The team then performed a series of experiments. For the total profile reconstruction, they uploaded and ran comparisons between 20 extraction profiles and five targets. Based on the GEDmatch visualizations alone, they were able to recover just over 60% of the target profiles’ data. Based on their knowledge of genetics, specifically the frequency with which possible DNA bases are found within the population at a specific position on the genome, they were able to determine another 30%. They then relied on a genetic technique known as imputation to fill in the rest. 
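
To make the reconstruction step more concrete, the sketch below shows, in heavily simplified form, how per-position comparison results and population allele frequencies could be combined. The data structures and numbers are hypothetical; the paper's actual extraction profiles, and GEDmatch's interface, are considerably more involved.

```python
# Conceptual sketch of combining one-to-one comparison output with
# population allele frequencies to reconstruct a target genotype.
# Hypothetical data and structures; not the authors' actual pipeline.
from typing import Dict, Optional

# Positions where comparisons against extraction profiles revealed the
# target's genotype, or None where they did not.
observed: Dict[int, Optional[str]] = {
    101: "AA", 102: None, 103: "AG", 104: None,
}

# Frequency of the major allele at each position, e.g. from a public
# reference panel, plus the major allele itself.
major_allele_freq: Dict[int, float] = {101: 0.92, 102: 0.97, 103: 0.55, 104: 0.60}
major_allele = {101: "A", 102: "C", 103: "A", 104: "T"}

def reconstruct(threshold: float = 0.95) -> Dict[int, Optional[str]]:
    """Fill in unobserved positions whose major allele is nearly fixed in
    the population; leave the rest for statistical imputation tools."""
    genotype: Dict[int, Optional[str]] = {}
    for pos, call in observed.items():
        if call is not None:
            genotype[pos] = call                   # recovered from comparisons (~60% in the paper)
        elif major_allele_freq[pos] >= threshold:
            genotype[pos] = major_allele[pos] * 2  # guess homozygous major (~30% in the paper)
        else:
            genotype[pos] = None                   # left for imputation to fill in
    return genotype

print(reconstruct())
```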

Once they had constructed nearly the whole of a target’s profile, the researchers used that information to create a false child for one of their targets. When they ran the comparison between the target profile and the false child profile through the system, GEDmatch confirmed that the two were a match for a parent-child relationship.
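
The false relative attack rests on a basic fact of inheritance: a child shares one allele with a parent at every position, so a profile fabricated that way will register as a parent-child match. The toy sketch below illustrates only that idea; the names are hypothetical, and real profiles use standard consumer-genomics file formats with hundreds of thousands of positions.

```python
# Toy illustration of why a fabricated "child" profile matches a
# reconstructed parent; hypothetical data, not the authors' code.
import random
from typing import Dict

def fabricate_child(parent_genotype: Dict[int, str]) -> Dict[int, str]:
    """At each position the fake child inherits one allele from the
    reconstructed parent and one drawn at random (standing in for the
    other parent), so a one-to-one comparison reports a parent-child match."""
    bases = "ACGT"
    child = {}
    for pos, alleles in parent_genotype.items():
        inherited = random.choice(alleles)   # one allele from the "parent"
        other = random.choice(bases)         # stand-in for the other parent
        child[pos] = "".join(sorted(inherited + other))
    return child

parent = {101: "AA", 103: "AG", 104: "TT"}
print(fabricate_child(parent))
```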

While it is true that an adversary would have to have the right combination of programming skills and knowledge of genetics and genealogy to pull it off, the process isn’t as difficult as it sounds — or, to a security expert, as it should be. To acquire a person’s entire profile, Ney and his colleagues performed the comparisons between extraction and target profiles manually. They estimate the process took 10 minutes to complete — a daunting prospect, perhaps, if an adversary wanted to compare a much greater number of targets. But if one were to write a script that automatically performs the comparisons? “That would take 10 seconds,” said Ney, who is the lead author of the paper.

Consumer-facing genetic testing and genetic genealogy are still relatively nascent industries, but they are gaining in popularity. And as the size of the database grows, so does the interest of law enforcement looking to crack criminal cases for which the trail has gone cold. In one high-profile example from last year, investigators arrested a suspect alleged to be the Golden State Killer, whose identity remained elusive for more than four decades before genetic genealogy yielded a breakthrough. Given the prospect of using genetic information for this and other purposes, the researchers’ findings raise important questions about how to ensure the security and integrity of genetic genealogy results, now and into the future.

“We’re only beginning to scratch the surface,” said Kohno, who co-directs the Allen School’s Security and Privacy Research Lab and previously helped expose potential security vulnerabilities in internet-connected motor vehicles, wireless medical implants, consumer robotics, mobile advertising, and more. “The responsible thing for us is to disclose our findings so that we can engage a community of scientists and policymakers in a discussion about how to mitigate this issue.”

Echoing Kohno’s concern, Ceze emphasizes that the issue is made all the more urgent by the sensitive nature of the data that people upload to a site like GEDmatch — with broad legal, medical, and psychological ramifications — in the midst of what he refers to as “the age of oversharing information.”

“Genetic information correlates to medical conditions and potentially other deeply personal traits,” noted Ceze, who co-directs the Molecular Information Systems Laboratory at the University of Washington and specializes in computer architecture research as a member of the Allen School’s Sampa and SAMPL groups. “As more genetic information goes digital, the risks increase.”

Unfortunately for those who are not prone to oversharing, the risks extend beyond the direct users of genetic genealogy services. According to Ney, GEDmatch contains the personal genetic information of a sufficient number and variety of people across the U.S. that, should someone gain illicit possession of the entire database, they could potentially link genetic information with identity for a large portion of the country. While Ney describes the decision to share one’s data on GEDmatch as a personal one, some decisions appear to be more personal — and wider reaching — than others. And once a person’s genetic data is compromised, he notes, it is compromised forever. 

So whether or not you’ve uploaded your genetic information to GEDmatch, you might want to ask Uncle Phil for an additional form of identification before rushing to make up the guest bed. 

“People think of genetic data as being personal — and it is. It’s literally part of their physical identity,” Ney said. “You can change your credit card number, but you can’t change your DNA.”

The team will present its findings at the Network and Distributed System Security Symposium (NDSS 2020) in San Diego, California in February.

To learn more, read the UW News release here and an FAQ on security and privacy issues associated with genetic genealogy services here. Also check out related coverage by MIT Technology Review, OneZero, ZDNet, GeekWire, McClatchy, and Newsweek.

New tools to minimize risks in shared, augmented-reality environments

(Cross-posted from UW News, by Sarah McQuate)

A person holding up an iPad that shows a digital world over the real world.

For now, augmented reality remains mostly a solo activity, but soon people might be using the technology in groups for collaborating on work or creative projects.

A few summers ago, throngs of people began using the Pokémon Go app, the first mass-market augmented reality game, to collect virtual creatures hiding in the physical world.

For now, AR remains mostly a solo activity, but soon people might be using the technology for a variety of group activities, such as playing multi-user games or collaborating on work or creative projects. But how can developers guard against bad actors who try to hijack these experiences, and prevent privacy breaches in environments that span digital and physical space?

University of Washington security researchers have developed ShareAR, a toolkit that lets app developers build in collaborative and interactive features without sacrificing their users’ privacy and security. The researchers presented their findings Aug. 14 at the USENIX Security Symposium in Santa Clara, California.

“A key role for computer security and privacy research is to anticipate and address future risks in emerging technologies,” said co-author Franziska Roesner, an assistant professor in the Paul G. Allen School of Computer Science & Engineering. “It is becoming clear that multi-user AR has a lot of potential, but there has not been a systematic approach to addressing the possible security and privacy issues that will arise.”

Sharing virtual objects in AR is in some ways like sharing files on a cloud-based platform like Google Drive — but there’s a big difference.

“AR content isn’t confined to a screen like a Google Doc is. It’s embedded into the physical world you see around you,” said first author Kimberly Ruth, a UW undergraduate student in the Allen School. “That means there are security and privacy considerations that are unique to AR.”

For example, people could potentially add inappropriate virtual images to physical public parks, scrawl offensive virtual messages on places of worship or even place a virtual “kick me” sign on an unsuspecting user’s back.

“We wanted to think about how the technology should respond when a person tries to harass or spy on others, or tries to steal or vandalize other users’ AR content,” Ruth said. “But we also don’t want to shut down the positive aspects of being able to share content using AR technologies, and we don’t want to force developers to choose between functionality and security.”

To address these concerns, the team created a prototype toolkit, ShareAR, for the Microsoft HoloLens. ShareAR helps applications create, share and keep track of objects that users share with each other.

Another potential issue with multi-user AR is that developers need a way to signal the physical location of someone’s private virtual content to keep other users from accidentally standing in between that person and their work — like standing between someone and the TV. So the team developed “ghost objects” for ShareAR.

“A ghost object serves as a placeholder for another virtual object. It has the same physical location and rough 3D bulk as the object it stands in for, but it doesn’t show any of the sensitive information that the original object contains,” Ruth said. “The benefit of this approach over putting up a virtual wall is that, if I’m interacting with a virtual private messaging window, another person in the room can’t sneak up behind me and peer over my shoulder to see what I’m typing — they always see the same placeholder from any angle.”
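
ShareAR itself is a toolkit for the HoloLens rather than a Python library, but the ghost-object idea can be sketched independently of any platform: when an object is private, other users are handed only its pose and rough bounds, never its contents. All names in the sketch below are hypothetical.

```python
# Conceptual sketch of the "ghost object" idea; not the ShareAR API.
from dataclasses import dataclass

@dataclass
class ARObject:
    owner: str
    position: tuple      # world-space position
    bounds: tuple        # rough 3D extent (w, h, d)
    content: str         # sensitive payload, e.g. message text
    shared: bool = False

def view_for(obj: ARObject, viewer: str) -> ARObject:
    """Return what a given user is allowed to render: owners (or users the
    object is shared with) see the real object; everyone else sees a ghost
    with the same location and bulk but no content."""
    if viewer == obj.owner or obj.shared:
        return obj
    return ARObject(owner=obj.owner, position=obj.position,
                    bounds=obj.bounds, content="", shared=False)

note = ARObject("alice", (1.0, 1.5, 2.0), (0.4, 0.3, 0.01),
                "private message draft")
print(view_for(note, "bob"))   # Bob sees only an empty placeholder
```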

The team tested ShareAR with three case study apps. Creating objects and changing permission settings within the apps were the most computationally expensive actions. But even when the researchers stress-tested the system with large numbers of users and shared objects, ShareAR took no longer than 5 milliseconds to complete a task. In most cases, it took less than 1 millisecond.

Three example case study apps, one showing virtual blocks over a living room, one showing virtual notes over the living room and one showing red paintballs over the living room.

The team tested ShareAR with three case study apps: Cubist Art (top panel), which lets users create and share virtual artwork with each other; Doc Edit (bottom left panel), which lets users create virtual notes or lists they can share or keep private; and Paintball (bottom right panel), which lets users play paintball with virtual paint. In the Doc Edit app, the semi-transparent gray box in the top left corner represents a “ghost object,” or a document that another user wishes to remain private. Ruth et al./USENIX Security Symposium

Developers can now download ShareAR to use for their own HoloLens apps.

“We’ll be very interested in hearing feedback from developers on what’s working well for them and what they’d like to see improved,” Ruth said. “We believe that engaging with technology builders while AR is still in development is the key to tackling these security and privacy challenges before they become widespread.”

Tadayoshi Kohno, a professor in the Allen School, is also a co-author on this paper. This research was funded by the National Science Foundation and the Washington Research Foundation.

Learn more about the UW Security & Privacy Lab and its role in the space of computer security and privacy for augmented reality.

###

For more information, contact Roesner at franzi@cs.washington.edu, Ruth at kcr32@cs.washington.edu and Kohno at yoshi@cs.washington.edu.

Grant numbers: CNS-1513584, CNS-1565252, CNS-1651230

Allen School and AI2 researchers unveil Grover, a new tool for fighting fake news in the age of AI

(Cross-posted from Allen School News.)

Sample fake news headline with drawing of Grover and "Fake News" speech bubble
What makes Grover so effective at spotting fake news is the fact that it was trained to generate fake news itself.

When we hear the term “fake news,” more often than not it refers to false narratives written by people to distort the truth and poison the public discourse. But new developments in natural language generation have raised the prospect of a new potential threat: neural fake news. Generated by artificial intelligence and capable of adopting the particular language and tone of popular publications, this brand of fake news could pose an even greater problem for society due to its ability to emulate legitimate news sources at a massive scale. To fight the emerging threat of fake news authored by AI, a team of researchers at the Allen School and Allen Institute for Artificial Intelligence (AI2) developed Grover, a new model for detecting neural fake news more reliably than existing technologies can.

Until now, the best discriminators could correctly distinguish between real, human-generated news content and AI-generated fake news 73% of the time; using Grover, the rate of accuracy rises to 92%. What makes Grover so effective at spotting fake content is that it learned to be very good at producing that content itself. Given a sample headline, Grover can generate an entire news article written in the style of a legitimate news outlet. In an experiment, the researchers found that the system can also generate propaganda stories in such a way that readers rated them more trustworthy than the original, human-generated versions.

“Our work on Grover demonstrates that the best models for detecting disinformation are the best models at generating it,” explained Yejin Choi, a professor in the Allen School’s Natural Language Processing group and a researcher at AI2. “The fact that participants in our study found Grover’s fake news stories to be more trustworthy than the ones written by their fellow humans illustrates how far natural language generation has evolved — and why we need to try and get ahead of this threat.”

Choi and her collaborators — Allen School Ph.D. students Rowan Zellers, Ari Holtzman, and Hannah Rashkin; postdoctoral researcher Yonatan Bisk; professor and AI2 researcher Ali Farhadi; and professor Franziska Roesner — describe their results in detail in a paper recently published on the preprint site arXiv.org. Although they show that Grover is capable of emulating the style of a particular outlet and even writer — for example, one of the Grover-generated fake news pieces included in the paper is modeled on the writing of columnist Paul Krugman of The New York Times — the researchers point out that even the best examples of neural fake news are still based on learned style and tone, rather than a true understanding of language and the world. As a result, that Krugman piece and others like it will contain evidence of the true source of the content.

“Despite how fluid the writing may appear, articles written by Grover and other neural language generators contain unique artifacts or quirks of language that give away their machine origin,” explained Zellers, lead author of the paper. “It’s akin to a signature or watermark left behind by neural text generators. Grover knows to look for these artifacts, which is what makes it so effective at picking out the stories that were created by AI.”
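
Grover's released discriminator is a fine-tuned classifier built on top of the generator itself and is not reproduced here. As a loose illustration of the underlying intuition, that a strong language model can score how statistically "machine-like" a piece of text looks, the sketch below computes perplexity with an off-the-shelf GPT-2 model from the Hugging Face transformers library; it is a generic stand-in, not Grover's actual detection method.

```python
# Generic illustration of scoring text with a generative language model.
# Grover's real discriminator is a fine-tuned classifier built on the
# generator; this sketch only shows the perplexity-based intuition.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under the model; loosely, lower values mean
    the text looks more 'predictable' to the language model."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return math.exp(loss.item())

suspect = "Scientists confirmed today that the moon is made of cheese."
print(f"perplexity: {perplexity(suspect):.1f}")
# A deployed detector would compare scores like this (and richer learned
# features) against thresholds derived from known human- and machine-written text.
```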

The research team, top row from left: Rowan Zellers, Ari Holtzman, Hannah Rashkin, and Yonatan Bisk. Bottom row from left: Ali Farhadi, Franziska Roesner, and Yejin Choi.

Although Grover will naturally recognize its own quirks, which explains the high success rate in the team’s study, the ability to detect evidence of AI-generated fake news is not limited to its own content. Grover is better at detecting fake news written by both humans and machines than any prior system, in large part because it is more advanced than any previous neural language model. The researchers believe that their work on Grover is only the first step in developing effective defenses against the machine-learning equivalent of a supermarket tabloid. They plan to release two of their models, Grover-Base and Grover-Large, to the public, and to make the Grover-Mega model and accompanying dataset available to researchers upon request. By sharing the results of this work, the team aims to encourage further discussion and technical innovation around how to counteract neural fake news.

According to Roesner, who co-directs the Allen School’s Security and Privacy Research Laboratory, the team’s approach is a common one in the computer security field: try to determine what adversaries might do and the capabilities they may have, and then develop and test effective defenses. “With recent advances in AI, we should assume that adversaries will develop and use these new capabilities — if they aren’t already,” she explained. “Neural fake news will only get easier and cheaper and better regardless of whether we study it, so Grover is an important step forward in enabling the broader research community to fully understand the threat and to defend the integrity of our public discourse.”

Roesner, Choi and their colleagues believe that models like Grover should be put to practical use in the fight against fake news. Just as sites like YouTube rely on deep neural networks to scan videos and flag those containing illicit content, a platform could employ an ensemble of deep generative models like Grover to analyze text and flag articles that appear to be AI-generated disinformation.

“People want to be able to trust their own eyes when it comes to determining who and what to believe, but it is getting more and more difficult to separate real from fake when it comes to the content we consume online,” Choi said. “As AI becomes more sophisticated, a tool like Grover could be the best defense we have against a proliferation of AI-generated fake news.”

Read the arXiv paper here, and see coverage by TechCrunch, GeekWire, New Scientist, The New York Times, ZDNet, and Futurism. Also check out a previous project by members of the Grover team analyzing the language of fake news and political fact checking here.

Groundbreaking study that served as the foundation for securing implantable medical devices earns IEEE Test of Time Award

(Cross-posted from Allen School News.)

Members of the team that examined the privacy and security risks of implantable medical devices in 2008. UW News Office

In March 2008, Allen School researchers and their collaborators at the University of Massachusetts Amherst and Harvard Medical School revealed the results of a study examining the privacy and security risks of a new generation of implantable medical devices. Equipped with embedded computers and wireless technology, new models of implantable cardiac defibrillators, pacemakers, and other devices were designed to make it easier for physicians to automatically monitor and treat patients’ chronic health conditions while reducing the need for more invasive — and more costly — interventions. But as the researchers discovered, the same capabilities intended to improve patient care might also ease the way for adversarial actions that could compromise patient privacy and safety, including the disclosure of sensitive personal information, denial of service, and unauthorized reprogramming of the device itself.

A paper detailing their findings, which earned the Best Paper Award at the IEEE’s 2008 Symposium on Security and Privacy, sent shock waves through the medical community and opened up an entirely new line of computer security research. Now, just over 10 years later, the team has been recognized for its groundbreaking contribution by the IEEE Computer Society Technical Committee on Security and Privacy with a 2019 Test of Time Award.

“We hope our research is a wake-up call for the industry,” professor Tadayoshi Kohno, co-director of the Allen School’s Security and Privacy Research Laboratory, told UW News when the paper was initially published. “In the 1970s, the Bionic Woman was a dream, but modern technology is making it a reality. People will have sophisticated computers with wireless capabilities in their bodies. Our goal is to make sure those devices are secure, private, safe and effective.”

Chest x-ray showing an implanted cardioverter defibrillator (ICD).

To that end, Kohno and Allen School graduate student Daniel Halperin (Ph.D., ‘12) worked with professor Kevin Fu, then a faculty member at the University of Massachusetts Amherst, and Fu’s students Thomas Heydt-Benjamin, Shane Clark, Benessa Defend, Will Morgan, and Ben Ransford — who would go on to complete a postdoc at the Allen School — in an attempt to expose potential vulnerabilities and offer solutions. The computer scientists teamed up with cardiologist Dr. William Maisel, then-director of the Medical Device Safety Institute at Beth Israel Deaconess Medical Center and a professor at Harvard Medical School. As far as the team was aware, the collaboration represented the first time that anyone had examined implantable medical device technology through the lens of computer security. Their test case was a commercially available implantable cardioverter defibrillator (ICD) that incorporated a programmable pacemaker capable of short-range wireless communication.

The researchers first partially reverse-engineered the device’s wireless communications protocol with the aid of an oscilloscope and a commodity software radio. They then commenced a series of computer security experiments targeting information stored and transmitted by the device as well as the device itself. With the aid of their software radio, the team found that they were able to compromise the security and privacy of the ICD in a variety of ways. As their goal was to understand and address potential risks without enabling an unscrupulous actor to use their work as a guide, they omitted details from their paper that would facilitate such actions outside of a laboratory setting.

On a basic level, they discovered that they could trigger identification of the specific device, including its model and serial number. This, in turn, yielded the ability to elicit more detailed data about a hypothetical patient, including name, diagnosis, and other sensitive details stored on the device. From there, the researchers tested a number of scenarios in which they sought to actively interfere with the device, demonstrating the ability to change a patient’s name, reset the clock, run down the battery, and disable therapies that the device was programmed to deliver. They were also able to bypass the safeguards the manufacturer had put in place to prevent the accidental issuance of electrical shocks to the patient’s heart, meaning that, with the ICD’s automatic therapies turned off, an adversary could potentially issue commands to deliver shocks capable of inducing fibrillation in a hypothetical patient.

Equipment used in the 2008 study to test the security of a commercially available ICD.

The team set out to not only identify potential flaws in implantable medical technology, but also to offer practical solutions that would empower manufacturers, providers, and patients to mitigate the potential risks. The researchers developed prototypes for three categories of defenses that could ostensibly be refined and built into future ICD models. They dubbed these “zero-power defenses,” meaning they did not need to draw power from the device’s battery to function but instead harvested energy from external radio frequency (RF) signals. The first, zero-power notification, alerts the patient with an audible warning whenever a security-sensitive event occurs. To prevent such events in the first place, the researchers also proposed a mechanism for zero-power authentication, which would enable the ICD to verify that it is communicating with an authorized programmer. The researchers complemented these defenses with a third offering, zero-power sensible key exchange, which enables the patient to physically sense a key exchange to combat unauthorized eavesdropping on their implanted device.
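
The prototypes described in the paper run on custom hardware that harvests power from RF signals, which a software snippet cannot capture; what can be sketched is the authentication idea itself, a standard challenge-response exchange over a shared secret. The code below is a conceptual illustration with hypothetical roles, not the paper's design.

```python
# Conceptual challenge-response sketch of the "zero-power authentication"
# idea: the implant responds only to programmers that can prove knowledge
# of a shared secret. The real design runs on harvested RF energy and is
# far more constrained than this illustration.
import hashlib
import hmac
import secrets

SHARED_KEY = secrets.token_bytes(16)   # provisioned to implant and programmer

def implant_issue_challenge() -> bytes:
    return secrets.token_bytes(16)     # fresh nonce per session

def programmer_respond(challenge: bytes, key: bytes) -> bytes:
    return hmac.new(key, challenge, hashlib.sha256).digest()

def implant_verify(challenge: bytes, response: bytes, key: bytes) -> bool:
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = implant_issue_challenge()
response = programmer_respond(challenge, SHARED_KEY)
print(implant_verify(challenge, response, SHARED_KEY))   # True only for an authorized programmer
```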

Upon releasing the results of their work, the team took great pains to point out that their goal was to aid the industry in getting ahead of potential problems; at the time of the study’s release, there had been no reported cases of a patient’s implanted device having been compromised in a security incident. But, as Kohno reflects today, the key to computer security research is anticipating the unintended consequences of new technologies. It is an area in which the University of Washington has often led the way, thanks in part to Kohno and faculty colleague Franziska Roesner, co-director of the Security and Privacy Research Lab. Other areas in which the Allen School team has made important contributions to understanding and mitigating privacy and security risks include motor vehicles, robotics, augmented and virtual reality, DNA sequencing software, and mobile advertising — to name only a few. Those projects often represent a rich vein of interdisciplinary collaboration involving multiple labs and institutions, which has been a hallmark of the lab’s approach.

Professor Tadayoshi Kohno (left) and Daniel Halperin

“This project is an example of the types of work that we do here at UW. Our lab tries to keep its finger on the pulse of emerging and future technologies and conducts rigorous, scientific studies of the security and privacy risks inherent in those technologies before adversaries manifest,” Kohno explained. “In doing so, our work provides a foundation for securing technologies of critical interest and value to society. Our medical device security work is an example of that. To my knowledge, it was the first work to experimentally analyze the computer security properties of a real wireless implantable medical device, and it served as a foundation for the entire medical device security field.”

The research team was formally recognized during the 40th IEEE Symposium on Security and Privacy earlier this week in San Francisco, California. Read the original research paper here, and the 2008 UW News release here. Also see this related story from the University of Michigan, where Fu is currently a faculty member, for more on the Test of Time recognition.

Congratulations to Yoshi, Dan, Ben, and the entire team!

2019 UW Undergraduate Research Symposium

Security Lab undergraduate researchers Mitali Palekar and Kimberly Ruth both presented today at the UW Undergraduate Research Symposium. Mitali will present her published work on “Analysis of the Susceptibility of Smart Home Programming Interfaces to End User Error” again next week at the IEEE Workshop on the Internet of Safe Things (SafeThings 2019), and Kimberly will present her published work on “Secure Multi-User Content Sharing for Augmented Reality Applications” again at the 28th USENIX Security Symposium (USENIX Security 2019) in August. Congratulations to Mitali and Kimberly on the great work!

Mitali Palekar
Kimberly Ruth

Christine Chen Wins NSF Fellowship

Congratulations to Security Lab PhD student Christine Chen for being awarded an NSF Graduate Research Fellowship!

Quoting from the Allen School News article on the topic: Chen’s research interests lie at the intersection of technology and crime, physical safety, and at-risk populations. Her recent work has focused on technology and survivors of human trafficking. Chen just wrapped up a study in which she interviewed victim service providers (VSPs) to expose how technology can be utilized to re-victimize survivors of trafficking and understand how VSPs mitigate these risks as they interact with and support survivors. As a result of this work, Chen and her collaborators propose privacy and security guidelines for technologists who wish to partner with VSPs to support and empower trafficking survivors. The study will be presented at the upcoming USENIX Security Symposium in August.
