Summer Project Presentation: Henry Bowman

Visiting Cal Poly undergraduate Henry Bowman gave the final presentation for his summer project at today’s Security Lab meeting, before returning to Cal Poly to finish his bachelor’s degree.

Henry’s work focused on problems related to augmented reality, computer security, and privacy. As part of his summer project, Henry contributed to the Security Lab’s ShareAR project. ShareAR, or the Secure and Private AR Sharing Toolkit, is a project developed by Security Lab member Kimberly Ruth with faculty members Franzi and Yoshi that enables the secure and private sharing of holographic HoloLens objects with other users. Allen School undergraduate student AJ Kruse also contributed to the project this summer. To learn more about the project, see Kimberly’s 2019 USENIX Security paper and talk.

Great job Henry, and great talk!

New tools to minimize risks in shared, augmented-reality environments

(Cross-posted from UW News, by Sarah McQuate)

A person holding up an iPad that shows a digital world over the real world.

For now, augmented reality remains mostly a solo activity, but soon people might be using the technology in groups for collaborating on work or creative projects.

A few summers ago, throngs of people began using the Pokemon Go app, the first mass-market augmented reality game, to collect virtual creatures hiding in the physical world.

For now, AR remains mostly a solo activity, but soon people might be using the technology for a variety of group activities, such as playing multi-user games or collaborating on work or creative projects. But how can developers guard against bad actors who try to hijack these experiences, and prevent privacy breaches in environments that span digital and physical space?

University of Washington security researchers have developed ShareAR, a toolkit that lets app developers build in collaborative and interactive features without sacrificing their users’ privacy and security. The researchers presented their findings Aug. 14 at the USENIX Security Symposium in Santa Clara, California.

“A key role for computer security and privacy research is to anticipate and address future risks in emerging technologies,” said co-author Franziska Roesner, an assistant professor in the Paul G. Allen School of Computer Science & Engineering. “It is becoming clear that multi-user AR has a lot of potential, but there has not been a systematic approach to addressing the possible security and privacy issues that will arise.”

Sharing virtual objects in AR is in some ways like sharing files on a cloud-based platform like Google Drive — but there’s a big difference.

“AR content isn’t confined to a screen like a Google Doc is. It’s embedded into the physical world you see around you,” said first author Kimberly Ruth, a UW undergraduate student in the Allen School. “That means there are security and privacy considerations that are unique to AR.”

For example, people could potentially add inappropriate virtual images to physical public parks, scrawl offensive virtual messages on places of worship or even place a virtual “kick me” sign on an unsuspecting user’s back.

“We wanted to think about how the technology should respond when a person tries to harass or spy on others, or tries to steal or vandalize other users’ AR content,” Ruth said. “But we also don’t want to shut down the positive aspects of being able to share content using AR technologies, and we don’t want to force developers to choose between functionality and security.”

To address these concerns, the team created a prototype toolkit, ShareAR, for the Microsoft HoloLens. ShareAR helps applications create, share and keep track of objects that users share with each other.
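
The paper describes ShareAR at a higher level of abstraction than code; as a rough illustration of the kind of interface such a toolkit might expose, the sketch below uses hypothetical Python names for the core pieces (the real toolkit targets the Microsoft HoloLens, so none of these identifiers are ShareAR’s actual API).

```python
# Hypothetical sketch of a ShareAR-style sharing interface. All names here are
# illustrative assumptions; the real toolkit runs on the Microsoft HoloLens.

from dataclasses import dataclass
from enum import Enum, auto


class Permission(Enum):
    PRIVATE = auto()      # only the owner can see or edit the object
    READ_ONLY = auto()    # other users can see the object but not modify it
    SHARED_EDIT = auto()  # other users can both see and modify the object


@dataclass
class SharedObject:
    object_id: str
    owner: str
    pose: tuple           # position and orientation in the shared space
    content: bytes        # the hologram payload (mesh, text, image, ...)
    permission: Permission = Permission.PRIVATE


class SharingSession:
    """Creates, shares, and keeps track of objects that users share."""

    def __init__(self) -> None:
        self.objects: dict[str, SharedObject] = {}

    def create(self, obj: SharedObject) -> None:
        self.objects[obj.object_id] = obj

    def set_permission(self, object_id: str, permission: Permission) -> None:
        self.objects[object_id].permission = permission

    def visible_to(self, user: str) -> list[SharedObject]:
        # An owner always sees their own objects; everyone else sees only
        # objects that have been shared in some form.
        return [
            o for o in self.objects.values()
            if o.owner == user or o.permission is not Permission.PRIVATE
        ]
```

The permission model here is deliberately coarse; it is only meant to convey the shape of the create/share/track problem the toolkit addresses.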

Another potential issue with multi-user AR is that developers need a way to signal the physical location of someone’s private virtual content to keep other users from accidentally standing in between that person and their work — like standing between someone and the TV. So the team developed “ghost objects” for ShareAR.

“A ghost object serves as a placeholder for another virtual object. It has the same physical location and rough 3D bulk as the object it stands in for, but it doesn’t show any of the sensitive information that the original object contains,” Ruth said. “The benefit of this approach over putting up a virtual wall is that, if I’m interacting with a virtual private messaging window, another person in the room can’t sneak up behind me and peer over my shoulder to see what I’m typing — they always see the same placeholder from any angle.”
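
Continuing the hypothetical sketch above, a ghost object could be derived from a private object by keeping its pose and rough bounds while dropping the sensitive payload. This is an assumption-laden illustration, not the toolkit’s actual implementation.

```python
# Hypothetical continuation of the sketch above: a "ghost object" keeps the
# original's physical placement (and, in a fuller sketch, its rough 3D bulk)
# but carries none of its sensitive content.

def make_ghost(obj: SharedObject) -> SharedObject:
    return SharedObject(
        object_id=obj.object_id + ":ghost",
        owner=obj.owner,
        pose=obj.pose,                    # same location, so others can avoid it
        content=b"",                      # the private payload is never shared
        permission=Permission.READ_ONLY,  # everyone else sees only this stand-in
    )
```

In such a design, other users would be served the ghost from every angle, while only the owner renders the original content.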

The team tested ShareAR with three case study apps. Creating objects and changing permission settings within the apps were the most computationally expensive actions. But even when the researchers stress-tested the system with large numbers of users and shared objects, ShareAR took no longer than 5 milliseconds to complete a task. In most cases, it took less than 1 millisecond.

Three example case study apps, one showing virtual blocks over a living room, one showing virtual notes over the living room and one showing red paintballs over the living room.

The team tested ShareAR with three case study apps: Cubist Art (top panel), which lets users create and share virtual artwork with each other; Doc Edit (bottom left panel), which lets users create virtual notes or lists they can share or keep private; and Paintball (bottom right panel), which lets users play paintball with virtual paint. In the Doc Edit app, the semi-transparent gray box in the top left corner represents a “ghost object,” or a document that another user wishes to remain private. Ruth et al./USENIX Security Symposium

Developers can now download ShareAR to use for their own HoloLens apps.

“We’ll be very interested in hearing feedback from developers on what’s working well for them and what they’d like to see improved,” Ruth said. “We believe that engaging with technology builders while AR is still in development is the key to tackling these security and privacy challenges before they become widespread.”

Tadayoshi Kohno, a professor in the Allen School, is also a co-author on this paper. This research was funded by the National Science Foundation and the Washington Research Foundation.

Learn more about the UW Security & Privacy Lab and its role in the space of computer security and privacy for augmented reality.

###

For more information, contact Roesner at franzi@cs.washington.edu, Ruth at kcr32@cs.washington.edu and Kohno at yoshi@cs.washington.edu.

Grant numbers: CNS-1513584, CNS-1565252, CNS-1651230

Security Lab at USENIX Security 2019

The UW Security and Privacy Lab, and the lab’s friends and alumni, were out in force at USENIX Security 2019. On Wednesday, current UW Security and Privacy Lab members presented three papers in the same session.

Below are photos from each of these talks, as well as from UW Systems Lab alumnus Charlie Reis’s talk (with alumnus Alex Moshchuk) on “Site Isolation: Process Separation for Web Sites within the Browser” and Ivan Evtimov’s talk on a new smart home security lab (URL forthcoming).

Distinguished Paper Award @ USENIX Security 2019

Congratulations to UW Security and Privacy Lab member Christine Chen, advised by Prof. Franzi Roesner, and collaborator (and UW alumna) Nicki Dell for winning a Distinguished Paper Award at USENIX Security 2019! USENIX Security is one of the top peer-reviewed conferences in computer security, and this is an incredible honor. The authors are also extremely grateful to the people who participated in their study and for the opportunity to share those voices with the computer security and privacy community.

Read their paper on “Computer Security and Privacy in the Interactions Between Victim Service Providers and Human Trafficking Survivors” here.

In London? See our work on Adversarial Machine Learning at the Science Museum

In 2018, UW Security and Privacy Lab members Ivan Evtimov and Earlence Fernandes (now faculty at Wisconsin), along with UW Prof. Yoshi Kohno and researchers from Samsung Research North America, Stanford University, Stony Brook University, University of California at Berkeley, and University of Michigan, wrote a now widely cited paper on fooling computer vision classifiers and, in doing so, demonstrated the ability to fool a machine learning system into misidentifying a stop sign as, say, a speed limit sign.

The Science Museum in London asked to include the UW Stop Sign in their exhibit titled “Driverless: Who is in Control?”. If you’re in London, please stop by and check it out!

Congratulations to all the 2019 Security and Privacy Lab Graduates!!

Congratulations to all UW Allen School Security and Privacy Research Lab PhD Graduates — Dr. Camille Cobb, Dr. Kiron Lebeck, Dr. Peter Ney, and Dr. Alex Takakuwa! Congratulations also to graduating Security Lab undergraduate Mitali Palekar, who won one of the Allen School’s few Outstanding Senior Awards. What an amazing job, everyone!

Photos from before and after PhD hooding below. Post-hooding photo order, left to right: Prof. Franzi Roesner, Dr. Kiron Lebeck, Dr. Alex Takakuwa, Dr. Camille Cobb, Dr. Peter Ney, and Prof. Yoshi Kohno.

Congratulations everyone!! And congratulations to all other graduates as well!!

Introducing Dr. Alex Takakuwa

Congratulations to Dr. Alex Takakuwa for successfully defending his PhD dissertation today! Alex’s PhD work focuses on addressing several key open challenges in two-factor authentication, and is the result of a significant collaboration with Dr. Alexei Czeskis from Google. Alex will continue at UW as a postdoc, incubating a creative new technology idea. Congratulations, Dr. Takakuwa!

Allen School and AI2 researchers unveil Grover, a new tool for fighting fake news in the age of AI

(Cross-posted from Allen School News.)

Sample fake news headline with drawing of Grover and "Fake News" speech bubble
What makes Grover so effective at spotting fake news is the fact that it was trained to generate fake news itself.

When we hear the term “fake news,” more often than not it refers to false narratives written by people to distort the truth and poison the public discourse. But new developments in natural language generation have raised the prospect of a new threat: neural fake news. Generated by artificial intelligence and capable of adopting the particular language and tone of popular publications, this brand of fake news could pose an even greater problem for society due to its ability to emulate legitimate news sources at a massive scale. To fight the emerging threat of fake news authored by AI, a team of researchers at the Allen School and Allen Institute for Artificial Intelligence (AI2) developed Grover, a new model for detecting neural fake news more reliably than existing technologies can.

Until now, the best discriminators could correctly distinguish between real, human-generated news content and AI-generated fake news 73% of the time; using Grover, the rate of accuracy rises to 92%. What makes Grover so effective at spotting fake content is that it learned to be very good at producing that content itself. Given a sample headline, Grover can generate an entire news article written in the style of a legitimate news outlet. In an experiment, the researchers found that the system can also generate propaganda stories in such a way that readers rated them more trustworthy than the original, human-generated versions.

“Our work on Grover demonstrates that the best models for detecting disinformation are the best models at generating it,” explained Yejin Choi, a professor in the Allen School’s Natural Language Processing group and a researcher at AI2. “The fact that participants in our study found Grover’s fake news stories to be more trustworthy than the ones written by their fellow humans illustrates how far natural language generation has evolved — and why we need to try and get ahead of this threat.”

Choi and her collaborators — Allen School Ph.D. students Rowan Zellers, Ari Holtzman, and Hannah Rashkin; postdoctoral researcher Yonatan Bisk; professor and AI2 researcher Ali Farhadi; and professor Franziska Roesner — describe their results in detail in a paper recently published on the preprint site arXiv.org. Although they show that Grover is capable of emulating the style of a particular outlet and even writer — for example, one of the Grover-generated fake news pieces included in the paper is modeled on the writing of columnist Paul Krugman of The New York Times — the researchers point out that even the best examples of neural fake news are still based on learned style and tone, rather than a true understanding of language and the world. So, that Krugman piece and others like it will contain evidence of the true source of the content.

“Despite how fluid the writing may appear, articles written by Grover and other neural language generators contain unique artifacts or quirks of language that give away their machine origin,” explained Zellers, lead author of the paper. “It’s akin to a signature or watermark left behind by neural text generators. Grover knows to look for these artifacts, which is what makes it so effective at picking out the stories that were created by AI.”

The research team, top row from left: Rowan Zellers, Ari Holtzman, Hannah Rashkin, and Yonatan Bisk. Bottom row from left: Ali Farhadi, Franziska Roesner, and Yejin Choi.

Although Grover will naturally recognize its own quirks, which explains the high success rate in the team’s study, its ability to detect evidence of AI-generated fake news is not limited to its own content. Grover is better at detecting fake news written by both humans and machines than any system that came before it, in large part because it is more advanced than any previous neural language model. The researchers believe that their work on Grover is only the first step in developing effective defenses against the machine-learning equivalent of a supermarket tabloid. They plan to release two of their models, Grover-Base and Grover-Large, to the public, and to make the Grover-Mega model and accompanying dataset available to researchers upon request. By sharing the results of this work, the team aims to encourage further discussion and technical innovation around how to counteract neural fake news.

According to Roesner, who co-directs the Allen School’s Security and Privacy Research Laboratory, the team’s approach is a common one in the computer security field: try to determine what adversaries might do and the capabilities they may have, and then develop and test effective defenses. “With recent advances in AI, we should assume that adversaries will develop and use these new capabilities — if they aren’t already,” she explained. “Neural fake news will only get easier and cheaper and better regardless of whether we study it, so Grover is an important step forward in enabling the broader research community to fully understand the threat and to defend the integrity of our public discourse.”

Roesner, Choi and their colleagues believe that models like Grover should be put to practical use in the fight against fake news. Just as sites like YouTube rely on deep neural networks to scan videos and flag those containing illicit content, a platform could employ an ensemble of deep generative models like Grover to analyze text and flag articles that appear to be AI-generated disinformation.
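
As a minimal sketch of what that kind of platform-side filtering might look like, the Python below assumes a set of detector functions standing in for models like Grover; the function names and the threshold are illustrative assumptions, not part of any released tool.

```python
# Minimal sketch of a platform-side filter for likely machine-generated text.
# Each detector is assumed to return an estimated probability that an article
# is machine-generated; averaging several detectors mimics the "ensemble"
# idea described above.

from typing import Callable


def flag_suspected_neural_fake_news(
    articles: list[str],
    detectors: list[Callable[[str], float]],
    threshold: float = 0.9,
) -> list[str]:
    """Return the articles whose average detector score crosses the threshold."""
    flagged = []
    for article in articles:
        avg_score = sum(d(article) for d in detectors) / len(detectors)
        if avg_score >= threshold:
            flagged.append(article)
    return flagged
```

A platform could run a filter like this before surfacing articles to readers, holding flagged items for human review rather than publishing them outright.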

“People want to be able to trust their own eyes when it comes to determining who and what to believe, but it is getting more and more difficult to separate real from fake when it comes to the content we consume online,” Choi said. “As AI becomes more sophisticated, a tool like Grover could be the best defense we have against a proliferation of AI-generated fake news.”

Read the arXiv paper here, and see coverage by TechCrunch, GeekWire, New Scientist, The New York Times, ZDNet, and Futurism. Also check out a previous project by members of the Grover team analyzing the language of fake news and political fact checking here.
