2023 NSF Fellowship Recipients

Congratulations to two Security Lab PhD students for being selected as National Science Foundation Graduate Research Fellowship Program (GRFP) recipients! Cross-posting the details from Allen School News, written by Jennifer Webster:

Portrait of Rachel Hong

Rachel Hong

Fellowship recipient Rachel Hong is a first-year Ph.D. student. She works with Allen School professors Jamie Morgenstern, who focuses on the social impacts of ML, and Tadayoshi (Yoshi) Kohno, co-director of the Security and Privacy Research Lab.

Combining ML, security and technology policy, Hong explores the behavior of existing ML algorithms in relation to privacy and fairness, as well as how to prevent those algorithms from being misapplied in society. As an undergraduate student, Hong was introduced to the field of algorithmic fairness by building a novel representation learning algorithm on biomedical data to help patients receiving care at a variety of hospitals in both rural and urban settings. Hong seeks to build on that foundation to improve algorithmic fairness by examining demographic biases in facial recognition technology to better understand how modifications to training data can mitigate disparate outcomes.

Portrait of Alexandra Michael

Alexandra Michael

First-year Ph.D. student Alexandra Michael received a fellowship for her work. She is co-advised by Allen School professors David Kohlbrenner in the Security and Privacy Research Lab and Dan Grossman in the Programming Languages and Software Engineering (PLSE) group.

Michael’s research combines her interests in security, programming languages and compilers. Prior to graduate school, Michael was fascinated by how computers could connect people yet put them at risk. Her work focuses on mitigating those risks by leveraging programming languages and security tools to improve the security and privacy of systems and the people who use them. She proposes to build a highly performant, secure and portable low-level language that will act as a target for programs written in unsafe languages.