The goal of the Allen School Women's Research Day event is to celebrate women and nonbinary people in research at the Allen School and in the greater Seattle area with a day of research talks, panels, and posters. We will have short research talks, a poster session of student presentations, and a Q&A session with the director of the Allen School. After the event, we will also release recordings of it, along with other prerecorded research talks, on the Allen School YouTube channel!
| Time | Session |
| --- | --- |
| 1:00 - 1:45pm | Welcome & Virtual Poster Session |
| 2:00 - 2:30pm | Q&A with Magda (Magda Balazinska, Director of the Allen School) |
| 2:30 - 3:00pm | Susan Eggers, Professor Emeritus |
The Verbal Autopsy (VA) task involves gathering information about a deceased individual in order to determine their cause of death. VA has previously been explored with statistical methods, which do not take advantage of the textual aspects of VA surveys. Pre-training methods in NLP, such as BERT and RoBERTa, typically require large amounts of text data. Recent models, such as VAMPIRE, are not only suitable for smaller datasets but also operate with fewer assumptions (a bag-of-words model) and allow for semi-supervised training. In this work, we take advantage of the text and other data from VA surveys by using a lightweight pre-training framework to improve performance on the task. We aim to learn from and leverage the strengths of both statistical and neural methods while understanding how pre-training methods translate to a low-resource task with less data and more variance. Our goal is to develop a model that can realistically be used by field workers performing Verbal Autopsy surveys.
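As a minimal illustration of the bag-of-words statistical baseline the abstract contrasts with neural pre-training (this is not the authors' system; the narratives and cause labels below are invented), here is a toy multinomial naive Bayes cause-of-death classifier over word counts from VA-style free-text responses:

```python
from collections import Counter, defaultdict
import math

def train_nb(docs, labels):
    """Fit a multinomial naive Bayes model on bag-of-words counts."""
    label_counts = Counter(labels)
    word_counts = defaultdict(Counter)  # label -> word -> count
    vocab = set()
    for doc, label in zip(docs, labels):
        for word in doc.split():
            word_counts[label][word] += 1
            vocab.add(word)
    return label_counts, word_counts, vocab

def predict_nb(model, doc):
    """Return the label with the highest smoothed log-posterior."""
    label_counts, word_counts, vocab = model
    total = sum(label_counts.values())
    best, best_lp = None, -math.inf
    for label, n in label_counts.items():
        lp = math.log(n / total)  # log prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in doc.split():
            if word in vocab:  # skip out-of-vocabulary words
                lp += math.log((word_counts[label][word] + 1) / denom)  # Laplace smoothing
        if lp > best_lp:
            best, best_lp = label, lp
    return best
```

Such a model ignores word order entirely, which is exactly the limitation that motivates bringing text-aware pre-training into the VA setting.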
This work presents our initial efforts toward developing a robotic limb-repositioning system. Our approach combines programming by demonstration and end-user programming in a tele-manipulation system that keeps the user in the loop. The system is based on a general-purpose mobile manipulator and a web interface where a user can select, edit, preview, and execute different repositioning exercises for the selected limb. This approach shows the potential to empower people with mobility impairments to be more involved in an activity of daily living.
Augmented reality (AR) is under rapid development, and recent advances have begun making multi-user AR applications feasible in practice. However, although there is a growing body of literature in the security and privacy community about new risks posed by AR technologies, prior work has not systematically explored the risks that arise when users can augment each other’s reality. Drawing lessons from AR user experience studies, this work formulates a set of security and privacy goals for multi-user AR, proposes a design for a module that meets these goals, and instantiates this design as an application-level developer toolkit for the Microsoft HoloLens. By helping developers protect users from each other, our work takes steps toward allowing AR technologies to securely reach their full potential.
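The core sharing-control idea can be illustrated with a hypothetical sketch (this is not the actual HoloLens toolkit's API; the class and function names are invented): each shared virtual object records its owner and per-user permissions, and another user's augmentation is applied only if the owner granted edit access:

```python
from dataclasses import dataclass, field

@dataclass
class SharedObject:
    """A virtual object one user has shared into a multi-user AR session."""
    owner: str
    content: str
    # Per-user permissions granted by the owner; anyone unlisted gets no access.
    permissions: dict = field(default_factory=dict)  # user -> set of actions

    def can(self, user, action):
        if user == self.owner:
            return True  # owners retain full control of their own objects
        return action in self.permissions.get(user, set())

def apply_edit(obj, user, new_content):
    """Apply another user's augmentation only if edit rights were granted."""
    if not obj.can(user, "edit"):
        raise PermissionError(f"{user} may not edit {obj.owner}'s object")
    obj.content = new_content
```

Enforcing the check in a shared module, rather than trusting each application to do it, is what lets the toolkit protect users from each other by default.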
We incorporate morphology supervision into character language models (CLMs) with a multitask learning objective and show that this addition improves bits-per-character (BPC) performance across 24 languages, even when the morphology data and language modeling data do not overlap. Analyzing the CLMs shows that inflected words benefit more from explicitly modeling morphology than uninflected words, and that morphological supervision improves performance even as the amount of language modeling data grows. We then transfer morphological supervision across languages to improve language modeling performance in the low-resource setting.
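The multitask objective can be sketched as a character-LM cross-entropy plus a weighted morphology-tagging cross-entropy; the weight and the exact loss form here are assumptions for illustration, not the paper's implementation. The last helper shows how a per-character loss in nats converts to the bits-per-character (BPC) metric:

```python
import math

def cross_entropy(probs, target):
    """Negative log-likelihood (in nats) of the target index."""
    return -math.log(probs[target])

def multitask_loss(char_probs, next_char, morph_probs, morph_tag, weight=0.5):
    """Character-LM loss plus a weighted auxiliary morphology-tagging loss."""
    lm_loss = cross_entropy(char_probs, next_char)
    morph_loss = cross_entropy(morph_probs, morph_tag)
    return lm_loss + weight * morph_loss

def bits_per_char(nats):
    """Convert an average per-character loss in nats to BPC."""
    return nats / math.log(2)
```

Because the morphology term is purely auxiliary, it can be dropped at evaluation time, so BPC remains comparable to a plain character LM.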