Network Structure Optimization for Hospital Acquired Infections Control
Mentors: Dr. Bijaya Adhikari, Dr. Sriram Pemmaraju
Domain Expert: Dr. Philip Polgreen
Hospital acquired infections (HAIs) pose a major challenge to modern healthcare systems. Patients who are hospitalized seeking care are, paradoxically, at risk of being exposed to new infections. In fact, the CDC estimates that 1 in 25 hospitalized patients acquires an HAI, and the cost of combatting these infections in the United States is estimated at between 28 and 45 billion dollars per year. Hence, controlling the spread of HAIs is critical. In this project, we aim to tackle the problem of HAI spread by optimizing the underlying contact network through which infections propagate. The mobility patterns of healthcare workers and patients within a hospital determine the contact network. We propose to alter these mobility patterns by modifying the structure of the network (by adding and/or deleting nodes and/or edges) so that pathways of infection flow are eliminated.
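To make the edge-deletion idea concrete, here is a minimal sketch (not the project's actual method) that scores candidate edge removals in a toy contact network by how many nodes remain reachable from an infection source; the network, node names, and the reachability-based objective are illustrative assumptions only.

```python
from collections import deque

def reachable(adj, source):
    """Return the set of nodes reachable from `source` via BFS."""
    seen = {source}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for nbr in adj.get(node, ()):
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return seen

def exposure_after_removal(adj, source, edge):
    """Count nodes still exposed to `source` after deleting one undirected edge."""
    u, v = edge
    pruned = {n: [m for m in nbrs if (n, m) not in {(u, v), (v, u)}]
              for n, nbrs in adj.items()}
    return len(reachable(pruned, source)) - 1  # exclude the source itself

# Toy contact network: one nurse connects three patients (hypothetical).
contacts = {
    "patient_A": ["nurse"],
    "patient_B": ["nurse"],
    "patient_C": ["nurse"],
    "nurse": ["patient_A", "patient_B", "patient_C"],
}

# Pick the single edge whose deletion minimizes exposure to patient_A.
candidates = [("patient_A", "nurse"), ("nurse", "patient_B"), ("nurse", "patient_C")]
best_edge = min(candidates,
                key=lambda e: exposure_after_removal(contacts, "patient_A", e))
```

In this toy example, cutting the source's own contact with the nurse isolates the infection entirely; real instances involve many sources, weighted contacts, and constraints on which modifications are operationally feasible.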
Personalized hearing aids
Mentors: Dr. Octav Chipara
Domain Expert: Dr. Yu-Hsiang Wu
Twenty percent of Americans will be 65 years or older by 2030, of whom 35-50% will report an age-related hearing impairment that is treated primarily with hearing aids. Regular use of hearing aids has been shown to improve communication and avoid the harmful effects of hearing loss, which include an increased risk of social isolation and depression. However, past research has shown that many people do not wear their hearing aids regularly because they are unsatisfied with their real-world performance. A fundamental limitation of existing methods for tuning hearing aids is that they are not tailored to individual needs, which often leads to unsatisfactory performance. We are developing new tuning methods based on the unique needs of each patient. Additionally, we are developing signal processing algorithms that take advantage of edge and cloud services to improve hearing aid operation.
Physical Activity in Virtual Reality for People with Visual Impairments
Mentor: Dr. Kyle Rector
Virtual, Augmented, and Mixed Reality are a growing source of gaming, exploration, physical exertion, training, education, and medical applications. However, most Virtual Reality (VR) environments rely on visual representations and are not accessible to people with visual impairments (PVIs). The few VR environments developed for PVIs are of limited complexity (i.e., 2D experiences or 3D experiences with static objects). In prior work, we developed Virtual Showdown, a VR game for physical activity with audio and vibratory game elements and cues, to teach youth with visual impairments (ages 8-20) to hit a moving virtual ball. We will extend this work by designing accessible 3D VR environments with moving targets. We will determine the cues necessary to teach PVIs to interact with targets on a vertical plane, to understand the time to contact of a 3D virtual target, and to use their bodies to effectively hit multiple moving virtual targets. Our findings will supply the basis for research on accessible VR, accessible gaming, and movement-based gaming.
Using Machine Learning for Test or Questionnaire Assessments
Mentor: Dr. Suely Oliveira
It is estimated that approximately 18 million people in the US experience at least one depressive episode every year. The evaluation and treatment of depression rely extensively on surveys and questionnaires. We work on the development of artificial neural networks and their applications. We have designed neural networks for predicting student abilities from exam responses, and we are investigating similar neural network approaches for analyzing the questionnaires involved in medical decision making for diagnosing anxiety and depression. The networks we use are autoencoders and variational autoencoders. With the help of two of our graduate students, you will learn the basics of neural networks and how to use our new software for these problems. Some knowledge of a scripting language such as Python will be very helpful; basic knowledge of neural networks or the R language is desirable but not required. We will provide resources where you can learn all you need.
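As background on the autoencoder idea mentioned above, here is a minimal NumPy sketch (not the project's software): a one-hidden-layer autoencoder compresses simulated yes/no questionnaire responses into a low-dimensional code and is trained to reconstruct them; the data, dimensions, and learning rate are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 100 respondents answering 10 yes/no questionnaire items.
X = rng.integers(0, 2, size=(100, 10)).astype(float)

n_items, n_hidden = X.shape[1], 3
W_enc = rng.normal(0, 0.1, size=(n_items, n_hidden))   # encoder weights
W_dec = rng.normal(0, 0.1, size=(n_hidden, n_items))   # decoder weights

def forward(X):
    H = np.tanh(X @ W_enc)   # low-dimensional code (e.g., a latent severity score)
    return H, H @ W_dec      # reconstruction of the responses

losses = []
lr = 0.05
for _ in range(200):
    H, X_hat = forward(X)
    err = X_hat - X
    losses.append(float(np.mean(err ** 2)))
    # Backpropagate the mean-squared reconstruction error.
    grad_dec = H.T @ err / len(X)
    grad_H = (err @ W_dec.T) * (1 - H ** 2)   # tanh derivative
    grad_enc = X.T @ grad_H / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc
```

A variational autoencoder extends this by making the code probabilistic, which is useful when the latent construct (e.g., depression severity) is itself uncertain.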
Pedestrian and Bicycling Simulation
Mentor: Dr. Joseph Kearney
Domain Expert: Dr. Jodie Plumert
In 2018, 6,283 pedestrians died on U.S. roadways, the highest number of annual pedestrian fatalities in more than two decades. Our research focuses on how virtual environments can be used as laboratories for the study of human perception, action, and decision making, with a special focus on how children and adults cross traffic-filled roadways on a bike or on foot. Our lab includes two large-screen (CAVE) virtual environments that can be configured as either bicycling or pedestrian simulators. We also have a variety of head-mounted AR/VR systems, including the HTC Vive, HTC Vive Pro, Oculus Quest 2, and Magic Leap. Our research aims to understand the underlying causes of crashes involving vulnerable road users, with the eventual goal of finding interventions to reduce or eliminate crashes. Projects include studies on how pedestrians and cyclists respond to adaptive headlights that highlight them on the edge of the road, the influence of texting on pedestrian road crossing, the effectiveness of Vehicle to Pedestrian (V2P) technology that sends alerts and warnings to pedestrians, how two pedestrians jointly cross gaps in a stream of traffic, distributed simulators that connect pedestrian and driving simulators in a shared virtual environment, and the design of interfaces for autonomous vehicles (especially driverless cars) to communicate and interact with vulnerable road users such as pedestrians and bicyclists.
Exploring higher-order language features
Mentor: Dr. Padmini Srinivasan
Domain Expert: Dr. Jacob Michaelson
Stylometric research aims to detect and compare the writing styles of individuals. It has been the basis of investigations into applications such as authorship identification and its companion, authorship obfuscation; other applications include gender, age, and race identification. Stylometry succeeds in these applications because individuals have stylistic fingerprints that are learned, typically unconsciously, over time, such as preferences for particular sentence structures or word forms. Style can also be shaped by the community in which communication takes place; for example, we have found that style varies across subreddit and 4chan communities and other social media groups. In contrast to style, which develops subtly, content expertise is gained deliberately as an individual becomes interested or trained in a topic area. Although style and content analysis are usually considered separately, they interact in different ways to support communication between individuals, whether through writing, speech, or media such as songs, movies, or advertisements. Our research goal is to combine techniques from stylometric research and content analysis through a principled approach grounded in linguistics, psychology, and the cognitive sciences, and to use these to analyze language in writing and speech. In speech specifically, we are analyzing the prevalence of mazes (disfluencies such as false starts, repetitions, and revisions) as a feature of value. Our study of language in the context of mental illness and psychological states is in collaboration with Professor Jacob Michaelson (Associate Professor, Psychiatry and Neuroscience).
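To illustrate the kind of features stylometry builds on, here is a toy sketch (not the project's pipeline) that extracts two classic style signals, function-word rates and average sentence length, from raw text; the tiny function-word list and the regex tokenization are simplifying assumptions, as real studies use hundreds of function words and proper NLP tooling.

```python
import re
from collections import Counter

# A small, illustrative set of function words (real studies use hundreds).
FUNCTION_WORDS = {"the", "a", "an", "of", "to", "and", "in", "that", "it", "is"}

def style_features(text):
    """Very simple stylometric profile: function-word rates and sentence length."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w in FUNCTION_WORDS)
    return {
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        **{f"fw:{w}": counts[w] / max(len(words), 1)
           for w in sorted(FUNCTION_WORDS)},
    }

profile = style_features("The cat sat on the mat. It is a cat.")
```

Because function-word usage is largely content-independent, profiles like this can be compared across authors or communities; content analysis would add topic-bearing features on top of such a style vector.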
Computational Supports for Children’s Development of Executive Functions
Mentor: Dr. Juan Pablo Hourcade
We are developing interactive technology to lower barriers to a specific set of evidence-based activities intended to enhance preschool children's executive functions through high-quality social play. Our interactive web application, called StoryCarnival, includes interactive stories to motivate play, a play-planning tool, and voice agents controlled by adult facilitators to keep children engaged in play. We will expand our current research on StoryCarnival in multiple ways that we expect to contribute to the field of human-computer interaction. First, we are researching advanced user interfaces, including smart recommendations, for adult facilitators to control tangible voice agents with which children interact as part of StoryCarnival activities. Second, we will expand our focus from preschool children to neurodiverse children (e.g., autistic children, children with Down Syndrome).