Battling Antibiotic Resistance with Reinforcement Learning
Mentor: Dr. Mehrdad Moharrami
The rise of antibiotic-resistant bacteria presents an urgent global health threat,
undermining the effectiveness of current antibiotics and jeopardizing advancements in medicine.
This crisis has led to increased healthcare costs, prolonged hospital stays, and higher mortality rates.
Traditional antibiotics are rapidly losing their efficacy due to the overuse and misuse of these drugs,
making it imperative to develop new, effective treatments. Furthermore, the economic burden associated with
combating resistant infections places significant strain on healthcare systems worldwide.
Reinforcement Learning (RL) offers a transformative approach to tackle challenges in drug
discovery and prescription optimization. By modeling the process as a sequential decision-making task,
RL algorithms can personalize prescriptions and dosages to individual patient profiles, dynamically
adapting treatments based on real-time responses. This innovative strategy has the potential to curb
overprescription, thereby combating antibiotic resistance. By delaying or preventing the evolution of resistance,
RL could revolutionize our ability to address one of the most pressing public health challenges of the modern era.
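As a rough, hypothetical illustration of how prescription could be framed as a sequential decision problem, the sketch below trains a tabular Q-learning agent on a toy dosing environment. The states, actions, transition probabilities, and rewards are illustrative assumptions only, not a validated clinical or microbiological model.

    # Toy sketch: antibiotic dosing framed as a sequential decision problem.
    # All states, actions, dynamics, and rewards are illustrative assumptions.
    import random

    ACTIONS = ["no_dose", "low_dose", "high_dose"]      # hypothetical action set
    STATES = ["healthy", "infected", "resistant"]       # hypothetical patient states

    def step(state, action):
        """Return (next_state, reward) under toy transition dynamics."""
        if state == "infected":
            if action == "high_dose":
                # High doses often clear infection but risk selecting for resistance.
                return ("resistant", -5.0) if random.random() < 0.2 else ("healthy", 10.0)
            if action == "low_dose":
                return ("healthy", 8.0) if random.random() < 0.5 else ("infected", -1.0)
            return ("infected", -2.0)                   # untreated infection persists
        if state == "resistant":
            return ("resistant", -10.0)                 # resistant infections are costly
        return ("healthy", 0.0)                         # healthy patients stay healthy here

    # Tabular Q-learning over the toy MDP.
    Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    alpha, gamma, epsilon = 0.1, 0.95, 0.1

    for episode in range(5000):
        state = "infected"
        for t in range(10):                             # short treatment horizon
            if random.random() < epsilon:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: Q[(state, a)])
            next_state, reward = step(state, action)
            best_next = max(Q[(next_state, a)] for a in ACTIONS)
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
            state = next_state

    print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES})

A real project would replace the toy dynamics with patient-response models and richer state information, but the learning loop has the same shape.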
Collecting Medical Data over 5G
Mentor: Dr. Tianyu Zhang
Over the last decade, we have observed the emergence of new devices capable of collecting health data ranging from simple step counts to vital signs and blood glucose levels. A key challenge for these devices is to use the cellular network efficiently to collect data. As part of this project, we aim to conduct experimental research on 5G New Radio (NR) using the NSF-supported Powder platform. Powder is a remotely accessible "living laboratory" that provides open-source 5G testbeds, combining software-defined radio (SDR) units equipped with custom-designed RF frontends with an implementation of the open-source OAI (OpenAirInterface) 5G software stack. Powder provides a detailed manual and ready-to-use experiment configurations called ‘profiles.’ We plan to proceed in two steps.
1. Using some of the ready-made profiles provided by Powder to evaluate 5G PHY-layer techniques (e.g., mixed numerology)
by measuring network throughput and latency.
2. Implementing well-designed scheduling algorithms on the platform, creating our own profile, and evaluating their performance.
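As a rough illustration of the kind of scheduler we might prototype in step 2, the sketch below implements a proportional-fair downlink scheduler in Python. The per-user rates and averaging window are assumptions; in the actual project the scheduling logic would be ported into the OAI MAC layer and driven by real channel reports from Powder.

    # Sketch of a proportional-fair (PF) scheduler we might prototype before
    # porting it into the OAI MAC layer. Channel rates are simulated here; on
    # Powder they would come from real CQI/throughput reports.
    import random

    NUM_USERS = 4
    WINDOW = 100.0                   # averaging window for the PF metric (assumed)
    avg_rate = [1e-6] * NUM_USERS    # small initial averages to avoid divide-by-zero

    def instantaneous_rates():
        """Placeholder for per-TTI achievable rates (e.g., derived from CQI)."""
        return [random.uniform(1.0, 10.0) for _ in range(NUM_USERS)]

    for tti in range(1000):          # one scheduling decision per TTI
        rates = instantaneous_rates()
        # PF metric: instantaneous rate divided by long-term average rate.
        scheduled = max(range(NUM_USERS), key=lambda u: rates[u] / avg_rate[u])
        for u in range(NUM_USERS):
            served = rates[u] if u == scheduled else 0.0
            avg_rate[u] += (served - avg_rate[u]) / WINDOW

    print("long-term average rates per user:", [round(r, 2) for r in avg_rate])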
Ethics of Emerging Technologies for Children
Mentor: Dr. Juan Pablo Hourcade
We are studying the ethics of emerging technologies, such as extended reality and generative artificial intelligence, with respect to children as part of a national consortium. Our goal is to develop detailed ethical guidelines that arise from informed stakeholder (children, parents, teachers, pediatricians, therapists) views. Each site in our consortium focuses on a different group of children (urban, rural, young children, elementary school children, teenagers, neurodivergent children). Conducting the research involves various tasks, such as identifying ethically salient characteristics of emerging technologies, compiling potential use cases, understanding how to introduce emerging technologies and enable their use by stakeholders, integrating results from multiple research sites, and developing outreach materials.
Fine-grained representation learning of audio events
Mentor: Dr. Weiran Wang
Supervised and weakly-supervised representation learning has demonstrated state-of-the-art performance in different application domains. In this project, we will perform representation learning on audio clips to extract robust features for the detection of audio events such as glass breaking, fire alarms, baby cries, and coughing. We will use fine-grained supervision in the form of an ontology: for example, wind and thunderstorm are more similar to each other than to piano, because they belong to the category of natural sounds while piano belongs to the category of music. We believe the hierarchical structure of the ontology allows us to inject semantic structure into the learned representations and thereby improve model generalization. Experiments will be conducted on the AudioSet database, which contains 2 million audio clips from YouTube with 527 sound event labels. Potential applications of this approach include the detection of fine-grained human sounds such as coughing, snoring, panting, and heartbeats for health monitoring.
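One possible way to exploit the ontology, sketched below in PyTorch, is to combine a fine-grained event loss with a coarse category loss derived from the parent nodes. The tiny network, the fine-to-coarse label mapping, the single-label setup (AudioSet itself is multi-label), and the loss weighting are illustrative assumptions, not the project's final design.

    # Sketch: injecting ontology structure via a combined fine/coarse loss.
    # The network, label mapping, and loss weighting are assumptions only.
    import torch
    import torch.nn as nn

    NUM_FINE, NUM_COARSE, FEAT_DIM = 10, 3, 64

    # Hypothetical mapping from fine event labels to coarse ontology parents
    # (e.g., wind and thunderstorm -> natural sounds, piano -> music).
    fine_to_coarse = torch.tensor([0, 0, 0, 1, 1, 1, 1, 2, 2, 2])

    encoder = nn.Sequential(nn.Linear(FEAT_DIM, 128), nn.ReLU(), nn.Linear(128, 128))
    fine_head = nn.Linear(128, NUM_FINE)
    coarse_head = nn.Linear(128, NUM_COARSE)
    criterion = nn.CrossEntropyLoss()
    params = list(encoder.parameters()) + list(fine_head.parameters()) + list(coarse_head.parameters())
    optimizer = torch.optim.Adam(params, lr=1e-3)

    def training_step(features, fine_labels, coarse_weight=0.5):
        """One step with a loss that also rewards getting the coarse category right."""
        z = encoder(features)
        loss = criterion(fine_head(z), fine_labels) + \
               coarse_weight * criterion(coarse_head(z), fine_to_coarse[fine_labels])
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

    # Dummy batch of precomputed audio features (real inputs would be log-mel frames).
    features = torch.randn(32, FEAT_DIM)
    fine_labels = torch.randint(0, NUM_FINE, (32,))
    print(training_step(features, fine_labels))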
Preferred qualifications include coursework in artificial intelligence and/or machine learning, proficiency in Python, and experience with ML frameworks such as PyTorch or TensorFlow.
Intervention against HAIs
Mentor: Dr. Bijaya Adhikari
Domain Expert: Dr. Philip Polgreen
Clostridioides difficile infection (CDI) is a bacterial healthcare-associated infection (HAI). HAIs like CDI are a major burden on both patients and healthcare professionals (HCPs). The CDC estimates that there are roughly 4.5 HAI cases for every 100 hospital admissions, with an annual cost of between 28 and 45 billion USD. In this project, we will develop AI-driven approaches to design intervention strategies that mitigate HAI spread and reduce the rate of asymptomatic transmission.
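To give a flavor of what evaluating an intervention strategy can look like, the sketch below runs a simple SIS-style spread process on a synthetic contact network and compares no intervention against a degree-based heuristic. The random network, transmission parameters, and heuristic are placeholders; the project would use real hospital contact data and learned intervention policies instead.

    # Sketch: evaluating a simple intervention on a synthetic contact network.
    # Network, dynamics, and the degree heuristic are illustrative assumptions.
    import random
    import networkx as nx

    def simulate(G, protected, beta=0.05, recovery=0.1, steps=200, seeds=5):
        """Return the cumulative number of infections over a simple SIS run."""
        infected = set(random.sample(list(G.nodes()), seeds)) - protected
        total = len(infected)
        for _ in range(steps):
            new_inf, recovered = set(), set()
            for u in infected:
                for v in G.neighbors(u):
                    if v not in infected and v not in protected and random.random() < beta:
                        new_inf.add(v)
                if random.random() < recovery:
                    recovered.add(u)
            total += len(new_inf)
            infected = (infected | new_inf) - recovered
        return total

    G = nx.erdos_renyi_graph(300, 0.02, seed=1)
    budget = 30
    # Heuristic baseline: "protect" the highest-degree nodes (e.g., prioritize
    # them for enhanced hygiene or screening) and compare to no intervention.
    top_degree = set(sorted(G.nodes(), key=G.degree, reverse=True)[:budget])
    print("no intervention:", simulate(G, set()))
    print("degree-targeted:", simulate(G, top_degree))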
Modeling equity in vaccine allocation
Mentors: Dr. Sriram Pemmaraju and PhD student Jeffrey Keithley
Domain Expert: Dr. Philip Polgreen
The COVID-19 pandemic has highlighted the importance of fairness and equity in the allocation of vaccines, especially in the early period when vaccines are scarce. Defining and modeling equity in this context is complicated. Should "demographic equity" be the main goal? Should we focus on equity of allocation or equity of outcomes? Should vaccines target the most vulnerable, or should they target segments of the population that provide the greatest societal utility? Furthermore, increasing the fairness of an allocation may reduce its overall efficiency, so fairness and efficiency are distinct and sometimes competing goals. This project, which will build on previous work by our group, will focus on mathematical and computational modeling of fairness and equity in vaccine allocation. We will view this as an optimization problem, design efficient algorithms to solve it, and evaluate our solutions on realistic contact networks and disease-spread models.
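One possible (and much simplified) formalization is sketched below: allocate a limited supply across groups to maximize expected benefit, subject to a demographic-equity floor. The group sizes, per-dose benefits, and fairness parameter are made-up illustrative numbers, and the greedy solver stands in for the more careful optimization algorithms the project would develop.

    # Sketch: supply allocation with an equity floor plus greedy refinement.
    # Group sizes, benefits, and the fairness parameter are assumptions.
    SUPPLY = 10000
    groups = {                      # hypothetical groups: (population, benefit per dose)
        "long_term_care": (2000, 0.9),
        "healthcare_workers": (5000, 0.7),
        "general_adult": (43000, 0.3),
    }
    fairness = 0.5                  # each group gets >= fairness * its proportional share

    total_pop = sum(pop for pop, _ in groups.values())
    alloc = {}
    remaining = SUPPLY

    # Step 1: guarantee the equity floor for every group.
    for name, (pop, _) in groups.items():
        floor = int(fairness * SUPPLY * pop / total_pop)
        alloc[name] = min(floor, pop)
        remaining -= alloc[name]

    # Step 2: spend the rest greedily where the marginal benefit is highest.
    for name, (pop, benefit) in sorted(groups.items(), key=lambda kv: -kv[1][1]):
        extra = min(remaining, pop - alloc[name])
        alloc[name] += extra
        remaining -= extra

    benefit_total = sum(alloc[n] * groups[n][1] for n in groups)
    print(alloc, "expected benefit:", round(benefit_total, 1))

Raising the fairness parameter shifts doses toward proportional shares and lowers the total expected benefit, which is exactly the fairness-versus-efficiency tension the project studies.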
Modeling the Spread of Hospital Acquired Infections
Mentor: Dr. Alberto Maria Segre
Domain Expert: Dr. Philip Polgreen
Students will work on modeling problems related to the spread of hospital acquired infections as part of the
Computational Epidemiology research group. The team's approach is data driven, based on fine-grained data
obtained from, e.g., the University of Iowa Hospitals and Clinics and similar healthcare facilities, as well as other
sources of health-related data. Typically, students will undertake a data-science-style analysis of a data set, or devise
and execute a simulation study to evaluate the effectiveness of changes to hospital architecture, hospital operations,
healthcare worker behaviors (e.g., hand hygiene), and patient room assignments. Thus, depending on student interests,
research topics might include, for example:
1. Constructing generative statistical models of, e.g., healthcare worker behavior from large, fine-grained
healthcare worker movement data sets;
2. Constructing simulations of different hospital operation policies (e.g., room assignments, cleaning practices, etc.), as illustrated in the sketch after this list;
3. Extracting spatial models of healthcare facilities from CAD files;
4. Devising statistical models of healthcare-acquired disease transmission parameters (e.g., shedding rates,
healthcare worker/patient contact parameters, etc.); or
5. Extracting disease information from patient observations (e.g., hospital electronic medical record notes or disease-oriented discussion boards).
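As a rough illustration of topic 2, the sketch below compares two terminal-cleaning policies in a toy room-level simulation. The room counts, colonization pressure, and transmission probabilities are placeholders, not estimates from UIHC data.

    # Toy simulation comparing two room-cleaning policies; all parameters are
    # placeholders, not estimates from real hospital data.
    import random

    def simulate(terminal_clean_prob, days=365, rooms=20, seed=0):
        """Count infections acquired from contaminated rooms under a cleaning policy."""
        rng = random.Random(seed)
        contaminated = [False] * rooms
        infections = 0
        for _ in range(days):
            for r in range(rooms):
                # Each day a new patient occupies the room with some probability.
                if rng.random() < 0.7:
                    colonized = rng.random() < 0.05      # incoming colonization pressure
                    if contaminated[r] and not colonized and rng.random() < 0.10:
                        infections += 1                  # environmental acquisition
                        colonized = True
                    if colonized:
                        contaminated[r] = True
                    # Terminal cleaning at discharge succeeds with a policy-dependent probability.
                    if contaminated[r] and rng.random() < terminal_clean_prob:
                        contaminated[r] = False
        return infections

    print("standard cleaning :", simulate(terminal_clean_prob=0.5))
    print("enhanced cleaning :", simulate(terminal_clean_prob=0.9))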
Multi-Device Health Tracking Integration
Mentor: Dr. Lucas Silva
This project focuses on developing health tracking solutions by integrating data from various consumer wearable devices (smart rings, phones, smartwatches). We are working on creating a data processing pipeline that enables individuals to better understand their health patterns and behaviors through multi-device sensor fusion. There are several opportunities to contribute to this research:
1. Developing algorithms for collecting and synchronizing data streams from different devices,
typically through each platform's API.
2. Developing algorithms for processing and combining various health parameters
(e.g., heart rate, sleep patterns, activity levels) into databases for multiple users.
3. Exploring visualizations of multiple health data types that are accessible to users.
Students will gain experience in working with consumer devices and developing data processing pipelines.
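To illustrate opportunities 1 and 2, the sketch below aligns heart-rate streams from two hypothetical devices onto a common one-minute grid with pandas. The column names, sampling rates, and fusion rule are assumptions; in practice each stream would be pulled from its platform's API rather than simulated.

    # Sketch: aligning heart-rate streams from two hypothetical devices onto a
    # common one-minute grid. Column names and sampling rates are assumptions.
    import pandas as pd

    # Simulated raw pulls: a watch reporting every 15 s and a ring every 60 s.
    watch = pd.DataFrame({
        "timestamp": pd.date_range("2024-01-01 08:00", periods=40, freq="15s"),
        "hr_watch": 70 + pd.Series(range(40)) % 5,
    })
    ring = pd.DataFrame({
        "timestamp": pd.date_range("2024-01-01 08:00", periods=10, freq="60s"),
        "hr_ring": 68 + pd.Series(range(10)) % 4,
    })

    # Resample both streams to one-minute means on a shared index, then join.
    watch_1min = watch.set_index("timestamp").resample("1min").mean()
    ring_1min = ring.set_index("timestamp").resample("1min").mean()
    merged = watch_1min.join(ring_1min, how="outer")

    # A simple fused estimate: average whichever devices reported in each minute.
    merged["hr_fused"] = merged[["hr_watch", "hr_ring"]].mean(axis=1)
    print(merged.head())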
Personalizing Hearing-Aids
Mentor: Dr. Octav Chipara
Domain Expert: Dr. Yu-Hsiang Wu
Approximately 35 - 50% of Americans over 65 years old report having an age-related hearing impairment treated primarily with hearing aids. Regular use of hearing aids has been shown to improve communication and avoid the harmful effects of hearing loss, including an increased risk of social isolation and depression. Over the last decade, researchers have successfully improved the performance of hearing aids by leveraging the increasing capabilities of hardware platforms, which enabled the deployment of sophisticated signal processing algorithms such as those for noise suppression, directional microphones, and auditory scene classification. In contrast, future internet-enabled hearing aids will overcome limited hardware resources by accessing cloud services and edge devices' significant computational and storage capabilities. As part of this development project, you will learn how to design, train, and deploy machine learning algorithms that enhance the capabilities of hearing aids. These algorithms will run either in the cloud or on resource-constrained edge devices.
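As one hypothetical example of such an algorithm, the sketch below defines a compact acoustic scene classifier in PyTorch that could, for instance, steer hearing-aid settings. The architecture, feature shape, and class set are assumptions chosen for illustration, not the project's actual model.

    # Sketch: a small acoustic scene classifier of the kind that could steer
    # hearing-aid settings. Architecture, features, and classes are assumptions.
    import torch
    import torch.nn as nn

    NUM_CLASSES = 4                   # hypothetical scenes: quiet, speech, speech-in-noise, music

    class SceneClassifier(nn.Module):
        def __init__(self, n_mels=40, num_classes=NUM_CLASSES):
            super().__init__()
            # A compact 1D CNN over log-mel frames, small enough for edge devices.
            self.net = nn.Sequential(
                nn.Conv1d(n_mels, 32, kernel_size=5, padding=2), nn.ReLU(),
                nn.Conv1d(32, 32, kernel_size=5, padding=2), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1), nn.Flatten(),
                nn.Linear(32, num_classes),
            )

        def forward(self, x):         # x: (batch, n_mels, time_frames)
            return self.net(x)

    model = SceneClassifier()
    dummy = torch.randn(8, 40, 100)   # a batch of short log-mel patches (assumed shape)
    print(model(dummy).shape)         # -> torch.Size([8, 4])

    # For deployment on a constrained edge device, the trained model could be
    # exported, e.g., via TorchScript tracing.
    scripted = torch.jit.trace(model, dummy)

Larger models of the same kind could instead run in the cloud, with the hearing aid streaming features over the network.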
Using Smart Glasses and Watches to Access Hearing Aids
Mentor: Dr. Octav Chipara
Domain Expert: Dr. Yu-Hsiang Wu
For hearing aids to work, they must do the right things (e.g., enable directional microphones), for the right people, in the right places (when speech and noise are spatially separated). To this end, hearing aids must accurately identify challenging and critical listening situations, the characteristics of these situations, and the listening goals of listeners. One commonly used method to assess the performance of hearing aids in the real world is Ecological Momentary Assessment (EMA) self-reports. EMA is a methodology that asks respondents to repeatedly report their experiences during or shortly after the experiences in their natural environments. Although the use of EMA in audiology research is increasing, EMA is not without limitations. For example, because challenging listening situations occur infrequently, EMA surveys delivered at random tend to miss them. In addition, the association between discrete EMA data and continuous sensor data (i.e., how listeners integrate the experience of the past several minutes to answer an EMA survey question) is unknown. This project aims to use smartwatches and smart glasses to (1) capture more challenging listening situations with higher fidelity than previously possible and (2) determine the association between EMA self-reports and continuous sensor data using novel machine learning algorithms.
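One simple starting point for aim (2), sketched below, is to pair each EMA response with summary features computed over the sensor window that preceded it, producing a table suitable for downstream machine learning. The 10-minute window, the sound-level feature, and all column names are assumptions for illustration.

    # Sketch: pairing each EMA response with features from the preceding sensor
    # window. Window length, features, and column names are assumptions.
    import pandas as pd

    # Simulated continuous sensor stream (1 sample/second) and discrete EMA reports.
    sensor = pd.DataFrame({
        "timestamp": pd.date_range("2024-01-01 09:00", periods=3600, freq="1s"),
    })
    sensor["sound_level_db"] = 55 + (sensor.index % 600) / 60.0   # placeholder signal

    ema = pd.DataFrame({
        "timestamp": pd.to_datetime(["2024-01-01 09:20", "2024-01-01 09:50"]),
        "reported_difficulty": [2, 4],                            # e.g., 1 (easy) .. 5 (hard)
    })

    WINDOW = pd.Timedelta(minutes=10)
    rows = []
    for _, survey in ema.iterrows():
        window = sensor[(sensor["timestamp"] > survey["timestamp"] - WINDOW) &
                        (sensor["timestamp"] <= survey["timestamp"])]
        rows.append({
            "mean_db": window["sound_level_db"].mean(),
            "p95_db": window["sound_level_db"].quantile(0.95),
            "label": survey["reported_difficulty"],
        })

    features = pd.DataFrame(rows)     # inputs for a downstream ML model
    print(features)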