2004 — 2009
Kitts, James; Bilmes, Jeffrey; Fox, Dieter (co-PI); Kautz, Henry (co-PI); Choudhury, Tanzeem
Collaborative Research: Creating Dynamic Social Network Models From Sensor Data @ University of Washington
This project will develop a hybrid method for rigorously observing structures of social interaction over time, and validate this method by comparison with conventional survey and observation designs. It will use both wearable and fixed computer devices to collect streaming data on research participants' physical location, speech, and motion, and then will develop computational models to infer structures of social interaction from these data. This suite of tools will thus allow direct automated measurement of networks of face-to-face interaction over time. Having demonstrated and validated this approach, the project will illuminate a set of classic theoretical problems that have eluded rigorous analysis under conventional methods. Substantial advances in modeling the dynamics of social networks have been frustrated by the paucity of appropriate data for empirical investigation, as scholars must often address dynamic theories using cross-sectional or sparse panel data.
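The inference step described above, turning streaming location data into a network of face-to-face interaction, can be illustrated with a minimal sketch: treat two participants as interacting whenever they are co-present within a small radius during the same time window. The data layout, the `copresence_network` function, and the 10 m threshold are illustrative assumptions, not the project's actual models.

```python
from collections import defaultdict
from itertools import combinations
from math import hypot

def copresence_network(fixes, radius_m=10.0):
    """Build a weighted co-presence network from location fixes.

    `fixes` is a list of (person_id, time_bin, x, y), with x, y in meters
    on a local grid. Edge weight = number of time bins in which a pair
    was within `radius_m` of each other.
    """
    by_bin = defaultdict(list)
    for person, t_bin, x, y in fixes:
        by_bin[t_bin].append((person, x, y))
    edge_counts = defaultdict(int)
    for people in by_bin.values():
        for (a, ax, ay), (b, bx, by) in combinations(people, 2):
            if a != b and hypot(ax - bx, ay - by) <= radius_m:
                edge_counts[tuple(sorted((a, b)))] += 1
    return dict(edge_counts)

fixes = [
    ("ann", 0, 0.0, 0.0), ("bob", 0, 3.0, 4.0),   # 5 m apart -> edge
    ("ann", 1, 0.0, 0.0), ("cay", 1, 50.0, 0.0),  # 50 m apart -> no edge
]
print(copresence_network(fixes))  # {('ann', 'bob'): 1}
```

Repeating this over successive observation periods yields the longitudinal network snapshots the project proposes to analyze.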
The team of investigators includes experts from both Computer Science and Sociology, integrates tools from both fields, and addresses questions that would be intractable without this interdisciplinary lens. For example, the precise measurement of interaction in time and space allows researchers to observe the co-evolution of social roles (as performed by individuals in day-to-day interaction) and structural positions in a social network. The streaming measures of social interaction allow a detailed analysis of conversations, analyzing how styles of communication change within social relationships over time, including the effect of structural position on styles of interaction and the effect of interaction style on position in the network.
The research will examine the evolution of social networks over short (weeks) and long (months or years) time scales. Using Global Positioning Systems and other location-sensing technologies, the work will contribute an explicitly spatial investigation of network dynamics, modeling the interplay of the physical environment and social networks. For example, particular locations may serve as hubs or bridges, connecting otherwise disparate network components. Results may refine scientific understanding of the co-evolution of social networks and physical locations.
The project will develop a set of methods for social network observation and analysis, generate datasets of unprecedented breadth and depth, and provide an independent standard for comparison of conventional tools, all of which will be invaluable resources for the broader scientific community. The resulting longitudinal network datasets are likely to be mined for insights into social network dynamics by many other researchers, while the team of graduate students working under this project will benefit from unique interdisciplinary training. Beyond basic research, the novel application of sensor-based and machine learning methods to understanding human communication has broad applicability to real-world social problems. As a simple example, a refined understanding of the co-evolution of networks and physical locations may provide insight into macro-level processes of community integration and disintegration, informing social architects and urban planners. The project will promote teaching, training and understanding among researchers in computer science and social science.
2013 — 2017
Estrin, Deborah; Gay, Geraldine (co-PI); Choudhury, Tanzeem; Stein, Daniel; Ancker, Jessica
SCH: INT: Novel Techniques for Patient-Centric Disease Management Using Automatically Inferred Behavioral Biomarkers and Sensor-Supported Contextual Self-Report
The vision of patient-centric, personalized, precision medicine and wellness will be fully realized only when an individual's self-care and clinical decision making are informed by a rich, predictive model of that individual's health status. The evolution and dissemination of mobile technology has created unprecedented opportunities for highly detailed and personalized data collection in a far more granular, unobtrusive, and even affordable way; these data include activity levels, location patterns, sleep, consumption, and communication and social interaction. However, turning this potential into practice requires that we develop the algorithms and methodologies to transform these raw data into actionable information. The research will develop novel and generalizable techniques to derive robust measures relevant to individual health and clinical decision making. The team will develop and evaluate tools that convert raw human-activity data into clinically actionable behavioral biomarkers. This demands creative uses of the underlying technical capabilities (i.e., passive data capture, data analysis and machine learning, data visualization, user experience), as well as rigorous understanding of the underlying health condition and management (i.e., functional health measures, achievable and optimal health outcomes, patient challenges in adherence, risks and benefits associated with medication and other aspects of treatment, and clinical decision making). The approach has broad applicability across disease management (e.g., auto-immune, gastrointestinal, depression, cognitive decline, and neurologic disorders), but also calls for tailoring to specific conditions and individuals. Therefore, we will conduct this initial work in a specific context, that of chronic pain management for three prominent conditions: rheumatoid arthritis, osteoarthritis, and lower back pain.
The behavioral biomarkers associated with our initial target domain, pain management, center around: (i) decline in activity levels; (ii) increase in stress; (iii) decrease in sleep quality; (iv) drop in function, e.g., reduction in travel distance or inability to go to work. The effectiveness of passive sensing capabilities of the mobile phone to track sleep, changes in activity level, stress, social isolation, geographic location and several other indicators that are likely antecedents or symptoms of pain interference has been demonstrated previously.
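One simple way to operationalize a biomarker such as "decline in activity levels" is to compare each day against a rolling personal baseline. The sketch below is a hypothetical illustration assuming daily step counts as input; the z-score rule and the 7-day window are illustrative choices, not the project's actual algorithm.

```python
from statistics import mean, stdev

def flag_activity_decline(daily_steps, window=7, z_thresh=-2.0):
    """Flag days whose activity falls far below the person's own
    recent baseline (a rolling mean over the previous `window` days)."""
    flags = []
    for i in range(window, len(daily_steps)):
        baseline = daily_steps[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        z = (daily_steps[i] - mu) / sigma if sigma > 0 else 0.0
        flags.append(z < z_thresh)
    return flags

steps = [8000, 8200, 7900, 8100, 8050, 7950, 8000, 2000]
print(flag_activity_decline(steps))  # [True] -> the final day is flagged
```

Analogous per-person baselining applies to the other biomarkers listed above (sleep quality, stress, travel distance), since what matters clinically is change relative to the individual, not an absolute threshold.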
While behavioral biomarkers rely extensively on passively captured data streams (such as activity, location, communication, application usage and audio), there remain important cases in which self-report data is required to augment or clarify passively collected data. However, the standardized patient survey instruments that assess relevant symptoms and behavior are not suitable for use on a daily basis because of length, question design, or both. Further, traditional forms of self-report are often intrusive, burdensome, and suffer from high rates of attrition. A new approach, contextual recall, aims to mitigate the issues related to self-report through three key mechanisms: optimizing the delivery of prompts, providing the user with key contextual cues to improve recall, and employing visual input techniques as an alternative to long-form measures that do not scale well to frequent mobile self-reports. The approach to personalizing disease management is intentionally scalable in terms of affordability and accessibility. Passive data collection requires no user attention, and contextual recall is a form of self-report designed for busy individuals with a range of demands and constraints on their time, as well as potential literacy and numeracy constraints. The clinician-facing components of this approach are also designed to work in resource-constrained clinical settings where clinicians are under particular time pressure. The team will recruit patients and clinicians from typically underserved communities to engage in the participatory design process.
The overall contributions of this work will include development and evaluation of: (1) software techniques to combine and transform passively monitored and self-reported data streams into clinically meaningful, actionable, and personalized indicators, which we call behavioral biomarkers; (2) contextual recall that allows the collection of highly granular and contextually specific self-report data to enhance passively captured data with information from the patient perspective, while balancing recall bias against usability; and (3) a methodology that systematizes the collaboration with clinical domain experts to develop and integrate behavioral biomarkers into clinical decision making for specific diseases. We will create and evaluate a modular and extensible suite of analytics and user interaction techniques designed to facilitate iterative implementation and evaluation. These modules will themselves be a contribution, but equally important will be the evaluation of the overall approach of behavioral biomarkers as a driver of precision medicine.
2014 — 2019
Ozcan, Aydogan (co-PI); Estrin, Deborah (co-PI); Mehta, Saurabh (co-PI); Erickson, David; Choudhury, Tanzeem
INSPIRE Track 2: Public Health, Nanotechnology, and Mobility (PHeNoM)
PI: Erickson, David; Proposal: 1343058; Title: INSPIRE Track 2: Public Health, Nanotechnology, and Mobility (PHeNoM)
This INSPIRE award brings together research areas traditionally supported by: the Biophotonics and Nanobiosensing Programs in the Chemical, Bioengineering, Environmental, and Transport Systems Division (CBET) of the Engineering Directorate (ENG); the Communications, Circuits and Sensing Systems Program in the Electrical, Communications and Cyber Systems Division (ECCS) of the ENG Directorate; the Science, Technology and Society Program in the Social and Economic Sciences Division of the Social, Behavioral and Economic Sciences Directorate (SBE); and the Smart Health and Wellbeing Program in the Information and Intelligent Systems Division (IIS) of the Computer and Information Science and Engineering Directorate (CISE).
Significance
The science and technology enabled by the Public Health, Nanotechnology and Mobility (PHeNoM) project may ultimately lead to widespread access to health information obtainable from lab-on-chip technology. This research project could alter the domestic healthcare landscape by enabling earlier-stage detection of disease, reducing the cost of public healthcare delivery, and allowing individuals to take better control of their own well-being. Such advances require the integration of the social and technical contexts of health care device deployment. This integration is accomplished by gathering feedback on early versions of the technology and modifying future designs based on that initial feedback. Iterations between feedback and design are facilitated by research efforts that interpret the feedback and guide the development process. The ultimate transfer of the technology to the marketplace is enabled by a new education effort that involves a unique combination of coursework, business plan development, pre-seed grant workshops, and collaborations with existing start-ups in the mobile health space.
Technical Description
Advancements in nanotechnology and microfluidics have enabled the development of lab-on-chip devices that can detect and quantify protein, genetic, and other biochemical markers of disease with precision. Currently available personalized diagnostic devices are limited to conditions that require either frequent monitoring (e.g., glucose for diabetics) or "binary" results (e.g., pregnancy). The goals of the PHeNoM program are to demonstrate that the deployment of lab-on-chip technology can be fundamentally altered by taking advantage of ubiquitous smartphone technology, and to show that the fusion of physical sensing and molecular assays on mobile platforms enables healthcare diagnostics that are more informative than either technology alone.
To meet these aims, the investigators are focusing their efforts on developing and deploying three systems that may have an immediate impact on advancing personalized healthcare in the United States: a Stress-Phone for long term stress management, a Nutri-Phone for bloodwork-enabled nutritional awareness, and a Hema-Phone for monitoring viral loading in HIV+ patients. Beyond the immediate merits of these technologies, the broader merit of this project is the demonstration of new "bioinfo-mobile" diagnostics that intertwine the advantages of mobility, computation, physical sensing, and biomolecular assays.
2018 — 2021
Choudhury, Tanzeem |
FW-HTF: Collaborative Research: An Embodied Intelligent Cognitive Assistant to Enhance Cognitive Performance of Shift Workers
The Future of Work at the Human-Technology Frontier (FW-HTF) is one of 10 new Big Ideas for Future Investment announced by NSF. The FW-HTF cross-directorate program aims to respond to the challenges and opportunities of the changing landscape of jobs and work by supporting convergent research. This award fulfills part of that aim.
This project develops a closed-loop embodied Intelligent Cognitive Assistant (e-ICA) to infer circadian rhythm, alertness, and stress levels and to provide personalized feedback to enhance users' cognitive ability and wellbeing in an unobtrusive and effortless manner. Approximately 20% of the labor force engages in shift work, which often leads to inadequate and poor sleep. Being out of sync with one's natural body clock, or circadian rhythm, can lead to many complications over time, including a higher likelihood of cardiovascular disease, cancer, obesity, and mental health problems. In addition, there can be serious deficits in cognitive performance, with productivity loss and more accidents in the workplace. This research program will design and develop a novel sensor-based e-ICA that can monitor shift workers' circadian rhythms, sleep patterns, and stress levels. The platform will be flexible so that it can be used to study broad populations in addition to shift workers. Given that 70% of the population suffers from circadian rhythm disruption, this technology has the potential to impact a wide range of workers and firms. In addition, the anonymized data and information about the developed hardware and software will be shared with the community. The technology will address the needs of a broad cross-section of stakeholders, but will be tested initially with medical residents and other health practitioners who typically work long shifts that change every few weeks. Moreover, this project will also allow interdisciplinary cross-fertilization between science, engineering, psychology, sleep and circadian biology, and psychiatry.
The investigators explore ways to provide biologically and physiologically attuned support in the areas of cognitive ability, performance, sleep, and well-being based on the inferred individual circadian rhythm and personalized embodied assistance by: a) passively and continuously gathering behavioral and physiological streams through wearable, mobile and remote devices from workers in an effortless manner; b) continuously inferring each person's individual circadian rhythm, alertness, and stress in daily life settings; and c) providing personalized multi-sensory feedback or actionable behavioral suggestions to modulate circadian rhythm, alertness and stress levels with the aim of enhancing cognitive ability, performance and wellbeing. The investigators will test the efficacy, usability and acceptability of the system both in a series of laboratory studies and in an in-situ work environment.
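The sense-infer-act cycle in steps (a)-(c) above can be sketched as a minimal closed loop. The alertness heuristic, thresholds, and feedback actions below are hypothetical stand-ins, not the project's actual models.

```python
def infer_alertness(heart_rate, hours_awake):
    """Toy heuristic: alertness declines with time awake and with
    unusually low heart rate. Returns a score in [0, 1]."""
    score = 1.0 - 0.05 * hours_awake - 0.005 * max(0, 60 - heart_rate)
    return max(0.0, min(1.0, score))

def choose_feedback(alertness):
    """Map an inferred alertness score to a feedback action."""
    if alertness < 0.3:
        return "suggest rest break"
    if alertness < 0.6:
        return "deliver light/vibration cue"
    return "no intervention"

def closed_loop_step(sample):
    """One loop iteration: sense (a sample arrives) -> infer -> act."""
    alertness = infer_alertness(sample["heart_rate"], sample["hours_awake"])
    return choose_feedback(alertness)
```

In the actual system each step would be far richer (multi-sensor fusion, a learned circadian model, multi-sensory actuation), but the control structure, continuous passive sensing feeding continuous inference feeding personalized feedback, is the one described above.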
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
2020
Nandakumar, Rajalakshmi; Choudhury, Tanzeem
RAPID: Using Smartphones to Detect and Monitor Respiratory Symptoms in COVID-19 Patients
This project will develop, refine, and evaluate a smartphone-based solution to reliably track changes in blood oxygen saturation (SpO2) and respiration rate and volume. Recent analyses of COVID-19 patients have shown some unusual findings. For example, there is often a discordance between respiratory symptoms and blood oxygen saturation levels, which can lead to sharp deterioration of a patient's status without the individual experiencing the usual signs of distress. Existing smartphone solutions cannot reliably detect significant drops in blood oxygenation, which is essential for determining whether a person needs to be hospitalized. Accurate in-home tracking of respiratory signals and blood oxygenation levels can help clinicians monitor and follow patients with COVID-19 and identify those who are stable versus those who are deteriorating.
This project will enable two informative, scalable, and cost-effective measurements using smartphones: (i) SpO2 and (ii) respiration rate and volume changes. Although there are many standalone pulse oximeters on the market that are FDA approved and work well (accuracy of ±2%), most people do not have them and are unlikely to buy special-purpose devices. Recently, several smartphone- and smartwatch-based apps have been released that claim to measure oxygen saturation, but they are not reliable. These applications simply use the phone's camera to measure the change in reflection. While these apps can capture pulse reliably, and even capture blood hemoglobin concentration to some degree, they do not work for oxygen saturation because there are no separate signals to compare oxygenated against deoxygenated hemoglobin. In general, pulse oximetry works by measuring light absorption in hemoglobin (transdermally) at two different wavelengths (red: 660 nm and near-infrared: 940 nm). Both of these bands can be found in broadband white LEDs, such as those used for the flash on smartphones, and can be read by the image sensors (cameras), as they use infrared for distance measurements in photographs. With optical filters attached to the smartphone flash, these two distinct bands can be separated out from the broadband source, captured by the phone's camera, and used for pulse oximetry. For monitoring the respiration signal/rate, the project will build on the investigators' previous work on opioid overdose detection, which leverages the speakers and microphones of a smartphone to monitor the chest motion of a person in a contactless fashion. At a high level, the smartphone transmits inaudible high-frequency custom sound signals using the device's speaker. These signals are reflected by the subject's chest and recorded using the device's microphones. The chest motion due to breathing causes a change in these reflections as seen by the microphones.
These changes can be detected, and the respiration signal recovered, using signal processing algorithms on the smartphone. This system can now be improved to detect changes in respiration rate caused by the onset of viral infections and difficult breathing conditions such as hypoxia.
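The two-wavelength principle described above is conventionally turned into an SpO2 estimate via the "ratio of ratios": compare the pulsatile (AC) to non-pulsatile (DC) component of the signal at each wavelength. The sketch below uses the common textbook linear calibration SpO2 ≈ 110 - 25R; real devices rely on empirical per-device calibration, and the AC/DC extraction here is deliberately simplified.

```python
def ratio_of_ratios(red, infrared):
    """Compute R = (AC_red/DC_red) / (AC_ir/DC_ir) from two sampled
    photoplethysmography signals (lists of intensity readings)."""
    def ac_dc(signal):
        dc = sum(signal) / len(signal)   # slow, non-pulsatile component
        ac = max(signal) - min(signal)   # peak-to-peak pulsatile amplitude
        return ac, dc
    ac_r, dc_r = ac_dc(red)
    ac_ir, dc_ir = ac_dc(infrared)
    return (ac_r / dc_r) / (ac_ir / dc_ir)

def estimate_spo2(red, infrared):
    """Textbook linear approximation; real oximeters use per-device
    empirical calibration curves instead of these constants."""
    return 110.0 - 25.0 * ratio_of_ratios(red, infrared)

# Synthetic example: red AC/DC = 0.02, infrared AC/DC = 0.04 -> R = 0.5
print(estimate_spo2([99, 101], [98, 102]))  # 97.5
```

In the proposed system, the red and near-infrared samples would come from the camera frames captured through the optical filters attached to the phone's flash.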
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
2022 — 2026
Choudhury, Tanzeem |
Collaborative Research: HCC: Medium: Body as Intervention: Toward Closed-Loop, Embodied Behavioral Health Interventions
There has been a drastic increase in stress and anxiety in the U.S., leading to a mental health pandemic. The need for effective mental health interventions is more urgent now than ever. By monitoring users' symptoms and their context (e.g., when someone is having an anxiety attack or experiencing cravings when passing by a bar) through wearables and IoT (Internet of Things) devices, mobile health (mHealth) technologies have the potential to transform mental health care. Despite this advanced monitoring capability, most existing mHealth interventions are digitizations of traditional health interventions that do not deliver in-the-moment precision interventions in response to users' symptoms. As such, they inherit the limitations of their predecessors: the reliance on human motivation and the need for active engagement to be effective, resulting in limited adherence. To address this problem, the investigators will develop a class of novel solutions, called sensory interventions, that can be effective without disrupting users or requiring their active engagement. Sensory interventions are real-time closed-loop systems that act directly on users' bodies or immediate environment in response to users' behavioral or physiological signals. Unlike existing solutions, sensory interventions combine applied engineering, signal processing, and machine learning to trigger interventions autonomously without user effort. The project will create three types of closed-loop wearable and IoT systems that use different modalities (vibration, airflow, and touch) to deliver sensory interventions in mental health contexts such as cravings, workplace stress, and social stress. Ultimately, this project will enable mHealth interventions to be as rich, diverse, and personalized as mHealth monitoring solutions. This project will produce open-source software, hardware designs, and datasets.
Collaborations with the Cornell Tech Precision Health Initiative and with the University of Chicago Medicine and their clinical and industry partners will accelerate the dissemination of research through clinical evaluations and commercialization.

Most existing mHealth behavioral health interventions, although coupled with advanced sensing systems to detect health needs, require conscious cognitive processing of information and active participation from users to be effective. This project will introduce and develop the concept of sensory interventions, a novel class of mHealth interventions that require little or no cognitive awareness to be effective. This project will investigate sensory interventions in four stages: (i) investigate and map modalities of external (electromechanical) stimuli to actuate neurological responses that produce a neurophysiological effect; (ii) design and develop devices that enable these sensory interventions within the constraints of mHealth; (iii) determine physiological signals that are associated with target behaviors and integrate sensing systems, signal processing, and machine learning with sensory interventions to achieve closed-loop systems that automatically trigger interventions; and (iv) evaluate the efficacy, usability, and acceptability of the closed-loop systems (both in-lab and in situ). Throughout this process, the investigators will evaluate and characterize how sensory interventions impact three common stress-induced mental health challenges: substance cravings, workplace stress, and social stress. To intervene in substance cravings, the investigators will leverage heart rate biofeedback, develop a smartwatch-based system to deliver biofeedback using vibrotactors, and evaluate how such vibrotactile actuation mitigates alcohol and nicotine cravings.
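The closed-loop structure of the craving intervention can be sketched as a simple trigger: monitor streaming heart rate and fire vibrotactile biofeedback when it rises well above the wearer's baseline. The 15 bpm threshold and the actuation event name below are illustrative assumptions, not the project's actual design.

```python
def biofeedback_trigger(hr_stream, baseline_hr, delta_bpm=15):
    """Scan a stream of (timestamp, heart_rate) samples and emit a
    vibrotactile actuation event whenever heart rate exceeds the
    wearer's baseline by at least `delta_bpm`."""
    events = []
    for t, hr in hr_stream:
        if hr - baseline_hr >= delta_bpm:
            # Hypothetical actuation: pace the watch's vibrotactor to
            # the current heart rate as biofeedback.
            events.append((t, "vibrate_at_heart_rate", hr))
    return events

stream = [(0, 70), (1, 90), (2, 72)]
print(biofeedback_trigger(stream, baseline_hr=70))
```

The same sense-then-actuate skeleton underlies the airflow and affective-touch systems; only the sensed signal and the actuator change.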
To intervene in workplace stress, the investigators will leverage breathing regulation, develop a fan-based system that alters the perception of airflow around the nose, and evaluate how such airflow entrains slow, guided breathing in the workplace. To intervene in social stress, the investigators will leverage affective touch, develop an arm-worn device that activates affective-touch neurons, and evaluate how affective touch helps regulate social stress. Collectively, this research will enable a new class of mHealth interventions that are responsive to users' health context in real time and can be effective irrespective of users' cognitive capacity or availability.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.