2007 — 2011
Orlitsky, Alon; Rosing, Tajana
Collaborative Research: Design and Analysis of Compressed Sensing DNA Microarrays @ University of California-San Diego
The diverse functions performed by a living cell during its life cycle are controlled and regulated through complex gene- and protein-interaction networks. Any pattern of irregular gene behavior in the network can lead to cell malfunction, cell death, or the emergence of diseases such as cancer. It is therefore crucial to recognize erroneous gene-interaction patterns and compare them to those in healthy cells. For this type of study, one of the most frequently used bioengineering systems is the well-known DNA microarray. DNA microarrays consist of grids of spots containing unique genetic identifiers for each of the tested genes, capable of generating snapshots of gene activity through selective DNA sequence annealing. Microarrays have also found many other applications in molecular biology, most notably for detecting hostile microbial agents in food, water, and air. One of the main drawbacks of current microarray designs is that, for whole-genome studies, they are severely underutilized; similarly, for biosensing applications, existing microarray systems cannot be used to simultaneously identify a large number of microorganisms and their strains due to technological limitations.
The investigators study novel array architectures, termed compressed sensing DNA microarrays. The research involves finding DNA probes that serve as group identifiers for classes of microorganisms; designing sparse sensing matrices for DNA group identifiers; developing compressed sensing reconstruction algorithms capable of handling saturation effects that arise at high agent concentrations; characterizing the fundamental trade-offs between distortion and sensor dimension for non-linear arrays; and analyzing the complexity of integrating compressed sensing microarrays into existing biosensor networks.
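To make the reconstruction step concrete, below is a minimal, hypothetical sketch (Python/NumPy, not the investigators' actual algorithm) of how sparse agent concentrations might be recovered from group-identifier measurements while simply discarding saturated readings; the matrix sizes, saturation level, and the projected-ISTA solver are all illustrative assumptions.

```python
# A minimal, hypothetical sketch: recover sparse agent concentrations x >= 0 from
# group-identifier measurements y = A @ x, ignoring saturated (clipped) readings and
# solving a non-negative LASSO by projected iterative soft-thresholding (ISTA).
import numpy as np

def recover_concentrations(A, y, saturation_level, lam=0.1, iters=500):
    ok = y < saturation_level            # keep only unsaturated measurements
    A_ok, y_ok = A[ok], y[ok]
    step = 1.0 / (np.linalg.norm(A_ok, 2) ** 2 + 1e-12)
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A_ok.T @ (A_ok @ x - y_ok)
        x = np.maximum(x - step * (grad + lam), 0.0)  # non-negative soft threshold
    return x

# Toy example: 40 probe spots sensing 200 candidate agents, 3 of which are present.
rng = np.random.default_rng(0)
A = rng.binomial(1, 0.1, size=(40, 200)).astype(float)   # sparse 0/1 sensing matrix
x_true = np.zeros(200); x_true[[5, 17, 42]] = [2.0, 1.0, 3.0]
y = np.clip(A @ x_true, None, 4.0)                        # saturation at level 4.0
x_hat = recover_concentrations(A, y, saturation_level=4.0)
print(np.argsort(x_hat)[-3:])                             # indices of likely agents
```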
2009 — 2013
Rosing, Tajana
SHF: Small: Reducing the Cost of Computation in CMPs @ University of California-San Diego
"This award is funded under the American Recovery and Reinvestment Act of 2009 (Public Law 111-5)."
The computer processor industry has moved fully into the multi-core era to enable continual scaling of performance, but at the cost of increased energy consumption and increased cooling costs due to higher temperatures and thermal gradients. This proposal describes three major research thrusts that address these costs in multiple ways: (1) New modeling and simulation tools: We will integrate performance, power, temperature, reliability, and cooling estimation, so that designers can analyze the impact of design choices and runtime decisions over significant time spans. (2) Runtime thread scheduling policies: We will identify and demonstrate power and thermal scheduling mechanisms that maintain performance, reduce total energy consumption, reduce or eliminate hot spots, and maximize processor lifetime. The policies will use data from thermal sensors and performance counters to proactively drive management decisions. (3) New cooling strategies: Our goal is to create thermal management and cooling control algorithms that work in tandem to reduce overall energy consumption.
The proposed research forms the basis for discovery and learning in the areas of multi-core processors and, more generally, system design and management. Graduate and undergraduate students will be involved in various parts of the proposed research and will help connect this work with other NSF-sponsored projects. The research results, tools, and coursework materials developed will be freely and easily distributed to the engineering community at large.
2009 — 2013
Griswold, William; Krueger, Ingolf (co-PI); Dasgupta, Sanjoy (co-PI); Rosing, Tajana; Shacham, Hovav (co-PI)
CPS: Medium: CitiSense - Adaptive Services For Community-Driven Behavioral and Environmental Monitoring to Induce Change @ University of California-San Diego
This award is funded under the American Recovery and Reinvestment Act of 2009 (Public Law 111-5).
The objective of this research project is to achieve fundamental advances in software technology that will enable building cyber-physical systems that allow citizens to see the environmental and health impacts of their daily activities through a citizen-driven, body-worn, mobile-phone-based commodity sensing platform. The approach is to create aspect-oriented extensions to a publish-subscribe architecture, called Open Rich Services (ORS), to provide a highly extensible and adaptive infrastructure. As one example, ORS will enable highly adaptive power management that adapts not only to current device conditions but also to the nature of the data, the data's application, and the presence and status of other sensors in the area. In this way, ORS will enable additional research advances in power management, algorithms, security, and privacy during the project. A test-bed called CitiSense will be built, enabling in-the-world user and system studies for evaluating the approach and providing a glimpse of a future enhanced by cyber-physical systems.
The research in this proposal will lead to fundamental advances in modularity techniques for composable adaptive systems, adaptive power management, cryptographic methods for open systems, interaction design for the mobile context, and statistical inference under multiple sources of noise.
The scientific and engineering advances achieved through this proposal will advance our national capability to develop cyber-physical systems operating under decentralized control and severe resource constraints. The students trained under this project will become part of a new generation of researchers and practitioners prepared to advance the state of cyber-physical systems for the coming decades.
2010 — 2015
Rosing, Tajana; Defanti, Thomas; Brown, Maxine
IRNC: ProNet: TransLight/StarLight @ University of California-San Diego
ABSTRACT
TransLight/StarLight Project
Proposal Number: 0962997
PI: Tom DeFanti - UC San Diego
The TransLight/StarLight team will pursue application experiments on international networks in six activity areas:
- GreenLight International - TL/SL will investigate monitoring the energy consumption of selected international-reach experimental networks.
- SCCSnet - TL/SL will support the Science Cloud Communication Services Network by assisting projects that are developing high-performance communication services tailored for computational clouds used by data-intensive scientific applications. These projects are addressing the high-volume, high-performance national and international communication requirements of scientific computational clouds versus general consumer and enterprise clouds, which use the commodity Internet.
- CineGrid - TL/SL will help enable the production, use, and exchange of very-high-quality digital media, including scientific collaborations and visualizations, over photonic networks. UCSD will serve as the lead on CineGrid activities. TL/SL funding will offset personnel time involved in conducting international trials, and partial participant costs for the annual CineGrid conferences in 2010, 2011, and 2012, held at Calit2.
- High-Performance Digital Media Network (HPDMnet) - TL/SL will work with international partners to develop new services based on high-performance optical transport (i.e., optical multicast) of large-scale data streams, including high-resolution digital media streams, as well as new capabilities for high-volume scientific data.
- iGENI (International GENI) - TL/SL will connect existing StarLight resources with current GENI backbone transport resources.
- SAGE - TL/SL will help transition SAGE from a transformative research prototype to a hardened technology that provides production-quality, community-driven open services for visualization and collaboration utilizing shared national and international cyberinfrastructure, for the advancement of scientific research and education. Partner sites are in the US, Australia, Canada, China, the Czech Republic, Japan, Korea, the Netherlands, Russia, Saudi Arabia, and Taiwan.
Intellectual Merit: Contributions from this project have the potential to strengthen activities in network modeling, leading-edge green technology research, simulation, and advanced imaging transmission, enabling the IRNC program to play a formative role in the emerging discipline of network science and enhancing NSF's leading role in sustainable stewardship of cyberinfrastructure.
Broader Impact: The proposed work will help IRNC operators better understand their networks by making more effective use of data they already collect, as well as newer technologies for measurement and visibility of their networks. The data will have broad applicability to other researchers and pragmatic network operations.
2010 — 2017
Zhou, Yuanyuan (co-PI); Rosing, Tajana; Jhala, Ranjit (co-PI); Gupta, Rajesh
Collaborative Research: Variability-Aware Software For Efficient Computing With Nanoscale Devices @ University of California-San Diego
Abstract: The Variability Expedition Project: Variability-Aware Software for Efficient Computing with Nanoscale Devices
As semiconductor manufacturers build ever smaller components, circuits and chips at the nanoscale become less reliable and more expensive to produce; they no longer behave like precisely chiseled machines with tight tolerances. Modern computing is effectively ignorant of the variability in behavior of underlying system components from device to device, their wear-out over time, and the environment in which the computing system is placed. This makes computing systems expensive, fragile, and vulnerable to even the smallest changes in the environment or component failures. We envision a computing world where system components -- led by proactive software -- routinely monitor, predict, and adapt to the variability of manufactured systems. Changing the way software interacts with hardware offers the best hope of perpetuating the past 40 years' fundamental gains in computing performance at lower cost. The Variability Expedition fundamentally rethinks the rigid, deterministic hardware-software interface to propose a new class of computing machines that are not only adaptive but also highly energy efficient. These machines will be able to discover the nature and extent of variation in hardware, develop abstractions to capture these variations, and drive adaptations in the software stack from compilers and runtimes to applications. The resulting computer systems will work, and continue working, while using components that vary in performance or grow less reliable over time and across technology generations. A fluid software-hardware interface will thus mitigate the variability of manufactured systems and make machines robust, reliable, and responsive to changing operating conditions.
The Variability Expedition marshals the resources of researchers at the California Institute for Telecommunications and Information Technology (Calit2) at UC San Diego and UC Irvine, as well as UCLA, University of Michigan, Stanford and University of Illinois at Urbana-Champaign. With expertise in process technology, architecture, and design tools on the hardware side, and in operating systems, compilers and languages on the software side, the team also has the system implementation and applications expertise needed to drive and evaluate the research as well as transition the research accomplishments into practice via application drivers in wireless sensing, software radio and mobile platforms.
A successful Expedition will dramatically change the computing landscape. By re-architecting software to work in a world where monitoring and adaptation are the norm, it will achieve more robust, efficient, and affordable systems that are able to predict and withstand not only hardware failures but also software bugs and even attacks. The new paradigm will apply across the entire spectrum of embedded, mobile, desktop, and server-class computing machines, yielding particular gains in sensor information processing, multimedia rendering, software radios, search, medical imaging, and other important applications. Transforming the relationship between hardware and software presents valuable opportunities to integrate research and education, and this Expedition will build on established collaborations with educator-partners in formal and informal arenas to promote interdisciplinary teaching, training, learning, and research. The team has built strong industrial and community outreach ties to ensure success and will reach out to high-school students through a combination of tutoring and summer school programs. The Variability Expedition will engage undergraduate and graduate students in software, hardware, and systems research, while promoting participation by underrepresented groups at all levels and broadly disseminating results within academia and industry.
2012 — 2015
Rosing, Tajana
SHF: Small: Cooling, Energy and Performance Management in Computing Systems @ University of California-San Diego
In today's computing systems, workload schedulers target best performance but are unaware of the thermal and power realities of the system. Similarly, the cooling subsystem controllers take only data from thermal sensors as their input and are thus oblivious to workload scheduling and power management decisions. Even though these subsystems all share a single computing infrastructure, their operation is optimized separately, resulting in inefficiencies.
This proposal goes well beyond previous work, which optimized thermal problems in CPUs separately from memory while largely neglecting the rest of the system, toward solutions that understand the complex interplay between CPUs, hardware accelerators such as GPUs, memory, and hard disks, together with their related cooling subsystems. The PIs propose to develop joint control policies for such systems and to quantify the respective benefits and disadvantages. The project plans to study and design control policies for various ways of implementing cooling, using both fans and liquid cooling systems (e.g., micro-channels vs. channels in a heat sink with an external pump). The project will also test ideas on computing systems available in a modular data center container obtained at UCSD as part of the recently awarded NSF MRI (GreenLight) grant.
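As a concrete illustration of what a joint policy might look like, the sketch below (Python, with purely hypothetical constants and thermal/power models, not the project's controller) picks a DVFS frequency and a fan level together rather than letting two independent control loops fight each other.

```python
# A minimal, hypothetical sketch of coordinating DVFS and fan speed: each control
# epoch, pick the highest-efficiency (frequency, fan) pair whose predicted
# steady-state temperature stays under a cap. The thermal model
# T = T_ambient + R(fan) * P(freq, util) and all constants are illustrative.
FREQS_GHZ = [1.2, 1.8, 2.4, 3.0]                    # available DVFS states
FAN_LEVELS = {0.3: 0.50, 0.6: 0.35, 1.0: 0.25}      # fan duty -> thermal resistance (C/W)
FAN_POWER_W = {0.3: 2.0, 0.6: 6.0, 1.0: 15.0}       # tabulated fan power draw

def cpu_power_w(freq_ghz, util):
    return 10.0 + 8.0 * util * freq_ghz ** 2         # idle + activity-dependent power

def pick_operating_point(util, t_ambient=25.0, t_cap=80.0):
    best = None
    for f in FREQS_GHZ:
        for duty, r in FAN_LEVELS.items():
            p = cpu_power_w(f, util) + FAN_POWER_W[duty]
            t = t_ambient + r * cpu_power_w(f, util)
            # among thermally feasible points, prefer performance per watt, then lower power
            if t <= t_cap and (best is None or (f / p, -p) > (best[0], -best[1])):
                best = (f / p, p, f, duty)
    return best[2:] if best else (min(FREQS_GHZ), 1.0)  # thermal-emergency fallback

print(pick_operating_point(util=0.9))                # chosen (frequency GHz, fan duty)
```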
Graduate and undergraduate students will be involved in various parts of the proposed research and will help connect this work with other NSF-sponsored projects. The research results, tools, and coursework materials developed will be freely and easily distributed to the engineering community at large. In addition, the PI has created a new program affiliated with the Computer Science and Engineering department at UCSD whose goal is to ensure seamless transfer of ideas, funds, and people between academic and industrial settings.
2012 — 2015
Kahng, Andrew (co-PI); Rosing, Tajana; Mookherjea, Shayan (co-PI); Fainman, Yeshaiahu; Buckwalter, James (co-PI)
MRI: Development of Engineering Testbed: Universal Chip Scale Photonic Testing Instrument (UCPTI) @ University of California-San Diego
The objective of this research is to develop a state-of-the-art photonic chip-scale probing solution for integrated Si-photonics testing and to enable new multidisciplinary collaborative projects in nano-photonics and opto-electronics. The approach exploits a universal electronic-photonic probing station that integrates electrical, optical far-field, and optical near-field probes for electrical and optical interfacing to integrated circuits and to individual elements within such circuits, together with a full set of external optical and electronic instrumentation, to provide an affordable, zero-capital-investment testing capability for research and development by academic, industrial, and government laboratories.
The intellectual merit of this versatile and user-friendly Si-photonics testing instrument includes basic research to identify new phenomena, invent new photonic technology, and create new applications, as well as providing substantial benefit to small businesses, research institutions, and government laboratories in their product development efforts. Moreover, it can serve as a testbed for developing and reducing to practice new approaches for efficiently probing and testing Si-photonic chips, gradually evolving to become industry standards.
The broader impact of the instrument spans multiple fields, including information systems, high-speed electronics and photonics, and future computer science and engineering, creating wealth for the 21st-century economy by advancing the integration of nanoscale photonic, electronic, and biomedical science and technology. It will provide service to industry in Southern California and play a significant role in the education and development of human resources in science and engineering at the graduate and undergraduate levels, helping to train future engineers.
2012 — 2015
Papadopoulos, Philip; Fountain, Tony; Rosing, Tajana; Tilak, Sameer
EAGER: Sensor-Rocks: A Novel Integrated Framework to Improve Software Operations and Management (O&M) and Power Management in Environmental Observing Systems @ University of California-San Diego
This experimental activity focuses on the configuration, maintenance, and adaptive power management of remote sensors and their embedded software used in environmental observing systems. The approach is based on a well-known and popular cluster configuration system out of UCSD called Rocks. Sensor platforms and sensor networks require scalable and reliable solutions to manage their software configurations. This activity sets out to prove or disprove that a solution proven to work for heterogeneous cluster and data center configuration and management (Rocks) can be applied to distributed sensors. Attached to this work is a set of integrated research and development activities on improved power management for sensor platforms and their operational workloads. Broader impact derives from the initial targeted communities in limnology and marine science with associated real-world scenarios, and extends to other environmental observing systems as well as disaster response. Educational content from the work will be developed and inserted into a graduate-level course at UCSD on embedded computing. Intellectual merit is found in the new approach to sensor reproducibility and adaptive power management algorithms.
2013 — 2014
Rosing, Tajana; Tang, Lingjia
CSR: Small: Cross-Layer HW/SW Solutions Towards Energy-Efficient Datacenters @ University of California-San Diego
Facilitating workload consolidation and improving server utilization are critical for reducing the cost and improving the energy efficiency of modern datacenters. One major challenge is improving utilization without affecting quality of service (QoS). On modern servers, co-located jobs may share critical resources, including 1) micro-architectural resources such as last-level cache, memory bandwidth, and functional units, and 2) energy resources, including grid power and distributed batteries that provide an additional power source, especially during high-load periods. Multiple co-located jobs may contend for these shared resources, causing interference, threatening application QoS, or even triggering the circuit breaker and resulting in costly downtime and severe QoS violations. This project addresses these challenges by designing a cross-layer system that effectively manages workload consolidation, quality of service (QoS), and various energy sources to optimize for energy-efficient computing. The system spans several layers, including a profiler, a static compiler, online lightweight monitoring and prediction, runtime execution management, hardware power-state control, and energy-source control. The compilation technique identifies and inserts markers in contentious code regions of low-priority applications, as well as in critical regions of high-priority applications that require QoS protection. The lightweight runtime utilizes the compiler hints; monitors QoS, power consumption, and related metrics; adaptively adjusts the pressure applications place on shared resources such as shared cache and memory bandwidth; and manages battery (dis)charges and hardware power states to guarantee QoS and achieve efficient power shaving.
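Below is a minimal, hypothetical sketch (Python, not the project's runtime) of the feedback loop the abstract describes: when the high-priority job's measured QoS degrades, the runtime raises the throttle applied at compiler-inserted markers in low-priority jobs, and backs off when there is headroom. The thresholds, the simulated latency reading, and the sleep-based throttle mechanism are all illustrative assumptions.

```python
# Hypothetical cross-layer QoS feedback loop: compiler-inserted markers in low-priority
# code consult a throttle knob that a control loop adjusts from measured QoS.
import time, random

QOS_TARGET_MS = 5.0           # latency target for the high-priority service
throttle = 0.0                # fraction of time low-priority code pauses in marked regions

def measured_latency_ms():
    # stand-in for reading a latency histogram; contention shrinks as throttle grows
    return random.gauss(4.0 + 3.0 * (1.0 - throttle), 0.3)

def contentious_region_marker():
    """Hook the compiler would insert at the top of a contentious low-priority region."""
    if throttle > 0.0:
        time.sleep(0.001 * throttle)   # simplest possible pressure reduction

def control_epoch():
    global throttle
    lat = measured_latency_ms()
    if lat > QOS_TARGET_MS:            # QoS violated: push contention down
        throttle = min(1.0, throttle + 0.1)
    else:                              # headroom: recover low-priority throughput
        throttle = max(0.0, throttle - 0.05)

for _ in range(20):
    control_epoch()
print("steady-state throttle:", round(throttle, 2))
```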
2013 — 2017
Huang, Jeannie; Gamst, Anthony; Rosing, Tajana; Patrick, Kevin (co-PI); Tilak, Sameer (co-PI)
SCH: EXP: SenseHealth: A Platform to Enable Personalized Healthcare Through Context-Aware Sensing and Predictive Modeling Using Sensor Streams and Electronic Medical Record Data @ University of California-San Diego
Current healthcare diagnostics and assessment systems are limited by health data that are sporadic, periodic, and incomplete. Wireless devices and health sensor technologies are increasingly used for continuous monitoring and assessment of key physiologic, psychological, and environmental variables and can reduce the current gaps in health data. Uptake of such data by current health systems has been slow because of the reliance upon the physician/healthcare team to interpret and manage incoming data. Nevertheless, the large streams of data generated by these devices, in conjunction with traditional clinical data (Electronic Medical Records), have the potential to provide real and important insights into patient health and behavior. To address this gap, this proposal will develop SenseHealth -- a novel software platform that will automatically process and incorporate volumes of real-time data from sensors tailored to the individual in the context of personal electronic medical records and available environmental data. Such data will be integrated into the clinical care workflow to enable system usability, feasibility, and ultimately utility. A core component of the cyberinfrastructure is a collection of quantitative, predictive models that are sensitive to concerns across ages, diseases, and health states, to a variety of patient situations (ranging from low priority with no consequence for patient management to high priority requiring emergency evaluation), and to sensor failures. The models will be integrated with a distributed real-time stream data processing system and a complex event stream processing engine to process sensor data in a scalable and fault-tolerant manner. Research at Rady Children's Hospital of San Diego, an affiliate of UCSD, will be leveraged to develop these models. In each of the following studies, clinically relevant events (i.e., events that require clinical intervention) will be identified, and disease-specific models will be developed to predict clinical relevance or the need for intervention. Incoming data and the resulting clinical management activity from studies using various types of health sensors will be evaluated in two patient populations: (1) the MyGlucoHealth application, evaluating the use of a Bluetooth-enabled glucometer (for blood sugar measurements) in 40 youths with Type 1 diabetes, and (2) the Asthma Tracking application, evaluating the ability of a metered dose inhaler (MDI) tracking device to track asthma medication use in 50 mild-to-moderate asthma subjects over a period of 6 months. The models will then be evaluated using multiple sensor streams in youth with diabetes (the Diabetes Management Integrated Technology Research Initiative (DMITRI) study) and in a prospective study in youth with asthma to determine their validity, efficacy, and utility in identifying patient scenarios of concern.
The SenseHealth system architecture will consist of four major components: (1) health and environmental sensors linked with (2) smartphone applications that communicate with (3) a back-end data center comprised of data storage and clusters performing real-time analytics and data visualization, which will then provide a comprehensive health picture to users/clients via (4) tailored, programmed user/client applications. For these continuous sensing applications, managing sensors and smartphones in an energy-efficient manner is critical. SenseHealth will include a novel context-aware power management framework that uses both the application-level context (e.g., sensor data) and the dynamic environmental or system-level context (e.g., battery level, next phone-charging opportunity prediction, or bandwidth availability) to adaptively control the state of hardware components and deliver consistent performance (e.g., data accuracy, latency). In particular, data sampling protocols will be energy-aware and will be designed to sample data accurately but only as necessary to provide relevant clinical information. SenseHealth will use Storm, an open-source distributed real-time computation system, to process the data in a scalable and fault-tolerant manner. The aforementioned predictive models will be implemented in ESPER, an open-source complex event processing (CEP) engine. The models will use ESPER's rich Event Processing Language (EPL) to express filtering, aggregation, and joins, possibly over sliding windows of multiple event streams, and pattern semantics to express complex temporal causality among events and trigger custom actions when event conditions occur among event streams. Finally, SenseHealth will fuse sensor and clinical data in a visual format that will increase interpretability and comprehension independent of literacy levels and will provide feedback and ultimately intervention support that is timely and relevant to the user (patient and clinician) based on comprehensive knowledge of the data. Open-source software visualization tools developed at Calit2 that leverage advances in scaled display wall technology will serve as the foundation for the data visualization component. The NSF-funded DELPHI project will provide the data center component to store health sensor data and provide access to SenseHealth algorithm-processed data and visualization protocols.
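To illustrate the kind of sliding-window rule such predictive models would express, here is a minimal, hypothetical sketch in Python (not the project's ESPER/EPL code): flag a clinically relevant event when several readings in a 30-minute window fall outside a patient-specific range. The window length, thresholds, and event format are illustrative assumptions.

```python
# Hypothetical sliding-window event detection over a sensor stream.
from collections import deque
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=30)
LOW, HIGH = 70.0, 250.0          # example glucose bounds, mg/dL
MIN_OUT_OF_RANGE = 3             # how many abnormal readings trigger an alert

readings = deque()               # (timestamp, value) pairs within the window

def on_reading(ts: datetime, value: float):
    readings.append((ts, value))
    while readings and ts - readings[0][0] > WINDOW:   # evict expired readings
        readings.popleft()
    abnormal = [v for _, v in readings if v < LOW or v > HIGH]
    if len(abnormal) >= MIN_OUT_OF_RANGE:
        return {"event": "glucose_alert", "time": ts.isoformat(), "samples": abnormal}
    return None

# Toy stream: the third out-of-range reading inside the window raises the alert.
t0 = datetime(2014, 1, 1, 8, 0)
for minutes, value in [(0, 95), (5, 260), (12, 280), (20, 300), (40, 110)]:
    event = on_reading(t0 + timedelta(minutes=minutes), value)
    if event:
        print(event)
```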
The research itself will have a direct impact on two patient communities, but the broader impacts of the proposed research will extend well beyond them. The proposed open software platform will be built with the flexibility to allow for alternative programming with plug-and-play data processing algorithms as required for various sensors, data sources, and clinical scenarios. The results from the proposed development activities and prototyping experiments will be of tremendous value to medical professionals, scientists, and engineers who are engaged in planning and developing sensor-based systems for continuous health monitoring. The developed software products will be publicly available as open source under the Apache license. The tools developed from this proposal will be designed to be extensible so that other sensors as well as models can easily be integrated, impacting a broader range of healthcare applications. SenseHealth is an essential step toward providing a real-time, 360-degree snapshot of health to optimize patient-centered, evidence-based decisions and to empower patients to participate in their own healthcare. The project team will contribute to training a diverse next generation of scientists by involving undergraduate students in the development process, both for computer science techniques and for medical science research. An exciting aspect of this proposed work is that wellness is a very tangible and important factor even at a young age. The education program will be structured to excite students, particularly those from traditionally underrepresented groups such as minorities and females, about multi-disciplinary research. Through UCSD's COSMOS program, simple, fun, and hands-on experiences will be designed to allow these students to understand the importance of self-health assessment and disease management at an early age. The team is heavily involved in Graduate Medical Education at UCSD and will promote the use of SenseHealth to integrate health data into current health systems in fellowship training activities. This proposal also funds one graduate student.
2013 — 2017
Schulze, Jurgen (co-PI); Rosing, Tajana; Kuester, Falko; Defanti, Thomas
MRI: Development of Advanced Visualization Instrumentation For the Collaborative Exploration of Big Data @ University of California-San Diego
Proposal #: 13-38192
PI(s): Kuester, Falko; DeFanti, Thomas A.; Rosing, Tajana S.; Schulze, Jurgen P.
Institution: University of California - San Diego
Title: MRI/Dev.: Advanced Visualization Instrumentation for the Collaborative Exploration of Big Data
Project Proposed: This project, developing a Scalable Omni-Presence Environment (ScOPE), a next-generation visualization system for collaborative exploration of large volumes of data, provides an environment for analyzing, processing, and visualizing Big Data from many different areas of science and engineering. The instrument serves as an integrative, virtual metaphor for a combined microscope and telescope, enabling users to explore data from the nano to the macro to the mega scale. ScOPE provides researchers the ability to study simulated and acquired data at a level of precision previously unmatched. It is expected to become the platform for training a new generation of users to be fluent in data analytics in collaborative environments. Initially, three universities will have direct access to the ScOPE instrument and all its features: UC San Diego (UCSD), Jackson State University (JSU), and the University of Texas Medical Branch (UTMB). Nonetheless, following the tradition of the project team (as effectively done with earlier generations of visualization technologies, e.g., OptIPortal tiled display walls now installed at more than 100 institutions worldwide), the critical components of the infrastructure will be broken out such that they may be replicated for use at remote locations by other research or educational institutions. The developers anticipate that private-sector collaborators, such as Qualcomm and Intel, will help popularize use of specific components for the nation's big-data analytics infrastructure. Notwithstanding, the broadest impact of the instrument should be evident in the discoveries and advances made by engineers and scientists who use ScOPE to enhance collaboration and analysis in the disciplines that have been singled out as "Domain Drivers" for the project. These include projects led by researchers in ocean sciences (and ocean observatories); cyber-archaeology and cultural heritage diagnostics; real-time brain imaging; digital cinema and very-high-quality digital media; integrative computational biology; underwater microscopy; molecular dynamics; structural biology and computational chemistry; and large-scale numerical simulation. In turn, these domain specialists will work alongside computer scientists who will address grand challenges in system architecture, data transport, security, representation, archiving, multi-modal analytics processing, and human-computer interaction. ScOPE's long-distance collaboration will be supported by telepresence at bandwidths ranging up to 40 gigabits per second. Thus, the project creates a highly interactive collaboration space equipped with a natural human-computer interface and advanced 3D modeling and rendering at a sufficient scale to tackle complex experiments and analyze large amounts of visual and numerical data pertaining to phenomena of wide dimensions and extreme time scales. Domain drivers have been identified to ensure that the resulting environment and tools are applicable to a broad array of scientific disciplines, including earth system sciences, civil and structural engineering, mechanical and aerospace engineering, biomedical and electrical engineering (and ocean observatories), social sciences, and anthropology.
This project takes a great leap forward into a new generation of collaborative environments that until recently was unthinkable. The display capabilities will no longer be passive; envisioned is continuous imaging of the spatial workspace, including eye, skin-response, and even mobile electroencephalography sensing, allowing ScOPE to respond to and infer user intent. The environment will be designed specifically to handle "big data," using a failure-tolerant and cloud-centric approach while also downsizing the supercomputer flash memory architecture. The instrument will enable scientific discoveries as well as research on how best to process, analyze, and visualize Big Data; ScOPE will serve as a prototype for other similar instruments. The research enabled by ScOPE will have impacts in many areas of science. Broader Impacts: The instrument will have a direct impact on three universities, while the technology developed in building the instrument will inform the construction of similar instruments around the nation. To promote greater public appreciation of scientific research, the public will be invited to tour the visualization facilities, hopefully encouraging young people to enter careers in science and engineering. The area of ScOPE accessible to the public will have a significant impact on the public's impression of academic research. ScOPE's capabilities are likely to transform our ability to collaborate with distributed research teams and will be directly applied to day-to-day research.
2015 — 2018
Rosing, Tajana
CSR: Small: ENACT: Environment-Aware Management of Mobile Systems @ University of California-San Diego
Mobile systems are pervasive and have been adapted for diverse computing needs throughout the world. Smart mobile systems, with advanced computing and connectivity, promise a superlative experience, yet battery constraints limit their use. Device temperatures are also a major concern, both because high temperatures reduce hardware reliability and because human skin can tolerate only moderately warm phones. Mobile system usage is highly influenced by user context, which can therefore provide crucial input for proactive, rather than reactive, device power, thermal, and reliability management. Even with a small amount of contextual information, such as user location and motion, battery lifetime can be improved by as much as a factor of five. Yet, despite the many sensors already available in mobile systems, there has been no simple and flexible way to incorporate context into mobile system resource management. To enable mobile systems to become more context-aware, this project is designing ENACT, an ENvironment-Aware ConTrol framework that leverages overarching sensor data for system-wide context-aware management.
ENACT is a lightweight framework that enables a mobile system to tap into a vast array of sensors and leverage comprehensive context about hardware, software, and user to control system-wide actuators. The framework has two main components. A context recognition service leverages well-established statistical techniques to robustly derive semantic information from raw sensor data; then, by observing system behavior with respect to context and actuators, it computes stochastic models. A context-aware control service, based on stochastic model predictive control, smartly sets the system's actuators to minimize energy consumption while meeting reliability, temperature, and performance constraints. To efficiently meet these requirements, the control framework is supported by a hierarchical structure working at different time scales. ENACT can efficiently transform a large set of raw sensor data into usable contextual information accessible system-wide, and leverage that context to configure a set of actuators to improve the way mobile system resources are managed while providing an excellent user experience. This allows system goals to be accurately aligned with performance expectations. The project includes the implementation of novel context-aware techniques for energy, thermal, and reliability management, which can smartly adapt to dynamic performance requirements without sacrificing energy margins. Students involved in this project gain valuable training in disciplines spanning machine learning, hierarchical control, optimization, and resource management. The ENACT framework can be leveraged in academia and industry alike to implement context-awareness in systems and to design resource management policies that leverage context.
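As a small illustration of context-aware actuator control, the sketch below (Python, not ENACT itself) chooses an actuator setting that minimizes expected energy under a predicted distribution over user contexts while keeping a performance score above a floor; the contexts, candidate settings, and all numeric models are illustrative assumptions.

```python
# Hypothetical context-aware actuator selection under a predicted context distribution.

# P(context) from a context-recognition service (e.g., derived from location/motion)
context_probs = {"commuting": 0.6, "office_charging": 0.3, "outdoors": 0.1}

# Candidate actuator settings: (cpu_freq_scale, sync_interval_min)
settings = [(0.6, 30), (0.8, 15), (1.0, 5)]

def energy_mw(setting, context):
    freq, sync = setting
    radio = {"commuting": 120, "office_charging": 40, "outdoors": 200}[context]
    return 300 * freq ** 2 + radio * (60 / sync)        # toy CPU + radio energy model

def performance(setting):
    freq, sync = setting
    return 0.7 * freq + 0.3 * (5.0 / sync)               # toy responsiveness score

def choose_setting(perf_floor=0.5):
    feasible = [s for s in settings if performance(s) >= perf_floor]
    return min(feasible or settings,
               key=lambda s: sum(p * energy_mw(s, c) for c, p in context_probs.items()))

print(choose_setting())    # setting with lowest expected energy that meets the floor
```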
2015 — 2018
Rosing, Tajana; Patrick, Kevin (co-PI); Dasgupta, Sanjoy (co-PI); Griswold, William
CPS: TTP Option: Synergy: Collaborative Research: Calibration of Personal Air Quality Sensors in the Field - Coping With Noise and Extending Capabilities @ University of California-San Diego
All cyber-physical systems (CPS) depend on properly calibrated sensors to sense the surrounding environment. Unfortunately, the current state of the art is that calibration is often a manual and expensive operation; moreover, many types of sensors, especially economical ones, must be recalibrated often. This is typically costly and performed in a lab environment, requiring that sensors be removed from service. MetaSense will reduce the cost and management burden of calibrating sensors. The basic idea is that if two sensors are co-located, then they should report similar values; if they do not, the least-recently-calibrated sensor is suspect. Building on this idea, this project will provide an autonomous system and a set of algorithms that automate the detection of calibration issues and perform recalibration of sensors in the field, removing the need to take sensors offline and send them to a laboratory for calibration. The outcome of this project will transform the way sensors are engineered and deployed, increasing the scale of sensor network deployments. This in turn will increase the availability of environmental data for research, medical, personal, and business use. MetaSense researchers will leverage this new data to provide early warning for factors that could negatively affect health. In addition, graduate student engagement in the research will help to maintain the STEM pipeline.
This project will leverage large networks of mobile sensors connected to the cloud. The cloud will enable using large data repositories and computational power to cross-reference data from different sensors and detect loss of calibration. The theory of calibration will go beyond classical models for computation and physics of CPS. The project will combine big data, machine learning, and analysis of the physics of sensors to calculate two factors that will be used in the calibration. First, MetaSense researchers will identify measurement transformations that, applied in software after the data collection, will generate calibrated results. Second, the researchers will compute the input for an on-board signal-conditioning circuit that will enable improving the sensitivity of the physical measurement. The project will contribute research results in multiple disciplines. In the field of software engineering, the project will contribute a new theory of service reconfiguration that will support new architecture and workflow languages. New technologies are needed because the recalibration will happen when the machine learning algorithms discover calibration errors, after the data has already been collected and processed. These technologies will support modifying not only the raw data in the database by applying new calibration corrections, but also the results of calculations that used the data. In the field of machine learning, the project will provide new algorithms for dealing with spatiotemporal maps of noisy sensor readings. In particular, the algorithms will work with Gaussian processes and the results of the research will provide more meaningful confidence intervals for these processes, substantially increasing the effectiveness of MetaSense models compared to the current state of the art. In the field of pervasive computing, the project will build on the existing techniques for context-aware sensing to increase the amount of information available to the machine learning algorithms for inferring calibration parameters. Adding information about the sensing context is paramount to achieve correct calibration results. For example, a sensor that measures air pollution inside a car on a highway will get very different readings if the car window is open or closed. Finally, the project will contribute innovations in sensor calibration hardware. Here, the project will contribute innovative signal-conditioning circuits that will interact with the cloud system and receive remote calibration parameters identified by the machine learning algorithms. This will be a substantial advance over current circuits based on simple feedback loops because it will have to account for the cloud and machine learning algorithms in the loop and will have to perform this more complex calibration with power and bandwidth constraints. Inclusion of graduate students in the research helps to maintain the STEM pipeline.
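The core co-location idea can be made concrete with a minimal, hypothetical sketch (Python/NumPy, not MetaSense's algorithm): when a drifting low-cost sensor is co-located with a recently calibrated reference, fit a software correction that maps raw readings back onto the reference. A simple gain/offset linear model, the drift detection threshold, and the toy data are assumptions for illustration.

```python
# Hypothetical in-the-field recalibration from a co-location episode.
import numpy as np

def fit_correction(raw, reference):
    """Least-squares gain and offset so that gain*raw + offset ~= reference."""
    A = np.column_stack([raw, np.ones_like(raw)])
    (gain, offset), *_ = np.linalg.lstsq(A, reference, rcond=None)
    return gain, offset

def needs_recalibration(raw, reference, tolerance=5.0):
    """Flag drift when co-located readings disagree beyond the tolerance."""
    return np.mean(np.abs(raw - reference)) > tolerance

# Toy co-location episode: the cheap sensor reads 10% high with a +8 ppb offset.
rng = np.random.default_rng(1)
truth = rng.uniform(10, 60, size=200)                # e.g., ozone concentration, ppb
reference = truth + rng.normal(0, 0.5, size=200)     # trusted, recently calibrated sensor
raw = 1.1 * truth + 8.0 + rng.normal(0, 1.0, size=200)

if needs_recalibration(raw, reference):
    gain, offset = fit_correction(raw, reference)
    corrected = gain * raw + offset
    print(f"gain={gain:.2f} offset={offset:.2f} "
          f"residual={np.mean(np.abs(corrected - reference)):.2f} ppb")
```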
2017 — 2020
Smarr, Larry; Rosing, Tajana; Altintas, Ilkay; Defanti, Thomas; Kreutz-Delgado, Kenneth (co-PI)
CI-New: Cognitive Hardware and Software Ecosystem Community Infrastructure (CHASE-CI) @ University of California-San Diego
This project, called the Cognitive Hardware And Software Ecosystem Community Infrastructure (CHASE-CI), will build a cloud of hundreds of affordable Graphics Processing Units (GPUs), networked together with a variety of neural network machines to facilitate development of next generation cognitive computing. This cloud will be accessible by 30 researchers assembled from 10 universities via the NSF-funded Pacific Research Platform. These researchers will investigate a range of problems from image and video recognition, computer vision, contextual robotics to cognitive neurosciences using the cloud to be purpose-built in this project.
Training neural networks with large datasets is best performed on GPUs. Lack of affordable GPUs and lack of easy access to the new generation of non-von Neumann (NvN) machines with embedded neural networks impede research in cognitive computing. The purpose-built cloud will be available over the network to address this bottleneck. The PIs will study various deep neural network, recurrent neural network, and reinforcement learning algorithms on this platform.
2018 — 2021
Gao, Sicun; Rosing, Tajana; Atanasov, Nikolay
NRI: FND: Collab: Distributed Bayesian Learning and Safe Control For Autonomous Wildfire Detection @ University of California-San Diego
Wildfires destroy millions of hectares of forest, sensitive ecological systems, and human infrastructure. A critical aspect of mitigating wildfire-related damage is early fire detection, well before incipient fires grow to disastrous proportions. Current practices are based on expensive assets, such as satellites, watchtowers, and remotely piloted aircraft, that require constant human supervision, limiting their use to high-risk or high-value areas. This project aims to take advantage of the hyperconvergence of computation, storage, sensing, and communication in small unmanned aerial vehicles (UAVs) to realize large-scale mapping of environmental factors, such as temperature, vegetation, pressure, and chemical concentration, that contribute to fire initiation. Teams of UAVs that recharge autonomously and communicate intermittently among each other and with static sensors are a compelling research objective that will aid firefighters with continuous real-time surveillance and early detection of ensuing fires.
This proposal offers three fundamental innovations to address the scientific challenges associated with autonomous, collaborative environmental monitoring. First, a new Satisfiability Modulo Optimal Control framework is proposed to handle mixed continuous flight dynamics and discrete constraints and to ensure collision avoidance, persistent communication, and autonomous recharging for UAV navigation. Second, a distributed systems architecture using new uncertainty-weighted models will be developed to enable cooperative mapping across a heterogeneous team of UAVs and static sensors and to avoid bandwidth-intensive data streaming. Lastly, a new Bayesian learning and inference approach is proposed to generate multi-modal (e.g., thermal, semantic, geometric, chemical) maps of real-time environmental conditions with adaptive accuracy and uncertainty quantification. This project, with its focus on multi-robot teams, also benefits, e.g., conservation management and search-and-rescue operations. Both applications demand robot coordination, cooperation, and autonomy, including multi-modal mapping, collaborative inference over heterogeneous networks, and multi-objective navigation with safety, communication, and energy constraints.
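For a sense of what Bayesian mapping with uncertainty quantification can look like in its simplest form, here is a minimal, hypothetical sketch (Python/NumPy, not the project's method): each grid cell keeps a Gaussian belief over temperature, and every UAV or static-sensor reading updates its cell via the standard conjugate Gaussian update. The grid size, priors, noise levels, and alert thresholds are all illustrative assumptions.

```python
# Hypothetical per-cell Gaussian fusion of environmental readings from multiple sensors.
import numpy as np

GRID = (20, 20)                 # 20 x 20 cells over the monitored area
mean = np.full(GRID, 25.0)      # prior mean temperature (deg C) per cell
var = np.full(GRID, 100.0)      # prior variance: very uncertain before any flights
OBS_VAR = 4.0                   # sensor noise variance (deg C^2)

def update_cell(i, j, measurement):
    """Fuse one noisy reading into cell (i, j): posterior of Gaussian x Gaussian."""
    k = var[i, j] / (var[i, j] + OBS_VAR)            # Kalman-style gain
    mean[i, j] += k * (measurement - mean[i, j])
    var[i, j] *= (1.0 - k)

def hot_cells(threshold=60.0, max_std=3.0):
    """Cells believed hot with low remaining uncertainty (candidate fire alerts)."""
    return np.argwhere((mean > threshold) & (np.sqrt(var) < max_std))

# Toy mission: two UAVs repeatedly observe cell (7, 3), which sits over a hot spot.
rng = np.random.default_rng(2)
for _ in range(6):
    update_cell(7, 3, rng.normal(70.0, 2.0))
print(hot_cells())              # -> [[7 3]] once uncertainty has shrunk enough
```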
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
2018 — 2021
Wuerthwein, Frank (co-PI); Smarr, Larry; Rosing, Tajana; Altintas, Ilkay; Papadopoulos, Philip
CC* NPEO: Toward the National Research Platform @ University of California-San Diego
Academic researchers need a simple data sharing architecture with end-to-end 10-to-100 Gbps performance to enable virtual co-location of large amounts of data with computing. End-to-end is a difficult problem to solve in general because the networks between the ends (campuses, data repositories, etc.) typically traverse multiple network management domains: campus, regional, and national. No one organization owns the responsibility for providing scientists with high-bandwidth disk-to-disk performance. Toward the National Research Platform (TNRP) addresses issues critical to scaling end-to-end data sharing. TNRP will instrument a large federation of heterogeneous "national-regional-state" networks (NRSNs) to greatly improve end-to-end network performance across the nation.
The goal of improving end-to-end network performance across the nation requires active participation of these distributed intermediate-level entities to reach out to their campuses. They are trusted conveners of their member institutions, contributing effectively to the "people networking" that is as necessary to the development of a full National Research Platform as is the stability, deployment, and performance of technology. TNRP's collaborating NRSNs structure leads to engagement of a large set of science applications, identified by the participating NRSNs and the Open Science Grid.
TNRP is highly instrumented to directly measure performance. Visualizations of disk-to-disk performance with passive and active network monitoring show intra- and inter-NRSN end-to-end performance. Internet2, critical for interconnecting regional networks, will provide an instrumented dedicated virtual network instance for the interconnection of TNRP's NRSNs. Cybersecurity is a continuing concern; evaluations of advanced containerized orchestration, hardware crypto engines, and novel IPv6 strategies are part of the TNRP plan.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
2019 — 2022
Rosing, Tajana
FET: Small: Brain-Inspired Hyperdimensional Computing For IoT Applications @ University of California-San Diego
In today's world, technological advances continually create more data than we can cope with. Much of data processing will need to run at least partly on devices at the edge of the Internet, such as sensors and smartphones. However, running existing machine learning algorithms on these devices would drain their batteries and also be too slow. Hyperdimensional (HD) computing is a class of learning algorithms motivated by the observation that the human brain operates on a lot of simple data in parallel. In contrast to today's deep neural networks and other similar algorithms, systems that use HD computing to learn will be able to run at least a thousand times more efficiently, can be implemented directly in non-volatile memory, and are natively more secure as they use a large number of bits (~10,000) to encode and process data in parallel. Most importantly, such systems can explain how they made decisions, resulting in sensors and phones that can learn directly from the data they obtain without the need for the cloud, with minimal impact on their battery lifetime. This project will develop HD computing software and hardware infrastructure so that engineers can easily provide HD computing capabilities in their products and thus benefit from their speed and energy efficiency. The project will support underrepresented minority students, including through K-12 outreach activities, and will disseminate its outcomes and code through open-source efforts.
The project seeks to develop: i) novel algorithms supporting key cognitive computations in high-dimensional space, including classification, clustering, and regression; and ii) novel systems for efficient HD computing on sensors and mobile devices, covering hardware accelerators such as GPUs, FPGAs, and processing-in-memory (PIM), along with the software infrastructure to support them. Prototypes will be built and tested in smart homes and in a large-scale sensor network called HPWREN, used for many applications including firefighting and covering 20,000 square miles in the San Diego area. These demonstrations will show both the quality of the proposed HD algorithms and the efficiency of the system designs in addressing real-world learning problems.
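To ground the idea, below is a minimal, hypothetical sketch (Python/NumPy, not the project's implementation) of HD-computing classification: features are projected into a roughly 10,000-dimensional bipolar hypervector, class prototypes are formed by bundling (summing) training encodings, and queries are labeled by cosine similarity to the prototypes. The random-projection encoder, dimensionality, and toy data are illustrative assumptions.

```python
# Hypothetical hyperdimensional (HD) classification with random bipolar projection.
import numpy as np

D = 10_000                                    # hypervector dimensionality
rng = np.random.default_rng(0)

def make_encoder(num_features):
    proj = rng.choice([-1.0, 1.0], size=(num_features, D))   # random bipolar projection
    return lambda x: np.sign(x @ proj) + (x @ proj == 0)      # bipolar encoding, no zeros

def train(encode, X, y):
    prototypes = {}
    for cls in np.unique(y):
        prototypes[cls] = encode(X[y == cls]).sum(axis=0)     # bundle class examples
    return prototypes

def classify(encode, prototypes, x):
    h = encode(x[None, :])[0]
    sims = {c: np.dot(h, p) / (np.linalg.norm(h) * np.linalg.norm(p) + 1e-12)
            for c, p in prototypes.items()}
    return max(sims, key=sims.get)

# Toy problem: two Gaussian blobs in 16-dimensional feature space.
X = np.vstack([rng.normal(0.0, 1.0, (100, 16)), rng.normal(3.0, 1.0, (100, 16))])
y = np.array([0] * 100 + [1] * 100)
encode = make_encoder(16)
protos = train(encode, X, y)
print(classify(encode, protos, rng.normal(3.0, 1.0, 16)))     # expected: 1
```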
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
2020 — 2023
Rosing, Tajana
Collaborative Research: MLWiNS: Hyperdimensional Computing For Scalable IoT Intelligence Beyond the Edge @ University of California-San Diego
The Internet of Things (IoT) generates large amounts of data that machine learning algorithms today process in the cloud. The heterogeneity of data types and devices, along with the limited computing and communication capabilities of IoT devices, poses a significant challenge to real-time training and learning with classical machine learning algorithms. This project instead proposes to use hyperdimensional (HD) computing for distributed machine learning. HD computing is a brain-inspired machine learning paradigm that transforms data into knowledge at very low cost while being extremely robust to errors. When completed, this project has the potential to change the way machine learning is done today: instead of depending on the cloud, IoT systems will be able to make quality decisions on the spot, in real time, regardless of connectivity, and with long battery lifetime. This will be made possible by designing: i) novel HD encoding schemes to represent the various data in IoT applications, including numerical feature vectors, time-series data, and images; ii) a novel distributed learning framework for IoT networks that incorporates active learning to considerably reduce communication overhead and learning costs; and iii) a reliable learning solution based on the error-tolerant characteristics of HD computing. The ideas developed in this project will be tested at both UCSD and SDSU using a fully instrumented testbed for human activity recognition.
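One reason HD computing composes well with distributed IoT learning is that class prototypes are simply sums of encoded examples, so devices can train locally and an aggregator can merge models by adding prototypes, exchanging one vector per class instead of raw data. The sketch below (Python/NumPy) is a hypothetical illustration of that property, not the project's framework; the dimensions, shared encoder, and two-device split are assumptions.

```python
# Hypothetical distributed HD learning: local prototype training plus additive merging.
import numpy as np

D, F = 10_000, 8
rng = np.random.default_rng(3)
PROJ = rng.choice([-1.0, 1.0], size=(F, D))            # shared random projection

def encode(X):
    s = X @ PROJ
    return np.sign(s) + (s == 0)

def local_train(X, y, classes=(0, 1)):
    return {c: encode(X[y == c]).sum(axis=0) if np.any(y == c) else np.zeros(D)
            for c in classes}

def merge(models):
    return {c: sum(m[c] for m in models) for c in models[0]}

def predict(model, x):
    h = encode(x[None, :])[0]
    return max(model, key=lambda c: np.dot(h, model[c]) /
               (np.linalg.norm(model[c]) + 1e-12))

# Two devices each see only part of the data; the merged model covers both classes.
Xa, ya = rng.normal(0, 1, (80, F)), np.zeros(80, dtype=int)     # device A: class 0 only
Xb, yb = rng.normal(4, 1, (80, F)), np.ones(80, dtype=int)      # device B: class 1 only
global_model = merge([local_train(Xa, ya), local_train(Xb, yb)])
print(predict(global_model, rng.normal(4, 1, F)))               # expected: 1
```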
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
2020 — 2021
Rosing, Tajana; Moshiri, Alexander
RAPID: Real-Time Phylogenetic Inference and Transmission Cluster Analysis of COVID-19 @ University of California-San Diego
As the COVID-19 pandemic spreads rapidly around the world, public health officials need to be able to answer questions such as "How is COVID-19 spreading through the population?" and "How many individual outbreaks exist within a given community?". With increasing access to sequencing technologies, scientists can analyze the genome sequences of collected SARS-CoV-2 viral samples to gain information that aids the development of vaccines and drugs, as well as to infer the most likely evolutionary history of the virus, which can help epidemiologists track its spread across populations. The evolutionary history of the virus is only epidemiologically useful if it can be updated in real time, but as the sheer volume of available data rapidly grows, scientists will require scalable computational tools to conduct these analyses. The goal of this project is to develop novel algorithms, software tools, and hardware systems that will scale to the massive amounts of data being generated in this pandemic, which will in turn aid phylogenomic analysis of the virus, effective tracking of its spread, and the development of novel vaccines and drugs. As a broader impact, this project will help with the replicability and reproducibility of genetic and epidemiological research results. Furthermore, the existence of such a system will aid in fighting future viral outbreaks. This project provides professional development opportunities for an early-career scientist.
The standard viral phylogenetic inference workflow consists of quality checking and filtering, multiple sequence alignment, phylogenetic inference, phylogenetic rooting, phylogenetic dating, and transmission clustering. The researchers have identified the computational bottlenecks of the workflow as multiple sequence alignment and phylogenetic inference, which scale poorly with the number of input sequences. The objective of this project is the development of a user-friendly, scalable, and modular workflow for conducting real-time computational phylogenetic analysis of assembled viral genomes, with a primary focus on SARS-CoV-2. The project solution includes: (1) the development of a novel software tool for orchestrating the automated end-to-end workflow, (2) the development of novel algorithms (and software implementations of these algorithms) to speed up the computational bottlenecks of the workflow, (3) the development of novel hardware systems for accelerating the workflow, and (4) a real-time, publicly accessible repository in which researchers can access the most up-to-date analysis results (with intermediate files) of all SARS-CoV-2 genomes currently available, preventing repeated computation efforts. The analysis infrastructure built in this project will be broadly applicable to any viral pathogen for which phylogenetic inference is biologically and epidemiologically meaningful.
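As a rough illustration of how such a modular workflow could be orchestrated end to end, the sketch below chains the stages named above in order; the stage functions are placeholders standing in for real tools, and none of the names or file suffixes reflect the project's actual software.

    # Minimal sketch of an end-to-end viral phylogenetics workflow orchestrator.
    # Stage names mirror the pipeline described above; the function bodies are
    # placeholders, not the project's actual tools or command-line invocations.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Stage:
        name: str
        run: Callable[[str], str]  # takes an input path, returns an output path

    def quality_filter(path): return path + ".filtered.fasta"        # placeholder
    def align(path): return path + ".aln.fasta"                       # placeholder
    def infer_tree(path): return path + ".treefile"                   # placeholder
    def root_tree(path): return path + ".rooted.treefile"             # placeholder
    def date_tree(path): return path + ".dated.treefile"              # placeholder
    def cluster_transmissions(path): return path + ".clusters.tsv"    # placeholder

    PIPELINE = [
        Stage("quality checking and filtering", quality_filter),
        Stage("multiple sequence alignment", align),
        Stage("phylogenetic inference", infer_tree),
        Stage("phylogenetic rooting", root_tree),
        Stage("phylogenetic dating", date_tree),
        Stage("transmission clustering", cluster_transmissions),
    ]

    def run_workflow(genomes_path: str) -> str:
        """Run each stage in order, passing the previous stage's output forward."""
        current = genomes_path
        for stage in PIPELINE:
            print(f"Running: {stage.name}")
            current = stage.run(current)
        return current

    if __name__ == "__main__":
        print(run_workflow("sars_cov_2_genomes.fasta"))

A modular design of this kind lets the two bottleneck stages (alignment and tree inference) be swapped for faster algorithms or hardware-accelerated implementations without changing the rest of the pipeline.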
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
|
0.915 |
2021 — 2026 |
Wuerthwein, Frank [⬀] Rosing, Tajana Defanti, Thomas Tatineni, Mahidhar (co-PI) [⬀] Weitzel, Derek |
N/AActivity Code Description: No activity code was retrieved: click on the grant title for more information |
Category Ii: a Prototype National Research Platform @ University of California-San Diego
Advances in data-intensive science and engineering research, supported by ongoing developments in cyberinfrastructure, enable new insights and discoveries. These include progress in understanding fundamental processes and mechanisms, from human public health to the health of the planet; predicting and responding to natural disasters; and promoting the increasing interconnectedness of science and engineering across many fields, including astronomy, extreme-scale systems management, cell biology, high energy physics, social science, and satellite image analysis. Fundamentally new system architectures are required to accelerate such advances, including capabilities that integrate diverse computing and data resources, research and education networks, edge computing devices, and scientific instruments into highly usable and flexible distributed systems. Such systems both provide technological platforms for conducting research and catalyze distributed, multidisciplinary teams developing new and transformative approaches to disciplinary and multidisciplinary research problems.
Recent reports informed by community visioning, including the NSF-supported report “Transforming Science Through Cyberinfrastructure”, note that a cyberinfrastructure (CI) ecosystem designed to be open, scalable, and able to grow with time may advance through in-kind contributions of compute and data resources by the national science and education community. This CI ecosystem may be viewed “more holistically as a spectrum of computational, data, software, networking, and security resources, tools and services, and computational and data skills and expertise that can be seamlessly integrated and used, and collectively enable new, transformative discoveries across S&E [science and engineering]”.
Aligned with this vision of a national-scale CI ecosystem, the San Diego Supercomputer Center (SDSC) at the University of California, San Diego (UCSD), in association with partners at the University of Nebraska, Lincoln (UNL) and the Massachusetts Green High Performance Computing Center (MGHPCC), will deploy the “Prototype National Research Platform” (NRP). This novel, national-scale, distributed testbed architecture includes: a high-performance subsystem deployed at SDSC that integrates advanced processors available over extremely low latency national Research and Education (R&E) networks operating at multiple 100Gbps speeds; additional highly optimized subsystems, each comprising 288 Graphics Processing Units (GPUs), deployed at UNL and MGHPCC and likewise interconnected to the R&E networks at 100Gbps at each location; a minimum of an additional 1 PB of high-performance disk storage at each of the three sites, establishing a Content Delivery Network (CDN) that provides prototype-caliber access to data anywhere in the nation within a round-trip time (RTT) of ~10ms through a set of eight optimally positioned 50TB Non-Volatile Memory Express (NVMe)-based network caches; and an innovative system software environment enabling centralized management of the nationally distributed testbed system. Additionally, the system architecture will remain open to future growth through integration of additional capabilities via a novel “bring your own resource” program.
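As a small illustration of the latency-driven design of such a cache layer, the sketch below shows one simple way a client might select the lowest-RTT cache; the host names and the TCP-connect timing method are assumptions for illustration, not part of the NRP system.

    # Minimal sketch of latency-based cache selection for a content delivery
    # network like the one described above. Host names are hypothetical and
    # timing a TCP connection is only one of several ways to estimate RTT.
    import socket
    import time

    CACHE_HOSTS = ["cache-1.example.edu", "cache-2.example.edu"]  # hypothetical

    def measure_rtt(host: str, port: int = 443, timeout: float = 1.0) -> float:
        """Estimate round-trip time by timing a TCP connection setup."""
        start = time.perf_counter()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return time.perf_counter() - start
        except OSError:
            return float("inf")  # unreachable caches are never selected

    def nearest_cache(hosts):
        """Return the cache with the lowest measured RTT."""
        return min(hosts, key=measure_rtt)

    if __name__ == "__main__":
        print(nearest_cache(CACHE_HOSTS))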
The project is structured as a three-year testbed phase followed by a two-year allocations phase. During the testbed phase, SDSC researchers, working closely with collaborators at UNL and MGHPCC as well as with a small number of research teams, will evaluate the NRP architecture and the performance of its constituent components. Semiannual workshops will bring teams together to share lessons learned, develop knowledge and best practices to inform researchers, and explore the innovative architecture to accelerate S&E discoveries from ideas to publications. During the allocations phase, NRP will be available to researchers with projects deemed meritorious by an NSF-approved allocation process. Workshops will continue through the allocations phase.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
|
0.915 |
2021 — 2024 |
Wuerthwein, Frank (co-PI) [⬀] Rosing, Tajana Altintas, Ilkay Defanti, Thomas Yu, Qi |
N/AActivity Code Description: No activity code was retrieved: click on the grant title for more information |
Ccri: Ens: Cognitive Hardware and Software Ecosystem Community Infrastructure (Chase-Ci) @ University of California-San Diego
Machine learning (ML) is a rapidly expanding field. Computationally intensive workflows train neural nets and then use the results in smartphones, robots, drones, and self-driving vehicles, and to run the Internet of Things. Access to graphics processing units (GPUs) is provided through CHASE-CI's Nautilus, a highly distributed but centrally managed on-demand computer cluster designed for ML and Computational Media (CM). CHASE-CI provides over 20 campuses with the scaffold for adding on-premises compute cycles and fast data handling, and it offers researcher-focused support and training. Using CHASE-CI's detailed performance measurements, researchers learn to become experts in optimizing their computational resources.
CHASE-CI is a community-building effort that sustains a growing community of ML/CM researchers using a purpose-built, continuously enhanced, nationally distributed computing and data storage infrastructure. Researchers explore combinations of algorithms and architectures, optimized with the help of graphed performance metrics, and benefit from extensive shared workflows and open-source software. They use CHASE-CI's online social media platform to give and receive support and to share techniques. Community use of CHASE-CI informs computer architecture discussions about future national cyberinfrastructure research and instructional lab needs. CHASE-CI forms a national online community that is easy to join and designed for sharing code, data, and results.
The hardware, software, and socio-technical approaches developed by CHASE-CI have provided a roadmap for broader research uses and student training in ML/CM technologies. Researchers get expanded access to hundreds of GPUs for developing algorithms and software to train sensing devices and visualize results, thereby engaging the students who will soon join the essential workforce for the ongoing massive expansion of mobile platforms such as robots, drones, and self-driving cars. CHASE-CI broadens participation in computer science, particularly among Minority-Serving Institutions and underserved states. CHASE-CI thoroughly measures and monitors data access by applications over the regional and national R&E networks.
The repository for the project may be found at prp.ucsd.edu and will be maintained for at least the length of the project. It is the anchor website containing pointers to all the research efforts that build upon the Pacific Research Platform. It contains code repositories, presentations, and references such as publications and recorded lectures, and it maintains and archives an active social media channel. CHASE-CI is led by UC San Diego, partnering with investigators at UC Santa Cruz, the University of Nebraska-Lincoln, Florida Agricultural and Mechanical University, New York University, the University of Illinois at Chicago, and San Diego State University.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
|
0.915 |
2021 — 2023 |
Smarr, Larry [⬀] Rosing, Tajana Altintas, Ilkay Defanti, Thomas Yu, Qi |
N/AActivity Code Description: No activity code was retrieved: click on the grant title for more information |
Ccri: Abr: Cognitive Hardware and Software Ecosystem Community Infrastructure (Chase-Ci) @ University of California-San Diego
This project, called the Cognitive Hardware And Software Ecosystem Community Infrastructure (CHASE-CI), will continue and expand a cloud of hundreds of affordable Graphics Processing Units (GPUs), networked together with a variety of neural network machines, to facilitate development of next-generation cognitive computing. This cloud will be accessible to 30 researchers from 10 universities via the NSF-funded Pacific Research Platform. These researchers will investigate a range of problems, from image and video recognition, computer vision, and contextual robotics to the cognitive neurosciences, using the cloud purpose-built in this project.
Training neural networks on large datasets is best performed on GPUs. The limited availability of affordable GPUs and the lack of easy access to the new generation of Non-von Neumann (NvN) machines with embedded neural networks impede research in cognitive computing. The purpose-built cloud will be available over the network to address this bottleneck. The PIs will study various deep neural network, recurrent neural network, and reinforcement learning algorithms on this platform.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
|
0.915 |
2022 — 2025 |
Dasgupta, Sanjoy [⬀] Rosing, Tajana |
N/AActivity Code Description: No activity code was retrieved: click on the grant title for more information |
Collaborative Research: Iis: Ri: Medium: Lifelong Learning With Hyper Dimensional Computing @ University of California-San Diego
The use of artificial intelligence (AI) has enabled computers to solve some problems that were out of reach just a decade ago, such as recognizing familiar objects in images, or translating between languages with reasonable accuracy. In each case, a specific task (such as "translate spoken Mandarin into spoken Spanish") is defined, data is collected (consisting, say, of utterances in the two languages), and an AI system is trained to achieve this functionality. To further expand the scope of AI, it is important to build systems that are not just geared towards highly specific and static predefined tasks, but are able to take on new tasks as they arise (new words, new accents, and new dialects, for instance). This is often called "lifelong learning", and it means, basically, that the systems are adaptive to change. This project develops an approach to lifelong learning using a brain-inspired framework for distributed computing, yielding machines that potentially can solve tasks more flexibly and consume significantly less power than traditional AI systems. It will: (1) advance the ability of AI systems to handle changing environments, (2) enable a host of new low-power AI systems with applications such as environmental sensing, (3) strengthen mathematical connections between computer science and neuroscience, and (4) serve as the basis for educational and outreach activities.
This project will develop lifelong learning within the framework of "hyperdimensional computing", a neurally-inspired model of computation in which information is encoded using randomized distributed high-dimensional representations, often with limited precision (e.g., with binary components), and processing consists of a few elementary operations such as vector summation. We will build HD algorithms for some fundamental statistical primitives -- similarity search, density estimation, and clustering -- and then use these as building blocks for various forms of lifelong learning. These will rest on mathematical advances in (1) the analysis of sparse codes produced by expansive random maps and (2) algorithmic exploitation of kernel properties of high-dimensional randomized representations. Our algorithms will be implemented in hardware, deployed on a network of low-power sensors, and evaluated experimentally in a lifelong learning task involving air quality sensing.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
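For a concrete sense of the similarity-search primitive mentioned above, the sketch below performs nearest-neighbor search over random binary hypervectors using Hamming similarity; the dimensionality, noise level, and encoding are illustrative assumptions rather than the project's algorithms.

    # Minimal sketch of similarity search over binary hypervectors, one of the
    # statistical primitives mentioned above. The random binary encoding and
    # parameters are illustrative assumptions, not the project's algorithms.
    import numpy as np

    D = 10_000          # hypervector dimensionality
    rng = np.random.default_rng(1)

    def random_binary_hv() -> np.ndarray:
        """Draw a random binary hypervector (a limited-precision representation)."""
        return rng.integers(0, 2, D, dtype=np.uint8)

    def hamming_similarity(a: np.ndarray, b: np.ndarray) -> float:
        """Fraction of matching components; ~0.5 for unrelated random vectors."""
        return float(np.mean(a == b))

    def nearest_neighbor(query, database):
        """Return the index of the stored hypervector most similar to the query."""
        sims = [hamming_similarity(query, hv) for hv in database]
        return int(np.argmax(sims))

    # Toy usage: store a few items, then query with a noisy copy of item 2.
    database = [random_binary_hv() for _ in range(5)]
    noise = rng.random(D) < 0.1                      # flip ~10% of the bits
    query = np.where(noise, 1 - database[2], database[2]).astype(np.uint8)
    print(nearest_neighbor(query, database))         # robustly recovers index 2

The toy query illustrates the robustness property the paragraph relies on: because unrelated high-dimensional random vectors concentrate near 0.5 similarity, even a substantially corrupted query still matches its stored counterpart.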
|
0.915 |