2010 — 2015 |
Adams, Ryan |
Microwave Circulators Based On Magnetostatic Waves @ University of North Carolina At Charlotte
Abstract
The objective of this research is to design, fabricate, and test microwave circulators that operate in the band between 5 and 50 GHz, utilizing magnetostatic waves as the dominant energy-transport mechanism. This will allow total device sizes of 1 µm or less for competitive integration with standard microelectronic processes. The approach is threefold: 1) a simplified numerical model (i.e., a Green's function or integral-equation formulation) will be developed, 2) new materials-processing techniques will be created for low-conductivity dilute magnetic semiconductor materials, and 3) dc bias circuitry will be fabricated.

Intellectual Merit: The proposed activity aims to dramatically change microwave ferrite circulator design approaches as well as communication/radar system functionality. Magnetostatic-wave phenomena will significantly change perspectives on nonreciprocal device operation and promise the extraordinary outcome of expanded bandwidths in smaller structures than are currently employed. In addition to these explicit outcomes, the proposed research is expected to provide glimpses of new applications and phenomena associated with magnetoelectric semiconducting effects of dilute magnetic semiconductor materials; new theories, models, and devices are expected.

Broader Impacts: The proposed activities integrate magnetostatic-wave phenomena and dilute magnetic semiconductor fabrication discoveries with significant teaching, training, and learning activities. In particular, underrepresented undergraduate and graduate students will participate in the proposed research wherever possible. Also, K-12 students will be included via presentations, workshops, and laboratory tours. Results are planned for dissemination in top-tier peer-reviewed journals; intermediate results will be presented to the international community through conferences and symposia.
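As a rough illustration of the frequencies involved (not taken from the proposal), the sketch below evaluates the standard Damon-Eshbach dispersion relation for surface magnetostatic waves in a thin ferrite film; the YIG-like material parameters, bias field, and film thickness are assumed values chosen only for illustration, and the band shifts with the dc bias field.

```python
import numpy as np

# Illustrative sketch only: textbook Damon-Eshbach surface magnetostatic-wave
# dispersion for a thin ferrite film. All parameter values are assumptions.
gamma = 28e9          # gyromagnetic ratio, Hz per tesla (approximate)
mu0_Ms = 0.175        # saturation magnetization of YIG, tesla (typical value)
mu0_H0 = 0.10         # assumed dc bias field, tesla
d = 5e-6              # assumed film thickness, metres

f_H = gamma * mu0_H0          # Hz
f_M = gamma * mu0_Ms          # Hz

k = np.linspace(1e2, 2e6, 500)       # wavenumber along the film, rad/m
f = np.sqrt(f_H * (f_H + f_M) + (f_M**2 / 4) * (1 - np.exp(-2 * k * d)))

print(f"surface-wave band for this bias: {f.min()/1e9:.2f} to {f.max()/1e9:.2f} GHz")
```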
|
0.909 |
2011 — 2015 |
Weldon, Thomas Adams, Ryan Daneshvar, Kasra (co-PI) |
Advanced Wideband Microwave Metamaterials @ University of North Carolina At Charlotte
Proposal # 1101939 Advanced Wideband Microwave Metamaterials Thomas Weldon, Ryan Adams, Kasra Daneshvar
Abstract
Objective: The main objectives of the proposed research are to develop, demonstrate, and disseminate theory, design methods, and experimental results for advanced microwave metamaterial devices that demonstrate wideband negative permittivity, negative permeability, and negative refractive index. In particular, the theory and design for these new metamaterials will be developed, simulations will be performed, and prototype microwave metamaterial devices will be fabricated and tested.
Intellectual merit: The proposed research aims to develop and experimentally confirm the theory and design methods for novel microwave metamaterials that offer significantly wider bandwidth by using active devices. The proposed effort offers the promise of broad, fundamental, and transformational impact in metamaterial technology in microwaves, terahertz, electromagnetics, and optics that lays the foundation for novel wideband devices having negative permeability, negative permittivity, and negative refractive index.
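For context (this is not from the proposal), the sketch below evaluates the standard resonant effective-permeability model for a split-ring-style inclusion; it shows why a passive resonance yields negative permeability only over a narrow band just above resonance, which is the bandwidth limitation that active approaches aim to overcome. The filling factor, resonance frequency, and damping rate are assumed values.

```python
import numpy as np

# Illustrative sketch: standard Lorentz-type effective permeability of a
# resonant (split-ring style) metamaterial inclusion. Parameters are assumed.
F = 0.5                      # assumed filling/coupling factor
f0 = 10e9                    # assumed resonance frequency, Hz
gamma = 0.2e9                # assumed damping rate, Hz

f = np.linspace(5e9, 20e9, 2000)
w, w0, g = 2 * np.pi * f, 2 * np.pi * f0, 2 * np.pi * gamma
mu_eff = 1 - F * w**2 / (w**2 - w0**2 + 1j * g * w)

negative = f[mu_eff.real < 0]
print(f"Re(mu) < 0 only from {negative.min()/1e9:.2f} to {negative.max()/1e9:.2f} GHz")
```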
Broader impacts: The proposed activity advances discovery in the areas of metamaterial phenomena and integrates these discoveries with significant teaching, training and learning activities. In particular, we plan to work with K-12 students in presentations, workshops, and laboratory tours. Underrepresented K-12 students will be particularly targeted for inclusion in the educational efforts. Also, underrepresented undergraduate and graduate students will be asked to participate in the proposed research wherever possible as research assistants as well as through course topic modification, course additions, senior design projects, class projects, and summer REU opportunities. The results of the proposed research are planned for dissemination in top-tier peer-reviewed journals; intermediate results will be presented to the international community through professional conferences.
|
0.909 |
2014 — 2017 |
Adams, Ryan |
RI: Small: Parallel Methods For Large-Scale Probabilistic Inference
We are undergoing a revolution in data. We have grown accustomed to constant upheaval in computing -- quicker processors, bigger storage and faster networks -- but this century presents the new challenge of almost unlimited access to raw data. Whether from sensor networks, social computing, or high-throughput cell biology, we face a deluge of data about our world. Scientists, engineers, policymakers, and industrialists need to use these enormous floods of data to make better decisions. This research project is about providing foundations for tools to achieve these goals. Simple models give only coarse understanding. The world is sophisticated and dynamic, providing rich information. Furthermore, representation of uncertainty is critical to discovering patterns in complex data. Not only are many natural processes intrinsically random, but our knowledge is always limited. The calculus of probability allows us to represent this uncertainty and design algorithms to act effectively in an unpredictable world. The gold standard for probabilistic analysis is Markov chain Monte Carlo (MCMC), a way to identify hypotheses about the unobserved structure of the world that are consistent with observed data. It is a powerful and principled way to perform data analysis, but traditional MCMC methods do not map well onto modern computing environments. MCMC is a sequential procedure that cannot generally take advantage of the parallelism offered by multi-core desktops and laptops, cloud computing, and graphical processing units. This research will develop new methods for MCMC that are provably correct, but that take advantage of large-scale parallel computing. There are a variety of broader impacts of this work. In addition to the core technical contributions, the project engages in deep scientific collaborations. New photovoltaic materials will lead to better solar cells and more sustainable energy production. New techniques for uncovering genetic regulatory mechanisms will lead to better understanding of disease. Quantitative models of mouse activity will give insight into the neural basis of behavior and provide a deeper understanding of brain disorders.
From a technical point of view, this work pursues two complementary approaches to large-scale Bayesian data analysis with MCMC: 1) a novel general-purpose framework for sharing information between parallel Markov chains for faster mixing, and 2) a new computational concept for speculative parallelization of individual Markov chains. These theoretical and practical explorations, combined with the release of associated open source software, will yield more robust and scalable probabilistic modeling. The project will develop provably correct foundations and efficient new algorithms for parallelization of Markov transition operators for posterior simulation. These operators will be used in three collaborations that are representative of the methodological demands for large-scale statistical inference: 1) predicting the efficiencies of novel organic photovoltaic materials, 2) discovering new genetic regulatory mechanisms, and 3) quantitative neuroscientific models of mouse behavior. While this proposal focuses on the generalizable technical challenges of these problems, these collaborations provide compelling examples of how machine learning can be broadly transformative.
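As a minimal point of reference (not the project's algorithms), the sketch below runs several independent Metropolis-Hastings chains in parallel on a toy target and pools the draws; the proposed research goes beyond this baseline by sharing information across chains and speculatively parallelizing individual chains.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

# Baseline sketch: embarrassingly parallel Metropolis-Hastings chains on a
# toy 1-D Gaussian-mixture posterior, pooled after burn-in.

def log_post(x):
    # Unnormalized log-posterior: equal-weight Gaussians at -2 and +2.
    return np.logaddexp(-0.5 * (x - 2.0) ** 2, -0.5 * (x + 2.0) ** 2)

def run_chain(seed, n_steps=20000, step=1.0):
    rng = np.random.default_rng(seed)
    x, lp = 0.0, log_post(0.0)
    samples = np.empty(n_steps)
    for t in range(n_steps):
        prop = x + step * rng.normal()
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept/reject
            x, lp = prop, lp_prop
        samples[t] = x
    return samples[n_steps // 2:]                 # discard burn-in

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        chains = list(pool.map(run_chain, range(4)))   # one chain per worker
    pooled = np.concatenate(chains)
    print(f"posterior mean ~ {pooled.mean():.3f}, std ~ {pooled.std():.3f}")
```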
Finally, the project includes a significant outreach component, engaging with local middle schoolers, and involving underrepresented minorities in summer research.
|
0.904 |
2014 — 2018 |
Adams, Ryan (co-PI) Seltzer, Margo Brooks, David (co-PI) |
XPS: Full: CCA: Collaborative Research: Automatically Scalable Computation
For over thirty years, each generation of computers has been faster than the one that preceded it. This exponential scaling transformed the way we communicate, navigate, purchase, and conduct science. More recently, this dramatic growth in single processor performance has stopped and has been replaced by new generations of computers with more processors on them; for example, even the cell phones we carry have multiple processors in them. Writing software that effectively leverages multiple processing elements is difficult, and rewriting the decades of accumulated software is both difficult and costly. This research takes a different approach -- rather than converting sequential software into parallel software, this project develops ways to store and reuse computation. Imagine computing only when computer time and energy are cheap and plentiful, storing that computation, and then using it later, when computation might be limited or expensive. The approach used involves making informed predictions about computation likely to happen in the future, proactively executing likely computations in parallel with the actual computation, and then "jumping forward in time" if the actual execution arrives at any of the predicted computations that have already been completed. This research touches many areas within Computer Science: architecture, compilers, machine learning, systems, and theory. Additionally, exploiting massively parallel computation will produce immediate returns in multiple scientific fields that rely on computation.
The approach used in this research views computational execution as moving a system through the enormously high-dimensional state space defined by the registers and memory of a conventional single-threaded processor. It uses machine learning algorithms to observe execution patterns and make predictions about likely future states of the computation. Based on these predictions, the system launches potentially large numbers of speculative threads to execute from these predicted states, while the actual computation proceeds serially. At strategically chosen points, the main computation queries the speculative executions to determine if any of the completed computation is useful; if it is, the main thread uses the speculative computation to immediately begin execution where the speculative computation left off, achieving a speed-up over the serial execution. This approach has the potential to be extremely scalable: the more cores, memory, and communication bandwidth available, the greater the potential for performance improvement. The approach also scales across programs -- if the program running today happens upon a state encountered by a program running yesterday, the program can reuse yesterday's computation. This project has the potential to break new ground for research in the many areas of Computer Science it touches.
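The following toy sketch (hypothetical code, not the project's system) illustrates the mechanics described above: predicted future states are executed speculatively in parallel, and the serial computation jumps forward whenever it reaches a state whose result has already been cached.

```python
from concurrent.futures import ThreadPoolExecutor

# Toy illustration of speculative execution with state memoization.

def step(state):
    # Stand-in for an expensive, deterministic unit of computation.
    return (state * 31 + 7) % 1000003

def predict_next_states(state, k=8):
    # Stand-in predictor: a real system would use a learned model to
    # propose states the execution is likely to reach.
    guesses, s = [], state
    for _ in range(k):
        s = step(s)
        guesses.append(s)
    return guesses

cache = {}
with ThreadPoolExecutor(max_workers=4) as pool:
    state = 1
    for _ in range(100):
        if state in cache:                 # jump forward using cached work
            state = cache[state]
            continue
        futures = {s: pool.submit(step, s) for s in predict_next_states(state)}
        state = step(state)                # main (serial) computation
        for s, fut in futures.items():     # bank the speculative results
            cache[s] = fut.result()
print("final state:", state)
```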
|
0.904 |
2015 — 2018 |
Fiddy, Michael (co-PI) Joshi, Bharat Adams, Ryan |
CPS: Synergy: Collaborative Research: Fault Tolerant Brain Implantable Cyber-Physical System @ University of North Carolina At Charlotte
Epilepsy is one of the most common neurological disorders, affecting between 0.4% and 1% of the world's population. While seizures can be controlled in approximately two thirds of newly diagnosed patients through the use of one or more antiepileptic drugs (AEDs), the remainder experience seizures even on multiple medications. The primary impacts of the chronic condition of epilepsy on a patient are a lower quality of life, loss of productivity, comorbidities, and increased risk of death. Epilepsy is an intermittent brain disorder, and in localization-related epilepsy, which is the most common form of epilepsy, one or a few discrete brain areas (the seizure focus or seizure foci) are believed to be responsible for seizure initiation. More recent approaches with implantable electrical stimulation seizure control devices hold value as a therapeutic option for the control of seizures. These devices, directly or indirectly, target the seizure focus and seek to control its expression. In this project we will build a multichannel brain implantable device based on emerging cyber physical system (CPS) principles. This brain implantable CPS device will incorporate key design features to make the device dependable, scalable, composable, certifiable, and interoperable. The device will operate over the life of an animal, or a patient, and continuously record brain activity and stimulate the brain when seizure related activity is detected to abort an impending seizure.
Episodic brain disorders such as epilepsy have a considerable impact on a patient's productivity and quality of life and may be life-threatening when seizures cannot be controlled with medications. The goal of this project is to create a second-generation brain-implantable sensing and stimulating device (BISSD) based on emerging CPS principles and practice. The development of a BISSD exemplifies several defining aspects that inform and illustrate core CPS principles. First, to meet the important challenge of regulatory approval, a composable, scalable, and certifiable framework that supports testing in multiple species is proposed. Second, a BISSD must be wholly integrated with the patient and fully cognizant at every instant of brain state, including dynamic changes in both the normal and abnormal expression of brain physiology and therapeutic intervention. Thus, this project seeks a tight conjunction of the cyber solution, which must monitor itself and monitor and stimulate the brain using implanted, adaptable, distributed, and networked electrodes, and the physical system, which in this case is the intermittently failing human brain. Third, a BISSD must function for an extensive period of time, up to the life of the patient, because each surgery to place and retrieve a BISSD carries an attendant risk. This requirement necessitates a dependable solution, which this project seeks to reliably achieve through both an understanding of the brain's foreign body response and a unique hierarchical fault-tolerant design. Fourth, advanced approaches to acquire, compress, and analyze sensor signals are employed to achieve real-time monitoring and control of seizures. This project should yield a powerful, scalable CPS framework for robust fault-tolerant implantable medical devices with real-time processing that can grow with advances in sensors, sensing modalities, time-series analysis, real-time computation, control, materials, power, and knowledge of the underlying biology. The USA has a competitive advantage in the control of seizures in medically refractory epilepsy. In the modern era, epilepsy surgery evolved in the USA in the 1970s and spread from here to other parts of the world. Similarly, the USA enjoys a competitive advantage in BISSDs, and success in this effort will enable the USA to build on and maintain this advantage. In addition to epilepsy, advances made here can be expected to benefit the treatment of other neurological and psychiatric brain disorders.
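The abstract does not specify the device's detection algorithms; as a purely illustrative sketch, the code below applies one classic lightweight onset feature, windowed signal power, to a synthetic EEG trace and flags windows that exceed an adaptive threshold as candidates for responsive stimulation. The sampling rate, amplitudes, and threshold are assumptions.

```python
import numpy as np

# Illustrative sketch of a simple real-time seizure-onset feature.
fs = 500                                                   # assumed sampling rate, Hz
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(0)
eeg = 10e-6 * rng.standard_normal(t.size)                  # background activity
burst = (t > 12) & (t < 16)
eeg[burst] += 150e-6 * np.sin(2 * np.pi * 20 * t[burst])   # synthetic ictal rhythm

win = fs                                                   # 1-second analysis windows
rms = np.array([np.sqrt(np.mean(eeg[i:i + win] ** 2))
                for i in range(0, eeg.size - win + 1, win)])
threshold = 4 * np.median(rms)                             # adaptive power threshold
for sec, power in enumerate(rms):
    if power > threshold:
        print(f"high-power window near t = {sec} s -> candidate stimulation trigger")
```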
|
0.909 |
2016 — 2021 |
Fiddy, Michael (co-PI) Hofmann, Tino Aggarwal, Ishwar Adams, Ryan |
Phase II I/UCRC U. of North Carolina At Charlotte Site: Center For Metamaterials (CfM) @ University of North Carolina At Charlotte
The Center for Metamaterials (CfM) provides a one-stop shop for research and development of metamaterials through collaborations between UNC Charlotte, Clarkson University, and the Center's many industrial and government members. Metamaterials are man-made or engineered materials having remarkable properties that do not occur in nature. The focus of the CfM is primarily on optical, infrared, and radar applications of metamaterials and the new ways in which they permit control of the flow of light and sophisticated filtering and detection of light. Examples include improved performance of light-harvesting systems for energy production and designs for smaller, higher-performance antennas for broadband communications. The CfM works closely with members to address fundamental and precompetitive questions regarding the design and fabrication of metamaterials with customized properties. Fundamental questions include how radiation can be made to interact more strongly with the meta-atoms that compose the metamaterial. It is these interactions that lead to new approaches for engineering designs for improved technologies ranging from imaging sensors to improved heat sinks. The projects conducted within the CfM are developed collaboratively, with the Industry Advisory Board (IAB) deciding on the viability of metamaterials solutions for achieving smaller, better, lower-cost, and higher-performance products for both commercial and defense deployment. The global metamaterials market sector is growing fast, and through the CfM, members remain current and competitive on the many emerging technology opportunities. In addition, all projects are conducted by a cohort of students at each site, several from underrepresented groups in science. Students' projects typically fold into their dissertations, and the students work closely with members as interns, making them well positioned for employment after graduation.
The CfM's faculty and students at UNCC and Clarkson work hand-in-hand with IAB members to develop new knowledge, new materials capabilities, and software tools. The technology roadmap, developed by our members, provides a framework for clustering project objectives into more specific goals. To ensure the long-term success and viability of the Center, we focus on research that impacts a broad range of industries. CfM goals are to better understand and exploit new properties associated with structured materials and to publish and generate IP related to new components and devices. Within the CfM there are many foundational scientific questions to be addressed, concerning understanding and modeling the physical processes that occur when waves interact with subwavelength-sized structures. Resonant behavior combined with effective-medium descriptions can predict unusual material properties, such as negative refractive index. Exploiting resonant behavior in subwavelength elements, surface patterns, and composites demands more complete descriptions and a deeper understanding of field-matter interactions. At the nanoscale many new phenomena are being unveiled that are ripe for technological advances. The rich resources of the CfM are well suited to address the tiers of challenges associated with advancing metamaterial applications and filling technology gaps. UNC Charlotte has an interdisciplinary team of faculty participating in the CfM. It also has excellent optical, microwave, and micro/nanofabrication user facilities housed in its Center for Optoelectronics and Optical Communications to support CfM projects. CfM-supported graduate students are drawn from several graduate programs, including Optical Science and Engineering (OSE), EE, ME, and Nanoscale Sciences (NS). Moving forward, we plan to include medical and acoustic metamaterials while retaining a brand for the CfM primarily tied to sensing, imaging, antennas, and energy harvesting.
|
0.909 |
2017 — 2020 |
Wei, Gu-Yeon (co-PI) Brooks, David Adams, Ryan (co-PI) |
CSR: Small: Virtualized Accelerators For Scalable, Composable Architectures
This project seeks to develop fundamental technologies to enable the next generation of computing systems that will power future ubiquitous devices such as smartphones, self-driving cars, and autonomous robots. The project develops novel tools and techniques at both the hardware and software layers of computer systems. This project will also train new graduate engineers in architecting complex computing systems, modern software and hardware design methodologies, and cutting-edge machine learning techniques. All of these skill sets are in broad demand in US industry but have been underrepresented in STEM education.
Heterogeneous architectures comprising general-purpose processors, graphics processors, and hardware accelerators designed for specific computing tasks have been widely adopted in today's computing systems for both edge and cloud devices. Specialized computing blocks provide tremendous benefits in energy efficiency. However, a major challenge in the design of such systems is the loss of generality and flexibility, which has limited their adoption to a small set of application domains that do not often change. Increased flexibility could be unlocked if accelerators were built from smaller dynamically composable blocks, but existing approaches are difficult to program and scale poorly. This project proposes a design flow to generate a templated system-on-chip (SoC) with a composable accelerator system that can be physically instantiated for a range of computing platforms. Through a virtualization layer, collections of physical hardware blocks are exposed to software as virtual accelerators. To efficiently search the large design space of the SoC, new design space exploration techniques are under investigation.
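As a purely conceptual sketch (class and method names are hypothetical, not the project's design flow), the code below shows the shape of such a virtualization layer: software submits work to a named virtual accelerator, which binds each job to whatever physical block of the right kind is currently free.

```python
from dataclasses import dataclass

# Conceptual sketch of exposing physical accelerator blocks as virtual ones.

@dataclass
class PhysicalBlock:
    block_id: int
    kind: str             # e.g. "matmul", "fft", "conv"
    busy: bool = False

class VirtualAccelerator:
    """Software-visible handle backed by whatever physical blocks are free."""
    def __init__(self, kind, pool):
        self.kind, self.pool = kind, pool

    def submit(self, job):
        # Bind the job to any free physical block of the right kind; a real
        # runtime would also queue, batch, time-share, and migrate jobs.
        for blk in self.pool:
            if blk.kind == self.kind and not blk.busy:
                blk.busy = True
                result = f"{job} ran on physical block {blk.block_id}"
                blk.busy = False
                return result
        return f"{job} deferred: no free {self.kind} block"

pool = [PhysicalBlock(0, "matmul"), PhysicalBlock(1, "matmul"), PhysicalBlock(2, "fft")]
gemm = VirtualAccelerator("matmul", pool)
print(gemm.submit("layer-3 GEMM"))
```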
|
0.904 |
2017 — 2019 |
Wood, Robert (co-PI) Adams, Ryan (co-PI) Wei, Gu-Yeon Brooks, David (co-PI) Kuindersma, Scott (co-PI) |
S&AS: INT: RoboBees 2.0 Towards Autonomous Micro Air Vehicles
In 2009, a group of researchers from Harvard led an NSF Expeditions in Computing project to build a colony of flapping-wing robots, called RoboBees, motivated by the multidisciplinary challenges associated with building and controlling effective robotic insects. The research has been exciting and it has tickled the imagination of many "young and old" through numerous museum exhibits and outreach activities. The severe inherent constraints associated with building at-scale flying robotic insects required many innovations and new technologies at each step. For example, a new manufacturing process called pop-up MEMS was developed to enable mass production of small-scale, foldable devices. New electronics were developed to flap artificial insect-scale wings. A new small-scale computer chip (called the BrainSoC), connected to various sensors, was created to control the robot. The culmination of this work has been exciting demonstrations of RoboBees hovering and maneuvering about within carefully controlled environments. The next phase of this work is to imbue these robots with machine intelligence and autonomy: RoboBee 2.0. The main objective of this proposal will be to teach the RoboBees to fly autonomously.
Over the past 10 years, while roboticists have been busily building small-scale robots, there has been a surge of activity in machine learning that has led to rapid advances in machine perception and control. For example, the recent success of deep learning can be attributed to the virtuous cycle of (i) more and higher-quality data; (ii) faster parallel computation; and (iii) more efficient learning algorithms. The time is ripe to combine these threads of research to develop machine learning-enabled flight control and perception for RoboBees. This project brings together a multidisciplinary team of experts from different engineering backgrounds to build the next generation of RoboBees. The project seeks to push the envelope by targeting the RoboBees platform, which introduces flight dynamics and sensitivity requirements beyond the bleeding edge of what is possible using off-the-shelf components. This effort builds on the existing experimental RoboBee platform at Harvard, built with special onboard electronics, which will be used to record large volumes of flight data. These data can then feed exploration of machine learning flight control algorithms, beginning with simple hovering before tackling more challenging maneuvers such as obstacle avoidance and object tracking. Since hand-tuning conventional control algorithms is overly cumbersome, the focus will be on modern computing paradigms that can be taught rather than programmed. Development and demonstration of autonomous flight control based on deep learning for insect-scale flapping-wing robots will broadly impact the fields of microrobotics, machine learning, energy-efficient computing, and autonomous systems, further extending the capabilities of autonomy to a broad range of robotic platforms, from regular vehicles to tiny robots of diverse configurations and applications.
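As a toy illustration of the "taught rather than programmed" idea (not the project's controllers; the data are synthetic stand-ins for logged flights), the sketch below fits a simple linear hover policy to recorded state-command pairs by least squares. A learned deep-network policy would replace the linear fit, but the data flow is the same: logged flight states in, actuation commands out.

```python
import numpy as np

# Behavior-cloning style sketch: fit a policy mapping measured state errors
# to wing-actuation commands from (synthetic) recorded flight data.
rng = np.random.default_rng(1)
states = rng.standard_normal((5000, 6))        # position/attitude errors and rates
expert_gains = rng.standard_normal((6, 3))     # unknown feedback law to recover
commands = states @ expert_gains + 0.05 * rng.standard_normal((5000, 3))

gains, *_ = np.linalg.lstsq(states, commands, rcond=None)   # learned linear policy
print("command for one hover perturbation:", states[0] @ gains)
```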
|
0.904 |
2019 — 2022 |
Ramadge, Peter Adams, Ryan Vonholdt, Bridgett Engelhardt, Barbara (co-PI) Mittal, Prateek (co-PI) |
MRI: Acquisition of a High Performance Large Memory Computing Cluster For Large Scale Data-Driven Research
This project will acquire a state-of-the-art High Performance Computing (HPC) cluster to support large-scale, data-driven research. The instrument will support a variety of projects from computer science, electrical engineering, ecology, evolutionary biology, neuroscience, and genomics. In neuroscience, the cluster will allow the use of advanced statistical techniques at scale to identify and connect anatomical and functional brain-imaging features of diseased and healthy subjects with specific underlying genetic profiles. In computer science, using machine learning algorithms deployed on the instrument, researchers will seek new ways to protect the security and privacy of users in large-scale networked systems. Finally, the cluster will also enable research that will improve our understanding of evolutionary history and the molecular complexities of traits through the analysis of multi-animal, large-scale genomic datasets. In addition, through short courses and multiday boot camps, the instrument will provide valuable opportunities for training postdoctoral fellows, graduate students, and advanced undergraduates in large-scale computational data science. The instrument will also be a valuable asset for certificate programs in statistics and machine learning (one for undergraduate students, the other for graduate students) and for a certificate program in computational science, all of which will support broadening participation of groups underrepresented in STEM. The research and training enabled by the instrument is expected to help improve our understanding of human health and well-being, help create new knowledge that will aid economic competitiveness, and help maintain the country's leadership in science and engineering.
The computing cluster will be composed of nodes with very large memory. The system complements the institution's investments in research cyberinfrastructure and will be managed by the Princeton Institute for Computational Science and Engineering (PICSciE) and the Office of Information Technology (OIT). The instrument will initially be used by five research groups that are part of the Center for Statistics and Machine Learning (CSML), which will leverage existing programs and partnerships to increase participation in data science. The initial five projects are united under a common theme: machine learning will be used to analyze big data sets that may not be easily broken into smaller pieces for processing. Specifically, they will examine the following: 1) the use of probabilistic models for large-scale scientific analysis and de novo design in application areas such as mechanical metamaterials and mixed-signal circuit development; 2) statistical machine learning in genomics, biomedicine, and health biostatistics, including the analysis of hospital records to aid doctors in taking early action to improve patient outcomes, the heritability of neuropsychiatric diseases and drug responses, and statistical and experimental examination of cardiovascular disease risk; 3) security and privacy challenges in networked systems, using machine learning techniques to detect and isolate attackers in networked systems such as social media; 4) large-scale machine learning for neuroscience, such as joint analysis of many large-scale, multi-subject fMRI datasets whose size and number demand large-memory computation; 5) evolutionary genomic and epigenome analyses through collection and analysis of large datasets to investigate the evolutionary history and molecular complexities of traits. Collectively, these research groups comprise forty graduate students and ten postdocs, and include, on average, thirteen undergraduate research projects per year. The instrument will also be used by other researchers engaged in large-scale, data-driven research across a wide variety of disciplines. Hence both the capacity and the capability aspects of the proposed instrument will be highly utilized, and the instrument will enable the continued advancement of research at the University.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
|
0.901 |