1994 — 2001 |
Baraniuk, Richard |
N/AActivity Code Description: No activity code was retrieved: click on the grant title for more information |
Nyi: Signal Analysis and Processing in Matched Coordinate Systems @ William Marsh Rice University
This research aims to extend current methods of time-frequency and time-scale analysis by developing a general theory for signal analysis and processing in alternative coordinate systems. Specific tools under investigation include optimal, signal-dependent time-scale representations; information measures for time-frequency and time-scale analysis; and operator-based, generalized coordinate systems. Test signals are being drawn from problems in machine health monitoring, magnetic resonance imaging, and dispersive signal processing.
|
1997 |
Bennett, John (co-PI); Adve, Sarita; Adve, Vikram (co-PI); Aazhang, Behnaam (co-PI); Baraniuk, Richard |
Cise Research Instrumentation: Design and Evaluation of Architectures, Programming Environments, and Applications For Shared-Memory Systems @ William Marsh Rice University
This research instrumentation grant facilitates acquiring a shared-memory multiprocessor to support the following research projects: (1) design and evaluation of ILP-based shared-memory multiprocessors; (2) parallelizing compilers for shared-memory systems; (3) interactive and adaptive techniques for tuning the performance of shared-memory parallel programs; (4) parallel algorithms for communications systems; and (5) signal and image processing. The following research is thus enabled. Architectural techniques to exploit instruction-level parallelism in shared-memory multiprocessors: fast simulation methods are the key enabling technology for this research, and the proposed multiprocessor enables the development and use of high-performance parallel simulators. Compilation techniques for High Performance Fortran (HPF): the proposed multiprocessor is a cost-effective platform for HPF compiler development and an important target for evaluating the compiler-generated parallel code. Runtime techniques to identify and remedy performance bottlenecks in shared-memory programs: the multiprocessor is the desired platform to develop and evaluate the techniques. Algorithms for wireless and network communication systems: most such algorithms must meet stringent real-time constraints, and the multiprocessor enables the development of parallel algorithms to meet them. Algorithms for signal and image processing for applications including geophysics, radar, and medical imaging diagnostics: the multiprocessor is needed to develop and test parallel and resource-intensive sequential algorithms on real data sets.
Overall, the proposed system enables the above research in three critical ways: it permits parallelization of applications that cannot be run sequentially, provides a testbed for compiler and tools research, and provides cost-effective resources for sequential but resource-intensive tasks.
|
1999 — 2003 |
Wallach, Dan (co-PI); Aazhang, Behnaam; Baraniuk, Richard; Knightly, Edward (co-PI); Cavallaro, Joseph (co-PI) |
Wireless Technology: Seamless Multitier Wireless Networks For Multimedia Applications @ William Marsh Rice University
In the past decade, the number of subscribers to mobile and wireless communication services has grown at an exponential rate. Concurrently, emerging wireless devices have enabled new modes of communication beyond traditional cellular voice. However, to remain continually "connected," users face the frustrating task of manually coordinating a vast array of disparate services, devices, and wireless technologies.
The goal of this project is to develop a platform for truly seamless communication throughout environments as fundamentally different as high-speed indoor wireless and conventional cellular systems. The investigators propose to design, build, and evaluate RENE (Rice Everywhere NEtwork), a multi-tier system that provides network- and application-level services using a single network interface card. The key innovations of the RENE project are as follows:
1. The design of an mNIC (multi-tier Network Interface Card), a novel network interface card that is reprogrammable on-the-fly to different physical- and network-layer standards. The mNIC will support soft handoffs, both horizontally within a tier and vertically among tiers, including transitions from a prototype 100 Mbps indoor wireless LAN protocol to commercial CDMA cellular standards.
2. The building of a proxy file system that enables seamless and consistent access to a user's home working environment, independent of the user's location or available network resources. The system will respond to changes in available capacity using transcoders, allow consistent reading and writing of files (even when transcoded), and facilitate network-awareness in unmodified applications.
3. An extensive measurement and modeling study of proxy traffic using multi-fractal models. Using these results, policies will be devised that enable the proxy to make intelligent decisions on when and to what extent to transcode or store user data to best meet user performance objectives.
4. The development of a new coarse-grained approach to resource reservation and admission control that enables users to obtain predictable performance in multi-tier environments. The key technique is to abstract system resources into networks of virtual bottleneck cells such that by provisioning resources in the virtual cells, quality of service objectives can be satisfied in the actual system.
This research will be conducted in collaboration with Nokia and Texas Instruments in order to build a complete prototype implementation of the RENE system and demonstrate its capabilities.
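The coarse-grained reservation idea above can be sketched in a few lines: abstract a shared resource into a virtual bottleneck cell with provisioned capacity, and admit a flow only if the cell can still hold its demand. The class name, capacity units, and admission rule here are hypothetical illustrations, not the RENE project's actual mechanism.

```python
# Toy coarse-grained admission control over a "virtual bottleneck cell".
class VirtualCell:
    def __init__(self, capacity_kbps):
        self.capacity = capacity_kbps  # provisioned capacity for this cell
        self.reserved = 0              # bandwidth already promised to flows

    def admit(self, demand_kbps):
        """Admit a flow only if the cell's provisioned capacity holds it."""
        if self.reserved + demand_kbps <= self.capacity:
            self.reserved += demand_kbps
            return True
        return False

cell = VirtualCell(capacity_kbps=1000)
print([cell.admit(d) for d in (400, 400, 400)])  # third flow is rejected
```

By provisioning each virtual cell conservatively, quality-of-service objectives can be met without per-link, fine-grained reservations.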
|
1999 — 2002 |
Baraniuk, Richard |
Multiscale Signal and Image Processing Using Singularity Grammars @ William Marsh Rice University
Many real-world signals and images are naturally processed in a multiscale fashion, just as we move away from an object to view its general appearance and zoom in to see its fine details. The multiscale data representation of the wavelet transform naturally matches important signal and image singularity structures such as edges, ridges, bursts, textures, and other transients. This project focuses on characterizing these key structures using wavelets. The resulting new signal and image processing algorithms will impact a diverse range of important problems, from the management of computer network traffic in the next-generation Internet to seamless handling of complicated data such as documents containing both text and images.
Since wavelets form a basis, they can reproduce arbitrary functions, from highly structured real-world signals and images to completely unstructured noise. In linguistic terms, the wavelet vocabulary can be too expressive. To perform useful modeling of real-world signals and images, the vocabulary's scope must be narrowed by imposing a set of constraints -- a grammar -- that capture the salient structures of singularities. While much research has concentrated on developing new wavelet vocabularies, the most important current challenges to wavelet-based processing lie in grammatical modeling. This project studies a new statistical approach to multiscale modeling designed to match singularity-rich data. Grammars under consideration include efficient and trainable probabilistic graphs (hidden Markov tree models) and multiplicative pyramidal cascades. New grammar design is being guided by a detailed study of the fine structure of singularities using Besov spaces and multifractal analysis. From these new models follow new algorithms for signal and image estimation, detection/classification, segmentation, fusion, and synthesis.
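The multiplicative pyramidal cascades mentioned above can be illustrated with a minimal sketch: a binomial cascade that repeatedly splits mass between dyadic subintervals. The weight parameter and the binomial form are illustrative assumptions, not the project's actual grammars.

```python
import random

def multiplicative_cascade(depth, m0=0.6, seed=0):
    """Binomial multiplicative cascade: at each scale, split every interval
    in two and multiply the mass by weights (m0, 1 - m0) in random order.
    The limit measure is multifractal, a toy model for singularity-rich data
    such as network traffic."""
    random.seed(seed)
    measure = [1.0]
    for _ in range(depth):
        nxt = []
        for mass in measure:
            w = m0 if random.random() < 0.5 else 1.0 - m0
            nxt.extend([mass * w, mass * (1.0 - w)])
        measure = nxt
    return measure

cascade = multiplicative_cascade(depth=10)
print(len(cascade))  # 1024 dyadic intervals; total mass is conserved
```

Plotting the resulting measure shows the bursty, spiky behavior that such grammars are designed to capture.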
|
2000 — 2002 |
Levander, Alan (co-PI); Symes, William; Baraniuk, Richard |
Acquisition of a High Performance Computer System For the Center For Computational Geophysics @ William Marsh Rice University
9977697 Symes
This grant, made through the Major Research Instrumentation (MRI) Program, provides partial support of the costs of acquiring a high-performance computer system for the Center for Computational Geophysics (CCG), Rice University. The CCG will obtain a multi-CPU, shared-memory, parallel-architecture server which will be used in a variety of computer-intensive geophysical research programs, including 3-D reflection and refraction exploration seismology and geodynamic modeling. A number of faculty members will make use of this state-of-the-art computational platform. William Symes (applied mathematician), Alan Levander (seismologist), and Richard Baraniuk (signal processing electrical engineer) will largely be involved in the development of improved computational algorithms for the inversion of large 3-D seismic data sets and their interpretation. Additional faculty members and researchers, including Richard Gordon, Adrian Lenardic, Julia Morgan, and Dale Sawyer, will make use of the system for modeling geodynamic processes. Specific geoscience applications include improvements to the NUVEL-1 global plate motion model, tomographic imaging of the roots of the Rocky Mountains, profiling continental margin sedimentation and structure, modeling the physics of landslides, and simulating the geodynamics of continental rifting, seafloor spreading, mantle convection, and plate motions.
|
2004 — 2005 |
Baraniuk, Richard |
Wama 2004: Wavelets and Multifractal Analysis Workshop @ William Marsh Rice University
The Wavelets and Multifractal Analysis Workshop (WAMA 2004) will bring together world experts in wavelets and multifractal analysis, researchers new to these fields, and students in Cargèse, Corsica, France from 19 to 31 July 2004. Support from NSF is being sought to help defray registration fees, travel expenses, and lodging costs for some of the invited speakers, researchers, and students. Partial support has already been obtained in the US from ONR-IFO and AFOSR, and in France from the Direction Générale de l'Armement, the Ministère de l'Enseignement Supérieur, and INRIA. The participants will comprise 25 invited speakers, 20 researchers, and 40 graduate students from around the world, distributed approximately one-half each from the US and Europe (with several from other regions).
|
2004 — 2009 |
Riedi, Rudolf; Druschel, Peter (co-PI); Johnson, David (co-PI); Baraniuk, Richard |
Sp: Collaborative Research: Safari: a Scalable Architecture For Ad Hoc Networking and Services @ William Marsh Rice University
Within little more than a decade, digital information and the Internet have assumed a critical role in virtually all sectors of society, including education, commerce, science, government, and entertainment. However, today's Internet is dependent on wired or cellular wireless infrastructure. This dependence limits the reach of digital communication to regions of the world where the required infrastructure is technically and economically feasible; at the same time, it renders the network vulnerable to disasters and attacks that threaten this fixed infrastructure. This proposal aims to develop technologies to reduce the dependence of digital communication on wired and cellular wireless infrastructure, thus extending its reach into underdeveloped parts of the world and economically disadvantaged parts of society and increasing its resilience to natural disasters, acts of war, or terror attacks on its physical infrastructure. The work exploits synergies between two areas of research that have enjoyed dramatic advances in recent years but have to date mostly proceeded independently: (1) ad hoc networking, and (2) decentralized, self-organizing distributed systems. The investigators have assembled a team of experts in each of these areas that will jointly tackle the major technical challenges of a network architecture that exploits infrastructure when it is available but does not depend on it. Self-organizing network hierarchy: the team will develop a novel, self-organizing buoy protocol that recursively subdivides the network into an adaptive, proximity-based hierarchy of cells. The cell hierarchy provides the foundation for scalable routing and provides a low-overhead, proximity-based overlay structure that can be used to support network services. Periodic broadcasts from buoy nodes in each cell efficiently disseminate aggregated location, addressing, and routing information.
Scalable ad hoc network routing: based on the buoy protocol, the investigators will develop an ad hoc network routing architecture for mobile and stationary devices that scales to at least tens of thousands of nodes. Nodes maintain only a small amount of routing state, logarithmic in the size of the network, in exchange for a slightly longer route length. Nodes maintain their routing state passively by listening to buoy broadcasts, which results in very low routing overhead. The per-node space and message requirements of the protocol grow at most logarithmically with the size of the network. Self-organizing network services: the proposers plan to develop self-organizing, robust, and secure network services that exploit the hierarchical overlay structure of the buoy protocol. Basic naming, host configuration, and network time services will ensure the operation of the network in the absence of fixed infrastructure servers that provide conventional DNS, DHCP, and NTP services. Other self-organizing services will provide email, instant messaging, storage, and content distribution in the absence of a server infrastructure, manual administration, high-capacity backbones, or trusted entities. The approach builds on foundations from peer-to-peer systems but takes advantage of the hierarchical, proximity-based, low-overhead overlay structure provided by the buoy protocol to provide a solution suitable for ad hoc wireless environments. Integrated ad hoc network architecture: the proposers plan to develop a network architecture that will integrate wired and wireless networks and infrastructure-based and self-organizing services. The architecture takes advantage of existing infrastructure when and where available, without depending on its presence. In the wake of a disaster, the architecture will allow remaining islands of surviving infrastructure to self-organize jointly with wireless, mobile components to recover and resume connectivity and emergency network services.
Similarly, the architecture will allow the integration of islands of wired infrastructure via wireless ad hoc communication in developing countries. The intellectual merits of this work include the development of the science and technology to meet these challenges; the investigators will evaluate theoretical results, algorithms, and protocols through analysis, simulation, and experimental evaluation of prototype implementations; disseminate the results via publications, industrial collaborations, and student training; and distribute software artifacts for evaluation and use by industry and the research community. The broader impacts of this work include the development of technologies that will substantially increase the resilience of digital networks to physical disasters or attacks and that will extend their reach into economically disadvantaged parts of society and underdeveloped parts of the world. Educational impacts include the training of students and research personnel and outreach to educational institutions not historically involved in research.
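The logarithmic routing-state claim above can be made concrete with a toy model: in a recursive cell hierarchy, a node stores one route per sibling cell at each level, so its state grows with the number of levels, i.e. logarithmically in network size. The fan-out of eight cells per level and the one-route-per-sibling rule are illustrative assumptions, not details of the actual buoy protocol.

```python
import math

def routing_state_size(num_nodes, cells_per_level=8):
    """Per-node routing entries in a recursive cell hierarchy:
    (levels in the hierarchy) x (sibling cells per level)."""
    levels = max(1, math.ceil(math.log(num_nodes, cells_per_level)))
    return levels * (cells_per_level - 1)

for n in (100, 10_000, 1_000_000):
    print(n, "nodes ->", routing_state_size(n), "routing entries")
```

Growing the network from ten thousand to a million nodes adds only a couple of hierarchy levels of state per node.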
|
2004 — 2009 |
Druschel, Peter (co-PI); Ng, T. S. Eugene; Johnson, David (co-PI); Baraniuk, Richard; Heinkenschloss, Matthias (co-PI) |
Nets Noss: Assimnet @ William Marsh Rice University
Abstract:
In a battery-powered sensor network, energy and communication bandwidth are both limited. Moreover, processing a sensor measurement locally often requires orders of magnitude less energy than communicating it to a distant node, yielding an interesting communication/computation tradeoff: whenever possible, the network should reduce the need for global communication at the expense of increased local processing and communication. A promising approach for reducing global communication is to perform signal processing to extract key information inside the sensor network in a distributed fashion, thus dramatically reducing global communication requirements without losing fidelity.
This project aims to develop a sensor network architecture whose communications hierarchy is aligned with the information flow of its computations. In particular, the research involves developing (1) a multi-overlay sensor network architecture that supports both multi-scale and proximity communication and computation; (2) new multiscale sensor data representations based on wavelet transforms; and (3) network services for synchronization and localization of network nodes. The research includes analysis, simulation, and a small-scale testbed of sensor nodes on the Rice University campus.
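The communication/computation tradeoff described above can be illustrated with a toy message count: forwarding every raw reading to a distant sink versus combining partial summaries inside the network. The balanced-tree topology and hop counts are illustrative assumptions, not the project's actual protocols.

```python
import math

def messages_flat(num_sensors):
    """Every sensor forwards its raw reading to the sink over an
    average path of ~log2(n) hops in a balanced routing tree."""
    hops = max(1, math.ceil(math.log2(num_sensors)))
    return num_sensors * hops

def messages_aggregated(num_sensors):
    """Each node combines its children's partial results locally and
    sends a single summary upward: one message per tree edge."""
    return num_sensors - 1

n = 1024
print(messages_flat(n), "messages without in-network aggregation")
print(messages_aggregated(n), "messages with in-network aggregation")
```

For a 1024-node network the flat scheme costs roughly an order of magnitude more radio messages, which is exactly the global communication this architecture trades for local processing.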
|
2004 — 2012 |
Choi, Hyeokho (co-PI); Baraniuk, Richard |
Multiscale Geometric Analysis For Higher Dimensional Signal Processing @ William Marsh Rice University
Over the past twenty years, multiscale methods like the discrete wavelet transform (DWT) have revolutionized signal processing; for example, wavelets lie at the core of the FBI fingerprint database and the new JPEG 2000 image compression standard for digital photos. While wavelets are arguably ideal or near-ideal for analyzing and processing 1-dimensional (1-D) signals (price fluctuations of interest rates, for example), a surprising realization of the past few years is their inability to capitalize in a similar way on 2-D, 3-D, and higher-D signals containing "singularities" (edges and ridges in 2-D digital photos or 3-D videos, for example). The new, confounding aspect is "geometry": the singularities are typically localized along smooth, lower-dimensional manifolds. There is a great need for new theory and tools to exploit these geometric structures. Other than a few scattered promising results, however, such theory and tools do not exist today.
This project aims toward a unified theory and practical toolset for the analysis and processing of higher-dimensional piecewise smooth signals that feature geometric manifold structures. In particular, the research involves (1) new multiscale signal representations based on local, directional atoms having a clear geometric meaning with respect to the underlying manifolds; (2) new statistical models for these representations; (3) new multiscale processing algorithms that exploit the new representations and models; and (4) educational outreach to engage undergraduates, graduate students, and other researchers in this timely research area. Given recent successful preliminary results in 2-D and 3-D, there is significant potential for progress in these directions to eventually yield orders-of-magnitude better compression, approximation, modeling, and denoising performance in real applications in higher dimensions.
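The claim that wavelets are near-ideal for 1-D piecewise smooth signals comes down to sparsity: only the few coefficients whose supports straddle a singularity are large. A minimal orthonormal Haar DWT in pure Python makes this visible; the test signal and threshold are illustrative choices.

```python
def haar_analyze(x):
    """Full Haar DWT of a length-2^k signal (orthonormal normalization).
    Returns [coarsest approximation] + details, coarse to fine."""
    h = 0.5 ** 0.5
    coeffs = []
    while len(x) > 1:
        approx = [h * (a + b) for a, b in zip(x[0::2], x[1::2])]
        detail = [h * (a - b) for a, b in zip(x[0::2], x[1::2])]
        coeffs = detail + coeffs  # prepend: coarser details go first
        x = approx
    return x + coeffs

# Piecewise-constant signal with a single (dyadically aligned) edge:
# only the coefficients that "see" the edge are nonzero.
signal = [1.0] * 8 + [5.0] * 8
coeffs = haar_analyze(signal)
nonzero = sum(1 for c in coeffs if abs(c) > 1e-9)
print(nonzero, "of", len(coeffs), "coefficients are nonzero")
```

For this 16-sample signal only 2 of 16 coefficients are nonzero; it is exactly this edge-following behavior that fails to generalize to curved singularities in 2-D and higher, motivating the directional atoms proposed here.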
|
2005 — 2011 |
Druschel, Peter (co-PI); Ng, T. S. Eugene; Johnson, David (co-PI); Baraniuk, Richard; Heinkenschloss, Matthias (co-PI) |
Nets-Noss: Adaptivity in Sensor Networks For Optimized Distributed Sensing and Signal Processing @ William Marsh Rice University
This project aims to develop an adaptive sensor network architecture that enables the efficient, large-scale, long-term, low-cost, on-demand monitoring of a variety of physical phenomena with high fidelity. The core theme is that distributed signal processing and data assimilation of sensor data, as well as network management and monitoring, should be performed inside the sensor network in order to reduce energy consumption and global communication needs, leading to dramatically increased sensor lifetimes and much higher fidelity in the tracking of the physical phenomena of interest. The goal is to develop a flexible, self-monitoring architecture for this type of in-network processing and sensor networking that exploits adaptivity to significantly improve the network's efficiency, robustness, and usefulness. Two kinds of adaptivity are considered: (1) data adaptivity, where the network topology is adapted to align communications with the natural data flows; and (2) resource adaptivity, where the network topology is adapted based on computational, battery, or bandwidth resources. The expected results include the development of adaptive communication protocols and routing topologies; the development of network management tools for monitoring and inferring sensor communication performance and sensor distribution; and the experimental deployment of the adaptive sensor network architecture in a small-scale testbed of sensor nodes on the Rice University campus. Results will be disseminated through technical reports posted on the project web page, through papers presented at professional meetings, and through journal publications.
|
2006 — 2010 |
Dholakia, Paul; Burrus, C. Sidney; Tapia, Richard (co-PI); Baraniuk, Richard; Johnson, Don (co-PI); Keller, Sallie |
Building Communities and Sharing Knowledge in Engineering Education: a University/Industry Partnership @ William Marsh Rice University
0538934 Burrus
This award is to William Marsh Rice University to support the activity described below for 36 months. The proposal was submitted in response to the Partnerships for Innovation Program Solicitation (NSF-05566).
Partners
William Marsh Rice University (lead institution) and National Instruments
The primary objective of the proposal is to revolutionize the way science and engineering are taught by breaking away from traditional textbook- and lecture-based education in order to build a new framework, where communities of educators, students, and field practitioners continually interact, collaborate, connect, and explore active content. The specific objectives of the project are to 1) build a worldwide community of educators, students, and practitioners in DSP, led by Rice faculty and NI technology evangelists, which will develop and refine a critical mass of free Connexions DSP course materials; 2) enrich these materials with interactive LabVIEW visualizations to make the concepts come alive and encourage experimentation, exploration, and design; 3) develop semantic mathematics representations for displaying and exploring science and engineering concepts based on MathML, in particular a suite of tools for authoring, sharing, and exploring mathematics on the web; 4) translate the materials into a number of languages, including Spanish, to reach both local and worldwide audiences; 5) study the marketing and business issues associated with growing and sustaining the project into a win-win for both the university and industrial partners; and 6) assess the project's impact and widely disseminate the lessons learned.
Potential Educational and Economic Impact
There is a crisis in engineering education today, with decreasing enrollments, less engaged and less prepared students, and pressure to cover increasing amounts of material. Curricula are increasingly stove-piped and disconnected, in spite of research indicating that for women and underrepresented minority students, the study of science and engineering is made meaningful by connections to other fields. Moreover, a leading complaint from industry regarding engineering graduates is their lack of collaboration and team skills and lack of hands-on design experience.
The intellectual merit of the project follows. A new approach has been identified for building and sustaining virtual educational communities around active content and applying the results to the spectrum of engineering education venues: university undergraduate and graduate courses, industrial training and continuing education, just-in-time learning on the job, and high-school laboratories. The research involves and balances education, community development, technology development, marketing and business planning, and impact assessment. The foundation for the project is provided by Rice's Connexions Project (cnx.rice.edu) and by NI's LabVIEW (ni.com) DSP platform. Connexions is an open-access repository of free scholarly materials and an open-source software toolkit to help authors publish and collaborate, instructors rapidly build and share custom courses, and learners explore the links among concepts, courses, and disciplines. LabVIEW is a personal computer-based DSP system for interactively visualizing, processing, and interacting with multimedia such as audio, images, and video from a wide range of applications.
The broader impacts of the activity follow. The research includes the development of people-resources and technologies that will substantially increase the performance and capabilities of engineering educators and that will open up education in underdeveloped parts of the State of Texas, the Nation, and the world. In particular, education in DSP and related technologies is critical to sustaining the high-tech complex in Dallas, Austin, and Houston, Texas. Additional educational impacts include the training of undergraduate and graduate students involved in this project and two workshops that will bring together DSP educators and practitioners to share their knowledge and build communities.
|
2007 — 2012 |
Baraniuk, Richard |
Collaborative Research: Design and Analysis of Compressed Sensing Dna Microarrays @ William Marsh Rice University
The diverse functions performed by a living cell during its life cycle are controlled and regulated through complicated gene- and protein-interaction networks. Any pattern of irregular behavior of genes in the network can lead to cell malfunction, cell death, or the emergence of diseases like cancer. It is therefore of crucial importance to recognize erroneous gene interaction patterns and compare them to those in healthy cells. For this type of study, one of the most frequently used bioengineering systems is the well-known DNA microarray device. DNA microarrays consist of grids of spots containing unique genetic identifiers for each of the tested genes, capable of generating snapshots of gene activity in terms of selective DNA sequence annealing. Microarrays have also found many other applications in the field of molecular biology, most notably for the purpose of detecting hostile microbial agents in food, water, and the air. One of the main drawbacks of current microarray designs is that, for the purpose of whole-genome studies, they are severely underutilized; similarly, for biosensing applications, existing microarray systems cannot be used for simultaneous identification of a large number of microorganisms and their strains due to technological limitations.
The investigators study novel array architectures, termed compressed sensing DNA microarrays. The research involves finding DNA probes that serve as group identifiers for classes of microorganisms; designing sparse sensing matrices for DNA group identifiers; developing compressed sensing reconstruction algorithms capable of handling saturation effects arising due to high agent concentration levels; characterizing the fundamental trade-offs between distortion and sensor dimension for non-linear arrays; and, analyzing the complexity of integrating compressed sensing microarrays into existing biosensor networks.
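As a rough sketch of the kind of reconstruction involved, the toy below sets up a small sparse recovery problem and solves it with Orthogonal Matching Pursuit, one standard compressed sensing algorithm. The dimensions, the Gaussian sensing matrix, and the choice of OMP are illustrative assumptions; they do not reflect the project's actual probe designs or algorithms, which must additionally handle saturation effects.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: n candidate "agents", at most k present, m pooled measurements.
n, m, k = 20, 8, 2
Phi = rng.standard_normal((m, n)) / np.sqrt(m)  # sensing matrix
x_true = np.zeros(n)
x_true[[3, 11]] = [2.0, 1.5]                    # sparse concentration vector
y = Phi @ x_true                                 # measurements

# Orthogonal Matching Pursuit: greedily pick the column most correlated
# with the residual, then refit by least squares on the chosen support.
support, residual = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(Phi.T @ residual))))
    coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
    residual = y - Phi[:, support] @ coef

x_hat = np.zeros(n)
x_hat[support] = coef
print("estimated support:", sorted(support))
```

With enough measurements relative to the sparsity level, greedy pursuits of this kind typically recover the true support exactly, which is what makes a small number of pooled probes sufficient for identifying many agents.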
|
2010 — 2014 |
Vardi, Moshe (co-PI); Johnson, Don (co-PI); Burrus, C. Sidney (co-PI); Baraniuk, Richard; Embree, Mark (co-PI) |
Collaborative Research: Ci-Team Implementation Project: the Signal Processing Education Network @ William Marsh Rice University
This project addresses a crisis throughout engineering and cyberinfrastructure education: decreasing enrollments, under-prepared and disengaged students, and pressure on faculty to cover increasing volumes of material. Curricula have become stove-piped and disconnected, in spite of research indicating that science and engineering education best resonates with women and underrepresented minority students when clear connections are drawn to transformative applications and other fields of study. Industry routinely lobbies for engineering graduates who are at ease collaborating on teams and eager to attack hands-on design challenges.
The academic/industrial/professional society partnership between Rice University, Georgia Institute of Technology, Rose-Hulman Institute of Technology, the University of Texas at El Paso, National Instruments, Texas Instruments, Hewlett-Packard, and the Institute of Electrical and Electronics Engineers Signal Processing Society directly attacks these issues by aiming to revolutionize the way they teach and learn about cyberinfrastructure. They are guided by a common vision: to prepare the cyberinfrastructure leaders of tomorrow, to break away from the traditional textbook, lecture, homework-based approach to education, and to build a new framework where a vibrant network of educators, students, and field practitioners continually interact, collaborate, connect, and explore interactive content.
The innovative aspects and scientific merits of this collaborative project lie in their new approach to building and sustaining virtual educational communities around interactive content and applying the results to the full spectrum of engineering education venues: university undergraduate and graduate courses, industrial training and continuing education, just-in-time on the job learning, and high-school laboratories. Their research focuses on one strategic discipline in engineering, signal processing, and involves and balances education, community development, technology development, marketing and business planning, and impact assessment. The partnership is: 1. Implementing a light-weight Technology Framework that enables faculty and student users to exploit and expand upon the existing signal processing education content; 2. Building a signal processing Education Network of champions from faculty, students, and industry leaders nationwide that continually expands, improves, and diversifies the materials and that promotes the use of the framework both at network member institutions and at institutions in the wider engineering education community; 3. Assessing the effectiveness of the framework and network for accelerating adoption and use as well as the value of the mentoring and support provided by the network of champions; 4. Widely Disseminating the results and lessons learned.
Broader impacts of this research include the development of human resources and technologies that will substantially increase the performance and capabilities of engineering educators, effectively opening up engineering education for motivated self-learners in all parts of the nation as well as the world. In particular, education in digital signal processing and related technologies is critical in sustaining many high-tech industries. Finally, digital signal processing educators, practitioners, and students will be brought together to form dynamic knowledge-sharing communities that greatly impact education not only on their home campuses but around the world.
|
0.915 |
2011 — 2015 |
Padley, Paul Vardi, Moshe (co-PI) [⬀] Johnson, Don (co-PI) [⬀] Burrus, C. Sidney (co-PI) [⬀] Baraniuk, Richard |
N/AActivity Code Description: No activity code was retrieved: click on the grant title for more information |
Dip: Collaborative Research: a Personalized Cyberlearning System Based On Cognitive Science @ William Marsh Rice University
Investigators from Rice University and Duke University will build a Personalized Cyberlearning System, designed around three principles from cognitive science (retrieval practice, spacing, and enhanced feedback), that leverages advances in machine learning and makes use of an existing database of instructional content and problem sets aimed at undergraduate engineering students. The system will use artificial intelligence methods to optimize practice and feedback for students. Research will seek to advance knowledge, in a real-world setting, about a range of issues concerning how feedback facilitates learning and how individual differences come into play, as well as questions aimed more specifically at the development of the learning-technology system itself.
The project is important as part of the effort to harness the vast quantities of information on the web to personalize instruction for a wide range of learners. Moreover, the development of such cyberlearning technologies holds promise for opening up STEM education for motivated self-learners while also allowing access to a large volume of material for a range of students who might not otherwise have it.
|
0.915 |
2011 — 2016 |
Baraniuk, Richard Sankaranarayanan, Aswin Veeraraghavan, Ashok [⬀] |
Cif: Small: Computational Tools For Visual Inference of Complex Materials @ William Marsh Rice University
Central to the imaging process is the interaction of light with the objects in the scene. Remarkable progress has been made over the past several hundred years on solving inference problems (such as detection, classification, or estimation) for a large class of objects constructed from simple materials with Lambertian reflectance. Such objects scatter light such that the apparent brightness is invariant to the observer's view angle. Unfortunately, real world objects are made of considerably more complex materials that cannot be characterized in terms of such an isotropic reflectance. While humans are able to effortlessly reason about complex materials, today's image analysis and processing algorithms fail miserably. The reason is that complex, non-Lambertian materials can be characterized only by higher-dimensional functions that are relatively poorly understood and even more poorly modeled.
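The view-invariance of Lambertian scattering can be made concrete with a minimal shading sketch (geometry and albedo invented purely for illustration): the Lambertian model computes apparent brightness from only the surface normal, the light direction, and the albedo, so no viewing direction appears anywhere in the formula.

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def lambertian(albedo, normal, light):
    """Lambertian brightness: albedo times the cosine of the angle
    between the surface normal and the light direction."""
    n, l = normalize(normal), normalize(light)
    return albedo * max(0.0, sum(a * b for a, b in zip(n, l)))

# Note there is no view-direction argument: the same surface point looks
# equally bright from every viewing angle under this model.
b = lambertian(0.8, [0, 0, 1], [1, 1, 1])
```

Non-Lambertian materials break exactly this simplification, which is why they require the higher-dimensional descriptions discussed next.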
This research is developing new ways to model, capture, and process the rich reflectance patterns of complex materials. The key tool is the object's plenoptic transport function, which describes how the material transforms incident light into outgoing, radiated light. In full generality, the plenoptic transport function is 14-dimensional; hence, a fundamental complication for sensing, analysis, and processing systems for complex materials is the dimensionality gap between the high-dimensional plenoptic function and the ability of most conventional sensors (cameras) to acquire at best 2D or 3D image projections. The tools and techniques under development include sparse and manifold models (to bridge the dimensionality gap), geometric features (to mitigate the presence of environmental illumination and other nuisance parameters), and new sensor designs (to most efficiently acquire plenoptic information from natural scenes).
|
0.915 |
2012 — 2014 |
Baraniuk, Richard |
Collaborative Research: Integrating the Etextbook: Truly Interactive Textbooks For Computer Science Education @ William Marsh Rice University
The objective of this project is to develop a storyboard defining a detailed roadmap for an entire interactive eTextbook on data structures and algorithms, an end-to-end framework for the development process, and several complete prototype sections. The project improves the development of future eTextbooks by demonstrating ways to integrate text, interactivity, and assessment in a creative commons environment. The integration of online textbook content, interactive courseware, collaborative creation, open source, and online assessment benefits students, instructors, content authors, and algorithm visualization developers.
Interactive hypertextbooks are valuable beyond Computer Science, since online instruction in many fields can be enhanced by student interaction with well-designed simulations. The project provides an exemplar of how collaborative, open-source workflows could be used to develop hypertextbooks for many disciplines. The goal is to allow instructors to modify existing eTextbooks by adopting major portions and then changing sections, or taking text and visualizations from different books and combining them.
|
0.915 |
2015 — 2017 |
Baraniuk, Richard Robinson, Jacob (co-PI) [⬀] Veeraraghavan, Ashok [⬀] |
Eager: Fabrication of Thin, Lens-Free Cameras For Visible and Swir Imaging @ William Marsh Rice University
Image sensors have benefited immensely from the steady advances in semiconductor fabrication, resulting in an order of magnitude resolution increase every decade for the last three decades. Moore's law has had a similar impact on both computing and image sensor technology. In addition to the sensor, however, cameras and other imaging devices require lenses whose manufacturing processes have not benefited from Moore's law. As a result, the ultimate scaling and cost of imaging devices is limited by physical optics. This research project involves the development of Lens-Free Cameras (LFCs), a novel imaging architecture for visible, short-wave infrared (SWIR), mid-wave infrared (MWIR) and thermal wavelengths that exploits amplitude multiplexing masks and computational demultiplexing algorithms to replace lenses in traditional cameras. Such lensless imaging technology will produce thin cameras that can be directly fabricated using contemporary semiconductor fabrication processes, thereby benefiting from the scaling laws of semiconductor fabrication.
An LFC consists of an image sensor and a thin amplitude modulation mask fabricated a few hundred microns above the image sensor surface. The amplitude modulation mask creates a linear mapping between the scene and the sensor measurements. The linear system describing the LFC contains millions of variables and millions of measurements. Solving this large linear system of equations will allow for the reconstruction of high resolution images from the sensor measurements. This project will focus on the fabrication of a thin lens-free camera. The result of this fabrication process will be the world's first camera less than a millimeter thick. The advances in lens-free cameras will also apply to several challenging, space-constrained imaging scenarios such as microscopy and endoscopy.
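The demultiplexing step described above is, at heart, solving a calibrated linear system. A deliberately tiny sketch (a 4-pixel "scene" and an invented binary mask, standing in for the millions of variables in a real LFC): each sensor pixel records a coded sum of scene pixels, and the scene is recovered by inverting the known mapping.

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(M[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (M[r][n] - s) / M[r][r]
    return x

# Binary mask matrix A: each sensor pixel (row) sums a different subset of
# scene pixels, rather than imaging a single point as a lens would.
A = [[1, 1, 0, 1],
     [0, 1, 1, 1],
     [1, 0, 1, 1],
     [1, 1, 1, 0]]
scene = [0.2, 0.8, 0.5, 0.1]
y = [sum(a * s for a, s in zip(row, scene)) for row in A]  # sensor readings
recovered = solve(A, y)                                    # demultiplexing
```

Real LFC reconstructions operate on vastly larger, noisier systems and use regularized solvers rather than direct elimination; this sketch only shows why a known mask makes lensless recovery possible.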
|
0.915 |
2015 — 2018 |
Baraniuk, Richard Veeraraghavan, Ashok (co-PI) [⬀] |
Cif: Small: Lens-Free Imaging: Can Signal Processing Replace Lenses? @ William Marsh Rice University
The basic design of the camera has remained unchanged for centuries. To acquire an image, light from the scene is focused using a lens onto an image sensor. This design has served users well in a broad range of applications, with lenses forming the bedrock of current imaging practice. Unfortunately, lenses also introduce a number of limitations. First, while image sensors are typically thin, cameras end up being thick due to the lens complexity and the large distance between the lens and sensor required to achieve focus. Second, while Moore's Law has cut the cost of image sensors exponentially over time, lenses, and thus cameras, do not benefit from this exponential scaling, particularly at wavelengths further into the infrared and ultraviolet, where inexpensive glass and plastics cannot be used. Third, lens-based cameras invariably require post-fabrication assembly, resulting in manufacturing inefficiencies.
In this project, the PIs develop new camera architectures that dispense with lenses completely. Lens-free cameras retain the high light-throughput of lens-based cameras, have a thin form factor, and are amenable to direct fabrication while operating over a wide spectral range. The design replaces the lens with a series of coded masks placed almost immediately atop a bare conventional sensor and uses a computational algorithm to recover the image from the sensor measurements. The goal of this project is a modern, signal-processing-based framework for the design, optimization, and development of lens-free and refractive-optics-free imaging systems that will establish the fundamental limits of imaging with such systems. This will revolutionize imaging capabilities and, for the first time, enable thin cameras that can be fabricated directly using traditional semiconductor fabrication tools.
|
0.915 |
2016 — 2020 |
Grimaldi, Phillip Baraniuk, Richard |
Ncs-Fo: Collaborative Research: Operationalizing Students' Textbooks Annotations to Improve Comprehension and Long-Term Retention @ William Marsh Rice University
While traditional textbooks are designed to transmit information from the printed page to the learner, contemporary digital textbooks offer the opportunity to study learners as they interpret and process information being read. With a better understanding of a learner's state of mind, textbooks can make personalized recommendations for further study and review. How can the learner's state of mind be determined? Open a used printed textbook and the answer is clear: students feel compelled to engage with their texts by annotating key passages with highlights, tags, questions, and notes. Despite students' spontaneous desire to annotate as they read, this form of interaction has reaped few educational benefits in the past. At best, highlighted passages are re-read to study for exams, a strategy far less effective than alternatives such as self-quizzing. This project will develop a new methodology that: assesses student knowledge level automatically based on annotations, transforms highlighted passages into appropriate study questions, and provides each student with well-timed, personalized review. Because the project is based on free, peer-reviewed, openly licensed materials from OpenStax that have been widely adopted at a range of institutions, particularly community colleges, the technology will reach beyond elite institutions to provide a broad spectrum of underserved students with access to a potentially powerful learning tool.
This project adopts a big-data approach that involves collecting annotations from a population of learners to draw inferences about individual learners. The project will determine how to exploit these data to model cognitive state, enabling the team to infer students' depth of understanding of facts and concepts, predict subsequent test performance, and perform interventions that improve learning outcomes. A tool will be developed that administers appropriately timed quizzes on material related to a student's highlights. A collaborative-filtering methodology will be employed that leverages population data to suggest specific passages for an individual to review. The proposed tool will reformulate selected passages into review questions that encourage the active reconstruction and elaboration of knowledge. The design and implementation of the tool will be informed by both randomized controlled studies within the innovative OpenStax textbook platform and coordinated laboratory studies. These studies will address basic scientific questions pertaining to why students annotate, how to improve their annotation skills, and techniques to optimize the use of annotations for guiding active review.
|
0.915 |
2018 — 2022 |
Shrivastava, Anshumali Baraniuk, Richard |
Bigdata: F: Collaborative Research: Theory and Practice of Randomized Algorithms For Ultra-Large-Scale Signal Processing @ William Marsh Rice University
The dramatic increases in our abilities to observe massive amounts of measurements coming from distributed and disparate high-resolution sensors have been instrumental in enhancing our understanding of many physical phenomena. Signal processing has been the primary driving force behind extracting this knowledge of the unseen from observed measurements. However, in the last decade, the exponential increase in observations has outpaced our computing abilities to process, understand, and organize this massive but useful information. In this project the investigators plan to blend efficient hashing algorithms with Randomized Numerical Linear Algebra (RandNLA), which can overcome these computational barriers. The project will engage diverse graduate and undergraduate students in computer science, statistics, ECE, and applied mathematics both at UCB and Rice. The efforts of this project will also be utilized to push data science for social good through collaborations with a human rights data analysis group in leveraging hashing algorithms to reduce human efforts in estimating the extent of war crimes. The results of the project will be made available to a wide audience through OpenStax CNX, which will disseminate course materials free-of-charge to anyone in the world and thereby foster the growth of vibrant communities around the subject.
This project will achieve two complementary goals: first, extend the foundations of RandNLA by tailoring randomization directly towards downstream end goals provided by the underlying problem, rather than intermediate matrix approximation goals; and second, use the statistical and optimization insights obtained from these downstream applications to transform and extend the foundations of RandNLA. The investigators will propose and extend several fundamental ideas, including probabilistic hashing, sketching, streaming, sampling, leverage scores, and random projections, to make signal processing significantly more resource-frugal. Precise mathematical quantification of these tradeoffs will be provided.
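As one concrete instance of the random-projection primitive mentioned above (dimensions chosen purely for demonstration), a Gaussian sketch maps vectors from a high-dimensional space to a much smaller one while approximately preserving pairwise distances, which is what lets downstream computations run on the sketch instead of the raw data:

```python
import math
import random

random.seed(0)
d, k = 1000, 200   # ambient and sketch dimensions (illustrative choice)

# Gaussian projection matrix, scaled so expected squared norms are preserved.
R = [[random.gauss(0, 1) / math.sqrt(k) for _ in range(d)] for _ in range(k)]

def project(x):
    """Map a length-d vector to its length-k sketch R @ x."""
    return [sum(r_i * x_i for r_i, x_i in zip(row, x)) for row in R]

def dist(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

x = [random.gauss(0, 1) for _ in range(d)]
y = [random.gauss(0, 1) for _ in range(d)]

# The distance ratio concentrates near 1.0 (Johnson-Lindenstrauss effect).
ratio = dist(project(x), project(y)) / dist(x, y)
```

The concentration improves as the sketch dimension k grows, at the cost of a larger sketch; that accuracy/resource tradeoff is exactly the kind the project aims to quantify.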
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
|
0.915 |
2019 — 2020 |
Heller, H. Craig Genesereth, Michael Vardi, Moshe (co-PI) [⬀] Baraniuk, Richard Grimaldi, Phillip |
Convergence Accelerator Phase I (Raise): Scalable Knowledge Network to Enable Intelligent Textbooks @ William Marsh Rice University
The NSF Convergence Accelerator supports team-based, multidisciplinary efforts that address challenges of national importance and show potential for deliverables in the near future.
Properly educating the STEM leaders of tomorrow requires moving beyond knowledge being transmitted from teacher to learner via paper textbooks and lectures. The broader impact and potential social benefit of this Convergence Accelerator Phase 1 project is to launch the Textbook Open Knowledge Network (TOKN). The TOKN project will be executed by a multidisciplinary team of cognitive scientists, machine learning and artificial intelligence engineers, and educators from Rice University's OpenStax and Stanford University who are committed to creating a library of intelligent textbooks, engaging more partners as the project progresses. To begin, this project will build knowledge graphs that capture the complex relationships between educational concepts. A knowledge graph enables artificial intelligence algorithms to provide enhanced and personalized functionality to intelligent textbooks. Consequently, the intelligent textbook can provide more robust courseware functionality (text, videos, simulations, etc.), learning analytics, and personalized tutoring, such as automatically generating summaries of textbook content, generating useful practice exercises for students, providing interactive dialogues with students to help them better understand and master the underlying source material, and more. Integrating this intelligent technology into the full OpenStax free and open library has the potential to impact academic outcomes for millions of students in both secondary and higher education, while significantly advancing the state of education worldwide.
Intelligent textbooks provide an opportunity to facilitate better learning for students. However, they require major investments of time, money, and expertise. An appropriate knowledge graph is at the heart of an intelligent textbook and is often the biggest challenge to intelligent textbook creation due to the need for human subject experts to develop the semantic connectivity of terms and ideas. TOKN aims to develop new, scalable processes and supporting technologies for generating high-quality and extensible knowledge graphs for intelligent textbooks. The proposed research aims to lower both the cost and time required to produce high-quality knowledge graphs. In contrast to using subject matter experts, this project proposes to use a combination of machine learning algorithms and crowdsourcing of knowledge from students. Crowdsourcing will not only provide data for knowledge graphs, but it will also provide an opportunity to evaluate the pedagogical effectiveness of concept mapping on student learning. Phase 1 of this project will include a proof of concept to construct and validate a knowledge graph for one chapter of OpenStax Biology, a free and open-source text used by more than 30% of students in college biology programs. The overarching goal is to eventually apply this approach at scale during Phase 2 to generate knowledge graphs for the entire OpenStax library of 38 general educational textbooks, transforming them into intelligent open textbooks for society.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
|
0.915 |
2019 — 2022 |
Baraniuk, Richard Lin, Yingyan [⬀] |
Collaborative Research: Enabling Intelligent Cameras in Internet-of-Things Via a Holistic Platform, Algorithm, and Hardware Co-Design @ William Marsh Rice University
There has been a tremendous demand for bringing Deep Neural Network (DNN) powered functionality into Internet of Things (IoT) devices to enable ubiquitous intelligent "IoT cameras". However, state-of-the-art DNNs have a prohibitive energy cost, making them impractical to deploy on resource-constrained IoT platforms. This project will develop a novel energy-efficient DNN framework, via a systematic integration of platform, hardware, and algorithm co-design innovations. Despite a growing interest in energy-efficient DNNs, existing techniques lack a systematic optimization across the full stack of design abstraction, from systems through algorithms to hardware implementation. The proposed research advocates an innovative, holistic effort towards energy-efficient and adaptive DNN-powered "IoT cameras" by jointly optimizing the platform-, hardware-, and algorithm-level co-design efforts. On the system level, we will address how to automatically generate and adapt DNN models and implementations to meet the application-specific performance needs and device-specific resource constraints of a variety of IoT devices. On the hardware level, we will leverage the observed high sparsity in DNN activations for energy-efficient hardware implementations of both DNN training and inference by using low-cost zero predictors to bypass unnecessary computations. On the algorithm level, we will develop innovative factorized sparsity regularization in DNN training as well as efficient, controllable adaptive inference mechanisms, fully complementing and closely integrating with our hardware innovations. The proposed research will advance the science at each level, from systems and algorithms to hardware, and will yield a holistic, systematic cross-level methodology for designing energy-efficient intelligent systems.
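The hardware-level idea of bypassing computations on zero activations can be illustrated with a toy sparse matrix-vector product (sizes and weights invented); real designs use low-cost zero predictors in silicon rather than a software conditional, but the accounting is the same:

```python
def sparse_matvec(W, x):
    """Matrix-vector product that skips zero inputs entirely.
    Returns the result and the number of multiply-accumulates performed."""
    n_out = len(W)
    y = [0.0] * n_out
    macs = 0
    for j, xj in enumerate(x):
        if xj == 0.0:          # zero activation: bypass this whole column
            continue
        for i in range(n_out):
            y[i] += W[i][j] * xj
            macs += 1
    return y, macs

W = [[1.0, 2.0, -1.0, 0.5],
     [0.0, 1.0,  3.0, -2.0]]
x = [0.0, 4.0, 0.0, 1.0]       # 50% sparse, as post-ReLU activations often are
y, macs = sparse_matvec(W, x)  # only half the multiply-accumulates are done
```

With 50% activation sparsity, half of the multiply-accumulate operations (and their associated data movement) are skipped without changing the output.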
Progress on this project will enable ubiquitous DNN-powered intelligent functions in a significantly increased number of resource-constrained daily-life devices, across numerous camera-based Internet-of-Things (IoT) applications such as traffic monitoring, self-driving and smart cars, personal digital assistants, surveillance and security, and augmented reality. As camera-based IoT devices penetrate all walks of life, by enabling DNN-powered intelligence to be pervasive in these devices, the proposed research can have a tremendous impact on global societies and economies. The research will be integrated with education on energy efficient deep learning. Educational activities include curriculum development, undergraduate research, and outreach to K-12 students.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
|
0.915 |
2019 — 2022 |
Baraniuk, Richard Lin, Yingyan [⬀] |
Rtml: Large: Collaborative: Harmonizing Predictive Algorithms and Mixed-Signal/Precision Circuits Via Computation-Data Access Exchange and Adaptive Dataflows @ William Marsh Rice University
Recent advances in machine learning are fueling a growing demand for intelligent Internet of Things (IoT), i.e., edge network applications. Many of them, such as autonomous vehicles, robots, and healthcare wearables, require real-time and in-situ learning to be perceived as truly intelligent. However, the limited computing and energy resources available at the edge device (e.g., mobile devices, sensors) stand at odds with the massive and growing cost of state-of-the-art machine learning training, posing a grand challenge for real-time machine learning (RTML) at the edge. The goal of this project is to foster a systematic breakthrough in achieving efficient online training of state-of-the-art machine learning algorithms in pervasive resource-constrained platforms and applications. An order of magnitude advance in RTML would enable numerous edge devices to proactively interpret and learn from new data, improve their own performance using what they have learned, and adapt to dynamic environments, all in real time. Success in this project will enable truly intelligent edge devices to penetrate all walks of life and thus generate significant impacts on societies and economies. This project will lead to new courses and open-education resources that can attract diverse groups of students and eventually deliver a platform for inclusion and innovation.
The project addresses the RTML grand challenge using a three-pronged 'co-design' approach that seamlessly integrates algorithm, architecture, and circuit-level innovations. Specifically, at the algorithm level, an efficient training framework for RTML, for which trained models are also natively efficient for inference, will be established. Aggressive time and energy reductions can be achieved, at first by improving general training techniques, and then by focusing particularly on online learning and adaptation. At the architecture level, the project will first target reducing the high cost of data movement by trading it for lower-cost computation, and then generate optimal dataflows and hardware architectures to maximize the joint benefits of algorithms and hardware. At the circuit level, the project will leverage adaptive low-precision algorithms and architectures to design ultra-energy-efficient mixed-signal compute fabrics. Statistical computing techniques will be incorporated to demonstrate efficient, scalable, and robust machine learning chips. Finally, at the system level, an integration effort will be included to aid the realization of realistic system goals and to evaluate the innovations of the three core thrusts.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
|
0.915 |
2019 — 2022 |
Baraniuk, Richard |
Cif: Small: a Probabilistic Theory of Deep Learning Via Spline Operators @ William Marsh Rice University
Deep learning has significantly advanced the ability to address a wide range of difficult machine perception tasks, such as recognizing objects from images, activities from videos, or text from speech. As a result, deep learning systems not only are playing a key role in emerging products and services, from conversational assistants to driverless vehicles, but they are also revolutionizing existing ones, from robotics to legal document analysis. Moreover, in the scientific realm, deep learning is enabling new ways to find patterns in large complicated datasets. This success is impressive, but a fundamental question remains: Why does deep learning work? Intuitions abound, but a coherent framework for understanding, analyzing, and designing deep learning architectures has remained elusive. This project will develop a theoretical foundation for deep learning systems by connecting them to classical and recent results from signal processing, approximation theory, information theory, and statistics. A key goal is the development of new kinds of deep learning systems whose inner workings are explainable and interpretable. This project will have a range of impacts, from developing trustworthy, interpretable models and algorithms for mission-critical applications like autonomous navigation and decision making to advancing machine learning and signal processing education.
This project builds on an elegant connection between a wide class of deep (neural) networks based on piecewise-affine, convex nonlinearities and max-affine spline operators (MASOs). The research is organized around two interlocking themes. The first theme revolves around the extension of the MASO framework beyond piecewise-affine, convex nonlinearities by linking deterministic MASOs with probabilistic Gaussian mixture models. The extended, probabilistic MASO will enable the analysis of deep networks with more general nonlinearities than those that are piecewise-affine and convex, such as the sigmoid, hyperbolic tangent, and softmax. The second theme revolves around extending deterministic MASO deep networks to a new class of hierarchical, probabilistic, generative models that generalize the feedforward inference calculations and backpropagation learning of conventional deep networks to optimal Bayesian inference via a closed-form variational expectation-maximization (EM) algorithm. The probabilistic structure will enable the full arsenal of probability and statistics methodology to be applied to deep learning.
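The core MASO observation can be checked numerically in a few lines (toy weights, chosen for illustration): a ReLU unit computes max(0, w·x + b), which is exactly the maximum of two affine functions of x (the zero function and w·x + b), i.e., a max-affine spline.

```python
def relu_unit(w, b, x):
    """A single ReLU neuron: max(0, w . x + b)."""
    return max(0.0, sum(wi * xi for wi, xi in zip(w, x)) + b)

def max_affine(params, x):
    """Max-affine spline: the maximum over affine pieces (a, c) of a . x + c."""
    return max(sum(ai * xi for ai, xi in zip(a, x)) + c for a, c in params)

w, b = [1.5, -2.0], 0.5
pieces = [([0.0, 0.0], 0.0),   # the zero affine piece
          (w, b)]              # the unit's own affine piece

# The two formulations agree at every test input.
xs = [[0.3, 0.1], [-1.0, 2.0], [2.0, -0.5]]
agree = all(abs(relu_unit(w, b, x) - max_affine(pieces, x)) < 1e-12
            for x in xs)
```

Composing layers of such units yields the MASO view of an entire network; the probabilistic extension described above is what admits non-piecewise-affine nonlinearities like the sigmoid.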
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
|
0.915 |
2020 — 2022 |
Baraniuk, Richard Drezek, Rebekah [⬀] |
Nonspecific Dna Sensors For Scalable Pathogen Diagnostics @ William Marsh Rice University
The rapid identification of pathogens is essential for clinical diagnostics and biodefense. Given timely information, clinicians can prescribe better treatments and slow the rise of drug-resistant bacteria, and U.S. biosurveillance efforts can track the emergence and spread of both existing and newly evolved pathogens. While most rapid diagnostic tests have sensors that are each specialized to detect only one kind of pathogen, this research effort is developing new DNA sensing techniques to efficiently detect large numbers of pathogens with just a few sensors. By minimizing the number of necessary sensors, new pathogen surveillance devices could be portable and inexpensive for routine use. The technology additionally provides an approach to detecting emerging novel pathogens without the need to create and deploy new test kits. In addition to furthering development of new pathogen identification technologies, the project develops a hands-on experimental module exploring this methodology and implements it in a biosensing and imaging course serving undergraduate and graduate engineering students at Rice University.
This research effort applies compressed sensing, which allows just a few DNA probes to (1) give each species a fingerprint response and (2) unmix these fingerprint signals when a few pathogens are in the same sample. Compressed sensing assumes that the sample is sparse. Sparsity occurs because although there may be dozens or hundreds of possible species to account for, any single sample from a patient or the environment will have only a few pathogens of interest for relevant applications. For pathogen detection, DNA probes each bind multiple times to microbial species. The number of binding events between a few probes and the microbes gives each of hundreds of species a "fingerprint" response, much like how 10 digits can give everyone in the U.S. a unique phone number. The first aim of this project is to develop a new DNA signal amplification technique that allows fewer binding events to be detectable. This amplification strategy also lets signals be positive or negative, effectively allowing fingerprint signals to spread further apart. The enhanced incoherence of the sensors is known to be very helpful for compressed sensing. The second aim is to use new technologies in microfluidics to capture individual pathogen cells in very small droplets and acquire fingerprint signals from each droplet separately. By analyzing every droplet, this project pushes towards single-cell resolution which would enable its broader application in any task that demands the characterization of unknown microbes.
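A deliberately simplified, single-pathogen version of the fingerprint idea (species names real, binding counts entirely invented): three nonspecific probes are enough to distinguish six species, because each species produces a distinct response pattern across the probe panel. The full compressed-sensing machinery generalizes this to unmixing several species at once.

```python
# Hypothetical binding-count fingerprints: responses of 3 nonspecific
# probes to each of 6 candidate species. Far fewer sensors than species.
fingerprints = {
    "E. coli":          [3, 0, 1],
    "S. aureus":        [0, 2, 2],
    "P. aeruginosa":    [1, 3, 0],
    "K. pneumoniae":    [2, 2, 1],
    "S. enterica":      [0, 1, 3],
    "L. monocytogenes": [3, 1, 0],
}

def identify(measurement):
    """Return the species whose fingerprint is closest (least squares)
    to the noisy probe measurement."""
    def err(fp):
        return sum((m - f) ** 2 for m, f in zip(measurement, fp))
    return min(fingerprints, key=lambda sp: err(fingerprints[sp]))

noisy = [0.9, 3.2, 0.1]   # a measurement near one species' pattern
hit = identify(noisy)
```

The phone-number analogy in the abstract is exactly this: a small number of shared digits (probes) assigns every entry (species) a unique code.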
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
|
0.915 |