1980 — 1983 |
Smarr, Larry |
Numerical, Theoretical, and Astrophysical Relativity @ University of Illinois At Urbana-Champaign |
0.945 |
1982 — 1984 |
Smarr, Larry |
Computer Graphics Data Management Facility @ University of Illinois At Urbana-Champaign |
0.945 |
1983 — 1988 |
Smarr, Larry |
Numerical, Theoretical, and Astrophysical Relativity (Physics) @ University of Illinois At Urbana-Champaign |
0.945 |
1985 — 1990 |
Smarr, Larry |
Phase II: Supercomputer Center @ University of Illinois At Urbana-Champaign |
0.945 |
1990 — 1997 |
Smarr, Larry |
The National Center For Supercomputing Applications @ University of Illinois At Urbana-Champaign
This award supports the renewal of the cooperative agreement between the National Science Foundation and the National Center for Supercomputing Applications (NCSA). Dr. Larry Smarr is the Director of the Center. NCSA is one of the national supercomputer centers funded by the Division of Advanced Scientific Computing and provides supercomputer access to scientific researchers as well as training/educational and other related scientific services. The effective date of the agreement is October 1, 1990. This is an administrative action.
|
0.945 |
1990 — 1993 |
Smarr, Larry |
Massively Parallel Computing At the National Center For Supercomputing Applications (NCSA) @ University of Illinois At Urbana-Champaign
Two areas that need further enhancement in the parallel computing effort of the National Center for Supercomputing Applications are the creation of a balanced hardware environment that will facilitate greater user access and system throughput, and consulting support for the migration of users to the newly acquired Connection Machine. This award will provide hardware to increase the communication bandwidth and to allow a greater number of users to access the machine. It also provides for staff/consultants who can assist users.
|
0.945 |
1990 — 1992 |
Smarr, Larry; Lander, Eric; Hardin, Joseph |
Exploratory Research and Initial Development of Software For the Analysis of Multiple Hybridization Images @ University of Illinois At Urbana-Champaign
A collaboration will be established between the National Center for Supercomputing Applications at the University of Illinois, under the direction of Dr. Larry Smarr, and the biological research group at the Whitehead Institute for Biomedical Research at Cambridge, Massachusetts, under the direction of Dr. Eric Lander. This collaboration will work toward developing a software package for analyzing clone fingerprint data used for constructing physical maps of genomes. Specifically, the project will involve extending the NCSA GelReader software (already developed by NCSA) so that it will be capable of combining information from multiple hybridization images of the same blot to obtain much more accurate information about fragment sizes, thereby allowing NCSA GelReader to be used for physical mapping projects.
|
0.945 |
1990 — 1992 |
Smarr, Larry; Sabelli, Nora |
CS&E Education: Computational Workshop On the Role of Supercomputer Centers in Science Education, October 28-31, 1990 @ University of Illinois At Urbana-Champaign |
The purpose of the workshop is to discuss ways in which the NSF supercomputer centers can promote computational science and engineering in the education of pre-college, college and university students. The target audiences for the workshop recommendations will be educators and their institutions, high performance computing centers and funding bodies.
|
0.945 |
1993 — 1997 |
Schowalter, William; Smarr, Larry |
Curriculum Development in Computational Science and Engineering @ University of Illinois At Urbana-Champaign
Award 9315536 (Schowalter). This award of $190,000 is for curriculum development in computational science and engineering. The focus is on developing new computational science and engineering courses and transforming prototype applications into instructional tools. The curriculum to be developed as a part of this effort includes both an application-independent course and a set of application-specific courses. The new application-independent course will contain material that is common to the use of computational methods in science and engineering, including the architectures of high-performance computers, programming paradigms, and interactive visualization. The application-specific courses will include characterization of the application, numerical and symbolic methods, and performance measurement for application code.
|
0.945 |
2002 — 2009 |
Smarr, Larry |
ITR: The OptIPuter @ University of California-San Diego
The OptIPuter project explores a new architecture for the distributed information infrastructure (which NSF terms infostructure) required by a number of this decade's science and engineering shared facilities. The project is driven by a close collaboration with leaders of two of these community systems, NSF's EarthScope and NIH's Biomedical Imaging Research Network (BIRN), both of which are beginning to produce an accelerating flood of data which will be stored in distributed federated data repositories. One characteristic blocking such science is that the individual data objects (a 3D brain image or a terrain dataset) are large (Gigabytes) compared to what can be interactively manipulated or visualized over today's networks. What these scientists require are ultra-high-speed predictable "clear-channel" networks linking PC clusters, storage and visualization systems, enabling collaborating scientists to explore interactively massive amounts of previously uncorrelated data.
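To make the data-size argument concrete, here is a minimal back-of-envelope sketch (illustrative object sizes and line rates, not figures taken from the award) of how long a gigabyte-scale data object takes to move at shared-Internet rates versus over a dedicated 10 Gbps lambda:

```python
# Illustrative only: transfer time for a multi-gigabyte data object at several line rates.
def transfer_seconds(size_gigabytes: float, rate_gbps: float) -> float:
    """Seconds to move size_gigabytes (gigabytes) at rate_gbps (gigabits per second)."""
    return size_gigabytes * 8.0 / rate_gbps

object_gb = 5.0  # e.g. a hypothetical 3D brain volume or terrain tile
for rate_gbps in (0.1, 1.0, 10.0):  # congested shared path, campus GigE, one dedicated lambda
    print(f"{rate_gbps:5.1f} Gb/s -> {transfer_seconds(object_gb, rate_gbps):7.1f} s")
```

At dedicated-lambda rates the same object moves in a few seconds rather than minutes, which is what makes interactive exploration of such data plausible.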
An important opportunity exists over the next few years to develop a radical new architecture for this needed scientific infostructure. Observing that the exponential growth rates in bandwidth and storage are now much higher than Moore's Law, this research "goes to the end of the rainbow" to exploit a new world in which the central architectural element is optical networking, not computers. This transition is caused by the use of parallelism, as in supercomputing a decade ago. However, this time the parallelism is in multiple wavelengths of light, or lambdas, on single optical fibers, creating supernetworks. The OptIPuter project aims at the re-optimization of the entire Grid stack of software abstractions, learning how, as George Gilder suggests, to "waste" bandwidth and storage in order to conserve "scarce" computing in this new world of inverted values.
Such a period of technological paradigm shifting requires large-scale application-driven system experiments and a broad multidisciplinary team to understand and develop innovative solutions for a "LambdaGrid" world. Smaller scale efforts are unlikely to produce the radical changes needed to seize the opportunity. Without tackling this research arena, more traditional approaches to developing e-Science infostructure will instead be widely adopted, but at what ultimate cost to the country in our loss of scientific leadership in the long term? Our approach is not without risk; however, with these exponentials crossing, and therefore a technologically driven reordering of infostructure architectural considerations inevitable, an exciting research opportunity is now made possible.
Essentially, the OptIPuter is a "virtual" parallel computer in which the individual "processors" are widely distributed clusters; the "backplane" is provided by IP delivered over multiple dedicated lambdas (each 1-10 Gbps); and, the "mass storage systems" are large distributed scientific data repositories, fed by scientific instruments as OptIPuter peripheral devices, operated in near real-time. Furthermore, collaboration will be a defining OptIPuter characteristic; goals include implementing a next-generation Access Grid with optical multicast, enabling tiled stereo HDTV screens of reality-matching visual resolution. The OptIPuter is an embodiment of the vision of the "hollowing out of the computer" prophesied by Eric Schmidt in the mid-1990s. The old "computer-in-a-box" is being blown up and scattered across the Net. The OptIPuter's fundamental inventions include software and middleware abstractions to deliver unique capabilities in a lambda-rich world, a world in which endpoint-delivered bandwidth is greater than individual computers can saturate.
This research on campus and metro-scale OptIPuters complements State (CENIC/ONI in California, I-WIRE in Illinois), national (TeraGrid), and international (StarLight) projects. The six-university OptIPuter research team spans applications, software, networking and hardware expertise to address the challenges. This team has a long history of successfully managing large multidisciplinary and multi-institutional projects, working with companies, and delivering high-impact innovations in hardware and software systems. Since industry is developing key subcomponents of the OptIPuter, several leading companies are involved as partners. The new architectural models and the software systems to make them function should contribute important capabilities to other large-scale Federally funded networked science facilities. Going beyond research, the collaborative visualization data-fusion OptIPuter centers coupled to remote instrumentation will be enabling technologies for broader societal needs, including emergency response, homeland security, health services, and science education.
|
1 |
2003 — 2005 |
Smarr, Larry; Seible, Frieder (co-PI); Rao, Ramesh |
DG: The New Role of Science and Engineering in Risk Reduction @ University of California-San Diego
In February 2002, NSF funded a successful workshop in New York City, inspired by the events of September 11, to develop a research agenda related to unexpected events. Now, one year later and with the Department of Homeland Security established, the topic of homeland security is much better defined as it relates to the Federal Government and NSF. NSF is now engaged in developing a large cross-agency initiative in Cyber-Infrastructure. The workshop being funded by this grant is intended to bring the two areas of cyber-infrastructure and homeland security into juxtaposition, with the goal of developing a computer/information science research agenda within the context of homeland security, as related to cyber-infrastructure.
|
1 |
2004 — 2010 |
Smarr, Larry; Orcutt, John |
ITR: Collaborative Research: Looking Ahead: Designing the Next Generation Cyber-Infrastructure to Operate Interactive Ocean Observatories @ University of California-San Diego Scripps Inst of Oceanography
This project involves research and experimentation with cyberinfrastructure components in order to develop and prototype new approaches to information technology for ocean observing systems. The goal is to develop and test ways of managing data from ocean observatories as well as mechanisms for facilitating the operation of ocean observatories. A centerpiece of the work is the initiation of a prototype ocean observatory grid that links submarine facilities off the Pacific coasts of Mexico, the U.S. and Canada, the "Laboratory for an Ocean Observatory Knowledge INtegration Grid (LOOKING)," using experimental wireless networks, optical networks, and Grid technology. The LOOKING framework provides middleware to facilitate and enable instrument and infrastructure control, data generation and distributed storage, data assimilation through comparison with ocean simulations, integrative data fusion and analysis, visualization, and spontaneous user collaboration. The effort needed to develop this involves three parts: a physical layer based primarily on network connectivity provided by the OptIPuter and ROADNet projects; research on shore-side software technology based on web-service based implementations of a variety of tools including ocean ontology-based semantic search tools, event-detection in real-time data streams, the integration of OPeNDAP and SRB, and data visualization; and ocean-side research focused on instrument monitoring and control, as well as data-flow management. As part of the proposed work, a converged data grid model for the MARS and ROADNet data systems will be developed, data from CODAR sites, ships, cabled observatories and surface buoys will be integrated into the system, and a variety of user services will be implemented. The expectation is that lessons learned from this activity will inform the design of cyberinfrastructure systems for future ocean observatories such as those envisaged within the ORION project. If successful, this project could have a significant impact on the success of the planned ORION ocean observatory project by developing data management and observatory control approaches that can be implemented in ORION and by developing effective user interfaces to a collection of ocean observing systems. Research challenges include developing a software infrastructure suitable for heterogeneous real-time data streams, monitoring and control in non-robust networks, and real-time event detection and characterization. This project involves numerous institutions and combines the efforts of computer science researchers and oceanographers. It leverages, or links to, a number of existing or funded pilot observatory, networking and data system projects including: SCCOOS, MARS, NEPTUNE-Canada, ROADNet, and OptIPuter. New approaches to ocean observing systems involve large arrays of sensors and many mobile, robotic platforms for research and educational activities throughout the ocean basins. Innovative full-ocean-depth observatories in coastal, regional, and global settings can vastly expand human tele-presence within, and the basic understanding of, interlinked processes that modulate climate, create energy and mineral deposits, influence the carbon cycle, drive major hazards, such as earthquakes, tsunamis and violent storms, and support the recently discovered volcanically hosted microbial biosphere, a potential analog for life on other planets.
|
1 |
2004 — 2008 |
Smarr, Larry; Fainman, Yeshaiahu (co-PI); Papadopoulos, Philip; Ford, Joseph (co-PI) |
MRI: Development of Quartzite, a Campus-Wide, Terabit-Class, Field-Programmable, Hybrid Switching Instrument For Comparative Studies @ University of California-San Diego
This project, building a campus-wide ultra-high-speed optical fiber network that supports scientific applications and experiments involving high volumes of data, develops an experimental next-generation instrument to efficiently investigate and compare campus-scale terabit-class lambda network architectures that span from optical-circuits-only to packet-switched-only networks (and a range of hybrid combinations in between). Current commercial approaches to storage systems do not scale in either performance levels or data abstractions. The proposed approach builds on the foundation of the shared-nothing compute cluster emerging from data systems, visualization walls, and high-end instrument interfaces, which has the raw horsepower to serve and ingest the high volumes of data required by applications. By constructing a next-generation switch that can simultaneously switch 10 Gb/s streams efficiently, the work aims at building a 21st-century photonic instrument to explore the practical tradeoffs of network and application design in a bandwidth-rich infrastructure. Supporting large scientific problems and enabling big simulations, the project constructs Quartzite, the experimental next-generation instrument. While fostering comparative studies, Quartzite, a data-intensive application breadboard, enables resources to be stitched together and brought in virtually. Thus, this wavelength-selective switch creation, communication, and delivery project adds a hybrid-networking structure to a unique campus-scale platform, enabling the study of network architecture and application design in a bandwidth-rich infrastructure and the sharing of large data sets across clusters. The work involves high risk with a promise of even higher impact, since data-intensive scientific exploration can be brought into the scientist's lab by using on-demand high-speed data flows to harness campus- to international-scale resources. The work explores the following issues: how a surplus of on-demand bandwidth can be exploited by end-user applications, how distributed systems can best be architected, when a non-shared packet network is needed, how control of a hybrid fabric should be handled, whether applications can truly exploit a high-speed parallel infrastructure, whether dynamically reconfiguring a campus network to meet transient capacity demands is practical, whether it is beneficial to expose direct circuits to individual endpoints, and whether novel packet scheduling strategies for shared links dramatically improve capacity. Broader Impact: The Quartzite-enabled comparisons will influence the network structure of future research university networks, greatly increasing the capability for data-intensive research throughout the country. Working with industrial partners, the project will make the hybrid Quartzite core system and software broadly available.
|
1 |
2006 — 2009 |
Smarr, Larry L. |
P41 |
Community Cyberinfrastruct For Adv Mar Micro Ecol Res &Analy @ University of California San Diego |
0.958 |
2008 — 2012 |
Smarr, Larry; Vahdat, Amin (co-PI); Papadopoulos, Philip; Krueger, Ingolf (co-PI); DeFanti, Thomas |
MRI: Development of Instrumentation For Project GreenLight @ University of California-San Diego
Proposal #: CNS 08-21155. PI(s): DeFanti, Thomas A.; Krueger, Ingolf H.; Papadopoulos, Philip M.; Smarr, Larry L.; Vahdat, Amin M. Institution: University of California-San Diego, La Jolla, CA 92093-0934. Title: MRI/Dev.: Development of Instrumentation for Project GreenLight. Project Proposed: This project, developing an instrument called GreenLight, measures, monitors, and optimizes the energy consumption of large-scale scientific applications from many different areas. The work enables interdisciplinary researchers to understand how to make "green" (i.e., energy-efficient) decisions for IT computation and storage. Consequently, an experienced team might be able to make deep and quantitative explorations in advanced architecture, including alternative circuit fabrics such as Field Programmable Gate Arrays (FPGAs), direct-graph execution machines, graphics processors, solid-state disks, and photonic networking. The enabled computing and systems research will yield new quantitative data to support engineering judgments on comparative "computational work per watt" across full-scale applications running on at-scale computing platforms, thus helping to re-define fundamentals of systems engineering for a transformative concept, that of green CyberInfrastructure (CI). Keeping in mind that the IT industry consumes as much energy (same carbon footprint) as the airline industry, this project enables five communities of application scientists, drawn from metagenomics, ocean observing, microscopy, bioinformatics, and digital media, to understand how to measure and then minimize energy consumption, to make use of novel energy/cooling sources, and to employ middleware that automates the optimal choice of compute/power strategies. The research issues addressed include studying the dynamic migration of applications to virtual machines for power consumption reduction, studying the migrations of virtual machines to physical machines to achieve network locality, developing new power/thermal management policies (closed loop, using feedback from sensors), classifying scientific algorithms in the context of co-processing hardware such as GPUs and FPGAs, and developing algorithms for resource sharing/scheduling in heterogeneous platforms. The full-scale virtualized device, the GreenLight Instrument, will be developed to measure, monitor, and make publicly available (via service-oriented architecture methodology) real-time sensor outputs, empowering researchers anywhere to study the energy cost of at-scale scientific computing. Hence, this work empowers domain application researchers to continue to exploit exponential improvements in silicon technology, and to compete globally. Although the IT industry has begun to develop strategies for "greening" traditional data centers, the physical reality of modern campus CI currently involves a complex network of ad hoc and suboptimal energy environments in departmental facilities. The number of these facilities is increasing extremely fast, creating a campus-wide crisis of space, power, and cooling, because of the value of computational and data-intensive approaches to research. This project addresses these important issues and offers the possibility of improvement.
Broader Impacts: The project enables researchers to carry out quantitative explorations into energy-efficient CyberInfrastructure (CI) and to train the next generation of energy-aware scientists. It enlists graduate students from five disciplinary projects, involves minority-serving institutions, and is likely to have direct impact on commercial components of the nation's CI.
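As a rough illustration of the "computational work per watt" figure of merit described above, the following sketch (hypothetical sensor readings and operation counts, not the GreenLight instrument's actual interface) derives it from a sampled power trace and an application's reported operation count:

```python
# Hypothetical data: 1 Hz power readings from a metered rack and an application's op count.
from statistics import mean

power_samples_watts = [312.0, 340.5, 355.2, 348.9, 330.1]   # one reading per second
elapsed_seconds = len(power_samples_watts)
total_operations = 2.4e12                                     # reported by the application run

energy_joules = mean(power_samples_watts) * elapsed_seconds   # average power x time
work_per_joule = total_operations / energy_joules             # "work per watt-second"
print(f"energy = {energy_joules:.0f} J, work per watt = {work_per_joule:.2e} ops/J")
```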
|
1 |
2011 — 2012 |
Smarr, Larry; Braun, Hans-Werner |
EAGER: Wildfire Modeling and Prevention Initiative: Developing a Technical Framework For Integrating Research With Public Policy Decision Support @ University of California-San Diego
Wildfires are increasing in both absolute number and severity in the American southwest, and this trend is predicted to continue over decades to come. Therefore, the need for descriptive and predictive simulation tools to support wildfire prevention, or suppression during future wildfire events, is becoming critical. As a first step, this project aims to develop a technical framework for integrating three-dimensional landscape models, real-time environmental data, a suite of simulation codes, and wildfire management protocols. This research will involve determining how best to merge elevation and ground classification datasets, couple fire propagation, atmospheric, and hydrologic simulation codes, and verify the accuracy of the coupled computations against historical wildfire data. Key components in the development of the technical framework include: 1) Identifying and obtaining access permissions to the wide variety of datasets needed to create the high-resolution digital model of the topography and landscape of San Diego County. 2) Investigating how these datasets can be seamlessly "sewn together" using GIS software systems. The integration methodology being developed will be investigated using an unburned area of San Diego County's Santa Margarita Ecological Reserve, which is an ideal rapid prototyping and validation site for this project. It is prone to the strong Santa Ana wind events we wish to study and is already equipped with intensive real-time wireless ground sensors connected through the NSF's High Performance Wireless Research and Education Network (HPWREN).
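As a sketch of the "sewing together" step, the following toy example (synthetic arrays and assumed 10 m / 30 m cell sizes, not the project's actual GIS workflow) aligns a coarse land-classification grid with a finer elevation grid so the two layers can be fused into one landscape model:

```python
# Synthetic example: fuse a 30 m land-cover grid with a 10 m elevation grid.
import numpy as np

elevation = np.random.rand(300, 300) * 1200.0          # 10 m cells, meters above sea level
landcover = np.random.randint(0, 5, size=(100, 100))   # 30 m cells, 5 vegetation classes

# Nearest-neighbor upsample of the 30 m classes onto the 10 m elevation grid.
scale = elevation.shape[0] // landcover.shape[0]        # = 3
landcover_10m = np.repeat(np.repeat(landcover, scale, axis=0), scale, axis=1)

fused = np.dstack([elevation, landcover_10m.astype(float)])  # shape (300, 300, 2)
print(fused.shape)
```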
|
1 |
2013 — 2017 |
Smarr, Larry; Braun, Hans-Werner; De Callafon, Raymond (co-PI); Altintas, Ilkay |
Hazards SEES Type 2: WIFIRE: A Scalable Data-Driven Monitoring, Dynamic Prediction and Resilience Cyberinfrastructure For Wildfires @ University of California-San Diego
The University of California at San Diego (UCSD) is awarded a grant to build an end-to-end cyberinfrastructure (CI), called WIFIRE, for real-time and data-driven simulation, prediction and visualization of wildfire behavior. The San Diego Supercomputer Center (SDSC) and Calit2's Qualcomm Institute will build this integrated CI system for supporting social-ecological resilience to the changing fire ecology regime in the face of urban dynamics and climate change, together with collaborators from UCSD's Jacobs School of Engineering and UMD's Department of Fire Protection Engineering. The WIFIRE CI integrates networked observations, e.g., heterogeneous satellite data and real-time remote sensor data, with computational techniques in signal processing, visualization, modeling and data assimilation to provide a scalable, technological, and educational solution to monitor weather patterns to predict a wildfire's Rate of Spread. Our collaborative WIFIRE team of scientists, engineers, technologists, government policy managers, private industry, and firefighters will architect and implement CI pathways that enable joint innovation for wildfire management. Scientific workflows will be used as an integrative distributed programming model and will simplify the implementation of engineering modules for data-driven simulation, prediction and visualization while allowing integration with large-scale computing facilities. WIFIRE will be scalable to users with different skill levels via specialized web interfaces and user-specified alerts for environmental events broadcast to receivers before, during and after a wildfire. The approach is scalable, allowing many sensors to be subjected to user-specified data processing algorithms to generate threshold alerts within seconds. Integration of this sensor data into both rapidly available fire image data and models will better enable situational awareness, responses and decision support at local, state, national, and international levels.
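A minimal sketch of the user-specified threshold alerting described above (the sensor record format and threshold are hypothetical, not WIFIRE's actual schema):

```python
# Hypothetical sensor records: emit an alert whenever a station's wind gust crosses a threshold.
from typing import Iterable, Iterator

def gust_alerts(readings: Iterable[dict], threshold_mps: float = 20.0) -> Iterator[str]:
    for r in readings:
        if r["gust_mps"] >= threshold_mps:
            yield f"ALERT {r['station']}: gust {r['gust_mps']:.1f} m/s at {r['time']}"

stream = [
    {"station": "SMER-1", "time": "12:00:05", "gust_mps": 14.2},
    {"station": "SMER-2", "time": "12:00:06", "gust_mps": 23.7},
]
for alert in gust_alerts(stream):
    print(alert)
```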
The products of WIFIRE will be initially disseminated to project collaborators (SDG&E, CAL FIRE, USFS), covering academic, private, and government laboratories while generating value to emergency officials, and consequently to the general public. WIFIRE may be used by government agencies in the future to save lives and property during wildfire events, test the effectiveness of response and evacuation scenarios before they occur and assess the effectiveness of high-density sensor networks in improving fire and weather predictions. WIFIRE's high-density network, therefore, will serve as a testbed for future applications worldwide. The team is inclusive across a spectrum of collaborators and will create an open-source CI environment with intuitive workflows that lead to reusable software components for a wide range of science and engineering disciplines that can be extended to secondary education. Results are disseminated via an interactive website at SDSC in which students from high school to graduate level can participate in uploading their own data logging, data processing or data-driven alerts.
|
1 |
2013 — 2015 |
Smarr, Larry; Papadopoulos, Philip |
CC-NIE Network Infrastructure: Prism@UCSD: A Researcher-Defined 10 and 40 Gbit/s Campus-Scale Data Carrier @ University of California-San Diego
Prism@UCSD creates a campus-wide "Big Data freeway" composed of high-bandwidth end-to-end optical connections routed by a next-generation Arista switch. This creates an optical fabric capable of more than 10 Terabit/s of aggregate bandwidth, with full bisection bandwidth similar to that of in-machine-room clusters, but deployed at a campus scale. This researcher-defined network unites users of in-lab scientific instruments such as genome sequencers and microscopes with remote compute, visualization, data-storage and analysis systems. Prism bridges to, augments, and protects the existing campus production network by providing a complementary, specialized, cost-effective, massive-capacity network to a targeted group of data-intensive labs.
Prism builds upon and upgrades the Quartzite "campus-scale network laboratory" NSF MRI (awarded 2006) that was motivated by applications with extreme-scale bandwidth requirements. Compared to Quartzite, Prism not only adds IPv6 capability and support for software defined networks via OpenFlow, but also increases port capacity by 4x, lowers power consumption by 3x, and removes all card-to-switch-backplane over-subscription at the core switch. In addition, the existing optical fiber connection to the San Diego Supercomputer Center is being expanded to 120Gbps as a high-bandwidth bridge to cloud/parallel storage and NSF XSEDE resources. This fundamentally enables research in multiple disciplines, including physics, chemistry, biology, climate change, oceanography, and computer science to address big-data challenges.
Workshops will be held with regional optical networks and EDUCAUSE to disseminate experience to research campuses and fully describe this cost-effective, expandable and replicable infrastructure. In addition, a summer workshop aimed at minority serving institutions will build on Calit2 / SDSC's tradition of diversity outreach. Prism will form a model for others to follow in building their own big-data transportation systems.
|
1 |
2015 — 2016 |
Smarr, Larry |
Towards Deploying a Science-Driven Regional Interoperable Science DMZ @ University of California-San Diego
High-performance networking is a key enabler of data-intensive science, as researcher collaborations and the sharing/accessibility of data require large, fast transfers of data on a national and international scale. This workshop focuses on regional-level interoperability of campus "Science DMZs", so that researchers can move data to/from their labs and their collaborators, national supercomputer centers or data repositories, traversing multiple, heterogeneous Science DMZs without performance degradation. This capability leverages substantial investments by NSF, DOE and regional networks, and will advance data-intensive scientific discovery by advancing collaborations, making data readily accessible to large communities, and increasing researcher productivity.
It has always been a technical challenge to achieve high-speed networking performance between end-points where researchers typically work (instruments, computers, labs). The Science DMZ concept is a major advance in removing these roadblocks within a campus, and interoperability between heterogeneous Science DMZs is the next logical step in successfully addressing performance at both ends. This workshop is to address interoperability challenges by focusing near-term on a few specific scientific applications at a regional level, while laying the foundation for extensibility to other applications and institutions.
|
1 |
2015 — 2020 |
Smarr, Larry; Papadopoulos, Philip; Wuerthwein, Frank (co-PI); DeFanti, Thomas; Crittenden, Camille |
CC*DNI DIBBS: The Pacific Research Platform @ University of California-San Diego
Research in data-intensive fields is increasingly multi-investigator and multi-institutional, depending on ever more rapid access to ultra-large heterogeneous and widely distributed datasets. The Pacific Research Platform (PRP) is a multi-institutional extensible deployment that establishes a science-driven high-capacity data-centric 'freeway system.' The PRP spans all 10 campuses of the University of California, as well as the major California private research universities, four supercomputer centers, and several universities outside California. Fifteen multi-campus data-intensive application teams act as drivers of the PRP, providing feedback to the technical design staff over the five years of the project. These application areas include particle physics, astronomy/astrophysics, earth sciences, biomedicine, and scalable multimedia, providing models for many other applications.
The PRP builds on prior NSF and Department of Energy (DOE) investments. The basic model adopted by the PRP is 'The Science DMZ,' being prototyped by the DOE ESnet. (A Science DMZ is defined as 'a portion of the network, built at or near the campus local network perimeter that is designed such that the equipment, configuration, and security policies are optimized for high-performance scientific applications rather than for general-purpose business systems'). In the last three years, NSF has funded over 100 U.S. campuses through Campus Cyberinfrastructure - Network Infrastructure and Engineering (CC-NIE) grants to aggressively upgrade their network capacity for greatly enhanced science data access, creating Science DMZs within each campus. The PRP partnership extends the NSF-funded campus Science DMZs to a regional model that allows high-speed data-intensive networking, facilitating researchers moving data between their laboratories and their collaborators' sites, supercomputer centers or data repositories, and enabling that data to traverse multiple heterogeneous networks without performance degradation over campus, regional, national, and international distances. The PRP's data sharing architecture, with end-to-end 10-40-100Gb/s connections, provides long-distance virtual co-location of data with computing resources, with enhanced security options.
|
1 |
2017 — 2020 |
Smarr, Larry; Rosing, Tajana (co-PI); Altintas, Ilkay; DeFanti, Thomas; Kreutz-Delgado, Kenneth (co-PI) |
CI-New: Cognitive Hardware and Software Ecosystem Community Infrastructure (CHASE-CI) @ University of California-San Diego
This project, called the Cognitive Hardware And Software Ecosystem Community Infrastructure (CHASE-CI), will build a cloud of hundreds of affordable Graphics Processing Units (GPUs), networked together with a variety of neural network machines to facilitate development of next generation cognitive computing. This cloud will be accessible by 30 researchers assembled from 10 universities via the NSF-funded Pacific Research Platform. These researchers will investigate a range of problems from image and video recognition, computer vision, contextual robotics to cognitive neurosciences using the cloud to be purpose-built in this project.
Training of neural networks with large data sets is best performed on GPUs. Lack of availability of affordable GPUs and lack of easy access to the new generation of Non-von Neumann (NvN) machines with embedded neural networks impede research in cognitive computing. The purpose-built cloud will be available over the network to address this bottleneck. PIs will study various Deep Neural Network, Recurrent Neural Network, and Reinforcement Learning algorithms on this platform.
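To illustrate why shared GPU access matters for this community, here is a minimal training-step sketch (assumes PyTorch is installed; the model and data are toy stand-ins, not a project benchmark) that runs on a GPU when one is available:

```python
# Toy example: one supervised training step, placed on a GPU if the node has one.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(64, 784, device=device)            # one synthetic mini-batch
y = torch.randint(0, 10, (64,), device=device)     # synthetic labels

optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
print(f"device={device}, loss={loss.item():.3f}")
```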
|
1 |
2018 — 2021 |
Wuerthwein, Frank (co-PI); Smarr, Larry; Rosing, Tajana (co-PI); Altintas, Ilkay; Papadopoulos, Philip |
CC* NPEO: Toward the National Research Platform @ University of California-San Diego
Academic researchers need a simple data sharing architecture with end-to-end 10-to-100Gbps performance to enable virtual co-location of large amounts of data with computing. End-to-end is a difficult problem to solve in general because the networks between ends (campuses, data repositories, etc.) typically traverse multiple network management domains: campus, regional, and national. No one organization owns the responsibility for providing scientists with high-bandwidth disk-to-disk performance. Toward the National Research Platform (TNRP), addresses issues critical to scaling end-to-end data sharing. TNRP will instrument a large federation of heterogeneous "national-regional-state" networks (NRSNs) to greatly improve end-to-end network performance across the nation.
The goal of improving end-to-end network performance across the nation requires active participation of these distributed intermediate-level entities to reach out to their campuses. They are trusted conveners of their member institutions, contributing effectively to the "people networking" that is as necessary to the development of a full National Research Platform as is the stability, deployment, and performance of technology. TNRP's collaborating NRSNs structure leads to engagement of a large set of science applications, identified by the participating NRSNs and the Open Science Grid.
TNRP is highly instrumented to directly measure performance. Visualizations of disk-to-disk performance with passive and active network monitoring show intra- and inter-NRSN end-to-end performance. Internet2, critical for interconnecting regional networks, will provide an instrumented dedicated virtual network instance for the interconnection of TNRP's NRSNs. Cybersecurity is a continuing concern; evaluations of advanced containerized orchestration, hardware crypto engines, and novel IPv6 strategies are part of the TNRP plan.
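A small sketch of the kind of disk-to-disk throughput figure such monitoring would compute and visualize (the transfer log below is invented for illustration):

```python
# Invented transfer log: derive end-to-end throughput per transfer for plotting.
transfers = [
    {"src": "site-A", "dst": "site-B", "bytes_moved": 2.5e12, "seconds": 310.0},
    {"src": "site-A", "dst": "site-C", "bytes_moved": 8.0e11, "seconds": 145.0},
]
for t in transfers:
    gbps = t["bytes_moved"] * 8 / t["seconds"] / 1e9
    print(f'{t["src"]} -> {t["dst"]}: {gbps:.1f} Gb/s disk-to-disk')
```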
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
|
1 |
2021 — 2023 |
Smarr, Larry; Rosing, Tajana (co-PI); Altintas, Ilkay; DeFanti, Thomas; Yu, Qi |
CCRI: ABR: Cognitive Hardware and Software Ecosystem Community Infrastructure (CHASE-CI) @ University of California-San Diego
This project, called the Cognitive Hardware And Software Ecosystem Community Infrastructure (CHASE-CI), is to continue and expand a cloud of hundreds of affordable Graphics Processing Units (GPUs), networked together with a variety of neural network machines to facilitate development of next generation cognitive computing. This cloud will be accessible by 30 researchers assembled from 10 universities via the NSF-funded Pacific Research Platform. These researchers will investigate a range of problems from image and video recognition, computer vision, contextual robotics to cognitive neurosciences using the cloud to be purpose-built in this project.
Training of neural networks with large data sets is best performed on GPUs. Lack of availability of affordable GPUs and lack of easy access to the new generation of Non-von Neumann (NvN) machines with embedded neural networks impede research in cognitive computing. The purpose-built cloud will be available over the network to address this bottleneck. PIs will study various Deep Neural Network, Recurrent Neural Network, and Reinforcement Learning algorithms on this platform.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
|
1 |