2003 — 2006
Hauser, Thomas; Held, Eric (co-PI); Hardy, Thomas; Spall, Robert (co-PI); Farrelly, David (co-PI)
Development of a Low-Cost 64-Bit Cluster Supercomputer for Engineering and Physics Simulations
ABSTRACT
Proposal No. CTS-0321170 Principal Investigator: J. T. Hauser, Utah State University
This grant focuses on combining two leading-edge technologies, dual-processor AMD Opteron systems and Flat Neighborhood Networks (FNNs) based on Gigabit Ethernet, to develop a uniquely designed cluster supercomputer. In addition to providing a 64-bit memory address space, the Opteron uses a new memory architecture that should avoid the memory performance degradation normally associated with shared-memory multiprocessors. Similarly, using an FNN interconnection pattern makes it possible to achieve single-switch latency despite having more nodes than a low-cost network switch has ports. FNNs also make it possible to tailor the network to the communication patterns of the applications running on the proposed cluster supercomputer. This project will provide a low-cost alternative to commercially available high-performance cluster networking solutions such as Myrinet and Quadrics. The software to design and optimize such a network will be made freely available, so that other research groups can design and build their own clusters with FNNs. The proposed system will also provide valuable insight into how this newly designed memory system performs on large applications compared with other available memory system designs.
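The defining FNN property, that every pair of nodes shares at least one switch so any message incurs only single-switch latency, can be sketched in a few lines. The wiring below is a hypothetical toy example for illustration, not a topology produced by the project's actual design software:

```python
from itertools import combinations

# Hypothetical toy wiring (node -> set of switches it is cabled to).
# In a real FNN each node has one NIC per switch it connects to, and
# design software searches for wirings that satisfy the FNN property.
wiring = {
    0: {"A", "B"},
    1: {"A", "C"},
    2: {"B", "C"},
    3: {"A", "B"},
}

def is_flat_neighborhood(wiring):
    """True if every pair of nodes shares at least one switch,
    i.e. every message crosses exactly one switch (single-switch
    latency), which is the defining FNN property."""
    return all(wiring[u] & wiring[v] for u, v in combinations(wiring, 2))

print(is_flat_neighborhood(wiring))  # True
```

With more nodes than switch ports, the design problem becomes a search over such wirings, which is why the project distributes software to automate it.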
The research to be conducted includes computational fluid dynamics, ecohydraulics, fusion plasmas, geophysical fluid dynamics, and computational chemistry. To meet the rising demand for students skilled in high-performance computing, the College of Engineering will introduce a course in parallel computing for engineering applications targeted at senior-level undergraduate and beginning graduate students. This course, as well as a one-week workshop on cluster computing, will also be open to students in mathematics, physics, chemistry, and computer science.
In addition, the proposed cluster supercomputer will enhance current relationships with nearby national laboratories such as the Idaho National Engineering and Environmental Laboratory (INEEL). USU faculty and graduate students are developing very computationally intensive CFD models in conjunction with INEEL engineers. The visibility arising from this cooperative effort will further enhance the goal of attracting more students and faculty.
2003 — 2005
Hauser, Thomas; Haupt, Sue Ellen (co-PI); Spall, Robert
Enhancement of Computational Engineering Within an Undergraduate Mechanical Engineering Curriculum
Engineering - Other (59) Our project addresses issues in undergraduate education at research universities highlighted in the 1998 Boyer Commission report. Our aim is to incorporate advances in information technology into the curriculum; this is consistent with the National Science Foundation's recent increased emphasis on the applied areas of information technology. This has been accomplished by developing a Computational Engineering option within the department's undergraduate Mechanical Engineering degree. We have worked with faculty from the Engineering Research Center at Mississippi State University to adapt the curricula from their undergraduate minor in Computational Engineering to our Utah State University program. We have also built a PC Beowulf cluster consisting of 5 server and 20 compute nodes to support the Computational Engineering option, following a cluster design process developed at the University of Kentucky. Undergraduate students are being exposed to cluster design and programming models through a week-long seminar, which will be offered on a yearly basis. Results of our efforts will be disseminated through the national ASEE conference and through publication in an engineering education journal. In addition, materials from the cluster workshop will be made available through our department's website.
2005 — 2008
Hauser, Thomas; Tullis, Blake (co-PI); Spall, Robert (co-PI); Nguyen, Kytai; Smith, Barton
MRI: Realizing the Potential of Time-Resolved PIV Through Development of Integrated In-Lab Beowulf Cluster Processing
CTS-0521621 B. Smith, Utah State University
This grant is to develop an instrument that can acquire, rapidly process, and store the very large samples generated by Time-Resolved Stereo Particle Image Velocimetry (TRSPIV). Currently, there is a large gap between the time required to acquire such data and the time necessary to process it. Closing this gap by providing an order-of-magnitude increase in processing power will realize the potential of this powerful technique. The new system is based on a Beowulf cluster supercomputer specifically designed to handle and process TRSPIV image data sets of 5.2 gigabytes or more. The parallel processing will be completely transparent to the user and will require no knowledge of parallel programming, removing inhibitions about acquiring large samples and allowing the user to concentrate on the experiment. This system will be used in several departments of the College of Engineering at Utah State University and at the Idaho National Engineering and Environmental Laboratory (INEEL). TRSPIV measurements will become accessible to a large segment of the fluid dynamics community. For a modest increase in cost, it will become possible to process the very large samples that time-resolved SPIV generates. Other measurements involving large samples, such as phase-locked measurements in periodic flows (e.g., flow control studies), will also benefit. Experience has shown PIV to be a very accessible measurement technique, making this system valuable in the productive undergraduate research program at Utah State. The PIs also plan to use the system for outreach activities with local high schools and underrepresented groups.
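The transparency described above rests on the fact that TRSPIV frame pairs can be correlated independently of one another. A minimal sketch of that pattern, using Python's multiprocessing as a hypothetical stand-in for the cluster's job distribution, with a dot product standing in for the actual image cross-correlation:

```python
from multiprocessing import Pool

def correlate(job):
    """Hypothetical stand-in for cross-correlating one TRSPIV frame
    pair; a real implementation would correlate interrogation windows
    of the two camera images and return a velocity field."""
    index, (frame_a, frame_b) = job
    return index, sum(a * b for a, b in zip(frame_a, frame_b))

if __name__ == "__main__":
    # Toy "frames": each frame pair is independent, so the workload
    # is embarrassingly parallel and can be farmed out to cluster
    # nodes without the user ever seeing the parallelism.
    frames = [([1, 2], [3, 4]), ([0, 1], [1, 0])]
    with Pool(2) as pool:
        results = pool.map(correlate, list(enumerate(frames)))
    print(results)  # [(0, 11), (1, 0)]
```

Because the frame pairs share no state, adding nodes scales throughput nearly linearly, which is what closes the gap between acquisition time and processing time.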
2011 — 2015
Gallaher, David; Williams, James; Hauser, Thomas; Banich, Marie (co-PI); Guralnick, Robert (co-PI)
MRI: Acquisition of a Scalable Petascale Storage Infrastructure for Data Collections and Data-Intensive Discovery @ University of Colorado at Boulder
The Center for Research Computing (CRC) at the University of Colorado Boulder (CU-Boulder) will acquire, deploy, and maintain an expandable petascale storage instrument, the CU-Boulder petaLibrary. This instrument, consisting of an integrated disk and tape system, will enable faculty, staff, postdocs, and graduate and undergraduate students to build, store, share, and merge large and growing data collections. The petaLibrary will be made available to all CU-Boulder researchers at no usage charge other than the cost of the media.
The petaLibrary will serve as the infrastructure for a number of data services provided by the University Libraries, the Museum, the National Snow and Ice Data Center (NSIDC), and the CRC to the CU-Boulder campus community. The services will include the deposit and discovery of a wide range of digital resources, including digital surrogates of archives and special collections materials owned by libraries and museums on campus, openly accessible resources, digital-media-only library resources, and the scholarly output of the campus. Data-intensive projects include digital collections; high-fidelity, high-definition recordings of human interaction; global-scale land remote sensing; bio-digitization efforts by the Museum; as well as research programs utilizing CU-Boulder's computational resources.
The broader impacts of the project include the development of workshops on data archival and long-term data formats for researchers, as well as research projects and educational opportunities with the CRC for graduate and undergraduate students. Other efforts will include working with the Colorado Coalition for Gender and IT (CCGIT) and the National Center for Women & Information Technology (NCWIT) to address the limited diversity in IT, specifically in the area of cyberinfrastructure, by providing summer internship opportunities for female faculty in topics related to data management.
2012 — 2014
Cumalat, John (co-PI); Stowe, Michael; Hauser, Thomas; Grunwald, Dirk (co-PI); Dowell, Robin
CC-NIE Network Infrastructure: Improving an Existing Science DMZ @ University of Colorado at Boulder
Research Computing has been a leader in deploying data movement services, such as Globus Online, to research groups at the University of Colorado Boulder on a separate network, the RC-DMZ, which is dedicated to high-performance applications and research data transfers. This project implements significant improvements to the RC-DMZ in order to increase its manageability, reliability, and bandwidth. These improvements have become necessary because a large amount of research in Science, Technology, Engineering and Mathematics (STEM) is highly data intensive, with observational and simulation data collections in the tens or hundreds of terabytes (TB) and soon growing to petabytes (PB). The recent "Campus Bridging" task force report also demonstrates the need for continued investment in campus networking capabilities. The upgrades to the RC-DMZ include:
* Physical separation of commodity and research traffic through two new border routers
* Improved network design, adding redundancy and increasing bandwidth to 20 gigabits per second
* Active, integrated performance measurement
* Security and traffic monitoring
* Experimental software-defined/OpenFlow networking
Improving the existing RC-DMZ is a collaboration between Research Computing (RC); the departments of Computer Science (CS), Physics, and Molecular, Cellular and Developmental Biology (MCDB); the National Snow and Ice Data Center (NSIDC); and the Office of Information Technology (OIT), in consultation with other departments and institutes. The network improvements enable better data sharing and improved access to local, regional, and national computational and data resources for all of CU-Boulder's researchers.
2014 — 2017
Hauser, Thomas |
Collaborative Research: CC*IIE Region: Rocky Mountain Cyberinfrastructure Mentoring and Outreach Alliance (RMCMOA) @ University of Colorado at Boulder
High-Performance Networking (HPN) and other components of advanced cyberinfrastructure (CI) are key enabling technologies, vital to a college's ability to prosper in a rapidly evolving scientific and technical environment. All components of CI, especially advanced network infrastructure, are swiftly changing, making it challenging to maintain currency, agility, and competitiveness. Smaller institutions are at a severe disadvantage, lacking the capital and the personnel needed to deploy, optimize, maintain, and sustain such technologies.
Colorado State University, the Idaho Regional Optical Network, the University Corporation for Atmospheric Research, the University of Colorado Boulder, and the University of Utah have partnered proactively to train and mentor faculty and staff from smaller institutions in the region, helping them enhance their use of advanced technologies, emphasizing the advantages of those technologies, and highlighting their critical role as campus, regional, and national scientific infrastructure. Through this education, the project aims to drive adoption and expansion of advanced networking and CI technologies at smaller colleges and universities in Colorado, New Mexico, Idaho, Utah, and Wyoming. The project conducts four regional workshops, as well as site visits to smaller institutions to support network engineering activities and to share knowledge and best practices in deploying and applying network-based cyberinfrastructure.
2015 — 2018
Ruprecht, Peter; Hauser, Thomas; Jansen, Kenneth (co-PI); Hasenfratz, Anna (co-PI); Syvitski, James
MRI Collaborative Consortium: Acquisition of a Shared Supercomputer by the Rocky Mountain Advanced Computing Consortium @ University of Colorado at Boulder
A cluster supercomputer is deployed by the University of Colorado Boulder (CU-Boulder) and Colorado State University (CSU) for the Rocky Mountain Advanced Computing Consortium (RMACC). This high-performance computing (HPC) system provides advanced computing capabilities to multiple research groups across the Rocky Mountain region in fields including astrophysics, bioinformatics, chemistry, computational fluid dynamics, earth system science, life science, materials science, physics, and social sciences. It also provides a platform to investigate and address the impact of many-core processors on the applications that support research in these fields.
The system integrates nodes populated with Intel's conventional multicore Xeon processors and Many-Integrated-Core (MIC) 'Knights Landing' Phi processors, interconnected by Intel's new Omni-Path networking technology. Users of the new HPC system have access to existing data management services, including data storage, data sharing, metadata consulting, and data publishing, leveraging the NSF-funded high-performance networking infrastructure and long-term storage system, as well as additional cyberinfrastructure, at CU-Boulder and CSU. The many-core feature of this HPC system enhances graduate and undergraduate students' education and training as they develop, deploy, test, and run optimized applications for next-generation many-core architectures. Training for researchers and students is provided through workshops appropriate for introducing diverse audiences to the efficient and effective use of HPC systems, the challenges of vectorization for single-core performance, shared-memory parallelism, and issues of data management. Additionally, advanced workshops on large-scale distributed computing, high-throughput computing, and data-intensive computing are offered during the year and at the annual RMACC student-centric HPC Symposium. The Symposium brings together hundreds of students, researchers, and professionals from universities, national laboratories, and industry to exchange ideas and best practices in all areas of cyberinfrastructure. For-credit HPC classes will be delivered for online participation, educating the next generation of computational scientists in state-of-the-art computational techniques.
2017 — 2019
Burns, Patrick; Williams, James; Hauser, Thomas; Siegel, Howard; Cheatham, Thomas
CC* Cyber Team: Creating a Community of Regional Data and Workflow Cyberinfrastructure Facilitators @ University of Colorado at Boulder
A distributed team of data and workflow facilitators (a "cyberteam") from the University of Colorado Boulder, the University of Utah, and Colorado State University supports experimental and observational science (EOS) research as part of the Rocky Mountain Advanced Computing Consortium (RMACC). The growing number and diversity of data sets require enhanced capabilities to access, reuse, process, analyze, understand, curate, share, and preserve data. A critical aspect of these efforts is to provide expert support for efficient and effective workflows involving data generation, data analysis, visualization, and preservation. Typically, these activities have been the responsibility of individual researchers, and as a result, data can be difficult for others to reuse. Likewise, computational and data generation workflows are often cobbled together, hard-coded, and not readily amenable to sharing.
These problems are addressed by the distributed RMACC cyberteam, which supports researchers in the region by assisting them with data and workflow reuse and management. The facilitators have complementary skills and expertise and are fully integrated into campus and regional efforts. Their focus is on data curation and metadata, and on data and compute workflows, including protected information. These facilitators, in collaboration with others in the region, provide and develop regional shared resources to support data management for small research groups and under-resourced communities, including working with regional RMACC partners.
2019 — 2021
Hauser, Thomas; Knuth, Shelley (co-PI); Johnson, Brian; Anderson, Jonathon; Lindquist, Thea
CC* Compute: A Hybrid Cloud Environment for the Rocky Mountain Advanced Computing Consortium @ University of Colorado at Boulder
The Research Computing group at the University of Colorado Boulder provides a hybrid cloud infrastructure to support computing, data, and science gateway needs that are currently not met by the existing high-performance and high-throughput computing infrastructure. The system is also integrated into the Open Science Grid (OSG) to enable full utilization of the deployed on-premise hardware using otherwise idle compute capacity. The hybrid cloud system presents a single integrated view of on-premise and public cloud resources so that researchers can select the right resource to support their work. This new capability eases the burden of finding appropriate computational tools, allowing researchers to focus on new discoveries and the advancement of their fields. Major emphasis areas of the supported research are the geosciences, hydrological modeling, natural language processing, machine learning, and earth analytics. Members of the Rocky Mountain Advanced Computing Consortium (RMACC) have access to 20% of the provided cloud resources, with a focus on smaller institutions.
This hybrid cloud provides: virtual machines, either from a library or customized by the researcher; execution and orchestration of containers; serverless computing; and hosting for persistent science gateways and other science services. This project heavily leverages NSF award #1659425, "CC* Cyber Team: Creating a Community of Regional Data and Workflow Cyberinfrastructure Facilitators," by reusing training materials on containerization and capitalizing on relationships established during focus groups and one-on-one consults.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.