1985 — 1989 |
Fortes, Jose |
A Systematic Methodology For Designing VLSI Systolic Arrays With Applications in Pattern Recognition and Image Processing
This research is an engineering approach to the solution of two fundamental problems in designing systolic arrays, namely, restructuring of algorithms for increased parallelism and efficient mapping of algorithms into systolic arrays. The study investigates: new syntactic and semantic transformations on complex algorithms for optimizing area, time, input/output, and other parameters in the systolic architecture; efficient methodology for mapping algorithms onto two- or three-dimensional systolic arrays with feedbacks, replicated data, general timing, and limited or full broadcasting and other architectural constraints; design of macropipelines of systolic arrays and the related problems in interconnection, buffering, timing, and closure property; and design of macropipelines of systolic arrays for image processing and pattern recognition. Emphasis is on both theoretical and practical approaches in this research. With the theory developed, an automated design tool is being implemented so that it can be interfaced with existing VLSI design tools.
|
0.961 |
1992 — 1993 |
Fortes, Jose |
International Conference On Application Specific Array Processors, August 4-7, 1992, Berkeley, CA
This is an attendance and travel grant for the 1992 International Conference on Application Specific Array Processors. The conference is sponsored by the University of California at Berkeley in cooperation with the IEEE Computer Society and is held in Berkeley, California. The conference emphasizes parallel computing, systems, design methodologies, technology, and implementations within the context of application-specific computing. This is the sixth in a series of conferences that provide an important forum for the discussion and dissemination of research being done on application-specific computing systems in the US, Europe and Japan.
|
0.961 |
1992 — 1996 |
Fortes, Jose; Nof, Shimon |
Models For the Design of Engineering Tasks Integration @ Purdue Research Foundation
The objective of the research is to investigate the use of analytic methods and results from multiprocessor computing to design the most effective integration structures. The scope of this research will include integration of distributed engineering tasks involving decisions and designs in the areas of distributed product design, production planning and quality assurance. Distribution will be considered within and between plants and under dynamically changing conditions. The research will focus on the theoretical analysis of models for three interrelated integration issues: (1) static and dynamic organizational configurations for the integration; (2) decomposition of engineering problems to tasks that are designed for integration by the above configurations; and (3) suitable classification and an analytic approach to compare integration alternatives that will increase integration effectiveness, including fault-tolerant computer-based integration. To achieve the objective, a multi-disciplinary approach will be used, combining manufacturing engineering, operations research, and computer engineering. The potential of creating significant new knowledge on integration of distributed engineering tasks and influencing its usefulness to American manufacturing is high.
|
0.906 |
1995 — 1998 |
Fortes, Jose |
Data Distribution Independent Parallel Processing @ Purdue Research Foundation
This project is exploring the possibility of a parallel programming paradigm that is data-distribution independent (DDI) in the sense that the user would not be required to program or even invoke data communication routines (hereon called modules). The need for data redistribution would either be eliminated or transparent to the user. The emphasis of this work is on the systematic design of computational modules so that either there is no need to redistribute input data or, when this cannot be achieved, the cost of (automatic) redistribution is minimized. In this context, source-to-source program transformations, called modular mappings, and properties that allow commutative parallel processing are being explored as techniques and concepts that enable DDI computation. The extent to which a DDI paradigm could replace existing approaches, complement them or merely apply to special application domains is unclear, and this is one of the issues under investigation. In addition, hardware and architecture features that support DDI computation on both general and special purpose parallel processors are being investigated. The experimental validation is being done in the context of three application areas and program implementations on a fine-grain distributed memory SIMD machine (MasPar MP-1 with 16 thousand processors) and a coarse-grain distributed memory MIMD machine (Intel Paragon with 140 processors). The areas of interest are dense linear algebra, sparse linear algebra and symbolic computer algebra, which can be applied to numerous scientific and engineering computing problems. For these three areas, DDI modules are being developed as well as entire programs built of several modules. Performance comparisons are being made between DDI implementations and their non-DDI counterparts.
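The modular mappings mentioned above can be pictured with a small, hypothetical sketch: a linear transformation applied to an iteration-space point, taken componentwise modulo the processor-array dimensions, assigns each computation to a processor. The function name, the transformation, and the 4x4 array below are illustrative assumptions, not the project's actual implementation.

```python
# Hypothetical sketch of a linear modular mapping: an iteration-space
# point is assigned to processor coordinates (T @ point) mod mods.
# T, the sample points, and the 4x4 processor array are made-up examples.

def modular_map(T, point, mods):
    """Map an iteration-space point to processor coordinates."""
    return tuple(
        sum(t * p for t, p in zip(row, point)) % m
        for row, m in zip(T, mods)
    )

# Map two points of a 2-D iteration space onto a 4x4 processor array.
T = [[1, 1],
     [0, 1]]            # example linear transformation
mods = (4, 4)           # processor-array dimensions
print(modular_map(T, (5, 9), mods))  # -> (2, 1)
print(modular_map(T, (0, 3), mods))  # -> (3, 3)
```

Because the mapping depends only on the index point and fixed module sizes, modules composed this way agree on where each datum lives, which is the kind of property that removes the need for explicit redistribution.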
|
0.906 |
1997 — 2001 |
Fortes, Jose; Lundstrom, Mark; Neudeck, Gerold |
Designing Microelectronic Technologies Through a Network- Based Simulation 'Hub' @ Purdue Research Foundation
EEC-9700762 Lundstrom Abstract This award provides funding to Purdue University under the direction of Dr. Mark Lundstrom, for the support of a Combined Research-Curriculum Development project entitled, "Designing Microelectronic Technologies through a Network-Based Simulation 'HUB'." This project will involve the development of a curriculum on the design of silicon and compound semiconductor technologies using advanced computer simulation tools. Students will learn to design fabrication processes, devices, and circuits and to integrate previously gained, course-specific knowledge. Unique features of this program are its objective to bring advanced, research-level simulation tools into the classroom and thereby into engineering practice; its emphasis on bringing the computational techniques that have been successful in the silicon industry to the compound semiconductor industry; and its focus on the use of simulation by engineers who are not computational specialists. Producing engineers proficient in the use of simulation and computational prototyping can have a major impact on the way that such technologies are developed and at the same time respond to a critical need continually expressed by industry.
|
0.906 |
1998 — 2002 |
Lehman, James; Fortes, Jose; Choudhary, Alok; Taylor, Valerie; Eigenmann, Rudolf |
CRCD: Integration of Computer Architecture and Parallel Programming Tools Into Computer Science and Engineering Curricula Through Network Computing Hubs @ Purdue Research Foundation
This CRCD project supports the incorporation of tools for computer architecture simulation, performance prediction, program optimization and application characterization into computer science and engineering curricula. Existing courses are being enriched both in content and with experiments not previously possible. This work is accomplished through use of the Purdue University Network Computing Hubs (PUNCH), specifically through expansion of the recently created Computer Architecture and Parallel Programming Hubs to include the tools needed by the targeted courses. Three institutions (Purdue University, Northwestern University and Chicago State University) are involved in the project. Purdue University is leading the effort by integrating several tools into two computer architecture courses and three courses on compilers and parallel programming. The other two institutions are also incorporating tools in at least three courses on computer architecture and parallel programming. All three universities are developing educational materials and are installing tools for shared use through PUNCH.
|
0.906 |
1999 — 2004 |
Fortes, Jose |
CADRE: NETwork-Computer For Computer Architecture Research and Education (NETCARE)
EIA-9975275 Fortes, Jose A. Eigenmann, Rudolf Purdue Research Foundation
CADRE: NETwork-computer for Computer Architecture Research and Education (NETCARE)
This award provides support for NETCARE (NETwork-computer for Computer Architecture Research and Education), an infrastructure supporting a repository of computer architecture tools, to be managed by a consortium of three universities: Purdue University, University of Wisconsin and Northwestern University. NETCARE will use the PUNCH network computing software, developed at Purdue University, to provide the user interface, manage the repository of architecture tools, and perform high-level scheduling and dedicated resource access control. Condor, a resource management system developed at the University of Wisconsin that uses idle CPU cycles of networked workstations, will be used to perform low-level scheduling and management of resources. NETCARE will be universally accessible through conventional World Wide Web browsers and will provide software tools and computing resources for the computer architecture research and education community.
|
1 |
1999 — 2003 |
Fortes, Jose |
Design of Machines Organized as Processor-and-Memory Hierarchies |
1 |
2000 — 2003 |
Fortes, Jose |
CISE CNPq: US-Argentina and US-Chile Collaborative Research On Computer Science and Engineering
EIA-0083083 Fortes, Jose A. Purdue University
CISE CNPq: US-Argentina and US-Chile Collaborative Research on Computer Science and Engineering
This is an award to support two workshops, each of two days' duration, in May 2000 in Chile and Argentina. It is anticipated that these workshops will build on the success of the CISE/NSF programs with Mexico and Brazil (CONACyT and CNPq, respectively) to begin creating the collaborations and personal connections necessary to support the possibility of similar government-to-government bilateral arrangements with Chile and Argentina.
|
1 |
2001 — 2005 |
Fortes, Jose; Principe, Jose (co-PI); Harris, John (co-PI) |
ITR/SY: Design and Simulation of Biologically-Inspired Nanolattice
EIA-0135946 Fortes, Jose A University of Florida
ITR/SY: Design and Simulation of Biologically inspired Nanolattice
This joint project between the University of Florida and Purdue University is pursuing scientific principles for designing and engineering biologically inspired neuromorphic computing architectures using radically new molecular electronic devices and biologically inspired, ultra-dense, self-assembled systems. Examples of applications of these architectures include unprecedentedly small and inexpensive nanoscale intelligent sensors. The architectures can be used to implement neurocomputing models and are well suited for nanotechnologies, thus accelerating the development of useful nanotechnology by providing clear functional targets for nanodevices. The team of investigators includes computer architects, neurocomputing experts and device physicists working in close collaboration along three highly synergistic thrusts. The first thrust is focused on advancing the understanding of biologically-inspired dynamic information processing systems in order to understand the impact of constraints imposed by architectures and technologies on the properties of these systems. Another thrust investigates neurocomputing system architectures that can be engineered within the constraints of nanotechnologies. The third thrust develops a toolbox of novel mechanisms for integration, self-assembly and interconnection of nanoscale devices. The architectures are investigated via formal methods and simulation. Internet resources are used to conduct simulation, and to disseminate models, software and other research results. A new course, summer internships and educational materials are being developed to educate students on the key interdisciplinary aspects and results of the project.
|
1 |
2001 — 2003 |
Fortes, Jose |
Digital Government SGER: Exploratory Research On Transnational Digital Government @ Purdue Research Foundation
EIA- 0107686 Purdue University Jose Fortes
Title: SGER: Exploratory Research on Transnational Digital Government
This grant will support a preliminary study of the uses of information technologies to support multilateral collaborations between two South American countries in the topic area of drug abuse and trafficking. Important challenges to be surmounted include the many differences in infrastructure and processes related to data gathering, and the differing legal and security regimes in place. An element of multilingualism will also be explored. The collaboration will begin with the support and approval of the Inter-American Drug Abuse Control Commission of the Organization of American States (OAS). The Commission has developed a process and procedures for multilateral government evaluation of various drug abuse parameters, which will serve as the context in which to evaluate collaboration technologies research. Researchers from several US universities and the OAS will participate.
|
1 |
2002 — 2006 |
Fortes, Jose; Principe, Jose (co-PI); Su, Stanley (co-PI); George, Alan (co-PI); Figueiredo, Renato |
CISE RR: Collaborative Research On Wide-Area Network Computing Using Virtual Machines
EIA 02-24442 Fortes, Jose A.; Figueiredo, Renato; George, Alan D.; Principe, Jose C.; Su, Stanley Y. University of Florida
CISE RR: Collaborative Research on Wide-Area Network Computing Using Virtual Machines
This collaborative research project (with Dinda at Northwestern University, proposal 02-24449), requiring a wide-area test bed that enables experimentation with, access to, and running of applications on unique resources, requests PC clusters, an IBM server, and other ancillary hardware for projects in 1. Distributed grid computing and information processing systems using virtualization technologies and 2. Information grids with real users and research applications requiring capabilities enabled by virtual machines (VMs). Deploying a distributed system based on clusters connected by local, metropolitan, and wide area networks, the work aims to provide a virtual computing and data storage interface to clients that access resources on the underlying "information grid." The test bed includes the following defining features. 1. Virtualization capabilities, i.e., the ability to instantiate independent logical machines that can be multiplexed on physical processors (or fractions of them), storage and network I/O channels, and can use distinct operating systems; 2. Wide-area distribution, i.e., Internet-linked test bed components in independently-administered geographically-apart network domains; 3. Scalable capacity for both scientific computing and information processing; and 4. Heterogeneity. Interrelated projects enabled by the test bed towards the goal of developing VM-based middleware for grid computing include virtualized end resources, monitoring and prediction, interactive computing, virtual file systems, data management, cycle selling, and security. Information grids and web portals for use of CAD tools are also enabled by the infrastructure for dissemination of collaborative research results and data, and for digital government services. 
From the availability of the portals and grid-computing resources benefits are expected in brain-machine interfaces, biologically-inspired nanocomputing, auction-based computing, distributed knowledge applications, medical imaging and data archiving, light-scattering spectroscopy, and mixed non-linear optimization. Collaborations include the Sigmicro microarchitecture center, NETCARE and the Purdue-hosted Nanohub (enabling users to run tools for computer architecture, parallel computing, and nanoelectronics). The project impacts some minority serving institutions such as Chicago State and Florida A&M Universities and enables a test bed for a transnational digital government project involving Carnegie Mellon University, University of Belize, University of Colorado, University of Florida, University of Massachusetts, and Pontificia Universidad Catolica Madre y Maestra of the Dominican Republic.
|
1 |
2002 — 2007 |
Fortes, Jose |
Digital Government: Transnational Digital Government
EIA-0131886 Jose Fortes University of Florida
Digital Government: Transnational Digital Government
With entrée provided by the Organization of American States (OAS), the US research team will collaborate with researchers from universities in Belize, Mexico, and the Dominican Republic. The domain is drug interdiction, under an arm of the OAS, the Inter-American Drug Abuse Control Commission. Under the Commission, member states of the OAS collect, share and analyze information in standard ways. Within this context, research will be undertaken in the areas of spoken dialogue systems, data management and security for rule-based data sharing and filtering, information retrieval and machine translation, middleware to support these transnational information grids, and network behavior modeling for acceptable quality of service.
|
1 |
2002 — 2005 |
Fortes, Jose |
Collaborative Research: Resource and Data Management For Virtualized End-Resources in Computational Grids
This is a Collaborative Proposal between Northwestern University and University of Florida which addresses the key research challenges associated with management of dynamic Virtual Machines (VM) and interfacing these mechanisms with existing grid middleware techniques. The project will develop novel solutions that will address - resource management for distributed virtualized end-resources that can be created dynamically, - image management for the on-demand transfer of data representing the entire state to create a dynamic VM instance, and - data management for the on-demand transfer of user and application data between decoupled compute and data servers on the grid.
The proposed approach will lead to middleware solutions that will form an information processing foundation for grid computing. The software generated in this project will be used to implement the next generation of network computing hubs currently being used to support simulation needs in nanotechnology, electronics CAD, computer architecture and parallel programming.
|
1 |
2005 — 2009 |
Sheng, Y; Fortes, Jose; Davis, Justin; Figueiredo, Renato |
CI-TEAM: Coastal and Estuarine Science Cyberinfrastructure: Training and Deployment
The objectives of this project are to educate and train the coastal and estuarine science (CES) and engineering communities on the use and development of cyberinfrastructure (CI) that incorporates Grid computing and Web-based portal techniques for the use of simulation tools. As a cornerstone of this effort, a Grid-enabled application, CH3D-GTM, which applies the robust shallow water estuarine model CH3D to the Guana Tolomato Matanzas (GTM) National Estuarine Research Reserve (NERR), is used to demonstrate and teach CI concepts to a diverse audience. At the GTM Environmental Education Center, personnel and student fellows are trained, and educational programs are developed to reach K-20 students, teachers and the general public. At the State of Florida's St. Johns River Water Management District (SJRWMD), scientists and engineers are trained on how to deploy models using the CH3D-GTM application as a tutorial. At the University of Florida in Gainesville (UF), faculty and students develop curriculum content based on Grid middleware to teach students how to use, support, deploy, develop and design the CH3D-GTM application and similar simulation models.
This project focuses on workforce development for current scientists and engineers, as well as for future practitioners. In the short term, NERR educators and student fellows and SJRWMD engineers and researchers are being trained, and at least three graduate engineering courses at UF are being enhanced. The long-term goal is to make a significant contribution towards a national CI for science, government and public use. The improved simulation capabilities could then translate into improved ecosystem-based water quality and resource management, emergency preparedness and response, and planned development for these heavily populated and vulnerable regions.
While focused on the CES community, the training methods, materials and software are conceived such that they can be generalized to other application domains and reused in other regions of the country. Outreach activities further broaden the scope of this project by bringing basic CES and CI concepts to K-20 teachers and students that routinely visit the GTM in field trips. This activity builds upon a Web-accessible CES application portal - an on-line, interactive resource for outreach at the GTM, and an educational resource with CI-based simulation experiments that are made publicly available to CES and CI educators at other universities. Through feedback from surveys and assessments conducted in the training and outreach activities, this demonstration provides key insights on how CI educators can better tailor teaching to diverse audiences.
|
1 |
2006 — 2010 |
Fortes, Jose; Principe, Jose (co-PI); Figueiredo, Renato; Sanchez, Justin (co-PI); Hermer, Linda (co-PI) |
DDDAS-TMRP: Dynamic Data-Driven Brain-Machine Interfaces
Two related DDDAS application areas considered in this project are (1) cognitive brain modeling from experiments with live subjects and (2) the design of brain-inspired assistive systems to help human beings with severe motor behavior limitations (e.g. paraplegics) through brain-machine interfaces (BMIs). Simply stated, a BMI uses brain signals to directly control devices such as computers and robots. Today's BMI designs are extremely primitive and are a far cry from the seamless interface between brain and body in animals. In a healthy animal, the brain constantly learns and adapts to the needs of new physical movement, in addition to providing perfectly timed signals to the motor system. In this process, the brain receives and uses sensory feedback to both learn and generate the signals that lead to purposeful motion. In order to inch closer to BMI designs that are of use to humans, better models of brain motor control and movement planning are needed along with the necessary adaptive algorithms and computational architecture needed for their execution in real time. In light of such goals, this project's activities aim to significantly advance the state of the art of BMI research by developing the models, algorithms and computational architecture of dynamically-data-driven BMIs (DDDBMIs) that implement recently proposed advanced brain models of motor control. Achieving this goal with the proposed approaches will also make it possible to address a chief problem in current BMI research: the fact that paraplegics cannot train their own network models because they cannot move their limbs.
The research on DDDBMI systems conducted under this project is a drastic departure from the conventional BMI paradigm. The control interface architecture is distributed and borrowed from recent models of neurophysiology of movement, which will enable better overall performance. Learning occurs simultaneously for the subject and the control models in a synergistic manner, which requires more powerful adaptation schemes. Selective use of many computational models is the reason why a dynamically data-driven system is needed to provide the computational needs of a DDDBMI. The project's interdisciplinary activities are closely intertwined around the development and integration of the DDDBMI components into a platform for BMI research. Research on middleware addresses the need for dynamic aggregation of Grid-resources with Quality-of-Service guarantees, and support for dynamic computation steering. Research on adaptive algorithms focuses on new data models and learning algorithms. Research on brain modeling concentrates on cognitive models of motor control and advancing our understanding of the neurobiology of movement. In the long run, the BMI experimental research platform will have a dual role: it will help validate the brain models under investigation and it will provide insights into how to design BMIs for use by paraplegic patients.
|
1 |
2007 — 2010 |
Fortes, Jose; Figueiredo, Renato; Keahey, Katarzyna |
SDCI NMI New: Middleware For Missing Links in Virtualized Grids
National Science Foundation NSF Software Development for Cyberinfrastructure (SDCI) Program Office of Cyberinfrastructure
Proposal Number: 0721867 Principal Investigator: Jose Fortes Institution: University of Florida Proposal Title: SDCI New: Middleware for Missing Links in Virtualized Grids
Abstract
This project will support virtual organizations and their use of grids by developing and integrating virtual workspaces and networks combined with management, monitoring, and testing capabilities. By developing these missing links (configuration management tools, resource management tools, and management tools for virtual namespaces and networks), the work promises to integrate virtualization management capabilities into existing grids. Intellectual merit includes the creation of these missing links in middleware, which will contribute real extensions to existing grid services. Broader impact includes the potential for the work to lead to a new resource provisioning model in grids and to impact TeraGrid, OSG, and other existing grids more directly.
|
1 |
2007 — 2010 |
Fortes, Jose; Sinnott, Susan |
Atomic-Scale Friction Research and Education Synergy Hub (AFRESH)
PUBLIC ABSTRACT
Proposal Number: CBET-0742580 Principal Investigator: Sinnott, Susan Affiliation: University of Florida Proposal Title: Atomic-scale Friction Research and Education Synergy Hub (AFRESH)
Friction is a critical component of everyday life. Numerous applications require some friction to function effectively, such as car brakes and tires, while for other applications minimal friction is necessary to prevent the untimely wear of parts and the inefficient dissipation of energy, which is the case for the movement of pistons in car engines. It has been established that optimization of friction performance across systems first requires a thorough atomic-scale understanding, as all sliding surfaces ultimately interact via small surface areas. However, there is still much about atomic-scale friction, and the laws that govern it, that remains unknown. This engineering virtual organization will provide a vehicle for the atomic-scale friction community to come together to (i) share data, (ii) develop, disseminate and encourage the adoption of standards for the performance and analysis of both experiments and simulation, (iii) enhance and share computational and theoretical tools, and (iv) develop powerful teaching tools and related materials. It will also train graduate students in a multi-disciplinary environment, support forums for professional networking, build a diverse atomic-scale friction community, and facilitate outreach to other scientific communities and cyberinfrastructures.
|
1 |
2007 — 2009 |
Fortes, Jose; Li, Tao |
CSR-SMA: Characterizing, Modeling and Mitigating Soft Error Vulnerability in Multithreaded and Multi-Core Execution Environments
Future microprocessors will consist of billions of nanoscale transistors organized as multi-core multithreaded microarchitectures. Since nano-sized transistors are sensitive to external events and manufacturing variabilities, there is a non-negligible probability of one or more faults occurring in one of billions of transistors and affecting one of many threads. High data-integrity and availability requirements make reliability as important for computers as performance, power consumption, and yield. This project studies techniques for characterizing and mathematically modeling the vulnerability of system-level components (i.e. at the microarchitecture, OS and program levels) to soft-errors.
Today's design methodologies optimize the performance and power of multi-core and/or multithreaded architectures, but largely ignore reliability in the presence of soft errors. An important and urgent research task is to develop frameworks, models and techniques to characterize and estimate the deleterious impact of soft errors. This research addresses the above challenge by 1) developing a unified, reliability-aware simulation framework to quantify microarchitecture soft-error vulnerability of simultaneous-multithreading and multi-core systems consisting of a wide range of heterogeneous hardware and software components; and 2) creating fast and accurate analytical models to estimate and forecast soft-error vulnerabilities of hardware and software components without using lengthy and detailed simulations.
Frameworks that can quantitatively study soft error vulnerability will enable reliability-aware designs and research for emerging simultaneous-multithreading and multi-core architectures. The PIs will use the concepts, tools, techniques and other results of this research project to introduce graduate and undergraduate students to the nature of soft errors and their impact on execution environments. These teaching activities will lead to improvements in courses on computer architecture, fault-tolerant computing and nanocomputing. The tools developed in this project are accessible and usable over the Web, using equipment and middleware developed by the PIs' laboratory. This makes it straightforward for other academics and engineers to use them in their own work.
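As a rough illustration of the kind of vulnerability estimate such frameworks produce, the sketch below computes an Architectural Vulnerability Factor (AVF)-style metric: the fraction of bit-cycles during which a hardware structure holds data that could affect program outcome. The function name, interval format, and numbers are illustrative assumptions, not the project's actual models.

```python
# Hedged AVF-style estimate: fraction of bit-cycles in which a structure
# holds outcome-critical ("ACE") data. The interval format is a made-up
# simplification of what a detailed simulator would actually track.

def avf_estimate(ace_intervals, num_bits, total_cycles):
    """ace_intervals: list of (start_cycle, end_cycle, live_bits) tuples."""
    ace_bit_cycles = sum(
        (end - start) * bits for start, end, bits in ace_intervals
    )
    return ace_bit_cycles / (num_bits * total_cycles)

# A 64-bit register that holds live data during cycles 100-400 of a
# 1000-cycle window is vulnerable 30% of the time:
print(avf_estimate([(100, 400, 64)], num_bits=64, total_cycles=1000))  # -> 0.3
```

An analytical model of the kind the abstract describes would forecast such fractions from program and microarchitecture parameters rather than from per-cycle simulation traces.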
|
1 |
2007 — 2008 |
Fortes, Jose Figueiredo, Renato |
N/AActivity Code Description: No activity code was retrieved: click on the grant title for more information |
Collaborative Research: Planning of a Center For Autonomic Computing
A planning meeting will be held to determine if a multi-university Industry/University Cooperative Research Center for Autonomic Computing will be formed at the University of Florida, Rutgers University, and the University of Arizona. The mission of the proposed center is to advance the knowledge of how to design and engineer systems that are capable of running themselves, adapting their resources and operations to current workloads and anticipating the needs of their users.
The center will not only advance the science of autonomic computing, but will also accelerate its transfer to industry by closely working with partners in the definition of projects to be pursued, and contributing to the education of a workforce capable of designing and deploying autonomic computing systems.
|
1 |
2008 — 2012 |
Fortes, Jose Li, Tao Boykin, Patrick Figueiredo, Renato Peir, Jih-Kwon (co-PI) [⬀] |
N/AActivity Code Description: No activity code was retrieved: click on the grant title for more information |
Cri: Crd Collaborative Research: Archer - Seeding a Community-Based Computing Infrastructure For Computer Architecture Research and Education
Researchers at eight U.S. universities are collaborating in the development of Archer, a community-based computing infrastructure for computer architecture research and education. Archer addresses, in an integrated manner, computer architecture researchers' broad need to access powerful computational resources and share simulation environments. The project achieves this objective by deploying a large-scale, flexible, and easy-to-use computing infrastructure that aggregates users' local computing resources with Archer-dedicated resources, so that Archer's computing power grows as more users join the network. In addition, the Archer project includes a repository where users can share tools and simulation environments they have created, adding another incentive for users to join the Archer community, facilitating collaboration among research groups, and advancing the field of computer architecture.
The core hardware of Archer consists of computer clusters to be deployed at Cornell University, Florida State U., Northeastern U., Northwestern U., U. Florida, U. Minnesota, and U. Texas at Austin. The hardware infrastructure provides hundreds of processors to support the demanding simulation workloads required by computer architecture research in areas including soft-error modeling, statistically-based design exploration, and chip-multiprocessor micro-architectures. The key software technologies enabling community resources to be aggregated in a secure, scalable and seamless manner are resource virtualization and robust batch job scheduling, which include the IPOP virtual network from U. Florida and the Condor middleware from U. Wisconsin.
The Archer community resource will enable advances in computer architecture by allowing researchers, particularly those in institutions not equipped with local resources for high-throughput computing, to run large-scale simulation experiments. Its technology provides a new way to swiftly create ad-hoc shared "Grid" computing pools within or across institutions, which will pave the way to the creation of similar cyberinfrastructures for other communities.
|
1 |
2008 — 2009 |
Fortes, Jose |
N/AActivity Code Description: No activity code was retrieved: click on the grant title for more information |
Workshop On Instrumentation Needs of Computer and Information Science and Engineering Research
Proposal #: CNS 08-36228 PI(s): Fortes, Jose A. Institution: University of Florida Gainesville, FL 32611-2002 Title: Workshop on Instrumentation Needs of Computer and Information Science and Engineering Project Proposed: This proposal plans a workshop to examine the nature, needs, importance, challenges, and funding mechanisms of instrumentation development, acquisition, utilization, and sharing for ongoing and anticipated research in different CISE areas. It brings together recognized CISE research leaders to conduct such an assessment and produce a report that can be shared with the CISE community, colleagues, academic administrators, government funding agencies, and industry. CISE research is increasingly concerned with extremely large and complex objects whose behavior cannot be entirely distilled from first principles or investigated using reduction models. Among other factors, scale is a consequence of Moore's law and pervasive networking. Complexity arises from the embedding of computing into artifacts, the interconnection of many components, and/or multiple layers of functionality. Research needed to design and/or model such objects often requires special instruments to either peer into individual components at very small space/time granularity or to observe/emulate/simulate many objects at large enough scale and during long enough times.
Over the last two decades the CISE research instrumentation needs, and mechanisms to address them, have changed as a reflection of the evolution of IT technology, both from the standpoint of the research challenges to be faced and the instruments enabled. The amazing progress of computer and information technologies (IT) has led to the current era of microprocessors with billions of transistors, software environments with millions of lines of code, multi-layered IT systems, and networks of thousands of computers, users, and applications. As a consequence, the objects computer scientists and engineers study often have an unprecedented scale and complexity. CISE instruments often grow by connecting many other artifacts (in some cases, on the fly) leading to complexity that cannot be mastered by any single designer or user of those objects. Thus, an urgent need exists for such assessment.
Broader Impact: IT is increasingly being embedded into artifacts that may include non-IT components of a mechanical, electrical, communication, energy-production, chemical, transportation, entertainment, medical, or defense nature. Thus, instrumentation touches society overall.
|
1 |
2008 — 2012 |
Fortes, Jose Principe, Jose (co-PI) [⬀] Mcintyre, Lauren Moroz, Leonid (co-PI) [⬀] Sanchez, Justin (co-PI) [⬀] |
N/AActivity Code Description: No activity code was retrieved: click on the grant title for more information |
Mri: Acquisition of Instrumentation For Coupled Experimental-Computational Neuroscience and Biology Research
Proposal #: CNS 08-21622 PI(s): Fortes, Jose A. McIntyre, Lauren M.; Moroz, Leonid L.; Principe, Jose C.; Sanchez, Justin C. Institution: University of Florida Gainesville, FL 32611-2002 Title: MRI/Acq.: Instrumentation for Coupled Experimental-Computational Neuroscience and Biology Research
Project Proposed: This project acquires a virtualized multicomputer instrument with shared-memory subsystems and storage capacity, to be operated as a shared instrument whose resources can be virtualized, reserved, and configured on demand for different research activities related to neuroscience, computational biology, and cyberinfrastructure. Its configurations can also be dedicated and tightly coupled to in vivo experiments using network connections to in situ experimental instrumentation. It can simultaneously support multiple research activities because of its unique capability to support real-time computer-in-the-loop experiments, its ability to run many concurrent computation threads, its shared memory and storage subsystems, and its use of virtualization technology to manage the coexistence of multiple computing environments. To develop and validate autonomic computing nodes and techniques for multiuser virtualized computational systems, the instrument provides traces of performance and other needed monitored data. Research activities in brain-machine interfaces, neurogenesis, genomics, bioinformatics, signal processing, cyberinfrastructure, autonomic computing and other areas include: - Brain-machine interfaces where cortex models for motor control are dynamically learned and applied in real-time, - Experimental drug discovery through real-time analysis of large amounts of genetic data and many thousands of compounds, - Analysis of terabytes of genetic data captured in real-time as a neuron grows, learns, and remembers, and - Online learning algorithms using dynamic filter topologies with online computation requirements that increase over time. The activities have transformative goals, including the introduction of real-time, high-performance computation into closed-loop experiments and/or systems whose behavior is driven by complex processing of sensed data.
Another goal involves providing off-line computing capabilities to match the unprecedented rate and volume of genetic data produced by massively parallel DNA sequencing technology.
|
1 |
2008 — 2014 |
Fortes, Jose Figueiredo, Renato |
N/AActivity Code Description: No activity code was retrieved: click on the grant title for more information |
Center For Cloud and Autonomic Computing
This award establishes the Industry/University Cooperative Research Center (I/UCRC) for Autonomic Computing at the University of Florida, University of Arizona and Rutgers University. The I/UCRC will focus on multi-university research on improving the design and engineering of systems that are capable of running themselves, adapting their resources and operations to current workloads and anticipating the needs of their users. The center will work on improving the hardware, networks and storage, middleware, service and information layers used by modern industry.
The research performed at this center is important for U.S. industry to help maintain its lead in the information technology field. This I/UCRC will have a broad impact on the participating students and faculty through involvement with the industrial members. This center has the potential to develop new knowledge in this area that will increase US industrial competitiveness.
|
1 |
2009 — 2013 |
Fortes, Jose Figueiredo, Renato |
N/AActivity Code Description: No activity code was retrieved: click on the grant title for more information |
Collaborative Research, Ii-New: An Instrumented Data Center Infrastructure For Research On Cross-Layer Autonomics
This project's goal is to acquire and develop an instrumented datacenter testbed spanning the three sites of the NSF Center for Autonomic Computing (CAC): the University of Florida (UF), the University of Arizona (UA) and Rutgers, the State University of New Jersey (RU). Datacenters are a growing component of society's IT infrastructure, including services related to health, banking, commerce, defense, education and entertainment. Annual energy and administration costs of today's datacenters amount to billions of dollars; high energy consumption also translates into excessive heat dissipation, which, in turn, increases cooling costs and server failure rates. The proposed testbed will enable a fundamental understanding of the operations of data centers and the autonomic control and management of their resources and services. The design of the underlying infrastructure reflects the natural heterogeneity, dynamism and distribution of real-world datacenters, and includes embedded instrumentation at all levels, including the platform, virtualization, middleware and application layers. Its scale and geographical distribution enables studies of challenges faced by datacenter applications, services, middleware and architectures related to both "scale-up" (increases in the capacity of individual servers) and "scale-out" (increases in the number of servers in the system). This testbed will enable fundamental and far-reaching research focused on cross-layer autonomics for managing and optimizing large-scale datacenters. The participant sites will contribute complementary expertise: UA at the resource level, UF at the virtualization layer, and RU in the area of services and applications.
The collaboration between the university sites will bring coherence across ongoing separate research efforts and have a transformative impact on the modeling, formulation and solution of datacenter management problems, which have so far been considered mostly in terms of individual layers. The testbed will also provide a critical infrastructure for education at multiple levels: providing students with hands-on experience via course projects, enabling the development of new advanced multi-university and cross-disciplinary courses, and supporting multi-site group projects focused on end-to-end autonomics. Students from underrepresented groups will be actively involved in the research and their participation will be increased through ongoing collaborations with minority institutions. Even broader community participation will result from an evolving partnership with the recently proposed industry cloud initiatives.
|
1 |
2009 — 2011 |
Fortes, Jose Figueiredo, Renato |
N/AActivity Code Description: No activity code was retrieved: click on the grant title for more information |
Tie: Uf-Fiu Inter-I/Ucrc Collaboration to Explore Autonomic Computing For the Terrafly Server System
IIP - 0931517 Florida International University Rishe
IIP - 0932023 University of Florida Fortes
This proposal is to request supplemental funding to support a collaborative project between the I/UCRC Center for Advanced Knowledge Enablement (CAKE) at Florida International University (FIU), and the I/UCRC Center for Autonomic Computing (CAC) at the University of Florida (UF).
This TIE proposal brings together the two centers in a synergistic project that, on one hand, investigates the benefits of applying autonomic computing principles to the management of Web-based geospatial infrastructures, and on the other hand, provides autonomic computing researchers with access to data and expertise in the context of a rich IT environment representative of an important class of Web applications, thereby helping conduct quantitative, experimental systems research.
The proposed collaborative study will provide direct benefit to the I/UCRC stakeholders (industry partners) and help recruit additional members. Member organizations that provide servers, server software, and data center services would benefit from a direct technology transfer; and their customers in turn would benefit indirectly, from the improved application performance that would result from acquisition of software and services. The successful completion of this work will, therefore, aid companies and universities by allowing them to tap into this powerful tool rather than having to invent and deploy the same solution within their own organizations.
|
1 |
2009 — 2014 |
Fortes, Jose Li, Tao |
N/AActivity Code Description: No activity code was retrieved: click on the grant title for more information |
Shf: Small: Leveraging the Interplay Between Process Variation and Nbti in Nanoscale Reliable Noc Architecture Design
The trend towards multi-/many-core design has made network-on-chip (NoC) a crucial hardware component of future microprocessors. With the continuous down-scaling of CMOS processing technologies, reliability is becoming a primary target in NoC design. Negative Bias Temperature Instability (NBTI) is a critical reliability threat for deep sub-micrometer CMOS technologies. NBTI increases the PMOS transistor threshold voltage and reduces the drive current, causing failures in logic circuits and storage structures due to timing violations or minimum voltage limitations. Meanwhile, process variation (PV), the divergence of transistor process parameters from their design specifications caused by the difficulty of controlling sub-wavelength lithography and channel doping as CMOS manufacturing technology scales, results in variability in circuit performance/power and has become a major challenge in the design and fabrication of future microprocessors and NoCs. Since NBTI and PV affect both NoC delay and power, it is imperative to address these challenges at the NoC architecture design stage to ensure its efficiency as the underlying CMOS fabrication technologies continue to scale.
The goal of this project is to develop techniques for designing novel, cost-effective router microarchitectures and adaptive routing schemes that mitigate NBTI and PV impact on NoCs by leveraging the interplay between the two. The scalability and sustainability of future many-core processors crucially depend on the dependability of NoCs. Mechanisms that can simultaneously tolerate PV and NBTI will be investigated for enhancing the reliability of NoCs fabricated using nanoscale transistor technologies. The educational and outreach activities include recruiting graduate and undergraduate students from under-represented groups for this project and the integration of research and education.
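The NBTI degradation mechanism described above is commonly approximated in the literature by a power law in stress time, with the resulting threshold-voltage shift feeding into an alpha-power-law delay model. The sketch below is illustrative only, not the project's model; the prefactor, exponents, and function names are assumptions chosen for demonstration.

```python
def nbti_vth_shift(t_seconds, prefactor=3.0e-3, n=1.0 / 6.0):
    """Long-term NBTI threshold-voltage shift (volts) of a PMOS
    device under stress, using the power-law form dVth = A * t^n
    typical of reaction-diffusion models (n is often ~1/6). The
    prefactor depends on temperature, oxide field, and process."""
    return prefactor * (t_seconds ** n)

def gate_delay(vdd, vth, k=1.0, alpha=1.3):
    """Alpha-power-law gate delay: delay grows as Vth rises,
    which is how NBTI degrades circuit timing margins."""
    return k * vdd / ((vdd - vth) ** alpha)

# After ~3 years of stress, the accumulated Vth shift increases
# gate delay, eating into the timing margin of NoC router logic.
t_3yr = 3 * 365 * 24 * 3600.0
shift = nbti_vth_shift(t_3yr)
fresh = gate_delay(vdd=1.0, vth=0.30)
aged = gate_delay(vdd=1.0, vth=0.30 + shift)
print(aged > fresh)  # True
```

Because PV scatters the initial Vth across routers while NBTI adds a time-dependent shift on top of it, models of this shape are what make it possible to reason about the interplay the project targets.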
|
1 |
2009 — 2014 |
Fortes, Jose Grimshaw, Andrew Fox, Geoffrey [⬀] Keahey, Katarzyna Smith, Warren (co-PI) [⬀] |
N/AActivity Code Description: No activity code was retrieved: click on the grant title for more information |
Futuregrid: An Experimental, High-Performance Grid Test-Bed
This project provides a capability that makes it possible for researchers to tackle complex research challenges in computer science related to the use and security of grids and clouds. These include topics ranging from authentication, authorization, scheduling, virtualization, middleware design, interface design and cybersecurity, to the optimization of grid-enabled and cloud-enabled computational schemes for researchers in astronomy, chemistry, biology, engineering, atmospheric science and epidemiology. The project team will provide a significant new experimental computing grid and cloud test-bed, named FutureGrid, to the research community, together with user support for third-party researchers conducting experiments on FutureGrid.
The test-bed will make it possible for researchers to conduct experiments by submitting an experiment "plan" that is then executed via a sophisticated workflow engine, preserving the provenance and state information necessary to allow reproducibility.
The test-bed includes a geographically distributed set of heterogeneous computing systems, a data management system that will hold both metadata and a growing library of software images, and a dedicated network allowing isolatable, secure experiments. The test-bed will support virtual machine-based environments, as well as native operating systems for experiments aimed at minimizing overhead and maximizing performance. The project partners will integrate existing open-source software packages to create an easy-to-use software environment that supports the instantiation, execution and recording of grid and cloud computing experiments.
One of the goals of the project is to understand the behavior and utility of cloud computing approaches. Researchers will be able to measure the overhead of cloud technology by requesting linked experiments on both virtual and bare-metal systems. FutureGrid will enable US scientists to develop and test new approaches to parallel, grid and cloud computing, and compare and collaborate with international efforts in this area. The FutureGrid project will provide an experimental platform that accommodates batch, grid and cloud computing, allowing researchers to attack a range of research questions associated with optimizing, integrating and scheduling the different service models. The FutureGrid also provides a test-bed for middleware development and, because of its private network, allows middleware researchers to do controlled experiments under different network conditions and to test approaches to middleware that include direct interaction with the network control layer. Another component of the project is the development of benchmarks appropriate for grid computing, including workflow-based benchmarks derived from applications in astronomy, bioinformatics, seismology and physics.
The FutureGrid will form part of NSF's TeraGrid high-performance cyberinfrastructure. It will increase the capability of the TeraGrid to support innovative computer science research requiring access to lower levels of the grid software stack, the networking software stack, and to virtualization and workflow orchestration tools. Full integration into the TeraGrid is anticipated by 1st October 2011.
Education and broader outreach activities include the dissemination of curricular materials on the use of FutureGrid, pre-packaged FutureGrid virtual machines configured for particular course modules, and educational modules based on virtual appliance networks and social networking technologies that will focus on education in networking, parallel computing, virtualization and distributed computing. The project will advance education and training in distributed computing at academic institutions with less diverse computational resources. It will do this through the development of instructional resources that include preconfigured environments that provide students with sandboxed virtual clusters. These can be used for teaching courses in parallel, cloud, and grid computing. Such resources will also provide academic institutions with a simple opportunity to experiment with cloud technology to see if such technology can enhance their campus resources. The FutureGrid project leverages the fruits of several software development projects funded by the National Science Foundation and the Department of Energy.
|
0.957 |
2010 — 2012 |
Fortes, Jose Figueiredo, Renato |
N/AActivity Code Description: No activity code was retrieved: click on the grant title for more information |
Autonomic Middleware For Self-Protection, Data Transfers, and Anomaly Analytics as a Service
This proposal seeks funding for the Center for Autonomic Computing (CAC) studies conducted by the University of Florida site (lead) and the University of Arizona site. Funding Requests for Fundamental Research are authorized by an NSF approved solicitation, NSF 10-507. The solicitation invites I/UCRCs to submit proposals for support of industry-defined fundamental research.
This project proposal focuses on autonomic computing and systems, applicable to the important areas of security, fault management, and data centers. Prototypes will be deployed on testbed environments driven by industry requirements. The most interesting part of the proposal is its application to SCADA environments, integrating monitoring, multi-level behavior analysis, decision fusion and risk analysis relative to the security of SCADA environments. Other areas include autonomic peer-to-peer systems, as well as studies related to data centers, both at the application layer and at the data-center hardware level, where the focus is on robustness.
The successful completion of this project will represent a significant step toward the design and deployment of highly secure SCADA systems and networks. In addition to security, the autonomic detection of anomalies in applications or subsystems of a datacenter addresses inefficiencies in data center design by eliminating the dependence on over-provisioning, a resource-inefficient strategy that tries to ensure quality of service by offsetting any sub-system failures and malfunctions. The project also has potential for broader impact on data-intensive applications that leverage the availability of ad-hoc desktop grids for high-throughput computing.
|
1 |
2010 — 2012 |
Fortes, Jose |
N/AActivity Code Description: No activity code was retrieved: click on the grant title for more information |
Collaborative Research: Adaptive It Appliance For Collaborative Review of Child-Death Cases
Proposal #: 10-42644 & 10-42642 PI(s): Fortes, Jose & Winston, Flaura K.; Zonfrillo, Mark Institution: University of Florida & The Children's Hospital of Philadelphia Title: SpProj.: Collab Rsch: Adaptive IT Appliance for Collaborative Review of Child-Death Cases Project Proposed: These collaborative projects, investigating techniques for the creation, deployment, and management of cyberinfrastructure for the collaborative review of cases of child death, aim to identify strategies to improve children's safety. In this context, cyberinfrastructure encompasses the tools and services (encapsulated in an IT appliance) needed for capturing, communicating, authoring, viewing, sharing, controlling access to, storing, and conferencing about data and information regarding events resulting in children's deaths. The project builds on an existing functional IT appliance developed for the collaborative review of mechanisms of injury to children in motor vehicle crashes. By enabling its extension, this appliance could quickly be adapted to a more diverse range of causes of death, allowing for different kinds of participants with varying degrees of security and privacy. The work is expected to enrich the presentation of death scenarios for quicker analysis of their causes, leading to more efficient identification of potential prevention strategies. Expected contributions within the research thrusts include techniques for: - Automatic generation of interfaces, integration of components and services, and recovery of domain-specific collaborative IT appliances and - Fine-grained spatio-temporal access control of shared objects. Broader Impacts: This project addresses a real need for easy-to-use tools that reflect the semantics and workflow of collaborative activities undertaken by non-IT experts, such as professional teams engaged in child-death reviews. The project engages graduate students in advanced IT research. Minorities and women will be recruited and encouraged to apply.
|
1 |
2011 — 2017 |
Macfadden, Bruce (co-PI) [⬀] Fortes, Jose Page, Lawrence [⬀] Soltis, Pamela Riccardi, Gregory (co-PI) [⬀] |
N/AActivity Code Description: No activity code was retrieved: click on the grant title for more information |
Digitization Hub: a Collections Digitization Framework For the 21st Century
The accelerated loss of biodiversity, the "Biodiversity Crisis", is one of the leading environmental and social issues of the 21st century. Although natural biological diversity is fundamentally important to a healthy and sustainable planet, responses to the crisis have suffered from incomplete information and inadequate policies for sustainable use of natural resources. Information relevant to addressing the impact of declining biodiversity is housed within the nation's natural history and related biosystematic collections, but is inaccessible to most researchers and policy makers. The National Resource for Digitization of Biological Collections (Integrated Digitized Biocollections; iDigBio) was established to develop an integrated national infrastructure for digitization of existing biodiversity collections in the U.S. The iDigBio project team and infrastructure provides leadership, facilitates communication, and coordinates digitization and educational activities at TCNs (thematic collections networks) and many other biocollections throughout the U.S. Through iDigBio, data for millions of biological specimens are being made publicly available in electronic format via customized cloud computing. Diverse data, such as taxonomy, geographic location, 2- and 3-dimensional images, vocalizations, and molecular resources, are tied to the specimens themselves. These data promote integrative biodiversity research on living and fossil species and provide an immense baseline for assessing the impacts of climate change, invasive species, and other environmental issues. The iDigBio project is thus changing the culture of the biocollections community to make digitization a standard practice through new technologies, techniques, training, and standards of operation.
The iDigBio project serves several target audiences and stakeholders, including the biological research community, young biologists entering the profession, non-federal agencies, and the general public. These audiences are being engaged through diverse activities, including workshops, small focus groups, training sessions, symposia at professional meetings, on-line webinars, and related e-learning opportunities developed via the iDigBio web-site. University students are learning about, and are engaged in, this national digitization effort via courses, seminars, mentoring, online resources, and individual research. Graduate student assistantships and young faculty sabbaticals supported by iDigBio are broadening representation and diversifying the professional workforce. The iDigBio web-site, newsletter, and exhibits communicate advances, discoveries and opportunities to the public. The goals, activities, outputs, and outcomes of iDigBio are being evaluated by the Center for Informal Science Education at UF. Although primarily focused on the target audiences described above, the outputs and outcomes of iDigBio will also be available to federal agencies and international stakeholders. The long-term vision for iDigBio is a highly coordinated biocollections digitization community to serve the needs of diverse stakeholders, from the professional systematist to the general public, and communicate the importance of this national resource to advance research and education in the 21st century.
|
1 |
2011 — 2013 |
Fortes, Jose Figueiredo, Renato |
N/AActivity Code Description: No activity code was retrieved: click on the grant title for more information |
Collaborative Research: Unified Cloud Computing and Management
Center for Autonomic Computing (CAC) Proposal #1127965
This proposal seeks funding for the Center for Autonomic Computing (CAC) sites at the University of Florida (lead), Rutgers University, the University of Arizona, and Mississippi State University. Funding Requests for Fundamental Research are authorized by an NSF approved solicitation, NSF 10-601. The solicitation invites I/UCRCs to submit proposals for support of industry-defined fundamental research.
A unified cloud computing environment enabling anytime, anyplace access to limitless computing resources still represents an ideal given the current environment of heterogeneous resources distributed geographically and offered by vastly different service providers. The proposed effort spans the expertise and capabilities of the four center sites. The work aims to apply autonomic computing principles to address elements such as security, inter-cloud networking, and resource provisioning via thermal sensing and model-based adaptive performance control in order to create a framework for realization and management of trustworthy unified cloud computing environments. Results will be derived from the implementation of the proposed approaches on a planned distributed cloud testbed.
Advancement of the cloud computing paradigm has the potential to enable transformative change in user access to information technology, enabling major advances in economic productivity and access to a broad range of new services. The proposed center effort works with a small member company and a large systems-integrator member. Industry will further benefit via dissemination of the results through the center membership and the extension of the center's project portfolio into this area. Efficiency gains realized at cloud computing centers from the proposed efforts have the potential to reduce energy costs system-wide. The work plans to bring together a distributed cloud environment deployed across CAC sites to establish an open testbed for research and development of inter-cloud interoperability, which could serve as a resource for the work of the broader community in this important area.
|
1 |
2011 — 2013 |
Fortes, Jose Figueiredo, Renato Tsugawa, Mauricio [⬀] |
It Virtualization For Disaster Mitigation and Recovery
Proposal #: CNS 11-39707 PI(s): Tsugawa, Mauricio; Figueiredo, Renato J.; Fortes, Jose A. Institution: University of Florida Title: IT Virtualization for Disaster Mitigation and Recovery Project Proposed: This RAPID project conducts research on the use of virtualization technologies to aid the recovery of Information Technology (IT) infrastructure damaged by catastrophic events, including the IT infrastructure needed to recover from damage to non-IT infrastructures and to people. Machine virtualization offers key mechanisms to move applications from one location potentially affected by a disaster (e.g., a data center) to another, safe location. The project responds to challenges such as: - the inability to migrate Virtual Machines (VMs) from a disaster site to an unaffected site while maintaining live services; - severe power or network failures that limit the ability to perform live migrations; and - the need for coordination with recovery efforts to effectively prioritize critical services. Machine virtualization also offers the ability to checkpoint VMs, thus enabling the creation of backups not only of data but also of partial application executions. VM checkpoints can be used to recover an IT infrastructure in a different location with minimal loss of data. The challenge lies in how to efficiently manage the massive amount of data and network traffic generated by the VM checkpoint process. With the main goals of keeping IT services alive as long as possible and restoring recovery-critical IT services as quickly as possible during and after a disaster, the project focuses on: - analyzing data and events associated with IT services damaged by the Great East Japan Earthquake; - studying the scalability of wide-area VM live migration and backup/checkpointing; and - developing an architecture resilient to partial physical infrastructure failure in order to deploy IT infrastructures in virtualized and distributed datacenters.
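The abstract identifies the efficient management of checkpoint data and network traffic as the core challenge. A minimal sketch of one common traffic-reduction idea, content-addressed deduplication of checkpoint pages, is shown below; this is an illustrative technique, not necessarily the project's approach, and all names are hypothetical.

```python
# Hypothetical sketch: content-addressed deduplication of VM checkpoint pages.
# A checkpoint is modeled as a list of fixed-size memory pages; only pages whose
# hashes the recovery site has not yet stored need to cross the WAN.
import hashlib

def pages_to_transfer(checkpoint_pages, remote_hashes):
    """Return the subset of pages the recovery site is missing."""
    missing = []
    for page in checkpoint_pages:
        digest = hashlib.sha256(page).hexdigest()
        if digest not in remote_hashes:
            missing.append(page)
            remote_hashes.add(digest)  # the remote site now stores this page
    return missing

# The first checkpoint sends everything; the second sends only changed pages.
remote = set()
ckpt1 = [b"A" * 4096, b"B" * 4096, b"C" * 4096]
ckpt2 = [b"A" * 4096, b"B" * 4096, b"D" * 4096]  # one page dirtied since ckpt1
sent1 = pages_to_transfer(ckpt1, remote)
sent2 = pages_to_transfer(ckpt2, remote)
print(len(sent1), len(sent2))  # 3 pages first, then only 1
```

Successive checkpoints of a mostly idle VM thus cost bandwidth roughly proportional to the dirtied pages rather than to total memory size.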
The investigators collaborate with Dr. Satoshi Sekiguchi, Director of the Information Technology Research Institute (ITRI) within the National Institute of Advanced Industrial Science and Technology (AIST), an institution under the Ministry of Economy, Trade, and Industry (METI), Japan. This group consists of experts in the area of virtualization and has had prior interactions with the Florida group. Broader Impacts: The work develops an understanding of how well virtualized IT systems can cope with partial physical damage, of what changes in hardware, software, and general practice are needed, and of how to determine the best way to adopt them. In the long term, the project should inform the adoption of virtualized datacenters to host essential IT services. Hence, the project is likely to enable informed decisions and should also contribute to graduate student education.
|
1 |
2012 — 2014 |
Deumens, Erik Fortes, Jose Avery, Paul Li, Xiaolin (co-PI) [⬀] Asseng, Senthold |
Cc-Nie Network Infrastructure: 100gig Connection to Florida Lambdarail (Flr)
The project is one crucial building block in a comprehensive strategy to create a highly functional cyberinfrastructure for researchers at the University of Florida (UF) and their collaborators in the state, the nation, and the world. Supporting collaborations means moving large amounts of data or providing interactive access to data and processing capability from remote locations. The University of Florida and its partner institutions in SSERCA (Sunshine State Education and Research Computing Alliance) are creating a collaborative framework on top of the Florida Lambda Rail (FLR) for geographically dispersed research teams to easily transfer data between institutions, analyze the same data shared via file systems mounted at their respective high-performance computing resources, and share data with colleagues across the world, with appropriate authentication and authorization as needed.
The award funds the acquisition of a switch capable of transmitting at 100 Gbps to connect the UF campus research network to the FLR regional network and to the newly built 100 GigE Internet2 network. Ongoing collaborative projects that will benefit directly from this high bandwidth include: 1) the Compact Muon Solenoid experiment at the Large Hadron Collider, searching for the Higgs boson; 2) the Agricultural Model Intercomparison and Improvement Project, which enhances the capability to accurately model both long-term climate evolution and crop growth, allowing decision makers to prepare for ever-changing weather patterns; and 3) Integrated Digitized Biocollections, a project to create the infrastructure for a national resource providing a searchable collection of data.
|
1 |
2012 — 2014 |
Fortes, Jose Matsunaga, Andrea [⬀] |
Enabling Continued Operation of It Services and Infrastructures During Floods and Other Disasters
Proposal #: CNS 12-40171 PI(s): Matsunaga, Andrea; Fortes, Jose A. Institution: University of Florida Title: Rapid: Enabling Continued Operation of IT Services & Infrastructures during Floods & Other Disasters Project Proposed: This RAPID project conducts research on the use of virtualization technologies, studying the effectiveness of virtualized Internet data centers in improving IT service continuity during and after a disaster through virtual machine (VM) live migration and backup/checkpointing. These technologies enable mitigating and recovering from the impact of catastrophic events on IT infrastructures and the services they deliver. Working with Thammasat University in Thailand, Internet Data Centers (IDCs) will be leveraged as disaster recovery sites, where government and corporate data can be backed up and operational servers can be temporarily located in order to provide high availability and resiliency for the organizations' operations and services. The catastrophic 2011 Thailand flood raised many issues concerning the disruption of operations and services provided by various organizations, issues whose study can contribute to handling future catastrophic events. Several research questions have arisen: - the need to assess existing infrastructures, since suitable solutions depend on the type of disaster and the realities of the IT environments in the disaster locations; and - the need to address challenges when migrating VMs across geographic locations, given that existing VM migration technologies have been developed under local-area-network assumptions that do not hold true in disaster recovery scenarios.
Thus, the work leverages existing infrastructure and experience in machine virtualization technologies and cloud computing deployments to conduct realistic experiments that assess the effectiveness of mechanisms offered by existing virtualization technologies in maximizing the availability of services and minimizing the costs of maintaining and/or recovering services during and after a disaster. Research efforts will be developed in the following thrusts: - collection and analysis of data related to IT services damaged by the 2011 Thailand flood; - studies of the nature of the IT services and their infrastructure design; - studies of the practicality and scalability of VM live migration and backup/checkpointing in wide-area settings; and - investigation of virtualization-based resilient middleware architectures for service continuity. Broader Impacts: This project will advance our understanding of how to provide robust middleware for protection and recovery of IT infrastructure that performs well for different types of disasters. It will also inform policy-makers and IT managers in Thailand and the USA on how to evaluate and integrate emerging commercial virtualization solutions for backup and/or recovery-oriented computing systems under the extreme conditions found during and after a disaster. The research supports a female minority PI and a US graduate student. The project has been submitted for co-funding from the Thailand Research Fund (TRF), which, if successful, would represent a precedent-setting partnership between NSF and TRF and could be a model for future collaboration.
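One reason the local-area-network assumptions mentioned in the abstract fail over wide-area links is that pre-copy live migration must outrun the VM's page dirty rate. A back-of-envelope sketch with hypothetical numbers (not the project's actual model) illustrates the convergence problem:

```python
# Hypothetical back-of-envelope model of pre-copy live migration: each round
# retransmits the pages dirtied during the previous round. Migration converges
# only if link bandwidth exceeds the dirty rate; WAN links often violate this.
def precopy_rounds(mem_bytes, bw_bytes_s, dirty_bytes_s, stop_threshold, max_rounds=30):
    """Rounds until the remaining dirty data fits the stop-and-copy
    threshold, or None if migration never converges."""
    if dirty_bytes_s >= bw_bytes_s:
        return None  # dirtying outpaces the link
    remaining = mem_bytes
    for r in range(1, max_rounds + 1):
        transfer_time = remaining / bw_bytes_s
        remaining = dirty_bytes_s * transfer_time  # dirtied while copying
        if remaining <= stop_threshold:
            return r
    return None

GB, MB = 1024**3, 1024**2
# 4 GB VM, 50 MB/s dirty rate, 64 MB stop threshold:
lan = precopy_rounds(4 * GB, 125 * MB, 50 * MB, 64 * MB)   # ~1 Gb/s LAN
wan = precopy_rounds(4 * GB, 12.5 * MB, 50 * MB, 64 * MB)  # ~100 Mb/s WAN
print(lan, wan)  # converges in a few rounds on the LAN, never on the WAN
```

With these (assumed) parameters the LAN migration converges in five rounds while the WAN case never does, which is why WAN scenarios motivate checkpoint-based approaches instead.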
|
1 |
2012 — 2016 |
Fortes, Jose |
Second Workshop On Instrumentation Needs of Computer and Information Science and Engineering (Incise2) Research
Proposal #: CNS 12-32197 PI(s): Fortes, Jose A. Institution: University of Florida Title: Second Workshop on Instrumentation Needs of Computer and Information Science and Engineering (INCISE2) Research Project Proposed: This proposal plans a second workshop to examine more deeply the nature, needs, importance, challenges, current changes, and funding mechanisms of instrumentation development, acquisition, utilization, and sharing for ongoing and/or anticipated research in different CISE areas. It brings together recognized CISE research leaders to conduct such an assessment and produce a report that can be shared with the CISE community, colleagues, academic administrators, government funding agencies, and industry. CISE research is increasingly concerned with extremely large and complex objects whose behavior cannot be entirely distilled from first principles or investigated using reduction models. Among other factors, scale is a consequence of Moore's law and pervasive networking. Complexity arises from the embedding of computing into artifacts, the interconnection of many components, and/or multiple layers of functionality. Research needed to design and/or model such objects often requires special instruments, either to peer into individual components at very small space/time granularity or to observe/emulate/simulate many objects at large enough scale and over long enough times. The findings of the first INCISE workshop, held in 2008, confirmed that research instrumentation - broadly construed to include hardware, software, datasets, and services - is essential for the advancement of CISE disciplines. Since then we have witnessed the emergence and rapid maturation of cloud computing, global IT systems, community-oriented instruments, increasingly IT-rich (a.k.a. "smarter") systems, and cyberinfrastructures for scientific research.
These profound changes in both the nature of IT systems and how IT systems can be provisioned call for a follow-up meeting to revisit the scope and needs of CISE instrumentation and to discuss strategies for enabling researchers to create and access the instruments needed by their projects on a sustainable basis. This proposal requests funds for organizing such a follow-up meeting, INCISE 2, with diverse participants drawn from several areas of CISE research. INCISE 2 will take place on the two days preceding the 2012 CRA Snowbird conference, whose dates are July 22-24, 2012. Intellectual Merit: Among others, the workshop aims to address the following questions: 1. Is instrumentation-as-a-service, as provided by clouds or other means, appropriate for CISE research? Where the answer is yes, the workshop will further investigate and characterize applicable cases and what can be done; where the answer is no, it will investigate the limitations of instrumentation-as-a-service and suggest steps to address them. 2. As instrumentation needs and offerings take on a global or regional nature, how are such needs best met, and how are existing global and regional cyberinfrastructures best used for instrumentation purposes? Can community awareness of the existence and uses of these cyberinfrastructures be improved? What are the relationships between cyberinfrastructure and instrumentation? 3. What are the main challenges currently faced by CISE instrumentation? For example, when is reproducibility of experiments needed, possible, and/or affordable? Are standards emerging for CISE instrumentation? What needs to be done to ensure the sustainability of instrumentation solutions for CISE communities? Broader Impact: The workshop expects to provide a valuable assessment of the evolving instrumentation needs of CISE research and how they can best be met.
It will make recommendations to both the community and the funding agencies regarding the funding mechanisms necessary to meet the identified needs. The recommendations will be widely disseminated, including to institutions with large enrollments of minority students and underrepresented groups. An expected impact of this activity is an increased ability of researchers at these institutions to compete for instrumentation funds, and a better understanding of the nature of CISE instrumentation by academic administrators and funding agencies. The participants will be diverse with respect to discipline, gender, ethnic group, and geographical distribution in order to bring different representative perspectives to bear on the conclusions of the workshop. A verbal report will be made to CISE department chairs at the CRA workshop, and a written report will be made available to NSF and CISE departments in more than 200 universities. IT is increasingly being embedded into artifacts that may include non-IT components of mechanical, electrical, communication, energy-production, chemical, transportation, entertainment, medical, and defense natures; thus, instrumentation touches society overall. Over the last two decades, CISE research instrumentation needs, and the mechanisms to address them, have changed as a reflection of the evolution of IT technology, both from the standpoint of the research challenges to be faced and of the instruments enabled. The remarkable progress of computer and information technologies (IT) has led to the current era of microprocessors with billions of transistors, software environments with millions of lines of code, multi-layered IT systems, and networks of thousands of computers, users, and applications. As a consequence, the objects computer scientists and engineers study often have unprecedented scale and complexity.
CISE instruments often grow by connecting many other artifacts (in some cases, on the fly), leading to complexity that cannot be mastered by any single designer or user of those objects. Thus, an urgent need exists for such an assessment.
|
1 |
2013 — 2016 |
Fortes, Jose Figueiredo, Renato Tsugawa, Mauricio (co-PI) [⬀] |
Eager: Collaborative Research: Model-Based Autonomic Cloud Computing Software Technology
The project entails research on software engineering (e.g., mapping requirements to software architectures, software architecture models and their adaptation, software refactoring, adaptable/autonomic middleware), networking (software-defined networking, ad-hoc peer-to-peer networks, social private networks), and computer systems (virtualization, distributed computing). The work will also provide techniques and middleware for distributed cyberinfrastructure supportive of mobile-hosted apps for science and education (e.g., middleware for virtual networking, social collaboration, resource virtualization, and virtual appliances for different domains). By studying the mobile-application-induced workload at the lowest layers of the systems software stack, such as the virtual machine monitor (VMM), the project will identify how these layers can be improved to better support such workloads.
The project will advance the fundamental understanding of multiple technical issues faced in enabling flexible end-to-end management of cloud-supported mobile applications. Methods are needed to (re)implement applications so that their software architectures can be reconfigured in response to changes in requirements as the environment changes, and to allow runtime refactoring of components for execution locally or remotely. Virtual networking techniques are needed to enable seamless offloading of computation from a device to remote resources possibly provided by multiple clouds. In addition, both software adaptation and network configuration must be done on demand.
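The runtime refactoring of components for local or remote execution described above typically hinges on a cost comparison between running in place and shipping the work to a cloud. A toy sketch of such an offloading decision follows; all parameter values and names are hypothetical, and real systems would also weigh energy, reliability, and pricing.

```python
# Hypothetical sketch of a runtime offloading decision: run a component on the
# mobile device or refactor it to a cloud resource, based on estimated time.
def should_offload(cycles, local_hz, remote_hz, input_bytes, output_bytes,
                   bw_bytes_s, rtt_s):
    """True if remote execution (including data transfer) beats local."""
    local_time = cycles / local_hz
    remote_time = rtt_s + (input_bytes + output_bytes) / bw_bytes_s + cycles / remote_hz
    return remote_time < local_time

# A compute-heavy task with a small payload is worth offloading...
heavy = should_offload(5e10, 1e9, 2e10, 1e5, 1e4, 1e6, 0.05)
# ...while a light task with a large payload is not.
light = should_offload(1e8, 1e9, 2e10, 1e8, 1e7, 1e6, 0.05)
print(heavy, light)  # True False
```

The same comparison, re-evaluated as bandwidth and latency change, is one simple way an autonomic middleware could adapt the local/remote split on demand.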
|
1 |
2015 — 2018 |
Fortes, Jose Soltis, Pamela Soltis, Douglas [⬀] Matsunaga, Andrea (co-PI) [⬀] |
Collaborative Research: Abi Innovation: Connecting Resources to Enable Large-Scale Biodiversity Analyses
Extracting biological knowledge from complex datasets such as those now being compiled requires the integration of powerful computational tools. Recent developments in computational biology, as well as rich new data sources, provide novel opportunities for integrating massive amounts of biological data. This perfect storm of new data and advanced data acquisition, management, and integration affords a unique opportunity to drive the discovery of new, complex patterns in biology. The project will leverage NSF's considerable investment in biodiversity tools provided by Open Tree of Life (the framework for the project), Lifemapper (which handles geospatial data), iDigBio (data from ~1 billion museum specimens that carry locality data and their ecological information), and Arbor (computational tools that permit new analyses from the sources noted). It will create much-needed computational connections among these tools. It will then build upon these new linkages and tools, enabling novel research in biodiversity. These linkages will provide researchers the opportunity to rapidly synthesize datasets and use them to address diverse evolutionary questions. The tools and infrastructure the project will build will connect species relationships with species distribution models, climate projections, genes, and traits. The project will transform future studies of biodiversity; it will provide a global integration of powerful tools that will permit new data-driven discovery in "next generation" biodiversity science. It will provide interdisciplinary post-doc and graduate student training in bioinformatics, use of digitized specimen data, and complex analyses (e.g., ecological analyses), preparing the biodiversity scientists of the future. The project will recruit underrepresented students and women and develop an undergraduate course that will help train students with the integrative skills (from field biology to computational biology) needed in the workforce.
We will further develop this module for wider classroom use. We will introduce an annual week-long course at University of Florida (UF) for students and post-docs on the use of the resources developed. With education specialists at UF, the project will produce video materials and a coordinated display for general audiences on the importance of digitized specimen data, and their utility for studies of climate change.
The project will develop a computational framework linking diverse data (trees of species relationships, morphology, ecology, fossils, geography, and climate) across research tools used by the biological community, including Open Tree of Life, which will serve as the framework to which all other biological data - traits, genes, genomes, and especially specimens - will be linked, as well as Lifemapper, iDigBio, and Arbor. Use of the large, hyper-diverse plant group Saxifragales provides precisely what is needed to drive the development of these tools: a comprehensive dataset covering morphology, ecology, geography, fossils, and climate that serves as a test case for refining the tools the project will develop and their integration. The project will: 1. Facilitate new synergistic research of broad utility at the interface of phylogenetics, ecology, evolutionary biology, biogeography, and biodiversity science, enabling scientists to address novel questions relating phenotypic and ecological biodiversity, spatial and temporal variation, community assembly, and diversification across landscapes and through time. 2. Increase the visibility and accessibility of iDigBio, Open Tree of Life, Arbor, and Lifemapper resources by linking them together and making them available through multiple access points (e.g., pre-existing tools associated with Arbor and Lifemapper) in a variety of appropriate formats. 3. Develop a complete, multifaceted species-level dataset for a large clade (Saxifragales), which will not only fill in this branch of the Tree of Life but will also produce a resource of great utility for the scientific community to explore. 4. Demonstrate the utility of iDigBio, Open Tree, Lifemapper, and Arbor resources with a comprehensive analysis using near-complete sampling of Saxifragales, adding the following data layers: DNA sequences, morphology, fossils, ontologies, geospatial and environmental data, and digitized voucher specimens, and linking to the Encyclopedia of Life (EOL). The project will: 1) provide interdisciplinary post-doc and graduate student training in bioinformatics, large-scale phylogeny reconstruction, use of digitized specimen data, and complex post-tree analyses (e.g., niche modeling, niche diversification), preparing the integrative biodiversity scientists of the future; 2) recruit underrepresented students and women; 3) develop an undergraduate course that uses field collection, herbarium specimens, digitized data (iDigBio), and niche modeling (with climate change); 4) introduce an annual week-long course at UF for students and post-docs on the use of the resources produced; and 5) produce video materials and a coordinated display for general audiences on the importance of digitized specimen data and their utility for modeling niche evolution through time, with implications for climate change. The project will provide a platform that will enable other researchers to take the same integrated approach in other groups. It will also establish web links to EOL and 1) build species pages; 2) place morphological and other trait data on TraitBank, making these widely available; and 3) work with EOL and iNaturalist to engage citizen scientists.
|
1 |
2015 — 2018 |
Fortes, Jose Tsugawa, Mauricio (co-PI) [⬀] Matsunaga, Andrea (co-PI) [⬀] |
Si2-Sse: Human- and Machine-Intelligent Software Elements For Cost-Effective Scientific Data Digitization
In the era of data-intensive scientific discovery, Big Data scientists in all communities spend the majority of their time and effort collecting, integrating, curating, transforming, and quality-assessing data before actually performing discovery analysis. Some endeavors must even start from information that is not available or accessible in digital form; when it is available, it is often non-structured and incompatible with analytics tools that require structured, uniformly formatted data. Two main methods of dealing with the volume and variety of data, and of accelerating the rate of digitization, have been crowdsourcing and machine-learning solutions. However, very little has been done to simultaneously take advantage of both types of solutions, or to make it easier for different efforts to share and reuse the software elements they develop. The vision of the Human- and Machine-Intelligent Network (HuMaIN) project is to accelerate scientific data digitization through fundamental advances in the integration and mutual cooperation of human and machine processing, in order to overcome practical hurdles and bottlenecks in scientific data digitization. Even though HuMaIN concentrates on digitization tasks faced by the biodiversity community, the software elements being developed are generic in nature and expected to be applicable to other scientific domains (e.g., searching the surface of the moon for craters requires the same type of crowdsourcing tool as finding words in text, and the same question of whether machine-learning tools could provide similar results can be tested).
The HuMaIN project proposes to conduct research on, and develop, the following software elements: (a) configurable machine-learning applications for scientific data digitization (e.g., Optical Character Recognition and Natural Language Processing), which will be made automatically available as RESTful services, increasing the ability of HuMaIN software elements to interoperate with other elements while decreasing software development time via a new application specification language; (b) workflows leading to a cyber-human coordination system that will take advantage of feedback loops (e.g., based on consensus of crowdsourced data and its quality) for self-adaptation to changes and increased sustainability of the overall system; (c) new crowdsourcing micro-tasks that are reusable across a variety of scenarios and contain user-activity sensors for studying time-effective user interfaces; and (d) services to support automated creation and configuration of crowdsourcing workflows on demand to fit the needs of individual groups. A cloud-based system will be deployed to provide the necessary execution environment with traceability of the service executions involved in cyber-human workflows, and cost-effectiveness analysis of all the software elements developed in this project will provide assessment and evaluation of long-standing what-if scenarios pertaining to human- and machine-intelligent tasks. Crowdsourcing activities will attract a wide range of users with tasks that require low expertise, and at the same time will expose volunteers to applied science and engineering, potentially attracting the interest of K-12 teachers and students.
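As one concrete illustration of the consensus-based feedback loop in element (b), a minimal majority-vote rule for accepting a crowdsourced transcription field is sketched below. The policy (normalize answers, require a minimum number of votes and a minimum agreement fraction) is an assumed example, not HuMaIN's actual algorithm.

```python
# Hypothetical sketch of a crowdsourcing consensus rule: a transcription field
# is accepted once enough independent volunteer answers agree; otherwise the
# task is recirculated to collect more answers.
from collections import Counter

def consensus(answers, min_votes=3, min_agreement=2/3):
    """Return the agreed value, or None if the task needs more answers."""
    if len(answers) < min_votes:
        return None
    value, count = Counter(a.strip().lower() for a in answers).most_common(1)[0]
    return value if count / len(answers) >= min_agreement else None

print(consensus(["Quercus alba", "quercus alba ", "Quercus alba"]))   # accepted
print(consensus(["Quercus alba", "Quercus rubra", "Quercus lobata"]))  # recirculate
```

Fields that fail consensus are exactly the ones a cyber-human workflow could route to an expert or to a machine-learning service for a tie-breaking answer.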
|
1 |
2015 — 2018 |
Fortes, Jose |
Us-Ea Centra: Us - East Asia Collaborations to Enable Transnational Cyberinfrastructure Applications
This project supports collaborative cyberinfrastructure (CI) research activities designed to educate a new generation of researchers who are technically and culturally competent to engage with international scientific networks. The goal of this project is to advance the scientific understanding of distributed software-defined cyberinfrastructures and the ability to run scientific applications using data and tools in different countries. Specifically, the targeted applications are in environmental modeling, disaster management, and smart cities, focusing on how these domains impact each other. The related middleware research seeks solutions for software-defined data sharing, middleware interoperability via software-definition, and coordinated software-definition of distributed IT systems. The project pursues rigorous understanding and solution of these scientific problems via international collaborations that bring out the real-world contexts of transnational applications and create transnational cyberinfrastructure instances from which researchers gain practical insight.
Junior U.S. Ph.D. candidate researchers are expected to have short stays at collaborating sites in different countries, working on project-related topics with direct access to leading-edge facilities, local instances of global problems, and top scientists working on those problems, while being immersed in international team activities. Annual workshops help conceptualize, frame, advance, and report on collaborative research projects and contribute to establishing the coordination framework. Key institutional partners in East Asia include the National Center for High-Performance Computing of the National Applied Research Laboratories of Taiwan and the National Institute of Information and Communications Technology of Japan. The project contributes toward a framework to coordinate CI-based and CI-enabled research with East Asian partners in order to achieve scientific progress and engage junior researchers in international activities.
|
1 |
2016 — 2021 |
Macfadden, Bruce (co-PI) [⬀] Fortes, Jose Page, Lawrence [⬀] Soltis, Pamela Riccardi, Gregory (co-PI) [⬀] |
Digitization: Idigbio: Integrated Digitized Biocollections Phase 2
Integrated Digitized Biocollections (idigbio.org) at the University of Florida (UF) and Florida State University (FSU) is the national resource for Advancing Digitization of Biodiversity Collections (ADBC). The ADBC program was developed in response to the need for a sustained effort to digitize, and make available online, the vast amount of information in the nation's biodiversity collections, which may contain up to 1 billion specimens. For biological specimens, the digitized information includes names of species, localities and dates of collection, digital photographs, sound, video, and 3-D models created from a variety of sources. This community digitization effort is supported by ADBC through Thematic Collections Networks (TCNs), groups of institutions that digitize specimens to address a major research topic such as the relationship between agricultural crops and insects or the impacts of invasive species on natural ecosystems. iDigBio assists in coordinating the activities of the TCNs by facilitating the development of standards and workflows for digitization of specimens and related information, providing cyberinfrastructure resources to enable long-term preservation of digital data, promoting novel and traditional uses of collections data in research and outreach, and working with the collections community to plan for the long-term sustainability of the national program and the resources it has produced. The availability of digitized information about specimens greatly enhances the ability to conduct research on biological diversity and to address some of the most fundamental questions in biology.
In the past five years, the national effort to digitize information in the nation's biodiversity collections has been successfully catalyzed by funding from NSF and advanced by activities of iDigBio. Collaborations with data providers and users have been developed, goals and priorities defined, and best practices related to digitization identified. Cyberinfrastructure resources, including a national search portal, have been provided. These community-driven activities have led to improved digitization practices, increased involvement in digitization and training, and adoption of instruments and informatics tools that improve the efficiency and scalability of digitization and research workflows in all types of biodiversity collections. iDigBio works with staff in more than 450 collections in 270 institutions distributed across all 50 states. Communication among stakeholders to increase access to collections data has been established through workshops, webinars, and the use of social media. iDigBio has sponsored the attendance of more than 2,200 participants from 511 institutions at over 65 workshops and symposia targeting digitization. iDigBio has ingested more than 670 record sets containing 48 million records and 13 million images. All ingested data are indexed so that queries and other types of index-based access are supported. Searches for data can be done through a Web-based graphical interface or through programmatic APIs. Search and analytical tools enable users to mine diverse data such as taxonomy, location, images, traits, and vocalizations. Funding for another five years will allow iDigBio to continue its successful strategies with an increasing emphasis on data improvement and use in research and outreach. This national resource created by the ADBC program is increasingly recognized as an essential resource for information on biodiversity and digitization.
As the scientific and societal benefits of validated collections data are realized, digitization will become a common and sustained practice in natural history collections.
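The programmatic APIs mentioned above can be reached over plain HTTP. As a minimal sketch, the snippet below builds a query URL for iDigBio's public v2 search endpoint, where the `rq` parameter carries a JSON-encoded record query; the specific field name (`genus`) and limit value here are illustrative, not prescribed by the abstract.

```python
import json
import urllib.parse

# Public iDigBio v2 search endpoint (search.idigbio.org).
IDIGBIO_SEARCH = "https://search.idigbio.org/v2/search/records/"

def build_records_query(record_query: dict, limit: int = 10) -> str:
    """Build a search URL; `rq` is a JSON-encoded record query."""
    params = {"rq": json.dumps(record_query), "limit": limit}
    return IDIGBIO_SEARCH + "?" + urllib.parse.urlencode(params)

# Example: look up a handful of records for the genus Acer (maples).
url = build_records_query({"genus": "acer"}, limit=5)
print(url)
# The URL can then be fetched with any HTTP client to receive JSON results.
```

Constructing the URL separately from fetching it keeps the sketch testable without network access; in practice the same query can also be issued through iDigBio's Web portal.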
|
1 |
2018 — 2020 |
Fortes, Jose |
N/A |
Eager: Towards the Web of Biodiversity Knowledge: Understanding Data Connectedness to Improve Identifier Practices
Biodiversity research investigates the variety and variability of life on Earth. This field of science crosses many research disciplines such as genetics, studies of organisms, plants and animals, habitats and ecosystems, and their interactions. A long-standing challenge for biodiversity researchers is to find, access, "mine", and integrate complex and diverse information from those disciplines. New approaches have now become possible with the increasing availability of "big data" techniques and infrastructure. This project will explore and employ such advanced techniques for retrieval and mining of a wide range of available open biodiversity data sources, with the aim of generating an improved holistic picture or "knowledge graph" of Earth's biodiversity. The project will also identify the data practices and discovered relationships that were needed to accomplish this graph-building task, with the aim of informing the development of future data systems and training on these techniques.
Many attempts have been made to link together biodiversity knowledge using linked identifiers coupled with data standards and taxonomies, but satisfactory results with such "exact matching" approaches have been elusive. This project aims to develop new methods of relating records across datasets that do not rely on matching identifiers but instead employ inferred rather than explicit relationships between data records. This is an experimental approach that has not yet been attempted at scale. Linkages between publicly available biodiversity, genetic, literature, and other data will be explored; and software infrastructure will be developed to combine and link multiple biodiversity datasets. Another goal is to quantify the relationship between identifier practices and the ability to construct links between available biodiversity, genetic, literature, and other data. This project will draw on and complement other large ongoing collaborative efforts that contribute to broad integration of biodiversity knowledge, data science, and infrastructure such as the Encyclopedia of Life (EOL) and the NSF-supported iDigBio project. The ultimate aim is to understand which data practices provide the most value to the biodiversity community and thereby inform policy, standards, and training on identifiers. This, in turn, can enable the exploration of new fundamental and cross-disciplinary research questions, and potentially improve practices of a wide range of US and international data aggregators and data producers.
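The inferred-linkage idea above can be illustrated with a toy sketch: instead of joining on shared identifiers, candidate links between two datasets are proposed from the similarity of overlapping attributes. The field names, weights, and threshold below are hypothetical choices for illustration only, not the project's actual method.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Normalized string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

def infer_links(specimens, sequences, threshold=0.75):
    """Propose links between records from two datasets that share no
    identifiers, scored on similarity of name and locality fields."""
    links = []
    for s in specimens:
        for q in sequences:
            score = (0.7 * similarity(s["name"], q["organism"])
                     + 0.3 * similarity(s["locality"], q["locality"]))
            if score >= threshold:
                links.append((s["id"], q["id"], round(score, 3)))
    return links

# Hypothetical specimen and sequence records with no shared identifier.
specimens = [{"id": "spec-1", "name": "Acer saccharum",
              "locality": "Gainesville, FL"}]
sequences = [{"id": "seq-9", "organism": "Acer saccharum Marshall",
              "locality": "Gainesville, Florida"}]
print(infer_links(specimens, sequences))
```

At scale, such pairwise scoring would need blocking/indexing to avoid the quadratic comparison cost, which is part of what makes the approach experimental.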
This project is supported by the National Science Foundation's Public Access Initiative which is managed by the NSF Office of Advanced Cyberinfrastructure on behalf of the Foundation.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
|
1 |
2019 — 2022 |
Soltis, Douglas [⬀] Fortes, Jose Soltis, Pamela |
N/A |
Cibr: Collaborative Research: Integrating Data Communities With Biotaphy: a Computational Platform For Data-Intensive Biodiversity Research and Training
Heightened awareness of the importance of maintaining ecologically diverse natural areas has increased urgency for scientists to identify and explain the biological dimensions and services of the environment that are vital for supporting human populations and sustaining prosperity. Biological diversity, or the distribution, function, and interdependency of species, is a vital dimension of natural systems upon which the sustainability of human systems depends. The biological diversity of Earth's natural habitats has been inventoried for over 300 years by intrepid biological explorers. The species information produced by those surveys, derived directly from biological specimens, tissues, and DNA samples, is now being rapidly amassed in internet information systems. The data represent a monumental and unparalleled source of information about the origin, distribution, and diversity of life on Earth. With over a billion species occurrence database records online, internet-accessible information on the evolutionary relationships of millions of species in the online Tree of Life database, online global climate data, and supercomputing capabilities, biodiversity researchers are enabled, for the first time, to analyze questions about the ecological and evolutionary factors that shape patterns and properties of biodiversity on the scale of continents and the entire globe. That knowledge will produce insight for public understanding and policy vital to the conservation and sustainability of diverse natural and human systems. In addition to facilitating research, the BiotaPhy Project is committed to the development of a scientifically literate public and plans educational, training, and outreach activities to address multiple constituencies. It will provide collaborative post-doc, graduate, undergraduate, and high school training in data-intensive biodiversity science while building on a strong record of recruiting members of underrepresented groups.
This project will undertake software engineering, open access, deployment, and training for the BiotaPhy computational workflow environment on the web. The Platform will be supported as a biodiversity community gateway for data-intensive science. The software will enable integrative and powerful analyses, and quantitative models of the geographical patterns of Earth's species diversity past, present, and future. BiotaPhy software will enable scientists to address novel questions relating phenotypic and ecological biodiversity, community assembly, and species diversification across time and space. The computational workflows will have broad environmental research relevance and the potential for transformative impact on data-intensive research spanning ecology, phylogenetics, systematics, and conservation biology. The BiotaPhy workflow platform will provide those capabilities by assembling input datasets, computing single-species distribution models and multispecies macroecological datasets, and then outputting statistical tests of evolutionary and ecological hypotheses to a graphical web interface for review and knowledge discovery. The BiotaPhy Platform's production deployment will: (1) increase research into and creation of new synthetic approaches at the interface of phylogenetics, ecology, evolutionary biology, and biogeography; (2) increase the accessibility and utilization of Open Tree of Life, iDigBio, and Lifemapper resources; (3) incorporate community input and training; and (4) build and support cross-domain data integration services as sustainable biodiversity community infrastructure. The BiotaPhy Project can be accessed at http://biotaphy.github.io.
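The workflow described above (assemble occurrence data, fit per-species distribution models, summarize multispecies patterns) can be caricatured in a few lines. This is an illustrative toy, not BiotaPhy code: the "model" here is a trivial climate envelope over one environmental variable, and all species, sites, and values are invented.

```python
def fit_envelope(occurrences):
    """Toy distribution model: min/max of an environmental variable
    (e.g., mean annual temperature) at a species' occurrence points."""
    temps = [temp for (_, temp) in occurrences]
    return (min(temps), max(temps))

def predict(envelope, site_temp):
    """Predict presence at a site if its value falls inside the envelope."""
    lo, hi = envelope
    return lo <= site_temp <= hi

def richness(models, site_temps):
    """Multispecies summary: predicted species count at each site."""
    return [sum(predict(m, t) for m in models.values()) for t in site_temps]

# Hypothetical occurrence data: (site id, mean annual temperature in C).
occ = {
    "Acer saccharum": [("site-a", 8.0), ("site-b", 12.0)],
    "Quercus virginiana": [("site-c", 18.0), ("site-d", 22.0)],
}
models = {sp: fit_envelope(pts) for sp, pts in occ.items()}
print(richness(models, [10.0, 20.0, 30.0]))  # -> [1, 1, 0]
```

Real platforms of this kind replace each step with far richer components (e.g., statistical niche models over many climate layers, phylogenetic tests), but the assemble/model/summarize pipeline shape is the same.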
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
|
1 |