2002 — 2007
Warschauer, Mark (co-PI); Hutchinson, Tara; Bagherzadeh, Nader (co-PI); Kuester, Falko; Shinozuka, Masanobu (co-PI)
CRCD: Expanding Engineering Thinking: Interactive Visualization of Numerical Models @ University of California-Irvine
Award 0203528; PI: Tara Hutchinson, University of California, Irvine. "Expanding Engineering Thinking: Interactive Visualization of Numerical Models"
This project involves the development and application of a new interactive learning tool and the use of visualization techniques, enabled by the tool, in teaching engineering. The overall goal of this project is to contribute to a redesigned engineering curriculum that better matches the practice of engineering in the 21st century. The curriculum focuses on (1) visualization and hands-on manipulation, (2) computer-assisted design and virtual reality, (3) interactive learning, (4) critical thinking, (5) creative problem solving, and (6) problem-based learning. These objectives are achieved through the redesign of existing courses and through the creation of a new interdisciplinary course. The project directly addresses the connection of engineering with the changing needs of industry and society in emerging technology areas by educating future engineers in the 'office of the future' environment.

This instructional environment (VizClass) includes tele-conferencing capabilities, interactive whiteboards, spatial tracking, and semi-immersive visualization operating within a completely wireless environment. The VizClass system uses readily deployable, cost-effective components, so that other academic institutions may easily replicate it. The hardware configuration leverages existing technology where possible and introduces system components for real-time control, along with the middleware required for fusing numerical solvers with visualization tools and hardware components. Components of the environment include a series of interactive projection display boards (whiteboards) and a semi-immersive projection table or wall display controlled by a dedicated visualization server. Digital whiteboards with touch-sensitive input surfaces enable more accurate tracking of user input. Active whiteboard screens, providing either an input or an output workspace, are linked via a high-speed (large storage capacity) server, which in turn is connected to a numerical solver.
The solver receives input from the 'working' screen in the form chosen by the user. If the input mode is of type equation, the solver manipulates the equations and outputs the solution as either a 2- or 3-dimensional visual object to the selected output device, in mono- or stereoscopic form, respectively. Visual output can be a distorted or contoured mesh (showing, for example, stresses, strains, or heat distribution), or figures annotating important engineering parameters. The advantage of teaching in this environment is the near-real-time visual feedback articulating results of the solved equations, using the finite element method as applied to structural engineering, mechanics, or electronics problems. The VizClass is equipped with a wireless network, giving collaborators and students instant access to all available on-line resources.
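The solver stage described above can be sketched in miniature. The following is a toy illustration, not the project's actual middleware: a one-dimensional axially loaded bar is discretized with linear finite elements, the global stiffness matrix is assembled, and nodal displacements are solved for; a display layer could then contour these results.

```python
import numpy as np

def solve_bar(n_elems, length, EA, load):
    """Assemble and solve a 1D axially loaded bar with linear elements.

    Node 0 is fixed and `load` is applied at the free tip; returns the
    vector of nodal displacements.
    """
    h = length / n_elems
    k = EA / h * np.array([[1.0, -1.0], [-1.0, 1.0]])  # element stiffness
    K = np.zeros((n_elems + 1, n_elems + 1))
    for e in range(n_elems):
        K[e:e + 2, e:e + 2] += k  # scatter element matrix into global K
    F = np.zeros(n_elems + 1)
    F[-1] = load  # point load at the free end
    # apply the fixed-end boundary condition by removing row/column 0
    u = np.zeros(n_elems + 1)
    u[1:] = np.linalg.solve(K[1:, 1:], F[1:])
    return u
```

For this linear problem the finite element solution matches the exact tip displacement P·L/(EA), which makes the sketch easy to sanity-check.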
2003 — 2005
Hutchinson, Tara; Kuester, Falko
Performance and Tracking of Nonstructural Systems in a Full-Scale Building @ University of California-Irvine
PIs: Tara C. Hutchinson, Dept. of Civil and Environmental Engineering, University of California, Irvine, and Falko Kuester, Dept. of Electrical Engineering and Computer Science, University of California, Irvine
This Small Grant for Exploratory Research (SGER) is requested to take advantage of a unique opportunity to study the response of nonstructural systems assembled within a full-scale building structure. The proposed investigation has two primary objectives: (1) to characterize the seismic response of an important class of equipment and building contents and (2) to study the applicability of tracking the response of this equipment and these contents using arrays of image-based monitoring systems. The SGER is requested to provide the resources to outfit a number of rooms within the interior of the building with nonstructural equipment and camera arrays to assess the performance of both of these systems. We hope the work conducted under this SGER will demonstrate the applicability of this idea for use in full-scale building structures and identify areas needing further study in the image acquisition and analysis portion of the process. As such, data collected from the proposed field experiments will be critical for the development of a larger proposal using NEES equipment to investigate alternative computer vision and motion tracking algorithms to advance camera-based systems as an earthquake monitoring technology.
The building of interest is a four-story building located at 4827 North Sepulveda Blvd., Sherman Oaks, California (approximately 80 km from the UC Irvine campus). The building was damaged significantly during the 1994 Northridge earthquake and has not been occupied since this event. Through separate funding provided by NSF, Professor John Wallace will use the UCLA NEES Mobile Equipment Unit to conduct field vibration experiments on this structure. Planned vibration experiments include placing pairs of linear and eccentric mass shakers on the roof of the building and driving the structure in the two orthogonal directions. In addition, a pair of shakers will be used to impose torsional motions into the structure. While much emphasis is placed on studying the structural response of this building, this field project provides a unique opportunity to study the response of elements mounted inside of the structure as well.
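Image-based displacement monitoring of the kind proposed can be illustrated with a minimal sketch: a patch from a reference frame is located in a later frame by exhaustive normalized cross-correlation, yielding a whole-pixel displacement. This is a simplified stand-in, not the project's actual tracking pipeline, which would operate on real camera imagery at sub-pixel precision.

```python
import numpy as np

def track_displacement(ref, cur, top, left, size, search=5):
    """Track a square patch from `ref` to `cur` by exhaustive NCC search.

    (top, left) locates the patch in the reference frame; the best match
    is sought within +/- `search` pixels. Returns (dy, dx) in whole pixels.
    """
    patch = ref[top:top + size, left:left + size].astype(float)
    patch = patch - patch.mean()
    best, best_score = (0, 0), -np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + size > cur.shape[0] or x + size > cur.shape[1]:
                continue
            cand = cur[y:y + size, x:x + size].astype(float)
            cand = cand - cand.mean()
            denom = np.linalg.norm(patch) * np.linalg.norm(cand)
            score = (patch * cand).sum() / denom if denom else -np.inf
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best
```

A usage example: shifting a synthetic textured frame by two rows and one column and tracking a patch recovers exactly that displacement.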
2004 — 2007
Zender, Charles (co-PI); Jenks, Stephen; Kuester, Falko; Sorooshian, Soroosh (co-PI); Gaudiot, Jean-Luc (co-PI)
MRI: HIPerWall: Development of a High-Performance Visualization System For Collaborative Earth System Sciences @ University of California-Irvine
This project, developing a Highly Interactive Parallelized Display Wall (HIPerWall), a new visualization facility, aims at advancing the state of Earth Science modeling and visualization. HIPerWall is a high-performance visualization system with a wall-sized, ultra-high-density tiled display that operates at the perception threshold of the human eye, allowing researchers to view and manipulate their data sets at resolutions commensurate with large-scale grids or dense sensor network data. The facility will be able to display extremely high-resolution datasets that will drive and provide focus for on-going research into management, transfer, and visualization of terabyte-scale data. By rapid, visual comparison of theory with experimental data, scientists should be able to swiftly validate and comprehend theory and practice. Although the proposed research is focused on Earth System Sciences, other research areas will benefit, including:
- Computational Fluid Dynamics: direct numerical simulation of turbulent, chemically reacting, and dispersed 2-phase flows
- Engineering Mechanics: system identification using 3D video tracking; microwave imaging for damage visualization; remote system monitoring
- Structural and Earthquake Engineering: advanced scientific visualization of dynamics of systems; model-based simulation of experimental data from large- and medium-scale earthquake testing; analysis of large-scale earthquake field data
- Materials and Devices: molecular modeling and visualization; synthesis of structural materials and composites; mathematical modeling of advanced materials and processes; material characterization
- Embodied Interaction in Immersive Systems: novel sensor technologies and modes of interaction for cultural and technical applications
- Scientific Computing: large-scale data visualization; storage, compression, and access of stored real-time simulation data; image-based rendering; 3D data reconstruction
- Biomedical Engineering: computer simulation and tissue engineering; imaging and image understanding
Broader Impact: The facility, to be set in a large classroom, directly contributes to education through courses and recruiting efforts. The display wall benefits collaborations that have impact on areas such as homeland security and emergency response.
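The scale of such a tiled wall is easy to quantify. As a sketch (the panel counts and resolutions below are hypothetical, not the as-built HIPerWall specification), each tile renders a fixed viewport of a combined wall-sized framebuffer:

```python
def tile_viewports(cols, rows, tile_w, tile_h):
    """Map each display tile in a cols x rows wall to its pixel viewport
    (x, y, width, height) within the combined wall-sized framebuffer."""
    views = {}
    for r in range(rows):
        for c in range(cols):
            views[(c, r)] = (c * tile_w, r * tile_h, tile_w, tile_h)
    return views

# e.g. a hypothetical 5x2 wall of 2560x1600 panels
wall = tile_viewports(5, 2, 2560, 1600)
total_pixels = 5 * 2 * 2560 * 1600  # ~41 megapixels for this toy layout
```

Each render node then draws only its own viewport of the shared scene, which is the design choice that lets tiled walls scale to resolutions far beyond any single display.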
2005 — 2007
Hutchinson, Tara; Kuester, Falko
Development of Concrete Damage-Flow Rate Correlation Using Integrated Structural Testing and X-Ray Tomography @ University of California-Irvine
Project Abstract: "Development of Concrete Damage-Flow Rate Correlation using Integrated Structural Testing and X-Ray Tomography." Tara C. Hutchinson (PI) and Falko Kuester
Concrete is by far the most widely used building material in the United States, with extensive use in the construction of our Nation's buildings, highways, tunnels, water supply and sewage systems, and other infrastructure. The strong dependency of the service life of concrete on its transport properties means that investigations geared at studying its internal microstructure should link these transport properties to damage states of the concrete component. In this proposal, we seek to advance our fundamental knowledge of the damage characteristics of reinforced concrete members and link these characteristics with a fundamental transport property. Although many transport mechanisms are certainly of interest, we select air permeability as the transport mechanism of interest, as a first step towards other mechanisms and to lay the foundation of the proposed methodology. Our approach integrates structural testing of scaled specimens, X-ray computed tomography (CT) imaging, and air flow rate experiments on 20 model reinforced concrete panel-style specimens. Specimens are subjected to uniaxial and biaxial loading conditions, and include parameter variations of geometry (aspect ratio and panel thickness), material details (reinforcing steel ratio and concrete strength), and loading details (axial load and loading protocol). Numerical evaluation of permeability and leakage-rate formulas available in the literature, as well as finite element simulations of the experimental specimens, will be conducted in an overall effort to develop design guidance regarding the damage-leakage rate correlation of reinforced concrete elements.
Intellectual Merits - The intellectual merits of this work include developments in experimental methods and results, fundamental advancements in the understanding of damage-transport relations of reinforced concrete, and practical impacts on the design industry. The datasets available from this work (both image and numerical data) will spur new discoveries about the internal damage distribution of this complex, composite material. Ultimately, this work will result in design guidance that accounts for the degrading permeability of concrete. Progress in concrete research overall will lead to a reduction in the cost of new infrastructure and rehabilitation of existing structures, while increasing the service life of one of our most abundantly used construction materials.
Broader Impacts - Broader technical impacts include the advancement of our understanding of concrete as a fundamental building material, leading to improved designs with more reliable estimates of properties and hence performance. The acquisition and implementation of the X-ray CT system will have long-reaching impacts on research activities at UC Irvine, for its students and faculty, as well as for the local academic community interested in using the system. The educational impacts of the data (both images and numerical results) provide a natural avenue for reaching a wide range of interested students, from the general public to K-12, including diverse and minority students. The inherently digital nature of this work makes it viable for Internet dissemination, and we plan to design a special web site (which we have termed ConcretePedia) devoted to disseminating these image datasets in VRML format, for the general public to interactively view and learn from. This work also actively engages industrial and government collaboration, which helps facilitate its impact on the engineering public.
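The air-permeability measurement at the heart of the damage-flow correlation follows Darcy's law for laminar flow through a porous medium. A back-of-the-envelope sketch (all numbers below are illustrative assumptions, not measured project values):

```python
def darcy_flow_rate(k, area, dp, mu, thickness):
    """Volumetric flow rate Q = k * A * dP / (mu * L) from Darcy's law.

    k: permeability (m^2), area: flow cross-section (m^2), dp: pressure
    difference (Pa), mu: dynamic viscosity (Pa*s), thickness: flow path (m).
    """
    return k * area * dp / (mu * thickness)

# hypothetical panel: k = 1e-16 m^2, 0.5 m^2 face, 50 kPa pressure
# difference across 0.1 m thickness, air viscosity ~1.8e-5 Pa*s
q = darcy_flow_rate(1e-16, 0.5, 50e3, 1.8e-5, 0.1)  # m^3/s
```

A damage-flow correlation then amounts to observing how the fitted permeability k grows as structural loading cracks the specimen.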
2010 — 2016
Levy, Thomas (co-PI); Kuester, Falko; Seracini, Maurizio
IGERT: Training, Research and Education in Engineering For Cultural Heritage Diagnostics @ University of California-San Diego
This Integrative Graduate Education and Research Training (IGERT) award supports the development of an interdisciplinary graduate training program in engineering for cultural heritage diagnostics. Forensic engineering, the investigation of failures and failure mechanisms, is uniquely suited yet rarely used to assess the damage to artifacts of cultural heritage, including ancient monuments, historic buildings and paintings, from the ravages of time, pollution, natural disasters and decay.
This IGERT, based at the University of California at San Diego, trains graduate students to develop diagnostic methodologies and analytical models that will be deployed and tested on a) Renaissance art works and historic structures in Florence, Italy, and b) archaeological sites and artifacts in Jordan and Israel. Trainees learn and develop new state-of-the-art methods for the long-term "health monitoring" of cultural artifacts, including non-destructive testing, cyberinfrastructure and engineering analysis. Using a team-based approach, trainees will create an integrated platform for diagnostic and predictive modeling, including multispectral imaging, simulation, data synthesis and visual analytics. Students are advised by three co-mentors, one from the student's primary discipline and two from other disciplines. Substantial field research abroad will prepare them for careers in international science and engineering, and the tools and methodologies developed for cultural heritage can be applied more broadly to the analysis of degraded or obsolete infrastructure, such as public buildings, roads and bridges, as well as almost any other engineered component or system.
IGERT is an NSF-wide program intended to meet the challenges of educating U.S. Ph.D. scientists and engineers with the interdisciplinary background, deep knowledge in a chosen discipline, and the technical, professional, and personal skills needed for the career demands of the future. The program is intended to catalyze a cultural change in graduate education by establishing innovative new models for graduate education and training in a fertile environment for collaborative research that transcends traditional disciplinary boundaries.
2011 — 2013
Kuester, Falko; Lanckriet, Gert (co-PI); Lin, Albert
EAGER: Human Computation: Integrating the Crowd and the Machine @ University of California-San Diego
Because both information and connectivity are more available today than ever before thanks to digital technologies, questions can now be addressed by enlisting massive numbers of human participants to supplement the limitations of computer computation. This is especially relevant in the case of visual analytics, where human intuition remains far superior to existing computer object recognition algorithms. While algorithms are limited by pre-labeling requirements, humans can perceive subtle variations and nuances to identify and classify unexpected objects. These tasks, however, are often too massive in scale for a single human to accomplish. Distributing the task over a massive network not only succeeds in categorizing data, but also generates massive quantities of human judgments (training data) that can potentially teach computer vision algorithms to mimic human perception and distinguish the normal from the abnormal.
This exploratory project will combine collective human visual perception with machine learning and object recognition, through a study of 1.25 million crowd-sourced inputs provided by over 6,000 volunteers labeling satellite imagery in a search for anomalies in northern Mongolia. These data, collected from June 2010 to the present via an online platform developed by the PI in collaboration with National Geographic Digital Media, afford an ideal "case study" environment to investigate the nature of crowd generated data and methods that distill the wide variability of human input into computational algorithms. The online participants, excited by the potential of discovering the tomb of Genghis Khan, examined massive amounts of ultra-high resolution multispectral satellite imagery to label loosely defined anomalies into various categories. Trends that emerged from the massive volume of labels represent a collective human perspective on what the images contain. A team led by the PI traveled to Mongolia to ground-truth areas of high user input convergence. The resulting ground-truthed anomalies provide a unique opportunity to both accurately measure the quality of human/automated analysis and to investigate the effect of supplementing noisy crowd-sourced data sets with small pools of absolute data in machine learning. In the current project the PI will develop a framework for applying and evaluating the following three research phases designed to study the nature of large scale human generated data for integration into supervised learning algorithms:
1. Consensus Clustering - Tag evaluation mechanisms based upon the volume and consistency of neighboring tags and the ability of the individuals creating those tags. Unsupervised methods for "merging" labels will also be applied for extended anomalies such as roads and rivers.
2. Feature Vector Extraction - Both the type of features (e.g., color, luminance, edges and gradients, scale, orientation, etc.) and the extent of the neighborhoods (e.g., local, wide and global) required to detect anomalies are unknown a priori. Thus, the aim is to determine sufficiently diverse features to capture all relevant cues within the image.
3. Machine Learning - Dominant features representative of, and excluded from, pixel groups of given categories will be determined from the results of Phase 2 above.
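Phase 1's consensus step can be illustrated with a toy greedy clustering of tag coordinates. The project's actual tag-evaluation mechanism, which also weights the ability of individual contributors, is more elaborate than this sketch:

```python
import numpy as np

def consensus_clusters(tags, radius, min_votes):
    """Greedy distance-threshold clustering of (x, y) tag points.

    Tags within `radius` of a cluster seed are merged; only clusters with
    at least `min_votes` members are kept as consensus anomalies.
    """
    pts = np.asarray(tags, dtype=float)
    unassigned = np.ones(len(pts), dtype=bool)
    clusters = []
    for i in range(len(pts)):
        if not unassigned[i]:
            continue
        # all still-unassigned tags within `radius` of this seed
        members = unassigned & (np.linalg.norm(pts - pts[i], axis=1) <= radius)
        unassigned[members] = False
        if members.sum() >= min_votes:
            clusters.append(pts[members].mean(axis=0))  # cluster centroid
    return clusters
```

For example, six tags forming two tight groups and one isolated outlier collapse to two consensus locations; the lone tag fails the vote threshold and is discarded as noise.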
Broader Impacts: In this exploratory study the PI will lay the foundation for extracting new machine/human collaborative opportunities from the resource of the crowd. Understanding the bonds between human and computer intelligence will have a profound impact on many branches of science. Thus, concepts developed in this effort may ultimately prove transformative by affording migration of crowd-sourcing from a project-based tool for distributed analytics into a portal bridging collective human perception and machine learning.
2012 — 2016
Lin, Albert; Kuester, Falko
HCC: Small: Examining the Super User Versus the Crowd in Human-Centered Computation @ University of California-San Diego
This project investigates the nature of crowd-based human analytics at various scales, specifically how the concentrated efforts of a few contributors differ from the summed micro-contributions of many. Automated approaches are good at handling huge amounts of data, but they lack the flexibility and sensitivity of human perception when making decisions or observations, especially when computational challenges revolve around visual analytics. Networks of humans, as an alternative, can scale up human perception by facilitating massively parallel computation through the distribution of micro-tasks, but human data interpretation varies between individuals. Wide variability in the amount of participation of individuals in crowd-based computation creates non-uniform representations of a crowd, an important discrepancy that could significantly impact the validity of the term "crowd" in crowdsourcing. The research will explore data generated from the extreme ends of the participation curve and quantify the quality of data produced from a broad sampling of a crowd versus the concentrated voice of the few "super users."
As one measure of comparison, the researchers will observe how characteristically variant samplings of human-generated analysis alter the outcome when used as training data in a machine learning framework. This investigation will utilize data generated from a crowdsourcing effort that tapped over 10,000 volunteer participants to generate over 2 million human annotations on ultra-high resolution satellite imagery in a search for tombs across Mongolia. Image tiles were distributed at random to participants who tagged anomalies of interest, while crowd consensus on points of interest provided a field survey team with locations to ground-truth in Mongolia. Participation ranged widely, as illustrated by the fact that 20 percent of the data came from the most active 1 percent of participants, while at the other extreme 20 percent of the data came from the 80 percent of participants who were least active. While consensus of the crowd provided one metric to measure the quality of anomaly identifications, ground-truth observations showed that actual validation tended to correspond with identifications made by higher-interest participants. This study will explore the nature of data generated from experts versus crowds of non-experts, starting from the discrepancies in participation levels.
Crowd-based human analytics has been welcomed as a potential solution to some of the world's largest data challenges. Examples of crowdsourcing have shown that the power of distributed microtasking can engage challenges as overwhelming as categorizing the galaxies, or as complicated as folding proteins. However, this concept depends upon the recruitment of human help, often at whatever level of participation an individual is willing to contribute. The variation in contributions, and thus impact levels, between individuals can be staggering, with participation typically distributed across a long-tail curve. That fundamental aspect of a recruited crowd should be recognized and understood when extracting knowledge from the data that is generated. This project will contribute to the necessary understanding by determining how the distributed inputs from a crowd differ from the concentrated efforts of an individual. Insight into the effects of crowd dynamics on results will determine how we pool and retain participation and, thus, have transformative impact on the development of crowdsourcing as a concept for analytics.
2013 — 2017
Schulze, Jurgen (co-PI); Rosing, Tajana (co-PI); Kuester, Falko; Defanti, Thomas
MRI: Development of Advanced Visualization Instrumentation For the Collaborative Exploration of Big Data @ University of California-San Diego
Proposal #: 13-38192; PI(s): Kuester, Falko; DeFanti, Thomas A.; Rosing, Tajana S.; Schulze, Jurgen P.; Institution: University of California - San Diego; Title: MRI/Dev.: Advanced Visualization Instrumentation for the Collaborative Exploration of Big Data

This project, developing a Scalable Omni-Presence Environment (ScOPE), a next-generation visualization system for collaborative exploration of large volumes of data, provides an environment for analyzing, processing, and visualizing Big Data resulting from many different areas of science and engineering. The instrument serves as an integrative, virtual metaphor for a combined microscope and telescope, enabling users to explore data from the nano to macro to mega scale. ScOPE provides researchers the ability to study simulated and acquired data at a level of precision previously unmatched. It is expected to become the platform for training a new generation of users to be fluent in data analytics in collaborative environments. Initially, three universities will have direct access to the ScOPE instrument and all its features: U. California-San Diego (UCSD), Jackson State U. (JSU), and U. of Texas Medical Branch (UTMB). Nonetheless, following the tradition of the project team (as effectively done with earlier generations of visualization technologies, e.g., OptIPortal tiled display walls now installed at more than 100 institutions worldwide), the critical components of the infrastructure will be broken out such that they may be replicated for use at remote locations by other research or educational institutions. The developers anticipate that private-sector collaborators, such as Qualcomm and Intel, will help popularize use of specific components for the nation's big-data analytics infrastructure.
Notwithstanding, the broadest impact of the instrument should be evident in the discoveries and advances made by engineers and scientists who use ScOPE to enhance collaboration and analysis in the disciplines that have been singled out as 'Domain Drivers' for the project. These include projects led by researchers in ocean sciences (and ocean observatories); cyber-archaeology and cultural heritage diagnostics; real-time brain imaging; digital cinema and very-high-quality digital media; integrative computational biology; underwater microscopy; molecular dynamics; structural biology and computational chemistry; and large-scale numerical simulation. In turn, these domain specialists will work alongside computer scientists who will address grand challenges in system architecture, data transport, security, representation, archiving, processing, multi-modal analytics, and human-computer interaction. ScOPE's long-distance collaboration will be supported by telepresence at bandwidths ranging up to 40 Gigabits per second. Thus, the project creates a highly interactive collaboration space equipped with a natural human-computer interface and advanced 3D modeling and rendering at a sufficient scale to tackle complex experiments and analyze large amounts of visual and numerical data pertaining to phenomena of wide dimensions and extreme time scales. Domain drivers have been identified to ensure that the resulting environment and tools are applicable to a broad array of scientific disciplines, including earth system sciences, civil and structural engineering, mechanical and aerospace engineering, biomedical and electrical engineering, ocean observatories, social sciences, and anthropology. This project takes a great leap forward into a new generation of collaborative environment that until recently was unthinkable.
The display capabilities will no longer be passive; envisioned is a continuous spatial workspace with imaging, including eye, skin-response, and even mobile electroencephalography sensing, allowing ScOPE to respond to and infer user intent. The environment will be designed specifically to handle 'big data,' using a failure-tolerant and cloud-centric approach while also downsizing the supercomputer flash memory architecture. The instrument will enable scientific discoveries as well as research on how best to process, analyze, and visualize big data, and ScOPE will serve as a prototype for other similar instruments. The research enabled by ScOPE will have impacts in many areas of science.

Broader Impacts: The instrument will have direct impact on three universities, while the technology developed in building the instrument will inform the construction of similar instruments around the nation. To promote greater public appreciation of scientific research, the public will be invited to tour the visualization facilities, hopefully encouraging young people to enter careers in science and engineering. The area of ScOPE accessible to the public will have significant impact on the public's impression of academic research. ScOPE's capabilities are likely to transform our ability to collaborate with distributed research teams and be directly applied to day-to-day research.
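For context on the 40 Gigabit-per-second telepresence figure, an idealized back-of-envelope calculation (ignoring protocol overhead, compression, and storage bottlenecks) shows what such a link means for moving terabyte-scale datasets:

```python
def transfer_seconds(n_bytes, gbps):
    """Ideal, overhead-free transfer time for `n_bytes` of data over a
    link of `gbps` gigabits per second (1 Gb = 1e9 bits here)."""
    return n_bytes * 8 / (gbps * 1e9)

# a 1 TB dataset over the 40 Gb/s telepresence link
t = transfer_seconds(1e12, 40)  # seconds
```

Under these idealized assumptions a terabyte moves in a few minutes, which is what makes interactive collaboration on such datasets plausible at all.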