1999 — 2003 |
Cooper, David; Mumford, David (co-PI); Kimia, Benjamin (co-PI); Laidlaw, David; Joukowsky, Martha (co-PI) |
KDI: 3D Free-Form Models for Geometric Recovery and Applications to Archaeology
With National Science Foundation support, Dr. David Cooper and his colleagues will develop technology for recovering 3D free-form object and selected scene structure from one or more images and video. The technique is based on the development of 3D shape representations and a semi-interactive, mixed-initiative system, along with machine decision-directed Bayesian surface estimation. The main focus of the effort is the development of tools useful in archaeological site and artifact reconstruction and in architecture. This will affect low-level shape models and how they are assembled to form either more complex objects or complete ones; the latter situation often arises at archaeological excavation sites, where objects are found in pieces or have been damaged by erosion.
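The "machine decision-directed Bayesian surface estimation" mentioned above can be pictured with a minimal sketch, under assumed details that are not the project's actual formulation: noisy 3D samples of a surface patch are fit by combining a Gaussian measurement likelihood with a Gaussian smoothness prior over polynomial coefficients, so the MAP estimate reduces to regularized least squares.

```python
# Minimal sketch of MAP (Bayesian) surface estimation; the quadratic height-field
# model, noise level, and prior variance are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "scanned" surface samples: z = f(x, y) plus measurement noise.
x, y = rng.uniform(-1, 1, (2, 400))
z_true = 0.3 * x**2 - 0.2 * x * y + 0.1 * y + 0.5
z_obs = z_true + rng.normal(scale=0.05, size=x.shape)

# Quadratic polynomial basis for the surface model z = Phi(x, y) @ w.
Phi = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

sigma2 = 0.05**2   # assumed measurement-noise variance (likelihood)
tau2 = 1.0         # assumed prior variance on the coefficients (smoothness prior)

# MAP estimate: maximizing Gaussian likelihood times Gaussian prior
# is equivalent to regularized least squares.
A = Phi.T @ Phi + (sigma2 / tau2) * np.eye(Phi.shape[1])
w_map = np.linalg.solve(A, Phi.T @ z_obs)
print("estimated surface coefficients:", np.round(w_map, 3))
```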
Despite recent substantial progress in automated scene recovery in computer vision, the challenges presented by free-form shape extraction and assembly are still great. The researchers will employ an interactive approach in which a user can guide the recovery process or can be available when requested for assistance by the machine, e.g., in the complex task of assembling many fragments to re-create a large or complicated object. The intent is to develop the fundamentals of a user-controlled technology for the benefit of the naive user. The ultimate goal is to enhance productivity in interacting with large amounts of complex visual data by modeling the underlying 3D structure.
This project will provide new practical tools. It will also provide an effective testbed for 3D shape reconstruction and recognition, more descriptive local and global models for working with 3D shapes, a better understanding of human/decision-making-machine interaction for free-form geometric modeling and for extracting 3D geometry from one or more images and video, and insight into the associated computational-complexity issues. As applied to the field of archaeology, this technology will provide improved surveying methods, an advanced archaeological record, and a means to conduct high-level analysis not heretofore available. It will further permit the transfer of reconstruction capabilities from the artist directly to the archaeologist. Since the problems solved in the archaeological context are to a significant degree generic and thus transferable to fields such as medicine, industry, and defense, the potential impact is extremely broad.
|
0.915 |
1999 — 2003 |
Reiss, Steven; Laidlaw, David |
Visualization For Software Understanding
The impact of this work will be threefold. The immediate result will be a prototype system for using software visualization for understanding. Educationally, the work will expose students to visualization as a means of understanding and get them interested and involved in working on the difficult problems inherent in visualization and understanding. Finally, the broader impact of this project will be to establish foundations for future software visualization and understanding efforts by solving some of the basic problems in these areas.
|
0.915 |
2000 — 2001 |
Laidlaw, David |
Computer Graphics Tools for Understanding Tensor-Valued Volume Data: A Painting Metaphor |
0.915 |
2000 — 2005 |
Tarr, Michael; Laidlaw, David; Karniadakis, George (co-PI) |
ITR: Visualization of Multi-Valued Scientific Data: Applying Ideas from Art and Perceptual Psychology
This is a multi-disciplinary research project to discover new visualization tools for interacting with and understanding multi-valued volumes of scientific data and the physical phenomena they measure. The tools will be developed and evaluated in close collaboration with scientists in three disciplines: neurobiologists studying neural development and disease via biological imaging, computational flow researchers studying blood flow through arteries, and geographers using remote sensing for environmental monitoring and resource management. We will factor out common patterns from the problems in these multiple disciplines to develop interaction metaphors and visualization techniques that are generalizable and widely applicable.
This project develops new visualization evaluation methodologies, an area that has only begun to be addressed, and compares the effectiveness of visualization applications in several interactive and static computing and display environments, including a four-wall CAVE, a 40'x40' virtual environment with a head-mounted display, stereo head-tracked workbenches, desktop workstations, paper, and 3D rapid-prototyping output. Immersive environments will be studied because the value of these non-traditional working environments has not been established and because they present an opportunity to explore fundamentally different interaction metaphors. Comparisons will be performed for both interactive and static cases, with appropriate technology determined for each application.
This project draws on experience from art and perceptual psychology for inspiration. Over several centuries, artists have evolved a tradition of techniques for creating visual representations that serve particular communication goals, and art history provides a language for understanding that knowledge. We will draw inspiration from painting, sculpture, drawing, and graphic design and apply these techniques to the scientific problems.
Beyond inspiration, perceptual psychology brings a second body of knowledge to bear on scientific visualization problems. Evaluating the effectiveness of visualization methods is difficult because not only are the goals hard to define and codify, but tests that evaluate them meaningfully are also difficult to design and execute; such evaluations are akin to evaluating how the human perceptual system works. Perceptual psychologists have been developing experiments for understanding perception for decades, and they will help develop methodology and expertise for evaluating visualization methods in close collaboration with biologists, fluids researchers, geographers, artists, and computer scientists.
While many of the individual components of this project are important alone, the collaborative aspects are the most notable. Mining ideas from art and perception will suggest unusually innovative visualization ideas. The application of new visualization techniques and collaboration with researchers in other fields will provide us with a unique opportunity to validate the techniques and ensure that they are responsive to the needs of the scientific problems. Because the techniques will be developed with application to multiple disciplines, they are likely to find further application within these and other disciplines. The assembled team brings strengths in all of the disciplines and has already demonstrated a track record of collaborative work.
The broader impact of the proposed research lies not only in the information technology arena, where new methods will help scientists in many disciplines to more effectively interact with and understand their data and gain insight about the physical phenomena they represent, but also in the specific scientific domains we will study. The study of blood flow could lead to improved understanding of and treatment for cardiovascular pathologies. An understanding of early neural development could enable new therapies for birth defects, genetic disorders, and other diseases. Remote sensing advances could provide more effective resource monitoring and permit widespread improvements in global quality of life.
|
0.915 |
2001 — 2008 |
Laidlaw, David |
CAREER: Shape Capture and Modeling for Wrist Dynamics and Ancient Pottery Analysis Using Manifold Surfaces and Signed-Distance Volume Images
This project will conduct research into shape modeling and its applications coupled with the development of a methodology for teaching the skills needed for successful multi-disciplinary research projects. The education plan is integrated with the research effort, which includes development of computational tools for capturing geometry, representing it within the computer, and using those representations for specific applications in archaeology and biological modeling.
The education plan consists of a course, Interdisciplinary Scientific Visualization, and a research group; both welcome undergraduate and graduate students. The class mimics the research process, from choosing a project to presenting research results. Projects must include participants from multiple disciplines. During the semester students write a proposal, review proposals by others, work together on a project, write it up as a research abstract, review abstracts by others, and present their results. As the course progresses, lectures, readings, and discussion help teach the art and craft behind each of those steps. The research group is modeled on three research groups the PI participated in during his education: a primarily undergraduate group at Brown, a primarily graduate group at Caltech, and a second Caltech group composed primarily of postdocs. The group combines aspects of all three to provide an environment believed to be conducive to learning how to do multi-disciplinary research at many levels.
The research is multi-disciplinary and fits well into the research group. Computational tools for capturing geometric shapes, representing them, and calculating with them will be developed and applied to problems from two disciplines: assembly of ancient pottery sherds in archaeology and modeling of wrist motion in orthopaedics and bioengineering. In both cases, shape will be captured from sampled volumetric CT medical imaging data. It will be represented implicitly using sampled signed-distance volumes and explicitly using manifold surfaces; the two representations complement one another, together offering the strengths of both. For the archaeology application, we will develop a software architecture that decouples reasoning about artifact shapes from the understanding of their geometry, with a probabilistic interface separating the two parts. For the biological modeling application, we will develop models of bones, ligaments, and cartilage that predict motion. We will validate the model with motion data we have already acquired and then use it to examine motion in patients with deficits that have thus far eluded explanation.
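The pairing of implicit and explicit representations described above can be pictured with a small sketch. The spherical test shape, grid resolution, and use of the scikit-image library are illustrative assumptions, not part of the project: a sampled signed-distance volume serves as the implicit form, and marching cubes extracts the explicit manifold (triangle-mesh) surface at its zero level set.

```python
import numpy as np
from skimage import measure  # scikit-image

# Sampled signed-distance volume for a sphere of radius 0.6 on a 64^3 grid.
n = 64
ax = np.linspace(-1, 1, n)
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
sdf = np.sqrt(X**2 + Y**2 + Z**2) - 0.6   # negative inside, positive outside

# Explicit manifold surface: triangle mesh extracted at the zero level set.
verts, faces, normals, _ = measure.marching_cubes(
    sdf, level=0.0, spacing=(ax[1] - ax[0],) * 3)
print(f"{len(verts)} vertices, {len(faces)} triangles")
```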
The impact of this research will be in both application areas and the computational domain. If we are successful, pottery assembly, a process that is estimated to take over 50% of the time an excavation team spends outside of excavating, will be automated. We anticipate that this speedup will enable significant new archaeological results to emerge. A better understanding of the biomechanics of the arm and wrist and the abnormal conditions that can affect them could impact many Americans. This understanding will likely have applications in biology, bioengineering, medical applications, animation, and robotics. The numerical methods we will develop for simulating joints are likely to apply to simulations of other biological systems. Because the algorithms and tools for geometric modeling will be applied to two specific problems, they are likely to be more widely applicable.
Perhaps the most significant impact will be not in the scientific results, but in a better understanding of what makes multi-disciplinary research projects succeed or fail and in the new scientists that will emerge from Brown.
|
0.915 |
2003 — 2007 |
Laidlaw, David |
ITR Collaborative Research: Perceptual Optimization for Data Visualization
Much of human intelligence can be broadly characterized as the ability to identify patterns, and the visual system is the most sophisticated pattern-finding mechanism we have. Of all our perceptual systems, vision dominates: it is estimated to engage 50% of the cortex, and 70% of all our sensory receptors are visual. Yet it is only just becoming possible to display as much information as the human visual system is capable of absorbing. The proposed research has the following five elements: a) a display that supports real-time animation at the limit of the resolution of the human visual system; b) the development and evaluation of perceptually near-optimal solutions through the development of human-in-the-loop optimization techniques; c) a set of experiments that will measure the efficiency of different ways of displaying common information structures, including paths in graphs, aspects of flow patterns, and the shapes of overlapping surfaces; d) the development of algorithms that support the mapping between data attributes and visual display primitives in a flexible, adaptive, or tunable process; and e) the application of the techniques to visualization problems in three areas: flow visualization, overlapping-surface visualization, and large-network visualization.
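Element (b), human-in-the-loop optimization, might look roughly like the sketch below. The parameterization of the display mapping, the perturbation schedule, and the judge() stand-in for the human observer are all hypothetical; the point is only that the comparison step is delegated to a person rather than to a computed objective.

```python
import numpy as np

rng = np.random.default_rng(1)

def judge(params_a, params_b):
    """Placeholder for the human in the loop: shown renderings made with
    params_a and params_b, the observer picks the one that better reveals the
    target structure. Here a synthetic preference stands in for the person."""
    ideal = np.array([0.7, 0.2, 0.5])  # hypothetical perceptually best setting
    return "a" if np.linalg.norm(params_a - ideal) < np.linalg.norm(params_b - ideal) else "b"

# Display-mapping parameters (e.g., opacity, hue shift, glyph density), all in [0, 1].
current = np.array([0.5, 0.5, 0.5])
step = 0.2
for trial in range(40):
    candidate = np.clip(current + rng.normal(scale=step, size=3), 0.0, 1.0)
    if judge(candidate, current) == "a":
        current = candidate          # keep whichever mapping the observer preferred
    step *= 0.95                     # slowly narrow the search
print("preferred display mapping:", np.round(current, 2))
```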
The intellectual merit of the proposed research will be to firmly establish information psychophysics and define the field as an intellectual endeavor, combining existing techniques into a cohesive discipline. The high-resolution display will enable researchers to work at the limits of human perceptual capability, endowing the results with long-term value.
The broader impacts of the proposed research will include design guidelines, immediately useable design solutions, as well as algorithms and information display theory. Material contributions will be made in the following areas: flow visualization, network visualization, and overlapping surface visualization.
|
0.915 |
2004 — 2009 |
Laidlaw, David; Richardson, Peter (co-PI); Karniadakis, George (co-PI); Swartz, Sharon (co-PI) |
ITR: (ASE)-(SIM+DMC+INT): Computational Simulation, Modeling, and Visualization for Understanding Unsteady Bioflows
This project aims to discover new distributed simulation, visualization, and analysis tools for interacting with and understanding multi-valued volumes of scientific data and the biological phenomena they measure. The tools will be developed and evaluated in close collaboration with biologists studying three independent flow-related problems: coronary artery lesion and thrombus formation, the mechanisms and evolution of bat flight, and the mechanism and evolution of fish propulsion and maneuvering.
The work includes advancing basic scientific understanding in the three biological application areas. In addition, the experimental methodology of acquiring 3D motion and flow data using 3D Digital Particle Image Velocimetry (DPIV) and high-speed video will advance the state of the art for studying flow interactions with other biological and man-made systems and may be used for prediction, risk-assessment and decision-making.
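As context for the DPIV methodology, here is a minimal sketch of its core step: estimating the displacement of an interrogation window between two particle images by FFT-based cross-correlation. The window size, synthetic images, and known shift are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two synthetic particle-image interrogation windows; the second is the first
# shifted by a known displacement of (dy, dx) = (3, -2) pixels.
win = 64
frame_a = rng.random((win, win))
frame_b = np.roll(frame_a, (3, -2), axis=(0, 1))

# FFT-based circular cross-correlation; the peak location gives the displacement.
corr = np.fft.ifft2(np.fft.fft2(frame_a).conj() * np.fft.fft2(frame_b)).real
peak = np.unravel_index(np.argmax(corr), corr.shape)
# Indices above win/2 wrap around to negative shifts.
shift = tuple(int(p) - win if p > win // 2 else int(p) for p in peak)
print("estimated displacement (dy, dx):", shift)   # expect (3, -2)
```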
The simulation and modeling work to address the biological problems will create new simulation methods for coupling unsteady flow and structure calculations, new methods for incorporating uncertainty into unsteady simulation results, new methods for combining unsteady experimental and simulation data to facilitate comparisons between them, and new methods for filling in gaps in unsteady experimental data.
|
0.915 |
2005 — 2006 |
Tarr, Michael; Ress, David (co-PI); Laidlaw, David; Sanes, Jerome; Blumstein, Sheila (co-PI) |
Acquisition of a 3T MRI System
With support from a National Science Foundation Major Research Instrumentation Award, Brown University will acquire a 3 Tesla magnetic resonance imaging (MRI) system. The MRI system will be housed in a research-dedicated MRI suite within the newly constructed Life Sciences Building at Brown, and it will form the core infrastructure for MRI-related research conducted by more than 100 faculty, research staff, and students in the Brown University community, including its College of Arts and Sciences and Medical School. Researchers at Brown will use the NSF-funded MRI system primarily to investigate fundamentals of brain structure and function. In addition to Brown users, researchers from other nearby institutions, such as the University of Rhode Island, Salve Regina University, and the University of Massachusetts Dartmouth, will also have access to the 3 Tesla MRI system.
Non-invasive imaging of the human brain has become a key research tool for life scientists interested in understanding brain mechanisms of sensation, perception, cognition, and voluntary movement. MRI has become a cornerstone of such activities because it can provide structural and functional information at previously unobtainable brain locations without the need for invasive measures. Structural MRI can provide sub-millimeter resolution of the cellular and fiber-tract regions of the brain; these capabilities now allow precise measurement of local brain volumes and visualization of the source and destination of major axon pathways. Functional MRI can rapidly measure local changes in blood dynamics in volumes as small as 1 cubic mm. Blood dynamics reflect changes in local neural activity, and their exploitation has become a key tool in exploring brain mechanisms of a variety of functions that constitute everyday experience. The NSF-funded resource will allow Brown researchers and those from nearby institutions to develop new strategies and knowledge about how the human brain mediates complex behavior.
Projects currently planned for the 3 Tesla MRI system include research in systems and cognitive neuroscience and in biomechanics. A major effort will be to enhance the spatial and temporal resolution of structural and functional MR imaging using the specialized equipment of the new 3 Tesla MRI system. In particular, the infrastructure will facilitate investigating the specialization of the myriad brain areas that process visual stimuli, not only across the brain but also within each area. The new MRI system will allow non-invasive imaging of the input and output processing zones of cortical areas. Several investigators will interrogate the functional MRI signals obtained during single instances of perceptual experience or voluntary movement to predict the conscious experience of the observer or the performed movement(s). While these 'mind-reading' efforts currently occur off-line, the team plans to implement them in real time, and ultimately at high spatial resolution. The addition of the 3 Tesla MRI resource at Brown will boost ongoing educational and research activities and will stimulate novel interactions among students, faculty, and researchers working across the life, social, physical, and applied sciences. The instrumentation will also be used in outreach programs for under-represented minority high-school students participating in summer programs at Brown.
|
0.915 |
2005 — 2008 |
Laidlaw, David H |
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
MRI+DTI-Based Tools for Analyzing White Matter Variation
DESCRIPTION (provided by applicant): In this multidisciplinary project, a team of investigators will design and apply software tools that can simultaneously segment neural tissues and identify the locations of bundles of neural fibers in the brain. The tools will operate on combined structural and diffusion magnetic resonance (MR) datasets of the nervous system and will produce morphometric measures of each white matter (WM) structure, including its trajectory; its cross section, which may vary along the trajectory; and its fiber density. The proposed software tools will globally model imaging datasets: numerical algorithms will adjust the parameters of a model of neural tissues and WM structures until the model is consistent with all of the acquired imaging data and maintains anatomical constraints such as incompressibility and continuity of neural fibers. The tools will differ from current morphometric tools in several ways: they will be more automated; they will incorporate and use all of the complementary information available in the different MR modalities; and they will not have the inaccuracies that are inherent to most current tractography methods. This research project is innovative in several ways. First, the WM measures will be comparable across subjects without image-level registration, because parameters based on WM structures can be compared directly. Second, the investigators will use inverse-solution methods to model the multi-valued volume images and to globally resolve ambiguities in morphometric measures caused by local image artifacts such as partial-volume effects. Third, the modeling approach will not contract diffusion measurements to tensor values, so hard-to-resolve features such as fiber intersections and projections will be preserved. Finally, we will validate the tools at many levels, including histology, macaque imaging, biological variation in normal volunteers, and clinical feasibility studies in brain tumors, HIV-related neuropathology, and multiple sclerosis. The successful development, validation, and application of these sophisticated software tools may spur further development of medical imaging data modeling. The precise measures of brain structures produced should have a significant impact on biomedical research, provide a deeper understanding of the brain and how it changes, and could play an important role in surgical planning. More broadly, the tools should apply to research studies of any biological process that involves changes in WM.
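For contrast with the proposed approach, which deliberately avoids contracting diffusion measurements to tensor values, the sketch below shows the conventional step it aims to improve upon: a log-linear least-squares fit of a single diffusion tensor to diffusion-weighted signals, followed by fractional anisotropy. The gradient directions, b-value, and signals are synthetic, and this is not the project's method.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic ground-truth tensor (strong diffusion along x) and acquisition.
D_true = np.diag([1.7e-3, 0.3e-3, 0.3e-3])             # mm^2/s
b = 1000.0                                              # s/mm^2
g = rng.normal(size=(30, 3))
g /= np.linalg.norm(g, axis=1, keepdims=True)           # unit gradient directions
S0 = 1.0
S = S0 * np.exp(-b * np.einsum("ij,jk,ik->i", g, D_true, g))
S *= 1 + rng.normal(scale=0.01, size=S.shape)           # measurement noise

# Log-linear least-squares fit of the six unique tensor elements.
B = np.column_stack([g[:, 0]**2, g[:, 1]**2, g[:, 2]**2,
                     2 * g[:, 0] * g[:, 1], 2 * g[:, 0] * g[:, 2], 2 * g[:, 1] * g[:, 2]])
d, *_ = np.linalg.lstsq(-b * B, np.log(S / S0), rcond=None)
D = np.array([[d[0], d[3], d[4]],
              [d[3], d[1], d[5]],
              [d[4], d[5], d[2]]])

# Fractional anisotropy from the fitted tensor's eigenvalues.
lam = np.linalg.eigvalsh(D)
fa = np.sqrt(1.5 * np.sum((lam - lam.mean())**2) / np.sum(lam**2))
print("fitted eigenvalues:", np.round(lam, 5), " FA:", round(float(fa), 3))
```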
|
1 |
2006 — 2010 |
Laidlaw, David; Richardson, Peter (co-PI); Karniadakis, George |
CI-TEAM Implementation Project: Collaborative Research: Training Simulation Scientists in Advanced Cyberinfrastructure Tools and Concepts
PIs: George Karniadakis (Brown University), Steven Dong (Purdue University), and Nicholas Karonis (Northern Illinois University). Proposal Number: CI-TEAM 0636336. Title: CI-TEAM Implementation Project: Collaborative Research: Training Simulation Scientists in Advanced Cyberinfrastructure Tools and Concepts. Project Abstract:
Harnessing the nation's most powerful open computational resources, as integrated by the TeraGrid (TG) and its future extensions, will enable otherwise infeasible discoveries in simulation science in general and in the life sciences in particular, and it presents an unprecedented opportunity to share this unique scientific achievement with our nation's students and educators. Our demonstration of effectively using TG resources to perform the first cross-site simulations and visualizations of the human arterial tree has created great enthusiasm among faculty and their students who are interested in accessing this new simulation environment. As developers and users of some of the most advanced tools in computational science today, we want to make these tools available to the wider scientific community, to train the trainers, and to engage postdocs, graduate and undergraduate students, and high school students and their teachers in this effort. Our goal is to lower barriers to the use of TG simulations, and our objectives are to: (1) mobilize the biomechanics research community, (2) train a new generation of simulation scientists, and (3) inspire young students to become tomorrow's leaders in inter-disciplinary simulation science. To this end, we will implement a biomechanics gateway on the TG and establish it as the main platform and simulation framework for further developments and biomedical research. The framework we have developed for the human arterial tree can serve as the common thread in integrating a number of large biological endeavors into a coherent and exciting future research direction and in developing effective new training methods.
Broader Impact: The software implemented in this project will be open source and will be distributed to all TG users, facilitating a transition in computational biology from traditional computing to grid computing on the TG, with potentially unlimited scalability. The new simulation environment will be critical in educating and training a new generation of inter-disciplinary scientists to be comfortable using advanced software tools and concepts across many disciplines. New courses developed in this project, on grid computing, multiscale biological modeling, and scientific visualization in immersive interactive environments, will better prepare future generations of scientists, engineers, and educators in the use and development of cyberinfrastructure. We will work with community colleges in the Chicago area to train their faculty. The project also engages students, especially undergraduates from diverse groups, in inter-disciplinary projects and includes outreach activities for high school students, especially women and minorities.
|
0.915 |
2006 — 2007 |
Breuer, Kenneth (co-PI); Laidlaw, David; Swartz, Sharon (co-PI) |
DDDAS-TMRP: Interactive Data-Driven Flow-Simulation Parameter Refinement for Understanding the Evolution of Bat Flight
The research project will create a Dynamic Data Driven Application aimed at improving the understanding of a complex biophysical system: the flight of bats. The system comprises a multi-level hierarchical simulation of bat flight based on parameterized features of bat morphology and behavior. The simulation operates at multiple levels of physical approximation and computational speed and is capable of very rapid solutions, but it requires input from measurements to ensure fidelity and optimality. This input will be provided in an integrated fashion, drawing from a parallel series of experiments in which several discrete data streams will be generated, including kinematic wing data, wake velocity data over a series of two-dimensional cutting planes, and other data such as bone deformation and experimentally determined material properties. These data will direct the simulation, ensuring accurate solutions while retaining high responsiveness. The data streams will be monitored, synthesized, combined, and processed using an advanced immersive visualization environment, which will be used to guide the interactions between the measurements and the simulation and to organize the disparate streams of data. The simulation environment to be developed is the first such system capable of generating timely solutions of complex flows over highly unsteady and deformable structures. This has multiple scientific and engineering benefits, ranging from the ability to address fundamental questions in evolutionary biology to the design of bio-mimetic structures that draw from the abilities of bats on the wing. The direction provided by the experiments will guide scientists to the key sensitivities of these complex flying systems and provide insight into the complexities of animal flight mechanics. Lastly, the visualization system will provide a unique tool for the synthesis and management of dynamic data drawn from a wide and disparate variety of data sources, each having different qualities.
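The data-driven coupling described above can be caricatured with a small assimilation loop. The two-coefficient surrogate model and synthetic measurement stream below are stand-ins for the actual bat-flight solver and DPIV data: each newly arrived batch of measurements is fit and blended into the running parameter estimate so the simulation keeps tracking the experiment.

```python
import numpy as np

rng = np.random.default_rng(4)

def measure(t):
    """Stand-in measurement stream (e.g., a DPIV-derived wake velocity) with noise;
    a real system would read these values from the experiment as they arrive."""
    return (1.4 * np.sin(2 * np.pi * t) - 0.6 * np.cos(2 * np.pi * t)
            + rng.normal(scale=0.05, size=t.shape))

# Two coefficients of a low-order surrogate for the flow model; a real DDDAS
# would refine parameters of the actual flapping-flight simulation instead.
params = np.array([1.0, 1.0])

for batch in range(20):                                   # each batch = newly arrived data
    t = np.linspace(batch, batch + 1, 50, endpoint=False)
    obs = measure(t)

    # Fit the surrogate to this batch alone ...
    A = np.column_stack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
    batch_fit, *_ = np.linalg.lstsq(A, obs, rcond=None)

    # ... then blend it into the running estimate so the model tracks the data.
    params = 0.7 * params + 0.3 * batch_fit

print("refined model coefficients:", np.round(params, 2))   # close to (1.4, -0.6)
```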
|
0.915 |
2006 — 2010 |
Laidlaw, David; Gatesy, Stephen (co-PI); Brainerd, Elizabeth |
IDBR: Hardware and Software Development for 3D Visualization of Rapid Skeletal Motion in Vertebrate Animals
This award is for the development of instrumentation for 3D visualization of rapid skeletal motion in vertebrates. Its two primary components are (1) a high-speed, biplane X-ray fluoroscopy system and (2) automated software for precise, 3D skeletal animation by aligning 3D CT bone models with pairs of 2D X-ray images. The result will be a substantial advance over technology that is currently available for research in vertebrate functional morphology and biomechanics. The objective is to make dynamic 3D skeletal imaging an affordable and widely available technique. The new combination of high-speed, biplane X-ray and 3D visualization software is named "CTX imaging." Vertebrate functional morphology is an active and growing subfield of organismal biology in which the mechanical and evolutionary relationships between anatomical form and biomechanical function are investigated. For example, the action of long tendons as springs in kangaroo hopping, the effect of mouth size and shape on suction feeding in fish, and the function of the "wishbone" in bird flight have all been explained in the past two decades by functional morphologists. New discoveries in functional morphology have consistently been driven by the introduction of new technologies, such as high-speed cameras, electromyography, force plates and digital particle image velocimetry. Natural movements in animals almost always occur in 3D and often are very fast. Quantification of rapid skeletal movement in 3D would be a powerful technique for relating form to function, but functional morphologists have had no technique for measuring bone movements in 3D. The movement of external markers on the skin is generally used as a proxy for skeletal movement, but skin is often loose and the markers do not track the skeleton well. CTX analysis of a CT scan plus two X-ray movies will produce a highly accurate 3D animation of skeletal elements moving in space. These will be more than stick figures -- the complete 3D morphology of each bone will be present and animated precisely with this technique. Biplane X-ray imaging and CTX analysis will make it possible to study aspects of skeletal kinematics that are largely inaccessible with other techniques, such as long axis rotation of bones, putative bending of fine bones in small animals, and the relative 3D motions of the articular surfaces of joints. CTX will provide more accurate data for input into musculoskeletal models, such as joint angles for inverse dynamics and neural control models. This is an interdisciplinary proposal combining the expertise of two functional morphologists (Brainerd and Gatesy) who have extensive experience with dynamic X-ray imaging of animal movement and a computer scientist (Laidlaw) who specializes in building computational tools for accelerating science, with particular emphasis on scientific visualization tools.
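The model-to-image alignment at the heart of the CTX software can be caricatured as a pose optimization. The sketch below uses assumed ingredients (bone landmarks rather than full fluoroscopic images, orthographic stand-ins for the two calibrated views, and SciPy's general-purpose least-squares solver), so it illustrates the idea rather than the project's actual algorithm.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(5)

# Landmarks on the CT-derived bone model (bone coordinate frame, in mm).
model = rng.uniform(-20.0, 20.0, (12, 3))

def project(points, view):
    """Orthographic stand-ins for two calibrated fluoroscopes:
    view 0 images the x-y plane, view 1 images the y-z plane."""
    return points[:, [0, 1]] if view == 0 else points[:, [1, 2]]

# Ground-truth pose used to synthesize the two "X-ray" measurements.
true_rot = Rotation.from_rotvec([0.10, -0.20, 0.05])
true_t = np.array([3.0, -5.0, 2.0])
images = [project(true_rot.apply(model) + true_t, v)
          + rng.normal(scale=0.1, size=(12, 2)) for v in (0, 1)]

def residuals(pose):
    """Reprojection error of the posed bone model against both views."""
    rot, t = Rotation.from_rotvec(pose[:3]), pose[3:]
    world = rot.apply(model) + t
    return np.concatenate([(project(world, v) - images[v]).ravel() for v in (0, 1)])

fit = least_squares(residuals, x0=np.zeros(6))
print("recovered rotation vector:", np.round(fit.x[:3], 3))    # near (0.10, -0.20, 0.05)
print("recovered translation (mm):", np.round(fit.x[3:], 2))   # near (3.0, -5.0, 2.0)
```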
Three-dimensional visualization of rapid skeletal motion in vertebrates will be possible with instrumentation to be developed under this award. CTX imaging, made possible with this instrumentation, will open up new areas of research in vertebrate functional morphology, such as the comparative study of 3D joint biomechanics. The animations developed with CTX will be powerful scientific tools, but they will also be accessible and appealing to the general public. These animations will be used to increase appreciation for basic research whenever possible.
|
0.915 |
2007 — 2010 |
Breuer, Kenneth (co-PI); Laidlaw, David; Swartz, Sharon |
Bat Wing Structure and the Aerodynamic Mechanisms of Flapping Flight
Flight is the most common mode of animal locomotion, used by over 1,200 species of bats, 10,000 species of birds, and more than a million species of flying insects. It is thus surprising that understanding of the mechanics, aerodynamics, and evolution of biological flight is quite limited. For example, it was long believed that the wings of bats generate lift in the same way as human-engineered airplane wings. Recently, it has been demonstrated that the aerodynamics of bat wings are very different from those of rigid wings and that bat wings undergo enormous shape changes during flight. Two major impediments to in-depth understanding of bat flight are the lack of information about the mechanically unique bone, skin, and muscle of bat wings and the limited ability of human scientists to consider many complex streams of data, such as wing motions, air velocities, and degree of bone bending, together at one time. An interdisciplinary team of researchers from Brown University will carry out the first detailed mechanical tests on the special materials of bat wings and document the degree to which bat bones bend and skin stretches and recoils during flight. These results will be interpreted with novel computer visualization tools that will bring 3D virtual reality out of the gaming world and into scientific research. One of the broader impacts of this project will be the training and mentoring of a number of undergraduate and graduate students from biology, engineering, and computer science. They will learn to work together effectively, aided by new interdisciplinary courses that will be developed by team faculty. Visualization techniques developed here will have broad application in the natural sciences. Additionally, progress will be made toward identifying biological design characteristics that can be used in the future for the construction of novel technologies such as miniaturized autonomous air vehicles.
|
0.915 |
2009 — 2015 |
Laidlaw, David; Van Dam, Andries; Karniadakis, George (co-PI); Hesthaven, Jan (co-PI) |
MRI: Development of a Next-Generation Interactive Virtual-Reality Display Environment for Science
Proposal #: CNS 09-23393. PI(s): Laidlaw, David H.; Hesthaven, Jan S.; Karniadakis, George E.; van Dam, Andries. Institution: Brown University, Providence, RI 02912-9002. Title: MRI/Dev.: Next-Generation Interactive Virtual-Reality Display Environment for Science.
This award is funded under the American Recovery and Reinvestment Act of 2009 (Public Law 111-5).
Project Proposed: This project, developing a world-class interactive, large-field-of-view, 95-megapixel immersive virtual-reality environment, aims at creating novel, demonstrably useful, rich, and expressive interaction, visualization, and analysis methods that truly leverage the human visual and motor systems in virtual reality (VR). The work intends to help accelerate current scientific work, to enable research into innovative visualization methods for accelerating science in the future, and to leverage the fundamental advantages of immersive large-field-of-view visualization and body-centric human-computer interaction. Two decades of research have established the value of immersive displays as a research tool in many scientific domains and have also identified a set of currently unmet needs that block application of such displays to new problems and domains. These needs encompass high display resolution, brightness, contrast, and size; fast, responsive tracking with high accuracy and low latency; ease of use in working with new kinds of data; and reliability. Although a few multi-million-dollar systems exist that may be able to address these needs, they do not match the proposed display's color gamut, small physical-space requirement, and lower replication cost. The system is expected to support more natural and effective interaction with data than current 3D point-and-click, wand-driven CAVEs™ by making full use, where appropriate, of full-body, motion-captured user interactions and gestures. More display information will be made accessible to the human visual system with less user effort by matching or exceeding the perceptual qualities of a modern LCD monitor; an immersive stereo display will be integrated with the perceptual resolution of a desktop display and superior brightness and contrast. Integration of software tools for quickly creating virtual-reality applications will address ease of use and reliability. The new tools are expected to be simple, to support a spectrum of displays, and to provide rich support for gestural interaction. A monitoring process for the interacting hardware and software components will be put in place to identify and address problems before instruments are delayed. Users of the system include planetary geologists, systems biologists, brain scientists, cell and molecular biologists, biologists studying animal motion (including flight), fluid dynamicists, bioengineers studying arterial hemodynamics, visual designers developing interactive techniques for scientists, digital literary artists, and visualization and interaction researchers. Within interaction research, experiments using the system are expected to establish the appropriate level of display technology (e.g., resolution, interactivity, or stereographic display) needed for different classes of scientific analysis. The techniques, monitoring system, and software environment will be distributed on SourceForge to, respectively, help accelerate scientific progress nationwide, support the development of multi-display applications, and help ensure reliability.
Broader Impacts: While educating many students, the instrument is expected to enable new advances in all of the scientific disciplines of the users listed above, including a better understanding of the workings of cells and of the genes and proteins they contain (which could consequently improve quality of life broadly); the behavior of fluids in arteries and around moving animals; animal locomotion (which could lead to improved biomimetic locomotive, floating, or flying vehicles); the wiring of the human brain, how it affects human capabilities, and how it can degrade; and Mars. The efforts are likely to produce a new generation of scientists who can better analyze research problems using scientific visualization, computer scientists more cognizant of scientists' analytical needs, and artists and designers who can accelerate the design process for immersive scientific visualization tools.
|
0.915 |
2010 — 2014 |
Laidlaw, David |
GV: Small: Collaborative Research: Supporting Knowledge Discovery Through a Scientific Visualization Language
This collaborative research brings together computer scientists from the University of Southern Mississippi (USM) and Brown University and neuroscientists from the University of Mississippi Medical Center (UMMC) to study the design of a scientific visualization language (SVL). Despite the numerous visualization approaches already devised, visualization remains more of an art than a science. Grounded in theories and methods from human-centered computing, machine learning, and cognitive psychology, this work will develop and evaluate an SVL to provide a principled way to help scientists understand how and why visualizations work. Tools and theories developed in this project can lead to efficient knowledge discovery, helping neuroscientists study brains using diffusion tensor magnetic resonance imaging (DTI).
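One way to picture a symbol space, purely as a hypothetical illustration rather than the project's actual SVL grammar: a small declarative specification in which each visual symbol binds data fields to visual channels, and a renderer (here a stub that only reports the mapping) interprets the composition.

```python
from dataclasses import dataclass, field

@dataclass
class Symbol:
    """One visual symbol in a hypothetical SVL: a mark plus channel bindings."""
    mark: str                                     # e.g., "streamtube", "glyph", "surface"
    bindings: dict = field(default_factory=dict)  # data field -> visual channel

@dataclass
class VisSpec:
    dataset: str
    symbols: list

# A hypothetical composition for a DTI brain dataset: tubes for fiber tracts,
# colored by fractional anisotropy, over a semi-transparent anatomical surface.
spec = VisSpec(
    dataset="dti_brain_case01",
    symbols=[
        Symbol("streamtube", {"tract_points": "geometry",
                              "fractional_anisotropy": "color",
                              "mean_diffusivity": "radius"}),
        Symbol("surface", {"t1_isosurface": "geometry",
                           "tissue_label": "opacity"}),
    ],
)

def describe(spec):
    """Stand-in for a renderer: report how data fields map to visual channels."""
    print(f"dataset: {spec.dataset}")
    for s in spec.symbols:
        for data_field, channel in s.bindings.items():
            print(f"  {s.mark}: {data_field} -> {channel}")

describe(spec)
```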
This work has the following specific objectives and outcomes: (1) close collaboration with scientists to discover, refine, and verify a symbol space, (2) a semantic space that describes the relationship among symbols, (3) a testbed that implements SVL for neuroscientists to compose visualizations, (4) development of new and enhanced courses at University of Southern Mississippi and Brown University, and (5) wide dissemination of the research outcomes through open-source software, experimental data, open labs, publications, and presentations.
This project is expected to have broad impact. It may lead to significantly better approaches to human knowledge discovery and decision making in many disciplines where visualizations have found successful application, including neuroscience, biomedicine, bioinformatics, biology, chemistry, geosciences, business, economics, and education. Undergraduate and graduate students are expected to participate in the research through our courses, and student exchanges are planned between USM and Brown. K-12 students can visit the USM lab while the project is in progress. Software and results will be disseminated via the project Web site (https://sites.google.com/site/simplevisualizationlanguage).
|
0.915 |
2013 — 2017 |
Laidlaw, David |
CGV: Small: Collaborative Research: Immersive Visualization and 3D Interaction for Volume Data Analysis
This collaborative research project (IIS-1320046, IIS-1319606) designs a 3-dimensional immersive visualization environment for volume data that is critical in a variety of application domains, such as medicine, engineering, geophysical exploration, and biomechanics. For example, biomechanics researchers examine volumes derived from insect scans to understand how form relates to function, particularly in regard to how insects create internal fluid flows. For effective analysis of a 3D volume, scientists and other users need to integrate various views, peer inside the volume, and separate various structures in the data. However, despite many advances in volume rendering algorithms, neither traditional displays nor traditional interaction techniques are sufficient for efficient and accurate analysis of complex volume datasets. This project develops an approach for interactively exploring and segmenting volume datasets by combining and extending: (1) utilization of advanced, high-fidelity displays based on virtual reality (VR) technologies for improving the visual analysis of volume data, and (2) the use of natural, gesture-based 3D techniques. Using controlled, empirical studies with real-world volume datasets from biomechanics and other biological sciences, the investigators are determining what characteristics of advanced displays can lead to faster, more accurate visual analysis. Iterative design and evaluation methods are being used to develop usable and natural 3D interaction techniques with which users can explore the interior of volume datasets. Beyond the empirical findings of these studies, an important outcome of the project is the design of a next-generation volume data analysis system that can be used by scientists and researchers to improve the efficiency and accuracy of their work.
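As background, the volume rendering such an environment builds on can be sketched minimally: orthographic rays are cast through a scalar volume, a simple transfer function assigns color and opacity to each sample, and samples are composited front to back. The synthetic volume and the transfer function are illustrative assumptions.

```python
import numpy as np

# Synthetic scalar volume: a bright sphere that fades to zero toward the edges.
n = 96
ax = np.linspace(-1, 1, n)
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
volume = np.clip(1.0 - 3.0 * (np.sqrt(X**2 + Y**2 + Z**2) - 0.4), 0.0, 1.0)

def transfer(v):
    """Illustrative transfer function: scalar value -> (grayscale color, opacity)."""
    return v, 0.05 * v**2

# Orthographic raycasting along the z axis with front-to-back compositing.
color = np.zeros((n, n))
alpha = np.zeros((n, n))
for k in range(n):                       # march every ray one slice at a time
    c, a = transfer(volume[:, :, k])
    color += (1.0 - alpha) * a * c       # accumulate color attenuated by opacity so far
    alpha += (1.0 - alpha) * a           # accumulate opacity
print("rendered image range: %.3f to %.3f" % (color.min(), color.max()))
# `color` is an n-by-n grayscale rendering; it could be displayed with matplotlib.
```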
The expected results will provide a deep understanding of visualization principles, fostering further advancements in volume data analysis. Easier, more accurate, and faster analysis can lead to improvements in healthcare, breakthroughs in science, and advances in education. For example, this work may lead to insights into fundamental physiological mechanisms of feeding, breathing, and circulation in insects, one of the most important animal groups on earth. There are millions of insect species living in almost every habitat, and their lifestyles have profound impacts on human societies. Their effects in areas such as agriculture and health can be both positive (e.g., pollination) and negative (e.g., crop damage, disease), and understanding their fundamental physiologies is critical to controlling their impact. The project provides opportunities for interdisciplinary educational and research activities for graduate and undergraduate students, and outreach activities to underrepresented students. The results of this work will be disseminated broadly via publication in archival journals, peer-reviewed conferences, and online forums. The project website (https://research.cs.vt.edu/3di/node/188) will provide access to research results, including data sets and software.
|
0.915 |