1999 — 2002
Grimm, Cindy; Hughes, John (co-PI)
Lumigraphs and Manifolds
Introduction and Objectives. Image-based rendering (IBR) is a relatively new computer graphics technique in which multiple images (either computer-generated or photographs) of a scene are modified and combined to produce an image representing a novel view of the scene. The resulting images often have great detail but are sometimes incorrect, partly because of the lack of an underlying geometric model. Traditional graphics, by contrast, uses geometric and lighting models to generate correct views of a scene from an arbitrary viewpoint, but because of modeling costs these views often lack detail. An ideal geometric model would have both the complete geometry of the object and a BRDF (bidirectional reflectance distribution function) at every point on the surface to describe how light interacts with the object, allowing computation of any view of the object under any lighting conditions. A simple approximation to this ideal is a geometric model with a single color; texture maps improve this approximation by mapping an image onto the geometry. This project will develop a new modeling paradigm, based on IBR, that extends this idea further by adding an approximation to the BRDF at every point on the surface. This new modeling paradigm improves the efficiency of generating and storing image-based rendering models by combining them with geometric model information, thus bridging the gap between the two methods. The intent is to construct geometric models that are visually complex and geometrically faithful but avoid the overhead of geometrically modeled surface detail. Just as important as developing an underlying model is the ability to use real-world objects to create the models. Many of today's complicated geometric models were captured from physical objects using a laser scan or a 3D digitizer. One goal of this research is a similar mechanism for capturing and rendering objects or scenes with complex visual properties.
Methods. The two disciplines to be merged are image-based rendering and geometric modeling. The project will use the lumigraph for IBR, partly because it already incorporates a limited geometric model, and will use manifold-based models for the geometric component. The lumigraph is a very general method for capturing the behavior of light as it leaves an object. Manifold-based geometry supports construction of arbitrary-topology surfaces by blending small surface pieces. The result of combining a lumigraph with a manifold is a geometric model that carries information about how light rays are emitted from it. With this structure, any local constancy in emitted light along the surface is easily recognized, leading to considerable data-compression advantages. Thus, in addition to an interesting geometric model, this structure allows for efficient storage of IBR datasets. The same local constancy can be used as a guide to goodness-of-fit of the surface to the actual geometry of the object being modeled. Each of these operations requires a homogeneous representation of the underlying geometry - one in which all of the points can be treated equally, rather than, for example, being divided into the vertices, edges, and faces of polyhedra, or the abutting patches of B-spline models. The manifold structure provides exactly this homogeneity.
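As an illustration of the kind of chart blending that a manifold-based model relies on (a minimal sketch under assumed chart functions and a hypothetical blend kernel, not the project's actual construction), the following Python fragment evaluates a point on a surface by blending the embeddings of overlapping local charts with partition-of-unity weights:

```python
# Minimal sketch (assumed charts, not the project's construction): evaluate a point on a
# manifold-based surface by blending overlapping local charts with a partition of unity.
import numpy as np

def bump(u, v):
    """Smooth weight that falls to zero at the boundary of the unit chart domain (0,1)^2."""
    def b(t):
        return np.exp(-1.0 / (t * (1.0 - t))) if 0.0 < t < 1.0 else 0.0
    return b(u) * b(v)

# Hypothetical chart embeddings: each maps local (u, v) coordinates into 3D.
def chart_a(u, v):
    return np.array([u, v, 0.1 * u * v])

def chart_b(u, v):
    return np.array([u + 0.5, v, 0.1 * (u + 0.5) * v])

# Transition function from chart A coordinates into chart B coordinates (assumed overlap).
def a_to_b(u, v):
    return u - 0.5, v

def evaluate(u, v):
    """Blend the two chart embeddings wherever their domains overlap."""
    points, weights = [], []
    if 0.0 < u < 1.0 and 0.0 < v < 1.0:
        points.append(chart_a(u, v)); weights.append(bump(u, v))
    ub, vb = a_to_b(u, v)
    if 0.0 < ub < 1.0 and 0.0 < vb < 1.0:
        points.append(chart_b(ub, vb)); weights.append(bump(ub, vb))
    w = np.array(weights)
    return np.average(points, axis=0, weights=w / w.sum())

print(evaluate(0.7, 0.4))  # lies in the overlap of both charts
```

Because every surface point is produced the same way, by blended charts, the representation is homogeneous in the sense described above; attaching lumigraph-style radiance data per chart follows the same pattern.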
Impact. Many image-based rendering approaches use some knowledge of geometry, usually depth. The addition of an explicit geometric object has two potential benefits: a richer modeling type, in which global lumigraph-like data is attached to a geometric foundation, and a means for explicitly exploring the relationship between geometry and the compression of data gathered from images. The resulting models will be useful in multiple applications, particularly those in which high image quality is essential but explicit modeling is impossible. These include special-effects production and educational applications (e.g., on-line medical models), as well as possible applications in reverse engineering and architectural lighting simulation.
2002 — 2004
Grimm, Cindy; Pless, Robert
Reu Sites: Summer Undergraduate Research Program
EIA-0139576 Pless, Robert Washington University
TITLE: Summer Undergraduate Research Program
The department has a long history of supporting undergraduate summer research opportunities, and these opportunities will exist with or without external support. However, external support does further particular NSF objectives having to do with the size of the program, the quality of the student experience, and the balance of student recruiting. In particular, the requested funding would:
- Maintain an efficient program size of sixteen summer students. Together with summer research projects funded from individual faculty research funds, this gives a critical mass of students that improves the overall student experience.
- Foster a broader range of student projects.
- Clearly separate the objectives of the program from individual faculty research agendas. Faculty who support students from individual research grants tend to be conservative in hiring and must choose projects that serve the objectives of their funding sources. This may negatively impact both the student experience and student selection.

There are three explicit goals for this REU program: first, to give undergraduates from many backgrounds the opportunity and training to participate in frontier research; second, to foster and extend the undergraduate research culture already present at Washington University; and third, to create a supportive and encouraging environment in which to do research. The first goal will be met with a targeted advertising campaign. This campaign will not exclude any source of potential students, but will focus on (a) predominately teaching colleges, (b) historically minority schools, and (c) non-computer-science departments. Second, a coherent program involving a designated group of summer research students gives a reason to organize events, a social group, and a critical mass for which it is appropriate to offer programs that aid all undergraduate students involved in research. Finally, we propose a program that includes an explicit training component to teach students the tools to attack research problems effectively. A set of defined checkpoints during the course of the summer will allow program coordinators to identify and solve potential problems.
2003 — 2009
Grimm, Cindy
CAREER: A Composition System for Computer Graphics
Abstract
Engineers and artists approach problems in very different ways. The terms engineer and artist are used here in a very broad sense: engineers are people who primarily build things and tend to approach problems by asking how something can be done. Artists, on the other hand, create content and focus on what information they are trying to convey. Both of these approaches are important, and it is often the intersection of the two that produces the most compelling results. Computer graphics has been fortunate in that a large artistic community is actively involved in its development. Unfortunately, software design is still largely in the hands of the engineers.
The goal of this proposal is to bring the artistic approach into existing software and to bridge the communication gap between artists and engineers. As a maturing field, computer graphics is ready to address the problem of purposeful display - not just how to render an image from a model, but how it should be rendered so as to convey specific information about that model.
2004 — 2008
Grimm, Cindy
Surface Construction and Comparison Using Manifolds
This research focuses on the comparison of surfaces which are similar, but not identical. For example, almost all leg bones are similar in shape (a cylinder with knobs on the ends) but may differ greatly in length, diameter, size and shape of the knobs, etc. The goal of this research is to provide robust methods for comparing, categorizing, and analyzing surfaces of this type. Researchers in the bio-medical, physical anthropology, and forensics fields, to name a few, currently rely on human experts to perform these measurements. The types of measurements they can make are also limited to simple ones, such as length and circumference. These measurements are used to answer questions such as: What race/sex is this bone from? How tall was the person? Is the surface abnormal? Was there an injury at some time? Why does the joint not function normally? Analytical measurements will not only remove the (sometimes subjective) human component, but will also broaden the scope of surface comparison beyond simple length measurements.
The first part of this research is a constructive approach for creating manifolds of arbitrary topology. These manifolds provide a natural mechanism for analyzing specific subsets of the surface via the creation of local maps that cover any desired portion of the surface. Manifolds also provide a parameterization mechanism for identifying corresponding points on surfaces. The research addresses theoretical issues related to manifolds such as improved local map placement, more controllable blend strategies, manifolds with boundary, and using manifolds to parameterize existing meshes or point clouds. The second part of this research, detailed surface comparison, is a relatively novel task, and includes a well-posed statement of the problem in addition to the development of algorithmic solutions. For example: What is a feature? How do we quantitatively measure feature differences in a meaningful way? How do we measure the quality of the correspondence between two surfaces?
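To illustrate how a shared parameterization supports correspondence-based comparison (an illustrative sketch only; the cylinder-like "bone" embeddings below are made up and are not the project's surfaces), corresponding points on two surfaces can be sampled at the same parameter values and their differences aggregated:

```python
# Illustrative sketch (assumed embeddings): two surfaces parameterized over the same
# domain can be compared point-by-point through corresponding parameter values.
import numpy as np

def bone_a(u, v):
    """Hypothetical surface A: a cylinder of radius 1."""
    return np.array([np.cos(u), np.sin(u), v])

def bone_b(u, v):
    """Hypothetical surface B: a slightly thicker, longer cylinder."""
    return np.array([1.2 * np.cos(u), 1.2 * np.sin(u), 1.5 * v])

# Sample corresponding points through the shared (u, v) parameterization.
us = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
vs = np.linspace(0.0, 1.0, 16)
diffs = [np.linalg.norm(bone_a(u, v) - bone_b(u, v)) for u in us for v in vs]

# Simple aggregate measures of how much the two surfaces differ.
print("mean difference:", np.mean(diffs))
print("max  difference:", np.max(diffs))
```

Richer, feature-aware difference measures (knob size, curvature, etc.) would replace the plain Euclidean distance used here, but the correspondence mechanism is the same.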
2007 — 2012
Chamberlain, Roger; Goldman, Kenneth; Gill, Christopher; Sawyer, Keith; Grimm, Cindy
Cpath T: Active Learning For Transformation of the Undergraduate Experience
Educational research provides strong evidence that active and collaborative learning result in a deeper and more integrated understanding of concepts, as well as significant improvement in student retention in degree programs. Engaged students remember concepts longer, enjoy the learning process more, and are more likely to continue. Collaborative learning builds important communication, teamwork, and leadership skills. In addition, active learning in the classroom provides an opportunity to teach the creative design process through discussion and critique of student work. The proposed transformation will serve as a testbed for further study and development of active learning in the context of computer science and engineering. Several forms of active learning (including inquiry-based, problem-based, and collaborative learning) will be applied and studied in partnership with educational experts. The proposed transformation will apply teaching methodology historically used in art and architecture design studios. Through an educational partnership with the College of Architecture, computer science and engineering faculty will learn these teaching techniques in order to adapt and apply them to computer science and engineering studios. Finally, the process of making the curriculum more accessible to individuals outside the discipline will result in an unprecedented, carefully interwoven framework of concepts that will support novel approaches to the study of computer science and engineering. Both the transformation process and the resulting program will serve as models for other institutional transformations to active learning. Tools, teaching practices, and other artifacts developed to support the transformation will be disseminated for adoption. The modularized curriculum structure resulting from this transformation will support ongoing outreach efforts in K-12 education by enabling efficient targeted professional development for K-12 teachers.
2007 — 2011
Ju, Tao; Grimm, Cindy
Geometric Modeling For Spatial Analysis of Bio-Medical Data
Advances in bio-medical imaging, such as microscopy, MRI, CT, and PET scanning, are producing a growing body of available 3D data. While computer modeling to date has largely focused on viewing or recovering the 3D shapes of anatomical structures from this data, more and more application areas involve further analysis of the data itself. One such area is the comparative analysis of 3D data collected from different individuals or from the same subject at different times. These comparisons are often the basis for making clinical decisions as well as forming scientific hypotheses. Supporting multi-subject analysis on this type of data requires establishing a mapping from one individual's volume to another's. This is a challenging problem not only because the shape of anatomical structures may vary greatly from one individual to another, but also because of noise and resolution issues inherent in the 3D capture process.
The goal of this research is to develop a new modeling paradigm suitable for comparative analysis of 3D data from different subjects. This paradigm consists of constructing a topologically correct surface from segmented data of individual subjects, establishing a consistent mapping among all individuals on their boundary surfaces, and extending that surface mapping (2D) in a simple, direct way to smooth mappings between the enclosed volumes (3D). The methods can be applied to both manifold and non-manifold surface representations, and the resulting volume mapping supports organization of a large repository of 3D images for efficient volume-based queries. As a broader impact, the research will directly benefit medical researchers by providing them with the necessary computational tools to perform multi-subject, 2D and 3D data comparisons.
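As a toy illustration of extending a boundary (surface) correspondence to the enclosed volume (this uses simple inverse-distance weighting of boundary displacements on invented points; it is not the project's mapping construction), an interior point of one subject's volume can be carried into the other's:

```python
# Toy sketch (not the project's method): extend a known boundary correspondence to an
# interior point by inverse-distance weighting of the boundary displacements.
import numpy as np

# Assumed data: boundary points of subject A and their corresponding points on subject B.
boundary_a = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
boundary_b = boundary_a + np.array([0.1, 0.0, 0.05])  # a small, made-up deformation

def map_interior(p, src, dst, eps=1e-9):
    """Map an interior point of volume A into volume B by blending boundary displacements."""
    d = np.linalg.norm(src - p, axis=1)
    w = 1.0 / (d + eps)
    w /= w.sum()
    displacement = (w[:, None] * (dst - src)).sum(axis=0)
    return p + displacement

print(map_interior(np.array([0.25, 0.25, 0.25]), boundary_a, boundary_b))
```

A smooth volume mapping of this general kind is what makes voxel-to-voxel comparison and volume-based queries across subjects possible.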
2010 — 2014
Grimm, Cindy
Collaborative Research: Biological Shape Spaces, Transforming Shape Into Knowledge @ Oregon State University
This project will develop a framework to represent, analyze and interpret shapes extracted from images, supporting a wide range of biological investigations. The primary objectives are: (1) to develop a mathematical framework and computational tools for the quantification and analysis of shapes; (2) to integrate these computational models with machine learning and statistical inference methods to enable new discoveries, transforming imaging data into biological knowledge; (3) to deliver novel quantitative methodologies for shape analysis that start from a biological premise, rather than a purely geometric one. The aim is thus not only to quantitatively describe shape, but to develop methods for linking morphological variation to its underlying biological causes. To ensure that the project focuses on methods that are most promising to biology with significant breadth of application, model and tool development will be guided and supported by a set of diverse case studies, ranging from the sub-cellular to organismal scales.
Shape represents a complex and rich source of biological information that is fundamentally linked to underlying mechanisms and function. However, shape is still often examined on a qualitative basis in many disciplines in biology, an approach that is time consuming and prone to human subjectivity. While ad hoc quantitative methods do exist, they are often inaccessible to non-experts and do not easily generalize to a wide variety of problems. The inability of biologists to systematically link shape to genetics, development, environment, function and evolution often precludes advances in biological research spanning diverse spatial and temporal scales, from the movement of molecules within a cell to adaptive changes in organismal morphology. The primary goal of this project is to develop a new suite of widely applicable quantitative methods and tools for the study of biological shape, addressing the significant need for consistent and repeatable analysis of shape data.
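One common way to turn shapes into quantitative variables that statistical and machine-learning methods can operate on is to align landmark points and build a low-dimensional shape space with principal component analysis; the sketch below uses synthetic landmarks and stands in for, rather than reproduces, the project's methodology:

```python
# Minimal sketch (synthetic landmarks): build a simple "shape space" by stacking
# aligned landmark coordinates and applying principal component analysis.
import numpy as np

rng = np.random.default_rng(0)
n_specimens, n_landmarks = 20, 10

# Assumed input: each specimen is a set of 2D landmarks already aligned (e.g., by Procrustes).
base = rng.normal(size=(n_landmarks, 2))
shapes = np.array([base + 0.05 * rng.normal(size=(n_landmarks, 2)) for _ in range(n_specimens)])

# Flatten each specimen into one feature vector and center the data.
X = shapes.reshape(n_specimens, -1)
X -= X.mean(axis=0)

# PCA via SVD: principal axes describe the dominant modes of shape variation.
U, S, Vt = np.linalg.svd(X, full_matrices=False)
explained = S**2 / (S**2).sum()
scores = X @ Vt[:2].T   # each specimen's coordinates in the 2D shape space

print("variance explained by first two modes:", explained[:2])
print("first specimen's shape-space coordinates:", scores[0])
```

Coordinates in such a shape space can then be related to biological covariates (genotype, treatment, environment) with standard statistical inference, which is the "shape into knowledge" step the project targets.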
2010 — 2014
Pless, Robert; Grimm, Cindy
Collaborative Research: Imagequest: Citizens Advancing Biology With Calibrated Imaging and Validated Analysis
Washington University, the University of Arizona, the University of California San Diego, and the University of Vermont are awarded collaborative grants to develop tools that support deeper integration of citizen scientists with a broader range of biological imaging projects. Our specific aims are to develop tools (1) to make it easier for volunteers with smart phones to capture calibrated imagery supporting quantitative data measurements, (2) to facilitate volunteer contributions to image analysis tasks with better interfaces and formal methods to validate and create consensus estimates from many responses, and (3) to foster long-term motivation and engagement within virtual communities through online environments that combine aesthetics, social networking, and visualization tools. These tools will initially focus on problem domains including the monitoring of urban re-forestation efforts and the analysis of 3D biological and neurological imaging data at many different scales. The project will produce a set of smart phone and web apps, available to the broader biological research community, for projects that require calibrated image capture and analysis, and will create an experimental web portal to engage virtual communities of volunteers and scientists collaborating on imaging problems.
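One simple way to form a validated consensus estimate from many volunteer responses (aim 2 above) is a robust aggregate with outlier rejection; the measurements and thresholds in this sketch are invented for illustration:

```python
# Illustrative sketch (invented data): combine many volunteer measurements of the same
# quantity into a consensus estimate, discarding responses far from the robust center.
import numpy as np

responses = np.array([12.1, 11.8, 12.4, 30.0, 12.0, 11.9, 12.3])  # e.g., tree diameter in cm

median = np.median(responses)
mad = np.median(np.abs(responses - median))               # median absolute deviation
keep = np.abs(responses - median) <= 3.0 * 1.4826 * mad   # reject gross outliers

consensus = responses[keep].mean()
agreement = responses[keep].std()

print(f"consensus: {consensus:.2f}, spread among kept responses: {agreement:.2f}")
print("responses flagged as outliers:", responses[~keep])
```

The same idea generalizes to image-annotation tasks, where many volunteer marks on the same image are merged and their agreement used as a quality signal.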
The ImageQuest project will make it simpler for volunteers and scientists to engage in a much broader set of biological imaging research. Providing tools that enhance a volunteer's ability to capture and analyze images relieves a technical burden on biologists trying to develop citizen science approaches. Making these tools smarter and more interactive encourages volunteers to contribute meaningfully to novel research questions, and building a virtual community around the volunteers and the scientists helps to recruit, motivate, and retain long-term participation of the public in scientific research.
2013 — 2017
Grimm, Cindy
Cgv: Medium: Collaborative Research: Developing Conceptual Models For Navigation, Marking, and Inspection in the Context of 3d Image Segmentation @ Oregon State University
3D image segmentation is an important and ubiquitous task in image-oriented scientific disciplines, particularly biomedicine, where images provide the basis for biological discovery. While imaging techniques reveal spatial content and activities within an entire subject, ultimately biologists are interested in specific anatomical structures (e.g., organs, tissues, cells, etc.). Delineation of the structures of interest within a given set of images is therefore a typical first-step in the data-to-knowledge pipeline, with both the efficiency and accuracy of segmentation critically affecting how the data is utilized in research and clinical practice. Creating accurate segmentations, particularly for 3D biomedical images, is a non-trivial task that calls for cooperation between humans and computers. While human experts, with their superior visual perception skills and vast knowledge and experience acquired from years of training, ultimately decide what constitutes an accurate segmentation, they lack the objectivity or efficiency of computational algorithms. On the other hand, without expert guidance, segmentation algorithms easily fail in the presence of the noise and ambiguity that are inevitable in biomedical images. In this research the PIs will investigate 3D image segmentation as a human-computer interaction paradigm to better understand the human factors that are involved in the current segmentation process, with the goal of making the process more efficient, accurate and repeatable. The team's hypothesis is that the segmentation process could be significantly improved through a deeper understanding of how people perform low-level perception and cognition tasks in the context of 3D segmentation (e.g., visual cues, delineation of structures by marks, and local accuracy or quality criteria), and how domain experts wish to specify high-level segmentation constraints (e.g., connectivity, topology, and shape). To test this hypothesis the PIs will analyze the segmentation process by domain experts that span a reasonable subspace of the actual segmentors and segmentation tasks in biology and clinical practice, to define a conceptual framework that captures the low-level perception and cognitive elements of segmentation as well as the higher-level information related to navigation, marking, and inspection. Building upon and instantiating the framework, the team will work with experts to develop a prototype segmentation tool that explores novel interaction and visualization paradigms as well as their supporting algorithms. The prototype tool will be used to both verify the conceptual framework and to create a more effective practical solution to segmentation.
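As a toy illustration of the kind of human-in-the-loop step the conceptual framework concerns (a synthetic image, a hypothetical tolerance, and a deliberately simple algorithm, not the prototype tool), an expert's marks can seed an automatic region-growing pass whose result the expert then inspects and corrects:

```python
# Toy sketch (synthetic image, not the prototype tool): an expert's marks seed a simple
# region-growing segmentation that the user could then inspect and correct.
import numpy as np
from collections import deque

image = np.zeros((8, 8))
image[2:6, 2:6] = 1.0                                               # a bright "structure"
image += 0.05 * np.random.default_rng(1).normal(size=image.shape)   # imaging noise

def grow(image, seeds, tol=0.3):
    """Flood-fill from user-provided seed pixels, accepting neighbors of similar intensity."""
    mask = np.zeros(image.shape, dtype=bool)
    queue = deque(seeds)
    for s in seeds:
        mask[s] = True
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < image.shape[0] and 0 <= nc < image.shape[1] and not mask[nr, nc]:
                if abs(image[nr, nc] - image[r, c]) < tol:
                    mask[nr, nc] = True
                    queue.append((nr, nc))
    return mask

expert_marks = [(3, 3), (4, 4)]   # pixels the expert painted inside the structure
print(grow(image, expert_marks).astype(int))
```

The research questions above sit around exactly this loop: how marks are placed, how intermediate results are navigated and inspected, and how high-level constraints (topology, connectivity, shape) are communicated back to the algorithm.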
Broader Impacts: By formulating and studying segmentation as a human perception and cognitive task, this work represents a major departure from existing research on either segmentation algorithms or tools. The resulting conceptual framework will serve as a bridge between the two communities, leading both to better designs for current and future segmentation tools and the framing of new problems for segmentation algorithms. For end users, the working prototype will support a more effective segmentation experience that is powered by the underlying conceptual framework. Furthermore, formalizing the kinds of perceptual cues and conceptual models users have when approaching the segmentation problem will serve as a useful test case for understanding the more general question of how perception and cognition interact when they are re-mapped to solve a problem they were never designed for. To disseminate the findings of this research, the PIs will release their working prototype as an open-source project, which can then serve as a shared communication platform between algorithm developers, tool developers, and end users.
2014 — 2020
Grimm, Cindy; Smart, William
Reu Site: Robots in the Real World @ Oregon State University
The goal of this REU Site is to engage undergraduate students in a wide range of aspects of robotics research - including mechanical engineering, design, electrical engineering, and computer science - while learning how to build complete systems that are able to deal with all of the complexities and uncertainties of the real world. REU students gain insight into the process of doing research in a university environment, what it takes to be a successful graduate student, what it is like to work on a multi-disciplinary robotics project in a large research group, and what graduate student life looks like. The skills that students learn by participating in this REU Site will prepare them for graduate studies and for entering the rapidly-expanding robotics industry, and will help the United States maintain its lead as a technological innovator. We are starting to see robots enter our daily lives, and we need students who understand how to put together complete working systems and make them work in the real world. The ability to work as a team, across disciplines, is a vital skill in today's research and work environments, and not only in robotics. This REU Site is teaching students the non-technical skills that will make them attractive hires in any field.
REU students work alongside faculty, graduate students, and other undergraduates on cutting-edge problems in robotics, with the goal of getting robots to work robustly in the real world, not just in the laboratory. Student projects are advancing the state of the art in walking robots, human-robot interaction, control of complex mechanical systems, and long-term autonomy. More information about this REU Site can be found at the web site (http://robotics.oregonstate.edu/reu).
2017 — 2018
Grimm, Cindy; Balasubramanian, Ravi (co-PI)
Ci-P: Physical Robotic Manipulation Test Facility @ Oregon State University
The robotics community has a rich history of research and development in teaching and training robots to pick up and manipulate objects. However, it has proven to be very difficult to transition this research from structured laboratory settings to real-world settings, such as homes, small-scale industrial settings, and search and rescue domains. The development of large-scale, standardized testing and benchmarking of robotic manipulation approaches is necessary to move robotic manipulation from the research lab to the real world. Placing the burden of conducting these tests on every individual robotics researcher is inefficient, at best. Cleaning the captured data to make it available to other researchers requires additional work for the producers of the data - and even more work from potential users. All of this impedes progress for the community as a whole, and makes it difficult to bring to bear recent developments in deep learning. This project addresses this problem by setting the groundwork for a dedicated physical robotic grasping and manipulation testbed infrastructure that can be remotely accessed and operated by anyone doing research in this area.
This testbed will provide several critical components that the robotics grasping and manipulation community needs: (i) "Test suites" that enable repeated testing and controlled manipulation of several variables that have confounded robotic grasping and manipulation research. Variables include object and gripper material properties, compliance, force/torque of physical interaction, mass of manipulator elements and of the objects, surface texture, low-level control algorithms, and higher-level planning techniques. (ii) Extensive instrumentation to capture (nearly) all aspects of the physical interaction, such as forces, kinematics of movement, and three-dimensional geometry. (iii) A modular and customizable human-robot interface for enabling robotic physical interaction. Users will be able to directly control the robot using low-level interfaces, such as knobs that control the movements of individual joints, or use higher-level interfaces that encapsulate robot-object interactions, such as "close the fingers until they contact the object." (iv) Data collection, where the data will be made publicly available in a standardized form. The focus of this planning activity is to develop the necessary technological elements to prove the feasibility of such a test facility (automated object return in a fully instrumented space, a fully instrumented "door" to evaluate opening and closing doors) and to evaluate the community's needs in this area.
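To make component (iv) concrete, the sketch below shows what a standardized, shareable record for a single grasp trial might look like; the field names and object identifier are illustrative assumptions, not the facility's actual schema:

```python
# Sketch of a standardized per-trial record (field names are illustrative only, not the
# facility's actual schema), serialized to JSON so results can be shared and compared.
import json
from dataclasses import dataclass, field, asdict

@dataclass
class GraspTrial:
    object_id: str                 # which test object was manipulated
    gripper: str                   # end-effector used for the trial
    controller: str                # low-level control / planning strategy
    success: bool                  # did the grasp-and-lift succeed?
    max_contact_force_n: float     # peak force measured during the interaction
    joint_trajectory: list = field(default_factory=list)  # time-stamped joint angles
    notes: str = ""

trial = GraspTrial(
    object_id="ycb_mustard_bottle",
    gripper="two_finger_parallel",
    controller="close_until_contact",
    success=True,
    max_contact_force_n=4.2,
    joint_trajectory=[(0.0, [0.0, 0.0]), (0.5, [0.4, 0.4])],
)

print(json.dumps(asdict(trial), indent=2))
```

Agreeing on a record like this is what allows trials collected on different hardware, by different researchers, to be pooled for benchmarking and for data-hungry learning methods.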
2019 — 2022
Grimm, Cindy; Balasubramanian, Ravi (co-PI)
Ccri: Medium: Collaborative Research: Physical Robotic Manipulation Test Facility @ Oregon State University
The robotics community has a rich history of research and development in grasping and manipulation. However, it has proven to be very difficult to make grasping work in the real world (e.g., homes and small-scale industry). Part of the problem is the limited amount of testing that can be done in a lab. Testing requires specialized hardware and a very large number of trials, which is difficult for a single researcher to do on their own. The goal of this project is to set up dedicated test centers for grasping and manipulation that can be used by anyone with an internet connection. The test centers will provide standardized benchmarks, software and tutorials to teach people the basics of grasping, and hardware to perform the actual testing. This infrastructure will make it easier for people to share and compare results, and make it easier for people to contribute without needing to buy (and maintain) a large amount of specialized hardware.
There are four parts to this proposal: (1) developing specialized hardware for testing and deploying it in two facilities (one at Oregon State University, one at the University of Massachusetts Lowell) that have dedicated robotic arms and manipulators; (2) software infrastructure to enable remote access for specifying tasks, running those tasks, and visualizing the results; (3) implementing new and existing benchmark protocols (such as the ones developed at the National Institute of Standards and Technology (NIST)); and (4) developing a community of users, including academic, industrial, and governmental institutions. The testbed approach should give new and existing researchers a low-cost way to participate in the grasping and manipulation scientific community, and should provide standards from which the field can grow.
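As one hypothetical illustration of parts (2) and (3) - the names, protocol identifier, and scoring here are assumptions, not the testbed's actual interface - a remotely submitted run might pair a task specification with a simple summary scored against a benchmark protocol:

```python
# Hypothetical sketch (not the testbed's actual interface): a remotely submitted task
# specification and a simple scoring pass over the returned benchmark results.
import json

task_request = {
    "facility": "osu_arm_1",                  # which remote test cell to run on
    "protocol": "nist_board_pick_place",      # benchmark protocol to execute (illustrative name)
    "grasp_planner": "user_uploaded_planner_v3",
    "repetitions": 50,
}

# Results returned by the facility: one success flag and completion time per repetition.
results = [{"success": i % 5 != 0, "time_s": 8.0 + 0.1 * i} for i in range(50)]

def score(results):
    """Summarize a benchmark run the way a shared leaderboard might."""
    n = len(results)
    successes = sum(r["success"] for r in results)
    times = [r["time_s"] for r in results if r["success"]]
    return {
        "success_rate": successes / n,
        "mean_time_s": sum(times) / len(times),
    }

print(json.dumps({"request": task_request, "summary": score(results)}, indent=2))
```

Standardizing both the request and the summary is what lets results from the two facilities, and from many users, be compared directly.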
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
2019 — 2022
Grimm, Cindy; Fern, Xiaoli (co-PI); Balasubramanian, Ravi (co-PI)
Ri:Small: Leveraging Human Manipulation Skills to Advance Near Contact Robotic Grasping and in-Hand Stabilization @ Oregon State University
People are incredibly good at picking up and manipulating objects. Robots, however, still struggle with reliably picking up objects. The goal of this project is to improve robotic grasping by observing how humans respond to challenging grasping situations. The project's central idea is that humans strive for grasp success rather than trying to find the best possible grasp. Humans use a grasp that is likely to work, even if they have misjudged where the object is, what shape it is, how heavy it is, etc. Finding what these strategies are (in a form a robot can use) will improve robotic grasping in the "real world".
The technical challenge addressed in this project is how to go from human demonstrations all the way to robot hand control strategies. First, human user study participants will demonstrate grasping with actual robotic hands by "puppeteering" the hands through a structured set of grasping and manipulation tasks. These demonstrations will then be reconstructed in a physics simulator. This provides a rich set of data in an environment - grounded in physical reality - in which algorithms can learn. Finally, the data is put in a novel form, suitable for machine learning, that captures what happens as the hand comes into contact with the object. The result is a learned set of near-contact robotic controllers that can be incorporated into existing grasp planning algorithms in order to improve robotic grasping.
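As a toy sketch of the last step (invented demonstration features and commands; a deliberately simple nearest-neighbor policy standing in for, not reproducing, the project's learned controllers), contact-relative features can be mapped to finger commands learned from demonstrations:

```python
# Toy sketch (invented demonstration data, not the project's controllers): a nearest-
# neighbor policy maps contact-relative hand/object features to finger-closing commands.
import numpy as np

# Each demonstration row: [distance to object (cm), object width error (cm), approach angle (rad)]
demo_features = np.array([
    [2.0,  0.5, 0.00],
    [1.0,  0.5, 0.10],
    [0.5, -0.3, 0.05],
    [0.2,  0.0, 0.00],
])
# Matching finger-velocity commands recorded from the human puppeteer (rad/s per finger).
demo_commands = np.array([
    [0.0, 0.0],
    [0.2, 0.2],
    [0.5, 0.4],
    [0.8, 0.8],
])

def near_contact_policy(features, k=2):
    """Average the finger commands of the k most similar demonstrated situations."""
    dists = np.linalg.norm(demo_features - features, axis=1)
    nearest = np.argsort(dists)[:k]
    return demo_commands[nearest].mean(axis=0)

# Query: hand is 0.4 cm from the object, with slightly misjudged width and a small angle error.
print(near_contact_policy(np.array([0.4, -0.1, 0.02])))
```

Encoding the state relative to the expected contact, rather than in absolute coordinates, is what lets such a controller tolerate errors in the estimated object pose and shape.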
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.