2004 — 2010
Rusinkiewicz, Szymon
CAREER: Practical 3D Model Acquisition
Despite dramatic increases in the performance of graphics hardware, the use of 3D graphics by most users remains limited because of the difficulty of creating or obtaining 3D content. The widespread availability of inexpensive, easy-to-use 3D scanning systems will help bridge this "content gap", with impacts in fields ranging from art education and cultural heritage preservation to construction and law enforcement. The aim of this research is to develop a theoretical understanding of triangulation-based 3D scanning systems and to design new 3D scanners that are flexible, robust, inexpensive, and easy to use. As a specific example, the project involves bringing 3D scanning into elementary school classrooms by developing a scanning system that can be built out of Lego cameras and building blocks, set up by teachers, and operated by fourth graders. The goal is to encourage early interest in science, technology, and the arts by letting students create their own computer-animated movies.
The research builds upon a framework of "space-time stereo" to generalize many currently used scanner designs, such as stereo and laser stripe scanning. An analysis of existing and novel scanners relates the processing performed by the systems, consisting of correspondence across spatial, temporal, or space-time windows, to properties of the object being scanned (such as depth variation, presence or absence of color texture, and speed of motion). Based on this analysis, new 3D scanner designs are developed that eliminate the need for expensive optics and motors while being more adaptable to a wide range of scanning scenarios. Several specific applications are implemented, including an inexpensive, high-resolution desktop scanner for small objects, scanners for building interiors that flexibly incorporate multiple cameras and projectors, and an omnidirectional scanner that captures an entire room at once. Efficient and stable alignment algorithms combine the multiple scans obtained by each scanner into complete 3D models.
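To make the space-time correspondence idea concrete, below is a minimal sketch of brute-force matching over a space-time window, assuming two rectified video sequences; the function name, window sizes, and SSD cost are illustrative choices rather than the project's actual algorithm.

```python
# Minimal sketch of space-time stereo matching. `left` and `right` are
# rectified video sequences of shape (T, H, W); window sizes and the SSD
# cost are illustrative assumptions.
import numpy as np

def spacetime_match(left, right, row, col, max_disp, wt=3, ws=2):
    """Return the disparity minimizing SSD over a space-time window
    centered at (row, col); wt and ws are temporal/spatial half-widths."""
    T = left.shape[0]
    t0, t1 = T // 2 - wt, T // 2 + wt + 1           # temporal slab
    ref = left[t0:t1, row - ws:row + ws + 1, col - ws:col + ws + 1]
    best_d, best_cost = 0, np.inf
    for d in range(max_disp + 1):                   # scan candidate disparities
        if col - d - ws < 0:
            break
        cand = right[t0:t1, row - ws:row + ws + 1,
                     col - d - ws:col - d + ws + 1]
        cost = np.sum((ref.astype(float) - cand.astype(float)) ** 2)
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d
```

Note that shrinking ws toward zero yields purely temporal matching (as in structured-light and laser-stripe systems), while wt = 0 recovers conventional spatial stereo, which is the sense in which the framework generalizes existing scanner designs.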
2004 — 2008
Li, Kai; Funkhouser, Thomas (co-PI); Rusinkiewicz, Szymon; Troyanskaya, Olga (co-PI)
NGS: Software Tools for New-Generation, Display-Centric Applications
The goal of this research project is to develop new software tools and applications for scalable display systems. The primary focus is on methods that coordinate multiple displays, multiple users, and multiple applications to enable true display-centric computing. For coordinating multiple displays, the project will develop dynamic feedback to build adaptive layered multi-resolution display systems and to study how to achieve integrated, continuous calibration capable of delivering high-quality information display. For coordinating multiple users, software tools that manage information display intelligently and securely for seamless exchange of visual information will be developed. For coordinating multiple applications, the project will study how to design an adaptive infrastructure that enables multiple applications to share a scalable display efficiently.
2005 — 2009
Klawe, Maria; Finkelstein, Adam; Rusinkiewicz, Szymon
Collaborative Research: Telling the Story - Learning Math, Science and Engineering Through Animation
This project will explore new and potentially powerful technological teaching tools for introducing the concepts of computing and physics to children (and teachers). The goal is to broaden the class of students who are not merely exposed to but rather engaged with technology, by empowering children to express ideas with usable tools for creating stop-action and 3D-animated movies, and by developing methodologies for incorporating such tools into Science, Technology, Engineering, and Mathematics (STEM) education. This effort leverages emerging public fascination with computer animation, as well as recent technological advances that have moved the graphics power of yesterday's million-dollar visualization supercomputers into every desktop PC.
A proof of concept of this approach, based on stop-motion animation, was prototyped by one of the PIs, and initial trials were encouraging. In a high-school physics class for non-college-bound seniors, students who typically skipped class were now attending, some even coming during free time to complete their movies. Through animations, students were able to critically examine their own understanding of the physics and more effectively convey that understanding to teachers. (The same technique is also being used to teach reading to 7-year-olds and biology to 9-year-olds, replacing book reports and lab notebooks with animated stories and documentaries.) Informed by that experience, this project will have two arms: one to develop and evaluate teaching methodology based on moviemaking (at Tufts University), the other to create new 3D computer animation tools usable in the classroom (at Princeton University).
Technological teaching tools are often developed in the absence of strong educational research; in this project, the PIs will use accepted metrics (and develop new ones) to quantify the STEM learning improvement in high school physics as a result of using animations, comparing student understanding in conventional "hands-on" physics classes with that in classes that include movie journaling. Results from this work will not only contribute to our understanding of how students learn physics and computing, but will also help bridge students' experience and intuition with modern scientific theory. Further development of moviemaking tools will allow students to move from the jerky animation of the stop-action world to the smooth animations of modern computer graphics. Unfortunately, existing animation systems are barely usable by professionals, let alone grade-school students. This project will address that research challenge by developing inexpensive and robust 3D scanning hardware, point-and-click animation interfaces, and methods for stylized (e.g. cartoon-like) rendering of 3D animation.
Broader Impacts: Anecdotal evidence from the prototype system (gathered over the last three years in five classrooms) already suggests the potentially significant impact of the work. Science-phobic students and computer-shy teachers enthusiastically argue about the underlying physics to improve their movies. Moviemaking gives teachers a multimedia portfolio with which to assess student learning and test students' preconceived models. If formal evaluations agree with this experience, the results of this project have the potential to change the way students learn science at all ages, opening up a new channel for students to show their understanding and test their hypotheses. This may lead to innovations in teaching computing, math, biology, chemistry, engineering, and even storytelling and literature. (Nonetheless, this study chooses an emphasis on physics education because of established metrics for evaluation in this subject.) Even more broadly, animation represents a new medium of expression - visual rather than written - that is compelling but currently limited to highly skilled professionals. The tools the PIs plan to develop in this project will make animation more accessible both to children and, more generally, to everyone outside the animation industry. Making this technology more widely available has the potential to affect the way we all communicate, learn, work, and play, turning us into media developers rather than media consumers.
2007 — 2011
Rusinkiewicz, Szymon
Images With Normals: Acquisition, Analysis, and Depiction
The development of computer-based methods for stylized depiction of 3D models promises to help scientists and engineers produce clear and compelling illustrations and visualizations. However, despite steady progress on making 3D acquisition inexpensive and practical, obtaining complete 3D models of complex objects remains challenging. This project investigates the creation of illustrations from a data type lying between simple 2D images and full 3D models: images with a surface normal stored at each pixel. These "RGBN images" have the potential to become a widely-used data type because of the ease, flexibility, and quality with which they may be acquired, and because they contain enough information to permit many analysis and depiction tasks. That is, they combine an acquisition process only mildly more complex than that for digital photographs with the power and flexibility of tools originally developed for full 3D models. Methods for RGBN shape analysis and nonphotorealistic rendering will allow for exploration and communication of surface shape and detail in domains such as medical and technical illustration, art history, and forensic analysis.
This project encompasses a comprehensive investigation of the RGBN image data type, with the aim of developing a practical pipeline for acquiring images with normals and generating stylized depictions. On the acquisition side, the project is developing hardware/software acquisition systems for robustly acquiring RGBN images in contexts ranging from millimeter-scale objects through cityscapes, and including both static and moving objects. Next, the project includes a mathematical analysis of methods for signal processing on RGBN images, including scale-space analysis and derivative estimation. These signal processing techniques are used to develop methods for depicting shape and color, including shading, line drawing with suggestive contours and crease lines, exaggerated shading, and enhancement of depth discontinuities. Finally, RGBN analysis and processing algorithms such as texture analysis/synthesis, inpainting, and similarity-based search are being developed.
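As a concrete illustration of the data type, the following sketch shades an RGBN image directly from its per-pixel normals; the light direction, Lambertian model, and contour-darkening heuristic are assumptions for illustration, not the project's rendering algorithms.

```python
# Minimal sketch of depicting an RGBN image. `rgb` is (H, W, 3) in [0, 1]
# and `normals` is (H, W, 3) unit vectors; shading choices are illustrative.
import numpy as np

def shade_rgbn(rgb, normals, light=(0.3, 0.5, 0.8), contour_thresh=0.25):
    """Render an RGBN image with Lambertian shading plus darkened
    near-silhouette pixels (view direction assumed to be +z)."""
    l = np.asarray(light, dtype=float)
    l /= np.linalg.norm(l)
    ndotl = np.clip(normals @ l, 0.0, 1.0)          # diffuse shading term
    shaded = rgb * ndotl[..., None]
    ndotv = np.abs(normals[..., 2])                 # grazing normals ~ contours
    shaded[ndotv < contour_thresh] = 0.0            # crude contour lines
    return shaded
```

The point of the example is that depiction operates on per-pixel color and orientation alone; no 3D mesh is ever reconstructed.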
2010 — 2013
Rusinkiewicz, Szymon
CDI-Type I: Automated Documentation and Illustration of Material Culture Through the Collaborative Algorithmic Rendering Engine (CARE)
The ability of computers to unify visual information from multiple imaging modes into comprehensible illustrations will revolutionize the ability of scientists, engineers, and humanities scholars to gain and communicate knowledge about the visual world. Achieving this goal, however, will require a joint focus on developing novel shape and image analysis methods, and designing collaborative user interfaces that allow multiple domain experts and illustrators to bring together their expertise. The Collaborative Algorithmic Rendering Engine (CARE) will be an open-source tool for extracting and merging visual details available only under certain lighting conditions, certain wavelengths, or certain imaging modalities. By focusing on minimal user effort, cross-site collaborative visualization design, and integrated archiving and process history (provenance) tracking, the CARE tool is specifically designed to remove existing obstacles to widespread adoption of digital tools for visual analysis and communication.
As part of the project, investigators are developing novel image analysis techniques that build upon existing technologies such as Reflectance Transformation Imaging (RTI) and non-photorealistic rendering using images with normals (RGBN NPR), which have already received enormous interest within the cultural heritage community. The research includes methods for: (1) analyzing the collection of images to decompose them into "maps" of color, orientation, and material at each pixel; (2) performing an arbitrary sequence or combination of image-processing operations on some or all of the maps separately; and (3) combining several maps into the final illustration. The whole process is driven by (4) a user interface designed for interactive response and including special features that enable collaborative illustration design.
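A minimal sketch of how stages (1)-(3) might compose is shown below; the map names, operations, and combine step are hypothetical, and the actual CARE interfaces are not specified here.

```python
# Minimal sketch of the map/process/combine structure; map names,
# operations, and the combine step are hypothetical.
import numpy as np

def run_pipeline(maps, ops, combine):
    """maps: dict of per-pixel maps, e.g. {'color': ..., 'normal': ...}
    ops: sequence of (map_name, function) applied in order
    combine: function merging processed maps into the final illustration"""
    maps = dict(maps)
    for name, op in ops:
        maps[name] = op(maps[name])
    return combine(maps)

def exaggerate_normals(n, k=2.0):
    """Amplify surface tilt to emphasize relief (illustrative operation)."""
    n = n.copy()
    n[..., :2] *= k
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

# Placeholder inputs standing in for maps decomposed from the image set.
color_map = np.full((64, 64, 3), 0.8)
normal_map = np.dstack([np.zeros((64, 64)), np.zeros((64, 64)),
                        np.ones((64, 64))])

illustration = run_pipeline(
    {'color': color_map, 'normal': normal_map},
    [('normal', exaggerate_normals)],
    lambda m: m['color'] * np.clip(m['normal'][..., 2], 0, 1)[..., None])
```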
The project involves a close collaboration between a university-based research group, responsible for development of new technologies, and a non-profit company with a demonstrated track record of working with museums and archaeological sites to deploy novel imaging and computational photography systems. This joint development will ensure that the underlying technologies will have immediate high impact in the field: cultural heritage scholars and scientists will be able to generate high-quality, comprehensible illustrations for scientific papers and textbooks, with control over selective emphasis, contrast, attention, and abstraction, at lower cost and greater flexibility than generating such figures by hand. The subject matter of art history also offers the unique opportunity to stimulate the interest of students who would not normally take courses in computer science, broadening the class of students exposed to the tools and capabilities of computing.
2010 — 2015
Rusinkiewicz, Szymon
HCC: Large: Collaborative Research: Beyond Flat Images: Acquiring, Processing, and Fabricating Visually Rich Material Appearance
Despite revolutionary advances in how images are recorded, manipulated, and reproduced, our ability to re-create the visual experience remains remarkably limited. Few realistic computer models exist for the characteristic appearance of natural materials such as marble, wood, coral, or skin, or man-made ones such as color-shifting automotive paints. Digitizing and creating realistic images of these substances involves reproducing their interaction with light: the way light is reflected from surfaces, or scattered and absorbed within the materials. Full reproducibility also involves "printing" a material as a real, physical object that modulates the light around us. However, it is currently impossible to output complex appearance the way we print color on paper with fixed gloss, or create shapes using a 3D printer. This project encompasses a comprehensive, collaborative research agenda in computer graphics and related areas, to develop an end-to-end framework for acquiring, representing, and fabricating complex appearance, as well as to understand how it is perceived by the human visual system.
The enabling technical idea of the project is to treat materials as thin three-dimensional volumes populated with general scattering sites. This is a radical departure from the hitherto standard approach in computer graphics, which has studied materials purely as surfaces. The volumetric representation subsumes and generalizes the diverse set of conventional representations that currently exist in graphics, including surface-based notions such as bidirectional reflectance (BRDF), spatially varying BRDF, and subsurface scattering distributions (BSSRDF). Moreover, it enables fundamentally improved approaches to efficient yet general acquisition, fast and realistic rendering, and fabrication of objects exhibiting phenomena beyond simple surface reflectance and spatially homogeneous subsurface scattering.
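To illustrate how a thin scattering volume can reproduce surface-like reflectance, here is a minimal numerical sketch of single scattering in a slab; the isotropic phase function and the coefficient values are illustrative assumptions, not the project's material model.

```python
# Minimal sketch: single scattering in a thin slab, integrated numerically.
# An isotropic phase function (1/4pi) is assumed for illustration.
import numpy as np

def single_scatter_slab(sigma_t, albedo, thickness, cos_in, cos_out, n=512):
    """Radiance fraction scattered once inside the slab and exiting the
    top face, for incident/exitant direction cosines cos_in, cos_out."""
    z = np.linspace(0.0, thickness, n)              # depth inside the slab
    dz = z[1] - z[0]
    sigma_s = albedo * sigma_t                      # scattering coefficient
    integrand = (np.exp(-sigma_t * z / cos_in)      # attenuation going in
                 * sigma_s / (4.0 * np.pi)          # isotropic scatter event
                 * np.exp(-sigma_t * z / cos_out))  # attenuation going out
    return np.sum(integrand) * dz / cos_out

# As thickness grows, the result converges per direction pair: the slab
# behaves like a surface with a BRDF, which is the sense in which the
# volumetric representation subsumes surface reflectance.
print(single_scatter_slab(10.0, 0.8, 1.0, 0.9, 0.7))
```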
2014 — 2017
Rusinkiewicz, Szymon; Finkelstein, Adam (co-PI)
RI: Small: Micro-GPS: Localization Using Visual Landmarks in Commonplace Texture
This project develops a Micro-GPS system that provides centimeter-level accuracy and is reliable both indoors and out, based on specific landmarks in the "random" textures present in the world. The key idea is that all floors, such as the carpet in a building, the grain of a wood floor, the concrete on a sidewalk, and the asphalt on a road, have small imperfections, bumps, or variations in color from location to location. A downward-pointing camera mounted underneath a vehicle can observe specific, unique arrangements of these seemingly random variations, looking them up in an index to determine the vehicle's precise position in the world. The developed technology can provide capabilities for better in-car navigation, such as accurate parking in a particular spot, pothole avoidance, and lane departure warning. Other applications might include smart wheelchairs that can stay on a sidewalk and avoid rough patches, scooters for the elderly and disabled, assistive technologies for the visually impaired, marker-free smart highways, smart robots in warehouses that can precisely position themselves next to shelves, and even domestic assistants that can handle day-to-day chores inside a home.
This research builds on the key idea that localization is possible using specific features in the "random" textures present in the world: seemingly homogeneous textures that have unique variations everywhere but globally consistent image statistics. The key challenges of this project include developing methods for (1) detecting uncommon locations or "features" in a close-up image of the ground surface; (2) computing a feature descriptor for each detected landmark, in a way that is invariant to changes in orientation and lighting; (3) matching the features against a map: a pre-built database of features, their arrangements, and their locations in the world; and (4) being able to create and update the database to increase coverage and to account for changes. All of these are common components in contemporary systems for tracking, image alignment, and recognition. However, the individual algorithms have been tuned to work best for "natural" images. Instead, the project focuses on developing detectors, descriptors, matching algorithms, and update strategies that are tuned to the statistics of common ground textures. The research team also investigates whether accuracy can be improved by combining descriptors based on color with ones based on surface normals or height fields, as well as the systems issues involved in scaling to widespread coverage.
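As a structural outline of steps (1)-(3), the sketch below builds a descriptor database and localizes by voting among nearest matches, using off-the-shelf OpenCV ORB features; as noted above, such "natural image" detectors and descriptors would need retuning for ground textures, so this is illustrative only.

```python
# Minimal sketch of ground-texture localization with off-the-shelf ORB
# features; database layout and the voting scheme are illustrative.
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=500)

def build_database(map_images):
    """map_images: list of (grayscale image, world (x, y) of its center)."""
    db_desc, db_pos = [], []
    for img, pos in map_images:
        kps, desc = orb.detectAndCompute(img, None)
        if desc is not None:
            db_desc.append(desc)
            db_pos.extend([pos] * len(desc))        # position per descriptor
    return np.vstack(db_desc), np.array(db_pos)

def localize(query, db_desc, db_pos, k=20):
    """Estimate position by averaging the locations of the k best matches."""
    _, desc = orb.detectAndCompute(query, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = sorted(matcher.match(desc, db_desc), key=lambda m: m.distance)
    return db_pos[[m.trainIdx for m in matches[:k]]].mean(axis=0)
```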
2016 — 2019
Rusinkiewicz, Szymon
CHS: Small: Collaborative Research: Detailed Shape and Reflectance Capture With Light Field Cameras
A major transformation is occurring in the way we sense the visual world. Traditional 2D photography is increasingly being replaced with light-field sensors that capture the full spatial and angular variation of the incoming light field, rather than simply pixels that integrate over incoming directions. This development opens up the possibility of ubiquitous 3D imaging of our visual world. Light-field sensors are particularly attractive as a depth acquisition device, since they are completely passive without needing to project light into the scene, and they do not experience a reduction in performance outdoors. Moreover, the rich ray-space of light fields provides significant cues for recovering fine-scale depth. However, current RGBD and light-field systems produce only coarse depth; while useful for tasks like refocusing images, the depth channel offers little benefit for photography beyond conventional 2D RGB images. This research seeks to address these challenges, by developing practical algorithms for detailed 3D shape and reflectance capture with light-field cameras, coupled with a theoretical and experimental analysis of the achievable accuracy. Project outcomes will have broad impact on diverse fields including computer graphics and virtual/augmented reality, enabling acquisition of high-quality detailed 3D shape and the subsequent use of the 3D geometry with computer-generated synthetic objects. Methods to acquire 3D images, including on mobile sensors, will transform the photographic process from 2D to 3D, with immense industrial and societal impact.
The PIs will address four important problems in light-field shape acquisition. First, they will exploit the rich nature of light-field data, combining multiple cues (defocus, correspondence, shading, specularity) in a unified way to obtain the overall global 3D shape. Moreover, they will seek to go beyond the common Lambertian reflectance assumption, developing a novel BRDF-invariant framework for surface reconstruction with general glossy materials like metals, plastics, or ceramics, while supporting textures and spatially-varying reflectance. Another key objective will be to ground the practical results in a theoretical framework that can establish the limits of light-field cameras' shape resolution and signal-to-noise accuracy, and how these limits relate to novel light-field camera designs that obtain the best achievable resolution in 3D shape capture. Finally, the PIs will move from overall shape to fine-scale surface detail, proposing new methods for shape/reflectance capture for fine-scale geometry like hair. The ultimate goal is to enable a full 3D processing pipeline for photography, computer graphics and applications like virtual and augmented reality, which will bring ubiquitous 3D to the casual photographer.
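A minimal sketch of combining two of the named cues (defocus and correspondence) from a 4D light field follows; the shearing/refocus model and the naive additive cue combination are assumptions for illustration, not the PIs' unified framework.

```python
# Minimal sketch of defocus + correspondence depth cues from a light
# field L[u, v, y, x]; shear model and cue weighting are illustrative.
import numpy as np

def refocus(L, slope):
    """Shear the light field: shift each view by slope*(u, v) off-center."""
    U, V, H, W = L.shape
    out = np.empty_like(L, dtype=float)
    for u in range(U):
        for v in range(V):
            dy = int(round(slope * (u - U // 2)))
            dx = int(round(slope * (v - V // 2)))
            out[u, v] = np.roll(L[u, v], (dy, dx), axis=(0, 1))
    return out

def depth_from_cues(L, slopes):
    """Per-pixel slope maximizing a naive sum of the two cues."""
    best = np.zeros(L.shape[2:])
    best_score = np.full(L.shape[2:], -np.inf)
    for s in slopes:
        sheared = refocus(L, s)
        focus = sheared.mean(axis=(0, 1))           # synthetic refocused image
        gy, gx = np.gradient(focus)
        defocus = np.abs(gy) + np.abs(gx)           # sharp where in focus
        corresp = -sheared.var(axis=(0, 1))         # consistent across views
        score = defocus + corresp
        better = score > best_score
        best[better] = s
        best_score[better] = score[better]
    return best                                     # slope relates to depth
```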
2018 — 2021
Rusinkiewicz, Szymon
CHS: Small: Collaborative Research: 3D Printing for High Fidelity Image Reproduction Capturing Texture, Spectral Color, Gloss, and Translucency
3D printing can exploit fine-scale interleaving of multiple materials to enable fabrication of objects that have not been possible to make with other technologies. In particular, a novel research area enabled by multi-material additive manufacturing is the fabrication of objects with desired appearance properties. This project will develop technology for ultra-high fidelity reproduction of texture, spectral color, gloss, and translucency. While consumer-grade printers typically use 4 ink colors, our system will employ 10 different inks in order to accurately reproduce color across the spectrum, including regions that are at the edge of visibility. The use of these 3D printers will enable not just the reproduction of color, but also the recreation of gloss, translucency, and fine-scale detail such as brush strokes. Our testbed for this work is painted fine art images. This domain provides technical challenges that will push the state of the art for high-fidelity reproduction in general, as well as immediately benefiting cultural heritage preservation efforts. Fine art objects are inherently subject to degradation or damage from light and air when on display for long periods of time. Ultra-high quality facsimiles of these cultural treasures can be exploited in place of the originals in many applications, including restoration practice, conservatory studies, education in museums, and enjoyment at home. Collaboration with local museums in Boston and Princeton will enable strong outreach to curators and the general public. Project outcomes will have additional broad impact by enabling accurate appearance reproduction with 3D printing in support of a wide range of applications in rapid prototyping and the manufacture of end-user products.
To achieve these goals, the project will investigate several key methods and technologies. First, there is the question of how to use advanced camera systems to capture the properties of the original paintings. This involves measuring the full spectrum of light reflected by each point on the painting, as well as how shiny or opaque the surface is. Moreover, it involves the use of high-resolution 3D scanners to measure the fine bumps on the surface left by brushes or other tools. A second component of the project is to characterize the full capabilities of multi-material 3D printers, and to incorporate the measured ink properties into computer simulations that can predict exactly how the print-outs will look. The final components of the project are to perform computer optimization to determine the exact type, number, and concentration of printing materials that should be used, and how the printing process should be tweaked to achieve maximum reproduction quality.
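As a toy version of the final optimization step, the sketch below fits non-negative concentrations of 10 inks to a target reflectance spectrum, assuming (unrealistically) linear spectral mixing with placeholder data; a real system would instead optimize through a characterized nonlinear print model fit to measured printer output.

```python
# Toy sketch: choose non-negative concentrations of 10 inks to match a
# target reflectance spectrum, assuming linear mixing for illustration.
import numpy as np
from scipy.optimize import nnls

wavelengths = np.arange(400, 701, 10)               # 400-700 nm, 31 samples
rng = np.random.default_rng(0)
ink_spectra = rng.uniform(0.1, 0.9, (len(wavelengths), 10))  # placeholder data
target = rng.uniform(0.2, 0.8, len(wavelengths))             # placeholder data

# Non-negative least squares: minimize ||ink_spectra @ c - target||.
concentrations, residual = nnls(ink_spectra, target)
print(np.round(concentrations, 3), residual)
```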
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.