Kostas Daniilidis - US grants
Affiliations: | Computer and Information Science | University of Pennsylvania, Philadelphia, PA, United States |
Area:
Intelligent Systems: Computer Vision

We are testing a new system for linking grants to scientists.
The funding information displayed below comes from the NIH Research Portfolio Online Reporting Tools and the NSF Award Database. The grant data on this page is limited to grants awarded in the United States and is thus partial. It can nonetheless be used to understand how funding patterns influence mentorship networks and vice versa, which has deep implications for how research is done.
You can help! If you notice any inaccuracies, please sign in and mark grants as correct or incorrect matches.
High-probability grants
According to our matching algorithm, Kostas Daniilidis is the likely recipient of the following grants.

Years | Recipients | Code | Title / Keywords | Matching score |
---|---|---|---|---|
2000 — 2003 | Daniilidis, Kostas | N/A (Activity Code Description: no activity code was retrieved; click on the grant title for more information) |
@ University of Pennsylvania Surround perception is crucial for an immersive sense of presence in communication and for efficient navigation and surveillance in robotics. To enable surround perception, new omnidirectional systems |
1 |
2003 — 2009 | Daniilidis, Kostas Pappas, George (co-PI) [⬀] |
N/A |
Itr: Collaborative Research: Multi-Robot Emergency Response @ University of Pennsylvania This project, a collaboration with 03-24864 (Papanikolopoulos, University of Minnesota) and 03-25017 (Joel Burdick, California Institute of Technology), addresses research issues key to an important application of robot teams and information technology: emergency response in hazardous environments for various tasks. The research sets six goals: |
1 |
2004 — 2006 | Lee, Daniel (co-PI) [⬀] Shi, Jianbo [⬀] Taylor, Camillo (co-PI) [⬀] Daniilidis, Kostas Kumar, R. Vijay |
N/A |
RR: Macnet: Mobile Ad-Hoc Camera Networks @ University of Pennsylvania This project develops an experimental testbed for studying control and sensing in mobile networks of cameras and microphones: MACNet, a system of cameras moving in three dimensions that forms a Mobile Ad Hoc Camera Network. The testbed provides the experimental infrastructure for the following interdisciplinary projects: |
1 |
2004 — 2007 | Limp, W. Frederick Vranich, Alexei Shi, Jianbo (co-PI) [⬀] Daniilidis, Kostas Biros, George (co-PI) [⬀] |
N/A |
Computing and Retrieving 3d Archaeological Structures From Subsurface Surveying @ University of Pennsylvania Today's archaeological excavations are slow and the cost for conservation can easily exceed the cost of excavation. This project is investigating and developing methods for the recovery of 3D underground structures from subsurface non-invasive measurements obtained with ground penetrating radar, magnetometry, and conductivity sensors. The results will not only provide hints for further excavation but also 3D models that can be studied as if they were already excavated. The three fundamental challenges investigated are the inverse problem of recovering the volumetric material distribution, the segmentation of the underground volumes, and the reconstruction of the surfaces that comprise interesting structures. In the recovery of the underground volume, high-fidelity geophysics models are introduced in their original partial differential equation form. Partial differential equations from multiple modalities are simultaneously solved to yield a material distribution volume. In segmentation, a graph spectral method for estimating graph cuts finds clusters of underground voxels with tight connections within partitions and loose connections between partitions. A method based on multi-scale graph cuts significantly accelerates the process while the grouping properties of the normalized cuts help in clustering together multiple fragments of the same material. In surface reconstruction, boundaries obtained from segmentation or from targeted material search are converted from unorganized voxel clouds to connected surfaces. A bottom-up approach is introduced that groups neighborhoods into facets whose amount of overlap guides the triangulation process. The archaeology PIs are providing prior knowledge on what structures are expected to be found which can lead the segmentation and the reconstruction steps. |
1 |
2007 — 2010 | Daniilidis, Kostas | N/A |
Ri: Collaborative Research: Bio-Inspired Navigation @ University of Pennsylvania There has been successful research on establishing metric representations of the environment, which, together with motion planning, are required for any navigation task. Such metric maps, however, require excessive amounts of storage to memorize the robots' trajectories and all landmark positions. Animals, on the other hand, have excellent navigation capabilities based on visual sensing and simple path integration. |
1 |
2007 — 2014 | Kumar, R. Vijay Taylor, Camillo (co-PI) [⬀] Daniilidis, Kostas Pappas, George (co-PI) [⬀] Yim, Mark (co-PI) [⬀] |
N/A |
Safety, Security, Rescue, and First Response @ University of Pennsylvania The University of Pennsylvania has joined the multi-university Industry/University Cooperative Research Center for Safety, Security and Rescue Research located at the University of South Florida and the University of Minnesota. The I/UCRC will bring industry, academe, and public-sector users together to provide integrative robotics and artificial intelligence solutions for activities conducted by the police, FBI, FEMA, firefighters, transportation safety agencies, and emergency response to mass-casualty events. |
1 |
2008 — 2012 | Daniilidis, Kostas | N/A |
@ University of Pennsylvania Wayfinding is an essential capability for any person who wishes to have an independent lifestyle. It requires successful execution of several tasks, including navigation and object and place recognition, all of which necessitate accurate assessment of the surrounding environment. For a visually impaired person these tasks may be exceedingly difficult to accomplish, and there are risks associated with failure in any of them. Guide dogs and white canes are widely used for the purposes of navigation and environment sensing, respectively. The former, however, has costly and often prohibitive training requirements, while the latter can only provide cues about obstacles in one's surroundings. Human performance on tasks that depend on visual information can be improved by sensing that provides environmental cues, such as position, orientation, local geometry, and object descriptions, via appropriate sensors and sensor fusion algorithms. Most work on wayfinding aids has focused on outdoor environments and has led to the development of speech-enabled GPS-based navigation systems that provide information describing streets, addresses, and points of interest. In contrast, the limited technology that is available for indoor navigation requires significant modification to the building infrastructure, whose high cost has prevented its wide use. |
1 |
2009 — 2011 | Lee, Daniel (co-PI) [⬀] Shi, Jianbo (co-PI) [⬀] Daniilidis, Kostas Likhachev, Maxim [⬀] Kuchenbecker, Katherine |
N/A |
@ University of Pennsylvania This award is funded under the American Recovery and Reinvestment Act of 2009 (Public Law 111-5). |
1 |
2010 — 2014 | Daniilidis, Kostas | N/A |
@ University of Pennsylvania The project investigates computational methods for object detection, spatial scene construction, and natural language spatial descriptions derived from real-time visual images to describe prototypical indoor spaces (e.g., rooms, offices, etc.). The primary application of this research is to provide blind or visually impaired users with spatial information about their surroundings that may otherwise be difficult to obtain from non-visual sensing. Such knowledge will assist in development of accurate cognitive models of the environment and will support better informed execution of spatial behaviors in everyday tasks. |
1 |
2010 — 2017 | Brainard, David (co-PI) [⬀] Lee, Daniel (co-PI) [⬀] Taylor, Camillo (co-PI) [⬀] Daniilidis, Kostas Muzzio, Isabel (co-PI) [⬀] |
N/A |
Igert: Complex Scene Perception @ University of Pennsylvania This Integrative Graduate Education and Research Training (IGERT) award to the University of Pennsylvania supports the development of a new training paradigm for perception scientists and engineers, and is designed to provide them with a unique grasp of the computational and psychophysical underpinnings of the phenomena of perception. It will create a new role model of a well-rounded perceptual scientist with a firm grasp of both computational and experimental analytic skills. The existence of such a cadre of U.S. researchers will contribute to the country's global competitiveness in the growing machine perception and robotics industry. |
1 |
2011 — 2015 | Kumar, R. Vijay Daniilidis, Kostas Fluharty, Steven (co-PI) [⬀] Michael, Nathan (co-PI) [⬀] |
N/A |
Pfi: Autonomous Robotic Rotorcraft For Exploration, Surveillance and Transportation (Arrest) @ University of Pennsylvania This Partnerships for Innovation (PFI) project, Autonomous Robotic Rotorcraft for Exploration, Surveillance, and Transportation (ARREST), from the University of Pennsylvania will create a partnership among small-business entrepreneurs in small rotorcraft and sensors and robotics researchers at the University of Pennsylvania developing autonomous robots. The partnership will enable the translation of fundamental, federally sponsored research results into products with societal benefits and commercial impact by implementing a loosely structured, application-focused, "play-like sandbox" environment among its partners. The intellectual merit of this project is derived from four projects aimed at developing products/technologies enabled by the cross-fertilization of technologies for robotics and Unmanned Aerial Vehicles (UAVs), and from the establishment of a sandbox to encourage exploration of potential applications and markets for technologies that are being, have been, or can be developed. New research contributions will include the development of small, autonomous micro aerial vehicles and the algorithms/software for sensing, perception, control, and navigation without direct human command or intervention. Additionally, the partnership proposes innovation in doctoral education by adding a new dimension to student training to encourage "lateral and analogous thinking." 
It will consist of structured interactions with technology transfer specialists, inventors, and venture capitalists; collaborative research at the sites of the industrial partners; and new mechanisms for spurring innovation that includes an "inverted X-prize" competition called the "Y-prize," which focuses on the platform as the solution space (instead of as the problem space), thus fostering innovation by generating novel application ideas for using or adapting existing technological solutions and methodologies. |
1 |
2012 — 2014 | Daniilidis, Kostas | N/A |
I-Corps: Blindnav: Indoor Navigation For the Visually Impaired @ University of Pennsylvania There are currently very few ways for the blind to navigate a new indoor space without the assistance of a fully sighted person. The technology proposed by this project is designed to enable a visually impaired individual to find their way through large indoor environments such as airports, train stations, and shopping malls by recognizing semantic and salient visual features of the environment. No prior visit or mapping of the environment is required, and there is no need to deploy or utilize any special infrastructure such as WiFi access points or infrared beacons. Researchers plan to use publicly available architectural layouts and information about the location of shops, tracks, gates, and other visual cues. The platform is a cell phone mounted on a necklace that provides turn-by-turn directions through an audio voice-command interface. This technology is designed to process video from the cell phone camera in real time using text and logo detection, localization based on prior knowledge of the layout, and integration of accelerometer data and visual odometry. |
1 |
2013 — 2015 | Kannan, Sampath (co-PI) [⬀] Daniilidis, Kostas |
N/A |
Nri: Small: Collaborative Research: Active Sensing For Robotic Cameramen @ University of Pennsylvania With advances in camera technologies, and as cloud storage, network bandwidth, and protocols become available, visual media are becoming ubiquitous. Video recording has become the de facto universal means of instruction for a wide range of applications such as physical exercise, technology, assembly, or cooking. This project addresses the scientific and technological challenges of video shooting in terms of coverage and optimal view planning, while leaving high-level aspects, including creativity, to the video editing and post-production stages. |
1 |
2014 — 2016 | Daniilidis, Kostas Kumar, R. Vijay |
N/A |
I/Ucrc Frp: Collaborative Research: Autonomous Perception and Manipulation in Search and Rescue @ University of Pennsylvania This project capitalizes on a team of robots that move in order to better sense the environment and then perform basic manipulation tasks. The vision of the project is to integrate robots easily, provide human-team interfaces, and develop manipulation algorithms. The research will involve the development of perception strategies and manipulation schemes that will allow operation of the robot teams in real-world environments during search and rescue missions. In addition, this research will involve working in cluttered scenes where the lighting conditions may not be ideal. The project addresses: (i) Research on the novel problem of robotic perception and manipulation of target objects that interact with other objects as an integral part of the environment, which cannot be fully isolated in views and in physical arrangements before being manipulated; (ii) An appearance-based approach for recognition and pose estimation of 3D objects in cluttered scenes from a single view; (iii) Development of a measure of scene recognizability from each viewpoint to evaluate how accurately partially occluded objects are recognized and how well their poses are estimated; (iv) Creation of solutions for disassembly analysis of 3D structures, extending our preliminary analysis of 2D structures; (v) Development of grasping based on the results of perception and with the aid of stability analysis of the arrangement of the objects and their interaction with the environment and with one another; and (vi) Experimental validation of the system in real-world settings, in close consultation with our industrial partners. |
1 |
2014 — 2017 | Daniilidis, Kostas Kumar, R. Vijay |
N/A |
Nri: Collaborative Research: Robotics 2.0 For Disaster Response and Relief Operations @ University of Pennsylvania The project develops and tests novel compressive sensing and sensor locating techniques that are adaptable to a myriad of different mobile robot designs while operable on today's wireless communication infrastructures. Unique in-situ laboratory and field experiments provide tangible results to scientists and other stakeholders that can be leveraged to advance these systems into future real-world hazard management scenarios. The research team develops new technological approaches that result in mobilizing more intelligent, automated "eyes and ears on the ground." Outreach efforts include: (i) Integration of the activities with practitioners; (ii) Seminars/webcasts to audiences such as environmental engineers and first responders; (iii) Annual technology day camps to attract middle-schoolers from under-represented groups to engineering; (iv) Demonstrations to local K-12 institutions; (v) Inclusion of the project themes into the regular curricula; and (vi) International collaborations. |
1 |
2014 — 2019 | Taylor, Camillo (co-PI) [⬀] Daniilidis, Kostas Kumar, R. Vijay |
N/A |
I/Ucrc Phase I: Robots and Sensors For the Human Well-Being @ University of Pennsylvania The proposed I/UCRC for Robots and Sensors for the Human Well-being (RoSeHuB) will focus on complementing a broad variety of off-the-shelf sensors with intelligent processing software that enables them to extract useful information about the operating environments in medicine and agriculture. RoSeHuB research will make heavy use of commercial cameras that can work in different parts of the electromagnetic spectrum (e.g., visible, IR, thermal), as well as laser and radar sensors. Sensors or sensor systems may exhibit different degrees of mobility: they may be embedded in robots or flying drones, or they may be fixed with limited degrees of motion (PTZ cameras). In the areas of algorithms and learning methods the focus and the challenge is on creating methodologies that can balance real-time operation and computational power while providing high-level semantic information for planning, interaction, or situational awareness for human operators. With respect to robots, efforts will focus on building systems with advanced mobility, manipulation, human-machine interaction, and coordination skills. |
1 |
2016 — 2017 | Lee, Daniel (co-PI) [⬀] Yim, Mark [⬀] Kumar, R. Vijay Daniilidis, Kostas |
N/A |
Nsf National Robotics Initiative (Nri) 2016 Pi Meeting @ University of Pennsylvania The objective of this award is to organize the annual Principal Investigators (PI) meeting for the National Robotics Initiative (NRI), which was launched in 2011. The PI meeting brings together the community of researchers, companies, and program officers who are actively engaged in the NRI to provide cross-project coordination in terms of common intellectual challenges, methods for education and training, best practices for the transition of results, and a centralized and lasting repository illustrating the research ideas explored and milestones achieved by the NRI projects. The meeting will be held over two days in late fall 2016 in the vicinity of Washington, DC. The format will include short presentations by all the attending PIs, a poster session, keynote speeches, and panel discussions. Invitations to the meeting will include all PIs with active NRI grants, program managers with robotics-related programs, and members of the press. |
1 |
2016 — 2018 | Schmidt, Marc F. (co-PI) [⬀] Bassett, Danielle (co-PI) [⬀] Lee, Daniel (co-PI) [⬀] Shi, Jianbo (co-PI) [⬀] Daniilidis, Kostas |
N/A |
Mri: Development of An Observatory For Quantitative Analysis of Collective Behavior in Animals @ University of Pennsylvania This project develops a new instrument to enable accurate quantitative analysis of the movements and vocal expressions of animals in real-world scenes, aiming to facilitate innovative research in animal behavior and neuroscience in complex, realistic environments. While much progress has been made investigating brain mechanisms of behavior, these studies have been limited primarily to individual subjects in relatively simple settings. For many social species, including humans, understanding neurobiological processes within more complex environments is critical because their brains have evolved to perceive and evaluate signals within a social context. Today's advances in video capture hardware and storage, and in algorithms from computer vision and network science, make such studies in animals possible. Past work has relied on subjective and time-consuming observations from video streams, which suffer from imprecision, low dimensionality, and the limitations of the expert analyst's sensory discriminability. This instrument will not only automate the process of detecting behaviors but also provide an exact numeric characterization in time and space for each individual in the social group. While not explicitly part of the instrument, the quantitative description provided by the system will make it possible to correlate social context with neural measurements, a task that can only be accomplished when sufficient spatiotemporal precision has been achieved. |
1 |
2016 — 2019 | Schmidt, Marc F. [⬀] Daniilidis, Kostas |
N/A |
Neural Bases of Song Preference and Reproductive Behavior in a Female Songbird @ University of Pennsylvania For many decades, neuroscientists and evolutionary biologists have been interested in the mechanics and function of the songbird's "song system": the interconnected neural circuit that connects the higher-order auditory areas in the brain with the motor circuits in the brainstem that drive behavior. This work has predominantly focused on how the song system allows male songbirds to learn and produce song. The role of this circuit in female songbirds, which do not sing, has largely been ignored. Rather than acting as a circuit that generates vocal behavior, this work investigates the hypothesis that the "song system" in females serves to organize preferences for males' songs and guides their behavioral reactions to song in the form of a copulation solicitation display that ensures survival of the species. The project capitalizes on the robustness, selectivity, and social malleability of copulatory behavior in the brown-headed cowbird to investigate how the song system transforms a sensory stimulus (the song) into a motor command that controls a postural response. The project also provides opportunities for undergraduate and graduate students to engage in interdisciplinary research, and it includes science education activities aimed at elementary school children as well as a comprehensive summer course in neuroscience for high school students. |
1 |
2017 — 2020 | Daniilidis, Kostas | N/A |
Ri: Medium: Collaborative Research: Closed Loop Perceptual Planning For Dynamic Locomotion @ University of Pennsylvania Modern robots can be seen moving about a variety of terrains and environments, using wheels, legs, and other means, engaging in life-like hopping, jumping, walking, crawling, and running. They execute motions called gaits. An example of a gait is a horse trotting or galloping. Likewise, humans execute walking, running, and skipping gaits. Essentially, for either biological or mechanical systems, a gait is a locomotion pattern that involves large-amplitude body oscillations. Naturally, these motions cause impacts with terrain that jostle on-board perceptual systems and directly influence what the robots actually "see" as they move. For instance, the body motion of a bounding horse-like robot may result in significant occlusions and oscillations in on-board camera systems that confound motion estimation and perceptual feedback. |
1 |
2019 — 2022 | Daniilidis, Kostas Sarkar, Saswati (co-PI) [⬀] Ribeiro, Alejandro [⬀] Ghrist, Robert (co-PI) [⬀] Dobriban, Edgar |
N/A |
@ University of Pennsylvania Recent advances in artificial intelligence have led to significant progress in our ability to extract information from images and time sequences. Maintaining this rate of progress hinges upon attaining equally significant results in the processing of more complex signals such as those that are acquired by autonomous systems and networks of connected devices, or those that arise in the study of complex biological and social systems. This award establishes FINPenn, the Center for the Foundations of Information Processing at the University of Pennsylvania. The focus of the center is to establish fundamental theory to enable the study of data beyond time and images. The center's premise is that humans' rich intuitive understanding of space and time may not necessarily be applicable to the processing of complex signals. Therefore, matching the success in time and space necessitates the discovery and development of foundational principles to guide the design of generic artificial intelligence algorithms. FINPenn will support a class of scholar trainees along with a class of visiting postdocs and students to advance this agenda. The center will engage the community through the organization of workshops and lectures and will disseminate knowledge with onsite and online educational activities at the undergraduate and graduate level. |
1 |
2020 — 2023 | Daniilidis, Kostas Pappas, George (co-PI) [⬀] Matni, Nikolai |
N/A |
Cps: Medium: Robust Learning For Perception-Based Autonomous Systems @ University of Pennsylvania Consider two future autonomous system use-cases: (i) a bomb-defusing rover sent into an unfamiliar, GPS- and communication-denied environment (e.g., a cave or mine), tasked with locating and defusing an improvised explosive device, and (ii) an autonomous racing drone competing in a future autonomous incarnation of the Drone Racing League. Both systems will make decisions based on inputs from a combination of simple, single-output sensing devices, such as inertial measurement units, and complex, high-dimensional sensing modalities, such as cameras and LiDAR. This shift from relying only on simple, single-output sensing devices to systems that incorporate rich, complex perceptual sensing modalities requires rethinking the design of safety-critical autonomous systems, especially given the inextricable role that machine and deep learning play in the design of modern perceptual sensors. These two motivating examples, however, raise an even more fundamental question: given the vastly different dynamics, environments, objectives, and safety/risk constraints, should these two systems have perceptual sensors with different properties? Indeed, due to the extremely safety-critical nature of the bomb-defusing task, an emphasis on robustness, risk aversion, and safety seems necessary. Conversely, the designer of the drone racer may be willing to sacrifice robustness to maximize responsiveness and lower lap time. This extreme diversity in requirements highlights the need for a principled approach to navigate tradeoffs in this complex design space, which is what this proposal seeks to develop. Existing approaches to designing perception/action pipelines are either modular, which often ignore uncertainty and limit interaction between components, or monolithic and end-to-end, which are difficult to interpret and troubleshoot and have high sample complexity. |
1 |
2021 — 2023 | Daniilidis, Kostas Schmidt, Marc [⬀] Aflatouni, Firooz (co-PI) [⬀] |
N/A |
Ncs-Fo: Tracking Social Behavior and Its Neural Properties in a Smart Aviary @ University of Pennsylvania Advances in technology, mathematics, computing and engineering are making it possible to quantify behaviors within complex naturalistic environments and to relate them to underlying neural mechanisms. For social animals, which have evolved to perceive and evaluate signals within a community context, the ability to link neural function with the precise social environment is especially important and challenging. Little is currently known about how the brain integrates complex social information and how such information might be encoded. This stems in part from the experimental challenge of measuring and assessing the variables that determine a social context and then linking the state of a social network to precise neural events. This project has assembled an interdisciplinary team of engineers, neurobiologists and computational scientists to create a platform to record and evaluate brain dynamics in individual animals navigating a complex social environment. In addition to the challenge and opportunity of using sophisticated engineering and computational approaches to study how brains encode social information, this work will generate a complex dataset that will offer unique opportunities for developing novel mathematical methods to quantify and visualize social networks that can be applied to other disciplines. |
1 |
2022 — 2025 | Daniilidis, Kostas | N/A |
Collaborative Research: Visual Tactile Neural Fields For Active Digital Twin Generation @ University of Pennsylvania Robots will perform better at everyday activities when they can quickly combine their sensory data into a model of their environment, just like how humans instinctively use all their senses and knowledge to accomplish daily tasks. Robots, however, must be programmed to create these models that humans build intuitively, effortlessly, and robustly. This robotics project explores a novel algorithmic approach that combines visual and tactile sensory data with a knowledge of physics and a capability to learn, making robot planning and reasoning more effective, efficient, and adaptable. The project includes the development and testing of research prototypes, preparation of new curriculum, and outreach to high school students and teachers and to the general public.

This project introduces a new data representation, called a Visual Tactile Neural Field (VTNF), that allows robots to combine data from visual and tactile sensors to create a unified model of an object. The VTNF is designed to be used in a closed-loop manner, where a robot may use data from its physical interactions with an object to create or improve a model and may use its current understanding of a model to inform how best to interact with a physical object. Towards this end, the investigators create the mathematical techniques, computational tools, and robot hardware necessary to generate a VTNF model. The investigators also develop techniques to quantify the uncertainty about an object and use this uncertainty to learn search policies that allow robots to generate accurate models as quickly as possible. 
The VTNF, which allows for the easy addition of new properties about an object, provides a flexible representational foundation for other researchers and practitioners to use to enable robots to learn faster by having a more detailed understanding of both the surrounding environment and their interactions with it.

This project is supported by the cross-directorate Foundational Research program in Robotics and the National Robotics Initiative, jointly managed and funded by the Directorates for Engineering (ENG) and Computer and Information Science and Engineering (CISE).

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria. |
1 |
2022 — 2026 | Roth, Dan (co-PI) [⬀] Daniilidis, Kostas |
N/A |
Ri: Medium: Learning to Map and Navigate With Vision and Language @ University of Pennsylvania This project aims to advance the state of the art in robotic mapping and navigation by enabling spatial understanding using semantic maps and spatial reasoning for following language instructions given only visual inputs. Current performance in those tasks is low because of the inability to ground semantic entities and instructions spatially. Instead of grounding semantics to images, spatial understanding and navigation can be achieved if a system uses maps as an intermediate representation, as also indicated by behavioral and neural findings in spatial cognition. Building a map of an unseen space without exhaustive exploration can be learned, and this process can be facilitated by cross-modal language-vision attentional mechanisms. The project will integrate research with education and with outreach to underrepresented groups in Philadelphia neighborhoods, with the goal of broadening participation.

This research is centered around understanding how vision and language interact to create better spatial representations, such as maps, and facilitate navigation. The project will approach the vision-language problem from three angles: (i) how robots can learn to predict a map when entering an unseen environment using active learning; (ii) how navigation instructions can be encoded into spatial configuration schemata and navigational concepts that can be better aligned to maps and paths than raw language embeddings; and (iii) how navigational language representations can facilitate the creation of maps in unseen environments, and how one can follow instructions by using maps and language to create paths to follow.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria. |
1 |