1999 — 2001
Fermuller, Cornelia
Powre: Enhanced Vision --- Eyes From Eyes @ University of Maryland College Park
EIA-9973428, Fermuller, Cornelia M., University of Maryland
The focus of this research is on designing and constructing new ways of assembling a set of video cameras in order to collect images whose analysis will provide accurate models of shape and action, such as human movements. The research is motivated by examining the wide variety of eye designs in the biological world and obtaining inspiration for an ensemble of computational studies that relate how a system sees to what that system does (that is, relating perception to action). This, coupled with the geometry of multiple views, points to new ways of constructing powerful imaging devices suited to particular tasks in robotics, visualization, video processing, virtual reality, and various computer vision applications.
2005 — 2008
Aloimonos, John (Yiannis); Fermuller, Cornelia
Collaborative Proposal: Seer: a Gigascale Neuromorphic Visual System @ University of Maryland College Park
The goal of this project is to parse video from a moving platform in real-time to produce retinotopic maps that reveal the spatial layout of the scene as well as any independently moving objects present. The project proposes to duplicate the function of the early visual system in a multichip neuromorphic system with two hundred and thirty thousand silicon neurons (three quarters the number of pixels in a VGA image) and three billion synaptic interactions, several orders of magnitude larger than anything built to date. The outcome of this project will be SEER, a subwatt, paperback-size seeing machine.
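The silicon neurons mentioned above are analog circuits, but their behavior is commonly abstracted in software as leaky integrate-and-fire units. The following is a minimal sketch of that abstraction; it is illustrative only, and the parameter values are invented, not taken from the SEER hardware:

```python
# Minimal leaky integrate-and-fire (LIF) neuron: a common software
# abstraction for silicon neurons on neuromorphic chips.
# All parameters are illustrative, not taken from the SEER design.

def simulate_lif(input_current, dt=1e-3, tau=0.02, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Return spike times (as step indices) for a sampled input current."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        # Membrane potential leaks toward rest and integrates the input.
        v += dt / tau * (v_rest - v) + dt * i_in
        if v >= v_thresh:          # threshold crossing -> emit a spike
            spikes.append(t)
            v = v_reset            # reset after firing
    return spikes

# A constant suprathreshold current produces regular, repeated spiking;
# zero input produces none.
spike_times = simulate_lif([60.0] * 100)
```

With 230,000 such units updating in parallel in analog hardware, this loop runs at essentially zero marginal cost per neuron, which is the appeal of the neuromorphic approach.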
The intellectual merit of the proposed research stems from the tight synergy between the computational theory, based on the principle of compositionality, and the neuromorphic implementation, based on reentrant networks. Compositionality dictates that the various parts of the vision problem should be attacked simultaneously, and reentrancy gives us the capability to do exactly that.
The broader impact of the proposed effort could be enormous. A multichip neuromorphic system performing multimodal segmentations would reinvigorate robotics and computer vision. By providing the infrastructure for early vision, it will facilitate the study of cognition, which will most likely generate a flood of new theories and experiments, in the Neurosciences and in the sciences of the artificial.
2007 — 2009
Aloimonos, John (Yiannis); Fermuller, Cornelia
Nets-Noss: Collaborative Proposal: the Behaviorscope Project: Sensory Grammars For Sensor Networks @ University of Maryland College Park
The BehaviorScope project seeks to develop a framework for understanding patterns and behaviors from sensor data and metadata in distributed multimodal sensor nodes. Patterns and behaviors (especially of humans) will be parsed by a hierarchy of probabilistic grammars and other mechanisms into a compact and more descriptive semantic form. These higher-level interpretations of the data will provide the necessary network cognition needed to provide services in many everyday life applications such as assisted living, workplace safety, security, entertainment and more. The project will use a lightweight camera sensor network as its primary platform and will focus on two types of spatio-temporal data processing. Within each sensor's local field of view, this research will investigate the design of filters for robustly detecting humans as well as their gestures and postures. At a more macroscopic level, collections of sensors will coordinate to detect longer-term patterns of behavior. The expected outcome is a new data interpretation framework that can understand the spatial and temporal aspects of data and respond to them with meaningful services. To collect real data and to demonstrate the developed concepts in practical applications, this work will use assisted living as the driver application. In this context, the developed sensor network will supervise the behaviors of elders living alone at home to generate daily activity summaries, post warnings and alarms when they engage in dangerous activities, and provide a variety of services that increase the autonomy and independence of these individuals.
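The idea of parsing low-level sensor events into higher-level activity labels can be sketched with a toy probabilistic model. This is a deliberately simplified stand-in for the project's hierarchy of probabilistic grammars: the event names, transition probabilities, and activity labels below are all invented for illustration.

```python
# Hedged sketch: mapping a low-level sensor event stream to a
# higher-level activity label via per-activity transition probabilities.
# Event names and probabilities are invented; the actual BehaviorScope
# grammars are hierarchical and far richer.
import math

# P(next_event | current_event) under each candidate activity model.
ACTIVITY_MODELS = {
    "cooking":  {("enter_kitchen", "open_fridge"): 0.6,
                 ("open_fridge", "use_stove"): 0.7},
    "sleeping": {("enter_bedroom", "lie_down"): 0.8,
                 ("lie_down", "no_motion"): 0.9},
}

def log_likelihood(events, transitions, floor=1e-6):
    """Sum log-probabilities over consecutive event pairs."""
    return sum(math.log(transitions.get(pair, floor))
               for pair in zip(events, events[1:]))

def classify(events):
    """Return the activity whose model best explains the event stream."""
    return max(ACTIVITY_MODELS,
               key=lambda a: log_likelihood(events, ACTIVITY_MODELS[a]))

print(classify(["enter_kitchen", "open_fridge", "use_stove"]))  # cooking
```

A real grammar hierarchy would additionally compose such labels over time (e.g. "cooking" followed by "eating" into "mealtime"), which is what gives the semantic compression the abstract describes.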
2010 — 2013
Aloimonos, John (Yiannis); Fermuller, Cornelia
Cps: Small: Methods and Tools: Robots With Vision That Find Objects @ University of Maryland College Park
The objective of this research is the development of methods and software that will allow robots to detect and localize objects using Active Vision and develop descriptions of their visual appearance in terms of shape primitives. The approach is bio-inspired and consists of three novel components. First, the robot will actively search the space of interest using an attention mechanism consisting of filters tuned to the appearance of objects. Second, an anthropomorphic segmentation mechanism will be used. The robot will fixate at a point within the attended area and segment the surface containing the fixation point, using contours and depth information from motion and stereo. Finally, a description of the segmented object, in terms of the contours of its visible surfaces and a qualitative description of their 3D shape, will be developed. The intellectual merit of the proposed approach comes from the bio-inspired design and the interaction of visual learning with advanced behavior. The availability of filters will allow the triggering of contextual models that work in a top-down fashion, meeting the bottom-up, low-level processes at some point. Thus, the approach defines, for the first time, the meeting point where perception happens. The broader impacts of the proposed effort stem from the general usability of the proposed components. Adding top-down attention and segmentation capabilities to robots that can navigate and manipulate will enable many technologies, for example household robots, assistive robots for the care of the elderly, or robots in manufacturing, space exploration and education.
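The attend-fixate-segment loop described above can be illustrated on a toy 2-D depth map. The flood-fill region growing and the thresholds below are illustrative stand-ins for the proposal's contour- and depth-based segmentation, not its actual algorithms:

```python
# Hedged sketch of the attend -> fixate -> segment loop, on a toy 2-D
# "image" of depth values. The saliency map, tolerance, and flood-fill
# are illustrative substitutes for the proposal's actual mechanisms.

def fixation_point(saliency):
    """Attention: return (row, col) of the most salient location."""
    return max(((r, c) for r in range(len(saliency))
                for c in range(len(saliency[0]))),
               key=lambda rc: saliency[rc[0]][rc[1]])

def segment(depth, seed, tol=0.5):
    """Grow a region around the fixation point while depth stays similar."""
    rows, cols = len(depth), len(depth[0])
    region, stack = set(), [seed]
    while stack:
        r, c = stack.pop()
        if (r, c) in region or not (0 <= r < rows and 0 <= c < cols):
            continue
        if abs(depth[r][c] - depth[seed[0]][seed[1]]) > tol:
            continue
        region.add((r, c))
        stack += [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]
    return region

depth = [[5.0, 5.0, 9.0],
         [5.1, 5.2, 9.1],
         [9.0, 9.0, 9.2]]
saliency = [[0, 1, 0],
            [2, 9, 0],   # attention peak sits on the near object
            [0, 0, 0]]
seed = fixation_point(saliency)   # -> (1, 1)
obj = segment(depth, seed)        # the four near-depth cells
```

The segmented region's boundary is then what the third component would summarize as contours plus a qualitative 3D shape description.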
2012 — 2015
Shamma, Shihab (co-PI); Andreou, Andreas (co-PI); Fermuller, Cornelia; Horiuchi, Timothy (co-PI); Etienne-Cummings, Ralph (co-PI)
Inspire: Signals to Symbols: From Bio-Inspired Hardware to Cognitive Systems @ University of Maryland College Park
This INSPIRE award is partially funded by the Science of Learning Centers Program in the Division of Behavioral, Cognitive and Social Sciences in the Directorate for Social, Behavioral and Economic Sciences; the Perception, Action, and Cognition Program in the Division of Behavioral, Cognitive and Social Sciences in the Directorate for Social, Behavioral and Economic Sciences; the Energy, Power, and Adaptive Systems Program in the Division of Electrical, Communications and Cyber Systems in the Directorate for Engineering; and the Applied Mathematics and Mathematical Biology Program in the Division of Mathematical Sciences in the Directorate for Mathematical and Physical Sciences. This research project draws on knowledge from many disciplines (neuroscience, cognitive science, computational science, mathematics and engineering) to create cognitive systems capable of interpreting observed, complex human movements and actions. New design methodologies will be developed for the integration of sensory modalities (vision, audition, touch) and their support of higher cognitive function (language, reasoning).
In contrast to existing approaches which tend to be assemblies of modular components each solving its task in isolation, this team takes a novel approach called Active Cognition which has the following features: 1) Instead of modeling the different perceptual processes (vision, audition, and haptics), cognition, and motor control in isolation, the modules are integrated and capabilities co-developed in the tradition of dynamical systems theory to obtain a reasoning system where "the whole is greater than the sum of its parts"; 2) instead of segregating the low level processing of signals from the processing of higher level symbolic information, they will interact in a continuous dialogue, such that high level knowledge will leverage perception; and 3) instead of separating physical embodiment from algorithmic considerations, biologically inspired real-time hardware will be developed that implements complex functions by integrating signals and symbols. The project is organized in two working groups. The first group will develop a cognitive robot that can recognize complex human activities using visual and auditory signals captured by biologically inspired hardware. The second group will study attention in humans by measuring human response to audition and vision through EEG and MEG, and subsequently implementing the findings in robots. A yearly three-week, hands-on workshop will educate students, serve as a testing ground for the team's ideas, and stimulate new collaborations. This workshop will also engage the interdisciplinary research community that has formed around the goal of building biologically inspired cognitive systems.
Success in integrating different components of a cognitive system (hardware, sensors, and software) has the potential to catalyze a new industry of biologically-inspired cognitive systems, including household and service robots, and systems for intelligent transportation and smart manufacturing. In addition, this interdisciplinary project will play a significant role in building capacity for a new emphasis area in engineering and training of cognitive systems engineers who need combined expertise in computer science, electrical engineering and cognitive neuroscience.
2015 — 2018
Baras, John (co-PI); Fermuller, Cornelia; Aloimonos, John (Yiannis) (co-PI)
Cps: Synergy: Mona Lisa - Monitoring and Assisting With Actions @ University of Maryland College Park
Cyber-physical systems of the near future will collaborate with humans. Such cognitive systems will need to understand what the humans are doing. They will need to interpret human action in real-time and predict the humans' immediate intention in complex, noisy and cluttered environments. This proposal puts forward a new architecture for cognitive cyber-physical systems that can understand complex human activities, and focuses specifically on manipulation activities. The proposed architecture, motivated by biological perception and control, consists of three layers. At the bottom layer are vision processes that detect, recognize and track humans, their body parts, objects, tools, and object geometry. The middle layer contains symbolic models of the human activity, and it assembles through a grammatical description the recognized signal components of the previous layer into a representation of the ongoing activity. Finally, at the top layer is the cognitive control, which decides which parts of the scene will be processed next and which algorithms will be applied where. It modulates the vision processes by fetching additional knowledge when needed, and directs attention by steering the active vision system's sensors to specific places. Thus, the bottom layer is the perception, the middle layer is the cognition, and the top layer is the control. All layers have access to a knowledge base, built in offline processes, which contains the semantics about the actions.
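The three-layer flow described above can be sketched as a simple perception-cognition-control loop. Every name below (the detections, the activity grammar, the control policy) is invented for illustration and is not the MONA LISA system's actual design:

```python
# Hedged sketch of the three-layer architecture: perception produces
# detections, cognition matches them against symbolic activity rules,
# and control decides what to attend to next. All names are illustrative.

def perceive(frame):
    """Bottom layer: detect scene elements (stubbed as a lookup)."""
    return frame["detections"]

def interpret(detections, grammar):
    """Middle layer: assemble detections into an activity hypothesis."""
    for activity, required in grammar.items():
        if required.issubset(detections):
            return activity
    return "unknown"

def control(activity):
    """Top layer: choose where to direct sensing and processing next."""
    return {"screw_insertion": "focus_on_hands",
            "unknown": "widen_view"}.get(activity, "track_object")

grammar = {"screw_insertion": {"hand", "screwdriver", "screw"}}
frame = {"detections": {"hand", "screwdriver", "screw", "table"}}

activity = interpret(perceive(frame), grammar)   # "screw_insertion"
next_action = control(activity)                  # "focus_on_hands"
```

The real middle layer uses grammatical (rule-based, temporal) descriptions rather than simple set containment, but the loop structure, with cognition feeding attention decisions back to perception, is the same.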
The feasibility of the approach will be demonstrated through the development of a smart manufacturing system, called MONA LISA, which assists humans in assembly tasks. This system will monitor humans as they perform assembly tasks. It will recognize each assembly action, determine whether it is correct, and communicate possible errors to the human, suggesting ways to proceed. The system will have advanced visual sensing and perception; action understanding grounded in robotics and human studies; semantic and procedural-like memory and reasoning; and a control module linking high-level reasoning and low-level perception for real-time, reactive and proactive engagement with the human assembler.
The proposed work will bring new tools and methodology to the areas of sensor networks and robotics and is applicable, beyond smart manufacturing, to a large variety of sectors and applications. Being able to analyze human behavior using vision sensors will have impact on many sectors, ranging from healthcare and advanced driver assistance to human-robot collaboration. The project will also catalyze K-12 outreach, new courseware (undergraduate and graduate), publication and open-source software.
2015 — 2018
Shamma, Shihab (co-PI); Andreou, Andreas (co-PI); Fermuller, Cornelia; Horiuchi, Timothy (co-PI); Etienne-Cummings, Ralph (co-PI)
Sl-Cn: Cortical Architectures For Robust Adaptive Perception and Action @ University of Maryland College Park
The motivation for this biologically-inspired approach is to design systems that perceive and act in cluttered and noisy scenes that they have never experienced. This stands in contrast with the state of the art in computational engineering systems that need to be re-trained each time they confront an unanticipated environment. The main reason is that current approaches to perception address specific problems in isolation and do not consider that the primary role of perception is to support systems with bodies in action. As a result, they are constrained to the situations for which they were trained and cannot react to changing tasks and scenes. By focusing on cognition primitives rather than specific applications, the work is expected to greatly advance the state of the art of machine perception and lead to the development of systems that can robustly adapt online to new environments, react to novel situations and learn new contexts. To do so, novel theoretical formulations of perception and action and high-speed, low-power hardware implementations with online learning capabilities will be studied while assimilating new insights from the neurosciences. Consequently, this work will network neuroscience, cognitive science, applied mathematics, computer science and engineering so as to lower one of the few remaining barriers that keeps interactive robots in the realm of science fiction. Beyond the scholarly contribution, the work is expected to provide know-how for the design of systems with adaptive perception in a modular fashion with reusable components. Such systems have applications in computational vision and auditory perception problems and can advance the industry of cognitive biologically-inspired robotics and assistive devices.
This proposal sets forth novel ideas in the design of intelligent perceptual systems and the development of synthetic intelligence. Just about any task that an intelligent system solves involves the interplay of four basic processes devoted to: (a) context, (b) attention, (c) segmentation and (d) categorization. The members of the proposed network will study these canonical cognitive primitives by combining neural modeling with neural and behavioral experiments, theoretical and computational modeling, and implementation in robotics. These theoretical insights will then be adapted to satisfy the demands of realistic behavior, and to develop technological solutions for applications of robust and invariant perception and action. The proposed collaborative network will consist of a small science and engineering research team that directly addresses the questions in robust adaptive perception and action. It will then contribute personnel, results and pedagogical content to a Summer Workshop that aims to include a global network of researchers.
2018 — 2021
Shamma, Shihab (co-PI); Fermuller, Cornelia; Etienne-Cummings, Ralph (co-PI)
Research Coordination Network: Cognitive Functions in the Learning of Symbolic Signals & Systems @ University of Maryland College Park
The objective of this Research Coordination Network (RCN) is to advance understanding of how biological systems learn complex symbolic signals, and create artificial systems with similar capabilities. By defining a common framework to describe these signals, and their variability across space and time, the RCN will develop methods and tools applicable to a wide range of domains, including language, music, action, perception, and navigation. The RCN will build upon research in Neuromorphic Engineering and its development of bio-inspired, low-power computing platforms, sensors, and signal processing. Using these tools, the RCN will focus on high-level cognitive functions, to create complex, bio-inspired systems that learn through engagement in tasks. The network will bring together neuroscience, cognitive science, applied mathematics, computer science, and engineering, with emphasis on machine learning and artificial intelligence. Network members will participate in a yearly three-week, hands-on workshop that will develop and test new tools and ideas, stimulate new collaborations, and educate students in unique interdisciplinary skills.
The RCN will facilitate interactions and collaborative projects among participating researchers employing a wide range of paradigms that specifically deal with three thrusts: the role of neural plasticity for learning symbolic systems; the adaptive mechanisms underlying the learning of sensory-motor tasks; and transitioning to real-world applications such as automatic speech and dynamic scene understanding, neuromorphic hardware implementations, cognitive computational algorithms, and database acquisition. Specific examples of such diverse projects include brain process models that assess learning and expertise; algorithms, based on physiological or abstract events, that process input from neuromorphic hardware; and development of software and neuromorphic hardware for signal interpretation and action execution.
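One of the plasticity mechanisms relevant to the neural-plasticity thrust above is spike-timing-dependent plasticity (STDP), in which a synapse strengthens when the presynaptic spike shortly precedes the postsynaptic one and weakens otherwise. A minimal sketch, with illustrative time constants and learning rates:

```python
# Hedged sketch of spike-timing-dependent plasticity (STDP), a standard
# synaptic learning rule; the constants below are illustrative only.
import math

def stdp_update(w, t_pre, t_post, a_plus=0.1, a_minus=0.12, tau=0.02):
    """Update weight w given one pre/post spike pair (times in seconds)."""
    dt = t_post - t_pre
    if dt > 0:    # pre fired before post: strengthen the synapse
        w += a_plus * math.exp(-dt / tau)
    else:         # post fired before (or with) pre: weaken it
        w -= a_minus * math.exp(dt / tau)
    return max(0.0, min(1.0, w))   # keep the weight in [0, 1]

w_up = stdp_update(0.5, t_pre=0.010, t_post=0.015)   # pre leads: w rises
w_dn = stdp_update(0.5, t_pre=0.015, t_post=0.010)   # post leads: w falls
```

The exponential windows mean that only spike pairs closer together than a few multiples of `tau` change the weight appreciably, which is what lets such rules learn temporal structure in symbolic signal streams.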
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
2021 — 2025
Shamma, Shihab (co-PI); Andreou, Andreas (co-PI); Fermuller, Cornelia; Etienne-Cummings, Ralph (co-PI); Babadi, Behtash (co-PI)
Accelnet: Accelerating Research On Neuromorphic Perception, Action, and Cognition @ University of Maryland College Park
Artificial intelligence is becoming ubiquitous in modern life. To build systems under the current paradigm, large amounts of energy are required for computing and sensing. This causes environmental problems, pollution, and challenges for small-sized systems, as well as privacy issues. The field of neuromorphic science and technology offers an alternative by seeking to understand the principles of biological brains and to build artificial systems on that basis, using low-power hardware and software solutions. While its advantages have been demonstrated, further advances are necessary and will require common computational tools and principled experimental approaches. This AccelNet project, NeuroPacNet, links international experts in neuromorphic engineering with computational neuroscientists, roboticists, control theorists, and researchers of perception from seven global networks to set the foundations for building systems that can robustly process real-world signals in time and adapt to changes. This network of networks will facilitate the development of new methods and approaches for intelligent system design and prepare the next generation of leaders in neuromorphic science and technology. As different industries adopt neuromorphic hardware, society will have access to new applications, such as in computing on cell phones, neuroprostheses, intelligent hearing aids, and smart sensory systems with predictive capabilities.
NeuroPacNet will advance computational research on modeling the integration of perception, action, and cognition. The network of networks will coordinate across those research thrusts and develop new approaches grounded in theoretical neuroscience for sensorimotor control, motor learning, event-based computations, and learning in spiking neural networks. NeuroPacNet will also include robotics research in the areas of drone navigation and human activity understanding for humanoids and will address social and ethical issues in humanoid robotics. The network of networks will use innovative hardware design and mixed-signal computational systems to address computation for emerging and unconventional technologies. International collaboration and knowledge exchange will include an immersive research exchange program providing scholarships to students and postdoctoral researchers, an annual workshop to discuss common issues and concerns in a stimulating environment and to engage in hands-on projects, meetings to define challenges, opportunities, and actions to accelerate progress, and competitions with two challenges to be solved by teams of researchers and students. An interactive project website will become a portal for archived webinar talks, tools, and data.
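The event-based computation mentioned above replaces full image frames with per-pixel brightness *changes*, as produced by DVS-style event cameras. A minimal sketch of that representation, with toy frames and an illustrative threshold:

```python
# Hedged sketch of an event-based representation: only per-pixel
# brightness changes above a threshold are emitted, as in a DVS-style
# event camera. The frames and threshold here are toy values.

def frames_to_events(frames, threshold=0.2):
    """Emit (t, row, col, polarity) whenever a pixel changes enough."""
    events = []
    for t in range(1, len(frames)):
        prev, curr = frames[t - 1], frames[t]
        for r, row in enumerate(curr):
            for c, value in enumerate(row):
                delta = value - prev[r][c]
                if abs(delta) >= threshold:
                    events.append((t, r, c, 1 if delta > 0 else -1))
    return events

# A single pixel brightening between two frames yields one ON event;
# a static scene yields no events at all.
events = frames_to_events([[[0.0, 0.0]], [[0.5, 0.0]]])  # [(1, 0, 0, 1)]
```

Because static scenes produce no events, downstream spiking networks process a sparse stream rather than dense frames, which is central to the power savings the abstract describes.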
The Accelerating Research through International Network-to-Network Collaborations (AccelNet) program is designed to accelerate the process of scientific discovery and prepare the next generation of U.S. researchers for multiteam international collaborations. The AccelNet program supports strategic linkages among U.S. research networks and complementary networks abroad that will leverage research and educational resources to tackle grand scientific challenges that require significant coordinated international efforts.
Co-funding for this project is provided by the Directorate for Social, Behavioral, and Economic Sciences.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.