2005 — 2007
Adams, Julie
SGER: Requirements Assessment for a Multiple Robot-Multiple Human Interface
There exist many examples of Human-Robotic Interaction (HRI) for a single robot or a small number of robots (2-4), but HRI development for large robot teams requires novel techniques for presenting large amounts of data that support a human's ability to maintain situation awareness without overloading his or her cognitive capabilities. In this project the PI will assess needs and requirements, and develop preliminary interface designs, for tools that will permit a small number of humans to supervise large robotic teams, a capability not available with current HRI methodologies. To this end, the PI will build on prior work in HCI, CSCW, and complex human-machine systems. She will interact continuously with, and solicit feedback from, the Metro Nashville Police Chemical, Biological, Radiological, Nuclear, and Explosive (CBRNE) search and rescue team. A basic premise of the work is that developing useful interfaces requires first understanding how organizations will actually use large teams of robots, which in turn requires an understanding of how the human team currently completes its tasks (without robots), its expectations of a robotic team, and its requirements for robotic team capabilities.
Broader Impacts: This work will advance the fields of robotics, artificial intelligence, and human factors. Specifically, it will advance the community's understanding of how large numbers of robots can be effectively supervised, managed, and guided by a small number of human operators, thus affording more usable systems while reducing the associated manpower and exposure to potentially dangerous contaminants. Due to the project's focus on deploying robots for CBRNE attacks, the results will have broad effects on developing deployable robotic systems based upon user feedback to meet ever-increasing Homeland Security needs.
2007 — 2013
Adams, Julie
CAREER: Human Interaction With Large Numbers of Unmanned Vehicles
Future unmanned vehicle (UV) systems will be deployed for homeland security missions, including Chemical, Biological, Radiological, Nuclear, and Explosive (CBRNE) event response. UV systems that incorporate large numbers of ground and aerial UVs (large, mixed-type UV systems) are envisioned. This project will develop visualization techniques that provide a scalable interface incorporating integrated and easily understood information, thus permitting supervision of large, mixed-type UV systems. The project addresses three Human-Robotic Interaction (HRI) challenge areas: the development of a data abstraction framework; the development of scalable interface visualization techniques; and the development of visualization transition techniques. During the first four years, the PI will develop and evaluate data abstractions, visualization techniques, and transition techniques. The fifth year focuses on CBRNE field evaluations. A new module for the PI's Complex Human-Machine Interaction course and a new Introduction to Robotics course will be developed. Each summer an undergraduate student and a high school teacher will join the research team. The intellectual merits of this project include: formulation of a data abstraction framework for providing scalable, integrated visualizations; creation and evaluation of visualization and visualization transition techniques; and validation of all hypotheses via quantitative and qualitative usability evaluations with simulated and real UVs. Broader impacts include the development of visualization techniques for large, mixed-type UV systems that allow emergency responders to quickly assess a situation while reducing exposure to contaminants, and the development of HRI techniques for homeland security UV systems that increase personnel capabilities, reduce exposure to dangerous situations, and cover difficult terrain.
The development of these visualizations may also influence interface design for complex system domains such as air traffic control and nuclear process monitoring. Including high school teachers in summer research projects encourages them to bring research examples into their courses and may increase student interest in engineering.
2008 — 2011
Adams, Julie; McNamara, Timothy (co-PI); Rieser, John (co-PI); Bodenheimer, Robert; Sarkar, Nilanjan (co-PI)
MRI: Acquisition of Instruments for Interaction, Learning, and Perception in Virtual Environments
Proposal #: CNS 08-21640
PI(s): Bodenheimer, Robert E.; Adams, Julie A.; McNamara, Timothy P.; Rieser, John J.; Sarkar, Nilanjan
Institution: Vanderbilt University, Nashville, TN 37235-7749
Title: MRI/Acq.: Instruments for Interaction, Learning, and Perception in Virtual Environments
Project Proposed: This project, acquiring a high-fidelity instrument designed to facilitate and assess perception, interaction, and learning in immersive environments, pursues an ambitious research agenda dealing with people, their interactions with virtual environments, and the design factors underlying successful environments. The work aims to build a program to develop a better understanding of the cognitive capabilities of humans in immersive virtual environments, to inform the design process of such environments, and to understand how humans reason about space. The instrument will be shared among diverse and interdisciplinary groups collaborating in the area of virtual environments, including Computer Science and Engineering (graphics, animation, artificial intelligence, human factors, robotics, etc.) and the Psychological Sciences (cognitive psychology, child development, rehabilitation engineering, brain sciences, etc.). The component parts of the instrument (comprising optical motion capture equipment, a head-mounted display with binocular eye-tracking, and high-performance wireless data gloves) allow the measurement, tracking, rendering, and animation of subjects in virtual environments (from their overall position, to their posture, to the actions of their hands and fingers), coupled with the measurement of their gaze. The project ranges from low-level research in how people experience virtual environments to user evaluations involving high-level interface and simulation design. Children with autism will also be studied.
Broader Impacts: This project improves the quality of learning in virtual environments, reducing the time and cost of authoring such environments and overcoming likely impediments to their widespread use. The instrument enables robotics courses currently infeasible with real robots and provides hands-on experience for students. The work builds a scientific program to develop a better understanding of the cognitive capabilities of humans in immersive virtual environments, and may be applied to understanding the development of children's abilities to reason about space and to coordinate perceptual-motor skills as they develop. Moreover, it may help to treat autism spectrum disorder.
2008 — 2014
Adams, Julie; Biswas, Gautam (co-PI); Saylor, Megan (co-PI); Levin, Daniel
Thinking About, and Interacting With, Living and Mechanical Agents
Recent advances in artificial intelligence and robotics are confronting individuals of all ages with a series of category-defying entities that combine features of living and nonliving things. As such, these entities increasingly challenge people's basic understanding of mind and intelligence. The goal of this project is to explore adults' and children's beliefs about a range of living and mechanical agents, and to test how these beliefs affect people's ability to track, remember, and understand mechanical agents in two specific computer interfaces. The first is a computer interface designed to allow a human operator to interact with and control a set of semi-autonomous robots. The second is a teachable agent system in which middle school children learn about complex science phenomena, such as river ecosystems, by actively teaching an animated software agent.
This project represents one of the few research programs to empirically test people's understanding of living and artificial agents, and it will employ a conceptual framework that starts with naïve understandings of mind (e.g. "Theory of Mind") and applies them to engineered environments where these understandings are used. This framework describes the conditions under which participants apply different agent concepts, and can help understand how these beliefs might change over time as people interact with novel agents. Although the framework is not yet a complete theory, it represents a broadened approach to reasoning about both typical and novel living and mechanical agents that goes beyond existing dual-process models of Theory of Mind. These experiments also make links between concepts about agents, and the deployment of these concepts in realistic high-load perceptual tasks, so they can make an important contribution to our basic understanding of how knowledge affects vision.
The findings from this project may have important implications for educating both children and adults to deal with novel intelligent decision making technologies that move beyond the simple command-and-response cycle inherent to most current computer applications. Previous research by the PIs has already documented ways in which different people vary in their approach to these technologies (e.g. older and younger adults seem to have subtly different beliefs about the nature of computer intelligence), so this project may help improve the accessibility of novel agent-technologies to a wide range of different populations. More generally, because this research uses interactive educational tools and realistic robot-command systems to explore agent-understanding, it has the potential to improve user interfaces supporting social learning environments that focus on self-regulated learning, and that facilitate the effectiveness of human-machine emergency response teams. These technologies confront users with challenges to their most basic understandings of intelligence and thinking, and our research has the potential to guide both children and adults as they become successful users and creators of the interactive technologies of the future.