1975 — 1977
Badler, Norman
Research Initiation - Motion Picture Analysis by Computer @ University of Pennsylvania
1976 — 1979
Smoliar, Stephen; Badler, Norman
Techniques For the Representation of Movement-Related Information Using a Digital Computer @ University of Pennsylvania
1977 — 1979
Badler, Norman
Interactive Vector Graphics Facility @ University of Pennsylvania
1978 — 1982
Bajcsy, Ruzena; Joshi, Aravind (co-PI); Badler, Norman
Scene Understanding @ University of Pennsylvania
1987 — 1989
Badler, Norman
A Representation For Natural Human Movement (Computer and Information Science) @ University of Pennsylvania
1989 — 1995
Joshi, Aravind (co-PI); Badler, Norman; Bajcsy, Ruzena; Farber, David (co-PI); Buneman, O. Peter
Keeping Up With the 90's in Computer Science Equipment @ University of Pennsylvania
This award will provide infrastructure for research that is organized around five laboratories:
* LINC - for research on artificial intelligence and natural language processing;
* GRASP - for research on machine perception and robotics;
* GRAPHICS - for research on graphic interfaces, movement description, and animation;
* DSL - for research in computer architecture and computer communication;
* LOGIC & COMPUTATION - for research in logic and computation, including theory of computation, database systems, and programming languages.
Two new facets of the research, integration and upward scaling, require an enhanced experimental environment involving machines with massively parallel architectures. The award will help to develop this environment by providing funds for a SIMD machine for work in natural language processing, and active perception and real time manipulation; a MIMD machine for simulation and research involving extensive scientific calculations; as well as high speed workstations with rich environments for work in theoretical computer science.
1991 — 1994
Badler, Norman
Undergraduate Scientific and Engineering Visualization Laboratory (USEVL) At the University of Pennsylvania @ University of Pennsylvania
A major expansion of an Undergraduate Scientific and Engineering Visualization Laboratory (USEVL) to be directed by Professor Norman I. Badler. Based on the Director's existing nationally and internationally recognized Computer Graphics effort, the USEVL brings modern high-performance workstations for visualization into the Undergraduate curriculum and experience. Visualization provides the most direct and exciting pathway into modern high-performance computing. Not only does the Computer Science student learn the principles and use of high-speed workstations, but the experience carries over into many other aspects of scientific and technological education. Conversely, non-Computer Science majors (that is, other Engineering and Science majors) discover that the power of visualization can be applied to the fundamental concepts and applications of their own disciplines. The development expands the existing educational graphics nucleus of four Silicon Graphics Iris 4D workstations by adding five workstations plus a fast fileserver. Existing in-house and no-cost commercial software will be used to provide instant usability for the USEVL facility.
1993 — 1994
Badler, Norman; Pelachaud, Catherine; Viaud, Marie Luce
Standards For Facial Models @ University of Pennsylvania
9314898 Badler This is a workshop to exchange views on research directions and update the mechanisms used by the research community investigating the area of facial expressions, their semantics, and their animation in the graphical media. Given the multiplicity and diversity of orientations in the field, there is a need for a global assessment of the actual state of the art and for improving the methods of exchange of data and software. In particular, given the current state of analysis of facial expressions, there is a need to find commonality of tools for the synthesis of the extracted data. This workshop brings together a small group of leading researchers in the NSF, government, and industrial research community to define scientifically defensible, computationally reasonable, and experimentally useful facial models which may provide the basis for future research. The workshop will produce a published account of the proceedings of the meeting.
1994 — 1996
Badler, Norman
Activity Code: N01
Organ Modeling in Support of Virtual Surgery Simulation @ University of Pennsylvania
1995 — 1998
Steedman, Mark; Badler, Norman
Synthesizing Conversation Between Human-Like Cooperative Agents @ University of Pennsylvania
The goal of this research is to develop a system which automatically generates and animates conversations between multiple cooperative agents with appropriate and synchronized speech, intonation, facial expressions, and hand gestures. The research is based on theory which addresses relations and coordinations between these channels. The significance of this research is to provide a three-dimensional computer animation testbed for cooperative conversation theories. Human-machine interaction and training systems need more interactive and cooperative synthetic agents. Conversations are created by a dialogue planner that produces the text as well as the intonation of the utterances. The speaker/listener relationship, the content of text, the intonation and the undertaken actions all determine facial expressions, lip motions, eye gaze, head motion, and arm gesture generators. This project will focus on domains in which agents must propose and agree on abstract plans, and may have to motivate and carry out physical actions and refer to objects in their physical environment during conversation.
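The abstract above describes a multi-channel generation pipeline in which facial expression, gaze, head motion, and gesture are synchronized with the words and intonation produced by a dialogue planner. As a minimal illustrative sketch only (the class and field names below are assumptions, not the project's actual system), one way to keep such channels aligned is to anchor every nonverbal event to a span of words in the utterance and derive animation start times from that anchoring:

```python
# Hypothetical sketch of word-anchored multimodal synchronization.
from dataclasses import dataclass, field

@dataclass
class ChannelEvent:
    channel: str        # e.g. "gaze", "facial", "gesture", "head"
    action: str         # e.g. "look_at_listener", "raise_eyebrows", "beat"
    start_word: int     # index of the first word the event is synchronized with
    end_word: int       # index of the last word

@dataclass
class Utterance:
    speaker: str
    listener: str
    words: list[str]
    pitch_accents: list[int] = field(default_factory=list)  # accented word indices
    events: list[ChannelEvent] = field(default_factory=list)

    def schedule(self, word_duration: float = 0.3):
        """Convert word-anchored events into (start_time_s, event) pairs."""
        timed = [(e.start_word * word_duration, e) for e in self.events]
        return sorted(timed, key=lambda pair: pair[0])

if __name__ == "__main__":
    u = Utterance(
        speaker="AgentA", listener="AgentB",
        words="you need to turn the valve clockwise".split(),
        pitch_accents=[3, 5],
        events=[
            ChannelEvent("gaze", "look_at_listener", 0, 1),
            ChannelEvent("gesture", "rotate_wrist", 3, 6),
            ChannelEvent("facial", "raise_eyebrows", 5, 6),
        ],
    )
    for t, e in u.schedule():
        print(f"{t:4.1f}s  {e.channel:8s} {e.action}")
```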
1995 — 1998
Bogen, Daniel; Cohen, Morris; Badler, Norman; Harker, Patrick (co-PI); Kumar, R. Vijay
Ari: Acquisition of Equipment For a Customized Production Systems Laboratory @ University of Pennsylvania
9512402 Kumar This grant provides funding to purchase equipment for a new Center for Customized Production Systems (CCuPS) at the University of Pennsylvania. The purpose of this center is to draw together an interdisciplinary team of academics and industrial leaders to develop engineering technologies and management practices that will support the rapid design and production of customized goods and services that are tailored to the needs of a specific consumer. Through a variety of educational initiatives, CCuPS will be the nexus of training and information on customized production technology and management. The research is interdisciplinary. The main technical foci include:
* Data Acquisition: Measurement of the User, Task, and Environment
* Virtual Prototyping
* Product Design for Customized Production
* Rapid Production and Logistics Technology
* Life-Cycle Product and Process Development
* Systems Integration and Concurrent Engineering
To assure that problems of fundamental importance to industry are addressed, a partnership with a variety of organizations will be developed. The main benefit from the research will be the development of a technological base of knowledge needed for rapid design and manufacture of customized goods and services. In addition, the research will result in a better understanding of the product development and management practices required to move production systems from the traditional mass production model to systems where products and services can be customized to the individual needs of a consumer.
1998 — 2000
Smith, Jonathan (co-PI); Badler, Norman; Metaxas, Dimitris (co-PI); Kessler, G. Drew
LIVE: The Laboratory For Visual Environments @ University of Pennsylvania
We are requesting $100,000 of NSF ILI support for LIVE: The Laboratory for Visual Environments at the University of Pennsylvania. With the requested NSF ILI support and School of Engineering and Applied Science matching funds, we would purchase 24 OpenGL Windows NT systems and networking infrastructure to support undergraduate teaching and projects in computer graphics, visual and virtual environments, and high speed multi-computer networking. Direct manipulation and programming of these concepts is the ideal setting for teaching and learning about complex phenomena. LIVE will upgrade an aging SGI facility, serve Computer and Information Science curriculum needs, and enhance the computer science aspects of a new undergraduate program in Digital Media Technology spanning the Engineering, Fine Arts, and Communications Schools at the University of Pennsylvania.
1998 — 2002
Metaxas, Dimitris; Liberman, Mark (co-PI); Badler, Norman
Care: National Center For Sign Language & Gesture Resources @ University of Pennsylvania
The University of Pennsylvania and Boston University are collaborating on the establishment and maintenance of resources for research in sign language and gesture. The goal of this project is to make available several different types of experimental resources and analyzed data to facilitate linguistic and computational research on signed languages and the gestural components of spoken languages. Activities in the project include the following:
* A facility for collection of video-based language data will be established, equipped with synchronized digital cameras to capture multiple views of the subject.
* A substantial corpus of American Sign Language video data will be collected from native signers and made available in both compressed and uncompressed forms.
* Significant portions of the collected data will be linguistically annotated. The database of linguistic annotations will be made publicly available, along with the applications needed to access the database.
* Video data will be analyzed using computer-based algorithms, with analysis and software made publicly available.
Thus the project makes available sophisticated facilities for data collection, a standard protocol for such collection, and large amounts of language data. The combination of linguistic and computational expertise in this project will ensure scientific integrity of data collection, and will result in useful data for researchers in a variety of fields.
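As an illustrative sketch only (the field names are hypothetical, not the project's published annotation schema), a time-aligned gloss annotation for such a video corpus might tie each sign to a frame span so that one record can index the synchronized camera views and support simple queries:

```python
# Hypothetical sketch of a time-aligned sign language annotation record.
from dataclasses import dataclass

@dataclass
class GlossAnnotation:
    gloss: str               # e.g. "BOOK", "GIVE"
    start_frame: int         # first video frame of the sign
    end_frame: int           # last video frame of the sign
    dominant_hand: str       # handshape label, e.g. "B", "1"
    nonmanual: str = ""      # e.g. "brow_raise", "head_shake"

def overlapping(annotations, frame):
    """Return all annotations active at a given video frame."""
    return [a for a in annotations if a.start_frame <= frame <= a.end_frame]

if __name__ == "__main__":
    corpus = [
        GlossAnnotation("JOHN", 10, 34, "1"),
        GlossAnnotation("BOOK", 40, 70, "B"),
        GlossAnnotation("GIVE", 75, 120, "X", nonmanual="brow_raise"),
    ]
    print([a.gloss for a in overlapping(corpus, 80)])   # -> ['GIVE']
```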
1999 — 2002
Palmer, Martha (co-PI); Joshi, Aravind (co-PI); Badler, Norman
The Actionary: a Dictionary That Portrays Natural Language Expressions as Context-Sensitive Simulations of Human Actions @ University of Pennsylvania
Abstract
IIS-9900297 Badler, Norman I.; Joshi, Aravind K.; & Palmer, Martha. University of Pennsylvania. $178,712 - 12 mos.
The Actionary: A Dictionary that Portrays Natural Language Expressions as Context-Sensitive Simulations of Human Actions
The 'Actionary' is an action database that associates natural language expressions with context-sensitive graphical simulations acted out by "smart" virtual human agents. It rests on a foundation of Parameterized Action Representations (PARs) that explicitly link Lexicalized Tree-Adjoining Grammar structures to the Parallel Transition Networks that drive virtual human motion generators. PARs provide a conceptual representation of different types of actions: changes of state, changes of location (kinematic), and exertion of force (dynamic). To date, only change of state actions have been addressed by natural language processing systems. It is known that there are no linguistic distinctions between 'running,' 'jogging,' or 'loping,' and that these can only be distinguished by making reference to visual models and context. The Actionary approach defines actions involving continuous changes or process execution with reference to situated concrete models suitable for a visualized performance. The Actionary will facilitate:
* Translation of human action instructions into different languages for sample action execution for education (e.g., foreign language learning) and training (e.g., machinery operation and repair).
* Low bandwidth communication of multi-person activities: by transmitting compact textual instructions locally interpreted via the Actionary, smart agents can execute instructions for potential applications in remote skill training, virtual video-conferencing, and 3D virtual communities.
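As a rough sketch of the representational idea only (slot names and default values below are illustrative assumptions, not the actual PAR schema), a Parameterized Action Representation can be thought of as a verb anchor plus agent, object, path, and manner slots that are filled partly from the linguistic analysis and partly from the graphical context before being handed to a motion generator:

```python
# Hypothetical sketch of a PAR-like slot structure for situated verbs.
from dataclasses import dataclass, field

@dataclass
class PAR:
    verb: str                        # lexical anchor, e.g. "walk", "jog"
    agent: str                       # virtual human performing the action
    objects: list[str] = field(default_factory=list)
    path: list[tuple[float, float, float]] = field(default_factory=list)  # kinematic path
    speed: float = 1.0               # manner parameter distinguishing run/jog/lope
    preconditions: list[str] = field(default_factory=list)

def instantiate(verb: str, agent: str, goal: tuple[float, float, float]) -> PAR:
    """Fill a PAR from context; manner defaults per verb are assumed values."""
    manner = {"walk": 1.0, "jog": 1.8, "run": 3.0}.get(verb, 1.0)
    return PAR(verb=verb, agent=agent, path=[(0.0, 0.0, 0.0), goal], speed=manner)

if __name__ == "__main__":
    par = instantiate("jog", "Agent1", (5.0, 0.0, 2.0))
    print(par)
```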
2002 — 2005
Badler, Norman
CISE Research Resources: The LiveActor Virtual Reality Environment @ University of Pennsylvania
EIA 0224432 Badler, Norman I. University of Pennsylvania
Title: CISE RR: The LiveActor Virtual Reality Environment
This proposal, on creating, controlling, and interacting with real-time embodied virtual human agents, aims at supporting research projects that require interactivity support to (1) detect, model, and describe human participant motion, either for building action models or for providing real-time input to interpersonal interactions such as training, and (2) immerse the human participant in a visual environment conducive to the execution of actions in a context of objects and both real and virtual people. Components will be purchased to construct an immersive room for real-time participants in interactive experiences. The room will consist of an Ascension ReaCTor motion capture system and a four-surface rear-projection room, called the LiveActor.
In turn, this facility supports at least three research projects: Computational Models of Verb Semantics, Real-Time Decision Critical Training, and Virtual Animated Environments from Language. The first project utilizes a Parameterized Action Representation (PAR) that holds computational definitions of human and other agent motions. PAR is used both to synthesize animations and to recognize their presence in a motion-captured input stream. An immersive environment in which participants interact with each other and with virtual agents provides opportunities for capturing, characterizing, and representing genuine physical and emotional actions, and for employing these actions to affect and control reactive behaviors in virtual agents. The second project applies the graphical portrayal of embodied agents to close-encounter training requiring the user to analyze and react to facial actions, body posture, and gesture quality. Fundamentally different from VR navigation and exploration, these interactions require realistic human models and detailed, variable, controllable parameters. The last project encourages exploration, but not construction and animation, since the necessity to design and script VR worlds in advance constitutes a bottleneck. This project proposes an "imagination" machine in which users create, populate, and animate their own virtual worlds. Its primary ingredient is natural language processing that understands descriptions of situations and actions, depicts the graphical arrangements, and sets current and future context-dependent behaviors into its animated agents. The immersive LiveActor space allows a user to input her own body motions for specific yet parameterizable movements for characters and their interactions.
2002 — 2005
Badler, Norman; Metaxas, Dimitris (co-PI)
Synthesis and Acquisition of Communicative Gestures @ University of Pennsylvania
Procedural synthesis of natural and contextually appropriate gestures in embodied virtual human agents is challenging. Laban Movement Analysis (LMA) offers a descriptive system for human gesture qualities that fills the gap between pre-defined gesture playback systems and human animator intuition. A computational analog of LMA called EMOTE has been constructed whose parameters modify the performance qualities of arm gesture movements. EMOTE will be developed in several new ways:
* Connect EMOTE with an agent model so that an agent's affect, personality, and communicative needs set appropriate EMOTE parameters for gesture performance.
* Investigate motion analysis techniques for extracting EMOTE parameters from live dual or single camera views.
* Experimentally validate the automated acquisition of EMOTE parameters by using professional LMA notators for ground truth.
* Use the extracted parameters to create instances of parameterized actions which may be subsequently used for action, affect, and manner descriptions and, ultimately, for content-directed analysis of existing film or video material.
This study will help set synthetic agent animation techniques on a sound empirical footing, provide evidence that computers can in fact observe important motion qualities, and lead to strong connections between internal agent state and external behavior qualities.
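As a minimal sketch of the general idea (the numeric mapping below is an assumption for illustration, not the published EMOTE model), Effort-style parameters can be applied as a post-processing step that retimes and rescales a keyframed arm gesture, so the same stored gesture can be performed with different qualities:

```python
# Hypothetical sketch: Laban-Effort-like parameters modulating a gesture path.
import math

def apply_effort(keyframes, space=0.0, weight=0.0, time=0.0, flow=0.0):
    """keyframes: list of (t, x, y, z). Returns a retimed, rescaled copy.
    All four factors lie in [-1, 1]; the scaling constants are illustrative."""
    time_scale = 1.0 - 0.4 * time       # "sudden" (+1) compresses timing
    amp_scale = 1.0 + 0.3 * space       # "indirect" (+1) widens spatial reach
    jitter = 0.02 * max(0.0, -flow)     # "free" flow (-1) loosens the path
    out = []
    for i, (t, x, y, z) in enumerate(keyframes):
        wobble = jitter * math.sin(7 * i)
        out.append((t * time_scale,
                    x * amp_scale + wobble,
                    y * amp_scale,
                    z * amp_scale + 0.1 * weight))  # "strong" weight pushes forward
    return out

if __name__ == "__main__":
    gesture = [(0.0, 0.0, 1.0, 0.3), (0.5, 0.2, 1.3, 0.5), (1.0, 0.1, 1.1, 0.4)]
    print(apply_effort(gesture, space=0.5, time=1.0))
```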
2005 — 2006
Badler, Norman; Marcus, Mitchell
SGER: Generating Animations of American Sign Language Classifier Predicates @ University of Pennsylvania
American Sign Language (ASL) is a full natural language, with a linguistic structure distinct from English, used as the primary means of communication for approximately one half million deaf people in the United States. Furthermore, because they are unable to hear spoken English during the critical language acquisition years of childhood, the majority of deaf high school graduates in the U.S. have only a fourth grade English reading level. Because of this low English literacy rate and because English and ASL have such different linguistic structure, many deaf people in the United States could benefit from technology that translates English text into animations of ASL performed by a virtual human character on a computer screen. But previous English-to-ASL machine translation projects have made only limited progress. Instead of producing actual ASL animations, these projects have produced restricted subsets of the language, thus allowing them to side-step many important linguistic and animation issues, including in particular the ubiquitous ASL linguistic constructions called "classifier predicates" that are required in order to translate many English input sentences. Classifier predicates are an ASL phenomenon, in which the signer uses the space around his or her body to position invisible objects representing entities or concepts under discussion; the signer's hands show the movement and location of these objects in space. Classifier predicates are the ASL phenomenon that is most unlike elements of spoken or written languages, and they are therefore difficult to translate by machine translation software. In this research the PIs and their graduate students will build on prior research in ASL linguistics, machine translation and artificial intelligence, 3D graphics simulation and human animation, to design and implement a prototype software system capable of producing animations of classifier predicates from English text. In doing so, they will address some of the most challenging issues in English-to-ASL translation, with the goal of producing a software design that can serve as a robust framework for future implementation of a complete English-to-ASL machine translation system. The prototype implementation will have sufficient visual quality and linguistic breadth to enable a pilot evaluation of the design and the quality of the output animations by deaf native ASL signers.
Broader Impacts: This research will lead to significant advances in the state of the art relating to English-to-ASL machine translation software, which will eventually allow development of new applications to provide improved access to information, media and services for the hundreds of thousands of deaf Americans who have low English literacy. Instead of displaying English text, devices like computers, closed-captioned televisions, or wireless pagers could show deaf users an animation of a virtual human character performing ASL. Novel educational reading applications software for deaf children to promote English literacy skills could also be developed. The project will also expose the graduate students involved to research issues relating to ASL and animation, and will support a summer ASL language training program at Gallaudet University for these students.
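As a purely illustrative sketch (the data structures and the "drive past" rule below are hypothetical, not the project's design), planning a classifier predicate can be viewed as assigning discourse entities to loci in the signing space and then generating a classifier handshape plus a movement path between those loci:

```python
# Hypothetical sketch of classifier predicate planning in signing space.
from dataclasses import dataclass

# Classifier handshapes for broad semantic classes (illustrative subset).
CLASSIFIER = {"vehicle": "3", "person": "1", "flat_object": "B"}

@dataclass
class Entity:
    name: str
    semantic_class: str
    locus: tuple[float, float, float]   # assigned position in signing space (meters)

@dataclass
class ClassifierPredicate:
    handshape: str
    path: list[tuple[float, float, float]]

def drive_past(mover: Entity, landmark: Entity) -> ClassifierPredicate:
    """Build a 'vehicle drives past landmark' predicate: start at the mover's
    locus, pass close by the landmark, and continue beyond it."""
    lx, ly, lz = landmark.locus
    path = [mover.locus, (lx - 0.05, ly, lz + 0.10), (lx + 0.25, ly, lz + 0.10)]
    return ClassifierPredicate(CLASSIFIER[mover.semantic_class], path)

if __name__ == "__main__":
    car = Entity("car", "vehicle", (-0.30, 1.10, 0.25))
    house = Entity("house", "flat_object", (0.10, 1.10, 0.30))
    print(drive_past(car, house))
```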
2009 — 2013
Badler, Norman; Safonova, Alla
II-EN: Synchronous Multi-Sensor Human Motion Acquisition Facility @ University of Pennsylvania
"This award is funded under the American Recovery and Reinvestment Act of 2009 (Public Law 111-5)."
The objective of this project is to build a state-of-the-art Human Motion Acquisition Facility capable of capturing human motion data using an array of synchronized multiple sensors. It includes the hardware and software necessary to simultaneously capture multi-modal aspects of human performance, including human movement, ground reaction forces, video data, muscle activation patterns, heart rate, temperature, eye movement, etc. Importantly, all of these data streams are synchronized with one another.
The facility enables researchers to explore new frontiers in human motion synthesis. In particular, the captured data is used to study novel compact representations of human motions and to develop new algorithms to efficiently synthesize detailed realistic motions of a single character and crowds. The data is also used to build, animate and populate virtual worlds. The facility also greatly benefits existing computer graphics degree programs at UPenn (Ph.D, Masters, Undergraduate, and summer high school students). The use of the facility is an integral part of the curriculum in these programs.
Currently, there are no extensive, publicly available, high quality databases of human motion data synchronized with data from other sensors. All the data we obtain in this project will be made publicly available. The facility will therefore enable new research ranging from human motion synthesis in computer animation to medical studies of people with disabilities to researching novel simulators. As a result, it will foster inter-disciplinary research. The facility will also make computer graphics programs at UPenn even more exciting and will help in attracting more women to computer science.
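As an illustrative sketch of the synchronization problem such a facility addresses (stream names and sampling rates below are assumptions), sensor streams recorded at different rates can be aligned onto a common clock by nearest-timestamp lookup:

```python
# Hypothetical sketch of aligning multi-rate sensor streams to a common clock.
import bisect

def align(streams, clock):
    """streams maps name -> [(timestamp_s, value), ...] sorted by time.
    Returns, per clock tick, the nearest sample from every stream."""
    aligned = []
    for t in clock:
        row = {"t": t}
        for name, samples in streams.items():
            times = [s[0] for s in samples]
            i = bisect.bisect_left(times, t)
            # pick whichever neighbouring sample is closer to the clock tick
            if i > 0 and (i == len(times) or t - times[i - 1] <= times[i] - t):
                i -= 1
            row[name] = samples[i][1]
        aligned.append(row)
    return aligned

if __name__ == "__main__":
    mocap = [(i / 120.0, float(i)) for i in range(13)]        # 120 Hz markers
    force = [(i / 1000.0, 9.8) for i in range(101)]           # 1 kHz force plate
    heart = [(0.0, 62.0), (1.0, 63.0)]                        # 1 Hz heart rate
    print(align({"mocap": mocap, "force": force, "heart": heart},
                clock=[0.0, 0.05, 0.10]))
```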