1988 — 1995
Peshkin, Michael
Presidential Young Investigator Award: Robotic Assembly @ Northwestern University
This research deals with the mechanical interaction of parts with each other and with a robot during an assembly operation. Since friction is an important element of these interactions, techniques will be developed to exploit friction, enhancing both the capabilities and the reliability (robustness) of robotic assembly operations. One technique involves force-controlled manipulation in which the relationship between the force sensed by the robot arm and the displacement of the arm from a nominal position can be specified. A theoretical understanding of assembly operations performed by a force-controlled robot will be sought, with the objective of finding and implementing algorithms for the automated planning of such assembly operations. The important, but more difficult, inverse problem of designing parts which are easy to assemble will be considered.
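The specified relationship between sensed force and displacement described above is the core idea of stiffness (impedance) control. As a rough illustration only (the 2-D setup, matrix values, and function name below are invented for this sketch, not taken from the project), a programmable stiffness law might look like:

```python
import numpy as np

def stiffness_control_force(x, x_nominal, K):
    """Commanded restoring force for a force-controlled arm:
    the farther the arm is displaced from its nominal position,
    the harder it pushes back (F = -K (x - x_nominal))."""
    return -K @ (x - x_nominal)

# Hypothetical 2-D example: stiff along x, compliant along y, so a part
# can comply sideways while being pressed into an assembly.
K = np.diag([1000.0, 50.0])          # stiffness, N/m (assumed values)
x_nominal = np.array([0.0, 0.0])     # nominal position, m
x = np.array([0.002, 0.010])         # current displacement, m
F = stiffness_control_force(x, x_nominal, K)
# F = [-2.0, -0.5] N: strong centering along x, gentle along y
```

Choosing the stiffness matrix per-axis is what lets friction and contact forces guide the part along compliant directions while the stiff directions maintain position.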
1996 — 1998
Peshkin, Michael; Mirkovic, Srdjan
Cise Research Instrumentation: Robot For Human Workspace @ Northwestern University
CDA-9529463. The requested equipment, computer hardware and software for real-time control of robotic systems, will be used to further research in computer and information science and engineering in the departments of Mechanical Engineering and Electrical and Computer Engineering. The real-time control and computation system is associated with the robot requested under this proposal and with other robots in several closely related projects. Specific research tasks are as follows. (1) Research and development of robot-assisted surgical systems; most immediately, a system for directing linear insertions, such as biopsies and orthopedic pins, under fluoroscopic imaging. These systems require fast dedicated computers for control, and a 6-axis force sensor to facilitate interaction with a surgeon. (2) Research and development of assist devices to be used in the final assembly of automobiles and light trucks, with General Motors. This project requires a considerable amount of high-speed real-time digital signal processing hardware and associated software.
2000 — 2005
Peshkin, Michael; Brinson, L.; Wilson, Bruce (co-PI)
Self-Sensing Actuation and Control With Shape Memory Alloys @ Northwestern University
Shape memory alloys (SMAs) have been used in a variety of actuation, energy-absorbing, and sensing applications. The key feature of this material is its ability to undergo large, seemingly plastic strains and subsequently recover these strains when the load is removed or the material is heated. This unique ability arises from a reversible thermoelastic phase transformation between austenite and martensite, and it allows SMAs to serve as very compact actuators. Because SMAs can be used for both material stiffening and energy absorption, they have generated much interest in the smart structures field. Further, no other material or device can generate significant tensile forces over a large displacement while occupying such a small volume. A second useful feature of many SMAs is a change in resistivity with a change in strain, which has enabled investigators to use them as coarse position sensors. The coarseness is due to the complex relationship between resistivity and the material state, and its coupling with mechanical load and temperature.
Given the material characteristics above, economical, power-dense self-sensing-actuation (SSA) can be achieved with shape memory alloys. However, the nonlinear nature of SMA actuation and sensing, incomplete understanding of SMA thermomechanical response and the lack of suitable models for control result in an under-utilization of this useful material as an actuator, sensor, and SSA. In this research, a focused effort will be targeted at (1) improving the characterization of SMAs for a sensing and control context, (2) refining material modeling, (3) developing model-based control algorithms, and (4) demonstrating these in hardware to advance the understanding and range of applications of SMAs as actuators, sensors, and SSAs.
2000 — 2004
Colgate, J. Edward (co-PI); Peshkin, Michael; Stewart, Paul (co-PI); Buttolo, Pietro
Goali: Haptic Cobots @ Northwestern University
We propose research underlying the use of cobots for haptic display of solid models. The project brings together CAD graphics/haptics researchers at Ford Motor Company with haptics/cobot researchers at Northwestern University. A recently completed 3R haptic cobot will be the experimental testbed for the project.
Large-scale haptic display opens up new opportunities, because people interact with large objects (arm size or larger) very differently from the way they interact with small objects (hand size or smaller). An example is the current use of full-size "clay bucks" in automobile design. A sense of the feel and sweep of automobile body panels cannot be obtained by touching a scale model with a finger, as current haptic displays permit. However, a full-sized virtual model, experienced through the proprioception of whole-arm motion in conjunction with the excellent CAD graphics now available, could bring virtual prototyping and surface editing to a new level of utility.
Cobot control for haptics differs markedly from robot control for haptics, because cobots use servo-steered rolling mechanisms, rather than servomotor actuators, to create virtual surfaces. The project addresses (1) development of a control methodology for the new power-injection architecture of the 3R cobot; (2) cobot-appropriate algorithms for haptic surface rendering directly from NURBS descriptions of surfaces; (3) control laws for dynamic behaviors beyond hard surfaces, including compliant and viscous effects, inertia masking, and artificial potentials; and (4) algorithms for solid-model collision detection that allow a collision to be predicted and rendered by the cobot without exceeding its dynamic limits.
2000 — 2004
Peshkin, Michael; Lynch, Kevin; Dewald, Julius (co-PI); Wannasuphoprasit, Witaya
Itr: Virtual Surfaces For Human/Robot Mutual Labor @ Northwestern University
This is the first year of funding of a three-year continuing award. The project addresses the need to assist humans in heavy materials handling, since such tasks expose workers to known risk factors for work-related musculoskeletal disorders, such as lifting, bending, twisting, and the maintenance of awkward postures. The project will study the use of cobots to implement ergonomic guiding surfaces that assist a human in manipulating a heavy load. With such a mechanism, the load can be constrained to move along a frictionless guide, and the human can apply forces in comfortable directions while the guide directs the motion to the goal. Cobots use rolling contacts to directly implement passive guiding constraints; as a result they are safer to interact with and use less power than a conventional robot. To design assistive guide constraints, the project will study how humans naturally interact with constraints, assuming that the essential nature of this interaction can be modeled by the human's desire to minimize some notion of effort. With this model, the PIs will design guides that minimize the necessary human effort and will experimentally verify the correctness of the model. The PIs will develop software for automatically planning near-optimal guides in cluttered workspaces and will test the guides, using cobot hardware, on realistic materials handling tasks. The resulting ergonomic virtual surfaces for materials handling will reduce the occurrence of work-related musculoskeletal disorders. They will also increase productivity, providing an intuitive and safe interface between human and computer. Finally, this work will expose new principles in human motor control, as the design of assistive guiding surfaces requires a better understanding of how humans interact with constraints.
2004 — 2008
Colgate, J. Edward; Peshkin, Michael; Ehmann, Kornel (co-PI)
Variable Compliance Haptic Field Displays @ Northwestern University
Prop ID: 0413204
Abstract
This research will address a novel class of haptic devices: "Haptic Field Displays" (HFDs). HFDs are large-scale arrays of moveable pins or "tactels" covered by a flexible graphical display. HFD pin arrays will be much like the "tactile arrays" that have been developed for displaying Braille or virtual environments, but with some key differences. For instance, HFDs will be full-screen, enabling scanning with the fingertip or the use of multiple fingers. They will operate not by actively controlling the displacement or force of each tactel, but by controlling compliance. The intellectual merit of this research stems from two sets of studies. The first is a set of psychophysical studies that will provide quantitative specifications for HFDs, including the necessary dynamic range of tactel compliance and the resolution (or just-noticeable difference) of compliance. These studies will also lead to an initial set of HFD primitives that naturally encode affordances (such as pushing, sliding, or rotating). Finally, these experiments will assess human operator performance in using HFDs, especially in spatial localization and analog level-setting tasks.
The second set of studies will address the design and integration of electronically programmable HFD tactels, each a meso-scale mechatronic system including a variable compliance mechanism, actuation, and sensing. These studies will focus on the meso-scale realization of two low-power "semi-active" approaches: variable damping achieved by controlling an orifice, and variable compliance achieved by placing a continuously variable transmission (CVT) between the fingertip and a compliant element. Broader impacts of this research include the potential to improve the usability of graphical interfaces for normally-abled populations, as well as to increase the accessibility of interfaces for differently-abled populations such as the elderly, those with low vision, and those who are blind.
In addition, it has the potential to increase safety and utility in situations where high demands are placed on vision. For instance, HFDs would allow automobile drivers to focus visual attention on the road while simultaneously taking advantage of the growing assortment of electronic devices that may be found in a vehicle. This research will also impact undergraduate and graduate education at Northwestern more generally: tactel design and control will be integrated into at least two project-based courses taught by the P.I.s.
2004 — 2008
Peshkin, Michael; Patton, James (co-PI); Hartmann, Mitra (co-PI); Vishton, Peter
Hsd/Dhb: Emergent Social Behaviors in Sensorimotor Control @ Northwestern University
A most basic form of human social interaction is the physical cooperation necessary to perform a manual task in pairs (dyads) or groups. This project will explore how the human sensorimotor control system implements cooperative motor control. In preliminary work, the project team has demonstrated that when a dyad embarks on a repeated simple manual task requiring speed and accuracy, motor strategies quickly arise that not only differ significantly from individual strategies on the same task, but also provide better performance. This result runs counter to conventional wisdom that high accuracy tasks are best performed by one individual alone. Nevertheless, from both evolutionary and ontological perspectives, the result is reasonable: humans are social animals and have developed sophisticated ways of working together physically. Motor interactions represent a social communication mechanism distinct from facial expression, gesture, and spoken language.
The project team encompasses cognitive science, neurobiology, robotics, and sensorimotor control, and will:
Investigate the language of physical communication between two or more individuals as they develop a cooperative strategy to perform a mechanical task. Identify channels of this communication, for instance modulation of arm stiffness. Investigate the adaptation that underlies the emergence of cooperative behaviors during physical communication, as the participants negotiate, compromise, specialize, teach and learn, or in some other way arrive at an effective cooperative behavior.
Investigate cognitive influences on cooperative behaviors. Determine the extent to which the cooperative behaviors reflect cognitive influences on motor control, as opposed to implicit or inherent biomechanical properties of the sensorimotor system.
Investigate the emergent behaviors as specifically social phenomena. Assess their extension to group sizes of more than two, and the social aspects of adaptation, such as the effect of errors or "breaches of trust." Investigate the substitution of an automated partner (robot) for a human partner.
Investigate the factors at the sensorimotor level that allow dyadic motion control to optimize performance better than individual motor control. Hypotheses include reduction in delays associated with the triphasic burst pattern of muscle activation, and the partitioning of motor noise into separate spaces.
Broader impacts include improving our fundamental understanding of therapist/patient interactions during physical or occupational therapy, many aspects of which are repetitive dyadic physical interaction. The work may also lead to better ways to make use of the social dynamics between individuals in physical interaction, which would be relevant to situations such as hands-on teaching and learning in sports training or helicopter flight training, shared control of teleoperators or of unmanned aerial or underwater vehicles, and shared control of minimally invasive surgery or telesurgery.
2005 — 2008
Peshkin, Michael; Hartmann, Mitra
Neuromechanical Models of the Rat Vibrissal System @ Northwestern University
Animals use movements to acquire and refine incoming sensory data to construct meaningful representations of the environment. This process is often called "active sensing." During exploratory behaviors, each movement an animal makes aids in the extraction of task-relevant sensory data. As yet, however, neuroscientists have little understanding of how the body and brain work together to acquire, encode, and process the sensory data generated through movement. To study the neuromechanical principles that underlie active sensing behaviors, the investigators will construct an active sensing system in hardware based on careful modeling of a well-understood sensorimotor system: the rat vibrissal (whisker) array. The rat whisker system is an ideal model for studying active sensing behaviors. When exploring their environment, rats sweep their whiskers back and forth in the air and against objects at frequencies typically between 5 and 12 Hz. Using this whisking behavior, the rat can extract accurate information about an object's spatial properties, including size, shape, orientation, and texture. The core of the project involves (1) characterizing the mechanics of rat whiskers and natural whisking movements, both when moving freely in air and when in contact with objects; (2) constructing an array of actuated, biomimetic (robotic) whiskers with sensors at the base; and (3) developing models to interpret the spatiotemporal patterns of whisker sensory activation (both real and robotic) to extract object features. The results will directly generate hypotheses about how information is represented in the rat nervous system, and shed light on the many hundreds of neural recordings from rat somatosensory cortex (barrel cortex) that are performed each year. This project begins to establish rigorous mathematical models for sensory encoding in the whisker system that may generalize to other sensorimotor pathways.
In computer science, the research may inspire studies on unsupervised 3D object recognition using non-optical sensors. The project contributes to the interdisciplinary scientific training of both graduate and undergraduate students, brings a quantitative engineering approach to neuroscience, and directly complements coursework in Neural Engineering being developed at Northwestern University.
2006 — 2009
Cao, Jian; Peshkin, Michael; Wang, Chuan-Tao
Goali/Collaborative Research: Integrated Sensing System For Stamping Monitoring and Control @ Northwestern University
The objective of this Grant Opportunity for Academic Liaison with Industry (GOALI) Collaborative Research project is to invent a new sensing system, integrated with a stamping press, to significantly improve the observability and controllability of the stamping process. The objective will be achieved by investigating the geometrical scalability of draw-in sensors that are based on the principle of mutual electrical inductance, and of wireless pressure sensors that communicate with remote ultrasound receivers through acoustic waves. The new sensors will be embedded within the stamping tooling without interfering with high-rate operations, thus providing an enabling tool that does not yet exist in the state-of-the-art stamping industry within or outside the US. The intellectual contribution of this project will be to extend fundamental laws of physics on electromagnetic coupling and acoustic telemetry to guide the functional and parametric design of the envisioned sensors, such that they can accurately and reliably measure the required process parameters given the significant impact forces and interferences typically associated with stamping operations, thus advancing the science base for stamping process monitoring.
The broader impact of this research includes creating means for direct comparison between physical measurements and numerical simulations through tooling-embedded sensors, such that monitoring and control of the process are integrated with, and therefore take advantage of, advances in numerical tools. Research results will be shared with both the academic and industrial communities through open-house demos and a short course. Joint seminar series and student exchanges are also planned.
2009 — 2011
Colgate, J. Edward; Peshkin, Michael
Eager: Preliminary Investigation of Virtual Tactual Stereognosis @ Northwestern University
Imagine reaching into your pocket, locating and grasping your car's key fob amongst a clutter of coins, bills and keys, then finding the unlock button (not the lock button, not the trunk release, especially not the panic button) and pressing it. This is an example of Tactual Stereognosis (TS), the ability of people to identify familiar items using touch alone. It is commonplace and uneventful. Yet no programmable haptic interface has ever been developed that would allow people to identify virtual objects using what might be termed Virtual Tactual Stereognosis (VTS). Creating such an interface is a long-range goal of the PIs. VTS has long been out of reach because it depends on active touch, multi-finger interaction, and bare fingertips, which are all difficult to achieve with existing display technology. The PIs' recent research has led to a new class of prototype devices (xPaDs), which address each of these limitations. These devices use friction modulation to control forces between the fingertip and a flat plate. The basic TPaD (Tactile Pattern Display) can create effects such as virtual textures, virtual bumps and holes, and more. More advanced versions such as the ShiverPad and SwirlPad synchronize in-plane vibrations to friction levels in order to generate active pushing forces on the fingertip. It is possible to generate many additional effects with these devices, including virtual edges that can be traced with the fingertip. The xPaD devices thus appear to be well suited to VTS. They are active touch devices that work with bare fingertips (in other words, the xPaD is fixed and the finger slides over it). Moreover, they are very compact, potentially enabling multiple panels to be arrayed over the surface of an object in order to support a multi-finger interface.
In this study the PIs will begin exploration of a multi-xPaD interface, by performing experiments on an interface consisting of two opposing ShiverPads with subjects employing a pinch grip (that is, index finger on one ShiverPaD, and thumb on the other), in order to demonstrate "binding," the perceptual fusion of the finger and thumb percepts into an integrated object representation.
Broader Impacts: Virtual Tactual Stereognosis is an important goal for many reasons. Graphical displays have become an important form of interface in venues ranging from the living room, to the office, to the car, to any place that a person may be. Yet as graphical displays grow more prevalent, natural touch interactions seemingly grow more obsolete. VTS aims to achieve the opposite, to empower future interfaces with sophisticated tactual capabilities that engage perceptual as well as sensory mechanisms in the hand and brain. Imagine a doctor able to simultaneously look at and palpate tissue within the body, a pregnant mother able to caress the ultrasound image of her unborn child, an autistic child able to cooperate with an animated character in a construction task, or a driver able to reach out, find and operate touch screen controls without taking his/her eyes off the road. VTS would be an enabler for new types of electronic displays for the blind. And it would be a powerful new tool to extend our basic knowledge of haptic perception as it occurs naturally.
2010 — 2015
Colgate, J. Edward; Peshkin, Michael
Hcc: Medium: Collaborative Research: Surface Haptics Via Tractive Forces @ Northwestern University
Surface Haptics, or the creation of virtual haptic effects on physical surfaces, is a topic of rapidly growing importance in human-computer interaction because of the increasingly widespread use of touch screens. Touch is at once an elegant and maddening interface modality. It is elegant in its simplicity: one can make a selection or tap a button or key with no intervening mouse or joystick. Moreover, touch (especially multi-finger touch) supports gestures, such as swiping and expanding, which are satisfyingly natural. It is maddening, however, due to the lack of tactile and kinesthetic feedback that are so critical to natural touch. Typing on a virtual keyboard, for instance, is typically an experience of visually guided hunt-and-peck with liberal use of the backspace key.
In this research the PIs will further develop a new class of surface haptic devices, called xPaDs, that promise to enrich the use of touch screen and touchpad interfaces for sighted as well as blind users. xPaDs are notable because they provide controllable shear forces between the fingertips and an ordinary sheet of glass. By controlling shear force in response to a measure of fingertip position (which may be obtained using a variety of existing technologies), it is possible to simulate a huge array of virtual effects; examples include toggle switches that flip from one state to another (each state is a "potential well" on the glass surface that pulls the finger to a given location), and contours that can be easily traced.
The heart of the current project lies in the systems engineering that will lead to practical and effective devices capable of controlling a force at one or more fingertips, and in the psychophysical and application-based studies that will teach us how these capabilities may best be used. xPaDs are sophisticated dynamic systems that employ ultrasonic vibrations to modulate friction, synchronized with in-plane vibrations to produce controllable force vectors. The PIs will address the challenges of controlling force individually at each fingertip, of producing xPaDs with large surface area, and of minimizing energy consumption and audible noise generation. They will use the idea and methodology of "pop-out" experiments to find haptic primitives, that is to say, features the human perceptual system can extract with minimal or no perceptual load. The PIs will measure the information transmission capacity of surface haptic devices treated as symbolic channels. And they will explore the ability of the perceptual system to "bind" surface haptic features presented to different fingertips into a meaningful, coherent whole. These studies will position the PIs to investigate a set of applications for the blind.
Broader Impacts: Computer interfaces for the blind often rely heavily on speech, which necessarily presents information serially. The PIs argue that a haptic surface can augment a speech-based interface with critical spatial information. They will study the editing and reading of mathematical expressions, the locating of key content on a web page, the navigating of intersections, and the planning of routes with tools such as Google Maps. In addition, the PIs will develop a low-cost xPaD development kit, make the plans and code available on the Internet, and develop a high school enrichment unit based upon these materials.
2013 — 2017
Peshkin, Michael; Colgate, J. Edward
Hcc: Medium: Collaborative Research: Force Feedback For Fingertips @ Northwestern University
Surface haptics is the creation of programmable haptic effects on physical surfaces such as touch screens and touch pads. Unlike traditional force feedback devices that require the operator to grasp an end effector, surface haptic devices must provide feedback directly to the fingertips. With the dramatic rise of touch screen interfaces in recent years, many approaches to surface haptics have been explored, including vibrotactile, shape morphing, and variable friction. The PI and his team have pioneered an approach in which the surface generates controlled shear forces on each fingertip. This approach, Force Feedback for Fingertips (F3), gives fingertips the opportunity to interact with physics-based virtual environments, much as force feedback devices enable the whole hand to do. With F3, fingers can interact with virtual objects that have mass, stiffness and damping as well as more complicated dynamics (e.g., collisions, mechanisms, and force fields). By coordinating haptic effects at multiple fingertips, even more compelling illusions can be generated.
The technology, underlying science, and application of F3 are, however, still in their infancy. F3 works by coupling lateral vibrations to some form of rectification. For example, one approach involves high-frequency lateral vibrations of the surface synchronized with a friction reduction effect, resulting in a slip-push transition at each oscillation. The friction is modulated by means of electrostatic forces or acoustical stimulation. Current approaches work at ultrasonic frequencies, but little is known about the mechanical or electrical behavior of fingertips at these frequencies, or how energy transfer from a surface to the finger can be optimized.
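The slip-push rectification described above can be illustrated with a toy numerical model (all parameter values below are invented for this sketch; the real devices operate ultrasonically, with friction modulated electrostatically or acoustically): switching friction high while the plate moves one way and low while it returns yields a nonzero average shear force from a zero-mean vibration.

```python
import numpy as np

# One cycle of lateral surface vibration; the finger is assumed stationary.
N = 0.5                  # normal force, N (assumed)
mu_hi, mu_lo = 0.9, 0.1  # friction coefficients with/without friction reduction (assumed)
f = 30e3                 # lateral vibration frequency, Hz (assumed)
t = np.linspace(0.0, 1.0 / f, 1000, endpoint=False)
v_surface = np.cos(2 * np.pi * f * t)   # surface velocity waveform (zero mean)

# Synchronize: high friction while the surface moves in +x, low friction otherwise.
mu = np.where(v_surface > 0, mu_hi, mu_lo)

# Kinetic friction drags the finger in the direction of relative surface motion.
force = mu * N * np.sign(v_surface)

net = force.mean()  # > 0: a net push per cycle despite the zero-mean vibration
```

The asymmetry between the high- and low-friction half-cycles is what rectifies the vibration into a controllable shear force; reversing the synchronization reverses the push.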
This research will produce new knowledge in three main areas: the physical underpinnings of F3, device design, and interaction design. First, both tribological and acoustic measurements will be made to elucidate the mechanisms by which shear forces are generated. A high-bandwidth tribometer and optical imaging system will allow friction to be studied, and a custom-built exciter will allow the propagation of acoustic energy in the fingertip to be studied. Laser Doppler vibrometry will be used to measure surface wave propagation while magnetic resonance elastography will be used to study shear wave propagation within the subcutaneous tissues. Fractional calculus and finite element techniques will then be used to build biologically plausible models of fingertip tribology and mechanics that match the data. Second, a new generation of high-performance F3 devices will be developed. Armed with good models, it will be possible to design impedance-matched devices so that force production is maximized and energy wastage is minimized. Additionally, these new devices will provide control over the force vector at each of multiple fingertip locations. Third, novel multi-finger interactions will be designed. The key idea is that sophisticated percepts, such as "objects" that can be grasped and that feel as though they are moving relative to the surface, can emerge from properly coordinated fingertip forces due to Gestalt-like grouping principles.
Broader Impacts: Historically, the PI and his team have had greatest impact when providing technology to and collaborating with colleagues in human-computer interaction. Inspired by this, an open source F3 kit will be developed and shared. In addition, undergraduate and high school students will participate in the research, developing software routines and sample applications for the open source kit. Finally, the kit will be integrated with two pedagogical innovations already implemented by the investigators: flipped classrooms and portable laboratories.
2014 — 2017
Makhlin, Alexander; Solberg, James (co-PI); Peshkin, Michael; MacIver, Malcolm (co-PI); Kording, Konrad (co-PI); Smith, Joshua
Nri: Electrosense Imaging For Underwater Telepresence and Manipulation @ Northwestern University
Human telepresence underwater is essential for tasks such as security sweeps in harbors and oil field servicing. Co-robotic solutions are needed, because the risks are great for human divers, while autonomous robots do not deal well with contingencies. A major problem is that vision works poorly in murky environments, such as when mud is kicked up from the bottom. In this National Robotics Initiative (NRI) project the researchers are investigating and developing a replacement for vision -- electrosense -- used by Amazonian fish that navigate and hunt in murky water. These "weakly electric fish" generate an AC electric field that is perturbed by objects nearby. Electroreceptors covering the body of the fish detect the perturbations, which the fish decodes into information about its surroundings. The researchers are developing methods of preprocessing electric images for human understanding, and new computed methods for machine interpretation.
The research creates electrosense hardware and practical testbeds for underwater navigation and manipulation. It investigates methods and software to facilitate human interpretation of electric images, as well as machine interpretation. In hardware, the researchers are creating a kilopixel-scale electrosense array as an input sensor for human interpretation of electric images, and developing preprocessing algorithms to make human interpretation workable. The researchers are also using sparser, non-coplanar groups of electroreceptors on a manipulator for control of pre-grasp and manipulation tasks. For human interpretation, electric image preprocessing includes contour painting and spatial high-pass filtering, as well as temporal filtering. For machine interpretation, methods include specific recognition strategies for simple geometric primitives and sparse beamforming techniques for more complex environments.
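Spatial high-pass filtering, one of the preprocessing steps named above, suppresses the slowly varying background of an electric image and emphasizes the localized perturbations that objects create. The sketch below is a hedged illustration of that idea only; the 3x3 local-mean filter, the image, and all names are assumptions, not the project's actual pipeline.

```python
import numpy as np

def high_pass(image):
    """Subtract a 3x3 local mean from each pixel (a simple spatial high-pass filter)."""
    padded = np.pad(image, 1, mode="edge")         # replicate edges so borders stay flat
    local_mean = np.zeros_like(image, dtype=float)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            local_mean += padded[1 + dy : 1 + dy + image.shape[0],
                                 1 + dx : 1 + dx + image.shape[1]]
    local_mean /= 9.0
    return image - local_mean

# A flat background with one "perturbing object": the high-pass output is
# zero in the flat region and nonzero only near the object.
img = np.ones((8, 8))
img[3:5, 3:5] = 2.0          # hypothetical field perturbation from a nearby object
hp = high_pass(img)
```

After filtering, the uniform background contributes nothing, so a human (or a recognition algorithm) sees only the object-induced structure.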
|
0.915 |
2014 — 2017 |
Peshkin, Michael Ehmann, Kornel (co-PI) Cao, Jian Stabile, Frank |
N/A |
PFI:AIR-TT: Hybrid Tri-Pyramid Robot: A Novel Type of Double-Sided Incremental Forming Machine @ Northwestern University
This PFI: AIR Technology Translation project focuses on translating an innovative dieless forming technology to fill the need for rapidly producing three-dimensional sheet products, whether for prototyping or for real applications. The project will result in a desktop prototype machine, the hybrid tri-pyramid robot, for the double-sided micro-incremental forming process. This tri-pyramid robot has the following unique features: the ability to manipulate an object by generating three orthogonal translational output motions in space, and the ability to provide an adaptive clamping mechanism on the workpiece material. These features provide the following advantages: a machine with a higher load/weight ratio, stiffness, and accuracy than the leading competing technologies; and a flexible forming center that does not require geometry-specific tooling, which has not existed in this market space. This project addresses the following technology gaps as it translates from research discovery toward commercial application: a novel translational parallel manipulator, an integrated model of the combined meso-micro positioning system, and an adaptive clamping mechanism.
The project engages Scimplicity LLC to perform technology demonstration using commercial products that are currently made with the traditional deep drawing or stamping process in this technology translation effort from research discovery toward commercial reality.
The hybrid tri-pyramid robot is important because it will be able to rapidly produce products with complex, high-precision micro-features such as fuel cell bipolar plates, membranes, heat exchanger plates, and spray nozzle heads. The ability to create such features and products will find applications in aerospace, medical instruments, electronics, fiber optics, and precision laboratory equipment. In addition, the potential economic impact is expected to extend to high-precision milling, positioning stages, high-speed manipulation for pick-and-place operations, and general robotic assembly operations over the next 3-10 years, which will contribute to U.S. competitiveness in manufacturing.
|
0.915 |
2015 — 2019 |
Peshkin, Michael Colgate, J. Edward |
N/A |
CHS: Large: Collaborative Research: TextureShop: Tools for the Composition and Display of Virtual Texture @ Northwestern University
When we interact with the physical world, touch is a vitally important sensory channel, but when we interact with the digital world, that is not yet the case. Historically, this situation may have been principally due to inadequate tactile displays, but that limitation is quickly disappearing. Increasingly, the principal limitation is the lack of tactile content. The goal of this collaborative research, which involves scientists at three institutions, is to empower the content creator by enabling people to perform the same sorts of operations with tactile textures that they routinely perform with photographs. Those operations include "capturing" a texture, building a mathematical representation of it, creating and displaying synthetic versions that feel very much like the original, enhancing it in various ways (e.g., making it rougher or more velvety), and ultimately "composing" novel textures that nonetheless feel realistic and credible. As a tangible step in this direction, an open source, open hardware project begun under prior NSF support will be continued and expanded. That project resulted in the distribution of surface haptic devices to about a dozen different labs, leading to a variety of research studies. In this project, a low-cost surface haptic display and a variety of applications and software tools will be distributed to about 50 early adopters in the research community. Those individuals will be engaged in this research (e.g., by helping to "tag" various textures), and will be empowered to carry out their own research. In addition, workshops will be organized at major human-computer interaction conferences to support the growing surface haptics community.
This work is timely and compelling for a number of reasons. First, scientific understanding of the physical and neuronal bases of texture perception has advanced considerably in recent years. For instance, the relationships between vibrations on the skin (produced when a finger slides across a surface), spike timing in afferent neurons, and high-level percepts such as recognition of a specific texture, have recently been elucidated. Second, "surface haptic" technologies for displaying texture to the bare fingertips have also advanced significantly and can now display complex stimuli across the full bandwidth of tactile acuity. Third, the prevalence of touch screen interfaces has created a plethora of applications such as children's e-books, interfaces for the blind, games, and automobile control panels, which would be well-served by high quality tactile content. The merit of this research is that it will provide a principled foundation for both the creation and manipulation of that content. Contributions will include: the development of a "tactile camera" that is able to capture the relevant frictional and vibratory data from which realistic textures can be recreated; a novel mathematical representation of the salient aspects of texture as well as algorithms for synthesizing artificial textures on the basis of that representation; a suite of techniques for enhancing aspects of texture by direct operation on the mathematical representation, interpolation between multiple textures, and interaction with audio cues; and finally a set of tools for composing novel textures including search, texture combination and scale transformation.
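One common route from captured vibration data to a synthesized texture, consistent with the capture-and-resynthesis pipeline described above, is to keep the magnitude spectrum of a recorded friction-induced vibration and randomize its phases, yielding a new waveform with the same spectral content. This sketch is a generic illustration under assumed parameters (sample rate, signal), not the project's actual representation or algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1000                                  # hypothetical sample rate (Hz)
t = np.arange(fs) / fs

# Stand-in for a recorded fingertip vibration: a tone plus broadband noise.
recorded = np.sin(2 * np.pi * 40 * t) + 0.3 * rng.standard_normal(fs)

# Keep the magnitude spectrum, discard the phases.
spectrum = np.fft.rfft(recorded)
mags = np.abs(spectrum)

# Resynthesize with random phases: a different waveform, same spectral content.
random_phases = rng.uniform(0, 2 * np.pi, mags.shape)
synthetic = np.fft.irfft(mags * np.exp(1j * random_phases), n=fs)
```

Because texture perception is driven largely by the statistics of skin vibration rather than the exact waveform, phase-randomized resyntheses of this kind can feel similar to the original recording while never repeating it exactly.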
|
0.915 |