1991 — 1994 |
Hager, Gregory |
N/A |
Resource-Bounded Sensor-Based Decision Making in Unconstrained Environments
A central problem in sensor data fusion is the recovery of complex, multi-component models from various sensory modalities. An essential component of this process is a description of what decision must be made from the recovered model. This description is referred to as the sensing task. The goal of this research project is to develop and analyze techniques for recovering the minimal or least detailed model required to make a satisfactory decision for a given sensing task. In particular, the focus is on the task-directed recovery of composite models (models composed of several components). The model recovery method is based on the numerical solution of systems of nonlinear constraints using interval bisection. Previous work, based on making decisions about objects modeled by a single parametric form, showed that these methods are general, natural, simple to implement, and computationally effective. In addition, the investigations were able to incorporate notions of the cost of computation and the value of information into the recovery process, and to terminate the recovery process when the model with the highest net value (decision payoff minus computational cost) was reached. They are now extending the underlying recovery method by incorporating model refinement and data segmentation in a manner that also exploits information about the sensing task. The results are expected to have particular impact in application areas where good a priori models are not available. Examples of these domains include classification and sorting of irregular (e.g. naturally occurring) objects, supervisory control, and ultimately, sensor-based control of partially or fully autonomous vehicles.
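The interval-bisection recovery method described above can be illustrated with a minimal one-dimensional sketch (a hypothetical simplification, not the project's implementation): candidate intervals are discarded when a bound on the constraint's range excludes zero, and bisected otherwise, until the surviving intervals are smaller than a tolerance.

```python
def solve(f_range, lo, hi, eps=1e-6):
    """Return subintervals of [lo, hi] of width < eps that may contain
    a root, pruning intervals whose range bound excludes zero."""
    stack, out = [(lo, hi)], []
    while stack:
        a, b = stack.pop()
        fmin, fmax = f_range(a, b)
        if fmin > 0 or fmax < 0:        # constraint provably unsatisfied here
            continue
        if b - a < eps:                  # small enough: accept this box
            out.append((a, b))
        else:                            # otherwise bisect and recurse
            m = 0.5 * (a + b)
            stack += [(a, m), (m, b)]
    return out

# range bound for f(x) = x**2 - 2 on [a, b] with 0 <= a <= b
def f_range(a, b):
    return a * a - 2.0, b * b - 2.0

boxes = solve(f_range, 0.0, 2.0)
root = sum(boxes[0]) / 2.0               # midpoint of a surviving interval
```

Here the exact range bound relies on f(x) = x^2 - 2 being monotone on [0, 2]; a full system would use general interval arithmetic over multi-dimensional parameter boxes.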
|
0.915 |
1991 — 1993 |
Kriegman, David Hager, Gregory |
N/A |
Engineering Research Equipment Grant: a Range Finder and Manipulator For Empirical Verification of Sensor-Based Decision Making
The objective of this research equipment effort is to support a broad, aggressive research program in sensor-based robotics and automation. Purchase of a laser range scanner and a robot arm will support several research projects, including: sensor data fusion, object recognition, sensor-based reasoning in controlled and uncontrolled environments, and combined manipulation and mobility. This research methodology is firmly grounded in theory. Results from elimination theory provide a general basis for relating sensor observations to geometric models of curved objects. Constraint solving systems are used to recover the parameters of these and other parametric models. Decision theory provides a framework for developing task-directed sensor control and data processing algorithms. The solution techniques advance two computational models, specifically symbolic equation rewriting and interval-based constraint solving. To validate these theoretical advances, they must be tested under realistic conditions. Range sensing, providing a direct source of real-world geometry, and a robot arm, providing sensor mobility and object manipulation capabilities, will be employed to support the endeavors. This equipment will provide the nucleus of a laboratory devoted to the construction of integrated automated systems with the potential for a broad range of future applications.
|
0.915 |
1995 — 1999 |
Hager, Gregory |
N/A |
Calibration Insensitive Hand-Eye Coordination For Robotic Systems Based On Stereo Vision
The goal of this project is to develop and experimentally verify a skill-based approach to hand-eye coordination for robotic systems. The work focuses particularly on control of robot manipulators using weakly calibrated or uncalibrated stereo vision. The major innovations are: 1) the use of reconfigurable, feature-based tracking mechanisms that simplify image processing; 2) the use of projective invariant-based feedback controllers that perform correctly despite calibration error; and 3) the development of a taxonomy of geometric "translation rules" for converting the geometric specification of a task into a visual specification of a task. The research is driven by a series of benchmark problems chosen from the manipulation domain. In addition to software development and experimentation, theoretical methods for analyzing the stability of visual tracking and of hand-eye servoing systems will be developed. Methods for detecting and responding to execution errors will be investigated. The long-term goal of this work is to construct a system that can automatically synthesize and execute a vision-based task specification from a geometric task specification.
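For context, the classical image-based visual servoing law that this line of work builds on drives the feature error to zero through a pseudo-inverse of the image Jacobian (a textbook sketch; the project's projective invariant-based controllers are a refinement of this idea and are not shown here):

```python
import numpy as np

def ibvs_step(J, s, s_star, lam=0.5):
    """One step of the basic image-based visual-servoing law:
    commanded velocity v = -lam * pinv(J) @ (s - s_star)."""
    return -lam * np.linalg.pinv(J) @ (s - s_star)

# toy simulation: identity image Jacobian, so the feature error
# contracts by a factor of (1 - lam) at every step
J = np.eye(2)
s, s_star = np.array([10.0, -4.0]), np.zeros(2)
for _ in range(20):
    v = ibvs_step(J, s, s_star)   # commanded velocity
    s = s + J @ v                 # simulated feature motion
```

With a poorly calibrated Jacobian the same law still converges as long as the estimated and true Jacobians are "compatible" in a stability sense, which is one motivation for calibration-insensitive formulations.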
|
0.915 |
1997 — 1998 |
Hager, Gregory |
N/A |
Workshop On Vision and Control
This award funds a workshop to explore the research needs related to the interface between computer vision and control. The workshop will be held June 23-25, 1997 on Block Island, Rhode Island. A small number of key researchers from the computer vision and control communities will come together to examine the topics which bridge these two domains. Their presentations and discussions in working groups will undertake to delineate the common ground and to challenge one another with fundamental unresolved problems. The results of the workshop will appear in a widely distributed report on research directions in vision and control.
|
0.915 |
1997 — 2002 |
Hager, Gregory Kriegman, David |
N/A |
Domain Independent Vision-Based Navigation
The goal of this research is to develop methods for robust, domain-independent vision-based navigation suitable for both structured and unstructured environments. Visual tracking is used to monitor a set of image features (markers), and vision-based control is used to guide the robot's motion from the image trajectory of the markers while avoiding obstacles. An environment is represented as a graph (map) which may be constructed under human control (e.g. giving the system a tour) or autonomously as the system explores. As the robot moves during the process of mapping, markers are automatically selected from the video stream and tracked. Rather than using prestored models of landmarks, markers are selected based on image content using a suite of domain-independent operators. The selected markers will be visually distinctive, unique within the image, and stable under varying viewpoint and illumination. During passive map making, the robot is taken on a tour, and it instantiates a graph representing the paths that the robot can follow; recognition is used to annotate the map. During active mapping, the robot systematically explores the environment and incrementally constructs the graph representation. These algorithms will be tested empirically on two mobile platforms in both indoor and outdoor environments.
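A minimal sketch of the graph (map) representation and route planning described above, with hypothetical place names (marker selection and visual tracking are the hard parts and are omitted; this only shows how a topological map supports navigation):

```python
from collections import deque

def plan(graph, start, goal):
    """Breadth-first search over the topological map; returns the
    shortest list of places to traverse, or None if unreachable."""
    frontier, came_from = deque([start]), {start: None}
    while frontier:
        node = frontier.popleft()
        if node == goal:
            path = []
            while node is not None:   # walk back-pointers to the start
                path.append(node)
                node = came_from[node]
            return path[::-1]
        for nxt in graph.get(node, []):
            if nxt not in came_from:
                came_from[nxt] = node
                frontier.append(nxt)
    return None

# hypothetical map built during a "tour": nodes are places,
# edges are paths the robot can follow by tracking markers
tour_map = {
    "lab": ["hall"],
    "hall": ["lab", "lobby", "office"],
    "lobby": ["hall", "exit"],
    "office": ["hall"],
}
route = plan(tour_map, "lab", "exit")
```

In the actual system each edge would additionally store the marker trajectories needed to servo the robot along that path.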
|
0.915 |
2006 — 2007 |
Hager, Gregory Donald |
R21 Activity Code Description: To encourage the development of new research activities in categorical program areas. (Support generally is restricted in level of support and in time.) |
Direct Video-CT Registration For High-Precision Surgical @ Johns Hopkins University
DESCRIPTION (provided by applicant): The goal of this project is to demonstrate that direct registration of endoscopic video to pre-operative CT is a viable route to increasing the precision and usefulness of current surgical navigation systems. These results will set the stage for developing new approaches to high precision intra-operative navigation and visualization in anterior skull surgery. This exploratory application is specifically focused on developing and evaluating image-based structure and motion recovery algorithms and video/CT registration algorithms on a representative class of specimens. Working closely with our physician collaborator, we will perform algorithm development and validation according to the following specific aims: 1. Specific Aim 1: Compute surface structure from endoscopic images. Implement and evaluate algorithms for computing the motion of the endoscope and the geometric structure of the surrounding tissue from sequences of images. Evaluate the accuracy of motion and structure data against ground truth models. 2. Specific Aim 2: Registration of pre-operative CT to endoscopic images. Implement and evaluate algorithms for registering the three-dimensional (3D) endoscope data of Specific Aim 1 with surfaces segmented from pre-operative CT images. Evaluate the reliability and accuracy of this registration process. 3. Specific Aim 3: Accuracy evaluation against current surgical navigation systems: Perform tests on an animal cadaver model to evaluate the accuracy of endoscopic registration in realistic circumstances, and compare that accuracy with existing surgical navigation systems. This exploratory/developmental application sets the stage for new clinical treatments enabled by high-precision navigation and visualization.
It also will provide the basis for highly accurate, intra-operatively updated surface models of the surgical field, new methods of intra-operative measurement, and possibly tracker-free navigation systems.
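A standard building block of such video/CT registration is least-squares rigid alignment of corresponding 3D point sets, the Kabsch/Procrustes step used inside ICP-style loops. The sketch below is that generic step only, not the project's algorithm:

```python
import numpy as np

def rigid_align(P, Q):
    """Least-squares rigid transform (R, t) mapping points P onto Q,
    assuming known correspondences (the inner step of an ICP-style loop)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                           # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.linalg.det(Vt.T @ U.T)                       # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp

# toy check: recover a known rotation + translation from noiseless points
rng = np.random.default_rng(0)
P = rng.normal(size=(30, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.1, -0.2, 0.5])
Q = P @ R_true.T + t_true
R, t = rigid_align(P, Q)
```

In the full problem the correspondences between endoscope-derived surface points and the CT-segmented surface are unknown, so this step alternates with closest-point matching.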
|
0.97 |
2007 — 2008 |
Hager, Gregory Donald |
R21 Activity Code Description: To encourage the development of new research activities in categorical program areas. (Support generally is restricted in level of support and in time.) |
Toward Quantitative Disease Assessment From Capsule Endoscopy Images @ Johns Hopkins University
DESCRIPTION (provided by applicant): Capsule endoscopy has recently emerged as a valuable imaging technology for the gastrointestinal (GI) tract, especially the small bowel and the esophagus. With this technology, it has become possible to directly evaluate the gut mucosa of patients with a variety of conditions, such as obscure gastrointestinal bleeding, celiac disease and Crohn's disease. Although use of capsule endoscopy is growing rapidly, the evaluation of capsule endoscopic imagery presents numerous practical challenges. In a typical case, the capsule acquires 50,000 or more images over an eight-hour period. The quality of these images is highly variable due to the uncontrolled motion of the capsule itself as it moves through the GI tract, the complexity of the structures being imaged, and inherent limitations of the imager itself. In practice, relatively few (often less than 100) of these images contain significant diagnostic content. As a result, it is challenging to create an effective, repeatable means for evaluating capsule endoscopic sequences. The goal of this project is to create a tool for semi-automated, objective, quantitative assessment of pathologic findings in capsule endoscopic data. The clinical focus will be on quantitative assessment of lesions that appear in Crohn's disease of the small bowel. The technical approach to this problem will make use of statistical learning methods to create algorithms that perform lesion classification and assessment in a manner consistent with a trained expert. The underlying hypothesis of this project is that appropriately constructed algorithms will be able to perform assessment of lesions appearing in capsule endoscopic images with a level of consistency comparable to human observers. In proving this hypothesis, the proposed project will pursue the following three specific aims: Aim 1: Data acquisition.
To develop a substantial database of images of intestinal lesions together with an expert assessment of several attributes indicative of lesion severity. Aim 2: Tissue classification and image enhancement. To develop algorithms for low-level classification of tissue type from image content using statistical learning techniques, and to create algorithms for registering multiple partial views of a lesion to create more complete views. Aim 3: Automated Lesion Assessment. To apply and validate statistical learning methods that can assess the images produced by Aim 2 in a manner consistent with the expert assessments compiled in Aim 1. The focus of this R21 is on the development of tools that have proven efficacy on a representative corpus of data. This will set the stage for subsequent technological developments leading toward the automated detection of lesions, and subsequent clinical studies addressing the development of quantitative measures for Crohn's disease severity in a more substantial clinical setting.
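As a toy illustration of the statistical-learning approach, the sketch below trains a minimal logistic-regression classifier on hypothetical two-dimensional features standing in for real color/texture descriptors (not the project's classifier or feature set):

```python
import numpy as np

def train_logistic(X, y, lr=0.5, steps=500):
    """Minimal logistic regression fit by gradient descent: a stand-in
    for the statistical-learning tissue/lesion classifiers described above."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))     # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)     # gradient of the log-loss
    return w

# synthetic "lesion" vs. "normal tissue" features: well-separated classes
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc=-1.5, size=(50, 2)),   # class 0
               rng.normal(loc=+1.5, size=(50, 2))])  # class 1
y = np.array([0] * 50 + [1] * 50)

w = train_logistic(X, y)
pred = (X @ w > 0).astype(int)
acc = (pred == y).mean()
```

A real system would add an intercept term, regularization, and held-out evaluation against the expert labels compiled in Aim 1.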
|
0.97 |
2008 — 2009 |
Hager, Gregory Donald |
R21 Activity Code Description: To encourage the development of new research activities in categorical program areas. (Support generally is restricted in level of support and in time.) |
Quantitative Endoscopic Measurement of Anatomy @ Johns Hopkins University
DESCRIPTION (provided by applicant): Video endoscopy is widely used in both diagnostic and interventional clinical applications. However, current video endoscopy systems do not support reconstruction and quantitative measurement of viewed anatomy. Yet, the ability to measure and model from video images has a number of potential clinical applications such as sizing a tumor, monitoring the change in size of a lesion over time, or computing area, size, or volume measurements of anatomy. At the same time, recent advances in algorithms for reconstruction from video images offer the opportunity of creating methods for quantitative endoscopic measurement (QEM) systems. The goal of the proposed project is to determine whether QEM is potentially usable as a routine diagnostic or interventional imaging modality. To do so, we intend to develop and evaluate a prototype system in the context of a specific, acute clinical need: the measurement of stenosis in pediatric airways. This is an ideal test application for QEM, as the current method of performing airway sizing is invasive, and it has a limited degree of accuracy. Furthermore, providing a new, more accurate modality would potentially enable better monitoring and treatment of this disease. The specific aims for this project are thus: 1. Aim 1: Develop a clinically deployable endoscopic data collection system. 2. Aim 2: Develop and validate algorithms for computing geometric properties of anatomic surfaces from a tracked video endoscope. 3. Aim 3: Demonstrate the feasibility of QEM in a controlled clinical setting. Finally, it is important to emphasize that, while we are focused on a specific clinical setting, the basic capabilities described here will have a much broader impact. Optical and video endoscopic devices are widely used in many areas of diagnosis and surgery. 
The ability to easily capture the full geometry of airways, sinus cavities, and so forth will open the door to a number of other scientific and clinical investigations. For example, it would become possible to perform repeat imaging to track the effect of treatment, and to perform in-office diagnostic procedures that currently rely on more expensive CT or MR imaging. Project Narrative: The proposed project will develop methods for computing accurate models of anatomy from endoscopic data. It will be specifically applied to a clinical problem of high relevance: the sizing of airway obstructions in young children. However, the possibility of performing simple, safe sizing of anatomic structures has wide relevance in many areas of medicine.
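One simple ingredient of such quantitative measurement is computing the area of a recovered cross-sectional contour, e.g. for airway sizing. A sketch using the shoelace formula (illustrative only, not the proposed system's method):

```python
def polygon_area(pts):
    """Shoelace area of a closed planar contour given as (x, y) vertices,
    such as an airway cross-section recovered from a tracked endoscope."""
    n = len(pts)
    s = 0.0
    for i in range(n):
        x0, y0 = pts[i]
        x1, y1 = pts[(i + 1) % n]    # wrap around to close the contour
        s += x0 * y1 - x1 * y0
    return abs(s) / 2.0

# sanity check on a unit square
area = polygon_area([(0, 0), (1, 0), (1, 1), (0, 1)])
```

In practice the contour would first be obtained by intersecting the reconstructed 3D airway surface with a plane perpendicular to the airway axis.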
|
0.97 |
2012 — 2015 |
Hager, Gregory Donald |
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Enhanced Navigation For Endoscopic Sinus Surgery Through Video Analysis @ Johns Hopkins University
DESCRIPTION (provided by applicant): This project will provide new registration and visualization tools for functional endoscopic sinus surgery (FESS) using widely available high-definition endoscopic video. These tools will provide higher navigation accuracy to the surgeon, and will make it possible to accurately measure change as surgery progresses. The key innovation in the project is the integration of algorithms for computational vision with traditional navigation methods to provide these enhancements. The algorithms will be evaluated retrospectively on video and navigation data acquired during FESS procedures. The project has four specific aims: Aim 1: Develop video-CT registration algorithms that are accurate to CT resolution. Aim 2: Develop methods for surface shape estimation from endoscopic images. Aim 3: Perform comparative evaluation of video-CT-based navigation on patient data. Aim 4: Assess the accuracy and reliability of intraoperative surface estimation on patient data. The significance of improved navigation is to 1) enhance patient safety and outcomes by reducing potential complications and radiation exposure, and 2) reduce cost by improving clinical workflow and clarity of intraoperative visualization. In the United States, it is estimated that there are more than 200,000 sinus surgery procedures performed annually. All of these are performed under endoscopic guidance, and a large fraction can or could employ surgical navigation. Thus, even moderate improvements in outcome and workflow efficiency can lead to significant benefits to both patients and the health care system. The innovation of the proposed approach is the use of the images from the endoscope itself as the basis for: 1) registration to pre-operative or intra-operative volumes, and 2) reconstruction of anatomic surfaces. Prior work has demonstrated that these problems are both solvable.
The project will combine the efforts of an experienced team consisting of engineering and clinical faculty, and will focus on translation of the research to clinically relevant data. The methodology of the project will be to develop and validate algorithms extensively on cadaver models with the goal of achieving 0.5 mm accuracy for both registration and surface reconstruction. Once these goals are achieved, the algorithms will be assessed on patient data acquired during FESS procedures. Although aimed at FESS, the proposed methods are widely applicable to other areas of endoscopy and laparoscopy. PUBLIC HEALTH RELEVANCE: This project will provide new registration and visualization tools for sinus surgery using widely available high-definition endoscopic video. These tools will provide higher accuracy navigation to the surgeon, and will make it possible to accurately measure change as surgery progresses. The impact of these tools will be to enhance patient safety, reduce operative time, and reduce the need for intraoperative CT or cone beam imaging.
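A basic ingredient of surface shape estimation from a moving endoscope is linear (DLT) triangulation of a point observed in two calibrated views. The sketch below shows that textbook step with hypothetical toy cameras, not the project's reconstruction pipeline:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from its normalized
    image coordinates x1, x2 under 3x4 projection matrices P1, P2."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)      # null vector of A is the homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]

# toy cameras: identity pose, and a second view translated 1 unit along x
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.3, -0.2, 4.0])

x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
X = triangulate(P1, P2, x1, x2)
```

With real endoscopic video the camera poses themselves must be estimated (from navigation data or structure-from-motion), and many such points are triangulated and fused into a surface.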
|
0.97 |
2021 |
Hager, Gregory Donald Ishii, Masaru (co-PI) |
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Improved Surgical Navigation Using Video-CT Registration @ Johns Hopkins University
Abstract In many surgical procedures, image guided surgery (IGS), also known as surgical navigation, facilitates precise surgical manipulations near critical structures such as the brain and eye. IGS suffers from critical limitations such as loss of precision and correspondence with preoperative imaging once surgery begins and the anatomy is altered. These limitations may be addressed through intra-operative imaging, which effectively "resets" the model and thus enables both improved precision of IGS as well as improved surgical decision-making. However, intraoperative radiologic imaging has many limitations, including disruption of the workflow, cost, and additional radiation exposure. This project aims to develop solutions to resolve limitations of IGS without the need for intra-operative radiologic imaging. Our solutions rely upon data from a device that is present in every procedure using IGS, namely, the endoscope. Our goal in this project is to use endoscopic video images to reconstruct surgical anatomy as it changes. This enables sustained precision in registration with pre-operative imaging throughout the procedure by updating the anatomical model as surgery progresses. We will achieve this by 1) using advances in computer vision to develop improved methods for continuous direct registration between the endoscope and the CT; and 2) developing methods that continuously compute the geometry of observed surfaces and update the preoperative CT with changes in that geometry. The technology developed in this proposal thus enables the following: 1) precise surgical navigation using the endoscope throughout the procedure; 2) surgical decision-making without the need for additional intra-operative imaging; and 3) new tools to reconstruct surgical anatomy and quantitatively evaluate the extent of surgery.
|
0.97 |
2021 |
Abadir, Peter M. Chellappa, Rama Hager, Gregory Donald Walston, Jeremy D |
P30 Activity Code Description: To support shared resources and facilities for categorical research by a number of investigators from different disciplines who provide a multidisciplinary approach to a joint research effort or from the same discipline who focus on a common research problem. The core grant is integrated with the center's component projects or program projects, though funded independently from them. This support, by providing more accessible resources, is expected to assure a greater productivity than from the separate projects and program projects. |
Utilizing Technology and AI Approaches to Facilitate Independence and Resilience in Older Adults @ Johns Hopkins University
The overarching goal of this application is to build an Artificial Intelligence (AI) and Technology Collaboratory (AITC) ecosystem that will serve as a national resource to promote the development and implementation of novel AI and technology approaches to improve care and health outcomes for older Americans. The specific aims are: 1) To engage AI and geriatric/gerontology investigators from across the country and to identify, validate, test, and develop new AI and technologies relevant to improving the health and wellbeing of older adults through crucial pilot study mechanisms; 2) To serve as a national resource center that stimulates and leads the development and implementation of effective novel AI and technology approaches and products that will promote the health, wellbeing and independence of all older Americans; 3) To support the engagement of stakeholders in AI research; 4) To build an ecosystem of overlapping innovation and business, academic, and communities-of-practice networks; and 5) To provide the highest quality expertise, support, and infrastructure needed to disseminate technical and policy guidelines and best practices for effectively incorporating AI approaches and technology for older Americans, in partnership with private industry, angel investors, venture capital firms, and healthcare systems. This AITC is directed by a multi-PI interdisciplinary team led by two world-class experienced investigators who have long worked successfully in the fields of AI and technology development, partnered with investigators who have long and successfully worked at the translational interface that connects real-world medical, cognitive, and functional declines that impact older adults with medical and technological solutions. Each of these investigators has a complementary skill set and long track records of organizing transdisciplinary teams and consortiums of investigators around core themes.
This interdisciplinary, accomplished, and highly visible leadership team will work together to develop a vision for the next generation of AI in aging science and to build a scientifically and culturally diverse community of AI scholars and trainees around aging. To achieve our goals, we designed the JHU AITC to have robust scientific and technological expertise, described in eight core components. This infrastructure will support the implementation of stakeholder input and the identification of relevant technologies and investigators locally and nationally through a vetting and feasibility testing process of both technology and data processes. It will include a pilot testing phase and related oversight process. We have also established a key partnership with the Iowa Office of Rural Health and Veterans Rural Health Resource Centers Leadership and with organizations within Johns Hopkins University that focus on improvements in the health and well-being of older adults in underserved urban communities. Connections with key academic and industry partners have also been established to accelerate the development of relevant technologies into products. This team is dedicated to developing the next AI scientific advances and disseminating resulting strategies into practice and policy that will maximize health, well-being, and independence for older adults.
|
0.97 |