Francis Quek - US grants
Affiliations: Wright State University, Fairborn, OH, United States
Area: Computer Science

We are testing a new system for linking grants to scientists.
The funding information displayed below comes from the NIH Research Portfolio Online Reporting Tools and the NSF Award Database. The grant data on this page is limited to grants awarded in the United States and is thus partial. It can nonetheless be used to understand how funding patterns influence mentorship networks and vice versa, which has deep implications for how research is done.
You can help! If you notice any inaccuracies, please sign in and mark grants as correct or incorrect matches.
High-probability grants
According to our matching algorithm, Francis Quek is the likely recipient of the following grants.

Years | Recipients | Code | Title / Keywords | Matching score |
---|---|---|---|---|
1993 — 1996 | Quek, Francis; Jain, Ramesh (co-PI) |
N/A (Activity Code Description: no activity code was retrieved; click on the grant title for more information) |
A Gesture Interpretation and Voice Recognition Multi-Modal Human Machine Interface @ University of Illinois At Chicago |
0.942 |
1998 — 2004 | Ye, Nong (co-PI); Moher, Thomas (co-PI); Super, Boaz; Brown, Maxine; Kenyon, Robert (co-PI); Defanti, Thomas; Johnson, Andrew; Dieugenio, Barbara; Buy, Ugo (co-PI); Zefran, Milos (co-PI); Banerjee, Pat (co-PI); Grossman, Robert; Franklin, Rhonda; Quek, Francis |
N/A |
CISE Research Infrastructure: CAVERN: the CAVE Research @ University of Illinois At Chicago (EIA-9802090) |
0.942 |
2002 — 2008 | Mcneill, David (co-PI); Quek, Francis |
N/A |
ITR: Beyond the Talking Head and Animated Icon: Behaviorally Situated Avatars For Tutoring @ Wright State University |
1 |
2004 — 2010 | Quek, Francis; Bertenthal, Bennett; Mcneill, David (co-PI); Vernooy, Jeffrey; Bargerhuff, Mary Ellen |
N/A |
Embodiment Awareness, Mathematics Discourse, and the Blind @ Virginia Polytechnic Institute and State University When we speak, our embodied behaviors of gesture, gaze, posture, and facial expression are brought into the service of the communicative process. This research is grounded in psycholinguistic theories of multimodal human communication. One path from multimodal behavior to language is bridged by the underlying mental imagery. This spatial imagery, for a speaker, relates not to the elements of syntax, but to the units of thought that drive the expression (vocal utterance and visible display). Gestures reveal the focal points of the accompanying utterances, and relate to the meaning of the newsworthy elements of the unfolding discourse. This is the underlying premise of the current project, in which the PI will focus on math discourse and education for blind students. |
0.973 |
2006 — 2012 | Quek, Francis; Bowman, Doug; Tatar, Deborah (co-PI); Xiong, Yingen; Winchester, Woodrow (co-PI) |
N/A |
CRI: Interfaces For the Embodied Mind @ Virginia Polytechnic Institute and State University This project, exploring the concept and human-computer interaction (HCI) implications of the embodied mind, builds a comprehensive environment for HCI that includes vision/graphics/large displays, sound capture, textile, and haptics. The infrastructure aims to support 14 projects in five areas of research. |
0.973 |
2006 — 2008 | Quek, Francis; Tatar, Deborah (co-PI); Winchester, Woodrow (co-PI) |
N/A |
Embodied Communication: Vivid Interaction With History and Literature @ Virginia Polytechnic Institute and State University The PI believes research in human-computer interaction is on the cusp of solidifying a paradigm of embodied cognition that emphasizes the role of bodily reality in the creation of meaning, which constitutes the essence of our thought and experience. When we speak, our heads, eyes, bodies, arms, hands, and face are brought into the service of communication. There is substantial evidence that communication does not exist for us separately from its embodied roots. The cognitive and perceptual resources of attention (both visual and conceptual), working memory, long-term memory, temporal sequencing, language processes, touch, internal physiological experience, and proprioception must be considered together to understand the range of meanings a behavior has to the actor and receiver in a situation. The PI recently submitted a proposal entitled "CRI: Interfaces for the Embodied Mind" by means of which he was able to secure funds for the acquisition of equipment necessary for exploring ideas such as those outlined above, but not for the exciting applications he envisages in collaboration with partner institutions. The supplemental funding provided by this SGER will enable the PI to explore the implications of embodied interaction technology for communication in the domain of the liberal arts, specifically through participant and travel funding to support collaborative projects: (a) with Prof. Cassandra Newby-Alexander and others in Norfolk State University's Department of History who are investigating the Underground Railroad network in the four large urban slave centers of Norfolk, Portsmouth, Richmond, and Petersburg; and (b) with Prof. 
Pinckney Benedict of Hollins University's English Department on a pilot project in which two or more creative writing students develop an embodied adaptation of James Joyce's classic short story "Araby" which is rich in concrete environmental cues and written in a dreamlike narrative style that lends itself gracefully to nonlinear storytelling. Each project will involve two faculty and two student members from the partner organization, along with students and faculty from Virginia Tech. |
0.973 |
2007 — 2010 | Quek, Francis; Kavanaugh, Andrea |
N/A |
SGER: Capturing Ephemeral Communication Data From the Virginia Tech Tragedy @ Virginia Polytechnic Institute and State University Many proposed technological solutions to future disasters involve the use of wireless cellular technology. However, these same networks quickly become saturated soon after a disaster and remain saturated for critical periods. This project will capture data on cell phone usage (and other communication modes) in terms of when, with whom (what relationship with subject), and for how long the calls take place, and whether the calls are incoming or outgoing. Such data should help in the modeling of communication network behavior in such situations and in understanding how such technologies may be utilized or extended in future emergencies in any community. The goal of this exploratory research is to capture time-sensitive communications and other ephemeral data regarding the tragic events surrounding the Virginia Tech (VT) shootings on April 16, 2007. These data will be available for further analyses by multiple interested groups in order to plan for future emergency response and critical peer-to-peer communication. |
0.973 |
2008 — 2013 | Quek, Francis; Ehrich, Roger (co-PI); Wilkins, Jesse (co-PI); Evans, Michael |
N/A |
@ Virginia Polytechnic Institute and State University This research project will investigate the effects that social organization, learning technologies, and their interrelationships have on the geometric reasoning of PreK-3 students. Social organization is operationalized here as the strategic grouping of students into triads to engage in collaborative problem solving. Learning technologies are operationalized here as physical and virtual manipulative types (tangrams, pentominoes, geoboards) together with supporting software designed to facilitate students in accessing their mathematical knowledge. Based on findings from preliminary work in this area, it is proposed that learning technologies allow for fluid exchanges between peers, and transitions between physical and virtual manipulatives, significantly enhancing geometric reasoning. The current project presents a robust research and design framework that combines iterative cycles of field studies in the classroom with development and experimentation in the lab. The goal is to design, develop and prototype a tabletop computer interface that allows for unprecedented exchanges among student peers and transitions between manipulative types. The tabletop interface will serve first as the apparatus that facilitates experimental manipulations, later being used, through iterative refinement, as the learning technology platform by which students gain access to mathematical knowledge. The advancement proposed by the current work is that it places all interaction on the horizontal tabletop. This is critical in that it removes confounding effects of different seating and screen orientations from pilot study findings. Manipulations of geometric artifacts reside along two dimensions of research questions: the effects of social interaction and of learning technology on geometric reasoning. 
Hence, the proposed studies will allow investigation of different transitions between fully functional and fully operational thinking, functional and qualitative operational thinking, and qualitative and quantitative parametric thinking with the same interaction morphologies. |
0.973 |
2008 — 2009 | Quek, Francis; Smith-Jackson, Tonya |
N/A |
Embodied Interaction Paradigm For Users With Severe Visual Impairments @ Virginia Polytechnic Institute and State University The purpose of this exploratory project is to test a proof-of-concept that reflects an emerging and potentially transformative mobile system to enhance recognition and way-finding function for users with severe visual impairments. The PI's Near and Far Environmental Awareness System (NaFEAS) will run on a mobile phone platform in concert with RFID tags and sensors which serve as inputs and outputs to a knowledge database that can be updated by the user. NaFEAS will recognize tagged objects and provide audio and other feedback to users within a designated envelope around the sagittal and coronal body planes. In addition, NaFEAS will facilitate way-finding using tagged objects in the near and far environments. The focus here is on the development of a low-fidelity prototype using participatory design; the theoretical basis is embodied cognition, which emphasizes the full use of senses, gestures, and space to understand objects and the environment. The PI will take special pains to minimize the cognitive biases occurring when sighted designers or researchers retrofit visually-dominant interaction patterns to develop accessible devices. The prototype will be evaluated by target group members using an experimental design with self-report, behavioral, and physiological variables. |
0.973 |
2009 — 2014 | Quek, Francis; Ramakrishnan, Naren; North, Christopher |
N/A |
Formal Models, Algorithms, and Visualizations For Storytelling Analytics @ Virginia Polytechnic Institute and State University Modern direct manipulation and visualization systems have made key strides in bringing powerful data transformations and algorithms to the analyst's desktop. But to further promote the vision of powerful visual analytics, wherein automated algorithms and visual representations complement each other to yield new insight, we must continually increase the expressiveness with which analysts interact with data. This project focuses on the task of storytelling, that is to say the stringing together of seemingly unconnected pieces of data into a coherent thread or argument. To support storytelling, which requires both human judgment and algorithmic assistance, the PIs will first develop a new theory of relational redescriptions that provides a uniform way to describe data and to compose data transformation algorithms across a multitude of domains. Using this theory, the PIs will be able to define stories formally as compositions of relational redescriptions. They will develop scalable and steerable algorithms for storytelling that will respond to dynamic user input, such as preferences and constraints, and they will contextualize their use in interactive visualizations that harness the power of spatial layout. Finally, they will investigate how analysts engage in sense-making using the new storytelling algorithms and visualizations, in the hope of finding answers to questions such as: How do analysts achieve insight and advance their conceptualization of patterns derived from datasets? Project outcomes will include the formal conceptualization of storytelling as well as the compositional approach to building complex chains of inference. |
0.973 |
2009 — 2012 | Quek, Francis; Cao, Yong |
N/A |
EAGER: CreativeIT: Hyper Drama Storytelling: Engaging and Nurturing Creativity in K-12 Students @ Virginia Polytechnic Institute and State University This research explores the concept of grounded imagination, where creativity is grounded in knowledge and informed by experience. How might one nurture and cultivate such grounded imagination in K-12 students? To address this question, this project will develop and deploy a storytelling system for authoring and interacting with hyper-dramas. It incorporates the theory advanced by Lev Vygotsky, that creativity involves a process of combinatory imagination by which an individual creates new things for herself from elements of prior experience encoded in everyday concepts and new culturally transmitted information. This situates creativity within two developmental streams: intellectual development (acquiring grounding in the form of knowledge and experience), and the process of flexible re-combination. The aspect of social-cultural engagement suggests that creativity is a discourse with the larger culture and society. Drawing from this theoretical foundation, this project explores whether the creativity trough (decline in creativity from the 4th grade through middle school) may occur because social-cultural awareness precedes intellectual development so that the student judges herself an inadequate contributor. Facilitating hyper-drama authoring will allow students to draw on their experience in hyper-media and dramatic presentation, and thus nurture the combinatory creativity process. |
0.973 |
2009 — 2012 | Quek, Francis; Bukvic, Ivica; Cao, Yong; Webster, Dane |
N/A |
EAGER: Drummer Game: A Massive-Interactive Socially-Enabled Strategy Game @ Virginia Polytechnic Institute and State University The goal of this project is to foster new research collaborations across different disciplines by developing a novel massive interaction game that represents a unique genre wherein teams of users strategically direct armies of virtual 3D "terra cotta" soldiers by employing physical drum beats as control signals to determine the warriors' behavior. It is intended that the game will engage and enthrall both user teams and a live audience with the special demands of cooperative and competitive game play in a compelling and creative scenario. Successful completion of the research will require intensive collaboration across multiple fields of computational science, art, and music. The culmination of the project will be a social event centrally located on the Virginia Tech campus, in which the finished game is publicly introduced to the community at large. |
0.973 |
2010 — 2015 | Quek, Francis | N/A |
EAGER: Multimodal Corpus For Vision-Based Meeting Analysis @ Texas A&M University Main Campus This project explores a multimodal corpus for vision-based meeting analysis. The research team is working on: (1) extracting the data from tapes and organizing them into multimedia databases; (2) developing a database visualization and analysis tool to support model development; and (3) developing an agent-based algorithm to extract hand and head tracking information so that higher level models may be built onto the data. |
0.973 |
2011 — 2017 | Quek, Francis | N/A |
@ Texas A&M University Main Campus The PI's goal in this research is to develop tools on a state-of-the-art platform (the Apple iPad) that will afford access to textual information for individuals who are blind or who suffer from severe visual impairments (IBSVI). The PI's approach is to use an embossed screen overlay to provide spatial and tactile correlates to text read aloud, and to engage the spatial cognition and memory resources of the target population for navigating through a document and annotating it if/as desired. The PI argues that from the invention of print media forward, information has been formulated and optimized for consumption by beings (people) with a dominant visual capability. This visuo-spatial bias is not well-understood or studied in the context of information access by and delivery to the IBSVI community; most technological information aids funnel information to them as sequential aural streams, obviating the use of broader spatial cognitive resources. In this project the PI will develop a Spatial Touch Audio Annotator and Reader (STAAR) testbed to explore a multimodal alternative that enables the user to fuse spatial layout and informational content through touch location on a slate-type device and audio rendering of text to speech, respectively. STAAR will enable self-paced reading using a tactile overlay pattern on an iPad surface, which will be designed to provide tactile landmarks to help the user navigate the "page." STAAR will render the text chunk touched audibly. The use of touch gestures to enable contextualized highlighting and note-taking will also be investigated. The PI will study how the target population may employ spatial strategies and exploration to re-find and re-access information both in the act of reading and for recall after some time interval. |
0.973 |
2011 — 2018 | Quek, Francis | N/A |
II-EN: Device and Display Ecologies @ Texas A&M University Main Campus As new personal devices of various form factors (from smart phones to laptop computers) proliferate, we believe that their potential to support human learning, activity, and information access can be fully realized only when these devices function together with a broader technology environment and with each other. We envision the day when such technology envelops society, and in concert empowers individuals to function more effectively both individually and as members of society. In essence, the projected infrastructure will allow us to investigate this future. |
0.973 |
2014 — 2017 | Quek, Francis; Larke, Patricia (co-PI); Chu, Sharon Lynn; Burlbaw, Lynn (co-PI); Schlegel, Rebecca (co-PI) |
N/A |
Strategies: Making the Maker: A Pathway to STEM For Elementary School Students @ Texas A&M University Main Campus Building on the highly energized national Maker's Movement involving government offices, schools, and businesses now sweeping the country, this project will explore ways to engage more children in the effort and to motivate their interest in STEM. The project will target 200 children at grades 3-5 to involve them in the "Maker's mindset" from an early age. The goal is for children to develop strong self-identities in STEM areas through their involvement in arts, crafts, and narrative storytelling. |
0.942 |
2016 — 2019 | Quek, Francis; Seo, Jinsil; Cabrera, Joshua; Yamauchi, Takashi (co-PI) |
N/A |
CHS: Small: Connecting Across Distances: Remote Tactile Communication @ Texas A&M University Main Campus This project will expand the methodologies and applications for computer-based haptics, by investigating how remote tactile communication, functioning in partnership with other contextual channels like speech, may facilitate supportive discourse for at-risk individuals. The research enhances infrastructure for research and education by building a testbed to study a heretofore unexplored mode of computer-assisted perception between people. It will advance science while promoting teaching, training, and learning by providing unique cross-disciplinary educational and learning opportunities for undergraduate and graduate students. Potentially, this new technology will facilitate greater partnership and emotional interaction among people, over long distances. This will help people understand the meaning of the other person's communications, maintain valued relationships, and enhance each other's well-being. It could usher in a new dimension in telecommunications that has significant economic potential. |
0.942 |
2016 — 2018 | Quek, Francis; Chu, Sharon Lynn; Kuttolamadom, Mathew; Natarajarathinam, Malini |
N/A |
Making in the Colonias: Motivating STEM Participation Through a Making as Micro-Manufacturing Model @ Texas A&M University Main Campus As part of an overall strategy to enhance learning within maker contexts in formal and informal environments, the Innovative Technology Experiences for Students and Teachers (ITEST) and Advancing Informal STEM Learning (AISL) programs partnered to support innovative models in Making poised to catalyze new approaches in STEM learning and innovation. Employing a novel design and development approach, this Early Concept Grant for Exploratory Research (EAGER) will test the feasibility of integrating Making concepts with real-world micro-manufacturing engineering principles within the context of intense, multi-year team apprenticeship experiences for high school students. The apprenticeship model is particularly novel, as current Making research and experiences predominantly take place in afterschool and summer programs for up to 25 youth. The proposed apprenticeships will require a two-year commitment by a small cohort of Texas high school students, which will provide an opportunity to examine the feasibility and impact of the effort longitudinally. The cohort will learn to think critically, solve problems, and work together as a Making Production Team (MPT) in a customized makerspace in their high school, constructing engineering-based science kits for implementation in a local elementary school. Not only will the students enhance their content knowledge while developing design and development skills, but the students will also receive stipends, which will address two very practical needs for the targeted high-need population: employment and workforce development. Few, if any, efforts currently serve the targeted population through the contextualization of Making within a supply chain management and micro-manufacturing framework that extends the Making experience by integrating the student-designed products into elementary classrooms. 
As such, this project will contribute to essentially unexplored areas of Making research and development. |
0.942 |
2017 — 2020 | Quek, Francis; Chu, Sharon Lynn; Irby, Beverly |
N/A |
EXP: To Enact, To Tell, To Write: A Bridge to Expressive Writing Through Digital Enactment @ Texas A&M University Main Campus This cyberlearning project explores a novel approach, called Digitally-Augmented Enactment (DAE), to support learning expressive writing by children at the critical grade 4-5 developmental period, with a focus on English Language Learners (ELLs). Expressive writing is core to the learning of all school subjects. An alarmingly high percentage of students enter middle school with low proficiency in writing, which hinders further learning in high school and college. Children from these underserved populations are particularly at risk of withdrawing from writing during the grade 4-5 stage in school when it matters most. The key reason for children's difficulties is that writing requires not only mastery of writing mechanics (e.g., grammar, sentence structure), but also the possession of ideas to convey, and a mastery of the process of translating ideas into expression. Accordingly, the DAE approach bifurcates the writing process by harnessing the power of pretend play to bring forth children's imagination, enabling them to focus on ideas, without at the same time having to be concerned with the technical details of writing. DAE combines low-cost motion-tracking technologies, such as Kinects, with animation technologies. This technological environment allows children to enact with props and produces animated cartoons as the 'outline' for their essays, which serves as a scaffold for their writing. The new technological approach to expressive writing is explored in a charter school and a public elementary school, both of which serve a high percentage of ELLs, and the majority of whose students are on reduced lunch programs. The project will help children from diverse backgrounds to become literate individuals in a modern society. 
The project also provides unique cross-disciplinary educational and learning opportunities for undergraduate and graduate students to serve predominantly underserved communities. |
0.942 |
2017 — 2018 | Quek, Francis | N/A |
Convergence HTF: From Making to Micro-Manufacture: Reimagining Work Beyond Mass Production @ Texas A&M University Main Campus Intelligent, interactive, and highly networked machines, with which people increasingly share their autonomy and agency, are a growing part of the landscape, particularly in regard to work. As automation today moves from the factory floor to knowledge and service occupations, insight and action are needed to reap the benefits in increased productivity and increased job opportunities, and to mitigate social costs. The workshop supported by this award will promote the convergence of multiple fields of inquiry from Computer Science, Human-Computer Interaction, Electrical and Mechanical Engineering, Design, Economics, the Social Sciences, Industrial Engineering and Education to define key challenges and research imperatives of the nexus of humans, technology, and work. Convergence is the deep integration of knowledge, theories, methods, and data from multiple fields to form new and expanded frameworks for addressing scientific and societal challenges and opportunities. This convergence workshop addresses the future of work at the human-technology frontier. |
0.942 |
2019 — 2022 | Quek, Francis; Irby, Beverly; Natarajarathinam, Malini |
N/A |
@ Texas A&M University Main Campus As workplaces change, people will need to develop stronger and more flexible skills around technology. Schools are one place students might learn these skills, but in the case of electronics, 3D design, and similar Making technologies, many students lack access to technically-oriented teachers and mentors. To help address this gap, this project will develop technologies, processes, and training to create programs where STEM university students provide distance mentorship for rural students around technology. New technologies are needed because of the hands-on nature of Making activities, which makes remote mentoring through screen sharing or videoconferencing difficult. Thus, the project will develop technology to enable mentors to provide instruction that includes gesture, eye-gaze, and speech interaction through distance telepresence robotics and augmented reality. New processes and training are needed as well, to help the university mentors be effective teachers; this will be done through a MentorCorps program that provides education on classroom pedagogy and technology learning. The technologies and training programs will be developed and validated with high school students in three rural school districts, who will participate in a career and technology education (CTE) course. The course will provide knowledge in 3D design, fabrication, electronics, computer programming, and basic manufacturing; the high school students will use these skills in turn to prepare science learning kits for elementary schools in their districts. In addition to advancing the science of remote technology education, the project will have a number of broader impacts. The chosen school districts in the Colonias region at the Texas-Mexico border serve students who are underrepresented in STEM, and the work will improve their STEM education and work preparedness. 
The university mentors, meanwhile, will gain service learning opportunities that develop educational, interpersonal, and empathy skills beneficial to future professional lives. |
0.942 |
2019 — 2022 | Quek, Francis; Romack, Timothi |
N/A |
@ Texas A&M University Main Campus Reading, doing, and sharing mathematics are cognitively intensive tasks that require significant amounts of learning and preparation. These skills form an indispensable foundation for participation in all STEM fields and most careers, so the lack of mathematical literacy severely disadvantages individuals who are blind or who have a severe visual impairment (IBSVI). By the time they reach middle school, IBSVI are typically 1-3 years behind their seeing counterparts in mathematics, and the gap continues to widen in subsequent years. The key challenge to accessing mathematical expressions is that they are laid out spatially, and this spatial notation needs to be accessed interactively. This project will develop and study solutions to support reading, doing, and sharing of mathematics on modern tablet and handheld devices for IBSVI by balancing across multiple modes of low-vision access, interactive sound, tactile and haptic support, spatial landmarking, audio reading, interaction methodologies, and the structure of the mathematical expression. This will open the door to personalized solutions that honor the ranges of visual abilities (e.g., degree of tunnel vision, low-light capacities, tactile abilities) of IBSVI. The research will investigate such multimodal reading, doing, and sharing of mathematics expressions starting in later elementary and middle school. |
0.942 |
2020 — 2023 | Quek, Francis; Krishnamurthy, Vinayak; Sueda, Shinjiro |
N/A |
Chs: Small: Creating a Vr Workspace For Design by Physical Manipulation @ Texas A&M Engineering Experiment Station From an early age, humans learn to see, touch, hold, manipulate, and alter their physical surroundings using both of their hands and tools. A sculptor holds clay in one hand and precisely carves out material to create an intricate work of art. A mechanic uses hands and tools to reach the most inaccessible nooks of a car's engine to make repairs. All who work with their hands, threading needles, carving soap sculptures, or wiring electrical wall sockets, know the extraordinary dexterity and precision they possess when working in the space within the near reach of both hands. The project investigates a new class of augmented and virtual reality (AR/VR) environments that allow designers to be more creative in this workspace. Existing AR/VR systems only allow users to interact at arm's length, making it impossible to leverage the innate human ability to use hands and tools for precise actions close to the body. As a result, the original promise of AR and VR in facilitating creative thinking and problem-solving is still unrealized. To this end, the project designs a workspace that enables designers to create virtual designs in three-dimensional (3D) space by mirroring the intricate, complex, and precise actions that humans can perform in daily life. The research team will develop AR/VR technology and approaches that will enable users to directly "sketch" 3D digital objects, precisely control those objects, and assemble them to create physically feasible designs. The research team will integrate the new technology and tools from this project to promote geometric reasoning in K-12, undergraduate, and graduate curricula. The project has the potential to make a long-term contribution to the future of creative work and may positively impact U.S.
competency in design, manufacturing, and education by enabling small businesses to innovate effectively and economically. |
0.903 |
2021 — 2024 | Quek, Francis Yasskin, Philip Song, Dezhen (co-PI) [⬀] Yan, Wei Burte, Heather |
N/AActivity Code Description: No activity code was retrieved: click on the grant title for more information |
@ Texas A&M University The mathematics used to describe spatial transformations can be very difficult for undergraduate students. While moving something in the physical world may be easy to understand, describing the same operation with math in the digital world can be daunting. This project will develop new technology using Augmented Reality (AR) and Artificial Intelligence (AI) to improve the teaching and learning of these difficult concepts in STEM disciplines as well as creative endeavors. The project will test the use of new AR/AI technology to enhance students’ learning of the mathematics behind spatial transformations. The work will leverage AR/AI-powered precise motion tracking of objects to collect high-resolution in-situ motion and scene data that enhance learning analytics. The project will identify the AR capabilities that can help students conceive, connect, and compare mathematical representations of motion to overcome the well-documented difficulties students face when learning spatial transformations. Strengthening this skill will support their continued development across many STEM disciplines. |
0.951 |
2022 — 2023 | Quek, Francis Ferris, Thomas Hipwell, Mary Friesen, Rebecca |
N/AActivity Code Description: No activity code was retrieved: click on the grant title for more information |
@ Texas A&M University Since early in the COVID-19 pandemic, telemedicine has seen tremendous growth in the U.S., providing people in under-resourced areas with broader access to healthcare through remote telemedicine visits with health professionals and experts to whom they may not otherwise have access. However, these visits are currently restricted to video teleconference-style exchanges and lack the capacity for physical examinations, which contribute significantly to diagnoses and the identification of health concerns that may not otherwise surface during a medical visit. This project investigates the possibility of supporting physical examination as a paired practice between a caregiver local to the patient and a remote physician, using touch-based and augmented-reality technologies. Caregivers will perform examinations while wearing touch-sensitive gloves, guided by remote physicians using a physical examination cockpit that lets them see and feel the patient using the outputs from the caregiver’s gloves and a video feed. The technology and practice will potentially transform the work of physicians and local caregivers, opening the door to more effective and more accessible telemedicine visits.

The foregoing discussion uncovers a number of questions that the project is designed to address. First, tactile interpretation is active: one typically has to control the sensing process in order to interpret the sensory output. The project will explore conditions under which one can meaningfully interpret passively received tactile information, such as what the remote physician feels when the caregiver local to the patient is in control of the sensing motions.
One method to be tested for returning agency to the physician is the psychological phenomenon known as the ‘rubber hand’ or body transfer illusion, in which one psychologically associates an appropriately placed rubber hand as one’s own, and an impact on the rubber hand is viscerally felt by the subject. Even if the rubber hand illusion does not fully activate, joint expertise between the local touch explorer and the distant interpreter may facilitate the needed tactile understanding, and this understanding may be enhanced through practice. The project team will explore configurations (e.g., relative positions of the distal toucher’s hands to the subject’s hidden hands, degree and type of movement/tactile exploration, visual/augmented reality presentation) that allow body transfer illusions or joint expertise to enable interpretation. The team will also conduct a series of video-based grounded theory explorations of physicians performing physical examinations to better understand the process, to categorize specific actions that may serve as the basis of communication between the local caregiver and the distal physician, and to inform the technology developments necessary to enable this collaborative examination. Finally, the project team, comprising physicians and nurses, mechanical engineers, human-computer interaction researchers, and perceptual psychologists, will collaborate on the research described, engaging in team development exercises to gain a stronger shared understanding of the joint physical examination process and to support future research and development of these practices and technologies.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria. |
0.951 |