Augmented Cognition is about enhancing human information-processing capabilities through adaptive interfaces driven by physical, psychological, and cognitive state estimation. Research at the Augmented Cognition Lab (ACLab) centers on creating augmented cognition systems that enhance human cognition rather than replace it. These systems have three primary components: (1) the sensing element, (2) the analytic element, and (3) the feedback element, as shown in the figure above. The sensing element gathers and fuses multimodal data from the human and the environment, including 4D color-depth videos, neurophysiological signals, and audio/speech information. Robust analytics is the cornerstone of the system. At ACLab, we use both machine learning models and biomechanically and biologically inspired structural models. When possible, a structural model is preferred because it can incorporate existing knowledge and research without requiring a large training set; structural models also tend to be more transparent and easier to analyze for failure modes and edge cases. Careful design of the feedback element is equally critical: unless the feedback is useful, timely, and understandable, the system will be unusable regardless of the quality of the other two components.
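The three-component loop described above can be sketched as a minimal pipeline. All class names, signal values, and the decision threshold below are illustrative assumptions, not ACLab's actual code:

```python
# Minimal sketch of the sensing -> analytic -> feedback loop.
# All names and values are hypothetical placeholders.

class SensingElement:
    """Gathers and fuses multimodal data (e.g., video, EEG, audio)."""
    def acquire(self):
        # Placeholder: a fused observation dictionary of signal samples.
        return {"video": [0.1, 0.2], "eeg": [0.3], "audio": [0.4]}

class AnalyticElement:
    """Estimates the user's state from fused observations."""
    def estimate_state(self, observation):
        # A structural model would encode domain knowledge here;
        # this stub simply averages all signal values.
        values = [v for signal in observation.values() for v in signal]
        return sum(values) / len(values)

class FeedbackElement:
    """Turns the state estimate into timely, understandable feedback."""
    def render(self, state, threshold=0.5):
        # Feedback must be actionable; here it is a simple alert decision.
        return "alert user" if state > threshold else "no action"

def run_cycle(sensing, analytic, feedback):
    """One pass through the augmented cognition loop."""
    observation = sensing.acquire()
    state = analytic.estimate_state(observation)
    return feedback.render(state)
```

The separation of the three elements mirrors the design point above: the feedback policy can be redesigned for usability without touching sensing or analytics.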
We actively collaborate with physicians, psychologists, and therapists to define problems and fine-tune augmented cognition solutions for healthcare providers, infants and children with autism spectrum disorder, individuals with limited speech and physical abilities, individuals with sensory impairments, and the elderly. Even apparently simple problems in these domains involve a complex web of interconnected elements with significant engineering and scientific implications. Therefore, at ACLab, we work extensively at the intersection of computer vision and machine learning, with multidisciplinary elements from the behavioral sciences. In summary, the two main lines of research at ACLab are:
Digital Prosthetics: Replacing lost sensing and data processing functionality
Intelligence Amplification: Enhancing sensing and data processing functionality
For many of these projects, augmented reality (AR) and virtual reality (VR) tools are essential for both the assessment and enhancement portions of the project. Below, you can find some of our active research projects. Also visit our Signal Processing, Imaging, Reasoning, and Learning (SPIRAL) Group page for more information about our collaborative cluster.
“PFI-RP: Use of Augmented Reality and Electroencephalography for Visual Unilateral Neglect Detection, Assessment and Rehabilitation in Stroke Patients” (Role: CoPI, Date: 12/2023-11/2026)
“Collaborative Research: Development of a precision closed loop BCI for socially fearful teens with depression and anxiety” (Role: PI, Date: 11/2023-10/2026)
“CAREER: Learning Visual Representations of Motor Function in Infants as Prodromal Signs for Autism” (Role: PI, Date: 5/2022-4/2027)
Northeastern-UMaine Seed Funding ($50K) “Using artificial intelligence to examine the interplay between pacifier use and sudden infant death syndrome” (Role: CoPI, Date: 11/2020-10/2023)
NSF-IIS 2005957 ($190K) “CHS: Small: Collaborative Research: A Graph-Based Data Fusion Framework Towards Guiding A Hybrid Brain-Computer Interface” (Role: PI, Date: 10/2020-09/2023)
Northeastern Tier 1 Award ($50K) “Novel methods to quantify the affective impact of virtual reality for motor skill learning” (Role: CoPI, Date: 07/2020-09/2021)
“A-Eye: A Nanotechnology and AI-assisted Artificial Cone Cell Capable of Color and Spectral Recognition” (Role: CoPI, Date: 01/2020-12/2020)
ADVANCE Mutual Mentoring Grant ($3K) “Translating the ‘Mastermind’ Concept from Business to Academia: Facilitating Peer mentorship among female PIs leading active research labs” [Project Link] (Role: CoI, Date: 01/2020-12/2020)
NSF-NRI 1944964 ($102K) “NRI: EAGER: Teaching Aerial Robots to Perch Like a Bat via AI-Guided Design and Control” (Role: PI, Date: 10/2019-09/2020)
Instructions on Quantifying Infant Reach and Grasp Reported by: Victoria Berry, Franklin Chen, Muskan Kumar ACLab Students Mentors: Dr. Elaheh Hatami and Pooria Daneshvar YSP/REU Summer 2023 Abstract: In the United States, approximately 7% p..
If you are a USA-located parent of an infant under age one and want to participate in this data collection study (supported by an NSF grant and several industry awards), please email us at: aclabdata@northeastern.edu or fill out the sign up fo..
[Sponsored by the IEEE Signal Processing Society] MOTIVATION Every person spends around 1/3 of their life in bed. For an infant or a young toddler this percentage can be much higher, and for bed-bound patients it can go up to 100% of their tim..
Overview: In this research, we present a general pipeline to interpret the internal representation of face-centric inference models. Our pipeline is inspired by Network Dissection, a popular model interpretability pipeline that is..
With the increasing maturity of the human pose estimation domain, its applications have become broader and broader. Yet, state-of-the-art pose estimation models' performance degrades significantly in applications that include novel su..
Instructions on Creating 3D Human Models and Virtual Reality Implementation of Them Reported by: Sophia Franklin and Caleb Lee ACLab Students Mentors: Shuangjun Liu, Naveen Sehgal, Xiaofei Huang, and Isaac McGonagle YSP Summer 2019 Overvi..
This research is funded by the NSF Award #1755695. Also, special thanks to Amazon for the AWS Cloud Credits Award. This research is also highlighted by News@Northeastern in August 2019, by Experience Magazine in October 2019, and by Si..
This research is funded by the NSF Award #1915065 entitled “SCH: INT: Collaborative Research: Detection, Assessment and Rehabilitation of Stroke-Induced Visual Neglect Using Augmented Reality (AR) and Electroencephalography (EEG)“...
Image-based generative methods, such as generative adversarial networks (GANs), have already been able to generate realistic images with considerable context control, especially when they are conditioned. However, most successful frameworks share a comm..
This work began with research that aimed to improve indoor navigation for first-person navigators by fusing IMU data collected from their smartphone with the vision information concurrently obtained through the ph..
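A common way to fuse fast-but-drifting IMU estimates with slower, drift-free vision corrections is a complementary filter. The sketch below is a minimal, assumed illustration of that idea (the blend weight `alpha` and all headings are invented for the example, not taken from the project):

```python
# Hedged sketch: complementary-filter fusion of IMU dead reckoning with
# intermittent vision-based heading fixes. All values are illustrative.

def complementary_fuse(imu_heading, vision_heading, alpha=0.98):
    """Blend a drifting IMU estimate with a stable vision estimate."""
    return alpha * imu_heading + (1.0 - alpha) * vision_heading

def track_heading(imu_deltas, vision_fixes, alpha=0.9):
    """Integrate IMU heading deltas, correcting with vision fixes.

    imu_deltas:   per-step heading changes from the IMU (radians/degrees).
    vision_fixes: per-step absolute headings from vision, or None when
                  no visual fix is available that step.
    """
    heading = 0.0
    trace = []
    for delta, fix in zip(imu_deltas, vision_fixes):
        heading += delta                      # dead-reckoning step
        if fix is not None:                   # vision correction available
            heading = complementary_fuse(heading, fix, alpha)
        trace.append(heading)
    return trace
```

The design choice here is that the IMU dominates between vision fixes (high `alpha`), while each fix pulls the integrated estimate back toward an absolute reference, bounding drift.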
Tracking human sleeping postures over time provides critical information for biomedical research, including studies on sleeping behaviors and bedsore prevention. In this work, we introduce a vision-based tracking system for pervasive yet unob..
Although human pose estimation for various computer vision (CV) applications has been studied extensively over the last few decades, in-bed pose estimation using camera-based vision methods has been ignored by the CV community because it is ..
To track the relative movements of facial landmarks from a video, we have developed a robust tracking approach in which head movement is also tracked and decoupled from the facial landmark movements. We first employed a state-of-the-art 2D f..
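The decoupling idea can be sketched numerically: estimate the rigid head motion from stable anchor landmarks, then subtract it so only the non-rigid facial deformation remains. The toy below assumes a translation-only head model for brevity (a real pipeline would fit a full rigid transform), and all function and variable names are hypothetical:

```python
import numpy as np

# Illustrative sketch: remove rigid head translation, estimated from
# stable anchor landmarks, from per-frame facial landmark displacements.
# Translation-only model and all names are simplifying assumptions.

def decouple_head_motion(prev_pts, curr_pts, anchor_idx):
    """Return landmark motion with estimated head translation removed.

    prev_pts, curr_pts: (N, 2) landmark coordinates in two frames.
    anchor_idx: indices of landmarks assumed rigid w.r.t. the head
                (e.g., nose bridge, eye corners).
    """
    prev_pts = np.asarray(prev_pts, dtype=float)
    curr_pts = np.asarray(curr_pts, dtype=float)
    # Head translation ~= mean displacement of the stable anchors.
    head_t = (curr_pts[anchor_idx] - prev_pts[anchor_idx]).mean(axis=0)
    # Residual (non-rigid) landmark motion after removing head motion.
    return curr_pts - prev_pts - head_t
```

In this setup, a landmark that only moves with the head yields a near-zero residual, while expression-driven motion (e.g., a mouth corner) survives the subtraction.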
The pervasive use of rodents as animal models in biological and psychological studies has generated growing interest in developing automated laboratory apparatus for long-term monitoring of animal behaviors. Classically, the animal’s beha..
The Augmented Cognition Lab (ACLab) primarily researches digital prosthetics, a class of cognitive and neurological assistive devices that can be used in rehabilitation and by Parkinson’s patients, diabetics, and individuals on..
Foot complications constitute a tremendous challenge for diabetic patients, caregivers, and the healthcare system. The proposed sparse representation/reconstruction system can: (1) reconstruct the pressure image of the foot using a personali..
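Sparse reconstruction of this kind typically represents a measured signal as a combination of a few atoms from a (here, personalized) dictionary. The toy below is a greedy, orthogonal-matching-pursuit-style sketch under assumed inputs; the dictionary and "signal" are stand-ins, not real pressure data:

```python
import numpy as np

# Toy greedy sparse reconstruction (OMP-style) over a dictionary.
# Dictionary columns are atoms; the signal is reconstructed from the
# few atoms that best explain it. Illustrative, not the actual system.

def sparse_reconstruct(dictionary, signal, n_atoms=2):
    """Reconstruct `signal` from `n_atoms` greedily chosen atoms.

    dictionary: (D, K) matrix whose columns are candidate atoms.
    signal:     (D,) measurement vector.
    n_atoms:    sparsity level (assumed >= 1).
    """
    signal = signal.astype(float)
    residual = signal.copy()
    chosen = []
    for _ in range(n_atoms):
        # Pick the atom most correlated with the current residual.
        scores = np.abs(dictionary.T @ residual)
        chosen.append(int(np.argmax(scores)))
        # Least-squares fit on all chosen atoms, then update residual.
        sub = dictionary[:, chosen]
        coef, *_ = np.linalg.lstsq(sub, signal, rcond=None)
        residual = signal - sub @ coef
    return sub @ coef, chosen
```

The appeal for a personalized dictionary is the same as for structural models above: a handful of patient-specific atoms can explain a full pressure image, so reconstruction needs only sparse measurements rather than a dense sensor array.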