I worked as a research assistant at Harvard Medical School's Laboratory of Neuroscience at Boston VA Healthcare for two years, and I have been a research assistant at Columbia's Neural Acoustic Processing Lab since January 2024.
At Harvard, I worked directly with Principal Investigators across five projects aimed at understanding how specific brain structures regulate sleep and wakefulness, with the ultimate goal of finding new clinical treatments for sleep disorders. The studies I worked on focused on analyzing how neural structures in the basal forebrain regulate sleep/wake cycles, investigating the role of specific GABA receptors in controlling sleep depth, and identifying the neural correlates of sleep homeostasis. Thanks to my mentors, I learned and later taught a wide range of technical lab skills, including stereotaxic surgery, opto- and chemogenetic stimulation, microtome sectioning, immunohistochemistry, brain imaging, and sleep scoring.
At Columbia, I work on the auditory attention decoding problem: developing machine learning models that infer which speaker a listener is attending to in complex "cocktail party" auditory scenes from their EEG. So far, I have contributed to two studies awaiting publication. I conducted 60+ biosignal recordings with participants using g.tec's EEG cap, built end-to-end neural signal processing pipelines from preprocessing through model evaluation, decoded auditory attention with 90% accuracy, and extended our models to estimate cognitive load from non-EEG biosignals.
This demo simulates real-time auditory attention decoding using EEG data recorded during a dual-speaker listening task. I use a ridge regression model in Python to reconstruct the attended speech envelope from EEG signals, then compare the reconstruction against the two competing speech streams to infer listener focus. The interface displays window-by-window correlations and predicts which speaker the subject was attending to during each window. While this demo is not connected to live EEG, the system mirrors the behavior of a real-time decoder and could be extended to support streaming input.
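The core decision logic can be sketched in a few lines of NumPy. This is a minimal illustration, not the demo's actual code: the ridge weights are fit in closed form, and synthetic stand-in data replaces the real EEG and speech envelopes. The window length, channel count, and noise level are all placeholder assumptions.

```python
import numpy as np

def fit_ridge(X, y, lam=1.0):
    """Closed-form ridge regression: w = (X'X + lam*I)^-1 X'y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def window_correlations(recon, env_a, env_b, win=640):
    """Per-window Pearson correlation of the reconstructed envelope with
    each candidate speaker's envelope; the higher correlation wins."""
    guesses = []
    for start in range(0, len(recon) - win + 1, win):
        r = recon[start:start + win]
        ca = np.corrcoef(r, env_a[start:start + win])[0, 1]
        cb = np.corrcoef(r, env_b[start:start + win])[0, 1]
        guesses.append(("A", ca, cb) if ca > cb else ("B", ca, cb))
    return guesses

# Synthetic stand-in data: EEG-like features that linearly encode envelope A.
rng = np.random.default_rng(0)
n, d = 6400, 16                      # samples, EEG channels/lags (illustrative)
env_a = rng.standard_normal(n)       # attended speech envelope
env_b = rng.standard_normal(n)       # competing speech envelope
W_true = rng.standard_normal(d)
X = np.outer(env_a, W_true) + 0.5 * rng.standard_normal((n, d))

w = fit_ridge(X, env_a, lam=10.0)
recon = X @ w                        # reconstructed attended envelope
decisions = window_correlations(recon, env_a, env_b)
print([g[0] for g in decisions])     # should mostly favor speaker "A"
```

In a streaming extension, `window_correlations` would simply run on each new window as it arrives rather than looping over a prerecorded array.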
During my time at Harvard, I worked on an automated sleep-scoring program that uses a deep convolutional neural network to classify mouse EEG/EMG data into "Wake," "NREM," "REM," or "Artifact" categories. The resulting tool, Sleep-Deep-Learner, reduces the time needed to score a 24-hour sleep recording from roughly 4-6 hours to about 30 minutes while performing at human-level accuracy.
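To make the epoch-by-epoch classification structure concrete, here is a toy NumPy forward pass. This is not the Sleep-Deep-Learner architecture (that is a trained deep CNN); the random weights, kernel sizes, epoch length, and sampling rate below are all illustrative placeholders.

```python
import numpy as np

STAGES = ["Wake", "NREM", "REM", "Artifact"]

def conv1d(x, kernels):
    """Valid-mode 1-D convolution of signal x with each kernel."""
    return np.stack([np.convolve(x, k, mode="valid") for k in kernels])

def score_epoch(eeg, emg, params):
    """Forward pass of a tiny (untrained) conv net: convolve, rectify,
    global-average-pool, then a linear layer over the four stages."""
    feats = []
    for sig in (eeg, emg):
        h = np.maximum(conv1d(sig, params["kernels"]), 0.0)  # ReLU
        feats.append(h.mean(axis=1))                         # global avg pool
    f = np.concatenate(feats)
    logits = params["W"] @ f + params["b"]
    return STAGES[int(np.argmax(logits))]

# Illustrative random weights; a real model is trained on scored recordings.
rng = np.random.default_rng(2)
params = {
    "kernels": rng.standard_normal((8, 25)),   # 8 filters, 25-sample kernels
    "W": rng.standard_normal((4, 16)),         # 4 stages x 16 pooled features
    "b": np.zeros(4),
}

fs, epoch_sec = 256, 4                         # placeholder epoch length / rate
eeg = rng.standard_normal(fs * epoch_sec)
emg = rng.standard_normal(fs * epoch_sec)
print(score_epoch(eeg, emg, params))
```

Scoring a full 24-hour recording is just this same call repeated over every epoch, which is why an automated scorer can finish in minutes rather than hours.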
Here is a demo of the GUI I created for the initial program using the Plotly Python library:
While at Columbia's Zuckerman Institute, I developed a cognitive load estimator that extracts and fuses information from biosignals (pupil size, skin conductance, heart rate, respiration, and temperature) to determine whether someone is performing an easy or a difficult auditory task (listening to a single speaker vs. multiple speakers). I trained a random forest model on features extracted from the biosignals and achieved ~80% cross-validated accuracy across subjects.
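The feature-fusion idea can be sketched as follows. This is a hedged illustration rather than the actual pipeline: the summary statistics, trial shapes, and synthetic "easy vs. hard" data below are stand-in assumptions, and only the overall recipe (per-channel features, concatenation, random forest, cross-validation) mirrors the description above.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def extract_features(trial):
    """Fuse simple summary statistics across biosignal channels.
    trial: array of shape (n_channels, n_samples), one channel per
    signal (e.g. pupil size, skin conductance, heart rate, ...)."""
    feats = []
    for ch in trial:
        feats += [ch.mean(), ch.std(), ch.max() - ch.min()]
    return np.array(feats)

# Synthetic stand-in: "hard" trials get larger mean shift and variability.
rng = np.random.default_rng(1)
n_trials, n_channels, n_samples = 120, 5, 200
X, y = [], []
for i in range(n_trials):
    label = i % 2                      # 0 = easy (single speaker), 1 = hard
    scale = 1.0 + 0.8 * label          # load inflates signal variability
    trial = scale * rng.standard_normal((n_channels, n_samples)) + 0.3 * label
    X.append(extract_features(trial))
    y.append(label)
X, y = np.array(X), np.array(y)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```

On real data, cross-validation would be grouped by subject so that the reported accuracy reflects generalization to unseen people rather than unseen trials.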
Here is a Streamlit demo I created to visualize the raw biosignals, show model predictions, and summarize subject-level performance: