I spent two years as a research assistant at Harvard Medical School's Laboratory of Neuroscience at Boston VA Healthcare, and I have been at Columbia's Neural Acoustic Processing Lab since January 2024.
At Harvard, I worked directly with Principal Investigators across five projects to better understand how specific brain structures regulate sleep and wakefulness, with the ultimate goal of finding new clinical treatments for sleep disorders. The studies I contributed to analyzed how neural structures in the basal forebrain regulate sleep/wake cycles, investigated the role of specific GABA receptors in controlling sleep depth, and identified the neural correlates of sleep homeostasis. Thanks to my mentors, I learned, and later taught, a wide range of technical lab skills, including stereotaxic surgery, opto- and chemogenetic stimulation, microtome sectioning, immunohistochemistry, brain imaging, and sleep scoring.
At Columbia, I have been working on the auditory attention decoding problem: building machine learning models that identify which sound a listener is paying attention to from their EEG signals. So far, I have contributed to two studies awaiting publication. I conducted 40+ biosignal recordings with participants using g.tec's EEG cap, built end-to-end neural signal processing pipelines spanning preprocessing, feature extraction, and model evaluation, and successfully decoded auditory attention using the stimulus reconstruction approach. Finally, I iteratively improved our decoding model's performance by incorporating additional biosignals, such as pupil dilation and skin conductance, into its training data.
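For readers unfamiliar with stimulus reconstruction, here is a minimal sketch of the idea: train a regularized linear decoder that maps time-lagged EEG to the attended speech envelope, then, at test time, reconstruct an envelope from held-out EEG and pick the candidate stream whose envelope correlates best with the reconstruction. This is an illustration only; the array shapes, lag window, and ridge regularization are assumptions, not our lab's actual pipeline.

```python
# Minimal sketch of stimulus reconstruction for auditory attention decoding.
# Shapes, lag window, and regularization are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Ridge

def lagged_features(eeg, max_lag):
    """Stack time-lagged copies of each EEG channel (lag direction
    simplified for clarity). eeg: (n_samples, n_channels) array."""
    n, c = eeg.shape
    X = np.zeros((n, c * (max_lag + 1)))
    for lag in range(max_lag + 1):
        X[lag:, lag * c:(lag + 1) * c] = eeg[:n - lag]
    return X

def train_decoder(eeg_train, env_attended, max_lag=32, alpha=1.0):
    """Fit a linear decoder mapping lagged EEG to the attended envelope.
    eeg_train: (n_samples, n_channels); env_attended: (n_samples,)."""
    decoder = Ridge(alpha=alpha)
    decoder.fit(lagged_features(eeg_train, max_lag), env_attended)
    return decoder

def decode_attention(decoder, eeg_test, candidate_envs, max_lag=32):
    """Reconstruct an envelope from EEG, correlate it with each candidate
    speaker's envelope, and pick the best match as the attended stream."""
    recon = decoder.predict(lagged_features(eeg_test, max_lag))
    corrs = [np.corrcoef(recon, env)[0, 1] for env in candidate_envs]
    return int(np.argmax(corrs)), corrs
```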
In my research work, I found that one of the most tedious yet essential skills to develop in the lab is conducting literature reviews to understand the current state of knowledge on a topic. To streamline this process, I've been building and refining Kanopik, an AI assistant that gathers and summarizes the papers most relevant to any scientific question. It pulls papers from trusted sources like PubMed and Semantic Scholar, filters them for relevance, and generates clear summaries and a reading list to help anyone get up to speed on a chosen topic.
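As a rough illustration of the retrieval step, a Kanopik-style pipeline might query the Semantic Scholar Graph API and apply a simple relevance filter before summarization. The field list, keyword heuristic, and function names below are illustrative assumptions, not Kanopik's actual implementation.

```python
# Illustrative sketch of paper retrieval via the Semantic Scholar Graph API.
# Field list and filtering heuristic are assumptions, not Kanopik's code.
import requests

API_URL = "https://api.semanticscholar.org/graph/v1/paper/search"

def fetch_papers(question, limit=20):
    """Search Semantic Scholar for papers matching a scientific question."""
    resp = requests.get(
        API_URL,
        params={"query": question, "limit": limit,
                "fields": "title,abstract,year,url"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("data", [])

def filter_relevant(papers, keywords):
    """Naive relevance filter: keep papers whose title or abstract mentions
    at least one keyword (a real system would rank with embeddings or an
    LLM instead)."""
    keep = []
    for p in papers:
        text = f"{p.get('title', '')} {p.get('abstract') or ''}".lower()
        if any(k.lower() in text for k in keywords):
            keep.append(p)
    return keep
```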
During my time at HMS, I worked on an automated sleep-scoring program that used a deep convolutional neural network to classify mouse EEG/EMG data into "Wake," "NREM," "REM," or "Artifact" categories. By applying transfer learning to GoogLeNet, we cut the time needed to score a 24-hour sleep recording from about 6 hours to roughly 20 minutes while maintaining human-level accuracy.
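Here is a minimal sketch of what such a transfer-learning setup could look like in PyTorch, assuming each scoring epoch is rendered as a 3-channel spectrogram image; this is illustrative, not the original training code:

```python
# Illustrative transfer-learning setup for 4-class sleep scoring.
# Assumes EEG/EMG epochs are converted to 3-channel spectrogram images;
# a sketch, not the original HMS code.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # Wake, NREM, REM, Artifact

# Load ImageNet-pretrained GoogLeNet without its auxiliary classifiers,
# then replace the 1000-way head with a 4-way sleep-stage classifier.
model = models.googlenet(
    weights=models.GoogLeNet_Weights.IMAGENET1K_V1, aux_logits=False
)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

# Freeze the pretrained feature extractor and fine-tune only the new head.
for name, param in model.named_parameters():
    param.requires_grad = name.startswith("fc")

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
```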
Here is a demo of the GUI I created for the initial program using the Plotly Python library: