Before this, I completed my B.S. in Computer Engineering at Boston University and my
M.S.E. in Computer and Information Science at the University of Pennsylvania.
I'm interested in how artificial and natural neural networks can learn efficient representations for planning. My research lies at the intersection of robotics and neuroscience.
My work takes inspiration from the brain's planning system, which constructs two complementary maps: a semantic map that encodes information from multiple sensory modalities, and a metric map that path-integrates local displacements. I study how these two maps can emerge from simple random exploration and the manifold geometry of the representations they form.
Our recent study showed how to couple the two maps so that agents can plan in metric space while reconstructing semantic information (e.g., landmarks) along the planned path.
We present REMI, a theoretical framework showing how place cells couple semantic maps to metric maps built from grid cells, enabling agents to plan paths in novel environments using metric representations while reconstructing semantic information (e.g., landmarks) along the planned path.
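To make the metric/semantic split concrete, here is a toy sketch of the general idea: path-integrating local displacements into a position estimate, then recalling semantic observations stored near the integrated position. This is a conceptual illustration under assumed toy parameters, not the REMI model itself; `recall` and its `radius` are hypothetical names.

```python
# Toy sketch: path integration plus position-keyed semantic recall.
# Conceptual illustration only, not the REMI framework.
import numpy as np

rng = np.random.default_rng(0)
displacements = rng.normal(0.0, 0.1, size=(100, 2))  # local motion signals
positions = np.cumsum(displacements, axis=0)         # path integration

# Semantic memory: landmark observations keyed by where they occurred.
landmarks = {tuple(np.round(positions[t], 1)): f"obs_{t}"
             for t in range(0, 100, 10)}

def recall(pos, memory, radius=0.2):
    """Reconstruct semantic information stored near a metric position."""
    return [v for k, v in memory.items()
            if np.linalg.norm(np.array(k) - pos) < radius]

print(recall(positions[50], landmarks))
```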
We showed that auto-encoding sensory signals during spatial exploration can lead to a sparse representation of space, similar to hippocampal place cells. We explained how these representations can remain stable even after learning to encode many rooms continuously, without catastrophic forgetting. See also Trading Place for Space.
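A minimal sketch of the kind of model this describes: an autoencoder with an L1 sparsity penalty on its latent code, trained on sensory observations gathered during exploration. The architecture, names, and hyperparameters below are illustrative assumptions, not the paper's actual model.

```python
# Minimal sketch: sparse autoencoding of sensory input during exploration.
# All sizes and the L1 weight are assumed, not taken from the paper.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, obs_dim=128, latent_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, latent_dim), nn.ReLU())
        self.decoder = nn.Linear(latent_dim, obs_dim)

    def forward(self, x):
        z = self.encoder(x)  # sparse latent code; units become place-cell-like
        return self.decoder(z), z

model = SparseAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
l1_weight = 1e-3  # strength of the sparsity penalty (assumed)

for step in range(200):
    x = torch.randn(64, 128)  # stand-in for sensory input along a trajectory
    recon, z = model(x)
    loss = nn.functional.mse_loss(recon, x) + l1_weight * z.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```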
We showed that increasing the resolution of spatial encoding reduces the number of distinct contexts that can be stored by place cells, revealing a trade-off between position accuracy and contextual capacity. We derived theoretical bounds on this trade-off using manifold geometry and neural noise models. See also Time Makes Space.
We developed a segmentation and measurement pipeline for Transmission Electron Microscopy (TEM) images, using a U-Net to automate the diagnosis of proteinuria-related kidney disease.
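For readers unfamiliar with the architecture, here is a minimal two-level U-Net for binary segmentation of single-channel TEM images. The depth, channel counts, and names are illustrative assumptions, not the pipeline's actual model.

```python
# Minimal sketch of a small U-Net for binary TEM segmentation.
# Sizes and depth are assumed for illustration.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(),
    )

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.down = conv_block(1, 16)  # TEM images are single-channel
        self.pool = nn.MaxPool2d(2)
        self.mid = conv_block(16, 32)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.out = nn.Sequential(conv_block(32, 16), nn.Conv2d(16, 1, 1))

    def forward(self, x):
        d = self.down(x)
        m = self.mid(self.pool(d))
        u = self.up(m)
        return self.out(torch.cat([u, d], dim=1))  # skip connection

net = TinyUNet()
mask_logits = net(torch.randn(1, 1, 256, 256))  # apply sigmoid + threshold for a mask
```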
Open-source PyTorch library for scalable simulation of structured, modular RNNs. Enables fast construction of many interacting recurrent modules with customizable sparse and signed connectivity. Automatically handles initialization to stabilize training and dynamics, and is optimized for GPU acceleration in large-scale sequence modeling.
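As a sketch of the underlying idea (in plain PyTorch, not the library's actual API), interacting modules can be expressed as a block-structured, signed connectivity mask over one large recurrent weight matrix; the coupling pattern and signs below are assumptions for illustration.

```python
# Sketch of block-structured, signed connectivity between recurrent modules.
# Plain PyTorch illustrating the concept; not the library's API.
import torch
import torch.nn as nn

n_modules, module_size = 3, 32
N = n_modules * module_size

# Block mask: each module recurs onto itself and receives inhibitory
# input from the previous module (coupling pattern assumed).
mask = torch.zeros(N, N)
for i in range(n_modules):
    r = slice(i * module_size, (i + 1) * module_size)
    mask[r, r] = 1.0  # within-module recurrence
    if i > 0:
        s = slice((i - 1) * module_size, i * module_size)
        mask[r, s] = -1.0  # signed (inhibitory) cross-module coupling

W = nn.Parameter(torch.randn(N, N) / N**0.5)  # 1/sqrt(N) init for stable dynamics

def step(h, x):
    # One recurrent update; the mask zeroes disallowed connections
    # and enforces connection signs.
    return torch.tanh(x + h @ (W.abs() * mask).T)

h = torch.zeros(1, N)
for t in range(10):
    h = step(h, torch.randn(1, N))
```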