Publications
* denotes equal contribution
2024
- Digital pathology assessment of kidney glomerular filtration barrier ultrastructure in an animal model of podocytopathy
  Aksel Laudon*, Zhaoze Wang*, Anqi Zou*, Richa Sharma, Jiayi Ji, Connor Kim, Yingzhe Qian, Qin Ye, Hui Chen, Joel M. Henderson, Chao Zhang, Vijaya B. Kolachalama, and Weining Lu
  bioRxiv, 2024
- Time Makes Space: Emergence of Place Fields in Networks Encoding Temporally Continuous Sensory Experiences
  Zhaoze Wang*, Ronald W. DiTullio*, Spencer Rooke, and Vijay Balasubramanian
  In NeurIPS, 2024
The vertebrate hippocampus is thought to use recurrent connectivity in area CA3 to support episodic memory recall from partial cues. This brain area also contains place cells, whose location-selective firing fields implement maps supporting spatial memory. Here we show that place cells emerge in networks trained to remember temporally continuous sensory episodes. We model CA3 as a recurrent autoencoder that recalls and reconstructs sensory experiences from noisy and partially occluded observations by agents traversing simulated arenas. The agents move in realistic trajectories modeled from rodents, and environments are modeled as continuously varying, high-dimensional sensory experience maps (spatially smoothed Gaussian random fields). Training our autoencoder to accurately pattern-complete and reconstruct sensory experiences with a constraint on total activity causes spatially localized firing fields, i.e., place cells, to emerge in the encoding layer. The emergent place fields reproduce key aspects of hippocampal phenomenology: a) remapping (maintenance of and reversion to distinct learned maps in different environments), implemented via repositioning of experience manifolds in the network’s hidden layer, b) orthogonality of spatial representations in different arenas, c) robust place field emergence in differently shaped rooms, with single units showing multiple place fields in large or complex spaces, and d) slow representational drift of place fields. We argue that these results arise because continuous traversal of space makes sensory experience temporally continuous. We make testable predictions: a) rapidly changing sensory context will disrupt place fields, b) place fields will form even if recurrent connections are blocked, but reversion to previously learned representations upon remapping will be abolished, c) the dimension of temporally smooth experience sets the dimensionality of place fields, including during virtual navigation of abstract spaces.
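The "spatially smoothed Gaussian random field" environments above can be sketched minimally: each sensory channel is spatial white noise blurred with a Gaussian kernel, so nearby arena locations yield similar high-dimensional sensory vectors. A rough illustration, assuming grid size, channel count, and smoothing scale of my own choosing (not the paper's actual parameters):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sensory_map(grid=64, n_features=16, smooth_sigma=4.0, seed=0):
    """Toy high-dimensional sensory experience map over a 2D arena.

    Each feature channel is white noise smoothed with a Gaussian
    kernel, so the sensory vector varies continuously with position.
    Returns an array of shape (n_features, grid, grid).
    """
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((n_features, grid, grid))
    fields = np.stack([gaussian_filter(c, smooth_sigma) for c in noise])
    # Normalize each channel to zero mean and unit variance
    fields -= fields.mean(axis=(1, 2), keepdims=True)
    fields /= fields.std(axis=(1, 2), keepdims=True)
    return fields

# Sensory input along a trajectory = the map read out at visited cells
fields = sensory_map()
observation = fields[:, 10, 20]  # 16-dim sensory vector at one location
```

Because a smooth trajectory through such a map produces temporally continuous observations, this construction operationalizes the paper's central claim that "time makes space."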
- Trading Place for Space: Increasing Location Resolution Reduces Contextual Capacity in Hippocampal Codes
  Spencer Rooke, Zhaoze Wang, Ronald W. DiTullio, and Vijay Balasubramanian
  In NeurIPS (Oral), 2024
Researchers have long theorized that animals are capable of learning cognitive maps of their environment: a simultaneous representation of context, experience, and position. Since their discovery in the early 1970s, place cells have been assumed to be the neural substrate of these maps. Individual place cells explicitly encode position and appear to encode experience and context through global, population-level firing properties. Context, in particular, appears to be encoded through remapping, a process in which subpopulations of place cells change their tuning in response to changes in sensory cues. While many studies have examined the physiological basis of remapping, the field still lacks explicit calculations of contextual capacity as a function of place field firing properties. Here, we tackle such calculations. First, we construct a geometric approach for understanding the population-level activity of place cells, assembled from known firing field statistics. We treat different contexts as low-dimensional structures embedded in the high-dimensional space of firing rates, and the distance between these structures as reflective of the discriminability of the underlying contexts. Accordingly, we investigate how changes to place cell firing properties affect the distances between representations of different environments within this rate space. Using this approach, we find that the number of contexts storable by the hippocampus grows exponentially with the number of place cells, and we calculate this exponent for environments of different sizes. We further identify a fundamental tradeoff between high-resolution encoding of position and the number of storable contexts. This tradeoff is tuned by place field width, which might explain the change in firing field scale along the dorsal-ventral axis of the hippocampus. Finally, we demonstrate that clustering of place cells near likely points of confusion, such as boundaries, increases the contextual capacity of the place system within our framework.
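The rate-space geometry this abstract describes can be illustrated with a toy calculation: Gaussian place fields define a population rate vector at each position, each context (a random assignment of field centers, as after remapping) traces out a low-dimensional manifold in rate space, and the gap between two contexts' manifolds serves as a proxy for their discriminability. A minimal sketch, with illustrative function names and parameters that are not the paper's own:

```python
import numpy as np

def population_rates(positions, centers, width):
    """Gaussian place-field tuning: rate of each cell (columns)
    at each sampled position (rows)."""
    d2 = ((positions[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def manifold_gap(ra, rb):
    """Smallest Euclidean distance in rate space between any point on
    context A's manifold and any point on context B's manifold."""
    d2 = ((ra ** 2).sum(1)[:, None] + (rb ** 2).sum(1)[None, :]
          - 2.0 * ra @ rb.T)
    return np.sqrt(max(d2.min(), 0.0))

rng = np.random.default_rng(0)
pos = rng.random((200, 2))        # sampled positions in a unit arena
ctx_a = rng.random((200, 2))      # field centers, context A
ctx_b = rng.random((200, 2))      # independent remap, context B
ra = population_rates(pos, ctx_a, width=0.1)
rb = population_rates(pos, ctx_b, width=0.1)
gap = manifold_gap(ra, rb)        # > 0: the two contexts are separable
```

Widening the fields (larger `width`) smooths each manifold and shrinks such gaps, which is one intuition for the capacity-versus-resolution tradeoff the paper quantifies.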
- Towards Neural-Fidelity in Cognitive Task Modeling via Procrustes Distance Optimization
  Yihao Li, Wenxin Che, Zhaoze Wang, Nathan Cloos, Guangyu Robert Yang, and Christopher J. Cueva
  In COSYNE, 2024
Neural networks are widely used to model neural activity in the brain. However, there is growing recognition that the similarity between models and brains depends on a number of ad hoc design choices, including the specification of appropriate model inputs, outputs, and regularization hyperparameters applied during training. To address these challenges, we present an efficient and highly generalizable approach that directly optimizes the Procrustes distance between recurrent neural network (RNN) activity and neural data with gradient descent. Directly tuning RNN connectivity via Procrustes distance minimization results in significantly better alignment between network activity and neural data, even when comparing models to neural data from experimental conditions never seen during training. Additionally, we demonstrate that neural recordings collected during different tasks, albeit from the same region (M1), can enhance zero-shot neural alignment in neural network models trained on distinct tasks. We therefore view this method as a general regularizer that leverages a small amount of neural data to select, from the huge space of potential models, those that are aligned with the brain. We further leverage our method to enable an efficient search for the inputs and outputs that yield higher neural fidelity. Notably, the refined inputs show potential for mirroring neural perceptions of task variables, and the outputs appear to align more closely with downstream neural signals.
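The objective named in the title is the standard orthogonal Procrustes distance: the residual after optimally rotating one activity matrix onto another, solved in closed form via SVD. The sketch below computes that distance for equal-width matrices; it is a generic illustration of the metric, not the paper's implementation (which differentiates through it to tune RNN connectivity):

```python
import numpy as np

def procrustes_distance(X, Y):
    """Orthogonal Procrustes distance between two activity matrices.

    X, Y: (timesteps, units) arrays with the same number of units,
    e.g. RNN hidden states and neural recordings projected to a
    common dimensionality. Returns 0 when Y is a rotation of X.
    """
    # Center each matrix and scale to unit Frobenius norm
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    X = X / np.linalg.norm(X)
    Y = Y / np.linalg.norm(Y)
    # Optimal rotation R = U @ Vt, from the SVD of Y.T @ X;
    # the minimized squared residual is then 2 - 2 * sum(singular values)
    _, s, _ = np.linalg.svd(Y.T @ X)
    return np.sqrt(max(0.0, 2.0 - 2.0 * s.sum()))
```

Because every step (centering, normalization, SVD) is differentiable almost everywhere, the same quantity can be minimized by gradient descent on network parameters, which is the regularization strategy the abstract describes.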
2023
- A Versatile Hub Model For Efficient Information Propagation And Feature Selection
  Zhaoze Wang and Junsong Wang
  2023
2022
- Computational Assessment of Glomerular Basement Membrane Width and Podocyte Foot Process Width in an Animal Model of Podocytopathy
  Aksel David Laudon*, Connor Kim*, Yingzhe Qian*, Zhaoze Wang*, Qin Ye*, Vijaya B. Kolachalama, Joel M. Henderson, and Weining Lu
  Journal of the American Society of Nephrology, 2022