Nikhil Parthasarathy

Research Scientist at Google DeepMind

About Me

Hi, I'm Nikhil, a Research Scientist at Google DeepMind (GDM)! My interests span a broad range of topics, including efficient multimodal learning, active learning, self-supervised representation learning, visual neuroscience/perception, and AI alignment with human behavior. I completed my PhD at New York University and the Flatiron Center for Computational Neuroscience under the supervision of Prof. Eero Simoncelli; my research focused on developing computational models of vision that better align with primate visual cortex and human perception. I also hold BS and MS degrees from Stanford University, where I studied a wide range of topics including signal processing, machine learning, numerical methods, optimization, and computational biology.

In my free time, I'm an avid tennis player, skier, rock climber, and classical guitarist.

Updates & Timeline

Feb 2025

We've released SigLIP 2: a new family of SoTA vision encoders for the open-source community to build on! Check out the blog post.

Feb 2025

Our work showing SoTA multimodal "distillation through data" was accepted to CVPR 2025!

Sep 2024

Our work developing a method to accelerate large-scale multimodal pretraining via online batch selection was accepted to NeurIPS 2024 Datasets and Benchmarks Track!

June 2024

Our work developing a biologically-inspired SSL model of cortical area V2 was accepted at TMLR with a Featured Certification!

Jan 2024

Started as a Research Scientist at Google DeepMind (now in the Vision Team led by Prof. Andrew Zisserman)!

Nov 2023

Defended my PhD in Neural Science at NYU!

Sep 2023

Two of our works, on learning robust and human-aligned visual representations from natural video and on in-context scene understanding, were accepted at NeurIPS 2023!

June 2022

Started as a Research Scientist Intern at Google DeepMind in the Deep Learning Team!

Selected Publications

M. Tschannen*, A. Gritsenko*, X. Wang*, M. F. Naeem*, I. Alabdulmohsin*, N. Parthasarathy*, T. Evans*, L. Beyer*, Y. Xia, B. Mustafa, O. Hénaff, J. Harmsen, A. Steiner, and X. Zhai*, "SigLIP 2: Multilingual vision-language encoders with improved semantic understanding, localization, and dense features," Technical Report, arXiv preprint arXiv:2502.14786, 2025. [Paper]

V. Udandarao*, N. Parthasarathy*, M. F. Naeem, T. Evans, S. Albanie, F. Tombari, Y. Xian, A. Tonioni, and O. J. Hénaff, "Active data curation effectively distills large-scale multimodal models," IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2025. [Paper]

T. Evans, N. Parthasarathy*, H. Merzic, and O. J. Hénaff*, "Data curation via joint example selection further accelerates multimodal learning," Advances in Neural Information Processing Systems, Track on Datasets and Benchmarks, Spotlight Award, 2024. [Paper]

N. Parthasarathy*, O. J. Hénaff, and E. P. Simoncelli, "Layerwise complexity-matched learning yields an improved model of cortical area V2," Transactions on Machine Learning Research (TMLR), Featured Certification, 2024. [Paper]

N. Parthasarathy, "Towards aligning artificial and biological vision systems with self-supervised representation learning," Ph.D. dissertation, New York University, 2024. [Thesis]

M. Kuoch*, C.-N. Chou*, N. Parthasarathy, J. Dapello, J. J. DiCarlo, H. Sompolinsky, and S. Chung, "Probing biological and artificial neural networks with task-dependent neural manifolds," Conference on Parsimony and Learning, pp. 395-418, 2024. [Paper]

N. Parthasarathy*, S. Eslami, J. Carreira, and O. Hénaff, "Self-supervised video pretraining yields robust and more human-aligned visual representations," Advances in Neural Information Processing Systems, vol. 36, pp. 65743-65765, 2023. [Paper]

I. Balazevic*, D. Steiner*, N. Parthasarathy, R. Arandjelović, and O. Hénaff*, "Towards in-context scene understanding," Advances in Neural Information Processing Systems, Spotlight Award, vol. 36, pp. 63758-63778, 2023. [Paper]

For a full list of publications, please refer to my Google Scholar profile.