MEIcoder: Decoding Visual Stimuli from Neural Activity by Leveraging Most Exciting Inputs

Abstract

Decoding visual stimuli from neural population activity is crucial for understanding the brain and for applications in brain-machine interfaces. However, such biological data is often scarce, particularly in primates or humans, where high-throughput recording techniques such as two-photon imaging remain challenging or impossible to apply. This, in turn, poses a challenge for deep learning decoding techniques. To overcome this, we introduce MEIcoder, a biologically informed decoding method that leverages neuron-specific most exciting inputs (MEIs), a structural similarity index measure (SSIM) loss, and adversarial training. MEIcoder achieves state-of-the-art performance in reconstructing visual stimuli from single-cell activity in the primary visual cortex (V1), excelling especially on small datasets with fewer recorded neurons. In ablation studies, we demonstrate that MEIs are the main drivers of performance, and in scaling experiments, we show that MEIcoder can reconstruct high-fidelity, natural-looking images from as few as 1,000-2,500 neurons and fewer than 1,000 training data points. We also propose a unified benchmark with over 160,000 samples to foster future research. Our results demonstrate the feasibility of reliable decoding in the early visual system and provide practical insights for neuroscience and neuroengineering applications.
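
For readers wanting a concrete picture of the MEI component, below is a minimal PyTorch sketch of how a most exciting input can be computed: gradient ascent on the input of a trained encoding model so as to maximize one neuron's predicted response. All names, shapes, and hyperparameters here are illustrative assumptions, not MEIcoder's actual implementation.

```python
import torch

def compute_mei(encoder, neuron_idx, input_shape=(1, 1, 36, 64),
                steps=1000, lr=0.05, norm_budget=10.0):
    """Sketch of a most exciting input (MEI) for a single neuron.

    `encoder` is assumed to be a trained encoding model mapping an image
    batch to predicted neural responses of shape (batch, n_neurons).
    The input shape, step count, learning rate, and norm budget are
    placeholder values, not the paper's settings.
    """
    encoder.eval()
    mei = (torch.randn(input_shape) * 0.01).requires_grad_(True)
    optimizer = torch.optim.Adam([mei], lr=lr)

    for _ in range(steps):
        optimizer.zero_grad()
        response = encoder(mei)[0, neuron_idx]
        (-response).backward()  # ascend the neuron's predicted response
        optimizer.step()
        # Project the image back into a fixed-norm ball so the optimized
        # stimulus stays within a plausible contrast range.
        with torch.no_grad():
            norm = mei.norm()
            if norm > norm_budget:
                mei.mul_(norm_budget / norm)

    return mei.detach()
```

How the resulting MEIs are then injected into the decoder is specific to MEIcoder and not shown here; this sketch only covers their construction from an encoding model.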

Publication
The Thirty-ninth Annual Conference on Neural Information Processing Systems (NeurIPS 2025)
Jan Sobotka
CS Master’s Student & AI/ML Research Assistant

I am a master’s student in computer science at EPFL and a research assistant in the Autonomous Systems Group at the University of Texas at Austin. I am interested in reinforcement learning, (mechanistic) interpretability, and meta-learning.