Jan Sobotka

CS Master’s Student & AI/ML Research Assistant

Swiss Federal Institute of Technology in Lausanne (EPFL)

About me

My name is Jan. I am a master's student in computer science at EPFL and a research assistant in the Autonomous Systems Group at the University of Texas at Austin, where I work on Large Language Models (LLMs) in strategy games. I also conduct research on world models for reinforcement learning agents at the Biorobotics Laboratory.

At a high level, I am interested in (1) understanding how our mind and cognition work and (2) building machines that can perceive, think, and learn. This intersection of artificial intelligence and cognitive computational neuroscience excites me the most. Currently, my primary research interests include the following areas:

  • (Mechanistic) interpretability
  • Reinforcement learning
  • Meta-learning
  • Machine learning applications in brain-computer interfaces and neuroprosthetics

I am always happy to discuss these topics, so if you have related thoughts or questions, please do not hesitate to contact me.

Actively looking for a 6-month master’s thesis internship in industry (Start: Feb 2026).


Interests
  • (Mechanistic) interpretability
  • Reinforcement learning
  • Meta-learning
  • Machine learning applications in brain-computer interfaces and neuroprosthetics
Education
  • Master's degree in Computer Science, 2024 - 2026

    Swiss Federal Institute of Technology in Lausanne (EPFL)

  • Bachelor's degree in Informatics, Specialization in Artificial Intelligence, 2021 - 2024

    Czech Technical University in Prague

Recent Publications & Preprints

(2025). MEIcoder: Decoding Visual Stimuli from Neural Activity by Leveraging Most Exciting Inputs. Conference on Neural Information Processing Systems (NeurIPS 2025).

(2025). Weak-to-Strong Generalization under Distribution Shifts. Conference on Neural Information Processing Systems (NeurIPS 2025).

(2025). Reverse-Engineering Memory in DreamerV3: From Sparse Representations to Functional Circuits. Conference on Neural Information Processing Systems (NeurIPS 2025, Spotlight at Mechanistic Interpretability Workshop).

(2024). Enhancing Fractional Gradient Descent with Learned Optimizers. Optimization Letters (Springer).

(2024). Investigation into the Training Dynamics of Learned Optimizers. 16th International Conference on Agents and Artificial Intelligence (ICAART 2024).

(2024). Investigation into Training Dynamics of Learned Optimizers (Student Abstract). The 38th Annual AAAI Conference on Artificial Intelligence (AAAI-24).
