PI: Lu Mi
Research Direction: The Convergence and Empowerment of AI and Neuroscience, and Brain-Computer Interface Algorithms
- Computational Neuroscience (AI for Neuroscience): Developing computational models and analytical tools for neuroscience, utilizing interpretable artificial neural networks to analyze multimodal neural data and simulate brain phenomena. Constructing large-scale models such as BrainGPT for digital twin brains. Empowering scientific research and data-driven discovery through AI, and introducing tools like large language models to develop scientific agents. These agents assist in digital simulation of neuroscience experiments, hypothesis testing, and experimental design, thereby accelerating the discovery of mechanisms of encoding, computation, and learning in the brain.
- Neuromorphic Intelligence (Neuroscience for AI): Investigating existing large models and intelligent agents in AI through the research paradigms and perspectives of neuroscience, including conducting interpretability and mechanistic studies and comparing them with known brain mechanisms. Further developing brain-inspired AI frameworks by imitating the brain's encoding, computation, and learning methods, and drawing on its characteristics such as sparsity, plasticity, diversity, and modularity.
- Brain-Computer Interface (BCI): Engineering applications of brain-computer interfaces, developing computational tools to decode brain signals into behavioral actions, visual images, and speech.
Promoting the integration of biological and artificial neural networks, and accelerating scientific discoveries in brain research through AI. Utilizing AI algorithms to collect, process, analyze, and interpret large-scale, high-dimensional, multimodal brain data, and to uncover new mechanisms of brain encoding, computation, and learning at the level of single neurons and their synaptic connections.
- Publications and Honors:
- A total of 15 papers have been published in top international conferences on AI and computational neuroscience, such as NeurIPS, ICLR, ICML, CVPR, AAAI, MICCAI, and Cosyne. Among them, 11 papers were authored as first author, co-first author, or corresponding author. Additionally, one U.S. patent has been obtained, and several honors have been received, including EECS Rising Star, Shanahan Fellowship, MathWorks Fellowship, NIH Award, and Most Creative Applications of AI. Invited oral presentations have been given at multiple academic seminars in the United States.
- Media Coverage and Collaboration:
- Research achievements have been covered by MIT News and Forbes. The work has gained widespread attention and application in neuroscience research, with multiple citations in significant neuroscience publications in Nature and its sub-journals in recent years. Active collaborations have been established with researchers from top institutions such as Harvard University, MIT, the University of Washington, and the Allen Institute for Brain Science.
1. Concept-Based Unsupervised Domain Adaptation.
Xinyue Xu, Yueying Hu, Hui Tang, Yi Qin, Lu Mi, Hao Wang, Xiaomeng Li.
The 42nd International Conference on Machine Learning (ICML 2025).
2. NetFormer: An Interpretable Model for Recovering Dynamical Connectivity in Neuronal Population Dynamics.
Ziyu Lu*, Wuwei Zhang*, Trung Le, Hao Wang, Uygar Sümbül, Eric Shea-Brown, Lu Mi.
The 13th International Conference on Learning Representations (ICLR 2025), Spotlight (Top 5.1%).
3. Active Learning of Two-Photon Holographic Stimulation for Identifying Neural Population Dynamics.
Andrew Wagenmaker*, Lu Mi*, Marton Rozsa, Matthew Storm Bull, Karel Svoboda, Kayvon Daie, Matthew D. Golub, Kevin Jamieson.
The 38th Conference on Neural Information Processing Systems (NeurIPS 2024).
4. Energy-Based Concept Bottleneck Models.
Xinyue Xu, Yi Qin, Lu Mi, Hao Wang, Xiaomeng Li.
The 12th International Conference on Learning Representations (ICLR 2024).
5. Learning Time-Invariant Representations for Individual Neurons from Population Dynamics.
Lu Mi*, Trung Le*, Tianxing He, Eli Shlizerman, Uygar Sümbül.
The 37th Conference on Neural Information Processing Systems (NeurIPS 2023).
6. Connectome-Constrained Latent Variable Model of Whole-Brain Neural Activity.
Lu Mi, Richard Xu, Sridhama Prakhya, Albert Lin, Nir Shavit, Aravinthan D.T. Samuel, Srinivas C. Turaga.
The 10th International Conference on Learning Representations (ICLR 2022).
7. Training-Free Uncertainty Estimation for Dense Regression: Sensitivity as a Surrogate.
Lu Mi, Hao Wang, Yonglong Tian, Hao He, Nir Shavit.
The 36th AAAI Conference on Artificial Intelligence (AAAI 2022).
8. HDMapGen: A Hierarchical Graph Generative Model of High Definition Maps.
Lu Mi, Hang Zhao, Charlie Nash, Xiaohan Jin, Jiyang Gao, Chen Sun, Cordelia Schmid, Nir Shavit, Yuning Chai, Dragomir Anguelov.
The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2021).
9. Learning Guided Electron Microscopy with Active Acquisition.
Lu Mi, Hao Wang, Yaron Meirovitch, Richard Schalek, Srinivas C. Turaga, Jeff W. Lichtman, Aravinthan D. T. Samuel, Nir Shavit.
The 23rd International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI 2020).
10. Cross-Classification Clustering: An Efficient Multi-Object Tracking Technique for 3-D Instance Segmentation in Connectomics.
Yaron Meirovitch*, Lu Mi*, Hayk Saribekyan, Alexander Matveev, David Rolnick, Nir Shavit.
The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2019).