PI: Alex Lamb
Research Direction: Developing learning systems that rapidly acquire knowledge and adapt to new environments
Dr. Alex Lamb is devoted to developing learning systems that rapidly acquire knowledge and adapt to new environments. His notable contributions include: (1) identifying how to infer latent state variables and their types directly from observations, with theoretical guarantees and empirical results for multi-step inverse models that predict actions from current and future observations, work of direct value to offline reinforcement learning and model-based continuous control; (2) showing how to learn effectively from limited data by constructing synthetic training examples; and (3) proposing Manifold Mixup, which trains on interpolated hidden states to boost data efficiency and achieves strong performance in few-shot learning. These results have attracted wide attention in academia and generated significant industrial impact. Dr. Lamb has published 40+ papers in top AI venues (NeurIPS, ICML, ICLR, etc.), with ~10,000 Google Scholar citations. He presented a tutorial at ICML 2023 and serves as a reviewer for NeurIPS, ICLR, ICML, AAAI, IJCAI, and UAI.
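The core mixing step of Manifold Mixup can be illustrated with a minimal sketch: two hidden representations (and their one-hot labels) are interpolated with a coefficient drawn from a Beta distribution. This is a toy illustration with made-up vectors; in the actual method the interpolation is applied at a randomly selected hidden layer inside a network during training, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)

def manifold_mixup_step(h_a, h_b, y_a, y_b, alpha=2.0):
    """Mix two hidden states and their labels (the Manifold Mixup mixing step).

    lam ~ Beta(alpha, alpha) controls how much of each example survives;
    the network would then be trained on (h_mix, y_mix) as a synthetic example.
    """
    lam = rng.beta(alpha, alpha)
    h_mix = lam * h_a + (1.0 - lam) * h_b  # interpolated hidden representation
    y_mix = lam * y_a + (1.0 - lam) * y_b  # correspondingly softened target
    return h_mix, y_mix, lam

# Toy hidden states (e.g. outputs of some intermediate layer) and one-hot labels.
h1, h2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
y1, y2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

h_mix, y_mix, lam = manifold_mixup_step(h1, h2, y1, y2)
```

Because the labels are mixed with the same coefficient as the hidden states, the soft target still sums to one, so it remains a valid distribution over classes.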
Conference and Journal Publications:
▪ Recurrent Independent Mechanisms. Anirudh Goyal, Alex Lamb, Shagun Sodhani, Jordan Hoffmann, Sergey Levine, Yoshua Bengio, Bernhard Schölkopf. ICLR 2021 Spotlight Oral.
▪ Neural Function Modules with Sparse Arguments: A Dynamic Approach to Integrating Information across Layers. Alex Lamb, Anirudh Goyal, Agnieszka Słowik, Michael Mozer, Philippe Beaudoin, Yoshua Bengio. AISTATS 2021. 29.8% Acceptance Rate.
▪ GraphMix: Regularized Training of Graph Neural Networks for Semi-Supervised Learning. Vikas Verma, Meng Qu, Alex Lamb, Yoshua Bengio, Juho Kannala, Jian Tang. AAAI 2021.
▪ Combining Top-Down and Bottom-Up Signals with Attention over Modules. Sarthak Mittal, Alex Lamb, Anirudh Goyal, Vikram Voleti, Murray Shanahan, Guillaume Lajoie, Michael Mozer, Yoshua Bengio. ICML 2020. 21.8% Acceptance Rate.
▪ KuroNet: Regularized Residual U-Nets for End-to-End Kuzushiji Character Recognition. Alex Lamb, Tarin Clanuwat, Asanobu Kitamoto. Springer-Nature Computer Science 2020.
▪ KaoKore: A Pre-modern Japanese Art Facial Expression Dataset. Yingtao Tian, Chikahiko Suzuki, Tarin Clanuwat, Mikel Bober-Irizar, Alex Lamb, Asanobu Kitamoto. ICCC 2020.
▪ SketchTransfer: A New Dataset for Exploring Detail-Invariance and the Abstractions Learned by Deep Networks. Alex Lamb, Sherjil Ozair, Vikas Verma, David Ha. WACV 2020. 34.6% Acceptance Rate.
▪ On Adversarial Mixup Resynthesis. Chris Beckham, Sina Honari, Alex Lamb, Vikas Verma, Farnoosh Ghadiri, R. Devon Hjelm, Christopher Pal. NeurIPS 2019. 21.2% Acceptance Rate.
▪ State-Reification Networks: Improving Generalization by Modeling the Distribution of Hidden Representations. Alex Lamb, Jonathan Binas, Anirudh Goyal, Sandeep Subramanian, Ioannis Mitliagkas, Denis Kazakov, Yoshua Bengio, Michael C Mozer. ICML 2019. Long Oral, 5.0% Acceptance Rate.
▪ Manifold Mixup: Learning Better Representations by Interpolating Hidden States. Alex Lamb*, Vikas Verma*, Christopher Beckham, Amir Najafi, Aaron Courville, Ioannis Mitliagkas, David Lopez-Paz, Yoshua Bengio. ICML 2019. 22.6% Acceptance Rate.
▪ Interpolated Adversarial Training: Achieving Robust Neural Networks without Sacrificing too much Accuracy. Alex Lamb*, Vikas Verma*, David Lopez-Paz. AISec 2019. 23.8% Acceptance Rate.
▪ KuroNet: Pre-Modern Japanese Kuzushiji Character Recognition with Deep Learning. Alex Lamb*, Tarin Clanuwat*, Asanobu Kitamoto. ICDAR 2019. Oral, 12.9% Acceptance Rate.
▪ Interpolation Consistency Training for Semi-Supervised Learning. Vikas Verma, Alex Lamb, Juho Kannala, Yoshua Bengio, David Lopez-Paz. IJCAI 2019. 17.9% Acceptance Rate.
▪ End-to-End Pre-Modern Japanese Character (Kuzushiji) Spotting with Deep Learning. Tarin Clanuwat, Alex Lamb, Asanobu Kitamoto. Information Processing Society of Japan Conference on Digital Humanities 2018. Best Paper Award (1/60 accepted papers).
▪ GibbsNet: Iterative Adversarial Inference for Deep Graphical Models. Alex Lamb, Devon Hjelm, Yaroslav Ganin, Joseph Paul Cohen, Aaron Courville, Yoshua Bengio. NeurIPS 2017.
▪ Adversarially Learned Inference. Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martin Arjovsky, Olivier Mastropietro, Aaron Courville. ICLR 2017.
▪ Professor Forcing: A New Algorithm for Training Recurrent Networks. Alex Lamb*, Anirudh Goyal*, Ying Zhang, Saizheng Zhang, Aaron Courville, Yoshua Bengio. NeurIPS 2016.
▪ Separating Fact from Fear: Tracking Flu Infections on Twitter. Alex Lamb, Michael J. Paul, Mark Dredze. NAACL 2013.
Pre-print and Workshop Papers:
▪ Deep Learning for Classical Japanese Literature. Tarin Clanuwat, Mikel Bober-Irizar, Asanobu Kitamoto, Alex Lamb, Kazuaki Yamamoto, David Ha. NeurIPS Creativity Workshop 2019.
▪ Fortified Networks: Improving the Robustness of Deep Networks by Modeling the Manifold of Hidden Representations. Alex Lamb, Jonathan Binas, Anirudh Goyal, Dzmitry Serdyuk, Sandeep Subramanian, Ioannis Mitliagkas, Yoshua Bengio. arXiv.
▪ Learning Generative Models with Locally Disentangled Latent Factors. Alex Lamb*, Brady Neal*, Sherjil Ozair, Aaron Courville, Yoshua Bengio, Ioannis Mitliagkas. arXiv.
▪ ACtuAL: Actor-Critic Under Adversarial Learning. Anirudh Goyal, Nan Rosemary Ke, Alex Lamb, Devon Hjelm, Chris Pal, Joelle Pineau, Yoshua Bengio. arXiv.
▪ Demand Forecasting Via Direct Quantile Loss Optimization. Kari Torkkola, Ru He, Wen-Yu Hua, Alex Lamb, Murali Balakrishnan Narayanaswamy, Zhihao Cen. US Patent. P36059-US.
▪ Discriminative Regularization for Generative Models. Alex Lamb, Vincent Dumoulin, Aaron Courville. CVPR Deepvision Workshop 2016.
▪ Variance Reduction in SGD by Distributed Importance Sampling. Guillaume Alain, Alex Lamb, Chinnadhurai Sankar, Aaron Courville, Yoshua Bengio. ICLR Workshop 2016.
▪ Investigating Twitter as a Source for Studying Behavioral Responses to Epidemics. Alex Lamb, Michael J. Paul, Mark Dredze. AAAI Fall Symposium on Information Retrieval and Knowledge Discovery in Biomedical Text 2012.