Generally Intelligent
A podcast by Kanjun Qiu
37 episodes
Episode 17: Andrew Lampinen, DeepMind, on symbolic behavior, mental time travel, and insights from psychology
Published: 28.2.2022
Episode 16: Yilun Du, MIT, on energy-based models, implicit functions, and modularity
Published: 21.12.2021
Episode 15: Martín Arjovsky, INRIA, on benchmarks for robustness and geometric information theory
Published: 15.10.2021
Episode 14: Yash Sharma, MPI-IS, on generalizability, causality, and disentanglement
Published: 24.9.2021
Episode 13: Jonathan Frankle, MIT, on the lottery ticket hypothesis and the science of deep learning
Published: 10.9.2021
Episode 12: Jacob Steinhardt, UC Berkeley, on machine learning safety, alignment and measurement
Published: 18.6.2021
Episode 11: Vincent Sitzmann, MIT, on neural scene representations for computer vision and more general AI
Published: 20.5.2021
Episode 10: Dylan Hadfield-Menell, UC Berkeley/MIT, on the value alignment problem in AI
Published: 12.5.2021
Episode 09: Drew Linsley, Brown, on inductive biases for vision and generalization
Published: 2.4.2021
Episode 08: Giancarlo Kerg, Mila, on approaching deep learning from mathematical foundations
Published: 27.3.2021
Episode 07: Yujia Huang, Caltech, on neuro-inspired generative models
Published: 18.3.2021
Episode 06: Julian Chibane, MPI-INF, on 3D reconstruction using implicit functions
Published: 5.3.2021
Episode 05: Katja Schwarz, MPI-IS, on GANs, implicit functions, and 3D scene understanding
Published: 24.2.2021
Episode 04: Joel Lehman, OpenAI, on evolution, open-endedness, and reinforcement learning
Published: 17.2.2021
Episode 03: Cinjon Resnick, NYU, on activity and scene understanding
Published: 1.2.2021
Episode 02: Sarah Jane Hong, Latent Space, on neural rendering & research process
Published: 7.1.2021
Episode 01: Kelvin Guu, Google AI, on language models & overlooked research problems
Published: 15.12.2020
Technical discussions with deep learning researchers who study how to build intelligence. Made for researchers, by researchers.
