#101 DR. WALID SABA - Extrapolation, Compositionality and Learnability

MLST Discord: https://discord.gg/aNPkGUQtc5
Patreon: https://www.patreon.com/mlst
YouTube: https://youtu.be/snUf_LIfQII

We spoke with Dr. Walid Saba about whether MLP neural networks can extrapolate outside of the training support, and what it means to extrapolate in a vector space. We then discussed the concept of vagueness in cognitive science: for example, what does it mean to be "rich", and what counts as a "pile of sand"? Finally, we discussed behaviourism and the "reward is enough" hypothesis.

References:
A Spline Theory of Deep Networks [Balestriero] https://proceedings.mlr.press/v80/balestriero18b/balestriero18b.pdf

The spline-theory animation we showed was created by Ahmed Imtiaz Humayun (https://twitter.com/imtiazprio). We will be publishing an interview with Imtiaz and Randall very soon!

Timestamps:
[00:00:00] Intro
[00:00:58] Interpolation vs extrapolation
[00:24:38] Type 1 / Type 2 generalisation, compositionality, Fodor and systematicity
[00:32:18] Keith's brain teaser
[00:36:53] Neural Turing machines / discrete vs continuous / learnability
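The interpolation-vs-extrapolation question in the episode often leans on a working definition: a query point counts as interpolation when it lies inside the convex hull of the training samples, and as extrapolation otherwise. As a rough illustration only (this sketch is not from the episode, and the function name and toy data are made up for the example), here is a minimal Python test of convex-hull membership posed as a feasibility linear program:

```python
import numpy as np
from scipy.optimize import linprog

def in_convex_hull(x, points):
    """Return True if query point x lies in the convex hull of `points`.

    x is in the hull iff there exist weights w >= 0 with sum(w) = 1
    and points.T @ w = x, which we check as an LP feasibility problem.
    """
    n = points.shape[0]
    A_eq = np.vstack([points.T, np.ones((1, n))])  # convex-combination constraints
    b_eq = np.append(x, 1.0)
    res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return res.success

# Toy example: training points at the corners of the unit square in 2-D.
train = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
print(in_convex_hull(np.array([0.5, 0.5]), train))  # True  -> "interpolation"
print(in_convex_hull(np.array([2.0, 2.0]), train))  # False -> "extrapolation"
```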

About the Podcast

Welcome! We engage in fascinating discussions with pre-eminent figures in the AI field. Our flagship show covers current affairs in AI, cognitive science, neuroscience and philosophy of mind with in-depth analysis. Our approach is unrivalled in scope and rigour: we believe in intellectual diversity in AI, and we touch on all of the main ideas in the field with the hype surgically removed. MLST is run by Tim Scarfe, Ph.D. (https://www.linkedin.com/in/ecsquizor/) and features regular appearances from Dr. Keith Duggar, who holds a doctorate from MIT (https://www.linkedin.com/in/dr-keith-duggar/).