Best AI papers explained
A podcast by Enoch H. Kang
518 episodes
-  Causal Interpretation of Transformer Self-Attention (24.5.2025)
-  A Causal World Model Underlying Next Token Prediction: Exploring GPT in a Controlled Environment (24.5.2025)
-  Trace is the Next AutoDiff: Generative Optimization with Rich Feedback, Execution Traces, and LLMs (24.5.2025)
-  Adaptive Inference-Time Compute: LLMs Can Predict if They Can Do Better, Even Mid-Generation (24.5.2025)
-  Prompts from Reinforcement Learning (PRL) (24.5.2025)
-  Logits are All We Need to Adapt Closed Models (24.5.2025)
-  Large Language Models Are (Bayesian) Latent Variable Models: Explaining and Finding Good Demonstrations for In-Context Learning (23.5.2025)
-  Inference-Time Intervention: Eliciting Truthful Answers from a Language Model (23.5.2025)
-  From Decoding to Meta-Generation: Inference-time Algorithms for Large Language Models (23.5.2025)
-  LLM In-Context Learning as Kernel Regression (23.5.2025)
-  Personalizing LLMs via Decode-Time Human Preference Optimization (23.5.2025)
-  Almost Surely Safe LLM Inference-Time Alignment (23.5.2025)
-  Survey of In-Context Learning Interpretation and Analysis (23.5.2025)
-  Where does In-context Learning Happen in Large Language Models? (23.5.2025)
-  Auto-Differentiating Any LLM Workflow: A Farewell to Manual Prompting (22.5.2025)
-  metaTextGrad: Learning to learn with language models as optimizers (22.5.2025)
-  Semantic Operators: A Declarative Model for Rich, AI-based Data Processing (22.5.2025)
-  Isolated Causal Effects of Language (22.5.2025)
Cut through the noise. We curate and break down the most important AI papers so you don’t have to.