Best AI papers explained
A podcast by Enoch H. Kang
521 episodes
metaTextGrad: Learning to learn with language models as optimizers
From: May 22, 2025
Semantic Operators: A Declarative Model for Rich, AI-based Data Processing
From: May 22, 2025
Isolated Causal Effects of Language
From: May 22, 2025
Sleep-time Compute: Beyond Inference Scaling at Test-time
From: May 22, 2025
J1: Incentivizing Thinking in LLM-as-a-Judge
From: May 22, 2025
ShiQ: Bringing back Bellman to LLMs
From: May 22, 2025
Policy Learning with a Natural Language Action Space: A Causal Approach
From: May 22, 2025
Multi-Objective Preference Optimization: Improving Human Alignment of Generative Models
From: May 22, 2025
End-to-End Learning for Stochastic Optimization: A Bayesian Perspective
From: May 21, 2025
TEXTGRAD: Automatic Differentiation via Text
From: May 21, 2025
Steering off Course: Reliability Challenges in Steering Language Models
From: May 20, 2025
Past-Token Prediction for Long-Context Robot Policies
From: May 20, 2025
Recovering Coherent Event Probabilities from LLM Embeddings
From: May 20, 2025
Systematic Meta-Abilities Alignment in Large Reasoning Models
From: May 20, 2025
Predictability Shapes Adaptation: An Evolutionary Perspective on Modes of Learning in Transformers
From: May 20, 2025
Efficient Exploration for LLMs
From: May 19, 2025
Rankers, Judges, and Assistants: Towards Understanding the Interplay of LLMs in Information Retrieval Evaluation
From: May 18, 2025
Bayesian Concept Bottlenecks with LLM Priors
From: May 17, 2025
Transformers for In-Context Reinforcement Learning
From: May 17, 2025
Evaluating Large Language Models Across the Lifecycle
From: May 17, 2025
Cut through the noise. We curate and break down the most important AI papers so you don’t have to.
