Best AI papers explained
A podcast by Enoch H. Kang
518 episodes
-  Learning How Hard to Think: Input-Adaptive Allocation of LM Computation (May 26, 2025)
-  Highlighting What Matters: Promptable Embeddings for Attribute-Focused Image Retrieval (May 26, 2025)
-  UFT: Unifying Supervised and Reinforcement Fine-Tuning (May 26, 2025)
-  Understanding High-Dimensional Bayesian Optimization (May 26, 2025)
-  Inference time alignment in continuous space (May 25, 2025)
-  Efficient Test-Time Scaling via Self-Calibration (May 25, 2025)
-  Conformal Prediction via Bayesian Quadrature (May 25, 2025)
-  Predicting from Strings: Language Model Embeddings for Bayesian Optimization (May 25, 2025)
-  Self-Evolving Curriculum for LLM Reasoning (May 25, 2025)
-  Online Decision-Focused Learning in Dynamic Environments (May 25, 2025)
-  FisherSFT: Data-Efficient Supervised Fine-Tuning of Language Models Using Information Gain (May 25, 2025)
-  Reward Shaping from Confounded Offline Data (May 25, 2025)
-  Trajectory Bellman Residual Minimization: A Simple Value-Based Method for LLM Reasoning (May 25, 2025)
-  Understanding Best-of-N Language Model Alignment (May 25, 2025)
-  Maximizing Acquisition Functions for Bayesian Optimization - and its relation to Gradient Descent (May 24, 2025)
-  Bayesian Prompt Ensembles: Model Uncertainty Estimation for Black-Box Large Language Models (May 24, 2025)
-  Prompting Strategies for Enabling Large Language Models to Infer Causation from Correlation (May 24, 2025)
-  The Parallel Knowledge Gradient Method for Batch Bayesian Optimization (May 24, 2025)
-  FunBO: Discovering Acquisition Functions for Bayesian Optimization with FunSearch (May 24, 2025)
-  Automated Social Science: A Structural Causal Model-Based Approach (May 24, 2025)
Cut through the noise. We curate and break down the most important AI papers so you don’t have to.