Papers by Joshua B Tenenbaum

14 papers found

Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models

Clemencia Siro, Guy Gur-Ari, Gaurav Mishra et al.

ICLR 2025 (oral)

Can Large Language Models Understand Symbolic Graphics Programs?

Zeju Qiu, Weiyang Liu, Haiwen Feng et al.

ICLR 2025 (poster)
28 citations

Multiagent Finetuning: Self Improvement with Diverse Reasoning Chains

Vighnesh Subramaniam, Yilun Du, Joshua B Tenenbaum et al.

ICLR 2025 (poster)

Vision CNNs trained to estimate spatial latents learned similar ventral-stream-aligned representations

Yudi Xie, Weichen Huang, Esther Alter et al.

ICLR 2025 (poster)

VisualPredicator: Learning Abstract World Models with Neuro-Symbolic Predicates for Robot Planning

Yichao Liang, Nishanth Kumar, Hao Tang et al.

ICLR 2025 (poster)

What Makes a Maze Look Like a Maze?

Joy Hsu, Jiayuan Mao, Joshua B Tenenbaum et al.

ICLR 2025 (poster)

Building Cooperative Embodied Agents Modularly with Large Language Models

Hongxin Zhang, Weihua Du, Jiaming Shan et al.

ICLR 2024 (poster)

HAZARD Challenge: Embodied Decision Making in Dynamically Changing Environments

Qinhong Zhou, Sunli Chen, Yisong Wang et al.

ICLR 2024 (poster)

Learning Grounded Action Abstractions from Language

Lio Wong, Jiayuan Mao, Pratyusha Sharma et al.

ICLR 2024 (oral)

Learning to Act from Actionless Videos through Dense Correspondences

Po-Chen Ko, Jiayuan Mao, Yilun Du et al.

ICLR 2024 (spotlight)

Learning to Jointly Understand Visual and Tactile Signals

Yichen Li, Yilun Du, Chao Liu et al.

ICLR 2024 (poster)

LILO: Learning Interpretable Libraries by Compressing and Documenting Code

Gabriel Grand, Lio Wong, Maddy Bowers et al.

ICLR 2024 (poster)

Probabilistic Adaptation of Black-Box Text-to-Video Models

Sherry Yang, Yilun Du, Bo Dai et al.

ICLR 2024 (poster)

Video Language Planning

Yilun Du, Sherry Yang, Pete Florence et al.

ICLR 2024 (poster)
144 citations