Open-World HOI Synthesis
OpenHOI introduces an open-world framework for generating HOI sequences that demonstrates strong generalization across seen and unseen objects, high-level instructions, and long-horizon tasks.
Understanding and synthesizing realistic 3D hand-object interactions (HOI) is critical for applications ranging from immersive AR/VR to dexterous robotics. Existing methods struggle with generalization: they perform well on closed-set objects and predefined tasks but fail on unseen objects and open-vocabulary instructions. We introduce OpenHOI, the first framework for open-world HOI synthesis, capable of generating long-horizon manipulation sequences for novel objects guided by free-form language commands. Our approach integrates a 3D Multimodal Large Language Model (MLLM) fine-tuned for joint affordance grounding and semantic task decomposition, enabling precise localization of interaction regions (e.g., handles, buttons) and decomposition of complex instructions (e.g., "Find a water bottle and take a sip") into executable sub-tasks. To synthesize physically plausible interactions, we propose an affordance-driven diffusion model paired with a training-free physics refinement stage that minimizes hand-object penetration and optimizes affordance alignment. Evaluations across diverse scenarios demonstrate OpenHOI's superiority over state-of-the-art methods in generalizing to novel object categories, multi-stage tasks, and complex language instructions.
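To make the training-free refinement stage concrete, the sketch below shows one plausible instantiation as a gradient-based post-processing loop in PyTorch. All specifics here (the name refine_hand_pose, the nearest-point penetration proxy, the 0.5 affordance threshold, and the loss weights) are illustrative assumptions, not the paper's exact formulation; the abstract specifies only that the stage minimizes penetration and optimizes affordance alignment without additional training.

```python
import torch

def refine_hand_pose(hand_verts, obj_points, obj_normals, affordance,
                     steps=100, lr=1e-2, w_pen=1.0, w_aff=0.1):
    """Nudge diffusion-sampled hand vertices to reduce penetration and pull
    the hand toward high-affordance object points (illustrative sketch).

    hand_verts:  (H, 3) hand mesh vertices from the diffusion stage
    obj_points:  (O, 3) object point cloud
    obj_normals: (O, 3) outward unit normals at obj_points
    affordance:  (O,)   per-point affordance scores in [0, 1] from the MLLM
    """
    offset = torch.zeros_like(hand_verts, requires_grad=True)
    opt = torch.optim.Adam([offset], lr=lr)
    aff_pts = obj_points[affordance > 0.5]  # predicted interaction region
    for _ in range(steps):
        v = hand_verts + offset
        # Penetration proxy: a vertex is "inside" the object when the vector
        # from its nearest object point opposes that point's outward normal.
        dists, idx = torch.cdist(v, obj_points).min(dim=1)
        inside = ((v - obj_points[idx]) * obj_normals[idx]).sum(dim=-1) < 0
        pen_loss = (dists * inside.float()).sum()
        # Affordance alignment: every high-affordance point should have some
        # hand vertex nearby, so the grasp covers the predicted region.
        aff_loss = torch.cdist(aff_pts, v).min(dim=1).values.mean()
        loss = w_pen * pen_loss + w_aff * aff_loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (hand_verts + offset).detach()
```

Because only a per-vertex offset is optimized and the objective is differentiable, this refinement requires no learned parameters, matching the "training-free" claim; a real implementation would likely use a proper signed distance field rather than the nearest-point proxy above.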
Pipeline: Our framework comprises two sequential components. First, a 3D multimodal large language model (3D MLLM) ingests the high-level instruction and the object point cloud, generating sequential affordance maps and decomposing the instruction into a sequence of sub-tasks. Second, an affordance-driven diffusion model takes the affordance maps and the decomposed sub-task sequence as conditions to synthesize realistic hand-object interaction sequences.
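As a reading aid, here is a minimal control-flow sketch of that two-stage pipeline in Python. The interfaces (decompose, ground_affordance, sample, and chaining sub-tasks through the last pose) are hypothetical placeholders for the components described above, not a published API.

```python
def synthesize_hoi(instruction, obj_pointcloud, mllm, diffusion, refine):
    """Two-stage OpenHOI-style pipeline (hypothetical interfaces)."""
    # Stage 1: the 3D MLLM decomposes the instruction into sub-tasks.
    subtasks = mllm.decompose(instruction, obj_pointcloud)
    sequences, prev_pose = [], None
    for subtask in subtasks:
        # Stage 1 (cont.): ground the affordance map for this sub-task.
        affordance = mllm.ground_affordance(subtask, obj_pointcloud)
        # Stage 2: diffusion conditioned on the sub-task and affordance map.
        motion = diffusion.sample(subtask, affordance, init_pose=prev_pose)
        motion = refine(motion)   # training-free physics refinement
        sequences.append(motion)
        prev_pose = motion[-1]    # chain sub-tasks for long horizons
    return sequences
```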
The visualization results showcase three types of long-horizon sequences: seen-object, unseen-object, and multi-object. These experiments demonstrate that our method generalizes strongly to both unseen objects and open-vocabulary instructions, enabling open-world HOI sequence synthesis.