LfVoid: Can Pre-Trained Text-to-Image Models Generate Visual Goals for Reinforcement Learning?
Abstract
Pre-trained text-to-image generative models can produce diverse, semantically rich, and realistic images from natural language descriptions. Compared with language, images usually convey information with more detail and less ambiguity. In this study, we propose Learning from the Void (LfVoid), a method that leverages the power of pre-trained text-to-image models and advanced image editing techniques to guide robot learning. Given natural language instructions, LfVoid can edit the original observations to obtain goal images, such as “wiping” a stain off a table. Subsequently, LfVoid trains an ensembled goal discriminator on the generated images to provide reward signals for a reinforcement learning agent, guiding it to achieve the goal. The ability of LfVoid to learn with zero in-domain training on expert demonstrations or true goal observations (the void) is attributed to the utilization of knowledge from web-scale generative models. We evaluate LfVoid across three simulated tasks and validate its feasibility in the corresponding real-world scenarios. In addition, we offer insights into the key considerations for the effective integration of visual generative models into robot learning workflows. We posit that our work represents an initial step towards the broader application of pre-trained visual generative models in the robotics field.
Main Figure
Videos
In this section, we show videos of the reward curves obtained from the trained classifiers of LfVoid for several success and failure trajectories. These classifiers use the edited images produced by LfVoid as positive samples and observations from the replay buffer as negative samples. For each trajectory, the reward and the aligned observation at each timestep are provided.

As the visualizations show, our classifier assigns a monotonically increasing reward curve to the successful demonstrations, while the reward curve for failure trajectories remains nearly flat until the end. These videos demonstrate LfVoid's applicability to real-world robotic tasks.
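To make the reward mechanism above concrete, the following is a minimal, illustrative sketch of a goal-discriminator ensemble: each member is trained to separate positive samples (edited goal images, here represented as feature vectors) from negative samples (replay-buffer observations), and the reward for an observation is the ensemble-mean probability of being a goal. The class name, logistic-regression architecture, and all hyperparameters are hypothetical simplifications, not the paper's actual implementation.

```python
import numpy as np

class GoalDiscriminatorEnsemble:
    """Illustrative ensemble of binary goal classifiers (hypothetical design).

    Positives: features of edited goal images; negatives: features of
    replay-buffer observations. Reward = mean predicted goal probability.
    """

    def __init__(self, feat_dim, n_members=3, lr=0.1, epochs=200, seed=0):
        rng = np.random.default_rng(seed)
        # Each ensemble member is a logistic-regression head with its own init.
        self.weights = [rng.normal(0.0, 0.01, feat_dim) for _ in range(n_members)]
        self.biases = [0.0] * n_members
        self.lr, self.epochs = lr, epochs

    @staticmethod
    def _sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def fit(self, positives, negatives):
        X = np.vstack([positives, negatives])
        y = np.concatenate([np.ones(len(positives)), np.zeros(len(negatives))])
        for i in range(len(self.weights)):
            w, b = self.weights[i], self.biases[i]
            for _ in range(self.epochs):
                # Gradient descent on binary cross-entropy.
                p = self._sigmoid(X @ w + b)
                w -= self.lr * (X.T @ (p - y)) / len(y)
                b -= self.lr * float(np.mean(p - y))
            self.weights[i], self.biases[i] = w, b

    def reward(self, obs):
        # Reward for one observation: ensemble-mean goal probability.
        probs = [self._sigmoid(obs @ w + b)
                 for w, b in zip(self.weights, self.biases)]
        return float(np.mean(probs))
```

Evaluated along a trajectory, this reward rises as observations approach the goal distribution and stays low otherwise, matching the success/failure curves shown in the videos below.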
![]() Push Success |
![]() Push Failure |
![]() LED Success |
![]() LED Failure |
![]() Wipe Success |
![]() Wipe Failure |