FreqPolicy: Frequency Autoregressive Visuomotor Policy with Continuous Tokens

1ShanghaiTech University, Shanghai, China
2The University of Hong Kong, Hong Kong
3Nanyang Technological University
4The Chinese University of Hong Kong, Hong Kong
{zhongym2024, mayuexin}@shanghaitech.edu.cn
+Indicates Corresponding Author

Video Presentation and Real-World Experiments

Abstract

Learning effective visuomotor policies for robotic manipulation is challenging, as it requires generating precise actions while maintaining computational efficiency. Existing methods remain unsatisfactory due to inherent limitations in the essential action representation and the basic network architectures. We observe that representing actions in the frequency domain captures the structured nature of motion more effectively: low-frequency components reflect global movement patterns, while high-frequency components encode fine local details. Additionally, robotic manipulation tasks of varying complexity demand different levels of modeling precision across these frequency bands. Motivated by this, we propose a novel paradigm for visuomotor policy learning that progressively models hierarchical frequency components. To further enhance precision, we introduce continuous latent representations that maintain smoothness and continuity in the action space. Extensive experiments across diverse 2D and 3D robotic manipulation benchmarks demonstrate that our approach outperforms existing methods in both accuracy and efficiency, showcasing the potential of a frequency-domain autoregressive framework with continuous tokens for generalized robotic manipulation.

Frequency Domain Analysis


(a) Heat maps of frequency-band energy across action dimensions for different tasks. The top row shows Adroit tasks with high-dimensional actions (26 dimensions), while the bottom row shows Robomimic tasks with low-dimensional actions (10 dimensions). (b) Success rate of actions reconstructed with varying frequency ratios: we reconstruct action sequences from different proportions of frequency components and evaluate their success rates on the original tasks.
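The reconstruction analysis in (b) can be sketched as follows. This is a minimal illustration on a synthetic trajectory using SciPy's orthonormal DCT, not the paper's evaluation code; the signal, noise level, and frequency ratios are chosen only for demonstration.

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(0)

# Synthetic action sequence: T timesteps x D action dimensions.
T, D = 64, 10
t = np.linspace(0.0, 1.0, T)[:, None]
actions = np.sin(2 * np.pi * t) + 0.05 * rng.standard_normal((T, D))

def reconstruct(actions, ratio):
    """Keep only the lowest `ratio` fraction of DCT coefficients per dimension."""
    coeffs = dct(actions, axis=0, norm="ortho")
    k = max(1, int(round(ratio * actions.shape[0])))
    coeffs[k:] = 0.0                      # zero out the high-frequency bands
    return idct(coeffs, axis=0, norm="ortho")

for ratio in (0.1, 0.25, 0.5, 1.0):
    err = ((actions - reconstruct(actions, ratio)) ** 2).mean()
    print(f"ratio={ratio:.2f}  reconstruction mse={err:.5f}")
```

Because the DCT is orthonormal, the reconstruction error equals the energy of the discarded coefficients, so the error shrinks monotonically as the frequency ratio grows; on smooth trajectories most of the energy sits in the lowest bands.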

Method


Overview of FreqPolicy, showing the training (a) and inference (b) procedures. FreqPolicy transforms action trajectories into the frequency domain via the DCT, learns latent codes for actions at different frequency levels, and reconstructs actions through masked prediction and a diffusion-based decoder. This enables robust, frequency-aware, high-fidelity robotic action generation.
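The coarse-to-fine factorization in the overview can be illustrated with a hierarchical split of DCT coefficients. The octave-style band boundaries and the function names below are assumptions for illustration only, not the paper's tokenizer.

```python
import numpy as np
from scipy.fft import dct, idct

def frequency_levels(actions, n_levels=4):
    """Split per-dimension DCT coefficients into bands, coarse to fine."""
    T = actions.shape[0]
    coeffs = dct(actions, axis=0, norm="ortho")
    # Octave-style boundaries, e.g. T=64 -> [0, 8, 16, 32, 64] (an assumption).
    bounds = [0] + [T >> (n_levels - 1 - i) for i in range(n_levels)]
    return [coeffs[lo:hi] for lo, hi in zip(bounds[:-1], bounds[1:])]

def partial_decode(levels, upto, T):
    """Reconstruct an action sequence from only the first `upto` levels."""
    coeffs = np.zeros((T, levels[0].shape[1]))
    pos = 0
    for lvl in levels[:upto]:
        coeffs[pos:pos + len(lvl)] = lvl
        pos += len(lvl)
    return idct(coeffs, axis=0, norm="ortho")

# A smooth random trajectory: adding finer levels refines the reconstruction.
acts = np.cumsum(np.random.default_rng(0).standard_normal((64, 7)), axis=0)
levels = frequency_levels(acts)
for k in range(1, 5):
    err = ((acts - partial_decode(levels, k, 64)) ** 2).mean()
    print(f"levels used={k}  mse={err:.5f}")
```

Each band here plays the role of one autoregressive step: the first level captures the global movement pattern, and later levels add progressively finer local detail.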

Training Algorithm

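The shape of one training step implied by the overview can be sketched as below: actions become continuous frequency-domain tokens, and the model is asked to predict the finer bands beyond a frequency cutoff. The zero predictor is a deliberate placeholder; in the paper this role is played by a network conditioned on observations with a diffusion-based decoder, which this sketch does not implement.

```python
import numpy as np
from scipy.fft import dct

rng = np.random.default_rng(1)

def training_step(actions, cutoff=None):
    """One placeholder step: regress fine-band tokens past a frequency cutoff."""
    tokens = dct(actions, axis=0, norm="ortho")   # continuous tokens
    T = len(tokens)
    if cutoff is None:
        cutoff = int(rng.integers(1, T))          # random frequency-level split
    target = tokens[cutoff:]                      # fine bands to be predicted
    # Placeholder predictor: the paper conditions a network on observations
    # and the coarse bands, decoding continuous tokens with a diffusion head.
    pred = np.zeros_like(target)
    return float(np.mean((pred - target) ** 2))

acts = np.cumsum(rng.standard_normal((64, 7)), axis=0)
print("placeholder loss:", training_step(acts))
```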

Inference Algorithm

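Inference proceeds coarse to fine: each frequency level is generated conditioned on all coarser levels, and the final coefficient stack is decoded back to an action sequence. The sketch below assumes the same octave-style bands as above; `sample_level` is a stub standing in for the paper's masked prediction and diffusion decoding.

```python
import numpy as np
from scipy.fft import idct

def generate(sample_level, T, D, n_levels=4):
    """Fill frequency bands coarse to fine, then decode to an action sequence."""
    coeffs = np.zeros((T, D))
    bounds = [0] + [T >> (n_levels - 1 - i) for i in range(n_levels)]
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        # Each level sees all coarser levels generated so far via `coeffs`.
        coeffs[lo:hi] = sample_level(coeffs, lo, hi)
    return idct(coeffs, axis=0, norm="ortho")

# Stub sampler: random coefficients whose magnitude decays toward high bands.
rng = np.random.default_rng(2)
traj = generate(
    lambda c, lo, hi: rng.standard_normal((hi - lo, c.shape[1])) / (1.0 + lo),
    T=64, D=10)
print("generated trajectory shape:", traj.shape)
```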

3D Simulation Results


Main results on 48 3D simulation tasks, averaged over tasks. * denotes results we reproduced using the same expert demonstrations for a fair comparison.

2D Simulation Results


Comparison between discrete and continuous methods on 2D tasks.

Generalization Results


Generalization Results. Comparison on unseen DexArt test data.

Discussion on VLA


Multitask results on RoboCasa. Results for BC-Transformer, Diffusion Policy, and GR00T-N1 are taken from the GR00T-N1 paper.

Simulation Video

Adroit Hammer

Adroit Door

Adroit Pen

DexArt Bucket

DexArt Faucet

DexArt Laptop

DexArt Toilet

Coffee Press Button

Close Double Door

Open Double Door

Open Single Door

Close Drawer

Open Drawer

Navigate Kitchen

Turn Sink Spout

Turn On Sink Faucet

CoffeeSetupMug

CoffeeServeMug

PushT

PnPStoveToCounter

Assembly

Bin Picking

Box Close

Coffee Pull

Coffee Push

Disassemble

Drawer Open

Hammer

Hand Insert

Handle Press

Lever Pull

Peg Insert Side

Peg Unplug Side

Pick Out Hole

Pick Place Wall

Pick Place

Push Wall

Push

Shelf Place

Soccer

Stick Pull

Sweep