
Allen Institute of Artificial Intelligence
AI for the Common Good.
This is the 11th launch from Allen Institute of Artificial Intelligence.

MolmoAct 2
Launching today
MolmoAct 2 is an open Action Reasoning Model that reasons in 3D before directing robot actions, handles bimanual tasks without per-task fine-tuning, and runs up to 37x faster than MolmoAct.
For robotics researchers and ML engineers.
700 hours of bimanual robot demonstrations, all open, is the kind of training resource the robotics field has been missing.
What it is: MolmoAct 2 is an open Action Reasoning Model from Ai2 that reasons in 3D before directing physical robot actions, trained in part on the MolmoAct 2-Bimanual YAM dataset, the largest open-source bimanual robotics dataset released to date.
Most robotics foundation models are trained on proprietary data that no one outside the lab can inspect or build on. That makes reproducing results nearly impossible and limits who can meaningfully contribute to the field.
Ai2 built MolmoAct 2 differently, starting with the data. The MolmoAct 2-Bimanual YAM dataset covers 700 hours of two-arm manipulation demonstrations: folding towels, scanning groceries, clearing tables, charging smartphones, and more. It contains over 30 times the robot data used to train the original MolmoAct.
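For a sense of what working with the release might look like, here is a minimal sketch of streaming the demonstrations with the Hugging Face datasets library. The repo ID below is a guess for illustration, not a confirmed path, and the actual field names will depend on the published schema.

```python
from datasets import load_dataset

# Stream rather than download: 700 hours of demonstrations is a lot of data.
# NOTE: "allenai/MolmoAct-2-Bimanual-YAM" is a hypothetical repo ID.
ds = load_dataset("allenai/MolmoAct-2-Bimanual-YAM", split="train", streaming=True)

# Peek at the first few records to see what fields each step carries
# (e.g. camera frames, language instruction, joint states).
for i, step in enumerate(ds):
    print(sorted(step.keys()))
    if i == 2:
        break
```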
What makes it different: Bimanual capability is baked into the base model rather than added through per-task fine-tuning. The language annotations were reannotated to increase unique instruction labels from 71,000 to around 146,000, which makes the model more robust to real-world phrasing variation.
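To see why the unique-label count matters, here is a toy sketch, emphatically not Ai2's reannotation pipeline, of how varied surface phrasings multiply the instruction labels attached to a single underlying task:

```python
import itertools

# Toy vocabulary: several surface phrasings of the same underlying command.
VERBS = ["fold", "neatly fold", "please fold"]
OBJECTS = ["the towel", "the blue towel", "the towel on the left"]

def paraphrases(verbs, objects):
    """Yield each verb/object combination as a distinct instruction string."""
    for verb, obj in itertools.product(verbs, objects):
        yield f"{verb} {obj}"

unique = sorted(set(paraphrases(VERBS, OBJECTS)))
print(len(unique), "unique instruction labels for one task")  # -> 9
```

A policy trained only on one canonical phrasing per task tends to overfit to it; spreading each demonstration across many label variants is what buys robustness to how people actually phrase requests.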
The dataset was supplemented with a broader mix covering different arms, camera setups, and control schemes so the model generalises beyond the training hardware.
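A hypothetical sketch of what such a supplementary mix could look like as a sampling config. The source names and weights below are invented; the launch only names the axes of variation (arms, cameras, control schemes).

```python
import random

# Invented source names and weights, for illustration only.
MIXTURE = {
    "bimanual_yam": 0.60,     # the 700h two-arm YAM demonstrations
    "single_arm_misc": 0.25,  # other arm types and camera rigs
    "cross_control": 0.15,    # alternative control schemes
}
assert abs(sum(MIXTURE.values()) - 1.0) < 1e-9  # weights form a distribution

def sample_source(rng: random.Random) -> str:
    """Draw a data source with probability proportional to its mixture weight."""
    sources, weights = zip(*MIXTURE.items())
    return rng.choices(sources, weights=weights, k=1)[0]

rng = random.Random(0)
print([sample_source(rng) for _ in range(5)])
```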
Key features:
700-hour MolmoAct 2-Bimanual YAM dataset, fully open
Native bimanual manipulation without per-task fine-tuning
Reannotated language instructions for phrasing robustness
MolmoAct 2-Think variant with adaptive depth perception tokens (see the sketch after this list)
Reference hardware setup published: YAM arms, overhead and close-up cameras, tabletop workspace
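On the depth-token point above: a hedged sketch of the reason-then-act staging, in which a model completion is split into depth, visual-trace, and action segments before anything is executed. The tag names and token strings here are invented for illustration; the real output format is defined by MolmoAct 2's released action tokenizer.

```python
import re

def parse_reasoning(text: str) -> dict:
    """Split a completion into its depth, trace, and action segments."""
    segments = {}
    for tag in ("depth", "trace", "action"):
        match = re.search(rf"<{tag}>(.*?)</{tag}>", text, re.DOTALL)
        segments[tag] = match.group(1).strip() if match else None
    return segments

# Hypothetical completion: depth tokens, then image-space waypoints, then actions.
sample = (
    "<depth>d017 d142 d090</depth>"
    "<trace>(0.31, 0.55) (0.34, 0.52) (0.40, 0.48)</trace>"
    "<action>a112 a007 a233</action>"
)
print(parse_reasoning(sample))
```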
Benefits:
Researchers can study, reproduce, and build on the training data directly
Dataset covers varied arms, cameras, and control schemes for broader generalisation
Open action tokenizer released alongside the model weights (see the sketch after this list)
Training code coming soon under an open-source license
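On the action tokenizer: a common baseline design in robot learning is uniform binning of normalized continuous actions into a fixed token vocabulary. The released MolmoAct 2 tokenizer may well differ; this sketch only illustrates the general encode/decode idea, with the bin count and action range assumed.

```python
import numpy as np

N_BINS = 256           # assumed token vocabulary size for actions
LOW, HIGH = -1.0, 1.0  # assumed normalized action range

def encode(actions: np.ndarray) -> np.ndarray:
    """Map continuous actions in [LOW, HIGH] to integer token ids."""
    clipped = np.clip(actions, LOW, HIGH)
    return np.round((clipped - LOW) / (HIGH - LOW) * (N_BINS - 1)).astype(np.int64)

def decode(tokens: np.ndarray) -> np.ndarray:
    """Map token ids back to approximate continuous actions (bin centers)."""
    return tokens / (N_BINS - 1) * (HIGH - LOW) + LOW

joints = np.array([-0.50, 0.00, 0.73])
tokens = encode(joints)
print(tokens, decode(tokens))  # round-trip error is bounded by the bin width
```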
Who it's for: Robotics researchers and ML engineers who need open training data and reproducible recipes to build or improve manipulation models.
The data problem in robotics AI is as significant as the model problem. Releasing both together is what makes this launch worth tracking.
I wonder how this dataset handles the variability in real-world object interactions—does it include failure cases or only successful demonstrations? That could be huge for robust policy learning.