Launched this week

Lyra 2.0 by NVIDIA
Explorable Generative 3D Worlds
Lyra 2.0 is NVIDIA’s open-source framework that turns a single image into an explorable 3D world. It generates a camera-controlled roam video, then reconstructs it into 3D Gaussian Splats and a mesh for real-time rendering in game engines and simulators. Apache 2.0 code + weights on GitHub/Hugging Face for commercial use.

Lyra 2.0 is an open-source framework from NVIDIA that turns a single image into an explorable 3D world by generating a roam video and reconstructing it into 3D Gaussian Splats + a mesh for real-time simulation and rendering.
The problem: Getting diverse, sim-ready 3D environments is still expensive and slow. Teams rely on curated 3D assets or limited scanned datasets, or spend a lot of time hand-building worlds—making it hard to scale robotics/embodied AI training and rapid prototyping.
The solution: Lyra 2.0 takes one photo + a camera path and produces (1) a consistent exploration video and (2) a 3D reconstruction you can load into engines/sim. It specifically targets common failure modes in long trajectories, like spatial forgetting (revisiting an area and finding it has changed) and temporal drift (quality degrading over time).
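Since the input is just one photo plus a camera trajectory, the trajectory itself is easy to script. A minimal sketch of what a scripted path could look like, assuming a simple orbit around the scene center (all function and parameter names here are illustrative, not Lyra's actual API):

```python
import math

def orbit_camera_path(radius=3.0, height=0.5, n_frames=60):
    """Generate a circular camera path around the scene origin.

    Returns a list of (position, look_at) pairs; a real pipeline would
    convert these into whatever pose format the model expects.
    """
    path = []
    for i in range(n_frames):
        theta = 2.0 * math.pi * i / n_frames  # angle for this keyframe
        position = (radius * math.cos(theta), height, radius * math.sin(theta))
        look_at = (0.0, 0.0, 0.0)  # always face the scene center
        path.append((position, look_at))
    return path

poses = orbit_camera_path()
print(len(poses))  # one keyframe per generated video frame
```

Long loops like this orbit are exactly where spatial forgetting shows up: the camera returns to its starting view at the end, so the first and last frames should depict the same geometry.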
Features worth noting:
Single-image → roam video (camera-controlled exploration)
Video → 3D Gaussian Splats + mesh (real-time ready)
Improved long-range consistency (less drift/forgetting)
Open-source (Apache 2.0) code + weights (commercial use allowed)
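Gaussian-splat reconstructions are commonly shipped as PLY files, with one record per splat. A minimal round-trip sketch of such a file using only the standard library, trimmed to position + opacity; the attribute names follow the convention popularized by the original gaussian-splatting code, and whether Lyra 2.0 uses exactly this schema is an assumption:

```python
import io
import struct

# Binary little-endian PLY with four float32 attributes per splat.
HEADER = """ply
format binary_little_endian 1.0
element vertex {n}
property float x
property float y
property float z
property float opacity
end_header
"""

def write_splats(splats):
    """Serialize a list of (x, y, z, opacity) tuples as a PLY blob."""
    buf = io.BytesIO()
    buf.write(HEADER.format(n=len(splats)).encode("ascii"))
    for s in splats:
        buf.write(struct.pack("<4f", *s))  # 16 bytes per splat
    return buf.getvalue()

def read_splats(data):
    """Parse the blob back into (x, y, z, opacity) tuples."""
    end = data.index(b"end_header\n") + len(b"end_header\n")
    header = data[:end].decode("ascii")
    n = int(next(line.split()[-1] for line in header.splitlines()
                 if line.startswith("element vertex")))
    body = data[end:]
    return [struct.unpack_from("<4f", body, i * 16) for i in range(n)]

# Values chosen to be exactly representable in float32, so the
# round trip is lossless.
points = [(0.0, 1.0, 2.0, 0.5), (1.5, -0.5, 0.25, 0.75)]
assert read_splats(write_splats(points)) == points
```

Real 3DGS files carry extra per-splat attributes (scale, rotation quaternion, spherical-harmonic color coefficients), but the record-per-splat layout is the same, which is what lets engines stream them for real-time rendering.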
Who it’s for: Robotics + embodied AI teams, simulation builders, and 3D/graphics engineers who want to quickly generate explorable environments from images—especially when 3D asset production is the bottleneck.
If you could generate unlimited “photo → explorable world” scenes, what would you build first: robotics training sims, game levels, or rapid environment prototyping for R&D?