V-JEPA 2 is Meta's new world model, trained on video to understand and predict the physical world. It enables zero-shot robot planning and achieves state-of-the-art results on visual understanding benchmarks. The model, code, and new benchmarks are now openly available.
Introducing our new Meta AI app built with Llama 4 – a first step toward a more personal AI. Meta AI is already used daily across WhatsApp, Instagram, Facebook, and Messenger, and the new app offers natural voice interactions that learn about you, keep you connected, and make multitasking easier!
With the release of Llama 3.1 405B, TogetherCompute built LlamaCoder, an open source web app that can generate an entire app from a single prompt. The repo has now been cloned by hundreds of developers on GitHub and starred more than 2,000 times.
AudioCraft is Meta's simple framework for generating high-quality, realistic audio and music from text prompts. Its models are trained on raw audio signals rather than symbolic representations such as MIDI or piano rolls.
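As a quick illustration, here is a minimal sketch of text-to-music generation with AudioCraft's MusicGen models; the checkpoint id, prompts, and output names below are illustrative, not prescribed by the announcement:

```python
# Minimal text-to-music sketch with AudioCraft's MusicGen
# (checkpoint id, prompts, and output names are illustrative).
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

# Load a pretrained MusicGen checkpoint (weights download on first use).
model = MusicGen.get_pretrained('facebook/musicgen-small')

# Generate 8 seconds of audio per text prompt.
model.set_generation_params(duration=8)

descriptions = ['upbeat acoustic folk with hand claps', 'slow ambient synth pad']
wav = model.generate(descriptions)  # tensor of shape [batch, channels, samples]

for idx, one_wav in enumerate(wav):
    # Saves {idx}.wav with loudness normalization.
    audio_write(f'{idx}', one_wav.cpu(), model.sample_rate, strategy='loudness')
```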
Developed by AI research teams at Meta, Movie Gen delivers results across a range of video and audio generation capabilities. We’re excited about the potential of this line of research to usher in entirely new possibilities for casual creators and creative professionals alike.
Introducing the next generation of the Meta Training and Inference Accelerator (MTIA), the latest in our family of custom silicon designed for Meta’s AI workloads.