All activity
TextaVoice is a zero-friction AI text-to-speech tool that works instantly in your browser. It generates voiceovers with no signup and no limits, and lets you download MP3 audio for creator and commercial use.

TextaVoice: Generate commercial AI voiceovers without an account
Zac Zuo left a comment
Hi everyone! TADA is one of the most interesting open-source voice releases I’ve seen in a while. The big idea is simple but brilliant: it aligns text and audio one-to-one, so the model never has to juggle that huge mismatch between text tokens and acoustic frames. That single change unlocks the three things people actually care about in TTS: way better speed, much longer context, and basically...

TADA: 1:1 text-acoustic alignment for 5x faster speech generation
TADA (Text-Acoustic Dual Alignment) is Hume AI's open-source speech-language model that aligns text and audio one-to-one, synchronizing them into a single continuous stream via 1:1 token alignment. It generates audio at 5x the speed of conventional LLM-based TTS systems and completely eliminates skipped words and content hallucinations across 1000+ tests.

TADA: 1:1 text-acoustic alignment for 5x faster speech generation
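
To make the 1:1 alignment idea concrete, here is a minimal, purely illustrative sketch; the `align_one_to_one` helper and the toy acoustic codes are assumptions, not TADA's actual interface. Conventional LLM-based TTS has to map a handful of text tokens onto many more acoustic frames, whereas a 1:1 scheme keeps the two streams in lockstep.

```python
# Illustrative sketch of 1:1 text-acoustic alignment (not TADA's real API).
from typing import List, Tuple

def align_one_to_one(text_tokens: List[str],
                     acoustic_tokens: List[int]) -> List[Tuple[str, int]]:
    """Pair each text token with exactly one acoustic token.

    Hypothetical helper: in a real model the acoustic token would be
    predicted autoregressively, not looked up from a precomputed list.
    """
    assert len(text_tokens) == len(acoustic_tokens), "1:1 alignment needs equal lengths"
    return list(zip(text_tokens, acoustic_tokens))

# Toy example: 4 text tokens paired with 4 aligned acoustic codes.
stream = align_one_to_one(["Hel", "lo", " wor", "ld"], [312, 87, 954, 41])
for text_tok, audio_tok in stream:
    print(f"{text_tok!r} -> acoustic code {audio_tok}")
```

Because every text token has exactly one acoustic counterpart, a skipped word would leave an obvious hole in the stream, which is presumably why the 1:1 scheme avoids the skipped-word and hallucination failures described above.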
Zac Zuo left a comment
Hi everyone! VENTUNO Q, emerging as one of the first really serious Qualcomm x Arduino platforms, hits the edge AI and robotics wave perfectly. It combines up to 40 TOPS of local AI compute for vision, LLMs, and multimodal models with a dedicated STM32 real-time controller that handles motors, sensors, and deterministic responses, all on one board. A setup like this creates a massive advantage...

VENTUNO Q: Dual-brain edge AI computer by Qualcomm and Arduino
VENTUNO Q is the first single-board computer born from Qualcomm's acquisition of Arduino. It fuses a 40 TOPS Snapdragon NPU with an STM32 microcontroller into a unified dual-brain architecture, balancing heavy AI inference with real-time robotics control.

VENTUNO Q: Dual-brain edge AI computer by Qualcomm and Arduino
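
A rough sketch of how the dual-brain split could be used from the Linux/NPU side. Everything here is an assumption for illustration (the serial port, message format, and `detect_target` stub are not Qualcomm's or Arduino's actual SDK): the Snapdragon side runs best-effort AI inference, while the STM32 side owns the deterministic control loop.

```python
# Hypothetical dual-brain split: heavy AI inference on the Snapdragon side,
# hard real-time motor/sensor control on the STM32 side.
import json
import time
import serial  # pyserial; the port name below is an assumption

def detect_target(frame) -> dict:
    """Stand-in for an NPU-accelerated vision model returning a steering hint."""
    return {"steer": 0.12, "throttle": 0.4}

def main() -> None:
    link = serial.Serial("/dev/ttyUSB0", 115200, timeout=0.01)
    while True:
        frame = None                          # grab a camera frame here
        cmd = detect_target(frame)            # slow, best-effort AI side
        link.write((json.dumps(cmd) + "\n").encode())  # hand off to the MCU
        time.sleep(0.02)                      # ~50 Hz hints; the MCU loop runs far faster

if __name__ == "__main__":
    main()
```

The design point is that the MCU firmware enforces timing and safety limits no matter how long inference takes, so a stalled model never turns into a stalled motor loop.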
Zac Zuo left a comment
Hi everyone! Phi-4-Reasoning-Vision-15B is Microsoft's new 15B open-weight model that makes multimodal reasoning feel much more efficient. It was trained on 200B multimodal tokens, handles high-res screens well, and stays direct on simpler tasks while switching into deeper reasoning when needed. Looks especially strong for math, science, and computer-use agents. Weights on HF.
Phi-4-reasoning-vision: Open-weight 15B multimodal model for thinking and GUI agents
Zac Zuo left a comment
Hi everyone! With no Phone (4) this year, the Phone (4a) Pro looks like the phone carrying Nothing in 2026. And it's probably the most un-Nothing Nothing phone they've ever shipped :) Gone is the full transparent back. Instead, you get a slim full-metal aluminum unibody that feels properly premium. The iconic Glyph is now a bigger, brighter Matrix, and the transparent camera module is basically...

Nothing Phone (4a) Pro: Redefining the Nothing aesthetic with a metal unibody
Phi-4-reasoning-vision-15B is a compact open-weight multimodal model built on a mid-fusion architecture. It balances fast direct perception with deep chain-of-thought reasoning, making it highly efficient at building capable computer-use agents and solving complex math.
Phi-4-reasoning-vision: Open-weight 15B multimodal model for thinking and GUI agents
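
Since the weights are on Hugging Face, loading the model presumably follows the usual `transformers` multimodal pattern. The repo id and prompt template below are guesses for illustration; check the actual model card before using them.

```python
# Hedged sketch: loading an open-weight multimodal model from Hugging Face.
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "microsoft/Phi-4-reasoning-vision"  # hypothetical repo id
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", trust_remote_code=True
)

image = Image.open("screenshot.png")                    # e.g. a high-res GUI screenshot
prompt = "<|image_1|>\nWhich button submits the form?"  # template is an assumption
inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(processor.batch_decode(output, skip_special_tokens=True)[0])
```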
The Nothing Phone (4a) Pro features a slim 7.95 mm full-metal unibody, confining its signature transparency to the camera module. Running the Snapdragon 7 Gen 4, the phone delivers a 3000-nit Glyph Matrix and up to 140x telephoto zoom.

Nothing Phone (4a) Pro: Redefining the Nothing aesthetic with a metal unibody
Zac Zuo left a comment
Hi everyone! AI2’s new Olmo Hybrid is the first hybrid 7B that clearly beats a pure transformer baseline (Olmo 3) in a fair fight. Same size as Olmo 3, trains at the same speed, but matches its accuracy with half the data and crushes long-context evals. The 3:1 RNN+attention mix just works. Super clean weights on HF. And you can even run the full model 100% locally in your browser on WebGPU!

Olmo Hybrid: 7B open model mixing transformers and linear RNNs
Olmo Hybrid is a fully open 7B model that combines transformer attention with linear RNN layers. Using a 3:1 ratio of Gated DeltaNet layers to attention layers, it matches the accuracy of Olmo 3 on MMLU while training on 49% fewer tokens.

Olmo Hybrid: 7B open model mixing transformers and linear RNNs
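
The 3:1 mix is easiest to picture as a repeating block pattern. Below is a minimal sketch of such a stack; `GatedDeltaNetLayer` and `AttentionLayer` are placeholders standing in for the real blocks, not AI2's implementation.

```python
# Illustrative 3:1 hybrid stack: three linear-RNN-style blocks for every full
# attention block. The layer classes are placeholders, not Olmo's code.
import torch
import torch.nn as nn

class GatedDeltaNetLayer(nn.Module):
    """Stand-in for a Gated DeltaNet (linear RNN) block."""
    def __init__(self, dim: int):
        super().__init__()
        self.mix = nn.Linear(dim, dim)

    def forward(self, x):
        return x + self.mix(x)

class AttentionLayer(nn.Module):
    """Stand-in for a full softmax-attention block."""
    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):
        return x + self.attn(x, x, x, need_weights=False)[0]

def build_hybrid_stack(dim: int, n_groups: int) -> nn.Sequential:
    """Repeat [RNN, RNN, RNN, attention] n_groups times for a 3:1 hybrid."""
    layers = []
    for _ in range(n_groups):
        layers += [GatedDeltaNetLayer(dim) for _ in range(3)]
        layers.append(AttentionLayer(dim))
    return nn.Sequential(*layers)

model = build_hybrid_stack(dim=512, n_groups=8)   # 24 RNN layers, 8 attention layers
print(model(torch.randn(2, 128, 512)).shape)      # torch.Size([2, 128, 512])
```

With only one attention layer per group of four, most of the stack scales linearly with sequence length, which is presumably where the long-context gains come from.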
Zac Zuo left a comment
"Book a demo" is essentially asking a high-intent buyer to endure friction just to see if your product is even a fit. I see a very clear split forming: AI is for instant product discovery and qualification, while human reps are for complex negotiation, security reviews, and relationship building. Applying the traditional Sales-Led Growth motion to early discovery is just too slow now. Love what...
"Book a demo" is killing your pipeline — not saving it
Dmitry Zakharov · Join the discussion
OmniXtreme is an open-source control framework pushing humanoids to hyperhuman limits. It perfectly balances generative Flow Matching for extreme motion planning with strict physical envelope clipping to prevent mid-air motor burnouts.

OmniXtreme: Open-source hyperhuman control framework for Unitree G1
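
The pairing of a generative planner with hard physical limits boils down to clamping whatever the planner proposes before it reaches the motors. A minimal sketch of that envelope-clipping idea follows; the joint limits and function name are made up for illustration, not OmniXtreme's actual values.

```python
# Illustrative envelope clipping: the generative planner can propose extreme
# motions, but the executed command is clamped to the robot's physical limits.
import numpy as np

# Hypothetical per-joint limits (rad, rad/s, N*m) for a small humanoid.
POS_LIMIT = np.array([2.6, 2.6, 1.9])
VEL_LIMIT = np.array([12.0, 12.0, 8.0])
TORQUE_LIMIT = np.array([25.0, 25.0, 18.0])

def clip_to_envelope(q_cmd, qd_cmd, tau_cmd):
    """Clamp a planned command into the physical envelope before execution."""
    q = np.clip(q_cmd, -POS_LIMIT, POS_LIMIT)
    qd = np.clip(qd_cmd, -VEL_LIMIT, VEL_LIMIT)
    tau = np.clip(tau_cmd, -TORQUE_LIMIT, TORQUE_LIMIT)
    return q, qd, tau

# The planner might propose an aggressive mid-air flip; the clip keeps the
# low-level controller from commanding torques the motors cannot survive.
planned = (np.array([3.1, -0.4, 2.5]),
           np.array([15.0, 3.0, -9.5]),
           np.array([40.0, 5.0, -20.0]))
print(clip_to_envelope(*planned))
```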
Zac Zuo left a comment
Hi everyone! Have you seen this? https://www.youtube.com/watch?v=Ykiuz1ZdGBc When I watched the Unitree G1 doing full kung-fu and extreme parkour on Spring Festival Gala, my jaw literally dropped. Last year’s G1 was already impressive, but now Unitree and BIGAI have open-sourced the core control framework behind it: OmniXtreme. I immediately dug into the repo, and the moment I saw the extremely...

OmniXtreme: Open-source hyperhuman control framework for Unitree G1
Zac Zuo left a comment
Hi everyone! The official Codex app for Windows is now in the Microsoft Store and it's built exactly for how most of us actually work. Previously you could run Codex through PowerShell or the VS Code extension, but this is the native desktop version we've been missing — secure sandbox, real PowerShell support, parallel agents with clean isolation, smooth diff review, and one-click editor...
Codex app for Windows: Codex now runs natively on Windows with secure sandbox
Zac Zuo left a comment
Hi everyone! Step 3.5 Flash has been out for a few weeks and has quickly become one of the strongest open models for real agentic workflows. 196B sparse MoE with just 11B active per token, MTP-3 giving up to 350 tok/s on coding, solid 74.4% SWE-bench, and clean long-context handling. The OpenClaw support is seamless. Over the last couple days Step 3.5 Flash has even been #1 in daily OpenClaw...
Step 3.5 Flash: Frontier open-source MoE model built for OpenClaw agents
Step 3.5 Flash is StepFun’s 196B sparse MoE model that activates only 11B parameters per token. It delivers frontier reasoning and strong agentic performance with high efficiency. Seamless native OpenClaw integration makes it one of the best open models for running serious agents right now.
Step 3.5 Flash: Frontier open-source MoE model built for OpenClaw agents
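
Why only 11B of the 196B parameters run per token comes down to sparse expert routing: a small router picks a few experts for each token and the rest stay idle. Here is a toy sketch of top-k routing, not StepFun's actual code.

```python
# Toy top-k MoE routing: each token is dispatched to only k of the experts,
# so only a small slice of the total parameters is active per token.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoE(nn.Module):
    def __init__(self, dim: int = 64, n_experts: int = 16, k: int = 2):
        super().__init__()
        self.router = nn.Linear(dim, n_experts)
        self.experts = nn.ModuleList([nn.Linear(dim, dim) for _ in range(n_experts)])
        self.k = k

    def forward(self, x):                             # x: (tokens, dim)
        weights = F.softmax(self.router(x), dim=-1)
        topw, topi = weights.topk(self.k, dim=-1)     # keep only k experts per token
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = topi[:, slot] == e
                if mask.any():                        # run an expert only on its tokens
                    out[mask] += topw[mask, slot, None] * expert(x[mask])
        return out

y = ToyMoE()(torch.randn(8, 64))
print(y.shape)  # torch.Size([8, 64]); only 2 of 16 experts touched each token
```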
The official Codex desktop app by OpenAI brings parallel coding agents natively to Windows. It isolates tasks in OS-level sandboxes and dedicated worktrees so agents can write, test, and propose code without trashing your local environment.
Codex app for Windows: Codex now runs natively on Windows with secure sandbox
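
The "dedicated worktrees" piece maps onto a standard git feature: each agent task gets its own checkout on its own branch, so parallel agents never edit the same files. A rough sketch of that isolation pattern follows; the paths and branch names are illustrative, not how the Codex app actually organizes them.

```python
# Illustrative task isolation with git worktrees: one working copy and one
# branch per agent task, so parallel agents cannot clobber each other.
import subprocess
from pathlib import Path

def create_task_worktree(repo: Path, task_id: str) -> Path:
    """Create an isolated worktree and branch for a single agent task."""
    worktree = repo.parent / f"{repo.name}-task-{task_id}"
    subprocess.run(
        ["git", "-C", str(repo), "worktree", "add",
         "-b", f"agent/{task_id}", str(worktree)],
        check=True,
    )
    return worktree

# Example (hypothetical repo path): each task works in its own directory, and
# you review the diff per branch before merging or discarding it.
# tree = create_task_worktree(Path("~/code/myrepo").expanduser(), "fix-login-bug")
```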
Zac Zuo left a comment
Literally the best app to experience the latest @Qwen3 local AI models on your phone! 🚀

Locally AI + Qwen: Run Qwen's latest models locally on your iPhone
Gemini 3.1 Flash-Lite is the fastest and most cost-efficient model in the Gemini 3 series. At only $0.25 per million input tokens and $1.50 per million output tokens, it beats 2.5 Flash with a 2.5x faster first token and 45% higher output speed while matching or beating it on quality.

Gemini 3.1 Flash-Lite: Best-in-class intelligence for your high-volume workloads
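
At those per-token prices the cost math for a high-volume workload is easy to spot-check. The traffic numbers below are made up for illustration; only the two prices come from the listing.

```python
# Quick cost check at $0.25 per 1M input tokens and $1.50 per 1M output tokens.
INPUT_PER_M = 0.25
OUTPUT_PER_M = 1.50

def monthly_cost(requests: int, in_tok: int, out_tok: int) -> float:
    """Estimated monthly spend, with per-request input/output token counts."""
    return requests * (in_tok * INPUT_PER_M + out_tok * OUTPUT_PER_M) / 1_000_000

# Hypothetical workload: 2M requests/month, 1,500 input + 300 output tokens each.
print(f"${monthly_cost(2_000_000, 1_500, 300):,.2f}")  # -> $1,650.00
```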


