All activity
Zac Zuo left a comment
Hi everyone! GLM-5-Turbo feels like a very intentional and interesting release. Instead of just calling it a faster GLM-5, Z.ai is positioning it as a model deeply optimized for OpenClaw from training onward. That means stronger tool calling, better breakdown of complex instructions, more stable timed and persistent tasks, and smoother long-chain execution—which is basically exactly what people...

GLM-5-Turbo: High-speed agentic model built specifically for OpenClaw
GLM-5-Turbo is Z.ai’s high-speed variant of GLM-5, deeply optimized for OpenClaw from the training stage. It excels at precise tool calling, complex command following, scheduled and persistent tasks, and long-chain execution with near-zero hallucinations. Faster, more reliable, and purpose-built for real agent workflows.

MuleRun is the world's first self-evolving personal AI — it learns your work habits, decision patterns, and preferences, then keeps getting sharper over time. It runs 24/7 on your dedicated cloud VM, works while you're offline, and proactively prepares what you need before you ask. No coding. No setup. Just raise your AI and watch it evolve.

MuleRun: Raise an AI that actually learns how you work
Zac Zuo left a comment
Hi everyone! Even though Meta is still one of NVIDIA’s biggest customers, they had already been going all-in on their own silicon — and will clearly continue to do so. Meta is explicitly going inference-first instead of building only for giant pretraining jobs, and they can now ship a new MTIA chip (300 → 500) roughly every six months using modular chiplets and a reusable rack design. That is a...

MTIA 300: Meta's 3rd-gen custom AI chips for GenAI inference
Meta is accelerating its custom silicon roadmap with four new MTIA chips in two years. Built with an inference-first focus and native PyTorch integration, they are designed to cost-effectively power GenAI at a massive consumer scale.

Nemotron 3 Super is NVIDIA's open 120B model with 12B active parameters, a 1M-token context window, and a hybrid Mamba-Transformer MoE design. It is built for coding, long-context reasoning, and multi-agent workloads without the usual thinking tax.

Nemotron 3 Super: Open hybrid Mamba-Transformer MoE for agentic reasoning
Zac Zuo left a comment
Hi everyone! Nemotron 3 Super really stands out because NVIDIA is framing it around two very real agent problems: the "thinking tax" of using a huge reasoning model for every step, and the "context explosion" that happens when long tool loops keep resending history and drift off goal. This is an open 120B model with 12B active parameters, built to make those workloads more practical. It uses a...

TextaVoice is a zero-friction AI text-to-speech tool that works instantly in your browser. Generate voice with no signup, no limits, and download MP3 audio for creator and commercial use.

TextaVoice: Generate commercial AI voiceovers without an account
Zac Zuo left a comment
Hi everyone! TADA is one of the most interesting open-source voice releases I’ve seen in a while. The big idea is simple but brilliant: it aligns text and audio one-to-one, so the model never has to juggle that huge mismatch between text tokens and acoustic frames. That single change unlocks the three things people actually care about in TTS: way better speed, much longer context, and basically...

TADA: 1:1 text-acoustic alignment for 5x faster speech generation
TADA (Text-Acoustic Dual Alignment) is Hume AI's open-source speech-language model that synchronizes text and audio one-to-one, merging them into a single continuous stream via 1:1 token alignment. It generates audio at 5x the speed of conventional LLM-based TTS systems while completely eliminating skipped words and content hallucinations across 1000+ tests.
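The 1:1 alignment idea can be sketched conceptually: pair every text token with exactly one acoustic token so the decoder sees a single stream with no length mismatch to bridge. The tokens and the "<silence>" pad below are toy placeholders, not Hume AI's actual tokenizer or training scheme.

```python
# Conceptual sketch of 1:1 text-acoustic alignment (toy placeholders,
# not Hume AI's implementation): pad the shorter stream, then pair the
# two token-for-token so no cross-length attention is needed.

def align_one_to_one(text_tokens, acoustic_tokens, pad="<silence>"):
    """Pad the shorter stream, then interleave the two token-for-token."""
    n = max(len(text_tokens), len(acoustic_tokens))
    text = list(text_tokens) + [pad] * (n - len(text_tokens))
    audio = list(acoustic_tokens) + [pad] * (n - len(acoustic_tokens))
    return list(zip(text, audio))

pairs = align_one_to_one(["hel", "lo"], ["a0", "a1", "a2"])
print(pairs)  # [('hel', 'a0'), ('lo', 'a1'), ('<silence>', 'a2')]
```

Because each output position carries one text and one acoustic token, the model can never skip a word without also skipping a frame — which is roughly why 1:1 alignment removes dropped-word hallucinations.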

Zac Zuo left a comment
Hi everyone! VENTUNO Q, emerging as one of the first really serious Qualcomm x Arduino platforms, hits the edge AI and robotics wave perfectly. It combines up to 40 TOPS of local AI compute for vision, LLMs, and multimodal models with a dedicated STM32 real-time controller that handles motors, sensors, and deterministic responses, all on one board. A setup like this creates a massive advantage...

VENTUNO Q: Dual-brain edge AI computer by Qualcomm and Arduino
VENTUNO Q is the first single-board computer born from Qualcomm's acquisition of Arduino. It fuses a 40 TOPS Snapdragon NPU with an STM32 microcontroller in a unified dual-brain architecture that balances heavy AI inference with real-time robotics.

Zac Zuo left a comment
Hi everyone! Phi-4-Reasoning-Vision-15B is Microsoft's new 15B open-weight model that makes multimodal reasoning feel much more efficient. It was trained on 200B multimodal tokens, handles high-res screens well, and stays direct on simpler tasks while switching into deeper reasoning when needed. Looks especially strong for math, science, and computer-use agents. Weights on HF.
Phi-4-reasoning-vision: Open-weight 15B multimodal model for thinking and GUI agents
Zac Zuo left a comment
Hi everyone! With no Phone (4) this year, the Phone (4a) Pro looks like the phone carrying Nothing in 2026. And it's probably the most un-Nothing Nothing phone they've ever shipped :) Gone is the full transparent back. Instead, you get a slim full-metal aluminum unibody that feels properly premium. The iconic Glyph is now a bigger, brighter Matrix, and the transparent camera module is basically...

Nothing Phone (4a) Pro: Redefining the Nothing aesthetic with a metal unibody
Phi-4-reasoning-vision-15B is a compact open-weight multimodal model built on a mid-fusion architecture. It balances fast direct perception with deep chain-of-thought, making it highly efficient at powering computer-use agents and solving complex math.
The Nothing Phone (4a) Pro features a slim 7.95 mm full-metal unibody, confining its signature transparency to the camera module. Running the Snapdragon 7 Gen 4, the phone delivers a 3000-nit Glyph Matrix and up to 140x telephoto zoom.

Zac Zuo left a comment
Hi everyone! AI2’s new Olmo Hybrid is the first hybrid 7B that clearly beats a pure transformer baseline (Olmo 3) in a fair fight. Same size as Olmo 3, trains at the same speed, but matches its accuracy with half the data and crushes long-context evals. The 3:1 RNN+attention mix just works. Super clean weights on HF. And you can even run the full model 100% locally in your browser on WebGPU!

Olmo Hybrid: 7B open model mixing transformers and linear RNNs
Olmo Hybrid is a fully open 7B model that combines transformer attention with linear RNN layers. Using a 3:1 ratio of Gated DeltaNet to attention layers, it matches the accuracy of Olmo 3 on MMLU while using 49% fewer tokens.
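A 3:1 layer mix is just a repeating schedule: three linear-RNN layers for every attention layer. The sketch below shows how such a schedule might be laid out; the layer names are illustrative placeholders, not AI2's actual module names.

```python
# Sketch of a 3:1 hybrid layer schedule (linear RNN : attention).
# Layer names are illustrative placeholders, not AI2's actual modules.

def hybrid_schedule(n_layers: int, ratio: int = 3) -> list[str]:
    """Every (ratio+1)-th layer is attention; the rest are linear-RNN."""
    layers = []
    for i in range(n_layers):
        if (i + 1) % (ratio + 1) == 0:
            layers.append("attention")
        else:
            layers.append("gated_deltanet")
    return layers

print(hybrid_schedule(8))
# 8 layers -> 6 gated_deltanet + 2 attention (the 3:1 mix)
```

The appeal of this layout is that the cheap linear-RNN layers carry most of the sequence mixing, while the sparse attention layers retain precise long-range recall — which is where the long-context gains come from.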

Zac Zuo left a comment
"Book a demo" is essentially asking a high-intent buyer to endure friction just to see if your product is even a fit. I see a very clear split forming: AI is for instant product discovery and qualification, while human reps are for complex negotiation, security reviews, and relationship building. Applying the traditional Sales-Led Growth motion to early discovery is just too slow now. Love what...
"Book a demo" is killing your pipeline — not saving it
Dmitry Zakharov · Join the discussion
OmniXtreme is an open-source control framework pushing humanoids to hyperhuman limits. It pairs generative Flow Matching for extreme motion planning with strict physical-envelope clipping to prevent mid-air motor burnouts.
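Envelope clipping is the simple safety half of that pairing: whatever the generative planner proposes, each joint command is clamped to hardware limits before it reaches the motors. A minimal sketch, with made-up joint names and limits that are not Unitree G1 specs:

```python
# Sketch of physical-envelope clipping: commanded joint targets are
# clamped to per-joint limits before actuation. Joint names and limit
# values (radians) are illustrative, not Unitree G1 specifications.

JOINT_LIMITS = {"knee": (-0.5, 2.6), "hip_pitch": (-1.8, 1.8)}

def clip_to_envelope(command: dict[str, float]) -> dict[str, float]:
    """Clamp each commanded joint target into its allowed range."""
    safe = {}
    for joint, value in command.items():
        lo, hi = JOINT_LIMITS[joint]
        safe[joint] = max(lo, min(hi, value))
    return safe

safe = clip_to_envelope({"knee": 3.1, "hip_pitch": -0.4})
print(safe)  # {'knee': 2.6, 'hip_pitch': -0.4}
```

Keeping the clip as a hard post-processing stage, rather than a soft penalty inside the planner, is what guarantees an out-of-envelope command can never reach the hardware.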

OmniXtreme: Open-source hyperhuman control framework for Unitree G1