Hey everyone! Introducing Kimi Slides! Now with Nano Banana Pro
It's hard to gatekeep this one, because it's way too impressive
TL;DR:
> It's editable NotebookLM-style slides
> Designer-level infographics
> Unlimited Nano Banana usage in slides (next 48h only)
Try it FREE (unlimited for the next 24 hours)
We'd appreciate it if you could support our launch :) thanks <3
https://www.producthunt.com/prod...
Hi everyone!
Kimi WebBridge is a practical bridge between AI agents and the browser.
Install the extension, connect it to your local agent, and the agent can use your existing Chrome or Edge session to handle web tasks like opening pages, filling forms, collecting information, and moving through websites for you.
The nice part is that CC/Codex, @Cursor, Hermes, and @OpenClaw can use it too.
A lot of daily work still happens in the browser, and WebBridge gives agents a simple way to actually operate there.
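To make the idea concrete, here's a rough sketch of the shape of the interaction. This is a simplified illustration only; the port, command names, and message format below are made up for the example and are not the actual WebBridge protocol:

```python
# Hypothetical sketch of an agent <-> browser-extension bridge.
# The port, action names, and JSON shape are assumptions for illustration,
# not WebBridge's real protocol.
import asyncio
import json
import websockets  # pip install websockets

BRIDGE_URL = "ws://127.0.0.1:8765"  # assumed local endpoint the extension listens on

async def run_task():
    async with websockets.connect(BRIDGE_URL) as ws:
        # Ask the browser (via the extension) to open a page in the user's existing session.
        await ws.send(json.dumps({"action": "open", "url": "https://example.com/search"}))
        print(json.loads(await ws.recv()))

        # Fill a form field, reusing whatever cookies are already in the session.
        await ws.send(json.dumps({
            "action": "fill",
            "selector": "input[name=q]",
            "value": "quarterly report",
        }))
        print(json.loads(await ws.recv()))

asyncio.run(run_task())
```

The point is just that the agent stays local and the extension does the actual driving inside the user's own browser.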
@zaczuo Hi Zac, congrats on the launch. Does this require an already-open session, or can agents stand up/invoke the browser for certain tasks by themselves?
I know how tough interacting with a live browser can be; I've been using Python and Selenium a lot recently. Giving terminal agents a clean way to bridge that gap is a massive step up. I'm really curious how the extension actually passes the page data back to the LLM... does it clean everything up into structured JSON or a lightweight DOM snippet first, or is it just dumping raw HTML? How do you manage the token count?
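For reference, the pattern I usually fall back on (just my own approach, no idea whether WebBridge does anything like this) is to strip the DOM down to the interactive elements plus a capped slice of visible text, and hand the model compact JSON instead of raw HTML:

```python
# One common way to keep token counts down: drop scripts/styles, keep only
# interactive elements and a bounded amount of visible text, emit compact JSON.
# This is a generic sketch, not a description of WebBridge internals.
import json
from bs4 import BeautifulSoup  # pip install beautifulsoup4

def distill_page(html: str, max_text_chars: int = 2000) -> str:
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup(["script", "style", "noscript"]):
        tag.decompose()  # remove non-visible, token-heavy content

    elements = []
    for i, el in enumerate(soup.select("a, button, input, select, textarea")):
        elements.append({
            "id": i,                                # stable handle the LLM can refer back to
            "tag": el.name,
            "text": el.get_text(strip=True)[:80],
            "name": el.get("name"),
            "href": el.get("href"),
        })

    return json.dumps({
        "title": soup.title.get_text(strip=True) if soup.title else "",
        "visible_text": soup.get_text(" ", strip=True)[:max_text_chars],
        "interactive": elements,
    }, ensure_ascii=False)
```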
The local-first approach to browser control is a smart move for security. I've stayed away from most browser agents because I don't want to hand over my session cookies to a third-party server. How do you handle sites that are heavy on shadow DOM or complex anti-bot triggers?
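For what it's worth, open shadow roots are at least reachable with plain Selenium 4's shadow_root handle; the custom element below is made up for the example. Closed roots and anti-bot checks are the part I'd really like to hear about, since that's where running inside the real user session should help:

```python
# Piercing an *open* shadow root with Selenium 4 (a generic technique,
# not a claim about how WebBridge handles it). The selectors are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com")  # placeholder URL

host = driver.find_element(By.CSS_SELECTOR, "my-widget")   # hypothetical custom element
shadow = host.shadow_root                                  # Selenium 4+, open shadow roots only
inner_button = shadow.find_element(By.CSS_SELECTOR, "button.submit")
inner_button.click()

driver.quit()
```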
Connecting agents to the 'live web' is still a major hurdle. Does the bridge provide a structured data output (JSON) for the agent, or does it just pass raw HTML?
How do you handle user data privacy when bridging AI agents to the live web? Especially for users on sites with sensitive content (banking, health portals)?
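Something like a user-configured blocklist checked before any page content leaves the machine would be reassuring; purely a sketch of what I mean, not a claim about how WebBridge works:

```python
# Illustrative guard in front of the agent-to-LLM hop: refuse to forward page
# content from domains the user has marked as sensitive. Domain names are examples.
from urllib.parse import urlparse

SENSITIVE_DOMAINS = {"mybank.example", "healthportal.example"}  # user-configured

def may_share_page(url: str) -> bool:
    host = urlparse(url).hostname or ""
    return not any(host == d or host.endswith("." + d) for d in SENSITIVE_DOMAINS)

assert may_share_page("https://news.example/article")
assert not may_share_page("https://login.mybank.example/accounts")
```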
How do you handle sensitive actions like form submissions? Is there a confirmation step before it clicks "buy" or "send"?
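Even something as simple as the gate below would go a long way; this is an illustrative sketch, not a suggestion that this is how WebBridge actually does it:

```python
# Minimal human-in-the-loop gate: anything that looks irreversible needs an
# explicit "y" before the click is dispatched. Keyword list is an example.
SENSITIVE_WORDS = ("buy", "purchase", "send", "submit", "delete", "pay")

def confirm_if_sensitive(description: str) -> bool:
    if any(word in description.lower() for word in SENSITIVE_WORDS):
        answer = input(f"Agent wants to: {description!r}. Proceed? [y/N] ")
        return answer.strip().lower() == "y"
    return True  # low-risk actions go through without a prompt

if confirm_if_sensitive("click the 'Buy now' button on the checkout page"):
    print("action approved")   # here the bridge would actually dispatch the click
else:
    print("action blocked by user")
```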