Launched this week

SuperPowers AI
Real time ambient visual agents for phones and wearables
684 followers
Claude-grade AI agents that see what you see—on your phone or glasses. Solve visual problems instantly, no coding needed.

Agents Base
Hey Product Hunt 👋.
We noticed there are a lot of powerful tools like Claude Code and GitHub that non-technical people don't have access to, so for the past few months we've been working to make it as easy as possible to level the playing field using real-time visual agents. The problems with existing tools:
❌ Unsafe and intimidating to set up
❌ Requires dedicated hardware or cloud expertise
❌ Simply handing code to non-technical people doesn't solve the UI/UX problem
SuperPowers AI enables non-technical people to solve impossible problems by vibe-coding agents using voice and real-time video.
Unlimited Cheap Computer Use
Instead of paying $200/mo for a Claude Max subscription, we figured out how to get the same automations working with cheaper, nearly free models.
How?
Users can edit the voice commands to teach Super how to accomplish complex multi-step actions on a Mac or Android device using entirely English-language targets.
Following this pattern, you don't actually need an expensive Mac mini or a Max subscription to automate everything you already do.
Example Power:
Voice command: Get the news
Prompt:
1) Open Google News in Chrome
2) Summarize the articles on the page
3) Email me the summary at email@domain.com
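To make the pattern concrete, here is a minimal sketch of how a Power like this could be represented and executed. The `Power`, `run_power`, and `agent.execute` names are illustrative assumptions, not the actual SuperPowers API:

```python
# Hypothetical sketch of a "Power": a voice trigger mapped to
# plain-English steps that an agent executes in order.
# Power, run_power, and agent.execute are illustrative names,
# not the actual SuperPowers API.
from dataclasses import dataclass, field


@dataclass
class Power:
    trigger: str  # voice command that activates the Power
    steps: list[str] = field(default_factory=list)  # plain-English targets


get_the_news = Power(
    trigger="Get the news",
    steps=[
        "Open Google News in Chrome",
        "Summarize the articles on the page",
        "Email me the summary at email@domain.com",
    ],
)


def run_power(power: Power, agent) -> None:
    """Hand each English-language step to a computer-use agent, one at a time."""
    for step in power.steps:
        agent.execute(step)  # 'agent' is any computer-use backend
```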
You can get started NOW, for free, and be automating your Mac or Android within minutes!
At launch we support the Meta Display Glasses, Apple Vision Pro, Android XR devices like the Luma Ultra, and SMS/FaceTime/WhatsApp video calls to lower the barrier to access. Apple is currently reviewing the iPhone and Apple Vision Pro apps, so please start at getsupers.com on all devices.
Told
The ambient, always-on framing is what makes this most interesting to me. Most visual AI tools require you to intentionally invoke them, which creates enough friction that people just don't bother. Removing that trigger step could genuinely change the usage pattern. My question is around onboarding: how do you help users develop the mental model of when to trust the agent versus when it's going to hallucinate on something visual? That trust calibration is usually where these kinds of tools lose people in the first two weeks. Curious how you're handling the early activation loop.
Agents Base
@jscanzi This is a good question! Each "Power" is basically an "RL" environment, vibe-coded by consumers. There can be many mistakes initially, but the real-time API manages the experience and records the feedback. If there is a hallucination, that feedback acts as the reward signal in an RL loop that iterates over time. The real-time API saves all of this feedback and can regenerate the Powers based on the errors. If we can solve getting consumers to build RL environments in this feedback loop, we solve the data problem for robotics and dwarf current AI labs.
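For readers who want that loop spelled out, here is a rough sketch of the feedback cycle described above. The `agent`, `recorder`, and `regenerator` interfaces are assumptions for illustration; the actual real-time API isn't public:

```python
# Hypothetical sketch of the feedback loop described above.
# 'agent', 'recorder', and 'regenerator' are assumed interfaces,
# not the actual SuperPowers real-time API.
def run_and_refine(power, agent, recorder, regenerator):
    """Run a Power once, logging every step's outcome; if any step failed
    (e.g. a hallucinated UI target), regenerate the Power's English steps
    from the accumulated error log so the next run improves."""
    for step in power.steps:
        result = agent.execute(step)
        recorder.log(step, result)  # real-time API records feedback
    errors = recorder.errors()
    if errors:  # errors act as the reward signal
        power = regenerator.rewrite(power, errors)
    return power
```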
Looks really cool @rohan_arun1! Does it work with the regular Meta glasses or just the ones with the display? I have the Ray-Bans and the Oakley HSTN if you need me to test them out.
Agents Base
@reed_floren Yes, it also works with the Meta 2 glasses through voice commands and shows the output on the phone instead of the glasses, so if you can help test them that would be great!
This is super interesting. I can see using this for tracking my photo subjects (matching a face to a name) on my volume sports jobs.
Agents Base
@mark_rezansoff If you're interested, I can generate that Power for you! It's very easy, let me know.
Copperlane
Very cool idea! Curious which use cases you're seeing most from early users so far?
Agents Base
@brianna_lin So far the most common use cases are social agents, social mods, social posting, etc. There are a lot of requests for home improvement, plumbing, electrical work, and fixing cars, so we'll be working on those next. What would you like to see?
Caught your SuperPowers AI launch, and I must say that ambient visual agents are technically ambitious, but I'm curious about the go-to-market. 226 upvotes show interest, but how are you thinking about user acquisition cost when the value prop requires sustained engagement to prove itself?
From the performance marketing side, wearable/mobile handoff creates attribution gaps that most founders underestimate. In MENA specifically, we see 40%+ higher CAC on experimental categories without clear conversion events in the first 72 hours. If you're exploring paid growth or need perspective on mobile-first attribution architecture for ambient tech, happy to share what's working (and failing) in visual AI campaigns.
The category does need better measurement frameworks.
Agents Base
@ielrefaae Great insight, thanks! Very interested in learning this market from our customers. One of my last AI products, Cheat Layer, was the first startup approved by OpenAI to sell GPT-3 for automation in 2021, so we were able to time the ChatGPT wave with our launch and exploded. I suspect there will be another wave with Meta Display, Apple Vision Glasses, and Google XR coming, and we can time this one as well if we solve the AI+AR UX issues first. We already run on Meta Display, Apple Vision Pro, and Android XR, so we're solving these now, and we expect this to be a much larger market soon. If you try it, using AI in glasses is a very different, collaborative experience compared to pulling out a phone and using an app. I highly recommend it.
Very excited about today’s launch.
Real-time visual agents are going to allow non developers to do amazing things in the real world.
Imagine an angel on your shoulder that understands where you are and what you're looking at, and can intuit your objective, all with long-running context and memory across devices and models.
@rohan_arun1 is the genius behind the tech, and we're both really looking forward to where the community takes "vision" into the world.
🚀
Agents Base
@ronp Yes, super excited for today, and it's been great working on this with you!