Screenpipe

AI powered by what you've seen, said, or heard

5.0 | 1 review | 163 followers

AI Screen and Voice Recording Software | screenpipe

screenpipe is an open-source library that records your screen & mic 24/7 and connects it to LLMs. It's designed for gathering life context and easily connecting it to AI for search, automation, and more.
This is the 2nd launch from Screenpipe.

screenpipe

Launching today
Your AI finally knows what you're doing
screenpipe turns your computer into a personal AI that knows everything you've done. Record. Search. Automate. All local, all private, all yours.


louis030195
Maker
Hey PH 👋 I built screenpipe because I was losing my mind. 20k notes in Obsidian, obsessive tracking, but my screen was still a black hole. So I built 24/7 screen & mic recording over a weekend. A user posted about us on HN and it blew up. Your AI finally knows what you're doing: instead of copy-pasting context across ChatGPT, Claude, Claude Code, Opencode, Pi, Gemini, etc., they just know everything you're up to. It records everything; you can search with AI, scroll back through your screen history, and automate your workflows. The best part is that the data stays on your computer, and it's open source and auditable. What would make you use this daily?
Daniele Packard

Congrats! Seems cool. How does data privacy work? Recordings etc stay local?

louis030195

@daniele_packard All data stays on your computer by default; we use native Apple and Windows OCR, plus voice activity detection, segmentation, and transcription

We also have a feature to remove PII (emails, credit cards, etc.) from OCR, DOM, transcriptions, and screenshots. It's heavily tested (a toy sketch of the idea is below).

Uses 1-3 GB of RAM, roughly 20 GB of storage per month, and about 30% CPU
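
To make the PII idea concrete (this is a toy illustration, not screenpipe's actual implementation, which also covers DOM data and screenshots and is far more heavily tested), pattern-based redaction over captured text can look roughly like this in TypeScript:

```typescript
// pii.ts - toy illustration of PII redaction over captured text.
// NOT screenpipe's implementation; just a sketch of the concept.
const PII_PATTERNS: Array<[RegExp, string]> = [
  [/\b[\w.+-]+@[\w-]+\.[\w.]+\b/g, "[EMAIL]"],   // email addresses
  [/\b(?:\d[ -]?){13,16}\b/g, "[CARD]"],         // credit-card-like digit runs
  [/\b\d{3}-\d{2}-\d{4}\b/g, "[SSN]"],           // US SSN format
];

export function scrubPII(text: string): string {
  return PII_PATTERNS.reduce((t, [re, mask]) => t.replace(re, mask), text);
}

// Example: OCR text from a frame, scrubbed before indexing or prompting an LLM.
console.log(scrubPII("Contact jane.doe@acme.com, card 4111 1111 1111 1111"));
// -> "Contact [EMAIL], card [CARD]"
```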

Kunal Gupta

My dream use case for this would be a thinking companion. I do a ton of stuff, but don't take enough time to reflect (even though I reflect ~45m a day, it's not deep enough). My dream version of this is something that watched me like this all day, thought about it really deeply, and at some interval gave me the 3-4 hour version of reflections inspired by watching me, as if I went on a hike every single day, but it goes on the hike for me. Deeper changes I could make to the product that solve recent problems I've been having, changes to habits that might be especially efficient, ponderings on what my recent conversations might mean for the market opportunity I'm working on.

(ps - i have no idea what "score with friends" is and how to get it tf off my profile)

louis030195

@djkgamc We recently released an Obsidian integration, but it's just Markdown files. You can configure it to run on a schedule (e.g., every 15 minutes, 1 hour, or 6 hours); it queries your screen activity, microphone activity, keyboard, mouse, everything from the time range you define, and then gives you this reflection. You can customize the system prompt, and under the hood it uses a coding agent similar to Claude Code, so it can read and edit files, and it's secure. It's not just a one-shot prompt dump sent to the AI, so it can properly extract things like to-do lists.
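
Roughly what that scheduled reflection could look like as code. A minimal TypeScript sketch, assuming screenpipe's local search API on localhost:3030, an OpenAI-compatible local model endpoint, and a placeholder vault path; the query parameters and response shape here are assumptions, and the real integration additionally handles scheduling, the coding agent, and sandboxing for you.

```typescript
// reflect.ts - hypothetical sketch of a scheduled "reflection" note.
// Assumes screenpipe's local API on http://localhost:3030 and an
// OpenAI-compatible chat endpoint (e.g. a local model server).
import { writeFile } from "node:fs/promises";

const VAULT_DIR = "/path/to/obsidian/vault";                   // placeholder
const LLM_URL = "http://localhost:11434/v1/chat/completions";  // placeholder

async function reflect(hours = 6) {
  const end = new Date();
  const start = new Date(end.getTime() - hours * 3_600_000);

  // Pull recent OCR + transcription results; parameter names are assumptions.
  const params = new URLSearchParams({
    content_type: "all",
    start_time: start.toISOString(),
    end_time: end.toISOString(),
    limit: "200",
  });
  const res = await fetch(`http://localhost:3030/search?${params}`);
  const { data } = await res.json();

  // Flatten whatever text was captured into one context blob.
  const context = (data ?? [])
    .map((item: any) => item.content?.text ?? item.content?.transcription ?? "")
    .filter(Boolean)
    .join("\n");

  // Ask a local text model for a reflection over that window.
  const chat = await fetch(LLM_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3.1",
      messages: [
        { role: "system", content: "You are a reflective journaling assistant." },
        {
          role: "user",
          content: `My activity for the last ${hours}h:\n${context}\n\nWrite a deep reflection with concrete suggestions.`,
        },
      ],
    }),
  });
  const reflection = (await chat.json()).choices[0].message.content;

  // The "integration" is just Markdown: drop the note into the vault.
  await writeFile(`${VAULT_DIR}/${end.toISOString().slice(0, 10)}-reflection.md`, reflection);
}

reflect().catch(console.error);
```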

Kunal Gupta

awesome

Aleksandar Blazhev

Cool idea, but how does the security side of things look?

louis030195

@byalexai we do PII scrubbing to strip risky elements, and we rigorously test the code

all data is local so it's as secure as other files on your computer

we have encryption at rest on the roadmap (e.g. Bitwarden / crypto-wallet-style locks), and we already support end-to-end encrypted device sync

also it's open source so you can audit the code or fix it

https://github.com/mediar-ai/screenpipe

Jan Schutte

Great idea, just wish my local VLLM models were a bit faster.

louis030195

@janschutte it works great with just a text LLM; we capture OCR and accessibility data, so you don't really need vision, but you can still process some frames to double-check information
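
Concretely, "text is enough" here means the captured results already come back as plain text. A minimal sketch, again assuming the local search API on localhost:3030 (parameter names and response shape are assumptions; check the API docs):

```typescript
// ocr-context.ts - sketch: screen history is already text, no vision model needed.
// Endpoint, parameters and response shape are assumptions; see the API docs.
const params = new URLSearchParams({
  q: "standup notes",   // keyword filter over captured OCR text
  content_type: "ocr",
  limit: "20",
});

const res = await fetch(`http://localhost:3030/search?${params}`);
const { data } = await res.json();

// Each hit is plain text of what was on screen, ready to paste into any
// text-only LLM prompt as context.
const context = (data ?? [])
  .map((item: any) => item.content?.text)
  .filter(Boolean)
  .join("\n---\n");

console.log(context.slice(0, 2000));
```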

Valeriia Kuna

Congrats on the 2nd launch! I’ve been looking at Screenpipe through the lens of a Product Manager—specifically for capturing those raw, unpolished moments during user research sessions. Having a locally searchable record of everything could be a game-changer for synthesis. Since this version focuses more on AI agent integration, how do you see it helping non-technical users (like PMs) to query their recorded context without needing to touch the CLI?

louis030195

@valeriia_kuna Actually, it's designed to be easy for non-technical users too. You can just run the app and ask the AI questions through the screenpipe AI chat or the Claude integration

Benjamin Shafii

One of my fav pieces of software. I use it a lot with opencode and openwork. Very practical.

Any plans to make this more dev focused?

louis030195

@benjamin_shafii Yes, we're trying to build a great experience for both non-technical users and devs. For devs, we're working on improving our SDKs and APIs, pushing data quality and performance as high as we can, and providing good documentation. We're also trying to talk to developers every day, and we'd love more feedback from you so we can make the OpenWork integration really amazing (a rough SDK sketch follows the docs link below).

https://docs.screenpi.pe/sdk-reference
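
For a feel of the dev-facing surface, a rough TypeScript sketch. The package name, the queryScreenpipe helper, and the option fields below are assumptions on my part; the SDK reference linked above is the source of truth for the real signatures.

```typescript
// sdk-example.ts - rough sketch of querying screen history from a dev tool.
// Package name and options are assumptions; check the SDK reference above.
import { pipe } from "@screenpipe/js";

async function recentTerminalActivity() {
  // Query recent OCR results, filtered to a specific app window (assumed option).
  const results = await pipe.queryScreenpipe({
    contentType: "ocr",
    appName: "iTerm2",
    limit: 25,
  });

  for (const item of results?.data ?? []) {
    console.log(item.content?.text);
  }
}

recentTerminalActivity().catch(console.error);
```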
