Launched this week
UniToolx

Privacy-first web tools. Edit PDFs & images locally.

A collection of powerful, local-first web tools for everyday tasks. Unitoolx handles PDF editing, image conversion, and developer utilities entirely within your browser. No server uploads. No hard file-size limits (bounded only by your device's RAM). No privacy concerns. Built for users who value speed and data sovereignty.

Gabriel Beguerie
Hi Product Hunt! 👋 I'm the maker of Unitoolx.

I built this platform because I was tired of feeling anxious every time I had to upload a sensitive PDF or personal image to a random online converter just to make a quick edit. I realized that most "free tools" require you to trade your privacy for convenience. Unitoolx is my solution to that: a suite of essential web tools that run 100% locally in your browser.

Why use Unitoolx?

  • 🔒 Privacy: Your files never leave your device. No server uploads.

  • ⚡ Speed: Since there's no uploading, processing is instant.

  • 🛠️ The Tools: PDF Merge/Split, Image Conversion, QR Generation, and more.

It’s completely free and I’m just getting started. I’d love to hear your feedback! What other tools would you like to see added next? Thanks for checking it out! 🚀
Alejandro Luna

Hi Gabriel,

Congratulations on the launch. It’s great that you’re prioritizing privacy; you are addressing a fundamental issue. We frequently need to make quick document edits or merge PDFs, and relying on typical "free" tools usually carries the risk of compromising sensitive third-party data (clients or friends). Without a doubt, Unitoolx has the potential to be an excellent product.

After testing the tool, I’d like to share some questions and feedback that I hope you find useful:

  • Technical question regarding local processing: How exactly does the local processing work? When visiting the site, I didn't see an executable to install, which might create the perception that files are being uploaded to the cloud. It would be ideal to clarify this on the website.

  • PDF Roadmap: What improvements do you plan to implement in the short term for PDF workflows?

  • Image Optimization: In the image interface, I tried the option to reduce file size, but in my tests, I didn't notice any change in the resulting file.

  • Text Editing: When adding text to a PDF, it remains fixed in one position. Do you plan to allow text elements to be movable?

  • OCR Functionality: Are you planning to add Optical Character Recognition? It would be a high-value feature.

  • Bulk Upload: It would be very useful to allow uploading multiple PDF files simultaneously from the start to streamline the workflow.

  • Editing text in images: Is it technically possible for the tool to allow uploading an image and editing the text that already exists within it?

You’ve done a fantastic job, and I’m sure Unitoolx will become a go-to tool. The focus on privacy and the independence from external servers for third-party data is a real game-changer.

Gabriel Beguerie

@alejandro_luna2 

Thanks a lot, Alejandro — I really appreciate the thoughtful feedback and the concrete questions. This is exactly the kind of input that helps Unitoolx improve fast.

Local processing (how it works + how you can verify it):
Unitoolx runs directly in your browser using client-side code (JavaScript + browser APIs). There’s nothing to install because the “engine” is the web app itself. When you load a PDF/image, the processing happens locally on your device (in-memory / in the browser context).
A simple practical way to verify this: open a PDF/image, start editing, then disconnect your internet (airplane mode). You’ll see the editor keeps working and you can still export/download the result. If the processing depended on a server, that workflow would typically break the moment you go offline.
(Important nuance: you need internet to load the website initially, but once it’s loaded, the file processing and export happen locally.)
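The offline test above works because of a simple pattern: the file's bytes are read into memory, transformed, and handed back, with no network call in between. Here's a minimal sketch of that loop (hypothetical illustration, not Unitoolx's actual code); in a real browser the bytes would come from `await file.arrayBuffer()` on a file input, and the result would go back to the user via `URL.createObjectURL(new Blob([...]))`.

```javascript
// Sketch of the local-first processing pattern (hypothetical, not Unitoolx's code).
// A trivial "processor": XOR every byte. A real tool would run a PDF or image
// library here instead, but the key property is identical: input and output
// are plain in-memory byte arrays that never touch the network.
function processLocally(bytes) {
  const out = new Uint8Array(bytes.length);
  for (let i = 0; i < bytes.length; i++) out[i] = bytes[i] ^ 0xff;
  return out;
}

// Simulated upload -> process -> "download" round trip, all in memory.
// In the browser: const uploaded = new Uint8Array(await file.arrayBuffer());
const uploaded = new Uint8Array([0x25, 0x50, 0x44, 0x46]); // "%PDF" header bytes
const result = processLocally(uploaded);
// In the browser: trigger a download with URL.createObjectURL(new Blob([result]))
console.log(result.length); // prints 4 — same bytes in, same bytes out, no request
```

Because XOR is its own inverse, running the processor twice recovers the original bytes, which is a handy sanity check that nothing was lost in the round trip.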

PDF roadmap (near term):
Near-term priorities are focused on the most common workflows:

  • More reliable merge/split/organize

  • Better text and annotation tools

  • Improved export consistency (fonts/positioning)

  • Batch workflows (queue multiple files)

Image optimization (file size didn’t change in your test):
Great catch. In some cases (e.g., already-compressed JPEGs or certain PNGs), naive recompression won’t reduce size much. That said, the UI should make outcomes obvious. I’ll review this behavior and add clearer controls (format/quality) plus a “before vs after size” indicator so it’s immediately visible what changed.
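The indicator logic itself is straightforward. A sketch of what such a helper could look like (hypothetical, assuming we only know the byte sizes before and after recompression):

```javascript
// Hypothetical "before vs after" size indicator for the image optimizer,
// so a no-op recompression is surfaced to the user instead of failing silently.
function sizeReport(originalBytes, compressedBytes) {
  const saved = originalBytes - compressedBytes;
  const pct = originalBytes === 0 ? 0 : Math.round((saved / originalBytes) * 100);
  if (saved <= 0) {
    // Already-compressed JPEGs and some PNGs can come out the same size or larger.
    return { keepOriginal: true, message: "Already optimized: no reduction possible" };
  }
  return { keepOriginal: false, message: `Saved ${pct}% (${saved} bytes)` };
}

console.log(sizeReport(1000000, 640000).message); // "Saved 36% (360000 bytes)"
console.log(sizeReport(500000, 512000).message);  // the already-compressed case
```

Keeping the original when the "compressed" result is larger also explains the behavior you saw: the output file can legitimately match the input byte for byte.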

Text editing in PDFs (movable elements):
Yes — that’s on the short-term list. The current behavior is too rigid. I plan to make text objects draggable/resizable after placement (similar to standard editors), with a simple object list for quick selection.

OCR functionality:
Yes, OCR is a high-value feature and fits the privacy-first approach. The goal would be client-side OCR so documents don’t need to leave the device. I’ll likely ship it as an optional feature because OCR can be heavier on CPU/memory depending on the file.

Bulk upload / multi-PDF workflows:
Agreed. A multi-file queue is a big usability win. I’m planning to add multi-select from the start and drag-and-drop batch imports.
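One likely shape for that queue (a design sketch, not shipped code): process files sequentially so only one is decoded in memory at a time, which matters in a browser where RAM is the main constraint.

```javascript
// Hypothetical sequential batch queue: many files selected up front,
// processed one at a time so peak memory stays bounded to a single file.
async function processBatch(files, processOne) {
  const results = [];
  for (const file of files) {
    results.push(await processOne(file)); // one file in flight at a time
  }
  return results;
}

// Usage with a stand-in processor (a real one would run the PDF pipeline):
processBatch(["a.pdf", "b.pdf", "c.pdf"], async (name) => `${name}: done`)
  .then((out) => console.log(out));
```

A sequential loop is deliberately simpler than parallel workers here; with large scanned PDFs, running several at once is exactly what exhausts browser memory.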

Editing existing text inside images:
Directly editing “existing” raster text isn’t straightforward because images don’t contain editable text objects. The realistic approach is: detect text (OCR), allow the user to cover/erase the original area, and then re-type new text on top (a retouch + text overlay workflow). This becomes much more feasible once OCR is in place.
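That retouch-plus-overlay workflow can be expressed as a small edit plan built from OCR output. A sketch under assumed data shapes (the `ocrWords` format and the `cover`/`text` operations are hypothetical, not an existing Unitoolx API):

```javascript
// Hypothetical edit plan for replacing raster text: for each OCR-detected word
// the user wants changed, first cover its bounding box (erase original pixels),
// then queue new text drawn on top of the same box.
function buildEditPlan(ocrWords, replacements) {
  const plan = [];
  for (const word of ocrWords) {
    const newText = replacements[word.text];
    if (newText === undefined) continue; // leave untouched words alone
    plan.push({ op: "cover", box: word.box });
    plan.push({ op: "text", box: word.box, text: newText });
  }
  return plan;
}

// Example: swap the word "DRAFT" for "FINAL" wherever OCR found it.
const ocr = [{ text: "DRAFT", box: { x: 10, y: 20, w: 80, h: 16 } }];
console.log(buildEditPlan(ocr, { DRAFT: "FINAL" }));
```

A renderer would then apply the plan to a canvas: fill each `cover` box with the sampled background color, then draw each `text` entry. The hard parts (font matching, background reconstruction) live in that rendering step, not in the plan.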

Thanks again for taking the time to test and write this up. If you’re willing, could you share what file type you tested for image compression (PNG/JPG/WebP) and roughly the original size? That would help me reproduce it precisely and fix/clarify the behavior.

Alejandro Luna

@gabriel_beguerie Thank you for your kind and prompt reply. The changes you describe sound interesting; I'm sure the tool will keep improving and turn out great. I look forward to trying the updates.

I'm sharing the original image and the resulting one. If you give me your email, I'll send them to you that way. On this point, I think your comment is right: sometimes the image type simply doesn't allow further reduction, and perhaps that's what happened in my case.

Original



Result

Easy Tools Dev

The privacy-first approach really resonates with me - I've always been hesitant about uploading sensitive documents to random converter sites. Processing everything locally in the browser is such a relief, especially for work documents with client information. I'm curious: does the browser-based processing handle large PDF files smoothly, or is there a practical size limit before performance starts to struggle?

Gabriel Beguerie

@easytoolsdev

You’re right to ask — with browser-based (local) processing, the limiting factor is usually your device’s RAM/CPU (and whether the PDF is mostly text/vectors or scanned images), not a server quota.

Here’s how it behaves in practice:

  • No hard limit enforced by Unitoolx (I’m not blocking files by size), but performance will degrade once the browser starts running tight on memory.

  • Text-based PDFs (contracts, invoices, reports) are typically smooth even when they’re fairly large, because they’re compact and render efficiently.

  • Scanned PDFs (each page is a big image) are the real heavy ones. Those can become slow sooner because every page can be many megapixels and memory usage climbs fast.

Rule of thumb:

  • If it’s a normal document PDF: you’re usually fine.

  • If it’s a scanned PDF with many high-resolution pages: you may hit slowdowns depending on the machine (older phones/laptops struggle earlier than modern desktops).

What I’m doing / will do to improve this:

  • Keep heavy work off the main thread (Workers) so the UI stays responsive.

  • Add a friendly warning when a file is likely to stress the browser (based on page count + estimated pixel load).

  • Add guidance for large scans (e.g., split/optimize workflow).
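The warning in the second point can be driven by a cheap heuristic. A sketch with assumed numbers (the RGBA-bytes-per-pixel figure is standard; the 1 GB budget is a hypothetical threshold, not a measured Unitoolx limit):

```javascript
// Hypothetical heuristic for the "this file may stress your browser" warning:
// estimate decoded pixel load, assuming ~4 bytes per RGBA pixel once scanned
// pages are rasterized for display or editing.
function estimateMemoryMB(pageCount, avgMegapixelsPerPage) {
  const bytes = pageCount * avgMegapixelsPerPage * 1000000 * 4;
  return bytes / (1024 * 1024);
}

function shouldWarn(pageCount, avgMegapixelsPerPage, budgetMB = 1024) {
  return estimateMemoryMB(pageCount, avgMegapixelsPerPage) > budgetMB;
}

// A 300-page scan at ~8 MP/page decodes to roughly 9 GB of raw pixels:
console.log(Math.round(estimateMemoryMB(300, 8))); // prints 9155 (MB)
console.log(shouldWarn(300, 8)); // true — suggest the split/optimize workflow
console.log(shouldWarn(20, 8));  // false — a typical small scan is fine
```

This also illustrates why text-based PDFs rarely hit trouble: their pages render from compact vector instructions rather than multi-megapixel bitmaps, so the estimate above simply doesn't apply to them.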

If you tell me roughly what you consider “large” (e.g., page count + file size and whether it’s scanned), I can give a more concrete answer—and I can use that as a benchmark case to optimize for.