Forums

Custom domains with Inquir

We recently implemented custom domain support in Inquir Compute, and it feels like one of those features that really moves a platform from "works technically" to "ready for production".

For me, custom domains are a core part of production-grade infrastructure. They are not just cosmetic: they affect branding, trust, onboarding, and the overall developer experience.

I'd be curious how others think about this in serverless and deployment platforms:

  • At what point do you consider custom domains a must-have?

  • What parts are usually the hardest in practice: DNS flow, TLS issuance, routing, verification, or UX?

  • Do you prefer keeping platform subdomains as the canonical entry point, or treating custom domains as the primary one?
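On the verification question: most platforms settle on a DNS TXT challenge. Here's a minimal sketch of that flow, with a hypothetical `_platform-verify` record name and an injected resolver so it stays testable without real DNS; none of this reflects Inquir's actual implementation:

```python
import secrets

def issue_challenge() -> str:
    # One-time token the customer publishes as a TXT record on their domain.
    return f"verify={secrets.token_hex(16)}"

def is_verified(domain: str, expected: str, lookup_txt) -> bool:
    # lookup_txt is injected for testability; in production this would be
    # a real TXT query against a well-known subdomain such as
    # _platform-verify.<domain> (record name is illustrative).
    records = lookup_txt(f"_platform-verify.{domain}")
    return expected in records
```

Only after the token round-trips do you kick off TLS issuance and routing, which is why the DNS step tends to dominate the UX.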

Dictura • p/dictura • Aviv

5d ago

iOS is coming: Dictura on your phone, same idea

Quick update for anyone following along.

We launched Dictura on Mac and Windows a few days ago and the response has been great. Many people have been asking about mobile, which is perfect timing because we've been working on an iOS version for a while now, and we're almost ready.

Same concept: tap, speak, get clean text. Translation built in, same as desktop. You think in your language, the output comes out in whatever language you need. No extra steps.

Google isn't anti-AI. It's anti-AI slop.

Everyone is panicking about the March 2026 Core Update.
It started rolling out on March 27 and will take up to two weeks to complete.
The spam update hit just three days earlier and finished in 19.5 hours, the fastest spam update on record.

But here's what the data actually says.

JetDigitalPro analyzed 600,000 web pages across the update period. The correlation between AI usage and ranking penalties was 0.011, effectively zero. Google isn't penalizing AI content. It's penalizing low-value content that happens to be AI-generated.

Websites relying on mass-produced AI output without human oversight saw traffic drops of 60-80%. Affiliate sites were hit hardest: 71% saw negative impacts.

How do you save AI research output? Share your workflow!

Hey PH community

I'm the maker of Gemini Export Studio, a Chrome extension that lets you export Gemini chats to PDF, Markdown, JSON, CSV, PNG, and Plain Text, 100% locally.

Here's why I built Nebils and why it actually matters: an AI Social Network for Humans, Agents, & Models

Six days ago, I launched Nebils, an AI social network where humans, agents, and models hang out together. Today, it has 117 humans and 11 agents. Nebils ranked #32 on Product Hunt as a Product of the Day (without any paid upvotes or outreach; every upvote was organic). In fact, I had never even used Product Hunt before this launch.
Nebils is a forkable, multi-model AI social network where humans, agents, and models evolve conversations together.
Here, humans and agents are both independent users:

  • Humans and Agents interact with Models

  • Humans and Agents interact with each other

  • Chat with 120+ AI models

  • Send your agents (verified within Nebils) and let them interact with models, humans, and other agents

  • Publish conversations in a public feed and build your community

In Oct 2025, I was exploring Karpathy's posts on X and came across one where he said he uses all the major models all the time, switching between them frequently. One reason is simple curiosity: he wants to see how each model handles the same problem differently. But the bigger reason is that many real-world problems behave like "NP-complete" problems for these models. The NP-complete analogy: generating a good or correct solution is extremely hard (like finding the perfect answer from scratch), but verifying whether a given solution is good or correct is much easier. Because of this asymmetry, he said, the smartest way to get the best result isn't to rely on just one model; it's to:

  • Ask multiple models the same question.

  • Look at all their answers.

  • Have them review/critique each other or reach a consensus.
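The workflow above is easy to sketch: fan the question out, then let a judge exploit the generate-hard/verify-easy asymmetry. The callables below are stubs standing in for real model APIs, not Nebils code:

```python
def fan_out(question, models):
    # models: dict mapping model name -> callable(question) -> answer.
    # Ask every model the same question.
    return {name: ask(question) for name, ask in models.items()}

def consensus(question, models, judge):
    # Collect every model's answer, then hand the whole set to a judge
    # (itself a model in practice) to verify and pick the best one.
    candidates = fan_out(question, models)
    return judge(question, candidates)
```

A trivial judge could be a majority vote over the candidate answers; in practice you'd prompt one of the models to critique the set.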

Launch of the GradPipe Delta Engine

Guys, we are launching the Delta Engine, which takes data on what people in elite roles have done to get there and tailors a few projects for you to reach a similar position. We've had 1,500+ engineers on our platform, from quants at Jane Street, Citadel, IMC Trading, Graviton, and Optiver to software companies like Google, Amazon, Uber, Databricks, Snowflake, Twilio, Confluent, and Rippling. We even have researchers from Google DeepMind, Anthropic, and other frontier AI labs, so you can learn what your profile needs to reach where they are.
Obviously your context matters, so we take that into account and give you the delta you need.

Release Notes: April 2, 2026 - Listen Mode is here with audio playback in a voice you can clone

We've just shipped one of our most requested features. With Listen Mode, any content you save to Recall (articles, podcasts, YouTube videos, PDFs, web pages) can now be summarized and read back to you as audio. Pick from our library of built-in voices, or clone a voice of your own.

I cloned my dad's voice so he can read me my morning podcast summary. Check us out in action below!

TabDog • p/tabdog • Sung

5d ago

🚀 TabDog v3.0.0 is Here! - Faster, Smoother, Smarter

Hey Everyone!

TabDog version 3.0.0 has just been released!

Launching soon - looking for early feedback

Hi everyone,

I'm getting ready to launch Inquir Compute.

It's a serverless platform for AI agents, cron jobs, webhooks, and backend functions, with isolated containers, custom domains, and deployment on your own server.

I'd love feedback from anyone who has run into limits with managed serverless or edge runtimes.

If you could own a part of Votap… would you?

Road to 1,000,000 Votap users Day 62 | Current: 1295

At what point does giving AI more access start making it worse?

I've been testing this with an AI agent we use for outbound workflows.

The agent's job is simple: take a lead, generate a personalized outreach email, and send it.

Before:
The agent only had access to the lead's basic details (name, company, role) and a prompt to write the email.
Output was consistent, clean, and predictable (though the personalisation was limited).
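The "before" setup amounts to a tightly scoped prompt builder; here's a sketch with hypothetical field names, not the actual agent code:

```python
from dataclasses import dataclass

@dataclass
class Lead:
    name: str
    company: str
    role: str

def build_prompt(lead: Lead) -> str:
    # The model only ever sees these three fields, which is exactly why
    # the output stayed consistent and predictable, if shallowly personalised.
    return (
        "Write a short, professional outreach email.\n"
        f"Recipient: {lead.name}, {lead.role} at {lead.company}.\n"
        "Do not invent facts beyond these details."
    )
```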

What we changed:
We gave it more access:

Launching something new on April 8 🙏 A Truman Show of a self-evolving coding agent - yoyo


Hey! Been working on something different lately, an AI coding agent that writes its own code. No human commits. It's been evolving itself for 31 days straight.
I'm not sure I'd call it a "product" yet, but it's genuinely usable. 60 commands, 14 LLM providers, runs in your terminal. The difference is it keeps growing on its own. Every day it's a little better than yesterday.

Clico • p/clico • Alex Zhao

6d ago

Got questions about Clico? Ask us anything.

Hey guys, I'm opening this thread to collect your questions, feedback, and curiosities about Clico. Whether it's about how it works, what's coming next, or how to get the most out of it. Drop it below and we'll answer everything here.
Keep clicooooo!

I spent the last month fixing things that were driving me crazy

Every time I built a dashboard for one client, I had to rebuild it for the next. Creating clients, reports, dashboards, and data sources felt like clicking through ten different pages just to do one simple thing.

So we made some changes.

Now in ZapDigits

  • You can create everything from the sidebar. No more jumping around.
  • Any dashboard can become a template you reuse.
  • Templates have their own gallery, so you can see yours and ours in one place.
  • More Google Analytics metrics to get better insights.
  • Each dashboard can have its own look.

p/krisp • Asti Pili

6d ago

Today we are introducing AI Deboringifier by Krisp

We reduced noise. We improved clarity. We even changed accents.

But sometimes the biggest meeting problem isn't background noise. It's Todd.
Todd from Finance. Todd who turns a 30-second update into a 12-minute spoken-word essay about spreadsheets. Todd who says "just to piggyback off that" and then doesn't piggyback; he builds an entire second pig.

So we built AI Deboringifier

A Voice AI feature that detects boring speech patterns and automatically makes them less boring.
https://x.com/krispHQ/status/203...

I dream about… tokens??

Road to 1,000,000 #Votap users Day 61 | Current: 1295

Pricing my B2B SaaS is breaking my brain - looking for feedback

I've spoken to several customers and beta users trying to determine the right pricing model for Hello Inbox.

Originally I thought a pay-as-you-go system was the right approach, but after talking to a few beta users and customers, it turns out that may not be the best fit, because several of my features require recurring use.

48 Hours, $2398 in Sales and my story πŸ˜„

I wrote something like this back in 2023. Life was slower then. Fewer people knew me, fewer people used what I built. Now, more people are coming, using my work, trusting it. And sometimes I think: should I clean things up, remove old things that don't move anymore? But I don't. I just let them stay.

When I started building SaaS, I didn't know what would happen. I was just one person, sitting with a laptop, trying to build something simple. I had a job before. Life was okay. But inside, I felt something was missing. So I left that path and started this, not knowing where it would go.

The early days were quiet. I built, I changed things, I made mistakes. Many things didn't work. Many nights felt very long. Sometimes I forgot why I even started. But still, I kept going, slowly.

Then I launched Slashit App. I didn't expect much. Maybe a few people would try it, maybe no one would care.

How are you using Prosaic?

Hello!

I hope some people (maybe at least one person?) are using Prosaic regularly besides me. If you are, how are you using it?

We asked 5 AI models the same 1,000 questions. How often do you think they agreed?

We built a model to generate 1,000 questions that people actually ask.
Not random prompts.
We scraped 50,000 real user queries from search logs, forum threads, and support tickets across 12 industries.
We clustered them by intent and generated 1,000 representative questions.

We asked those same 1,000 questions to 5 AI models: ChatGPT (GPT-4), Gemini (Ultra), Perplexity (Pro), Claude (4.5 Sonnet), and Llama (3).
We ran the experiment daily for 30 days. We tracked every citation at the source level.

The goal: measure citation overlap.
How often do these models cite the same source for the same question?
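One natural way to score that overlap (a sketch of a plausible metric; the post doesn't specify which one was used) is the mean pairwise Jaccard similarity over each model's set of cited sources for a question:

```python
from itertools import combinations

def jaccard(a: set, b: set) -> float:
    # Jaccard similarity: |intersection| / |union|; two empty sets count as 1.
    union = a | b
    return len(a & b) / len(union) if union else 1.0

def citation_overlap(citations: dict) -> float:
    # citations: model name -> set of sources it cited for one question.
    # Returns the mean pairwise Jaccard similarity across all model pairs.
    pairs = list(combinations(citations.values(), 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)
```

Averaging this score over all 1,000 questions and 30 days would give one headline overlap number per model pair or for the whole panel.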

The dataset: