Amarsia

Ship AI fast, iterate safely — no glue code.


Amarsia is a reliable platform for shipping AI features without breaking things. Build and deploy AI workflows as production-ready APIs without SDKs, databases, or infrastructure. Conversations, state, and outputs are stored automatically, so iteration feels safe, not risky. With predictable behavior, versioned changes, and clear visibility, teams move faster with confidence and spend less time debugging AI in production.
This is the 4th launch from Amarsia.

Conversation API

Launched this week
Build chatbots with memory using just an API
Building AI chat features often means too much complexity — SDKs, databases, and infrastructure — just to support conversations and memory. Conversations API removes that overhead: build stateful AI chat without managing backend systems. All chat data is stored for you — you only keep the conversation_id.

What it helps with:

  • AI chat memory & state
  • Faster prompt iteration
  • No backend setup
  • Ideal for low-code builders

Built after helping builders who were stuck on AI chat infrastructure instead of their product.
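For illustration, here is a minimal sketch of the "keep only the conversation_id" pattern in Python. The base URL, endpoint paths, payload fields, and auth header are assumptions made for this example, not Amarsia's documented API.

```python
import requests

BASE_URL = "https://api.amarsia.com"               # hypothetical base URL
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}  # placeholder credential

# Start a new conversation; the only thing you persist is the conversation_id.
start = requests.post(
    f"{BASE_URL}/conversations",
    headers=HEADERS,
    json={"message": "Hi! Help me plan a 3-day trip to Lisbon."},
)
conversation_id = start.json()["conversation_id"]  # assumed response field

# Continue the same conversation later; memory and state live on the platform side.
reply = requests.post(
    f"{BASE_URL}/conversations/{conversation_id}/messages",
    headers=HEADERS,
    json={"message": "Make day two museum-focused."},
)
print(reply.json())
```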

Anuj Kapasia
Hey Product Hunt 👋 I built Conversations API after repeatedly helping people integrate AI chat into their products. The common problem wasn’t models — it was everything around them: conversation memory, storage, infra, and slow iteration cycles. This is a small, focused API that removes those blockers. No SDKs. No databases. No infra to manage. Happy to answer questions, explain design decisions, or hear what you’re building. https://amarsia.com https://www.amarsia.com/document...
Amber Wang

Congrats on the launch! @anujkapasia 🎉 As a product manager who's been through the pain of integrating chat features into our product, this resonates deeply. The "everything around the model" problem is so real – we spent 3 weeks just setting up conversation state management, storage, and context handling before we could even start iterating on the actual user experience. What I love about Conversation API:

  • Just keep the conversation_id – this is brilliant. No schema migrations, no database setup, no infrastructure headaches.

  • Faster iteration – being able to test different prompts without worrying about backend changes is exactly what product teams need during the exploration phase.

A few questions from a PM perspective:

  1. Versioning: If I update my prompt logic, can I A/B test different versions while keeping the same conversation history?

  2. Analytics: What visibility do I get into conversation quality? (e.g., average turns, drop-off points, user satisfaction signals)

  3. Cost predictability: How does pricing scale with conversation length and complexity? This is critical for product planning.

The "no glue code" promise is huge. We've wasted so much time on integration boilerplate that could've been spent on actual product value. Excited to try this out! 🚀

Anuj Kapasia

@amberjolie Great questions!

  1. Once a conversation starts, it’s tied to a specific, committed version of your prompt + config. This keeps live conversations stable even if you later change or improve the workflow. New conversations automatically use the latest version.

  2. You get clear visibility into how the AI behaves in production. This was one of the core problems we started Amarsia to solve—AI features often look fine in demos but break in real usage, and PMs don’t know what changed or why.

  3. Plans have monthly token limits, and pricing is primarily token-based beyond that.

Totally agree with you—the glue code and constant engineering dependency slow teams down and make it hard for product teams to iterate fast or clearly see what’s driving real user value.

Amber Wang

Thanks for the detailed reply, @anujkapasia! Your answers are incredibly helpful and confirm that you're solving a major pain point for product teams.

I'd like to talk about a more complex challenge we're facing, and I'm curious how Amarsia thinks about it. In our product, we don't just use a single model. We have a router that directs requests to different models based on the task. For example:

  • A powerful (and expensive) model for complex tool calls.

  • A specialized multimodal model for handling image and file inputs.

  • A faster, cheaper model for simple summaries or classification tasks.

My question is:

Does Conversation API operate at the level of a single model endpoint, or can it act as a higher-level abstraction layer for a multi-model strategy?

For instance, could I define a single 'conversation' within your API that, under the hood, intelligently routes different turns of the conversation to different underlying models based on predefined rules?

Essentially, can Amarsia manage the complexity of this multi-model orchestration, or is it designed to streamline the connection to one model endpoint at a time?

This kind of multi-model management is one of the major hurdles for us after solving the basic state management problem. Your thoughts would be greatly appreciated!

Anuj Kapasia

@amberjolie Thanks for the well-articulated question — it’s a great one.

At the moment, the Conversation API operates at the level of a single model endpoint. Amarsia is designed to help teams build micro LLM workflows with strong observability, fast iteration, and straightforward deployment. It’s intentionally not positioned as a full agent or orchestration builder like n8n or Zapier.

Our primary focus is on improving the quality, reliability, and debuggability of each individual workflow component. Many of the teams we work with handle multi-step or multi-model logic by chaining Amarsia workflows together using simple decision logic (for example, basic if/else routing in their own code).

That said, the pattern you’re describing is definitely achievable today by introducing a lightweight decision layer on your side. That layer can determine intent or complexity and then route requests to the appropriate Amarsia workflow, each backed by a different model.

So while Amarsia doesn’t yet manage multi-model orchestration natively within a single conversation abstraction, it’s designed to plug cleanly into that kind of architecture and reduce the complexity at the workflow level.
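To make that pattern concrete, here is a rough sketch of the lightweight decision layer described above, assuming each model-specific Amarsia workflow is exposed at its own endpoint. The workflow URLs, payload shape, and routing rules are illustrative assumptions, not part of the product.

```python
import requests

HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}  # placeholder credential

# Hypothetical Amarsia workflows, each backed by a different underlying model.
WORKFLOWS = {
    "tools":      "https://api.amarsia.com/workflows/complex-tool-calls",
    "multimodal": "https://api.amarsia.com/workflows/image-and-file-inputs",
    "cheap":      "https://api.amarsia.com/workflows/summaries",
}

def pick_workflow(turn: dict) -> str:
    """Basic if/else routing in your own code: choose a workflow per turn."""
    if turn.get("attachments"):
        return "multimodal"
    if turn.get("needs_tools"):
        return "tools"
    return "cheap"

def send_turn(conversation_id: str, turn: dict) -> dict:
    """Route one conversation turn to the appropriate Amarsia workflow."""
    url = WORKFLOWS[pick_workflow(turn)]
    response = requests.post(
        url,
        headers=HEADERS,
        json={"conversation_id": conversation_id, **turn},
    )
    return response.json()
```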

Yoang

Cool!
I've already seen lots of no-code and low-code products that don't really let me build without understanding the code logic. Thanks for building this for low-code creators!

Anuj Kapasia

@yoang_loo Thanks for the kind words, Yoang.
I completely agree. I've seen so many people get stuck on AI features and eventually need serious engineering help, especially for production releases.

We're trying to provide a solid foundation so that not everyone has to code the same thing again and again.

Jonec Cndv
What's the difference from OpenAI's Responses API?
Anuj Kapasia
@jns8cndv We manage and store your chat history. No database setup required. Additionally, we provide end-to-end AI management features:

  • No code writing
  • Rapid versioning and new releases without changing code
  • Fully transparent logs for your AI features, so you can monitor how they're doing in production
  • RAG, web search, and custom tools, also without the infrastructure overhead
Alex Cloudstar

Been cobbling together a chat demo; keeping memory/state is the messy part. If Amarsia handles that plus versioning, huge relief. How easy is data export, or self-hosting later? I try to avoid lock-in. Either way, the no-backend setup sounds handy for quick tests.

Anuj Kapasia

@alexcloudstar Hi Alex — data export is straightforward. You can request it from our team, and we’ll provide your full account data, including chat history, commit history, and AI logs.

We don’t offer a self-hosted version yet, but if there’s strong demand from customers, it’s something we’d be open to supporting.

Feel free to try out the product, or better yet, book a guided demo!

Ali Raza
Curious: this looks promising. Does the Conversation API also handle long-term memory limits, or is it mainly for session-based conversations?
Anuj Kapasia

@alirazaengineer Once a conversation is initiated, all chats for that conversation_id are stored with us indefinitely. Users can continue the same conversation anytime. However, context from one conversation never flows into any other conversation!

Jay Dev

Wow, Amarsia looks amazing! The auto-stored conversation data sounds like a lifesaver for iterating. How does versioning handle changes that impact older conversation histories? Super curious!

Anuj Kapasia

@jaydev13 Great question!

Conversations API was designed with this in mind: a new version release for your AI feature won't impact older conversations.

Once a conversation is initiated, it's locked to a specific version in the workflow's commit history.
