Arch - Build fast, hyper-personalized agents with intelligent infra
Arch is an intelligent infrastructure primitive that helps developers build fast, personalized agents in minutes. Arch is a gateway engineered with LLMs to seamlessly integrate prompts with APIs, and to transparently add safety and tracing features outside app logic.
Hello PH!
My name is Salman and I work on Arch - an open source infrastructure primitive to help developers build fast, personalized agents in minutes. Arch is an intelligent prompt gateway engineered with (fast) LLMs for the secure handling, robust observability, and seamless integration of prompts with your APIs - all outside business logic.
Arch is built on (and by the contributors of) Envoy with the belief that:
Prompts are nuanced and opaque user requests, which require the same capabilities as traditional HTTP requests including secure handling, intelligent routing, robust observability, and integration with backend (API) systems for personalization - all outside business logic.
Arch handles the critical but undifferentiated tasks related to the handling and processing of prompts, including detecting and rejecting jailbreak attempts, intelligently calling "backend" APIs to fulfill the user's request represented in a prompt, routing to and offering disaster recovery between upstream LLMs, and managing the observability of prompts and LLM interactions in a centralized way.
Core Features:
ποΈ Built on Envoy: Arch runs alongside application servers, and builds on top of Envoy's proven HTTP management and scalability features to handle ingress and egress traffic related to prompts and LLMs.
π€ Function Calling: For fast agentic and RAG apps. Engineered with SOTA LLMs to handle fast, cost-effective, and accurate prompt-based tasks like function calling and parameter extraction from prompts. Our models can respond in under 200 ms!
π‘οΈ Prompt Guard: Arch centralizes prompt guards to prevent jailbreak attempts and ensure safe user interactions without writing a single line of code.
π¦ Traffic Management: Arch manages LLM calls, offering smart retries, automatic cutover, and resilient upstream connections for continuous availability across multiple LLM providers, or across multiple versions from a single provider.
π OpenTelemetry Tracing, Metrics and Logs: Arch uses the W3C Trace Context standard to enable complete request tracing across applications, ensuring compatibility with existing observability tools, and provides metrics to monitor latency, token usage, and error rates, helping optimize AI application performance.
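Because Arch sits in the request path as a gateway, an application talks to it the same way it would talk to an LLM provider. The sketch below builds an OpenAI-style chat payload that a local gateway could forward upstream; the endpoint address and model name are assumptions for illustration, not Arch's actual defaults.

```python
import json

# Hypothetical local gateway address; Arch's real listener port is set in
# its configuration, so this URL is an illustrative assumption.
GATEWAY_URL = "http://127.0.0.1:12000/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "gpt-4o-mini") -> str:
    """Build an OpenAI-style chat-completions payload as a JSON string.

    The application only constructs the prompt; routing, retries,
    guardrails, and tracing would be handled by the gateway in between.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return json.dumps(payload)

body = build_chat_request("What is the status of insurance claim 1234?")
print(body)
```

You could then POST `body` to `GATEWAY_URL` with any HTTP client; the point is that none of the safety or observability features require changes to this application-side code.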
- Visit our Github page to get started (and βοΈ the project π) : https://github.com/katanemo/arch
- To learn more about Arch, check out our docs: https://docs.archgw.com/
A big thanks π to my incredibly talented team who helped us reach our first milestone as we reinvent infrastructure primitives for Generative AI.
@alex_tartach Thanks Alex - building this was a lot of fun, and it's early days for us. Packing intelligence into infrastructure to help developers build fast agents (faster than before) is the ultimate goal.
What's a personalized agent? A web chatbot, a personal assistant, or something else? This field is moving so fast, it's hard to know what terms mean these days. Thanks π
@sentry_co Personalized means customizing the agent to be unique to your product. Most agents just summarize over some data. With Arch you can build something very tailored, like creating ad campaigns via prompts or updating insurance claims, and offer generative summaries in the same experience.
Impressive work. At Meta we share the same core belief that safety of agents is paramount, and the more of those concerns we can tackle early in the request path, the better. Arch feels like a great fit for responsible and safe AI, not to mention the other superpowers it offers developers.
One quick question: can you elaborate on the prompt guard model? I see that you fine-tuned it over the Prompt Guard from Meta?
@sarmad_siddiqui Thank you! Yes, Arch uses purpose-built LLMs for guardrails. The Arch-Guard collection of models can be found here: https://huggingface.co/collectio.... We fine-tuned over Meta's Prompt Guard, and the optimization improved TPR (+4%) without impacting FPR. This was for the jailbreak use case, and the next set of baseline guardrails will include toxicity, harmfulness, etc.
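For readers unfamiliar with the two metrics mentioned above: TPR (true-positive rate) is the fraction of actual jailbreak prompts the guard catches, and FPR (false-positive rate) is the fraction of benign prompts it wrongly blocks. A minimal sketch with toy numbers (not the actual Arch-Guard evaluation data):

```python
def tpr_fpr(tp: int, fn: int, fp: int, tn: int) -> tuple[float, float]:
    """Compute true-positive rate and false-positive rate.

    tp: jailbreaks correctly flagged    fn: jailbreaks missed
    fp: benign prompts wrongly flagged  tn: benign prompts passed through
    """
    tpr = tp / (tp + fn)  # recall on jailbreak attempts
    fpr = fp / (fp + tn)  # rate of benign prompts blocked
    return tpr, fpr

# Illustrative counts only: 100 jailbreaks, 100 benign prompts.
tpr, fpr = tpr_fpr(tp=92, fn=8, fp=3, tn=97)
```

The fine-tuning trade-off described in the reply is exactly this: push TPR up without letting FPR drift, since a guard that blocks legitimate prompts is as costly as one that misses attacks.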
@salman_paracha awesome I know my holiday season plans now!!!!
This is awesome! Arch is a game-changer for building personalized agents. I love the idea of using Envoy as the foundation, as it's known for its scalability and reliability. The focus on prompt safety and observability is crucial for building trustworthy AI systems. I'm particularly excited about the fast function calling and parameter extraction capabilities β this will be a huge time-saver. I'm definitely going to check out the docs and give Arch a spin!
Congrats on the launch! Really great project -- I believe in the premise of a gateway that consolidates a lot of the infrastructure work needed for any LLM project.
Hello! My name is Adil Hafeez, and I am the Co-Founder at Katanemo and the lead developer behind Arch. Previously I worked on Envoy at Lyft. Arch is engineered with purpose-built LLMs; it handles the critical but undifferentiated tasks related to the handling and processing of prompts, including detecting and rejecting jailbreak attempts, intelligently calling "backend" APIs to fulfill the user's request represented in a prompt, routing to and offering disaster recovery between upstream LLMs, and managing the observability of prompts and LLM interactions in a centralized way - all outside business logic.
Here are some additional key details of the project:
* Built on top of Envoy and written in Rust. It runs alongside application servers, and uses Envoy's proven HTTP management and scalability features to handle traffic related to prompts and LLMs.
* Function calling for fast agentic and RAG apps. Engineered with purpose-built fast LLMs to handle fast, cost-effective, and accurate prompt-based tasks like function/API calling, and parameter extraction from prompts.
* Prompt guardrails to prevent jailbreak attempts and ensure safe user interactions without writing a single line of code.
* Manages LLM calls, offering smart retries, automatic cutover, and resilient upstream connections for continuous availability.
* Uses the W3C Trace Context standard to enable complete request tracing across applications, ensuring compatibility with observability tools, and provides metrics to monitor latency, token usage, and error rates, helping optimize AI application performance.
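The W3C Trace Context standard mentioned in the last bullet boils down to a `traceparent` HTTP header with four hyphen-separated fields: version, a 32-hex-char trace ID, a 16-hex-char span ID, and trace flags. A minimal sketch of generating such a header (the helper name is mine, not part of Arch's API):

```python
import secrets

def make_traceparent() -> str:
    """Build a W3C Trace Context 'traceparent' header value.

    Format: version-traceid-spanid-flags, e.g.
    00-<32 hex chars>-<16 hex chars>-01
    """
    trace_id = secrets.token_hex(16)  # 16 random bytes -> 32 hex chars
    span_id = secrets.token_hex(8)    # 8 random bytes -> 16 hex chars
    return f"00-{trace_id}-{span_id}-01"  # flags 01 = sampled

header = make_traceparent()
```

Because this format is a vendor-neutral standard, a gateway that propagates it lets any compatible backend (Jaeger, Zipkin, OpenTelemetry collectors, etc.) stitch the prompt's journey into one end-to-end trace.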
We love Arch, love open source, and would love to build alongside the community.
Please leave a comment or feedback here and I will be happy to answer!
@_naeemulhaq Thanks Naeem - deeply appreciate the kind words. Would love to hack away with your team and see how we can help you move faster in building fast personalized agents...