Kevin William David

Arch - Build fast, hyper-personalized agents with intelligent infra

Arch is an intelligent infrastructure primitive to help developers build fast, personalized agents in minutes. Arch is a gateway engineered with LLMs to seamlessly integrate prompts with APIs, and to transparently add safety and tracing features outside app logic.


Replies

Salman Paracha
Maker
Hello PH! My name is Salman and I work on Arch - an open source infrastructure primitive to help developers build fast, personalized agents in minutes. Arch is an intelligent prompt gateway engineered with (fast) LLMs for the secure handling, robust observability, and seamless integration of prompts with your APIs - all outside business logic.

Arch is built on (and by the contributors of) Envoy with the belief that: prompts are nuanced and opaque user requests, which require the same capabilities as traditional HTTP requests, including secure handling, intelligent routing, robust observability, and integration with backend (API) systems for personalization - all outside business logic.

Arch handles the critical but undifferentiated tasks related to the handling and processing of prompts, including detecting and rejecting jailbreak attempts, intelligently calling "backend" APIs to fulfill the user's request represented in a prompt, routing to and offering disaster recovery between upstream LLMs, and managing the observability of prompts and LLM interactions in a centralized way.

⭐ Core Features:

🏗️ Built on Envoy: Arch runs alongside application servers, and builds on top of Envoy's proven HTTP management and scalability features to handle ingress and egress traffic related to prompts and LLMs.

🤖 Function Calling: For fast agentic and RAG apps. Engineered with SOTA LLMs to handle fast, cost-effective, and accurate prompt-based tasks like function calling and parameter extraction from prompts. Our models can respond in under 200 ms!

🛡️ Prompt Guard: Arch centralizes prompt guards to prevent jailbreak attempts and ensure safe user interactions without writing a single line of code.

🚦 Traffic Management: Arch manages LLM calls, offering smart retries, automatic cutover, and resilient upstream connections for continuous availability between LLMs, or between multiple versions of a single LLM provider.

👀 OpenTelemetry Tracing, Metrics and Logs: Arch uses the W3C Trace Context standard to enable complete request tracing across applications, ensuring compatibility with existing observability tools, and provides metrics to monitor latency, token usage, and error rates, helping optimize AI application performance.

- Visit our GitHub page to get started (and ⭐️ the project 🙏): https://github.com/katanemo/arch
- To learn more about Arch, visit our docs: https://docs.archgw.com/

A big thanks 🙏 to my incredibly talented team who helped us reach our first milestone as we reinvent infrastructure primitives for Generative AI.
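[Editor's note: for readers wiring a gateway like this into an existing observability stack, the W3C Trace Context `traceparent` header mentioned above is just a formatted string: version, trace-id, parent-id, and flags. A minimal Python sketch of building and parsing one - the field layout follows the W3C spec, but the helper names are illustrative, not Arch's API:]

```python
import re
import secrets

def make_traceparent() -> str:
    """Build a W3C Trace Context traceparent header:
    version(2 hex)-trace-id(32 hex)-parent-id(16 hex)-flags(2 hex)."""
    trace_id = secrets.token_hex(16)   # 16 random bytes -> 32 hex chars
    parent_id = secrets.token_hex(8)   # 8 random bytes  -> 16 hex chars
    return f"00-{trace_id}-{parent_id}-01"  # flags 01 = sampled

_TRACEPARENT_RE = re.compile(
    r"^(?P<version>[0-9a-f]{2})-(?P<trace_id>[0-9a-f]{32})-"
    r"(?P<parent_id>[0-9a-f]{16})-(?P<flags>[0-9a-f]{2})$"
)

def parse_traceparent(header: str) -> dict:
    """Validate a traceparent header and split it into its four fields."""
    m = _TRACEPARENT_RE.match(header)
    if m is None:
        raise ValueError(f"malformed traceparent: {header!r}")
    return m.groupdict()

header = make_traceparent()
fields = parse_traceparent(header)
print(fields["flags"])  # "01" -> sampled
```

Because every hop propagates this same header, any OpenTelemetry-compatible backend can stitch the gateway's spans together with the application's.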
Salman Paracha
@alex_tartach Thanks Alex - building this was a lot of fun, and it's early days for us. Packing intelligence into infrastructure to help developers build fast agents (faster than before) is the ultimate goal.
AndrΓ© J
What's a personalised agent? A web chatbot, a personal assistant, or something else? This field is moving so fast, it's hard to know what terms mean these days. Thanks 🙏
Salman Paracha
@sentry_co personalized means customizing the agent to be unique to your use case. Most agents are just summarizing over some data. With Arch you can build something very tailored - like creating ad campaigns via prompts or updating insurance claims - and offer generative summaries in the same experience.
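[Editor's note: to make the "updating insurance claims via prompts" idea concrete, a gateway's function-calling step boils down to extracting structured parameters from a prompt and routing them to a backend handler. A minimal dispatch-side sketch, where `update_claim` and `create_ad_campaign` are hypothetical backend APIs, not Arch's actual interface:]

```python
from typing import Any, Callable

# Hypothetical backend handlers a prompt gateway could route to.
def update_claim(claim_id: str, status: str) -> dict:
    """Pretend backend API: update an insurance claim's status."""
    return {"claim_id": claim_id, "status": status, "ok": True}

def create_ad_campaign(name: str, budget_usd: float) -> dict:
    """Pretend backend API: create an ad campaign."""
    return {"campaign": name, "budget_usd": budget_usd, "ok": True}

HANDLERS: dict[str, Callable[..., dict]] = {
    "update_claim": update_claim,
    "create_ad_campaign": create_ad_campaign,
}

def dispatch(call: dict[str, Any]) -> dict:
    """Route an LLM-extracted call {'name': ..., 'arguments': {...}}
    to the matching backend handler."""
    handler = HANDLERS.get(call["name"])
    if handler is None:
        raise KeyError(f"unknown function: {call['name']}")
    return handler(**call["arguments"])

# e.g. parameters a gateway LLM might extract from
# "mark claim 123 as approved"
result = dispatch({"name": "update_claim",
                   "arguments": {"claim_id": "123", "status": "approved"}})
print(result)
```

The point of putting this step in the gateway is that the extraction and routing live outside the app's business logic, so the app only ever sees clean, typed calls.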
Sarmad Siddiqui
Impressive work - at Meta we have the same core belief that safety of agents is paramount, and the earlier we can tackle those concerns in the request path, the better. Arch feels like a great fit for responsible and safe AI - not to mention the other superpowers it offers developers. One quick question: can you elaborate on the prompt guard model? I see that you fine-tuned it over the prompt guard from Meta?
Salman Paracha
@sarmad_siddiqui Thank you! Yes, Arch uses purpose-built LLMs for guardrails. The Arch-Guard collection of models can be found here: https://huggingface.co/collectio.... We fine-tuned over Meta's prompt guard, and the optimization was to improve TPR (+4%) without impacting FPR. This was for the jailbreak use case; the next set of baseline guardrails will include toxicity, harmfulness, etc.
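[Editor's note: for readers less familiar with the metrics above, TPR (true positive rate) is the share of actual jailbreaks the guard catches, and FPR (false positive rate) is the share of benign prompts it wrongly blocks. A quick sketch of computing both from labeled predictions - toy data, not Arch-Guard's evaluation set:]

```python
def tpr_fpr(y_true: list[int], y_pred: list[int]) -> tuple[float, float]:
    """Compute (TPR, FPR) for binary labels: 1 = jailbreak, 0 = benign.
    TPR = TP / (TP + FN); FPR = FP / (FP + TN)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp / (tp + fn), fp / (fp + tn)

# Toy example: 4 jailbreak prompts, 4 benign prompts.
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 1]  # guard catches 3/4, one false alarm
tpr, fpr = tpr_fpr(y_true, y_pred)
print(tpr, fpr)  # 0.75 0.25
```

Raising TPR while holding FPR flat (the trade-off Salman describes) matters because false positives block legitimate users, so you can't simply lower the detection threshold.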
Sarmad Siddiqui
@salman_paracha awesome I know my holiday season plans now!!!!
Wood Peng
This is awesome! Arch is a game-changer for building personalized agents. I love the idea of using Envoy as the foundation, as it's known for its scalability and reliability. The focus on prompt safety and observability is crucial for building trustworthy AI systems. I'm particularly excited about the fast function calling and parameter extraction capabilities – this will be a huge time-saver. I'm definitely going to check out the docs and give Arch a spin!
Salman Paracha
@peng_wood thank you! Great to have you looking at the docs and giving the project a spin. Would love any feedback as you try it out
Sarmad Qadri
Congrats on the launch! Really great project -- I believe in the premise of a gateway that consolidates a lot of the infrastructure work needed for any LLM project.
Salman Paracha
@saqadri Thanks Sarmad - really appreciate you taking the time to dig deeper on the project, and believing in the premise
Tanmay Parekh
All the best for the launch @salman_paracha & team!
Salman Paracha
@parekh_tanmay thank you 🙏 🙏
Kane
Congrats, Salman! Sounds awesome! I'm following this project on GitHub. Keep it going! 🚀
Adil Hafeez
@blueeon thanks Kane. Do give it a shot at https://github.com/katanemo/arch/ - we would love to hear your feedback.
Adil Hafeez
Hello! My name is Adil Hafeez, and I am the Co-Founder at Katanemo and the lead developer behind Arch. Previously I worked on Envoy at Lyft.

Arch is engineered with purpose-built LLMs; it handles the critical but undifferentiated tasks related to the handling and processing of prompts, including detecting and rejecting jailbreak attempts, intelligently calling "backend" APIs to fulfill the user's request represented in a prompt, routing to and offering disaster recovery between upstream LLMs, and managing the observability of prompts and LLM interactions in a centralized way - all outside business logic.

Here are some additional key details of the project:

* Built on top of Envoy and written in Rust. It runs alongside application servers, and uses Envoy's proven HTTP management and scalability features to handle traffic related to prompts and LLMs.
* Function calling for fast agentic and RAG apps. Engineered with purpose-built fast LLMs to handle fast, cost-effective, and accurate prompt-based tasks like function/API calling and parameter extraction from prompts.
* Prompt guardrails to prevent jailbreak attempts and ensure safe user interactions without writing a single line of code.
* Manages LLM calls, offering smart retries, automatic cutover, and resilient upstream connections for continuous availability.
* Uses the W3C Trace Context standard to enable complete request tracing across applications, ensuring compatibility with observability tools, and provides metrics to monitor latency, token usage, and error rates, helping optimize AI application performance.

We love Arch, love open source, and would love to build alongside the community. Please leave a comment or feedback here and I will be happy to answer!
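[Editor's note: the "smart retries, automatic cutover" behavior Adil describes can be pictured as a failover loop over an ordered list of upstream LLM endpoints. A minimal sketch under my own assumptions - generic callables stand in for LLM providers, and this is not Arch's actual Rust implementation:]

```python
from typing import Callable

def call_with_failover(upstreams: list[Callable[[str], str]],
                       prompt: str,
                       retries_per_upstream: int = 2) -> str:
    """Try each upstream in order; retry transient failures, then cut
    over to the next upstream. Raises if every upstream is exhausted."""
    last_error: Exception | None = None
    for upstream in upstreams:
        for _ in range(retries_per_upstream):
            try:
                return upstream(prompt)
            except Exception as exc:  # real gateways retry only retryable errors
                last_error = exc
    raise RuntimeError("all upstreams failed") from last_error

# Demo: a primary that always times out, and a healthy secondary.
def flaky_primary(prompt: str) -> str:
    raise TimeoutError("primary timed out")

def healthy_secondary(prompt: str) -> str:
    return f"echo: {prompt}"

print(call_with_failover([flaky_primary, healthy_secondary], "hi")) # echo: hi
```

Doing this in the gateway means every app behind it inherits the same resilience policy without each one re-implementing retry logic.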
Ishwar Jha
congratulations 👏 I am going to dive in
Salman Paracha
@ishwarjha thank you. Would love the feedback as we build Arch as an open source project.
Adil Hafeez
@ishwarjha thank you 🙏
Naeem ul Haq
Congrats on the launch. Would love to try it, especially the prompt guard. Onward!
Salman Paracha
@_naeemulhaq Thanks Naeem - deeply appreciate the kind words. Would love to hack away with your team and see how we can help you move faster in building fast personalized agents...
Adil Hafeez
@_naeemulhaq thanks Naeem. Try it out here at https://github.com/katanemo/arch/. We would love to know your feedback.