Jaid Jashim

[Proposal] Bridge Memory: Safely “borrow” context across projects

TL;DR

Bridge Memory is a feature idea for Claude (Anthropic’s AI assistant) that lets devs temporarily pull in read-only context (“Memory Chips”) from other projects for a single thread—so you can reuse standards, snippets, and runbooks without leaking data or polluting memories.

What it is

* Memory Chips (ephemeral): Add chips like Project A → Auth Patterns or Project X → Incident Runbook while composing.

* Scoped & read-only: Chips are TTL-bound (time-to-live), with no auto write-back; your current project’s memory stays clean.

* Inline provenance: Replies show which chip informed which part.

* Admin guardrails: Allow/deny lists, approvals, label-based redaction (PII/secrets), and audit logs.

Why it helps

* Cut repeat explanations across repos/teams.

* Reuse proven auth/logging/CI/CD patterns instead of reinventing.

* Faster onboarding by “bridging” Architecture + Conventions.

* Safer incident response: pull the right runbook just for this thread.

MVP flow

1. Compose your question.

2. Click Add Memory Chip → search by project/label/item.

3. Attach 1–3 chips (e.g., “Auth”, “Runbook”).

4. Send. Chips are used for this reply only and auto-expire.
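To make the flow concrete, here’s a minimal Python sketch of the data model behind it. Everything here (`MemoryChip`, `compose_request`, the field names) is illustrative for this proposal, not an actual Claude API:

```python
from dataclasses import dataclass, field
import time

@dataclass
class MemoryChip:
    """An ephemeral, read-only piece of borrowed context (hypothetical model)."""
    project: str
    label: str
    ttl_seconds: int = 0              # 0 = single-reply scope, expires after send
    attached_at: float = field(default_factory=time.time)

    def expired(self) -> bool:
        if self.ttl_seconds == 0:
            return False  # single-reply expiry is enforced by the composer, not the clock
        return time.time() - self.attached_at > self.ttl_seconds

def compose_request(question: str, chips: list[MemoryChip]) -> dict:
    """Bundle the question with 1-3 active chips; chips never write back."""
    active = [c for c in chips if not c.expired()]
    if not 1 <= len(active) <= 3:
        raise ValueError("attach 1-3 memory chips")
    return {
        "question": question,
        "context": [{"project": c.project, "label": c.label} for c in active],
    }
```

The read-only guarantee falls out of the shape: the request carries only project/label references, so there’s no path for a chip to mutate the target project’s memory.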

Privacy & compliance

* Default-deny cross-project recall; everything is explicit & logged.

* Sensitive labels can be auto-redacted or require owner approval.

* Respect data residency & tenant isolation; TTL limits scope and all access is auditable.
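A rough sketch of the policy gate these bullets imply: default-deny with an explicit allow list, label-based redaction, and an append-only audit trail. Function and label names are placeholders, not a real API:

```python
# Hypothetical policy gate: cross-project recall is denied unless the source
# project is explicitly allow-listed, sensitive labels trigger redaction/approval,
# and every lookup is logged regardless of outcome.

AUDIT_LOG: list[dict] = []

def check_chip_access(user: str, source_project: str, labels: set[str],
                      allow_list: set[str],
                      sensitive: frozenset = frozenset({"pii", "secret"})):
    """Return (allowed, redacted_labels). Deny by default; log everything."""
    allowed = source_project in allow_list        # default-deny posture
    redacted = labels & sensitive                 # would require owner approval
    AUDIT_LOG.append({"user": user, "project": source_project,
                      "allowed": allowed, "redacted": sorted(redacted)})
    return allowed and not redacted, redacted
```

The key design choice is that the audit entry is written before any early return, so denied and redacted attempts are just as visible as successful ones.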

Example use cases

* Reuse JWT best practices from Service A while coding Service B.

* Pull “K8s deploy runbook” into a hotfix discussion.

* Bring “Coding standards” and “Architecture overview” into a new hire’s first week.

Open questions (feedback welcome!)

* Security model: Is read-only + TTL + provenance enough? What else is mandatory?

* UX: Where should chips live in the composer to avoid friction?

* Policy defaults: Org opt-in vs user-initiated with policy gates?

* Limits: Ideal cap on chips per message (1–3?) and default TTL (single reply vs N minutes)?

* Failure modes: What worries you (stale chips, over-scoping, audits)?

Disclosure: I’m not affiliated with Anthropic; this is a community proposal.

Participation: I’ll stay active in the thread and incorporate your feedback into a draft spec and mock flows.


Replies

Abdul Rehman

I’d vote for 2–3 chips max per thread. More than that might get confusing or start mixing too much context.

Jaid Jashim

@abod_rehman Totally with you: 2–3 chips feels like the right balance between utility and signal.

How I’m thinking about enforcing it:

  • Soft cap at 2, hard cap at 3: At 3, show “Context may dilute: replace an existing chip or proceed with an override.”

  • Relevance gating: Only allow chips above a similarity/label match threshold; lower-scoring chips get hidden behind “See more.”

  • Typed slots: e.g., 1 Standards + 1 Runbook + 1 Local Context, preventing piling on the same kind.

  • Conflict check: If two chips overlap/contradict, prompt to pick one (or pin a source of truth).

  • UI guardrails: Counter (`2/3 active`), quick Replace action, and inline provenance so you can trace which chip influenced what.
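Here’s a rough Python sketch of that gating logic pulled together, so it’s easier to poke holes in. Slot names, the relevance threshold, and the warning strings are all placeholders:

```python
# Combines the cap, typed slots, and relevance gating above into one pass.
# Candidates are (name, slot_type, relevance_score); thresholds are hypothetical.

SOFT_CAP, HARD_CAP = 2, 3
MIN_RELEVANCE = 0.6

def gate_chips(candidates: list[tuple[str, str, float]]):
    """Return (chosen chip names, warnings mirroring the soft-cap prompt)."""
    relevant = [c for c in candidates if c[2] >= MIN_RELEVANCE]  # hide low scorers
    seen_slots: set[str] = set()
    chosen, warnings = [], []
    for name, slot, score in sorted(relevant, key=lambda c: -c[2]):
        if slot in seen_slots:                       # typed-slot rule
            warnings.append(f"{name}: slot '{slot}' already filled")
            continue
        if len(chosen) >= HARD_CAP:                  # hard cap: no silent overflow
            warnings.append(f"{name}: hard cap of {HARD_CAP} reached")
            continue
        chosen.append(name)
        seen_slots.add(slot)
        if len(chosen) == HARD_CAP:                  # soft-cap dilution warning
            warnings.append("context may dilute: consider replacing a chip")
    return chosen, warnings
```

Note the conflict check from the list above isn’t modeled here; detecting contradictory chips probably needs content-level comparison, not just slot types.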

Would you prefer a strict hard cap (no override) or a soft cap with an explicit override + rationale for those rare edge cases?