I've been an early tester and user. Really like how it helps me see iterations on an output (e.g. image generation using nano banana) so I can go back, branch from an earlier version, add more metadata / context to a branch etc. And then when I go back the next day it's all there for me.
Spine
Hey PH 👋 Akshay here, CEO of Spine.
We built Spine to be the AI workspace where agents research, build, and deliver. You describe a project, agents research across the web, and you get finished results on a visual canvas where you can see every step.
Here's what's new:
Integrations: Spine agents now connect to your apps. Google Drive, Slack, CRMs, calendars, project management tools.
One prompt can pull a prospect list from your CRM, research each company across their website, news, and financials, then draft personalized outreach. All connected.
Automations: Build a workflow once. No triggers to configure, no Zapier logic. Just tell Spine what you want done and when. Daily, weekly, custom. You come back to finished work.
What this looks like in practice:
→ Set up a weekly competitive intel workflow. Agents browse competitor websites, track pricing and product changes, scan their blog and social, and deliver a structured report every Monday.
→ One of my workflows monitors my ICP's space for news, trends, and regulatory shifts, writes up why it matters, and saves it to Google Sheets. I show up to calls knowing things my buyers don't expect me to know.
→ Before a sales call, agents research the prospect, pull recent news and leadership changes, and generate a deck with relevant context. After the call, they draft a follow-up you can send that same day.
→ Before a tax meeting, they research the relevant tax regulations and generate a spreadsheet you can hand your accountant.
Why is this better?
Most AI tools run a single agent in a chat thread. Spine agents work on a canvas backed by a block-based DAG: they run in parallel, pass structured context to each other, and produce compound deliverables.
State-of-the-art on GAIA Level 3 and DeepSearchQA benchmarks. The canvas isn't decoration. It's the infrastructure.
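For the technically curious, the core DAG idea can be sketched in a few lines of Python. This is an illustrative toy, not Spine's actual code: the `Block` class, `run_dag`, and the agent functions are all assumptions made up for the example.

```python
from concurrent.futures import ThreadPoolExecutor

# Toy sketch of a block-based DAG: each block is an agent step that
# receives the structured outputs of its upstream blocks. Independent
# blocks run in parallel; downstream blocks wait on their inputs.

class Block:
    def __init__(self, name, fn, upstream=()):
        self.name = name
        self.fn = fn                  # agent logic: upstream outputs -> output
        self.upstream = list(upstream)

def run_dag(blocks):
    """Run blocks in dependency order; ready blocks run concurrently."""
    done = {}
    remaining = list(blocks)
    with ThreadPoolExecutor() as pool:
        while remaining:
            ready = [b for b in remaining
                     if all(u.name in done for u in b.upstream)]
            futures = {
                b: pool.submit(b.fn, {u.name: done[u.name] for u in b.upstream})
                for b in ready
            }
            for b, f in futures.items():
                done[b.name] = f.result()
            remaining = [b for b in remaining if b not in ready]
    return done

# Example: two research blocks run in parallel and feed one synthesis block.
crm = Block("crm", lambda _: ["Acme", "Globex"])
news = Block("news", lambda _: {"Acme": "raised Series B"})
synth = Block("synth", lambda up: [
    (c, up["news"].get(c, "no recent news")) for c in up["crm"]
], upstream=[crm, news])

results = run_dag([crm, news, synth])
```

The point of the structure: the synthesis block never sees a blended blob of text, it sees each upstream agent's output as a named, structured input.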
Try it → Connect your first app and set up a workflow that runs while you sleep. Start with something you need done every week.
🎁 Use code SPINEUP for up to 30% off any annual plan. Offer ends in 5 days.
Ashwin and I are in the comments all day. Ask us anything, or tell us what workflow you'd automate first.
→ getspine.ai
One of the coolest launches today! Is there any one thing Spine can do today that even power users stitching together GPT + Zapier + Notion can't?
Spine
@lak7 Great question. The honest answer: most individual tasks you can do in Spine, you could technically stitch together with GPT + Zapier + Notion + enough patience.
The difference is deterministic vs. non-deterministic work.
If you know every step upfront, A → B → C, Zapier is great. Build the chain, run it, done.
But most real work isn't like that. You start a research task and step 3 surfaces something that changes what you should've searched for in step 1. Or one agent finds a dead end and needs to reroute the whole plan.
Spine's agents pass structured context to each other through a DAG, so they adapt mid-run. One agent's output reshapes what the next agent does. That's not a workflow, it's a swarm. The canvas just makes it visible.
The other piece: try getting Zapier + GPT to do multi-step web research with citations, synthesis across 20+ sources, and a final deliverable, all in one run. We benchmark against the hardest agent evals (GAIA Level 3, DeepSearchQA) and beat systems with 10x our resources.
tl;dr: if you know every step in the chain, Zapier works fine. If the problem has unknown unknowns, that's where Spine lives.
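The deterministic-vs-adaptive distinction can be shown in a small sketch. This is an illustrative toy under my own assumptions, not Spine's internals: `run_adaptive`, `execute`, and `replan` are made-up names. The fixed chain would just run the queue in order; here, each result gets a chance to rewrite the remaining plan.

```python
# Toy sketch of adaptive re-planning: unlike a fixed A -> B -> C chain,
# any step's result can reroute everything that hasn't run yet.

def run_adaptive(plan, execute, replan):
    """Execute steps in order, letting each result rewrite the remaining queue."""
    results, queue = [], list(plan)
    while queue:
        step = queue.pop(0)
        result = execute(step)
        results.append((step, result))
        queue = replan(queue, result)   # a finding can reroute the rest of the run
    return results

# Example: hitting a dead end swaps the remaining steps for a workaround.
def execute(step):
    return "dead link" if step == "visit competitor site" else f"done: {step}"

def replan(queue, result):
    if result == "dead link":
        # drop the step that depended on the dead site, add a workaround
        return ["check archived version"] + [s for s in queue if s != "summarize site"]
    return queue

out = run_adaptive(["visit competitor site", "summarize site"], execute, replan)
```

A Zapier-style chain would have failed at the dead link; the adaptive run finishes with a different plan than it started with.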
@budhkarakshay that DAG approach sounds really cool, especially the adaptive flow part. Will definitely experiment with it over the next few days.
Spine
Hey PH, Ashwin here, co-founder and CTO.
Quick technical context on how this works under the hood.
Integrations: When an agent needs data from an external tool, you don't set up a separate connector. Just prompt Spine in plain English; it handles auth, figures out which tool to use, and asks for your permission when needed.
You don't configure anything. The integration is just part of the workflow.
Automations: Agents re-run the full workflow on schedule. Not a cached refresh. They browse the web again, re-pull from your tools, and produce updated results.
Your Monday morning report actually reflects what happened over the weekend.
Scheduling: Daily, weekly, or custom. No triggers to set up, no Zapier-style logic. You describe what you want and when. Spine handles the rest.
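In sketch form, the difference between a scheduled re-run and a cached refresh looks like this. Purely illustrative, not our actual scheduler; the function names are assumptions for the example.

```python
# Toy sketch of a scheduled automation: every tick re-executes the whole
# workflow (fresh browsing, fresh tool pulls) instead of serving a cache.

def scheduled_runs(workflow, dates):
    """Re-execute the full workflow once per scheduled date."""
    return [(d, workflow(d)) for d in dates]

pulls = []

def weekly_report(date):
    pulls.append(date)   # stands in for re-browsing the web and re-pulling tools
    return f"report for week of {date}"

runs = scheduled_runs(weekly_report, ["2024-06-03", "2024-06-10"])
```

Because the workflow runs in full each time, the second report reflects whatever changed since the first, not a stale snapshot.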
Happy to answer anything technical in the comments.
Try it out → getspine.ai
CRM integration is where this gets complicated. One misconfigured agent run corrupting a contact list is a nightmare to clean up.
Spine
@mykola_kondratiuk 100% valid concern. this is why write actions in Spine require explicit permissions. by default, agents will research and prepare the update but ask before pushing anything to your CRM or any other tool. you stay in the loop on anything destructive.
so the flow is more like: agent pulls contacts, enriches them, drafts the changes, then says "here's what I want to update, approve?" rather than silently writing back.
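roughly, the gate looks like this in sketch form (illustrative toy, not our real code; all function names are made up for the example):

```python
# Toy sketch of write-gating: the research phase is read-only and only
# proposes changes; the write phase pushes a change only after approval.

def propose_updates(contacts, enrich):
    """Read-only phase: drafts proposed changes, touches nothing."""
    return [{"id": c["id"], "set": enrich(c)} for c in contacts]

def apply_updates(proposals, approve, write):
    """Write phase: each change is pushed only if explicitly approved."""
    applied = []
    for p in proposals:
        if approve(p):               # "here's what I want to update, approve?"
            write(p)
            applied.append(p)
    return applied

# Example: one contact gets enriched, the user approves, then it's written.
crm = {"1": {"id": "1", "title": None}}
proposals = propose_updates(list(crm.values()),
                            enrich=lambda c: {"title": "VP Sales"})
applied = apply_updates(proposals,
                        approve=lambda p: True,   # user clicked approve
                        write=lambda p: crm[p["id"]].update(p["set"]))
```

the useful property: if approval never comes, the CRM is byte-for-byte untouched, so a bad run can't corrupt your contact list.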
That's the right default. Explicit write-gating is what separates a useful research agent from one that causes incidents.
Cross-app context gaps are brutal. Linear says sprint on track, Slack tells a different story, GitHub shows 40% done. If Spine surfaces those conflicts explicitly instead of averaging them out - that's the feature I actually want.
Spine
@mykola_kondratiuk you're describing exactly how the canvas works. you tell it what you want, and it spins up one agent block pulling from Linear, another from Slack, another from GitHub, each with its own context and logic. then a downstream block synthesizes all three.
the key thing is it's a DAG, not a summary tool. so that final block isn't averaging signals, it's getting structured context from each upstream agent. if Linear says "on track" but GitHub shows 40% done, that conflict flows through as a conflict, not a blended answer.
you could even have it flag those mismatches explicitly and write them into a status doc or push a Slack message back to the team. research → conflict detection → action, one canvas.
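in sketch form, the conflict pass-through looks something like this (illustrative toy with made-up names, not our real synthesis block):

```python
# Toy sketch of conflict pass-through: the synthesis step receives a
# structured signal from each upstream agent and surfaces disagreements
# explicitly instead of blending them into one averaged answer.

def synthesize(signals):
    """signals: {source: status}. Flag a conflict when sources disagree."""
    statuses = set(signals.values())
    if len(statuses) > 1:
        return {"status": "conflict",
                "detail": sorted(signals.items())}  # each source's claim survives
    return {"status": statuses.pop(), "detail": sorted(signals.items())}

# Example: three tools, three different stories about the same sprint.
report = synthesize({
    "linear": "on track",    # sprint board says fine
    "slack":  "blocked",     # team chat says otherwise
    "github": "40% done",
})
```

the output keeps every source's claim attached to its source, which is what lets a downstream block write "Linear and GitHub disagree" into a status doc instead of a mushy average.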
Solid. The synthesis step is where it gets tricky - when Linear says 'on track' and Slack says 'blocked', which signal wins?
one prompt → agents research, write docs, update tools…
feels powerful, but also slightly terrifying 😅
especially when it’s not just reading data, but writing back into your apps
curious what the “oh shit” moment looked like during testing
Spine
@webappski ha the "slightly terrifying" part is real. we felt it too.
one moment that sticks out: we had a canvas doing competitive intel, and one of the agents hit a dead competitor's website. instead of giving up, it spun up a browser use block on its own, went to the Wayback Machine, and pulled the archived version because it was that determined to finish the task.
nobody prompted that. the agents passed enough context through the DAG that it figured out the workaround on its own. that was the "oh shit this actually works" moment and the "oh shit we need guardrails" moment at the same time.
on the write-back piece: totally get the concern. unless you give the system full permissions, it asks before making any update. so it's research → decision → action, one canvas, but you stay in the loop on anything destructive.
The recurring workflow angle is interesting; most agent tools focus on one-off runs. Curious how Spine handles auth token refresh for long-running integrations like Google or Slack? That's usually where scheduled agents break, in my experience.
Spine
@dklymentiev Great question. Right now, our agents pause when access is missing or expired and ask the user to reconnect via an email notification.
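in sketch form, the pause-and-reconnect behavior is something like this (illustrative toy, not Spine's internals; all names are made up for the example):

```python
# Toy sketch of pausing on missing/expired credentials: instead of failing
# silently mid-run, the step pauses and the user is notified to reconnect.

def run_with_auth(step, token, notify):
    """Run a step only if the token is usable; otherwise pause and notify."""
    if token is None or token.get("expired"):
        notify("Please reconnect your account to resume this workflow.")
        return {"state": "paused", "result": None}
    return {"state": "done", "result": step(token)}

# Example: an expired Google token pauses the run and queues a notification.
messages = []
outcome = run_with_auth(step=lambda t: "pulled 12 files",
                        token={"expired": True},
                        notify=messages.append)
```

once the user reconnects, the same step can be retried with the fresh token, so a scheduled workflow resumes instead of skipping a week.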