I've been an early tester and user. Really like how it helps me see iterations on an output (e.g. image generation using nano banana) so I can go back, branch from an earlier version, add more metadata/context to a branch, etc. And then when I go back the next day it's all there for me.
Spine
Hey PH, Akshay here, CEO of Spine.
We built Spine to be the AI workspace where agents research, build, and deliver. You describe a project, agents research across the web, and you get finished results on a visual canvas where you can see every step.
Here's what's new:
Integrations: Spine agents now connect to your apps. Google Drive, Slack, CRMs, calendars, project management tools.
One prompt can pull a prospect list from your CRM, research each company across their website, news, and financials, then draft personalized outreach. All connected.
Automations: Build a workflow once. No triggers to configure, no Zapier logic. Just tell Spine what you want done and when. Daily, weekly, custom. You come back to finished work.
What this looks like in practice:
→ Set up a weekly competitive intel workflow. Agents browse competitor websites, track pricing and product changes, scan their blog and social, and deliver a structured report every Monday.
→ One of my workflows monitors my ICP's space for news, trends, and regulatory shifts, writes up why it matters, and saves it to Google Sheets. I show up to calls knowing things my buyers don't expect me to know.
→ Before a sales call, agents research the prospect, pull recent news and leadership changes, and generate a deck with relevant context. After the call, they draft a follow-up you can send that same day.
→ Before a tax meeting, they research the relevant tax regulations and generate a spreadsheet you can hand your accountant.
Why is this better?
Most AI tools run a single agent in a chat thread. Spine agents work on a canvas backed by a block-based DAG: they run in parallel, pass structured context to each other, and produce compound deliverables.
State-of-the-art on the GAIA Level 3 and DeepSearchQA benchmarks. The canvas isn't decoration; it's the infrastructure.
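To make the DAG idea concrete, here's a minimal illustrative sketch of what "blocks that run in parallel and pass structured context downstream" means. All names and the three-block workflow are hypothetical examples, not Spine's actual API or internals:

```python
from concurrent.futures import ThreadPoolExecutor

def pull_prospects():
    # Upstream block: emits structured output, not free-form chat text.
    return ["Acme Corp", "Globex"]

def research_company(name):
    # Independent per-prospect blocks can fan out and run in parallel.
    return {"company": name, "notes": f"research notes on {name}"}

def draft_outreach(research):
    # Downstream block consumes the merged structured context.
    return [f"Hi {r['company']}, re: {r['notes']}" for r in research]

def run_workflow():
    prospects = pull_prospects()
    with ThreadPoolExecutor() as pool:
        research = list(pool.map(research_company, prospects))
    return draft_outreach(research)

drafts = run_workflow()
print(drafts[0])
```

The point of the structure: because each block declares what it consumes and produces, independent branches can execute concurrently, and the final deliverable is composed from all of them rather than from one long chat transcript.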
Try it → Connect your first app and set up a workflow that runs while you sleep. Start with something you need done every week.
Use code SPINEUP for up to 30% off any annual plan. Offer ends in 5 days.
Ashwin and I are in the comments all day. Ask us anything, or tell us what workflow you'd automate first.
→ getspine.ai
Spine
Hey PH, Ashwin here, co-founder and CTO.
Quick technical context on how this works under the hood.
Integrations: When an agent needs data from an external tool, you don't set up a separate connector. Just prompt Spine in plain English; it handles auth, figures out which tool to use, and asks for your permission when needed.
You don't configure anything. The integration is just part of the workflow.
Automations: Agents re-run the full workflow on schedule. Not a cached refresh. They browse the web again, re-pull from your tools, and produce updated results.
Your Monday morning report actually reflects what happened over the weekend.
Scheduling: Daily, weekly, or custom. No triggers to set up, no Zapier-style logic. You describe what you want and when. Spine handles the rest.
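A minimal sketch of the "full re-run, not cached refresh" idea: on each scheduled tick, the whole workflow executes again from scratch. The function names and cadence mapping here are hypothetical illustrations, not Spine's actual scheduler:

```python
import datetime

CADENCES = {
    "daily": datetime.timedelta(days=1),
    "weekly": datetime.timedelta(weeks=1),
}

def next_run(last_run, cadence):
    # Map a plain-language cadence to the next execution time.
    return last_run + CADENCES[cadence]

def run_on_schedule(workflow, cadence, now, last_run):
    # When the tick is due, re-execute the *entire* workflow so the
    # output reflects current data, instead of serving a stale cache.
    if now >= next_run(last_run, cadence):
        return workflow(), now  # fresh results plus the new last_run
    return None, last_run       # not due yet; nothing is re-served

# Usage: a weekly report re-generated the Monday after its last run.
last = datetime.datetime(2024, 1, 1)
report, last = run_on_schedule(lambda: "fresh Monday report", "weekly",
                               datetime.datetime(2024, 1, 8), last)
```

Because nothing is cached between ticks, a Monday run naturally picks up anything that changed over the weekend.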
Happy to answer anything technical in the comments.
Try it out at getspine.ai
One of the coolest launches today! Is there any one thing Spine can do today that even power users stitching together GPT + Zapier + Notion can't?
one prompt → agents research, write docs, update tools…
feels powerful, but also slightly terrifying
especially when it's not just reading data, but writing back into your apps
curious what the "oh shit" moment looked like during testing
CRM integration is where this gets complicated. One misconfigured agent run corrupting a contact list is a nightmare to clean up.
Spine
My favorite part of this launch: scheduled research. I have one set up tracking what people are saying about AI tools across Reddit and Twitter; it runs weekly and drops a summary straight into Notion. No more manual scrolling to stay on top of sentiment.
Cross-app context gaps are brutal. Linear says the sprint is on track, Slack tells a different story, GitHub shows 40% done. If Spine surfaces those conflicts explicitly instead of averaging them out, that's the feature I actually want.