What's worked for us looks very different from spray-and-pray.
We've learned that outbound works when it's intentional at every step.
A few things that made the biggest difference for us:
- Getting the ICP really right. Sometimes the first outreach isn't to the buyer, but to someone who can open the door.
- Personalization isn't optional. Company context, role, recent updates. Generic gets ignored fast.
- Channels are chosen by output, not comfort. We double down on what actually converts.
- The first message rarely works. Conversations usually start around the third or fourth touch, if there's value each time.
- Timing matters more than volume. Funding news, hiring, social posts. Showing up when the problem is top of mind changes everything.
- We focus on relationships, not just pipeline. Some buy later. Some refer. All conversations compound.
- Context before calls helps. If someone engages multiple times, the conversation feels very different.
- Signals matter. Engagement often tells you when to reach out, not just who.
Over time, I've realized how much effort we put into our websites: landing pages, pricing, testimonials, product tours. And yet most visitors only ever deeply interact with one or two sections, depending on your ICP.
For developer-first products, that's usually docs.
For consumer apps, maybe it s onboarding or pricing.
For enterprise tools, perhaps case studies or ROI calculators.
A lot of people read YC RFS Spring 2026 as a trend list. It's not. It's a signal of where work inside companies is quietly breaking.
Here s how this shows up in real teams:
Product teams: YC references @Cursor, but the opportunity isn't coding faster. It's helping PMs synthesize interviews, metrics, and feedback to decide what to build next.
Finance and hedge funds: Firms like Renaissance, Bridgewater, and D.E. Shaw won by systematising decisions. AI-native hedge funds push this further with continuous, machine-driven strategies.
As more teams build AI agents, search, and personalized feeds, one problem keeps surfacing. Not generation. Not model quality.
It's retrieval and ranking. Deciding what information should show up, and in what order.
Most teams solve this by stitching together systems. Vector search for meaning. Keyword search for precision. Custom logic for business rules. Over time, relevance logic spreads everywhere and becomes hard to change.
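To make the stitching concrete, here's a minimal sketch in Python. Everything in it is illustrative: the documents, the 0.6/0.4 fusion weights, and the sponsored-boost rule are assumptions, and the toy scorers stand in for a real embedding model and a BM25 index.

```python
import math

# Toy document store; "is_sponsored" stands in for a business-rule input.
docs = [
    {"id": 1, "text": "vector search captures semantic meaning", "is_sponsored": False},
    {"id": 2, "text": "keyword search gives exact precision", "is_sponsored": True},
    {"id": 3, "text": "ranking decides what shows up and in what order", "is_sponsored": False},
]

def keyword_score(query: str, text: str) -> float:
    # Naive term overlap, standing in for BM25-style keyword precision.
    q, t = set(query.lower().split()), set(text.lower().split())
    return len(q & t) / max(len(q), 1)

def vector_score(query: str, text: str) -> float:
    # Character-frequency "embeddings" plus cosine similarity, standing in
    # for a real embedding model and vector index.
    def embed(s: str) -> list:
        v = [0.0] * 26
        for ch in s.lower():
            if "a" <= ch <= "z":
                v[ord(ch) - 97] += 1.0
        return v
    a, b = embed(query), embed(text)
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def rank(query: str):
    scored = []
    for d in docs:
        # Weighted fusion of the two signals; the 0.6/0.4 split is arbitrary.
        score = 0.6 * vector_score(query, d["text"]) + 0.4 * keyword_score(query, d["text"])
        # Business rule layered on top: sponsored documents get a flat boost.
        if d["is_sponsored"]:
            score += 0.1
        scored.append((round(score, 3), d["id"]))
    return sorted(scored, reverse=True)

print(rank("what should show up in search and in what order"))
```

Even at this size, the relevance policy lives in three places (the vector scorer, the keyword scorer, the boost rule), which is exactly why it becomes hard to change once it spreads across real systems.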
This debate often gets framed as "Should researchers use AI for literature reviews?"
I think the real question is different.
Is it ethical to spend hundreds of researcher hours on mechanical work when that time could be spent advancing actual knowledge?
Think about a researcher spending an entire weekend searching papers, skimming irrelevant abstracts, copying citations, and fixing references. That's not insight or discovery. That's overhead.
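Much of that overhead is scriptable today. As a rough sketch, the snippet below pulls candidate papers from arXiv's public API (a real, documented endpoint); the query string and result count are placeholders, and a real workflow would still need deduplication, filtering, and reference formatting on top.

```python
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"  # Atom namespace used by the arXiv feed

def search_arxiv(query: str, max_results: int = 5) -> None:
    # search_query, start, and max_results are documented arXiv API parameters.
    url = "http://export.arxiv.org/api/query?" + urllib.parse.urlencode(
        {"search_query": f"all:{query}", "start": 0, "max_results": max_results}
    )
    with urllib.request.urlopen(url) as resp:
        feed = ET.parse(resp).getroot()
    # Each Atom <entry> is one paper; title plus canonical link is the raw
    # material for a citation list.
    for entry in feed.findall(ATOM + "entry"):
        title = " ".join(entry.findtext(ATOM + "title", "").split())
        link = entry.findtext(ATOM + "id", "").strip()
        print(f"- {title} ({link})")

search_arxiv("literature review automation")
```

The point isn't this particular script. It's that the weekend of mechanical searching is the automatable part; the judgment about what the papers mean is not.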
Watched a launch yesterday. By morning, the founder's DMs were full of pitches from other builders. No questions about the product. Just "here's what I'm working on."
Look, networking is part of this. We all need it. But we're skipping a step. Launch day used to mean something. Try the product. Ask real questions. Then connect.
Now we've optimized so hard for efficiency that we skip straight to pitching. @Mastra hit #3 yesterday despite this. But think about what that says: quality products have to fight through noise just to get noticed.
Here's my take: we're not wrong to network. We're just moving too fast.