Forums

10+ Years of Backend Experience Taught Me How (Not) to Use AI

I want to talk about how I built @MCPCore - a cloud platform where developers create, deploy, and manage MCP servers from their browser - and what 10+ years of backend experience taught me about using AI in production work. Not the hype version. The honest one.

Every idea is already taken. So what?

I'm a backend engineer. I've spent most of my career building server-side systems, and I currently lead a backend team at my company. At some point I wanted to build something of my own. A product. Something real.

Cencurity · p/cencuritypark

21d ago

Cencurity Engine Open Source Release

A lot of AI coding discussions focus on before or after

Reddit · p/reddit · Rohan Chaubey

22d ago

If Reddit required face scans to prove you’re human… would you still use it?

With AI bots getting harder to detect, there's been growing discussion around platforms using biometric verification (like face scans) to confirm real users.

Cool in theory... Reddit is full of bots, fake accounts and garbage engagement. But let's be real:

Reddit without anonymity isn't Reddit.

Flexprice · p/flexprice · shreya chaurasia

1mo ago

How do you understand the difference between interest and intent?

Two conversations. Same week.

First founder said,
"Really interesting product. Love what you're building."

Great energy. Smart questions. Strong validation.

We never heard back.

What if your 95%+ retention hides a 60-day sales cycle?

From the outside, it looks simple.
Strong retention. Happy customers. Steady growth.

What most people don't see: our average deal takes ~60 days to close.

Some move faster. Many don't.
And that changes how you run GTM entirely.

Long sales cycles stretch everything:

Introducing the Flexprice MCP Server.

You shouldn't need to open five dashboards just to change pricing.

Now you don t.

Plug Cursor, Claude Code, VS Code, Gemini, Windsurf or any MCP-compatible client directly into your Flexprice workspace and prompt your billing infrastructure like it's code.
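For a sense of what "plugging in" usually involves: MCP-compatible clients like Claude Desktop and Cursor read a JSON config with an `mcpServers` entry per server. The sketch below is illustrative only; the package name `@flexprice/mcp-server` and the env variable are assumptions, so check the official Flexprice docs for the real values.

```json
{
  "mcpServers": {
    "flexprice": {
      "command": "npx",
      "args": ["-y", "@flexprice/mcp-server"],
      "env": {
        "FLEXPRICE_API_KEY": "<your-api-key>"
      }
    }
  }
}
```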

What’s one metric you trust more than likes and signups?

Startup land rewards motion.
Announcements, launches, funding headlines, feature drops - it all looks like acceleration.

But visible activity isn't the same as real progress.

Shipping fast doesn't mean you're building the right thing.
Raising capital doesn't mean you found product-market fit.
Talking about scale doesn't mean you solved anything painful.

A lot of ecosystems reward velocity because it's easy to measure.
Markets reward outcomes because they're impossible to fake.

What does “good marketing” even mean in 2026, when everyone can ship and everyone can post?

Emergent isn't just doing marketing. They're making it feel inevitable.

They picked a moment with attention gravity (India AI Impact Summit in Delhi), then stacked surfaces that create "I keep seeing them" energy:

  • Billboards across the city + Economic Times print ads

  • A narrative number big enough to force curiosity: $100M ARR run-rate in 8 months

  • Credibility signals and proximity without being subtle

  • And a product unlock right after: now on mobile, build from your phone

The genius is they're not explaining the product.
They're engineering belief: "this is the platform, everyone's building, you're late."

Can you really do outcome-based pricing if you can’t measure outcomes?

Last week I met a Voice AI company. We barely talked product. The real heat was pricing: not how much, but what exactly are we charging for?

They don't want per-minute, per-seat, or per-API-call anymore. They want per resolved call, per booking, per qualified lead, per deflection.

Sounds clean. Until you try to define "resolved."
Who validates it?
What if their CRM says something else?
What if attribution breaks?

At that point, the metric becomes the product. And the infrastructure behind that metric becomes the business model.

Are credits becoming the default pricing language for AI products?

Subscription pricing struggles when value is variable.
Pure usage pricing is accurate, but messy to explain, messy to predict, and easy to hate when the bill surprises you.

Credit-based pricing sits in the middle:

  • Simple for customers: "I bought 10,000 credits"

  • Flexible for teams: bundle tokens, GPU time, storage, calls into one unit

  • Better for finance: prepaid revenue, clearer burn, fewer billing shocks

  • Better for product: you can experiment with packaging without rebuilding billing every time

The bigger trend is this:
We're moving from pricing as a plan to pricing as a runtime.
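The "one unit" idea above can be sketched in a few lines. The conversion rates here are made-up examples; a real metering system would track usage asynchronously and reconcile against prepaid balances.

```python
# Hypothetical conversion rates: each raw usage unit maps to credits.
CREDIT_RATES = {
    "llm_tokens": 0.001,     # 1,000 tokens = 1 credit
    "gpu_seconds": 0.5,      # 2 GPU-seconds = 1 credit
    "storage_gb_days": 0.1,  # 10 GB-days = 1 credit
    "api_calls": 0.01,       # 100 calls = 1 credit
}

def usage_to_credits(usage: dict[str, float]) -> float:
    """Collapse mixed usage (tokens, GPU time, storage, calls) into one number."""
    return sum(CREDIT_RATES[kind] * amount for kind, amount in usage.items())

def remaining(balance: float, usage: dict[str, float]) -> float:
    """Credits left after deducting this period's metered usage."""
    return balance - usage_to_credits(usage)
```

Repackaging then becomes a rate-table change instead of a billing rebuild, which is the "better for product" point above.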

Why does running one outbound motion feel like orchestrating four different systems?

Every Monday, this is my GTM reality:

  • One tool for prospect discovery + enrichment.

  • One for basic LinkedIn workflows.

  • Another just for LinkedIn messaging.

  • And a separate one for email sequences.

Same list. Same campaign. Different dashboards.

If I want to remove one company, I remove it everywhere.
If I pause outreach, I double-check multiple tools to make sure nothing accidentally goes out.

Is ambition contagious, or is it burnout?

Spend enough time around driven builders, and your standards rise. You want to ship faster. Do more. Stay ahead.

That part is powerful.

But here's what I've been noticing about myself:

I treat growth as urgent.
I treat health as optional.
Deadlines feel fixed.
Sleep feels flexible.
Momentum feels critical.
Recovery feels negotiable.

Are we confusing chaos with creativity?

Vibes are powerful. They spark ideas fast and give you momentum before overthinking takes over.
But vibes without structure just create noise.

That's where prompt engineering matters.
It's the bridge between inspiration and execution. It turns abstract intent into concrete instruction.

It's what turns "I want something cool" into:

  • Here's the outcome

  • Here's the user

  • Here are the constraints

  • Here are the edge cases
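That checklist can be as literal as a template. This is a minimal sketch of the idea, not any particular tool's prompt format:

```python
def build_prompt(outcome: str, user: str,
                 constraints: list[str], edge_cases: list[str]) -> str:
    """Turn a vague idea into a structured instruction a model can act on."""
    def bullets(items: list[str]) -> str:
        return "\n".join(f"- {item}" for item in items)

    return (
        f"Outcome: {outcome}\n"
        f"User: {user}\n"
        f"Constraints:\n{bullets(constraints)}\n"
        f"Edge cases:\n{bullets(edge_cases)}"
    )
```

The point is the forcing function: writing the four sections makes you decide what "something cool" actually means before the model fills the gaps for you.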

What if the outbound channel you're betting on is the wrong one for your market?

I've been talking to founders across different stages and ICPs, and here's what's surprising: there's no consensus anymore.
1. Cold email is crushing it for some teams and completely dead for others.
2. LinkedIn DMs are either goldmines or ghost towns.
3. And somehow, cold calls are quietly working for a subset of B2B companies.

It feels like the best practice playbooks don't account for how much this varies by your specific ICP, deal size, and market maturity.

So I'm curious about your experience, not what you think should work, but what's actually generating pipeline for you right now. Is it cold emails? Calls? LinkedIn outreach? Or have you found success with a completely different motion?

Would love to hear what's working in your world. What outbound channel is moving the needle for you?

When you launched on Product Hunt, how did you pick your category?

Most founders treat categories like labels.
Product Hunt treats them like distribution.

Categories weren t added to classify products.
They were added because one global feed stopped working.
Too much noise. Too little intent.

Your category decides:

  • who sees you

  • how you re evaluated

  • the quality of feedback you get

Nika

2mo ago

How much do you trust AI agents?

With the advent of clawdbots, it's as if we've all lost our inhibitions and "put our lives completely in their hands."

I'm all for delegating work, but not giving them too much personal/sensitive stuff to handle.

How many AI tools do you know, but can’t actually use?

I realized I was stuck in AI FOMO.
Bought multiple courses. Knew every tool by name.
Hadn't built a single working automation.

So I stopped and asked one question:
"What repetitive task can I hand off to AI today?"

Not after another course. Not after learning more. Today.

That shift mattered.

YC RFS 2026: here’s the breakdown that actually matters

A lot of people read YC RFS Spring 2026 as a trend list.
It's not. It's a signal of where work inside companies is quietly breaking.

Here s how this shows up in real teams:

Product teams
YC references @Cursor, but the opportunity isn't coding faster.
It's helping PMs synthesize interviews, metrics, and feedback to decide what to build next.

Finance and hedge funds
Firms like Renaissance, Bridgewater, and D.E. Shaw won by systematising decisions.
AI-native hedge funds push this further with continuous, machine-driven strategies.

Why is defining relevance still the hardest part of building AI features?

As more teams build AI agents, search, and personalized feeds, one problem keeps surfacing.
Not generation.
Not model quality.

It's retrieval and ranking: deciding what information should show up, and in what order.

Most teams solve this by stitching together systems. Vector search for meaning. Keyword search for precision. Custom logic for business rules. Over time, relevance logic spreads everywhere and becomes hard to change.
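A toy version of that stitching shows why the logic sprawls. It blends a vector score, a keyword score, and a rule-based boost with hand-picked weights (the weights here are arbitrary), and every new business rule ends up inside the one blending function:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Vector search side: cosine similarity between embeddings."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def keyword_score(query: str, doc: str) -> float:
    """Keyword side: fraction of query terms that appear in the document."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q) if q else 0.0

def hybrid_score(vec_sim: float, kw: float, rule_boost: float,
                 w_vec: float = 0.6, w_kw: float = 0.3, w_rules: float = 0.1) -> float:
    """Business-rule side lands here too: one blending function to rule them all."""
    return w_vec * vec_sim + w_kw * kw + w_rules * rule_boost
```

Tuning `w_vec` vs `w_kw` per surface, per market, per experiment is exactly the relevance logic that "spreads everywhere and becomes hard to change."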

@Shaped approaches this differently.

Can Product Hunt actually bring in customers after launch day?

It did for us.
3 customers came to @Flexprice last week. No ads, no cold DMs. Just conversations.

Most people treat Product Hunt as a one-day spike.
I treat it like a community of builders.

We launched Flexprice last year and learned (the hard way) what works here and what doesn't.
So now I keep it simple:

  • I support makers launching on Product Hunt for free

  • I give honest product feedback as a real user

  • I help with launch strategy when useful