Nika

Is automating customer support the ultimate solution?

Today I read in TechCrunch that Airbnb says a third of its customer support in the US and Canada is now handled by AI.

Many CRM-focused platforms are following suit, automating support with their own “AI bots” – for example, Crisp, Chatbase, and others.

From a business perspective, it makes perfect sense: lower costs, faster responses, and fewer lost customers.

From a customer perspective:

I remember situations where you just can't reach a real human and AI can't solve the specific problem.

This happened to me just a month ago with Trading 212: I needed to provide statements from an old bank account I had closed three years ago. I was redirected to the bank's chatbot, which couldn't help, and the general email address sent back an automatic reply saying it couldn't handle my request.

In the end, I had to go in person to the bank branch.

Today’s technology forced me to use a solution from the 1980s.

In finance, which is sensitive in 99% of cases, I wouldn't fully rely on AI support; I would prefer a hybrid model where AI is complemented by human assistance.

Of course, in such a setup, support agents may not be fully occupied at all times, but customers get help when they really need it.

The question for companies: How can you optimally integrate AI into customer support while keeping humans available when it counts?

Replies

Nika

P.S. But this is not only about the financial world. You pay for products, apartments, electronics, etc. Money is still involved, and people are very sensitive when it comes to refunds and solving money-related problems.

Giodio Mitaart

Interesting topic, @busmark_w_nika! I don’t think automation is the ultimate solution. The real issue isn’t AI answering questions. It’s when AI refuses to escalate. In sensitive areas like finance, the bot shouldn’t try to solve everything. It should quickly recognize when a case is complex and route it to a human with full context.

The experience breaks when users can’t reach a real person, have to repeat themselves, or get stuck in reply loops.

One example from what my team and I are building at @AskYura: we focus on keeping the balance right. The AI handles fast tier-1 questions, but we design clear escalation rules so it knows when to step aside. When that happens, the full conversation summary is passed to the human agent, so users don't have to repeat themselves. We also support screenshots and image inputs to make things easier. And if the case needs quick help, there's a built-in "notify human agent" feature so a person can step in immediately.
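For anyone curious what "clear escalation rules" can look like in practice, here's a minimal sketch. All names and thresholds here are hypothetical, not AskYura's actual implementation; the point is just that escalation is an explicit, testable rule plus a context handoff, rather than something left to the model.

```python
# Hypothetical escalation-routing sketch: explicit rules plus a context
# handoff so users never have to repeat themselves.

from dataclasses import dataclass, field

SENSITIVE_TOPICS = {"refund", "chargeback", "account_closure"}

@dataclass
class Conversation:
    messages: list[str] = field(default_factory=list)
    topic: str = "general"
    user_requested_human: bool = False
    failed_bot_attempts: int = 0

def should_escalate(conv: Conversation) -> bool:
    """Escalate on sensitive topics, explicit requests, or repeated bot failures."""
    return (
        conv.topic in SENSITIVE_TOPICS
        or conv.user_requested_human
        or conv.failed_bot_attempts >= 2
    )

def handoff_payload(conv: Conversation) -> dict:
    """Bundle the context a human agent needs, so the user explains things only once."""
    return {
        "summary": " | ".join(conv.messages[-5:]),  # stand-in for an LLM-written summary
        "topic": conv.topic,
        "failed_bot_attempts": conv.failed_bot_attempts,
    }

conv = Conversation(messages=["Where is my refund?"], topic="refund")
if should_escalate(conv):
    print("Routing to human with context:", handoff_payload(conv))
```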

Curious how others are approaching this and optimizing their setup too. 👀

Nika

@npmitaart This is the best approach – to have a mix. To be honest, I prefer this one because if I want to solve something promptly, AI will answer. If AI cannot answer, the case will be forwarded to a human. It's as simple as that.

AJ

I've been a rep and a tech at many levels.

The key here is that automation, even before AI, could handle lots of routine questions, simple if-this-then-that situations. But the moment someone's case isn't that, it starts to fall apart. Most chatbot implementations are mediocre at best, and they don't really inspire confidence. That's the biggest challenge: if the human customer doesn't trust the AI, then even if the AI follows protocol and gives a solution, the customer walks away irritated.

The point of customer support is to solve the technical problem and the emotional problem. Customers are humans; they are often stressed, depending on your service or product. If a human rep is just reciting a script, customers react badly too.

The best automation is the one customers don't notice. Easily searchable public docs, relevant FAQs, fed by continuous data analysis and constant improvement.

I remember this call from when I was working at a call center ages ago. It was one of those debit-card digital-bank apps. This woman calls, half desperate, half livid: her card hadn't shown up after three tries. In that moment she needed two things: her problem solved, and a place to let out the emotions.

It made sense that she was so upset; this may have been her only account. I've been there.

So I can't tell her the address on file, but I can ask her to confirm it. She says her address, and it turns out no one had bothered to update it: the card went to the wrong address. I tell her this and she's understandably frustrated, so I just update it and double-check it with her. Then I mark her card as lost, not stolen, in case she can get to her old mailbox and find the previous card.

She was glad to have it sorted, and I was happy that it was solved.

You can hear the relief in someone's voice.

This led me to ask: can AI exercise this sort of reasoning and work through why her card wasn't arriving? Current LLMs certainly can, when prompted for it, when they have a procedure and tools to check, and when they are configured correctly.
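As a rough sketch of the "procedure and tools" part, here's what that card triage could look like once the procedure is written down. Everything below is a hypothetical stand-in (the function names, the account system), not any real bank's API:

```python
# Invented stand-ins for real account tools; the point is the written-down
# procedure. An agent (human or LLM) that can call these checks can reason
# its way to the root cause instead of guessing.

def address_on_file(account_id: str) -> str:
    return "12 Old Street"  # stub: would query the account system

def reissue_card(account_id: str, status: str) -> None:
    print(f"Card for {account_id} marked '{status}'; replacement ordered.")

def handle_missing_card(account_id: str, confirmed_address: str) -> None:
    stored = address_on_file(account_id)
    if stored != confirmed_address:
        # Root cause found: the card kept shipping to a stale address.
        print(f"Updating address from {stored!r} to {confirmed_address!r}")
    # 'lost', not 'stolen', in case the customer can still retrieve
    # the previous card from the old mailbox.
    reissue_card(account_id, status="lost")

handle_missing_card("acct-42", "34 New Avenue")
```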

Can the LLM also handle the emotional side of it?

Maybe. It can certainly talk the talk.

Another story. I was working help desk at the time. This guy, a remote worker, was getting set up after his monitor arrived broken. He was not happy, of course. All it took to change his mood was a well-timed joke about FedEx and that viral video of packages being kicked. We had a good laugh. I sent the order for the replacement, with a big note not to use FedEx. He went from upset to relaxed.

Would an AI chatbot do that? Would the organization even allow it to? Or would the risk of liability be too much?

And would the human customer take kindly to this sort of bonding?

Maybe the question is not "Can it walk the walk?"

The question is: do we want it to?

We humans are social creatures, and having someone at the other end of the line makes all the difference.

I'll say it again. polar.sh has stellar customer support. I feel like I can ask questions and know they will get back to me.

I don't know how much they automate, and I don't care, because it feels genuine. I don't care if the reps use AI to draft replies or have ML algorithms feeding their FAQ.

All I know is that I talk to someone and they always come through.

Nika

@build_with_aj It is questionable whether AI can do it. Some humans are so socially clumsy that they wouldn't be able to satisfy the frustrated client on the other end. :D But imagine where AI is heading.

When the webcam is enabled, it can "read" non-verbal communication (facial expressions, etc.), and it can talk with some expression. I think it could be possible in the future when used correctly, but not for now.

And of course, not everybody can be a customer support agent, because some people are less sociable and empathetic than others. And that's something that influences the whole conversation and impression.

AJ

@busmark_w_nika 

Tbh, a lot can be learned.

For me, the biggest risk factor in AI support is when customers know it's AI and come in with anti-AI sentiment.

I mean, I'm pretty tech-friendly, but I've had horrible experiences with some CX bots.

And it's extra frustrating because you can tell the issue isn't one incompetent rep. It's structural, institutional.

An unhelpful rep can be down to nerves, lack of training, apathy, inexperience, a number of things, but there is always a factor of "it's probably an issue with the person."

With AI assistants, you can tell it's all organizational incompetence: bad setup, bad implementation.

Bad call with a rep: I walk away thinking the next one will be better and that they will learn.

Bad interaction with the bot: I walk away knowing the next one will be largely the same.