Agent Hub
Engineering · April 15, 2026 · 6 min read

How we build AI agents you can trust

Four principles from two years of practice — from transparency to human handoff.

by Vernes Perviz

Trust is the most valuable currency

When we started building AI agents two years ago, the question was always: 'How can we be sure the agent does the right thing?'

Today we know: trust isn't built on 100% accuracy. It comes from transparency, clear handoffs, and the ability to trace what happened at any point.

1. Source attribution

Every answer comes with a link to its source. The user sees at a glance where the information came from, and the editor can verify the agent quoted the right document.
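One way to make this concrete is to model an answer so it cannot exist without its sources. This is a minimal sketch; the `SourcedAnswer` type and the example URL are illustrative, not the post's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class SourcedAnswer:
    """An agent answer that always carries links to its source documents."""
    text: str
    sources: list = field(default_factory=list)  # document URLs

    def render(self) -> str:
        # Append the source links so the user sees at a glance
        # where the information came from.
        links = "\n".join(f"Source: {url}" for url in self.sources)
        return f"{self.text}\n{links}" if self.sources else self.text

answer = SourcedAnswer(
    text="Returns are accepted within 30 days.",
    sources=["https://example.com/docs/returns-policy"],
)
print(answer.render())
```

Keeping the sources on the answer object (rather than in free text) also lets an editor programmatically verify which document was quoted.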

2. Confidence thresholds

When the agent isn't sure, it says so openly. We measure answer quality in real time (embedding distance, source coverage, re-ranking score) and trigger human handoff on low confidence.

3. Human handoff

Complex or emotionally charged requests are automatically routed to a human. The agent says 'let me bring in a colleague': no railroading, no frustration.
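The routing decision can be sketched as a simple function over the confidence check and an escalation signal. The keyword set here is a toy stand-in; in practice a sentiment or intent classifier would detect emotionally charged requests.

```python
# Toy escalation triggers; a real system would use a classifier instead.
ESCALATION_KEYWORDS = {"complaint", "refund", "angry", "cancel", "legal"}

def route(message: str, confidence_ok: bool) -> str:
    """Send low-confidence or emotionally charged requests to a human."""
    words = set(message.lower().split())
    if not confidence_ok or words & ESCALATION_KEYWORDS:
        return "human"  # agent says: 'let me bring in a colleague'
    return "agent"

print(route("I want a refund now", confidence_ok=True))   # routes to a human
print(route("what are your opening hours", confidence_ok=True))
```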

4. Audit logs

Every conversation is traceable: who asked what, what the agent answered, which tools were called. Pro: full logs; Enterprise: tamper-proof with WORM storage.
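An audit entry that captures those three facts per turn could be an append-only JSON line, as in this sketch. The field names are illustrative assumptions, not our actual schema.

```python
import datetime
import json

def log_turn(user_id: str, question: str, answer: str, tools_called: list) -> str:
    """Serialize one conversation turn as a single append-only JSON log line."""
    entry = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user_id": user_id,          # who asked
        "question": question,        # what was asked
        "answer": answer,            # what the agent answered
        "tools_called": tools_called # which tools were called
    }
    return json.dumps(entry)

line = log_turn("u-42", "When do you open?", "We open at 9am.", ["search_docs"])
print(line)
```

One JSON object per line keeps the log greppable and makes it easy to ship unchanged to WORM storage for the tamper-proof tier.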

What's next

We're working on a Trust Score — a single number per conversation that summarizes these four dimensions. Early tests show that conversations scoring above 85% correlate strongly with customer satisfaction. More on that in the next post.
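To make the idea tangible, here is one hypothetical way such a score could aggregate the four dimensions. The post doesn't specify the formula; a plain mean is the simplest possible assumption, and the weighting would almost certainly differ in practice.

```python
def trust_score(attribution: float, confidence: float,
                handoff_quality: float, auditability: float) -> float:
    """Hypothetical Trust Score: mean of four per-conversation
    dimensions (each in [0, 1]), expressed as a percentage."""
    mean = (attribution + confidence + handoff_quality + auditability) / 4
    return round(100 * mean, 1)

trust_score(0.9, 0.85, 1.0, 1.0)  # -> 93.8
```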