
There has been a lot of press lately about how the newest generation of agentic artificial intelligence (AI) models shows an increased tendency to "lie" to users, or "bluff" when caught in a mistake or a hallucination. As AI becomes more integrated into life and business, the question of whether users can trust their AI agents grows harder to ignore.
That's why product data scientist and full-stack AI engineer Vaibhav Jha is concerned not just with technical fluency but with building trust into AI design. For Jha, deploying effective, enterprise-level AI that fulfills its promise is about more than devising the right prompts; it's about understanding people, designing with empathy, and building systems that earn trust proactively.
From his earliest projects at PayPal, where Jha developed AI-powered fraud-detection systems to protect user trust, to his latest work designing conversational agents that flag uncertain answers instead of bluffing when caught in an error, Jha has consistently focused on making AI responsible by design. This human-centered approach, he hopes, will reshape how generative AI (GenAI) earns users' trust in the workplace.
Responsible by Design
As far as Jha is concerned, the toughest barrier to GenAI adoption at the enterprise level is trust, not the capabilities of agentic AI. Simply honing the accuracy of agents' responses isn't, by itself, enough; what's needed is AI design that prioritizes user control.
Recent examples from the industry have shown how AI agents frequently double down rather than admit uncertainty when they make a mistake or hallucinate, undermining user trust. The problem lies not only in the fabrication of information but also in the failure to convey uncertainty, which can lead to frustration, misinformation, and reputational damage.
The AI solutions Jha has helped develop don't just aim for accuracy; they communicate how confident they are in their responses. They flag ambiguity in both the prompts and the answers. When a prompt is ambiguous, the agent asks the user to rephrase it. When an answer may include information that isn't factual, the agent marks the response as unverified.
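In practice, this kind of fallback logic can be expressed as a simple routing layer in front of the model. The sketch below is a minimal illustration in Python, not Jha's actual implementation: the function names, thresholds, and stubbed model calls (score_ambiguity, generate_answer, CONFIDENCE_THRESHOLD) are all hypothetical, and exist only to show how ambiguous prompts can be bounced back to the user and low-confidence answers labeled as unverified rather than presented as fact.

```python
from dataclasses import dataclass

# Hypothetical thresholds; a real system would calibrate these empirically.
AMBIGUITY_THRESHOLD = 0.5
CONFIDENCE_THRESHOLD = 0.8

@dataclass
class AgentReply:
    text: str
    verified: bool          # False marks the answer as potentially unreliable
    needs_rephrase: bool    # True asks the user to clarify the prompt

def score_ambiguity(prompt: str) -> float:
    """Stub: estimate how ambiguous the prompt is (0 = clear, 1 = ambiguous)."""
    vague_terms = {"it", "this", "that", "stuff", "things"}
    words = prompt.lower().split()
    return min(1.0, sum(w in vague_terms for w in words) / max(len(words), 1) * 5)

def generate_answer(prompt: str) -> tuple[str, float]:
    """Stub: call the underlying model and return (answer, confidence)."""
    return f"Draft answer to: {prompt}", 0.6  # placeholder values

def answer_with_confidence(prompt: str) -> AgentReply:
    # Fallback 1: ambiguous prompt -> ask the user to rephrase, don't guess.
    if score_ambiguity(prompt) > AMBIGUITY_THRESHOLD:
        return AgentReply(
            text="Your question could be read several ways. Could you rephrase it?",
            verified=False,
            needs_rephrase=True,
        )
    answer, confidence = generate_answer(prompt)
    # Fallback 2: low-confidence answer -> deliver it, flagged as unverified.
    if confidence < CONFIDENCE_THRESHOLD:
        return AgentReply(text=f"[Unverified] {answer}", verified=False, needs_rephrase=False)
    return AgentReply(text=answer, verified=True, needs_rephrase=False)

if __name__ == "__main__":
    print(answer_with_confidence("What is our parental-leave policy?").text)
```

The key design choice this sketch tries to capture is that uncertainty is surfaced to the user as part of the response itself, rather than hidden behind a confident-sounding answer.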
Honest admission, before the user requests it, builds confidence and improves communication between the human client and the AI agent. In enterprise GenAI pilot deployments, such as a conversational AI for HR or an AskHR assistant, this responsibility by design not only spares users frustration but can also shield clients from lawsuits. Fallback logic and uncertainty flagging aren't just technical features; they're deliberate design choices that prioritize empathy for the client over mere efficiency in deploying agentic AI.
Cultivating AI Fluency and Trust Across Teams
Jha also advocates establishing AI fluency and trust among stakeholders at the start of an enterprise GenAI implementation. For Jha, stakeholder alignment begins with listening, not building.
"Whether you're pitching a GenAI PoC or negotiating project scope," Jha advises, "remember that stakeholders are people first. Listen more than you speak, ask about their pain points, celebrate small wins with them, and acknowledge their concerns. When clients see you value their perspective, they become collaborators instead of skeptics."
This listening-first approach also enables the design of AI agents tailored to the user, rather than expecting the user to adapt to the AI agent. This is why Jha believes empathy can be a strategic differentiator in a field often defined by abstraction and efficiency metrics.
Pushing Toward the Future
It is a philosophy Jha has been sharing and implementing across many venues. At Silicon Valley Labs in 2024, he presented live demos of AskHR and Researcher Agent, illustrating how conversational AI can handle sensitive HR and knowledge-management queries at scale while maintaining safety protocols. At IBM, he spotted a $79 million billing discrepancy, showing how data pipelines can recover value for clients. And in 2023, he won first place among 160,000 participants from over 60 countries in IBM's Watsonx GenAI Hackathon.
Vaibhav Jha's current focus is agentic root-cause analysis, orchestrating autonomous agents to pinpoint root causes behind complex problems. He strives to bridge the gaps between current AI research and practical enterprise adoption. Whether speaking to thousands or a small group of colleagues, Vaibhav Jha conveys both his excitement for the future possibilities of GenAI and his conviction that GenAI needs to be human-centered by design.