
Most enterprises today run their companies on dozens of parallel software tools, each holding critical information in its own silo. Workers lose meaningful time hunting for important documents and reports scattered across platforms that were never designed to talk to each other. It is a fragmentation problem that grows worse with every new tool a company adopts.
That is exactly what Rahul Dey saw when he joined Glean, an enterprise AI platform seeking to unify workplace data by indexing it across every application and layering AI on top. As the product manager leading Glean's infrastructure team, Dey has overseen model evaluation strategy, frontier-model support, and cost-optimized routing, helping the company deliver reliable AI to enterprises running on dozens of disconnected tools.
A Product Manager with Experience in Tech
Dey studied computer science and humanities at Brown University, drawn to philosophy as well as algorithms. That combination pushed him toward product management, a discipline he describes as the only role that let him stay close to engineering while designing for people.
"I realized I was getting A's in all my CS classes," he recalls, "but I also could not just be holed up in a room coding away. I needed to talk to people and make sure that what was being built was actually good for society."
His first product role, a four-year stint at Microsoft, began with media-protection technology aimed at identifying deepfakes. The work predated the mainstream AI boom, and it became directly relevant once diffusion models arrived. He then moved within Microsoft to integrate Copilot into Teams for frontline workers, exploring how manufacturing-floor employees and retail staff interact with AI before that became an industry-wide conversation.
A year at Athelas, a healthcare-operations startup in Mountain View, convinced Dey that some of the most stubborn problems in the U.S. healthcare system are bureaucratic rather than technical. "I worked there for a year and realized that healthcare is really hard in the U.S.," he explains, "and it's not so much a tech problem as it is a lot of operational and bureaucratic problems as well."
That experience cemented his desire to work on core technology infrastructure, and a high school friend who was a founding engineer at Glean offered the opportunity.
How Glean Addresses the Problem with Siloed Data
The average enterprise runs on more than 100 SaaS applications for tasks like messaging, project management, and expense tracking, each storing critical information independently. Workers lose meaningful time simply locating information scattered across these platforms, with no single place to search them all.
Glean was founded in 2019 by former Google engineers who recognized that enterprise search remained an unsolved problem. The company started as a search product and expanded into a full AI work platform as the technology improved.
What separates Glean from competitors that rely on real-time data fetches, Dey explains, is its indexing approach. Glean crawls and ingests enterprise data, then builds its own understanding layer on top of it. A Slack message linking to a Google Doc, for example, gets cross-referenced so the platform understands the relationship between the two. Even emoji reactions on a message factor into ranking signals, helping the system weigh which answer is actually useful and preventing it from merely returning whatever data a single application happens to surface in the moment.
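To make the idea concrete, here is a minimal sketch of how cross-document links and engagement signals might blend into a single ranking score. The class names, weights, and scoring formula are illustrative assumptions for this article, not Glean's actual ranking system.

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    doc_id: str
    base_relevance: float                            # score from text match alone
    linked_from: list = field(default_factory=list)  # ids of docs that link here
    reactions: int = 0                               # e.g. emoji reactions on a Slack message

def rank_score(doc: Document, link_weight: float = 0.3,
               reaction_weight: float = 0.05) -> float:
    """Blend text relevance with cross-reference and engagement signals."""
    return (doc.base_relevance
            + link_weight * len(doc.linked_from)
            + reaction_weight * doc.reactions)

docs = [
    Document("gdoc-roadmap", 0.70, linked_from=["slack-123"]),
    Document("wiki-old-plan", 0.75),
]
ranked = sorted(docs, key=rank_score, reverse=True)
# A doc that other messages link to can outrank a slightly better text match.
```

The point of the sketch is the blending itself: a pure text-match engine would return `wiki-old-plan` first, while a link-aware score surfaces the document people are actually referencing.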
The platform now supports more than 130 connectors and takes into account the strengths of multiple data sources and AI models, something Dey frames as a core advantage over ecosystem-locked rivals. Microsoft optimizes its AI primarily for its own suite of products, and Google does the same for Workspace. Glean, by contrast, has invested in deep integrations across all major ecosystems so enterprises running a mixed-vendor stack don't feel they have to pick a side.
The same logic applies to models: OpenAI, Google, and Anthropic trade places on performance leaderboards with striking regularity, and rather than locking into a single provider's roadmap, Glean selects the best-performing model for each task, a flexibility that Dey considers crucial to its success.
Rahul Dey's Role
When Dey first joined Glean in 2025, his focus was on connectors, the integrations the platform needs to pull data from and write actions back into the applications enterprises already use. He led the effort to ensure that core connectors could not only retrieve information but also automatically execute tasks like replying to emails or checking Jira tickets. He also shipped Glean's NetSuite integration, which let the platform automate ERP workflows for teams who'd long struggled with NetSuite's notoriously difficult interface.
That work gave Dey a detailed understanding of how enterprise data typically flows, and it led directly to his current role leading Glean's LLM infrastructure team.
His priorities now center on two initiatives. The first is defining Glean's company-wide evaluation strategy: establishing golden-standard outputs for key use cases and for the AI agents themselves, then benchmarking every new frontier model against those standards using human reviewers alongside LLM-based judges. "If you had an amazing marketing person write copy for a launch, what would their output look like?" he explains. "And when I have a new LLM, does it get close to that output? Is it better? Is it worse?"
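The shape of such a benchmark can be sketched in a few lines. The judge below is a crude word-overlap score standing in for the human reviewers and LLM-based judges Dey describes; all names here are hypothetical, not Glean's framework.

```python
def overlap_judge(candidate: str, golden: str) -> float:
    """Score 0..1 by word overlap with the golden-standard output.
    A stand-in for a human or LLM judge."""
    cand, gold = set(candidate.lower().split()), set(golden.lower().split())
    return len(cand & gold) / len(gold) if gold else 0.0

def benchmark(model_outputs: dict, golden_set: dict, judge=overlap_judge) -> dict:
    """Score each use case's model output against its golden reference."""
    return {case: judge(model_outputs.get(case, ""), golden)
            for case, golden in golden_set.items()}

golden_set = {
    "launch_copy": "announcing our new search platform for enterprise teams",
}
scores = benchmark(
    {"launch_copy": "our new enterprise search platform launches today"},
    golden_set,
)
```

Running the same `golden_set` against each newly released frontier model gives a comparable score per use case, which is the "is it better or worse" question Dey poses.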
The second initiative is intelligent model routing, which means matching each query to the right LLM according to the prompt's task complexity, so the platform doesn't spend expensive frontier-model compute on simple requests. The goal is to directly improve per-customer profit margins, a concern that grows more urgent as enterprise AI adoption scales and token costs accumulate.
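A minimal version of complexity-based routing might look like the sketch below. The model tiers, thresholds, and the heuristic itself are illustrative assumptions, not Glean's actual router.

```python
def complexity(prompt: str) -> float:
    """Crude heuristic: longer, multi-step prompts score higher (0..1)."""
    score = min(len(prompt.split()) / 100, 1.0)
    if any(kw in prompt.lower() for kw in ("analyze", "compare", "summarize")):
        score = max(score, 0.7)
    return score

def route(prompt: str) -> str:
    """Send cheap requests to a small model, hard ones to a frontier model."""
    c = complexity(prompt)
    if c < 0.3:
        return "small-fast-model"   # cheapest tier
    elif c < 0.7:
        return "mid-tier-model"
    return "frontier-model"         # most capable, most expensive
```

In production a router would typically use a learned classifier rather than keywords, but the economics are the same: every simple query answered by the cheap tier is frontier-model compute that never gets spent.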
Dey also builds cost-monitoring tooling that gives enterprise customers controls over token consumption and costs, as well as latency thresholds, essentially a credit card limit for AI spend, so companies don't run up unexpected bills as usage scales.
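The "credit card limit" idea reduces to a small amount of bookkeeping. This sketch uses hypothetical class and field names to show the mechanism; it is not Glean's tooling.

```python
class TokenBudget:
    """A per-customer spend cap: refuse AI calls once the limit is hit."""

    def __init__(self, monthly_limit_usd: float, price_per_1k_tokens: float):
        self.limit = monthly_limit_usd
        self.price = price_per_1k_tokens
        self.spent = 0.0

    def charge(self, tokens: int) -> bool:
        """Record usage; return False (block the call) if it would exceed the limit."""
        cost = tokens / 1000 * self.price
        if self.spent + cost > self.limit:
            return False            # over budget: block and alert the admin
        self.spent += cost
        return True

budget = TokenBudget(monthly_limit_usd=100.0, price_per_1k_tokens=0.01)
budget.charge(500_000)              # $5.00 of spend, accepted
```

A later request that would push spend past $100 simply returns `False`, which is the point: usage can scale, but the bill cannot surprise anyone.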
To inform these efforts, Dey works directly with research labs through private channels, receiving early access to new versions of models weeks before public launch and using those previews to shape Glean's product roadmap. He considers this a rare level of access for an individual product manager and treats it as both a tactical advantage and a responsibility.
Improving Enterprise AI
Looking ahead, Dey believes the product management role will demand a more technical, hands-on approach than it did even two years ago. He expects PMs to move beyond writing product requirement documents and into prototyping with code, building working mockups, and keeping up-to-date with new and potentially more powerful tools, but he also highlights that these new requirements should always be seen through the lens of solving real problems. He points to Apple as a company that stays mission-focused even when it appears to lag behind competitors on the AI hype cycle: the AirPods work, and nobody asks which model powers them.
"It boils down to always prioritizing solving real customer problems," he says, "which should always be the main focus instead of simply jumping onto new technological trends."
Until then, Glean's connector depth continues to expand into new verticals, the evaluation framework is still being defined, and the model routing system that will determine the company's margins at scale is actively being built. For Rahul Dey, the task is the same one that has guided every stop in his career: make the technology disappear so people can focus on the work that matters. At Glean, that ambition now operates at enterprise scale.
ⓒ 2026 TECHTIMES.com All rights reserved. Do not reproduce without permission.




