Inside Plix's Safety Engine Breakthrough with Tanmay Agrawal

Tanmay Agrawal

Across sectors that rely on field workers, body-camera video has become an essential record of daily operations. Yet most of that footage is never reviewed. Security guards, parking enforcement agents, and frontline teams capture hundreds of hours of video each day, far more than human monitors can realistically watch in real time, and manual review ties up dedicated staff who, despite the cost, still risk missing critical incidents.

The consequence is a system defined by delay. Safety events can take days to confirm, slowing internal investigations and creating gaps in reporting. These lags increase exposure to lawsuits, make incident verification slow and unreliable, and weaken operational oversight.

Into this widening gap comes a wave of engineers building systems that aim to process video quickly and responsibly. Among them is Tanmay Agrawal, whose work at Plix centers on bringing reliable, real-time analytics to industries that have long lacked them. His contributions aim to drive a larger industry shift toward AI that protects human safety at the moment it matters.

Plix's Safety Engine System

Agrawal joined Plix as its first employee, taking on the role of building the technical core of a company focused on real-time safety analytics for field workers. As the founding engineer, he was tasked with defining the architecture that would anchor Plix's flagship product and set the direction for future development.

He designed and implemented Plix's Safety Engine, the algorithm that automatically reviews hours of body-camera footage and identifies safety-critical events within seconds. The system surfaces incidents such as confrontations, falls, or use-of-force moments far faster than traditional review processes, so managers can understand incidents as they unfold and respond with the speed those situations demand.
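As a rough illustration of the general pattern only, and not Plix's proprietary implementation, whose details are not public, a real-time flagging loop can score each incoming frame against a set of event labels and raise an alert when a score crosses a threshold. The classifier, label set, and threshold below are hypothetical placeholders.

from dataclasses import dataclass
from typing import Callable, Dict, Iterable, List, Tuple

EVENT_LABELS = ("confrontation", "fall", "use_of_force")  # assumed label set

@dataclass
class Alert:
    timestamp_s: float  # seconds into the footage
    label: str
    confidence: float

def flag_events(
    frames: Iterable[Tuple[float, object]],
    score_frame: Callable[[object], Dict[str, float]],  # hypothetical per-frame classifier
    threshold: float = 0.8,
) -> List[Alert]:
    """Scan a stream of (timestamp, frame) pairs and surface high-confidence events."""
    alerts: List[Alert] = []
    for timestamp_s, frame in frames:
        scores = score_frame(frame)  # e.g. {"fall": 0.93, "confrontation": 0.05, ...}
        for label in EVENT_LABELS:
            if scores.get(label, 0.0) >= threshold:
                alerts.append(Alert(timestamp_s, label, scores[label]))
    return alerts

In a production system the scoring model, temporal smoothing, and alert routing would be far more involved; the sketch only shows the shape of the loop that turns continuous footage into a short list of reviewable moments.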

These improvements can also translate into larger operational gains for companies. Customers report eliminating manual night-shift review roles once required to watch footage in real time. Others cite reduced exposure to internal investigations and litigation because incidents are verified quickly and consistently. Near-instant review can also help close cases before they escalate, particularly in high-risk encounters, shortening their turnaround times.

By demonstrating its economic and ethical value, the engine positions AI-assisted safety as a practical asset for high-risk field roles.

How the Engineer Behind Plix Got His Start

Agrawal had been honing his knowledge of machine learning long before he entered the industry. Drawn early to search mechanics, pattern recognition, and statistical modeling, he taught himself advanced concepts alongside his coursework and treated Major League Hacking competitions as opportunities to expand his training. That period led him to UC Berkeley's Data-X Lab, where he built models that predicted client risk for recruiting agencies and gained his first experience delivering applied systems deployed at scale.

From there, his work expanded into more technically demanding products. He contributed to NVIDIA's Intelligent Video Analytics group, where large-scale inference optimization became a core focus, and served as a research fellow at UMass Amherst's Center for Data Science & AI, participating in research initiatives on using the technology for social good, including one that was later recognized with a Best Paper award at AAAI.

He paired those responsibilities with mentoring graduate students through UMass's Data Science for the Common Good program, an experience that shaped his view of how applied AI should deliver measurable public value.

Contributing Beyond the Technical

The technical work Agrawal carried out at Plix shaped more than its Safety Engine. He developed self-supervised training strategies and a multi-modal fusion pipeline that now support the company's pending patents, and he authored internal research and architecture papers cited across its engineering teams and by partner evaluators. These contributions brought structure to the company in its early days, setting technical and ethical standards that now guide its approach to large-scale video analysis.
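The article names a multi-modal fusion pipeline without describing it, so the following is only a generic sketch of one common pattern, late fusion of per-clip video and audio embeddings, written in PyTorch. The dimensions, layers, and class name are assumptions for illustration and are not drawn from Plix's patent filings.

import torch
import torch.nn as nn

class LateFusionClassifier(nn.Module):
    """Toy late-fusion head: concatenate modality embeddings, then classify."""

    def __init__(self, video_dim: int = 512, audio_dim: int = 128, num_events: int = 3):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(video_dim + audio_dim, 256),
            nn.ReLU(),
            nn.Linear(256, num_events),  # one logit per event type
        )

    def forward(self, video_emb: torch.Tensor, audio_emb: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([video_emb, audio_emb], dim=-1)  # simple concatenation fusion
        return self.head(fused)

# Example with random tensors standing in for real encoder outputs (a batch of 4 clips):
model = LateFusionClassifier()
logits = model(torch.randn(4, 512), torch.randn(4, 128))

The appeal of this kind of fusion is that each modality can be encoded independently and combined late, which keeps the pipeline modular when one signal is missing or degraded.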

His role also includes mentoring. Agrawal regularly leads sessions showing how computer-vision concepts turn into real-world systems, reinforcing a culture that values reproducibility and explainability so that users can understand the logic behind every output. His external work reflects the same focus, including an invited guest lecture at Georgia Tech on large-scale video analytics.

That emphasis on explainable systems also shapes how he views the future of AI. He sees room for systems like Plix's to expand into sectors such as logistics and emergency response, where quick, reliable verification supports public trust, and he hopes his current work at Plix can, over time, show a way forward toward that future.


Looking Toward the Next Chapter without Losing Focus on Today

Agrawal is still at Plix, but his ambitions go even further. He aims to build a generation-defining technology company grounded in transparency, fairness, and public accountability. In his view, for people and companies alike to develop long-term trust in these systems, the people building them must set clear, tangible standards for what those systems can do and how they will behave in the real world.

He also plans to continue mentoring students and young engineers, helping them apply machine learning while teaching them the importance of responsible deployment. His work outside Plix reflects an interest in extending safety-focused analytics into sectors where reliability directly influences human outcomes.

In charting that direction, Tanmay Agrawal aims to model a kind of leadership that treats AI as a tool to empower human workers, anchoring his contributions in a broader effort to strengthen safety and trust across high-risk environments.

