
"As healthcare technology grows, the line between innovation and responsibility must be drawn and respected," declares Ramakrishnan Neelakandan, Google's Health software quality engineering lead who deploys high-level artificial intelligence (AI) while maintaining safety compliance in healthcare.

Healthcare technology has undergone a remarkable transformation over the years, evolving from patient record management to the incorporation of AI into increasingly advanced health devices. In this innovative pursuit, quality gatekeepers such as Neelakandan play a critical role.

Neelakandan's delicate balance between innovation and safety encompasses theoretical knowledge, practical application, and thoughtful leadership, all of which he applies at one of the world's tech giants. Drawing on his years of experience and the numerous projects he has managed, he shares his insights on combining innovation with caution, offering a blueprint for navigating the complexities of integrating AI into healthcare.

Highlighting Quality Assurance Frameworks

Neelakandan's work focuses on patient safety and product quality. At Google, he develops strategies prioritizing these critical aspects from the beginning of healthcare AI product development. "Safety and quality cannot be mere afterthoughts," Neelakandan stresses. "AI products must be developed under the highest standards and regulations."

Neelakandan's commitment to this philosophy is exemplified by his establishment of Google's Health Quality and Safety framework for AI product development. This framework has guided the quality and safety activities for groundbreaking projects such as MedLM and Google's generative AI models designed specifically for the healthcare industry. Under his leadership, the quality and safety efforts for Google's AI-based mammography product, which uses AI to help identify breast cancer, have also been particularly noteworthy.

Through this experience, Neelakandan emphasizes the crucial role of comprehensive frameworks and standardized practices. Recognizing the relative novelty of generative AI technology, he has embraced the challenge of developing risk-based frameworks to ensure that these advanced AI models are created responsibly and safely for healthcare applications.

For Neelakandan, having well-defined frameworks standardizes practices and provides clear guidance for professionals. These frameworks help determine when additional testing is necessary and form the basis for approving products for public use. He establishes ethical guidelines and safety protocols from the early stages of product development, incorporating feedback loops that enable continuous assessment and improvement.

Seeing Innovation as Both Challenge and Opportunity

Neelakandan's extensive experience in the Medical Device and In Vitro Diagnostics (IVD) industry, spanning more than 12 years with a focus on software, has instilled in him a profound understanding of how to balance innovation and risk. While advanced technologies like generative AI hold immense potential for transforming healthcare, he recognizes that these innovations also carry inherent challenges and risks that must be carefully navigated.

As a leader in developing these cutting-edge AI technologies, Google offers an immense opportunity to push the boundaries of what is possible in healthcare. However, Neelakandan remains acutely aware of the potential risks associated with these innovations. He takes it upon himself to ensure that new technologies are developed in ways that prioritize end users' safety and mitigate potential harm.

"The healthcare industry is a highly regulated environment, and for good reason," he explains. "Our products impact lives, and we must treat that responsibility with the utmost precision."

Unending Willingness to Adapt

Despite the rigorous frameworks and best practices in place, Neelakandan acknowledges that no perfect approach to maintaining quality in healthcare technology exists. The field of AI, particularly in its application to healthcare, is inherently complex and ever-evolving. As the needs of users and the industry continue to change, there will always be new challenges to adapt to, leaving room for mistakes as well as opportunities for growth.

This acknowledgment of imperfection is not a weakness but a strength, driving Neelakandan and his team to continuously strive for better solutions and to remain vigilant in identifying and addressing gaps or shortcomings in their approach. In line with this, he looks forward to joining AI2030 as a global fellow, where he will help develop responsible AI standards and safe AI toolkits to make AI safer and more beneficial for humans.

"The endeavour involves continuous learning and improvement, where even the most well-established frameworks and protocols must be regularly re-evaluated and refined. We can truly gauge our success through the lives we improve, not only the technologies we develop," Neelakandan notes.

Such a perspective underscores the human-centric approach that defines Neelakandan's work, emphasizing the ultimate goal of healthcare technology—to improve patients' lives and enhance the overall quality of care.
