A group of researchers from the NYU Tandon School of Engineering has introduced a new artificial intelligence (AI) approach capable of altering the apparent age of individuals in images while preserving their distinct facial features. 

This development marks a substantial advance over conventional AI models, which can manipulate age but struggle to retain the biometric identifiers that make each person unique.

Sudipta Banerjee, a research assistant professor in the Computer Science and Engineering (CSE) Department, spearheaded the work. 

An AI Model for Identity-Preserving Age Transformations

The AI model was trained to execute identity-preserving age transformations. To achieve this, the team sidestepped a challenge common to the field: the need to amass an extensive dataset of images of the same individuals spanning many years.

Instead, the researchers used a small dataset of images of an individual, supplemented by a second set of images captioned with the pictured person's age category: child, teenager, young adult, middle-aged, elderly, or old.

Notably, this collection incorporated images of celebrities captured throughout their lifetimes. From the first dataset, the AI model learned to recognize the unique biometric traits that distinguish the individual.

The age-captioned images taught the model the relationship between appearance and age. Once trained, the model could simulate aging or rejuvenation whenever a target age was specified in a text prompt.
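
To make that concrete, prompting a fine-tuned text-to-image diffusion pipeline with a target age might look like the sketch below. The checkpoint path and the rare-token prompt ("sks person") are hypothetical stand-ins following common DreamBooth conventions, not the team's released setup.

```python
import torch
from diffusers import StableDiffusionPipeline

# Hypothetical checkpoint: a diffusion model fine-tuned on a handful of
# photos of one person, bound to the rare token "sks" (DreamBooth-style).
pipe = StableDiffusionPipeline.from_pretrained(
    "path/to/finetuned-checkpoint", torch_dtype=torch.float16
).to("cuda")

# The target age category is specified directly in the text prompt.
image = pipe("photo of sks person as an elderly person").images[0]
image.save("aged.png")
```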

'DreamBooth' Technique

The researchers adopted the "DreamBooth" technique for editing human facial images, a process that makes gradual adjustments through a combination of neural network components.

This method entails adding and removing noise, or random variations, in images while learning the underlying data distribution. Text prompts and class labels guided the image generation process, with an emphasis on preserving identity-specific details and image quality.
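
For readers who want a concrete picture, the noise-based training at the heart of diffusion models can be sketched in a few lines of PyTorch: corrupt an image (or its latent code) with Gaussian noise at a random step, then train a denoising network to predict that noise given the text condition. The variable names and the unet callable below are illustrative, not the paper's actual code.

```python
import torch
import torch.nn.functional as F

def noising_step(x0, t, alphas_cumprod):
    """Forward diffusion: blend clean latents x0 with Gaussian noise at step t."""
    noise = torch.randn_like(x0)
    a = alphas_cumprod[t].sqrt().view(-1, 1, 1, 1)
    s = (1.0 - alphas_cumprod[t]).sqrt().view(-1, 1, 1, 1)
    return a * x0 + s * noise, noise

def denoising_loss(unet, x0, text_emb, alphas_cumprod):
    """Train the denoiser to predict the injected noise, given the text condition."""
    t = torch.randint(0, alphas_cumprod.numel(), (x0.shape[0],), device=x0.device)
    noisy, noise = noising_step(x0, t, alphas_cumprod)
    return F.mse_loss(unet(noisy, t, text_emb), noise)
```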

Diverse loss functions were used to fine-tune the neural network model. The team substantiated the efficacy of their approach through experiments generating human facial images with age-related changes and contextual variations.
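
DreamBooth is best known for pairing two such losses: an instance loss on the subject's few photos and a "prior preservation" loss on generic images of the class (e.g., "person"), which keeps the model from forgetting what faces in general look like. A hedged sketch, building on the denoising_loss function above:

```python
def dreambooth_loss(unet, subject_latents, subject_emb,
                    class_latents, class_emb, alphas_cumprod,
                    prior_weight=1.0):
    # instance term: fit the few photos of the specific person
    instance = denoising_loss(unet, subject_latents, subject_emb, alphas_cumprod)
    # prior-preservation term: stay faithful to the generic "person" class
    prior = denoising_loss(unet, class_latents, class_emb, alphas_cumprod)
    return instance + prior_weight * prior
```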

To gauge the method's performance, the researchers compared it against existing age-modification techniques, enlisting 26 volunteers to match the generated images to real images of the subjects and also applying the ArcFace facial recognition algorithm.

According to the team, their method outperformed the alternatives, reducing the false rejection rate during recognition by up to 44%.
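
For intuition, a false rejection here is a genuine pair, an age-edited image and a real enrolled image of the same person, that a matcher scores below its acceptance threshold. A minimal sketch of that metric, assuming face embeddings (for example, from ArcFace) are already computed; the threshold value is arbitrary:

```python
import numpy as np

def false_rejection_rate(gen_embs, ref_embs, threshold=0.35):
    """Fraction of genuine pairs whose similarity falls below the match threshold."""
    sims = np.array([
        g @ r / (np.linalg.norm(g) * np.linalg.norm(r))  # cosine similarity
        for g, r in zip(gen_embs, ref_embs)
    ])
    return float(np.mean(sims < threshold))
```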

The team's findings were detailed in a paper on the pre-print server arXiv and are set to be presented at the IEEE International Joint Conference on Biometrics (IJCB).

Banerjee collaborated with CSE PhD candidate Govind Mittal and PhD graduate Ameya Joshi, under the guidance of CSE associate professor Chinmay Hegde and CSE professor Nasir Memon. The team harnessed a specialized form of generative AI known as a latent diffusion model.
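
A latent diffusion model performs the noising and denoising not on raw pixels but in the compressed latent space of a pretrained autoencoder, which keeps fine-tuning on a small dataset tractable. The sketch below shows that encode/decode step with a publicly available VAE; the checkpoint choice and scale factor follow common Stable Diffusion conventions rather than the paper's specifics.

```python
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")

# image tensor scaled to [-1, 1], shape (batch, 3, H, W)
image = torch.rand(1, 3, 512, 512) * 2 - 1

with torch.no_grad():
    latents = vae.encode(image).latent_dist.sample() * 0.18215  # to latent space
    recon = vae.decode(latents / 0.18215).sample                # back to pixels
```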
