The Massachusetts Institute of Technology (MIT) has unveiled a new advance in artificial intelligence: a tool that generates high-quality images in a single step, sidestepping the slow, iterative sampling that limits conventional diffusion models.

AI generates high-quality images 30 times faster in a single step. (Photo: MIT)

MIT's AI Tool for Generating HD Images in One Step 

The traditional process of generating images with diffusion models is complex and time-consuming, typically requiring dozens of iterative denoising passes before the algorithm produces a satisfactory result.

However, MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) researchers have developed a new framework that collapses this process into a single step, significantly accelerating image generation.

The new approach, known as distribution matching distillation (DMD), uses a teacher-student setup in which a new, simpler model is trained to mimic the behavior of the more complex, original models that generate images.

Tianwei Yin, an MIT Ph.D. student and lead researcher on the DMD framework, highlights this approach's transformative potential. He said it accelerates current diffusion models by up to 30 times while maintaining or surpassing the quality of generated visual content. 

By combining principles from generative adversarial networks (GANs) and diffusion models, DMD achieves visual content generation in a single step, eliminating the need for the iterative refinement process required by conventional diffusion models.
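
To see what "single step" means in practice, the contrast looks roughly like the sketch below. This is an illustrative PyTorch-style sketch rather than MIT's released code; the `denoiser` and `generator` networks and the 50-step schedule are stand-ins.

```python
import torch

@torch.no_grad()
def sample_diffusion(denoiser, steps=50, shape=(1, 3, 64, 64)):
    """Conventional diffusion sampling: one network call per timestep."""
    x = torch.randn(shape)  # start from pure Gaussian noise
    for t in reversed(range(steps)):
        x = denoiser(x, t)  # iterative refinement: `steps` network calls
    return x

@torch.no_grad()
def sample_one_step(generator, shape=(1, 3, 64, 64)):
    """DMD-style sampling: a single forward pass maps noise to an image."""
    z = torch.randn(shape)
    return generator(z)  # one network call instead of dozens
```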


Single-Step Diffusion Model

According to the MIT team, this single-step diffusion model has far-reaching implications. It could enhance design tools by enabling quicker content creation and support advances in fields such as drug discovery and 3D modeling, where speed and efficiency are crucial.

DMD consists of two key components: a regression loss and a distribution matching loss. The regression loss anchors the mapping from noise inputs to images, keeping training stable, while the distribution matching loss ensures that the probability of generating a given image corresponds to how frequently such images occur in the real world.

This dual approach, aided by two diffusion models, facilitates faster generation by minimizing the distribution divergence between generated and real images.
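
A simplified sketch of how those two training signals might be combined is shown below. The function names (`real_score`, `fake_score`) and the plain mean-squared-error regression term are illustrative assumptions; the paper's actual implementation uses a perceptual regression loss and more careful noise weighting.

```python
import torch
import torch.nn.functional as F

def dmd_losses(generator, z, teacher_output, real_score, fake_score,
               noise_level=0.5):
    """Illustrative sketch of DMD's two losses (not the official code).

    - Regression loss: anchors the one-step generator to precomputed
      multi-step teacher outputs for the same noise inputs.
    - Distribution matching loss: uses the difference between two
      denoising ("score") models -- one for real data, one tracking
      the generator's outputs -- to push generated images toward the
      real-image distribution.
    """
    fake_image = generator(z)

    # 1) Regression loss (plain MSE here; the paper uses a perceptual loss).
    loss_reg = F.mse_loss(fake_image, teacher_output)

    # 2) Distribution matching: perturb the generated image with noise,
    #    then compare the two score models' estimates at that point.
    noisy = fake_image + noise_level * torch.randn_like(fake_image)
    with torch.no_grad():
        direction = fake_score(noisy, noise_level) - real_score(noisy, noise_level)

    # Surrogate term whose gradient w.r.t. `fake_image` equals `direction`,
    # so minimizing it moves samples up the real score and down the fake one.
    loss_dm = (direction * fake_image).sum()

    return loss_reg + loss_dm
```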

Yin and his colleagues used pre-trained networks for the new student model, streamlining the training process. By transferring parameters from the original models, the team achieved rapid convergence: the new model could produce high-quality images on the same architectural foundation.
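
In code terms, that weight transfer can be as simple as copying the teacher's parameters into an identically structured student, as in this toy snippet (the two-layer stand-in network is purely illustrative):

```python
import copy
import torch.nn as nn

# Stand-in for a pre-trained teacher diffusion network (illustrative only;
# the real models are far larger U-Net-style architectures).
teacher = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.SiLU(),
    nn.Conv2d(64, 3, 3, padding=1),
)

# Because the student shares the teacher's architecture, it can start as a
# copy of the teacher's weights instead of training from random init.
student = copy.deepcopy(teacher)
# Equivalently: student.load_state_dict(teacher.state_dict())
```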

In benchmark tests, DMD demonstrated consistent performance, particularly excelling at class-conditional image generation on ImageNet. With a Fréchet inception distance (FID) score of just 0.3, DMD produced images on par with those generated by far more complex models, showcasing its impressive quality and diversity.
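
For context, FID compares statistics of real and generated images in a pretrained Inception network's feature space, with lower scores indicating closer distributions. One common way to compute it, using the `torchmetrics` library rather than the authors' own evaluation pipeline, looks like this (random tensors stand in for actual image batches):

```python
import torch
from torchmetrics.image.fid import FrechetInceptionDistance

# FID over 2048-dim Inception features; expects uint8 images in [0, 255],
# shaped (N, 3, H, W). Requires `pip install torchmetrics[image]`.
fid = FrechetInceptionDistance(feature=2048)

real_images = torch.randint(0, 256, (64, 3, 299, 299), dtype=torch.uint8)
fake_images = torch.randint(0, 256, (64, 3, 299, 299), dtype=torch.uint8)

fid.update(real_images, real=True)   # accumulate real-image statistics
fid.update(fake_images, real=False)  # accumulate generated-image statistics
print(fid.compute())                 # lower is better
```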

DMD also showed promise in industrial-scale text-to-image generation, achieving state-of-the-art performance in one-step image generation, as reported by the MIT team. 

While there is still room for improvement for more challenging text-to-image applications, the researchers claim that the results are promising and suggest potential avenues for future enhancements.

"Decreasing the number of iterations has been the Holy Grail in diffusion models since their inception," said Fredo Durand, MIT professor of electrical engineering and computer science, CSAIL principal investigator, and a lead author on the paper.

"We are very excited to finally enable single-step image generation, which will dramatically reduce compute costs and accelerate the process," he added.

The team's findings were published on the preprint server arXiv.
