Astronomers can already measure the mass of a black hole, but they have yet to produce an image of one. MIT graduate student Katie Bouman and her colleagues, however, have developed a computer algorithm that may eventually change that.

Observing a black hole requires an enormous telescope because, despite their mass, black holes take up very little space in the sky. The supermassive black hole at the center of the Milky Way, for instance, is only about 17 times the diameter of the sun, and it lies some 25,000 light-years away.

Capturing a picture of the black hole would not be a problem for a telescope with a 10,000-kilometer dish. However, that is nearly the diameter of the Earth itself, so building one is out of the question.
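The 10,000-kilometer figure follows from the diffraction limit, which ties a telescope's resolution to its dish size and observing wavelength. A back-of-the-envelope sketch, assuming the roughly 1.3-millimeter wavelength the Event Horizon Telescope observes at (a detail not stated in this article):

```python
import math

# Rough diffraction-limit arithmetic behind the 10,000-km figure.
# Assumes the ~1.3-mm observing wavelength used by the Event Horizon Telescope.
wavelength_m = 1.3e-3        # ~1.3 mm (about 230 GHz)
diameter_m = 10_000e3        # hypothetical 10,000-km dish

resolution_rad = 1.22 * wavelength_m / diameter_m      # Rayleigh criterion
resolution_uas = math.degrees(resolution_rad) * 3600 * 1e6

print(f"Resolution: {resolution_uas:.0f} microarcseconds")
# ~33 microarcseconds -- fine enough to resolve the roughly
# 50-microarcsecond shadow expected for the Milky Way's central black hole.
```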

The Event Horizon Telescope project sidesteps this problem by taking data simultaneously from radio telescopes scattered around the globe. Through a technique known as Very Long Baseline Interferometry, or VLBI, scientists can combine these data as if they had been collected by a single, Earth-sized telescope.
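Conceptually, each pair of telescopes in a VLBI array measures one spatial-frequency component of the sky image (a "visibility"), at a frequency set by the pair's separation. The toy NumPy sketch below illustrates the idea; the image, station positions, and the mapping from baselines to Fourier-plane pixels are all invented for illustration:

```python
import numpy as np

# Toy VLBI illustration: each pair of stations samples one point of the
# image's 2-D Fourier transform, at a spatial frequency set by the baseline
# (the stations' separation). All numbers here are invented.
rng = np.random.default_rng(0)
sky = rng.random((64, 64))                  # stand-in "true" sky image
sky_ft = np.fft.fftshift(np.fft.fft2(sky))

# Invented station positions on the ground, in kilometers.
stations_km = np.array([[0, 0], [4000, 1000], [-2000, 5000], [6000, -3000]])

visibilities = {}
for i in range(len(stations_km)):
    for j in range(i + 1, len(stations_km)):
        baseline = stations_km[j] - stations_km[i]        # pair separation
        # Crude toy mapping from a baseline to a Fourier-plane pixel.
        u, v = (baseline / 200).astype(int) + 32
        visibilities[(i, j)] = sky_ft[v % 64, u % 64]     # one complex sample

# Four stations give only six baselines: 6 Fourier samples out of
# 64 * 64 = 4096, which is why the measured data are so sparse.
print(f"{len(visibilities)} samples of {sky_ft.size} Fourier coefficients")
```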

Even so, obstacles remain. The project involves only a handful of telescopes, leaving large gaps in the data.

The telescopes also observe at radio wavelengths, which are far longer than those of visible light and therefore yield much blurrier images for a dish of a given size. The largest radio-telescope dish in the world, for instance, produces an image of the moon that is blurrier than the view through an ordinary backyard optical telescope.

Earth's atmosphere can likewise slow radio waves down, causing large differences in their arrival times at different telescopes and throwing off the measurements.

The new algorithm, called CHIRP (Continuous High-resolution Image Reconstruction using Patch priors), is designed to solve these problems.

The algorithm mathematically combines the signals captured by the network of telescopes so that nuisance factors such as atmospheric noise cancel out, yielding more reliable measurements from which to build an image.
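The cancellation rests on a standard VLBI trick: each station picks up an unknown atmospheric phase error, but multiplying the measurements from three telescopes around a triangle makes those per-station errors cancel (the so-called closure phase, or bispectrum). A minimal NumPy sketch of that cancellation, with invented numbers:

```python
import numpy as np

# Closure-phase sketch: each station adds an unknown atmospheric phase error
# to its signal, but multiplying the visibilities around a triangle of
# stations cancels those errors. All values are invented.
rng = np.random.default_rng(1)

true_phase = rng.uniform(-np.pi, np.pi, size=3)   # true phases for baselines
                                                  # 1-2, 2-3, 3-1
station_err = rng.uniform(-np.pi, np.pi, size=3)  # per-station atmospheric error

# The measured visibility on baseline (i, j) picks up the error phi_i - phi_j.
v12 = np.exp(1j * (true_phase[0] + station_err[0] - station_err[1]))
v23 = np.exp(1j * (true_phase[1] + station_err[1] - station_err[2]))
v31 = np.exp(1j * (true_phase[2] + station_err[2] - station_err[0]))

bispectrum = v12 * v23 * v31                       # the triple product
closure_measured = np.angle(bispectrum)
closure_true = np.angle(np.exp(1j * true_phase.sum()))

print(np.isclose(closure_measured, closure_true))  # True: the errors cancelled
```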

To cope with the sparse data, the algorithm draws on patches of existing astronomical images as a reference, filling in the gaps by assembling a sort of mosaic that remains consistent with the VLBI measurements.
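The "patch prior" in CHIRP's name can be read as a regularizer: candidate images are scored not only by how well they match the sparse measured Fourier samples, but also by how much their small patches resemble patches from reference images. The toy score below illustrates that structure; the two-patch "dictionary", the single measurement, and the weighting are all invented, and the real prior is learned from training data rather than hand-built:

```python
import numpy as np

# Toy patch-prior score: an image is good if (a) its Fourier transform
# matches the sparse measured samples and (b) every small patch looks like
# some patch from a set of reference images. All inputs are invented.
def patch_prior_score(image, samples, dictionary, patch=8, weight=0.1):
    ft = np.fft.fftshift(np.fft.fft2(image))
    # (a) data-fit term over the few measured Fourier coefficients
    misfit = sum(abs(ft[v, u] - val) ** 2 for (u, v), val in samples.items())
    # (b) prior term: distance from each patch to its nearest dictionary patch
    prior = 0.0
    for y in range(0, image.shape[0] - patch + 1, patch):
        for x in range(0, image.shape[1] - patch + 1, patch):
            p = image[y:y + patch, x:x + patch].ravel()
            prior += min(np.sum((p - d) ** 2) for d in dictionary)
    return misfit + weight * prior

rng = np.random.default_rng(2)
dictionary = [rng.random(64) for _ in range(2)]  # invented 8x8 patches
samples = {(16, 16): 100.0 + 0j}                 # one invented measurement
print(patch_prior_score(rng.random((32, 32)), samples, dictionary))
```

A reconstruction algorithm would then search for the image minimizing such a score, trading off fidelity to the telescope data against plausibility of the patches.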

"This research aims to overcome this gap in several ways: careful modeling of the sensing process, cutting-edge derivation of a prior-image model, and a tool to help future researchers test new methods," said Yoav Schechner, from Israel's Technion.

"[The researchers] mathematically merge into a single optimization formulation a very different, complex sensing process and a learning-based image-prior model."
