Usually, when we think of a telescope, we imagine it recording images. Yet James Webb gathers a wealth of information, some of which can be converted into audio. On Wednesday (31), NASA released sonifications of some of the first images from the JWST.
But how does a telescope record sound? In fact, the process is a kind of translation performed by scientists, converting information that is inaudible to humans into sounds we can hear. The intent is for these materials to allow blind people to appreciate James Webb’s images through sound.
“These compositions offer a different way of experiencing the detailed information in Webb’s first data. Similar to how written descriptions are unique translations of visual images, sonifications also translate the visual images by encoding information, such as color, brightness, star positions, or water absorption signatures, as sounds,” said an education and outreach scientist at the Space Telescope Science Institute in Baltimore, Maryland. “Our teams are committed to making astronomy accessible to all.”
Preliminary results from research conducted at the Chandra X-ray Center in Cambridge showed that people who are blind or visually impaired, as well as sighted people, reported learning something about astronomical images by listening. Participants also shared that their listening experiences resonated deeply with them.
“Respondents’ reactions varied, from admiration to nervousness,” said study leader Kimberly Arcand. “An important discovery came from sighted people: they said the experience helped them understand how blind or visually impaired people access information differently.”
Discover the sounds of James Webb pictures
Southern Ring Nebula
“In this sonification, the colors of the image were mapped to pitches of sound – frequencies of light converted directly into frequencies of sound. Near-infrared light is represented by a higher range of frequencies at the beginning of the track. Midway through, the notes change, becoming lower overall to reflect that mid-infrared light includes longer wavelengths,” NASA explains.
Exoplanet WASP-96 b

The sonification sweeps the transmission spectrum from left to right, with the x-axis running from 0.6 microns on the left to 2.8 microns on the right. From bottom to top, the y-axis ranges from less light blocked to more light blocked. The pitch of each data point corresponds to the frequency of light it represents: longer wavelengths of light have lower frequencies and are heard as lower tones. Volume indicates the amount of light detected at each data point.
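The mapping described above can be sketched in a few lines of code. This is an illustrative toy, not NASA's actual sonification pipeline: the audible pitch range (110–880 Hz) and the function names are assumptions chosen for the example. The key idea from the text is preserved: pitch tracks the frequency of light (longer wavelength, lower tone), and volume tracks the amount of light detected.

```python
# Illustrative sketch of a wavelength-to-pitch sonification mapping.
# NOT NASA's actual pipeline; ranges and names are invented for the example.

LAMBDA_MIN, LAMBDA_MAX = 0.6, 2.8      # microns, the range quoted for the spectrum
PITCH_LOW, PITCH_HIGH = 110.0, 880.0   # Hz, an arbitrary audible range (assumption)

def wavelength_to_pitch(wavelength_um):
    """Map a wavelength (microns) to an audio pitch (Hz).

    Light frequency is proportional to 1/wavelength, so we interpolate
    linearly in 1/lambda: longer wavelengths come out as lower tones.
    """
    f = 1.0 / wavelength_um
    f_min, f_max = 1.0 / LAMBDA_MAX, 1.0 / LAMBDA_MIN
    t = (f - f_min) / (f_max - f_min)   # 0.0 at 2.8 um, 1.0 at 0.6 um
    return PITCH_LOW + t * (PITCH_HIGH - PITCH_LOW)

def brightness_to_volume(amount_of_light, max_light):
    """More light detected at a data point -> louder tone (0.0 to 1.0)."""
    return amount_of_light / max_light

# Sweeping left to right across the spectrum, each data point would be
# rendered as a tone with this pitch and volume.
lowest_tone = wavelength_to_pitch(LAMBDA_MAX)    # longest wavelength
highest_tone = wavelength_to_pitch(LAMBDA_MIN)   # shortest wavelength
```

A real implementation would then synthesize each tone (for example with an oscillator or a sine-wave buffer) and play them in sequence from left to right across the spectrum.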
Cosmic Cliffs of the Carina Nebula

“The sonification scans the image from left to right. The soundtrack is vibrant and full, conveying the detail of this gigantic gaseous cavity that resembles a mountain range. The gas and dust in the top half of the image are represented in blue hues and windy, drone-like sounds. The bottom half of the image, depicted in reddish-orange hues, has a clearer, more melodic texture,” NASA describes.