Thanks to the asymmetric shape of the ears and the distance between them, a person can determine the direction to a sound source. Researchers from Facebook Research and the University of Texas have found a way to convincingly simulate surround sound using machine learning and a pair of artificial ears.

The human brain uses several cues to understand where a sound comes from in three-dimensional space. One is the difference in the time it takes a sound to reach each ear: a sound coming from the left obviously reaches the left ear slightly earlier. Another is the difference in volume: that same sound will be perceived as louder by the left ear than by the right. The shape of the outer ear also helps the brain determine where a sound comes from, according to MIT Technology Review.
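These two cues are known as the interaural time difference (ITD) and the interaural level difference (ILD). The time difference is easy to estimate with back-of-the-envelope arithmetic. A minimal sketch, assuming a simplified d·sin(θ)/c model with an interaural distance of about 0.21 m and a speed of sound of 343 m/s (both constants are illustrative, not taken from the article):

```python
import math

SPEED_OF_SOUND = 343.0   # m/s, in air at roughly 20 °C
HEAD_WIDTH = 0.21        # m, typical distance between the ears

def interaural_time_difference(azimuth_deg: float) -> float:
    """Approximate ITD in seconds for a source at the given azimuth.

    Uses the simple d*sin(theta)/c model: 0 degrees is straight
    ahead, 90 degrees is directly to the listener's left.
    """
    theta = math.radians(azimuth_deg)
    return HEAD_WIDTH * math.sin(theta) / SPEED_OF_SOUND

# A source directly to the left arrives about 0.6 ms earlier
# at the left ear -- a tiny gap, yet enough for the brain to use.
print(f"{interaural_time_difference(90) * 1000:.2f} ms")  # ~0.61 ms
```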

This makes recreating the system artificially a difficult task. One approach is binaural stereo recording: if you place a microphone in each ear, you can capture these tiny variations in how the sound is perceived.

After analyzing them, scientists can reproduce them with a mathematical algorithm. Ordinary headphones can then be turned into a device that creates three-dimensional sound.
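To illustrate what such an algorithm does in its simplest form, here is a sketch that imposes an interaural time difference (a small per-channel delay) and a level difference (a per-channel gain) on a mono signal. The specific delay and gain values are illustrative assumptions, not parameters from the research:

```python
import numpy as np

def binauralize(mono: np.ndarray, sample_rate: int,
                itd_seconds: float, ild_db: float) -> np.ndarray:
    """Turn a mono signal into a crude binaural stereo pair.

    Positive itd_seconds/ild_db delay and attenuate the right
    channel, making the source appear to come from the left.
    Returns an array of shape (n_samples, 2).
    """
    delay = int(round(abs(itd_seconds) * sample_rate))
    gain = 10.0 ** (-abs(ild_db) / 20.0)  # dB -> linear attenuation
    delayed = np.concatenate([np.zeros(delay), mono])[: len(mono)]
    if itd_seconds >= 0:           # source on the left
        left, right = mono, delayed * gain
    else:                          # source on the right
        left, right = delayed * gain, mono
    return np.stack([left, right], axis=1)

# Example: a 440 Hz tone placed to the left of the listener.
sr = 44100
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
stereo = binauralize(tone, sr, itd_seconds=0.0006, ild_db=6.0)
```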

But since everyone's ears are different, everyone hears sounds in their own way. Strictly speaking, measurements would have to be taken for each listener before playing back a recording. That is possible in the laboratory, but not in practice.

However, there are ways to approximate 3D sound without taking the individual shape of the ears into account. Ruohan Gao and Kristen Grauman applied one of them, determining the direction a sound comes from with the help of visual cues. Given a video recording of a scene and its monophonic sound, their machine learning system finds the sound source and calculates the time it takes the sound waves to reach each ear canal, as well as the volume of the sound.

As a result, the listener perceives almost three-dimensional sound.
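In the authors' published work on 2.5D visual sound, the network predicts the difference between the left and right channels rather than each channel directly; combined with the mono mix, both channels can then be recovered. A minimal reconstruction sketch, assuming the mono mix is the average of the two channels and the network outputs half their difference (the exact normalization convention is an assumption, and `reconstruct_binaural` is a hypothetical helper, not the authors' code):

```python
import numpy as np

def reconstruct_binaural(mono: np.ndarray,
                         predicted_diff: np.ndarray) -> np.ndarray:
    """Recover left/right channels from the mono mix and the
    predicted difference signal.

    Assuming mono = (L + R) / 2 and diff = (L - R) / 2, then:
        L = mono + diff
        R = mono - diff
    Returns an array of shape (n_samples, 2).
    """
    left = mono + predicted_diff
    right = mono - predicted_diff
    return np.stack([left, right], axis=1)
```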

For example, a video shows two musicians, a drummer and a keyboard player, one on the left and the other on the right. The algorithm recognizes this and distributes the sound streams accordingly: the drums to the left, the synthesizer to the right.
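The effect is comparable to panning separated stems on a mixing desk. A brief sketch of constant-power panning, purely as an illustration of this left/right distribution (the stems, frequencies, and pan positions below are invented for the example, not taken from the research):

```python
import numpy as np

def pan(mono: np.ndarray, position: float) -> np.ndarray:
    """Constant-power pan: position -1.0 is hard left, +1.0 hard right."""
    angle = (position + 1.0) * np.pi / 4.0   # map [-1, 1] -> [0, pi/2]
    return np.stack([mono * np.cos(angle), mono * np.sin(angle)], axis=1)

# Drums panned left, synthesizer panned right, summed into one stereo mix.
sr = 44100
t = np.arange(sr) / sr
drums = np.sin(2 * np.pi * 110 * t)   # stand-ins for separated stems
synth = np.sin(2 * np.pi * 523 * t)
mix = pan(drums, -0.8) + pan(synth, 0.8)
```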

To train the algorithm, the scientists assembled a database of examples by making binaural recordings of over 2,000 video clips. To imitate the human auditory system, they made two artificial ears, mounted them on a dummy the width of a human head, and added a GoPro camera to the rig.


The researchers have published a demo video of the result (best heard through headphones).

The authors called their sound 2.5D because the system does not personalize the sound for an individual user. It also cannot localize a sound source that is not visible in the video. Grauman and Gao plan to continue working on their invention and expand its functionality.

Earlier, Singapore scientists taught an AI to identify heart problems from sounds in the lungs. The device, which resembles a stethoscope, picks up the sounds of air passing through fluid-filled lungs and transmits them to a server, where they are processed by algorithms.
