Google’s AI could diagnose diseases just from a cough, says Nature article; here’s how it will work


Google scientists have developed a machine learning tool that can help detect and monitor health conditions by evaluating only sounds such as coughing and breathing, according to an article published on the Nature magazine website. The expectation is that the artificial intelligence (AI) system can be used by doctors to diagnose diseases such as Covid-19 and tuberculosis, and to assess how well a person’s lungs are functioning.

This is not the first time that a research group has explored the use of sound as a biomarker for disease. The concept gained momentum during the pandemic when scientists discovered that it was possible to detect the respiratory disease through a person’s cough. This type of tool has also shown promise in detecting diabetes, for example, according to a study published in the scientific journal Mayo Clinic Proceedings: Digital Health last year.

Most AI tools developed for this purpose are trained on audio recordings combined with health information about the person making the sounds. For example, an audio clip can be labeled to indicate that the person had bronchitis at the time of recording. The tool then learns to associate acoustic features with those labels, in a training process called supervised learning.
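A minimal, hypothetical sketch of that supervised setup is shown below. The feature vectors, labels and classifier are synthetic placeholders chosen for illustration; nothing here comes from the actual tools or datasets mentioned in the article.

```python
# Sketch of supervised learning on labeled recordings: a classifier is trained
# on acoustic features paired with health labels. All values are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend each recording has been reduced to a 64-dimensional feature vector
# (e.g. averaged spectrogram bands); label 1 = "bronchitis", 0 = "healthy".
X_train = rng.normal(size=(200, 64))
y_train = rng.integers(0, 2, size=200)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Predict a label for a new, unseen recording's feature vector.
print(clf.predict(rng.normal(size=(1, 64))))
```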

Google’s tool, called Health Acoustic Representations (HeAR), is based on unlabeled data. Scientists have extracted more than 300 million short sound clips of coughing, breathing, throat clearing and other human sounds from publicly available YouTube videos.

Each clip was converted into a visual representation of the sound called a spectrogram. The researchers then blocked out segments of the spectrograms to help the model learn to predict the missing parts. The technique is similar to the one used to train tools such as ChatGPT.
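The sketch below illustrates the masked-prediction idea in miniature: patches of a spectrogram are hidden and a small network is trained to reconstruct them. The shapes, model size and masking ratio are assumptions made for illustration, not HeAR’s actual configuration.

```python
# Illustrative masked-spectrogram training loop: hide parts of the input and
# train a model to predict the hidden regions (self-supervised learning).
import torch
import torch.nn as nn

batch, n_mels, n_frames = 8, 128, 64
spectrograms = torch.rand(batch, n_mels, n_frames)   # stand-in for real audio

# Randomly mask about 30% of time frames by zeroing them out.
mask = torch.rand(batch, 1, n_frames) < 0.3
masked_input = spectrograms.masked_fill(mask, 0.0)

# A tiny encoder-decoder that tries to reconstruct the full spectrogram.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(n_mels * n_frames, 512),
    nn.ReLU(),
    nn.Linear(512, n_mels * n_frames),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for _ in range(10):  # a few illustrative training steps
    recon = model(masked_input).view(batch, n_mels, n_frames)
    # The loss is computed only on the masked regions the model had to predict.
    loss = ((recon - spectrograms) ** 2)[mask.expand_as(spectrograms)].mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```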

This method can be adapted to many tasks. In the case of HeAR, the Google team adapted it to detect Covid-19, tuberculosis and whether a person is a smoker. Because the model was trained on such a wide range of human sounds, the researchers only had to give it very small datasets labeled with these diseases and characteristics.
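One common way to adapt a pretrained model with little labeled data is to freeze it and train only a small classifier on top of its outputs. The sketch below assumes that kind of setup; the encoder, data and labels are placeholders, not the actual HeAR model or clinical recordings.

```python
# Illustrative adaptation step: keep a pretrained encoder fixed and train only
# a small classification head on a limited labeled dataset.
import torch
import torch.nn as nn

embedding_dim = 512
pretrained_encoder = nn.Linear(128 * 64, embedding_dim)  # stand-in for the pretrained model
for p in pretrained_encoder.parameters():
    p.requires_grad = False  # keep the pretrained representation fixed

head = nn.Linear(embedding_dim, 2)  # binary label: condition present or not
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# A tiny labeled dataset: flattened spectrograms plus 0/1 labels.
x = torch.rand(32, 128 * 64)
y = torch.randint(0, 2, (32,))

for _ in range(20):
    with torch.no_grad():
        features = pretrained_encoder(x)   # fixed embeddings
    loss = loss_fn(head(features), y)      # only the head is updated
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```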

The results showed that the model was able to predict these diseases more accurately than existing models trained on speech or general audio data. Furthermore, because the original training data is so diverse, with varying sound quality and human sources, the results are generalizable. The findings were reported in a preprint that has not yet been peer-reviewed.

The field of healthcare acoustics, or “audiomics,” is promising, Yael Bensoussan, a laryngologist at the University of South Florida in Tampa, USA, who co-leads a research consortium exploring the voice as a biomarker for monitoring health, told Nature. “Acoustic science has existed for decades. The difference is that now, with AI and machine learning, we have the means to collect and analyze a lot of data at the same time.”

It is still too early to say whether HeAR will become a commercial product. But, according to Bensoussan, this type of technology represents immense potential not only for diagnosis, but also for disease tracking.

For now, Google’s plan is to provide access to the model so that interested researchers can use it in their own investigations. “Our goal as part of Google Research is to spur innovation in this nascent field,” says Sujay Kakarmath, a product manager at Google in New York who worked on the project.
