During the COVID-19 pandemic, technology has played a crucial role in supporting people's daily activities, many of which would otherwise be compromised. One of technology's most powerful contributions, however, is in the health sector. The pace at which the virus spreads, combined with the similarity of its symptoms to those of influenza and other diseases, makes it hard for physicians to diagnose, isolate and control positive cases. Artificial Intelligence, and in particular deep learning, comes to the aid of the healthcare system by providing tools able to detect COVID-19 signs that are not discernible to the human eye.
A study published in August in Nature Communications shows how deep-learning techniques can be used to classify, with high accuracy, COVID-19 pneumonia in CT (computed tomography) scans, distinguishing it from the very similar influenza-associated pneumonia. The algorithm, trained on a multinational dataset, can identify COVID-19 positivity even in patients showing only early symptoms.
However, scientists were able to do even better. Since the outbreak of the pandemic, several organizations and universities across the world have collected huge samples of cellphone-recorded forced coughs – like the ones you let out when the doctor asks you to cough while listening through a stethoscope – and spoken words in order to build a model to detect infections. The sound of a cough is influenced by the condition of the surrounding organs, so even a very short recording can provide insights into vocal cord strength and lung performance, the latter being crucial for COVID-19 detection.
In early November, MIT researchers announced they had developed a model that recognizes COVID-19 coughs in forced-cough recordings. What makes this result remarkable is that a properly trained AI can extract up to 300 distinct features from the recordings and thus flag even patients a human doctor would consider asymptomatic, since a doctor can analyze only 3 to 5 features of a cough. Specifically, the model analyzes four COVID-specific biomarkers in the recordings: vocal cord strength, sentiment, lung performance and muscular degradation. The researchers trained their already existing Alzheimer's disease AI model on the largest cough dataset assembled so far, comprising more than 70,000 recordings. The final version of the model correctly classified 98.5% of positive COVID-19 coughs, including all the asymptomatic patients.
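To get an intuition for how acoustic biomarkers can be pulled out of a raw recording, here is a minimal, purely illustrative sketch. It computes two toy features (frame energy and zero-crossing rate) as crude stand-ins for the kinds of signals the MIT model derives, and applies a made-up threshold rule; the actual system learns its decision boundary from tens of thousands of labeled recordings, and the function names and thresholds below are assumptions invented for this example.

```python
import math
import random

def extract_features(signal, frame_size=256):
    """Toy acoustic features from a waveform: average per-frame energy
    and zero-crossing rate. These are crude stand-ins for biomarkers
    like lung performance and vocal cord strength."""
    energies, zcrs = [], []
    for start in range(0, len(signal) - frame_size + 1, frame_size):
        frame = signal[start:start + frame_size]
        energies.append(sum(x * x for x in frame) / frame_size)
        crossings = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0)
        zcrs.append(crossings / frame_size)
    return sum(energies) / len(energies), sum(zcrs) / len(zcrs)

def classify(features, energy_threshold=0.1, zcr_threshold=0.3):
    """Hypothetical rule: flag a recording as 'suspect' when energy is
    low but the zero-crossing rate is high (a weak, noisy cough). A real
    model would learn this boundary from labeled data instead."""
    energy, zcr = features
    if energy < energy_threshold and zcr > zcr_threshold:
        return "suspect"
    return "normal"

# Synthetic example: a weak, noise-dominated cough-like burst.
random.seed(0)
weak_noisy = [0.05 * math.sin(2 * math.pi * 40 * t / 8000)
              + random.uniform(-0.2, 0.2) for t in range(2048)]
print(classify(extract_features(weak_noisy)))
```

In a realistic pipeline the hand-tuned thresholds would be replaced by a trained classifier, and the two toy features by hundreds of learned ones, which is precisely what allows the model to separate cases indistinguishable to the ear.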
Despite these extremely positive results, this AI will not be used to make a definitive COVID-19 diagnosis. The next step for the MIT researchers is to develop a COVID-19 app that people could use daily as a pre-screening tool, in the hope that it will help control the spread of the virus.