Researchers publishing in a journal run jointly by the Institute of Electrical and Electronics Engineers (IEEE) and the Chinese Association of Automation have designed an artificial intelligence (AI) system which they say can distinguish between a baby’s various cries.
Although each baby’s cry is distinctive, many cries share similarities – something the researchers focused on when training the AI. An author of the study, Dr Lichuan Liu, said: “Like a special language, there is lots of health-related information in various cry sounds. The differences between sound signals actually carry the information. These differences are represented by different features of the cry signals.”
To classify and analyse the signals, the researchers used compressed sensing, which allowed them to process the vast quantity of data more efficiently. Compressed sensing reconstructs a signal from sparse measurements and is particularly useful when audio is recorded in noisy environments, where baby cries often occur.
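The idea behind compressed sensing can be illustrated with a toy example: a signal that is sparse (mostly zeros) can be reconstructed from far fewer measurements than its length. The sketch below, which uses the standard Orthogonal Matching Pursuit algorithm rather than the paper’s own method, recovers a 4-sparse signal of length 128 from just 48 random measurements; all sizes and the sensing matrix are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: a length-n signal with only k non-zero entries.
n, m, k = 128, 48, 4  # signal length, number of measurements, sparsity
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.normal(size=k)

# Random Gaussian sensing matrix: y = A @ x gives far fewer samples than n.
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse x from y = A @ x."""
    residual = y.copy()
    idx = []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        idx.append(j)
        # Least-squares fit on the selected columns, then update the residual.
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        residual = y - A[:, idx] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[idx] = coef
    return x_hat

x_hat = omp(A, y, k)
print(np.linalg.norm(x_hat - x) < 1e-6)  # True when the sparse support is recovered exactly
```

Because real cry recordings are noisy, practical systems use more robust solvers, but the principle is the same: sparsity lets a few measurements stand in for the full signal.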
The team therefore developed a new algorithm based on automatic speech recognition that can determine the meanings of both normal and abnormal cry signals, even in noisy environments.
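A cry classifier of this kind boils down to extracting features from the audio and matching them against learned categories. The toy sketch below is not the researchers’ algorithm: it invents two synthetic “cry types” with different dominant frequencies, uses crude band-energy features in place of real speech features, and classifies by nearest centroid, purely to show the pipeline shape. The sample rate, cry labels, and signal model are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
sr = 8000  # sample rate in Hz -- an assumption for this toy example

def synth_cry(f0, n_samples=4000):
    """Toy 'cry': a noisy tone at fundamental f0, a stand-in for real audio."""
    t = np.arange(n_samples) / sr
    return np.sin(2 * np.pi * f0 * t) + 0.3 * rng.normal(size=n_samples)

def features(signal, n_bands=16):
    """Crude spectral features: log energy in equal-width frequency bands."""
    spectrum = np.abs(np.fft.rfft(signal))
    bands = np.array_split(spectrum, n_bands)
    return np.log1p([b.sum() for b in bands])

# Two hypothetical cry types with different spectral signatures.
hungry = [features(synth_cry(400)) for _ in range(20)]
pain = [features(synth_cry(600)) for _ in range(20)]
centroids = {"hungry": np.mean(hungry, axis=0), "pain": np.mean(pain, axis=0)}

def classify(signal):
    """Label a new cry by its nearest class centroid in feature space."""
    f = features(signal)
    return min(centroids, key=lambda c: np.linalg.norm(f - centroids[c]))

print(classify(synth_cry(410)))  # lands nearest the 'hungry' centroid
```

A real system would replace the synthetic tones with recorded cries and the band energies with richer speech features, but the train-on-features, match-to-category structure carries over.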
Most importantly, the algorithm works across different babies, meaning it could be used not only by parents but also by doctors to interpret the cries of poorly children.
Dr Liu added: “The ultimate goals are healthier babies and less pressure on parents and care givers.
“We are looking into collaborations with hospitals and medical research centres, to obtain more data and requirement scenario input, and hopefully we could have some products for clinical practice.”