A crying baby can be a source of worry for parents, but AI might be able to help ease their concerns by distinguishing normal cry signals from abnormal ones.
Babies cry for many reasons, and a so-called cry language recognition algorithm developed in the US could help distinguish between everyday baby gripes – such as needing to be changed or being tired and grouchy – and an underlying illness.
As well as helping parents, the method promises to be useful to doctors, who could use it to identify the cries of sick children.
The research was published in the May issue of IEEE/CAA Journal of Automatica Sinica (JAS), a joint publication of the IEEE and the Chinese Association of Automation.
Experienced health care workers and seasoned parents can often distinguish a baby’s many needs based on its crying sounds. While each baby’s cry is unique, cries share common features when they arise from the same cause.
Identifying the hidden patterns in the cry signal has been a major challenge, and artificial intelligence has now been shown to be a suitable tool for the task.
The new research is said to use a specific algorithm based on automatic speech recognition to detect and recognise the features of infant cries.
To analyse and classify those signals, the team used compressed sensing to process big data more efficiently. Compressed sensing reconstructs a signal based on sparse data and is especially useful when sounds are recorded in noisy environments, as baby cries typically are.
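The core idea of compressed sensing – recovering a signal with few nonzero components from far fewer measurements than its full length – can be sketched with a standard l1-regularised solver. The sizes, measurement matrix, and the ISTA (iterative soft-thresholding) solver below are illustrative assumptions, not the algorithm used in the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a length-200 signal with only 5 nonzero
# coefficients, observed through 60 random linear measurements.
n, m, k = 200, 60, 5
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)

A = rng.normal(size=(m, n)) / np.sqrt(m)  # random measurement matrix
y = A @ x_true                            # compressed measurements

def ista(A, y, lam=0.01, n_iter=1000):
    """ISTA: a basic solver for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2         # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L     # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

x_hat = ista(A, y)
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

Because the true signal is sparse, 60 measurements suffice to reconstruct all 200 samples with small relative error; the l1 penalty is what favours sparse solutions over the infinitely many that fit the measurements exactly.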
In this study, the researchers designed a new cry language recognition algorithm which can distinguish the meanings of normal and abnormal cry signals in a noisy environment.
The algorithm is reportedly independent of the individual crier, meaning it can be applied broadly in practical scenarios to recognise and classify various cry features, and to better understand why babies are crying and how urgent the cries are.
“Like a special language, there are lots of health-related information in various cry sounds. The differences between sound signals carry the information. These differences are represented by different features of the cry signals. To recognise and leverage the information, we have to extract the features and then obtain the information in it,” said Lichuan Liu, corresponding author and Associate Professor of Electrical Engineering and the director of Digital Signal Processing Laboratory at Northern Illinois University, whose group conducted the research.
The researchers hope that the findings of their study could be applicable to several other medical care circumstances in which decision making relies heavily on experience.