Researchers at the University of California, Los Angeles have now developed software that listens to a baby’s crying and interprets it based on a few cues that parents with normal hearing have learned to focus on. “For example, if a cry has a long period of silence, it’s more likely that the baby is fussy,” says Ariana Anderson, PhD, the lead researcher of the project. “If there are constant, high-volume frequencies, it’s more likely the baby is in pain.”

The technology relies on machine learning rather than the anecdotal impressions of hearing parents. A collection of more than 2,000 infant cries was analyzed by computer: the recordings were sorted, likely with human help in labeling them, and from that sorting an algorithm was built that can classify cries it has never heard before.
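To make the idea concrete, here is a minimal sketch of how the two acoustic cues quoted above, long silences and sustained high frequencies, might be turned into features and a toy classification rule. The feature names, thresholds, and sample rate are all illustrative assumptions, not the actual UCLA algorithm.

```python
import numpy as np

SAMPLE_RATE = 8000  # Hz; assumed for this sketch

def extract_features(signal, sample_rate=SAMPLE_RATE):
    """Return (silence_ratio, high_freq_energy) for a mono waveform."""
    # Silence ratio: fraction of 25 ms frames whose RMS amplitude
    # falls below a small threshold (threshold is a guess).
    frame = int(0.025 * sample_rate)
    frames = signal[: len(signal) // frame * frame].reshape(-1, frame)
    rms = np.sqrt((frames ** 2).mean(axis=1))
    silence_ratio = float((rms < 0.02).mean())

    # High-frequency energy: share of spectral power above 2 kHz
    # (the 2 kHz cutoff is also an illustrative assumption).
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    high_freq_energy = float(
        spectrum[freqs > 2000].sum() / (spectrum.sum() + 1e-12)
    )
    return silence_ratio, high_freq_energy

def classify_cry(signal):
    """Toy rule mirroring the cues quoted in the article."""
    silence_ratio, high_freq_energy = extract_features(signal)
    if silence_ratio > 0.4:
        return "fussy"  # long silent stretches -> likely fussiness
    if high_freq_energy > 0.5:
        return "pain"   # sustained high frequencies -> likely pain
    return "unknown"

# Synthetic demo: a steady 3 kHz tone stands in for a high-pitched cry.
t = np.linspace(0, 1.0, SAMPLE_RATE, endpoint=False)
cry = 0.5 * np.sin(2 * np.pi * 3000 * t)
print(classify_cry(cry))  # prints "pain"
```

The real system would of course learn its decision boundaries from the 2,000 labeled recordings rather than use hand-picked thresholds; the sketch only shows the feature-extraction step that such a learned model would sit on top of.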

The technology has already been implemented as a smartphone app, and we’re looking forward to its commercial release. If it works as expected, it will make raising children easier not only for deaf parents but also for hearing parents who simply aren’t very good at interpreting their children’s cries.