Developed by a team from the National University of Singapore (NUS) Tropical Marine Science Institute (TMSI), the sonar is said to incorporate information on the sparsity of objects, which helps it interpret sound echoes more accurately. This processing method is based on the hypothesis that dolphins use prior information about their environment, in addition to their broadband sound pulses, to interpret their echoes.
Compared to similar sonars, the NUS sonar is claimed to provide a better trade-off between sonar-image clarity, the number of sensors and the size of the sensor array used. The study has been published in Communications Engineering.
The scientists observed that dolphins were able to acoustically scan objects underwater and then visually pick out the matching objects. This demonstrated that the echoes of a dolphin’s clicks reflected off an object carry information about the object’s shape. The team then recorded the echoes produced as a dolphin scanned an object underwater.
Based on their observations, the team built a biomimetic sonar that replicated a dolphin’s sonar. The device is designed to emit sharp, impulsive clicks similar to a dolphin’s echolocation pulses, with three transmitters sending the sounds from different directions. The researchers then processed the echoes recorded from both the dolphin and their sonar to visualise what they revealed about the object’s shape.
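As a rough illustration of how impulsive click echoes can be turned into an image, the sketch below simulates a single point scatterer and forms a picture with generic delay-and-sum processing. Everything in it is an assumption for illustration: the sound speed, sampling rate, single transmitter and four-hydrophone array are invented values, and the article does not describe the team’s processing chain at this level of detail.

```python
import numpy as np

SOUND_SPEED = 1500.0  # nominal speed of sound in seawater, m/s (assumption)
FS = 200_000          # sampling rate in Hz, chosen purely for illustration

def delay_and_sum(echoes, rx_pos, tx_pos, grid):
    """Generic delay-and-sum imaging: for each candidate grid point, sum the
    echo samples corresponding to the transmitter-to-point-to-receiver travel
    time. A textbook baseline, not the NUS team's actual algorithm."""
    image = np.zeros(len(grid))
    for i, p in enumerate(grid):
        travel = (np.linalg.norm(p - tx_pos) +
                  np.linalg.norm(rx_pos - p, axis=1)) / SOUND_SPEED
        idx = np.round(travel * FS).astype(int)
        ok = idx < echoes.shape[1]
        image[i] = np.abs(echoes[ok, idx[ok]].sum())
    return image

# Toy scene: one point scatterer, one transmitter and a 4-hydrophone line array
# (the real device uses three transmitters; the toy keeps things minimal).
tx_pos = np.array([0.0, 0.0])
rx_pos = np.array([[d, 0.0] for d in (-0.15, -0.05, 0.05, 0.15)])
scatterer = np.array([0.3, 2.0])

echoes = np.zeros((len(rx_pos), 2048))
for k, r in enumerate(rx_pos):
    delay = (np.linalg.norm(scatterer - tx_pos) +
             np.linalg.norm(scatterer - r)) / SOUND_SPEED
    echoes[k, int(round(delay * FS))] = 1.0  # idealised impulsive click echo

xs, ys = np.meshgrid(np.linspace(-0.5, 0.5, 21), np.linspace(1.5, 2.5, 21))
grid = np.column_stack([xs.ravel(), ys.ravel()])
image = delay_and_sum(echoes, rx_pos, tx_pos, grid)
print("brightest cell:", grid[np.argmax(image)])
```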
To complement the hardware, the team developed software that improves the visualisation of the echoes. Based on the hypothesis that dolphins use prior information to process their echoes, the researchers incorporated the concept of sparsity into the sonar’s software. This assumes that only a small fraction of the scanned space is occupied by the object.
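A minimal sketch of what such a sparsity assumption can look like in practice is shown below: a scene vector with only a few occupied cells is recovered from a small number of echo measurements by L1-regularised least squares, solved with the classic iterative shrinkage-thresholding algorithm (ISTA). This is a generic sparse-recovery routine, not the NUS team’s software; the random measurement matrix, the dimensions and the `ista` function are illustrative assumptions.

```python
import numpy as np

def ista(A, y, lam=0.05, n_iter=300):
    """Recover a sparse scene x from measurements y = A @ x + noise by
    L1-regularised least squares (ISTA). Generic illustration only."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # step size from largest singular value
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                # gradient of the least-squares term
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)  # soft threshold
    return x

# Toy scene: 200 range-angle cells, only 5 occupied, observed through 60
# echo measurements. In a real sonar, A would encode the known click
# waveforms and geometry; here it is random purely for illustration.
rng = np.random.default_rng(0)
x_true = np.zeros(200)
x_true[rng.choice(200, 5, replace=False)] = rng.uniform(0.5, 1.0, 5)
A = rng.standard_normal((60, 200))
y = A @ x_true + 0.01 * rng.standard_normal(60)
x_hat = ista(A, y)
print("largest recovered cells:", np.argsort(-np.abs(x_hat))[:5])
```

The sparsity prior is what lets the reconstruction succeed with far fewer measurements than unknowns, which is consistent with the article’s point about getting clear images from a small, compact sensor array.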
“Using prior information, such as the idea of sparsity, is intuitive. It is something humans do all the time – we turn our understanding of reality into expectations that can speed up our inferences and decisions. For example, in the absence of other information, the human brain and vision system tend to assume that in an image, the light on an object will be falling from above,” said Dr Hari Vishnu, senior research fellow at NUS TMSI.
The software’s effectiveness was demonstrated on both the echoes recorded from a dolphin scanning an object and the signals produced by the team’s compact sonar. Processing both sets of echoes with a conventional approach resulted in noisy images, whereas the new processing approach gave sharper, higher-resolution images. The software can also generate visualisations from just three clicks of the sonar, making it fast in operation.
According to NUS, the new sonar processing method could benefit underwater commercial or military sonars; for example, it could be used to scan the seabed for features that aid navigation. The sonar’s compactness also makes it suitable for mounting on underwater robots for ocean exploration.