The team designed a hybrid strategy that merges neural perception with acoustic feature learning to exploit the strengths of both modes. "Current automated STD methods perform well under controlled conditions but degrade sharply in low SNR or with unseen targets, while standalone BCI systems suffer from high false alarm rates. To overcome these limitations, we proposed a hybrid approach that combines the complementary strengths of neural perception and acoustic feature learning," explained study author Luzheng Bi, a researcher at the Beijing Institute of Technology.
The framework's main elements are an EEG decoding network that incorporates neuroanatomical information, a confidence-based fusion mechanism that combines outputs from the brain interface and the automatic detector, and a streaming-mode experimental protocol that mimics real-time operational use. "This integrated solution achieves robust detection performance with high generalization, offering a practical tool for security protection and environmental reconnaissance," Bi added.
The EEG component, called Tri-SDANet, applies a spatial partitioning strategy grounded in brain anatomy to process multichannel recordings. In this setup, 60-channel EEG data are divided into groups corresponding to the temporal, frontal, and parieto-occipital lobes, and each region passes through its own spatiotemporal filters to improve decoding of task-related neural activity. On the signal-processing side, the automatic detection module uses established models trained on log-Mel spectrogram representations of sound to extract acoustic features relevant for target identification.
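The lobe-wise partitioning described above can be sketched as follows. This is a minimal illustration only: the channel indices, group sizes, filter lengths, and pooling rule are placeholder assumptions, stand-ins for the learned spatiotemporal filters in Tri-SDANet, not details from the paper.

```python
import numpy as np

# Assumed channel-to-lobe assignment for a 60-channel montage (illustrative).
REGIONS = {
    "temporal":          np.arange(0, 20),
    "frontal":           np.arange(20, 40),
    "parieto_occipital": np.arange(40, 60),
}

def region_features(eeg, kernels):
    """Filter each anatomical region with its own temporal kernel.

    eeg     : (60, T) array of EEG samples
    kernels : dict mapping region name -> 1-D temporal filter
    """
    feats = []
    for name, idx in REGIONS.items():
        region = eeg[idx]                  # channels belonging to this lobe
        k = kernels[name]
        # Per-region temporal filtering (a stand-in for the network's
        # learned spatiotemporal convolutions).
        filtered = np.array([np.convolve(ch, k, mode="valid") for ch in region])
        feats.append(filtered.mean(axis=0))  # simple spatial pooling
    return np.concatenate(feats)

rng = np.random.default_rng(0)
eeg = rng.standard_normal((60, 256))
kernels = {r: np.ones(5) / 5 for r in REGIONS}   # moving-average stand-ins
features = region_features(eeg, kernels)         # shape: (3 * 252,)
```

Processing each region separately keeps region-specific filters from being averaged out across the whole scalp, which is the intuition behind the anatomy-informed design.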
A key design choice is the confidence-driven fusion scheme that dictates when the system engages the EEG-based interface. "The fusion framework invokes BCI only when the automatic detector is uncertain, reducing human workload while maintaining accuracy," said Jianting Shi, the lead author. This selective use of neural input is intended to keep operator demands manageable while still using human brain responses to disambiguate difficult cases.
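The gating logic Shi describes can be sketched in a few lines. The threshold value and the averaging rule used when the BCI is invoked are illustrative assumptions; the paper's exact confidence measure and combination scheme may differ.

```python
import numpy as np

CONF_THRESHOLD = 0.8  # assumed uncertainty cutoff, not from the paper

def fuse(detector_probs, query_bci):
    """Decide a label, querying the BCI decoder only on uncertain cases.

    detector_probs : class probabilities from the acoustic detector
    query_bci      : callable returning class probabilities from EEG decoding
    """
    detector_probs = np.asarray(detector_probs, dtype=float)
    if detector_probs.max() >= CONF_THRESHOLD:
        # Detector is confident: accept its decision, no operator involvement.
        return int(detector_probs.argmax()), "detector"
    # Uncertain case: combine the two probability vectors (here a simple
    # average, one possible fusion rule).
    bci_probs = np.asarray(query_bci(), dtype=float)
    fused = 0.5 * (detector_probs + bci_probs)
    return int(fused.argmax()), "fused"

# A confident detector output bypasses the BCI entirely:
label, source = fuse([0.9, 0.1], query_bci=lambda: [0.5, 0.5])
```

Because the BCI path only runs on the uncertain fraction of inputs, the operator's EEG is consulted rarely, which is how the design keeps human workload low while still using neural responses to resolve hard cases.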
Shi noted that the current implementation still faces several technical and human-factor hurdles. "While the hybrid system shows promising results, it still faces challenges: EEG decoding latency, operator fatigue, and adaptation to more diverse sound targets. Future work will focus on algorithm and hardware optimization to reduce latency, develop user-friendly training protocols, and expand the dataset to cover broader acoustic scenarios," said Shi.
The researchers view the brain-machine hybrid intelligence framework as a generalizable path to more robust acoustic target detection that narrows the gap between laboratory benchmarks and real-world operational needs.
Research Report: Neuroanatomy-Informed Brain-Machine Hybrid Intelligence for Robust Acoustic Target Detection
Related Links
Beijing Institute of Technology Press Co., Ltd