Title: AI-powered cochlear mechanical studies
DNr: LiU-gpu-2024-15
Project Type: LiU Compute
Principal Investigator: Pierre Hakizimana <pierre.hakizimana@liu.se>
Affiliation: Linköpings universitet
Duration: 2024-12-11 – 2026-01-01
Classification: 30105
Keywords:

Abstract

Our study employs AI-powered image analysis to extract novel insights from confocal microscopy data of guinea pig cochlear outer hair cells (OHCs). Using a Zeiss LSM 980 microscope, we acquire 512×512-pixel multichannel confocal images whose accurate cellular segmentation and quantification require deep learning.

The analysis pipeline, built on a modified U-Net architecture, performs automated detection and measurement of subtle OHC morphological changes across thousands of images. The computational demands stem from the model's instance segmentation task, which tracks individual OHCs and measures their dimensional changes; training this model requires substantial GPU resources to optimize the network's parameters across our diverse dataset of cellular responses. The pipeline processes large batches of images to extract statistically significant patterns in cellular responses to different sound intensities.

This GPU-accelerated analysis revealed previously undetectable mechanical adaptations and uncovered a fundamental shift in OHC mechanics between loud and mild stimulation (the correlation shifts from r = -0.21 to r = 0.64). The high-throughput processing enabled by GPU acceleration was crucial for identifying these subtle yet significant patterns across our experimental dataset. These computational tools are essential for advancing our understanding of cochlear mechanics and developing interventions for noise-induced hearing loss.
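The downstream measurement step described above — per-cell dimensional quantification followed by a correlation against stimulus intensity — can be sketched as follows. This is a minimal illustration, not the project's actual code: the function names, the assumption that the network's post-processed output is an integer-labeled instance mask, the use of vertical pixel extent as a proxy for OHC length, and the toy dB values are all hypothetical.

```python
import numpy as np

def cell_lengths(label_mask: np.ndarray) -> list[float]:
    """Measure each labeled cell's vertical extent in pixels.

    `label_mask` is a 512x512 integer array in which 0 is background and
    each positive integer marks one segmented OHC instance, e.g. the
    post-processed output of an instance-segmentation network
    (hypothetical interface, for illustration only).
    """
    lengths = []
    for cell_id in np.unique(label_mask):
        if cell_id == 0:
            continue  # skip background
        rows = np.nonzero(label_mask == cell_id)[0]
        lengths.append(float(rows.max() - rows.min() + 1))
    return lengths

def intensity_length_correlation(masks, intensities_db):
    """Pearson r between stimulus intensity and mean OHC length per image."""
    mean_lengths = [np.mean(cell_lengths(m)) for m in masks]
    return float(np.corrcoef(intensities_db, mean_lengths)[0, 1])

# Toy example: two synthetic single-cell masks, a 30-px cell at a mild
# stimulus and a 24-px cell at a loud one (invented numbers).
m1 = np.zeros((512, 512), dtype=int); m1[100:130, 200:210] = 1
m2 = np.zeros((512, 512), dtype=int); m2[100:124, 200:210] = 1
r = intensity_length_correlation([m1, m2], [40.0, 90.0])
```

In the real pipeline this per-image statistic would be accumulated over thousands of GPU-segmented images before the correlation is computed, which is where the batch-processing demands noted above come from.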