PhD Defence Michele Esposito

Supervisor: Prof. Dr. Elia Formisano

Co-supervisor: Dr. Giancarlo Valente

Keywords: Neural representations of sound, deep learning in auditory neuroscience, semantic processing of sounds, brain-inspired computational models
 

"From Sound to Meaning Brain-Inspired Deep Neural Networks For Sound Recognition"


This PhD thesis explores how the human brain processes complex everyday sounds by combining insights from neuroscience and artificial intelligence. The research focuses on developing and evaluating computational models that mimic how the brain hears, interprets, and assigns meaning to sounds. Using deep neural networks (artificial intelligence systems inspired by the brain's structure), the study aims to better understand how we decipher sounds such as speech, music, or environmental noise. These models are compared with data from brain scans and behavioural experiments to assess their biological relevance. Emphasis is also placed on understanding how we extract acoustic features and semantic information, that is, what a sound means in context. The final chapter introduces a novel time-resolved model that captures how the brain continuously integrates sound features over time, better reflecting its dynamic nature. This interdisciplinary work contributes to cognitive neuroscience and to the development of human-like artificial hearing systems.

