In evolutionarily advanced species, from rodents to humans, the brain orchestrates computations across hundreds of millions to billions of neurons, and behavior emerges from this highly parallelized system. To model neural activity efficiently, down-sampling of neuronal density is often required. Meanwhile, models inspired by neural computation can scale to millions or even billions of parameters. While the interpretability of neuro-inspired artificial neural networks (ANNs) is essential for evaluating their reliability, robustness, biological plausibility, and trustworthiness, their structural complexity often makes them difficult to interpret.
This workshop will begin with an overview of interpretability challenges in modeling the nervous system, highlighting solutions from the perspective of explainable AI (XAI). We will examine the obstacles faced by the experimental and computational neuroscience communities in data analysis, model development, and dataset integration. Through these examples, we will explore how XAI methods can be used to probe the inner workings of neuro-inspired ANNs.
Furthermore, this OCNS workshop provides a unique platform to bring together neuroscience and AI researchers working on XAI for neuroscience (NeuroXAI) and other relevant areas of research. Topics of interest include machine learning and data mining models for neuroscience, XAI methods to address challenges in neuroscience data analysis and modeling, neuro-inspired models, and topics of high societal relevance (e.g., open science practices and reproducibility). By fostering the exchange of ideas and collaborations, the workshop aims to shape future research directions and advance the development and testing of robust, reliable, and reproducible models, methods, and frameworks.
The workshop organizers are listed below:
- Jie Mei
- Nina Hubig
- Claudia Pant
- Subham Dey