Detecting Information Relays in Deep Neural Networks
Dalarna University, School of Information and Engineering, Microdata Analysis; Michigan State University, East Lansing, MI, USA. ORCID iD: 0000-0002-4872-1961
Michigan State University, East Lansing, MI, USA.
2023 (English). In: Entropy, E-ISSN 1099-4300, Vol. 25, no. 3, article id 401. Article in journal (Refereed). Published.
Abstract [en]

Deep learning of artificial neural networks (ANNs) is creating highly functional processes that are, unfortunately, nearly as hard to interpret as their biological counterparts. Identification of functional modules in natural brains plays an important role in cognitive science and neuroscience alike, and can be carried out using a wide range of technologies such as fMRI, EEG/ERP, MEG, or calcium imaging. However, we do not have such robust methods at our disposal when it comes to understanding functional modules in artificial neural networks. Ideally, understanding which parts of an artificial neural network perform what function might help us to address a number of vexing problems in ANN research, such as catastrophic forgetting and overfitting. Furthermore, revealing a network's modularity could improve our trust in these systems by making their black-box computations more transparent. Here, we introduce a new information-theoretic concept that proves useful in understanding and analyzing a network's functional modularity: the relay information I_R. The relay information measures how much information groups of neurons that participate in a particular function (modules) relay from inputs to outputs. Combined with a greedy search algorithm, relay information can be used to identify computational modules in neural networks. We also show that the functionality of modules correlates with the amount of relay information they carry.
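The abstract's combination of an information-theoretic score with a greedy search can be sketched in outline. The following Python sketch is illustrative only and is not the paper's implementation: as a stand-in for the relay information I_R defined in the article, it scores a candidate set of hidden neurons by the mutual information between the set's joint (binarized) state and the network's outputs, and grows the set greedily while the score improves. All function names and the scoring choice are assumptions.

```python
# Hedged sketch: greedy search for a group of hidden neurons that
# "relays" information from inputs to outputs, scored here by plain
# discrete mutual information rather than the paper's exact I_R.
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """I(X;Y) in bits for two paired sequences of hashable symbols."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def greedy_relay_set(hidden, outputs):
    """Greedily grow a set of hidden-neuron indices whose joint state
    carries the most information about the outputs.

    `hidden`  -- list of per-sample tuples of binarized activations
    `outputs` -- list of per-sample output labels
    """
    n_neurons = len(hidden[0])
    chosen, best_mi = [], 0.0
    improved = True
    while improved:
        improved = False
        for j in (j for j in range(n_neurons) if j not in chosen):
            cand = chosen + [j]
            # Joint state of the candidate neuron set for each sample.
            states = [tuple(h[k] for k in cand) for h in hidden]
            mi = mutual_information(states, outputs)
            if mi > best_mi + 1e-12:  # keep j only if the score improves
                chosen, best_mi, improved = cand, mi, True
                break
    return chosen, best_mi
```

On a toy dataset where the output simply copies neuron 0, the search selects `[0]` and stops, since adding further neurons cannot raise the score above the 1 bit already captured.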

Place, publisher, year, edition, pages
2023. Vol. 25, no. 3, article id 401
Keywords [en]
deep learning, information theory, relay
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:du-45828
DOI: 10.3390/e25030401
ISI: 000960036200001
PubMedID: 36981289
Scopus ID: 2-s2.0-85152710571
OAI: oai:DiVA.org:du-45828
DiVA, id: diva2:1748821
Available from: 2023-04-04. Created: 2023-04-04. Last updated: 2023-04-25. Bibliographically approved.

Open Access in DiVA

fulltext (2717 kB), 143 downloads
File information
File name: FULLTEXT01.pdf
File size: 2717 kB
Checksum (SHA-512): 4107dee6f9d2793c035ae0a9bae3d14b16ab9e3faae9c909380b20b1e8c717ea0c23c7f81f775ffe7ea911d3c24b89984c61e30938e0baf081d77941f9c95083
Type: fulltext
Mimetype: application/pdf

Other links

Publisher's full text
PubMed
Scopus

Authority records

Hintze, Arend


Total: 143 downloads
The number of downloads is the sum of all downloads of full texts. It may include, e.g., previous versions that are no longer available.

Total: 256 hits