Detecting Information Relays in Deep Neural Networks
Högskolan Dalarna, Institutionen för information och teknik, Mikrodataanalys; Michigan State University, East Lansing, MI, USA. ORCID iD: 0000-0002-4872-1961
Michigan State University, East Lansing, MI, USA.
2023 (English) In: Entropy, E-ISSN 1099-4300, Vol. 25, no. 3, article id 401. Article in journal (Refereed) Published
Abstract [en]

Deep learning of artificial neural networks (ANNs) is creating highly functional processes that are, unfortunately, nearly as hard to interpret as their biological counterparts. Identification of functional modules in natural brains plays an important role in cognitive science and neuroscience alike, and can be carried out using a wide range of technologies such as fMRI, EEG/ERP, MEG, or calcium imaging. However, we do not have such robust methods at our disposal when it comes to understanding functional modules in artificial neural networks. Ideally, understanding which parts of an artificial neural network perform what function might help us to address a number of vexing problems in ANN research, such as catastrophic forgetting and overfitting. Furthermore, revealing a network's modularity could improve our trust in these systems by making such black boxes more transparent. Here, we introduce a new information-theoretic concept that proves useful in understanding and analyzing a network's functional modularity: the relay information I_R. The relay information measures how much information groups of neurons that participate in a particular function (modules) relay from inputs to outputs. Combined with a greedy search algorithm, relay information can be used to identify computational modules in neural networks. We also show that the functionality of modules correlates with the amount of relay information they carry.
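
To make the recipe in the abstract concrete, the sketch below illustrates one way such a module search could look in practice: score a candidate group of hidden neurons by how much information its discretized joint activation carries about the network's outputs, and grow the group greedily. This is only an illustration built from the abstract, not the authors' implementation; the plug-in mutual-information score, the quantile binning, the function names (mutual_information, discretize, greedy_relay_module), and the random example data are all assumptions standing in for the paper's exact definition and estimator of the relay information I_R.

import numpy as np
from collections import Counter

def mutual_information(x_codes, y_labels):
    """Plug-in estimate (in bits) of I(X;Y) for two discrete sequences."""
    n = len(x_codes)
    pxy = Counter(zip(x_codes, y_labels))
    px = Counter(x_codes)
    py = Counter(y_labels)
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        mi += p_joint * np.log2(p_joint / ((px[x] / n) * (py[y] / n)))
    return mi

def discretize(activations, bins=2):
    """Quantile-bin each neuron's activation and encode each sample's
    joint binned state as a tuple (a discrete code)."""
    qs = np.linspace(0, 1, bins + 1)[1:-1]          # interior quantile levels
    edges = np.quantile(activations, qs, axis=0)    # shape (bins-1, n_neurons)
    binned = (activations[:, :, None] > edges.T[None, :, :]).sum(axis=2)
    return [tuple(row) for row in binned]

def greedy_relay_module(hidden_acts, labels, max_size=5):
    """Greedily grow the set of hidden neurons whose joint binned state is
    most informative about the output labels (a proxy for a relay module)."""
    n_neurons = hidden_acts.shape[1]
    module, best_score = [], -np.inf
    for _ in range(max_size):
        candidate, candidate_score = None, best_score
        for j in range(n_neurons):
            if j in module:
                continue
            codes = discretize(hidden_acts[:, module + [j]])
            score = mutual_information(codes, labels)
            if score > candidate_score:
                candidate, candidate_score = j, score
        if candidate is None:
            break                                   # no neuron improves the score
        module.append(candidate)
        best_score = candidate_score
    return module, best_score

if __name__ == "__main__":
    # Random data stands in for hidden activations recorded from a trained network.
    rng = np.random.default_rng(0)
    acts = rng.normal(size=(1000, 10))                      # 1000 samples, 10 hidden neurons
    labels = (acts[:, 3] + acts[:, 7] > 0).astype(int)      # output depends on neurons 3 and 7
    module, bits = greedy_relay_module(acts, labels, max_size=3)
    print("selected module:", module, "score (bits):", round(bits, 3))

In a real analysis the activations would be recorded from a trained network over a held-out set, and the stopping rule (a fixed max_size here) would need care, since plug-in mutual-information estimates tend to grow with subset size.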

Place, publisher, year, edition, pages
2023. Vol. 25, no. 3, article id 401
Keywords [en]
deep learning, information theory, relay
HSV category
Identifiers
URN: urn:nbn:se:du-45828
DOI: 10.3390/e25030401
ISI: 000960036200001
PubMed ID: 36981289
Scopus ID: 2-s2.0-85152710571
OAI: oai:DiVA.org:du-45828
DiVA, id: diva2:1748821
Available from: 2023-04-04 Created: 2023-04-04 Last updated: 2023-04-25 Bibliographically approved

Open Access in DiVA

fulltext (2717 kB), 151 downloads
File information
File: FULLTEXT01.pdf  File size: 2717 kB  Checksum: SHA-512
4107dee6f9d2793c035ae0a9bae3d14b16ab9e3faae9c909380b20b1e8c717ea0c23c7f81f775ffe7ea911d3c24b89984c61e30938e0baf081d77941f9c95083
Type: fulltext  Mimetype: application/pdf

Other links

Publisher's full text, PubMed, Scopus

Person

Hintze, Arend
