Are Distributed Representations in Neural Networks More Robust Against Malicious Fooling Attacks?
Dalarna University, School of Information and Engineering.
Dalarna University, School of Information and Engineering.
2023 (English). Independent thesis, Advanced level (degree of Master (One Year)), 10 credits / 15 HE credits. Student thesis.
Alternative title
Är distribuerade representationer i neurala nätverk mer robusta mot illvilliga lurattacker? (Swedish)
Abstract [en]

A plethora of data from sources like IoT, social websites, health, business, and many more has revolutionized the digital world in recent years. To make effective use of this data for analysis, prediction, or the automation of applications, the demand for machine learning and artificial intelligence has grown over time. With the growing capability of neural networks, they are now used in real-time applications related to medical diagnosis, weather forecasting, speech and facial recognition, stock markets, etc. Despite the undoubted processing and intelligence capabilities of neural networks, key challenges must be addressed for their effective implementation in real-time applications. One of these challenges is their vulnerability to fooling, that is, making networks classify wrongly by inducing very small changes in their inputs. How information is distributed in the network might be a predictor of susceptibility to fooling, so the role of information distribution in fooling robustness is investigated here. Specifically, we use dropout, a well-known regularization technique, to induce more distributed representations and test network robustness to fooling induced by the Fast Gradient Sign Method (FGSM). The findings show that information smearedness is a better predictor of robustness to fooling than dropout.
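The abstract names two concrete techniques: dropout as a regularizer that encourages more distributed (smeared) representations, and FGSM as the fooling attack. A minimal sketch of both follows, assuming a PyTorch setup; the architecture, dropout rate, and epsilon are illustrative assumptions, not values taken from the thesis.

```python
# Illustrative sketch, not the thesis code: a small dropout-regularized
# network and an FGSM attack against it.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MLP(nn.Module):
    def __init__(self, p_drop=0.5):  # dropout rate is an assumed value
        super().__init__()
        self.fc1 = nn.Linear(28 * 28, 256)
        self.drop = nn.Dropout(p_drop)  # dropout encourages more distributed representations
        self.fc2 = nn.Linear(256, 10)

    def forward(self, x):
        x = x.view(x.size(0), -1)
        x = F.relu(self.fc1(x))
        x = self.drop(x)
        return self.fc2(x)

def fgsm_attack(model, x, y, epsilon=0.1):
    """Return adversarial examples x_adv = x + epsilon * sign(grad_x loss)."""
    x = x.clone().detach().requires_grad_(True)
    model.zero_grad()
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep inputs in a valid pixel range

# Usage sketch: with a trained model in eval mode, compare accuracy on clean
# inputs against accuracy on fgsm_attack(model, images, labels); the drop in
# accuracy measures how easily the network is fooled.
```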

Place, publisher, year, edition, pages
2023.
Keywords [en]
Adversarial attacks, Information Smearedness, Artificial Neural Networks, Information Relay, Dropout, Fast Gradient Sign Method
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:du-45453
OAI: oai:DiVA.org:du-45453
DiVA id: diva2:1736740
Subject / course
Microdata Analysis
Available from: 2023-02-14 Created: 2023-02-14 Last updated: 2023-02-14

Open Access in DiVA

fulltext (602 kB), 211 downloads
File information
File name: FULLTEXT01.pdf
File size: 602 kB
Checksum (SHA-512): b4d59aa57c4ef638a1c54672386987b3f534127f05f988bb138ce1dab08e4ac65daa0a8d6cf225a51110c48247697bbb5760b981b4d9f69b408dc7ddf1c69aff
Type: fulltext
Mimetype: application/pdf

By organisation
School of Information and Engineering
Computer Sciences

Total: 211 downloads
The number of downloads is the sum of all downloads of full texts. It may include, e.g., previous versions that are now no longer available.
