The original paper is in English. Non-English content has been machine-translated and may contain typographical errors or mistranslations; for example, some numerals may be rendered as "XNUMX".
Copyright notice
Deep neural networks (DNNs) are widely used in many applications such as image, voice, and pattern recognition. However, it has recently been shown that a DNN can be vulnerable to small distortions in images that humans cannot distinguish. This type of attack is known as an adversarial example and is a significant threat to deep learning systems. The unknown-target-oriented generalized adversarial example, which can deceive most DNN classifiers, is even more threatening. We propose a generalized adversarial example attack method that can effectively attack unknown classifiers by using a hierarchical ensemble method. Our proposed scheme creates advanced ensemble adversarial examples to achieve reasonable attack success rates against unknown classifiers. Experimental results show that the proposed method achieves attack success rates against an unknown classifier up to 9.25% and 18.94% higher on MNIST data, and 4.1% and 13% higher on CIFAR10 data, compared with the previous ensemble method and the conventional baseline method, respectively.
Hyun KWON
Korea Advanced Institute of Science and Technology
Yongchul KIM
Korea Military Academy
Ki-Woong PARK
Sejong University
Hyunsoo YOON
Korea Advanced Institute of Science and Technology
Daeseon CHOI
Kongju National University
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Hyun KWON, Yongchul KIM, Ki-Woong PARK, Hyunsoo YOON, Daeseon CHOI, "Advanced Ensemble Adversarial Example on Unknown Deep Neural Network Classifiers" in IEICE TRANSACTIONS on Information and Systems,
vol. E101-D, no. 10, pp. 2485-2500, October 2018, doi: 10.1587/transinf.2018EDP7073.
Abstract: Deep neural networks (DNNs) are widely used in many applications such as image, voice, and pattern recognition. However, it has recently been shown that a DNN can be vulnerable to a small distortion in images that humans cannot distinguish. This type of attack is known as an adversarial example and is a significant threat to deep learning systems. The unknown-target-oriented generalized adversarial example that can deceive most DNN classifiers is even more threatening. We propose a generalized adversarial example attack method that can effectively attack unknown classifiers by using a hierarchical ensemble method. Our proposed scheme creates advanced ensemble adversarial examples to achieve reasonable attack success rates for unknown classifiers. Our experiment results show that the proposed method can achieve attack success rates for an unknown classifier of up to 9.25% and 18.94% higher on MNIST data and 4.1% and 13% higher on CIFAR10 data compared with the previous ensemble method and the conventional baseline method, respectively.
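The core idea behind ensemble adversarial examples is to perturb an input using gradients aggregated over several known classifiers, so that the perturbation is more likely to transfer to an unseen one. As a minimal illustration (not the paper's hierarchical method), the sketch below applies an FGSM-style signed step to the loss gradient averaged over a toy ensemble of linear softmax classifiers; all names and parameters here are illustrative assumptions.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def loss_grad_wrt_input(W, x, y):
    # Cross-entropy gradient w.r.t. the input for a linear softmax
    # classifier: d/dx CE(softmax(Wx), y) = W^T (softmax(Wx) - onehot(y))
    p = softmax(W @ x)
    p[y] -= 1.0
    return W.T @ p

def ensemble_fgsm(models, x, y, eps=0.3):
    # Average the input gradients over the ensemble, then take one
    # signed (FGSM-style) step, hoping the result transfers to
    # classifiers outside the ensemble.
    g = np.mean([loss_grad_wrt_input(W, x, y) for W in models], axis=0)
    return np.clip(x + eps * np.sign(g), 0.0, 1.0)

rng = np.random.default_rng(0)
models = [rng.standard_normal((3, 8)) for _ in range(4)]  # 4 toy classifiers
x = rng.random(8)                 # a "clean" input in [0, 1]
x_adv = ensemble_fgsm(models, x, y=0, eps=0.3)
```

In practice the ensemble members would be trained DNNs and the perturbation budget `eps` would be chosen to keep the distortion imperceptible; the paper's hierarchical scheme goes beyond this simple gradient averaging.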
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.2018EDP7073/_p
@ARTICLE{e101-d_10_2485,
author={Hyun KWON and Yongchul KIM and Ki-Woong PARK and Hyunsoo YOON and Daeseon CHOI},
journal={IEICE TRANSACTIONS on Information and Systems},
title={Advanced Ensemble Adversarial Example on Unknown Deep Neural Network Classifiers},
year={2018},
volume={E101-D},
number={10},
pages={2485-2500},
abstract={Deep neural networks (DNNs) are widely used in many applications such as image, voice, and pattern recognition. However, it has recently been shown that a DNN can be vulnerable to a small distortion in images that humans cannot distinguish. This type of attack is known as an adversarial example and is a significant threat to deep learning systems. The unknown-target-oriented generalized adversarial example that can deceive most DNN classifiers is even more threatening. We propose a generalized adversarial example attack method that can effectively attack unknown classifiers by using a hierarchical ensemble method. Our proposed scheme creates advanced ensemble adversarial examples to achieve reasonable attack success rates for unknown classifiers. Our experiment results show that the proposed method can achieve attack success rates for an unknown classifier of up to 9.25% and 18.94% higher on MNIST data and 4.1% and 13% higher on CIFAR10 data compared with the previous ensemble method and the conventional baseline method, respectively.},
keywords={},
doi={10.1587/transinf.2018EDP7073},
ISSN={1745-1361},
month={October},}
TY - JOUR
TI - Advanced Ensemble Adversarial Example on Unknown Deep Neural Network Classifiers
T2 - IEICE TRANSACTIONS on Information and Systems
SP - 2485
EP - 2500
AU - Hyun KWON
AU - Yongchul KIM
AU - Ki-Woong PARK
AU - Hyunsoo YOON
AU - Daeseon CHOI
PY - 2018
DO - 10.1587/transinf.2018EDP7073
JO - IEICE TRANSACTIONS on Information and Systems
SN - 1745-1361
VL - E101-D
IS - 10
JA - IEICE TRANSACTIONS on Information and Systems
Y1 - October 2018
AB - Deep neural networks (DNNs) are widely used in many applications such as image, voice, and pattern recognition. However, it has recently been shown that a DNN can be vulnerable to a small distortion in images that humans cannot distinguish. This type of attack is known as an adversarial example and is a significant threat to deep learning systems. The unknown-target-oriented generalized adversarial example that can deceive most DNN classifiers is even more threatening. We propose a generalized adversarial example attack method that can effectively attack unknown classifiers by using a hierarchical ensemble method. Our proposed scheme creates advanced ensemble adversarial examples to achieve reasonable attack success rates for unknown classifiers. Our experiment results show that the proposed method can achieve attack success rates for an unknown classifier of up to 9.25% and 18.94% higher on MNIST data and 4.1% and 13% higher on CIFAR10 data compared with the previous ensemble method and the conventional baseline method, respectively.
ER -