Hyun KWON
Korea Military Academy
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Hyun KWON, "Toward Selective Adversarial Attack for Gait Recognition Systems Based on Deep Neural Network" in IEICE TRANSACTIONS on Information and Systems,
vol. E106-D, no. 2, pp. 262-266, February 2023, doi: 10.1587/transinf.2021EDL8080.
Abstract: Deep neural networks (DNNs) perform well for image recognition, speech recognition, and pattern analysis. However, such neural networks are vulnerable to adversarial examples. An adversarial example is a data sample created by adding a small amount of noise to an original sample in such a way that it is difficult for humans to identify but that will cause the sample to be misclassified by a target model. In a military environment, adversarial examples that are correctly classified by a friendly model while deceiving an enemy model may be useful. In this paper, we propose a method for generating a selective adversarial example that is correctly classified by a friendly gait recognition system and misclassified by an enemy gait recognition system. The proposed scheme generates the selective adversarial example by combining the loss for correct classification by the friendly gait recognition system with the loss for misclassification by the enemy gait recognition system. In our experiments, we used the CASIA Gait Database as the dataset and TensorFlow as the machine learning library. The results show that the proposed method can generate selective adversarial examples that have a 98.5% attack success rate against an enemy gait recognition system and are classified with 87.3% accuracy by a friendly gait recognition system.
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.2021EDL8080/_p
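The abstract describes generating a selective adversarial example by minimizing a combined objective: the loss for correct classification by the friendly model plus the loss for targeted misclassification by the enemy model. The sketch below illustrates that idea only in outline; it is not the paper's implementation. The two linear classifiers with random weights stand in for the trained gait-recognition DNNs, the `lam` weighting and the numerical gradient are simplifications for a self-contained example, and all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear classifiers standing in for the friendly and enemy gait
# recognition DNNs (random placeholder weights, not trained models).
W_friendly = rng.normal(size=(4, 3))
W_enemy = rng.normal(size=(4, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def cross_entropy(p, label):
    return -np.log(p[label] + 1e-12)

def combined_loss(x, y_true, y_target, lam):
    # Loss for correct classification by the friendly model plus a
    # lam-weighted loss for targeted misclassification by the enemy model.
    p_f = softmax(W_friendly.T @ x)
    p_e = softmax(W_enemy.T @ x)
    return cross_entropy(p_f, y_true) + lam * cross_entropy(p_e, y_target)

def selective_adversarial(x, y_true, y_target, steps=400, lr=0.05,
                          lam=1.0, bound=1.0):
    """Gradient descent on the combined loss; the distortion delta is
    clipped to [-bound, bound] to keep the example close to the original."""
    delta = np.zeros_like(x)
    eps = 1e-4
    for _ in range(steps):
        grad = np.zeros_like(delta)
        for i in range(x.size):
            d = np.zeros_like(x)
            d[i] = eps
            lp = combined_loss(x + delta + d, y_true, y_target, lam)
            lm = combined_loss(x + delta - d, y_true, y_target, lam)
            grad[i] = (lp - lm) / (2 * eps)  # central-difference gradient
        delta = np.clip(delta - lr * grad, -bound, bound)
    return x + delta
```

A real implementation along the paper's lines would replace the numerical gradient with automatic differentiation (e.g. `tf.GradientTape` in TensorFlow, the library used in the experiments) and operate on gait silhouettes from the CASIA Gait Database rather than a 4-dimensional toy input.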
@ARTICLE{e106-d_2_262,
author={Hyun KWON},
journal={IEICE TRANSACTIONS on Information and Systems},
title={Toward Selective Adversarial Attack for Gait Recognition Systems Based on Deep Neural Network},
year={2023},
volume={E106-D},
number={2},
pages={262-266},
abstract={Deep neural networks (DNNs) perform well for image recognition, speech recognition, and pattern analysis. However, such neural networks are vulnerable to adversarial examples. An adversarial example is a data sample created by adding a small amount of noise to an original sample in such a way that it is difficult for humans to identify but that will cause the sample to be misclassified by a target model. In a military environment, adversarial examples that are correctly classified by a friendly model while deceiving an enemy model may be useful. In this paper, we propose a method for generating a selective adversarial example that is correctly classified by a friendly gait recognition system and misclassified by an enemy gait recognition system. The proposed scheme generates the selective adversarial example by combining the loss for correct classification by the friendly gait recognition system with the loss for misclassification by the enemy gait recognition system. In our experiments, we used the CASIA Gait Database as the dataset and TensorFlow as the machine learning library. The results show that the proposed method can generate selective adversarial examples that have a 98.5% attack success rate against an enemy gait recognition system and are classified with 87.3% accuracy by a friendly gait recognition system.},
keywords={},
doi={10.1587/transinf.2021EDL8080},
ISSN={1745-1361},
month={February},}
TY - JOUR
TI - Toward Selective Adversarial Attack for Gait Recognition Systems Based on Deep Neural Network
T2 - IEICE TRANSACTIONS on Information and Systems
SP - 262
EP - 266
AU - Hyun KWON
PY - 2023
DO - 10.1587/transinf.2021EDL8080
JO - IEICE TRANSACTIONS on Information and Systems
SN - 1745-1361
VL - E106-D
IS - 2
JA - IEICE TRANSACTIONS on Information and Systems
Y1 - February 2023
AB - Deep neural networks (DNNs) perform well for image recognition, speech recognition, and pattern analysis. However, such neural networks are vulnerable to adversarial examples. An adversarial example is a data sample created by adding a small amount of noise to an original sample in such a way that it is difficult for humans to identify but that will cause the sample to be misclassified by a target model. In a military environment, adversarial examples that are correctly classified by a friendly model while deceiving an enemy model may be useful. In this paper, we propose a method for generating a selective adversarial example that is correctly classified by a friendly gait recognition system and misclassified by an enemy gait recognition system. The proposed scheme generates the selective adversarial example by combining the loss for correct classification by the friendly gait recognition system with the loss for misclassification by the enemy gait recognition system. In our experiments, we used the CASIA Gait Database as the dataset and TensorFlow as the machine learning library. The results show that the proposed method can generate selective adversarial examples that have a 98.5% attack success rate against an enemy gait recognition system and are classified with 87.3% accuracy by a friendly gait recognition system.
ER -