Previous studies on anomaly detection in videos have trained detectors in which reconstruction and prediction tasks are performed on normal data, so that frames on which task performance is low will be detected as anomalies during testing. This paper proposes a new approach that sorts video clips using a generative network structure. Our approach learns spatial contexts from appearances and temporal contexts from the order relationship of the frames. Experiments were conducted on four datasets, and we categorized the anomalous sequences by appearance and motion. Evaluations were conducted not only on each total dataset but also on each of the categories. Our method improved detection performance on anomalies that differ from normality in both appearance and motion. Moreover, combining our approach with a prediction method improved precision at high recall.
Wen SHAO
The University of Tokyo
Rei KAWAKAMI
Tokyo Institute of Technology, Denso IT Laboratory, Inc.
Takeshi NAEMURA
The University of Tokyo
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Citation
Wen SHAO, Rei KAWAKAMI, Takeshi NAEMURA, "Anomaly Detection Using Spatio-Temporal Context Learned by Video Clip Sorting" in IEICE TRANSACTIONS on Information,
vol. E105-D, no. 5, pp. 1094-1102, May 2022, doi: 10.1587/transinf.2021EDP7207.
Abstract: Previous studies on anomaly detection in videos have trained detectors in which reconstruction and prediction tasks are performed on normal data so that frames on which their task performance is low will be detected as anomalies during testing. This paper proposes a new approach that involves sorting video clips, by using a generative network structure. Our approach learns spatial contexts from appearances and temporal contexts from the order relationship of the frames. Experiments were conducted on four datasets, and we categorized the anomalous sequences by appearance and motion. Evaluations were conducted not only on each total dataset but also on each of the categories. Our method improved detection performance on both anomalies with different appearance and different motion from normality. Moreover, combining our approach with a prediction method produced improvements in precision at a high recall.
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.2021EDP7207/_p
BibTeX
@ARTICLE{e105-d_5_1094,
author={Wen SHAO and Rei KAWAKAMI and Takeshi NAEMURA},
journal={IEICE TRANSACTIONS on Information},
title={Anomaly Detection Using Spatio-Temporal Context Learned by Video Clip Sorting},
year={2022},
volume={E105-D},
number={5},
pages={1094-1102},
abstract={Previous studies on anomaly detection in videos have trained detectors in which reconstruction and prediction tasks are performed on normal data so that frames on which their task performance is low will be detected as anomalies during testing. This paper proposes a new approach that involves sorting video clips, by using a generative network structure. Our approach learns spatial contexts from appearances and temporal contexts from the order relationship of the frames. Experiments were conducted on four datasets, and we categorized the anomalous sequences by appearance and motion. Evaluations were conducted not only on each total dataset but also on each of the categories. Our method improved detection performance on both anomalies with different appearance and different motion from normality. Moreover, combining our approach with a prediction method produced improvements in precision at a high recall.},
keywords={},
doi={10.1587/transinf.2021EDP7207},
ISSN={1745-1361},
month={May},}
RIS
TY - JOUR
TI - Anomaly Detection Using Spatio-Temporal Context Learned by Video Clip Sorting
T2 - IEICE TRANSACTIONS on Information
SP - 1094
EP - 1102
AU - Wen SHAO
AU - Rei KAWAKAMI
AU - Takeshi NAEMURA
PY - 2022
DO - 10.1587/transinf.2021EDP7207
JO - IEICE TRANSACTIONS on Information
SN - 1745-1361
VL - E105-D
IS - 5
JA - IEICE TRANSACTIONS on Information
Y1 - May 2022
AB - Previous studies on anomaly detection in videos have trained detectors in which reconstruction and prediction tasks are performed on normal data so that frames on which their task performance is low will be detected as anomalies during testing. This paper proposes a new approach that involves sorting video clips, by using a generative network structure. Our approach learns spatial contexts from appearances and temporal contexts from the order relationship of the frames. Experiments were conducted on four datasets, and we categorized the anomalous sequences by appearance and motion. Evaluations were conducted not only on each total dataset but also on each of the categories. Our method improved detection performance on both anomalies with different appearance and different motion from normality. Moreover, combining our approach with a prediction method produced improvements in precision at a high recall.
ER -