The original paper is in English. Non-English content has been machine-translated and may contain typographical errors or mistranslations; for example, some numerals may be rendered as "XNUMX".
Generative Adversarial Networks (GANs) are one of the most successful learning principles for generative models and have been widely applied to many generation tasks. Initially, the gradient penalty (GP) was applied in Wasserstein GAN to enforce Lipschitz continuity on the discriminator. Although the vanilla version of the gradient penalty was further modified for different purposes, seeking a better equilibrium and higher generation quality in adversarial learning remains challenging. Recently, DRAGAN was proposed to achieve local linearity in a surrounding data manifold by applying a noised gradient penalty that promotes local convexity in model optimization. However, we show that their approach imposes a burden on satisfying Lipschitz continuity for the discriminator. This conflict between Lipschitz continuity and local linearity in DRAGAN results in a poor equilibrium, and thus the generation quality is far from ideal. To this end, we propose a novel approach that benefits both local linearity and Lipschitz continuity to reach a better equilibrium without conflict. In detail, we apply our synchronized activation function in the discriminator to obtain a particular form of noised gradient penalty that achieves local linearity without losing Lipschitz continuity in the discriminator. Experimental results show that our method achieves superior image quality and outperforms WGAN-GP, DiracGAN, and DRAGAN in terms of Inception Score and Fréchet Inception Distance on real-world datasets.
Rui YANG
University of Tokyo
Raphael SHU
Amazon AI
Hideki NAKAYAMA
University of Tokyo
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Citation:
Rui YANG, Raphael SHU, Hideki NAKAYAMA, "Improving Noised Gradient Penalty with Synchronized Activation Function for Generative Adversarial Networks" in IEICE TRANSACTIONS on Information and Systems,
vol. E105-D, no. 9, pp. 1537-1545, September 2022, doi: 10.1587/transinf.2022EDP7019.
Abstract: Generative Adversarial Networks (GANs) are one of the most successful learning principles for generative models and have been widely applied to many generation tasks. Initially, the gradient penalty (GP) was applied in Wasserstein GAN to enforce Lipschitz continuity on the discriminator. Although the vanilla version of the gradient penalty was further modified for different purposes, seeking a better equilibrium and higher generation quality in adversarial learning remains challenging. Recently, DRAGAN was proposed to achieve local linearity in a surrounding data manifold by applying a noised gradient penalty that promotes local convexity in model optimization. However, we show that their approach imposes a burden on satisfying Lipschitz continuity for the discriminator. This conflict between Lipschitz continuity and local linearity in DRAGAN results in a poor equilibrium, and thus the generation quality is far from ideal. To this end, we propose a novel approach that benefits both local linearity and Lipschitz continuity to reach a better equilibrium without conflict. In detail, we apply our synchronized activation function in the discriminator to obtain a particular form of noised gradient penalty that achieves local linearity without losing Lipschitz continuity in the discriminator. Experimental results show that our method achieves superior image quality and outperforms WGAN-GP, DiracGAN, and DRAGAN in terms of Inception Score and Fréchet Inception Distance on real-world datasets.
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.2022EDP7019/_p
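For context, the abstract contrasts two gradient-penalty variants: the vanilla penalty of WGAN-GP, computed at interpolates between real and generated samples, and DRAGAN's noised penalty, computed at perturbed real samples to encourage local linearity around the data manifold. Below is a minimal PyTorch-style sketch of these two baseline penalty terms, for illustration only; the function names, tensor shapes, and default hyperparameters (lambda_gp, noise_std) are assumptions, not details taken from this paper, and the paper's proposed synchronized activation function is not reproduced here.

# Hypothetical sketch of the two baseline gradient penalties named in the
# abstract (WGAN-GP and DRAGAN). Assumes NCHW image batches and a
# scalar-output discriminator D; names and defaults are illustrative.
import torch

def wgan_gp_penalty(D, real, fake, lambda_gp=10.0):
    # Vanilla gradient penalty (WGAN-GP): push the gradient norm of D
    # toward 1 at random interpolates between real and generated samples.
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    interp = (eps * real + (1.0 - eps) * fake.detach()).requires_grad_(True)
    out = D(interp)
    grad, = torch.autograd.grad(out, interp,
                                grad_outputs=torch.ones_like(out),
                                create_graph=True)
    grad_norm = grad.flatten(1).norm(2, dim=1)
    return lambda_gp * ((grad_norm - 1.0) ** 2).mean()

def dragan_penalty(D, real, lambda_gp=10.0, noise_std=0.5):
    # Noised gradient penalty (DRAGAN): push the gradient norm of D
    # toward 1 at perturbed real samples, i.e. in a neighborhood of the
    # data manifold rather than on the line between real and fake.
    noised = (real + noise_std * real.std() * torch.rand_like(real)).requires_grad_(True)
    out = D(noised)
    grad, = torch.autograd.grad(out, noised,
                                grad_outputs=torch.ones_like(out),
                                create_graph=True)
    grad_norm = grad.flatten(1).norm(2, dim=1)
    return lambda_gp * ((grad_norm - 1.0) ** 2).mean()

Either penalty is simply added to the discriminator loss during training; the paper's argument is that DRAGAN's noised variant, as is, conflicts with the Lipschitz constraint that the vanilla penalty enforces.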
BibTeX:
@ARTICLE{e105-d_9_1537,
author={Rui YANG and Raphael SHU and Hideki NAKAYAMA},
journal={IEICE TRANSACTIONS on Information and Systems},
title={Improving Noised Gradient Penalty with Synchronized Activation Function for Generative Adversarial Networks},
year={2022},
volume={E105-D},
number={9},
pages={1537-1545},
abstract={Generative Adversarial Networks (GANs) are one of the most successful learning principles for generative models and have been widely applied to many generation tasks. Initially, the gradient penalty (GP) was applied in Wasserstein GAN to enforce Lipschitz continuity on the discriminator. Although the vanilla version of the gradient penalty was further modified for different purposes, seeking a better equilibrium and higher generation quality in adversarial learning remains challenging. Recently, DRAGAN was proposed to achieve local linearity in a surrounding data manifold by applying a noised gradient penalty that promotes local convexity in model optimization. However, we show that their approach imposes a burden on satisfying Lipschitz continuity for the discriminator. This conflict between Lipschitz continuity and local linearity in DRAGAN results in a poor equilibrium, and thus the generation quality is far from ideal. To this end, we propose a novel approach that benefits both local linearity and Lipschitz continuity to reach a better equilibrium without conflict. In detail, we apply our synchronized activation function in the discriminator to obtain a particular form of noised gradient penalty that achieves local linearity without losing Lipschitz continuity in the discriminator. Experimental results show that our method achieves superior image quality and outperforms WGAN-GP, DiracGAN, and DRAGAN in terms of Inception Score and Fréchet Inception Distance on real-world datasets.},
keywords={},
doi={10.1587/transinf.2022EDP7019},
ISSN={1745-1361},
month={September},}
RIS:
TY - JOUR
TI - Improving Noised Gradient Penalty with Synchronized Activation Function for Generative Adversarial Networks
T2 - IEICE TRANSACTIONS on Information and Systems
SP - 1537
EP - 1545
AU - Rui YANG
AU - Raphael SHU
AU - Hideki NAKAYAMA
PY - 2022
DO - 10.1587/transinf.2022EDP7019
JO - IEICE TRANSACTIONS on Information and Systems
SN - 1745-1361
VL - E105-D
IS - 9
JA - IEICE TRANSACTIONS on Information and Systems
Y1 - 2022/09
AB - Generative Adversarial Networks (GANs) are one of the most successful learning principles for generative models and have been widely applied to many generation tasks. Initially, the gradient penalty (GP) was applied in Wasserstein GAN to enforce Lipschitz continuity on the discriminator. Although the vanilla version of the gradient penalty was further modified for different purposes, seeking a better equilibrium and higher generation quality in adversarial learning remains challenging. Recently, DRAGAN was proposed to achieve local linearity in a surrounding data manifold by applying a noised gradient penalty that promotes local convexity in model optimization. However, we show that their approach imposes a burden on satisfying Lipschitz continuity for the discriminator. This conflict between Lipschitz continuity and local linearity in DRAGAN results in a poor equilibrium, and thus the generation quality is far from ideal. To this end, we propose a novel approach that benefits both local linearity and Lipschitz continuity to reach a better equilibrium without conflict. In detail, we apply our synchronized activation function in the discriminator to obtain a particular form of noised gradient penalty that achieves local linearity without losing Lipschitz continuity in the discriminator. Experimental results show that our method achieves superior image quality and outperforms WGAN-GP, DiracGAN, and DRAGAN in terms of Inception Score and Fréchet Inception Distance on real-world datasets.
ER -