    Resolution enhancement in microscopic imaging based on generative adversarial network with unpaired data
    Citations: 15 · References: 35 · Related Papers: 10
    Deep-learning-based generative models such as deepfakes can generate striking images and videos. However, these models may need significant modification when applied to generating crystal material structures, whose building blocks, physical atoms, are very different from pixels. Naively transferred generative models tend to produce a large proportion of physically infeasible crystal structures that are neither stable nor synthesizable. Here we show that by exploiting physics-oriented data augmentation, loss-function terms, and post-processing, our generative adversarial network (GAN) based models can generate crystal structures with higher physical feasibility, extending our previous models, which could only generate cubic structures.
    Generative adversarial network
    Generative model
    Crystal structure
    Citations (1)
    Generative adversarial networks (GANs) can model highly complex distributions of real-world data, especially images. This paper compares two GAN variants: the Multi-Agent Diverse Generative Adversarial Network (MAD-GAN), which uses multiple generators and one discriminator, and the Generative Multi-Adversarial Network (GMAN), which uses multiple discriminators and one generator. The results show that both MAD-GAN and GMAN outperform DCGAN. In addition, MAD-GAN performs better than GMAN at avoiding mode collapse and when the dataset contains many distinct modes.
    Discriminator
    Generative adversarial network
    Mode (statistics)
    This paper presents a super-resolution image restoration method based on a generative adversarial network (GAN), one of the deep learning models, that takes the characteristics of text into account. Existing super-resolution restoration methods mainly learn features of general images and therefore show insufficient performance when restoring text regions. Because the features of text images are distinct from those of general images, a separate process is needed to handle them. Accordingly, this paper adds text to an existing dataset and trains on general images and text images separately, yielding an improved super-resolution restoration method for text regions. Experimental results show that the proposed algorithm improves restoration quality for images containing text.
    Generative adversarial network
    As the first successful general-purpose approach to generating new data, GANs have shown great potential for a wide range of practical applications, including in art, fashion, medicine, and finance, and are among the most popular research topics of recent years. GANs are an exciting class of machine learning models known for their ability to produce synthetic but realistic-looking data. A GAN is composed of two neural networks that work against each other. This paper aims to compare GAN variants under the same initial conditions: the same dataset, the same number of iterations, and equally sized data partitions. The standard Generative Adversarial Network (GAN), the Deep Convolutional Generative Adversarial Network (DCGAN), the Semi-Supervised Generative Adversarial Network (SGAN/SeGAN), and the Conditional Generative Adversarial Network (CoGAN/CGAN) were used. Their performance was evaluated on the MNIST dataset, and the results are presented both numerically and visually.
    MNIST database
    Generative adversarial network
    Deep Neural Networks
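    The abstracts above describe a GAN as two networks working against each other: a generator that maps noise to samples and a discriminator that tries to tell real samples from generated ones. As a minimal sketch of that adversarial loop (a toy one-dimensional example in plain NumPy with manually derived gradients; all parameter names and learning-rate choices here are illustrative, not from any of the cited papers):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

    # Toy 1-D GAN: generator G(z) = w*z + b is a linear map of noise,
    # discriminator D(x) = sigmoid(u*x + c) is logistic regression.
    w, b = 0.1, 0.0   # generator parameters
    u, c = 0.1, 0.0   # discriminator parameters
    lr = 0.05

    for step in range(500):
        x_real = rng.normal(3.0, 1.0, size=64)   # real data ~ N(3, 1)
        z = rng.normal(size=64)
        x_fake = w * z + b

        # Discriminator ascent step: push D(real) toward 1, D(fake) toward 0.
        d_real = sigmoid(u * x_real + c)
        d_fake = sigmoid(u * x_fake + c)
        u += lr * np.mean((1 - d_real) * x_real - d_fake * x_fake)
        c += lr * np.mean((1 - d_real) - d_fake)

        # Generator ascent step (non-saturating loss): push D(fake) toward 1.
        x_fake = w * z + b
        d_fake = sigmoid(u * x_fake + c)
        grad_x = (1 - d_fake) * u        # d/dx of log D(x)
        w += lr * np.mean(grad_x * z)
        b += lr * np.mean(grad_x)

    # After training, the generator's offset b should have drifted from 0
    # toward the real data mean of 3.
    ```

    The alternating updates are the essence of the adversarial game: each player takes a gradient step on the same objective, in opposite directions.
    
    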
    A generative adversarial network (GAN) is a generative deep learning model that produces data similar to real data. Since its introduction in 2014, many derived models have been developed and applied in various fields. This study summarizes derived GANs that have been evaluated as high-performing and compares their performance. It also estimates an appropriate dimensionality for the GAN's input latent space and assesses the suitability of the Fréchet Inception distance (FID) and the Inception score as measures of the quality of generated data. Experimental results show that GAN-NS and LSGAN deliver stable, strong performance and that FID is the better measure. Moreover, a 10-dimensional latent space produced results as good as the typical 100-dimensional one.
    Generative adversarial network
    Generative model
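    The FID mentioned in the abstract above is the Fréchet distance between two Gaussians fitted to feature statistics of real and generated samples: FID = ||mu1 - mu2||² + Tr(C1 + C2 - 2(C1·C2)^(1/2)). A hedged NumPy-only sketch of that closed form follows; in practice the features come from an Inception network, whereas the random toy features here are stand-ins:

    ```python
    import numpy as np

    def _sqrtm_psd(a):
        # Square root of a symmetric positive semi-definite matrix via
        # eigendecomposition (eigenvalues clipped at zero for stability).
        vals, vecs = np.linalg.eigh(a)
        vals = np.clip(vals, 0.0, None)
        return vecs @ np.diag(np.sqrt(vals)) @ vecs.T

    def frechet_distance(mu1, cov1, mu2, cov2):
        # ||mu1 - mu2||^2 + Tr(cov1 + cov2 - 2 (cov1 cov2)^{1/2}).
        # Tr((cov1 cov2)^{1/2}) is computed via the symmetric form
        # (cov1^{1/2} cov2 cov1^{1/2})^{1/2}, which has the same trace.
        diff = mu1 - mu2
        s1 = _sqrtm_psd(cov1)
        covmean = _sqrtm_psd(s1 @ cov2 @ s1)
        return float(diff @ diff + np.trace(cov1 + cov2 - 2.0 * covmean))

    def stats(x):
        # Mean and covariance of a (samples, features) array.
        return x.mean(axis=0), np.cov(x, rowvar=False)

    # Toy features standing in for Inception activations.
    rng = np.random.default_rng(0)
    real = rng.normal(size=(500, 8))
    fake = rng.normal(loc=0.5, size=(500, 8))

    mu_r, c_r = stats(real)
    mu_f, c_f = stats(fake)
    fid_same = frechet_distance(mu_r, c_r, mu_r, c_r)   # ~0 for identical stats
    fid_diff = frechet_distance(mu_r, c_r, mu_f, c_f)   # > 0 for shifted fakes
    ```

    A lower FID means the generated feature distribution is closer to the real one, which is why the study above uses it to rank GAN variants.
    
    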
    When designing a machine part, it is desirable to generate shapes that satisfy performance requirements. Deep generative models are used for this purpose, typically generative adversarial networks (GANs), variational autoencoders (VAEs), and VAEGAN. In the present study, we compare these three generative models and explain the necessity of physics-guided generative models.
    Generative adversarial network
    Generative model
    Recent successes in generative modeling have accelerated studies on the subject and attracted the attention of researchers. One of the most important methods behind this success is the Generative Adversarial Network (GAN). It has many application areas, such as virtual reality (VR), augmented reality (AR), super-resolution, and image enhancement. Despite recent advances in hair synthesis and style transfer using deep learning and generative modeling, the complex nature of hair still poses unsolved challenges. Methods proposed in the literature generally focus on making high-quality hair edits on images. In this thesis, a generative adversarial network method is proposed for the hair synthesis problem, with the aim of achieving real-time hair synthesis while producing visual outputs that compete with the best methods in the literature. The proposed method was trained on the FFHQ dataset and then evaluated on hair style transfer and hair reconstruction tasks. The results of these tasks and the method's running time were compared with MichiGAN, one of the best methods in the literature, at a resolution of 128x128. The comparison shows that the proposed method achieves results competitive with MichiGAN in terms of realistic hair synthesis and performs better in terms of running time.
    Generative adversarial network
    Image Synthesis
    Transfer of learning
    Citations (0)