ReliefNet: Fast Bas-relief Generation from 3D Scenes
Citations: 24 · References: 39 · Related Papers: 10
The Intel Collaborative Research Institute for Computational Intelligence (ICRI-CI) has been heavily supporting Machine Learning and Deep Learning research since its foundation in 2012. We have asked six leading ICRI-CI Deep Learning researchers to address the challenge of "Why & When Deep Learning works", with the goal of looking inside Deep Learning, providing insights on how deep networks function, and uncovering key observations on their expressiveness, limitations, and potential. The output of this challenge is five papers that address different facets of deep learning: a high-level understanding of why and when deep networks work (and do not work), the impact of geometry on the expressiveness of deep networks, and making deep networks interpretable.
Citations (0)
The use of deep learning in the textile industry for defect detection has been a growing trend over the past 20 years. The majority of publications have investigated a specific problem in this field, and many published reviews or survey articles have examined papers from a more general perspective. Compared with published reviews, this study is the first up-to-date survey of deep learning approaches for fabric defect detection from 2003 to the present. As the main objective of this study is to review deep learning-based fabric defect detection, publications on fabric defect detection using deep learning are examined, and their methods, databases, performance rates, comparisons, and architecture types are compared with each other. The most widely used deep learning architectures are customized deep convolutional neural networks, long short-term memory networks, generative adversarial networks, and autoencoders. Beyond surveying the most used deep learning algorithms, the advantages and disadvantages of these approaches are also discussed.
Citations (43)
Deep learning is a sophisticated and adaptable technique that has found widespread use in fields such as natural language processing, machine learning, and computer vision. Deep fakes are one of the most recent deep-learning-powered applications to emerge: altered, high-quality, realistic videos or images that have lately gained popularity. Many remarkable uses of this technology are being investigated, but malicious uses of fake videos, such as fake news, celebrity pornographic videos, financial scams, and revenge porn, are currently on the rise in the digital world. As a result, celebrities, politicians, and other well-known persons are particularly vulnerable, making deep fake detection an urgent challenge. Numerous studies have been undertaken in recent years to understand how deep fakes work, and many deep learning-based algorithms for detecting deep fake videos or images have been presented. This study comprehensively evaluates deep fake production and detection technologies based on several deep learning algorithms, and discusses the limits of current approaches and the availability of databases. Given the ease with which deep fake videos and images can be generated and shared, the lack of a precise and automatic deep fake detection system poses a serious problem for the world. However, there have been various attempts to address this issue, and deep learning-based solutions outperform traditional approaches.
Citations (13)
Deep learning applications are used in various technologies, such as image processing, video classification, and speech recognition. Hardware implementation of deep learning applications has gained vast popularity for its high speed, which derives from its parallel computations. Although hardware implementations of deep learning applications have higher performance and speed than their software counterparts, they are highly power- and area-consuming. Several methodologies have been proposed for reducing the hardware complexity and power consumption of deep learning applications, such as approximate and stochastic computing, which are alternatives to exact computing for fault-tolerant circuits like deep learning applications. These methodologies reduce the hardware complexity and power consumption of the circuits at a limited loss of accuracy compared to exact designs. Several deep learning applications in the literature have been implemented with state-of-the-art stochastic and approximate computing methodologies, but a hybrid design of the two has never been used. Using the two methodologies together could give a much better result than using them separately, because each would compensate for the other's downsides. In this chapter, we first introduce stochastic and approximate computing methodologies and their computational elements. We then review deep learning arithmetic units and survey some stochastic and approximate deep learning applications. Next, we propose two area- and power-efficient hybrid stochastic-approximate designs for use in deep learning applications, and finally we conclude the chapter and discuss future research directions.
Citations (0)
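To make the stochastic computing idea above concrete, here is a minimal sketch (an illustrative example, not code from the chapter): values in [0, 1] are encoded as random bitstreams, and a single AND gate then multiplies two independent unipolar streams, which is why the hardware can be so small.

```python
import random

def to_bitstream(p, n, rng):
    # Encode probability p as a length-n random bitstream:
    # each bit is 1 with probability p.
    return [1 if rng.random() < p else 0 for _ in range(n)]

def stochastic_multiply(a_bits, b_bits):
    # In unipolar stochastic computing, a single AND gate
    # multiplies two independent bitstreams.
    return [a & b for a, b in zip(a_bits, b_bits)]

def decode(bits):
    # Decode a bitstream back to a value: the fraction of ones.
    return sum(bits) / len(bits)

rng = random.Random(0)
n = 100_000
a = to_bitstream(0.8, n, rng)
b = to_bitstream(0.5, n, rng)
prod = decode(stochastic_multiply(a, b))  # approximately 0.8 * 0.5 = 0.4
```

The accuracy loss the abstract mentions is visible here: the result is only approximate, and its precision grows with the bitstream length `n`.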
Deep learning techniques have overcome various hurdles in machine learning and have gradually become preeminent in artificial intelligence. Deep learning uses neural networks to make decisions the way humans do; it has flourished as an energetic approach, and its success is marked in various domains. This study covers some dominant deep learning algorithms, such as the convolutional neural network, fully convolutional network, autoencoder, and deep belief network, for analyzing medical images and detecting and diagnosing cancer at an early stage. The earlier cancer is detected, the simpler the disease is to treat. Early diagnosis is particularly relevant for some cancers, such as breast, skin, colon, and rectum, where it denies the disease the chance to grow and spread. Deep learning contributes to enhanced performance and better prediction in the detection of cancer from medical images. The paper also studies several deep learning software frameworks, such as TensorFlow, Theano, Caffe, Torch, and Keras. TensorFlow provides excellent functionality for deep learning, and Keras is a high-level neural network API that runs on top of TensorFlow or Theano. The survey concludes by presenting several future avenues and open challenges that researchers should address.
Citations (4)
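To illustrate the autoencoder idea named above, the sketch below trains a minimal linear autoencoder with plain gradient descent on synthetic data. This is a hypothetical toy example, not the study's model: real medical-imaging autoencoders are deep and convolutional, but the encode, bottleneck, decode, reconstruct structure is the same.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 8))        # synthetic "data": 256 samples, 8 features

d, k = 8, 3                          # input dimension, bottleneck dimension
W_enc = rng.normal(scale=0.1, size=(d, k))
W_dec = rng.normal(scale=0.1, size=(k, d))

def loss(X, W_enc, W_dec):
    # Mean squared reconstruction error.
    R = X @ W_enc @ W_dec
    return float(np.mean((R - X) ** 2))

lr = 0.1
initial = loss(X, W_enc, W_dec)
for _ in range(1000):
    Z = X @ W_enc                    # encode into the bottleneck
    R = Z @ W_dec                    # decode back to input space
    G = 2 * (R - X) / X.size         # gradient of the loss w.r.t. R
    gW_dec = Z.T @ G                 # backprop through the decoder
    gW_enc = X.T @ (G @ W_dec.T)     # backprop through the encoder
    W_enc -= lr * gW_enc
    W_dec -= lr * gW_dec
final = loss(X, W_enc, W_dec)        # reconstruction error after training
```

Because the bottleneck (3 units) is narrower than the input (8 features), the network is forced to learn a compressed representation, which is the property that makes autoencoders useful for medical image analysis.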
Abstract: Deep fakes are a rapidly growing concern in society, and detecting such manipulated media has become a significant challenge. Deep fake detection involves identifying whether a media file is authentic or generated using deep learning algorithms. In this project, we propose a deep learning-based approach for detecting deep fakes in videos. We use the Deepfake Detection Challenge dataset, which consists of real and deep fake videos, to train and evaluate our model. We employ a Convolutional Neural Network (CNN) architecture, which has shown great potential in previous studies. We pre-process the dataset with several techniques, such as resizing, normalization, and data augmentation, to enhance the quality of the input data. Our proposed model achieves a high detection accuracy of 97.5% on the Deepfake Detection Challenge dataset, demonstrating the effectiveness of the proposed approach. Our approach has the potential to be used in real-world scenarios, helping to mitigate the risks deep fakes pose to individuals and society. The proposed methodology can also be extended to detect deep fakes in other types of media, such as images and audio, providing a comprehensive solution for deep fake detection.
Citations (2)
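The pre-processing steps named in the abstract (resizing, normalization, augmentation) can be sketched as follows. This is an illustrative NumPy version, not the project's actual pipeline: the nearest-neighbour resize, the random horizontal flip, and the 128-pixel target size are all assumptions for the sake of the example.

```python
import numpy as np

def preprocess(frame, size=128):
    # Nearest-neighbour resize to size x size (real pipelines
    # typically use proper interpolation, e.g. bilinear).
    h, w = frame.shape[:2]
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    resized = frame[rows][:, cols]
    # Normalize pixel values from [0, 255] to [0, 1].
    return resized.astype(np.float32) / 255.0

def augment(batch, rng):
    # Simple data augmentation: random horizontal flip per image.
    flips = rng.random(len(batch)) < 0.5
    return np.stack([img[:, ::-1] if f else img
                     for img, f in zip(batch, flips)])

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(270, 480, 3), dtype=np.uint8)  # fake video frame
x = preprocess(frame)
batch = augment(np.stack([x, x]), rng)
```

Normalizing to [0, 1] and augmenting with flips are standard choices for CNN inputs; they keep activations well scaled and discourage the model from overfitting to left/right orientation.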
Abstract: Recently, a machine learning (ML) area called deep learning emerged in the computer vision field and became very popular in many fields. It started from an event in late 2012, when a deep learning approach based on a convolutional neural network (CNN) won an overwhelming victory in the best-known worldwide computer vision competition, ImageNet Classification. Since then, researchers in many fields, including medical image analysis, have started actively participating in the explosively growing field of deep learning. In this paper, deep learning techniques and their applications to medical image analysis are surveyed. This survey overviews 1) standard ML techniques in the computer vision field, 2) what changed in ML before and after the introduction of deep learning, 3) ML models in deep learning, and 4) applications of deep learning to medical image analysis. The comparison of ML before and after deep learning reveals that ML with feature input (feature-based ML) was dominant before the introduction of deep learning, and that the major and essential difference is that deep learning learns image data directly, without object segmentation or feature extraction; this is the source of its power, although the depth of the model is also an important attribute. The survey also reveals that there is a long history of deep-learning techniques in the class of ML with image input, predating the term "deep learning": even before the term existed, this class of ML was applied to various problems in medical image analysis, including classification between lesions and non-lesions, classification between lesion types, segmentation of lesions or organs, and detection of lesions.
ML with image input, including deep learning, is a very powerful, versatile technology with higher performance, which can bring the current state-of-the-art performance level of medical image analysis to the next level; it is expected that deep learning, or ML with image input, will be the mainstream technology in medical image analysis in the next few decades. Keywords: Deep learning, Convolutional neural network, Massive-training artificial neural network, Computer-aided diagnosis, Medical image analysis, Classification
Citations (2)