    LSANNet: A lightweight convolutional neural network for maize leaf disease identification
1 Citation · 31 References · 10 Related Papers
Research using deep learning has recently been very active. Within convolutional neural networks (CNNs), one family of deep learning models, architectures such as AlexNet, VggNet, GoogleNet, ResNet, and DenseNet have emerged and are used for image classification and object detection. This paper addresses image classification for age estimation. Rather than using a multi-class classifier over the age classes, as in the existing Comparative Region Convolutional Neural Network (CRCNN), we propose the Comparative Convolutional Neural Network for Age Estimation (CCNNAE), which classifies age using a binary classifier that only compares the ages of two classes. Using a binary classifier mitigates the imbalance in the number of database samples per class, and because the comparative classifier applies the labels of several images to a single image through pairwise comparisons, the effective database size is augmented, so prediction accuracy increases over existing methods. CCNNAE learns an M-dimensional age-dissimilarity measure between two images within a batch. In experiments, accuracy and mean absolute error (MAE) were 90.2% and 2.77, respectively, showing that the method estimates age discriminatively.
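The comparative idea above can be sketched in a few lines: a binary comparator answers "is A older than B?" for image pairs, and an age is read off by comparing a query against references with known ages. This is a minimal illustrative sketch, not the paper's method; `comparator`, `estimate_age`, and the rank-based readout are all hypothetical names and simplifications.

```python
def estimate_age(comparator, query, references):
    """Estimate age from pairwise comparisons against known references.

    comparator(a, b) -> True if a is judged older than b.
    references: list of (image, age) pairs, assumed sorted by age.
    Returns the reference age at the crossover rank.
    """
    older_count = sum(1 for img, _ in references if comparator(query, img))
    # The query was judged older than `older_count` references;
    # read off the age at that rank (clamped to the list).
    idx = min(older_count, len(references) - 1)
    return references[idx][1]
```

A toy comparator that compares true ages directly shows the readout: a query "older than" the first two of four references lands on the third reference's age.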
Public concern about the quality of higher education is high, and faculty development (FD), an activity for improving teaching, is widely practiced. As a means of giving instructors feedback on their lectures for FD, engineering approaches exist that analyze the state of the audience. Methods that analyze audience state from camera footage are highly deployable in educational settings in terms of cost and impact on the audience. However, previous video-analysis attempts lacked effective features that could cope with diverse situations, and their feature design was insufficient, so applicable scenes were limited. This study frames audience-state estimation as a pattern-recognition problem and proposes an audience-state estimation system using convolutional neural networks (CNNs), which acquire features within a machine-learning framework. We also raise the problems that arise when using a CNN as a classifier and discuss the features the proposed system acquired through learning. In performance evaluation experiments on video data recorded in various environments, the proposed system achieved a precision of 84.8% and a recall of 61.8% for audience detection, and an average accuracy of 72.8% over all states. CNN-based audience-state estimation showed high performance even in a setting with limited training data, suggesting that it efficiently acquires audience-related features.
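The precision and recall figures quoted above follow the standard definitions over true positives, false positives, and false negatives. As a quick reference, a minimal sketch (the function name and counts are illustrative, not from the paper):

```python
def precision_recall(tp, fp, fn):
    """Precision = TP/(TP+FP); recall = TP/(TP+FN).

    Guards against empty denominators by returning 0.0.
    """
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

A precision of 84.8% with a recall of 61.8%, as reported, means the detector's positives are mostly correct but it misses a sizable fraction of actual audience instances.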
A convolutional neural network (CNN) is a deep learning algorithm that is useful for finding patterns in images. Intelligent software automates the understanding of images and speech: by extracting distinctive features on its own, it can identify objects, recognize faces, and diagnose diseases from medical images. With the help of CNNs, software acquires knowledge of patterns directly from raw data. These developments play a prominent role in medical imaging, where classification, segmentation, and diagnosis are the areas in which CNNs have marked their importance. A large array of improvements to CNNs has been achieved in the last few years. We provide a short overview of the role of CNNs in medical image analysis and propose a shallow CNN model as an automatic diagnosis system. This work concentrates on three key elements: (1) the building blocks of convolutional neural networks; (2) an introduction to various CNN architectures; and (3) the challenges of applying CNNs to medical image analysis.
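The basic building block mentioned in point (1), the convolution operation, can be shown in miniature. This is a plain-Python sketch of valid-mode 2D cross-correlation (the operation CNN frameworks typically call "convolution"), purely for illustration; real layers add channels, strides, padding, and learned kernels.

```python
def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation: slide the kernel over the image
    and take the sum of elementwise products at each position."""
    kh, kw = len(kernel), len(kernel[0])
    ih, iw = len(image), len(image[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            s = sum(image[i + u][j + v] * kernel[u][v]
                    for u in range(kh) for v in range(kw))
            row.append(s)
        out.append(row)
    return out
```

With a 2x2 all-ones kernel, each output value is simply the sum of a 2x2 patch, which is how a kernel "detects" a local pattern.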
Summary: This abstract demonstrates the capability of convolutional neural networks (CNNs) to predict faults on a challenging onshore Texas dataset. Furthermore, the paper compares the quality of the prediction cubes generated from pretrained 2D and 3D CNNs for extracting fault surfaces suitable for structural model building.
Prior to the invention of the ophthalmoscope by von Helmholtz roughly 150 years ago, eye doctors had no way of inspecting the area behind the pupil. The recent decade has seen significant progress in retinal imaging and image processing, opening up new frontiers in the study of the eye. Preventing permanent vision loss from ocular illness requires prompt diagnosis, and convolutional neural networks (CNNs) in particular have shown encouraging results in medical image analysis in recent years. This research supports the use of CNNs for detecting ocular disorders. The proposed method takes retinal fundus images as input and employs a pretrained CNN model to extract relevant features. CNNs offer several benefits over more conventional methods for predicting eye disease: they can learn complicated characteristics from big datasets, which boosts their accuracy and generalizability; they can pick up on small changes in medical images that human experts may miss, allowing earlier diagnosis and treatment of ocular illness; and they offer objective, reproducible predictions, which lessens the variability and subjectivity of human evaluations. Because of their capacity to learn complex features from large datasets, CNNs are well suited to medical images such as retinal fundus photographs and optical coherence tomography (OCT) images. For ocular disease prediction, CNNs are trained on annotated medical image datasets to learn features that distinguish healthy from diseased eyes; new photographs can then be analyzed with these features to determine the presence and severity of ocular disorders. The CNN model was trained and validated using over four thousand fundus images representing various ocular diseases and conditions. Eighty percent of the images were used for training and the other twenty percent for testing. Training a network with two convolutional layers and two dense layers resulted in 80% prediction accuracy.
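The 80/20 train/test split described above is a routine preprocessing step. A minimal stdlib sketch, assuming a seeded shuffle (the function name and seed are illustrative; the paper does not describe its splitting code):

```python
import random

def train_test_split(items, test_fraction=0.2, seed=0):
    """Shuffle a dataset deterministically and split it into
    (train, test) with the given test fraction."""
    rng = random.Random(seed)
    shuffled = items[:]          # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_fraction)
    return shuffled[n_test:], shuffled[:n_test]
```

Seeding the shuffle makes the split reproducible, which matters when comparing models trained on the same partition.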
Real-time object detection in traffic surveillance is one of the latest research topics; this work applies a region-based convolutional neural network (R-CNN) algorithm and compares it against a plain convolutional neural network (CNN). Real-time object detection was performed with regional convolutional neural networks (N=78) versus convolutional neural networks (N=78), with the dataset split 70% for training and 30% for testing. The regional CNN had significantly better accuracy (75.6%) than the CNN (47.7%), attaining a significance value of p=0.041, and thus achieved significantly better detection of real-time objects in traffic surveillance.
Scene recognition is a computer vision task; this project detects a scene together with its attributes and category using a convolutional neural network (CNN). CNNs are effective for image classification because accurate results require learning the deep features of an image, which CNNs make possible. Two specific CNNs are trained here: one object-centric and the other scene-centric. The two networks run in parallel, which reduces the response time of the system and improves accuracy.
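Combining the two parallel networks amounts to fusing their per-class scores. A minimal late-fusion sketch, assuming each network emits a dict of class scores (the function names, weight, and class labels are illustrative, not from the project):

```python
def fuse_scores(object_scores, scene_scores, weight=0.5):
    """Weighted late fusion of per-class scores from two parallel CNNs."""
    classes = set(object_scores) | set(scene_scores)
    return {c: weight * object_scores.get(c, 0.0)
               + (1 - weight) * scene_scores.get(c, 0.0)
            for c in classes}

def predict(object_scores, scene_scores, weight=0.5):
    """Return the class with the highest fused score."""
    fused = fuse_scores(object_scores, scene_scores, weight)
    return max(fused, key=fused.get)
```

Because each network can score the image independently, the fusion step is the only synchronization point, which is what allows the parallel execution described above.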
The suggested study's objectives are to develop a unique criterion-based method for classifying RBC images and to increase classification accuracy by using a deep convolutional neural network instead of a conventional CNN algorithm. Materials and methods: A deep convolutional neural network was applied to a dataset-master image collection of 790 pictures. A deep-learning comparison between the conventional CNN and the deep CNN was developed to improve the classification accuracy of RBC images. Using G*Power, the sample size was calculated to be 27 for each group. Results: Compared to the conventional CNN, the deep CNN had the highest accuracy in classifying blood cell images (95.2%) and the lowest mean error (85.8 percent). There is a statistically significant difference between the classifiers of p=0.005. The study demonstrates that deep convolutional neural networks classify blood cell photographs more accurately than conventional neural networks [1].
Many rules must be followed to assess the safety of navigation, and certifiers and classifiers are responsible for ensuring compliance with all the rules that guarantee the integrity of vessels; however, this is not enough. The Naval District that includes the state of Pará ranked first in accidents in 2020 and third in 2021. Because of these accidents, concepts from artificial intelligence, machine learning, and deep learning were applied in this area. To assist in this process, this work proposes an application that uses a convolutional neural network (CNN) for image recognition of vessels and Plimsoll disks. A CNN learning technique was used to identify the type of ship from a bank of supplied images, and the same method was applied to identify whether a ship is at risk of accident through analysis of Plimsoll disk images. To train the CNNs, six different network architectures were evaluated by: changing the number of filters in each convolutional layer; varying the number of convolutional layers; and using transfer learning from the VGG-16 network with the fine-tuning technique. The results achieved in this work are promising and demonstrate the feasibility of employing a CNN to identify vessels from images of the Plimsoll disk.
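The architecture sweep described above (varying filter counts, varying depth, plus one transfer-learning variant) can be expressed as a small configuration grid. This is a hypothetical sketch only; the specific filter counts and depths below are illustrative and not taken from the paper, which does not list them.

```python
from itertools import product

def architecture_grid():
    """Enumerate candidate CNN configurations: a grid of from-scratch
    variants plus a VGG-16 transfer-learning variant."""
    variants = []
    for n_layers, n_filters in product([2, 3], [16, 32]):  # illustrative values
        variants.append({"type": "scratch",
                         "conv_layers": n_layers,
                         "filters": n_filters})
    variants.append({"type": "transfer",
                     "backbone": "VGG-16",
                     "fine_tune": True})
    return variants
```

Keeping the sweep as data (rather than hand-written model definitions) makes it easy to train and compare each variant in a loop.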
Our study briefly discusses convolutional neural network (CNN) architectures and their advantages and disadvantages, and describes the features of the U-Net architecture. Based on an analysis of CNN U-Net, a rationale is given for choosing U-Net as the main architecture for building the neural network models created and analyzed in this work to solve the problem of medical image segmentation. CNN architectures that can be used as the convolutional layers in U-Net were analyzed, and based on this analysis three architectures suitable for that role ("resnet34", "inceptionv3", and "vgg16") were selected and described. Using U-Net with these three backbones, three neural network models for medical image segmentation were created. The models were trained and tested, and the resulting metrics were analyzed. Experiments were carried out with each of the three models: images from the validation set were segmented and the segmented images presented. Based on the test metrics and the empirical data obtained from the experiments, the model most suitable for solving the medical image segmentation problem was determined. The segmentation algorithm was then improved: an algorithm is described that uses the predictions of all the created models and demonstrated a more accurate result than each of the considered models separately.
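The ensemble step at the end, combining the predictions of all three models, is commonly done by averaging per-pixel probabilities and thresholding. A minimal sketch of that idea, assuming each model outputs a 2D probability map (the function name and threshold are illustrative; the paper's exact combination rule is not specified here):

```python
def ensemble_masks(prob_maps, threshold=0.5):
    """Average per-pixel probabilities from several models,
    then threshold to produce a binary segmentation mask."""
    n = len(prob_maps)
    h, w = len(prob_maps[0]), len(prob_maps[0][0])
    mask = []
    for i in range(h):
        row = []
        for j in range(w):
            avg = sum(pm[i][j] for pm in prob_maps) / n
            row.append(1 if avg >= threshold else 0)
        mask.append(row)
    return mask
```

Averaging lets a pixel that two of three models agree on survive the threshold even if one model is unsure, which is why such ensembles often beat each member model.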