    Abstract:
Determining the type of a kidney stone allows urologists to prescribe a treatment that avoids recurrence of renal lithiasis. An automated in-vivo image-based classification method would be an important step towards the immediate identification of the kidney stone type required as a first phase of the diagnosis. The literature has shown, on ex-vivo data (i.e., under very controlled scene and image acquisition conditions), that automated kidney stone classification is indeed feasible. This pilot study compares the kidney stone recognition performance of six shallow machine learning methods and three deep-learning architectures, tested on in-vivo images of the four most frequent urinary calculi types acquired with an endoscope during standard ureteroscopies. This contribution details the database construction and the design of the tested kidney stone classifiers. Although the best results were obtained by the Inception v3 architecture (weighted precision, recall and F1-score of 0.97, 0.98 and 0.97, respectively), it is also shown that choosing an appropriate colour space and texture features allows a shallow machine learning method to closely approach the performance of the most promising deep-learning methods (the XGBoost classifier led to weighted precision, recall and F1-score values of 0.96). This paper is the first to explore the most discriminant features to extract from images acquired during ureteroscopies.
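As a hedged illustration of the shallow-learning route described above, the sketch below trains an XGBoost classifier on hand-crafted colour and texture descriptors. The HSV histogram and local-binary-pattern features, the random stand-in patches, and all hyperparameters are assumptions for illustration, not the authors' actual pipeline; it also assumes scikit-image and xgboost are available.

```python
# Hypothetical sketch: shallow-ML kidney stone classification from colour and
# texture features, in the spirit of the XGBoost baseline described above.
import numpy as np
from skimage.color import rgb2hsv, rgb2gray
from skimage.feature import local_binary_pattern
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

def extract_features(image):
    """HSV colour histogram concatenated with an LBP texture histogram (assumed descriptors)."""
    hsv = rgb2hsv(image)
    colour_hist, _ = np.histogramdd(hsv.reshape(-1, 3), bins=(8, 8, 8),
                                    range=((0, 1), (0, 1), (0, 1)))
    gray = (rgb2gray(image) * 255).astype(np.uint8)
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([colour_hist.ravel() / colour_hist.sum(), lbp_hist])

# Stand-in data: random RGB patches for 4 stone types (replace with real endoscopic patches).
rng = np.random.default_rng(0)
images = rng.random((80, 64, 64, 3))
labels = np.repeat(np.arange(4), 20)

X = np.stack([extract_features(img) for img in images])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, stratify=labels,
                                          test_size=0.25, random_state=0)
clf = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```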
To effectively combine the features extracted by Fisher linear discriminant analysis and maximum scatter difference discriminant analysis into a feature set that reflects the samples more comprehensively, an enhanced discriminant analysis method based on canonical correlation analysis is proposed in this paper. Fisher linear discriminant analysis (LDA) and maximum scatter difference discriminant analysis (MSDDA) are first adopted to extract two sets of features in the same pattern space. The canonical correlation analysis method is then used to fuse the two feature sets and derive more effective canonical discriminant features. Finally, extensive experiments are performed on the ORL face database. The experimental results verify the effectiveness of the proposed method.
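A minimal sketch of this fusion idea follows: two discriminant feature sets are extracted (Fisher LDA via scikit-learn, plus a small hand-rolled maximum-scatter-difference projection) and fused with canonical correlation analysis. The MSD balance parameter c, the synthetic data, and the concatenation of canonical variates are assumptions, not the paper's exact formulation.

```python
# Illustrative sketch (not the paper's code): fuse Fisher-LDA features with
# maximum-scatter-difference (MSD) features via canonical correlation analysis.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.cross_decomposition import CCA

def scatter_matrices(X, y):
    """Within-class (Sw) and between-class (Sb) scatter matrices."""
    overall_mean = X.mean(axis=0)
    Sw = np.zeros((X.shape[1], X.shape[1]))
    Sb = np.zeros_like(Sw)
    for label in np.unique(y):
        Xc = X[y == label]
        centered = Xc - Xc.mean(axis=0)
        Sw += centered.T @ centered
        d = (Xc.mean(axis=0) - overall_mean)[:, None]
        Sb += len(Xc) * (d @ d.T)
    return Sw, Sb

def msd_features(X, y, n_components, c=1.0):
    """Project onto leading eigenvectors of Sb - c*Sw (maximum scatter difference criterion)."""
    Sw, Sb = scatter_matrices(X, y)
    eigvals, eigvecs = np.linalg.eigh(Sb - c * Sw)
    W = eigvecs[:, np.argsort(eigvals)[::-1][:n_components]]
    return X @ W

# Synthetic stand-in for (already dimension-reduced) face feature vectors from 4 classes.
X, y = make_classification(n_samples=200, n_features=30, n_informative=10,
                           n_classes=4, random_state=0)

n_comp = len(np.unique(y)) - 1
F_lda = LinearDiscriminantAnalysis(n_components=n_comp).fit_transform(X, y)
F_msd = msd_features(X, y, n_components=n_comp)

# Fuse the two feature sets: canonical variates of each set, then concatenate.
cca = CCA(n_components=2)
U, V = cca.fit_transform(F_lda, F_msd)
fused = np.hstack([U, V])   # canonical discriminant features to feed a classifier
```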
Near infrared spectra of apples contain the most useful information about the soluble solids content and firmness of apples. A new feature extraction method, called sorting discriminant analysis, was proposed; it uses a sorting method based on principal component analysis and linear discriminant analysis to extract features from near infrared spectra. The objective of this research was to use feature extraction methods such as principal component analysis, linear discriminant analysis, discriminant partial least squares, and sorting discriminant analysis to extract information from near infrared spectra of "Huaniu" and "Fuji" apples. After feature extraction, the nearest neighbor classifier was used to classify the apples, and the classification results were compared to determine which feature extraction method performed best. The experimental results showed that principal component analysis + linear discriminant analysis and sorting discriminant analysis could extract discriminant information from the near infrared spectra of apples better than principal component analysis and discriminant partial least squares, and that sorting discriminant analysis performed best of all. Sorting discriminant analysis not only compresses the high-dimensional near infrared spectra to low-dimensional data but also projects the spectra into a new feature space where the data can be classified easily and effectively, and it is superior to principal component analysis + linear discriminant analysis in most cases.
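The baseline part of this comparison (PCA + LDA feeding a nearest neighbor classifier) can be sketched as below; the sorting step of sorting discriminant analysis itself is not reproduced, and the synthetic spectra are only stand-ins for the apple NIR data.

```python
# Minimal sketch of the PCA + LDA + nearest-neighbor baseline described above,
# applied to synthetic stand-ins for NIR spectra of two apple cultivars.
from sklearn.datasets import make_classification
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# Stand-in for spectra: 120 samples x 256 "wavelengths", 2 cultivars.
X, y = make_classification(n_samples=120, n_features=256, n_informative=20,
                           n_classes=2, random_state=0)

pipeline = make_pipeline(
    PCA(n_components=20),                 # compress the spectra
    LinearDiscriminantAnalysis(),         # project onto a discriminant axis
    KNeighborsClassifier(n_neighbors=1),  # nearest neighbor classification
)
print(cross_val_score(pipeline, X, y, cv=5).mean())
```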
Nested Cavity Classifier (NCC) is a classification rule that partitions the feature space, in parallel coordinates, into convex hulls to build decision regions. It is claimed in some of the literature that this geometric classifier is superior to many others, particularly in higher dimensions. First, we give an example of how NCC can be inefficient, then motivate a remedy by combining NCC with the Linear Discriminant Analysis (LDA) classifier. We coin the term Nested Cavity Discriminant Analysis (NCDA) for the resulting classifier. Second, a simulation study is conducted to compare NCC and NCDA with two other basic classifiers, Linear and Quadratic Discriminant Analysis. NCC alone proves inferior to the others, while NCDA always outperforms NCC and competes with LDA and QDA.
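The LDA-versus-QDA side of such a simulation study can be illustrated as follows; NCC and NCDA are not implemented here, and the two Gaussian classes with unequal covariances are an assumed setup in which QDA's quadratic boundary is expected to help.

```python
# Sketch of the LDA vs QDA comparison on simulated Gaussian data with
# unequal class covariances (NCC/NCDA themselves are not reproduced).
import numpy as np
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X0 = rng.multivariate_normal([0, 0], [[1.0, 0.0], [0.0, 1.0]], size=300)
X1 = rng.multivariate_normal([1, 1], [[2.0, 0.8], [0.8, 0.5]], size=300)
X = np.vstack([X0, X1])
y = np.repeat([0, 1], 300)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("QDA", QuadraticDiscriminantAnalysis())]:
    clf.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, clf.predict(X_te)))
```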
Curriculum 2013, which applies to Madrasah Aliyah Negeri (MAN), requires that students choose a major from the 10th grade onward. The chosen major is supposed to relate to the study plan the students will follow at university. The aims of this research are to describe the correspondence between current majors and desired future majors, to observe the distribution pattern of student exam data using boxplots, to classify student major data using Linear Discriminant Analysis (LDA) models, and to find the best LDA model for describing the characteristics of majors in MAN. The LDA models used in this research were Fisher's Linear Discriminant Analysis (FLDA), Diagonal Linear Discriminant Analysis (DLDA), Shrunken Linear Discriminant Analysis (SLDA), Maximum-uncertainty Linear Discriminant Analysis (MLDA), and Factor-model Linear Discriminant Analysis (RFLDA). The best LDA model was chosen based on the classification accuracy obtained by resampling with n = 1000 and n = 5000 replicates, using training-to-testing proportions of 60:40, 70:30, 80:20, and 90:10.
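A rough sketch of this resampling protocol is given below, using scikit-learn's plain and shrinkage LDA as stand-ins (DLDA, MLDA and RFLDA have no direct scikit-learn equivalents) and synthetic data in place of the MAN exam scores; reading the 60:40 to 90:10 ratios as random train/test splits is an assumption.

```python
# Sketch of the resampling-based model comparison: accuracy averaged over
# repeated random train/test splits for several LDA variants (stand-ins only).
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import ShuffleSplit, cross_val_score

# Stand-in for student exam scores grouped into a handful of majors.
X, y = make_classification(n_samples=400, n_features=8, n_informative=6,
                           n_classes=3, random_state=0)

models = {
    "FLDA": LinearDiscriminantAnalysis(),
    "SLDA": LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto"),
}
for test_size in (0.4, 0.3, 0.2, 0.1):          # 60:40, 70:30, 80:20, 90:10
    cv = ShuffleSplit(n_splits=1000, test_size=test_size, random_state=0)
    for name, model in models.items():
        acc = cross_val_score(model, X, y, cv=cv).mean()
        print(f"{name} split {1 - test_size:.0%}:{test_size:.0%} accuracy {acc:.3f}")
```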
In this paper we extend the recently proposed weighted linear discriminant analysis (W_LDA) and fraction-step linear discriminant analysis (F_LDA) from the one-dimensional vector form to the two-dimensional matrix form, yielding weighted two-dimensional linear discriminant analysis (W_2DLDA) and fraction-step two-dimensional linear discriminant analysis (F_2DLDA), respectively. The motivation for this work is recent research on two-dimensional principal component analysis (2DPCA) and 2DLDA showing that two-dimensional algorithms can significantly reduce computational cost and thus improve classifier performance. We first derive the numerical algorithms in matrix form and then implement the two new algorithms on the ORL and YALE face databases. The experimental results show that W_2DLDA gives the best performance among F_2DLDA, F_LDA and W_LDA.
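The matrix-form idea can be sketched with plain 2DLDA, which projects each image as A·W without vectorizing it, keeping the scatter matrices small; the weighted and fractional-step variants (W_2DLDA, F_2DLDA) are not reproduced here, and the random stand-in images are assumptions.

```python
# Minimal sketch of plain 2DLDA in matrix form (not the paper's W_2DLDA/F_2DLDA).
import numpy as np

def two_d_lda(images, labels, n_components):
    """images: array (N, h, w); returns a w x n_components projection matrix."""
    images = np.asarray(images, dtype=float)
    labels = np.asarray(labels)
    overall_mean = images.mean(axis=0)
    w = images.shape[2]
    Sb = np.zeros((w, w))
    Sw = np.zeros((w, w))
    for c in np.unique(labels):
        class_imgs = images[labels == c]
        class_mean = class_imgs.mean(axis=0)
        d = class_mean - overall_mean
        Sb += len(class_imgs) * d.T @ d
        for img in class_imgs:
            e = img - class_mean
            Sw += e.T @ e
    # Leading eigenvectors of Sw^{-1} Sb give the discriminant directions.
    eigvals, eigvecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    order = np.argsort(eigvals.real)[::-1][:n_components]
    return eigvecs[:, order].real

# Stand-in data: 40 random 32x28 "face" images from 4 classes.
rng = np.random.default_rng(0)
imgs = rng.normal(size=(40, 32, 28)) + np.repeat(np.arange(4), 10)[:, None, None]
W = two_d_lda(imgs, np.repeat(np.arange(4), 10), n_components=5)
features = imgs @ W   # each image becomes a 32 x 5 feature matrix
```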
The support vector machine (SVM) is used, for the first time, to diagnose kidney stones and is compared with linear discriminant analysis. According to the results, both methods show good prediction ability, indicating that SVM is an effective tool for the classification of kidney stones. The formation of kidney stones is connected with the environment, living conditions, bodily disorders, and urinary diseases. This paper discusses the formation of kidney stones in terms of the characteristics of the calcium ion.
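A hedged sketch of such an SVM-versus-LDA comparison on tabular measurements follows; the RBF kernel, the standardization step, and the synthetic features are assumptions rather than the paper's setup.

```python
# Sketch: compare an SVM and LDA on synthetic stand-ins for clinical/chemical
# measurements of stone formers vs controls (not the paper's data).
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=150, n_features=6, n_informative=4,
                           random_state=0)

for name, model in [("LDA", LinearDiscriminantAnalysis()),
                    ("SVM", make_pipeline(StandardScaler(),
                                          SVC(kernel="rbf", C=1.0)))]:
    print(name, cross_val_score(model, X, y, cv=5).mean())
```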
Linear Discriminant Analysis (LDA) is a well-known method in pattern recognition, and researchers usually use PCA+LDA instead of LDA because of the Small Sample Size Problem (SSSP) in image recognition. A Multi-band Linear Discriminant Analysis (MBLDA) is proposed, by which LDA is established on the whole sample space and the SSSP is resolved. With MBLDA, the data loss caused by PCA is avoided, the dimension of the extracted discriminant features is very low, and the recognition performance is improved. Its recognition rate largely exceeds that of PCA, LDA, or PCA+LDA. Experiments on the ORL and NUST603 face databases demonstrate that the proposed method is effective.
Abstract: Deepfakes are a rapidly growing concern in society, and detecting such manipulated media has become a significant challenge. Deepfake detection involves identifying whether a media file is authentic or generated using deep learning algorithms. In this project, we propose a deep learning-based approach for detecting deepfakes in videos. We use the Deepfake Detection Challenge dataset, which consists of real and deepfake videos, to train and evaluate our deep learning model. We employ a Convolutional Neural Network (CNN) architecture for our implementation, which has shown great potential in previous studies. We pre-process the dataset using techniques such as resizing, normalization, and data augmentation to enhance the quality of the input data. Our proposed model achieves a high detection accuracy of 97.5% on the Deepfake Detection Challenge dataset, demonstrating the effectiveness of the proposed approach. Our approach has the potential to be used in real-world scenarios to detect deepfakes, helping to mitigate the risks they pose to individuals and society. The proposed methodology can also be extended to detect deepfakes in other types of media, such as images and audio, providing a comprehensive solution for deepfake detection.
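A hedged sketch of a frame-level CNN with the named preprocessing steps (normalization and simple augmentation; frames assumed already resized) is shown below; the architecture is an assumption, not the paper's network, and `train_ds`/`val_ds` are hypothetical placeholders for DFDC frame datasets.

```python
# Illustrative sketch only: a small Keras CNN for frame-level real/fake classification.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(224, 224, 3)),       # frames assumed resized to 224x224
    layers.Rescaling(1.0 / 255),             # normalization
    layers.RandomFlip("horizontal"),         # simple data augmentation
    layers.RandomRotation(0.05),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # real (0) vs fake (1)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # train_ds/val_ds are placeholders
```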
Skin cancer is among the most dangerous diseases, yet it is difficult to diagnose correctly. Machine learning and deep learning algorithms have recently been shown to excel at a variety of tasks, and they are very useful in the case of skin diseases. In this article, we examine various machine learning and deep learning techniques and their use in diagnosing skin diseases. We discuss common skin diseases and methods for acquiring dermatological images, and we present several freely available datasets. After introducing machine learning and deep learning concepts, we explore popular architectures and frameworks for implementing these algorithms. Performance evaluation metrics are then presented. We review the literature on machine and deep learning and how these technologies can be used to detect skin diseases. Furthermore, we discuss potential research directions and the challenges in the area. The principal goal of this paper is to describe contemporary machine learning and deep learning methods for skin disease diagnosis.