Face recognition is an important biometric technology used in many areas such as building security systems, biometric passports and identification, and surveillance systems. Systems used in these areas need to be fast. In recent years, many applications have been developed that take advantage of the GPU's parallel processing capability. In this study, face detection and face recognition studies based on CUDA, a parallel computing platform and programming model developed by NVIDIA for general-purpose GPU computing (GPGPU), are examined, and the literature on CUDA-based face detection and recognition published to date is reviewed. As a result of the study, it was observed that face detection and face recognition operations can be performed much faster by using the parallel processing power of CUDA. Furthermore, it was concluded that if deep learning is used in CUDA-based face recognition applications, face recognition can be performed in even shorter times.
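The surveyed studies are not reproduced here; as a minimal illustration of the kind of parallelism CUDA provides, the sketch below runs a per-pixel preprocessing step (RGB-to-grayscale conversion, a common stage before Haar-cascade or CNN-based face detection) as a CUDA kernel. It assumes Python with Numba's CUDA bindings and an available NVIDIA GPU; the surveyed works themselves typically use C/C++ CUDA.

```python
# Illustrative sketch only: one GPU thread per pixel converts an RGB frame to
# grayscale, the kind of data-parallel step that makes CUDA face pipelines fast.
import numpy as np
from numba import cuda

@cuda.jit
def rgb_to_gray(rgb, gray):
    i, j = cuda.grid(2)                       # absolute (row, col) of this thread
    if i < gray.shape[0] and j < gray.shape[1]:
        r, g, b = rgb[i, j, 0], rgb[i, j, 1], rgb[i, j, 2]
        gray[i, j] = 0.299 * r + 0.587 * g + 0.114 * b

frame = np.random.randint(0, 256, (720, 1280, 3)).astype(np.float32)
gray = np.zeros(frame.shape[:2], dtype=np.float32)

threads = (16, 16)                            # threads per block
blocks = ((gray.shape[0] + 15) // 16,         # blocks per grid (rows)
          (gray.shape[1] + 15) // 16)         # blocks per grid (cols)

d_frame, d_gray = cuda.to_device(frame), cuda.to_device(gray)
rgb_to_gray[blocks, threads](d_frame, d_gray)
gray = d_gray.copy_to_host()                  # grayscale frame for the detector
```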
The increase in the world's population, technological advances, genetic research, digital health services, and medical devices are among the factors driving the rapid growth of Electronic Health Records (EHR). Health services, clinical studies, public health studies, health insurance data, health research, and similar sources generate a large amount of EHR. Especially in recent years, the use of technologies such as data science, artificial intelligence, and big data analytics in the healthcare sector has increased, further accelerating the growth of EHR. One of today's important issues is that data is stored in centralized database systems, and data accuracy and security are ensured only through these centers. With its decentralized architecture and verification mechanisms, blockchain technology can work more efficiently than centralized structures without data loss or security problems. In global and Turkish healthcare systems, EHR is stored and shared in centralized databases located in multiple centers. Due to communication and integration issues between these centers, vital procedures such as examinations and laboratory tests may be repeated because the EHR is difficult to access. As citizens do not have full control over the sharing of their own EHR, this situation can cause deficiencies and disruptions in the healthcare process. EHR collected by medical facilities and authorities can be shared without the citizen's consent, which can lead to data privacy issues. In systems where patient data is stored centrally, software changes can cause problems such as loss of patient data. In healthcare facilities in Turkey, EHR is stored primarily for billing and statistical purposes; the fact that the patient's potentially valuable EHR is of secondary importance leads to irregular maintenance of the data. In this study, it is proposed to share EHR over a blockchain infrastructure and to design a consensus algorithm specifically for health data on the blockchain. In this way, personal EHR remains under the patient's own control, stored in their own health wallet.
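The proposed consensus algorithm is not specified in the abstract, so it is not sketched here. As a minimal, hypothetical illustration of the hash-linked record structure that such an EHR chain relies on, the following Python sketch chains blocks holding placeholder health records and verifies that none have been tampered with; all names and fields are assumptions.

```python
# Minimal sketch (not the proposed system): hash-linked blocks holding EHR
# entries; each block commits to the previous block's hash, so tampering with
# any stored record breaks verification of the chain.
import hashlib, json, time

def block_hash(block):
    # Deterministic hash of the block contents, excluding its own hash field.
    payload = json.dumps({k: v for k, v in block.items() if k != "hash"},
                         sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def new_block(prev_hash, ehr_record):
    block = {"timestamp": time.time(), "record": ehr_record, "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    return block

def verify_chain(chain):
    # Valid if every block's stored hash matches its contents and links back
    # to the preceding block.
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

genesis = new_block("0" * 64, {"patient_id": "anon-1", "type": "genesis"})
chain = [genesis,
         new_block(genesis["hash"], {"patient_id": "anon-1", "type": "lab_result"})]
print(verify_chain(chain))  # True; altering any field makes this False
```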
Genetic Algorithms (GA) have received a great deal of attention. As a multi-objective methodology, the GA can be applied in different fields such as self-organizing wireless sensor networks (WSNs). The technique examines the applied parameters while evaluating the fitness function over the whole set of operational modes in the generated feasible states. Most GA-based clustering algorithms optimize only a few parameters, such as coverage and energy consumption, which have a noticeable effect on network quality. Maintaining network coverage can be modeled as a mathematical programming problem with a heavy computational load. Moreover, WSNs can be dynamic in nature and must react properly to events, so even a small management decision may cause considerable degradation of network quality. In this study, this problem is addressed through a hybrid method implemented in MATLAB using the Genetic Algorithm toolbox and custom code. An optimal solution that satisfies all the mentioned parameters was obtained by the mathematical algorithm.
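The study itself used MATLAB's Genetic Algorithm toolbox; the exact fitness function is not given in the abstract. As an assumption-laden illustration, the Python sketch below evolves a cluster-head selection for a toy WSN, with a fitness that rewards coverage and penalizes energy use; the node layout, radius, and weights are hypothetical.

```python
# Illustrative GA sketch (not the study's MATLAB implementation): select WSN
# cluster heads to trade off coverage against energy consumption.
import random
import numpy as np

rng = np.random.default_rng(0)
nodes = rng.uniform(0, 100, size=(50, 2))      # sensor coordinates (hypothetical)
RADIUS, N_HEADS, POP, GENS = 25.0, 6, 40, 100

def fitness(mask):
    heads = nodes[mask]
    if len(heads) == 0:
        return -1.0
    # Coverage: fraction of nodes within RADIUS of some cluster head.
    d = np.linalg.norm(nodes[:, None, :] - heads[None, :, :], axis=2)
    coverage = np.mean(d.min(axis=1) <= RADIUS)
    energy = len(heads) / len(nodes)           # proxy: more heads, more energy
    return coverage - 0.5 * energy

def random_mask():
    mask = np.zeros(len(nodes), dtype=bool)
    mask[rng.choice(len(nodes), N_HEADS, replace=False)] = True
    return mask

pop = [random_mask() for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[: POP // 2]                    # keep the better half
    children = []
    for _ in range(POP - len(elite)):
        a, b = random.sample(elite, 2)
        child = np.where(rng.random(len(nodes)) < 0.5, a, b)  # uniform crossover
        flip = rng.integers(len(nodes))                        # single-bit mutation
        child[flip] = ~child[flip]
        children.append(child)
    pop = elite + children

best = max(pop, key=fitness)
print("best fitness:", round(fitness(best), 3), "heads:", int(best.sum()))
```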
In this paper, we investigate an approach for classifying mammographic masses as benign or malignant. The study relies on a combination of Support Vector Machine (SVM) classification and wavelet-based subband image decomposition. Decision making was performed in two stages: feature extraction by computing the wavelet coefficients, and classification using a classifier trained on the extracted features. The SVM, a learning machine based on statistical learning theory, was trained through supervised learning to classify the masses. The research involved 66 digitized mammographic images. The masses were segmented manually by radiologists prior to being introduced to the classification system. Preliminary tests on the mammograms showed a classification accuracy of over 84.8% using the SVM with a Radial Basis Function (RBF) kernel. Confusion matrix, accuracy, sensitivity, and specificity analyses with different kernel types were also used to assess the classification performance of the SVM.
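The exact wavelet family, decomposition level, and SVM settings are not given in the abstract, and the 66 mammograms are not available here. The sketch below is therefore only a minimal Python rendering of the described two-stage pipeline, using PyWavelets for subband decomposition, scikit-learn's RBF-kernel SVM, and randomly generated placeholder patches in place of the segmented masses.

```python
# Minimal sketch of the wavelet + SVM pipeline (placeholder data, not the
# study's mammograms): subband statistics as features, RBF SVM as classifier.
import numpy as np
import pywt
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, accuracy_score

rng = np.random.default_rng(0)

def wavelet_features(img, wavelet="db4", level=2):
    # 2-D wavelet decomposition; summarize each subband by mean and std.
    coeffs = pywt.wavedec2(img, wavelet=wavelet, level=level)
    feats = [np.mean(coeffs[0]), np.std(coeffs[0])]
    for (cH, cV, cD) in coeffs[1:]:
        for band in (cH, cV, cD):
            feats += [np.mean(np.abs(band)), np.std(band)]
    return np.array(feats)

# Placeholder "mass" patches standing in for segmented mammographic regions.
images = rng.normal(size=(66, 64, 64))
labels = rng.integers(0, 2, size=66)          # 0 = benign, 1 = malignant

X = np.vstack([wavelet_features(im) for im in images])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)

clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)
pred = clf.predict(X_te)
print(confusion_matrix(y_te, pred))
print("accuracy:", accuracy_score(y_te, pred))
```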
In this study, segmentation of digital mammography images was attempted in order to help experts and radiologists locate cancerous regions with computer assistance. In addition to traditional segmentation methods, the ant colony optimization (ACO) algorithm and k-means clustering were applied to find cancerous regions in digital mammography images using the MIAS dataset.
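The ACO stage and the actual MIAS images are not reproduced here. As a minimal, hedged illustration of the k-means part of such a pipeline, the sketch below clusters pixel intensities of a placeholder grayscale image with scikit-learn and keeps the brightest cluster as the candidate lesion region.

```python
# Minimal k-means segmentation sketch (placeholder image, not a MIAS mammogram;
# the ACO stage is omitted): cluster intensities, keep the brightest cluster.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
image = rng.random((256, 256))                 # stand-in for a grayscale mammogram

pixels = image.reshape(-1, 1)                  # one intensity feature per pixel
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(pixels)

labels = kmeans.labels_.reshape(image.shape)
bright = np.argmax(kmeans.cluster_centers_.ravel())   # index of brightest cluster
candidate_mask = labels == bright              # binary mask of candidate region
print("candidate pixels:", int(candidate_mask.sum()))
```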
The Android operating system has grown in popularity and continues to increase its share of the smartphone market. Users carry out daily tasks such as paying bills, socializing, and sharing photos through mobile applications. These applications have access to sensitive information about the user, such as location, contacts, call logs, and SMS messages. However, users often have no knowledge of the applications or of the personal information these applications can access. Even if an application is not malware or does not exhibit malicious behavior, it can compromise the security and privacy of the user by obtaining permissions and gathering sensitive personal information. In this study, we have designed and implemented a prototype of a novel fuzzy risk inference system that operates as a web-based service. The system analyzes the risks related to Android-based mobile applications and performs risk scoring by taking several features into account. The system presents the user with the risks of exposure before applications are installed on the device and serves as an intelligent decision support system.
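The abstract does not specify the membership functions, rule base, or features used, so the following is only a hypothetical Python sketch of fuzzy risk scoring with the scikit-fuzzy library: two assumed inputs (count of dangerous permissions and a sensitive-data access level) are mapped to a 0-100 risk score through a small Mamdani-style rule set.

```python
# Hypothetical fuzzy risk-scoring sketch (inputs, memberships, and rules are
# assumptions, not the system's actual rule base). Requires scikit-fuzzy.
import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

perms = ctrl.Antecedent(np.arange(0, 11, 1), "dangerous_permissions")
sens = ctrl.Antecedent(np.arange(0, 11, 1), "sensitive_access")
risk = ctrl.Consequent(np.arange(0, 101, 1), "risk")

# Triangular membership functions for each variable.
for var in (perms, sens):
    var["low"] = fuzz.trimf(var.universe, [0, 0, 4])
    var["medium"] = fuzz.trimf(var.universe, [2, 5, 8])
    var["high"] = fuzz.trimf(var.universe, [6, 10, 10])
risk["low"] = fuzz.trimf(risk.universe, [0, 0, 40])
risk["medium"] = fuzz.trimf(risk.universe, [30, 50, 70])
risk["high"] = fuzz.trimf(risk.universe, [60, 100, 100])

rules = [
    ctrl.Rule(perms["high"] | sens["high"], risk["high"]),
    ctrl.Rule(perms["medium"] & sens["medium"], risk["medium"]),
    ctrl.Rule(perms["low"] & sens["low"], risk["low"]),
]
sim = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))

# Example: an app requesting 7 dangerous permissions with heavy sensitive access.
sim.input["dangerous_permissions"] = 7
sim.input["sensitive_access"] = 8
sim.compute()
print("risk score:", round(sim.output["risk"], 1))
```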
Microarray technology is used to detect differences in gene expression. It contributes to many areas, from drug development processes to the improvement of treatment processes. In this study, microarray analysis was performed on datasets concerning the effect of chronic hypoxia treatment on the mouse brain and the gene-level effects of oxidative stress on mouse neurons. Two different publicly available microarray datasets were used, and machine learning methods were applied. As a first step, the datasets were downloaded and then preprocessed and normalized; these stages made the datasets suitable for gene extraction. Analyses were then carried out on the prepared gene expression profiles using statistical and machine learning methods, and target gene extraction was achieved.
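The specific normalization and selection methods are not named in the abstract, and the two public datasets are not reproduced here. The Python sketch below illustrates the described sequence of steps on a placeholder expression matrix: log-transform and standardize the values, rank candidate genes with an ANOVA F-test, and check their usefulness with a simple classifier; all data and parameters are assumptions.

```python
# Hypothetical sketch of the analysis steps (placeholder expression matrix, not
# the study's microarray datasets): normalize, rank genes, evaluate.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
expression = rng.lognormal(mean=2.0, sigma=1.0, size=(20, 500))  # samples x genes
condition = np.array([0] * 10 + [1] * 10)                        # control vs. treated

X = StandardScaler().fit_transform(np.log2(expression + 1))      # preprocessing

selector = SelectKBest(f_classif, k=20).fit(X, condition)        # candidate genes
top_genes = selector.get_support(indices=True)
print("candidate gene indices:", top_genes[:5])

# Note: in a real analysis, selection would be nested inside cross-validation
# to avoid information leakage; this is a simplified sketch.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X[:, top_genes], condition, cv=5)
print("cross-validated accuracy:", scores.mean().round(2))
```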
The main cause of genetic defects is disorders in the gene regions responsible for coding the proteins necessary for normal body functions. With gene therapy, the regions containing such disorders can be detected and their genetic content can be corrected. These regions may have special characteristics in terms of nucleotide distribution that lie beyond the known statistical norms of the genome. In this study, such a characteristic is defined and its effect on predicting the strand direction of genomic reads (classification) is analyzed. The analyses show that the Canonical Correlation Analysis (CCA) method outperforms the well-known Support Vector Machine (SVM) approach in discriminating reads according to their strand direction.
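The feature encoding and the genomic data used in the study are not given in the abstract. As a hypothetical sketch of the comparison, the Python example below builds nucleotide-composition features from simulated reads, uses the first canonical variate of scikit-learn's CCA as a linear discriminant, and compares it against an RBF-kernel SVM; the simulated strand bias, read length, and thresholding rule are all assumptions.

```python
# Hypothetical sketch (simulated reads, not genomic data): CCA-based linear
# discriminant versus an SVM for predicting read strand direction.
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
BASES = "ACGT"

def simulate_read(strand, length=100):
    # Toy bias: '+' strand reads slightly enriched in G/C in this simulation.
    probs = [0.2, 0.3, 0.3, 0.2] if strand == 1 else [0.3, 0.2, 0.2, 0.3]
    return rng.choice(list(BASES), size=length, p=probs)

def composition(read):
    return np.array([(read == b).mean() for b in BASES])

y = rng.integers(0, 2, size=400)
X = np.vstack([composition(simulate_read(s)) for s in y])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# CCA between features and the label column; the first canonical variate acts
# as a linear discriminant, thresholded at the midpoint of the class means.
cca = CCA(n_components=1).fit(X_tr, y_tr.reshape(-1, 1))
score_tr = cca.transform(X_tr).ravel()
threshold = (score_tr[y_tr == 0].mean() + score_tr[y_tr == 1].mean()) / 2
sign = 1 if score_tr[y_tr == 1].mean() > threshold else -1
pred_cca = (sign * (cca.transform(X_te).ravel() - threshold) > 0).astype(int)

pred_svm = SVC(kernel="rbf").fit(X_tr, y_tr).predict(X_te)
print("CCA accuracy:", accuracy_score(y_te, pred_cca))
print("SVM accuracy:", accuracy_score(y_te, pred_svm))
```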
There has been increased interest in speech pattern analysis applications for Parkinsonism aimed at building predictive telediagnosis and telemonitoring models. For this purpose, we have collected a wide variety of voice samples, including sustained vowels, words, and sentences compiled from a set of speaking exercises for people with Parkinson's disease. There are two main issues in learning from such a dataset, which consists of multiple speech recordings per subject: 1) How predictive are the various types of voice samples, e.g., sustained vowels versus words, in Parkinson's disease (PD) diagnosis? 2) How well do central tendency and dispersion metrics serve as representatives of all sample recordings of a subject? In this paper, investigating our Parkinson dataset with well-known machine learning tools reported in the literature, we find that sustained vowels carry more PD-discriminative information. We have also found that, rather than using each voice recording of each subject as an independent data sample, representing the samples of a subject with central tendency and dispersion metrics improves the generalization of the predictive model.
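The collected voice dataset and its acoustic features are not available here; the Python sketch below only illustrates the per-subject representation described above, collapsing multiple recordings of each subject into central-tendency and dispersion summaries with pandas before fitting a classifier. The feature names (jitter, shimmer, HNR) and subject counts are placeholders.

```python
# Minimal sketch (placeholder features, not the collected voice dataset):
# summarize each subject's recordings by central tendency and dispersion,
# then train on one row per subject.
import numpy as np
import pandas as pd
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_subjects, recs_per_subject = 40, 26          # e.g., vowels, words, sentences

rows = []
for subj in range(n_subjects):
    label = subj % 2                           # 0 = healthy, 1 = PD (placeholder)
    for _ in range(recs_per_subject):
        rows.append({"subject": subj, "label": label,
                     "jitter": rng.normal(0.5 + 0.2 * label, 0.1),
                     "shimmer": rng.normal(3.0 + 0.5 * label, 0.5),
                     "hnr": rng.normal(20 - 2 * label, 1.5)})
df = pd.DataFrame(rows)

# Central tendency (mean/median) and dispersion (std) per subject.
agg = df.groupby("subject").agg(
    label=("label", "first"),
    jitter_mean=("jitter", "mean"), jitter_std=("jitter", "std"),
    shimmer_mean=("shimmer", "mean"), shimmer_std=("shimmer", "std"),
    hnr_median=("hnr", "median"), hnr_std=("hnr", "std"),
)
X, y = agg.drop(columns="label").values, agg["label"].values
print("CV accuracy:", cross_val_score(SVC(), X, y, cv=5).mean().round(2))
```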