Aim: Preanalytical phase errors account for the vast majority of total errors in the laboratory (46-68%). The aim of this study was to investigate the rates of and reasons for rejected samples in certain test groups in our emergency laboratory.
Over the past decades, recognition of plant types has attracted the attention of numerous researchers owing to its important applications, including precision agriculture. Working on video frames, this paper proposes a hybrid method that combines features extracted from the images with the SIFT, HOG and GIST descriptors and classifies the plants by means of a deep belief network. First, to remove ineffective features, a pre-processing step is applied to the image. The descriptors then extract a large number of features from the image. Because working with such a large feature set is problematic, a small and discriminative feature set is produced using the bag-of-words technique. Finally, these reduced features are fed to a deep belief network to recognize the plants. Comparing the results of the proposed method with several existing methods demonstrates improvements in accuracy, precision and recall for plant recognition.
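As a rough illustration of this pipeline, the following Python sketch combines a SIFT bag-of-words histogram with a global HOG descriptor and feeds the result to stacked RBMs with a logistic-regression readout, a common stand-in for a full deep belief network. GIST is omitted because it has no standard library implementation, and all parameter values are assumptions rather than the paper's settings.

```python
import cv2
import numpy as np
from skimage.feature import hog
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline

sift = cv2.SIFT_create()

def bow_histogram(gray, codebook):
    """Quantize an image's SIFT descriptors against a learned codebook."""
    _, desc = sift.detectAndCompute(gray, None)
    if desc is None:
        return np.zeros(codebook.n_clusters)
    words = codebook.predict(desc.astype(np.float64))
    return np.bincount(words, minlength=codebook.n_clusters) / len(words)

def extract_features(gray, codebook):
    # Global HOG on a fixed-size copy, concatenated with the BoW histogram.
    resized = cv2.resize(gray, (128, 128))
    hog_vec = hog(resized, pixels_per_cell=(16, 16), cells_per_block=(2, 2))
    return np.concatenate([bow_histogram(gray, codebook), hog_vec])

def train(gray_images, labels, n_words=200):
    descs = [sift.detectAndCompute(g, None)[1] for g in gray_images]
    codebook = KMeans(n_clusters=n_words, n_init=4).fit(
        np.vstack([d for d in descs if d is not None]))
    X = np.array([extract_features(g, codebook) for g in gray_images])
    # Two stacked RBM layers mimic the DBN's layer-wise pre-training.
    model = Pipeline([("rbm1", BernoulliRBM(n_components=256)),
                      ("rbm2", BernoulliRBM(n_components=64)),
                      ("clf", LogisticRegression(max_iter=1000))])
    return codebook, model.fit(X, labels)
```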
In agriculture, the classification of crop plants is a major problem because of the need to improve crop yield. This work focuses on crop classification by applying machine vision and knowledge-based techniques together with image processing, using different feature descriptors including texture, color, HOG (Histogram of Oriented Gradients) and GIST (a global image descriptor). A combination of all of these features was used for classification. Several machine learning algorithms, both base classifiers and ensemble classifiers, were applied, and the classification results were combined by majority voting. Naive Bayes (NB), Support Vector Machine (SVM), K-Nearest Neighbor (KNN) and Multi-Layer Perceptron (MLP) were used as base classifiers, while the ensemble classifiers Random Forest (RF), Bagging and AdaBoost were utilized. The experimental results showed that classification accuracy is improved by majority voting over the ensemble classifiers when the texture, color, HOG and GIST features are combined.
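A minimal scikit-learn sketch of this voting setup follows; the hyperparameters are illustrative defaults rather than the values tuned in the study, and X_train/y_train are assumed to hold the concatenated texture, color, HOG and GIST feature vectors with crop labels.

```python
from sklearn.ensemble import (AdaBoostClassifier, BaggingClassifier,
                              RandomForestClassifier, VotingClassifier)
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

def build_voter():
    base = [("nb", GaussianNB()),
            ("svm", SVC()),
            ("knn", KNeighborsClassifier(n_neighbors=5)),
            ("mlp", MLPClassifier(max_iter=1000))]
    ensembles = [("rf", RandomForestClassifier(n_estimators=100)),
                 ("bag", BaggingClassifier(n_estimators=50)),
                 ("ada", AdaBoostClassifier(n_estimators=100))]
    # voting="hard" takes a plain majority vote over all seven classifiers.
    return VotingClassifier(estimators=base + ensembles, voting="hard")

# voter = build_voter().fit(X_train, y_train)
# print(voter.score(X_test, y_test))
```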
In this paper, we present a fingerspelling recognition module designed to function in a smart system intended to act as a communication medium between people with hearing and visual disabilities. The method described is a computer-vision-based, near-real-time, automatic hand gesture recognition module built on a skin color model. We analyze and compare the recognition performance of appearance-based hand descriptors on a self-collected dataset. The dataset contains isolated videos of 88 different signs of the Czech, Turkish and Russian sign alphabets from 5 different signers, with a total training and test length of 4 hours. On our test sets, we achieved signer-dependent and signer-independent fingerspelling recognition rates of 82% and 42%, respectively.
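The sketch below illustrates the skin-color segmentation step in OpenCV; the YCrCb thresholds are common rule-of-thumb values, not the module's trained skin model, and the returned bounding box would be the crop passed to the appearance-based hand descriptors.

```python
import cv2
import numpy as np

# Rule-of-thumb skin range in YCrCb space (assumed, not the paper's model).
SKIN_LO = np.array([0, 133, 77], dtype=np.uint8)    # Y, Cr, Cb lower bounds
SKIN_HI = np.array([255, 173, 127], dtype=np.uint8)

def segment_hand(frame_bgr):
    """Return a binary skin mask and the bounding box of the largest region."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, SKIN_LO, SKIN_HI)
    # Remove speckle noise before looking for the hand contour.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return mask, None
    hand = max(contours, key=cv2.contourArea)
    return mask, cv2.boundingRect(hand)  # (x, y, w, h) crop for descriptors
```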
The power of computing technology to improve efficiency in agricultural fields is becoming increasingly important given current projections of world population growth and the decrease in available land and natural resources. One critical improvement can be achieved by monitoring the phenology of agricultural plants, which would in turn improve the timing of harvest, pest control, yield prediction, farm monitoring, disaster warning, etc. Inferring phenological information contributes to a better understanding of the relationships between productivity, vegetation health and environmental conditions. As part of a government-supported project, a terrestrial observation network has been built throughout Turkey. The network includes over twelve hundred agro-stations placed on agricultural fields. The stations are equipped with many sensors, including cameras that acquire image sequences of the farm fields periodically. In this study, we use textural analysis combined with machine learning techniques to develop measures for recognizing and classifying the phenological stages of several types of plants, purely from the visual data captured every half hour by cameras mounted on the ground agro-stations. Experimental results suggest that Histogram of Oriented Gradients (HOG) features outperform Gray Level Co-occurrence Matrix (GLCM) features for discriminating phenological stages.
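The following sketch shows one plausible way to compare the two texture descriptors on grayscale station images; the linear-SVM classifier and all parameter values are assumptions for illustration, not the study's exact protocol.

```python
import numpy as np
from skimage.feature import hog, graycomatrix, graycoprops
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

def hog_features(gray):
    return hog(gray, orientations=9, pixels_per_cell=(16, 16),
               cells_per_block=(2, 2))

def glcm_features(gray):
    # gray must be uint8 so that levels=256 covers the intensity range.
    glcm = graycomatrix(gray, distances=[1, 2],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

def compare(gray_images, stage_labels):
    """Cross-validated accuracy for each descriptor on the same images."""
    for name, fn in [("HOG", hog_features), ("GLCM", glcm_features)]:
        X = np.array([fn(g) for g in gray_images])
        scores = cross_val_score(LinearSVC(max_iter=5000), X, stage_labels,
                                 cv=5)
        print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```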
An essential part of agricultural technology and crop monitoring is automated, accurate plant phenotyping. Environmental conditions have a tremendous impact on a plant's growth; hence, accurate monitoring of phenology can provide much information that can be used to increase yield quality and accelerate crop production. Advances in both computer vision algorithms and communication systems have been transforming precision agriculture. Enormous amounts of information are being collected through sensors positioned on ground stations in national agriculture monitoring networks. The availability of these higher-quality measurements, coupled with modern image processing algorithms, steadily expands the range of possible applications in agriculture. Machine learning techniques offer an alternative to traditional approaches for agricultural applications. In this paper, we employ a deep learning approach to recognize and classify the phenological stages of agricultural plants. The visual data are captured every half hour by cameras mounted on ground agro-stations. In contrast to traditional feature extraction approaches, a pre-trained Convolutional Neural Network (CNN) architecture is employed to automatically extract image features. The results obtained with the CNN model are compared with those obtained using hand-crafted feature descriptors. Experimental results indicate that the CNN architecture outperforms machine learning algorithms based on hand-crafted features.
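A minimal transfer-learning sketch of this idea follows, assuming a ResNet-18 backbone (the abstract names only a generic pre-trained CNN) whose penultimate activations serve as features for a conventional classifier.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.svm import SVC

# Standard ImageNet preprocessing for torchvision backbones.
preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor(),
                        T.Normalize(mean=[0.485, 0.456, 0.406],
                                    std=[0.229, 0.224, 0.225])])

backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
backbone.fc = torch.nn.Identity()  # drop the classifier; keep 512-d features
backbone.eval()

@torch.no_grad()
def cnn_features(pil_images):
    """Extract a 512-d CNN feature vector per PIL image."""
    batch = torch.stack([preprocess(im) for im in pil_images])
    return backbone(batch).numpy()

# X_train = cnn_features(train_images)
# clf = SVC().fit(X_train, train_stages)  # phenological stages as labels
```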
With advances in agricultural technology and know-how, agricultural fields have grown by twelve percent and agricultural production has almost doubled over the last fifty years. However, despite the increased food production, these advances have also caused significant environmental change in many regions. Hence, accurate information about irrigated lands is crucial for further progress in agricultural production. Up-to-date information about the use of each agricultural parcel is anticipated to be very valuable for diverse agriculture-related agencies and for research purposes. Since many applications, such as updating cadastral information, land-cover or land-use mapping, and the estimation of agricultural subsidies, primarily require a parcel-based study, correct delineation of the parcels is crucial. In this paper, we propose an algorithm for the automatic delineation of agricultural parcels. We first apply watershed segmentation to high-resolution remote sensing imagery. Watershed segmentation yields many superpixels that do not correspond to the actual parcels in the scene. We assume that the geometric and texture properties of the parcels to be segmented differ from each other but are similar for superpixels that fall into the same parcel. To improve the segmentation, the superpixels are merged based on two assumptions: (a) superpixels that have similar textural characteristics and are within close proximity of each other should be merged; (b) the contraction within a group of merged superpixels should be minimized. To evaluate the performance of the proposed method, the percentage of correctly segmented parcel area is computed with respect to a manual ground-truth delineation. The experimental results on pilot rural areas in the southeastern part of Turkey confirm that the proposed method is quite promising.
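A simplified scikit-image sketch of the over-segment-then-merge strategy appears below; mean-color region-adjacency-graph merging stands in for the paper's texture-and-proximity criteria, and the marker seeding and threshold values are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import color, filters, graph, segmentation
# Note: skimage >= 0.20 exposes `graph` at the top level; older versions
# provide the same functions under skimage.future.graph.

def delineate_parcels(rgb, merge_thresh=0.08):
    """Over-segment with watershed, then merge superpixels into parcels."""
    gray = color.rgb2gray(rgb)
    gradient = filters.sobel(gray)
    # Seed the watershed in low-gradient basins to get many superpixels.
    markers, _ = ndi.label(gradient < np.percentile(gradient, 5))
    superpixels = segmentation.watershed(gradient, markers)
    # Merge adjacent superpixels with similar mean color; the threshold
    # assumes a float image scaled to [0, 1].
    rag = graph.rag_mean_color(rgb, superpixels)
    return graph.cut_threshold(superpixels, rag, merge_thresh)
```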
We propose a visual scene interpretation system for cognitive robots to maintain a consistent world model of their environments. This interpretation system serves our lifelong experimental learning framework, which allows robots to analyze failure contexts to ensure robustness in their future tasks. To analyze failure contexts efficiently, scenes should be interpreted appropriately. In our system, LINE-MOD and HS histograms are used to recognize objects with and without texture. Moreover, depth-based segmentation is applied to identify unknown objects in the scene; this information is also used to improve recognition performance. The world model includes not only the objects detected in the environment but also their spatial relations, in order to represent contexts efficiently. Extracting unary and binary relations such as on, on_ground, clear and near is useful for symbolic representation of the scenes. We test the performance of our system on recognizing objects, determining spatial predicates, and maintaining a consistent world model for the robot in the real world. Our preliminary results reveal that the system can successfully extract spatial relations in a scene and create a consistent model of the world using information gathered from the onboard RGB-D sensor as the robot explores its environment.
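As an illustration of how such predicates can be derived, the sketch below computes them from axis-aligned 3D bounding boxes (xmin, ymin, zmin, xmax, ymax, zmax) with z pointing up; the tolerance values are assumptions, and the system's actual extraction may differ.

```python
def overlap_xy(a, b):
    """True if the footprints of boxes a and b overlap in the x-y plane."""
    return a[0] < b[3] and b[0] < a[3] and a[1] < b[4] and b[1] < a[4]

def on(a, b, tol=0.02):
    """a rests on b: a's bottom meets b's top within tol, footprints overlap."""
    return abs(a[2] - b[5]) < tol and overlap_xy(a, b)

def on_ground(a, ground_z=0.0, tol=0.02):
    return abs(a[2] - ground_z) < tol

def near(a, b, dist=0.15):
    """Centroids (in x-y) closer than dist meters."""
    ca = ((a[0] + a[3]) / 2, (a[1] + a[4]) / 2)
    cb = ((b[0] + b[3]) / 2, (b[1] + b[4]) / 2)
    return ((ca[0] - cb[0]) ** 2 + (ca[1] - cb[1]) ** 2) ** 0.5 < dist

def clear(a, others):
    """Nothing is stacked on top of a."""
    return not any(on(o, a) for o in others)
```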