Forecasting the number of Olympic medals for each nation is highly relevant to several stakeholders: ex ante, sports betting companies can set odds while sponsors and media companies can allocate their resources to promising teams; ex post, sports politicians and managers can benchmark the performance of their teams and evaluate the drivers of success. To increase Olympic medal forecasting accuracy, we apply machine learning, specifically a two-stage approach, which outperforms traditional naïve forecasts for three previous Olympic Games. In this project, the best player is predicted by two algorithms: Naïve Bayes (NB) as the existing system and K-Nearest Neighbors (KNN) as the proposed system, compared in terms of accuracy. The results show that the proposed KNN outperforms the existing NB. The project develops a machine learning solution in Python for searching and ranking the best players based on their performance metrics. It involves collecting and preprocessing relevant player data, including statistics and attributes. Various machine learning algorithms, such as regression and ranking models, are explored to predict player performance. The trained model is then deployed to make real-time predictions, assisting sports teams or gaming platforms in selecting the most suitable players. The project highlights the potential of machine learning to optimize player selection, offering a scalable, data-driven approach to identifying top performers.
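The NB-versus-KNN comparison described above can be sketched in a few lines of Python. This is an illustrative sketch only: the synthetic dataset, split ratio, and `n_neighbors=5` are assumptions for demonstration and not the project's actual data or settings.

```python
# Illustrative sketch: comparing a Naive Bayes baseline with the proposed
# K-Nearest Neighbors classifier on synthetic "player performance" features.
# The dataset and hyperparameters here are assumptions, not the paper's.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in for player statistics (e.g., scoring rate, stamina).
X, y = make_classification(n_samples=500, n_features=8, n_informative=5,
                           random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=42)

nb_acc = accuracy_score(y_te, GaussianNB().fit(X_tr, y_tr).predict(X_te))
knn_acc = accuracy_score(
    y_te, KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr).predict(X_te))
print(f"NB accuracy:  {nb_acc:.3f}")
print(f"KNN accuracy: {knn_acc:.3f}")
```

On real player data, the same two-model comparison would be run after the preprocessing step the abstract describes.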
Tenants that rent computing resources to run sophisticated systems can benefit from the increased resource flexibility provided by the Infrastructure-as-a-Service (IaaS) cloud model. After completing the authentication procedure, the user is launched into a virtual machine, from which the upload to the cloud begins. The proposed system offers secure data access through its implementation of virtual machines and key management. Session management and failed-authentication handling are further key components of the proposed solution. Authentication and session management cover all facets of handling user authentication and managing active sessions. Credential management functions, such as updating an account, changing a password, or recovering a forgotten password, can compromise even the most robust authentication schemes.
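The session management idea can be illustrated with a minimal sketch. This is not the proposed system's VM or key-management design, which the abstract does not detail; the in-memory store, token format, and timeout value are all assumptions.

```python
# Minimal sketch of server-side session management with expiry, assuming a
# simple in-memory store. Timeout and token length are illustrative only.
import secrets
import time

SESSION_TIMEOUT = 15 * 60  # seconds; assumed value
_sessions = {}             # token -> (user, expiry timestamp)

def create_session(user):
    token = secrets.token_urlsafe(32)          # unguessable session id
    _sessions[token] = (user, time.time() + SESSION_TIMEOUT)
    return token

def validate_session(token):
    entry = _sessions.get(token)
    if entry is None:
        return None                            # unknown token: reject
    user, expiry = entry
    if time.time() > expiry:                   # expired: force re-login
        del _sessions[token]
        return None
    return user

tok = create_session("alice")
print(validate_session(tok))      # the live session resolves to its user
print(validate_session("bogus"))  # an unknown token is rejected
```

A production system would persist sessions server-side and tie them to the key-management layer rather than a process-local dictionary.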
Artificial intelligence (AI) approaches, along with big data, mobile computing, and the Internet of Things (IoT), have become increasingly common in computer science in recent years. Resources must be managed properly to maintain service quality, meet service-level agreements, and preserve the system's overall availability. AI can be applied to the needs of cloud resource management, and this study examines and evaluates several such approaches. Fog computing systems, intelligent cloud computing systems, and edge-cloud systems are the types of AI-based cloud resource management systems reviewed in this contribution. This research proposes an intelligent resource management method that controls mobile resources by monitoring device states and projecting their future stability. We also examine the applicability of the proposed resource management system to a variety of cloud-based platforms.
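The core idea of monitoring device states and projecting future stability can be sketched as follows. The moving-average forecast, window size, and load threshold are assumptions for illustration; the study's actual prediction model is not specified in the abstract.

```python
# Toy sketch of "monitor state, project future stability": a moving-average
# forecast over recent CPU-load samples decides whether a device is stable
# enough to keep hosting work. Threshold and window size are assumed values.
from collections import deque

WINDOW = 5          # number of recent samples considered
LOAD_LIMIT = 0.80   # projected load above this marks the device unstable

def project_stability(samples, window=WINDOW, limit=LOAD_LIMIT):
    """Return (projected_load, is_stable) from recent load samples."""
    recent = deque(samples, maxlen=window)
    projected = sum(recent) / len(recent)   # naive moving-average forecast
    return projected, projected <= limit

loads = [0.42, 0.47, 0.55, 0.61, 0.58]      # monitored device state
projected, stable = project_stability(loads)
print(f"projected load {projected:.2f}, stable={stable}")
```

A real resource manager would replace the moving average with a learned model and use the stability flag to decide migration or scheduling.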
Governments throughout the world are encouraging the adoption of "smart city" technologies to improve urban residents' day-to-day experiences. Smart cities deploy internet-connected technologies to improve services such as healthcare, electricity distribution, water purification, and traffic management. The proliferation of connected devices, however, has led to an increase in IoT-based botnet attacks. The term IoT describes a system of interlinked computing devices used for a wide variety of tasks, from environmental monitoring to on-demand power switching and beyond. Many IoT devices are inherently heterogeneous, update at irregular intervals, and sit largely unnoticed on private or corporate networks. The safety and confidentiality issues surrounding the IoT therefore need to be addressed in both academic and practical settings. This study proposes a federated solution to botnet attack detection that uses decentralized, on-device traffic data and a deep learning (DL) model. The proposed federated method addresses privacy concerns by preventing data from leaving the network's edge. Instead, the DL computation is performed at the edge layer, which has the added benefit of being closer to the source of the data. Multiple experiments are run on newly released public data sets for deep learning models, and the data sets are presented for examination and interpretation. The proposed DL model achieved better results than the ML baselines; in particular, it detects anomalies with a precision of up to 98%.
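The privacy mechanism of the federated approach, where only model weights cross the network and raw traffic stays on-device, can be sketched with a minimal federated-averaging (FedAvg-style) loop. The client count, weight shapes, learning rate, and gradients below are illustrative, not the study's configuration.

```python
# Minimal sketch of federated averaging: each edge device trains locally and
# only model weights, never raw traffic data, reach the server. The update
# rule and all numeric values here are illustrative assumptions.
def local_update(weights, grad, lr=0.1):
    """One simulated local training step on a device (gradient assumed)."""
    return [w - lr * g for w, g in zip(weights, grad)]

def fed_avg(client_weights):
    """Server aggregates by averaging client weights element-wise."""
    n = len(client_weights)
    return [sum(ws) / n for ws in zip(*client_weights)]

global_w = [0.0, 0.0]
# Each device computes its own update from private local traffic features.
client_grads = [[1.0, 2.0], [3.0, 4.0], [2.0, 0.0]]
clients = [local_update(global_w, g) for g in client_grads]
global_w = fed_avg(clients)
print(global_w)  # only the averaged weights ever crossed the network
```

In the proposed system, `local_update` would be a full DL training pass over each device's traffic data, with the server repeating the aggregation over many rounds.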
Stuttering, or stammering, is a speech disorder in which sounds, syllables, or words are repeated or prolonged, disrupting the normal flow of speech. Stuttering can make it hard to communicate with other people, which often affects a person's quality of life. An Automatic Speech Recognition (ASR) system is a technology that converts an audio speech signal into the corresponding text. ASR systems now play a major role in controlling or providing input to various applications. Such ASR systems and machine translation applications suffer considerably from stuttering (speech dysfluency). Dysfluencies reduce the word recognition accuracy of an ASR system by increasing word insertion, substitution, and deletion rates. In this work, we focus on detecting and removing prolongations, silent pauses, and repetitions to generate the correct text sequence for a given stuttered speech signal. The stuttered speech recognition pipeline consists of two stages: classification using an artificial neural network (ANN) and testing in the ASR. The major phases of the classification system are re-sampling, segmentation, pre-emphasis, epoch extraction, and classification. The work is carried out on the UCLASS stuttering dataset using MATLAB, with a 4% to 6% increase in accuracy achieved by the ANN.
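Two of the early pipeline phases, pre-emphasis and segmentation into frames, can be sketched as follows. The study itself uses MATLAB; this Python version, the 0.97 coefficient, and the frame sizes are common defaults shown for illustration, not the paper's exact parameters.

```python
# Sketch of the pre-emphasis and segmentation phases of the classification
# pipeline. Coefficient alpha=0.97 and the tiny frame sizes are assumed
# defaults, not values taken from the study.
def pre_emphasis(signal, alpha=0.97):
    """y[n] = x[n] - alpha * x[n-1]: boosts high-frequency content."""
    return [signal[0]] + [signal[n] - alpha * signal[n - 1]
                          for n in range(1, len(signal))]

def segment(signal, frame_len=4, hop=2):
    """Split the signal into overlapping fixed-length frames."""
    return [signal[i:i + frame_len]
            for i in range(0, len(signal) - frame_len + 1, hop)]

x = [0.0, 1.0, 0.5, -0.5, -1.0, 0.0, 1.0, 0.5]  # toy speech samples
y = pre_emphasis(x)
frames = segment(y)
print(len(frames), "frames of length", len(frames[0]))
```

In the full system, each frame would then pass through epoch extraction before being classified by the ANN as fluent or dysfluent.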
Manual tumor diagnosis from magnetic resonance imaging (MRI) of the brain is laborious and time-consuming, and every case also requires a diagnosis by an expert. The need for accurate diagnosis and classification of brain tumors has led to the development of several computer-controlled procedures. To segment brain tumors, this research proposes a new filtering approach. The method uses a hybrid model combining NasNet and FRCNN for segmentation of the brain tumor image, optimized with the Adam optimizer. For image filtering, an SWT-enhanced median filter is used. The proposed strategy shows promising results, outperforming state-of-the-art approaches reported in the literature: it segments brain tumor images with 92.19% accuracy on the tumor region and 99.79% accuracy on the background region.
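The median filter at the core of the filtering step can be illustrated with a toy 3x3 implementation. This sketch deliberately omits the SWT (stationary wavelet transform) enhancement the paper adds on top; it only shows the classic building block, in pure Python for clarity.

```python
# Toy sketch of a 3x3 median filter, the building block behind the paper's
# SWT-enhanced variant (the wavelet-domain enhancement is not reproduced
# here). Real pipelines would use a library routine such as
# scipy.ndimage.median_filter instead of this pure-Python loop.
from statistics import median

def median_filter_3x3(img):
    """Replace each interior pixel with the median of its 3x3 neighborhood."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]               # border pixels left unchanged
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            window = [img[i + di][j + dj]
                      for di in (-1, 0, 1) for dj in (-1, 0, 1)]
            out[i][j] = median(window)
    return out

# A single salt-noise spike (99) in an otherwise flat region is removed.
noisy = [[10, 10, 10],
         [10, 99, 10],
         [10, 10, 10]]
print(median_filter_3x3(noisy)[1][1])  # 10: the outlier is suppressed
```

Median filtering removes impulse noise while preserving edges, which is why it is a common pre-processing choice before tumor segmentation.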
The brain is a significant organ of the body, regulated by the nervous system. Years of thorough study and procedural development have shown that any diagnostic system must be capable of detecting and analysing brain tumors. To increase the precision of tumor detection, such a system must incorporate efficient automation with powerful pre-processing. Noise reduction and enhancement techniques are crucial in digital image processing. Brain tumors are frequently identified from magnetic resonance imaging (MRI) images. This article proposes a pre-processing method for brain tumor images as a basis for tumor analysis. Several filters are applied to prepare the brain MRI images, and this study evaluates the filters by their outcome values to determine the optimal pre-processing.
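Evaluating filters by an outcome value, as described above, can be sketched with a peak signal-to-noise ratio (PSNR) comparison. The tiny images, the two hypothetical filter outputs, and PSNR as the chosen metric are all assumptions for illustration; the study's actual filter set and evaluation values are not reproduced here.

```python
# Hedged sketch of ranking pre-processing filters by an outcome value: PSNR
# between a clean reference image and each filtered result. All images and
# "filter outputs" below are synthetic stand-ins.
import math

def psnr(clean, restored, peak=255.0):
    """Higher PSNR (dB) means the restored image is closer to the clean one."""
    flat_c = [p for row in clean for p in row]
    flat_r = [p for row in restored for p in row]
    mse = sum((a - b) ** 2 for a, b in zip(flat_c, flat_r)) / len(flat_c)
    return float("inf") if mse == 0 else 10 * math.log10(peak ** 2 / mse)

clean    = [[100, 100], [100, 100]]
filter_a = [[101,  99], [100, 100]]   # mild residual noise
filter_b = [[110,  90], [100, 100]]   # heavier residual noise
scores = {"filter_a": psnr(clean, filter_a), "filter_b": psnr(clean, filter_b)}
best = max(scores, key=scores.get)
print(best, f"{scores[best]:.1f} dB")  # the filter with less residual noise wins
```

In practice the same scoring loop would run over each candidate filter's output on real MRI scans, and the highest-scoring filter would be selected for the pre-processing stage.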