Abstract Artificial Intelligence (AI) technology has contributed substantially to applications such as Speech Recognition, Expert Systems, Computer Vision, Robotics and Gaming. Machine Learning (ML) and Deep Learning (DL) algorithms under the AI umbrella address problems such as prediction, classification and regression, and AI has reached many domains, including Finance, Healthcare, Retail, Travel and Media. Yet the results or predictions generated by these algorithms are not easily accepted by users, who ask: if this is what AI promises, how can it be guaranteed? The Healthcare domain in particular faces a great challenge in accepting such results, with the concern: are AI results reliable, correct and ethical? Doctors and medical practitioners are not ready to treat patients on the basis of results or suggestions generated by AI algorithms. Hence, a technology that can show that the results returned by AI algorithms are trustworthy, transparent and interpretable was strongly needed. This need has given rise to Explainable Artificial Intelligence (XAI). With XAI, the predictions and classifications made by AI algorithms become explainable, auditable, comprehensible, verifiable and socially acceptable. This study will help medical practitioners and other researchers use XAI to obtain reliable, trustworthy and explainable results or suggestions from AI, ML and DL algorithms, not only in healthcare but in all other sectors adopting these technologies.
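As a concrete illustration of how an explanation can be attached to a model's output, the sketch below applies permutation importance, one model-agnostic XAI technique, to a scikit-learn classifier. The library, dataset and model are our own illustrative choices; the study itself surveys XAI broadly and does not prescribe this particular method.

```python
# Minimal sketch of one model-agnostic XAI technique (permutation
# importance): shuffling a feature and measuring how much the model's
# score degrades reveals which inputs the model actually relies on.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for name, imp in sorted(zip(X.columns, result.importances_mean),
                        key=lambda p: -p[1])[:5]:
    print(f"{name:30s} {imp:.4f}")  # top features driving predictions
```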
Every day, people travel from home to the office, to the gym and to a variety of other places. Accurately predicting a user's future location can make travel more convenient and help maintain a proper schedule: a prediction of the future destination lets the user travel with ease along the shortest path, saving time. Google has made considerable advances in travel scheduling, but much less progress has been made in location prediction. The aim of this work is to contribute to predicting mobile users' future locations using a variety of machine learning and deep learning algorithms and techniques.
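The abstract names ML/DL techniques without fixing one; as a hedged baseline for what "next-location prediction" means in practice, the sketch below uses a first-order Markov model that predicts the most frequent successor of the current location. The location names and history are illustrative, not the paper's data.

```python
# Hypothetical baseline: first-order Markov next-location prediction.
from collections import Counter, defaultdict

history = ["home", "office", "gym", "home", "office", "restaurant",
           "home", "office", "gym", "home"]

transitions = defaultdict(Counter)
for current, nxt in zip(history, history[1:]):
    transitions[current][nxt] += 1   # count observed moves current -> next

def predict_next(location):
    """Return the most likely next location, or None if unseen."""
    counts = transitions.get(location)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("office"))  # -> 'gym' (2 of 3 observed moves from office)
```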
Organizations handling huge amounts of data need to preserve the privacy of their documents, and every customer has the right to demand such privacy. These documents can be classified into categories such as private, public and confidential using suitable text classification methods. Various approaches based on machine learning (ML), deep learning (DL) and natural language processing (NLP) are available. Machine learning algorithms show acceptable performance but do not scale well as data grows in size. In this paper, a convolutional neural network (CNN), a deep learning model, is used to classify documents into different categories. Deep learning models are beneficial over machine learning models in terms of performance and the volume of data they can classify. The model's performance is evaluated and found to be acceptable.
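A minimal sketch of the kind of CNN text classifier the paper describes is given below, in Keras. The vocabulary size, sequence length and four document categories are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of a 1D-CNN document classifier: token ids -> embeddings ->
# convolutional n-gram detectors -> max-pool -> category probabilities.
import tensorflow as tf
from tensorflow.keras import layers

VOCAB_SIZE, SEQ_LEN, NUM_CLASSES = 20_000, 300, 4  # assumed values

model = tf.keras.Sequential([
    tf.keras.Input(shape=(SEQ_LEN,)),
    layers.Embedding(VOCAB_SIZE, 128),          # token id -> dense vector
    layers.Conv1D(128, 5, activation="relu"),   # 5-gram feature detectors
    layers.GlobalMaxPooling1D(),                # strongest signal per filter
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# Training would be: model.fit(padded_token_ids, labels, ...)
```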
Cancer is one of the major causes of death by disease, and treatment is one of the most crucial phases of oncology. Precision medicine for cancer treatment is an approach that uses the genetic profile of the individual patient. Researchers have not yet discovered all the genetic changes that cause cancer to develop, grow and spread. A Neuro-Genetic model is proposed here for the prediction and recommendation of precision medicine. The proposed work attempts to recommend precision medicine to cancer patients based on past genomic and survival data of patients, employing machine learning (ML) approaches to provide recommendations for different gene expressions. This work can be used in cancer hospitals and research institutions to provide personalized treatment using precision medicine. Precision medicine can also be applied to other complex conditions such as diabetes, dental diseases and cardiovascular diseases, and it is the kind of treatment likely to be offered in the near future.
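The abstract does not detail the Neuro-Genetic architecture, so the sketch below shows one plausible reading under stated assumptions: a small genetic algorithm searches for a gene-expression feature subset that maximizes the cross-validated accuracy of a neural network. The data, population size and operators are synthetic placeholders, not the authors' model.

```python
# Hedged sketch of a "neuro-genetic" pipeline: GA-based feature selection
# wrapped around a neural-network classifier on synthetic expression data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=200, n_features=40, n_informative=8,
                           random_state=0)  # stand-in for gene expressions

def fitness(mask):
    """Cross-validated accuracy of an MLP on the selected gene subset."""
    if mask.sum() == 0:
        return 0.0
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

pop = rng.integers(0, 2, size=(12, X.shape[1]))   # random feature masks
for generation in range(5):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-6:]]        # selection: keep the fittest
    children = []
    for _ in range(len(pop) - len(parents)):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, X.shape[1])         # single-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(X.shape[1]) < 0.05      # mutation
        child[flip] ^= 1
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected genes:", np.flatnonzero(best))
```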
Breast cancer stands as a prevalent global concern, prompting extensive research into its origins and into personalized treatment through Artificial Intelligence (AI)-driven precision medicine. However, AI's black-box nature hinders the acceptance of its results. This study delves into the integration of Explainable AI (XAI) into breast cancer precision medicine recommendations. Transparent AI models, fuelled by patient data, enable personalized treatment recommendations, while techniques like feature analysis and decision trees enhance transparency, fostering trust between medical practitioners and patients. This harmonizes AI's potential with the imperative for clear medical decisions, propelling breast cancer care into the precision medicine era. This research work is dedicated to leveraging clinical and genomic data from samples of metastatic breast cancer. The primary aim is to develop a machine learning (ML) model capable of predicting optimal treatment approaches, including but not limited to hormonal therapy, chemotherapy and anti-HER2 therapy, enhancing treatment selection by harnessing advanced computational techniques and comprehensive data analysis. The decision tree model developed here for predicting suitable personalized treatment for breast cancer patients achieves 99.87% overall prediction accuracy. Thus, the use of XAI in healthcare will build trust among doctors as well as patients.
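To make concrete why a decision tree suits this setting, the sketch below trains one on toy clinical features and prints its learned rules, which double as the human-readable explanation for each recommendation. The feature names, labels and labeling rule are hypothetical placeholders, not the study's actual cohort or reported model.

```python
# Sketch: a transparent decision tree whose rules explain each
# treatment recommendation. Data here is synthetic and illustrative.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
features = ["er_status", "her2_status", "tumor_stage", "age"]
X = np.column_stack([rng.integers(0, 2, 300),     # ER status (0/1)
                     rng.integers(0, 2, 300),     # HER2 status (0/1)
                     rng.integers(1, 5, 300),     # tumor stage 1-4
                     rng.integers(30, 80, 300)])  # age in years
# Toy labeling rule: HER2+ -> anti-HER2, else ER+ -> hormonal, else chemo.
y = np.where(X[:, 1] == 1, "anti-HER2",
             np.where(X[:, 0] == 1, "hormonal", "chemotherapy"))

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# The tree's decision path is the explanation shown to clinicians.
print(export_text(tree, feature_names=features))
print(tree.predict([[1, 0, 2, 55]]))  # ER+/HER2- patient -> 'hormonal'
```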
Increased digitization in nearly every sector demands huge data storage. Every person uploads large amounts of personal information to the Internet through mobile or web applications, knowingly or sometimes unknowingly, and this growing store of personal data has created data privacy issues. There is no law in India that prohibits someone from using an individual's personal information; the country is still preparing its personal data protection law, whereas the European Union's data protection regulation came into force in 2018. Some organizations are developing applications that can check whether a document is personal or non-personal. Such applications can be built with deep learning models such as Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN) and Long Short-Term Memory (LSTM) networks. This research focuses on the different text representation techniques required for text classification problems such as private data classification, sentiment analysis, language detection, online abuse detection and recommendation systems, to name a few. Representing text in suitable formats helps increase the accuracy of classification algorithms.
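As a minimal sketch of two representations of the kind this paper compares, the code below builds a sparse TF-IDF encoding (for classical ML classifiers) and integer token sequences (the usual input to CNN/RNN/LSTM models). The documents and the vocabulary scheme are illustrative assumptions, not the paper's dataset.

```python
# Two common text representations for classification, sketched side by side.
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["passport number and home address",        # likely personal
        "quarterly sales figures for the region"]  # likely non-personal

# 1) TF-IDF: each document becomes a weighted bag-of-words vector.
tfidf = TfidfVectorizer()
X_sparse = tfidf.fit_transform(docs)
print(X_sparse.shape, tfidf.get_feature_names_out()[:5])

# 2) Integer token sequences, built with a plain vocabulary lookup
#    (index 0 is reserved for padding).
vocab = {w: i + 1 for i, w in enumerate(
    sorted({w for d in docs for w in d.split()}))}
seqs = [[vocab[w] for w in d.split()] for d in docs]
print(seqs)
```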