Cancer is a major public health issue in the modern world. Breast cancer is a type of cancer that starts in the breast and can spread to other parts of the body; it is among the most common causes of cancer death in women. Cancer develops when cells grow and divide uncontrollably. There are various types of breast cancer; the proposed model addresses the distinction between benign and malignant tumors. In computer-aided diagnosis systems, the identification and classification of breast cancer from histopathology and ultrasound images are critical steps. Over the last few decades, investigators have demonstrated that the initial identification and classification of tumors can be automated. Detecting breast cancer early allows patients to obtain proper therapy and thereby increases their chances of survival. Deep learning (DL), machine learning (ML), and transfer learning (TL) techniques are used to solve many medical problems. The previous literature contains several scientific studies on the categorization and identification of cancer tumors using various types of models, but with some limitations; research is further hampered by the scarcity of datasets. The proposed methodology is designed to support the automatic identification and diagnosis of breast cancer. Our main contribution is applying the transfer learning technique to three datasets (A, B, and C) and a derived dataset A2, where A2 is dataset A reduced to two classes. Both ultrasound and histopathology images are used in this study. The model is a customized CNN-AlexNet, trained according to the requirements of each dataset, which is a further contribution of this work. The results show that the proposed system, empowered with transfer learning, achieved higher accuracy than existing models on datasets A, B, C, and A2.
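A minimal sketch of the transfer-learning setup described above, assuming PyTorch/torchvision; the abstract does not specify the exact AlexNet customization, so the frozen backbone and replaced classifier head here are illustrative:

```python
import torch.nn as nn
from torchvision import models

def build_alexnet(num_classes: int) -> nn.Module:
    """Load ImageNet-pretrained AlexNet and adapt it to a new task.

    num_classes would be 2 for dataset A2 (benign vs. malignant); the
    freezing strategy below is an assumption, not the paper's exact recipe.
    """
    model = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)

    # Freeze the convolutional feature extractor so only the head is trained.
    for param in model.features.parameters():
        param.requires_grad = False

    # Replace the final fully connected layer to match the dataset's classes.
    model.classifier[6] = nn.Linear(model.classifier[6].in_features, num_classes)
    return model

# Example: a two-class model for the two-class variant A2.
model_a2 = build_alexnet(num_classes=2)
```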
5G networks are highly distributed and built on an open, service-based architecture that requires multi-vendor hardware and software development environments, all of which create a larger attack surface than in proprietary fixed-function networks. Cloud-native architectures also present new security challenges: they decompose monolithic virtual machines into microservice pods, resulting in higher volumes of signaling and communication flowing through and between microservices. In addition, the secure connections of monolithic applications have been replaced by untrusted communication between microservice pods, requiring additional cybersecurity capabilities. Access control systems were created to provide reliability and limit access to an organization's assets. However, because technology constantly evolves and environments are dynamic, these conventional systems cannot adequately protect an organization's information, as they were designed to handle access control for known users. For 5G-based cloud-native technology, access control must be taken further by implementing a Zero Trust model to secure essential assets for all users within the system. Zero Trust is implemented in an access control system under the principle "Never Trust, Always Verify". In this paper, we implement zero trust as a factor within access control systems by combining the principles of access control and zero-trust security, factoring the user's historical behavior and recommendations into access decisions.
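A minimal sketch of how historical behavior and recommendations might be combined into a single trust factor for an access decision; the weights, threshold, and score ranges below are hypothetical illustrations, not the paper's parameters:

```python
from dataclasses import dataclass

@dataclass
class TrustInputs:
    history_score: float         # 0.0-1.0, derived from the user's past behavior
    recommendation_score: float  # 0.0-1.0, aggregated peer/system recommendations

# Hypothetical weights and threshold; a real deployment would calibrate these.
W_HISTORY, W_RECOMMENDATION = 0.6, 0.4
TRUST_THRESHOLD = 0.7

def access_decision(inputs: TrustInputs) -> bool:
    """Never Trust, Always Verify: re-score every request and grant access
    only if the combined trust score clears the threshold."""
    score = (W_HISTORY * inputs.history_score
             + W_RECOMMENDATION * inputs.recommendation_score)
    return score >= TRUST_THRESHOLD

# Example: strong history but weak recommendations (0.66) is still denied.
print(access_decision(TrustInputs(history_score=0.9, recommendation_score=0.3)))
```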
The Urdu language is spoken and written on social media platforms such as Twitter, WhatsApp, Facebook, and YouTube. However, due to the lack of Urdu Language Processing (ULP) libraries, it is quite challenging to identify threats in textual and sequential Urdu data on social media. It is therefore necessary to preprocess Urdu data as efficiently as English by creating stemming and data cleaning libraries for Urdu. Various lexical and machine learning-based techniques have been introduced in the literature, but all of them are limited by the unavailability of an online Urdu vocabulary. This research introduces an Urdu language vocabulary, including a stop words list and a stemming dictionary, to preprocess Urdu data as efficiently as English. This reduces the input size of Urdu sentences and removes redundant and noisy information. Finally, a deep sequential model based on Long Short-Term Memory (LSTM) units is trained, evaluated, and tested on the efficiently preprocessed data. The proposed methodology achieves good prediction performance, i.e., an accuracy of 82%, which exceeds that of existing methods.
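A minimal sketch of the preprocessing-plus-LSTM pipeline described above, assuming Keras; the tiny stop-word set, stemming dictionary, and model hyperparameters are illustrative placeholders for the much larger resources the paper introduces:

```python
from tensorflow.keras import layers, models

# Tiny stand-ins for the paper's Urdu stop-word list and stemming dictionary.
URDU_STOP_WORDS = {"اور", "کا", "کی"}        # common function words ("aur", "ka", "ki")
URDU_STEM_DICT = {"کتابیں": "کتاب"}          # e.g. "books" -> "book"

def preprocess(tokens: list[str]) -> list[str]:
    """Drop stop words, then map each remaining token to its stem."""
    return [URDU_STEM_DICT.get(t, t) for t in tokens if t not in URDU_STOP_WORDS]

# Sequential LSTM classifier; vocabulary size and dimensions are assumptions.
model = models.Sequential([
    layers.Embedding(input_dim=20_000, output_dim=128),
    layers.LSTM(64),
    layers.Dense(1, activation="sigmoid"),  # threat vs. non-threat
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```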
Web 2.0 has entirely revolutionized the web and the ways people use it by bringing enhancements in information discovery, retrieval, and aggregation. It has also brought improvements in various aspects such as content control, content structuring, web technologies, applications, communication, marketing, and selling. On the one hand, these improvements provide extensive benefits; on the other hand, they compromise fundamental requirements of security and confidentiality. The vulnerabilities and security risks associated with these enhancements are becoming major threats to organizations and laypersons alike. In this paper, we discuss the features and improvements brought by Web 2.0 and the security risks associated with them, setting a roadmap toward the necessity of Web 3.0.
Automatic information extraction from online published scientific documents is useful in various applications such as tagging, web indexing, and search engine optimization. As a result, it has become one of the most active research areas in text mining. Although various information extraction techniques have been proposed in the literature, their efficiency depends on domain-specific documents with a static, well-defined format, and their accuracy degrades with even slight modifications to that format. To overcome these issues, a novel ontological framework for information extraction (OFIE) using a fuzzy rule-base (FRB) and word sense disambiguation (WSD) is proposed. The proposed approach is validated on a significantly wider range of document domains sourced from well-known publishing services such as IEEE, ACM, Elsevier, and Springer, and is compared against state-of-the-art techniques. The experimental results show that the proposed approach is less sensitive to changes in document format and achieves a significantly better average accuracy of 89.14% and an F-score of 89%.
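The abstract does not detail the fuzzy rule-base, but the WSD component can be illustrated with a classic Lesk-style disambiguation; this sketch uses NLTK's lesk as an assumed stand-in, not necessarily the authors' exact WSD method:

```python
import nltk
from nltk.wsd import lesk

nltk.download("wordnet", quiet=True)

# Lesk picks the WordNet sense whose gloss best overlaps the context words,
# here disambiguating "abstract" in a document-metadata setting.
context = "The abstract of the paper summarizes its contribution".split()
sense = lesk(context, "abstract", pos="n")
print(sense, "-", sense.definition())
```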
Throughout the course of their careers, many employees encounter a variety of normal life events that can affect their performance and their satisfaction with the job environment. Under these circumstances, some employees are compelled to leave their positions, not because of issues with their work environment but because of personal circumstances. This is what we refer to as employee attrition, a pervasive phenomenon affecting numerous businesses and work environments today. Employee attrition can occur in a variety of ways and be driven by several factors. We therefore look into various conditions, causes, indicators, and entities in order to study the attrition problem and determine possible answers and improvements. This study investigates employee attrition based on the IBM data repository, together with machine learning-based experimental analysis and knowledge discovery, to identify the most effective approaches to the attrition problem. The experimental data contained 1,471 records with 35 distinct characteristics. We employed the Naïve Bayes classifier using the WEKA data mining software, and adequate knowledge was extracted to better understand employee turnover and the facets of work performance.
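The study ran Naïve Bayes in WEKA; the following is an analogous sketch in Python with scikit-learn, a substitute toolkit rather than the study's setup, with a hypothetical CSV filename standing in for the IBM attrition export:

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

# Hypothetical filename; the IBM HR attrition data is distributed as a CSV
# with an "Attrition" (Yes/No) target column among its attributes.
df = pd.read_csv("IBM-HR-Employee-Attrition.csv")

y = (df.pop("Attrition") == "Yes").astype(int)
X = pd.get_dummies(df)  # one-hot encode the categorical attributes

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y
)

clf = GaussianNB().fit(X_train, y_train)
print("Accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```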