With the development of information technology, computer education has received increasing attention. However, even in the face of the latest developments in computer technology, many problems remain in the teaching of computer majors. Intelligent information processing technology is a modern technology that can collect and organize data and make it clear. This paper studies the application of intelligent information processing technology in the teaching reform of the computer major. In this research, we apply principal component analysis, a method of intelligent information processing technology, together with questionnaire surveys and literature analysis, to study the teaching and reform of the computer major, sort out the key problems, and put forward reform suggestions. The study finds that there are many problems in computer science teaching, of which the most prominent are backward teaching methods and low teaching efficiency, accounting for 31%. In addition, several other problems deserve attention. Thus, applying intelligent information processing technology to the teaching reform of the computer major can make the data clear, which helps us identify the key problems and provides a basis for better solutions.
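As a minimal sketch of the principal component analysis method mentioned above, the following applies PCA to hypothetical questionnaire data. The data, the 2-component choice, and all variable names are illustrative assumptions, not details from the study.

```python
# A minimal PCA sketch on made-up survey responses: center the data,
# eigen-decompose the covariance matrix, and project onto the top components.
import numpy as np

def pca(X, n_components=2):
    """Project the rows of X onto the top principal components."""
    # Center each questionnaire item (column) around its mean.
    Xc = X - X.mean(axis=0)
    # Eigen-decompose the covariance matrix of the items.
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    # eigh returns eigenvalues in ascending order; take the largest ones.
    order = np.argsort(eigvals)[::-1][:n_components]
    components = eigvecs[:, order]
    explained = eigvals[order] / eigvals.sum()
    return Xc @ components, explained

# Hypothetical responses: 100 students x 5 survey items on a 1-5 scale.
rng = np.random.default_rng(0)
X = rng.integers(1, 6, size=(100, 5)).astype(float)
scores, explained = pca(X, n_components=2)
print(scores.shape)  # (100, 2)
```

The explained-variance ratios indicate how much of the survey's variation the leading components capture, which is how PCA helps "sort out the key problems" from many correlated questionnaire items.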
With the rapid development of the World Wide Web, a huge amount of data has been growing exponentially in our daily life, and users spend much more time than before searching for the information they really need. Even when they enter exactly the same search query, different users may have different goals. Moreover, users commonly annotate information resources or formulate search queries according to their own behaviors. In practice, this process yields fuzzy results and is time-consuming. To address these problems, we propose a methodology that combines a user's context, profile, and folksonomies to optimize personalized search. At the end of this paper, we report an experiment evaluating our methodology, from which we conclude that our approach performs better than the other samples.
This paper introduces four comprehensive evaluation methods, namely AHP, the entropy value method, TOPSIS, and the fuzzy comprehensive evaluation method, and analyzes their advantages and disadvantages from the perspectives of subjectivity and objectivity, acceptance by decision makers, and difficulty of evaluation. The entropy value method depends on the amount of initial data, AHP results tend to be overly subjective, TOPSIS conclusions may not fit reality well, and fuzzy comprehensive evaluation may yield poor resolution. Finally, suggestions are put forward on the scope of application of the four methods, whether used singly or in combination. It is believed that the methods can be combined in pairs or triples, such as entropy weight-TOPSIS, AHP-TOPSIS, or entropy weight-AHP-fuzzy comprehensive evaluation, to avoid the limitations of any single evaluation method.
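The entropy weight-TOPSIS combination suggested above can be sketched as follows. The 4x3 decision matrix is made-up illustrative data (4 alternatives scored on 3 benefit criteria) and is not from the paper; entropy weighting supplies objective criterion weights, and TOPSIS ranks alternatives by closeness to the ideal solution.

```python
# A minimal entropy weight-TOPSIS sketch on a hypothetical decision matrix.
import numpy as np

def entropy_weights(X):
    """Objective criterion weights from the Shannon entropy of the data."""
    P = X / X.sum(axis=0)                      # column-wise proportions
    m = X.shape[0]
    logs = np.where(P > 0, np.log(np.where(P > 0, P, 1.0)), 0.0)
    e = -(P * logs).sum(axis=0) / np.log(m)    # entropy per criterion
    d = 1.0 - e                                # degree of divergence
    return d / d.sum()

def topsis(X, w):
    """Closeness of each alternative to the ideal solution (benefit criteria)."""
    R = X / np.linalg.norm(X, axis=0)          # vector normalization
    V = R * w                                  # weighted normalized matrix
    best, worst = V.max(axis=0), V.min(axis=0)
    d_best = np.linalg.norm(V - best, axis=1)
    d_worst = np.linalg.norm(V - worst, axis=1)
    return d_worst / (d_best + d_worst)

X = np.array([[7.0, 9.0, 9.0],
              [8.0, 7.0, 8.0],
              [9.0, 6.0, 8.0],
              [6.0, 7.0, 8.0]])
w = entropy_weights(X)
scores = topsis(X, w)
print(np.argsort(scores)[::-1])  # ranking, best alternative first
```

Here the entropy weights replace AHP's subjective pairwise judgments, which is exactly the limitation-offsetting pairing the abstract recommends.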
Lung cancer is one of the most common malignant tumors in humans, and lung adenocarcinoma is one of the most common types of lung cancer. In clinical medicine, physicians rely on the information provided by pathology tests as an important reference for the final diagnosis of many diseases; pathological diagnosis is therefore known as the gold standard of disease diagnosis. However, the complexity of the information contained in pathology images, together with growth in the number of patients that far exceeds the number of pathologists, poses a challenge, especially in the treatment of lung cancer in less-developed countries. This paper proposes a multilayer perceptron model for lung cancer histopathology image detection that enables automatic detection of the degree of lung adenocarcinoma infiltration. To handle the large amount of local information present in lung cancer histopathology images, MLP-in-MLP (MIM) uses a dual data-stream input method to combine global and local information and improve the classification performance of the model. In our experiments, we collected 780 lung cancer histopathological images and prepared a lung histopathology image dataset to verify the effectiveness of MIM. MIM achieves a diagnostic accuracy of 95.31%, with a precision of 93.09%, sensitivity of 93.10%, specificity of 96.43%, and F1-score of 93.10%, outperforming the diagnostic results of common network models. In addition, a series of extension experiments demonstrated the scalability and stability of MIM. In summary, MIM has high classification performance and substantial potential in lung cancer detection tasks.
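The dual data-stream idea can be sketched in miniature as follows: one MLP branch summarizes a global image descriptor, another runs a shared MLP over local patches and pools the result, and the two streams are fused before classification. The layer sizes, the fusion-by-concatenation choice, and all weights here are illustrative assumptions; the paper's actual MLP-in-MLP architecture is more involved.

```python
# A minimal numpy sketch of a dual-stream (global + local) MLP classifier.
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, W1, b1, W2, b2):
    """Two-layer perceptron with ReLU activation."""
    h = np.maximum(x @ W1 + b1, 0.0)
    return h @ W2 + b2

# Hypothetical inputs: a 64-d global descriptor and 16 local patches, 32-d each.
global_feat = rng.normal(size=(1, 64))
local_patches = rng.normal(size=(16, 32))

# Global stream: 64 -> 32 -> 16.
Wg1, bg1 = rng.normal(size=(64, 32)), np.zeros(32)
Wg2, bg2 = rng.normal(size=(32, 16)), np.zeros(16)
g = mlp(global_feat, Wg1, bg1, Wg2, bg2)

# Local stream: shared MLP over every patch, then average-pool across patches.
Wl1, bl1 = rng.normal(size=(32, 32)), np.zeros(32)
Wl2, bl2 = rng.normal(size=(32, 16)), np.zeros(16)
l = mlp(local_patches, Wl1, bl1, Wl2, bl2).mean(axis=0, keepdims=True)

# Fuse the two streams and classify into 3 hypothetical infiltration grades.
fused = np.concatenate([g, l], axis=1)            # shape (1, 32)
Wc, bc = rng.normal(size=(32, 3)), np.zeros(3)
logits = fused @ Wc + bc
probs = np.exp(logits - logits.max())
probs = probs / probs.sum()                       # softmax over grades
print(probs.shape)  # (1, 3)
```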
The advancement of autonomous driving technology is becoming increasingly vital in the modern technological landscape, promising notable enhancements in safety, efficiency, traffic management, and energy use. Despite these benefits, conventional deep reinforcement learning algorithms often struggle to navigate complex driving environments effectively. To tackle this challenge, we propose a novel network called DynamicNoise, designed to significantly boost algorithmic performance by introducing noise into the Deep Q-Network (DQN) and Double Deep Q-Network (DDQN). Drawing inspiration from the NoisyNet architecture, DynamicNoise uses stochastic perturbations to improve the exploration capabilities of these models, leading to more robust learning outcomes. Our experiments demonstrate a 57.25% improvement in navigation effectiveness within a 2D experimental setting. Moreover, by integrating noise into the action selection and fully connected layers of the Soft Actor-Critic (SAC) model in the more complex 3D CARLA simulation environment, our approach achieved an 18.9% performance gain, substantially surpassing traditional methods. These results confirm that the DynamicNoise network significantly enhances the performance of autonomous driving systems across various simulated environments, regardless of their dimensionality and complexity, by improving their exploration capabilities rather than just their efficiency.
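The noise-injection idea can be illustrated with a NoisyNet-style linear layer: the layer's weights carry learnable Gaussian noise scales, so exploration comes from the network's own stochastic forward pass rather than from epsilon-greedy action selection. The layer sizes, the factorized-noise form, and the initialization below are illustrative assumptions, not the DynamicNoise implementation itself.

```python
# A minimal sketch of a noisy linear layer with factorized Gaussian noise.
import numpy as np

rng = np.random.default_rng(42)

class NoisyLinear:
    def __init__(self, in_dim, out_dim, sigma0=0.5):
        bound = 1.0 / np.sqrt(in_dim)
        # Learnable mean parameters and per-weight noise scales.
        self.w_mu = rng.uniform(-bound, bound, size=(in_dim, out_dim))
        self.w_sigma = np.full((in_dim, out_dim), sigma0 * bound)
        self.b_mu = rng.uniform(-bound, bound, size=out_dim)
        self.b_sigma = np.full(out_dim, sigma0 * bound)
        self.in_dim, self.out_dim = in_dim, out_dim

    @staticmethod
    def _f(x):
        # Factorized-noise transform: sign(x) * sqrt(|x|).
        return np.sign(x) * np.sqrt(np.abs(x))

    def forward(self, x, noisy=True):
        if not noisy:                      # evaluation: use the means only
            return x @ self.w_mu + self.b_mu
        eps_in = self._f(rng.normal(size=self.in_dim))
        eps_out = self._f(rng.normal(size=self.out_dim))
        w = self.w_mu + self.w_sigma * np.outer(eps_in, eps_out)
        b = self.b_mu + self.b_sigma * eps_out
        return x @ w + b

layer = NoisyLinear(8, 4)                 # e.g. 8 state features, 4 actions
x = rng.normal(size=(1, 8))
q_noisy = layer.forward(x)                # stochastic pass drives exploration
q_mean = layer.forward(x, noisy=False)    # deterministic pass for evaluation
```

In a DQN or SAC head built from such layers, resampling the noise each step perturbs the Q-values (and hence the greedy action), which is the mechanism by which injected noise improves exploration.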
In recent years, with the rapid development of the World Wide Web, a huge amount of data has arisen in our daily life, and the key question is how to assist users in searching for the information they need. When facing the same resources, users commonly annotate them or input search queries according to their own perspectives, a process that has already been shown to be time-consuming. With this goal in mind, we propose a methodology that combines a user's context, profile, and folksonomies to optimize personalized search. At the end of this paper, we report an experiment evaluating our methodology, which shows that it performs better.
The rise of renewable energy has driven the widespread use of large-scale energy storage batteries, which makes the risk of overheating more threatening. To ensure battery safety, it is essential to build a monitoring system capable of comprehensively evaluating large quantities of batteries. However, existing battery management systems exhibit significant limitations in monitoring scope, analytical precision, and transmission efficiency. As an applicable solution, cloud-edge technology is an advanced integrated approach that provides low-latency data access, accurate analysis capabilities, and adjustable monitoring ranges. In this work, the Kubernetes-orchestrated battery monitoring platform (KBMP), which integrates Kubernetes and cloud-edge technology, is proposed to provide comprehensive battery management. Specifically, Kubernetes is used to ensure low latency in data transmission and analysis, while the K-Means clustering algorithm is applied to provide accurate thermal runaway (TR) warnings. To validate the performance of KBMP, four sets of real battery TR data are fed in to test its accuracy and latency. The experimental findings reveal that KBMP is capable of providing battery TR warnings up to 30 min in advance. Additionally, the platform decreases data transmission latency by up to 20% and reduces replica scaling latency by 50% compared to a platform without Kubernetes.
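The K-Means-based warning idea can be sketched as follows: cluster per-battery thermal features and flag the cluster whose mean temperature is highest as the TR-warning group. The synthetic temperature features, the k=2 choice, and the "hot cluster = warning" rule are illustrative assumptions, not KBMP's actual pipeline.

```python
# A minimal K-Means sketch for flagging anomalous battery thermal behavior.
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each sample to its nearest center.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute each center as its cluster mean (keep old center if empty).
        centers = np.array([X[labels == j].mean(axis=0)
                            if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return labels, centers

# Hypothetical features per battery: [mean temperature (C), temp rise rate (C/min)].
rng = np.random.default_rng(1)
normal = rng.normal([30.0, 0.1], [2.0, 0.05], size=(40, 2))
runaway = rng.normal([55.0, 2.0], [3.0, 0.3], size=(5, 2))
X = np.vstack([normal, runaway])

labels, centers = kmeans(X, k=2)
hot = centers[:, 0].argmax()           # cluster with the higher mean temperature
warn = np.where(labels == hot)[0]      # battery indices to flag for TR warning
print(len(warn))
```

In a deployment like the one described, this clustering would run at the edge over streaming telemetry, with Kubernetes keeping the analysis replicas responsive as the monitored fleet grows.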