Moving the Needle on Rigorous Floating-Point Precision Tuning
Abstract:
Virtually all real-valued computations are carried out using floating-point data types and operations. With increasing emphasis on overall computational efficiency, compilers are increasingly attempting to optimize floating-point expressions. Practical reasoning about the correctness of these optimizations requires error analysis procedures that are rigorous (ideally, they can generate proof certificates), can handle a wide variety of operators (e.g., transcendentals), and handle all normal programmatic constructs (e.g., conditionals and loops). Unfortunately, none of today’s approaches can achieve this combination. This position paper summarizes recent progress achieved in the community on this topic. It then showcases the component techniques present within our own rigorous floating-point precision tuning framework called FPTuner—essentially offering a collection of “grab and go” tools that others can benefit from. Finally, we present FPTuner’s limitations and describe how we can exploit contemporaneous research to improve it.
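
To make the flavor of such rigorous error analysis concrete, here is a minimal sketch of the standard first-order rounding model, fl(a op b) = (a op b)(1 + d) with |d| <= u, propagated over interval-bounded inputs. The names and the example expression are ours for illustration; this is not FPTuner's actual machinery, which additionally solves a global optimization problem over per-operation precision assignments.

    # First-order rounding-error propagation under the standard model
    # fl(a op b) = (a op b)(1 + d), |d| <= u. Illustrative sketch only.
    from dataclasses import dataclass

    U32 = 2.0 ** -24  # unit roundoff, binary32
    U64 = 2.0 ** -53  # unit roundoff, binary64

    @dataclass
    class Bounded:
        """Value known to lie in [lo, hi], with accumulated absolute error err."""
        lo: float
        hi: float
        err: float = 0.0

        def mag(self) -> float:
            return max(abs(self.lo), abs(self.hi))

    def add(a: Bounded, b: Bounded, u: float) -> Bounded:
        lo, hi = a.lo + b.lo, a.hi + b.hi
        # propagated error plus one fresh rounding of magnitude u * |result|
        err = a.err + b.err + u * max(abs(lo), abs(hi))
        return Bounded(lo, hi, err)

    def mul(a: Bounded, b: Bounded, u: float) -> Bounded:
        cands = [a.lo * b.lo, a.lo * b.hi, a.hi * b.lo, a.hi * b.hi]
        lo, hi = min(cands), max(cands)
        # first-order: drops the err_a * err_b and err * u cross terms
        err = a.err * b.mag() + b.err * a.mag() + u * max(abs(lo), abs(hi))
        return Bounded(lo, hi, err)

    # Bound the rounding error of x*y + z for x, y, z in [1, 2]:
    x, y, z = Bounded(1, 2), Bounded(1, 2), Bounded(1, 2)
    for name, u in (("binary32", U32), ("binary64", U64)):
        r = add(mul(x, y, u), z, u)
        print(f"{name} error bound for x*y + z: {r.err:.3e}")

A precision tuner essentially asks the reverse question: which mix of u values per operation keeps the final error bound below a user-given threshold at the lowest cost.
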
Analog mixed-signal (AMS) devices promise faster, more energy-efficient deep neural network (DNN) inference than their digital counterparts. However, recent studies show that DNNs on AMS devices with fixed-point numbers can incur an accuracy penalty because of precision loss. To mitigate this penalty, we present a novel AMS-compatible adaptive block floating-point (ABFP) number representation. We also introduce amplification (or gain) as a method for increasing the accuracy of the number representation without increasing the bit precision of the output. We evaluate the effectiveness of ABFP on the DNNs in the MLPerf datacenter inference benchmark, realizing less than 1% loss in accuracy compared to FLOAT32. We also propose a novel method of finetuning for AMS devices, Differential Noise Finetuning (DNF), which samples device noise to speed up finetuning compared to conventional Quantization-Aware Training.
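
A minimal NumPy sketch of the two ingredients, with the block size, bit widths, and ADC step chosen by us for illustration rather than taken from the paper: each block shares one exponent over integer mantissas, and the output is amplified before a fixed-precision converter and attenuated after it, so accuracy improves without extra output bits.

    # Block floating-point quantization plus output gain: illustrative sketch.
    import numpy as np

    def bfp_quantize(v: np.ndarray, mant_bits: int = 8) -> np.ndarray:
        """Round a block to integer mantissas under one shared exponent."""
        max_mag = float(np.max(np.abs(v)))
        if max_mag == 0.0:
            return v
        shared_exp = np.floor(np.log2(max_mag))
        step = 2.0 ** (shared_exp - (mant_bits - 1))
        mant = np.clip(np.round(v / step),
                       -2 ** (mant_bits - 1), 2 ** (mant_bits - 1) - 1)
        return mant * step

    def adc(y, step: float = 2.0 ** -6):
        """Fixed-precision output converter: the bit budget gain works around."""
        return np.round(y / step) * step

    def dot_with_gain(w: np.ndarray, x: np.ndarray, gain: float) -> float:
        """Analog-style dot product: BFP inputs, gained fixed-precision output."""
        y = bfp_quantize(w) @ bfp_quantize(x)
        return float(adc(y * gain) / gain)  # amplify before the ADC, divide after

    rng = np.random.default_rng(0)
    w, x = rng.normal(size=64) * 0.05, rng.normal(size=64) * 0.05
    exact = float(w @ x)
    for g in (1.0, 8.0):
        print(f"gain {g}: output error {abs(dot_with_gain(w, x, g) - exact):.2e}")
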
Recently, more and more software vulnerabilities have been disclosed, and researchers have turned to automatically discovering and exploiting them. The main challenges of automated exploit generation are: 1) it is hard to analyze a program failure and extract useful information from it, and 2) vulnerability scenarios are often too complex to exploit successfully. This paper therefore proposes AEG-E, a vulnerability exploit generation framework. AEG-E extracts the control flow graph from the target program and employs a crash-reproduction algorithm in symbolic execution to reduce the path-explosion problem. To adapt to complex vulnerability scenarios, we design an extendable, user-configurable exploit model to generate different exploits. Finally, we use binaries from Robo Hacking Games and real-world programs to demonstrate the validity and efficiency of AEG-E. The experimental results show that AEG-E is 2.913 times more efficient than REX, a previous exploit generation tool.
The purpose of this paper is to advise an approach (and to support that advice by discussion of an example) towards achieving a goal first announced by John McCarthy: that compilers for higher-level programming languages should be made completely trustworthy by proving their correctness. The author believes that the compiler-correctness problem can be made much less general and better-structured than the unrestricted program-correctness problem; to do so will of course entail restricting what a compiler may be.
Host-based intrusion-prevention systems are a recently popular technology that protects computer systems from malicious attacks. Instead of merely detecting exploits, these systems attempt to prevent the exploits from succeeding on the host they protect. This research explores the threats that have led to the development of these systems and the techniques many use to counter those problems. The author then evaluates two current intrusion-prevention products (McAfee Entercept and the Cisco Security Agent) as to their success in preventing exploits. His tests used live viruses, worms, Trojan horses, and remote exploits that were turned loose on an isolated two-computer network. The author then makes recommendations about deployment of the two products based on the results of this testing. Testing procedures for the remote exploit, e-mail exploit, disk exploit, and web phase exploit are appended.
The automatic exploit generation challenge is: given a program, automatically find vulnerabilities and generate exploits for them. In this paper we present AEG, the first end-to-end system for fully automatic exploit generation. We used AEG to analyze 14 open-source projects and successfully generated 16 control flow hijacking exploits. Two of the generated exploits (expect-5.43 and htget-0.93) are zero-day exploits against unknown vulnerabilities. Our contributions are: 1) we show how exploit generation for control flow hijack attacks can be modeled as a formal verification problem, 2) we propose preconditioned symbolic execution, a novel technique for targeting symbolic execution, 3) we present a general approach for generating working exploits once a bug is found, and 4) we build the first end-to-end system that automatically finds vulnerabilities and generates exploits that produce a shell.
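
A toy Z3 sketch of the preconditioned-symbolic-execution idea, not AEG's implementation: the "program" here is a hypothetical length-guarded memcpy of our own invention, and the precondition simply narrows the inputs the solver must consider before it searches for one satisfying the vulnerable path condition.

    # Preconditioned symbolic execution in miniature, using the Z3 solver.
    from z3 import And, BitVec, Solver, sat

    n = BitVec("n", 32)  # symbolic input: a length field
    BUF_SIZE = 64        # hypothetical destination buffer size

    s = Solver()
    s.add(And(n >= 0, n <= 256))  # precondition: only plausible input sizes
    s.add(n > BUF_SIZE)           # path condition reaching memcpy(buf, src, n)

    if s.check() == sat:
        print("overflowing length:", s.model()[n])  # a concrete crashing input
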
There has recently been great progress in proving the correctness of compilers for increasingly realistic languages with increasingly realistic runtime systems. Most work on this problem has focused on proving the correctness of a particular compiler, leaving open the question of how to verify the correctness of assembly code that is hand-optimized or linked together from the output of multiple compilers. This has led Benton and other researchers to propose more abstract, compositional notions of when a low-level program correctly realizes a high-level one. However, the state of the art in so-called "compositional compiler correctness" has only considered relatively simple high-level and low-level languages.
Most of today's numerical computations are performed using floating-point operations to represent real numbers. The precision of the related data types should be adapted in order to guarantee a desired overall rounding error and to strengthen the performance of programs. This adaptation is the goal of precision tuning, which consists of finding the smallest floating-point formats that enable a program to compute results to a given accuracy requirement. Precision tuning is important for several applications from a wide variety of domains, including embedded systems and the Internet of Things (IoT). In this work, we evaluate the performance of POP, our static analysis tool, in several ways. First, we compare two ways of optimizing programs in POP through the definition of different cost functions given to the solver. Second, we measure the runtime error between the exact results, given by an execution in multiple precision, and the results of programs tuned by POP, and show for several examples that the measured error is always below the user-given accuracy requirement. Third, we provide a detailed comparison of POP against the prior state-of-the-art tool, Precimonious.
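
The question POP answers can be illustrated with a small mpmath sketch of ours: find the smallest precision at which a computation still meets a user-given accuracy requirement. POP itself assigns formats per variable via a solver rather than searching a single uniform precision as done here.

    # Smallest uniform precision meeting an accuracy requirement: a sketch.
    from mpmath import mpf, workprec

    def kernel():
        """Toy computation standing in for a program's numerical kernel."""
        s = mpf(0)
        for i in range(1, 1000):
            s += mpf(1) / (mpf(i) * mpf(i))
        return s

    with workprec(200):        # treat the 200-bit result as the exact reference
        exact = kernel()

    requirement = mpf("1e-6")  # user-given accuracy requirement
    for bits in range(5, 54):
        with workprec(bits):
            approx = kernel()
        if abs(approx - exact) <= requirement:
            print(f"smallest sufficient precision: {bits} bits")
            break
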
Modern operating systems deploy exploit mitigations to thwart exploits, which has also become a barrier to automated exploit generation (AEG). Many current AEG solutions do not fully account for exploit mitigations and, as a result, are unable to accurately assess the exploitability of vulnerabilities in such settings. This paper proposes AEMB, an automated solution for bypassing exploit mitigations and generating usable exploits (EXPs). Initially, AEMB identifies the exploit mitigations present in the system based on characteristics of the program execution environment. Then, AEMB generates mitigation-bypassing payloads by modeling expert experience and constructing the corresponding constraints. Next, during the program's execution, AEMB uses symbolic execution to collect symbolic information and create exploit constraints. Finally, AEMB uses a solver to solve the constraints, including payload constraints and exploit constraints, to generate the EXP. In this paper, we evaluated a prototype of AEMB on six test programs and seven real-world applications, and conducted 54 sets of experiments on six different combinations of exploit mitigations. The results indicate that AEMB can automatically overcome exploit mitigations and produce successful exploits for 11 of the 13 applications.
Reasoning about floating-point is difficult and becomes only more so if there is an interplay between floating-point and bit-level operations. Even though real-world floating-point libraries use implementations that have such mixed computations, no systematic technique to verify the correctness of the implementations of such computations is known. In this paper, we present the first general technique for verifying the correctness of mixed binaries, which combines abstraction, analytical optimization, and testing. The technique provides a method to compute an error bound of a given implementation with respect to its mathematical specification. We apply our technique to Intel's implementations of transcendental functions and prove formal error bounds for these widely used routines.
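
The testing leg of such a technique can be sketched as follows (our sketch; because it samples, it yields only a lower bound on the true error, not the proved bound the paper targets): compare a binary32 implementation of a transcendental against a high-precision reference and report the worst observed error in ULPs.

    # Sampled ULP error of a float32 sine against a high-precision reference.
    import math
    import numpy as np
    from mpmath import mp, mpf, sin as mpsin

    mp.prec = 200  # reference precision far beyond binary32

    def ulp32(x: float) -> float:
        """Size of one unit in the last place at x, in binary32."""
        return float(np.spacing(np.float32(x)))

    worst = 0.0
    for x in np.linspace(-math.pi, math.pi, 10_000, dtype=np.float32):
        impl = float(np.sin(x))            # implementation under test
        ref = float(mpsin(mpf(float(x))))  # reference at the same input
        worst = max(worst, abs(impl - ref) / ulp32(ref))
    print(f"worst observed error: {worst:.2f} ULPs (lower bound only)")
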