Aspect-Oriented Programming (AOP) aims to ease maintenance and promote reuse of software components by separating core concerns from crosscutting concerns: aspects of a program that cannot be confined to a single component. In AOP languages such as AspectJ, each such concern is encapsulated in an aspect and connected to the main classes using pointcuts. This removes extraneous code from the classes of the program, allowing them to focus on their core purpose and making them more maintainable and reusable. The implementation of each crosscutting concern, which would have been scattered throughout an object-oriented program, is centralised in a single aspect. Unfortunately, because of the way aspects and classes are associated, and the lack of an explicit interface between them, aspects may be tightly coupled to the classes of the program and so may not be as reusable or maintainable as might be expected. We propose ParaAJ (Parametric Aspects), an extension to AspectJ. ParaAJ allows classes to specify which aspects should be applied, and allows applications to specify which aspects to apply to which classes and in what ways. This makes it easier for classes and aspects to be developed and maintained independently, and encourages reuse of both.
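As a loose analogy only (ParaAJ extends AspectJ, which the sketch below does not use, and all names here are hypothetical), the core idea of writing a crosscutting concern once, parameterising it, and letting the application rather than the class decide where it applies can be illustrated with Python decorators:

```python
import functools

def logging_aspect(prefix):
    """A 'parametric aspect' analogue: the crosscutting concern (logging)
    is written once, with the prefix left as a parameter supplied when
    the aspect is applied to a class."""
    def apply_to(cls):
        for name, attr in list(vars(cls).items()):
            if callable(attr) and not name.startswith("_"):
                def make_wrapper(method, mname):
                    @functools.wraps(method)
                    def wrapper(self, *args, **kwargs):
                        print(f"{prefix}: entering {cls.__name__}.{mname}")
                        return method(self, *args, **kwargs)
                    return wrapper
                setattr(cls, name, make_wrapper(attr, name))
        return cls
    return apply_to

# The application, not the class, decides which aspect to apply and how:
@logging_aspect(prefix="AUDIT")
class Account:
    def deposit(self, amount):
        return amount

Account().deposit(10)   # prints "AUDIT: entering Account.deposit"
```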
Zones and Octagons are popular abstract domains for static program analysis. They enable the automated discovery of simple numerical relations that hold between pairs of program variables. Both domains are well understood mathematically, but the detailed implementation of static analyses based on them poses many interesting algorithmic challenges. In this article, we study the two abstract domains, their implementation, and their use. Utilizing improved data structures and algorithms for the manipulation of graphs that represent difference-bound constraints, we present fast implementations of both abstract domains, built around a common infrastructure. We compare these implementations against alternative approaches offering the same precision, measuring speed and precision on standard benchmarks, and we assess, in the context of software verification, the extent to which improved precision translates to better verification outcomes. Experiments demonstrate that our new implementations significantly improve the state of the art for both Zones and Octagons.
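To make the graph view concrete, here is a minimal sketch (the textbook algorithm, not the improved data structures the article develops): a set of constraints of the form xi - xj <= c is stored as a weighted adjacency matrix, and the tightest implied bounds are exactly the all-pairs shortest paths, computable by Floyd-Warshall, with a negative cycle signalling infeasibility:

```python
INF = float("inf")

def close(dbm):
    """Floyd-Warshall closure of an n x n difference-bound matrix, where
    dbm[i][j] = c records the constraint xi - xj <= c (INF if absent).
    Returns None if the system is infeasible (a negative cycle exists)."""
    n = len(dbm)
    d = [row[:] for row in dbm]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    if any(d[i][i] < 0 for i in range(n)):
        return None
    return d

# x0 - x1 <= 2, x1 - x2 <= 3, x0 - x2 <= 10: closure tightens the
# last bound to x0 - x2 <= 5.
dbm = [[0, 2, 10],
       [INF, 0, 3],
       [INF, INF, 0]]
print(close(dbm)[0])  # [0, 2, 5]
```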
The subject of groundness analysis for (constraint) logic programs has been widely studied, and interesting domains have been proposed. Pos has been recognized as the most suitable domain for capturing the kind of dependencies arising in groundness analysis, and Reduced Ordered Binary Decision Diagrams (ROBDDs) are generally accepted to be the most efficient representation for Pos. Unfortunately, the size of an ROBDD is, in the worst case, exponential in the number of variables it depends upon. Earlier work [2] has shown that a hybrid representation that separates the definite information from the dependency information is considerably more efficient than keeping the two together. The aim of the present paper is to push this idea further, also separating out certain dependency information, namely all pairs of variables that are always either both ground or neither ground. We find that this new hybrid representation is a significant improvement over previous work.
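As an illustration of the factoring (names are hypothetical, and the residual ROBDD component for the remaining dependencies is omitted), a hybrid Pos element can keep a set of definitely-ground variables alongside a union-find structure over pairs of variables that are ground together or not at all:

```python
class HybridPos:
    """Schematic sketch of the hybrid representation: definite groundness
    plus groundness-equivalence classes; residual dependencies would still
    need an ROBDD and are not shown."""
    def __init__(self, nvars):
        self.ground = set()                 # definitely ground variables
        self.parent = list(range(nvars))    # union-find over equivalent vars

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def equate(self, x, y):
        """Record that x and y are either both ground or neither ground."""
        rx, ry = self.find(x), self.find(y)
        if rx != ry:
            self.parent[rx] = ry
            if rx in self.ground or ry in self.ground:
                self.make_ground(ry)

    def make_ground(self, x):
        """Mark x ground; groundness propagates through its class."""
        r = self.find(x)
        for v in range(len(self.parent)):
            if self.find(v) == r:
                self.ground.add(v)

a = HybridPos(4)
a.equate(0, 1)      # x0 and x1 are ground together or not at all
a.make_ground(1)    # grounding x1 therefore also grounds x0
print(a.ground)     # {0, 1}
```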
This paper is concerned with the problem of Boolean approximation in the following sense: given a Boolean function class and an arbitrary Boolean function, what is the function's best proxy in the class? Specifically, what is its strongest logical consequence (or envelope) in the class of affine Boolean functions? We prove various properties of affine Boolean functions and their representation as ROBDDs. Using these properties, we develop an ROBDD algorithm to find the affine envelope of a Boolean function.
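The following brute-force sketch illustrates the underlying semantics rather than the paper's ROBDD algorithm: an affine Boolean function is a conjunction of XOR equations, so the models of the affine envelope of f form the affine hull, over GF(2), of the models of f, which for small n can be computed by Gaussian elimination:

```python
from itertools import product

def bits(m, p):
    """Encode model m XOR base point p as an integer bit vector."""
    return sum((mi ^ pi) << i for i, (mi, pi) in enumerate(zip(m, p)))

def insert(basis, v):
    """Gaussian-style insertion into a GF(2) basis keyed by leading bit."""
    while v:
        top = v.bit_length() - 1
        if top not in basis:
            basis[top] = v
            return
        v ^= basis[top]

def affine_envelope(f, n):
    """Describe the envelope's models as {point XOR span(basis)} over GF(2)."""
    models = [m for m in product((0, 1), repeat=n) if f(*m)]
    p = models[0]
    basis = {}
    for m in models[1:]:
        insert(basis, bits(m, p))
    return p, basis

def in_envelope(x, p, basis):
    v = bits(x, p)
    while v:
        top = v.bit_length() - 1
        if top not in basis:
            return False
        v ^= basis[top]
    return True

# x OR y has models (0,1), (1,0), (1,1); their affine hull is the whole
# space, so the affine envelope of disjunction is simply 'true':
p, basis = affine_envelope(lambda x, y: x or y, 2)
print(all(in_envelope(v, p, basis) for v in product((0, 1), repeat=2)))  # True
```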
Precondition inference is a non-trivial problem with important applications in program analysis and verification. We present a novel iterative method for automatically deriving preconditions for the safety and unsafety of programs. Each iteration maintains over-approximations of the sets of safe and unsafe initial states, which are used to partition the program's initial states into those known to be safe, those known to be unsafe, and those whose status is unknown. We then construct revised programs restricted to the unknown initial states and iterate the procedure until the approximations are disjoint or some termination criterion is met. An experimental evaluation of the method on a set of software verification benchmarks shows that it can infer precise, sometimes optimal, preconditions that previous methods cannot obtain.
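A schematic skeleton of such an iteration, with finite sets of concrete states standing in for the symbolic approximations and all names hypothetical:

```python
def infer_preconditions(initial, over_safe, over_unsafe, max_iters=10):
    """over_safe/over_unsafe are analyses returning over-approximations of
    the initial states from which a safe (resp. unsafe) run is possible."""
    unknown = set(initial)
    known_safe, known_unsafe = set(), set()
    for _ in range(max_iters):
        s = over_safe(unknown)       # over-approx of possibly-safe states
        u = over_unsafe(unknown)     # over-approx of possibly-unsafe states
        known_safe |= unknown - u    # not possibly unsafe: definitely safe
        known_unsafe |= unknown - s  # not possibly safe: definitely unsafe
        unknown &= s & u             # re-analyse only the undecided states
        if not unknown:              # approximations disjoint: done
            break
    return known_safe, known_unsafe, unknown
```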
Researchers working on the automatic parallelization of programs have long known that too much parallelism can be even worse for performance than too little, because spawning a task to be run on another CPU incurs overheads. Autoparallelizing compilers have therefore long tried to use granularity analysis to ensure that they only spawn off computations whose cost will probably exceed the spawn-off cost by a comfortable margin. However, this is not enough to yield good results, because data dependencies may also limit the usefulness of running computations in parallel. If one computation blocks almost immediately and can resume only after another has completed its work, then the cost of parallelization again exceeds the benefit. We present a set of algorithms, attentive to the second of these issues as well as the first, for recognizing places in a program where it is worthwhile to execute two or more computations in parallel. Our system uses profiling information to compute the times at which a procedure call consumes the values of its input arguments and the times at which it produces the values of its output arguments. Given two calls that may be executed in parallel, our system uses the times of production and consumption of the variables they share to determine how much their executions would overlap if they were run in parallel, and therefore whether executing them in parallel is a good idea. We have implemented this technique for Mercury in the form of a tool that uses profiling data to generate recommendations about what to parallelize, for the Mercury compiler to apply on the next compilation of the program. We present preliminary results showing that this technique can yield useful parallelization speedups, while requiring nothing more from the programmer than representative input data for the profiling run.
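The overlap estimate can be sketched as follows (a simplified model with hypothetical names, not the Mercury tool itself): two calls A and B start together, and B blocks at each shared variable until A has produced it, so B's completion time is its own work plus any waiting:

```python
def parallel_completion(a_total, b_total, prod, cons):
    """Estimate the completion time of calls A and B run in parallel.
    prod[v] is the time (from the common start) at which A produces v;
    cons is B's list of (time_into_B's_own_work, v) consumption events,
    in order. B blocks at each event until A has produced the value."""
    b_clock = 0.0   # wall-clock time B has reached
    b_work = 0.0    # amount of B's own work done so far
    for t_consume, v in cons:
        b_clock += t_consume - b_work     # run B up to the consumption point
        b_work = t_consume
        b_clock = max(b_clock, prod[v])   # block until A produces v
    b_clock += b_total - b_work           # finish B's remaining work
    return max(a_total, b_clock)

# A takes 10 units and produces x only at time 8; B takes 10 units and
# consumes x after 1 unit of its own work. Parallel execution finishes at
# 17 versus 20 sequentially, so the dependency limits the benefit:
print(parallel_completion(10, 10, {"x": 8}, [(1, "x")]))  # 17.0
```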