In this paper we present the Java Package for Distributed Computing (JPDC), a toolkit for implementing and testing distributed algorithms in Java. JPDC aims to simplify the development of distributed algorithms by defining a high-level programming interface. The interface closely mirrors the pseudo-code formalism commonly used to describe such algorithms while, at the same time, allowing their implementation and deployment in a truly distributed setting. Moreover, JPDC provides a friendly interface that can be used both to visualize the behavior of an algorithm and to interact with it. This is especially useful in teaching environments, where complex algorithms can be debugged, understood, and validated much more effectively by implementing and running them.
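To give a flavor of the pseudo-code-like programming style such a toolkit targets, the sketch below shows what an event-driven distributed process could look like in plain Java. All names here (Process, send, onReceive, Flooding) are illustrative assumptions of ours and are not JPDC's actual API; a real toolkit would deliver messages over the network rather than by direct method calls.

    import java.util.*;

    // Hypothetical pseudo-code-style base class: "upon receiving message m from j do ..."
    abstract class Process {
        final int id;
        final List<Process> neighbors = new ArrayList<>();
        Process(int id) { this.id = id; }
        // Deliver a message to a neighbor (in-memory stand-in for a network send).
        void send(Process to, String msg) { to.onReceive(this, msg); }
        abstract void onReceive(Process from, String msg);
    }

    // Simple flooding: on the first receipt of a message, forward it to all other neighbors.
    class Flooding extends Process {
        private boolean seen = false;
        Flooding(int id) { super(id); }
        @Override
        void onReceive(Process from, String msg) {
            if (seen) return;                    // message already processed
            seen = true;
            System.out.println("process " + id + " received \"" + msg + "\"");
            for (Process n : neighbors)
                if (n != from) send(n, msg);     // forward to every other neighbor
        }
    }

    public class FloodingDemo {
        public static void main(String[] args) {
            Flooding a = new Flooding(0), b = new Flooding(1), c = new Flooding(2);
            a.neighbors.addAll(List.of(b, c));
            b.neighbors.addAll(List.of(a, c));
            c.neighbors.addAll(List.of(a, b));
            a.onReceive(a, "hello");             // inject the initial message at process 0
        }
    }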
JIVE (Java interactive software visualization environment) is a system for the visualization of Java-coded algorithms and data structures. It supports the rapid development of interactive animations through an object-oriented approach. JIVE introduces several significant innovations, such as a distributed architecture that transparently separates the visualization activity from the underlying communication needed to support it. This makes it possible to use JIVE in a variety of scenarios, ranging from algorithm debugging to software visualization in virtual classroom environments. Moreover, JIVE uses a zoomable user interface for representing algorithms: seamless visualization of both small and large data sets is achieved through semantic zooming. Finally, JIVE comes with a collection of already animated data types, including data structures provided by the Java standard library.
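A minimal, self-contained sketch of the kind of decoupling described above is shown below: an animated data structure only emits operation events, while a listener (which in a system like JIVE could live in a separate visualization process) renders them. The class and method names (AnimatedStack, addListener, fire) are our own illustrative choices and are not JIVE's actual API.

    import java.util.*;
    import java.util.function.Consumer;

    // An "animated" stack that reports every operation to its registered listeners.
    class AnimatedStack<T> {
        private final Deque<T> data = new ArrayDeque<>();
        private final List<Consumer<String>> listeners = new ArrayList<>();

        void addListener(Consumer<String> l) { listeners.add(l); }
        private void fire(String event) { listeners.forEach(l -> l.accept(event)); }

        void push(T x) { data.push(x); fire("push " + x); }
        T pop()        { T x = data.pop(); fire("pop " + x); return x; }
    }

    public class AnimationDemo {
        public static void main(String[] args) {
            AnimatedStack<Integer> s = new AnimatedStack<>();
            // Here the event stream is simply printed; a visualization front end
            // would instead forward it (possibly over the network) to a renderer.
            s.addListener(e -> System.out.println("[viewer] " + e));
            s.push(1); s.push(2); s.pop();
        }
    }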
In this paper we propose a novel approach to the learning of cryptographic protocols, based on DISCERN, a collaborative role-based visualization system that helps students understand a protocol by actively engaging them in a simulation of its execution. In DISCERN, each student shares a visual e
In this paper, we present a universal framework for collecting publicly available information from Online Social Networks (OSNs). Our proposal is based on a three-level distributed architecture. At the first level, one or more parallel crawler processes are in charge of identifying all the resources that need to be acquired from a target OSN. Once identified, these resources are requested from the intermediate level. This level implements an abstraction layer that allows crawlers to query different OSNs at the same time through a single interface. Here, if the needed resource is already available, it is returned immediately; otherwise, a new request is prepared and dispatched to a network of remote data-collector processes by means of a set of distributed data structures. The architecture allows a large number of data collectors to operate in parallel, so that large amounts of data can be downloaded in a relatively short time. We also present the results of experiments conducted on the Twitter and Flickr OSNs to validate our framework.
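A minimal sketch of what such an intermediate abstraction layer could look like is given below: a single entry point that crawlers query with an OSN-independent resource identifier, backed by a local store of already downloaded data, with misses pushed onto a shared queue that remote data collectors consume. All names (OsnGateway, fetch, nextRequest, store) are hypothetical and are not the framework's actual interface.

    import java.util.*;
    import java.util.concurrent.*;

    class OsnGateway {
        private final Map<String, String> cache = new ConcurrentHashMap<>();
        private final BlockingQueue<String> pending = new LinkedBlockingQueue<>();

        // Crawlers ask for an OSN-independent resource id, e.g. "twitter:user:42".
        Optional<String> fetch(String resourceId) {
            String cached = cache.get(resourceId);
            if (cached != null) return Optional.of(cached);  // available: returned immediately
            pending.offer(resourceId);                       // otherwise queue it for a collector
            return Optional.empty();
        }

        // Remote data collectors poll pending requests and store what they download.
        String nextRequest() throws InterruptedException { return pending.take(); }
        void store(String resourceId, String payload) { cache.put(resourceId, payload); }
    }

    public class GatewayDemo {
        public static void main(String[] args) {
            OsnGateway gw = new OsnGateway();
            System.out.println(gw.fetch("twitter:user:42")); // Optional.empty: queued for a collector
            gw.store("twitter:user:42", "profile-json");     // a collector delivers the data
            System.out.println(gw.fetch("twitter:user:42")); // now served from the local store
        }
    }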
The source camera identification problem is concerned with identifying the camera that has been used to generate a given digital picture. A widely adopted identification technique, proposed by Lukas et al. in [1], relies on the pattern noise left by the camera sensor, used as a fingerprint. This technique may perform poorly when applied to images that have undergone lossy compression, such as images saved as low-quality JPEGs. In this paper, we first analyze the experimental performance of the technique by Lukas et al. when dealing with JPEG images saved at increasing compression rates. Then, we investigate if and how some of the enhanced sensor pattern noise extraction techniques proposed in the literature are able to improve on the original technique in the considered cases. Our results show that, on the one hand, an increase in the compression rate of a JPEG image severely degrades the effectiveness of the identification process carried out with the technique by Lukas et al. On the other hand, we show that at least two of the considered enhanced sensor pattern noise extraction techniques succeed in compensating for most of this degradation.
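For reference, a simplified form of the sensor pattern noise pipeline of [1] can be summarized as follows (our notation; a sketch rather than the exact formulation of [1]). Let F be a denoising filter and I_1, ..., I_N images taken with camera C; the noise residuals, the camera's reference pattern, and the matching decision for a query image I with threshold t are:

    W_i = I_i - F(I_i), \qquad
    \hat{P}_C = \frac{1}{N} \sum_{i=1}^{N} W_i, \qquad
    \rho = \operatorname{corr}\!\bigl(I - F(I),\ \hat{P}_C\bigr) > t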