Automated Red Teaming (ART) automates Manual Red Teaming, a technique frequently used by the Military Operational Analysis community to uncover vulnerabilities in operational tactics. ART makes use of multi-objective evolutionary algorithms such as SPEA2 and NSGA-II to efficiently find a set of non-dominated solutions in a large search space. This paper investigates the use of a multi-objective bee colony optimization (MOBCO) algorithm with Automated Red Teaming. The performance of the MOBCO algorithm is first compared with a well-known evolutionary algorithm, NSGA-II, using a set of benchmark functions. The MOBCO algorithm is then integrated into the ART framework and tested using a maritime case study involving the defence of an anchorage. Our experimental results show that the proposed MOBCO algorithm achieves results comparable to or better than those of NSGA-II on both the benchmark functions and the ART maritime scenario.
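The notion of a non-dominated set is central to all the multi-objective algorithms named above. As a minimal sketch (not the paper's implementation), a Pareto filter over objective vectors, assuming minimisation of every objective, can be written as:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimisation):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# (4, 4) is dominated by (2, 2); the remaining three points form the front.
front = non_dominated([(1, 5), (2, 2), (3, 1), (4, 4)])
```

Algorithms such as NSGA-II and MOBCO maintain and refine exactly this kind of front across generations.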
Symbiotic simulation is a paradigm in which a simulation system and a physical system are closely associated with each other. This close relationship can be mutually beneficial. The simulation system benefits from real-time measurements of the physical system, which are provided by corresponding sensors. The physical system, on the other hand, may benefit from the effects of decisions made by the simulation system. An important concept in symbiotic simulation is the what-if analysis process, which is concerned with the evaluation of a number of what-if scenarios by means of simulation. Symbiotic simulation and related paradigms have become popular in recent years because of their ability to dynamically incorporate real-time sensor data. In this paper, we explain different types of symbiotic simulation and give an overview of the state of the art. In addition, we discuss common research issues that have to be addressed when working with symbiotic simulation. While some issues have been adequately addressed, there are still research issues that remain open.
The problem of shared state is well known to the parallel and distributed simulation research community. In this article, the authors revisit the problem of shared state in the context of a High Level Architecture (HLA)-based distributed simulation. A middleware approach is proposed to solve this problem within the framework of the HLA runtime infrastructure. Four solutions to this problem are implemented in the middleware using receive-order messages. The authors discuss the implementation issues of these four solutions in the middleware. Experimental results comparing the performance of these four solutions against a simple request-reply approach using timestamp-order messages are also presented.
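One way receive-order messages can serve shared state, sketched here purely for illustration (the class and method names are assumptions, not the middleware's actual API), is for the owning federate to push updates as RO messages into a local cache at each reader, so that reads need no timestamp-order request-reply round trip:

```python
class SharedStateCache:
    """Illustrative reader-side middleware cache for shared state.
    The owner broadcasts updates as receive-order (RO) messages; readers
    answer queries locally instead of issuing a request-reply exchange."""

    def __init__(self):
        self._values = {}

    def on_ro_update(self, name, value):
        """Invoked by the middleware when an RO update message arrives."""
        self._values[name] = value

    def read(self, name, default=None):
        """Local read; no message exchange with the owning federate."""
        return self._values.get(name, default)

cache = SharedStateCache()
cache.on_ro_update("machine_7.status", "busy")
status = cache.read("machine_7.status")
```

The trade-off, as the timestamp-order baseline in the experiments highlights, is between the freshness guarantees of request-reply and the lower latency of cached RO updates.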
Features found in commercial simulation packages are often omitted from parallel simulation benchmarks because they affect neither the overall correctness of the simulation protocol nor the benchmark's performance. In our work on the parallel simulation of a wafer fabrication plant, however, we found several features that complicate the implementation of the simulation protocol and affect the program's performance. One such feature is the dispatch rule, which a machine set uses to decide the priority of the waiting wafer lots. In a sequential simulation, a dispatch rule can be implemented in a straightforward fashion because the whole system state is at the same simulation time, and the rule simply reads the state variables (of any machine, resource, etc.). In a parallel simulation, the dispatch rule computation is complicated by the fact that different portions of the simulated system can be at different simulation times. This paper describes our study of the implementation of dispatch rules in parallel simulation. This is an instance of the little-studied problem of providing shared-state information in parallel simulation. We briefly survey previous related work, then outline two different approaches for a dispatch rule to access shared-state information and compare them in terms of their ease of implementation.
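To make the sequential case concrete, a dispatch rule is just a function over the current state of the waiting lots. The sketch below uses earliest due date (EDD), a common rule in the fab-scheduling literature, as a stand-in; the field names are assumptions, not the paper's model:

```python
def dispatch_edd(waiting_lots):
    """Earliest-due-date dispatch rule: pick the waiting lot whose due
    date is soonest. In a sequential simulation every lot record is at
    the same simulation time, so the rule can read them all directly."""
    return min(waiting_lots, key=lambda lot: lot["due"])

lots = [
    {"id": "L1", "due": 120},
    {"id": "L2", "due": 80},
    {"id": "L3", "due": 200},
]
next_lot = dispatch_edd(lots)
```

In the parallel case, the difficulty is precisely that the `lots` snapshot may mix records from logical processes sitting at different simulation times.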
Programming is a crucial skill in today's world and is taught worldwide at different levels. However, there is little research in the literature investigating a formal approach to embedding public engagement into programming module design. This paper explores the integration of public engagement into an introductory programming module at the University of Warwick, UK, as part of the Digital and Technology Solutions (DTS) degree apprenticeship. The module design follows a 'V' model, which integrates community engagement with traditional programming education, providing a holistic learning experience. The aim is to enhance learning by combining programming education with community engagement. Apprentices participate in outreach activities, teaching programming and Arduino hardware to local secondary school students. This hands-on approach aligns with Kolb's experiential learning model, improving communication skills and solidifying programming concepts through teaching. The module also includes training in safeguarding, presentation skills, and storytelling to prepare apprentices for public engagement. Pedagogical techniques in the module include live coding, group exercises, and Arduino kit usage, as well as peer education, allowing apprentices to learn from and teach each other. Degree apprentices, who balance part-time studies with full-time employment, bring diverse knowledge and motivations. Public engagement helps bridge their skills gap, fostering teamwork and creating a positive learning environment. Embedding public engagement in programming education also enhances both technical and soft skills, giving apprentices a deeper understanding of community issues and real-world applications. Our design supports their academic and professional growth, ensuring the module's ongoing success and impact.
Automated Red Teaming (ART), an automated process for Manual Red Teaming, is a technique frequently utilised by the Military Operational Analysis (OA) community to uncover vulnerabilities in operational tactics. Currently, individual ART studies are limited to the parameter tuning of a simulation model with a fixed structure. The effect of evolving the structural features of a simulation model has not been investigated in any of these studies. This paper investigates the benefits of Evolvable Simulation, which involves evolving the structure of a simulation model. The case study used for this purpose is a maritime-based scenario involving the defense of an anchorage. Simulation results revealed that, given an appropriate number of evaluations, the quality of the solutions found improves when the structure of the simulation model is evolved. Additionally, experimental results showed that models with a smaller search space are likely to show negligible further improvement in solutions once the number of evaluations exceeds what is required. The insights obtained in this work show that evolvable simulation is an effective methodology that allows decision makers to enhance their understanding of military operational tactics.
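The distinction between parameter tuning and structural evolution can be illustrated with a toy genome. Everything below is a hypothetical encoding for illustration (the gene names, mutation rates, and ranges are assumptions, not the study's actual model):

```python
import random

def mutate(genome, rng):
    """Illustrative mutation for an evolvable simulation model: with equal
    probability, change a structural gene (how many patrol craft the model
    contains) or perturb a numeric parameter (patrol speed)."""
    g = dict(genome)
    if rng.random() < 0.5:
        # Structural mutation: add or remove an entity, keeping at least one.
        g["num_patrol_craft"] = max(1, g["num_patrol_craft"] + rng.choice([-1, 1]))
    else:
        # Parameter mutation: conventional tuning within a fixed structure.
        g["patrol_speed"] = g["patrol_speed"] * rng.uniform(0.9, 1.1)
    return g

rng = random.Random(42)
parent = {"num_patrol_craft": 3, "patrol_speed": 12.0}
child = mutate(parent, rng)
```

Fixed-structure ART corresponds to only ever taking the second branch; Evolvable Simulation admits the first branch as well, which enlarges the search space the evolutionary algorithm explores.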
This paper presents our ongoing work on modeling agents with human-like decision making and behavior execution capabilities in crowd simulation. We aim to provide a generic framework that reflects the major cognitive and physical processes observed in human behavior in real-life situations. The design of the framework is based on some basic assumptions and related cognitive theories on human behavior in various real-life situations. In this paper, the cognitive architecture of our framework is presented, which emphasizes the role of experience in human decision making. The paper also briefly describes the design of the agents' decision-making process and presents a case study showing results in a crowd simulation scenario.
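To illustrate the role experience can play in such an architecture, here is a minimal sketch, entirely an assumption on our part rather than the framework's actual design, of an agent that recalls the stored experience closest to its current situation and reuses that experience's action:

```python
def decide(situation, experiences, fallback="explore"):
    """Pick an action by nearest-neighbour recall over past experiences.
    situation: tuple of situation features; experiences: (features, action) pairs."""
    if not experiences:
        return fallback

    def distance(a, b):
        # Squared Euclidean distance between feature vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b))

    _, action = min(experiences, key=lambda e: distance(e[0], situation))
    return action

# Hypothetical features: (perceived crowd density, distance to exit).
memory = [((0.9, 0.1), "evacuate"), ((0.1, 0.8), "queue")]
action = decide((0.8, 0.2), memory)
```

A full cognitive architecture would of course add perception, goal management, and behavior execution around this recall step.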
The HLA Runtime Infrastructure (RTI) can support a conservative simulation protocol through its time management service. However, the performance of conservative simulation protocols depends heavily on the lookahead that one can extract from a simulation model, and the most conservative value has to be taken in order to ensure the causality constraint. In this paper, we propose two algorithms, namely pullRO and pushRO, that allow one to replace some of the timestamp-order (TSO) messages (possibly those causing zero lookahead values) with receive-order (RO) messages. This removes the time constraint that these messages impose on the lower bound timestamp (LBTS) calculation, which in turn improves the time advancement rate of federates. The algorithms still ensure the causality constraint, and a middleware approach is used to preserve the semantics of the RTI APIs. The performance of the two algorithms is compared against a baseline model in which no TSO messages are replaced.
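The effect on the LBTS calculation can be seen in a simplified sketch. Assuming, for illustration only, that each federate is summarised by its local time, its lookahead, and whether it still sends TSO messages, LBTS is the minimum of (time + lookahead) over the TSO senders; moving a zero-lookahead federate's messages to receive order removes it from that minimum:

```python
def lbts(federates):
    """Simplified LBTS: minimum of (local time + lookahead) over all
    federates that still send timestamp-order (TSO) messages.
    federates: list of (time, lookahead, uses_tso) tuples."""
    bounds = [t + la for t, la, uses_tso in federates if uses_tso]
    return min(bounds) if bounds else float("inf")

feds = [(40, 0, True), (50, 10, True)]
pinned = lbts(feds)        # the zero-lookahead channel at t=40 pins LBTS at 40

feds_ro = [(40, 0, False), (50, 10, True)]
lifted = lbts(feds_ro)     # with those messages sent RO, LBTS rises to 60
```

This is the mechanism pullRO and pushRO exploit, while the middleware layer guarantees that the replaced messages are still delivered safely.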
The grid enables large-scale resource sharing and makes it viable to run large-scale parallel and distributed simulations. The High Level Architecture (HLA) paradigm provides a software platform and an interoperability interface for simulation components to utilize these hardware resources. However, neither the grid nor the HLA provides mechanisms for resource management for parallel and distributed simulations. Moreover, substantial effort is required to write programs that conform to the runtime infrastructure (RTI) requirements because of its complexity. In this paper, we introduce a framework for designing and executing parallel simulations using the RTI. The framework is also designed to assist with load balancing and checkpointing. With our framework's code library, the modeler can complete the design of a parallel simulation that runs on the RTI by specifying the simulation configuration and the handling details of each event. Our framework incorporates automatic code generation. It also uses data distribution management (DDM) to route simulation events (interactions), achieving efficient use of network bandwidth.
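The modeller-facing side of such a framework might look like the sketch below: declare the configuration and one handler per event type, from which the framework (not shown) would generate the RTI glue code and route interactions. The class, decorator, and configuration keys are hypothetical, not the framework's actual API:

```python
class SimulationSpec:
    """Illustrative specification object: configuration plus a registry
    of per-event handlers, the two things the modeler supplies."""

    def __init__(self, **config):
        self.config = config
        self.handlers = {}

    def on(self, event_type):
        """Decorator that registers a handler for one event type."""
        def register(fn):
            self.handlers[event_type] = fn
            return fn
        return register

    def dispatch(self, event_type, payload):
        """Stand-in for the generated RTI routing layer."""
        return self.handlers[event_type](payload)

spec = SimulationSpec(federates=4, checkpoint_interval=500)

@spec.on("job_arrival")
def handle_arrival(payload):
    return f"queued job {payload['job_id']}"

result = spec.dispatch("job_arrival", {"job_id": 7})
```

Code generation then turns such a declarative specification into the federate boilerplate, and DDM regions can be derived from the declared event types.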
We have developed a set of performance prediction tools which help to estimate the achievable speedups from parallelizing a sequential simulation. The tools focus on two important factors in the actual speedup of a parallel simulation program: (a) the simulation protocol used, and (b) the inherent parallelism in the simulation model. The first two tools are a performance/parallelism analyzer for a conservative, asynchronous simulation protocol, and a similar analyzer for a conservative, synchronous (super-step) protocol. Each analyzer allows us to study how the speedup of a model changes with an increasing number of processors when a specific protocol is used. The third tool -- a critical path analyzer -- gives an ideal upper bound on the model's speedup. This paper gives an overview of the prediction tools and reports the predictions from applying them to a discrete-event wafer fabrication simulation model. The predictions are close to the speedups obtained from actual parallel implementations. These tools help us to set realistic expectations of the speedup from a parallel simulation program, and to focus our work on issues which are more likely to yield performance improvement.
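The idea behind a critical path bound can be sketched in a few lines: events form a dependency DAG, and the ideal speedup is bounded by the total event work divided by the longest dependency chain. This is a generic sketch of the technique, not the analyzer's implementation:

```python
def critical_path(work, deps):
    """Length of the longest weighted path in an acyclic event graph.
    work: {event: processing cost}; deps: {event: [predecessor events]}."""
    finish = {}  # memoised earliest-finish time per event

    def longest(e):
        if e not in finish:
            finish[e] = work[e] + max((longest(p) for p in deps.get(e, [])), default=0)
        return finish[e]

    return max(longest(e) for e in work)

# Toy event trace: c depends on a; d depends on both b and c.
work = {"a": 3, "b": 2, "c": 4, "d": 1}
deps = {"c": ["a"], "d": ["b", "c"]}
path = critical_path(work, deps)           # chain a -> c -> d costs 3 + 4 + 1 = 8
ideal_speedup = sum(work.values()) / path  # 10 / 8 = 1.25, regardless of processor count
```

No simulation protocol, however clever, can beat this bound, which is what makes it a useful reality check before investing in a parallel implementation.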