The concept of understanding is used ambiguously across areas of study such as philosophy and the cognitive sciences. This ambiguity partly originates from the generally accepted definition of understanding as the 'grasping' of something. Further, the concept is confounded with concurrent processes such as learning and decision making. This dissertation provides a general theory of understanding (GTU) that explains the concept unambiguously and separates it from those concurrent processes.
The GTU distinguishes between the process of understanding and its outcomes. Understanding, defined as a process, is the matching of knowledge, worldview, and problem. The outcomes of this process are the assignment of a truth value to a problem, the generation of knowledge, and the generation of worldview. Together, the two accounts state what understanding is and what it does. Additionally, a construct of understanding is proposed to provide insight into the process. The construct not only helps explain existing theories about understanding but also adds to the body of knowledge by identifying three types of understanding: two exist in the literature, while the third is a contribution of this dissertation. Generalizing from the data, it is shown how the complexity of a problem depends on the effort an individual must expend to understand it, and that this effort converges to seven levels.
The theory provides insights into areas of interest to Engineering Management, such as complexity and its dependence on the observer, while differentiating understanding from concurrent processes such as learning and decision making.
Whether by design or by practice, systems engineering (SE) processes are used increasingly often in Modeling and Simulation (M&S). While the two disciplines are closely related, some differences must be taken into account to successfully transfer practices from one community to the other. In this paper, we introduce the M&S System Development Framework (MS-SDF), which unifies SE and M&S processes. The MS-SDF comprises the SE processes of requirements capture, conceptual modeling, and verification and validation (V&V) and extends them to M&S. We use model theory as the deductive apparatus for developing the MS-SDF. We discuss the benefits of the MS-SDF, particularly for choosing between federation development and multi-model approaches and for designing composable models and simulations. Lastly, a real-life application example of the framework is provided.
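To make the model-theoretic idea concrete, the following minimal Python sketch treats each conceptual model as a set of constraints and checks whether two models admit a common interpretation before federating them; all variable names, constraints, and domains here are hypothetical illustrations, not part of the MS-SDF itself.

```python
# Illustrative sketch only: the MS-SDF is a process framework, not code.
# This toy check mirrors its model-theoretic idea that two simulations can
# be federated only if their conceptual models admit a common interpretation
# (a shared "model" in the logical sense). All names are hypothetical.
from itertools import product

# Each conceptual model is a set of constraints over shared state variables.
model_a = [lambda s: s["altitude"] >= 0,        # aircraft never below ground
           lambda s: s["speed"] <= 600]         # speed cap in knots
model_b = [lambda s: s["speed"] >= 100,         # stall constraint
           lambda s: s["altitude"] <= 40000]    # ceiling in feet

def jointly_satisfiable(models, domain):
    """Return a shared state satisfying every constraint of every model, if any."""
    names = sorted(domain)
    for values in product(*(domain[n] for n in names)):
        state = dict(zip(names, values))
        if all(c(state) for m in models for c in m):
            return state
    return None

# Coarse finite domains stand in for the real state spaces.
domain = {"altitude": range(0, 50001, 5000), "speed": range(0, 701, 50)}
witness = jointly_satisfiable([model_a, model_b], domain)
if witness:
    print("consistent federation possible, e.g.:", witness)
else:
    print("the conceptual models contradict each other")
```

If no witness exists, composing the two simulations would introduce a contradiction, which is the kind of case where a multi-model approach may be preferable to a federation.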
In this article, we provide an introduction to simulation for cybersecurity and focus on three themes: (1) an overview of the cybersecurity domain; (2) a summary of notable simulation research efforts for cybersecurity; and (3) a proposed way forward on how simulations could broaden cybersecurity efforts. The overview of cybersecurity provides readers with a foundational perspective on cybersecurity in light of targets, threats, and preventive measures. The simulation research section details the current role that simulation plays in cybersecurity, which mainly falls into representative environment building; testing, evaluation, and exploration; training and exercises; risk analysis and assessment; and humans in cybersecurity research. The proposed way forward section posits that advances in collecting and accessing sociotechnological data to inform models, the creation of new theoretical constructs, and the integration and improvement of behavioral models are needed to advance cybersecurity efforts.
We examine the potential of prompting a large language model-based chatbot, ChatGPT, to generate functional simulation model code from a prose-based narrative. The simple narrative describes how the mode of transportation for elementary school students changed due to the COVID-19 pandemic and related factors, including a lack of available bus drivers, lack of mask enforcement on buses, and inclement weather. We document the process of providing this narrative to ChatGPT and further prompting the chatbot to generate model code to represent the narrative and to make it executable. We test ChatGPT’s ability to use prose descriptions of a phenomenon to generate simulation models using three paradigms: discrete event system, system dynamics, and agent-based modeling. Our findings reveal that ChatGPT could not produce concise or executable models, facing higher difficulty when asked to do so in programming languages it was less familiar with. This analysis underscores the strengths and limitations of the current state of this technology for modeling and simulation. Furthermore, we propose how future advancements in Large Language Models may aid the simulation modeling process, increasing transparency and participation in multidisciplinary team efforts.
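As a rough illustration of the workflow described above, the sketch below approximates the prompting process with the OpenAI Python API rather than the ChatGPT web interface the study used; the model name, prompts, and two-step structure are our assumptions, not the paper's protocol.

```python
# A minimal sketch of the narrative-to-model prompting workflow, approximated
# with the OpenAI Python API. The model name and prompts are illustrative.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

narrative = (
    "During the COVID-19 pandemic, elementary school students shifted away "
    "from buses to other modes of transportation because of a bus driver "
    "shortage, lax mask enforcement on buses, and inclement weather."
)

# Step 1: give the chatbot the prose narrative; step 2: ask for executable
# code in one of the three paradigms tested (here: agent-based modeling).
messages = [
    {"role": "user", "content": f"Read this narrative:\n{narrative}"},
    {"role": "user", "content": "Generate an executable agent-based "
                                "simulation model of it in Python."},
]
response = client.chat.completions.create(model="gpt-4", messages=messages)
generated_code = response.choices[0].message.content
print(generated_code)  # the study then tests whether such output actually runs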
Epistemology is the branch of philosophy that deals with gaining knowledge. It is closely related to ontology, the branch that deals with questions like “What is real?” and “What do we know?”, as ontology provides these components. When using modeling and simulation, we usually imply that we are doing so either to apply knowledge, in particular when using them for training and teaching, or to gain new knowledge, for example when doing analysis or conducting virtual experiments. This paper looks at the history of science to give context for better coping with the question of how we can gain knowledge from simulation. It addresses aspects of computability and the general underlying mathematics, and applies the findings to validation and verification and the development of federations. As simulations are understood as computable, executable hypotheses, validation can be understood as hypothesis testing and theory building. The mathematical framework furthermore allows addressing some challenges when developing federations and the potential introduction of contradictions when composing different theories, as they are represented by the federated simulation systems.
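The claim that validation becomes hypothesis testing can be illustrated with a minimal Python sketch, assuming stand-in data and a two-sample Kolmogorov-Smirnov test; the data, model, and significance threshold below are our illustrative choices, not from the paper.

```python
# A minimal sketch of "validation as hypothesis testing": if a simulation is
# a computable, executable hypothesis, validating it against observations
# becomes a statistical test. All data here are illustrative stand-ins.
import random
from scipy import stats

random.seed(1)
observed = [random.gauss(10.0, 2.0) for _ in range(200)]   # stand-in field data
simulated = [random.gauss(10.3, 2.1) for _ in range(200)]  # simulation output

# Null hypothesis: simulated and observed outputs come from the same
# distribution, i.e., the simulation-as-hypothesis is not contradicted.
statistic, p_value = stats.ks_2samp(observed, simulated)
if p_value < 0.05:
    print(f"reject the simulation hypothesis (p = {p_value:.3f})")
else:
    print(f"no evidence against the simulation (p = {p_value:.3f})")
```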
A study by Hu et al. (2009, PNAS, 106(5)) projected that a targeted malicious attack on Wi-Fi routers could infect a region within two days. The study also argued that the use of the WPA security protocol in 60--70% of routers would practically prevent such epidemics. This paper revisits their model with current Wi-Fi router data from WiGLE.net and a refined data selection method. We examine the temporality and scale of the malware spread under these two updates. Despite an ≈88% WPA adoption rate, we see a rapid malware spread occurring within a week and infecting ≈34% of all insecure routers (≈5.4% of all routers) after two weeks. This result is significantly higher than the original study's projection and occurs because the increased use of Wi-Fi routers produces a more tightly connected graph. We argue that this projected risk can grow further when recently introduced vulnerabilities and connected devices are considered. Ultimately, thorough consideration is needed to assess cybersecurity risks in the Wi-Fi ecosystem and to evaluate interventions that stop epidemics.
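A toy Python sketch of this kind of spread model might look as follows, assuming a random geometric graph as the router topology and illustrative parameters rather than the WiGLE-derived values and spread dynamics the paper actually uses.

```python
# A toy re-creation of the style of model the study revisits: routers within
# Wi-Fi range form a geometric graph, WPA-protected routers are immune, and
# malware percolates through the insecure ones. All parameters here are
# illustrative, not the WiGLE-derived values from the paper.
import random
import networkx as nx

random.seed(42)
G = nx.random_geometric_graph(n=2000, radius=0.035, seed=42)  # routers + range
wpa_rate = 0.88                                               # ≈88% WPA adoption
insecure = {v for v in G if random.random() > wpa_rate}

# Seed one insecure router and spread one hop per day (SI dynamics).
infected = {next(iter(insecure))}
for day in range(14):
    frontier = {n for v in infected for n in G[v] if n in insecure} - infected
    if not frontier:
        break
    infected |= frontier

print(f"infected {len(infected)}/{len(insecure)} insecure routers "
      f"({len(infected)/len(G):.1%} of all) after {day + 1} days")
```

Denser router deployments add edges to this graph, which is the mechanism behind the faster spread the paper reports despite high WPA adoption.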
The process of developing and running simulations needs to become simple and accessible to audiences ranging from middle school students in a learning environment to subject matter experts in order to make the benefits of modeling and simulation commonly available. However, current simulations are for the most part developed and run on platforms that are (1) demanding in terms of computational resources, (2) difficult for general audiences to use owing to unintuitive interfaces mired in mathematical syntax, (3) expensive to acquire and maintain, and (4) hard to interoperate and compose. The result is a four-dimensional expense that makes simulation inaccessible to the general public. In this paper we show that, by embracing the web and its standards, the use and development of simulations can be democratized as part of a Web of Simulation in which people of all skill levels are able to build, upload, retrieve, rate, and connect simulations. We show how the Web of Simulation can be built on the three basic principles of service orientation, platform independence, and interoperability. Finally, we present strategies for implementing the Web of Simulation and discuss challenges and possible approaches.
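As a minimal sketch of the service-orientation principle, the following Python example exposes a simulation step as a plain HTTP service using only the standard library; the endpoint, JSON state format, and toy growth model are our assumptions for illustration, not an interface defined by the paper.

```python
# A minimal sketch of "simulation as a web service": a simulation exposed
# through plain HTTP and JSON so any client on any platform can drive it.
# The state format and growth model are illustrative assumptions.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def step(state):
    """One tick of a toy population model; stands in for any simulation."""
    state["population"] *= 1 + state["growth_rate"]
    state["tick"] += 1
    return state

class SimulationService(BaseHTTPRequestHandler):
    def do_POST(self):                     # a POST advances the simulation
        length = int(self.headers["Content-Length"])
        state = json.loads(self.rfile.read(length))
        body = json.dumps(step(state)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # e.g. curl -X POST localhost:8000 -d '{"population":100,"growth_rate":0.02,"tick":0}'
    HTTPServer(("", 8000), SimulationService).serve_forever()
```

Because the state travels as JSON over HTTP, the same service can be composed with others or called from a browser, which is the interoperability and platform independence the Web of Simulation envisions.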