8.0 Advanced Topics and Interdisciplinary Integration
8.1 The Role of Databases in Modelling & Simulation
There is a powerful symbiotic relationship between database technology and the field of M&S. Modern simulations often require vast amounts of structured input data to define their parameters and scenarios, and they can generate enormous volumes of output data that must be stored, managed, and analyzed. Databases provide the essential infrastructure for both of these tasks.
The history of data modelling, initiated by Edgar Codd’s 1970 relational model, was built on three features: defining data objects and their relationships, establishing rules and constraints, and creating operations to retrieve information. While early models were based on the entity-relationship concept, modern data modelling often uses an object-oriented design, where entities are represented as classes that act as templates.
In the context of M&S, databases are used to manage several specific data representation schemes:
- Data Representation for Events: A database can be used to store a log of simulation events, with attributes such as the event name, its associated time, and links to the input and output files for that particular simulation run.
- Data Representation for Input Files: A database provides a structured way to manage the many different input files and parameter sets required for various simulation experiments, ensuring consistency and repeatability.
- Data Representation for Output Files: The output from simulation runs can be systematically stored and organized. A common scheme is to store the raw numerical output and the descriptive metadata in separate but linked files or tables, making the results easier to query and analyze.
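The three schemes above can be sketched as linked database tables. The following is a minimal illustration using SQLite; the table and column names (`events`, `input_files`, `output_files`, `sim_time`) are hypothetical choices for this sketch, not a scheme prescribed by the text.

```python
import sqlite3

# In-memory database holding the three representation schemes:
# input files, output files, and an event log that links to both.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE input_files  (id INTEGER PRIMARY KEY, path TEXT, params TEXT);
CREATE TABLE output_files (id INTEGER PRIMARY KEY, path TEXT, metadata TEXT);
CREATE TABLE events (
    id INTEGER PRIMARY KEY,
    name TEXT,                                  -- event name
    sim_time REAL,                              -- associated simulation time
    input_id  INTEGER REFERENCES input_files(id),
    output_id INTEGER REFERENCES output_files(id)
);
""")

# One simulation run: an input parameter set, its output, and an event.
conn.execute("INSERT INTO input_files  VALUES (1, 'run1.in',  'dt=0.01')")
conn.execute("INSERT INTO output_files VALUES (1, 'run1.out', 'units=m/s')")
conn.execute("INSERT INTO events VALUES (1, 'arrival', 12.5, 1, 1)")

# Query an event together with its linked input and output files.
row = conn.execute("""
    SELECT e.name, e.sim_time, i.path, o.path
    FROM events e
    JOIN input_files  i ON e.input_id  = i.id
    JOIN output_files o ON e.output_id = o.id
""").fetchone()
```

Keeping the raw output and the descriptive metadata in separate but linked tables, as here, is what makes results queryable: any event can be traced back to the exact input file and parameter set that produced it.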
Moving from data management, we now explore how another powerful computational field, artificial intelligence, is being integrated with simulation.
8.2 Artificial Intelligence in Simulation: Neural Networks
Artificial Neural Networks (NNs) are a branch of artificial intelligence inspired by the structure of the human brain. They are composed of networks of simple, interconnected processors called “units” or “neurons,” each with its own small local memory. These units are linked by unidirectional communication channels that carry numeric data. An NN “learns” by processing examples and adjusting the strengths of these connections, allowing it to recognize complex patterns in data.
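The learning process described above can be sketched with a single unit trained by the classic perceptron rule; the AND task, learning rate, and epoch count are illustrative assumptions, not details from the text.

```python
def step(x):
    # Threshold activation: the unit "fires" (outputs 1) when its
    # weighted input meets the threshold.
    return 1 if x >= 0 else 0

# Training examples: the logical AND function.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]   # connection strengths (the unit's "local memory")
b = 0.0          # bias term
rate = 0.1       # learning rate

# Learning = repeatedly processing examples and adjusting the
# connection strengths in proportion to the output error.
for _ in range(20):
    for (x1, x2), target in data:
        out = step(w[0] * x1 + w[1] * x2 + b)
        err = target - out
        w[0] += rate * err * x1
        w[1] += rate * err * x2
        b    += rate * err
```

After training, the unit classifies all four input patterns correctly, which is exactly the "adjusting the strengths of these connections" behaviour described above, in its simplest possible form.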
The history of NNs is marked by periods of intense interest and subsequent decline:
- 1943: The first conceptual neural model was developed by McCulloch and Pitts.
- 1949: Donald Hebb’s book, The Organization of Behavior, introduced key concepts about how neurons might learn.
- 1950s: As computers advanced, it became possible to create models based on these theories, an effort undertaken by IBM research laboratories.
- 1958: Rosenblatt’s perceptron model demonstrated the ability to solve simple pattern classification problems.
- 1959: Bernard Widrow and Marcian Hoff developed ADALINE and MADALINE, the latter being the first neural network applied to a real-world problem.
- 1969: A famous critique by Minsky and Papert proved the mathematical limitations of the simple perceptron. This revelation caused a significant decline in funding and research in the field for many years.
- 1982: John Hopfield of Caltech reinvigorated the field by introducing networks with bidirectional connections, which directly addressed the limitations of earlier models and led to a major resurgence in NN research.
This resurgence was also driven by the failure of traditional symbolic AI to solve certain complex problems; the massive parallelism inherent in the NN architecture provided the computing power those challenges demanded. Today, NNs are used in simulation for tasks such as pattern recognition, diagnostics, and robotic control systems.
8.3 Handling Uncertainty: Fuzzy Sets in M&S
A common challenge in simulation, particularly in continuous simulation, is that the parameters of the governing differential equations are often uncertain. They cannot always be represented as single, crisp numbers. Fuzzy Logic and Fuzzy Sets provide a mathematical framework for representing and reasoning with this kind of imprecision.
A Fuzzy Set is a generalization of a classical (“crisp”) set. In a classical set, an element is either a member of the set or it is not—there is no middle ground. In a fuzzy set, by contrast, an element can have a degree of membership in the set, a value typically between 0 and 1. This membership is defined by a membership function, μA(x).
The concept can be illustrated with a few examples:
- Case 1 (Properties): The membership function μA(x) must be greater than or equal to zero for all elements; for a normal fuzzy set, the maximum membership value attained is 1.
- Case 2 (Notation): A fuzzy set can be written in a standard notation. For example, the set A = {0.3/3, 0.7/4, 1/5, 0.4/6} means that the element ‘3’ has a membership degree of 0.3, ‘4’ has a degree of 0.7, ‘5’ has a full degree of 1.0, and ‘6’ has a degree of 0.4.
- Case 3 (Relationship to Crisp Sets): A classical crisp set can be seen as a special case of a fuzzy set where the membership function is restricted to only two values: 1 (for full members) and 0 (for non-members).
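The three cases above can be captured directly in code. This sketch represents the fuzzy set from Case 2 as a mapping from elements to membership degrees, and uses the standard max/min definitions of fuzzy union and intersection; the crisp comparison set {4, 5} is an illustrative assumption.

```python
# The fuzzy set A from Case 2: element -> degree of membership.
A = {3: 0.3, 4: 0.7, 5: 1.0, 6: 0.4}

def mu(fuzzy_set, x):
    """Membership function μ(x): degree of x in the set, 0 if absent."""
    return fuzzy_set.get(x, 0.0)

# Case 1: all degrees are non-negative, and this (normal) set peaks at 1.
assert all(d >= 0 for d in A.values()) and max(A.values()) == 1.0

# Case 3: a crisp set is the special case whose degrees are only 0 or 1.
crisp = {x: 1.0 for x in (4, 5)}

# Standard fuzzy operations: union takes the max degree, intersection the min.
elements     = set(A) | set(crisp)
union        = {x: max(mu(A, x), mu(crisp, x)) for x in elements}
intersection = {x: min(mu(A, x), mu(crisp, x)) for x in elements}
```

Here μA(4) = 0.7, so 4 belongs to the union with full degree 1.0 (from the crisp set) but to the intersection only with degree 0.7, illustrating how partial membership propagates through set operations.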
By using fuzzy numbers to represent uncertain parameters, analysts can build models that more realistically capture the inherent ambiguity and imprecision of real-world systems.