4.0 Verification and Validation: Ensuring Model Credibility
4.1 The Critical Distinction: Verification vs. Validation
Verification and Validation (V&V) are the cornerstones of credible simulation. Together, they form the quality assurance process that ensures a model is both built correctly and is an accurate representation of the real system it is meant to portray. A failure in the V&V process can lead to deeply flawed conclusions and, consequently, poor real-world decisions based on erroneous model outputs. While often used interchangeably in casual conversation, these two terms have precise and distinct meanings in the M&S discipline.
Validation is the process of determining whether the conceptual model is an accurate representation of the real system. It addresses the question: “Are we building the right model?” Validation involves comparing the model’s structure, logic, and assumptions against the real system to ensure it captures the essential dynamics correctly.
Verification is the process of determining whether the model’s implementation (i.e., the computer program) is a correct reflection of the developer’s conceptual description and specifications. It addresses the question: “Are we building the model right?” Verification is essentially a process of debugging and ensuring that the code functions exactly as intended by the model design.
The relationship between these two critical processes can be summarized as follows:
| Aspect | Verification | Validation |
| --- | --- | --- |
| Core Question | “Are we building the model right?” | “Are we building the right model?” |
| Comparison Made | The model’s implementation (code) is compared against the conceptual model design. | The conceptual model is compared against the real-world system. |
| Goal | To ensure the simulation program is free of bugs and accurately reflects the intended logic. | To ensure the model is an accurate and credible representation of reality. |
While the concepts are distinct, they are supported by a suite of practical techniques designed to systematically build confidence in the simulation model.
4.2 Methodologies for Model Verification
Verification is largely an internal process focused on debugging and logical review. The goal is to ensure that the code and logic of the simulation program perfectly match the intended design of the conceptual model. Several techniques are used to achieve this.
- Modular Programming and Debugging: A “divide and conquer” strategy is highly effective. Instead of trying to debug a single, monolithic program, the model is built from smaller, self-contained sub-programs or modules. Each module can be written and debugged independently, making it far easier to isolate and fix errors before integrating them into the larger model.
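To make the modular idea concrete, here is a minimal sketch in Python. The `FIFOQueue` class is a hypothetical sub-module of a larger simulation; the point is that its invariants can be checked in isolation, before it is ever wired into the full model.

```python
from collections import deque

class FIFOQueue:
    """A hypothetical sub-module of a larger simulation: a FIFO waiting line."""

    def __init__(self):
        self._items = deque()

    def enqueue(self, entity):
        self._items.append(entity)

    def dequeue(self):
        if not self._items:
            raise IndexError("dequeue from an empty queue")
        return self._items.popleft()

    def __len__(self):
        return len(self._items)

# Module-level verification: confirm the invariants before integration.
q = FIFOQueue()
q.enqueue("A")
q.enqueue("B")
assert q.dequeue() == "A"   # first-in, first-out order is preserved
assert len(q) == 1          # one entity remains after a single dequeue
```

Each module verified this way narrows the search space when a bug does appear in the integrated model: the error is far more likely to be in the glue logic than in a component that has already passed its own checks.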
- Structured Walk-throughs: This technique leverages the power of peer review. The model’s code and logic are presented to a team of people—not just the original programmer—who systematically “walk through” the program. This process is invaluable for catching logical errors, faulty assumptions, or bugs that the original developer might have overlooked.
- Tracing Intermediate Results: A simulation model can often feel like a “black box” where inputs go in and outputs come out. Tracing allows the analyst to look inside this box. The technique involves printing the values of key state variables at intermediate points during a simulation run. These traced values can then be compared with hand-calculated or expected outcomes to confirm that the internal logic is operating correctly.
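A trace can be as simple as a helper that records and prints key state variables whenever an event fires. The sketch below assumes a hypothetical single-server model whose state consists of a clock, a queue length, and a server-busy flag; the event loop shown is deliberately simplified to illustrate the technique only.

```python
import random

TRACE = True       # toggle tracing on or off
TRACE_LOG = []     # kept so traced values can also be compared with hand calculations

def trace(clock, event, queue_len, server_busy):
    """Record and print key state variables at an intermediate point of the run."""
    TRACE_LOG.append((round(clock, 3), event, queue_len, server_busy))
    if TRACE:
        print(f"t={clock:7.3f}  event={event:<9}  "
              f"queue_len={queue_len}  busy={server_busy}")

# A deliberately simplified event loop for a hypothetical single-server model:
random.seed(1)
clock, queue_len, server_busy = 0.0, 0, False
for _ in range(5):
    clock += random.expovariate(1.0)   # time of the next arrival
    if not server_busy:
        server_busy = True
        trace(clock, "start_svc", queue_len, server_busy)
    else:
        queue_len += 1
        trace(clock, "arrival", queue_len, server_busy)
```

Comparing each printed line against a hand calculation of the same event sequence confirms, step by step, that the internal logic is doing what the conceptual model says it should.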
- Testing with Diverse Inputs: A model should be tested with a wide range of input combinations to ensure its robustness. This includes testing with typical values, extreme values, and “edge cases” (e.g., zero values, or negative values where they should be rejected) to see if the model behaves as expected or fails gracefully rather than producing nonsensical results.
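As a sketch of this idea, consider a hypothetical input routine that samples a service time. A robust implementation accepts typical values and rejects invalid ones loudly, rather than returning a nonsensical result that would silently corrupt the run.

```python
import random

def service_time(mean):
    """Draw an exponential service time; reject invalid parameters
    instead of silently producing nonsensical results."""
    if mean <= 0:
        raise ValueError(f"mean service time must be positive, got {mean}")
    return random.expovariate(1.0 / mean)

# Typical value: the model should behave normally.
assert service_time(5.0) >= 0.0

# Edge cases: the model should fail gracefully and explicitly.
for bad in (0, -3.5):
    try:
        service_time(bad)
        raise AssertionError("invalid input was silently accepted")
    except ValueError:
        pass  # expected: an explicit, graceful failure
```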
- Comparison with Analytic Results: For simpler models or for subsystems of a more complex model, it may be possible to calculate the expected output mathematically using analytic methods (e.g., queuing theory formulas). Comparing the simulation’s output for these simple cases to the known analytic solution provides a powerful check on the correctness of the implementation.
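The classic worked example is the M/M/1 queue, for which queuing theory gives the expected waiting time in queue as Wq = λ / (μ(μ − λ)). The sketch below simulates customer waits with the Lindley recursion and checks them against that closed-form result; the parameter values and sample size are illustrative choices, not prescriptions.

```python
import random

def simulate_mm1_wait(lam, mu, n, seed=42):
    """Average waiting time in queue for an M/M/1 system, via the
    Lindley recursion: W[k+1] = max(0, W[k] + S[k] - A[k+1])."""
    rng = random.Random(seed)
    w, total = 0.0, 0.0
    for _ in range(n):
        total += w
        s = rng.expovariate(mu)    # service time of this customer
        a = rng.expovariate(lam)   # interarrival time to the next customer
        w = max(0.0, w + s - a)
    return total / n

lam, mu = 0.5, 1.0
simulated = simulate_mm1_wait(lam, mu, n=200_000)
analytic = lam / (mu * (mu - lam))   # queuing-theory result: Wq = 1.0 here
print(f"simulated={simulated:.3f}  analytic={analytic:.3f}")
assert abs(simulated - analytic) < 0.15   # close agreement supports verification
```

If the simulated average disagrees with the analytic value beyond what sampling error can explain, the implementation of the queuing logic is suspect.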
Once we are confident that the model has been built correctly, we must then turn our attention to ensuring we have built the correct model.
4.3 Techniques for Model Validation
Validation is the process of building confidence in the model’s ability to accurately represent the actual system. It is less about finding bugs and more about assessing the model’s fidelity to reality. This is typically accomplished through a multi-step process that involves expert consultation, rigorous data testing, and careful output analysis.
Step 1: Design a High-Validity Model
This is a proactive approach to validation that begins during the design phase. To prevent fundamental conceptual errors from being built into the model, it is crucial to involve system experts and the client throughout the entire design process. Their domain knowledge is invaluable for ensuring the model’s assumptions are reasonable and its logic reflects the true workings of the system. Regular reviews and feedback sessions ensure the model evolves in alignment with reality.
Step 2: Test Model Assumptions
Every model is built upon a set of assumptions that simplify reality. Validation requires that these assumptions be tested quantitatively wherever possible. A key technique used here is Sensitivity Analysis. This involves systematically varying the model’s input parameters and observing the effect on the output. This process helps identify which assumptions or parameters have the most significant impact on the results. These high-impact factors are the most critical assumptions and must be validated with the greatest care.
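A simple one-at-a-time (OAT) sensitivity analysis can be sketched as follows. The `model_output` function here is a stand-in for a full simulation (it computes the mean wait of a simple queue); each parameter is perturbed by 10% while the others stay at baseline, and the parameters are ranked by the relative change they induce. The parameter names and perturbation size are illustrative assumptions.

```python
import random

def model_output(arrival_rate, service_rate, seed=0):
    """Stand-in for a full simulation: mean wait of a simple queue,
    used here only to illustrate the sensitivity-analysis technique."""
    rng = random.Random(seed)
    w, total, n = 0.0, 0.0, 20_000
    for _ in range(n):
        total += w
        w = max(0.0, w + rng.expovariate(service_rate)
                       - rng.expovariate(arrival_rate))
    return total / n

baseline = {"arrival_rate": 0.5, "service_rate": 1.0}
base_out = model_output(**baseline)

# One-at-a-time sensitivity: perturb each parameter by +10% and record
# the relative change in the output. Reusing the same seed gives common
# random numbers across runs, which sharpens the comparison.
effects = {}
for name in baseline:
    perturbed = dict(baseline)
    perturbed[name] *= 1.10
    effects[name] = (model_output(**perturbed) - base_out) / base_out

# Rank parameters by magnitude of effect: the high-impact ones are the
# assumptions that deserve the most careful validation.
for name, eff in sorted(effects.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name:13s}  relative change in output: {eff:+.1%}")
```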
Step 3: Determine Representative Output
The ultimate test of a model is whether its output behavior is representative of the real system’s output. This comparison can be made using several methods:
- The Turing Test: This involves presenting both real system data and simulated data to subject matter experts. If the experts cannot reliably distinguish between the two, it builds confidence in the model’s validity. The data must be presented in the same format as real system reports to ensure a fair comparison.
- Statistical Methods: Where historical data from the real system is available, formal statistical tests can be used to compare the distribution of the model’s output to the distribution of the real data. This provides a quantitative, objective measure of the model’s accuracy.
The specific validation strategy, however, must adapt to a critical factor: whether or not historical data for the system actually exists.
4.4 Validating New vs. Existing Systems
The challenge of validation differs significantly depending on whether the model represents an existing system or a proposed system that does not yet exist.
Validating an Existing System
When a real system already exists, validation is primarily a data-driven process. The approach is to feed the model the same real-world inputs that the actual system received over a historical period and then compare the model’s output to the historical system data. This comparison is often performed using formal statistical goodness-of-fit tests, such as the chi-square test, Kolmogorov-Smirnov test, Cramér-von Mises test, or the Moments test, to quantitatively assess how well the model’s output distribution matches the real system’s output distribution.
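As an illustration of such a comparison, here is a minimal pure-Python two-sample Kolmogorov-Smirnov test. It compares the empirical distribution of "historical" data against two hypothetical model outputs, one drawn from the same distribution and one deliberately biased; the data, sample sizes, and significance level are illustrative assumptions, and in practice a statistics library would normally be used instead.

```python
import bisect
import math
import random

def ks_two_sample(x, y, alpha=0.05):
    """Two-sample Kolmogorov-Smirnov test: returns the D statistic and a
    rejection decision at level alpha (asymptotic critical value)."""
    x, y = sorted(x), sorted(y)
    n, m = len(x), len(y)

    def ecdf(sample, t):
        # Fraction of the (sorted) sample that is <= t.
        return bisect.bisect_right(sample, t) / len(sample)

    d = max(abs(ecdf(x, t) - ecdf(y, t)) for t in sorted(set(x + y)))
    c_alpha = 1.358  # asymptotic coefficient for alpha = 0.05
    critical = c_alpha * math.sqrt((n + m) / (n * m))
    return d, d > critical

rng = random.Random(7)
real_data  = [rng.expovariate(1.0) for _ in range(500)]   # "historical" system output
model_same = [rng.expovariate(1.0) for _ in range(500)]   # output of a faithful model
model_off  = [rng.expovariate(0.5) for _ in range(500)]   # output of a biased model

d1, reject1 = ks_two_sample(real_data, model_same)
d2, reject2 = ks_two_sample(real_data, model_off)
print(f"matched model: D={d1:.3f}  reject={reject1}")
print(f"biased model:  D={d2:.3f}  reject={reject2}")
```

A large D statistic (and a rejection) signals that the model’s output distribution differs from the historical one by more than sampling variation can explain.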
Validating a First-Time Model
Validating a model of a proposed system is a more complex challenge, as there is no historical data for comparison. In this scenario, confidence in the model must be built through a combination of other techniques. The following approaches are essential:
- Subsystem Validity: While the overall system may be new, it is often composed of subsystems for which real systems or data do exist. The model can be broken down into these components, and each subsystem can be validated independently against its real-world counterpart.
- Internal Validity: The model’s internal logic must be scrutinized. A model that exhibits a very high degree of internal variance might be suspect, as this randomness could obscure the effect of changes in the input variables, making the model’s results unreliable.
- Sensitivity Analysis: This technique is especially crucial for new systems. By understanding which input parameters have the most significant influence on the output, analysts can focus their efforts on ensuring those parameters are based on the most reliable estimates available.
- Face Validity: This is a critical “common sense” check performed by experts. Does the model behave in a logical and plausible way under a variety of conditions? Even if a model happens to produce results that seem correct, it should be rejected if it does so for the wrong reasons or based on flawed internal logic. This ensures the model is not just accidentally right but is fundamentally sound.
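The internal-validity check above can be sketched by replicating the model under several random seeds and measuring the spread of the outputs. The model below is a hypothetical stand-in (mean wait of a simple queue); the coefficient of variation across replications is one illustrative measure of internal variance.

```python
import random
import statistics

def run_model(seed, n=5_000):
    """One replication of a hypothetical model: mean wait of a simple
    queue, driven entirely by the given random seed."""
    rng = random.Random(seed)
    w, total = 0.0, 0.0
    for _ in range(n):
        total += w
        w = max(0.0, w + rng.expovariate(1.0) - rng.expovariate(0.5))
    return total / n

# Replicate the run under several seeds and inspect the spread.
outputs = [run_model(seed) for seed in range(10)]
mean = statistics.mean(outputs)
cv = statistics.stdev(outputs) / mean   # coefficient of variation

print(f"replication mean={mean:.3f}  CV={cv:.1%}")
# A large CV across replications flags high internal variance: changes in
# the input variables may be drowned out by the model's own randomness.
```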
With a firm grasp of the principles of building and validating models, we can now shift our focus to a deep dive into the major specific paradigms of simulation, starting with the most common: discrete-event simulation.