5.0 Paradigm I: Discrete-Event System Simulation
5.1 Core Principles and Key Features
Discrete-Event Simulation (DES) is a simulation paradigm in which the state variables of the system change their values only at discrete points in time. Each such change is triggered by an event: an instantaneous occurrence that can change the state of the system. The simulation clock jumps from one event time to the next, skipping the periods of inactivity in between, making it a highly efficient method for many types of systems.
The core of any discrete-event simulation model is built upon five key features:
- Entities: These are the representations of the real-world objects that move through the system, such as parts in a factory or packets in a network.
- Relationships: These are the logical connections that link entities together, defining their interactions and pathways through the system.
- Simulation Executive: This is the heart of the simulation program. It is the control routine responsible for managing the simulation clock, sequencing events from an event list, and executing the appropriate logic when an event occurs.
- Random Number Generator: Since many real-world systems involve uncertainty, the random number generator is a crucial component that helps simulate this variability, such as random arrival times or service durations.
- Results & Statistics: This component is responsible for collecting data as the simulation runs and calculating key performance measures. It is the mechanism through which the model is validated and its performance is analyzed.
A central concept in DES is how the simulation clock advances. Two primary methods exist:
- Time Slicing: The clock is advanced in fixed, small increments of time. After each increment, the system is checked to see if any events have occurred. This method can be inefficient if the time between events is large, as the computer spends most of its time checking an inactive system.
- Next-Event Time Advance: The clock is advanced directly to the time of the next scheduled event. The simulation executive maintains an ordered list of future events and simply jumps the clock to the time of the most imminent one. This is the far more common and efficient approach for DES, as it completely skips the periods where the system state is not changing.
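The next-event mechanism above can be sketched in a few lines of Python. This is a minimal, illustrative event loop (the function and event labels are invented for this example): a priority queue keeps the most imminent event at the front, and the clock jumps directly from one event time to the next.

```python
import heapq

def run(events, end_time):
    """Advance the simulation clock directly to each scheduled event.

    `events` is a list of (time, label) pairs. A heap keeps the most
    imminent event at the front, so the clock never ticks through idle
    periods between events.
    """
    heapq.heapify(events)
    clock = 0.0
    log = []
    while events:
        time, label = heapq.heappop(events)
        if time > end_time:
            break
        clock = time  # jump straight to the next event time
        log.append((clock, label))
    return log

# Three events scheduled at scattered times: the loop runs exactly three
# iterations instead of ticking through every intermediate instant.
trace = run([(5.0, "arrival"), (2.0, "arrival"), (9.5, "departure")], 10.0)
# → [(2.0, 'arrival'), (5.0, 'arrival'), (9.5, 'departure')]
```

A full simulation executive would also let each event handler schedule new future events onto the heap; the jumping-clock mechanism stays the same.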
One of the most classic and important applications of discrete-event simulation is in the analysis of waiting lines, known formally as queuing systems.
5.2 Deep Dive: Simulation of Queuing Systems
Queuing theory is a fundamental area of study within operations research, and DES is the primary tool used for analyzing complex queuing systems. Queues, or waiting lines, are ubiquitous in real-world systems, appearing everywhere from customers waiting at a bank, to data packets waiting for transmission in a network, to airplanes waiting to take off. DES allows us to model, analyze, and optimize the performance of these systems.
Parameters of a Queuing System
The behavior of a queuing system is described by a set of standard parameters:
| Symbol | Parameter Name | Description |
| --- | --- | --- |
| λ | Arrival Rate | The mean number of arrivals per unit time. |
| Ts | Mean Service Time | The average time to service one item, excluding queue time. |
| σTs | Standard Deviation of Service Time | A measure of the variability in service time. |
| ρ | Server Utilization | The proportion of time a server is busy. |
| u | Traffic Intensity | A measure of the total traffic load on the system. |
| r | Mean Items in System | The average number of items in the system (in queue + in service). |
| R | Total Items in System | The total number of items currently in the system. |
| Tr | Mean Time in System | The average total time an item spends in the system. |
| TR | Total Time in System | The total time an item spends in the system. |
| σr | Standard Deviation of r | A measure of variability for the number of items in the system. |
| σTr | Standard Deviation of Tr | A measure of variability for the time an item spends in the system. |
| w | Mean Items in Queue | The average number of items waiting in the queue. |
| σw | Standard Deviation of w | A measure of variability for the number of items in the queue. |
| Tw | Mean Waiting Time | The average time an item waits in the queue. |
| Td | Mean Waiting Time of Delayed Items | The average waiting time for only those items that actually have to wait. |
| N | Number of Servers | The total number of servers in the system. |
| mx(y) | yth Percentile of x | The value of x below which y percent of observations fall. |
The Single-Server Queue
This is the simplest queuing model. It consists of arriving items, a server that provides a service, a queue for items to wait in if the server is busy, and departing items once service is complete. A single cashier at a small coffee shop is a perfect real-world example of this model. Items (customers) arrive, wait in line if the cashier is busy, get served, and then depart.
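A single-server FIFO queue like this coffee-shop example is short enough to simulate directly. The sketch below is a minimal illustration (the function name and the choice of exponential interarrival and service times are assumptions for the example, not part of the text): because there is one server and first-come-first-served discipline, each customer's queue delay is simply how long the server remains busy past that customer's arrival.

```python
import random

def simulate_single_server(n_customers, arrival_rate, mean_service, seed=1):
    """Simulate a single-server FIFO queue and return the mean queue wait.

    Interarrival and service times are drawn from exponential
    distributions here for illustration; any distribution would work.
    """
    rng = random.Random(seed)
    clock = 0.0           # arrival time of the current customer
    server_free_at = 0.0  # time the server finishes its current job
    total_wait = 0.0
    for _ in range(n_customers):
        clock += rng.expovariate(arrival_rate)         # next arrival time
        wait = max(0.0, server_free_at - clock)        # delay in queue
        total_wait += wait
        service = rng.expovariate(1.0 / mean_service)  # service duration
        server_free_at = clock + wait + service        # this departure time
    return total_wait / n_customers

# With λ = 0.5 and Ts = 1.0 (so ρ = 0.5), queue delays stay moderate.
mean_wait = simulate_single_server(10_000, arrival_rate=0.5, mean_service=1.0)
```

Averaging over many customers (and many random seeds) gives an estimate of Tw, which can then be checked against the queuing relationships given below.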
The Multi-Server Queue
This model extends the single-server system to include multiple, identical servers who serve a single common queue. When an item arrives, it is served by any available server. If all servers are busy, the item joins the waiting queue. A bank with several tellers serving a single line of customers is a classic example. For a system with N identical servers, each with utilization ρ, the total traffic intensity is u = Nρ, and the maximum input rate the system can handle is given by:
λmax = N / Ts
Queuing Relationships
A set of fundamental formulas, including the famous Little’s Law, describes the key relationships in queuing systems. These allow an analyst to calculate various performance metrics based on a few known parameters.
| General Systems | Single-Server Systems | Multi-Server Systems |
| --- | --- | --- |
| r = λTr (Little’s Formula) | ρ = λTs | ρ = λTs/N |
| w = λTw (Little’s Formula) | r = w + ρ | u = λTs = Nρ |
| Tr = Tw + Ts | | r = w + Nρ |
These relationships are invaluable for both analytical solutions and for verifying the output of a DES model of a queuing system.
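As a small worked illustration of these single-server relationships, suppose a simulation run measured a mean queue wait Tw with known λ and Ts (the numbers below are hypothetical). The remaining metrics follow directly from the formulas in the table:

```python
def derived_metrics(lam, Ts, Tw):
    """Derive single-server metrics from λ, Ts and a measured mean wait Tw."""
    rho = lam * Ts  # server utilization: ρ = λTs
    Tr = Tw + Ts    # mean time in system: Tr = Tw + Ts
    r = lam * Tr    # Little's formula: r = λTr
    w = lam * Tw    # Little's formula: w = λTw
    return rho, Tr, r, w

# Hypothetical measurements: λ = 2.0 items/unit time, Ts = 0.3, Tw = 0.45.
rho, Tr, r, w = derived_metrics(lam=2.0, Ts=0.3, Tw=0.45)
# rho = 0.6, Tr = 0.75, r = 1.5, w = 0.9; note that r = w + ρ, as the
# single-server column requires — a useful consistency check on model output.
```

Checking that independently estimated quantities satisfy identities like r = w + ρ is exactly the kind of verification step the text recommends for a DES model.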
5.3 Application Case Study: Simulation of Time-Sharing Systems
Another key application of DES, rooted in the history of computing, is the simulation of time-sharing operating systems. A time-sharing system allows multiple users to share a single computer simultaneously by having the central processing unit (CPU) rapidly switch between each user’s tasks. This creates the illusion for each user that they have dedicated access to the computer’s resources.
DES is the ideal method for simulating such a system because the concepts map directly. The users or their computational jobs are the entities. The CPU is a resource. The requests for processing time are the events that drive the simulation. By simulating the scheduling algorithms and resource allocation policies, computer scientists can analyze the performance and fairness of the system under different workloads.
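The mapping described above can be made concrete with a toy scheduler model. The sketch below is not SimOS or any real operating system; it is a simplified round-robin discipline (job names, quantum, and CPU demands are all invented for the example) showing how CPU time-slicing decisions become the events of a simulation.

```python
from collections import deque

def round_robin(jobs, quantum):
    """Simulate round-robin CPU scheduling.

    `jobs` maps job name -> remaining CPU demand. Each scheduling
    decision advances the clock by one time slice; the function returns
    the completion time of each job. A simplified sketch: real systems
    would also model I/O waits and context-switch overhead.
    """
    ready = deque(jobs.items())
    clock = 0.0
    finish = {}
    while ready:
        name, remaining = ready.popleft()
        slice_ = min(quantum, remaining)
        clock += slice_            # the CPU runs this job for one slice
        remaining -= slice_
        if remaining > 0:
            ready.append((name, remaining))  # job rejoins the ready queue
        else:
            finish[name] = clock             # job completes at this event
    return finish

# Three jobs sharing one CPU with a quantum of 1.0 time unit.
times = round_robin({"A": 3.0, "B": 1.0, "C": 2.0}, quantum=1.0)
# → {"B": 2.0, "C": 5.0, "A": 6.0}
```

Running such a model under different quanta or workloads lets an analyst compare the fairness and turnaround times of scheduling policies, which is the kind of question time-sharing simulations are built to answer.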
A concrete example is the SimOS Simulation System developed at Stanford University. This powerful simulation environment was designed to study complex computer hardware designs, analyze application performance, and conduct research on operating systems. SimOS provides detailed software simulations of all the key hardware components of modern computers, including processors, Memory Management Units (MMUs), and caches, illustrating the incredible level of detail and insight that DES can provide in this domain.
We now pivot from systems defined by discrete events to those defined by continuous, smooth change, introducing the next major simulation paradigm.