MODELING ESSENTIALS

Though simulations vary widely in their design and implementation, most share a few common features to achieve their goals.

Event Management:
A simulation is made up of states, events, and entities. States are groups of variables that describe the system at a specific time. Events are activities that change the state of the system. Entities are the objects represented in the simulation, the things described by the state variables and to which events occur. Events are the key items that transform the model and drive it through its operations. They may include the arrival of a piece of material at a milling machine, the departure of an aircraft from an airport, the delivery of a message in a network, or an engagement between a missile and a fighter aircraft. Events are typically managed through multiple lists or queues in the model. The queues identify which events are ready to be processed, which are waiting until a specified time, and which must be triggered by specific conditions.
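As a rough sketch of these building blocks, the Python fragment below defines a hypothetical entity (a milling machine described by state variables) and an event that changes its state; the names Machine, Event, and apply are invented for illustration and are not drawn from any particular simulation package.

    from dataclasses import dataclass, field

    @dataclass
    class Machine:                          # an entity
        name: str
        busy: bool = False                  # state variables describing the entity
        queue_length: int = 0

    @dataclass(order=True)
    class Event:                            # an activity that changes the state
        time: float                         # when the event occurs
        action: str = field(compare=False)
        target: Machine = field(compare=False)

    def apply(event: Event) -> None:
        """Apply an event to its target entity, changing the system state."""
        if event.action == "arrival":
            if event.target.busy:
                event.target.queue_length += 1
            else:
                event.target.busy = True
        elif event.action == "departure":
            if event.target.queue_length > 0:
                event.target.queue_length -= 1
            else:
                event.target.busy = False

    mill = Machine("milling machine")
    apply(Event(0.0, "arrival", mill))
    print(mill)    # Machine(name='milling machine', busy=True, queue_length=0)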

Queues manage events by ordering and releasing them according to specified criteria. The most common types are the First In, First Out (FIFO), Last In, First Out (LIFO), ordered, and random queues. Each releases events into the simulation in a different way, and each is uniquely useful for representing specific situations. A FIFO queue may contain a plan in the form of events to be executed in sequence. A LIFO queue may handle object reactions that interrupt and supersede planned events. An ordered queue is widely used in training simulations, where the time at which an event occurs determines its place in the queue. A random queue assigns no order to its events, processing any one of them without regard for priority or arrival order.
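The fragment below sketches the four queue disciplines using Python's standard library (a deque, a list, heapq, and random); the event names and times are invented, and a real simulation would store richer event records than simple strings.

    import heapq, random
    from collections import deque

    events = ["load part", "start mill", "inspect", "ship"]

    fifo = deque(events)                    # First In, First Out: release in arrival order
    print(fifo.popleft())                   # -> 'load part'

    lifo = list(events)                     # Last In, First Out: newest event is handled first
    print(lifo.pop())                       # -> 'ship'

    ordered = []                            # Ordered: release by scheduled event time
    for t, e in [(4.0, "inspect"), (1.5, "load part"), (2.2, "start mill")]:
        heapq.heappush(ordered, (t, e))
    print(heapq.heappop(ordered))           # -> (1.5, 'load part')

    rand = list(events)                     # Random: any waiting event may be chosen
    print(rand.pop(random.randrange(len(rand))))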

Time Management:
In a simulation, time is advanced using a variable that can be controlled like any other and need not be tied to the advancement of real time or the internal computer clock. Typically, simulations move forward through event-based time advancement (event-stepped) or incremental time advancement (time-stepped). An event-stepped simulation recognizes that, in the model of the system, changes occur only at the points at which events occur. Therefore, the model jumps from one scheduled event to the next, omitting the representation of intermediate times and speeding up execution by eliminating operations that do not affect the simulation state. Time-stepped simulations, on the other hand, are used when there are a large number of interactions between entities based on shared events. Training simulations use this method because of the need to present a consistent flow of time and events to a person who is interacting with the simulation.
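A minimal sketch of event-stepped (next-event) time advance might look like the following, where a heap holds the future event list and the clock jumps directly from one scheduled event to the next; the event times and labels are illustrative only.

    import heapq

    future_events = [(2.5, "aircraft departs"), (0.7, "message delivered"), (5.0, "engagement ends")]
    heapq.heapify(future_events)            # the future event list, ordered by time

    clock = 0.0
    while future_events:
        clock, event = heapq.heappop(future_events)   # jump straight to the next event
        print(f"t = {clock:4.1f}  {event}")
        # processing an event may schedule new ones with heapq.heappush(future_events, ...)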

A great deal of work has been done in the area of managing time-stepped simulations. These simulations are discrete, stepping from one time to another using a specified increment. This increment may be fixed, such that each step is one minute, one second, or one microsecond long, or it may be variable, such that the size of the step is determined by the activity being simulated. It may be necessary, for example, to represent time in fractions of a second during a combat engagement and in days during the political negotiations prior to the beginning of hostilities.
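A corresponding time-stepped loop might look like the sketch below, where the increment dt is either fixed or chosen from the current activity; the engaged flag and step sizes are hypothetical.

    clock, end_time = 0.0, 10.0             # simulated hours
    engaged = False                         # hypothetical flag set elsewhere in the model

    while clock < end_time:
        dt = 0.01 if engaged else 1.0       # variable increment driven by the activity
        clock += dt
        # update every entity here for the interval that has just elapsed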

As simulations have grown to operate across networks of computers or on parallel computers, the models have been separated into pieces that each represent a portion of the problem and exist as multiple software programs on multiple machines. This has created the need to maintain consistency among programs, which cannot be done effectively with the simple queuing lists used within a single program. Parallel and distributed time management was initially achieved through the use of a shared clock to which all programs would refer. However, research has led to algorithms that can ensure time synchronization without a central shared clock.

Parallel and distributed time management can be accomplished through conservative or optimistic synchronization. Both methods use a mechanism for tracking the simulation time of each process. Conservative synchronization maintains consistency among all processes as the simulation executes, never allowing a process to advance to a time at which it could still receive an event from another process's past. Optimistic synchronization, on the other hand, allows each process to move ahead as fast as computationally possible, putting each process at a different point in simulated time. When an event is received from another process that affects past events in the local process, the simulation reverses its operations and "rolls back" in time to include the new interaction. The conservative method assumes that interactions between processes are common enough that constant synchronization is the most efficient way of proceeding into the future. The optimistic method assumes that interactions are scarce and that the problem can be solved more efficiently by letting each process run as fast as possible, rolling back only when an interaction arrives in its past.
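The fragment below is a deliberately simplified sketch of the optimistic, roll-back idea for a single local process: state snapshots are saved as the process races ahead, and a straggler event with a timestamp in the past forces a roll back before it is processed. Real optimistic protocols such as Time Warp must also cancel messages already sent and re-execute the events that were rolled back, which this sketch omits.

    import copy

    state = {"clock": 0.0, "count": 0}      # the local process's state
    snapshots = []                          # saved copies, one per processed event

    def process(event_time: float) -> None:
        snapshots.append(copy.deepcopy(state))   # save state before moving ahead
        state["clock"] = event_time
        state["count"] += 1

    def receive_straggler(event_time: float) -> None:
        """Handle an event from another process with a timestamp in our past."""
        global state
        while snapshots and state["clock"] >= event_time:
            state = snapshots.pop()              # roll back to a state earlier than the event
        process(event_time)                      # then process the late event
        # (re-executing the rolled-back local events is omitted from this sketch)

    for t in (1.0, 2.0, 3.0, 4.0):
        process(t)
    receive_straggler(2.5)
    print(state)    # the clock has been rolled back and advanced to 2.5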

Random Number Generation:
Many models require random numbers to introduce the variability that comes from representing events statistically rather than deterministically. Random number generators are algorithms that produce a series of numbers which appear random and independent of one another. These algorithms are actually deterministic and merely provide the impression of randomness. They are typically required to be repeatable, fast, and economical with storage, and they usually generate Uniformly distributed numbers in the range (0,1). To create variates from other distributions, the Uniform random number becomes the input to a second algorithm that generates Normal, Exponential, Poisson, Gamma, Weibull, Lognormal, Beta, Binomial, or other distributed numbers.
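As an illustration, the sketch below uses a simple linear congruential generator (with a commonly published pair of constants) to produce Uniform(0,1) values, then applies the inverse-transform method to turn them into Exponentially distributed variates; a production simulation would normally rely on a stronger, well-tested generator.

    import math

    seed = 12345                            # repeatable: the same seed reproduces the stream

    def uniform01() -> float:
        """Linear congruential generator returning a Uniform(0,1) value."""
        global seed
        seed = (1664525 * seed + 1013904223) % 2**32
        return seed / 2**32

    def exponential(mean: float) -> float:
        # inverse transform: if U ~ Uniform(0,1), then -mean * ln(1 - U) is Exponential
        return -mean * math.log(1.0 - uniform01())

    print([round(uniform01(), 3) for _ in range(3)])
    print(round(exponential(mean=5.0), 3))  # e.g., a service time with a mean of 5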

Physical Modeling:
Traditionally, models have represented the capabilities of machinery and systems based on their physical characteristics and the basic laws of physics. The focus has been on understanding and representing the physical environment - distance, rate, weight, density, etc. In manufacturing systems, models represent entities entering a system in which events are generated from statistical distributions and buffered by waiting queues. In analytical physics simulations, the models represent the specific behaviors of particles or chemicals under specified conditions. In training simulations, the models reproduce the physical world, allowing people to interact with terrain, buildings, and other entities. Each of these takes a unique view of the essential variables and algorithms needed to represent the physical behavior of the system. The impact of human decision-making and process variance is handled through statistical distributions, driven by random number inputs, that represent this variation.
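A toy example of this combination, with invented numbers, is sketched below: a basic kinematic update (distance = rate x time) for an aircraft, paired with a decision delay drawn from a statistical distribution.

    import random

    position_km, speed_kmh = 0.0, 450.0     # aircraft state variables
    dt_h = 0.5                              # half-hour time step
    position_km += speed_kmh * dt_h         # physical model: distance = rate x time

    # process variance: a human decision delay drawn from a statistical distribution
    decision_time_min = random.gauss(4.0, 1.0)
    print(round(position_km, 1), round(decision_time_min, 2))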

Behavioral Modeling:
As the role of simulation has grown, the need to more accurately represent human and group behaviors has increased. To accommodate these needs, simulation developers have turned to the artificial intelligence community for assistance. Models now contain finite state machines, expert systems, neural networks, and case-based reasoning to represent human behavior in finer detail. This acknowledges that specific physical conditions trigger human behaviors that must be explicitly modeled to achieve the appropriate results. Behavioral modeling has been particularly useful in training applications where computer-controlled adversaries with challenging and realistic behaviors are required.
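A finite state machine of this kind can be sketched in a few lines; the states, thresholds, and conditions below are invented purely to show how physical conditions (range to a target, damage) can trigger behavioral transitions.

    def next_state(state: str, range_km: float, damaged: bool) -> str:
        """Return the adversary's next behavioral state from physical conditions."""
        if damaged:
            return "withdraw"
        if state == "patrol" and range_km < 40.0:
            return "intercept"
        if state == "intercept" and range_km < 10.0:
            return "engage"
        return state

    state = "patrol"
    for rng, dmg in [(55.0, False), (35.0, False), (8.0, False), (8.0, True)]:
        state = next_state(state, rng, dmg)
        print(state)    # patrol -> intercept -> engage -> withdraw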

Model Management:
A computer simulation is a system of software and hardware that must be developed and managed in accordance with the same principles of systems and software engineering that govern other applications. Issues that are not germane to the science of simulation are very important to the business of simulation. An attractive and friendly user interface is important. Systems that have provided only textual input/output are giving way to those with graphical user interfaces and multi-dimensional representations of the simulated world. Configuration management of the models provides stability to a simulation program, ensuring that it can control its own evolution. Documentation provides permanence of expertise by recording model assumptions, algorithms, data collection, and validation results. This establishes a foundation that can extend the useful life of a model beyond the tenure of its original developers. Finally, domain architectures are being developed which attempt to capture the essential components and interactions of entire families of simulations. The intent is to create a structure that eliminates redundant development and promotes the reuse of modules or components that provide common functionality and interfaces across an entire family of simulations.

COMPUTER TECHNOLOGIES

Simulations, like all other applications, leverage technologies from other areas of science. The algorithms and information required to create a very complex model usually demand more than the available computer hardware and software can deliver. However, simulation programs are growing larger and more useful as a direct result of advancements in computer science. A few of the most useful technologies are described here.

Networks:
The ability to distribute a simulation across a network of computers leads to more detailed, scalable, complex, and accessible models. Distributed message passing and event synchronization allow a single problem to be addressed with a large number of traditional computers on a network. The proliferation of standardized networks between computerized machinery, communications systems, decision aids, and other tools has created an environment in which simulations can drive "real world" computers directly and extract data from them in real time. This has blurred the boundary between real and simulated worlds.

Parallel Computing:
Parallel computing provides many of the advantages of networked computers, but adds the characteristic of close coupling. Some problems can be divided into many thousands of separate processes, but the interactions between them are so frequent that a general-purpose network for delivering messages introduces delays that greatly extend the execution time of the simulation. In these cases, parallel computers can provide the close coupling between processors and memory that allow the simulation to execute much more quickly.

Artificial Intelligence:
The representation of human and group behavior has become essential in some parts of the simulation community. Techniques developed under the umbrella of artificial intelligence and cognitive modeling can solve some of these problems. Simulations are including more finite state machines, expert systems, neural networks, case-based reasoning, and genetic algorithms in an attempt to represent human behavior with more fidelity and realism.

Computer Graphics:
Simulation data lends itself very well to graphic displays. Factories and battlefields can be represented in full 3D animation using virtual reality techniques and hardware devices. Graphical user interfaces provide easy model construction, operation, data analysis, and data presentation. These tools place a new and more attractive face on simulations that previously relied on the mind's eye for visualization. This often leads to greater acceptance of the models and their results by the engineering and business communities.

Databases:
Simulations can generate a large amount of data to be analyzed and often require large volumes of input data to drive the models. The availability of relational and object-oriented databases has made the task of organizing and accessing this information much more efficient. Previously, model developers were required to build their own storage constructs and query languages, a distraction from the real focus of the simulation study.

Systems Architecture:
Simulations can be grouped into families, or domains, where the same software architectures can be used to model entire classes of problems. This recognition in transaction-based simulation has led to the creation of a host of simulation products that encapsulate functionality used to model everything from factory operations to aircraft routing schedules.

World Wide Web:
The expansion of the Internet and the World Wide Web has led to experiments with simulations that are either distributed through the Internet or accessible from it. These simulations make use of standard protocols and allow the distribution of a simulation across multiple computers that are not directly controlled on a dedicated network. Simulation users do not necessarily need to own the computers that run the simulation. Instead, the user may access a simulation-specific machine connected to the Web, provide input values, control model execution, and receive the results without ever having their own copy of the simulation software or the computers necessary to run it.
