Keywords

Abstract

A key difference between stochastic microsimulation models and more traditional forms of travel demand forecasting models is that microsimulation-based forecasts change each time the sequence of random numbers used to simulate choices is varied. To address practitioners' concerns about this variation, a common approach is to run the microsimulation model several times and average the results. The question then becomes: What is the minimum number of runs required for the averaged results to stabilize? This issue was investigated by means of a systematic experiment with the San Francisco model, a microsimulation model system used in actual planning applications since 2000. The system contains models of vehicle availability, day pattern choice, tour time-of-day choice, destination choice, and mode choice. To investigate the variability of the forecasts of this system due to random simulation error, the model system was run 100 times, each time changing only the sequence of random numbers used to simulate individual choices from the logit model probabilities. The extent of random variability in the model results is reported as a function of two factors: (a) the type of model (vehicle availability, tour generation, destination choice, or mode choice); and (b) the level of geographic detail, that is, at the analysis zone level, neighborhood level, or countywide level. For each combination of these factors, it is shown graphically how quickly the mean values of key output variables converge toward a stable value as the number of simulation runs increases.
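The experimental design described above can be illustrated with a minimal sketch: repeat a stochastic simulation many times, varying only the random seed, and track how the cumulative mean of an output measure converges as runs accumulate. The run function, trip count, and "true" mode share below are hypothetical placeholders, not values from the San Francisco model.

```python
import random
import statistics

def simulate_run(seed, true_share=0.25, trips=10_000):
    """One hypothetical microsimulation run: draw discrete choices
    from a fixed probability, varying only the random seed.
    Returns the simulated share for this run."""
    rng = random.Random(seed)
    chosen = sum(1 for _ in range(trips) if rng.random() < true_share)
    return chosen / trips

def running_means(n_runs=100):
    """Cumulative mean of the output over 1..n_runs runs, mirroring
    the question of how quickly averages stabilize."""
    results, means = [], []
    for seed in range(n_runs):
        results.append(simulate_run(seed))
        means.append(statistics.fmean(results))
    return means

means = running_means()
# Early means fluctuate from run to run; later means settle near the
# underlying value as simulation error averages out.
```

Plotting `means` against the run count reproduces, in miniature, the convergence curves the paper reports for each model type and geographic level.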