Abstract
Over the past decade, a substantial literature has developed on the estimation of discrete choice dynamic programming (DC-DP) models of behavior. However, this literature now faces major computational barriers. Specifically, in order to solve the dynamic programming (DP) problems that generate agents' decision rules in DC-DP models, high-dimensional integrations must be performed at each point in the state space of the DP problem. In this paper we explore the performance of approximate solutions to DP problems. Our approximation method consists of: (1) using Monte Carlo integration to simulate the required multiple integrals at a subset of the state points, and (2) interpolating the non-simulated values using a regression function. The overall performance of this approximation method appears to be excellent, both in terms of the degree to which it mimics the exact solution and in terms of the parameter estimates it generates when embedded in an estimation algorithm.
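The two-step approximation described above can be sketched in a stylized way. The sketch below uses a hypothetical two-alternative problem with a scalar state and a quadratic regression basis; the payoff functions, shock distribution, sample sizes, and basis are all illustrative assumptions, not the paper's actual specification:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy state space: a scalar state s on a grid (illustrative assumption).
states = np.linspace(0.0, 1.0, 200)

# Hypothetical alternative-specific values with additive N(0,1) shocks:
#   v1(s, e1) = 1.0 + 0.5*s + e1,   v2(s, e2) = 0.8 + 0.9*s + e2
def emax_monte_carlo(s, draws=500):
    """Simulate E[max(v1, v2)] at state s by Monte Carlo integration."""
    eps = rng.standard_normal((draws, 2))
    v1 = 1.0 + 0.5 * s + eps[:, 0]
    v2 = 0.8 + 0.9 * s + eps[:, 1]
    return np.maximum(v1, v2).mean()

# Step 1: simulate the integral (the "Emax") at a subset of state points.
subset = rng.choice(len(states), size=30, replace=False)
emax_sim = np.array([emax_monte_carlo(states[i]) for i in subset])

# Step 2: fit a regression of the simulated values on functions of the
# state (here a quadratic in s), then interpolate to all state points.
X_sub = np.column_stack([np.ones(len(subset)), states[subset], states[subset] ** 2])
beta, *_ = np.linalg.lstsq(X_sub, emax_sim, rcond=None)

X_all = np.column_stack([np.ones(len(states)), states, states ** 2])
emax_interp = X_all @ beta  # interpolated Emax at every state point
```

In a full DC-DP solution this Emax approximation would be computed at each period of a backward recursion, with the interpolated values feeding into the continuation values of the previous period; the sketch shows only a single such step.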