This paper presents an algorithm for decision-making in multiple open ascending-price (English) auctions where the buyer must procure a complete bundle of complementary goods. The utility of each bidding choice is evaluated in terms of the buyer's expected utility over the future decisions that follow from it. The problem is modeled as a Markov decision process (MDP), and the value iteration method of dynamic programming is used to determine the value of bidding or not bidding in each state. To ease the computational burden, three state-space reduction techniques are employed. When tested against adaptations of two methods from the literature, the algorithm performs significantly better whenever sufficient information about the progress of the other concurrently running auctions is available at the time future bidding decisions are made.
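
As a rough illustration of the decision model the abstract describes, the sketch below runs value iteration over a tiny bidding MDP. The states, prices, valuation, and win probabilities are invented for the example and are not the paper's model: a single item valued at 4 can be bid on at price 1 (winning with probability 0.5, otherwise the price rises to 2, where a bid wins for certain), and passing ends the auction with payoff 0.

```python
def value_iteration(transitions, gamma=1.0, tol=1e-9):
    """Generic value iteration.

    transitions[s][a] = list of (prob, reward, next_state) outcomes;
    terminal states map to an empty action dict.
    """
    V = {s: 0.0 for s in transitions}
    while True:
        delta = 0.0
        for s, acts in transitions.items():
            if not acts:  # terminal state: value stays 0
                continue
            # Bellman backup: best expected reward-plus-continuation over actions
            best = max(
                sum(p * (r + gamma * V[s2]) for p, r, s2 in outcomes)
                for outcomes in acts.values()
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

# Hypothetical toy auction MDP (numbers chosen for illustration only).
mdp = {
    "p1":  {"bid": [(0.5, 3.0, "won"), (0.5, 0.0, "p2")],  # win at price 1: payoff 4-1
            "pass": [(1.0, 0.0, "out")]},
    "p2":  {"bid": [(1.0, 2.0, "won")],                    # win at price 2: payoff 4-2
            "pass": [(1.0, 0.0, "out")]},
    "won": {},
    "out": {},
}
V = value_iteration(mdp)
```

Here the optimal policy is to bid in both price states: the value of the initial state is 0.5·3 + 0.5·2 = 2.5, which exceeds the payoff of 0 from passing.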