Conventional incremental learning approaches for multi-layer feedforward neural networks are driven by new incoming training instances. In this paper, by contrast, a changing environment is defined by new incoming features of a given problem. Our empirical study shows that ISGNN, an incremental self-growing neural network, can adapt to such a changing environment with a growing input dimension. Meanwhile, dynamic neural network algorithms design the network structure automatically, avoiding the time-consuming trial-and-error search for an appropriate topology. We also exploit the information learned by the previously grown network so as to avoid retraining from scratch. Finally, we report simulation results on two benchmark problems. Our experiments show that this adaptive learning mechanism can significantly improve the performance of the original networks.
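The core idea of adapting to a new input dimension while reusing previously learned weights can be sketched as follows. This is a minimal illustration, not the paper's exact ISGNN growth procedure: a one-hidden-layer network whose input-to-hidden weight matrix gains one column when a new input feature arrives, so weights trained on the old features are kept rather than discarded.

```python
import math
import random

random.seed(0)

class GrowingMLP:
    """One-hidden-layer network that can widen its input dimension.

    Hypothetical sketch: the class name and growth rule are illustrative
    assumptions, not the ISGNN algorithm from the paper.
    """

    def __init__(self, n_in, n_hidden, n_out):
        # W1[j][i]: weight from input i to hidden unit j
        self.W1 = [[random.gauss(0, 0.1) for _ in range(n_in)]
                   for _ in range(n_hidden)]
        self.b1 = [0.0] * n_hidden
        self.W2 = [[random.gauss(0, 0.1) for _ in range(n_hidden)]
                   for _ in range(n_out)]
        self.b2 = [0.0] * n_out

    def forward(self, x):
        h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
             for row, b in zip(self.W1, self.b1)]
        return [sum(w * hj for w, hj in zip(row, h)) + b
                for row, b in zip(self.W2, self.b2)]

    def add_input_feature(self):
        # Append a near-zero weight for the new feature to every hidden
        # unit; all previously learned weights stay untouched, so the
        # network's behavior on the old features is preserved.
        for row in self.W1:
            row.append(random.gauss(0, 0.01))

net = GrowingMLP(n_in=3, n_hidden=5, n_out=2)
old_first_row = list(net.W1[0])    # snapshot of learned weights
net.add_input_feature()            # environment changes: input dim 3 -> 4
y = net.forward([1.0, 1.0, 1.0, 1.0])
```

Because the appended weights are near zero, the grown network initially behaves almost identically to the original one on the old features, and subsequent training only has to fit the contribution of the new feature.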