Abstract—In a standard convolutional neural network, the layers are trained using the backpropagation algorithm. After training, the filters in the first convolutional layer can be observed to structurally resemble Gabor filters, despite being randomly initialized. Motivated by this observation, we propose to use Gabor filters directly within the convolutional neural network's learning framework to improve classification accuracy. The Gabor filters are integrated into the learning framework in two ways. First, they are used to initialize the filters in the first convolutional layer. Second, they are used in conjunction with the Gershgorin circle theorem to constrain the weights of the first convolutional layer during backpropagation. The effectiveness of this Gabor filter-based training in increasing the classification accuracy of the convolutional neural network is investigated. The proposed method is validated on the public MNIST and USPS datasets. The results obtained on these datasets show that integrating Gabor filters into the neural network's learning framework increases the classification accuracy of the convolutional neural network.
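As a minimal sketch of the first integration strategy, the snippet below builds a bank of real-valued Gabor kernels at evenly spaced orientations, of the kind that could serve as initial weights for a first convolutional layer. The kernel size and the parameter values (sigma, wavelength, aspect ratio, phase) are illustrative assumptions, not the paper's reported configuration.

```python
import numpy as np

def gabor_kernel(size, theta, sigma=2.0, lambd=4.0, gamma=0.5, psi=0.0):
    """Real-valued Gabor kernel of shape (size, size).

    theta: orientation; sigma: Gaussian envelope width; lambd: wavelength
    of the sinusoidal carrier; gamma: spatial aspect ratio; psi: phase.
    All defaults are illustrative, not values from the paper.
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate the coordinate system by theta.
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + (gamma * y_t) ** 2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * x_t / lambd + psi)
    return envelope * carrier

def gabor_bank(n_filters, size=5):
    """Stack of Gabor kernels at evenly spaced orientations in [0, pi),
    shaped (n_filters, size, size), usable as initial conv-layer weights."""
    thetas = np.linspace(0, np.pi, n_filters, endpoint=False)
    return np.stack([gabor_kernel(size, t) for t in thetas])

bank = gabor_bank(8, size=5)
print(bank.shape)  # (8, 5, 5)
```

In a deep-learning framework, such a bank would be copied into the first convolutional layer's weight tensor before training begins, replacing the random initialization.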