In this paper, we present a new gradient descent optimization method that overcomes some weaknesses of existing gradient descent optimization strategies. The proposed technique is based on the variation of simple gradients and adapts the learning rate accordingly, providing an efficient learning-rate adjustment mechanism. Our algorithm requires less memory and fewer hyper-parameters than other algorithms. We provide a convergence analysis of the algorithm. To evaluate our algorithm, we perform classification tests on the MNIST, IMDB movie review, and CIFAR-10 datasets, comparing our proposed method against state-of-the-art optimizers such as SGD(M), Adam, Adamax, Nadam, RMSProp, Adagrad, and AdaDelta. Our algorithm is computationally efficient and effective compared to existing methods.