Since the start of the current century, artificial intelligence has made critical advances that have improved the capabilities of intelligent systems. Machine learning in particular has changed remarkably, driving the rise of deep learning. Deep learning achieves state-of-the-art results on even the most advanced, difficult problems; however, this comes with a trade-off in interpretability. Whereas traditional machine learning techniques rely on interpretable working mechanisms, hybrid systems and deep learning models are black boxes beyond our ability to understand. To make such systems understandable, additional methods under the umbrella of explainable artificial intelligence (XAI) have been widely developed in recent years. In this sense, this study proposes a Convolutional Neural Network (CNN) model that runs a new form of Grad-CAM. By providing numerical feedback in addition to the default Grad-CAM output, the numerical Grad-CAM (numGrad-CAM) was used within the developed CNN model to form an explainability interface for brain tumor diagnosis. In detail, the numGrad-CAM-CNN model was evaluated through both technical and physician-oriented (human-side) evaluations. It achieved average findings of 97.11% accuracy, 95.58% sensitivity, and 96.81% specificity for the target brain tumor diagnosis setup. Additionally, the numGrad-CAM integration achieved 90.11% accuracy against the other CAM variations in the same CNN model. The physicians who used the numGrad-CAM-CNN model gave positive responses regarding its use for explainable (and safe) diagnostic decision-making for brain tumors.
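For orientation, the standard Grad-CAM computation that numGrad-CAM builds on can be sketched in a few lines: gradients of the class score are global-average-pooled into channel weights, the feature maps are combined with those weights, and a ReLU keeps only positive evidence. The sketch below is a minimal NumPy illustration of this baseline, not the authors' implementation; the `numeric_summary` helper is purely hypothetical, added only to suggest how a scalar "numerical feedback" value might be derived from a class activation map.

```python
import numpy as np

def grad_cam(activations, gradients):
    """Baseline Grad-CAM sketch (not the paper's numGrad-CAM).

    activations: (C, H, W) feature maps from the last conv layer
    gradients:   (C, H, W) gradients of the class score w.r.t. activations
    """
    # alpha_k: global-average-pool each channel's gradient map -> (C,)
    weights = gradients.mean(axis=(1, 2))
    # Weighted sum of feature maps over channels -> (H, W)
    cam = np.tensordot(weights, activations, axes=1)
    # ReLU: keep only features with a positive influence on the class
    return np.maximum(cam, 0.0)

def numeric_summary(cam):
    """Hypothetical scalar feedback: mean of the max-normalized CAM."""
    peak = cam.max()
    if peak == 0.0:
        return 0.0
    return float((cam / peak).mean())
```

Given precomputed activations and gradients from any CNN, `grad_cam` returns a non-negative heatmap of the same spatial size as the final convolutional feature maps, and `numeric_summary` collapses it to a single value in [0, 1].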