Machine-learning-based translation models have recently accelerated the pace of research in Natural Language Processing. The current era is witnessing transformer networks overtake earlier deep-learning-based systems: the robustness and exceptional performance of attention-based encoder-decoder architectures leave ample scope for further research. For handwritten mathematical text recognition in particular, these systems deserve more attention. This article presents an attention-based encoder-decoder model that has been built and trained on an extensive database of mathematical expressions collected from schools, colleges, and universities in Punjab and Madhya Pradesh. Mathematical expressions are an essential component of education and scientific learning; the article's novel contribution is to recognize them from their handwritten sources using an encoder-decoder neural network. In total, 101,400 images from the corpus have been preprocessed, segmented, and recognized using the proposed network. The attention-based dense encoder with a GRU-based decoder and attention mechanism achieves an expression recognition rate (ExpRate) of 57.4% on the created corpus and a competitive ExpRate of 58% on CROHME 2016, outperforming many state-of-the-art models.
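To make the attention step concrete, the following is a minimal, self-contained sketch of how an attention-equipped decoder forms a context vector from encoder features. It uses plain dot-product attention for illustration; the paper's model presumably uses learned attention weights (and possibly coverage), so the function names and the scoring rule here are illustrative assumptions, not the authors' implementation.

```python
import math

def softmax(scores):
    # Numerically stable softmax over a list of attention scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attention_context(encoder_features, decoder_state):
    # Score each encoder feature vector against the current decoder
    # state (dot-product scoring; an assumption for this sketch --
    # a trained model would use learned projection weights).
    scores = [dot(f, decoder_state) for f in encoder_features]
    weights = softmax(scores)
    # Context vector: attention-weighted sum of the encoder features,
    # which the GRU decoder would consume to emit the next symbol.
    dim = len(encoder_features[0])
    context = [sum(w * f[i] for w, f in zip(weights, encoder_features))
               for i in range(dim)]
    return context, weights

# Toy example: three 2-D encoder feature vectors and one decoder state.
feats = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
state = [1.0, 0.0]
ctx, w = attention_context(feats, state)
```

In the full model, this weighting is what lets the decoder focus on the image region containing the symbol currently being transcribed.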