The peer-review process is widely acknowledged to suffer from issues such as bias, inconsistency and arbitrariness. Nevertheless, peer reviews remain an essential part of scholarly communication, providing authors with feedback on crucial elements of their papers, including Motivation/Impact, Soundness/Correctness, Novelty and Substance. To address these shortcomings, we propose MultiPEER, a novel multi-task model built on representations from SciBERT, a Transformer-based language model pretrained on scientific text. MultiPEER employs a deep neural attention mechanism to extract aspect categories and sentiment from reviewers' comments. Its primary objective is to evaluate how thoroughly a review covers the important aspects of a paper, thereby assessing the review's usefulness and informativeness. By incorporating MultiPEER into the workflow, editors and area chairs can expedite decisions about peer-review conformity. We demonstrate that MultiPEER outperforms competitive baseline models, supporting this with detailed explanations, graphical plots and test cases; MultiPEER achieves an accuracy of 88.1%. In addition, we conduct a comprehensive analysis of MultiPEER's strengths and weaknesses, providing insights for further improvement. By automating aspects of review assessment, we aim to enable faster and more reliable decision-making for editors and area chairs. Our code is available at https://github.com/PrabhatkrBharti/MultiPEER.git to allow replication of our findings.
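To make the described setup concrete, the following is a minimal sketch, assuming a PyTorch/Hugging Face environment, of a multi-task aspect and sentiment classifier over SciBERT representations with a shared attention layer and two task heads. The label counts, attention configuration and pooling choice are illustrative assumptions, not MultiPEER's exact architecture.

```python
# Hedged sketch: multi-task aspect/sentiment classification over SciBERT
# representations. Head sizes and hyperparameters are illustrative only.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class MultiTaskReviewModel(nn.Module):
    def __init__(self, num_aspects=8, num_sentiments=3):
        super().__init__()
        # Shared SciBERT encoder for review sentences
        self.encoder = AutoModel.from_pretrained("allenai/scibert_scivocab_uncased")
        hidden = self.encoder.config.hidden_size
        # Attention over token representations (illustrative choice)
        self.attn = nn.MultiheadAttention(hidden, num_heads=8, batch_first=True)
        # Task-specific heads: aspect category and sentiment polarity
        self.aspect_head = nn.Linear(hidden, num_aspects)
        self.sentiment_head = nn.Linear(hidden, num_sentiments)

    def forward(self, input_ids, attention_mask):
        tokens = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        # Self-attention over the encoded review, ignoring padding positions
        attended, _ = self.attn(tokens, tokens, tokens,
                                key_padding_mask=~attention_mask.bool())
        # Masked mean pooling to obtain one vector per review sentence
        mask = attention_mask.unsqueeze(-1).float()
        pooled = (attended * mask).sum(dim=1) / mask.sum(dim=1)
        return self.aspect_head(pooled), self.sentiment_head(pooled)

tokenizer = AutoTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")
model = MultiTaskReviewModel()
batch = tokenizer(["The novelty of the approach is limited."],
                  return_tensors="pt", padding=True, truncation=True)
aspect_logits, sentiment_logits = model(batch["input_ids"], batch["attention_mask"])
```

In such a design, the two heads would typically be trained jointly with a weighted sum of cross-entropy losses, which is the usual way a multi-task objective of this kind is optimized.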