
ISSN 2063-5346

Pruning BERT (Bidirectional Encoder Representations from Transformers): A Comprehensive Review of Pruning Techniques


Madhusudhanan R., Dr. Subramaniam Gnanasaravanan, Vinod A., Dr. N. Kumareshan, Dr. N. Arun Vignesh, Prakash N., Gokul Prasad C.
DOI: 10.48047/ecb/2023.12.7.100

Abstract

This review examines BERT: what it learns and how that knowledge is represented, how its training objectives are commonly adjusted, how it is used in practice, its design, the over-parameterization problem, and methods for compressing it. Incorporating external domain-specific knowledge greatly benefits both aspect-sentiment detection and the provision of explainable aspect words in text summarization. Transformer-based pre-trained models have achieved state-of-the-art performance in natural language processing. Despite their effectiveness, these models typically comprise billions of parameters, making them too resource-heavy and computationally intensive for devices with limited capabilities or for applications with strict latency constraints. For aspect-based sentiment analysis, a BERT model with a knowledge-enabled language representation (BERT-KELR) is strongly recommended. By injecting sentiment-domain information into the language representation model, embedding matrices for entities in the sentiment knowledge graph and for words in the text can be obtained in a consistent vector space, thereby exploiting the supplementary information provided by the sentiment knowledge graph. The goal of this research is to apply cutting-edge methods to improve the quality of executive summaries of complex texts. It is additionally found that the classification performance of the BERT-based classifier is significantly affected by the input sequence length. The best results were achieved with the proposed method, which lengthens the training data and improves its accuracy for short-text summarization.
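As a concrete illustration of the compression methods surveyed here, the sketch below applies unstructured magnitude pruning to the linear sub-layers of a pre-trained BERT encoder using PyTorch's built-in pruning utilities. It is a minimal example, not the authors' procedure: the checkpoint name and the 30% sparsity level are illustrative assumptions.

```python
# Minimal sketch (illustrative, not from the paper): unstructured L1-magnitude
# pruning of the Linear sub-layers in a pre-trained BERT encoder.
import torch
import torch.nn.utils.prune as prune
from transformers import BertModel

# Assumed checkpoint; any BERT-style encoder would work the same way.
model = BertModel.from_pretrained("bert-base-uncased")

# Prune 30% of the smallest-magnitude weights in every Linear layer
# (attention projections and feed-forward blocks). 0.3 is an assumed setting.
for module in model.modules():
    if isinstance(module, torch.nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)

# Fold the pruning masks into the weights to make the sparsity permanent.
for module in model.modules():
    if isinstance(module, torch.nn.Linear):
        prune.remove(module, "weight")

# Report the resulting sparsity over the pruned layers.
zeros, total = 0, 0
for module in model.modules():
    if isinstance(module, torch.nn.Linear):
        zeros += int(torch.sum(module.weight == 0))
        total += module.weight.nelement()
print(f"Sparsity across Linear layers: {zeros / total:.2%}")
```

In practice, such magnitude pruning is usually followed by a short fine-tuning pass on the downstream task so that the remaining weights can recover any accuracy lost to sparsification.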
