
ISSN 2063-5346

TEMPORAL CONVOLUTIONAL NETWORK & CONTENT-BASED FRAME SAMPLING FUSION FOR SEMANTICALLY ENRICHED VIDEO SUMMARIZATION


S. Sumanth¹, T. Charan Durga², Ch. Yashwanth Sai³, Suneetha Manne⁴
» doi: 10.48047/ecb/2023.12.8.226

Abstract

Video summarization techniques have attracted considerable interest recently due to the exponential growth of video data, which makes handling massive volumes of video content difficult. Although numerous video summarization techniques exist, summarizing lengthy videos remains challenging because processing hundreds of frames is time-consuming. Modern approaches that use Temporal Convolutional Networks to determine shot boundaries incur long processing times on large videos. As a novel solution to this difficulty, this paper proposes the Distributed Temporal Convolutional Networks method, which exploits the properties of the shot-length distribution to summarize long videos efficiently. A Content-Based Frame Sampling [3] technique is additionally proposed to improve the system's throughput by fully exploiting the temporal coherence of long videos. The fusion approach has significant implications for improving video summarization techniques and enabling efficient video processing, paving the way for new applications in areas such as video surveillance, entertainment, and education.
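The abstract's core idea of content-based frame sampling can be sketched as follows. This is an illustrative stand-in, not the paper's exact method: it keeps a frame only when its pixel content differs sufficiently from the last kept frame, so long runs of near-identical frames within a shot collapse to a single representative. The `threshold` value and the mean-absolute-difference criterion are assumptions for the sketch.

```python
import numpy as np

def content_based_sample(frames, threshold=0.25):
    """Keep a frame only when its normalized mean absolute difference
    from the last kept frame exceeds `threshold`. Illustrative sketch
    of content-based frame sampling; the paper's criterion may differ."""
    if not frames:
        return []
    kept = [0]                             # always keep the first frame
    last = frames[0].astype(np.float32)
    for i, f in enumerate(frames[1:], start=1):
        cur = f.astype(np.float32)
        diff = np.abs(cur - last).mean() / 255.0   # normalized to [0, 1]
        if diff > threshold:               # content changed enough: keep it
            kept.append(i)
            last = cur
    return kept

# Synthetic "video": 10 identical dark frames, then a scene change.
static = [np.zeros((4, 4), dtype=np.uint8)] * 10
change = [np.full((4, 4), 200, dtype=np.uint8)] * 5
idx = content_based_sample(static + change)
print(idx)  # → [0, 10]: only the first frame and the cut survive
```

Because redundant in-shot frames are dropped before any downstream network runs, the shot-boundary model sees far fewer frames, which is the throughput gain the abstract attributes to the sampling stage.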
