Volume - 13 | Issue-1
Video summarization techniques have drawn considerable interest recently due to the exponential growth of video data, which makes handling massive volumes of video content difficult. Although numerous video summarization techniques exist, summarizing lengthy videos remains challenging because processing hundreds of frames takes a long time. Modern video summarization techniques that use Temporal Convolutional Networks to determine shot boundaries incur long processing times when dealing with large videos. To tackle this difficulty, this paper proposes a Distributed Temporal Convolutional Networks method, a novel approach that takes the properties of the shot-length distribution into account to summarize lengthy videos efficiently. A Content-Based Frame Sampling [3] technique is additionally proposed to improve the system's throughput by fully exploiting the temporal coherence of lengthy videos. The fused approach has significant implications for improving video summarization techniques and enabling efficient video processing, paving the way for new applications in areas such as video surveillance, entertainment, and education.
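The abstract does not specify how Content-Based Frame Sampling selects frames; a common realization of this idea keeps a frame only when its visual content diverges sufficiently from the last kept frame, skipping the temporally coherent spans in between. The sketch below is a minimal, hypothetical illustration of that principle (the function name, histogram representation, and threshold are assumptions, not the paper's method), comparing normalized intensity histograms of consecutive frames:

```python
import numpy as np

def content_based_sampling(frames, threshold=0.2):
    """Hypothetical content-based frame sampler: keep a frame only when
    its normalized intensity histogram differs from the last kept frame
    by more than `threshold` (L1 distance). Exploits temporal coherence:
    near-duplicate frames within a shot are skipped.

    frames: list of 2-D uint8 arrays (grayscale frames)
    returns: indices of the sampled frames
    """
    def norm_hist(frame):
        # 32-bin grayscale histogram, normalized to sum to 1
        hist, _ = np.histogram(frame, bins=32, range=(0, 256))
        return hist / hist.sum()

    kept = [0]                      # always keep the first frame
    ref_hist = norm_hist(frames[0])
    for i in range(1, len(frames)):
        hist = norm_hist(frames[i])
        # L1 distance between histograms lies in [0, 2]
        if np.abs(hist - ref_hist).sum() > threshold:
            kept.append(i)          # content changed: sample this frame
            ref_hist = hist         # it becomes the new reference
    return kept
```

For example, five identical dark frames followed by a bright frame yield only two sampled indices, since the unchanged frames in between are skipped; only the sampled frames would then be passed to the downstream shot-boundary network, reducing the frame count it must process.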