
In Neural Networks: The Official Journal of the International Neural Network Society

Video summarization has long been used to ease video browsing and has become even more crucial with the explosion of online videos. In the context of event-centric videos, we aim to extract the clips corresponding to the more important events in the video. To tackle the trade-off between detection precision and clip completeness faced by previous methods, we present an efficient Boundary-Aware framework for Summary clip Extraction (BASE) to extract summary clips with more precise boundaries while maintaining their completeness. Specifically, we propose a new distance-based importance signal to reflect the progress information in each video. This signal not only helps us detect boundaries with higher precision, but also makes it possible to preserve clip completeness. For feature representation, we also explore new information types to facilitate video summarization. Our approach outperforms current state-of-the-art video summarization models by producing more precise clip boundaries and more complete summary clips. Notably, our results are even comparable to manual annotations.
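The abstract does not spell out how the distance-based importance signal is defined, so the following is only a minimal, hypothetical sketch of one plausible formulation: each frame inside an annotated summary clip is assigned its normalized distance to the clip's end (a "progress" value ramping from 1 to 0), frames outside any clip get 0, and clip boundaries can then be recovered from where a predicted signal rises above and falls back below a threshold. The function names, the (start, end) clip annotation format, and the threshold are all assumptions for illustration, not the paper's implementation.

```python
# Hypothetical sketch of a distance-based importance signal (not the paper's code).
import numpy as np

def distance_signal(num_frames, clips):
    """clips: list of (start, end) frame indices, end exclusive (assumed format)."""
    signal = np.zeros(num_frames, dtype=np.float32)
    for start, end in clips:
        length = max(end - start, 1)
        for t in range(start, min(end, num_frames)):
            # Remaining distance to the clip end, normalized by clip length,
            # so the signal ramps from 1 at the clip start down toward 0 at its end.
            signal[t] = (end - t) / length
    return signal

def extract_clips(pred_signal, threshold=0.05):
    """Recover clip boundaries from a predicted signal: a clip spans a
    contiguous run of above-threshold values."""
    clips, start = [], None
    for t, v in enumerate(pred_signal):
        if v > threshold and start is None:
            start = t
        elif v <= threshold and start is not None:
            clips.append((start, t))
            start = None
    if start is not None:
        clips.append((start, len(pred_signal)))
    return clips

# Example: two annotated clips in a 20-frame video.
target = distance_signal(20, [(2, 7), (12, 18)])
print(extract_clips(target))  # [(2, 7), (12, 18)]
```

Under this assumed formulation, boundary precision comes from the signal dropping sharply at clip ends, while completeness is preserved because every frame inside a clip carries a nonzero value rather than only the boundary frames being labeled.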

Li Qingwen, Chen Jianni, Xie Qiqin, Han Xiao

2023-Feb-03

Boundary-aware, Deep learning, Event-centric videos, Video summarization