Paper: TP-P8.3
Session: Image and Video Storage and Retrieval III
Time: Tuesday, September 18, 14:30 - 17:10
Presentation: Poster
Title: TEMPORALLY CONSISTENT GAUSSIAN RANDOM FIELD FOR VIDEO SEMANTIC ANALYSIS
Authors:
Jinhui Tang, University of Science and Technology of China
Xian-Sheng Hua, Microsoft Research Asia
Tao Mei, Microsoft Research Asia
Guo-Jun Qi, University of Science and Technology of China
Shipeng Li, Microsoft Research Asia
Xiuqing Wu, University of Science and Technology of China
Abstract:
As a major family of semi-supervised learning, graph-based semi-supervised learning methods have recently attracted much interest in the machine learning community as well as in many application areas. However, for video semantic annotation, these methods only consider the relations among samples in the feature space and neglect an intrinsic property of video data: temporally adjacent video segments (e.g., shots) usually represent similar semantic concepts. In this paper, we incorporate this temporal consistency property of video data into graph-based semi-supervised learning and propose a novel method named Temporally Consistent Gaussian Random Field (TCGRF) to improve the annotation results. Experiments conducted on the TRECVID data set demonstrate its effectiveness.
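To make the underlying idea concrete, the following is a minimal illustrative sketch of graph-based label propagation (the harmonic Gaussian random field of Zhu et al.) augmented with a temporal-adjacency graph over consecutive shots. It is not the authors' exact TCGRF formulation: the RBF feature kernel, the chain-style temporal graph, and the additive mixing weight `gamma` are assumptions made purely for illustration.

```python
# Illustrative sketch only: standard harmonic-function label propagation on a
# feature-similarity graph, with an assumed temporal-adjacency term added so
# that neighbouring shots are encouraged to share the same concept label.
import numpy as np


def propagate_labels(features, labels, labeled_mask, sigma=1.0, gamma=0.5):
    """features: (n, d) shot features, ordered by time.
    labels: (n,) binary concept labels; only entries where labeled_mask is True are used.
    labeled_mask: (n,) boolean mask of labeled shots."""
    n = features.shape[0]

    # Feature-space affinity (RBF kernel), as in standard graph-based SSL.
    sq_dists = np.sum((features[:, None, :] - features[None, :, :]) ** 2, axis=-1)
    w_feat = np.exp(-sq_dists / (2.0 * sigma ** 2))
    np.fill_diagonal(w_feat, 0.0)

    # Temporal-adjacency affinity: link each shot to its temporal neighbours
    # (assumed chain graph; the paper's construction may differ).
    w_temp = np.zeros((n, n))
    idx = np.arange(n - 1)
    w_temp[idx, idx + 1] = 1.0
    w_temp[idx + 1, idx] = 1.0

    # Combined graph: feature similarity plus temporal consistency (assumed additive mix).
    w = w_feat + gamma * w_temp

    # Harmonic solution for the unlabeled nodes: f_u = (D_uu - W_uu)^{-1} W_ul f_l
    lap = np.diag(w.sum(axis=1)) - w
    u, l = ~labeled_mask, labeled_mask
    f_u = np.linalg.solve(lap[np.ix_(u, u)], w[np.ix_(u, l)] @ labels[l].astype(float))

    scores = labels.astype(float).copy()
    scores[u] = f_u
    return scores  # relevance scores for the semantic concept
```

In this sketch, setting `gamma = 0` recovers plain feature-graph propagation, while a positive `gamma` pulls temporally adjacent shots toward the same score, which is the intuition the abstract describes.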