Paper: WP-P5.6
Session: Image and Video Modeling III / Distributed Coding
Time: Wednesday, September 19, 14:30 - 17:10
Presentation: Poster
Title: VIDEO MODELING BY SPATIO-TEMPORAL RESAMPLING AND BAYESIAN FUSION
Authors: Yunfei Zheng, West Virginia University; Xin Li, West Virginia University
Abstract:
In this paper, we propose an empirical Bayesian approach toward video modeling and demonstrate its application in multiframe image restoration. Building on our previous work on spatio-temporally adaptive localized learning (STALL), we introduce a new concept of spatio-temporal resampling to facilitate the task of video modeling. Resampling produces a redundant representation of video signals with distributed spatio-temporal characteristics. When combined with the STALL model, we show how to probabilistically combine the linear regression results of the resampled video signals under a Bayesian framework. This empirical Bayesian approach opens the door to developing a whole new class of video processing algorithms without explicit motion estimation or segmentation. The potential of our distributed video model is demonstrated by applying it to two multiframe image restoration tasks: repairing damaged blocks and removing impulse noise.
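The fusion step described in the abstract can be illustrated with a minimal sketch. The paper does not specify the exact local model or the form of the Bayesian combination, so the example below assumes each resampled spatio-temporal neighborhood yields a localized least-squares prediction and that fusion weights are proportional to the inverse of the local residual variance (a simple Gaussian-likelihood assumption). The function names ls_predict and bayesian_fusion, and all toy data, are illustrative and not from the paper.

```python
import numpy as np

def ls_predict(neighbors, targets, query):
    """Localized least-squares prediction from one resampled spatio-temporal
    neighborhood (a stand-in for a STALL-style adaptive local model; the
    actual model used in the paper may differ)."""
    # Solve w = argmin ||neighbors @ w - targets||^2
    w, *_ = np.linalg.lstsq(neighbors, targets, rcond=None)
    pred = float(query @ w)
    resid_var = float(np.mean((neighbors @ w - targets) ** 2)) + 1e-12
    return pred, resid_var

def bayesian_fusion(predictions, resid_vars):
    """Combine per-resampling predictions with weights proportional to the
    inverse residual variance -- an assumed, simple form of the probabilistic
    combination described in the abstract."""
    weights = 1.0 / np.asarray(resid_vars)
    weights /= weights.sum()
    return float(np.dot(weights, predictions))

# Toy usage: fuse predictions from three resampled neighborhoods.
rng = np.random.default_rng(0)
preds, variances = [], []
for _ in range(3):
    X = rng.normal(size=(20, 5))          # local spatio-temporal samples
    y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=20)
    q = rng.normal(size=5)                # neighborhood of the pixel to restore
    p, v = ls_predict(X, y, q)
    preds.append(p)
    variances.append(v)
print(bayesian_fusion(preds, variances))
```

In this sketch, neighborhoods whose local regression fits the data poorly (large residual variance) contribute less to the fused estimate, which captures the spirit of probabilistically weighting the distributed predictions without any explicit motion estimation.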