Bayesian-Inference Embedded Spline-Kerneled Chirplet Transform for Spectrum-Aware Motion Magnification
Main authors: 
Format: Online Article Text
Language: English
Published: MDPI, 2022
Subjects: 
Online access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9002565/ https://www.ncbi.nlm.nih.gov/pubmed/35408408 http://dx.doi.org/10.3390/s22072794
Summary: The ability to discern subtle image changes over time is useful in applications such as product quality control, civil engineering structure evaluation, medical video analysis, music entertainment, and so on. However, tiny yet useful variations are often combined with large motions, which severely distort current video amplification methods bounded by external constraints. This paper presents a novel use of spectra to make motion magnification robust to large movements. By exploiting spectra, artificial limitations are avoided: small motions are magnified at similar frequency levels while large motions at distinct spectral pixels are ignored. To achieve this, the paper embeds the spline-kerneled chirplet transform (SCT) in an empirical Bayesian paradigm that applies to the entire time series, giving high spectral resolution and robustness to noise in nonstationary, nonlinear signal analysis. The key advance reported is the Bayesian-rule embedded SCT (BE-SCT); two numerical experiments show its superiority over current approaches. For spectrum-aware motion magnification, an analytical framework is established that captures global motion, and the proposed BE-SCT is used for dynamic filtering to enable frequency-based motion isolation. The approach is demonstrated on real-world and synthetic videos, showing superior qualitative and quantitative results, with fewer visual artifacts and more local detail than state-of-the-art methods.
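The core idea in the summary is spectral motion isolation: amplify only the temporal-frequency components where the subtle motion lives, and leave the (differently located) spectral content of large motions untouched. The sketch below illustrates that idea on a single pixel's intensity time series using a plain FFT band-pass boost; it is not the paper's BE-SCT dynamic filter, and the frame rate, frequency band, and amplification factor are illustrative assumptions.

```python
import numpy as np

def magnify_band(pixel_series, fs, band, alpha):
    """Amplify only spectral components of a pixel's intensity series
    inside `band` (Hz); other frequencies (e.g., large slow motion)
    pass through unchanged. A simple FFT band-pass stands in for the
    paper's BE-SCT-based dynamic filtering."""
    n = len(pixel_series)
    spectrum = np.fft.rfft(pixel_series)            # one-sided spectrum
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)          # frequency of each bin
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    spectrum[in_band] *= (1.0 + alpha)              # boost the small-motion band
    return np.fft.irfft(spectrum, n=n)              # back to the time domain

# Example: a 0.05-amplitude 2 Hz micro-oscillation riding on a large 0.1 Hz drift.
fs = 30.0                                           # assumed video frame rate (Hz)
t = np.arange(0, 10, 1.0 / fs)
series = 5.0 * np.sin(2 * np.pi * 0.1 * t) + 0.05 * np.sin(2 * np.pi * 2.0 * t)
magnified = magnify_band(series, fs, band=(1.5, 2.5), alpha=20.0)
```

In this toy case the 2 Hz component is amplified roughly 21-fold while the large 0.1 Hz drift is unchanged, which is the frequency-based isolation the abstract describes; the paper's contribution is obtaining such a spectral decomposition robustly for nonstationary signals via the BE-SCT rather than a fixed FFT band-pass.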