
    Please use this identifier to cite or link to this item: http://nthur.lib.nthu.edu.tw/dspace/handle/987654321/83999

    Title: Context-aware single image rain removal
    Authors: De-An Huang;Li-Wei Kang;Chih-Yun Tsai;Ming-Chun Yang;Chia-Wen Lin;Yu-Chiang Frank Wang
Instructor: Chia-Wen Lin
    Date: 2012
    Publisher: Institute of Electrical and Electronics Engineers
    Relation: Proc. IEEE Int. Conf. on Multimedia and Expo (ICME), Melbourne, July 2012, pp. 164-169
    Keywords: rain removal; sparse coding; dictionary learning; image decomposition
    Abstract: Rain removal from a single image is one of the challenging image denoising problems. In this paper, we present a learning-based framework for single image rain removal, which focuses on the learning of context information from an input image, and thus the rain patterns present in it can be automatically identified and removed. We approach the single image rain removal problem as the integration of image decomposition and self-learning processes. More precisely, our method first performs context-constrained image segmentation on the input image, and we learn dictionaries for the high-frequency components in different context categories via sparse coding for reconstruction purposes. For image regions with rain streaks, dictionaries of distinct context categories will share common atoms which correspond to the rain patterns. By utilizing PCA and SVM classifiers on the learned dictionaries, our framework aims at automatically identifying the common rain patterns present in them, and thus we can remove rain streaks as particular high-frequency components from the input image. Different from prior works on rain removal from images/videos which require image priors or training image data from multiple frames, our proposed self-learning approach only requires the input image itself, which would save much pre-training effort. Experimental results demonstrate the subjective and objective visual quality improvement with our proposed method.
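The pipeline the abstract describes (frequency decomposition, per-region dictionary learning on high-frequency patches, identification of atoms shared across region dictionaries as rain patterns) can be sketched as below. This is a hedged illustration, not the authors' implementation: it substitutes a box-filter decomposition for the paper's bilateral-filter-style split, truncated SVD bases for sparse-coded dictionaries, and a simple correlation threshold for the PCA + SVM classification of rain atoms.

```python
# Simplified sketch of the rain-removal pipeline from the abstract.
# Assumed simplifications: box blur instead of the paper's decomposition,
# SVD atoms instead of learned sparse-coding dictionaries, and
# correlation matching instead of PCA + SVM atom classification.
import numpy as np

def decompose(img, k=5):
    """Split an image into low-frequency (box blur) and high-frequency parts."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    lf = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            lf += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    lf /= k * k
    return lf, img - lf          # img == lf + hf

def patches(hf, p=8, stride=4):
    """Collect overlapping p x p high-frequency patches as column vectors."""
    h, w = hf.shape
    cols = [hf[y:y + p, x:x + p].ravel()
            for y in range(0, h - p + 1, stride)
            for x in range(0, w - p + 1, stride)]
    return np.stack(cols, axis=1)

def learn_atoms(P, n_atoms=8):
    """Stand-in for dictionary learning: top left singular vectors of the
    patch matrix serve as (orthonormal) atoms for this region."""
    U, _, _ = np.linalg.svd(P, full_matrices=False)
    return U[:, :n_atoms]

def common_atoms(D1, D2, thresh=0.9):
    """Indices of D1 atoms that closely match some atom of D2.
    The paper identifies such shared (rain) atoms with PCA + SVM instead."""
    sim = np.abs(D1.T @ D2)
    return [i for i in range(D1.shape[1]) if sim[i].max() > thresh]
```

Atoms flagged by `common_atoms` across the dictionaries of different context regions would then be treated as rain patterns: the high-frequency layer is reconstructed without them and added back to the low-frequency layer to produce the derained image.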
    Relation Link: http://dx.doi.org/10.1109/ICME.2012.92
    URI: http://nthur.lib.nthu.edu.tw/dspace/handle/987654321/83999
    Appears in Collections: [Department of Electrical Engineering] Conference Papers
    [Optoelectronics Research Center] Conference Papers



    DSpace Software Copyright © 2002-2004 MIT & Hewlett-Packard / Enhanced by NTHU Library IR team.