National Tsing Hua University Institutional Repository


    Title: Self-learning-based rain streak removal for image/video
    Authors: Li-Wei Kang;Chia-Wen Lin;Che-Tsung Lin;Yu-Cheng Lin
Faculty: Chia-Wen Lin
    Date: 2012
    Publisher: Institute of Electrical and Electronics Engineers
Relation: IEEE Int. Symp. Circuits and Systems (ISCAS), Seoul, Korea, 20-23 May 2012, pp. 1871-1874
    Keywords: Self-learning-based
Abstract: Rain removal from an image/video is a challenging problem that has recently been investigated extensively. In our previous work, we proposed the first single-image-based rain streak removal framework by formulating rain removal as an image decomposition problem based on morphological component analysis (MCA), solved by performing dictionary learning and sparse coding. However, in that work the dictionary learning process was not fully automatic: the two dictionaries used for rain removal were selected heuristically or with human intervention. In this paper, we extend our previous work to propose an automatic self-learning-based rain streak removal framework for a single image, in which the two dictionaries used for rain removal are self-learned automatically, without additional information or any assumptions. We then extend our single-image-based method to video-based rain removal in a static scene by exploiting the temporal information of successive frames and reusing the dictionaries learned from earlier frame(s) of the video, while maintaining the video's temporal consistency. As a result, the rain component can be successfully removed from the image/video while most of the original details are preserved. Experimental results demonstrate the efficacy of the proposed algorithm.
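The core idea in the abstract, decomposing the image into low- and high-frequency parts and then isolating the rain component inside the high-frequency part, can be sketched in a few lines of NumPy. This is a minimal, hypothetical illustration on a synthetic image, not the authors' MCA/dictionary-learning implementation: a median filter stands in for the low-pass decomposition, and a crude orientation test (vertical streaks produce strong horizontal gradients) stands in for the self-learned rain dictionary.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def median_blur(img, k=7):
    """Sliding-window median: a crude stand-in for the low-pass
    decomposition step (it suppresses thin bright streaks well)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    windows = sliding_window_view(padded, (k, k))
    return np.median(windows, axis=(-2, -1))

# Synthetic grayscale image: a smooth scene plus vertical rain streaks.
H, W = 64, 64
yy, xx = np.mgrid[0:H, 0:W]
background = 0.5 + 0.3 * np.sin(yy / 15.0)
rain = np.zeros((H, W))
rain[:, 4::8] = 0.4                     # thin bright vertical streaks
rainy = np.clip(background + rain, 0.0, 1.0)

# Step 1: decompose into low-frequency (LF) and high-frequency (HF)
# parts; the rain streaks live almost entirely in the HF part.
lf = median_blur(rainy)
hf = rainy - lf

# Step 2: identify the rain component inside HF. A toy orientation test
# replaces the paper's self-learned dictionaries: vertical streaks give
# strong horizontal gradients and weak vertical ones.
gx = np.abs(np.gradient(hf, axis=1))
gy = np.abs(np.gradient(hf, axis=0))
rain_mask = gx > gy + 0.02
# Widen the mask by one pixel so streak centers (where the central
# difference cancels out) are covered too.
rain_mask |= np.roll(rain_mask, 1, axis=1) | np.roll(rain_mask, -1, axis=1)

# Step 3: reconstruct from LF plus only the non-rain HF component.
derained = lf + np.where(rain_mask, 0.0, hf)
```

In the paper's actual pipeline, Step 2 would instead sparse-code the HF patches over two dictionaries (rain vs. scene geometry) learned from the HF part itself, which is what makes the method fully automatic.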
Relation Link: http://dx.doi.org/10.1109/ISCAS.2012.6271635
    URI: http://nthur.lib.nthu.edu.tw/dspace/handle/987654321/84000
Appears in Collections: [Department of Electrical Engineering] Conference Papers
[Photonics Research Center] Conference Papers
