National Tsing Hua University Institutional Repository > College of Electrical Engineering and Computer Science > Department of Electrical Engineering > Conference Papers > Self-learning-based rain streak removal for image/video


    Please use this identifier to cite or link to this item: http://nthur.lib.nthu.edu.tw/dspace/handle/987654321/84000


    Title: Self-learning-based rain streak removal for image/video
    Authors: Li-Wei Kang;Chia-Wen Lin;Che-Tsung Lin;Yu-Cheng Lin
Faculty: Chia-Wen Lin (林嘉文)
    Date: 2012
    Publisher: Institute of Electrical and Electronics Engineers
Relation: IEEE Int. Symp. Circuits and Systems (ISCAS), Seoul, Korea, 20-23 May 2012, pp. 1871-1874
    Keywords: Self-learning-based
Abstract: Rain removal from an image/video is a challenging problem that has recently been investigated extensively. In our previous work, we proposed the first single-image rain streak removal framework, formulating the task as an image decomposition problem based on morphological component analysis (MCA) solved by dictionary learning and sparse coding. In that work, however, the dictionary learning process was not fully automatic: the two dictionaries used for rain removal were selected heuristically or with human intervention. In this paper, we extend our previous work to an automatic self-learning-based rain streak removal framework for a single image. We propose to automatically self-learn the two dictionaries used for rain removal, without additional information or assumptions. We then extend our single-image method to video-based rain removal in a static scene by exploiting the temporal information of successive frames and reusing the dictionaries learned from earlier frame(s) of the video, while maintaining its temporal consistency. As a result, the rain component can be successfully removed from the image/video while most original details are preserved. Experimental results demonstrate the efficacy of the proposed algorithm.
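The pipeline outlined in the abstract (low-/high-frequency decomposition, self-learning a dictionary from the image's own high-frequency patches, classifying atoms as rain vs. non-rain, and reconstructing from the non-rain atoms only) can be sketched roughly as follows. This is a hypothetical NumPy illustration, not the authors' implementation: a mean filter stands in for the LF/HF decomposition, SVD components stand in for the MCA dictionaries learned by sparse coding, and a simple gradient-orientation test stands in for the paper's automatic atom classification. The function name and all parameters are invented for the sketch.

```python
import numpy as np

def remove_rain_single_image(img, patch=8, n_atoms=16, rain_thresh=0.6):
    """Hypothetical sketch of a self-learning rain-removal pipeline.
    Expects a float grayscale image; returns a de-rained image."""
    # 1. Split the image into low-frequency (LF) and high-frequency (HF)
    #    parts with a simple 5x5 mean filter (stand-in for the paper's
    #    decomposition step); rain streaks live mostly in the HF part.
    k = 5
    pad = np.pad(img, k // 2, mode="edge")
    lf = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            lf += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    lf /= k * k
    hf = img - lf

    # 2. "Self-learn" atoms from the image's own non-overlapping HF patches
    #    via SVD -- no external training data, as in the abstract's
    #    self-learning idea (SVD replaces actual dictionary learning).
    H, W = hf.shape
    ph, pw = H // patch, W // patch
    patches = (hf[:ph * patch, :pw * patch]
               .reshape(ph, patch, pw, patch)
               .swapaxes(1, 2)
               .reshape(ph * pw, patch * patch))
    mean = patches.mean(axis=0)
    _, _, Vt = np.linalg.svd(patches - mean, full_matrices=False)
    atoms = Vt[:min(n_atoms, Vt.shape[0])]

    # 3. Classify atoms: rain streaks are assumed near-vertical here
    #    (a simplification), so an atom whose horizontal-gradient energy
    #    dominates (edges met when moving across columns) is tagged "rain".
    rain = []
    for a in atoms:
        p = a.reshape(patch, patch)
        gy = np.abs(np.diff(p, axis=0)).sum()  # variation down the rows
        gx = np.abs(np.diff(p, axis=1)).sum()  # variation across columns
        rain.append(gx / (gx + gy + 1e-12) > rain_thresh)
    keep = atoms[~np.array(rain)]

    # 4. Rebuild the HF layer from the non-rain atoms only, then add LF back.
    coef = (patches - mean) @ keep.T
    recon = coef @ keep + mean
    hf_clean = hf.copy()
    hf_clean[:ph * patch, :pw * patch] = (recon
                                          .reshape(ph, pw, patch, patch)
                                          .swapaxes(1, 2)
                                          .reshape(ph * patch, pw * patch))
    return lf + hf_clean
```

On a synthetic image with near-vertical streaks, the kept atoms carry the scene texture while the streak-like atoms are discarded, which is the intuition behind the paper's two-dictionary decomposition.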
Relation Link: http://dx.doi.org/10.1109/ISCAS.2012.6271635
    http://www.ieee.org/
    URI: http://nthur.lib.nthu.edu.tw/dspace/handle/987654321/84000
Appears in Collections: [Department of Electrical Engineering] Conference Papers
[Optoelectronics Research Center] Conference Papers

    Files in This Item:

File         Description    Size    Format
index.html                  0Kb     HTML


All items in NTHUR are protected by copyright and are provided for scholarly research and educational use only; please respect the rights of the copyright holders. For commercial or for-profit use, please first obtain authorization from the copyright holder.
If you find that any content on this site infringes a copyright holder's rights, please notify the site administrator (smluo@lib.nthu.edu.tw); the administrator will promptly take remedial measures such as removing the content.
