
    Deep Semantic Segmentation using NIR as extra physical information

    Author
    Bigdeli, S. A.; Süsstrunk, S.
    Abstract
    Deep neural networks for semantic segmentation are most often trained with RGB color images, which encode the radiation visible to the human eye. In this paper, we study whether additional physical scene information, specifically Near-Infrared (NIR) images, improves the performance of neural networks. NIR information can be captured with conventional silicon-based cameras and provides information complementary to visible images regarding object boundaries and materials. In addition, extending the networks’ input from a three- to a four-channel layer is trivial with respect to changes to the architecture and additional parameters. We perform experiments on several state-of-the-art neural networks trained both on RGB alone and on RGB plus NIR, and show that the additional image channel consistently improves semantic segmentation accuracy over conventional RGB input, even for powerful architectures.
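    The abstract notes that widening the input layer from three to four channels adds only a trivial number of parameters. A minimal sketch of that arithmetic, assuming a hypothetical ResNet-style first layer (7×7 convolution, 64 filters; the layer shape is an illustrative assumption, not taken from the paper):

    ```python
    def conv_params(in_ch, out_ch, kernel, bias=False):
        """Parameter count of a 2-D convolution layer:
        one kernel per (input channel, output channel) pair."""
        return in_ch * out_ch * kernel * kernel + (out_ch if bias else 0)

    # Hypothetical first layer: 7x7 kernels, 64 output filters.
    rgb_params = conv_params(3, 64, 7)    # RGB input: 3 channels
    rgbn_params = conv_params(4, 64, 7)   # RGB + NIR input: 4 channels
    extra = rgbn_params - rgb_params      # weights added by the NIR channel
    ```

    The extra weights amount to one additional 7×7 kernel per filter, a vanishingly small fraction of the millions of parameters in a typical segmentation backbone.
    
    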
    Publication Reference
    The 26th IEEE International Conference on Image Processing - IEEE ICIP, Taipei (CN), September 2019.
    Year
    2019
    URI
    https://yoda.csem.ch/handle/20.500.12839/347
    Collections
    • Research Publications
