Show simple item record

dc.contributor.author: Bigdeli, S. A.
dc.contributor.author: Susstrunk, S.
dc.date.accessioned: 2021-12-09T14:03:59Z
dc.date.available: 2021-12-09T14:03:59Z
dc.date.issued: 2019
dc.identifier.citation: The 26th IEEE International Conference on Image Processing - IEEE ICIP, Taipei (CN), September 2019.
dc.identifier.uri: https://yoda.csem.ch/handle/20.500.12839/347
dc.description.abstract: Deep neural networks for semantic segmentation are most often trained with RGB color images, which encode the radiation visible to the human eye. In this paper, we study whether additional physical scene information, specifically Near-Infrared (NIR) images, improves the performance of neural networks. NIR information can be captured with conventional silicon-based cameras and provides complementary information to visible images regarding object boundaries and materials. In addition, extending the networks' input from a three- to a four-channel layer is trivial with respect to changes to the architecture and additional parameters. We perform experiments on several state-of-the-art neural networks trained both on RGB alone and on RGB plus NIR, and show that the additional image channel consistently improves semantic segmentation accuracy over conventional RGB input, even for powerful architectures.
dc.subject: Image segmentation, Neural networks, Semantics, Task analysis, Network architecture, Computer architecture, Training
dc.title: Deep Semantic Segmentation using NIR as extra physical information
dc.type: Proceedings Article
dc.type.csemdivisions: Div-M
dc.type.csemresearchareas: IoT & Vision
dc.type.csemresearchareas: Data & AI
dc.identifier.doi: https://doi.org/10.1109/ICIP.2019.8803242
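The abstract notes that extending a network's input from three to four channels amounts to feeding the NIR channel alongside RGB. A minimal NumPy sketch of that input preparation step (hypothetical shapes and function name, not code from the paper):

```python
import numpy as np

def stack_rgb_nir(rgb, nir):
    """Concatenate an RGB image of shape (H, W, 3) with a
    single-channel NIR image of shape (H, W) into one
    four-channel network input of shape (H, W, 4)."""
    assert rgb.shape[:2] == nir.shape, "spatial dimensions must match"
    return np.concatenate([rgb, nir[..., None]], axis=-1)

# Hypothetical 8x8 example images
rgb = np.zeros((8, 8, 3), dtype=np.float32)
nir = np.ones((8, 8), dtype=np.float32)
x = stack_rgb_nir(rgb, nir)
print(x.shape)  # (8, 8, 4)
```

On the architecture side, only the first convolutional layer's input channel count changes from 3 to 4; all later layers are unaffected, which is why the paper describes the extension as trivial in terms of added parameters.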


Files in this item
There are no files associated with this item.

This item appears in the following Collection(s)

  • Research Publications
    The “Research Publications” collection provides bibliographic information for scientific papers, including conference proceedings and presentations.