The 2nd International Workshop on
Physics Based Vision meets Deep Learning (PBDL)

Following the success of the 1st ICCV Workshop on Physics Based Vision Meets Deep Learning (PBDL 2017), we propose the 2nd workshop with the same title and topics at ICCV 2019. The goal is to encourage interplay between physics based vision and deep learning. Physics based vision aims to invert the image formation process to recover scene properties, such as shape, reflectance, light distribution, and medium properties, from images. In recent years, deep learning has shown promising improvements on various vision tasks. When physics based vision meets deep learning, the two can bring mutual benefits.


We welcome submissions of new methods for classic physics based vision problems, but preference will be given to novel insights inspired by deep learning techniques. Relevant topics include, but are not limited to:

        •     Photometric 3D reconstruction
        •     Polarimetric 3D reconstruction
        •     Radiometric modeling/calibration of cameras
        •     Illumination analysis and estimation
        •     Reflectance modeling, fitting, and analysis
        •     Inverse graphics
        •     Material recognition and classification
        •     Reflection removal
        •     Intrinsic image decomposition
        •     Transparency and multi-layer imaging
        •     Vision in bad weather (dehaze, derain, etc.)
        •     Bio-inspired sensors
        •     Multimodal sensor fusion
        •     Light field imaging
        •     Color constancy
        •     Multispectral/hyperspectral capture, modeling and analysis


Paper submission is through CMT:
https://cmt3.research.microsoft.com/pbdl2019

Paper submissions must follow the ICCV 2019 submission format. Papers that violate anonymity, do not use the ICCV submission template, or exceed 8 pages (excluding references) will be rejected without review. Accepted papers will appear in the proceedings of the ICCV 2019 workshops. By submitting a manuscript to this workshop, the authors acknowledge that no paper substantially similar in content has been submitted to another workshop or conference during the review period.