Article ID | Journal | Published Year | Pages | File Type |
---|---|---|---|---|
530665 | Pattern Recognition | 2010 | 17 |
Abstract
Due to the low cost of capturing depth information, it is worthwhile to reduce illumination ambiguity by employing scene depth information. In this article, a neural computation approach is reported that estimates the illuminant direction from the scene reflectance map. Since the reflectance map recovered from the depth map and the image is a variable-sized point cloud, we propose to parameterize it as a two-dimensional polynomial function. A novel network model is then presented that maps a continuous function (the reflectance map) to a vectorial output (the illuminant direction). Experimental results show that the proposed model works well on both synthetic and real scenes.
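The parameterization step described in the abstract, which turns a variable-sized reflectance-map point cloud into a fixed-length two-dimensional polynomial representation, can be illustrated with a short sketch. The Python snippet below is a minimal illustration that uses an ordinary least-squares fit; the polynomial degree, the fitting procedure, and the names `fit_reflectance_polynomial` and `degree` are assumptions made for illustration and are not taken from the paper.

```python
import numpy as np

def fit_reflectance_polynomial(points, degree=3):
    """Fit a 2-D polynomial R(p, q) ~ sum_{i+j<=degree} c_ij * p^i * q^j
    to a variable-sized reflectance-map point cloud by least squares.

    points : (N, 3) array of samples (p, q, R), where (p, q) are surface
             gradient coordinates and R is the observed reflectance.
    Note: the degree and the least-squares fit are illustrative
    assumptions, not the paper's exact formulation.
    """
    p, q, r = points[:, 0], points[:, 1], points[:, 2]
    # Design matrix with one column per monomial p^i * q^j.
    terms = [(i, j) for i in range(degree + 1)
                    for j in range(degree + 1 - i)]
    A = np.column_stack([p**i * q**j for i, j in terms])
    # The coefficient vector has a fixed length, independent of the
    # number of points in the cloud.
    coeffs, *_ = np.linalg.lstsq(A, r, rcond=None)
    return coeffs, terms

# Example with a synthetic point cloud of 500 reflectance samples.
rng = np.random.default_rng(0)
cloud = rng.standard_normal((500, 3))
coeffs, terms = fit_reflectance_polynomial(cloud)
print(coeffs.shape)  # fixed-size representation of the reflectance map
```

Because the coefficient vector has a fixed length regardless of how many points the reflectance map contains, it provides one plausible fixed-size representation that a regression model could map to an illuminant direction, in the spirit of the mapping described in the abstract.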
Related Topics
Physical Sciences and Engineering
Computer Science
Computer Vision and Pattern Recognition
Authors
Chi Kin Chow, Shiu Yin Yuen