HDRFusion: HDR SLAM using a low-cost auto-exposure RGB-D sensor (source code and dataset released!)

by Shuda Li*, Ankur Handa**, Yang Zhang*, Andrew Calway*
*University of Bristol, **University of Cambridge
Accepted at the Intl. Conf. on 3D Vision (3DV), 2016

  • [paper]
  • [github]   NEW!!!
  • [video]
  • [dataset]   NEW!!!
  • Experiments demonstrating strong robustness to motion blur NEW!!!


We describe a new method for comparing frame appearance in a frame-to-model 3-D mapping and tracking system using a low dynamic range (LDR) RGB-D camera, which is robust to brightness changes caused by auto exposure. It is based on a normalised radiance measure that is invariant to exposure changes. This not only robustifies tracking under changing lighting conditions, but also allows the subsequent exposure compensation to be performed accurately, enabling online building of high dynamic range (HDR) maps. The latter lets the frame-to-model tracking minimise drift, as well as better capture light variation within the scene. Results from experiments with synthetic and real data demonstrate that the method provides both improved tracking and maps with a far greater dynamic range of luminosity.
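The key property of the normalised radiance measure can be illustrated with a minimal sketch (not the paper's implementation): if radiance recovered through the inverse camera response is normalised by its magnitude, any global exposure factor cancels, so frames captured at different auto-exposure settings become directly comparable. The linear response function and the `normalised_radiance` helper below are illustrative assumptions.

```python
import numpy as np

def normalised_radiance(intensity, inverse_response=lambda i: i):
    """Map LDR intensities to radiance via an (assumed linear)
    inverse camera response, then normalise so that any global
    exposure scale cancels. Hypothetical helper for illustration;
    the paper's exact normalisation may differ."""
    radiance = inverse_response(intensity.astype(np.float64))
    norm = np.linalg.norm(radiance)
    return radiance / norm if norm > 0 else radiance

# Two captures of the same scene under different (unknown) exposures:
scene = np.random.default_rng(0).uniform(0.1, 1.0, (4, 4))
short_exp = 0.5 * scene   # shorter exposure time
long_exp = 2.0 * scene    # four times longer exposure

# After normalisation the exposure factor cancels, so the measures agree
# and the two frames can be compared directly during tracking.
assert np.allclose(normalised_radiance(short_exp),
                   normalised_radiance(long_exp))
```

Because the exposure scale divides out, a frame-to-model photometric error built on this measure is unaffected by the camera's auto-exposure adjustments.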


Figure. (a) shows that the proposed frame-to-model tracking using normalised radiance delivers the best tracking accuracy using visual data. The tracking is performed on a challenging synthetic flickering RGB-D sequence. (b)-(e) are screen captures from videos released with previous works. Specifically, (b) and (d) are from [1], where (b) is the raw input image and (d) shows the predicted scene textures; the unrealistic artefacts, marked by red circles, indicate insufficient exposure compensation. (c) shows the predicted scene texture from [2]; (e) is from [3]. Similar artefacts can be seen in these results. (f), in the top right, shows the results from our implementation of [4] on an RGB-D video sequence; the artefacts are very strong due to the large camera exposure adjustment when moving from the bright area (top of the scene) to the dark area (bottom left of the scene). (g), in the bottom right, shows the predicted textures using the proposed HDRFusion. It can be seen that the result is free of artefacts; its HDR textures are visualised using the Mantiuk tone mapping operator [5].
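The HDR textures in (g) must be compressed into display range before they can be shown; the paper uses Mantiuk's display-adaptive operator [5]. As a simpler stand-in for illustration, the sketch below applies a Reinhard-style global operator (a different, simpler technique than [5]) to show what tone mapping does: compress several decades of luminance into [0, 1) while preserving relative ordering.

```python
import numpy as np

def tonemap_global(hdr, key=0.18, eps=1e-6):
    """Reinhard-style global tone mapping: scale luminance by the
    log-average, then compress with L / (1 + L). This is a simple
    stand-in, not the display-adaptive operator of [5]."""
    log_avg = np.exp(np.mean(np.log(hdr + eps)))   # log-average luminance
    scaled = key * hdr / log_avg                   # exposure normalisation
    return scaled / (1.0 + scaled)                 # compress to [0, 1)

hdr = np.array([0.01, 0.1, 1.0, 10.0, 100.0])  # five decades of luminance
ldr = tonemap_global(hdr)
assert np.all((ldr >= 0) & (ldr < 1))  # fits in display range
assert np.all(np.diff(ldr) > 0)        # relative brightness order preserved
```

Any such operator trades absolute luminance for displayability; the HDR map itself retains the full dynamic range, so a different operator or exposure can be chosen at viewing time.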


[1] Meilland, M., Barat, C., Comport, A.: 3D High Dynamic Range dense visual
SLAM and its application to real-time object re-lighting. In: IEEE/ACM Intl.
Symposium on Mixed and Augmented Reality (ISMAR). (2013) 143–152.

[2] Kerl, C., Cremers, D.: Dense Continuous-Time Tracking and Mapping with Rolling Shutter RGB-D Cameras. In: Intl. Conf. on Computer Vision (ICCV). (2015).

[3] Whelan, T., Leutenegger, S., Salas-Moreno, R.F., Glocker, B., Davison, A.J.: ElasticFusion: Dense SLAM Without A Pose Graph. In: Robotics: Science and Systems (RSS). (2015).

[4] Whelan, T., Kaess, M., Johannsson, H., Fallon, M., Leonard, J.J., McDonald, J.: Real-time large-scale dense RGB-D SLAM with volumetric fusion. Intl. Journal of Robotics Research (IJRR) 34(4-5) (2015) 598–626.

[5] Mantiuk, R., Daly, S., Kerofsky, L.: Display adaptive tone mapping. ACM Trans.
on Graphics (ToG) 27(3) (2008).



About Shuda Li

Dr Shuda Li
Computer Vision Group, Room 1.15, Merchant Venturers Building, Woodland Road, University of Bristol, Bristol BS8 1UB, United Kingdom
Email: lishuda1980@gmail.com, csxsl@bristol.ac.uk, csxsl@compsci.bristol.ac.uk
Web: http://www.cs.bris.ac.uk/~csxsl/
Fax: +44 (0)117 954 5208
