HDRFusion: HDR SLAM using a low-cost auto-exposure RGB-D sensor (Source code and dataset released!!!)

by Shuda Li*, Ankur Handa**, Yang Zhang*, Andrew Calway*
*University of Bristol, **University of Cambridge
Accepted to Intl. Conf. on 3D Vision (3DV), 2016

  • [paper]
  • [github]   NEW!!!
  • [video]
  • [dataset]   NEW!!!
  • Experiments demonstrating strong robustness to motion blur NEW!!!

Abstract

We describe a new method for comparing frame appearance in a frame-to-model 3-D mapping and tracking system using a low dynamic range (LDR) RGB-D camera, which is robust to brightness changes caused by auto exposure. It is based on a normalised radiance measure that is invariant to exposure changes. This not only robustifies tracking under changing lighting conditions, but also enables the subsequent exposure compensation to be performed accurately, allowing online building of high dynamic range (HDR) maps. The latter facilitates frame-to-model tracking, minimising drift, and better captures light variation within the scene. Results from experiments with synthetic and real data demonstrate that the method provides both improved tracking and maps with a far greater dynamic range of luminosity.
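The exposure invariance at the heart of the method can be illustrated with a toy sketch in Python. The gamma-curve response and the fixed exposure values below are illustrative assumptions, not the paper's calibrated model; the actual system estimates the camera response and relative exposures online.

```python
import numpy as np

def inverse_response(intensity):
    """Toy inverse camera response: a simple gamma curve g(I) = I^2.2.
    The real system calibrates this function; gamma is a stand-in."""
    return np.power(intensity, 2.2)

def normalised_radiance(intensity, exposure):
    """Map an LDR intensity back to scene radiance and divide out the
    exposure, giving a measure invariant to auto-exposure changes."""
    return inverse_response(intensity) / exposure

# The same scene radiance observed under two different exposures:
radiance = 0.35
e1, e2 = 1.0, 2.5
i1 = np.power(radiance * e1, 1 / 2.2)   # simulated LDR observation 1
i2 = np.power(radiance * e2, 1 / 2.2)   # simulated LDR observation 2

r1 = normalised_radiance(i1, e1)
r2 = normalised_radiance(i2, e2)
print(abs(r1 - r2) < 1e-9)  # True: the normalised measure agrees across exposures
```

Because the exposure term cancels, two frames of the same surface can be compared directly even while the camera's auto exposure is changing between them.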


Figure. (a) shows that the proposed frame-to-model tracking using normalised radiance delivers the best tracking accuracy using visual data. The tracking is performed on a challenging synthetic flickering RGB-D sequence. (b)-(e) are screen captures from videos released with previous works. Specifically, (b) and (d) are from [1], where (b) is the raw input image and (d) is the predicted scene texture; the unrealistic artefacts, marked by red circles, indicate insufficient exposure compensation. (c) is the predicted scene texture from [2] and (e) is from [3]; similar artefacts can be seen in these results. (f) in the top right shows the results from our implementation of [4] on an RGB-D video sequence; the artefacts are very strong due to the large camera exposure adjustment when moving from the bright area (top of the scene) to the dark area (bottom left of the scene). (g) in the bottom right shows the predicted textures using the proposed HDRFusion. It can be seen that the result is free of artefacts; its HDR textures are visualised using the Mantiuk tone mapping operator [5].

References

[1] Meilland, M., Barat, C., Comport, A.: 3D High Dynamic Range dense visual
SLAM and its application to real-time object re-lighting. In: IEEE/ACM Intl.
Symposium on Mixed and Augmented Reality (ISMAR). (2013) 143–152.

[2] Kerl, C., Stückler, J., Cremers, D.: Dense Continuous-Time Tracking and Mapping
with Rolling Shutter RGB-D Cameras. In: Intl. Conf. on Computer Vision
(ICCV). (2015).

[3] Whelan, T., Leutenegger, S., Salas-Moreno, R.F., Glocker, B., Davison, A.J.:
ElasticFusion: Dense SLAM Without A Pose Graph. In: Robotics: Science and Systems
(RSS). (2015).

[4] Whelan, T., Kaess, M., Johannsson, H., Fallon, M., Leonard, J.J., McDonald, J.:
Real-time large-scale dense RGB-D SLAM with volumetric fusion. Intl. Journal
of Robotics Research (IJRR) 34(4-5) (2015) 598–626.

[5] Mantiuk, R., Daly, S., Kerofsky, L.: Display adaptive tone mapping. ACM Trans.
on Graphics (ToG) 27(3) (2008).

 


Absolute pose estimation using multiple forms of correspondence from RGB-D frames (Source code released!!!)

by Shuda Li and Andrew Calway
University of Bristol, UK
Accepted to IEEE Intl. Conf. on Robotics and Automation (ICRA), 2016

Abstract

We describe a new approach to absolute pose estimation from noisy and outlier-contaminated matching point sets for RGB-D sensors. We show that integrating multiple forms of correspondence, based on 2-D points, 3-D points and surface normals, gives more precise, accurate and robust pose estimates. This is because it provides more constraints than any one form alone and increases the available measurements, especially when dealing with sparse matching sets. We demonstrate the approach by incorporating it within a RANSAC algorithm and introduce a novel direct least-squares method to calculate pose estimates. Results from experiments on synthetic and real data demonstrate improved performance over existing methods.
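As an illustration of the direct least-squares idea for the 3-D point term alone, the classic Kabsch/Umeyama closed-form alignment can be sketched in NumPy. This is a hedged stand-in, not the paper's solver, which additionally folds in 2-D point and surface-normal constraints:

```python
import numpy as np

def rigid_pose_3d(src, dst):
    """Least-squares rigid transform (R, t) aligning src to dst
    (Kabsch/Umeyama). Sketches only the 3-D point term."""
    src_c = src - src.mean(axis=0)           # centre both point sets
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    # Guard against a reflection in the least-squares solution
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Recover a known pose from noiseless correspondences
rng = np.random.default_rng(0)
src = rng.standard_normal((20, 3))
angle = 0.5
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
t_true = np.array([0.1, -0.2, 0.3])
dst = src @ R_true.T + t_true
R, t = rigid_pose_3d(src, dst)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```

Inside a RANSAC loop, a solver like this would be run on each minimal sample and the hypothesis scored by its inlier count over the full matching set.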



RGBD Relocalisation Using Pairwise Geometry and Concise Key Point Sets (Source code released!!!)

by Shuda Li and Andrew Calway
University of Bristol, UK
Published in IEEE Intl. Conf. on Robotics and Automation (ICRA), 2015

Abstract

We describe a novel RGBD relocalisation algorithm based on key point matching. It combines two components. First, a graph matching algorithm which takes into account the pairwise 3-D geometry amongst the key points, giving robust relocalisation. Second, a point selection process which provides an even distribution of the 'most matchable' points across the scene, based on non-maximum suppression within voxels of a volumetric grid. This ensures a bounded set of matchable key points which enables tractable and scalable graph matching at frame rate. We present evaluations using a public dataset and our own more difficult dataset containing large pose changes, fast motion and non-stationary objects. It is shown that the method significantly outperforms state-of-the-art methods.
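The voxel-based non-maximum suppression step can be sketched as follows. Function and variable names are illustrative assumptions, and the scores stand in for whatever 'most matchable' measure is used:

```python
def select_keypoints(points, scores, voxel_size):
    """Keep only the highest-scoring key point in each voxel of a regular
    grid: non-maximum suppression that bounds the size of the matchable
    set and spreads the surviving points evenly across the scene."""
    best = {}  # voxel index -> (score, point)
    for p, s in zip(points, scores):
        key = tuple(int(c // voxel_size) for c in p)  # integer voxel coords
        if key not in best or s > best[key][0]:
            best[key] = (s, p)
    return [p for _, p in best.values()]

pts = [(0.1, 0.1, 0.1), (0.2, 0.3, 0.2),   # same voxel (size 1.0)
       (1.5, 0.1, 0.1)]                     # different voxel
scores = [0.9, 0.4, 0.7]
kept = select_keypoints(pts, scores, voxel_size=1.0)
print(sorted(kept))  # [(0.1, 0.1, 0.1), (1.5, 0.1, 0.1)]
```

Because at most one point survives per voxel, the number of candidates handed to the graph matcher is bounded by the number of occupied voxels, which is what keeps the matching tractable at frame rate.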



CMake + CUDA + Dynamic parallelism (Nested parallelism)


# Generate code for compute capability 3.5; dynamic parallelism requires sm_35 or newer
set(CUDA_NVCC_FLAGS ${CUDA_NVCC_FLAGS};-gencode arch=compute_35,code=sm_35)
# Compile device code separately (-rdc=true), which device-side kernel launches require
set(CUDA_SEPARABLE_COMPILATION TRUE)
# Make the CUDA device runtime visible to the linker (adjust the path to your installation)
link_directories( "C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v6.5\\lib\\x64")
# Link the device runtime library, needed when kernels launch other kernels
target_link_libraries( DynamicParallel cudadevrt.lib )
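With those flags in place, a minimal dynamic-parallelism program for the DynamicParallel target can be sketched as below. The kernel names are illustrative; it needs an sm_35+ GPU and corresponds to compiling by hand with -arch=sm_35 -rdc=true -lcudadevrt.

```cuda
#include <cstdio>

// Child kernel, launched from the device rather than from the host.
__global__ void childKernel(int parent)
{
    printf("child thread %d launched by parent thread %d\n", threadIdx.x, parent);
}

// Parent kernel: each thread launches its own child grid.
__global__ void parentKernel()
{
    childKernel<<<1, 4>>>(threadIdx.x);
    cudaDeviceSynchronize();  // device-side sync (fine in CUDA 6.5; deprecated in recent toolkits)
}

int main()
{
    parentKernel<<<1, 2>>>();
    cudaDeviceSynchronize();
    return 0;
}
```

If the CMake settings above are missing, the device-side `<<< >>>` launch is exactly where the build fails, so this file doubles as a quick check of the configuration.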


Upgraded everything to CUDA 6.5 RC + VS2013 + Qt 5.3.1 + OpenCV 3.0 + PCL 1.7.2 today. Feeling so cool!!!

Really love VS2013 for its space efficiency and speed. Here is a brief comparison between 2013 and 2012:

[screenshot: VS2013 vs. VS2012 comparison]
Other new features useful to me:

  • an online account to synchronise IDE settings across multiple PCs
  • C++11 features such as “vector a(5)”
  • CUDA syntax highlighting when combined with CUDA 6.5 RC

Possible bugs I encountered in VS2013:

  • It crashes when setting project properties for multiple projects at once.
  • A network failure can make it impossible to start or shut down the IDE.

Bolt C++ Template Library

Mentioned in the SLAM++ CVPR’13 paper, the Bolt C++ Template Library is a GPU library that provides parallel primitive operations such as reduction, scan and sort.

[Resources] http://developer.amd.com/tools-and-sdks/heterogeneous-computing/amd-accelerated-parallel-processing-app-sdk/bolt-c-template-library/

Questions:

– On what hardware environments can BCT be compiled?


Real-time 3-D video

http://v.youku.com/v_show/id_XMjcxMzIyMTIw_type_99.html


Hello world!

Welcome to WordPress.com. This is your first post. Edit or delete it and start blogging!


Say “we” for us.

Wind is wind,
          is wind,
          is wind,
          is wind;
Sea is sea,
        is sea,
        is sea,
        is sea;
You are you,
        are you,
        are you,
        are you;
but I am still expecting that
there will be one day when I could say "we" for us…


 


《爱情呼叫转移2》

Dialogue between Fan Wei (范伟) and Karena Lam (林嘉欣) in 《爱情呼叫转移2》

First meeting
Fan: Let me introduce myself properly: Fan Zhongju.
Lam: Your name sounds like it has a literary origin.
Fan: Possibly a descendant of Fan Jin.
Lam: Ah, a scholarly family!
Fan: Yes.

Breaking up
Lam: You understand what I mean by now, don’t you?
Fan: I do.
Fan: I chase and chase with Cupid’s arrow of love, while you fly and fly in a bulletproof vest….
Lam: So don’t be so good to me; I don’t feel anything.
Fan: No problem. Feel slowly, and the feeling will come…

Lam: Then what about our foundation? Where is the foundation between us?
Fan: People fear fame as pigs fear fatness; men fear being poor and women fear being fat. I have money and you are not fat: that is the greatest foundation of all.
Lam: Just because I am not fat?

Lam: You should not have pursued me.
Fan: The vast sea of people is a field of weeds; when I come across a flower, why wouldn’t I pursue it…
Lam: I want a Prince Charming on a white horse.
Fan: Ah… (slumping back against the sofa) Then it’s hopeless!
Fan: Everyone calls me the fat dark horse…

Amid the string of passionate scenes in the film, this Fan-style romance is genuinely hilarious.
