Direct Edge Alignment (D-EA)

 

Abstract

There has been a paradigm-shifting trend towards feature-less methods, driven by their elegant formulation, accuracy, and ever-increasing computational power. In this work, we present a direct edge alignment approach for 6-DOF tracking. We argue that photo-consistency-based methods suffer from a much smaller convergence basin and are extremely sensitive to noise, changing illumination, and fast motion. We propose to use the Distance Transform in the energy formulation, which significantly extends the influence of the edges for tracking. We address the non-differentiability of our cost function, which also affects previous methods, by use of a sub-gradient method. Through extensive experiments we show that the proposed method performs comparably to previous methods under nominal conditions and runs at 30 Hz in single-threaded mode. In addition, we demonstrate that under large motion our method outperforms previous methods using the same run-time configuration.
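The sub-gradient method mentioned above handles cost functions that are not differentiable everywhere. The following is a minimal generic sketch (a scalar toy problem with a diminishing step size, not the paper's SE(3) pose optimization; all names here are illustrative):

```python
import numpy as np

def subgradient_descent(cost_subgrad, x0, step0=0.5, iters=60):
    """Generic sub-gradient descent with a diminishing step size.

    cost_subgrad(x) must return (cost, g) where g is ANY valid
    sub-gradient at x. Because sub-gradient steps need not decrease
    the cost monotonically, we track the best iterate seen so far.
    """
    x = np.asarray(x0, dtype=float)
    best_x, best_f = x.copy(), np.inf
    for k in range(1, iters + 1):
        f, g = cost_subgrad(x)
        if f < best_f:
            best_f, best_x = f, x.copy()
        # Classic diminishing step rule: step0 / sqrt(k)
        x = x - (step0 / np.sqrt(k)) * np.asarray(g, dtype=float)
    return best_x, best_f

# Toy cost f(x) = |x - 3|, non-differentiable exactly at its minimum;
# sign(x - 3) is a valid sub-gradient everywhere (0 at the kink).
cost = lambda x: (abs(x[0] - 3.0), [np.sign(x[0] - 3.0)])
x_star, f_star = subgradient_descent(cost, [0.0])
```

The iterates oscillate around the kink at x = 3, but the diminishing step size shrinks the oscillation, so the best iterate approaches the minimizer.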

Illustration

[Animation: output_7ewdbs]

Residues at every iteration as the optimization proceeds, estimating the relative pose (rotation and translation) between a pair of images spaced about 90 ms apart. Red represents higher residues; blue represents smaller residues.

[Figure: shen_fig1]

Publication

Kuse M., Shen S. "Robust Camera Motion Estimation using Direct Edge Alignment and Sub-gradient Method". In Proc. of the IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 2016. [PDF]

Source Code

Git Repo : https://bitbucket.org/mpkuse/rgbd_odometry
Documentation : [ZIP] (generated with Doxygen)
Prerequisites : ROS (Hydro and above), Eigen, IGL Library, OpenCV.

Other Data for Download

Reprojections for an image pair (full results, similar to Figure 1 of the paper) : [ZIP]
Relative poses for sequences and related scripts : [ZIP]

Summary of Algorithm

(For details please refer to the publication)

[Figure: fig2]
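The core idea of the energy formulation can be sketched as follows: edge pixels from one frame are reprojected into the reference frame, and each one is penalized by the Distance Transform (distance to the nearest reference edge), so the cost's influence extends far from the edges themselves. This is a minimal illustrative sketch with a brute-force Distance Transform on a tiny grid, not the paper's implementation; all function names and the nearest-pixel rounding are assumptions:

```python
import numpy as np

def distance_transform(edge_mask):
    """Brute-force Euclidean distance transform: for each pixel, the
    distance to the nearest edge pixel. Fine for tiny illustrative grids
    (real pipelines would use an O(n) algorithm)."""
    h, w = edge_mask.shape
    ys, xs = np.nonzero(edge_mask)
    edges = np.stack([ys, xs], axis=1).astype(float)          # (N, 2)
    grid = np.stack(np.meshgrid(np.arange(h), np.arange(w),
                                indexing="ij"), axis=-1).astype(float)  # (h, w, 2)
    d = np.linalg.norm(grid[:, :, None, :] - edges[None, None, :, :], axis=-1)
    return d.min(axis=-1)                                      # (h, w)

def edge_alignment_cost(dt_ref, reprojected_pts):
    """Sum of Distance Transform values at the reprojected edge pixels.
    Points landing exactly on reference edges contribute zero; points
    far from any edge still produce a smooth, informative penalty."""
    cost = 0.0
    for y, x in reprojected_pts:
        cost += dt_ref[int(round(y)), int(round(x))]
    return cost

# Reference image with a single vertical edge at column 2.
edge_mask = np.zeros((5, 5), dtype=bool)
edge_mask[:, 2] = True
dt = distance_transform(edge_mask)

# One reprojected point on the edge, one two pixels away.
cost = edge_alignment_cost(dt, [(1, 2), (3, 4)])
```

Minimizing this cost over the relative pose pulls reprojected edge pixels onto the reference edges; because the Distance Transform is non-zero (and sloped) far from the edges, the convergence basin is much wider than with raw photo-consistency.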

 

Student in charge: Manohar KUSE