Unsupervised Indoor Depth Estimation

Abstract

Single-view depth estimation using CNNs trained from unlabelled videos has shown significant promise. However, excellent results have mostly been obtained in street-scene driving scenarios, and such methods often fail in other settings, particularly indoor videos taken by handheld devices, where the ego-motion is often degenerate, i.e., the rotation dominates the translation. In this work, we establish that the degenerate camera motions exhibited in handheld settings are a critical obstacle for unsupervised depth learning. A main contribution of our work is a fundamental analysis showing that the rotation behaves as noise during training, in contrast to the translation (baseline), which provides supervision signals. To capitalise on our findings, we propose a novel data pre-processing method for effective training: we search for image pairs with modest translation and remove their rotation via the proposed weak image rectification. With our pre-processing, existing unsupervised models can be trained well in challenging scenarios (e.g., the NYUv2 dataset), and the results outperform the unsupervised SOTA by a large margin (0.147 vs. 0.189 in the AbsRel error).
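
To illustrate the pre-processing idea, the minimal sketch below (not the authors' released code; it assumes OpenCV, NumPy, a known 3x3 intrinsic matrix K, and illustrative function names) estimates the relative rotation between two frames from matched features and warps the second frame so that the rotational component of the motion is cancelled, leaving an approximately translation-only pair.

# Minimal sketch of rotation removal ("weak rectification") between two frames.
# Assumptions: OpenCV (cv2), NumPy, a known 3x3 intrinsic matrix K, grayscale images.
# Function and variable names are illustrative, not taken from the released code.

import cv2
import numpy as np

def remove_relative_rotation(img_a, img_b, K):
    """Warp img_b so that its orientation matches img_a, cancelling rotation."""
    # 1. Detect and match sparse features between the two frames.
    orb = cv2.ORB_create(2000)
    kps_a, des_a = orb.detectAndCompute(img_a, None)
    kps_b, des_b = orb.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_a, des_b)
    pts_a = np.float32([kps_a[m.queryIdx].pt for m in matches])
    pts_b = np.float32([kps_b[m.trainIdx].pt for m in matches])

    # 2. Estimate the relative pose (R, t) from the essential matrix.
    E, inliers = cv2.findEssentialMat(pts_a, pts_b, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts_a, pts_b, K, mask=inliers)

    # 3. For points at infinity, pixels of img_b map into img_a's orientation
    #    via the infinite homography K R^T K^-1; warping img_b with it removes
    #    the rotational part of the motion while leaving the translation intact.
    H = K @ R.T @ np.linalg.inv(K)
    h, w = img_a.shape[:2]
    img_b_rect = cv2.warpPerspective(img_b, H, (w, h))
    return img_b_rect, R, t

After such a warp, the mean residual pixel displacement between img_a and img_b_rect could serve as a rough proxy for the translation when selecting image pairs with modest baselines, though the exact selection criterion used in the paper may differ.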

Paper

Unsupervised Depth Learning in Challenging Indoor Video: Weak Rectification to Rescue, Jia-Wang Bian, Huangying Zhan, Naiyan Wang, Tat-Jun Chin, Chunhua Shen, Ian Reid, arXiv:2006.02708 [ArXiv] [GitHub]

@article{bian2020depth,
  title={Unsupervised Depth Learning in Challenging Indoor Video: Weak Rectification to Rescue},
  author={Bian, Jia-Wang and Zhan, Huangying and Wang, Naiyan and Chin, Tat-Jun and Shen, Chunhua and Reid, Ian},
  journal={arXiv preprint arXiv:2006.02708},
  year={2020}
}

Contribution

  1. We analyze the effects of complicated camera motions on unsupervised depth learning.
  2. We release a rectified NYUv2 dataset for unsupervised learning of single-view depth CNNs.

Results on NYUv2

Visual comparison
