PanoMixSwap – Panorama Mixing via Structural Swapping for Indoor Scene Understanding

National Tsing Hua University

Abstract

The volume and diversity of training data are critical for modern deep learning-based methods. Compared to the massive amount of labeled perspective images, 360° panoramic images fall short in both volume and diversity.

In this paper, we propose PanoMixSwap, a novel data augmentation technique specifically designed for indoor panoramic images. PanoMixSwap explicitly mixes background styles, foreground furniture, and room layouts from existing indoor panorama datasets and generates a diverse set of new panoramic images to enrich those datasets. We first decompose each panoramic image into its constituent parts: background style, foreground furniture, and room layout. We then generate an augmented image by mixing these three parts from three different images, e.g., the foreground furniture from one image, the background style from a second, and the room layout from a third. Our method yields high diversity since the number of possible image combinations grows cubically with the dataset size.
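To make the combinatorics concrete, the following is a minimal sketch, not the authors' code: the Parts container and enumerate_mixes are hypothetical names, and each field stands in for whatever representation the method actually uses.

```python
from itertools import product
from typing import Iterable, Iterator, NamedTuple

class Parts(NamedTuple):
    style: object      # background style (wall/floor/ceiling appearance)
    layout: object     # room structure (e.g., corner positions)
    furniture: object  # foreground furniture and its semantic mask

def enumerate_mixes(sources: Iterable[Parts]) -> Iterator[Parts]:
    """Yield every mix whose style, layout, and furniture come from three
    (possibly different) source panoramas: N sources give up to N**3
    distinct combinations, hence the cubic growth noted above."""
    sources = list(sources)
    for a, b, c in product(sources, repeat=3):
        yield Parts(style=a.style, layout=b.layout, furniture=c.furniture)
```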

We also evaluate the effectiveness of PanoMixSwap on two indoor scene understanding tasks: semantic segmentation and layout estimation. Our experiments demonstrate that state-of-the-art methods trained with PanoMixSwap consistently outperform their original settings on both tasks.

PanoMixSwap


PanoMixSwap takes three inputs: a style image, a structure layout, and a furniture image. It is composed of two blocks: the Style Fusing Block and the Furniture Fusing Block. The Style Fusing Block generates a foreground-free styled structure image that fuses the background style from the style image with the room layout from the structure layout. The Furniture Fusing Block then transfers the furniture from the furniture image onto the styled structure image to produce the final augmented image and its semantic mask.
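The data flow between the two blocks can be sketched as below. This is only a schematic under the description above: style_fusing_block and furniture_fusing_block are hypothetical stand-ins for the paper's learned networks, and the signatures are assumptions rather than a released API.

```python
from typing import Tuple

def style_fusing_block(style_img, structure_layout):
    """Hypothetical stand-in for the Style Fusing Block: returns a
    foreground-free panorama whose room structure follows
    `structure_layout` and whose appearance follows `style_img`."""
    raise NotImplementedError  # a learned network in the actual method

def furniture_fusing_block(styled_structure, furniture_img, furniture_mask) -> Tuple:
    """Hypothetical stand-in for the Furniture Fusing Block: transfers the
    furniture selected by `furniture_mask` from `furniture_img` onto
    `styled_structure`, returning the augmented image and semantic mask."""
    raise NotImplementedError  # a learned network in the actual method

def panomixswap(style_img, structure_layout, furniture_img, furniture_mask):
    # Stage 1: fuse background style with the room layout.
    styled_structure = style_fusing_block(style_img, structure_layout)
    # Stage 2: paste furniture onto the styled structure; the semantic
    # mask is updated alongside the image.
    return furniture_fusing_block(styled_structure, furniture_img, furniture_mask)
```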

Visualization Results


Quantitative Results

Semantic Segmentation

Dataset        Model        PanoMixSwap   mIoU (%)   mAcc (%)
Stanford2D3D   HoHoNet      -             52.00      65.00
Stanford2D3D   HoHoNet      ✓             56.02      67.43
Stanford2D3D   PanoFormer   -             42.20      61.03
Stanford2D3D   PanoFormer   ✓             42.94      62.14
Structured3D   HoHoNet      -             80.80      87.98
Structured3D   HoHoNet      ✓             81.96      88.52


Layout Estimation

Model        PanoMixSwap   3DIoU (%)   CE (%)   PE (%)
HorizonNet   -             83.51       0.62     1.97
HorizonNet   ✓             86.61       0.61     1.99
LGT-Net      -             86.03       0.63     2.11
LGT-Net      ✓             86.96       0.63     2.04

CE and PE denote corner error and pixel error, respectively (lower is better).


Qualitative Results

BibTeX

@inproceedings{hsieh2023panomixswap,
  title     = {PanoMixSwap – Panorama Mixing via Structural Swapping for Indoor Scene Understanding},
  author    = {Yu-Cheng Hsieh and Cheng Sun and Suraj Dengale and Min Sun},
  booktitle = {British Machine Vision Conference (BMVC)},
  year      = {2023},
}