arXiv preprint
In many imaging modalities, objects of interest can occur in a variety of
locations and poses (i.e. are subject to translations and rotations in 2d or
3d), but the location and pose of an object do not change its semantics (i.e.
the object's essence). That is, the specific location and rotation of an
airplane in satellite imagery, or the 3d rotation of a chair in a natural
image, or the rotation of a particle in a cryo-electron micrograph, do not
change the intrinsic nature of those objects. Here, we consider the problem of
learning semantic representations of objects that are invariant to pose and
location in a fully unsupervised manner. We address shortcomings in previous
approaches to this problem by introducing TARGET-VAE, a translation and
rotation group-equivariant variational autoencoder framework. TARGET-VAE
combines three core innovations: 1) a rotation and translation
group-equivariant encoder architecture, 2) a structurally disentangled
distribution over latent rotation, translation, and a
rotation-translation-invariant semantic object representation, which are
jointly inferred by the approximate inference network, and 3) a spatially
equivariant generator network. In comprehensive experiments, we show that,
without supervision, TARGET-VAE learns disentangled representations that
significantly improve upon, and avoid the pathologies of, previous methods.
When trained on images highly corrupted by rotation and translation, the
semantic representations learned by TARGET-VAE are similar to those learned on
consistently posed objects, dramatically improving clustering in the semantic
latent space. Furthermore, TARGET-VAE performs remarkably accurate
unsupervised pose and location inference. We expect methods like TARGET-VAE
will underpin future approaches for unsupervised object generation, pose
prediction, and object detection.
Alireza Nasiri, Tristan Bepler
2022-10-24
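
As a rough, hypothetical illustration of innovation (2) above, the sketch below shows how a structurally disentangled approximate posterior q(theta, t, z | x) over rotation theta, translation t, and a pose-invariant semantic content vector z could be factored in PyTorch. This is not the authors' implementation: the class and head names are invented for this example, the Gaussian heads and plain MLP backbone are simplifying assumptions, and the group-equivariant encoder and spatially equivariant generator of TARGET-VAE are omitted.

# Hypothetical sketch, not the authors' code: factors the approximate
# posterior into independent distributions over rotation (theta),
# translation (t), and pose-invariant semantic content (z), using
# standard reparameterized Gaussian latents. TARGET-VAE's
# group-equivariant encoder is replaced here by a plain MLP for brevity.

import torch
import torch.nn as nn

class DisentangledPosterior(nn.Module):  # hypothetical name
    def __init__(self, in_dim: int, z_dim: int = 16, hidden: int = 128):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        # Separate heads for each latent factor: rotation angle (1d),
        # translation (2d), and semantic content (z_dim).
        self.theta_head = nn.Linear(hidden, 2)       # mean, log-variance
        self.trans_head = nn.Linear(hidden, 4)       # 2d mean, 2d log-variance
        self.z_head = nn.Linear(hidden, 2 * z_dim)   # mean, log-variance

    def forward(self, x: torch.Tensor):
        h = self.backbone(x.flatten(1))
        theta_mu, theta_logvar = self.theta_head(h).chunk(2, dim=-1)
        t_mu, t_logvar = self.trans_head(h).chunk(2, dim=-1)
        z_mu, z_logvar = self.z_head(h).chunk(2, dim=-1)

        def sample(mu, logvar):
            # Reparameterization trick: mu + sigma * eps
            return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)

        theta = sample(theta_mu, theta_logvar)  # latent rotation
        t = sample(t_mu, t_logvar)              # latent translation
        z = sample(z_mu, z_logvar)              # pose-invariant semantics
        return theta, t, z

# Usage: infer pose and content factors for a batch of 64x64 images.
enc = DisentangledPosterior(in_dim=64 * 64)
theta, t, z = enc(torch.randn(8, 1, 64, 64))
print(theta.shape, t.shape, z.shape)  # (8, 1), (8, 2), (8, 16)

Keeping a separate head per factor is what makes the disentanglement structural: rotation, translation, and semantic content each get their own dedicated distribution, rather than being carved out of a single entangled latent vector after training.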