Unsupervised Deep Generative Adversarial Hashing Network

Abstract

Unsupervised deep hash functions have not shown satisfactory improvements over their shallow alternatives, and they usually require supervised pretraining to avoid overfitting. In this paper, we propose a new deep unsupervised hashing function, called HashGAN, which efficiently obtains binary representations of input images without any supervised pretraining. HashGAN consists of three networks: a generator, a discriminator, and an encoder. By sharing the parameters of the encoder and the discriminator, we benefit from the adversarial loss as a data-dependent regularizer when training our deep hash function. Moreover, we introduce a novel hashing loss for real images that encourages hash bits with minimum entropy, uniform frequency, consistency, and independence. Furthermore, we employ a collaborative loss that ties the hash bits of synthesized images to the random inputs used to generate them. In our experiments, HashGAN outperforms previous unsupervised hash functions in image retrieval and achieves state-of-the-art performance in image clustering on benchmark datasets. We also provide an ablation study showing the contribution of each component of our loss function.
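
To make the shared discriminator/encoder design and the loss terms concrete, below is a minimal PyTorch-style sketch. It is an illustration under assumptions, not the paper's implementation: the layer sizes, the 32x32 input resolution, the equal loss weighting, and all names (DiscriminatorEncoder, hashing_loss, collaborative_loss) are hypothetical.

import torch
import torch.nn as nn

class DiscriminatorEncoder(nn.Module):
    """Shared trunk with two heads: a real/fake logit (discriminator)
    and relaxed hash-bit activations in (0, 1) (encoder)."""
    def __init__(self, num_bits=32):
        super().__init__()
        self.trunk = nn.Sequential(  # parameters shared by both roles
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(),
        )
        feat_dim = 128 * 8 * 8  # assumes 32x32 input images
        self.adv_head = nn.Linear(feat_dim, 1)          # adversarial logit
        self.hash_head = nn.Linear(feat_dim, num_bits)  # hash logits

    def forward(self, x):
        h = self.trunk(x)
        return self.adv_head(h), torch.sigmoid(self.hash_head(h))

def hashing_loss(b, b_aug):
    """Loss terms on relaxed hash bits b (batch x num_bits); b_aug holds
    the bits of transformed versions of the same images."""
    # minimum entropy: push each bit toward 0 or 1
    entropy = -(b * torch.log(b + 1e-8)
                + (1 - b) * torch.log(1 - b + 1e-8)).mean()
    # uniform frequency: each bit should be on for ~half the batch
    freq = ((b.mean(dim=0) - 0.5) ** 2).mean()
    # consistency: an image and its transformed version share bits
    consistency = ((b - b_aug) ** 2).mean()
    # independence: penalize off-diagonal covariance between bits
    c = b - b.mean(dim=0, keepdim=True)
    cov = (c.t() @ c) / b.size(0)
    independence = (cov - torch.diag(torch.diag(cov))).pow(2).mean()
    return entropy + freq + consistency + independence

def collaborative_loss(b_fake, z_binary):
    """Tie the hash bits of synthesized images to the random binary
    inputs that generated them."""
    return ((b_fake - z_binary) ** 2).mean()

At test time, thresholding the encoder's relaxed bits at 0.5 would yield the final binary codes; the generator and the standard adversarial objective are omitted from this sketch.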

Publication
IEEE Conference on Computer Vision and Pattern Recognition (CVPR)