Despite recent progress in generative image modeling, successfully generating
high-resolution, diverse samples from complex datasets such as ImageNet remains
an elusive goal. To this end, we train Generative Adversarial Networks at the
largest scale yet attempted, and study the instabilities specific to such
scale. We find that applying orthogonal regularization to the generator renders
it amenable to a simple "truncation trick", allowing fine control over the
trade-off between sample fidelity and variety by truncating the latent space.
Our modifications lead to models which set the new state of the art in
class-conditional image synthesis. When trained on ImageNet at 128x128
resolution, our models (BigGANs) achieve an Inception Score (IS) of 166.3 and
Fréchet Inception Distance (FID) of 9.6, improving over the previous best IS of
52.52 and FID of 18.65.
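The "truncation trick" mentioned above amounts to sampling the generator's latent vector from a truncated normal distribution: components whose magnitude exceeds a threshold are resampled, and lowering the threshold trades variety for fidelity. A minimal sketch of such a sampler is below; it assumes a NumPy-based pipeline, and the function name and signature are illustrative, not from the paper's code.

```python
import numpy as np

def truncated_z_sample(batch_size, dim_z, threshold, seed=None):
    """Draw latents from a standard normal, resampling any component
    whose magnitude exceeds `threshold` (the truncation trick).
    Smaller thresholds yield higher-fidelity, less diverse samples."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((batch_size, dim_z))
    # Rejection-resample out-of-range components until all lie
    # within [-threshold, threshold].
    while True:
        mask = np.abs(z) > threshold
        if not mask.any():
            break
        z[mask] = rng.standard_normal(mask.sum())
    return z
```

In practice the resulting `z` would be fed to the generator in place of an untruncated normal sample; a threshold of 1.0 or below gives the fidelity gains reported in the paper, at the cost of sample variety.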
