Nvidia Research’s latest AI model can learn skills as complex as emulating renowned painters and recreating images of cancer tissue using limited datasets.
By applying a neural network training technique to the Nvidia StyleGAN2 model, Nvidia researchers reimagined artwork based on fewer than 1,500 images from the Metropolitan Museum of Art.
Training a high-quality GAN from scratch typically requires 50,000 to 100,000 images.
Using Nvidia DGX systems to accelerate training, they generated new AI art inspired by the historical portraits.
The technique — called adaptive discriminator augmentation, or ADA — reduces the number of training images required by 10-20x while still producing high-quality results.
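At a high level, ADA augments the images the discriminator sees and continually tunes the augmentation probability based on how strongly the discriminator is overfitting. The sketch below illustrates that feedback loop in simplified form; it is not NVIDIA's implementation, and the target value, step size, and placeholder augmentation are illustrative assumptions.

```python
import random

# Simplified sketch of ADA's feedback loop (illustrative, not NVIDIA's code).
# ADA tunes an augmentation probability p so the discriminator sees augmented
# images just often enough to keep it from memorizing a small training set.

TARGET_RT = 0.6      # target for the overfitting heuristic r_t (assumed value)
ADJUST_STEP = 0.01   # how much to nudge p per update (assumed value)

def update_augment_p(p, real_scores):
    """Raise or lower p based on how confidently the discriminator
    classifies real images: r_t is the mean sign of its raw outputs."""
    r_t = sum(1 if s > 0 else -1 for s in real_scores) / len(real_scores)
    if r_t > TARGET_RT:              # too confident -> likely overfitting
        return min(1.0, p + ADJUST_STEP)
    return max(0.0, p - ADJUST_STEP)  # struggling -> ease off augmentation

def maybe_augment(image, p):
    """Apply a random augmentation with probability p.
    A horizontal flip stands in for ADA's full augmentation pipeline."""
    if random.random() < p:
        return [list(reversed(row)) for row in image]
    return image
```

With p adapting automatically, the same training loop works whether the dataset has 1,500 images or 100,000: on small datasets p climbs and the discriminator rarely sees an unmodified image, while on large datasets p stays near zero.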
According to Nvidia, the same method could someday have a significant impact in healthcare, for example by creating cancer histology images to help train other AI models.
“These results mean people can use GANs to tackle problems where vast quantities of data are too time-consuming or difficult to obtain,” said David Luebke, vice president of graphics research at Nvidia. “I can’t wait to see what artists, medical experts and researchers use it for.”