Leadership, entrepreneurship, and AI insights


© 2018 by Robin.ly. Provided by CrossCircles Inc.

[NeurIPS 2019 Highlight] Sharon Zhou @ Stanford: HYPE of Generative Models



This episode is an interview with Sharon Zhou from Stanford University, discussing highlights from her paper, "HYPE: A Benchmark for Human eYe Perceptual Evaluation of Generative Models," accepted for an oral presentation at the NeurIPS 2019 conference.



Sharon Zhou is a CS PhD student advised by Andrew Ng, working on generative models and the inductive biases of neural networks, as well as applications of ML to climate change and healthcare. She was previously an ML product manager at Google and at various ML startups. She was the first Harvard graduate to major in CS and Classics, and in her spare time she composes poetry and plays with generative models.




Interview with Robin.ly:




Robin.ly is a content platform dedicated to helping engineers and researchers develop leadership, entrepreneurship, and AI insights to scale their impact in the new tech era.


Subscribe to our newsletter to stay updated on more NeurIPS interviews and inspiring AI talks:


Paper At A Glance


Generative models often use human evaluations to measure the perceived quality of their outputs. Automated metrics are noisy, indirect proxies, because they rely on heuristics or pretrained embeddings. However, up until now, direct human evaluation strategies have been ad hoc, neither standardized nor validated. Our work establishes a gold standard human benchmark for generative realism. We construct Human eYe Perceptual Evaluation (HYPE), a human benchmark that is (1) grounded in psychophysics research in perception, (2) reliable across different sets of randomly sampled outputs from a model, (3) able to produce separable model performances, and (4) efficient in cost and time. We introduce two variants: one that measures visual perception under adaptive time constraints to determine the threshold at which a model's outputs appear real (e.g. 250 ms), and the other a less expensive variant that measures human error rate on fake and real images sans time constraints. We test HYPE across six state-of-the-art generative adversarial networks and two sampling techniques on conditional and unconditional image generation using four datasets: CelebA, FFHQ, CIFAR-10, and ImageNet. We find that HYPE can track model improvements across training epochs, and we confirm via bootstrap sampling that HYPE rankings are consistent and replicable.

[Poster]

[Full Paper]
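The untimed HYPE variant described in the abstract scores a model by the rate at which human raters misjudge images (fakes rated real, or reals rated fake), with bootstrap sampling used to check that rankings are stable. A minimal sketch of that idea, assuming a simple list-of-pairs data layout; the function names and details here are illustrative, not the paper's actual code:

```python
import random

def hype_infinity(judgments):
    """Human error rate: fraction of images misjudged.

    `judgments` is a list of (is_fake, judged_real) pairs; an error is a
    fake image judged real or a real image judged fake.
    """
    errors = sum(1 for is_fake, judged_real in judgments
                 if judged_real == is_fake)
    return errors / len(judgments)

def bootstrap_ci(judgments, n_resamples=1000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the error-rate score."""
    rng = random.Random(seed)
    scores = sorted(
        hype_infinity([rng.choice(judgments) for _ in judgments])
        for _ in range(n_resamples)
    )
    lo = scores[int(alpha / 2 * n_resamples)]
    hi = scores[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi
```

A higher error rate means raters were more often fooled, i.e. the model's outputs look more real; non-overlapping bootstrap intervals between two models indicate a separable ranking in the sense the abstract describes.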