[NeurIPS 2019 Highlight] Meena Jagadeesan @ Harvard: Understanding Sparse JL for Feature Hashing

Updated: Mar 4




This episode is an interview with Meena Jagadeesan from Harvard University, discussing highlights from her paper, "Understanding Sparse JL for Feature Hashing," accepted as an oral presentation at the NeurIPS 2019 conference.


Meena Jagadeesan is a senior at Harvard, pursuing an A.B./S.M. (Bachelor’s and Master’s degrees) in computer science. She is broadly interested in research in theoretical computer science. She has received a CRA Outstanding Undergraduate Researcher Award, a Siebel Scholarship, and a Barry Goldwater Scholarship for her research. 


Interview with Robin.ly:




Robin.ly is a content platform dedicated to helping engineers and researchers develop leadership, entrepreneurship, and AI insights to scale their impact in the new tech era.


Subscribe to our newsletter to stay updated on more NeurIPS interviews and inspiring AI talks:


Paper At A Glance


Feature hashing and other random projection schemes are commonly used to reduce the dimensionality of feature vectors. The goal is to efficiently project a high-dimensional feature vector living in R^n into a much lower-dimensional space R^m, while approximately preserving the Euclidean norm. These schemes can be constructed using sparse random projections, for example using a sparse Johnson-Lindenstrauss (JL) transform. A line of work initiated by Weinberger et al. (ICML '09) analyzes the accuracy of sparse JL with sparsity 1 on feature vectors with small l_infinity-to-l_2 norm ratio. Recently, Freksen, Kamma, and Larsen (NeurIPS '18) closed this line of work by proving a tight tradeoff between the l_infinity-to-l_2 norm ratio and accuracy for sparse JL with sparsity 1. In this paper, we demonstrate the benefits of using sparsity s greater than 1 in sparse JL on feature vectors. Our main result is a tight tradeoff between the l_infinity-to-l_2 norm ratio and accuracy for general sparsity s, which significantly generalizes the result of Freksen et al. Our result theoretically demonstrates that sparse JL with s > 1 can have significantly better norm-preservation properties on feature vectors than sparse JL with s = 1; we also demonstrate this finding empirically.
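To make the setup concrete, below is a minimal sketch (in Python/NumPy, not from the paper) of one standard sparse JL construction with sparsity s: each of the n input coordinates is hashed to s of the m output rows with random ±1/√s signs, and the norm of the projected vector is compared to the original. With s = 1 this reduces to the feature hashing scheme of Weinberger et al. The function names and parameter choices here are illustrative assumptions, not the authors' code or exact experimental setup.

```python
import numpy as np

def sparse_jl_matrix(n, m, s, rng):
    """Build an m x n sparse JL matrix: each column has exactly s nonzero
    entries equal to +/- 1/sqrt(s), placed in s distinct rows chosen uniformly
    at random. (One standard construction; the paper's analysis covers this regime.)"""
    A = np.zeros((m, n))
    for j in range(n):
        rows = rng.choice(m, size=s, replace=False)   # s target rows for coordinate j
        signs = rng.choice([-1.0, 1.0], size=s)       # independent random signs
        A[rows, j] = signs / np.sqrt(s)
    return A

rng = np.random.default_rng(0)
n, m = 10_000, 256

# A feature vector whose mass is spread over many coordinates,
# i.e. with a small l_infinity-to-l_2 norm ratio.
x = rng.normal(size=n)

for s in (1, 4, 16):
    A = sparse_jl_matrix(n, m, s, rng)
    y = A @ x
    print(f"s={s:2d}  ||Ax|| / ||x|| = {np.linalg.norm(y) / np.linalg.norm(x):.4f}")
```

On vectors like the one above, the projected-to-original norm ratio should concentrate near 1 for every sparsity; the paper's tradeoff concerns how much larger the l_infinity-to-l_2 ratio of the input can be before this concentration degrades, and how increasing s helps.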

[Presentation Slides]

[Paper in NeurIPS proceedings]












