Simons Institute | Overview of Statistical Learning Theory Part 2
Nati Srebro (Toyota Technological Institute at Chicago)
https://simons.berkeley.edu/talks/nati-srebro-toyota-technological-institute-chicago-2024-08-28
Modern Paradigms in Generalization Boot Camp

In this tutorial I will survey mostly classical, 20th-century statistical learning theory, focusing on generalization through capacity control. We will discuss:
- The Vapnik–Chervonenkis Fundamental Theorem of Learning (a standard form of the resulting bound is sketched after this list)
- Scale-sensitive capacity control and margins
- Minimum Description Length / Occam's Rule / Structural Risk Minimization and PAC-Bayes
- Parallels with Stochastic Optimization
- Generalization and capacity control from optimization: online-to-batch conversion, stochastic approximation, boosting, and minimum-norm / maximum-margin solutions
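
For reference, one standard form of the guarantee behind the Fundamental Theorem: for a hypothesis class H of VC dimension d, with probability at least 1 - \delta over n i.i.d. samples, every h \in H satisfies

L(h) \le \hat{L}(h) + O\!\left( \sqrt{ \frac{d \log(n/d) + \log(1/\delta)}{n} } \right),

and the Occam / MDL counterpart trades VC dimension for description length: if h is encoded in |h| bits, then with the same probability

L(h) \le \hat{L}(h) + \sqrt{ \frac{|h| \ln 2 + \ln(1/\delta)}{2n} }.
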
We will ask how the classical theory fits with current interests, including interpolation learning, benign overfitting, and implicit bias.
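
To make that last point concrete, here is a minimal numpy sketch (my example, not code from the talk) of minimum-norm interpolation in an overparameterized linear model; the pseudoinverse returns the interpolating weight vector of smallest Euclidean norm, which is also the solution gradient descent reaches from a zero initialization on the squared loss:

import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 100                      # more features than samples: many exact fits exist
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

# Among all w with X w = y, the pseudoinverse picks the one of minimum norm.
w = np.linalg.pinv(X) @ y

print(np.allclose(X @ w, y))        # True: the model interpolates the training data
print(np.linalg.norm(w))            # ||w||, not d, plays the role of capacity here

Here the norm ||w||, rather than the parameter count d, controls capacity, which is how scale-sensitive (norm-based) bounds can remain meaningful even when the model fits the training data exactly.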