Simons Institute | Plenary Talk: Privately Evaluating Untrusted Black-Box Functions
Sofya Raskhodnikova (Boston University)
https://simons.berkeley.edu/talks/sofya-raskhodnikova-boston-university-2024-08-07
Workshop on Local Algorithms (WoLA)

We provide tools for sharing sensitive data in situations where the data curator does not know in advance what questions an (untrusted) analyst might want to ask about the data. The analyst can specify a program that they want the curator to run on the dataset. We model the program as a black-box function $f$. We study differentially private algorithms, called privacy wrappers, that, given black-box access to a real-valued function $f$ and a sensitive dataset $x$, output an accurate approximation to $f(x)$. The dataset $x$ is modeled as a finite subset of a possibly infinite universe, in which each entry of $x$ represents the data of one individual. A privacy wrapper calls $f$ on the dataset $x$ and on some subsets of $x$, and returns either an approximation to $f(x)$ or a nonresponse symbol $\perp$. The wrapper may also use additional information (that is, parameters) provided by the analyst, but differential privacy is required for all values of these parameters. Setting these parameters correctly improves the wrapper's accuracy. The bottleneck in the running time of our privacy wrappers is the number of calls to $f$, which we refer to as queries. Our goal is to design privacy wrappers with high accuracy and small query complexity.
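
To make the query model concrete, here is a minimal Python sketch of the interface (the class name `BlackBox` and the query counter are illustrative assumptions, not notation from the talk): the wrapper gets oracle access to $f$, may evaluate it on $x$ and on subsets of $x$, and each evaluation counts as one query.

```python
from typing import Callable, FrozenSet

class BlackBox:
    """Oracle access to f; each evaluation on a subset of x is one query."""
    def __init__(self, f: Callable[[FrozenSet[float]], float]):
        self._f = f
        self.queries = 0

    def __call__(self, s: FrozenSet[float]) -> float:
        self.queries += 1
        return self._f(s)

# A privacy wrapper would probe f on x and on some subsets of x, then
# return either an estimate of f(x) or the nonresponse symbol ⊥.
f = BlackBox(lambda s: float(len(s)))    # toy f: size of the dataset
x = frozenset({1.0, 3.0, 4.0, 5.0})      # a finite subset of the universe
print(f(x), f(x - {3.0}), f.queries)     # -> 4.0 3.0 2
```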

We consider two settings: in the automated sensitivity detection setting, the analyst supplies only the black-box function $f$ and the intended (finite) range of $f$; in the provided sensitivity bound setting, the analyst also supplies additional parameters that describe the sensitivity of $f$. We define accuracy for both settings. We present the first privacy wrapper for the automated sensitivity detection setting. For the setting where a sensitivity bound is provided by the analyst, we design privacy wrappers with simultaneously optimal accuracy and query complexity, improving on the constructions provided (or implied) by previous work. We also prove tight lower bounds for both settings. In addition to addressing the black-box privacy problem, our private mechanisms provide feasibility results for differentially private release of general classes of functions.
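
For comparison, the automated setting always admits a trivially private but inaccurate wrapper: clamp $f$'s output to the supplied range $[lo, hi]$ and add Laplace noise of scale $(hi - lo)/\varepsilon$. Since the clamped output changes by at most $hi - lo$ between neighboring datasets, this is $\varepsilon$-differentially private for every $f$, but its error is on the order of the entire range. A minimal sketch, with all names hypothetical and not taken from the talk:

```python
import numpy as np

def baseline_wrapper(f, x, lo, hi, eps, rng=None):
    """Trivially eps-DP for any f: clamping to [lo, hi] caps the change
    between neighboring datasets at hi - lo, so Laplace noise of scale
    (hi - lo) / eps suffices. Accuracy is poor: error ~ (hi - lo) / eps."""
    rng = rng if rng is not None else np.random.default_rng()
    y = min(max(f(frozenset(x)), lo), hi)            # clamp to declared range
    return y + rng.laplace(scale=(hi - lo) / eps)    # never needs to output ⊥

# Example: a black-box mean over a dataset of values in [0, 100].
x = {12.0, 47.5, 88.0}
estimate = baseline_wrapper(lambda s: sum(s) / len(s), x,
                            lo=0.0, hi=100.0, eps=1.0)
```

This baseline makes a single query and never outputs $\perp$; the constructions in the talk aim to keep the same privacy guarantee while achieving accuracy far below the trivial $(hi - lo)/\varepsilon$ error.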

Joint work with Ephraim Linder, Adam Smith, and Thomas Steinke.