Machine Learning

The Communication Complexity of Distributed Estimation

We study an extension of the standard two-party communication model in which Alice and Bob hold probability distributions $p$ and $q$ over domains $X$ and $Y$, respectively. Their goal is to estimate

$$\mathbb{E}_{x \sim p,\, y \sim q}[f(x, y)]$$

to within additive error $\varepsilon$ for a bounded function $f$, known to both parties. We refer to this as the distributed estimation problem. Special cases of this problem arise in a variety of areas, including sketching, databases, and learning. Our goal is to understand how the required communication scales with the…
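One natural baseline to keep in mind when reading the abstract is naive sampling: Alice sends $k$ i.i.d. samples from $p$, and Bob pairs each with his own sample from $q$ and averages $f$ over the pairs. If $f$ is bounded in $[0, 1]$, Hoeffding's inequality shows that $k = O(\log(1/\delta)/\varepsilon^2)$ samples give additive error at most $\varepsilon$ with probability $1 - \delta$. The sketch below is an illustration of that baseline only, not a protocol from the paper; the function names, the toy distributions, and the example $f$ are hypothetical.

```python
import math
import random


def naive_sampling_protocol(p, q, f, eps, delta=0.1, rng=random):
    """Estimate E_{x~p, y~q}[f(x, y)] for f bounded in [0, 1].

    Alice (who holds p) sends k i.i.d. samples; Bob (who holds q) pairs each
    received x with a fresh sample y ~ q and averages f. By Hoeffding's
    inequality, k >= ln(2/delta) / (2 * eps**2) gives additive error <= eps
    with probability >= 1 - delta.
    """
    k = math.ceil(math.log(2 / delta) / (2 * eps ** 2))

    # Alice's message: k samples drawn from p.
    xs, px = zip(*p.items())
    alice_msg = rng.choices(xs, weights=px, k=k)

    # Bob's side: pair each received x with his own sample y ~ q and average f.
    ys, qy = zip(*q.items())
    bob_samples = rng.choices(ys, weights=qy, k=k)
    return sum(f(x, y) for x, y in zip(alice_msg, bob_samples)) / k


if __name__ == "__main__":
    # Toy example (hypothetical distributions): estimate Pr[x = y].
    p = {0: 0.5, 1: 0.3, 2: 0.2}
    q = {0: 0.2, 1: 0.3, 2: 0.5}
    f = lambda x, y: 1.0 if x == y else 0.0
    est = naive_sampling_protocol(p, q, f, eps=0.05)
    exact = sum(p[x] * q[y] * f(x, y) for x in p for y in q)
    print(f"estimate = {est:.3f}, exact = {exact:.3f}")
```

In this sketch, sending $k$ samples costs roughly $k \lceil \log_2 |X| \rceil$ bits, i.e., $O(\log|X| / \varepsilon^2)$; how the required communication must scale with $\varepsilon$ and the domains is exactly the kind of question the abstract raises.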