*In Annals of Statistics*

*K* eigenvectors and transmits them to the central server; the central server then aggregates the information from all the node machines and conducts a PCA based on the aggregated information. We investigate the bias and variance of the resulting distributed estimator of the top *K* eigenvectors. In particular, we show that for distributions with symmetric innovation, the empirical top eigenspaces are unbiased, and hence the distributed PCA is "unbiased". We derive the rate of convergence for distributed PCA estimators, which depends explicitly on the effective rank of the covariance matrix, the eigen-gap, and the number of machines. We show that when the number of machines is not unreasonably large, the distributed PCA performs as well as the whole-sample PCA, even without full access to the whole data. The theoretical results are verified by an extensive simulation study. We also extend our analysis to the heterogeneous case where the population covariance matrices differ across local machines but share similar top eigen-structures.

*Fan Jianqing, Wang Dong, Wang Kaizheng, Zhu Ziwei*

*2019-Dec*

**Communication Efficiency, Distributed Learning, Heterogeneity, One-shot Approach, PCA, Unbiasedness of Empirical Eigenspaces**