

Social network analysis requires us to perform a variety of tasks, including summarization and prediction, on massive graphs. In this talk, I will present a fast and memory-efficient procedure called clustered low rank matrix approximation for massive graphs. The procedure involves a fast clustering of the graph followed by a separate approximation of each cluster using existing methods, e.g., the singular value decomposition or stochastic algorithms. The clusterwise approximations are then extended to approximate the entire graph. This approach has several benefits: (1) important structure of the graph is preserved due to the clustering; (2) accurate low rank approximations are achieved; (3) the procedure is efficient both in terms of computational speed and memory usage. Further, we generalize stochastic algorithms into the clustered low rank approximation framework and present theoretical bounds for the approximation error. Finally, a set of experiments shows that our methods outperform existing dimensionality reduction algorithms on massive social networks.
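
As a rough illustration of the two-step procedure described above (cluster, then approximate each cluster and extend to the full graph), here is a minimal sketch in Python/NumPy. It is not the speaker's implementation: the function name, the dense symmetric adjacency matrix, the per-cluster eigendecomposition, and the assumption of a precomputed cluster labeling are all illustrative choices.

```python
import numpy as np

def clustered_low_rank(A, labels, rank):
    """Sketch of a clustered low rank approximation of a symmetric matrix A.

    A      : (n, n) dense symmetric adjacency matrix
    labels : length-n array of cluster ids (0 .. c-1), assumed to come from
             any fast graph clustering method
    rank   : target rank per cluster

    Returns a block-diagonal basis V and core matrix S = V.T @ A @ V,
    so that A is approximated by V @ S @ V.T.
    """
    n = A.shape[0]
    blocks = []
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        # Low rank approximation of the diagonal block for this cluster.
        block = A[np.ix_(idx, idx)]
        k = min(rank, block.shape[0])
        w, U = np.linalg.eigh(block)
        # Keep the k eigenvectors with largest-magnitude eigenvalues.
        order = np.argsort(-np.abs(w))[:k]
        Vi = np.zeros((n, k))
        Vi[idx, :] = U[:, order]
        blocks.append(Vi)
    V = np.hstack(blocks)   # one orthonormal block per cluster
    S = V.T @ A @ V         # small core matrix; off-diagonal blocks of S
                            # capture links between clusters
    return V, S


# Tiny usage example with a random symmetric matrix and a made-up clustering.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.random((100, 100))
    A = (A + A.T) / 2
    labels = np.repeat([0, 1], 50)        # hypothetical cluster assignment
    V, S = clustered_low_rank(A, labels, rank=5)
    A_hat = V @ S @ V.T
    print("relative error:", np.linalg.norm(A - A_hat) / np.linalg.norm(A))
```

Memory efficiency in this sketch comes from V being block diagonal (only the rows belonging to a cluster are nonzero in that cluster's block), so only the per-cluster bases and the small core matrix S need to be stored; a practical implementation would keep A and V in sparse or blocked form rather than dense arrays.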
