
How framelets enhance graph neural networks

Apr 30, 2021 9:00 am - 10:00 am AEST


This work presents a new approach for assembling graph neural networks based on framelet transforms, which provide a multi-scale representation of graph-structured data. With the framelet system, graph features can be decomposed into low-pass and high-pass frequency components that serve as extracted features for network training, and this decomposition defines a framelet-based graph convolution. The framelet decomposition also induces a natural graph pooling strategy that aggregates graph features into low-pass and high-pass spectra; this pooling accounts for both the feature values and the geometry of the graph data and conserves the total information. Graph neural networks equipped with the proposed framelet convolution and pooling achieve state-of-the-art performance in many types of node and graph prediction tasks. Moreover, we propose shrinkage as a new activation for the framelet convolution, which thresholds the high-frequency information at different scales. Compared to ReLU, shrinkage in the framelet convolution improves the graph neural network model in terms of denoising and signal compression: noise in both node features and graph structure can be significantly reduced by accurately cutting off the high-pass coefficients from the framelet decomposition, and the signal can be compressed to less than half its original size with the prediction performance well preserved.
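
To make the pipeline concrete, below is a minimal, self-contained NumPy sketch of the underlying idea, not the authors' implementation: it assumes a single-level, Haar-type framelet pair built from the normalized graph Laplacian, splits node features into low-pass and high-pass coefficients, applies soft-threshold shrinkage to the high-pass block, and reconstructs. The function names (`framelet_decompose`, `shrinkage`) and the toy graph are illustrative only; the framelet convolution described above additionally operates across multiple scales and learns on these coefficients during training.

```python
# Minimal illustrative sketch (not the authors' code): one-level framelet
# decomposition of node features via the graph Laplacian spectrum, with
# soft-threshold "shrinkage" applied to the high-pass coefficients.
import numpy as np

def normalized_laplacian(A):
    """Symmetric normalized Laplacian L = I - D^{-1/2} A D^{-1/2}."""
    d = A.sum(axis=1)
    d_inv_sqrt = np.zeros_like(d)
    d_inv_sqrt[d > 0] = d[d > 0] ** -0.5
    return np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

def framelet_decompose(A, X):
    """Split node features X into low-pass and high-pass framelet coefficients.

    Uses the Haar-type pair a(x) = cos(pi*x/2), b(x) = sin(pi*x/2) on the
    rescaled Laplacian eigenvalues, so a^2 + b^2 = 1 and the two filter
    matrices form a tight frame (total information is conserved).
    """
    L = normalized_laplacian(A)
    lam, U = np.linalg.eigh(L)
    x = lam / lam.max()                       # rescale eigenvalues into [0, 1]
    W_low = U @ np.diag(np.cos(np.pi * x / 2)) @ U.T
    W_high = U @ np.diag(np.sin(np.pi * x / 2)) @ U.T
    return W_low @ X, W_high @ X, W_low, W_high

def shrinkage(C, tau):
    """Soft-thresholding: shrink small high-pass coefficients toward zero."""
    return np.sign(C) * np.maximum(np.abs(C) - tau, 0.0)

# Toy graph: a 4-node path with 2-dimensional node features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.random.randn(4, 2)

low, high, W_low, W_high = framelet_decompose(A, X)
high_denoised = shrinkage(high, tau=0.1)      # cut off small high-frequency "noise"
# The filter matrices are symmetric and W_low^2 + W_high^2 = I, so without
# shrinkage this reconstruction recovers X exactly.
X_rec = W_low @ low + W_high @ high_denoised
```

In an actual framelet convolution layer, each coefficient block would also be scaled by trainable filter weights before reconstruction; the shrinkage step here only illustrates the denoising and compression effect on the high-pass spectra.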

*This is joint work with Xuebin Zheng, Bingxin Zhou, Junbin Gao (University of Sydney); Pietro Lio (University of Cambridge); Ming Li (Zhejiang Normal University); Guido Montufar (UCLA/MPI MIS).

Short Bio

Yuguang Wang obtained his PhD from the University of New South Wales, Sydney, and a Graduate Certificate in deep learning and data science from the University of California, Los Angeles. He is a scientist at the Max Planck Institute for Mathematics in the Sciences, working on deep learning theory, applied harmonic analysis, and graph neural networks. His research appears in top conferences such as NeurIPS and ICML, and in leading journals including Applied and Computational Harmonic Analysis, Journal of Approximation Theory, ACM Transactions on Mathematical Software, and Neural Networks. He is an editor of Frontiers in Applied Mathematics and Statistics, a guest editor for IEEE TNNLS (2021), and a program committee member for ICML, NeurIPS, ICLR, and IJCAI. He has been a long-term visitor at UCLA and Brown University, working on computational harmonic analysis and geometric deep learning.