publications
publications by category in reverse chronological order. generated by jekyll-scholar.
2022
- [FSGNN] FederatedScope-GNN: Towards a Unified, Comprehensive and Efficient Package for Federated Graph Learning. Zhen Wang, Weirui Kuang, Yuexiang Xie, Liuyi Yao, and 3 more authors. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Washington DC, USA, 2022.
The incredible development of federated learning (FL) has benefited various tasks in the domains of computer vision and natural language processing, and existing frameworks such as TFF and FATE have made deployment easy in real-world applications. However, federated graph learning (FGL), even though graph data are prevalent, has not been well supported due to its unique characteristics and requirements. The lack of an FGL-related framework increases the effort required for reproducible research and real-world deployment. Motivated by such strong demand, in this paper, we first discuss the challenges in creating an easy-to-use FGL package and accordingly present our implemented package FederatedScope-GNN (FS-G), which provides (1) a unified view for modularizing and expressing FGL algorithms; (2) comprehensive DataZoo and ModelZoo for out-of-the-box FGL capability; (3) an efficient model auto-tuning component; and (4) off-the-shelf privacy attack and defense abilities. We validate the effectiveness of FS-G by conducting extensive experiments, which simultaneously yield many valuable insights about FGL for the community. Moreover, we employ FS-G to serve FGL applications in real-world e-commerce scenarios, where the attained improvements indicate great potential business benefits. We publicly release FS-G, as submodules of FederatedScope, at https://github.com/alibaba/FederatedScope to promote FGL research and enable broad applications that would otherwise be infeasible due to the lack of a dedicated package.
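To make concrete the kind of client-server workflow that FS-G modularizes, here is a minimal, framework-free sketch of one FedAvg-style round over per-client model weights. It illustrates the general federated training loop only; it is not FS-G's actual API, and all names and toy sizes below are assumptions. For real usage, see the repository linked above.

```python
# A minimal sketch of one FedAvg-style federated round, the kind of loop an
# FGL package coordinates. NOT the FederatedScope-GNN API; names are illustrative.
import numpy as np

def local_update(weights, grad, lr=0.01):
    # Stand-in for a client's local GNN training step on its own graph data.
    return weights - lr * grad

def fedavg(client_weights, client_sizes):
    # Server aggregates client models, weighted by local dataset size.
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Toy round: three clients share a 4-parameter model (sizes are made up).
rng = np.random.default_rng(0)
global_w = np.zeros(4)
sizes = [100, 50, 150]
local_ws = [local_update(global_w, rng.normal(size=4)) for _ in sizes]
global_w = fedavg(local_ws, sizes)
print(global_w)
```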
@inproceedings{FSGNN,
  dimensions = {true},
  title      = {FederatedScope-GNN: Towards a Unified, Comprehensive and Efficient Package for Federated Graph Learning},
  url        = {https://doi.org/10.1145/3534678.3539112},
  doi        = {10.1145/3534678.3539112},
  year       = {2022},
  author     = {Wang, Zhen and Kuang, Weirui and Xie, Yuexiang and Yao, Liuyi and Li, Yaliang and Ding, Bolin and Zhou, Jingren},
  isbn       = {9781450393850},
  publisher  = {Association for Computing Machinery},
  address    = {New York, NY, USA},
  booktitle  = {Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining},
  pages      = {4110--4120},
  numpages   = {11},
  keywords   = {federated learning, graph neural networks},
  location   = {Washington DC, USA},
  series     = {KDD '22}
}
2014
- [TransH] Knowledge Graph Embedding by Translating on Hyperplanes. Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. Proceedings of the AAAI Conference on Artificial Intelligence, Jun 2014.
We deal with embedding a large-scale knowledge graph composed of entities and relations into a continuous vector space. TransE is a promising recently proposed method that is very efficient while achieving state-of-the-art predictive performance. We discuss some mapping properties of relations that should be considered in embedding, such as reflexive, one-to-many, many-to-one, and many-to-many, and note that TransE does not handle these properties well. Some complex models are capable of preserving these mapping properties but sacrifice efficiency in the process. To make a good trade-off between model capacity and efficiency, in this paper we propose TransH, which models a relation as a hyperplane together with a translation operation on it. In this way, we can well preserve the above mapping properties of relations with almost the same model complexity as TransE. Additionally, as a practical knowledge graph is often far from complete, how to construct negative examples to reduce false-negative labels in training is very important. Utilizing the one-to-many/many-to-one mapping property of a relation, we propose a simple trick to reduce the chance of false-negative labeling. We conduct extensive experiments on link prediction, triplet classification, and fact extraction on benchmark datasets such as WordNet and Freebase. Experiments show that TransH delivers significant improvements over TransE in predictive accuracy with comparable scalability.
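The hyperplane projection that distinguishes TransH from TransE is compact enough to sketch in code. Below is an illustrative NumPy implementation of the published scoring function f_r(h, t) = ||(h − w_rᵀh w_r) + d_r − (t − w_rᵀt w_r)||²; the embedding dimension and random vectors are assumptions for the demo, not values from the paper.

```python
# Illustrative TransH scoring function; lower scores mean more plausible triples.
import numpy as np

def transh_score(h, t, w_r, d_r):
    """Project h and t onto the hyperplane with unit normal w_r,
    then measure how well the translation d_r links the projections."""
    w_r = w_r / np.linalg.norm(w_r)        # enforce ||w_r||_2 = 1
    h_perp = h - np.dot(w_r, h) * w_r      # projection onto the hyperplane
    t_perp = t - np.dot(w_r, t) * w_r
    return np.sum((h_perp + d_r - t_perp) ** 2)

# Toy usage with random embeddings of (assumed) dimension 50.
rng = np.random.default_rng(0)
h, t, w_r, d_r = (rng.normal(size=50) for _ in range(4))
print(transh_score(h, t, w_r, d_r))
```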
@article{TransH,
  dimensions = {true},
  title      = {Knowledge Graph Embedding by Translating on Hyperplanes},
  volume     = {28},
  url        = {https://ojs.aaai.org/index.php/AAAI/article/view/8870},
  doi        = {10.1609/aaai.v28i1.8870},
  number     = {1},
  journal    = {Proceedings of the AAAI Conference on Artificial Intelligence},
  author     = {Wang, Zhen and Zhang, Jianwen and Feng, Jianlin and Chen, Zheng},
  year       = {2014},
  month      = jun
}