⚔🛡 Awesome Graph Adversarial Learning
- ⚔🛡 Awesome Graph Adversarial Learning
- 👀Quick Look
- ⚔Attack
- 🛡Defense
- 🔐Certification
- ⚖Stability
- 🚀Others
- 📃Survey
- ⚙Toolbox
- 🔗Resource
This repository contains Attack-related papers, Defense-related papers, Robustness Certification papers, etc., ranging from 2017 to 2023. If you find this repo useful, please cite: A Survey of Adversarial Learning on Graph, arXiv'20, Link
@article{chen2020survey,
  title={A Survey of Adversarial Learning on Graph},
  author={Chen, Liang and Li, Jintang and Peng, Jiaying and Xie, Tao and Cao, Zengxu and Xu, Kun and He, Xiangnan and Zheng, Zibin and Wu, Bingzhe},
  journal={arXiv preprint arXiv:2003.05730},
  year={2020}
}
👀Quick Look
The papers in this repo are categorized and sorted in the following ways:
| By Alphabet | By Year | By Venue | Papers with Code |
For a quick look at the papers updated in the repository within the last 30 days, refer to 📍this.
⚔Attack
2023
- Revisiting Graph Adversarial Attack and Defense From a Data Distribution Perspective, 📝ICLR, :octocat:Code
- Let Graph be the Go Board: Gradient-free Node Injection Attack for Graph Neural Networks via Reinforcement Learning, 📝AAAI, :octocat:Code
- GUAP: Graph Universal Attack Through Adversarial Patching, 📝arXiv, :octocat:Code
- Node Injection for Class-specific Network Poisoning, 📝arXiv, :octocat:Code
- Unnoticeable Backdoor Attacks on Graph Neural Networks, 📝WWW, :octocat:Code
- A semantic backdoor attack against Graph Convolutional Networks, 📝arXiv
2022
- Adversarial Attack on Graph Neural Networks as An Influence Maximization Problem, 📝WSDM, :octocat:Code
- Inference Attacks Against Graph Neural Networks, 📝USENIX Security, :octocat:Code
- Model Stealing Attacks Against Inductive Graph Neural Networks, 📝IEEE Symposium on Security and Privacy, :octocat:Code
- Unsupervised Graph Poisoning Attack via Contrastive Loss Back-propagation, 📝WWW, :octocat:Code
- Neighboring Backdoor Attacks on Graph Convolutional Network, 📝arXiv, :octocat:Code
- Understanding and Improving Graph Injection Attack by Promoting Unnoticeability, 📝ICLR, :octocat:Code
- Blindfolded Attackers Still Threatening: Strict Black-Box Adversarial Attacks on Graphs, 📝AAAI, :octocat:Code
- More is Better (Mostly): On the Backdoor Attacks in Federated Graph Neural Networks, 📝arXiv
- Black-box Node Injection Attack for Graph Neural Networks, 📝arXiv, :octocat:Code
- Interpretable and Effective Reinforcement Learning for Attacking against Graph-based Rumor Detection, 📝arXiv
- Projective Ranking-based GNN Evasion Attacks, 📝arXiv
- GAP: Differentially Private Graph Neural Networks with Aggregation Perturbation, 📝arXiv
- Model Extraction Attacks on Graph Neural Networks: Taxonomy and Realization, 📝Asia CCS, :octocat:Code
- Bandits for Structure Perturbation-based Black-box Attacks to Graph Neural Networks with Theoretical Guarantees, 📝CVPR, :octocat:Code
- Transferable Graph Backdoor Attack, 📝RAID, :octocat:Code
- Adversarial Robustness of Graph-based Anomaly Detection, 📝arXiv
- Label specificity attack: Change your label as I want, 📝IJIS
- AdverSparse: An Adversarial Attack Framework for Deep Spatial-Temporal Graph Neural Networks, 📝ICASSP
- Surrogate Representation Learning with Isometric Mapping for Gray-box Graph Adversarial Attacks, 📝WSDM
- Cluster Attack: Query-based Adversarial Attacks on Graphs with Graph-Dependent Priors, 📝IJCAI, :octocat:Code
- Label-Only Membership Inference Attack against Node-Level Graph Neural Networks, 📝arXiv
- Adversarial Camouflage for Node Injection Attack on Graphs, 📝arXiv
- Are Gradients on Graph Structure Reliable in Gray-box Attacks?, 📝CIKM, :octocat:Code
- Graph Structural Attack by Perturbing Spectral Distance, 📝KDD
- What Does the Gradient Tell When Attacking the Graph Structure, 📝arXiv
- BinarizedAttack: Structural Poisoning Attacks to Graph-based Anomaly Detection, 📝ICDM, :octocat:Code
- Model Inversion Attacks against Graph Neural Networks, 📝TKDE
- Sparse Vicious Attacks on Graph Neural Networks, 📝arXiv, :octocat:Code
- Poisoning GNN-based Recommender Systems with Generative Surrogate-based Attacks, 📝ACM TIS
- Dealing with the unevenness: deeper insights in graph-based attack and defense, 📝Machine Learning
- Membership Inference Attacks Against Robust Graph Neural Network, 📝CSS
- Adversarial Inter-Group Link Injection Degrades the Fairness of Graph Neural Networks, 📝ICDM, :octocat:Code
- Revisiting Item Promotion in GNN-based Collaborative Filtering: A Masked Targeted Topological Attack Perspective, 📝arXiv
- Link-Backdoor: Backdoor Attack on Link Prediction via Node Injection, 📝arXiv, :octocat:Code
- Private Graph Extraction via Feature Explanations, 📝arXiv
- Towards Secrecy-Aware Attacks Against Trust Prediction in Signed Graphs, 📝arXiv
- Camouflaged Poisoning Attack on Graph Neural Networks, 📝ICDM
- LOKI: A Practical Data Poisoning Attack Framework against Next Item Recommendations, 📝TKDE
- Adversarial for Social Privacy: A Poisoning Strategy to Degrade User Identity Linkage, 📝arXiv
- Exploratory Adversarial Attacks on Graph Neural Networks for Semi-Supervised Node Classification, 📝Pattern Recognition
- GANI: Global Attacks on Graph Neural Networks via Imperceptible Node Injections, 📝arXiv, :octocat:Code
- Motif-Backdoor: Rethinking the Backdoor Attack on Graph Neural Networks via Motifs, 📝arXiv
- Are Defenses for Graph Neural Networks Robust?, 📝NeurIPS, :octocat:Code
- Adversarial Label Poisoning Attack on Graph Neural Networks via Label Propagation, 📝ECCV
- Imperceptible Adversarial Attacks on Discrete-Time Dynamic Graph Models, 📝NeurIPS
- Towards Reasonable Budget Allocation in Untargeted Graph Structure Attacks via Gradient Debias, 📝NeurIPS, :octocat:Code
- Adversary for Social Good: Leveraging Attribute-Obfuscating Attack to Protect User Privacy on Social Networks, 📝SecureComm
2021
- Stealing Links from Graph Neural Networks, 📝USENIX Security
- PATHATTACK: Attacking Shortest Paths in Complex Networks, 📝arXiv
- Structack: Structure-based Adversarial Attacks on Graph Neural Networks, 📝ACM Hypertext, :octocat:Code
- Optimal Edge Weight Perturbations to Attack Shortest Paths, 📝arXiv
- GReady for Emerging Threats to Recommender Systems? A Graph Convolution-based Generative Shilling Attack, 📝Information Sciences
- Graph Adversarial Attack via Rewiring, 📝KDD, :octocat:Code
- Membership Inference Attack on Graph Neural Networks, 📝arXiv
- Graph Backdoor, 📝USENIX Security
- TDGIA: Effective Injection Attacks on Graph Neural Networks,