PyGIP

PyGIP is a comprehensive Python library focused on model extraction attacks and defenses in Graph Neural Networks (GNNs). Built on PyTorch, PyTorch Geometric, and DGL, the library offers a robust framework for understanding, implementing, and defending against attacks targeting GNN models.

Key features of PyGIP:

  • Extensive Attack Implementations: Multiple strategies for GNN model extraction attacks, including fidelity and accuracy evaluation (see the metric sketch after this list).

  • Defensive Techniques: Tools for creating robust defense mechanisms, such as watermarking graphs and inserting synthetic nodes.

  • Unified API: Intuitive APIs for both attacks and defenses.

  • Integration with PyTorch/DGL: Seamlessly integrates with PyTorch Geometric and DGL for scalable graph processing.

  • Customizable: Supports user-defined attack and defense configurations.
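
Fidelity and accuracy are the two standard evaluation metrics for model extraction: fidelity is the fraction of inputs on which the extracted (surrogate) model agrees with the victim model's predictions, and accuracy is the fraction on which it matches the ground-truth labels. The snippet below is a minimal, library-agnostic sketch of both metrics in plain PyTorch; the variable names (victim_logits, surrogate_logits, labels) are illustrative and not part of the PyGIP API.

import torch

def fidelity(victim_logits: torch.Tensor, surrogate_logits: torch.Tensor) -> float:
    # Fraction of nodes where the surrogate predicts the same class as the victim.
    return (victim_logits.argmax(dim=1) == surrogate_logits.argmax(dim=1)).float().mean().item()

def accuracy(surrogate_logits: torch.Tensor, labels: torch.Tensor) -> float:
    # Fraction of nodes where the surrogate predicts the ground-truth label.
    return (surrogate_logits.argmax(dim=1) == labels).float().mean().item()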

Quick Start Example:

Run a model extraction attack in five lines of code:

from datasets import Cora
from models.attack import ModelExtractionAttack0

# Load the Cora dataset
dataset = Cora()

# Initialize the attack with a sampling ratio of 0.25
mea = ModelExtractionAttack0(dataset, 0.25)

# Execute the attack
mea.attack()
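
Defenses are intended to follow the same pattern as attacks. The snippet below is only a hedged sketch: it assumes a defense class such as RandomWM (listed under Defense Modules) can be imported from models.defense and constructed with a dataset and a ratio, mirroring the attack API above; the module path, constructor signature, and defend() method are assumptions rather than confirmed PyGIP API.

from datasets import Cora
from models.defense import RandomWM  # assumed module path and class name

# Load the Cora dataset
dataset = Cora()

# Initialize the watermarking defense (arguments assumed to mirror the attack API)
defense = RandomWM(dataset, 0.25)

# Apply the defense (method name is an assumption)
defense.defend()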

Attack Modules

  • MEA: Wu, Bang, et al. “Model extraction attacks on graph neural networks: Taxonomy and realisation.” Proceedings of the 2022 ACM on Asia Conference on Computer and Communications Security. 2022.

  • AdvMEA: DeFazio, David, and Arti Ramesh. “Adversarial model extraction on graph neural networks.” arXiv preprint arXiv:1912.07721 (2019).

  • CEGA: Wang, Zebin, et al. “CEGA: A Cost-Effective Approach for Graph-Based Model Extraction and Acquisition.” arXiv preprint arXiv:2506.17709 (2025).

  • DataFreeMEA: Zhuang, Yuanxin, et al. “Unveiling the Secrets without Data: Can Graph Neural Networks Be Exploited through Data-Free Model Extraction Attacks?” 33rd USENIX Security Symposium (USENIX Security 24). 2024.

  • Realistic: Guan, Faqian, et al. “A realistic model extraction attack against graph neural networks.” Knowledge-Based Systems 300 (2024): 112144.

Defense Modules

  • RandomWM: Zhao, Xiangyu, Hanzhou Wu, and Xinpeng Zhang. “Watermarking graph neural networks by random graphs.” 2021 9th International Symposium on Digital Forensics and Security (ISDFS). IEEE, 2021.

  • BackdoorWM: Xu, Jing, et al. “Watermarking graph neural networks based on backdoor attacks.” 2023 IEEE 8th European Symposium on Security and Privacy (EuroS&P). IEEE, 2023.

  • SurviveWM: Wang, Haiming, et al. “Making Watermark Survive Model Extraction Attacks in Graph Neural Networks.” ICC 2023 - IEEE International Conference on Communications. IEEE, 2023.

  • ImperceptibleWM: Zhang, Linji, et al. “An imperceptible and owner-unique watermarking method for graph neural networks.” Proceedings of the ACM Turing Award Celebration Conference - China 2024. 2024.

  • ATOM: Cheng, Zhan, et al. “ATOM: A framework of detecting query-based model extraction attacks for graph neural networks.” Proceedings of the 31st ACM SIGKDD Conference on Knowledge Discovery and Data Mining V.2. 2025.

  • Integrity: Wu, Bang, et al. “Securing graph neural networks in MLaaS: A comprehensive realization of query-based integrity verification.” 2024 IEEE Symposium on Security and Privacy (SP). IEEE, 2024.
