ADVME
A class implementing adversarial model extraction attacks on graph neural networks using DGL.
Parameters
dataset : Dataset
    The target graph dataset to perform extraction on.
attack_node_fraction : float
    Fraction of nodes to use in the attack.
model_path : str, optional
    Path to load the pre-trained target model from.
Attributes
graph : DGLGraph
    The target graph to attack.
features : torch.Tensor
    Node feature matrix.
labels : torch.Tensor
    Node label tensor.
label_number : int
    Number of unique classes.
node_number : int
    Total number of nodes.
feature_number : int
    Number of node features.
net1 : nn.Module
    Target model to extract.
Methods
attack()
Executes the model extraction attack.
Process:
1. Selects a center node and extracts its k-hop subgraph.
2. Constructs prior distributions for features and node counts per class.
3. Generates synthetic graphs from the prior distributions (sketched below).
4. Queries the target model for labels.
5. Trains a surrogate model on the synthetic data.
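The prior construction and synthetic-graph generation in steps 2-3 can be sketched roughly as follows. This is a minimal illustration, not the library's implementation: build_class_priors and sample_synthetic_graph are hypothetical helpers, and dgl.rand_graph is used only as a stand-in topology generator.

import torch
import dgl

def build_class_priors(features, labels, num_classes):
    """Estimate per-class feature frequencies and active-feature counts per node."""
    priors = []
    for c in range(num_classes):
        feats_c = features[labels == c]                # nodes observed for class c
        freq = feats_c.float().mean(dim=0)             # how often each feature is active
        counts = (feats_c > 0).sum(dim=1).float()      # number of active features per node
        priors.append((freq, counts))
    return priors

def sample_synthetic_graph(prior, num_nodes, num_edges, feature_dim):
    """Sample a small random graph whose node features follow one class prior."""
    freq, counts = prior
    feats = torch.zeros(num_nodes, feature_dim)
    for i in range(num_nodes):
        k = int(counts[torch.randint(len(counts), (1,))])   # draw an active-feature count
        idx = torch.multinomial(freq + 1e-8, max(k, 1))     # pick features by class frequency
        feats[i, idx] = 1.0
    g = dgl.rand_graph(num_nodes, num_edges)                # placeholder random topology
    g.ndata['feat'] = feats
    return g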
Implementation details:
- Uses k-hop subgraph sampling (k=2); see the sampling sketch below
- Filters subgraphs to 10-150 nodes
- Generates n=10 synthetic graphs per class
- Trains the surrogate for 200 epochs using the Adam optimizer
- Evaluates using fidelity and accuracy metrics
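A sketch of the subgraph sampling step, assuming dgl.khop_in_subgraph is available (DGL 0.8+); sample_khop_subgraph is an illustrative helper, not the class's actual method.

import dgl
import torch

def sample_khop_subgraph(graph, k=2, min_nodes=10, max_nodes=150, max_tries=100):
    """Pick random center nodes until a k-hop subgraph of acceptable size is found."""
    for _ in range(max_tries):
        center = torch.randint(graph.num_nodes(), (1,))
        # dgl.khop_in_subgraph returns the induced k-hop subgraph and the
        # relabelled index of the seed node(s).
        sg, _ = dgl.khop_in_subgraph(graph, center, k)
        if min_nodes <= sg.num_nodes() <= max_nodes:
            return sg
    return None  # no suitable subgraph found within the budget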
Returns
GraphNeuralNetworkMetric
    Performance metrics (fidelity and accuracy) of the extracted model; see the evaluation sketch below.
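Conceptually, the two scores reduce to agreement rates: fidelity compares the surrogate's predictions against the target model's, accuracy against the ground-truth labels. The sketch below assumes DGL-style models called as model(graph, features); evaluate_extraction is an illustrative helper, not the GraphNeuralNetworkMetric API.

import torch

def evaluate_extraction(target_model, surrogate_model, graph, features, labels, test_mask):
    """Fidelity: agreement with the target model; accuracy: agreement with ground truth."""
    target_model.eval()
    surrogate_model.eval()
    with torch.no_grad():
        target_pred = target_model(graph, features).argmax(dim=1)
        surrogate_pred = surrogate_model(graph, features).argmax(dim=1)
    fidelity = (surrogate_pred[test_mask] == target_pred[test_mask]).float().mean().item()
    accuracy = (surrogate_pred[test_mask] == labels[test_mask]).float().mean().item()
    return fidelity, accuracy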
Key Features:
- Maintains feature distributions per class
- Preserves node feature count distributions
- Combines multiple subgraphs into the training set (see the batching sketch below)
- Uses DGL for efficient graph operations
- Implements early stopping based on performance
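Combining the per-class synthetic subgraphs into one training set maps naturally onto dgl.batch, which merges graphs into a single disconnected graph while keeping aligned node data. Toy example; the graph sizes and the feature width of 16 are arbitrary.

import dgl
import torch

# Two toy synthetic subgraphs with the same feature width.
g1 = dgl.rand_graph(20, 40)
g1.ndata['feat'] = torch.rand(20, 16)
g2 = dgl.rand_graph(35, 70)
g2.ndata['feat'] = torch.rand(35, 16)

train_graph = dgl.batch([g1, g2])  # one disconnected graph; node features are concatenated
print(train_graph.num_nodes())     # 55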
Notes:
- Requires the target model to be in eval mode
- Uses dropout and weight decay for regularization
- Monitors both fidelity to the target and accuracy on the test set
- Prints progress during the extraction process
- Returns the best achieved performance metrics (see the training sketch below)
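The surrogate training described above can be sketched as a standard supervised loop over the labels queried from the target model; train_surrogate is a hypothetical helper, and the learning rate and weight decay are typical GCN defaults rather than values taken from the library.

import torch
import torch.nn.functional as F

def train_surrogate(surrogate, graph, features, target_labels, epochs=200,
                    lr=0.01, weight_decay=5e-4, eval_fn=None):
    """Fit the surrogate on target-model labels, keeping the best metrics seen."""
    optimizer = torch.optim.Adam(surrogate.parameters(), lr=lr, weight_decay=weight_decay)
    best = None
    for _ in range(epochs):
        surrogate.train()
        logits = surrogate(graph, features)
        loss = F.cross_entropy(logits, target_labels)  # labels queried from the target
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        if eval_fn is not None:                        # e.g. returns a (fidelity, accuracy) tuple
            metrics = eval_fn(surrogate)
            if best is None or metrics > best:
                best = metrics                         # keep the best performance observed
    return best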
# Import the necessary classes and functions from the pygip library.
from pygip.datasets.datasets import *  # Import all available datasets.
from pygip.protect import *  # Import all core algorithms.

# Load the Cora dataset, which is commonly used in graph neural network research.
dataset = Cora()

# Initialize a model extraction attack on the Cora dataset (attack_node_fraction=0.25).
adversarial_attack = AdversarialModelExtraction(dataset, 0.25)

# Execute the attack against the target model.
adversarial_attack.attack()
Adversarial Attack on Cora
NumNodes: 2708
NumEdges: 10556
NumFeats: 1433
NumClasses: 7
NumTrainingSamples: 140
NumValidationSamples: 500
NumTestSamples: 1000
Done loading data from cached files.
=========Target Model Generating==========================
100%|█████████████████| 200/200 [00:00<00:00, 328.01it/s]
100%|█████████████████| 200/200 [00:02<00:00, 85.62it/s]
Fidelity: 0.767
Accuracy: 0.719