pygip.models.attack.mea.MEA

Classes

ModelExtractionAttack(dataset, ...[, ...])
    Base class: trains the target GCN or loads a pre-trained one, and defines the attack() interface.

ModelExtractionAttack0(dataset, ...[, ...])
    Samples a subset of nodes to query and synthesizes neighbor features to train a surrogate GCN.

ModelExtractionAttack1(dataset, ...)
    Builds a shadow graph from files of selected nodes, query labels, and an adjacency matrix.

ModelExtractionAttack2(dataset, ...[, ...])
    Randomly samples attack nodes and trains an extraction model on synthetic identity features.

ModelExtractionAttack3(dataset, ...[, ...])
    Merges partial subgraphs selected via a shadow graph index file into a combined adjacency matrix.

ModelExtractionAttack4(dataset, ...[, ...])
    Merges adjacency matrices from node-index files and links new edges by feature similarity.

ModelExtractionAttack5(dataset, ...[, ...])
    Like ModelExtractionAttack4, with a different threshold-distance rule for linking edges.
class pygip.models.attack.mea.MEA.ModelExtractionAttack(dataset, attack_node_fraction, model_path=None, alpha=0.8)[source]

Bases: BaseAttack

_load_model(model_path)[source]

Load a pre-trained model from a file.

_train_target_model()[source]

Train the target model (GCN) on the original graph.

attack()[source]

Execute the attack.

supported_api_types = {'dgl'}
supported_datasets = {'CiteSeer', 'CoauthorCS', 'CoauthorPhysics', 'Computers', 'Cora', 'Photo', 'PubMed'}
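
Every attack() below reports fidelity and accuracy of the extracted model. As a hedged illustration (pygip's actual metric code may differ), fidelity is commonly the agreement rate between the extracted model's predictions and the target model's predictions, while accuracy compares against ground-truth labels:

```python
def fidelity(extracted_preds, target_preds):
    """Fraction of nodes where the extracted model agrees with the target model."""
    return sum(e == t for e, t in zip(extracted_preds, target_preds)) / len(target_preds)

def accuracy(preds, labels):
    """Fraction of nodes where predictions match the ground-truth labels."""
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

# Toy example over 5 test nodes
target = [0, 1, 2, 1, 0]     # target model predictions
extracted = [0, 1, 2, 2, 0]  # extracted (surrogate) model predictions
truth = [0, 1, 1, 1, 0]      # ground-truth labels

fid = fidelity(extracted, target)  # 4/5 agreement with the target
acc = accuracy(extracted, truth)   # 3/5 agreement with the labels
```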
class pygip.models.attack.mea.MEA.ModelExtractionAttack0(dataset, attack_node_fraction, model_path=None, alpha=0.8)[source]

Bases: ModelExtractionAttack

attack()[source]

Main attack procedure.

  1. Samples a subset of nodes (sub_graph_node_index) for querying.

  2. Synthesizes features for neighboring nodes and their neighbors.

  3. Builds a sub-graph, trains a new GCN on it, and evaluates fidelity & accuracy w.r.t. the target model.
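
The sampling and synthesis steps above can be sketched as follows. This is a simplified plain-Python illustration, not pygip's internals; in particular, synthesizing an unknown node's features as the mean of its known neighbors' features is one plausible reading of step 2:

```python
import random

def sample_query_nodes(num_nodes, attack_node_fraction, seed=0):
    """Step 1 (sketch): sample a fraction of node indices to query the target with."""
    rng = random.Random(seed)
    k = int(num_nodes * attack_node_fraction)
    return sorted(rng.sample(range(num_nodes), k))

def synthesize_feature(node, adjacency, features):
    """Step 2 (sketch): approximate an unknown node's features by averaging
    the features of its neighbors."""
    neighbors = [j for j, connected in enumerate(adjacency[node]) if connected]
    if not neighbors:
        return [0.0] * len(features[0])
    dim = len(features[0])
    return [sum(features[j][d] for j in neighbors) / len(neighbors) for d in range(dim)]

# Toy 4-node path graph 0-1-2-3 with 2-d features
adj = [[0, 1, 0, 0],
       [1, 0, 1, 0],
       [0, 1, 0, 1],
       [0, 0, 1, 0]]
feats = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.0, 0.0]]

queried = sample_query_nodes(4, 0.5)        # half the nodes get queried
synth = synthesize_feature(1, adj, feats)   # mean of node 0 and node 2 features
```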

get_nonzero_indices(matrix_row)[source]

Return the indices of the non-zero entries in a given adjacency-matrix row.

class pygip.models.attack.mea.MEA.ModelExtractionAttack1(dataset, attack_node_fraction)[source]

Bases: ModelExtractionAttack

attack()[source]

Main attack procedure.

  1. Reads selected nodes from file for training (attack) nodes.

  2. Reads query labels from another file.

  3. Builds a shadow graph from the given adjacency matrix file.

  4. Trains a shadow model on the selected nodes, then evaluates fidelity & accuracy against the original target graph.
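
The file-driven inputs in steps 1-3 can be sketched as below. The file formats here (whitespace-separated indices, a dense 0/1 adjacency matrix with one row per line) are illustrative assumptions, not pygip's actual on-disk formats:

```python
import os
import tempfile

def read_node_indices(path):
    """Steps 1-2 (sketch): read whitespace-separated integers (node ids or labels)."""
    with open(path) as f:
        return [int(tok) for tok in f.read().split()]

def read_adjacency(path):
    """Step 3 (sketch): read a dense 0/1 adjacency matrix, one row per line."""
    with open(path) as f:
        return [[int(tok) for tok in line.split()] for line in f if line.strip()]

# Write toy files to show the assumed shape of the inputs
tmp = tempfile.mkdtemp()
idx_path = os.path.join(tmp, "attack_nodes.txt")
adj_path = os.path.join(tmp, "shadow_adj.txt")
with open(idx_path, "w") as f:
    f.write("0 2 3\n")
with open(adj_path, "w") as f:
    f.write("0 1\n1 0\n")

attack_nodes = read_node_indices(idx_path)  # selected training (attack) nodes
shadow_adj = read_adjacency(adj_path)       # shadow graph adjacency matrix
```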

class pygip.models.attack.mea.MEA.ModelExtractionAttack2(dataset, attack_node_fraction, model_path=None)[source]

Bases: ModelExtractionAttack

A strategy that randomly samples a fraction of nodes as attack nodes, synthesizes identity features for all nodes, then trains an extraction model. The leftover nodes become test nodes.

attack()[source]

Main attack procedure.

  1. Randomly select attack_node_num nodes as training nodes.

  2. Set up synthetic features as identity vectors for all nodes.

  3. Train a Net_attack model on these nodes with the queried labels.

  4. Evaluate fidelity & accuracy on a subset of leftover nodes.
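
Step 2's "identity features" can be read as giving node i the i-th standard basis vector, so the feature matrix is an identity matrix. A minimal plain-Python sketch of steps 1-2 and the train/test split (not pygip's code):

```python
import random

def identity_features(num_nodes):
    """Step 2 (sketch): one-hot features — node i gets row i of the identity matrix."""
    return [[1.0 if i == j else 0.0 for j in range(num_nodes)] for i in range(num_nodes)]

def split_attack_test(num_nodes, attack_node_num, seed=0):
    """Steps 1 & 4 (sketch): randomly pick attack (training) nodes;
    the leftover nodes become test nodes."""
    rng = random.Random(seed)
    attack = set(rng.sample(range(num_nodes), attack_node_num))
    test = [n for n in range(num_nodes) if n not in attack]
    return sorted(attack), test

feats = identity_features(4)
attack_nodes, test_nodes = split_attack_test(4, 2)
```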

class pygip.models.attack.mea.MEA.ModelExtractionAttack3(dataset, attack_node_fraction, model_path=None)[source]

Bases: ModelExtractionAttack

A more complex extraction strategy that uses a “shadow graph index” file to build partial subgraphs and merges them. It queries selected nodes from a potential set and forms a combined adjacency matrix.

attack()[source]

Main attack procedure.

  1. Loads indices for two subgraphs from text files.

  2. Selects attack_node_num nodes from the first subgraph index.

  3. Merges the subgraph adjacency matrices and constructs a new graph with combined features.

  4. Trains a new GCN and evaluates fidelity & accuracy w.r.t. the original target.
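
Merging two subgraph adjacency matrices amounts to placing them on the diagonal of a larger block matrix, with the second subgraph's nodes re-indexed to follow the first's. A plain-Python sketch (pygip's actual merge may additionally add cross-edges between the blocks):

```python
def merge_adjacency(a, b):
    """Combine two adjacency matrices into one block-diagonal matrix:
    nodes of `b` are re-indexed to come after the nodes of `a`."""
    n, m = len(a), len(b)
    merged = [[0] * (n + m) for _ in range(n + m)]
    for i in range(n):
        for j in range(n):
            merged[i][j] = a[i][j]
    for i in range(m):
        for j in range(m):
            merged[n + i][n + j] = b[i][j]
    return merged

sub_a = [[0, 1], [1, 0]]                    # 2-node subgraph
sub_b = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]   # 3-node subgraph
combined = merge_adjacency(sub_a, sub_b)    # 5x5 matrix, zero off-diagonal blocks
```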

class pygip.models.attack.mea.MEA.ModelExtractionAttack4(dataset, attack_node_fraction, model_path=None)[source]

Bases: ModelExtractionAttack

Another graph-based strategy that reads node indices from files, merges adjacency matrices, and links new edges based on feature similarity.

attack()[source]

Main attack procedure.

  1. Reads two sets of node indices from text files.

  2. Selects a fixed number of nodes from the target set for attack.

  3. Builds a combined adjacency matrix with zero blocks, then populates edges between shadow and attack nodes based on a distance threshold.

  4. Trains a new GCN on this combined graph and evaluates fidelity & accuracy.
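
Step 3's similarity-based linking can be sketched as: connect an attack node and a shadow node whenever the Euclidean distance between their feature vectors falls below a threshold. This is an illustration of the idea, not pygip's code, and the threshold value here is made up:

```python
import math

def link_by_distance(attack_feats, shadow_feats, threshold):
    """Return (attack_idx, shadow_idx) pairs whose feature vectors are
    closer than `threshold` in Euclidean distance."""
    edges = []
    for i, fa in enumerate(attack_feats):
        for j, fs in enumerate(shadow_feats):
            if math.dist(fa, fs) < threshold:
                edges.append((i, j))
    return edges

attack_feats = [[0.0, 0.0], [1.0, 1.0]]
shadow_feats = [[0.1, 0.0], [5.0, 5.0]]
edges = link_by_distance(attack_feats, shadow_feats, threshold=0.5)
```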

class pygip.models.attack.mea.MEA.ModelExtractionAttack5(dataset, attack_node_fraction, model_path=None)[source]

Bases: ModelExtractionAttack

Similar to ModelExtractionAttack4, but uses a slightly different strategy to link edges between nodes based on a threshold distance.

attack()[source]

Main attack procedure.

  1. Reads two sets of node indices (for target and shadow nodes).

  2. Builds a block adjacency matrix with all zero blocks, then links edges between attack nodes and shadow nodes if the feature distance is less than a threshold.

  3. Trains a new GCN on this combined graph and evaluates fidelity & accuracy.