pygip.utils package

Submodules

pygip.utils.dglTopyg module

pygip.utils.dglTopyg.dgl_to_pyg_data(dgl_graph)[source]
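
Illustrative usage (a minimal sketch): the helper is only documented as taking a DGL graph, so the node-feature key "feat" and the assumption that a torch_geometric.data.Data object comes back are conventions assumed here, not guaranteed by the API.

    import dgl
    import torch

    from pygip.utils.dglTopyg import dgl_to_pyg_data

    # Toy 3-node DGL graph with a 4-dimensional node feature matrix
    # (the "feat" key is an assumption about what the converter reads).
    g = dgl.graph((torch.tensor([0, 1, 2]), torch.tensor([1, 2, 0])))
    g.ndata["feat"] = torch.randn(3, 4)

    pyg_data = dgl_to_pyg_data(g)
    print(pyg_data)  # assumed to be a torch_geometric.data.Data instance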

pygip.utils.hardware module

pygip.utils.hardware.get_device()[source]
pygip.utils.hardware.set_device(device_str)[source]
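
Illustrative usage (a minimal sketch): it is assumed here that set_device() accepts a standard PyTorch device string such as "cuda:0" or "cpu", and that get_device() returns the device subsequent pygip computations will use; the exact return type (string vs. torch.device) is not documented on this page.

    import torch

    from pygip.utils.hardware import get_device, set_device

    # Fall back to the CPU when CUDA is unavailable.
    set_device("cuda:0" if torch.cuda.is_available() else "cpu")

    device = get_device()
    print(device)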

pygip.utils.metrics module

class pygip.utils.metrics.AttackCompMetric(gpu_count=None)[source]

Bases: object

compute()[source]
end()[source]
start()[source]
update(train_target_time=None, query_target_time=None, train_surrogate_time=None, attack_time=None, inference_surrogate_time=None)[source]
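
Illustrative timing workflow (a hedged sketch): the start/update/end/compute lifecycle is assumed from the method names, and the keyword arguments are taken from the update() signature above; the keys of the dictionary assumed to be returned by compute() are not documented here.

    import time

    from pygip.utils.metrics import AttackCompMetric

    comp = AttackCompMetric(gpu_count=1)   # gpu_count is optional
    comp.start()                           # begin tracking (assumed semantics)

    t0 = time.time()
    # ... train the target model here ...
    comp.update(train_target_time=time.time() - t0)

    t1 = time.time()
    # ... query the target model and train the surrogate here ...
    comp.update(query_target_time=0.0,                   # placeholder value
                train_surrogate_time=time.time() - t1,
                attack_time=time.time() - t1)

    comp.end()                             # stop tracking (assumed semantics)
    print(comp.compute())                  # assumed to return timing statistics
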
class pygip.utils.metrics.AttackMetric[source]

Bases: MetricBase

compute()[source]

Compute and return all metric results.

compute_fidelity(preds_label, query_label)[source]
Return type:

Dict[str, float]

reset()[source]

Reset internal state.

Return type:

None

update(preds, labels, query_label)[source]

Update internal metric state.
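
Illustrative usage (a minimal sketch): the tensors below are placeholders and their dtypes/shapes are assumptions; only the method signatures above are taken from the API.

    import torch

    from pygip.utils.metrics import AttackMetric

    metric = AttackMetric()

    # Surrogate predictions, ground-truth labels, and the target model's
    # labels on the same query nodes (assumed to be 1-D label tensors).
    preds = torch.tensor([0, 1, 1, 2])
    labels = torch.tensor([0, 1, 2, 2])
    query_label = torch.tensor([0, 1, 1, 2])

    metric.update(preds, labels, query_label)
    print(metric.compute())                             # all metric results
    print(metric.compute_fidelity(preds, query_label))  # fidelity only

    metric.reset()   # clear internal state before the next run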

class pygip.utils.metrics.DefenseCompMetric(gpu_count=None)[source]

Bases: object

compute()[source]
end()[source]
start()[source]
update(train_target_time=None, train_defense_time=None, inference_defense_time=None, defense_time=None)[source]

class pygip.utils.metrics.DefenseMetric[source]

Bases: MetricBase

compute()[source]

Compute and return all metric results.

compute_wm()[source]
reset()[source]

Reset internal state.

Return type:

None

update(preds, labels)[source]

Update internal metric state.

update_wm(wm_preds, wm_label)[source]
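
Illustrative usage of the two defense metric classes (a hedged sketch): the start/update/end/compute lifecycle mirrors AttackCompMetric above, and the watermark tensors are placeholders; only the signatures documented on this page are taken from the API.

    import time

    import torch

    from pygip.utils.metrics import DefenseCompMetric, DefenseMetric

    # Timing side.
    comp = DefenseCompMetric(gpu_count=1)
    comp.start()
    t0 = time.time()
    # ... train the defended target model here ...
    comp.update(train_target_time=time.time() - t0, train_defense_time=0.0)
    comp.end()
    print(comp.compute())

    # Quality side: standard predictions plus watermark verification.
    metric = DefenseMetric()
    metric.update(preds=torch.tensor([0, 1, 1]), labels=torch.tensor([0, 1, 2]))
    metric.update_wm(wm_preds=torch.tensor([1, 1, 1]),   # predictions on watermark inputs
                     wm_label=torch.tensor([1, 1, 1]))   # expected watermark labels
    print(metric.compute())      # assumed to include accuracy-style metrics
    print(metric.compute_wm())   # assumed to report watermark accuracy
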
class pygip.utils.metrics.GraphNeuralNetworkMetric(fidelity=0, accuracy=0, model=None, graph=None, features=None, mask=None, labels=None, query_labels=None)[source]

Bases: object

Graph Neural Network Metric Class.

This class evaluates two metrics, fidelity and accuracy, for a given GNN model on a specified graph with its node features.

static calculate_surrogate_fidelity(target_model, surrogate_model, data, mask=None)[source]

Calculate fidelity between target and surrogate model predictions.

Parameters:
  • target_model – Original model

  • surrogate_model – Extracted surrogate model

  • data – Input graph data

  • mask – Optional mask for evaluation on specific nodes

Returns:

Fidelity score (percentage of matching predictions)

Return type:

float

evaluate()[source]

Main function to update fidelity and accuracy scores.

evaluate_helper(model, graph, features, labels, mask)[source]

Helper function to evaluate the model’s performance.

static evaluate_surrogate_extraction(target_model, surrogate_model, data, train_mask=None, val_mask=None, test_mask=None)[source]

Comprehensive evaluation of surrogate extraction attack.

Parameters:
  • target_model – Original model

  • surrogate_model – Extracted surrogate model

  • data – Input graph data

  • train_mask – Mask for training nodes

  • val_mask – Mask for validation nodes

  • test_mask – Mask for test nodes

Returns:

Dictionary containing fidelity scores for different data splits

Return type:

dict
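
Illustrative usage (a minimal sketch): the toy model below assumes the model is called as model(graph, features), based on the evaluate_helper(model, graph, features, labels, mask) signature; the DGL graph, feature matrix, labels and boolean mask are placeholders.

    import dgl
    import torch
    import torch.nn as nn

    from pygip.utils.metrics import GraphNeuralNetworkMetric

    class ToyGNN(nn.Module):
        def __init__(self, in_dim, n_classes):
            super().__init__()
            self.lin = nn.Linear(in_dim, n_classes)

        def forward(self, graph, features):
            return self.lin(features)   # ignores graph structure; demo only

    g = dgl.graph((torch.tensor([0, 1, 2]), torch.tensor([1, 2, 0])))
    feats = torch.randn(3, 4)
    labels = torch.tensor([0, 1, 0])
    query_labels = torch.tensor([0, 1, 1])    # e.g. labels queried from a target model
    mask = torch.tensor([True, True, True])

    metric = GraphNeuralNetworkMetric(
        model=ToyGNN(4, 2), graph=g, features=feats,
        mask=mask, labels=labels, query_labels=query_labels)
    metric.evaluate()                          # updates fidelity and accuracy scores
    print(metric.fidelity, metric.accuracy)

The static helpers calculate_surrogate_fidelity() and evaluate_surrogate_extraction() follow the same pattern but take both a target and a surrogate model together with the graph data object and optional node masks.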

class pygip.utils.metrics.MetricBase[source]

Bases: ABC

static _cat_to_numpy(a)[source]
Return type:

ndarray

abstract compute()[source]

Compute and return all metric results.

Return type:

Dict[str, float]

compute_default_metrics(preds, labels)[source]
Return type:

Dict[str, float]

reset()[source]

Reset internal state.

Return type:

None

abstract update(*args, **kwargs)[source]

Update internal metric state.

Return type:

None
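
Since MetricBase is abstract, a concrete metric only needs to implement update() and compute(). The sketch below follows the accumulate-then-compute pattern implied by reset(), update() and compute_default_metrics(); it assumes MetricBase.__init__ takes no arguments.

    from typing import Dict

    import torch

    from pygip.utils.metrics import MetricBase

    class SimpleNodeMetric(MetricBase):
        """Toy concrete metric that defers to compute_default_metrics()."""

        def __init__(self):
            super().__init__()   # assumption: no required constructor arguments
            self._preds, self._labels = [], []

        def update(self, preds, labels) -> None:
            # Accumulate per-batch tensors on the CPU.
            self._preds.append(preds.detach().cpu())
            self._labels.append(labels.detach().cpu())

        def reset(self) -> None:
            self._preds, self._labels = [], []

        def compute(self) -> Dict[str, float]:
            preds = torch.cat(self._preds)
            labels = torch.cat(self._labels)
            return self.compute_default_metrics(preds, labels)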

Module contents

class pygip.utils.GraphNeuralNetworkMetric(fidelity=0, accuracy=0, model=None, graph=None, features=None, mask=None, labels=None, query_labels=None)[source]

Bases: object

Re-export of pygip.utils.metrics.GraphNeuralNetworkMetric at the package level. See the pygip.utils.metrics module above for the full class documentation, including calculate_surrogate_fidelity(), evaluate(), evaluate_helper() and evaluate_surrogate_extraction().