pygip.datasets package

Submodules

pygip.datasets.datasets module

class pygip.datasets.datasets.CiteSeer(api_type='dgl', path='./data')[source]

Bases: Dataset

load_dgl_data()[source]
load_pyg_data()[source]
class pygip.datasets.datasets.CoauthorCS(api_type='dgl', path='./data')[source]

Bases: Dataset

load_dgl_data()[source]
load_pyg_data()[source]
class pygip.datasets.datasets.CoauthorPhysics(api_type='dgl', path='./data')[source]

Bases: Dataset

load_dgl_data()[source]
load_pyg_data()[source]
class pygip.datasets.datasets.Collab(api_type='dgl', path='./data')[source]

Bases: Dataset

load_dgl_data()[source]
class pygip.datasets.datasets.Computers(api_type='dgl', path='./data')[source]

Bases: Dataset

load_dgl_data()[source]
load_pyg_data()[source]
class pygip.datasets.datasets.Cora(api_type='dgl', path='./data')[source]

Bases: Dataset

load_dgl_data()[source]
load_pyg_data()[source]
class pygip.datasets.datasets.Dataset(api_type='dgl', path='./data')[source]

Bases: object

_generate_masks_by_classes(num_class_samples=100, val_count=500, test_count=1000, seed=42)[source]

For Amazon and Coauthor datasets:

- train: num_class_samples nodes per class
- val: val_count nodes drawn from the remaining nodes
- test: test_count nodes drawn from the remainder after validation

Works for both DGL and PyG graphs via self.graph_data.

_generate_masks_by_ratio(train_ratio=0.8)[source]
_index_to_mask(index, size)[source]
_load_meta_data()[source]
get_name()[source]
load_dgl_data()[source]
load_pyg_data()[source]
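The per-class split documented for _generate_masks_by_classes (together with the index-to-mask conversion of _index_to_mask) can be sketched in plain NumPy. This is an illustrative reimplementation of the documented behavior, not the library's actual code; the function names below are hypothetical.

```python
import numpy as np

def masks_by_classes(labels, num_class_samples=100, val_count=500,
                     test_count=1000, seed=42):
    """Sketch of the documented split: sample num_class_samples train
    nodes per class, then draw val/test from the remaining nodes."""
    rng = np.random.default_rng(seed)
    n = len(labels)

    # train: num_class_samples nodes per class
    train_idx = []
    for c in np.unique(labels):
        cls_idx = np.flatnonzero(labels == c)
        take = min(num_class_samples, len(cls_idx))
        train_idx.extend(rng.choice(cls_idx, size=take, replace=False))
    train_idx = np.array(train_idx)

    # val/test: drawn from the nodes not used for training
    remaining = rng.permutation(np.setdiff1d(np.arange(n), train_idx))
    val_idx = remaining[:val_count]
    test_idx = remaining[val_count:val_count + test_count]

    def index_to_mask(index, size):
        # boolean mask from an index array (cf. _index_to_mask)
        mask = np.zeros(size, dtype=bool)
        mask[index] = True
        return mask

    return (index_to_mask(train_idx, n),
            index_to_mask(val_idx, n),
            index_to_mask(test_idx, n))
```

In typical use the masks would be attached to the graph returned by a subclass such as Cora(api_type='dgl', path='./data'); how exactly they are stored on self.graph_data is framework-specific, so consult the source links above for the authoritative behavior.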
class pygip.datasets.datasets.ENZYMES(api_type='dgl', path='./data')[source]

Bases: Dataset

load_pyg_data()[source]
class pygip.datasets.datasets.Facebook(api_type='dgl', path='./data')[source]

Bases: Dataset

load_pyg_data()[source]
class pygip.datasets.datasets.Flickr(api_type='dgl', path='./data')[source]

Bases: Dataset

load_dgl_data()[source]
load_pyg_data()[source]
class pygip.datasets.datasets.IMDB(api_type='dgl', path='./data')[source]

Bases: Dataset

load_dgl_data()[source]
class pygip.datasets.datasets.LastFM(api_type='dgl', path='./data')[source]

Bases: Dataset

load_pyg_data()[source]
class pygip.datasets.datasets.MUTAG(api_type='dgl', path='./data')[source]

Bases: Dataset

load_dgl_data()[source]
class pygip.datasets.datasets.NCI1(api_type='dgl', path='./data')[source]

Bases: Dataset

load_dgl_data()[source]
class pygip.datasets.datasets.PROTEINS(api_type='dgl', path='./data')[source]

Bases: Dataset

load_dgl_data()[source]
class pygip.datasets.datasets.PTC(api_type='dgl', path='./data')[source]

Bases: Dataset

load_dgl_data()[source]
class pygip.datasets.datasets.Photo(api_type='dgl', path='./data')[source]

Bases: Dataset

load_dgl_data()[source]
load_pyg_data()[source]
class pygip.datasets.datasets.PolBlogs(api_type='dgl', path='./data')[source]

Bases: Dataset

load_pyg_data()[source]
class pygip.datasets.datasets.PubMed(api_type='dgl', path='./data')[source]

Bases: Dataset

load_dgl_data()[source]
load_pyg_data()[source]
class pygip.datasets.datasets.Reddit(api_type='dgl', path='./data')[source]

Bases: Dataset

load_dgl_data()[source]
load_pyg_data()[source]
class pygip.datasets.datasets.Twitter(api_type='dgl', path='./data')[source]

Bases: Dataset

load_dgl_data()[source]
class pygip.datasets.datasets.YelpData(api_type='dgl', path='./data')[source]

Bases: Dataset

load_dgl_data()[source]
pygip.datasets.datasets.dgl_to_tg(dgl_graph)[source]
pygip.datasets.datasets.tg_to_dgl(py_g_data)[source]
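dgl_to_tg and tg_to_dgl convert between the two frameworks' graph containers. At the core of any such converter is the edge layout: DGL exposes edges as parallel (src, dst) arrays, while PyG stores one edge_index array of shape (2, num_edges). A framework-free sketch of that round trip, using hypothetical helper names and NumPy in place of the real tensor types:

```python
import numpy as np

def edges_to_edge_index(src, dst):
    """DGL-style parallel (src, dst) arrays -> PyG-style (2, E) edge_index."""
    return np.stack([np.asarray(src), np.asarray(dst)], axis=0)

def edge_index_to_edges(edge_index):
    """PyG-style (2, E) edge_index -> DGL-style (src, dst) arrays."""
    edge_index = np.asarray(edge_index)
    return edge_index[0], edge_index[1]
```

The actual converters must also carry over node features, labels, and the train/val/test masks; see the linked source for what is copied in each direction.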

Module contents

class pygip.datasets.CiteSeer(api_type='dgl', path='./data')[source]

Bases: Dataset

load_dgl_data()[source]
load_pyg_data()[source]
class pygip.datasets.CoauthorCS(api_type='dgl', path='./data')[source]

Bases: Dataset

load_dgl_data()[source]
load_pyg_data()[source]
class pygip.datasets.CoauthorPhysics(api_type='dgl', path='./data')[source]

Bases: Dataset

load_dgl_data()[source]
load_pyg_data()[source]
class pygip.datasets.Computers(api_type='dgl', path='./data')[source]

Bases: Dataset

load_dgl_data()[source]
load_pyg_data()[source]
class pygip.datasets.Cora(api_type='dgl', path='./data')[source]

Bases: Dataset

load_dgl_data()[source]
load_pyg_data()[source]
class pygip.datasets.Dataset(api_type='dgl', path='./data')[source]

Bases: object

_generate_masks_by_classes(num_class_samples=100, val_count=500, test_count=1000, seed=42)[source]

For Amazon and Coauthor datasets:

- train: num_class_samples nodes per class
- val: val_count nodes drawn from the remaining nodes
- test: test_count nodes drawn from the remainder after validation

Works for both DGL and PyG graphs via self.graph_data.

_generate_masks_by_ratio(train_ratio=0.8)[source]
_index_to_mask(index, size)[source]
_load_meta_data()[source]
get_name()[source]
load_dgl_data()[source]
load_pyg_data()[source]
class pygip.datasets.Photo(api_type='dgl', path='./data')[source]

Bases: Dataset

load_dgl_data()[source]
load_pyg_data()[source]
class pygip.datasets.PubMed(api_type='dgl', path='./data')[source]

Bases: Dataset

load_dgl_data()[source]
load_pyg_data()[source]