pyhazards.datasets package¶
Submodules¶
pyhazards.datasets.base module¶
- class pyhazards.datasets.base.DataBundle(splits, feature_spec, label_spec, metadata=<factory>)[source]¶
Bases: object
Bundle of train/val/test splits plus metadata. Keeps feature/label specs to make model construction easy.
- feature_spec: FeatureSpec¶
- metadata: Dict[str, Any]¶
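The sketch below shows one way these pieces fit together, assembling a DataBundle from DataSplit, FeatureSpec, and LabelSpec (all documented later on this page). It is a minimal sketch: the NumPy data is synthetic, and treating splits as a plain dict keyed by split name is an assumption rather than documented behaviour.

```python
import numpy as np

from pyhazards.datasets.base import DataBundle, DataSplit, FeatureSpec, LabelSpec

rng = np.random.default_rng(0)

# Toy arrays standing in for real hazard features and targets.
train = DataSplit(inputs=rng.normal(size=(100, 8)), targets=rng.normal(size=(100,)))
val = DataSplit(inputs=rng.normal(size=(20, 8)), targets=rng.normal(size=(20,)))

bundle = DataBundle(
    splits={"train": train, "val": val},  # assumption: a dict keyed by split name
    feature_spec=FeatureSpec(input_dim=8, description="8 tabular covariates"),
    label_spec=LabelSpec(num_targets=1, task_type="regression"),
    metadata={"source": "synthetic example"},
)

print(bundle.feature_spec.input_dim, bundle.metadata["source"])
```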
- class pyhazards.datasets.base.DataSplit(inputs, targets, metadata=<factory>)[source]¶
Bases: object
Container for a single split.
- inputs: Any¶
- metadata: Dict[str, Any]¶
- targets: Any¶
- class pyhazards.datasets.base.Dataset(cache_dir=None)[source]¶
Bases: object
Base class for hazard datasets. Subclasses should load data and return a DataBundle with splits ready for training.
- load(split=None, transforms=None)[source]¶
Return a DataBundle, or only the requested split when split is provided.
- Return type: DataBundle
- name: str = 'base'¶
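A minimal sketch of the subclassing contract described above: a concrete dataset overrides load() and returns a DataBundle. The ToyWindDataset name, the synthetic arrays, the callable transforms, and the split-name lookup are illustrative assumptions, not part of pyhazards.

```python
import numpy as np

from pyhazards.datasets.base import (
    DataBundle, DataSplit, Dataset, FeatureSpec, LabelSpec,
)


class ToyWindDataset(Dataset):
    """Illustrative subclass with synthetic data; not part of pyhazards."""

    name = "toy_wind"

    def load(self, split=None, transforms=None):
        rng = np.random.default_rng(42)
        splits = {}
        for split_name, n in [("train", 80), ("val", 10), ("test", 10)]:
            x = rng.normal(size=(n, 4))
            y = rng.normal(size=(n,))
            for t in transforms or []:  # assumption: transforms are callables on the inputs
                x = t(x)
            splits[split_name] = DataSplit(inputs=x, targets=y)

        bundle = DataBundle(
            splits=splits,
            feature_spec=FeatureSpec(input_dim=4, description="synthetic covariates"),
            label_spec=LabelSpec(num_targets=1, task_type="regression"),
            metadata={"generator": "toy example"},
        )
        # Per the docstring above, honour a request for a specific split.
        return splits[split] if split is not None else bundle


bundle = ToyWindDataset().load()
train_split = ToyWindDataset().load(split="train")
```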
- class pyhazards.datasets.base.FeatureSpec(input_dim=None, channels=None, description=None, extra=<factory>)[source]¶
Bases: object
Describes input features (shapes, dtypes, normalization).
- channels: Optional[int] = None¶
- description: Optional[str] = None¶
- extra: Dict[str, Any]¶
- input_dim: Optional[int] = None¶
- class pyhazards.datasets.base.LabelSpec(num_targets=None, task_type='regression', description=None, extra=<factory>)[source]¶
Bases: object
Describes labels/targets for downstream tasks.
- description: Optional[str] = None¶
- extra: Dict[str, Any]¶
- num_targets: Optional[int] = None¶
- task_type: str = 'regression'¶
pyhazards.datasets.registry module¶
pyhazards.datasets.transforms package¶
Reusable transforms for preprocessing hazard datasets. Currently placeholders; implement normalization, index computation, temporal windowing, etc.
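Since the package is a placeholder, the following is only a sketch of one shape a transform could take: a callable that standardizes a feature array and can be passed through the transforms argument of Dataset.load(). The Standardize name and the callable protocol are assumptions.

```python
import numpy as np


class Standardize:
    """Hypothetical transform: zero-mean, unit-variance scaling of a feature array."""

    def __init__(self, eps: float = 1e-8):
        self.eps = eps
        self.mean = None
        self.std = None

    def __call__(self, x: np.ndarray) -> np.ndarray:
        # Fit lazily on the first array seen, then reuse the statistics.
        if self.mean is None:
            self.mean = x.mean(axis=0)
            self.std = x.std(axis=0)
        return (x - self.mean) / (self.std + self.eps)
```

With the ToyWindDataset sketch above, this could be applied as ToyWindDataset().load(transforms=[Standardize()]), assuming load() calls each transform on the input arrays.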
pyhazards.datasets.hazards package¶
Namespace for hazard-specific dataset loaders (earthquake, wildfire, flood, hurricane, landslide, etc.). Populate with concrete Dataset subclasses and register them in pyhazards.datasets.registry.
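pyhazards.datasets.registry is not documented on this page, so the snippet below only illustrates the general register-and-look-up pattern such a module might provide. The _REGISTRY dict and the register/create helpers are hypothetical names, not the actual registry API.

```python
from typing import Dict, Type

from pyhazards.datasets.base import Dataset

_REGISTRY: Dict[str, Type[Dataset]] = {}


def register(cls: Type[Dataset]) -> Type[Dataset]:
    """Hypothetical decorator mapping a Dataset subclass's `name` to the class."""
    _REGISTRY[cls.name] = cls
    return cls


def create(name: str, **kwargs) -> Dataset:
    """Hypothetical factory: look up a registered loader and instantiate it."""
    return _REGISTRY[name](**kwargs)


@register
class WildfireDataset(Dataset):
    """Placeholder hazard-specific loader; a real one would override load()."""

    name = "wildfire"


wildfire = create("wildfire")
```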
Module contents¶
- class pyhazards.datasets.DataBundle(splits, feature_spec, label_spec, metadata=<factory>)[source]¶
Bases: object
Bundle of train/val/test splits plus metadata. Keeps feature/label specs to make model construction easy.
- feature_spec: FeatureSpec¶
- metadata: Dict[str, Any]¶
- class pyhazards.datasets.DataSplit(inputs, targets, metadata=<factory>)[source]¶
Bases: object
Container for a single split.
- inputs: Any¶
- metadata: Dict[str, Any]¶
- targets: Any¶
- class pyhazards.datasets.Dataset(cache_dir=None)[source]¶
Bases: object
Base class for hazard datasets. Subclasses should load data and return a DataBundle with splits ready for training.
- load(split=None, transforms=None)[source]¶
Return a DataBundle, or only the requested split when split is provided.
- Return type: DataBundle
- name: str = 'base'¶
- class pyhazards.datasets.FeatureSpec(input_dim=None, channels=None, description=None, extra=<factory>)[source]¶
Bases: object
Describes input features (shapes, dtypes, normalization).
- channels: Optional[int] = None¶
- description: Optional[str] = None¶
- extra: Dict[str, Any]¶
- input_dim: Optional[int] = None¶
- class pyhazards.datasets.GraphTemporalDataset(x, y, adjacency=None)[source]¶
Bases: Dataset
Simple container for county/day style tensors with an optional adjacency.
Each sample is a window of shape (past_days, num_counties, num_features) and a label of shape (num_counties,).
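A hedged construction example: the overall array shapes (days × counties × features for x, days × counties for y, counties × counties for the adjacency) are inferred from the docstring and constructor, and how past_days is configured is not documented here.

```python
import numpy as np

from pyhazards.datasets import GraphTemporalDataset

num_days, num_counties, num_features = 365, 58, 12

x = np.random.rand(num_days, num_counties, num_features)  # assumed daily county features
y = np.random.rand(num_days, num_counties)                 # assumed per-county daily labels
adj = (np.random.rand(num_counties, num_counties) > 0.9).astype(float)  # optional adjacency

ds = GraphTemporalDataset(x, y, adjacency=adj)
# Per the docstring, each sample is a (past_days, num_counties, num_features) window
# paired with a (num_counties,) label.
```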
- class pyhazards.datasets.LabelSpec(num_targets=None, task_type='regression', description=None, extra=<factory>)[source]¶
Bases: object
Describes labels/targets for downstream tasks.
- description: Optional[str] = None¶
- extra: Dict[str, Any]¶
- num_targets: Optional[int] = None¶
- task_type: str = 'regression'¶