pyhazards package¶
Subpackages¶
- pyhazards.datasets package
- Catalog Summary
- Developer Dataset Workflow
- Submodules
- pyhazards.datasets.base module
- pyhazards.datasets.registry module
- pyhazards.datasets.transforms package
- pyhazards.datasets.hazards package
- Module contents
Classes: AEFADataset, CaravanStreamflowDataset, DataBundle, DataSplit, Dataset, FPAFODTabularDataset, FPAFODWeeklyDataset, FeatureSpec, FloodCastBenchInundationDataset, GraphTemporalDataset, HydroBenchStreamflowDataset, IBTrACSTropicalCycloneDataset, LabelSpec, PickBenchmarkWaveformDataset, SeisBenchWaveformDataset, SyntheticEarthquakeForecastDataset, SyntheticEarthquakeWaveformDataset, SyntheticFloodInundationDataset, SyntheticFloodStreamflowDataset, SyntheticTropicalCycloneDataset, SyntheticWildfireSpreadDataset, SyntheticWildfireSpreadTemporalDataset, TCBenchAlphaDataset, TropiCycloneNetDataset, WaterBenchStreamflowDataset. Functions: available_datasets(), graph_collate(), load_dataset(), register_dataset()
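The registry helpers listed above follow a familiar register/lookup pattern. The following is a stand-in sketch of how `register_dataset`, `available_datasets`, and `load_dataset` typically interact; the function bodies and the decorator form are illustrative assumptions, not the package's actual implementation:

```python
# Illustrative stand-in for a string-keyed dataset registry.
# The real pyhazards signatures and behavior may differ.
from typing import Any, Callable, Dict

_DATASET_REGISTRY: Dict[str, Callable[..., Any]] = {}

def register_dataset(name: str):
    """Decorator that records a dataset class under a string key."""
    def wrap(cls):
        _DATASET_REGISTRY[name] = cls
        return cls
    return wrap

def available_datasets():
    """Sorted names of all registered datasets."""
    return sorted(_DATASET_REGISTRY)

def load_dataset(name: str, **params):
    """Instantiate a registered dataset by name."""
    if name not in _DATASET_REGISTRY:
        raise KeyError(f"unknown dataset {name!r}; known: {available_datasets()}")
    return _DATASET_REGISTRY[name](**params)

# Registering a (hypothetical, simplified) dataset class:
@register_dataset("synthetic_flood_streamflow")
class SyntheticFloodStreamflowDataset:
    def __init__(self, n_samples: int = 100):
        self.n_samples = n_samples
```

Keeping construction behind `load_dataset` lets configs refer to datasets by string name (see `DatasetRef` below) without importing dataset classes directly.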
- pyhazards.models package
- Catalog Summary
- Developer Registry Workflow
- Submodules
- pyhazards.models.backbones module
- pyhazards.models.heads module
- pyhazards.models.builder module
- pyhazards.models.registry module
- Module contents
Classes: ASUFM, CNNPatchEncoder, ClassificationHead, ConvLEMCell, EQNet, EQTransformer, FireCastNet, FloodCast, ForeFireAdapter, FourCastNetTC, GPD, GoogleFloodForecasting, GraphCastTC, Hurricast, HydroGraphNet, HydroGraphNetLoss, MLPBackbone, NeuralHydrologyEALSTM, NeuralHydrologyLSTM, PanguTC, PhaseNet, RegressionHead, SAFNet, SegmentationHead, TCIFFusion, TemporalEncoder, TropiCycloneNet, TropicalCycloneMLP, TverskyLoss, UrbanFloodCast, WRFSFireAdapter, WaveCastNet, WaveCastNetLoss, WavefieldMetrics, WildfireASPP, WildfireCNNASPP, WildfireFPA, WildfireForecasting, WildfireMamba, WildfireSpreadTS. Functions: asufm_builder(), available_models(), build_model(), cnn_aspp_builder(), eqnet_builder(), eqtransformer_builder(), firecastnet_builder(), floodcast_builder(), forefire_builder(), fourcastnet_tc_builder(), google_flood_forecasting_builder(), gpd_builder(), graphcast_tc_builder(), hurricast_builder(), hydrographnet_builder(), neuralhydrology_ealstm_builder(), neuralhydrology_lstm_builder(), pangu_tc_builder(), phasenet_builder(), register_model(), saf_net_builder(), tcif_fusion_builder(), tropicalcyclone_mlp_builder(), tropicyclonenet_builder(), urbanfloodcast_builder(), wavecastnet_builder(), wildfire_aspp_builder(), wildfire_forecasting_builder(), wildfire_fpa_builder(), wildfire_mamba_builder(), wildfirespreadts_builder(), wrf_sfire_builder()
- pyhazards.benchmarks package
- Submodules
- pyhazards.benchmarks.base module
- pyhazards.benchmarks.registry module
- pyhazards.benchmarks.runner module
- pyhazards.benchmarks.schemas module
- pyhazards.benchmarks.earthquake module
- pyhazards.benchmarks.wildfire module
- pyhazards.benchmarks.flood module
- pyhazards.benchmarks.tc module
- Module contents
- pyhazards.configs package
- pyhazards.reports package
- pyhazards.engine package
- pyhazards.metrics package
- pyhazards.utils package
Submodules¶
pyhazards.interactive_map module¶
Helpers for the external RAI Fire interactive map.
- pyhazards.interactive_map.RAI_FIRE_URL: str = 'https://rai-fire.com/'¶
Canonical URL for the external RAI Fire interactive map.
Module contents¶
- class pyhazards.Benchmark[source]¶
  Bases: ABC
  Shared benchmark contract for hazard evaluators.
  - hazard_task: str = ''¶
  - name: str = 'benchmark'¶
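Concrete hazard evaluators subclass this contract and override the identifying attributes. A stand-in sketch of that shape follows; the `run` method is purely hypothetical, since this listing does not show the abstract methods `pyhazards.Benchmark` actually declares:

```python
# Stand-in sketch of the Benchmark contract: an ABC with identifying
# class attributes that concrete hazard evaluators override.
from abc import ABC, abstractmethod

class Benchmark(ABC):
    hazard_task: str = ""
    name: str = "benchmark"

    @abstractmethod
    def run(self, model, data):
        """Hypothetical evaluation hook returning a metrics dict."""

class WildfireSpreadBenchmark(Benchmark):
    hazard_task = "wildfire_spread"
    name = "wildfire_spread_iou"

    def run(self, model, data):
        # Placeholder metric computation.
        return {"iou": 0.0}
```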
- class pyhazards.BenchmarkConfig(name, hazard_task, metrics=<factory>, eval_split='test', params=<factory>)[source]¶
  Bases: object
  - eval_split: str = 'test'¶
  - hazard_task: str¶
  - metrics: List[str]¶
  - name: str¶
  - params: Dict[str, Any]¶
- class pyhazards.BenchmarkReport(benchmark_name, hazard_task, metrics, metadata=<factory>, artifacts=<factory>)[source]¶
  Bases: object
  - artifacts: Dict[str, str]¶
  - benchmark_name: str¶
  - hazard_task: str¶
  - metadata: Dict[str, Any]¶
  - metrics: Dict[str, float]¶
- class pyhazards.BenchmarkResult(benchmark_name, hazard_task, metrics, predictions=<factory>, artifacts=<factory>, metadata=<factory>)[source]¶
  Bases: object
  - artifacts: Dict[str, str]¶
  - benchmark_name: str¶
  - hazard_task: str¶
  - metadata: Dict[str, Any]¶
  - metrics: Dict[str, float]¶
  - predictions: List[Any]¶
- class pyhazards.BenchmarkRunSummary(benchmark_name, hazard_task, metrics, report_paths=<factory>, metadata=<factory>)[source]¶
  Bases: object
  - benchmark_name: str¶
  - hazard_task: str¶
  - metadata: Dict[str, Any]¶
  - metrics: Dict[str, float]¶
  - report_paths: Dict[str, str]¶
- class pyhazards.CNNPatchEncoder(in_channels=3, hidden_dim=64)[source]¶
  Bases: Module
  Lightweight CNN encoder for raster patches.
- class pyhazards.ClassificationHead(in_dim, num_classes)[source]¶
  Bases: Module
  Simple classification head.
- class pyhazards.ClassificationMetrics[source]¶
  Bases: MetricBase
- class pyhazards.DataBundle(splits, feature_spec, label_spec, metadata=<factory>)[source]¶
  Bases: object
  Bundle of train/val/test splits plus metadata. Keeps feature/label specs to make model construction easy.
  - feature_spec: FeatureSpec¶
  - metadata: Dict[str, Any]¶
- class pyhazards.DataSplit(inputs, targets, metadata=<factory>)[source]¶
  Bases: object
  Container for a single split.
  - inputs: Any¶
  - metadata: Dict[str, Any]¶
  - targets: Any¶
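The data-container dataclasses compose straightforwardly: named `DataSplit`s plus a `FeatureSpec` and `LabelSpec` make a `DataBundle`. The sketch below uses stand-in dataclass definitions that mirror the fields documented on this page (the real classes live in `pyhazards` and may carry additional behavior):

```python
# Stand-in definitions mirroring the documented fields of
# DataSplit, FeatureSpec, LabelSpec, and DataBundle.
from dataclasses import dataclass, field
from typing import Any, Dict, Optional

@dataclass
class FeatureSpec:
    input_dim: Optional[int] = None
    channels: Optional[int] = None
    description: Optional[str] = None
    extra: Dict[str, Any] = field(default_factory=dict)

@dataclass
class LabelSpec:
    num_targets: Optional[int] = None
    task_type: str = "regression"
    description: Optional[str] = None
    extra: Dict[str, Any] = field(default_factory=dict)

@dataclass
class DataSplit:
    inputs: Any
    targets: Any
    metadata: Dict[str, Any] = field(default_factory=dict)

@dataclass
class DataBundle:
    splits: Dict[str, DataSplit]
    feature_spec: FeatureSpec
    label_spec: LabelSpec
    metadata: Dict[str, Any] = field(default_factory=dict)

# Assembling a tiny bundle with a single training split.
bundle = DataBundle(
    splits={"train": DataSplit(inputs=[[0.1, 0.2]], targets=[1.0])},
    feature_spec=FeatureSpec(input_dim=2),
    label_spec=LabelSpec(num_targets=1),
)
```

Because the bundle carries its own feature and label specs, downstream code can size model inputs and outputs from `bundle.feature_spec` and `bundle.label_spec` without inspecting the raw arrays.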
- class pyhazards.Dataset(cache_dir=None)[source]¶
  Bases: object
  Base class for hazard datasets. Subclasses should load data and return a DataBundle with splits ready for training.
  - load(split=None, transforms=None)[source]¶
    Return a DataBundle. Optionally return a specific split if provided.
    - Return type: DataBundle
  - name: str = 'base'¶
- class pyhazards.DatasetRef(name, params=<factory>)[source]¶
  Bases: object
  - name: str¶
  - params: Dict[str, Any]¶
- class pyhazards.ExperimentConfig(benchmark, dataset, model, report=<factory>, seed=0, metadata=<factory>)[source]¶
  Bases: object
  - benchmark: BenchmarkConfig¶
  - dataset: DatasetRef¶
  - metadata: Dict[str, Any]¶
  - report: ReportConfig¶
  - seed: int = 0¶
- class pyhazards.FeatureSpec(input_dim=None, channels=None, description=None, extra=<factory>)[source]¶
  Bases: object
  Describes input features (shapes, dtypes, normalization).
  - channels: Optional[int] = None¶
  - description: Optional[str] = None¶
  - extra: Dict[str, Any]¶
  - input_dim: Optional[int] = None¶
- class pyhazards.HazardTask(name, hazard, target, description)[source]¶
  Bases: object
  Canonical hazard task label used by benchmark and config layers.
  - description: str¶
  - hazard: str¶
  - name: str¶
  - target: str¶
- class pyhazards.LabelSpec(num_targets=None, task_type='regression', description=None, extra=<factory>)[source]¶
  Bases: object
  Describes labels/targets for downstream tasks.
  - description: Optional[str] = None¶
  - extra: Dict[str, Any]¶
  - num_targets: Optional[int] = None¶
  - task_type: str = 'regression'¶
- class pyhazards.MLPBackbone(input_dim, hidden_dim=256, depth=2)[source]¶
  Bases: Module
  Simple MLP for tabular features.
- class pyhazards.ModelRef(name, task, params=<factory>)[source]¶
  Bases: object
  - name: str¶
  - params: Dict[str, Any]¶
  - task: str¶
- class pyhazards.RegressionHead(in_dim, out_dim=1)[source]¶
  Bases: Module
  Regression head for scalar or multi-target outputs.
- class pyhazards.RegressionMetrics[source]¶
  Bases: MetricBase
- class pyhazards.ReportConfig(output_dir='reports', formats=<factory>)[source]¶
  Bases: object
  - formats: List[str]¶
  - output_dir: str = 'reports'¶
- class pyhazards.SegmentationHead(in_channels, num_classes)[source]¶
  Bases: Module
  Segmentation head for raster masks.
- class pyhazards.SegmentationMetrics(num_classes=None)[source]¶
  Bases: MetricBase
- class pyhazards.TemporalEncoder(input_dim, hidden_dim=128, num_layers=1)[source]¶
  Bases: Module
  GRU-based encoder for time-series signals.
- class pyhazards.Trainer(model, device=None, metrics=None, strategy='auto', mixed_precision=False)[source]¶
  Bases: object
  Lightweight training abstraction with a familiar API: fit -> evaluate -> predict.
  - _make_loader(inputs, targets, batch_size, num_workers, collate_fn, shuffle=True)[source]¶
    - Return type: Iterable
  - evaluate(data, split='test', batch_size=64, num_workers=0, collate_fn=None)[source]¶
    - Return type: Dict[str, float]
  - fit(data, train_split='train', val_split=None, max_epochs=1, optimizer=None, loss_fn=None, batch_size=32, num_workers=0, collate_fn=None)[source]¶
    Minimal fit loop that works for tensor-based splits. Extend or replace with custom DataLoaders for complex data.
    - Return type: None
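The fit -> evaluate flow can be illustrated with a toy stand-in trainer over plain Python lists. The real Trainer wraps PyTorch modules, optimizers, and DataLoaders; everything below is a simplified assumption meant only to show the call shape:

```python
# Toy stand-in mirroring the Trainer's fit -> evaluate API shape.
# `data` plays the role of a DataBundle: split name -> (inputs, targets).
from typing import Dict, List, Tuple

class ToyTrainer:
    def __init__(self, model: float = 0.0):
        self.model = model  # a single scalar "weight"

    def fit(self, data: Dict[str, Tuple[List[float], List[float]]],
            train_split: str = "train", max_epochs: int = 1) -> None:
        _, ys = data[train_split]
        for _ in range(max_epochs):
            # "Training": set the weight to the mean training target.
            self.model = sum(ys) / len(ys)

    def evaluate(self, data, split: str = "test") -> Dict[str, float]:
        xs, ys = data[split]
        preds = [self.model for _ in xs]
        mse = sum((p - y) ** 2 for p, y in zip(preds, ys)) / len(ys)
        return {"mse": mse}

data = {"train": ([1.0, 2.0], [1.0, 3.0]), "test": ([3.0], [2.0])}
trainer = ToyTrainer()
trainer.fit(data, max_epochs=3)
metrics = trainer.evaluate(data, split="test")  # {"mse": 0.0}
```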
- pyhazards.build_model(name, task, **kwargs)[source]¶
  Build a model by name and task. This delegates to registry metadata to keep a consistent interface.
  - Return type: Module
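The delegation to registry metadata can be sketched as a builder table keyed by (name, task). The registry layout and `register_model` decorator form below are assumptions for illustration; the real builders return `torch.nn.Module` instances:

```python
# Stand-in sketch of name/task-keyed model building via a registry.
from typing import Any, Callable, Dict, Tuple

_MODEL_REGISTRY: Dict[Tuple[str, str], Callable[..., Any]] = {}

def register_model(name: str, task: str):
    """Decorator recording a builder under a (name, task) key."""
    def wrap(builder):
        _MODEL_REGISTRY[(name, task)] = builder
        return builder
    return wrap

def build_model(name: str, task: str, **kwargs):
    """Look up the builder and delegate construction to it."""
    if (name, task) not in _MODEL_REGISTRY:
        raise KeyError(f"no builder for model {name!r} / task {task!r}")
    return _MODEL_REGISTRY[(name, task)](**kwargs)

@register_model("mlp", "regression")
def mlp_builder(input_dim: int = 8, hidden_dim: int = 256):
    # Placeholder dict standing in for an nn.Module instance.
    return {"type": "mlp", "input_dim": input_dim, "hidden_dim": hidden_dim}

model = build_model("mlp", "regression", input_dim=4)
```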
- pyhazards.open_interactive_map(open_browser=True)[source]¶
  Open the RAI Fire map in the user's browser when possible.
  - Parameters: open_browser (bool) – Whether to attempt to open the default browser.
  - Return type: str
  - Returns: The canonical RAI Fire URL.
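A plausible sketch of this helper, built on the standard-library webbrowser module; the actual pyhazards implementation may differ in how it detects and handles browser availability:

```python
# Hedged sketch of open_interactive_map using the stdlib webbrowser
# module. RAI_FIRE_URL matches the constant documented above.
import webbrowser

RAI_FIRE_URL = "https://rai-fire.com/"

def open_interactive_map(open_browser: bool = True) -> str:
    """Return the canonical RAI Fire URL, opening a browser if asked."""
    if open_browser:
        try:
            webbrowser.open(RAI_FIRE_URL)
        except webbrowser.Error:
            pass  # no browser available; still return the URL
    return RAI_FIRE_URL
```

Returning the URL unconditionally lets callers log or display it even in headless environments where no browser can be launched.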