Note for myself to follow up on: when creating the data loading module with multiprocessing_context=forkserver, the forked workers aren't importing the site archives before loading pipelines. This could be a bug, or we may need an optional (or automatic) mechanism in the pipelines that makes them aware of the required pre-imports.
The site archives "register" site-specific module classes into the pyearthtools.data.archive namespace, and this must happen before the rest of a pipeline is deserialised, since deserialisation draws on those archives.
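One possible workaround to try: the stdlib's `multiprocessing.set_forkserver_preload` asks the forkserver process to import a list of modules once, before it starts forking workers, so every worker inherits whatever those imports registered. A minimal sketch, assuming a hypothetical module name `my_site.archives` standing in for the real site-archive package:

```python
import multiprocessing as mp

# "my_site.archives" is a hypothetical placeholder for the site-archive
# module that registers classes into pyearthtools.data.archive on import.
SITE_ARCHIVE_MODULES = ["my_site.archives"]

# Tell the forkserver to import these modules before forking any workers.
# The imports run in the forkserver process itself, so the registrations
# exist in every worker forked from it. Must be called before the
# forkserver is first started.
mp.set_forkserver_preload(SITE_ARCHIVE_MODULES)

ctx = mp.get_context("forkserver")
# Pass `ctx` as the multiprocessing_context of the data loading module,
# so its workers come from the preloaded forkserver and the archives are
# registered before any pipeline is deserialised.
```

Whether this is exposed to users or wired in automatically (e.g. the pipeline recording which archive modules it needs at serialise time) is the open design question.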