pub fn create_subspace_archiver<Block, Backend, Client, AS, SO>(
    segment_headers_store: SegmentHeadersStore<AS>,
    subspace_link: SubspaceLink<Block>,
    client: Arc<Client>,
    sync_oracle: SubspaceSyncOracle<SO>,
    telemetry: Option<TelemetryHandle>,
    create_object_mappings: CreateObjectMappings,
) -> Result<impl Future<Output = Result<()>> + Send + 'static>
where
    Block: BlockT,
    Backend: BackendT<Block>,
    Client: ProvideRuntimeApi<Block>
        + BlockBackend<Block>
        + HeaderBackend<Block>
        + LockImportRun<Block, Backend>
        + Finalizer<Block, Backend>
        + AuxStore
        + Send
        + Sync
        + 'static,
    Client::Api: SubspaceApi<Block, PublicKey> + ObjectsApi<Block>,
    AS: AuxStore + Send + Sync + 'static,
    SO: SyncOracle + Send + Sync + 'static,
Create an archiver task.
The archiver task will listen for importing blocks and archive blocks at K depth, producing pieces and segment headers (segment headers are then added back to the blockchain as store_segment_header extrinsic).
NOTE: The archiver performs blocking operations and must run in a dedicated task.
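A minimal sketch of wiring the archiver into a node's service setup, assuming the usual service-builder variables (segment_headers_store, subspace_link, client, sync_oracle, telemetry as Option<TelemetryHandle>, create_object_mappings, task_manager) already exist; the error handling and the exact spawn helper are illustrative, not prescribed by this function:

let subspace_archiver = create_subspace_archiver(
    segment_headers_store.clone(),
    subspace_link.clone(),
    client.clone(),
    sync_oracle.clone(),
    telemetry,
    create_object_mappings,
)?;

// The archiver does blocking work and block import waits on its
// acknowledgements, so spawn it as an essential blocking task.
task_manager.spawn_essential_handle().spawn_blocking(
    "subspace-archiver",
    None,
    async move {
        if let Err(error) = subspace_archiver.await {
            tracing::error!(%error, "Subspace archiver exited with an error");
        }
    },
);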
The archiver can only move forward and doesn’t support reorgs. Upon restart it will check SegmentHeadersStore and chain history to reconstruct the “current” state it was in before the last shutdown, then continue incrementally archiving blockchain history from there.
Archiving is triggered by the block importing notification (SubspaceLink::block_importing_notification_stream) and tries to archive the block at ChainConstants::confirmation_depth_k depth from the block being imported. Block import will then wait for the archiver to acknowledge processing, which is necessary to ensure that when the next block is imported, its inherents contain the segment header of the newly archived block (this must happen in exactly the next block).
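A rough sketch of a subscriber consuming and acknowledging block importing notifications; the field names (block_number, acknowledgement_sender) and channel types are recalled from the crate and may differ between versions:

use futures::{SinkExt, StreamExt};

let mut block_importing_stream = subspace_link
    .block_importing_notification_stream()
    .subscribe();
while let Some(mut block_importing_notification) = block_importing_stream.next().await {
    // Number of the block currently being imported.
    let _block_number = block_importing_notification.block_number;
    // ... inspect or react to the import before it proceeds ...
    // Acknowledge so that block import is not held up by this subscriber.
    let _ = block_importing_notification
        .acknowledgement_sender
        .send(())
        .await;
}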
create_object_mappings controls when object mappings are created for archived blocks. When these mappings are created, a notification is sent on SubspaceLink::object_mapping_notification_stream.
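A sketch of a consumer of these object-mapping notifications; handle_object_mappings is a hypothetical helper standing in for whatever indexes or forwards the mappings:

use futures::StreamExt;

let mut object_mapping_stream = subspace_link
    .object_mapping_notification_stream()
    .subscribe();
while let Some(object_mapping_notification) = object_mapping_stream.next().await {
    // Hand the freshly created mappings to an indexer, RPC subscription, etc.
    handle_object_mappings(object_mapping_notification);
}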
Once a segment header is archived, a notification is sent on SubspaceLink::archived_segment_notification_stream and the archiver is paused until all receivers have provided an acknowledgement for it.
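A sketch of a subscriber that consumes archived segments and acknowledges them so the archiver can resume; store_pieces_somewhere is hypothetical, and the archived_segment / acknowledgement_sender field names are recalled from the crate and may differ:

use futures::StreamExt;

let mut archived_segments_stream = subspace_link
    .archived_segment_notification_stream()
    .subscribe();
while let Some(archived_segment_notification) = archived_segments_stream.next().await {
    // Newly produced segment (segment header plus pieces).
    let archived_segment = &archived_segment_notification.archived_segment;
    store_pieces_somewhere(archived_segment);
    // Acknowledge; the archiver stays paused until every receiver has done so.
    let _ = archived_segment_notification
        .acknowledgement_sender
        .unbounded_send(());
}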
Archiving is incremental during normal operation, to reduce the impact on block import, and non-incremental (heavily parallel) during sync, since the parallel implementation is more efficient overall and only total sync time matters during sync.