subspace_farmer/cluster.rs
//! Cluster version of the farmer
//!
//! This module contains an isolated set of modules that implement cluster-specific functionality
//! for the farmer, allowing cooperating components to be distributed across machines while still
//! working together.
//!
//! Specifically, 4 separate components are extracted:
//! * controller
//! * farmer
//! * plotter
//! * cache
//!
//! ### Controller
//!
//! The controller connects to the node via RPC and to the DSN. It handles notifications from the
//! node and orchestrates the other components: it sends slot notifications to farmers, stores and
//! retrieves pieces from caches on requests coming from the DSN, etc.
//!
//! While multiple controllers can be shared between farmers, each controller must have its own
//! dedicated pool of caches, and each cache must belong to a single controller. This makes it
//! possible to shut down some controllers for upgrades and other maintenance without affecting
//! the farmers' ability to farm and receive rewards.
//!
//! ### Farmer
//!
//! The farmer maintains farms with plotted pieces and the corresponding metadata. It does audits
//! and proving and retrieves pieces from plotted sectors on request, but doesn't do any caching
//! or P2P networking with the DSN. When sectors need to be plotted or replotted, a request is
//! sent to a plotter instead of doing the work locally, though a plotter and a farmer can be
//! co-located.
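//!
//! This delegation can be sketched with a trait that hides whether plotting happens in-process or
//! on a remote machine. This is only an illustration, not the crate's actual API; `Plotter`,
//! `LocalPlotter`, and `replot` are made-up names:
//!
//! ```rust
//! /// Hypothetical plotting interface, for illustration only.
//! trait Plotter {
//!     fn plot_sector(&self, sector_index: u64) -> Vec<u8>;
//! }
//!
//! /// A plotter running in the same process as the farmer (co-located).
//! struct LocalPlotter;
//!
//! impl Plotter for LocalPlotter {
//!     fn plot_sector(&self, sector_index: u64) -> Vec<u8> {
//!         // Stand-in for the real plotting work.
//!         sector_index.to_le_bytes().to_vec()
//!     }
//! }
//!
//! /// The farmer only holds a `dyn Plotter`; whether the work is local or
//! /// forwarded to a remote machine is hidden behind the trait.
//! fn replot(plotter: &dyn Plotter, sector_index: u64) -> Vec<u8> {
//!     plotter.plot_sector(sector_index)
//! }
//!
//! fn main() {
//!     assert_eq!(replot(&LocalPlotter, 7), 7u64.to_le_bytes().to_vec());
//! }
//! ```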
//!
//! Farmers receive (de-duplicated) slot notifications from all controllers and send each solution
//! back to the controller from which the corresponding slot notification was received.
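//!
//! De-duplication and routing of solutions back to the originating controller could be sketched
//! as below; this is a minimal illustration under assumed names (`SlotRouter`, `SlotNumber`, and
//! `ControllerId` are not part of this crate):
//!
//! ```rust
//! use std::collections::HashMap;
//!
//! /// Hypothetical identifiers, for illustration only.
//! type SlotNumber = u64;
//! type ControllerId = &'static str;
//!
//! /// Remembers the first controller to deliver each slot notification and
//! /// ignores duplicates arriving from other controllers.
//! struct SlotRouter {
//!     origin: HashMap<SlotNumber, ControllerId>,
//! }
//!
//! impl SlotRouter {
//!     fn new() -> Self {
//!         Self { origin: HashMap::new() }
//!     }
//!
//!     /// Returns `true` only the first time a slot is seen.
//!     fn on_notification(&mut self, slot: SlotNumber, from: ControllerId) -> bool {
//!         if self.origin.contains_key(&slot) {
//!             return false; // duplicate from another controller
//!         }
//!         self.origin.insert(slot, from);
//!         true
//!     }
//!
//!     /// The controller a solution for `slot` should be sent back to.
//!     fn solution_target(&self, slot: SlotNumber) -> Option<ControllerId> {
//!         self.origin.get(&slot).copied()
//!     }
//! }
//!
//! fn main() {
//!     let mut router = SlotRouter::new();
//!     assert!(router.on_notification(1, "controller-a"));
//!     assert!(!router.on_notification(1, "controller-b")); // de-duplicated
//!     assert_eq!(router.solution_target(1), Some("controller-a"));
//! }
//! ```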
//!
//! ### Plotter
//!
//! The plotter needs to be able to do heavy compute, with a proportional amount of RAM, for
//! plotting purposes.
//!
//! There can be any number of plotters in a cluster; adding more increases the cluster's total
//! capacity to plot sectors concurrently.
//!
//! ### Cache
//!
//! The cache helps with the plotting process and with serving data to the DSN. While its reads
//! and writes are random, they are large and infrequent in contrast to the farmer's workload.
//! Fast retrieval is important so that plotters don't sit idle, but generally a cache can work
//! even on HDDs.
//!
//! There can be any number of caches in the cluster, but each cache instance belongs to exactly
//! one controller. So if multiple controllers are present in the cluster, you'll want at least
//! one cache connected to each of them for optimal performance.

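//!
//! Since each cache belongs to exactly one controller, the assignment is naturally a map from
//! cache to controller, and a deployment check can flag controllers left without any cache. A
//! hedged sketch with made-up names (`CacheId`, `ControllerId`, `controllers_without_caches`):
//!
//! ```rust
//! use std::collections::HashMap;
//!
//! /// Hypothetical identifiers, for illustration only.
//! type CacheId = &'static str;
//! type ControllerId = &'static str;
//!
//! /// Returns every controller that has no cache assigned to it.
//! fn controllers_without_caches(
//!     assignment: &HashMap<CacheId, ControllerId>,
//!     controllers: &[ControllerId],
//! ) -> Vec<ControllerId> {
//!     controllers
//!         .iter()
//!         .copied()
//!         .filter(|c| !assignment.values().any(|owner| owner == c))
//!         .collect()
//! }
//!
//! fn main() {
//!     let mut assignment = HashMap::new();
//!     // Each cache appears once as a key, encoding single-controller ownership.
//!     assignment.insert("cache-1", "controller-a");
//!     let missing = controllers_without_caches(&assignment, &["controller-a", "controller-b"]);
//!     assert_eq!(missing, vec!["controller-b"]); // needs a cache for good performance
//! }
//! ```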
pub mod cache;
pub mod controller;
pub mod farmer;
pub mod nats_client;
pub mod plotter;